| qid (int64, 46k to 74.7M) | question (string, 54 to 37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 29 to 22k chars) | response_k (string, 26 to 13.4k chars) | __index_level_0__ (int64, 0 to 17.8k) |
|---|---|---|---|---|---|---|
51,760,986
|
While going through the CPython source code, I found the following piece of code in the standard library ([`ast.py`](https://github.com/python/cpython/blob/master/Lib/ast.py#L60)).
```
if isinstance(node.op, UAdd):
    return + operand
else:
    return - operand
```
I tried the following in my Python interpreter:
```
>>> def s():
...     return + 1
...
>>> s()
1
```
But this is the same as the following, right?
```
def s():
    return 1
```
Can anyone help me understand what the expressions `return +` and `return -` do in Python, and when we should use them?
|
2018/08/09
|
[
"https://Stackoverflow.com/questions/51760986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6699447/"
] |
plus and minus in this context are [unary operators](https://en.wikipedia.org/wiki/Unary_operation). That is, they accept **a single operand**. This is in comparison to the binary operator `*` (for example) that operates on **two operands**. Evidently `+1` is just `1`. So the unary operator `+` in your return statement is redundant.
|
`return + operand` is equivalent to `return operand` (if operand is a number). The only purpose I see is to insist on the fact that we do not want the opposite of `operand`.
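A small sketch for completeness: while unary `+` is a no-op on plain numbers, some types do override it, so it is not always redundant. `collections.Counter` is a standard-library example where `+c` actually changes the value (it drops non-positive counts):

```python
from collections import Counter

x = 5
print(+x, -x)  # on ints, unary + is a no-op and unary - negates

# On Counter, unary + keeps only positive counts:
c = Counter(a=2, b=-1)
print(+c)  # drops the 'b' entry
```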
| 15,224
|
67,685,667
|
I have a problem hosting an HTML file with Python's built-in web server.
I go to cmd and type this in:
```
python -m http.server
```
The command works, but when I open the domain name I get a list of HTML pages, which I can click to open. How can I make it so that when I open the domain in Chrome it immediately shows me main.html?
Thank you
|
2021/05/25
|
[
"https://Stackoverflow.com/questions/67685667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14456620/"
] |
You can add additional options:
```
python -m http.server --directory C:/ESD/ --bind 127.0.0.1
```
`--directory` tells Python which folder to look for HTML files in; here I specified the path `C:/ESD/`.
`--bind` tells Python which IP address the server should listen on.
If you want the file to open directly instead of a directory listing of all HTML files, rename your `main.html` to `index.html`.
|
*it works but when I open it up in chrome it shows me lists of my html files and when I click on one it goes to it.*
This is expected behavior.
*how can I do it so that when I open it in chrome it immediately shows me the main.html?*
Rename `main.html` to `index.html`. As the [docs](https://docs.python.org/3/library/http.server.html#http.server.SimpleHTTPRequestHandler.do_GET) say, the standard procedure to handle a `GET` request is:
>
> If the request was mapped to a directory, the directory is checked for a file named index.html or index.htm (in that order). If found, the file’s contents are returned; otherwise a directory listing is generated by calling the list\_directory() method. This method uses os.listdir() to scan the directory, and returns a 404 error response if the listdir() fails.
>
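The index.html behavior described above can be checked with a short sketch (the throwaway directory and the page contents are assumptions for the demo):

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# Create a directory containing an index.html.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "index.html"), "w") as f:
    f.write("<h1>main page</h1>")

# Serve that directory on an ephemeral port, as `python -m http.server` would.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=tmp)
with http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler) as srv:
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    port = srv.server_address[1]
    # Requesting "/" returns index.html instead of a directory listing.
    body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode()
    srv.shutdown()

print(body)
```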
| 15,229
|
25,000,994
|
I tried to install pyFFTW 0.9.2 on OS X Mavericks, but I encounter the following errors:
```
/usr/bin/clang -bundle -undefined dynamic_lookup
-L//anaconda/lib -arch x86_64 -arch x86_64
build/temp.macosx-10.5-x86_64-2.7/anaconda/lib/python2.7/site-packages/pyFFTW-master/pyfftw/pyfftw.o
-L//anaconda/lib -lfftw3 -lfftw3f -lfftw3l -lfftw3_threads -lfftw3f_threads -lfftw3l_threads
-o build/lib.macosx-10.5-x86_64-2.7/pyfftw/pyfftw.so
ld: library not found for -lfftw3
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
As mentioned in [pyFFTW installation -> cannot find -lfftw3\_threads](https://stackoverflow.com/questions/24834822/pyfftw-installation-cannot-find-lfftw3-threads), I tried to compile and install FFTW 3.3.4 three times, but it is not working for me.
What I did was:
```
./configure --enable-float --enable-share => make => make install
./configure --enable-long-double --enable-share => make => make install
./configure --enable-threads --enable-share => make => make install
```
Then I ran the Python (2.7) setup files in the pyFFTW folder, and I got the error above.
I appreciate your help.
|
2014/07/28
|
[
"https://Stackoverflow.com/questions/25000994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3885085/"
] |
I had the same issue on OS X 10.9.4 Mavericks.
Try this:
Download FFTW 3.3.4, then open a terminal window, go into the extracted FFTW directory, and run these commands:
```
$ ./configure --enable-long-double --enable-threads
$ make
$ sudo make install
$ ./configure --enable-float --enable-threads
$ make
$ sudo make install
```
Then install pyFFTW using pip as suggested:
```
$ sudo pip install pyfftw
```
|
I'm using Mac OS X 10.11.4 and Python 3.5.1 installed through `conda`, and the above answer didn't work for me.
I would still get this error:
```
ld: library not found for -lfftw3l
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pyfftw
```
or:
```
ld: library not found for -lfftw3l_threads
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pyfftw
```
What *did* work for me was a slight variation on what I found [here](https://github.com/pyFFTW/pyFFTW/issues/16):
### First install long-double libraries
```
comp:fftw-3.3.4 user$ ./configure --enable-threads --enable-shared --disable-fortran --enable-long-double CFLAGS="-O3 -fno-common -fomit-frame-pointer -fstrict-aliasing"
comp:fftw-3.3.4 user$ make
comp:fftw-3.3.4 user$ sudo make install
```
### Then install float and double libraries
```
comp:fftw-3.3.4 user$ ./configure --enable-threads --enable-shared --disable-fortran --enable-sse2 --enable-float CFLAGS="-O3 -fno-common -fomit-frame-pointer -fstrict-aliasing"
comp:fftw-3.3.4 user$ make
comp:fftw-3.3.4 user$ sudo make install
comp:fftw-3.3.4 user$ ./configure --enable-threads --enable-shared --disable-fortran --enable-sse2 CFLAGS="-O3 -fno-common -fomit-frame-pointer -fstrict-aliasing"
comp:fftw-3.3.4 user$ make
comp:fftw-3.3.4 user$ sudo make install
```
### Then install `pyfftw`
```
comp:fftw-3.3.4 user$ sudo -H pip install pyfftw
```
I don't think the `--disable-fortran` and `--enable-sse2` flags are necessary and I'm not sure `sudo` is necessary for `pip` but this is what worked for me.
Note that your `/usr/local/lib` folder should contain the following files when you're done:
```
libfftw3.3.dylib
libfftw3.a
libfftw3.dylib
libfftw3.la
libfftw3_threads.3.dylib
libfftw3_threads.a
libfftw3_threads.dylib
libfftw3_threads.la
libfftw3f.3.dylib
libfftw3f.a
libfftw3f.dylib
libfftw3f.la
libfftw3f_threads.3.dylib
libfftw3f_threads.a
libfftw3f_threads.dylib
libfftw3f_threads.la
libfftw3l.3.dylib
libfftw3l.a
libfftw3l.dylib
libfftw3l.la
libfftw3l_threads.3.dylib
libfftw3l_threads.a
libfftw3l_threads.dylib
libfftw3l_threads.la
```
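Before (or after) rebuilding, one hedged way to check which FFTW variants the linker can already see is the standard library's `find_library`; `None` means the corresponding `-l` flag will fail:

```python
from ctypes.util import find_library

# Each name corresponds to a -lfftw3... flag in the failing link command.
for name in ("fftw3", "fftw3f", "fftw3l", "fftw3_threads"):
    print(name, "->", find_library(name))
```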
| 15,230
|
32,171,138
|
I need to generate all possible strings of a certain length X that satisfy the following two rules:
1. Must end with '0'
2. There can't be two or more adjacent '1's
For example, when X = 4, all 'legal' strings are [0000, 0010, 0100, 1000, 1010].
I already wrote a piece of recursive code that simply appends each newly found string to a list.
```
def generate(pre0, pre1, cur_len, max_len, container=[]):
    if cur_len == max_len - 1:
        container.append("".join([pre0, pre1, "0"]))
        return
    if pre1 == '1':
        cur_char = '0'
        generate(pre0 + pre1, cur_char, cur_len + 1, max_len, container)
    else:
        cur_char = '0'
        generate(pre0 + pre1, cur_char, cur_len + 1, max_len, container)
        cur_char = '1'
        generate(pre0 + pre1, cur_char, cur_len + 1, max_len, container)

if __name__ == "__main__":
    container = []
    generate("", "", 0, 4, container)
    print container
```
However, this method won't work once X reaches 100+ because of its memory use. I am not familiar with generators in Python, so could anyone here help me figure out how to rewrite it as a generator? Thanks so much!
P.S. I am working on a Project Euler problem, this is not homework.
Update 1:
Thanks to the first 3 answers: I am using Python 2.7 and can switch to Python 3.4. The reason I am asking for a generator is that I can't possibly hold even just the final result list in memory. A quick mathematical argument shows that there are Fibonacci(X) possible strings for length X, which means I really have to use a generator and filter the results on the fly.
|
2015/08/23
|
[
"https://Stackoverflow.com/questions/32171138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759486/"
] |
Assuming you're using a Python version >= 3.3, where `yield from` is available, you can `yield` instead of accumulating and returning. You don't need to pass around a container.
```
def generate(pre0, pre1, cur_len, max_len):
    if cur_len == max_len - 1:
        yield "".join((pre0, pre1, "0"))
        return
    if pre1 == '1':
        yield from generate(pre0 + pre1, '0', cur_len + 1, max_len)
    else:
        yield from generate(pre0 + pre1, '0', cur_len + 1, max_len)
        yield from generate(pre0 + pre1, '1', cur_len + 1, max_len)

if __name__ == "__main__":
    for result in generate("", "", 0, 4):
        print(result)
```
If you're using a python version where `yield from` is not available, replace those lines with:
```
for x in generate(...):
    yield x
```
|
If you were doing it in Python 3, you would adapt it as follows:
* Remove the `container` parameter.
* Change all occurrences of `container.append(some_value)` to `yield some_value`.
* Prepend `yield from` to all recursive calls.
To do it in Python 2, you do the same, except that Python 2 doesn't support `yield from`. Rather than doing `yield from iterable`, you'll need to use:
```
for item in iterable:
    yield item
```
You’ll then also need to change your call site, removing the `container` argument and instead storing the return value of the call. You’ll also need to iterate over the result, printing the values, as `print` will just give you `<generator object at 0x...>`.
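If deep recursion is also a concern for large X, the same idea can be written without recursion at all. A sketch (my own, not from either answer) using an explicit stack:

```python
def legal_strings(x):
    """Yield all strings of length x that end in '0' and have no '11'."""
    stack = [""]  # partial prefixes of length < x - 1
    while stack:
        s = stack.pop()
        if len(s) == x - 1:
            yield s + "0"  # every legal string ends with '0'
            continue
        stack.append(s + "0")
        if not s.endswith("1"):  # never create two adjacent '1's
            stack.append(s + "1")

print(sorted(legal_strings(4)))
```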
| 15,231
|
22,398,881
|
I have a Django app on Heroku, and I set up another app on the same Heroku account.
Now I want another instance of the first app.
I cloned the first app and pushed it into the newly created app, but it's not working.
I got this error when doing `git push heroku master`:
```
Running setup.py install for distribute
Before install bootstrap.
Scanning installed packages
Setuptools installation detected at /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg
Egg installation
Patching...
Renaming /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg into /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg.OLD.1394782343.31
Patched done.
Relaunching...
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'install' is not defined
Complete output from command /app/.heroku/python/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_u57096/distribute/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-TYqPAN-record/install-record.txt --single-version-externally-managed --compile:
Before install bootstrap.
Scanning installed packages
Setuptools installation detected at /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg
Egg installation
Patching...
Renaming /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg into /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg.OLD.1394782343.31
Patched done.
Relaunching...
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'install' is not defined
----------------------------------------
Cleaning up...
Command /app/.heroku/python/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_u57096/distribute/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-TYqPAN-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_u57096/distribute
Storing debug log for failure in /app/.pip/pip.log
! Push rejected, failed to compile Python app
To git@heroku.com:gentle-plateau-6569.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:gentle-plateau-6569.git'
```
my requirements.txt file is
```
Django==1.4
South==0.7.5
boto==2.5.2
distribute==0.6.27
dj-database-url==0.2.1
django-debug-toolbar==0.9.4
django-flash==1.8
django-mailgun==0.2.1
django-registration==0.8
django-session-security==2.0.3
django-sslify==0.2
django-storages==1.1.5
gunicorn==0.14.6
ipdb==0.7
ipython==0.13
newrelic==1.6.0.13
psycopg2==2.4.5
raven==2.0.3
requests==0.13.6
simplejson==2.4.0
wsgiref==0.1.2
xlrd==0.7.9
xlwt==0.7.4
```
Please help me get rid of this.
|
2014/03/14
|
[
"https://Stackoverflow.com/questions/22398881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2088432/"
] |
This appears to be related to an issue [here](https://bitbucket.org/tarek/distribute/issue/91/install-glitch-when-using-pip-virtualenv#comment-5782834).
In other words, `distribute` has been deprecated and is being replaced with `setuptools`. Try replacing the line `distribute==0.6.27` with `setuptools>=0.7` as per the recommendation in the link.
|
virtualenv is a tool to create isolated Python environments.
You will need to add the following to fix `command python setup.py egg_info failed with error code 1`, so inside your requirements.txt add this:
>
> virtualenv==12.0.7
>
| 15,236
|
45,335,659
|
Let's say I have the following simple situation:
```
import pandas as pd

def multiply(row):
    global results
    results.append(row[0] * row[1])

def main():
    results = []
    df = pd.DataFrame([{'a': 1, 'b': 2}, {'a': 3, 'b': 4}, {'a': 5, 'b': 6}])
    df.apply(multiply, axis=1)
    print(results)

if __name__ == '__main__':
    main()
```
This results in the following traceback:
```
Traceback (most recent call last):
File "<ipython-input-2-58ca95c5b364>", line 1, in <module>
main()
File "<ipython-input-1-9bb1bda9e141>", line 11, in main
df.apply(multiply, axis=1)
File "C:\Users\bbritten\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 4262, in apply
ignore_failures=ignore_failures)
File "C:\Users\bbritten\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 4358, in _apply_standard
results[i] = func(v)
File "<ipython-input-1-9bb1bda9e141>", line 5, in multiply
results.append(row[0] * row[1])
NameError: ("name 'results' is not defined", 'occurred at index 0')
```
I know that I can move `results = []` to the `if` statement to get this example to work, but is there a way to keep the structure I have now and make it work?
|
2017/07/26
|
[
"https://Stackoverflow.com/questions/45335659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2789863/"
] |
You must declare results outside the functions like:
```
import pandas as pd
results = []

def multiply(row):
    # the rest of your code...
```
### UPDATE
Also note that a `list` in Python is mutable, so you don't need to declare it with `global` at the beginning of the function if you only mutate it (e.g. with `append`). Example:
```
def multiply(row):
    # global results -> this is not necessary!
    results.append(row[0] * row[1])
```
|
You must move `results` outside of the function; I don't think there is any other way without moving the variable out.
Alternatively, pass `results` as a parameter to the `multiply` method.
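A hedged sketch of a third option: have `apply` return the products directly, so no shared list is needed at all (the data is the frame from the question):

```python
import pandas as pd

df = pd.DataFrame([{'a': 1, 'b': 2}, {'a': 3, 'b': 4}, {'a': 5, 'b': 6}])
# apply collects the per-row return values into a Series; no global needed.
results = df.apply(lambda row: row['a'] * row['b'], axis=1).tolist()
print(results)
```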
| 15,237
|
49,495,211
|
When I run **virt-manager --no-fork** on macOS 10.13 High Sierra I get the following:
```
Traceback (most recent call last):
File "/usr/local/Cellar/virt-manager/1.5.0/libexec/share/virt-manager/virt-manager", line 31, in <module>
import gi
ImportError: No module named gi
```
python version 2.7.6 on macOS
I tried multiple solutions (found by googling), but none fixed the issue. Any ideas how to solve the "ImportError: No module named gi" error?
|
2018/03/26
|
[
"https://Stackoverflow.com/questions/49495211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2575019/"
] |
On Debian 9.7 I had the same problem; I fixed it with:
```sh
apt install python-gobject-2-dev python-gobject-dev libvirt-glib-1.0-dev python3-libxml2 libosinfo-1.0-dev
```

|
(Arch Linux) I have Anaconda installed, which breaks virt-manager. There is no need to uninstall it: just rename the anaconda folder to anything else and virt-manager works again; when you need Anaconda, undo the rename. Anaconda modifies the PATH environment variable, and when the folder is not present (the rename trick) the system falls back to the standard location. I think the problem could be related.
| 15,238
|
4,356,329
|
I have a generated file with thousands of lines like the following:
`CODE,XXX,DATE,20101201,TIME,070400,CONDITION_CODES,LTXT,PRICE,999.0000,QUANTITY,100,TSN,1510000001`
Some lines have more fields and others have fewer, but all follow the same pattern of key-value pairs and each line has a TSN field.
When doing some analysis on the file, I wrote a loop like the following to read the file into a dictionary:
```
#!/usr/bin/env python
from sys import argv
records = {}
for line in open(argv[1]):
    fields = line.strip().split(',')
    record = dict(zip(fields[::2], fields[1::2]))
    records[record['TSN']] = record
print 'Found %d records in the file.' % len(records)
```
...which is fine and does exactly what I want it to (the `print` is just a trivial example).
However, it doesn't feel particularly "pythonic" to me, and the line
```
dict(zip(fields[::2], fields[1::2]))
```
just feels "clunky" (how many times does it iterate over the fields?).
Is there a better way of doing this in Python 2.6 with just the standard modules to hand?
|
2010/12/04
|
[
"https://Stackoverflow.com/questions/4356329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78845/"
] |
Not so much better as just [more efficient...](https://stackoverflow.com/questions/2631189/python-every-other-element-idiom/2631256#2631256)
[Full explanation](https://stackoverflow.com/questions/2233204/how-does-zipitersn-work-in-python)
|
```
import itertools
def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return itertools.izip_longest(fillvalue=fillvalue, *args)

record = dict(grouper(2, line.strip().split(",")))
```
[source](http://docs.python.org/library/itertools.html)
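A related sketch in Python 3 syntax: reusing a single iterator in `zip` pairs adjacent fields in one pass, without building the two slices (the sample line is shortened from the question's):

```python
line = "CODE,XXX,DATE,20101201,TIME,070400,TSN,1510000001"
fields = iter(line.strip().split(","))
# zip pulls from the same iterator twice per step, pairing key with value.
record = dict(zip(fields, fields))
print(record)
```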
| 15,239
|
36,867,567
|
Is there some way to import the attribute namespace of a class instance into a method?
Also, is there some way to direct output to the attribute namespace of the instance?
For example, consider this simple block of code:
```
class Name():
    def __init__(self, name='Joe'):
        self.name = name

    def mangle_name(self):
        # Magical python command like 'import self' goes here (I guess)
        self.name = name + '123'  # As expected, error here as 'name' is not defined

a_name = Name()
a_name.mangle_name()
print(a_name.name)
```
As expected, this block of code fails because `name` is not recognised. What I want to achieve is that the method `mangle_name` searches the attribute namespace of `self` to look for `name` (and therefore finding `'Joe'` in the example). Is there some command such that the attribute namespace of `self` is imported, such that there will be no error and the output will be:
```
>>> Joe123
```
I understand that this is probably bad practice, but in some circumstances it would make the code a lot clearer without heaps of references to `self.`something throughout it. If this is instead 'so-terrible-and-never-should-be-done' practice, could you please explain why?
With regards to the second part of the question, what I want to do is:
```
class Name():
    def __init__(self, name='Joe'):
        self.name = name

    def mangle_name(self):
        # Magical python command like 'import self' goes here (I guess)
        name = name + '123'  # This line is different

a_name = Name()
a_name.mangle_name()
print(a_name.name)
```
Where this code returns the output:
```
>>> Joe123
```
Trying the command `import self` returns a module not found error.
I am using Python 3.5 and would therefore prefer a Python 3.5 compatible answer.
EDIT: For clarity.
|
2016/04/26
|
[
"https://Stackoverflow.com/questions/36867567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2728074/"
] |
@martin-lear has correctly answered the second part of your question. To address the first part, I'd offer these observations:
Firstly, there is no language feature that lets you "import" the instance's namespace into the method scope, other than performing attribute lookups on self.
You could probably abuse `exec` or `eval` and the instance's `__dict__` to get something like what you want, but you'd have to manually dump the instance namespace into the method scope at the beginning of the method and then update the instance's `__dict__` at the end.
Even if this can be done it should be avoided though. Source code is primarily a means of communication between humans, and if your code violates its readers' expectations in such a way it will be more difficult to understand and read, and the cost of this over time will outweigh the cost of you having to type *self* in front of instance variable names. The principle that you should [code as if the future maintainers are psychopaths who know where you live](http://c2.com/cgi/wiki?CodeForTheMaintainer) applies here.
Finally, consider that the irritation of having to type *self* a lot is telling you something about your design. Perhaps having to type something like `self.a + self.b + self.c + self.d + self.e + self.f + self.g + self.h + ....` is a sign that you are missing an abstraction - perhaps a collection in this case - and you should be looking to design your class in such a way that you only have to type `sum(self.foos)`.
|
So in your `mangle_name` function, you are getting the error because you have not defined the `name` variable:
```
self.name = name + '123'
```
should be
```
self.name = self.name + '123'
```
or something similar.
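As a small sketch of what is actually possible here: reads and writes can go through `vars(self)`, the instance's attribute dictionary, though this is usually less readable than plain `self.name`:

```python
class Name:
    def __init__(self, name='Joe'):
        self.name = name

    def mangle_name(self):
        ns = vars(self)  # the instance's attribute dict (same object as self.__dict__)
        ns['name'] = ns['name'] + '123'  # equivalent to self.name = self.name + '123'

a = Name()
a.mangle_name()
print(a.name)
```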
| 15,245
|
13,489,299
|
I am learning Python and this is from
<http://www.learnpython.org/page/MultipleFunctionArguments>
They have example code that does not work. I am wondering if it is just a typo or if it should not work at all.
```
def bar(first, second, third, **options):
    if options.get("action") == "sum":
        print "The sum is: %d" % (first + second + third)
    if options.get("return") == "first":
        return first

result = bar(1, 2, 3, action = "sum", return = "first")
print "Result: %d" % result
```
Learnpython says the output should be:
```
The sum is: 6
Result: 1
```
The error I get is:
```
Traceback (most recent call last):
File "/base/data/home/apps/s~learnpythonjail/1.354953192642593048/main.py", line 99, in post
exec(cmd, safe_globals)
File "<string>", line 9
result = bar(1, 2, 3, action = "sum", return = "first")
^
SyntaxError: invalid syntax
```
Is there a way to do what they are trying to do, or is the example wrong? Sorry, I did look at the Python tutorial someone linked in an answer, but I don't understand how to fix this.
|
2012/11/21
|
[
"https://Stackoverflow.com/questions/13489299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/612258/"
] |
`return` is a keyword in Python, so you can't use it as a variable name. If you change it to something else (e.g. `ret`) it will work fine.
```
def bar(first, second, third, **options):
    if options.get("action") == "sum":
        print "The sum is: %d" % (first + second + third)
    if options.get("ret") == "first":
        return first

result = bar(1, 2, 3, action="sum", ret="first")
print "Result: %d" % result
```
|
You probably should not use "`return`" as an argument name, because it's a Python keyword.
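If the option really must be called `return`, one workaround sketch (Python 3 syntax, my own addition rather than from the answers): pass the options via dictionary unpacking, since `**` keys may be arbitrary strings, including keywords:

```python
def bar(first, second, third, **options):
    if options.get("action") == "sum":
        print("The sum is: %d" % (first + second + third))
    if options.get("return") == "first":
        return first

# 'return' can't appear as a bare keyword argument, but it can be a dict key.
result = bar(1, 2, 3, **{"action": "sum", "return": "first"})
print("Result: %d" % result)
```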
| 15,246
|
51,998,627
|
I want to assign a unique id to every new row added to my SQL database without explicitly asking the user. I was thinking about using `bigint` and incrementing the value for every new row based on the id of the previous row.
Example :
```
name id
xyz 1
```
Now when another record is added to this table, say name = 'abc', I want the id to automatically increment to 2, so it all looks like:
```
name id
xyz 1
abc 2
```
Is there any way I can do so? Or any other data type that can increment with each new row?
P.S. I am using SQL from Python, so I can fetch the last row with `fetchall()` and add 1 to its id, but that doesn't seem like an efficient way to do it.
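One common approach, sketched here with the standard-library `sqlite3` module (assuming SQLite; other engines have equivalents such as `AUTO_INCREMENT` or `SERIAL`): let the database assign the id itself, so no `fetchall()` round-trip is needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The database assigns id values itself; the INSERTs never mention id.
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("INSERT INTO t (name) VALUES ('xyz')")
conn.execute("INSERT INTO t (name) VALUES ('abc')")
rows = conn.execute("SELECT name, id FROM t ORDER BY id").fetchall()
print(rows)
```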
|
2018/08/24
|
[
"https://Stackoverflow.com/questions/51998627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
How To - Fullscreen - <https://www.w3schools.com/howto/howto_js_fullscreen.asp>
This is the current "angular way" to do it.
**HTML**
```
<h2 (click)="openFullscreen()">open</h2>
<h2 (click)="closeFullscreen()">close</h2>
```
**Component**
```
import { DOCUMENT } from '@angular/common';
import { Component, Inject, OnInit } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit {
  constructor(@Inject(DOCUMENT) private document: any) {}

  elem;

  ngOnInit() {
    this.elem = document.documentElement;
  }

  openFullscreen() {
    if (this.elem.requestFullscreen) {
      this.elem.requestFullscreen();
    } else if (this.elem.mozRequestFullScreen) {
      /* Firefox */
      this.elem.mozRequestFullScreen();
    } else if (this.elem.webkitRequestFullscreen) {
      /* Chrome, Safari and Opera */
      this.elem.webkitRequestFullscreen();
    } else if (this.elem.msRequestFullscreen) {
      /* IE/Edge */
      this.elem.msRequestFullscreen();
    }
  }

  /* Close fullscreen */
  closeFullscreen() {
    if (this.document.exitFullscreen) {
      this.document.exitFullscreen();
    } else if (this.document.mozCancelFullScreen) {
      /* Firefox */
      this.document.mozCancelFullScreen();
    } else if (this.document.webkitExitFullscreen) {
      /* Chrome, Safari and Opera */
      this.document.webkitExitFullscreen();
    } else if (this.document.msExitFullscreen) {
      /* IE/Edge */
      this.document.msExitFullscreen();
    }
  }
}
```
|
You can try with `requestFullscreen`.
I have created a demo on **[Stackblitz](https://angular-awqvmd.stackblitz.io/)**.
```
fullScreen() {
  let elem = document.documentElement;
  let methodToBeInvoked =
    elem.requestFullscreen ||
    elem.webkitRequestFullScreen ||
    elem['mozRequestFullscreen'] ||
    elem['msRequestFullscreen'];
  if (methodToBeInvoked) methodToBeInvoked.call(elem);
}
```
| 15,247
|
11,190,353
|
I am working with a Python script. I want to open a file to retrieve the data inside. I added the right paths to `sys.path`:
```
sys.path.append('F:\WORK\SIMILITUDE\ALGOCODE')
sys.path.append('F:\WORK\SIMILITUDE\ALGOCODE\DTW')
```
More precisely, the file `file.txt` I will open is in the DTW folder, and I also added the upper folder ALGOCODE. Then I have the command:
```
inputASTM170512 = open("file.txt","r")
```
I get this:
```
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
inputASTM170512 = open("ASTM-170512.txt","r")
IOError: [Errno 2] No such file or directory: 'ASTM-170512.txt'
```
Why? Do you have any idea?
|
2012/06/25
|
[
"https://Stackoverflow.com/questions/11190353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1141493/"
] |
`open()` checks only the current working directory and does not traverse your system path looking for the file. Only `import` works with that mechanism.
You will either need to change your working directory before you open the file, with `os.chdir(PATH)` or to include the entire path when trying to open it.
|
When you try to open file with `open`, for example:
```
open("ASTM-170512.txt","r")
```
you will try to open a file in the current directory.
It does not depend on `sys.path`. The `sys.path` variable is used when you try to import modules, but not when you open files.
You need to specify the full path to file in the `open` or change the current directory to the correspondent place (I think that the former is better).
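To make the fix concrete, a small sketch (the directory and filename are the ones from the question; note the raw string so the backslashes are not treated as escapes):

```python
import os

# Build the full path instead of relying on sys.path (which only affects imports).
base = r'F:\WORK\SIMILITUDE\ALGOCODE\DTW'
full_path = os.path.join(base, 'file.txt')
print(full_path)
# inputASTM170512 = open(full_path, "r")  # would now look in the right place
```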
| 15,257
|
55,056,805
|
I have a simple `dataframe` as follows:
```
Condition State Value
0 A AM 0.775651
1 B XP 0.700265
2 A HML 0.688315
3 A RMSML 0.666956
4 B XAD 0.636014
5 C VAP 0.542897
6 C RMSML 0.486664
7 B XMA 0.482742
8 D VCD 0.469553
```
Now I would like to have a *barplot* with each value, using the same color for each state whose Condition is the same. I tried the following Python code:
```
Data_Control = pd.ExcelFile('Bar_plot_example.xlsx')
df_Control= Data_Control.parse('Sheet2')# my dataframe
s = pd.Series(df_Control.iloc[:,2].values, index=df_Control.iloc[:,1])
colors = {'A': 'r', 'B': 'b', 'C': 'g', 'D':'k'}
s.plot(kind='barh', color=[colors[i] for i in df_Control['Condition']])
plt.legend()
```
But I am not able to get the legend correctly for each condition. I am getting the following plot.

So how should I get the correct legend for each condition? Any help is highly appreciated. Thanks.
|
2019/03/08
|
[
"https://Stackoverflow.com/questions/55056805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3578648/"
] |
You can create the handles and labels for the legend directly from the data:
```
labels = df['Condition'].unique()
handles = [plt.Rectangle((0,0),1,1, color=colors[l]) for l in labels]
plt.legend(handles, labels, title="Conditions")
```
Complete example:
```
u = """ Condition State Value
0 A AM 0.775651
1 B XP 0.700265
2 A HML 0.688315
3 A RMSML 0.666956
4 B XAD 0.636014
5 C VAP 0.542897
6 C RMSML 0.486664
7 B XMA 0.482742
8 D VCD 0.469553"""
import io
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv(io.StringIO(u),sep="\s+" )
s = pd.Series(df.iloc[:,2].values, index=df.iloc[:,1])
colors = {'A': 'r', 'B': 'b', 'C': 'g', 'D':'k'}
s.plot(kind='barh', color=[colors[i] for i in df['Condition']])
labels = df['Condition'].unique()
handles = [plt.Rectangle((0,0),1,1, color=colors[l]) for l in labels]
plt.legend(handles, labels, title="Conditions")
plt.show()
```
[](https://i.stack.imgur.com/kF5P7.png)
|
So I haven't worked much with plotting directly from pandas, but you'll have to access the handles and use them to construct lists of handles and labels that you can pass to `plt.legend`.
```py
s.plot(kind='barh', color=[colors[i] for i in df['Condition']])
# Get the original handles.
original_handles = plt.gca().get_legend_handles_labels()[0][0]
# Hold the handles and labels that will be passed to legend in lists.
handles = []
labels = []
conditions = df['Condition'].values
# Seen conditions helps us make sure that each label is added only once.
seen_conditions = set()
# Iterate over the conditions and handles together.
for condition, handle in zip(conditions, original_handles):
    # If the condition was already added to the labels, then ignore it.
    if condition in seen_conditions:
        continue
    # Add the handle and label.
    handles.append(handle)
    labels.append(condition)
    seen_conditions.add(condition)
# Call legend with the stored handles and labels.
plt.legend(handles, labels)
```
| 15,258
|
31,570,675
|
I couldn't find much information on try vs. if when you're checking more than one thing. I have a tuple of strings, and I want to find an item and store the element that follows it. There are 2 cases.
```
mylist = ('a', 'b')
```
or
```
mylist = ('y', 'z')
```
I want to look at `a` or `y`, depending on which exists, and grab the index that follows. Currently, I'm doing this:
```
item['index'] = None
try:
    item['index'] = mylist[mylist.index('a') + 1]
except ValueError:
    pass
try:
    item['index'] = mylist[mylist.index('y') + 1]
except ValueError:
    pass
```
After reading [Using try vs if in python](https://stackoverflow.com/questions/1835756/using-try-vs-if-in-python#), I think it may actually be better/more efficient to write it this way, since half the time, it will likely raise an exception, (`ValueError`), which is more expensive than an if/else, if I'm understanding correctly:
```
if 'a' in mylist:
item['index'] = mylist[mylist.index('a') + 1]
elif 'y' in mylist:
item['index'] = mylist[mylist.index('y') + 1]
else:
item['index'] = None
```
Am I right in my assumption that `if` is better/more efficient here? Or is there a better way of writing this entirely that I'm unaware of?
|
2015/07/22
|
[
"https://Stackoverflow.com/questions/31570675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2887891/"
] |
Your first code snippet should rather be like below:
```
try:
item['index'] = mylist[mylist.index('a') + 1]
except ValueError:
try:
item['index'] = mylist[mylist.index('y') + 1]
except ValueError:
item['index'] = None
```
or have a for loop like this:
```
for element in ['a', 'y']:
try:
item['index'] = mylist[mylist.index(element) + 1]
except ValueError:
continue
else:
break
else:
item['index'] = None
```
IMO using try blocks is better performance-wise, since you avoid an explicit if-check in every case regardless of how often the `ValueError` occurs, and it's also more readable and Pythonic.
|
Performance-wise, exceptions in Python are not heavy like in other languages and you shouldn't worry about using them whenever it's appropriate.
That said, IMO the second code snippet is more concise and clean.
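As an aside, a hedged alternative (not from either answer): the lookup can also be written with `next` over a generator expression, which avoids both repeated `try` blocks and scanning the tuple twice. This sketch assumes, as in the question's data, that a matched key is never the last element:

```python
# Find the element that follows the first of several keys present in the tuple.
mylist = ('y', 'z')
item = {}
item['index'] = next(
    (mylist[i + 1] for key in ('a', 'y')
     for i, v in enumerate(mylist) if v == key),
    None,  # default when neither 'a' nor 'y' is present
)
print(item['index'])  # 'z' for ('y', 'z'); 'b' for ('a', 'b'); None otherwise
```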
| 15,259
|
23,178,606
|
My Python code has been crashing with the error 'GC Object already Tracked'. I'm trying to figure out the best approach to debug these crashes.
OS : Linux.
* Is there a proper way to debug this issue.
There were couple of suggestions in the following article.
[Python memory debugging with GDB](https://stackoverflow.com/questions/273043/python-memory-debugging-with-gdb/23143858#23143858)
Not sure which approach worked for the author.
* Is there a way to generate memory dumps in such scenario which could be analyzed. Like in Windows world.
Found some articles on this, but they don't entirely answer my question:
<http://pfigue.github.io/blog/2012/12/28/where-is-my-core-dump-archlinux/>
|
2014/04/20
|
[
"https://Stackoverflow.com/questions/23178606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/992301/"
] |
Found out the reason for this issue in my scenario (not necessarily the only reason for the GC object crash).
I used the GDB and Core dumps to debug this issue.
I have Python and C Extension Code (in shared object).
Python code registers a Callback routine with C Extension code.
In a certain workflow a thread from C Extension code was calling the registered Call back routine in Python code.
This usually worked fine but when multiple threads did the same action concurrently it resulted in the Crash with 'GC Object already tracked'.
Synchronizing the access to python objects for multiple thread does resolve this issue.
Thanks to anyone who responded to this.
|
The problem is that you try to add an object to Python's cyclic garbage collector tracking twice.
Check out [this bug](http://bugs.python.org/issue6128), specifically:
* The [documentation for supporting cyclic garbage collection](https://docs.python.org/2/c-api/gcsupport.html) in Python
* [My documentation patch for the issue](http://bugs.python.org/file33497/gcsupport-doc.diff) and
* [My explanation in the bug report itself](http://bugs.python.org/msg208283)
Long story short: if you set `Py_TPFLAGS_HAVE_GC` and you are using Python's built-in memory allocation (standard `tp_alloc`/`tp_free`), you don't ever have to manually call `PyObject_GC_Track()` or `PyObject_GC_UnTrack()`. Python handles it all behind your back.
Unfortunately, this is not documented very well, at the moment. Once you've fixed the issue, feel free to chime in on the bug report (linked above) about better documentation of this behavior.
| 15,262
|
72,890,039
|
I have two lists of lists, and I want to check which elements of l1 are not in l2 and add all those elements as a new list ls in l2. For example,
input:
l1 = [[1,2,3],[5,6],[7,8,9,10]]
l2 = [[1,8,10],[3,9],[5,6]]
output:
l2 = [[1,8,10],[3,9],[5,6],[2,7]]
I wrote this code but it doesn't seem to work...
```
ls = []
for i in range(len(l1)):
for j in range(len(l1[i])):
if l1[i][j] not in l2 and l1[i][j] not in ls:
ls.append(l1[i][j])
l2.append(ls)
```
I coded this in Python. Any idea where the problem could be?
|
2022/07/06
|
[
"https://Stackoverflow.com/questions/72890039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19419689/"
] |
This should work:
```
l1 = [[1,2,3],[5,6],[7,8,9,10]]
l2 = [[1,8,10],[3,9],[5,6]]
l1n = set([x for xs in l1 for x in xs])
l2n = set([x for xs in l2 for x in xs])
l2.append(list(l1n - l2n))
```
|
You are checking whether the elements are in `l2`, not if they are in any of the sublists of `l2`. First, you should probably create a `set` from all the elements in all the sublists of `l2` to make that check faster. Then you can create a similar list comprehension on `l1` to get all the elements that are not in that set. Finally, add that to `l2` or create a new list.
```
>>> l1 = [[1,2,3],[5,6],[7,8,9,10]]
>>> l2 = [[1,8,10],[3,9],[5,6]]
>>> s2 = {x for lst in l2 for x in lst}
>>> [x for lst in l1 for x in lst if x not in s2]
[2, 7]
>>> l2 + [_]
[[1, 8, 10], [3, 9], [5, 6], [2, 7]]
```
| 15,264
|
32,555,508
|
I have a large document with a similar structure:
```
Data800,
Data900,
Data1000,
]
}
```
How would I go about removing the last character from the 3rd to last line (in this case, where the comma is positioned next to Data1000). The output should look like this:
```
Data800,
Data900,
Data1000
]
}
```
It will always be the 3rd-to-last line that needs the last character removed. The back-end is Linux, and I can use perl, bash, python, etc.
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32555508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3139767/"
] |
A simple solution using `wc` to count lines and `sed` to do the editing:
```
sed "$(( $(wc -l <file) - 2))s/,$//" file
```
That will output the edited file on stdout; you can edit inplace with `sed -i`.
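Since the question also allows Python, a hedged alternative sketch (the function name is mine) that reads the file into a list and strips the comma from the 3rd-to-last line directly:

```python
# Drop the trailing comma from the 3rd-to-last line of a file, in place.
def strip_third_to_last_comma(path):
    with open(path) as f:
        lines = f.readlines()
    # Remove a trailing comma (if any) while preserving the newline.
    lines[-3] = lines[-3].rstrip('\n').rstrip(',') + '\n'
    with open(path, 'w') as f:
        f.writelines(lines)
```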
|
In python 2.\* :
```
with open('/path/of/file', 'r+') as f:  # 'r+': opening with 'w+' would truncate the file before read()
    data = f.read().replace(',\n]', '\n]')
    f.seek(0)
    f.write(data)
    f.truncate()
```
| 15,267
|
4,756,205
|
Some Python 3 features and modules having been backported to Python 2.7, what are the notable differences between Python 3.1 and Python 2.7?
|
2011/01/21
|
[
"https://Stackoverflow.com/questions/4756205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/538850/"
] |
I think these resources might help you:
* A [introduction to Python "3000"](http://www.python.org/doc/essays/ppt/pycon2008/Py3kAndYou.pdf) from Guido van Rossum
* [Porting your code to Python 3](http://peadrop.com/slides/mp5.pdf)
* and of course the documentation of [changes in Python 3.0](http://docs.python.org/py3k/whatsnew/3.0.html)
And as you said
>
> Some python 3 features and modules having been backported to python 2.7
>
>
>
... I would invert that sentence and say [only few packages](https://python3wos.appspot.com/) yet have been ported from Python 2.x to 3.x. Great libraries like [PyGTK](http://pygtk.org/) still only work in Python 2. Migration can take a while in many projects so before you decide to use Python 3 you may rather think about writing your own projects in Python 2, while ensuring compatibility by testing with [2to3](http://docs.python.org/library/2to3.html) regularly.
|
If you want to use some Python 3 features in Python 2.7, you can import them from the `__future__` module at the beginning of your file and then use them in your code.
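For example, a minimal sketch (these are standard `__future__` feature names):

```python
# Opt in to Python 3 print-function and true-division semantics in Python 2.7.
from __future__ import print_function, division

print("1 / 2 =", 1 / 2)  # 0.5 with true division enabled
```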
| 15,274
|
22,636,974
|
I am trying to figure out how to properly make a discrete state Markov chain model with [`pymc`](http://pymc-devs.github.io/pymc/index.html).
As an example (view in [nbviewer](http://nbviewer.ipython.org/github/shpigi/pieces/blob/master/toyDbnExample.ipynb)), lets make a chain of length T=10 where the Markov state is binary, the initial state distribution is [0.2, 0.8] and that the probability of switching states in state 1 is 0.01 while in state 2 it is 0.5
```
import numpy as np
import pymc as pm
T = 10
prior0 = [0.2, 0.8]
transMat = [[0.99, 0.01], [0.5, 0.5]]
```
To make the model, I make an array of state variables and an array of transition probabilities that depend on the state variables (using the pymc.Index function)
```
states = np.empty(T, dtype=object)
states[0] = pm.Categorical('state_0', prior0)
transPs = np.empty(T, dtype=object)
transPs[0] = pm.Index('trans_0', transMat, states[0])
for i in range(1, T):
states[i] = pm.Categorical('state_%i' % i, transPs[i-1])
transPs[i] = pm.Index('trans_%i' %i, transMat, states[i])
```
Sampling the model shows that the states marginals are what they should be (compared with model built with Kevin Murphy's BNT package in Matlab)
```
model = pm.MCMC([states, transPs])
model.sample(10000, 5000)
[np.mean(model.trace('state_%i' %i)[:]) for i in range(T)]
```
prints out:
```
[-----------------100%-----------------] 10000 of 10000 complete in 7.5 sec
[0.80020000000000002,
0.39839999999999998,
0.20319999999999999,
0.1118,
0.064199999999999993,
0.044600000000000001,
0.033000000000000002,
0.026200000000000001,
0.024199999999999999,
0.023800000000000002]
```
My question is - this does not seem like the most elegant way to build a Markov chain with pymc. Is there a cleaner way that does not require the array of deterministic functions?
My goal is to write a pymc based package for more general dynamic Bayesian networks.
|
2014/03/25
|
[
"https://Stackoverflow.com/questions/22636974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3459899/"
] |
Use this:
```
$('#window')
.data("kendoWindow")
.title("Add new category")
.content("") //this little thing does magic
.refresh({
url: _url
})
.center()
.open();
```
I would however suggest you rearrange your calls:
```
$('#window')
.data("kendoWindow")
.title("Add new category")
//.content("") //this little thing does magic
.content("<img src='ajax-loader.gif' alt='Loading...'/>")
.center()
    .open()
    .refresh({
url: _url
})
```
Using the second configuration and providing a valid loading image, the user will see your window and be informed that the content is being loaded. This is very helpful (not to mention user friendly) since the Kendo window makes an AJAX request when you use the `refresh` function.
Alternatively you could add an event on the window `close` event and set `content("")` inside the handler.
|
You might also want to add
```
.Events(e =>
{
e.Refresh("onRefresh");
})
<script>
function onRefresh() {
this.center();
}
</script>
```
to keep the window centered after the content is loaded as it is loaded asynchronously.
(if your content height is variable)
| 15,275
|
18,983,906
|
Why is the time taken to access an element in a non-homogeneous list considered constant, and how is a non-homogeneous list stored in memory in Python?
For a homogeneous list, knowing the type and position, the memory location can be determined; but how is it done so fast for a non-homogeneous list that it can be considered constant time?
Going through this page [Does Python use linked lists for lists? Why is inserting slow?](https://stackoverflow.com/questions/12274060/does-python-use-linked-lists-for-lists-why-is-inserting-slow) I assume it stores elements in contiguous memory locations.
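A hedged illustration of that point: a CPython list stores references (pointers) to its elements in a contiguous array, not the objects themselves, so indexing is the same pointer lookup regardless of element types:

```python
import sys

# Both lists hold three references; element types do not change the list's layout.
homogeneous = [1, 2, 3]
mixed = [1, "two", 3.0]
print(sys.getsizeof(homogeneous) == sys.getsizeof(mixed))  # True
```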
|
2013/09/24
|
[
"https://Stackoverflow.com/questions/18983906",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2216108/"
] |
Expand your controller action with a new parameter:
```
public JsonResult ResolveProfileSelectionRequired(ResolveProfileSelectionRequiredModel model, bool newtransferee = false)
```
and check it inside the action.
|
You could use:
```
if (Request.Form["newtransferee"].ToLower() == "true")
```
or does it work if you write this instead of the manual html?
```
@Html.HiddenFor(model => model.NewTransferee)
```
| 15,276
|
13,116,026
|
I am new to Python. I am writing a string function to check if a certain string contains certain multiple values. Is there a syntax in python to allow me to do something like this:
```
def strParse(str):
a = 't'
b = 'br'
c = 'ht'
if a in str AND b in str AND c in str:
print('Ok!')
```
(The part I am unsure about is having multiple if statements on line.) Thanks!
|
2012/10/29
|
[
"https://Stackoverflow.com/questions/13116026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1507139/"
] |
Almost correct, just make the `and` lower-case:
```
def strParse(str):
a = 't'
b = 'br'
c = 'ht'
if a in str and b in str and c in str:
print('Ok!')
```
Doesn't cause problems in this case, but you should avoid using variable-names that are also [built-in functions](http://docs.python.org/2/library/functions.html) (`str` is a builtin function/type)
If you have more values, you can do the same thing more tidily like so:
```
values = ['t', 'br', 'ht']
if all(x in instr for x in values):
print("Ok!")
```
|
Why don't you try typing this into the Python REPL? What you're trying to do is perfectly valid in python, except the `and` keyword is lowercase, not uppercase.
| 15,279
|
67,628,700
|
Please help me understand this as I am new to python:
I have a list of names...
```
names = ['John', 'William', 'Amanda']
```
I need to iterate through this list and print out the names as follows:
```
1. John
2. William
3. Amanda
```
I've managed to print the list out however I'm getting this extra space between the index number and the dot like this:
```
1' '. John
2' '. William
3' '. Amanda
```
my code:
```
index = 1
for i in names:
print(index, '.', i)
index += 1
```
|
2021/05/20
|
[
"https://Stackoverflow.com/questions/67628700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15166148/"
] |
Couple of easy ways by which you can achieve this:
1. `enumerate` and `f-string`:
```
for index,item in enumerate(names,start=1):
print(f'{index}. {item}')
```
2. `enumerate` and `format function`:
```
for index,item in enumerate(names,start=1):
print('{}. {}'.format(index,item))
```
3. `enumerate` and `string formating`:
```
for index,item in enumerate(names,start=1):
print('%s. %s' %(index,item))
```
4. `enumerate` and manipulating args of `print` function:
```
for index,item in enumerate(names,start=1):
print(index,'. ',item, sep='')
```
5. With `range`:
```
for i in range(len(names)):
print((i+1),'. ',names[i], sep='')
```
6. with range and `f-string`:
```
for i in range(len(names)):
print(f'{(i+1)}. {names[i]}')
```
7. using `extra variable` (Not recommended):
```
index = 1
names = ['John', 'William', 'Amanda']
for i in names:
print(index, '. ', i, sep='')
index += 1
```
NOTE: There are various other ways to do this. But, I'll recommend using `enumerate` with `f-string`.
|
You can add a separator using the `sep` option in `print`
Like this
```
index = 1
names = ['John', 'William', 'Amanda']
for i in names:
print(index, '.' ,' ', i, sep='') #If you dont want the space between the '.' and i you may remove it
index += 1
#OUTPUT:
#1. John
#2. William
#3. Amanda
```
| 15,281
|
5,529,075
|
I'm working on an academic project aimed at studying people's behavior.
The project will be divided in three parts:
1. A program to read the data from some remote sources, and build a local data pool with it.
2. A program to validate this data pool, and to keep it coherent
3. A web interface to allow people to read/manipulate the data.
The data consists of a list of people, all with an ID #, and with several characteristics: height, weight, age, ...
I need to easily make groups out of this data (e.g. everyone with a given age, or within a range of heights), and the data is several TB big (but can be reduced to smaller subsets of 2-3 GB).
I have a strong background on the theoretical stuff behind the project, but I'm not a computer scientist. I know java, C and Matlab, and now I'm learning python.
I would like to use Python since it seems easy enough and greatly reduces the verbosity of Java. The problem is that I'm wondering how to handle the data pool.
I'm no expert of databases but I guess I need one here. What tools do you think I should use?
Remember that the aim is to implement very advanced mathematical functions on sets of data, thus we want to reduce complexity of source code. Speed is not an issue.
|
2011/04/03
|
[
"https://Stackoverflow.com/questions/5529075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/269931/"
] |
Sounds that the main functionality needed can be found from:
[pytables](http://www.pytables.org/moin)
and
[scipy/numpy](http://www.scipy.org/)
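Building on the numpy suggestion, a hedged sketch of the kind of group selection the question describes (boolean masks over per-person characteristic arrays; the data here is made up):

```python
import numpy as np

# Toy data: one entry per person, aligned across arrays.
ages = np.array([25, 34, 41, 29, 60])
heights = np.array([170, 182, 165, 175, 168])

# Select the heights of everyone aged 25-39.
mask = (ages >= 25) & (ages < 40)
print(heights[mask].tolist())  # [170, 182, 175]
```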
|
Since you aren't an expert, I recommend using a MySQL database as the backend for storing your data. It's easy to learn, and you'll be able to query your data using SQL and write your data from Python. See this [MySQL Guide](http://dev.mysql.com/doc/refman/5.1/en/index.html) and [Python-Mysql](http://www.kitebird.com/articles/pydbapi.html).
| 15,288
|
52,334,508
|
When trying to implement a custom `get_success_url` method in Python, Django throws a `TypeError: quote_from_bytes()` error. For example:
```
class SomeView(generic.CreateView):
#...
def get_success_url(self):
return HttpResponseRedirect(reverse('index'))
```
|
2018/09/14
|
[
"https://Stackoverflow.com/questions/52334508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1649917/"
] |
`get_success_url` doesn't return an HttpResponseRedirect instead it should return the url you want to redirect to. So you can just return `reverse('index')`:
```
def get_success_url(self):
return reverse('index')
```
|
A shortcut for `HttpResponseRedirect` is `redirect("view_name")`; it returns an `HttpResponse` (HTML).
`reverse` returns a URL.
| 15,291
|
29,499,819
|
Trying to make concentric squares in python with Turtle. Here's my attempt:
```
import turtle
def draw_square(t, size):
for i in range(4):
t.forward(size)
t.left(90)
wn = turtle.Screen()
dan = turtle.Turtle()
sizevar = 1
for i in range(10):
draw_square(dan,sizevar)
sizevar += 20
dan.penup()
dan.backward(sizevar/ 2)
dan.right(90)
dan.forward(sizevar / 2)
dan.left(90)
dan.pendown()
```
I'm not sure why they aren't concentric, my `dan.backward(sizevar/2)` and
`dan.forward(sizevar/2)` lines seem to move the square down and to the left too much?
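A hedged aside on the centering arithmetic: each square grows by 20 per iteration, so to keep all squares centered the lower-left corner only needs to shift by half of that increment (10) each time; moving by `sizevar/2` after incrementing shifts too far. A minimal, Turtle-free sketch of that corner bookkeeping (the helper function is hypothetical):

```python
# Track the lower-left corner needed to keep growing squares concentric.
# Growing side length by `step` each time means shifting the corner by step/2.
def corners(n, start=1, step=20):
    x = y = 0.0
    size = start
    result = []
    for _ in range(n):
        result.append((x, y, size))
        size += step
        x -= step / 2
        y -= step / 2
    return result

# Each square's center is corner + size/2; with this scheme all centers coincide.
print({(x + s / 2, y + s / 2) for x, y, s in corners(5)})  # a single center point
```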
|
2015/04/07
|
[
"https://Stackoverflow.com/questions/29499819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/721938/"
] |
What you're currently doing is adding an onclick event to a link that calls a function *that adds another onclick event via jQuery*.
Remove the `onclick` property and the `open_file()` function wrapper so that jQuery adds the event as you intended.
|
You do not need the `onclick="open_file()"`. Do this:
```
<div id='linkstofiles'>
<a id="3.txt">3.txt</a>
//other links go here
</div>
$("#linkstofiles a").click(function (event) {
var file_name = event.target.id;
$("#f_name").val(file_name);
$.ajax({
url: "docs/" + file_name,
dataType: "text",
success: function (data) {
$("#text_form").val(data);
$('#text_form').removeAttr('readonly');
$('#delete_button').removeAttr('disabled');
$('#save_button').removeAttr('disabled');
}
})
});
```
| 15,292
|
7,492,454
|
I have 2 large, unsorted arrays (structured set of xyz coordinates) and I'm trying to find the positions of all identical subarrays (common points consisting of 3 coordinates). Example:
```
a = array([[0, 1, 2], [3, 4, 5]])
b = array([[3, 4, 5], [6, 7, 8]])
```
Here the correct subarray would be `[3, 4, 5]`, but more than one identical subarrays are possible. The correct indexes would be `[0,1]` for `a` and `[1,0]` for `b`.
I already implemented a pure python method by iterating over all points of one array and comparing them to every point of the other array, but this is extremely slow.
My question is, is there an efficient way to find the indexes for both arrays (preferably in numpy, because I need the arrays for further calculations)? Perhaps a rolling\_window approach?
|
2011/09/20
|
[
"https://Stackoverflow.com/questions/7492454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/958582/"
] |
A general solution for Python iterables (not specific to numpy or arrays) that works in linear average time (O(n+m), n is the number of subarrays and m is the number of unique subarrays):
```
a = [[0, 1, 2], [3, 4, 5]]
b = [[3, 4, 5], [6, 7, 8]]
from collections import defaultdict
indexmap = defaultdict(list)
for row, sublist in enumerate((a, b)):
for column, item in enumerate(sublist):
indexmap[tuple(item)].append((row, column))
repeats = dict((key, value) for key, value in indexmap.iteritems() if len(value) > 1)
```
Gives
```
{(3, 4, 5): [(0, 1), (1, 0)]}
```
If you don't need the double-row-indexes (index in the list and in the stored index) you can simplify the loop to
```
for sublist in (a, b):
    for column, item in enumerate(sublist):
        indexmap[tuple(item)].append(column)
```
as `a` will be processed before `b`, any duplicates will get numbered by row automatically:
```
{(3, 4, 5): [1, 0]}
```
With `repeats[key][rownum]` returning the column index for that row.
|
Can you map each sub-array to its position index in a hash table? So basically, you change your data structure. After that, in linear time O(n), where n is the size of the biggest hash table, in one loop you can query each hash table in O(1) and find out whether the same sub-array is present in two or more hash tables.
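A hedged sketch of that idea with the arrays from the question: build a dict keyed by row tuples of `b`, then scan `a` once with O(1) average lookups:

```python
import numpy as np

a = np.array([[0, 1, 2], [3, 4, 5]])
b = np.array([[3, 4, 5], [6, 7, 8]])

# One O(m) pass over b to build the hash table of rows.
b_rows = {tuple(row): j for j, row in enumerate(b)}
# One O(n) pass over a; each membership test is O(1) on average.
matches = [(i, b_rows[tuple(row)]) for i, row in enumerate(a) if tuple(row) in b_rows]
print(matches)  # [(1, 0)]: row 1 of a equals row 0 of b
```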
| 15,295
|
66,170,590
|
Currently I have a program that requests JSONs from a specific API. The creators of the API claim this data is in GeoJSON, but QGIS cannot read it.
So I want to extend my Python script to convert the JSON to GeoJSON in a format readable by QGIS, and process it further there.
However, I have a few problems,
First, I do not know where to start or how the JSON files are constructed... it's kind of a mess. The JSONs were actually made based on two API GET requests: one GET request fetches the points and the second one requests the details of the points. See this question here for some more context:
[API Request within another API request (Same API) in Python](https://stackoverflow.com/questions/66152751/api-request-within-another-api-request-same-api-in-python/66153040?noredirect=1#comment116958447_66153040)
Note because of character limit I cannot post the code here:
These details are of course supposed to be "the contours" of the area surrounding these points.
The thing is, the point itself is mentioned in the JSON, making it kind of hard to specify which coordinate belongs to each point. Not to mention that all the other attributes that are interesting to us are in the "point" part of the GeoJSON.
Take a look at the JSON itself to see what I mean:
```
{
"comment": null,
"contactDetails": {
"city": null,
"country": "BE",
"email": null,
"extraAddressInfo": null,
"firstName": null,
"kboNumber": null,
"lastName": "TMVW",
"number": null,
"organisation": "TMVW - Brugge-Kust",
"phoneNumber1": null,
"phoneNumber2": null,
"postalCode": null,
"street": null
},
"contractor": null,
"description": "WEGENISWERKEN - Oostende - 154-W - H Baelskaai-project Oosteroever",
"diversions": [],
"endDateTime": "2021-06-30T00:00:00",
"gipodId": 1042078,
"hindrance": {
"description": null,
"direction": null,
"effects": [
"Omleiding: beide richtingen",
"Afgesloten: volledige rijweg",
"Fietsers hebben geen doorgang",
"Handelaars bereikbaar",
"Plaatselijk verkeer: toegelaten",
"Voetgangers hebben doorgang"
],
"important": true,
"locations": [
"Voetpad",
"Fietspad",
"Parkeerstrook",
"Rijbaan"
]
},
"latestUpdate": "2020-12-01T12:16:00.03",
"location": {
"cities": [
"Oostende"
],
"coordinate": {
"coordinates": [
2.931988215468502,
51.23633810341717
],
"crs": {
"properties": {
"name": "urn:ogc:def:crs:OGC:1.3:CRS84"
},
"type": "name"
},
"type": "Point"
},
"geometry": {
"coordinates": [
[
[
2.932567101705748,
51.23657315009855
],
[
2.9309934586397337,
51.235776874431004
],
[
2.9328606392338914,
51.2345112414401
],
[
2.9344086040607285,
51.23535468563417
],
[
2.9344709862243095,
51.23529463700852
],
[
2.932928489694045,
51.23447026126373
],
[
2.935453674897618,
51.2326691257775
],
[
2.937014893295095,
51.23347469462423
],
[
2.9370649363167556,
51.23342209549579
],
[
2.9355339718818847,
51.23261689467634
],
[
2.937705787093551,
51.23108125372614
],
[
2.939235922008332,
51.23191301940206
],
[
2.9393162149112086,
51.231860784836144
],
[
2.9377921292631313,
51.23102909334536
],
[
2.9395494398210404,
51.22978103014327
],
[
2.9395326861153492,
51.22973522407282
],
[
2.9307116955342982,
51.23588365892173
],
[
2.93077732400986,
51.235914858980586
],
[
2.930921969180147,
51.23581685905391
],
[
2.932475593354336,
51.23662429379119
],
[
2.932567101705748,
51.23657315009855
]
]
],
"crs": {
"properties": {
"name": "urn:ogc:def:crs:OGC:1.3:CRS84"
},
"type": "name"
},
"type": "Polygon"
}
},
"mainContractor": null,
"owner": "TMVW - Brugge-Kust",
"reference": "DOM-154/15/004-W",
"startDateTime": "2017-05-02T00:00:00",
"state": "In uitvoering",
"type": "Wegeniswerken (her)aanleg",
"url": null
}
```
As a reminder, these JSONs are not supposed to be point files, and the geometry can be a Polygon (as seen above), a MultiPolygon, or a MultiLineString (I do not have an example here).
**So how do I get started and make sure I not only get the detail attributes out but also the contour geometry and coordinates?**
The only unique ID is the GIPOD ID, because that is where the actual link in this API database is.
Edit 1:
So this is the GeoJSON standard format.
```
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [125.6, 10.1]
},
"properties": {
"name": "Dinagat Islands"
}
}
```
Based on this standard, the converted JSON should be this:
```
"type": "Feature",
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
2.932567101705748,
51.23657315009855
],
[
2.9309934586397337,
51.235776874431004
],
[
2.9328606392338914,
51.2345112414401
],
[
2.9344086040607285,
51.23535468563417
],
[
2.9344709862243095,
51.23529463700852
],
[
2.932928489694045,
51.23447026126373
],
[
2.935453674897618,
51.2326691257775
],
[
2.937014893295095,
51.23347469462423
],
[
2.9370649363167556,
51.23342209549579
],
[
2.9355339718818847,
51.23261689467634
],
[
2.937705787093551,
51.23108125372614
],
[
2.939235922008332,
51.23191301940206
],
[
2.9393162149112086,
51.231860784836144
],
[
2.9377921292631313,
51.23102909334536
],
[
2.9395494398210404,
51.22978103014327
],
[
2.9395326861153492,
51.22973522407282
],
[
2.9307116955342982,
51.23588365892173
],
[
2.93077732400986,
51.235914858980586
],
[
2.930921969180147,
51.23581685905391
],
[
2.932475593354336,
51.23662429379119
],
[
2.932567101705748,
51.23657315009855
]
]
},
"properties": {
Too many columns to type; cannot list them all here because of the character limit.
}
}
```
Edit 3: The Solution provided was a good step in the right direction but it gives me two problems:
1. The saved geometry is Null, making this JSON a table without geometry.
2. Only 1 data\_point is saved, and not all data\_points that are being requested.
Here is the JSON:
```
{"type": "FeatureCollection", "features": [{"type": "Feature", "geometry": null, "properties": {"gipodId": 3099549, "StartDateTime": null, "EndDateTime": null, "state": null, "location": {"cities": ["Waregem", "Wielsbeke"], "coordinate": {"coordinates": [3.4206971887218445, 50.91662742195287], "type": "Point", "crs": {"type": "name", "properties": {"name": "urn:ogc:def:crs:OGC:1.3:CRS84"}}}}}}]}
```
Python code below, to show how far it has gotten:
```
import requests
import json
import os
import glob
import shutil
def process_location_data(location_json):
"""Converts the data point into required geojson format"""
# assigning variable to store geometry details
geometery_details = location_json.get("location").get("geometery")
#geometery_details.pop("crs") # removes the "crs" from geometry
# includes city and location_coordinates
location_details = {
"cities": location_json.get("location").get("cities"),
"coordinate": location_json.get("location").get("coordinate")
}
#EndDateTime
end_datetime = location_json.get("EndDateTime")
#StarDateTime
start_datetime = location_json.get("StarDateTime")
#State
state = location_json.get("State")
#gipodId
gipod_id = location_json.get("gipodId")
#adding all these details into another dict
properties = {
"gipodId": gipod_id,
"StartDateTime": start_datetime,
"EndDateTime": end_datetime,
"state": state,
"location": location_details
}
# creating the final dict to be returned.
geojson_data_point = {
"type": "Feature",
"geometry" : geometery_details,
"properties": properties
}
return geojson_data_point
def process_all_location_data(all_location_points):
"""
For all the points in the location details we will
create the feature collection
"""
feature_collection = {
"type": "FeatureCollection",
"features": []
} #creates dict with zero features.
for data_point in all_location_points:
feature_collection.get("features").append(
process_location_data(data_point)
)
return feature_collection
def fetch_details(url: str, file_name: str):
# Makes request call to get the data of detail
response = requests.get(url)
print("Sending Request for details of gpodId: " + file_name)
folder_path ='api_request_jsons/fetch_details/JSON unfiltered'
text = json.dumps(response.json(),sort_keys=False, indent=4)
print("Details extracted for: "+ file_name)
save_file(folder_path,file_name,text)
return response.json()
# save_file(folder_path,GipodId,text2)
# any other processe
def fetch_points(url: str):
response = requests.get(url)
folder_path ='api_request_jsons/fetch_points'
text = json.dumps(response.json(),sort_keys=False, indent=4)
print("Points Fetched, going to next step: Extracting details")
for obj in response.json():
all_location_points = [fetch_details(obj.get("detail"),str(obj.get("gipodId")))]
save_file(folder_path,'points',text)
feature_collection_json = process_all_location_data(all_location_points)
text2 = json.dumps(process_all_location_data(all_location_points))
folder_path2 = "api_request_jsons/fetch_details/Coordinates"
file_name2 = "Converted"
save_file(folder_path2,file_name2,text2)
return feature_collection_json
def save_file(save_path: str, file_name: str, file_information: str):
completeName = os.path.join(save_path, file_name +".json")
print(completeName + " saved")
file1 = open(completeName, "wt")
file1.write(file_information)
file1.close()
api_response_url = "http://api.gipod.vlaanderen.be/ws/v1/workassignment"
fetch_points(api_response_url)
```
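A hedged observation on the null geometry in Edit 3: it is consistent with the misspelled key in `process_location_data`, which asks for `"geometery"` while the JSON key is spelled `"geometry"`; `dict.get()` returns `None` for a missing key instead of raising, so the typo surfaces only as `"geometry": null` in the output. A minimal sketch of that behavior:

```python
# A misspelled key silently yields None with dict.get(), which then
# serializes as "geometry": null in the generated GeoJSON.
location_json = {"location": {"geometry": {"type": "Point",
                                           "coordinates": [2.93, 51.23]}}}
print(location_json["location"].get("geometery"))  # None (misspelled key)
print(location_json["location"].get("geometry"))   # the actual geometry dict
```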
|
2021/02/12
|
[
"https://Stackoverflow.com/questions/66170590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11604150/"
] |
In AuthenticatesUsers.php
```
protected function sendLoginResponse(Request $request)
{
$request->session()->regenerate();
$this->clearLoginAttempts($request);
if ($response = $this->authenticated($request, $this->guard()->user())) {
return $response;
}
return $request->wantsJson()
? new JsonResponse([], 204)
: redirect()->back();
}
```
Or You Can Do this in Your Default Login Controller in Line 31
```
protected $redirectTo = "/your-path";
```
|
When the auth middleware detects an unauthenticated user, it will redirect the user to the login named route. You may modify this behavior by updating the redirectTo function in your application's `app/Http/Middleware/Authenticate.php` file
<https://laravel.com/docs/8.x/authentication#redirecting-unauthenticated-users>
| 15,298
|
9,002,569
|
I am using Python 2.7 and I would like to do some web2py stuff. I am trying to run web2py from Eclipse and I am getting the following error when running the web2py.py script:
WARNING:web2py:GUI not available because Tk library is not installed
I am running on Windows 7 (64 bit).
Any suggestions?
Thank you
|
2012/01/25
|
[
"https://Stackoverflow.com/questions/9002569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] |
Simplest solution: The Oracle client is not installed on the remote server where the SSIS package is being executed.
Slightly less simple solution: The Oracle client is installed on the remote server, but in the wrong bit-count for the SSIS installation. For example, if the 64-bit Oracle client is installed but SSIS is being executed with the 32-bit `dtexec` executable, SSIS will not be able to find the Oracle client.
The solution in this case would be to install the 32-bit Oracle client side-by-side with the 64-bit client.
|
1. Go to My Computer Properties.
2. Then click on Advanced settings.
3. Go to Environment Variables.
4. Set the path to:
```
F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib\MSWin32-x86;F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib;F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib\MSWin32-x86;F:\oracle\product\10.2.0\db_2\perl\site\5.8.3;F:\oracle\product\10.2.0\db_2\perl\site\5.8.3\lib;F:\oracle\product\10.2.0\db_2\sysman\admin\scripts;
```
Change the drive and folder depending on your requirements.
| 15,299
|
72,539,738
|
```
import RPi.GPIO as GPIO
import paho.mqtt.client as mqtt
import time
def privacyfunc():
# Pin Definitions:
led_pin_1 = 7
led_pin_2 = 21
but_pin = 18
# blink LED 2 quickly 5 times when button pressed
def blink(channel):
x=GPIO.input(18)
print("blinked")
for i in range(1):
GPIO.output(led_pin_2, GPIO.HIGH)
time.sleep(0.5)
GPIO.output(led_pin_2, GPIO.LOW)
time.sleep(0.5)
mqttBroker="mqtt.fluux.io"#connect mqtt broker
client=mqtt.Client("privacybtn") #create a client and give a name
client.connect_async(mqttBroker)#from the client -connect broker
while True:
client.publish("privacy", x)#publish this random number to the topic called temparature
print("Just published"+str(x)+"to Topc to privacy")#just print random no to topic temparature
break
def main():
# Pin Setup:
GPIO.setmode(GPIO.BOARD) # BOARD pin-numbering scheme
GPIO.setup([led_pin_1, led_pin_2], GPIO.OUT) # LED pins set as output
GPIO.setup(but_pin, GPIO.IN) # button pin set as input
# Initial state for LEDs:
GPIO.output(led_pin_1, GPIO.LOW)
GPIO.output(led_pin_2, GPIO.LOW)
GPIO.add_event_detect(but_pin, GPIO.FALLING, callback=blink, bouncetime=10)
print("Starting demo now! Press CTRL+C to exit")
try:
while True:
x=GPIO.input(18)
print(x)
# blink LED 1 slowly
GPIO.output(led_pin_1, GPIO.HIGH)
time.sleep(2)
finally:
GPIO.cleanup() # cleanup all GPIOs
if __name__ == '__main__':
main()
```
I need to take this whole code into a function that can be accessed from another Python file. Please help me with this coding part.
|
2022/06/08
|
[
"https://Stackoverflow.com/questions/72539738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19295287/"
] |
For the function call you happened to post, the following regex seems to work:
```
(?:\w+ \w+|\w+\(\w*\))
```
This pattern matches, alternatively, a type followed by variable name, or a function name, with optional parameters. Here is a working [demo](https://regex101.com/r/IAep1Z/1).
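As a quick check, the first pattern runs fine with Python's built-in `re` module. The sample string below is made up for illustration, since the original function call is not shown here:

```python
import re

# Hypothetical function-call text; the asker's real sample is not shown,
# so this string exists only to exercise the pattern.
pattern = re.compile(r"(?:\w+ \w+|\w+\(\w*\))")
sample = "foo(int a, char b, bar())"
print(pattern.findall(sample))  # → ['int a', 'char b', 'bar()']
```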
|
You can use
```none
(?:\G(?!^)\s*,\s*|\()\K\w+(?:\s+\w+|\s*\([^()]*\))
```
See the [regex demo](https://regex101.com/r/OZZi09/1). *Details*:
* `(?:\G(?!^)\s*,\s*|\()` - either the end of the preceding match and then a comma enclosed with zero or more whitespaces, or a `(` char
* `\K` - omit the text matched so far
* `\w+` - one or more word chars
* `(?:\s+\w+|\s*\([^()]*\))` - a non-capturing group matching one of two alternatives
+ `\s+\w+` - one or more whitespaces and then one or more word chars
+ `|` - or
+ `\s*` - zero or more whitespaces
+ `\([^()]*\)` - a `(` char, then zero or more chars other than `(` and `)` and then a `)` char.
| 15,306
|
4,574,605
|
I wrote a simple Python script using SocketServer; it works well on Windows, but when I execute it on a remote Linux machine (Ubuntu), it doesn't work at all.
The script is like below:
```
#-*-coding:utf-8-*-
import SocketServer
class MyHandler(SocketServer.BaseRequestHandler):
def handle(self):
data_rcv = self.request.recv(1024).strip()
print data_rcv
myServer = SocketServer.ThreadingTCPServer(('127.0.0.1', 7777), MyHandler)
myServer.serve_forever()
```
I upload it to the remote machine by SSH, and then run the command `python server.py` on the remote machine, and try to access `xxx.xxx.xxx.xxx:7777/test` with my browser, but nothing is printed on the remote machine's terminal... any ideas?
**UPDATE: Problem solved, it's a firewall issue, thanks you all.**
|
2011/01/01
|
[
"https://Stackoverflow.com/questions/4574605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/325241/"
] |
You are binding to 127.0.0.1:7777 but then trying to access it through the server's external IP (I'll use your placeholder - xxx.xxx.xxx.xxx). A socket bound to 127.0.0.1 listens only on the loopback interface, so connections arriving at xxx.xxx.xxx.xxx:7777 never reach it.
If that doesn't fix it, check your firewall; many hosts set up firewalls that block everything but the handful of ports you are likely to use.
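As a sketch of that fix, here is the question's server restructured to bind to all interfaces. It uses Python 3 names (`socketserver` rather than the Python 2 `SocketServer`), and the handler echoes the data back purely so the effect is observable; those changes are mine, not from the original post:

```python
import socketserver

class MyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024).strip()
        # Echo the data back so a client can see the server responded
        # (the original handler only printed it server-side).
        self.request.sendall(data)

def make_server(host="0.0.0.0", port=7777):
    # "0.0.0.0" accepts connections on every interface, while "127.0.0.1"
    # accepts loopback traffic only.
    return socketserver.ThreadingTCPServer((host, port), MyHandler)

# On the real server you would then call:
#     make_server().serve_forever()
```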
|
Try with telnet or nc first: telnet to your public IP with your port and see what response you get. Also, why are you accessing /test from the browser? I don't see that part in the code. I hope you have taken care of that.
| 15,308
|
17,548,865
|
I followed this link : "<https://pypi.python.org/pypi/bottle-mysql/0.1.1>"
and "<http://bottlepy.org/docs/dev/>"
this is my py file:
```
import bottle
from bottle import route, run, template
import bottle_mysql
app = bottle.Bottle()
# # dbhost is optional, default is localhost
plugin = bottle_mysql.Plugin(dbuser='root', dbpass='root', dbname='delhipoc')
app.install(plugin)
@route('/hai/<name>')
def show(name,dbname):
dbname.execute('SELECT id from poc_people where name="%s"', (name))
print "i am in show"
return template('<b>Hello {{name}}</b>!',name=name)
run(host='localhost', port=8080)
```
this is my code and it is throwing error like:
```
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\bottle.py", line 764, i
return route.call(**args)
File "C:\Python27\lib\site-packages\bottle.py", line 1575,
rv = callback(*a, **ka)
TypeError: show() takes exactly 2 arguments (1 given)
```
please help me
|
2013/07/09
|
[
"https://Stackoverflow.com/questions/17548865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049591/"
] |
Here is the complete behavior you wanted
Use `$(document)` instead `$('body')`
```
$(document).not('#menu, #menu-trigger').click(function(event) {
event.preventDefault();
if (menu.is(":visible"))
{
menu.slideUp(400);
}
});
```
[### Updated Fiddle](http://jsfiddle.net/yGMZt/6/)
And also use `event.stopPropagation()` in `click()` of menu trigger
|
You need to add `event.stopPropagation();` [demo](http://jsfiddle.net/ruddog2003/yGMZt/3/)
>
> Description: Prevents the event from bubbling up the DOM tree, preventing any parent handlers from being notified of the event.
> [more](http://api.jquery.com/event.stopPropagation/)
>
>
>
| 15,311
|
46,647,167
|
I am working on a simple chat app in python 3.6.1 for personal use. I get this error with select.select:
```
Traceback (most recent call last):
File "C:\Users\Nathan Glover\Google Drive\MAGENTA Chat\chat_server.py", line
27, in <module>
ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
ValueError: file descriptor cannot be a negative integer (-1)
```
Here is the code:
```
ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
```
This is entirely because I don't understand select very well, and the documentation was no help. Could someone explain why this is happening?
|
2017/10/09
|
[
"https://Stackoverflow.com/questions/46647167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8099482/"
] |
I know it's been a long time since this question was asked, but I wanted to let the OP and others know about the problem here.
The problem is that SOCKET\_LIST must contain a non-existent socket connection that was disconnected earlier. If you pass such a connection to select it gives this error
```
ValueError: file descriptor cannot be a negative integer (-1)
```
A simple solution to this would be to put the `select` block inside a try-except block and catch the error. When an error is found, the connection can be removed from the SOCKET\_LIST.
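A minimal sketch of that try/except approach; the wrapper name and the retry-once policy are mine, not from the original post:

```python
import select

def prune_and_select(socket_list, timeout=0):
    """select() wrapper that drops closed sockets and retries once.

    A closed socket reports fileno() == -1, which makes select() raise
    ValueError; the except branch filters such sockets out of the list
    in place, then retries.
    """
    try:
        return select.select(socket_list, [], [], timeout)
    except ValueError:
        socket_list[:] = [s for s in socket_list if s.fileno() >= 0]
        return select.select(socket_list, [], [], timeout)
```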
|
I happened to have this problem too, and I have solved it.
I hope this answer will help everyone.
The fileno of a closed socket becomes -1, so the error most likely means there is a closed socket in the input lists passed to select.
That is to say, the logic in your select loop that adds sockets to rlist, wlist, and xlist is probably flawed.
The simple and crude fix is a try-except that removes any socket with a negative fileno.
But I recommend that you reorganize your logic; if you are not sure, use sets for rlist, wlist, and xlist instead of lists to avoid duplicate elements.
| 15,318
|
63,182,541
|
I am trying to send message from Raspberry Pi (Ubuntu 20) to Laptop (Virtualbox Ubuntu 20) via UDP socket. So I am using simple code from <https://wiki.python.org/moin/UdpCommunication>
Sending (from Raspberry Pi)
```
import socket
UDP_IP = "127.0.0.1"
UDP_PORT = 5005
MESSAGE = b"Hello, World!"
print("UDP target IP: %s" % UDP_IP)
print("UDP target port: %s" % UDP_PORT)
print("message: %s" % MESSAGE)
sock = socket.socket(socket.AF_INET, # Internet
socket.SOCK_DGRAM) # UDP
sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
```
Receiving (from laptop)
```
import socket
UDP_IP = "127.0.0.1"
UDP_PORT = 5005
sock = socket.socket(socket.AF_INET, # Internet
socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))
while True:
data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
print("received message: %s" % data)
```
I tried `UDP_IP = "0.0.0.0"` in both ends.
I tried Ip address of my laptop at RPI end.
I tried both machine's IP address at both ends.
I tried `sock.bind(("", UDP_PORT))` to bind all.
I tried adding the UDP port number in firewall settings.
I checked multiple questions from this forum related to this.
Still, I cannot get any packets at the laptop receiving side. I do not know what is wrong.
Please advise.
|
2020/07/30
|
[
"https://Stackoverflow.com/questions/63182541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13118262/"
] |
The problem may be in the IP address: you are using '127.0.0.1' (localhost) to reach an outside device. Find out the IPs of your devices (try the `ifconfig` Linux command), and check that nothing is blocking your connection.
Consider that for socket.bind(address) you can use '0.0.0.0', while for socket.sendto(bytes, address) you should use the IP of the device you want to send to.
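A self-contained sketch of those two rules, with both ends in one process so loopback stands in for the real LAN address (port 0 lets the OS pick a free port for the demo; on the real machines use a fixed port such as 5005):

```python
import socket

# Receiver: bind to all interfaces ("0.0.0.0") so packets arriving on
# any NIC are accepted.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("0.0.0.0", 0))
port = recv_sock.getsockname()[1]

# Sender: address the receiving machine's real IP. Loopback works here
# only because both ends run in one process; on the LAN, use the
# laptop's actual address instead.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"Hello, World!", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)
print(data)  # b'Hello, World!'
send_sock.close()
recv_sock.close()
```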
I recommend you to download a program called Hercules, with this program you can create a UDP peer to figure out what is not working properly. For example, you could use python in one side and Hercules in the other to rule out mistakes in one side of the code execution, you could also try two Hercules connections and see if you can establish communication in which case the problem is most likely related to the code execution, in the other hand if you can not establish a connection between the two Hercules UDP peers the problem is most likely with the devices or the network itself.
|
If you are using a static IP on the RPi, you need to add a static route:
```
sudo ip route add 236.0.0.0/8 dev eth0
```
| 15,323
|
7,213,965
|
I am using Windows 7, Apache 2.28 and Web Developer Server Suite for my server.
All files are stored under C:/www/vhosts
I downloaded Portable Python 2.7 from <http://www.portablepython.com/> and have installed it to
C:/www/portablepython
I'm trying to find mod\_wsgi to get it to work with 2.7 - but how can I do this?
The reason I'm doing all this is to get a basic site running that uses Python coding, with a view to using Django, in the same way that <http://www.heart.co.uk/westmids/> or <http://www.capitalfm.com/birmingham> do. Obviously my site won't be as advanced as theirs, but you get the gist of it; I'm using Python/Django as a sort of CMS for a news/articles website.
In any case, here's my code from C:/www/vhosts/localhost/testing.py:
```
#!/www/portablepython
print "Content-type: text/html"
print
print "<html><head>"
print ""
print "</head><body>"
print "Hello."
print "</body></html>"
```
This generates a 403 Forbidden error, i.e.:
**You don't have permission to access /testing.py on this server.**
I followed <http://code.google.com/p/modwsgi/wiki/InstallationOnWindows> but renamed modwsgi-version-number-datedownload.so to modwsgi.so so did that cause the error?
What do I need to do to prevent this re-occurring?
I used the Portable version for testing purposes, thinking that I can just delete the folder, and I can install again if necessary without adding to environment variables (I think portable ones do this, correct me if I'm wrong)?
What, if any changes do I need to make? Do I need to make them to the vhosts in httpd-vhosts.conf [my virtual hosts] or elsewhere?
Any help is appreciated; I'll post more as this situation develops.
|
2011/08/27
|
[
"https://Stackoverflow.com/questions/7213965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/847047/"
] |
The script you have at C:/www/vhosts/localhost/testing.py is a CGI script and not a WSGI script. Follow the instructions for configuring mod\_wsgi and what a WSGI script file for hello world should look like at:
<http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide>
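For reference, the quick configuration guide's hello-world WSGI script looks roughly like this (mod\_wsgi expects a module-level callable named `application`, per the WSGI interface):

```python
def application(environ, start_response):
    # WSGI entry point: report status/headers via start_response, then
    # return an iterable of bytes for the response body.
    status = '200 OK'
    output = b'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
```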
|
Also, you should look into using a system install of Python from python.org, plus pip + distribute + virtualenv, to keep contained Python environments for your different sites. This will give you maximum portability.
| 15,325
|
28,097,471
|
**Update**: Here is a more specific example
Suppose I want to compile some statistical data from a sizable set of files:
I can make a generator `(line for line in fileinput.input(files))` and some processor:
```
from collections import defaultdict
scores = defaultdict(int)
def process(line):
if 'Result' in line:
res = line.split('\"')[1].split('-')[0]
scores[res] += 1
```
The question is how to handle this when one gets to the `multiprocessing.Pool`.
Of course it's possible to define a `multiprocessing.sharedctypes` as well as a custom `struct` instead of a `defaultdict` but this seems rather painful. On the other hand I can't think of a pythonic way to instantiate something before the process or to return something after a generator has run out to the main thread.
|
2015/01/22
|
[
"https://Stackoverflow.com/questions/28097471",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3467349/"
] |
So you basically create a histogram. This can easily be parallelized, because histograms can be merged without complication. One might want to say that this problem is trivially parallelizable or ["embarrassingly parallel"](http://en.wikipedia.org/wiki/Embarrassingly_parallel). That is, you do not need to worry about communication among workers.
Just split your data set into multiple chunks, let your workers work on these chunks **independently**, collect the histogram of each worker, and then merge the histograms.
In practice, this problem is best off by letting each worker process/read its own file. That is, a "task" could be a file name. You should not start pickling file contents and send them around between processes through pipes. Let each worker process retrieve the bulk data *directly* from files. Otherwise your architecture spends too much time with inter-process communication, instead of doing some real work.
Do you need an example or can you figure this out yourself?
Edit: example implementation
----------------------------
I have a number of data files with file names in this format: `data0.txt`, `data1.txt`, ... .
Example contents:
```
wolf
wolf
cat
blume
eisenbahn
```
The goal is to create a histogram over the words contained in the data files. This is the code:
```
from multiprocessing import Pool
from collections import Counter
import glob
def build_histogram(filepath):
"""This function is run by a worker process.
The `filepath` argument is communicated to the worker
through a pipe. The return value of this function is
communicated to the manager through a pipe.
"""
hist = Counter()
with open(filepath) as f:
for line in f:
hist[line.strip()] += 1
return hist
def main():
"""This function runs in the manager (main) process."""
# Collect paths to data files.
datafile_paths = glob.glob("data*.txt")
# Create a pool of worker processes and distribute work.
# The input to worker processes (function argument) as well
# as the output by worker processes is transmitted through
# pipes, behind the scenes.
pool = Pool(processes=3)
histograms = pool.map(build_histogram, datafile_paths)
# Properly shut down the pool of worker processes, and
# wait until all of them have finished.
pool.close()
pool.join()
# Merge sub-histograms. Do not create too many intermediate
# objects: update the first sub-histogram with the others.
# Relevant docs: collections.Counter.update
merged_hist = histograms[0]
for h in histograms[1:]:
merged_hist.update(h)
for word, count in merged_hist.items():
print "%s: %s" % (word, count)
if __name__ == "__main__":
main()
```
Test output:
```
python countwords.py
eisenbahn: 12
auto: 6
cat: 1
katze: 10
stadt: 1
wolf: 3
zug: 4
blume: 5
herbert: 14
destruction: 4
```
|
I had to modify the original pool.py (the trouble was that worker is defined as a plain function without any inheritance) to get what I want, but it's not so bad, and probably better than writing a new pool entirely.
```
class worker(object):
def __init__(self, inqueue, outqueue, initializer=None, initargs=(), maxtasks=None,
wrap_exception=False, finalizer=None, finargs=()):
assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
put = outqueue.put
get = inqueue.get
self.completed = 0
if hasattr(inqueue, '_writer'):
inqueue._writer.close()
outqueue._reader.close()
if initializer is not None:
initializer(self, *initargs)
def run(self):
while maxtasks is None or (maxtasks and self.completed < maxtasks):
try:
task = get()
except (EOFError, OSError):
util.debug('worker got EOFError or OSError -- exiting')
break
if task is None:
util.debug('worker got sentinel -- exiting')
break
job, i, func, args, kwds = task
try:
result = (True, func(*args, **kwds))
except Exception as e:
if wrap_exception:
e = ExceptionWithTraceback(e, e.__traceback__)
result = (False, e)
try:
put((job, i, result))
except Exception as e:
wrapped = MaybeEncodingError(e, result[1])
util.debug("Possible encoding error while sending result: %s" % (
wrapped))
put((job, i, (False, wrapped)))
self.completed += 1
if finalizer:
finalizer(self, *finargs)
util.debug('worker exiting after %d tasks' % self.completed)
run(self)
```
| 15,327
|
13,335,679
|
I want to run a system command in the background using python 2.7, this is what I have:
```
import commands
path = '/fioverify.fio'
cmd= "/usr/local/bin/fio" + path + " "+ " &"
print cmd
handle = commands.getstatusoutput(cmd)
```
This fails. If I remove the ampersand `&` it works. I need to run a command (`/usr/local/bin/fio/fioverifypath`) in the background.
Any pointers on how to accomplish this?
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13335679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1618238/"
] |
Don't use `commands`; it's deprecated and not actually useful for your purposes. Use `subprocess` instead.
```
fio = subprocess.Popen(["/usr/local/bin/fio", path])
```
runs the `fio` command in parallel with your process and binds the variable `fio` to a handle to the process. You can then call `fio.wait()` to wait for the process to finish and retrieve its return status.
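A short sketch of that pattern, with `sleep` standing in for the long-running `fio` command from the question:

```python
import subprocess

# Popen returns immediately, so the parent keeps running while the
# child command works in the background.
proc = subprocess.Popen(["sleep", "1"])
print("doing other work while the command runs in the background...")

status = proc.wait()  # block here only when you need the result
print("exit status:", status)  # 0 on success
```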
|
Use the [subprocess](http://docs.python.org/2/library/subprocess.html) module, [`subprocess.Popen`](http://docs.python.org/2/library/subprocess.html#subprocess.Popen) allows you to run a command as a subprocess (in the background) and check its status.
| 15,328
|
53,055,671
|
How do you run the command line in order to read and write a csv file in python on a MacBook?
|
2018/10/30
|
[
"https://Stackoverflow.com/questions/53055671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10387703/"
] |
The first APK you linked to isn't a valid APK. It's just a plain text file, with the following text repeated over and over:
```
HTTP/1.1 200 OK
Date: Sat, 27 Oct 2018 17:35:36 GMT
Strict-Transport-Security: max-age=31536000;includeSubDomains; preload
Last-Modified: Sat, 28 Jul 2018 11:40:03 GMT
ETag: "23b1fe5-5720db0636ac0"
Accept-Ranges: bytes
Content-Length: 37429221
Keep-Alive: timeout=20
Connection: Keep-Alive
```
Obviously, just HTTP response headers repeated don't form a valid APK. The reason that your tools are failing on that file isn't that it's encrypted/obfuscated/hardened, but that it's not really an APK at all, and wouldn't work if you tried to install it.
---
The second APK you linked to extracts for me fine when I `unzip` it.
My conclusion is that the "hardening" you mention doesn't exist (it seemed to only due to mixing up valid and invalid APKs), and that any APK that successfully installs can also be successfully extracted.
|
That is an encrypted Java classes feature (like DexGuard or Bangcle); it is also protected with Native Library Encryption (NLE) + JNI Obfuscation (JNI) from something like DexProtector (I found that with dynamic analysis tools).
And many thanks to Semantic Scholar for [This](https://pdfs.semanticscholar.org/2b02/44a2944d11f16da3c38ece8aa617ded090d4.pdf) article and [this](https://pdfs.semanticscholar.org/b725/5db6f0aa10aa8d6fa83ac8ab7129e692e927.pdf)
| 15,331
|
61,861,185
|
Let's say I have the following class.
```py
class A:
def __init__(self, x=0, y=0):
self.x = x
self.y = y
obj_A = A(y=2) # Dynamically pass the parameters
```
My question is, how can I dynamically create an object of a certain class by passing only one (or perhaps multiple, but not all) parameters? Also, I have prior knowledge of the attribute names, i.e. **'x'** and **'y'**, and they all have default values.
Edit: To clear up some confusion. The class can have any number of attributes. Is it possible to create an instance by passing any subset of the parameters during runtime?
I know I can use `getters` and `setters` to achieve this, but I'm working on OpenCV python, and these functions are not implemented for some algorithms.
|
2020/05/18
|
[
"https://Stackoverflow.com/questions/61861185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1474204/"
] |
There is an `asSequence()` function for iterators, but it returns a sequence that can be iterated only once. The point is to use the same iterator for each iteration.
```
// I don't know how to name the function...
public fun <T> Iterable<T>.asIteratorSequence(): Sequence<T> {
val iterator = this.iterator()
return Sequence { iterator }
}
fun main() {
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).asIteratorSequence()
println(seq.take(4).toList().toString()) // [0, 1, 2, 3]
println(seq.toList().toString()) // [4, 5, 6, 7, 8, 9]
println(seq.toList().toString()) // []
}
```
|
I figured out we can use iterators for this:
```
fun main() {
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).asSequence().iterator()
println(seq.asSequence().take(4).toList().toString());
println(seq.asSequence().toList().toString())
}
```
This way the sequences advance the inner iterator. If the sequence was originally obtained from an `Iterable`, as is the case above, we can simplify even further:
```
fun main() {
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).iterator()
println(seq.asSequence().take(4).toList().toString());
println(seq.asSequence().toList().toString())
}
```
| 15,332
|
40,946,211
|
I've installed my Django app on an Ubuntu server with Apache2.4.7 and configured it to use py3.5.2 from a virtual environment.
However, from what I can see in the errors, it's starting at 3.5 and defaulting to 3.4.
Please explain why this is happening:
```
/var/www/venv/lib/python3.5/site-packages
/usr/lib/python3.4
```
See the full error below:
```
SyntaxError at /
invalid syntax (forms.py, line 2)
Request Method: GET
Request URL: http://intranet.example.com/
Django Version: 1.10.1
Exception Type: SyntaxError
Exception Value:
invalid syntax (forms.py, line 2)
Exception Location: /var/www/intranet/formater/views.py in <module>, line 7
Python Executable: /usr/bin/python3
Python Version: 3.4.3
Python Path:
['/var/www/intranet',
'/var/www/venv/lib/python3.5/site-packages',
'/usr/lib/python3.4',
'/usr/lib/python3.4/plat-x86_64-linux-gnu',
'/usr/lib/python3.4/lib-dynload',
'/usr/local/lib/python3.4/dist-packages',
'/usr/lib/python3/dist-packages',
'/var/www/intranet',
'/var/www/intranet/venv/lib/python3.5/site-packages']
```
Here's my apache2.conf file:
```
WSGIScriptAlias / /var/www/intranet/intranet/wsgi.py
#WSGIPythonPath /var/www/intranet/:/var/www/intranet/venv/lib/python3.5/site-packages
WSGIDaemonProcess intranet.example.com python-path=/var/www/intranet:/var/www/venv/lib/python3.5/site-packages
WSGIProcessGroup intranet.example.com
<Directory /var/www/intranet/intranet>
<Files wsgi.py>
Order deny,allow
Allow from all
</Files>
</Directory>
```
What am I doing wrong here?
|
2016/12/03
|
[
"https://Stackoverflow.com/questions/40946211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2591242/"
] |
The mod\_wsgi module for Apache is compiled for a specific Python version. You cannot make it run using a different Python version by pointing it at a Python virtual environment for a different Python version. This is clearly mentioned in the mod\_wsgi documentation about use of Python virtual environments at:
* <http://modwsgi.readthedocs.io/en/develop/user-guides/virtual-environments.html>
The only way you can have mod\_wsgi run as Python 3.5, if it was originally compiled for Python 3.4, is to uninstall that version of mod\_wsgi and build/install a version of mod\_wsgi compiled for Python 3.5.
|
The source of the problem was Graham Dumpleton's answer.
I just want to give some more information in case it helps someone facing the same problem as me.
There's no official repo for Python 3.5.2 in Ubuntu server 14.04.
Rather than using some unsupported repo like [this one](https://launchpad.net/~fkrull/+archive/ubuntu/deadsnakes), I compiled Python 3.5.2 from source using this very simple tutorial [here](http://tecadmin.net/install-python-3-5-on-ubuntu/).
After jumping through many hoops, I couldn't install mod\_wsgi for Python 3.5.2 because of a library path that was different.
Having already spent too much time on this, I uninstalled everything: Python, Apache, libraries and installed everything from scratch, using Python 3.4 this time.
It's officially supported for Ubuntu 14.04 and for my project, I noticed no compatibility issues.
So here's my shortlist for what to install:
**from apt:**
python3,
python3-pip,
apache2,
apache2-dev,
libapache2-mod-wsgi-py3 and
**from pip:**
Django,
mod-wsgi,
virtualenv (if you plan to use a venv).
Then just configure "/etc/apache2/apache2.conf", run "apache2ctl configtest" and restart the service.
For extra help see this guide [here](https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/).
| 15,334
|
42,621,190
|
I am following a tutorial on using Python v3.6 to build a decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this:
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
`graphviz.Source(dot_graph)` returns a [`graphviz.files.Source`](http://graphviz.readthedocs.io/en/latest/api.html#source) object.
```
g = graphviz.Source(dot_graph)
```
use [`g.render()`](http://graphviz.readthedocs.io/en/latest/api.html#graphviz.Source.render) to create an image file. When I ran it on your code without an argument I got a `Source.gv.pdf` but you can specify a different file name. There is also a shortcut [`g.view()`](http://graphviz.readthedocs.io/en/latest/api.html#graphviz.Source.view), which saves the file and opens it in an appropriate viewer application.
If you paste the code as it is in a rich terminal (such as Spyder/IPython with inline graphics or a Jupyter notebook) it will automagically display the image instead of the object's Python representation.
|
You can use display from IPython.display. Here is an example:
```
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
model = DecisionTreeClassifier()
model.fit(X, y)
from IPython.display import display
display(graphviz.Source(tree.export_graphviz(model)))
```
| 15,335
|
30,560,266
|
I have a router configuration file. The router has thousands of "Interfaces", and I'm trying to make sure it DOES have *two* certain lines in EACH Interface configuration section. The typical router configuration file would look like this:
```
<whitespaces> Interface_1
<whitespaces> description Interface_1
<whitespaces> etc etc'
<whitespaces> etc etc etc
<whitespaces> <config line that i am searching to confirm is present>
<whitespaces> etc etc etc etc
!
!
! random number of router lines
<whitespaces> <second config line that i am searching to confirm is
present>
!
<whitespaces> Interface_2
<whitespaces> description Interface_2
<whitespaces> etc etc
<whitespaces> etc etc etc
<whitespaces> <config line that I am searching to confirm is present>
<whitespaces> etc etc etc etc
! random number of router lines
<whitespaces> <second config line that i am searching to confirm is
present>
etc
```
So effectively I want this logic:
- go thru the router config. When you see Interface\_X, be on the lookout for two lines AFTER that, and make sure they are present in the config. And then move on to the next Interface, and do the same thing, over and over again.
Here is the tricky part:
- I want the two lines to be in the Interface config, and python needs to know that the search 'area' is any line AFTER Interface\_X and BEFORE the next Interface\_Y config.
- The two lines I'm searching for are RANDOMLY spaced in the config, they aren't like the 10th line and the 12th line after Interface\_X. They can be present anywhere between Interface\_X and Interface\_Y (the next interface definition).
Dying. Been on this for four days and can't seem to match it correctly.
Effectively i just want the python script to spit out output that says example:
"Interface\_22 and Interface\_89 are missing the two lines you are looking for Tom". I don't care if the interface is right honestly, i really only care about when the interface configuration is WRONG.
```
file_name = 'C:\\PYTHON\\router.txt'
f = open(file_name, 'r')
output_string_obj = f.read()
a = output_string_obj.splitlines()
for line in a:
if re.match("(.*)(Interface)(.*)", line):
# logic to search for the presence of two lines after matching
# Interface, but prior to next instance of a Interface
# definition
if re.match("(.*)(check-1-line-present)(.*)", line + ???):
if re.match("(.*)(check-2-line-present)(.*)", line ?):
print ("This Interface has the appropriate config")
else:
print ("This Interface is missing the key two lines")
```
Dying guys. This is the first question I've ever posted to this board. I'll take any thoughts/logic flow/statements/ideas anyone has. I get in this type of search situation a lot, where I don't know where in the router config something is missing, but I know it has to be somewhere between point A and point B, and have nothing else to 'grip' onto....
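One way the scan you describe could be sketched is to group lines under the most recent `Interface_X` heading and track which required lines appear before the next one. All names below (the function name, the interface marker regex, and the `check-...` strings) are placeholders for illustration, not from your real config:

```python
import re

# Placeholders for the two config lines being checked for.
REQUIRED = ("check-1-line-present", "check-2-line-present")

def find_bad_interfaces(config_text):
    """Return the interfaces missing at least one REQUIRED line.

    A block starts at a line mentioning Interface_<name> (ignoring the
    indented `description` lines) and runs until the next such line.
    """
    current = None
    seen = {}
    for line in config_text.splitlines():
        m = re.search(r"(Interface_\S+)", line)
        if m and "description" not in line:
            # New interface block begins; start a fresh tracking set.
            current = m.group(1)
            seen[current] = set()
        elif current is not None:
            # Inside a block: record any required line we encounter.
            for needle in REQUIRED:
                if needle in line:
                    seen[current].add(needle)
    return [iface for iface, found in seen.items() if found != set(REQUIRED)]
```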
|
2015/05/31
|
[
"https://Stackoverflow.com/questions/30560266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4958929/"
] |
I solved this issue using [collection.distinct](https://mongodb.github.io/node-mongodb-native/api-generated/collection.html#distinct):
```
collection.distinct("DataFields", (function(err, docs){
    console.log(docs);
    assert.equal(null, err);
    db.close();
}))
```
|
You could also do this by way of **[aggregation framework](http://docs.mongodb.org/manual/core/aggregation-introduction/)**. The suitable aggregation pipeline would consist of the [**`$group`**](http://docs.mongodb.org/manual/reference/operator/aggregation/group/#pipe._S_group) operator stage which groups the document by the `DataFields` key and create an aggregated field `count` which stores the documents produced by this grouping. The accumulator operator for this is [**`$sum`**](http://docs.mongodb.org/manual/reference/operator/aggregation/sum/#grp._S_sum).
The next pipeline stage would be the [**`$match`**](http://docs.mongodb.org/manual/reference/operator/aggregation/match/#pipe._S_match) operator which then filters the documents having a `count` of 1 to depict distinction.
The last pipeline operator [**`$group`**](http://docs.mongodb.org/manual/reference/operator/aggregation/group/#pipe._S_group) then group those distinct elements and adds them to a list by way of [**`$push`**](http://docs.mongodb.org/manual/reference/operator/aggregation/push) operator.
In the end your aggregation would look like this:
```
var collection = db.collection('Fixed_Asset_Register'),
    pipeline = [
        {
            "$group": {
                "_id": "$DataFields",
                "count": {"$sum": 1}
            }
        },
        {
            "$match": { "count": 1 }
        },
        {
            "$group": {
                "_id": 0,
                "distinct": { "$push": "$_id" }
            }
        }
    ],
    result = collection.aggregate(pipeline).toArray(),
    distictDataFieldsValue = result[0].distinct;
```
| 15,343
|
10,839,302
|
I have a django application that requires PIL and I have been getting errors so I decided to play around with PIL on my hosting server.
PIL is installed in my virtual environment. However, when running the following I get an error, and when I run it outside the virtual environment it works.
```
Python 2.7.3 (default, Apr 16 2012, 15:47:14)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import Image
>>> im = Image.open('test.png')
>>> im
<PngImagePlugin.PngImageFile image mode=RGBA size=28x22 at 0x7F477CFFAE18>
>>> im.convert('RGB')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/python27/lib/python2.7/site-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/Image.py", line 679, in convert
self.load()
File "/opt/python27/lib/python2.7/site-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/ImageFile.py", line 189, in load
d = Image._getdecoder(self.mode, d, a, self.decoderconfig)
File "/opt/python27/lib/python2.7/site-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/Image.py", line 385, in _getdecoder
raise IOError("decoder %s not available" % decoder_name)
IOError: decoder zip not available
>>>
```
|
2012/05/31
|
[
"https://Stackoverflow.com/questions/10839302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/269106/"
] |
Most likely the Python you are using in your virtualenv has been built by yourself, not the system Python - is that right? If so your problem is that the header (.h) files for zlib are not installed in your system when building Python itself.
You have to have the "zlib-devel" (or equivalent) package installed in Linux when building the Python you will use in your virtualenv. You can test whether it works by trying to `import zlib` from the Python console.
As an alternative to rebuilding Python you can find your system's Python zip-related files and copy them to the Python used in your virtualenv (if they are the same Python version).
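A quick, non-interactive way to run that check with the virtualenv's interpreter:

```python
# If this raises ImportError, the interpreter was built without zlib
# support, and PIL's "zip" decoder cannot work either.
try:
    import zlib
    print("zlib available:", zlib.ZLIB_VERSION)
except ImportError:
    print("zlib missing - rebuild Python with the zlib headers installed")
```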
|
You could try [Pillow](http://pypi.python.org/pypi/Pillow) which is a repackaged PIL which plays much nicer with virtualenv.
| 15,344
|
46,423,582
|
I'm attempting to install cvxopt using Conda (which comes with the Anaconda python distribution), and I received the error message below. Apparently my Anaconda installation is using python 3.6, whereas cvxopt wants python 3.5\*. How can I fix this and install cvxopt using Conda?
After typing conda install cvxopt at the Anaconda prompt, the message I received was:
>
> Fetching package metadata ...........
>
>
> Solving package specifications: .
>
>
> UnsatisfiableError: The following specifications were found to be in
> conflict:
>
>
>
> ```
> - cvxopt -> python 3.5*
> - python 3.6*
>
> ```
>
> Use "conda info < package >" to see the dependencies for each package.
>
>
>
Here's a screenshot of the error message:
[](https://i.stack.imgur.com/oEbpa.png)
|
2017/09/26
|
[
"https://Stackoverflow.com/questions/46423582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1854748/"
] |
It would appear that `cvxopt` requires Python 3.5. Easiest solution would be to use `conda` to create a separate environment for python 3.5 and then install cvxopt (and any other desired python packages). For example...
```
conda create -n cvxopt-env python=3.5 cvxopt numpy scipy matplotlib jupyter
```
...depending on your operating system you can then activate this environment using either...
```
source activate cvxopt-env
```
...or...
```
activate cvxopt-env
```
...you can then switch back to your default python install using...
```
deactivate
```
...check out the [`conda`](https://conda.io/docs/) docs for more details. In particular the docs for the [`conda create`](https://conda.io/docs/commands/conda-create.html) command.
|
try
```
conda install cvxopt=1.1.8
```
It's the newer version and the only one having support for Python 3.6.
| 15,345
|
55,358,337
|
How can we apply conditions to a dataset in Python, and in particular fetch a column name as the output after applying them?
Let's say the below is the dataframe. My question is: how can we retrieve a column name (let's say "name") as the output by applying conditions on this dataframe?
```
         name       salary   jobDes
       ______________________________________
store1 | daniel   | 50k   | datascientist
store2 | paladugu | 55k   | datascientist
store3 | theodor  | 53k   | dataEngineer
```
I want to fetch a column name, let's say "name", as the result.
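For example, one way to get a column name back from a condition in pandas (a sketch; the condition shown, matching a specific cell value, is just an illustration):

```python
import pandas as pd

df = pd.DataFrame(
    {"name": ["daniel", "paladugu", "theodor"],
     "salary": ["50k", "55k", "53k"],
     "jobDes": ["datascientist", "datascientist", "dataEngineer"]},
    index=["store1", "store2", "store3"])

# Hypothetical condition: which column contains the value "daniel"?
matches = [col for col in df.columns if (df[col] == "daniel").any()]
print(matches)  # ['name']
```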
|
2019/03/26
|
[
"https://Stackoverflow.com/questions/55358337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11260410/"
] |
(edit: slightly simplified non-recursive solution)
You can do it like this, just for each iteration consider if the item should be included or excluded.
```
def f(maxK,K, N, L, S):
    if L == 0 or not N or K == 0:
        return S
    #either element is included
    included = f(maxK,maxK, N[1:], L-1, S + N[0] )
    #or excluded
    excluded = f(maxK,K-1, N[1:], L, S )
    return max(included, excluded)
assert f(2,2,[10,1,1,1,1,10],3,0) == 12
assert f(3,3,[8, 3, 7, 6, 2, 1, 9, 2, 5, 4],4,0) == 30
```
If N is very long you can consider changing to a table version, you could also change the input to tuples and use memoization.
Since OP later included the information that N can be 100 000, we can't really use recursive solutions like this. So here is a solution that runs in O(n*K*L), with same memory requirement:
```
import numpy as np
def f(n,K,L):
    t = np.zeros((len(n),L+1))
    for l in range(1,L+1):
        for i in range(len(n)):
            t[i,l] = n[i] + max( (t[i-k,l-1] for k in range(1,K+1) if i-k >= 0), default = 0 )
    return np.max(t)
assert f([10,1,1,1,1,10],2,3) == 12
assert f([8, 3, 7, 6, 2, 1, 9],3,4) == 30
```
Explanation of the non-recursive solution: each cell t[i, l] of the table holds the value of the maximal subsequence with exactly l elements that uses the element at position i, draws only on elements at position i or lower, and keeps at most K distance between consecutive elements.
Subsequences of length 1 (those in t[i,1]) consist of a single element, n[i].
Longer subsequences are n[i] plus a subsequence of l-1 elements that ends at most K rows earlier; we pick the one with the maximal value. By iterating this way, we ensure that this value is already calculated.
Further improvements in memory are possible by considering that you only ever look at most K steps back.
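For instance, since column l only ever reads column l-1, keeping just two columns instead of the whole table already cuts the memory from O(n·L) to O(n) (a sketch of that variant; a sliding K-step window could shrink it further):

```python
def f_two_cols(n, K, L):
    # prev holds t[:, l-1]; the implicit l = 0 column is all zeros.
    prev = [0] * len(n)
    best = float("-inf")
    for l in range(1, L + 1):
        cur = []
        for i in range(len(n)):
            # Same recurrence as above: best predecessor within K steps back.
            back = [prev[i - k] for k in range(1, K + 1) if i - k >= 0]
            cur.append(n[i] + (max(back) if back else 0))
        best = max(best, max(cur))
        prev = cur
    return best

assert f_two_cols([10, 1, 1, 1, 1, 10], 2, 3) == 12
assert f_two_cols([8, 3, 7, 6, 2, 1, 9], 3, 4) == 30
```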
|
Extending the code for `itertools.combinations` shown at the [docs](https://docs.python.org/3/library/itertools.html#itertools.combinations), I built a version that includes an argument for the maximum index distance (`K`) between two values. It only needed an additional `and indices[i] - indices[i-1] < K` check in the iteration:
```
def combinations_with_max_dist(iterable, r, K):
    # combinations('ABCD', 2) --> AB AC AD BC BD CD
    # combinations(range(4), 3) --> 012 013 023 123
    pool = tuple(iterable)
    n = len(pool)
    if r > n:
        return
    indices = list(range(r))
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != i + n - r and indices[i] - indices[i-1] < K:
                break
        else:
            return
        indices[i] += 1
        for j in range(i+1, r):
            indices[j] = indices[j-1] + 1
        yield tuple(pool[i] for i in indices)
```
Using this you can bruteforce over all combinations with regards to K, and then find the one that has the maximum value sum:
```
def find_subseq(a, L, K):
    return max((sum(values), values) for values in combinations_with_max_dist(a, L, K))
```
Results:
```
print(*find_subseq([10, 1, 1, 1, 1, 10], L=3, K=2))
# 12 (10, 1, 1)
print(*find_subseq([8, 3, 7, 6, 2, 1, 9, 2, 5, 4], L=4, K=3))
# 30 (8, 7, 6, 9)
```
Not sure about the performance if your value lists become very long though...
| 15,346
|
71,737,316
|
So, I'm having the classic trouble installing lxml.
Initially I was just pip installing, but when I tried to free up memory using `Element.clear()` I was getting the following error:
```
Python(58695,0x1001b4580) malloc: *** error for object 0x600000bc3f60: pointer being freed was not allocated
```
I thought this must be because lxml is using the system's libxml2 which is probably out of date.
So I used homebrew to install libxml2 and libxslt, and I force-linked them both.
I then tried to install using the following command:
```
❯ STATIC_DEPS=true pip install lxml --no-cache-dir 13:01:46
Collecting lxml
Downloading lxml-4.8.0.tar.gz (3.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 5.4 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Building wheels for collected packages: lxml
Building wheel for lxml (setup.py) ... done
Created wheel for lxml: filename=lxml-4.8.0-cp310-cp310-macosx_12_0_arm64.whl size=1683935 sha256=47912c1ba66d274c3ad7b2a2db00243f96d334a3fd5e439725f5005a7a72a602
Stored in directory: /private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-ephem-wheel-cache-4_v4ov7s/wheels/e4/52/34/64064e2e2f1ce84d212a6dde6676f3227846210a7996fc2530
Successfully built lxml
Installing collected packages: lxml
Successfully installed lxml-4.8.0
```
..but then when I tried to import etree I would get this error:
```
Traceback (most recent call last):
File "/Users/human/Code/ia_book_images/viewer/book_image_downloader.py", line 4, in <module>
from lxml import etree as ET
ImportError: dlopen(/Users/human/.virtualenvs/ia_book_images/lib/python3.10/site-packages/lxml/etree.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '___htmlDefaultSAXHandler'
```
So then I thought let's make 100% sure that it's using the right versions of libxml2 using CFLAGS and got the following result:
```
❯ CFLAGS="-I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include" STATIC_DEPS=true pip install lxml --no-cache-dir
Collecting lxml
Downloading lxml-4.8.0.tar.gz (3.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 4.4 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [199 lines of output]
Checking for gcc...
Checking for shared library support...
Building shared library libz.1.2.12.dylib with gcc.
Checking for size_t... Yes.
Checking for off64_t... No.
Checking for fseeko... Yes.
Checking for strerror... Yes.
Checking for unistd.h... Yes.
Checking for stdarg.h... Yes.
Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf().
Checking for vsnprintf() in stdio.h... Yes.
Checking for return value of vsnprintf()... Yes.
Checking for attribute(visibility) support... Yes.
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -I. -c -o example.o test/example.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o adler32.o adler32.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o crc32.o crc32.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o deflate.o deflate.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o infback.o infback.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inffast.o inffast.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inflate.o inflate.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inftrees.o inftrees.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o trees.o trees.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o zutil.o zutil.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o compress.o compress.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o uncompr.o uncompr.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzclose.o gzclose.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzlib.o gzlib.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzread.o gzread.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -I. -c -o minigzip.o test/minigzip.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c
libtool -o libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o example example.o -L. libz.a
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a
gcc -dynamiclib -install_name /private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/build/tmp/libxml2/lib/libz.1.dylib -compatibility_version 1 -current_version 1.2.12 -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -o libz.1.2.12.dylib adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc -arch x86_64
ld: warning: ignoring file crc32.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file adler32.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file deflate.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file infback.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file inffast.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file inflate.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file inftrees.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file trees.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file compress.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file zutil.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file uncompr.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file gzread.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file gzlib.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file gzclose.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file gzwrite.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
rm -f libz.dylib libz.1.dylib
ln -s libz.1.2.12.dylib libz.dylib
ln -s libz.1.2.12.dylib libz.1.dylib
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o examplesh example.o -L. libz.1.2.12.dylib
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o minigzipsh minigzip.o -L. libz.1.2.12.dylib
ld: warning: ignoring file libz.1.2.12.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
ld: warning: ignoring file libz.1.2.12.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
Undefined symbols for architecture arm64:
"_gzclose", referenced from:
_gz_compress in minigzip.o
_gz_uncompress in minigzip.o
"_gzdopen", referenced from:
_main in minigzip.o
"_gzerror", referenced from:
_gz_compress in minigzip.o
_gz_uncompress in minigzip.o
"_gzopen", referenced from:
_file_compress in minigzip.o
_file_uncompress in minigzip.o
_main in minigzip.o
"_gzread", referenced from:
_gz_uncompress in minigzip.o
"_gzwrite", referenced from:
_gz_compress in minigzip.o
ld: symbol(s) not found for architecture arm64
Undefined symbols for architecture arm64:
"_compress", referenced from:
_test_compress in example.o
(maybe you meant: _test_compress)
"_deflate", referenced from:
_test_deflate in example.o
_test_large_deflate in example.o
_test_flush in example.o
_test_dict_deflate in example.o
(maybe you meant: _test_large_deflate, _test_deflate , _test_dict_deflate )
"_deflateEnd", referenced from:
_test_deflate in example.o
_test_large_deflate in example.o
_test_flush in example.o
_test_dict_deflate in example.o
"_deflateInit_", referenced from:
_test_deflate in example.o
_test_large_deflate in example.o
_test_flush in example.o
_test_dict_deflate in example.o
"_deflateParams", referenced from:
_test_large_deflate in example.o
"_deflateSetDictionary", referenced from:
_test_dict_deflate in example.o
"_gzclose", referenced from:
_test_gzio in example.o
"_gzerror", referenced from:
_test_gzio in example.o
"_gzgetc", referenced from:
_test_gzio in example.o
"_gzgets", referenced from:
_test_gzio in example.o
"_gzopen", referenced from:
_test_gzio in example.o
"_gzprintf", referenced from:
_test_gzio in example.o
"_gzputc", referenced from:
_test_gzio in example.o
"_gzputs", referenced from:
_test_gzio in example.o
"_gzread", referenced from:
_test_gzio in example.o
"_gzseek", referenced from:
_test_gzio in example.o
"_gztell", referenced from:
_test_gzio in example.o
"_gzungetc", referenced from:
_test_gzio in example.o
"_inflate", referenced from:
_test_inflate in example.o
_test_large_inflate in example.o
_test_sync in example.o
_test_dict_inflate in example.o
(maybe you meant: _test_large_inflate, _test_inflate , _test_dict_inflate )
"_inflateEnd", referenced from:
_test_inflate in example.o
_test_large_inflate in example.o
_test_sync in example.o
_test_dict_inflate in example.o
"_inflateInit_", referenced from:
_test_inflate in example.o
_test_large_inflate in example.o
_test_sync in example.o
_test_dict_inflate in example.o
"_inflateSetDictionary", referenced from:
_test_dict_inflate in example.o
"_inflateSync", referenced from:
_test_sync in example.o
"_uncompress", referenced from:
_test_compress in example.o
"_zlibCompileFlags", referenced from:
_main in example.o
"_zlibVersion", referenced from:
_main in example.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [minigzipsh] Error 1
make: *** Waiting for unfinished jobs....
make: *** [examplesh] Error 1
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setup.py", line 270, in <module>
**setup_extra_options()
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setup.py", line 162, in setup_extra_options
ext_modules = setupinfo.ext_modules(
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setupinfo.py", line 74, in ext_modules
XML2_CONFIG, XSLT_CONFIG = build_libxml2xslt(
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 428, in build_libxml2xslt
cmmi(zlib_configure_cmd, zlib_dir, multicore, **call_setup)
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 352, in cmmi
call_subprocess(
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 335, in call_subprocess
raise Exception('Command "%s" returned code %s' % (cmd_desc, returncode))
Exception: Command "make -j6" returned code 2
Building lxml version 4.8.0.
Latest version of zlib is 1.2.12
Downloading zlib into libs/zlib-1.2.12.tar.gz from https://zlib.net/zlib-1.2.12.tar.gz
Unpacking zlib-1.2.12.tar.gz into build/tmp
Latest version of libiconv is 1.16
Downloading libiconv into libs/libiconv-1.16.tar.gz from https://ftp.gnu.org/pub/gnu/libiconv/libiconv-1.16.tar.gz
Unpacking libiconv-1.16.tar.gz into build/tmp
Latest version of libxml2 is 2.9.12
Downloading libxml2 into libs/libxml2-2.9.12.tar.gz from http://xmlsoft.org/sources/libxml2-2.9.12.tar.gz
Unpacking libxml2-2.9.12.tar.gz into build/tmp
Latest version of libxslt is 1.1.34
Downloading libxslt into libs/libxslt-1.1.34.tar.gz from http://xmlsoft.org/sources/libxslt-1.1.34.tar.gz
Unpacking libxslt-1.1.34.tar.gz into build/tmp
Starting build in build/tmp/zlib-1.2.12
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
Do I need to do something special to build lxml on an M1 Mac?
|
2022/04/04
|
[
"https://Stackoverflow.com/questions/71737316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/311220/"
] |
One solution that works - build it from source:
```
git clone https://github.com/lxml/lxml
cd lxml
git checkout tags/lxml-4.9.1
python3 setup.py bdist_wheel
cd dist/
sudo pip3 install lxml-4.9.1-cp310-cp310-macosx_12_0_arm64.whl
```
For additional resources:
<https://lxml.de/installation.html>
|
It turned out that installing lxml with a simple `pip install` *was* working fine.
The reason for my malloc error was the fact that I was trying to clear the element before the end tag had been seen. Turns out this isn't possible and you need to wait for the end tag even if you already know you aren't interested in the element.
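The safe pattern - clear only after the end event has been seen - looks like this with `iterparse` (shown with the standard library's `xml.etree.ElementTree` for brevity; the same pattern applies to `lxml.etree`):

```python
import io
import xml.etree.ElementTree as ET

def count_items(stream):
    # Iterate over *end* events only, so each element is fully parsed
    # before we touch it; clearing earlier is what caused the malloc error.
    count = 0
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "item":
            count += 1
        elem.clear()  # safe here: the closing tag has already been seen
    return count

data = b"<root><item/><item/><item/></root>"
print(count_items(io.BytesIO(data)))  # 3
```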
| 15,351
|
4,944,331
|
I wonder about the best/easiest way to authenticate the user for the Google Data API in a desktop app.
I read through the [docs](http://code.google.com/intl/de/apis/contacts/docs/3.0/developers_guide_python.html) and it seems that my options are ClientLogin or OAuth.
For ClientLogin, it seems I have to implement the login/password UI (and related things like saving it somewhere, etc.) myself. I really wonder if there is more support here, something that pops up a default login/password dialog and uses the OS keychain to store the password, etc. Why isn't such support there? Wouldn't that be the standard procedure? By leaving the implementation to the dev (well, the possibility of leaving it to the dev is good, of course), I would guess that many people have come up with very ugly solutions here (when they just wanted to hack together a small script).
OAuth seems to be the better solution. However, there seems to be some code missing and/or most code I found seems only to be relevant for web applications. Esp., I followed the documentation and got [here](http://code.google.com/intl/de/apis/gdata/docs/auth/oauth.html#OAuthRequestToken). Already in the introduction, it speaks about web application. Then later on, I need to specify a callback URL which does not make sense for a desktop application. Also I wonder what consumer key/secret I should put as that also doesn't really make sense for a desktop app (esp. not for an open-source one). I searched a bit around and it was said [here (on SO)](https://stackoverflow.com/questions/2569968/googles-oauth-for-installed-apps-vs-oauth-for-web-apps/2572795#2572795) that I should use "anonymous"/"anonymous" as the consumer key/secret; but where does it say that in the Google documentation? And how do I get the token after the user has authenticated itself?
Is there some sample code? (Not with a hardcoded username/password but with a reusable full authentication method.)
Thanks,
Albert
---
My code so far:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
Handler = BaseHTTPServer.BaseHTTPRequestHandler
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
    SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url(google_apps_domain=None)
loginurl = str(loginurl)
import webbrowser
webbrowser.open(loginurl)
```
However, this does not work. I get this error:
>
> Sorry, you've reached a login page for a domain that isn't using Google Apps. Please check the web address and try again.
>
>
>
I don't quite understand that. Of course I don't use Google Apps.
---
Ah, that error came from `google_apps_domain=None` in `generate_authorization_url`. Leave that away (i.e. just `loginurl = request_token.generate_authorization_url()` and it works so far.
My current code:
```
import gdata.gauth
import gdata.contacts.client

CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts

client = gdata.contacts.client.ContactsClient(source='Test app')

import BaseHTTPServer
import SocketServer

httpd_access_token_callback = None

class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/get_access_token?"):
            global httpd_access_token_callback
            httpd_access_token_callback = self.path
            self.send_response(200)
    def log_message(self, format, *args): pass

httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port

request_token = client.GetOAuthToken(
    SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url()
loginurl = str(loginurl)

print "opening oauth login page ..."
import webbrowser; webbrowser.open(loginurl)
print "waiting for redirect callback ..."
while httpd_access_token_callback == None:
    httpd.handle_request()
print "done"

request_token = gdata.gauth.AuthorizeRequestToken(request_token, httpd_access_token_callback)
# Upgrade the token and save in the user's datastore
access_token = client.GetAccessToken(request_token)
client.auth_token = access_token
```
That will open the Google OAuth page with the hint at the bottom:
>
> This website has not registered with Google to establish a secure connection for authorization requests. We recommend that you deny access unless you trust the website.
>
>
>
It still doesn't work, though. When I try to access the contacts (i.e. just a `client.GetContacts()`), I get this error:
```
gdata.client.Unauthorized: Unauthorized - Server responded with: 401, <HTML>
<HEAD>
<TITLE>Token invalid - AuthSub token has wrong scope</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Token invalid - AuthSub token has wrong scope</H1>
<H2>Error 401</H2>
</BODY>
</HTML>
```
---
Ok, it seems that I really had set the wrong scope. When I use `http` instead of `https` (i.e. `SCOPES = [ "http://www.google.com/m8/feeds/" ]`), it works.
But I really would like to use https. I wonder how I can do that.
---
Also, another problem with this solution:
In the list of Authorized Access to my Google Account, I now have a bunch of such localhost entries:
>
> localhost:58630 — Google Contacts [ Revoke Access ]
>
> localhost:58559 — Google Contacts [ Revoke Access ]
>
> localhost:58815 — Google Contacts [ Revoke Access ]
>
> localhost:59174 — Google Contacts [ Revoke Access ]
>
> localhost:58514 — Google Contacts [ Revoke Access ]
>
> localhost:58533 — Google Contacts [ Revoke Access ]
>
> localhost:58790 — Google Contacts [ Revoke Access ]
>
> localhost:59012 — Google Contacts [ Revoke Access ]
>
> localhost:59191 — Google Contacts [ Revoke Access ]
>
>
>
I wonder how I can avoid that it will make such entries.
When I use `xoauth_displayname`, it displays that name instead but still makes multiple entries (probably because the URL is still mostly different (because of the port) each time). How can I avoid that?
---
My current code is now on [Github](https://github.com/albertz/google-contacts-sync/blob/master/goauth.py).
---
I also wonder, where, how and for how long I should store the access token and/or the request token so that the user is not asked always again and again each time when the user starts the application.
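Right now I am thinking of something along these lines — a Python 3 sketch with a made-up config path (I am not sure whether persisting the raw token blob like this is the intended approach; if I remember right, gdata provides `token_to_blob()`/`token_from_blob()` for the string conversion):

```python
import json
import os

# Made-up location; an XDG config dir seems like the conventional choice.
TOKEN_PATH = os.path.expanduser("~/.config/google-contacts-sync/token.json")

def save_token(token_blob):
    """Persist the (already string-serialized) token between runs."""
    os.makedirs(os.path.dirname(TOKEN_PATH), exist_ok=True)
    with open(TOKEN_PATH, "w") as f:
        json.dump(token_blob, f)

def load_token():
    """Return the stored token, or None on first run / unreadable file."""
    try:
        with open(TOKEN_PATH) as f:
            return json.load(f)
    except (OSError, ValueError):
        return None
```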
|
2011/02/09
|
[
"https://Stackoverflow.com/questions/4944331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/133374/"
] |
You can use the file streaming ability available on the MediaPlayer class to do the above.
I have not tested the following code, but it should be something on these lines:
```
MediaPlayer mp = new MediaPlayer();
mp.setDataSource("http://somedomain/some.pls");
mp.prepare();
mp.start();
```
But remember to do the following:
1. Surround each of the statements above with `try` and `catch` statements and print copiously to `LogCat`. It will help debug easily.
2. Release the resources once you are done with it.
HTH,
Sriram
|
You can try downloading the pls file, parsing the URLs that are in there, and putting them into MediaPlayer.
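For the parsing step: a `.pls` file is just `key=value` lines, with the stream URLs sitting in `File1`, `File2`, ... entries. A sketch of that extraction (shown in Python for brevity; a Java version would be the same split on `=`):

```python
def pls_urls(text):
    """Pull the FileN= entries out of a .pls playlist body."""
    urls = []
    for line in text.splitlines():
        line = line.strip()
        if line.lower().startswith("file") and "=" in line:
            key, _, value = line.partition("=")
            if key[4:].isdigit():  # File1, File2, ... but not e.g. "FileCount"
                urls.append(value.strip())
    return urls
```

Each returned URL can then be handed to `setDataSource()`.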
| 15,352
|
48,013,074
|
So I am trying to use cffi to access a C library quickly in PyPy.
I am using my Mac Pro with Command Line Tools 9.1.
Specifically, I am comparing a pure Python priority queue, heapq, a cffi version, and a ctypes version for a project.
I have got code from roman10's website for a C implementation of a priority queue.
I am following the docs for cffi but I get an error when trying to call a function that is inside of the library.
I have 5 files: the .c/.h `cheap.c` `cheap.h` for the priority queue, the interface Python file with cffi `heap_inter.py`, a wrapper for the program to access the queue `heap.py`, and a test script `test_heap.py`.
This is the error (I tried with `export CC=gcc` and `export CC=clang`, and also my mpich):
Running `pypy test_heap.py` gives:

```
AttributeError: cffi library '_heap_i' has no function, constant, or global variable named 'initQueue'
```
However, I do have a function called `initQueue` in the C library that is being compiled.
When I call pypy on heap\_inter.py, the commands executed are:
```
building '_heap_i' extension
clang -DNDEBUG -O2 -fPIC -I/opt/local/include -I/opt/local/lib/pypy/include -c _heap_i.c -o ./_heap_i.o
cc -pthread -shared -undefined dynamic_lookup ./_heap_i.o -L/opt/local/lib -o ./_heap_i.pypy-41.so
clang: warning: argument unused during compilation: '-pthread' [-Wunused-command-line-argument]
```
here are my sources,
```
cheap.h
#include <stdio.h>
#include <stdlib.h>
/* priority queue implementation via roman10.net */
struct heapData {
//everything from event
//tx string
char tx[64];
//txID int
int txID;
//rx string
char rx[64];
//rxID int
int rxID;
//name string
char name[64];
//data Object
//time float
float time;
};
struct heapNode {
int value;
struct heapData data; //dummy
};
struct PQ {
struct heapNode* heap;
int size;
};
void insert(struct heapNode aNode, struct heapNode* heap, int size);
void shiftdown(struct heapNode* heap, int size, int idx);
struct heapNode removeMin(struct heapNode* heap, int size);
void enqueue(struct heapNode node, struct PQ *q);
struct heapNode dequeue(struct PQ *q);
struct heapNode peak(struct PQ *q);
void initQueue(struct PQ *q, int n);
int nn = 1000000;
struct PQ q;
int main(int argc, char **argv);
```
then `heap.c`
```
void insert(struct heapNode aNode, struct heapNode* heap, int size) {
int idx;
struct heapNode tmp;
idx = size + 1;
heap[idx] = aNode;
while (heap[idx].value < heap[idx/2].value && idx > 1) {
tmp = heap[idx];
heap[idx] = heap[idx/2];
heap[idx/2] = tmp;
idx /= 2;
}
}
void shiftdown(struct heapNode* heap, int size, int idx) {
int cidx; //index for child
struct heapNode tmp;
for (;;) {
cidx = idx*2;
if (cidx > size) {
break; //it has no child
}
if (cidx < size) {
if (heap[cidx].value > heap[cidx+1].value) {
++cidx;
}
}
//swap if necessary
if (heap[cidx].value < heap[idx].value) {
tmp = heap[cidx];
heap[cidx] = heap[idx];
heap[idx] = tmp;
idx = cidx;
} else {
break;
}
}
}
struct heapNode removeMin(struct heapNode* heap, int size) {
int cidx;
struct heapNode rv = heap[1];
    //printf("%d:%d:%d\n", size, heap[1].value, heap[size].value);
heap[1] = heap[size];
--size;
shiftdown(heap, size, 1);
return rv;
}
void enqueue(struct heapNode node, struct PQ *q) {
insert(node, q->heap, q->size);
++q->size;
}
struct heapNode dequeue(struct PQ *q) {
struct heapNode rv = removeMin(q->heap, q->size);
--q->size;
return rv;
}
struct heapNode peak(struct PQ *q) {
return q->heap[1];
}
void initQueue(struct PQ *q, int n) {
q->size = 0;
q->heap = (struct heapNode*)malloc(sizeof(struct heapNode)*(n+1));
}
int nn = 1000000;
struct PQ q;
int main(int argc, char **argv) {
int n;
int i;
struct PQ q;
struct heapNode hn;
n = atoi(argv[1]);
initQueue(&q, n);
srand(time(NULL));
for (i = 0; i < n; ++i) {
hn.value = rand()%10000;
        printf("enqueue node with value: %d\n", hn.value);
enqueue(hn, &q);
}
    printf("\ndequeue all values:\n");
for (i = 0; i < n; ++i) {
hn = dequeue(&q);
        printf("dequeued node with value: %d, queue size after removal: %d\n", hn.value, q.size);
}
}
```
`heap_inter.py`:
```
from cffi import FFI
ffibuilder = FFI()
ffibuilder.set_source("_heap_i",
r"""//passed to C compiler
#include <stdio.h>
#include <stdlib.h>
#include "cheap.h"
""",
libraries=[])
ffibuilder.cdef("""
struct heapData {
char tx[64];
int txID;
char rx[64];
int rxID;
char name[64];
float time;
} ;
struct heapNode {
int value;
struct heapData data; //dummy
} ;
struct PQ {
struct heapNode* heap;
int size;
} ;
""")
#
# Rank (Simian Engine) has a Bin heap
if __name__ == "__main__":
ffibuilder.compile(verbose=True)
```
then `heap.py`
```
from _heap_i import ffi, lib
infTime = 0
# /* create heap */
def init():
#global infTime
#infTime = int(engine.infTime) + 1
cheap = ffi.new("struct PQ *")
lib.initQueue(cheap,nn)
return cheap
def push(arr, element):
hn = ffi.new("struct heapNode")
value = element["time"]
hn.value = value
rx = ffi.new("char[]", element["rx"])
hn.data.rx = rx
tx = ffi.new("char[]", element["tx"])
hn.data.tx = tx
txID = ffi.new("int", element["txID"])
hn.data.txID = txID
rxID = ffi.new("int", element["rxID"])
hn.data.rxID = rxID
name = ffi.new("int", element["name"])
hn.data.name = name
hn.data.time = value
result = lib.enqueue(hn, arr)
def pop(arr):
hn = lib.dequeue(arr)
element = {"time": hn.value,
"rx" : hn.data.rx,
"tx" : hn.data.tx,
"rxID" : hn.data.rxID,
"txID" : hn.data.txID,
"name" : hn.data.name,
"data" : hn.data.data,
}
return element
def annihilate(arr, event):
pass
def peak(arr):
hn = lib.peak(arr)
element = {"time": hn.value,
"rx" : hn.data.rx,
"tx" : hn.data.tx,
"rxID" : hn.data.rxID,
"txID" : hn.data.txID,
"name" : hn.data.name,
"data" : hn.data.data,
}
return element
def isEvent(arr):
if arr.size:
return 1
else:
return None
def size(arr):
return arr.size
```
and finally test\_heap.py
```
import heap
pq = heap.init()
for x in range(1000) :
heap.push({"time" : x,
"rx" : "a",
"tx" : "b",
"txID" : 1,
"rxID" : 1,
"name" : "bob",
"data": "none",
},pq)
for x in range(100) :
heap.peak(pq)
for x in range(1000):
y = heap.dequeue(pq)
print y
```
Thanks anyone for looking and let me know if you have experienced this before.
|
2017/12/28
|
[
"https://Stackoverflow.com/questions/48013074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5062570/"
] |
You must tell ffi.set\_source() about `heap.c`. The easiest way is
```
with open('heap.c', 'r') as fid:
    ffi.set_source("_heap_i", fid.read())
```
like in [this example](http://cffi.readthedocs.io/en/latest/overview.html#purely-for-performance-api-level-out-of-line). The way it works is:
* declarations of functions, structs, enums, typedefs that will be accessible from python appear in your ffi.cdef(...)
* the source code that implements those functions is given as a string to ffi.set\_source(...) ( you can also use various `distutils` keywords there)
* ffi.compile() builds a shared object that exports the things defined in ffi.cdef() and arranges it so those things will be exposed to python
|
Your ffi.cdef() does not have any function declaration, so the resulting \_heap\_i library doesn't export any function. Try to copy-paste the function declaration from the .h file.
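For example, the part missing from the posted `heap_inter.py` is roughly this — cffi only exports what appears in `cdef()`, so the prototypes from `cheap.h` have to be pasted in alongside the structs:

```python
# Prototypes copied from cheap.h; without them the compiled _heap_i
# module exposes the structs but no callable functions.
CDEF_FUNCTIONS = """
void initQueue(struct PQ *q, int n);
void enqueue(struct heapNode node, struct PQ *q);
struct heapNode dequeue(struct PQ *q);
struct heapNode peak(struct PQ *q);
"""
# ffibuilder.cdef(existing_struct_decls + CDEF_FUNCTIONS) would then make
# lib.initQueue, lib.enqueue, lib.dequeue and lib.peak available after
# recompiling with `pypy heap_inter.py`.
```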
| 15,356
|
71,395,163
|
I have a txt file of hundreds of thousands of words. I need to get it into some format (I think a dictionary is the right thing?) where I can put into my script something along the lines of:
```
for i in word_list:
word_length = len(i)
    print("Length of " + i + " " + str(word_length), file=open("LengthOutput.txt", "a"))
```
Currently, the txt file of words is separated by each word being on a new line, if that helps. I've tried importing it to my python script with
```
From x import y
```
.... and similar, but it seems like it needs to be in some format to actually get imported? I've been looking around stackoverflow for a while now and nothing seems to really cover this specifically, but apologies if this is super-beginner stuff that I'm just really not understanding.
|
2022/03/08
|
[
"https://Stackoverflow.com/questions/71395163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15604077/"
] |
A list would be the correct way to store the words. A dictionary requires a key-value pair and you don't need it in this case.
```
with open('filename.txt', 'r') as file:
x = [word.strip('\n') for word in file.readlines()]
```
|
What you are trying to do is to read a file. An `import` statement is used when you want to, loosely speaking, use python code from another file.
The [docs](https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files) have a good introduction on reading and writing files -
To read a file, you first open the file, load the contents to memory and finally close the file.
```py
f = open('my_wordfile.txt', 'r')
for line in f:
print(len(line))
f.close()
```
A better way is to use the `with` statement and you can find more about that in the docs as well.
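For example, the same loop wrapped in `with` — the file is closed automatically, even if an exception is raised mid-loop (a sketch; the file name is whatever yours is):

```python
def word_lengths(path):
    """Return (word, length) pairs, one per line of the word file."""
    with open(path, 'r') as f:
        # strip() drops the trailing newline so lengths are of the word itself
        return [(line.strip(), len(line.strip())) for line in f]
```

Then `for word, n in word_lengths('my_wordfile.txt'): print(word, n)` prints the lengths.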
| 15,357
|
11,729,120
|
I am having a problem reading specific lines. It's similar to the question answered here:
[python - Read file from and to specific lines of text](https://stackoverflow.com/questions/7559397/python-read-file-from-and-to-specific-lines-of-text)
The difference is that I don't have a fixed end mark. Let me show an example:
```
--------------------------------
\n
***** SOMETHING ***** # i use this as my start
\n
--------------------------------
\n
data of interest
data of interest
data of interest
\n
----------------------- #this will either be dashes, or EOF
***** SOMETHING *****
-----------------------
```
I tried doing something similar to the above link, but I can't create an if statement to break the loop since I don't know if it will be the EOF or not.
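To make it concrete, this is roughly the loop I have been sketching — break on the second dashed line, and let EOF simply end the `for` loop (marker strings match my sample above; this is a sketch, not working code I have verified against my real files):

```python
def extract_section(lines, start_marker="***** SOMETHING *****"):
    """Collect the data lines after the start marker, stopping at the
    next dashed separator -- or at EOF, which simply ends the loop."""
    collecting = False
    seen_header_rule = False
    section = []
    for line in lines:
        stripped = line.strip()
        if not collecting:
            if start_marker in stripped:
                collecting = True
        elif stripped.startswith("---"):
            if seen_header_rule:
                break  # closing separator found
            seen_header_rule = True  # the rule right under the header
        elif stripped:  # skip the blank filler lines
            section.append(stripped)
    return section
```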
|
2012/07/30
|
[
"https://Stackoverflow.com/questions/11729120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1443368/"
] |
Take a look at [this file](https://github.com/void256/nifty-gui/blob/1.3/nifty-renderer-lwjgl/src/main/java/de/lessvoid/nifty/renderer/lwjgl/input/LwjglKeyboardInputEventCreator.java) of the source code of [NiftyGUI](https://github.com/void256/nifty-gui), which should contain this text handling code.
|
Just delete your shift handling line and add:
```
if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) && !Keyboard.isKeyDown(Keyboard.KEY_RSHIFT))
shift=true;
```
before the beginning of the While loop.
| 15,358
|
39,142,168
|
I just started learning Python a few days ago and I have been using Grok Learning. For the challenge I have everything working as far as I can see, but when I submit it I am told "Testing yet another case that starts with a vowel. Your submission raised an exception of type IndexError. This occurred on line 8 of your submission." I am not sure how to solve this or even what I am doing wrong. By the way, I am making a program to check if the message starts with a vowel; if so it times the first letter by 10, and if not it times the second letter by 10.
```
msg = input("Enter a word: ")
h = " "
half =" "
first = msg[0]
second = msg[1]
msg2 = "gg"
length = len(msg)
third = msg[2]
if first not in "aeiou":
if second != third:
print(msg.replace(msg[1], msg[1] * 10))
elif second == third:
msg2 = third * 6
msg3 = (msg.replace(msg[2], msg2))
msg4 = first + msg3[2:]
print(msg4)
else:
half = first * 10
msg10 = msg[1:length]
print((half) + msg10)
```
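For reference, here is the kind of guard I think is missing — a sketch using slicing instead of `replace` (and ignoring my special repeated-letter handling), since a one- or two-letter test word makes `msg[1]` or `msg[2]` raise IndexError:

```python
def stretch(msg, times=10):
    """Repeat the first letter if it is a vowel, otherwise the second."""
    if not msg:
        return msg
    if msg[0] in "aeiou":
        return msg[0] * times + msg[1:]
    if len(msg) < 2:  # consonant-only one-letter word: nothing at index 1
        return msg
    return msg[0] + msg[1] * times + msg[2:]
```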
|
2016/08/25
|
[
"https://Stackoverflow.com/questions/39142168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Use the below query:
```
WITH cte_data
AS (
    SELECT cost, (cost * round(rand() + rand(), 2)) origCost
FROM [dbo].[x])
UPDATE a
SET a.cost=a.origCost
FROM cte_data a
```
If you need different numbers for the calculation, use the below script:
```
WITH cte_data
AS (
    SELECT cost, ROW_NUMBER() OVER (ORDER BY cost) * cost origCost
FROM [dbo].[x])
UPDATE a
SET a.cost=a.origCost
FROM cte_data a
```
|
To get random decimal within 0..2 use `CAST (ABS(CHECKSUM(NewId())) % 200 /100. AS DECIMAL(3,2))` instead of `round(rand()+rand(),2)`
| 15,359
|
72,010,560
|
I am running [a model from github](https://github.com/wentaoyuan/pcn) and I already encountered several errors with pathing etc. After fixing those, I think the main error for now is tensorflow. This repo was probably written when TF 1.x was current, and now with the change to TF 2, I might need to migrate everything.
Mainly, I get the following error:
```
@ops.RegisterShape('ApproxMatch')
AttributeError: module 'tensorflow.python.framework.ops' has no attribute 'RegisterShape'
```
in:
```
import tensorflow as tf
from tensorflow.python.framework import ops
import os.path as osp
base_dir = osp.dirname(osp.abspath(__file__))
approxmatch_module = tf.load_op_library(osp.join(base_dir, 'tf_approxmatch_so.so'))
def approx_match(xyz1,xyz2):
'''
input:
xyz1 : batch_size * #dataset_points * 3
xyz2 : batch_size * #query_points * 3
returns:
match : batch_size * #query_points * #dataset_points
'''
return approxmatch_module.approx_match(xyz1,xyz2)
ops.NoGradient('ApproxMatch')
#@tf.RegisterShape('ApproxMatch')
@ops.RegisterShape('ApproxMatch')
def _approx_match_shape(op):
shape1=op.inputs[0].get_shape().with_rank(3)
shape2=op.inputs[1].get_shape().with_rank(3)
return [tf.TensorShape([shape1.dims[0],shape2.dims[1],shape1.dims[1]])]
```
2 main things I am not understanding:
1. I read that this probably will make me have to create the `ops` [C++ routines](https://www.tensorflow.org/guide/create_op) but at the same time, I can see that these are done in here: `@ops.RegisterShape('ApproxMatch')` . [Like this](https://stackoverflow.com/questions/59043649/attributeerror-module-tensorflow-python-framework-ops-has-no-attribute-regis) and [this](https://stackoverflow.com/questions/59643248/attributeerror-module-has-no-attribute-registershape) with `REGISTER_OP(...).SetShapeFn(...)`. But I don't think I am understanding the process, and have seen other questions with the same, but no real implementation/answer.
2. If I go to the location of the tf\_approxmatch shared library ( `approxmatch_module = tf.load_op_library(osp.join(base_dir, 'tf_approxmatch_so.so'))` ), I cannot open it or edit it with `gedit`, so I am assuming I am not supposed to change anything in there (?).
There are `py`, `cpp` and `cu` files in that folder (I already did `make` yesterday and everything ran smoothly).
```
__init__.py tf_approxmatch.cu.o tf_nndistance.cu.o
makefile tf_approxmatch.py tf_nndistance.py
__pycache__ tf_approxmatch_so.so tf_nndistance_so.so
tf_approxmatch.cpp tf_nndistance.cpp
tf_approxmatch.cu tf_nndistance.cu
```
My main guess is that I should register the operation of RegisterShape somehow in the cpp file as it already has some registered ops, but I am a bit lost because **I am not even sure if I am understanding the problem I have**. I will just show the first lines of the file:
```
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <algorithm>
#include <vector>
#include <math.h>
using namespace tensorflow;
REGISTER_OP("ApproxMatch")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Output("match: float32");
REGISTER_OP("MatchCost")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("match: float32")
.Output("cost: float32");
REGISTER_OP("MatchCostGrad")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("match: float32")
.Output("grad1: float32")
.Output("grad2: float32");
```
|
2022/04/26
|
[
"https://Stackoverflow.com/questions/72010560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7396613/"
] |
**Disclaimer**: Unless no other choice, I would strongly recommend sticking to TensorFlow 1.x. Migrating code from TF 1.x to 2.x can be incredibly time consuming.
---
Registering a shape is done in c++ using `SetShapeFn` and not in python since TF 1.0. However, the private python API was kept in TF 1.x (I presume for backward compatibility reasons), but was completely removed in TF 2.0.
The [Create an Op](https://www.tensorflow.org/guide/create_op) guide is really useful to migrate the code in that case, and I would really recommend reading it.
First of all, why is it needed to register a shape? It's needed for shape inference, a feature that allow TensorFlow to know the shape of the inputs and outputs in the computation graph without running actual code. Shape Inference allows, for example, error handling when trying to use an op on Tensors that don't have a compatible shape.
In your specific case, you need to convert the python code that uses `ops.RegisterShape` to c++ code using `SetShapeFn`. Thankfully, the github repository you are working with provides comments that are going to be helpful.
Let's start with the `approx_match` function. The python code is the following:
```
def approx_match(xyz1,xyz2):
'''
input:
xyz1 : batch_size * #dataset_points * 3
xyz2 : batch_size * #query_points * 3
returns:
match : batch_size * #query_points * #dataset_points
'''
return approxmatch_module.approx_match(xyz1,xyz2)
@ops.RegisterShape('ApproxMatch')
def _approx_match_shape(op):
shape1=op.inputs[0].get_shape().with_rank(3)
shape2=op.inputs[1].get_shape().with_rank(3)
return [tf.TensorShape([shape1.dims[0],shape2.dims[1],shape1.dims[1]])]
```
Reading the code and the comments, we understand the following:
* There is 2 inputs `xyz1`, and `xyz2`
* `xyz1` has the shape `(batch_size, dataset_points, 3)`
* `xyz2` has the shape `(batch_size, query_points, 3)`
* There is 1 output: `match`
* `match` has the shape `(batch_size, query_points, dataset_points)`
This would translate to the following C++ code:
```cpp
#include "tensorflow/core/framework/shape_inference.h"
using namespace tensorflow;
REGISTER_OP("ApproxMatch")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Output("match: float32")
.SetShapeFn([](shape_inference::InferenceContext* c) {
shape_inference::ShapeHandle xyz1_shape = c->input(0);
shape_inference::ShapeHandle xyz2_shape = c->input(1);
// batch_size is the first dimension
shape_inference::DimensionHandle batch_size = c->Dim(xyz1_shape, 0);
// dataset_points points is the 2nd dimension of the first input
shape_inference::DimensionHandle dataset_points = c->Dim(xyz1_shape, 1);
// query_points points is the 2nd dimension of the second input
shape_inference::DimensionHandle query_points = c->Dim(xyz2_shape, 1);
// Creating the new shape (batch_size, query_points, dataset_points)
// and setting it to the output
c->set_output(0, c->MakeShape({batch_size, query_points, dataset_points}));
// Returning a status telling that everything went well
return Status::OK();
});
```
**Warning**: This code does not contain any error handling (for example, checking that the first dimensions of the 2 inputs are the same, or that the last dimension of both inputs is 3). I let that as an exercise to the reader, you could look at the aforementioned guide or directly to the [source code of some ops](https://github.com/tensorflow/tensorflow/blob/v2.8.0/tensorflow/core/ops/math_ops.cc) to have an idea on how to do error handling, with, for example, the macro `TF_RETURN_IF_ERROR`.
---
The same steps can be applied to the `match_cost` function, which could look like that:
```cpp
REGISTER_OP("MatchCost")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("match: float32")
.Output("cost: float32")
.SetShapeFn([](shape_inference::InferenceContext* c) {
shape_inference::DimensionHandle batch_size = c->Dim(c->input(0), 0);
c->set_output(0, c->Vector(batch_size));
return Status::OK();
});
```
---
You need then to recompile the `so` library with the makefile included in the project. You might have to change some flags, for example, TF 2.8 uses the c++ standard c++14 instead of c++11, so the flag `-std=c++14` is needed. Once the library is compiled, you can test importing it in python:
```
>>> import tensorflow as tf
>>> approxmatch_module = tf.load_op_library('./tf_approxmatch_so.so')
>>> a = tf.random.uniform((10,20,3))
>>> b = tf.random.uniform((10,50,3))
>>> c = approxmatch_module.approx_match(a,b)
>>> c.shape
TensorShape([10, 50, 20])
```
|
According to TensorFlow's release log, `RegisterShape` is deprecated, and you should use `SetShapeFn` to define the shape when registering your operator in the C++ source file.
| 15,362
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
Jenkins pipelines can be made to run with virtual environments but there are multiple things to consider.
* The default shell that Jenkins uses is `/bin/sh` - this is configurable in **Manage Jenkins -> Configure System -> Shell -> Shell executable**. Setting this to `/bin/bash` will make `source` work.
* An activated venv simply changes environment variables, and environment variables do not persist between stages in jenkins. See [withEnv](https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-withenv-code-set-environment-variables)
* If you are using version controlled multibranch pipelines jenkins creates a workspace with the branch name and a commit hash in the path - which can be quite long. venv scripts (e.g. pip) all start with a hashbang line which includes the full path to the python interpreter in the venv (the python interpreter itself is a symlink). E.g.,
```
~/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSUDYPJ34QR63ITGMC5VJNB56W6ID244AA/env/bin$ cat pip
#!/var/jenkins_home/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSUDYPJ34QR63ITGMC5VJNB56W6ID244AA/env/bin/python3.5
```
Bash only reads the first `N` characters of any executable file - which I found did not quite include the full venv path:
```
bash: ./pip: /var/jenkins_home/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSU: bad interpreter: No such file or directory
```
This particular problem can be avoided by executing the script with Python instead. E.g. `python3.5 ./pip`
|
[@hardbyte](https://stackoverflow.com/users/809692/hardbyte)'s answer
*The default shell that Jenkins uses is /bin/sh - this is configurable in Manage Jenkins -> Configure System -> Shell -> Shell executable. Setting this to /bin/bash will make source work.*
plus:
* <https://stackoverflow.com/a/70812248/1497139>
* [pyvenv-3.4 returned non-zero exit status 1](https://stackoverflow.com/questions/24123150/pyvenv-3-4-returned-non-zero-exit-status-1)
got me working
```bash
sudo apt install python3.10-venv
```
and then in jenkins in the execute shell step:
```bash
python3.10 -m venv .venv
source .venv/bin/activate
...
```
| 15,363
|
55,980,027
|
I need to add to a .csv file based on user input. Other portions of the code add to the file but I can't figure out how to have it add user input. I'm new to python and coding in general.
I have other portions of the code that can merge or draw the data from a .csv database and write it to the separate file, but can't figure out how to get it to take multiple user inputs to write or append to the outgoing file.
```
def manualentry():
stock = input("Enter Stock #: ") #Generate data for each column to fill in to the output file.
VIN = input("Enter Full VIN: ") #Each line asks the user to add data do the line.
make = input("Enter Make: ")
model = input("Enter Model: ")
year = input("Enter Year: ")
l8v = input("Enter L8V: ")
print(stock, VIN, make, model, year, l8v) #Prints the line of user data
input4 = input("Append to inventory list? Y/N") #Asks user to append the data to the output file.
if input4 == "Y" or input4 == "y":
with open('INV.csv','a', newline='') as outfile: #Pull up a seperate csv to write to, an output for collected data
w = csv.writer(outfile) #Need to write the user input to the .csv file.
w.writerow([stock, VIN, make, model, year, l8v]) #<-This is the portion that seems to fall apart.
print("INVENTORY UPDATED")
starter() #Restarts whole program from begining.
if input4 == "N" or input4 == "n":
print("SKIPPING. RESTARTING....")
starter() #Reset
else:
print("Invalid entry restarting program.")
starter() #Reset
starter() #R E S E T !
```
Just need the user inputs to be applied to the .csv and saved there. Earlier portions of the code function perfectly except this to add to the .csv file. It's to fill in missing data that would otherwise not be listed in a separate database.
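To isolate the part that falls apart, the write itself can be shown against an in-memory buffer (the sample values are made up; with the real file it is the same `open('INV.csv', 'a', newline='')` call from the code above):

```python
import csv
import io

def append_row(outfile, row):
    """Write one row of user-supplied fields to an open CSV file object."""
    csv.writer(outfile).writerow(row)

# StringIO stands in for the real file so the result can be inspected
buf = io.StringIO()
append_row(buf, ["1001", "1FTXW43P05EB12345", "Ford", "F-350", "2005", "L8V7"])
```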
|
2019/05/04
|
[
"https://Stackoverflow.com/questions/55980027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11450864/"
] |
Trying to output a SimpleXMLElement using `var_dump()` isn't a good idea and as you have seen - doesn't give much.
If you just want to see the XML it has loaded, instead use...
```
echo $xml->asXML();
```
which will show you the XML has loaded OK, so then to output the field your after is just
```
$xml = simplexml_load_string($response);
echo $xml->Header->AsyncReplyFlag;
```
|
Using the `XPath` query
```
$xml = new SimpleXMLElement($xmlData);
echo $xml->xpath('//AsyncReplyFlag')[0];
```
OR
You can use `xml_parser_create`
```
$p = xml_parser_create();
xml_parse_into_struct($p, $xml, $values, $indexes);// $xml containing the XML
xml_parser_free($p);
echo $values[12]['value'];
```
For other details, you can `print_r($values)`
| 15,373
|
419,698
|
**Overall Plan**
Get my class information to automatically optimize and select my uni class timetable
Overall Algorithm
1. Logon to the website using its Enterprise Sign On Engine login
2. Find my current semester and its related subjects (pre setup)
3. Navigate to the right page and get the data from each related subject (lecture, practical and workshop times)
4. Strip the data of useless information
5. Rank the classes which are closer to each other higher, the ones on random days lower
6. Solve a best time table solution
7. Output me a detailed list of the BEST CASE information
8. Output me a detailed list of the possible class information (some might be full for example)
9. Get the program to select the best classes automatically
10. Keep checking to see if we can achieve 7.
6 in detail
Get all the classes; the lectures, as the focus point, would be highest ranked (only one per subject), and try to arrange the other classes around them.
**Questions**
Can anyone supply me with links to something that might be similar to this, hopefully written in Python?
In regards to 6: what data structure would you recommend to store this information in? A linked list where each node is an object of uniclass?
Should I write all information to a text file?
I am thinking uniclass would be set up with the following
attributes:
* Subject
* Rank
* Time
* Type
* Teacher
I am hardly experienced in Python and thought this would be a good learning project to try to accomplish.
Thanks for any help and links provided to help get me started, **open to edits to tag appropriately or what ever is necessary** (not sure what this falls under other than programming and python?)
EDIT: can't really get the proper formatting I want for this SO post ><
|
2009/01/07
|
[
"https://Stackoverflow.com/questions/419698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/45211/"
] |
Depending on how far you plan on taking #6, and how big the dataset is, it may be non-trivial; it certainly smacks of NP-hard global optimisation to me...
Still, if you're talking about tens (rather than hundreds) of nodes, a fairly dumb algorithm should give good enough performance.
So, you have two constraints:
1. A total ordering on the classes by score;
this is flexible.
2. Class clashes; this is not flexible.
What I mean by flexible is that you *can* go to more spaced out classes (with lower scores), but you *cannot* be in two classes at once. Interestingly, there's likely to be a positive correlation between score and clashes; higher scoring classes are more likely to clash.
My first pass at an algorithm:
```
selected_classes = []
classes = sorted(classes, key=lambda c: c.score)
for clas in classes:
if not clas.clashes_with(selected_classes):
selected_classes.append(clas)
```
Working out clashes might be awkward if classes are of uneven lengths, start at strange times and so on. Mapping start and end times into a simplified representation of "blocks" of time (every 15 minutes / 30 minutes or whatever you need) would make it easier to look for overlaps between the start and end of different classes.
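For instance (made-up fields; half-hour blocks keyed by day, so Monday 9:00 never clashes with Tuesday 9:00):

```python
def time_blocks(start_min, end_min, size=30):
    """Half-open [start, end) minute interval -> set of block indices."""
    return set(range(start_min // size, (end_min + size - 1) // size))

class UniClass:
    def __init__(self, name, score, day, start_min, end_min):
        self.name, self.score = name, score
        self.blocks = {(day, b) for b in time_blocks(start_min, end_min)}

    def clashes_with(self, others):
        # Any shared (day, block) pair means the classes overlap in time.
        return any(self.blocks & other.blocks for other in others)
```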
|
[BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) was mentioned here a few times, e.g [get-list-of-xml-attribute-values-in-python](https://stackoverflow.com/questions/87317/get-list-of-xml-attribute-values-in-python).
>
> Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:
>
>
> 1. Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.
> 2. Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.
> 3. Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding.
>
>
> Beautiful Soup parses anything you give it, and does the tree traversal stuff for you. You can tell it "Find all the links", or "Find all the links of class externalLink", or "Find all the links whose urls match "foo.com", or "Find the table heading that's got bold text, then give me that text."
>
>
> Valuable data that was once locked up in poorly-designed websites is now within your reach. Projects that would have taken hours take only minutes with Beautiful Soup.
>
>
>
| 15,376
|
5,997,027
|
I don't know if this is an obvious bug, but while running a Python script for varying the parameters of a simulation, I realized the results with delta = 0.29 and delta = 0.58 were missing. On investigation, I noticed that the following Python code:
```
for i_delta in range(0, 101, 1):
    delta = float(i_delta) / 100
    (...)
    filename = 'foo' + str(int(delta * 100)) + '.dat'
```
generated identical files for delta = 0.28 and 0.29, same with .57 and .58, the reason being that python returns float(29)/100 as 0.28999999999999998. But that isn't a systematic error, not in the sense it happens to every integer. So I created the following Python script:
```
import sys
n = int(sys.argv[1])
for i in range(0, n + 1):
    a = int(100 * (float(i) / 100))
    if i != a: print i, a
```
And I can't see any pattern in the numbers for which this rounding error happens. Why does this happen with those particular numbers?
|
2011/05/13
|
[
"https://Stackoverflow.com/questions/5997027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752910/"
] |
Any number that can't be built from exact powers of two can't be represented exactly as a floating point number; it needs to be approximated. Sometimes the closest approximation will be less than the actual number.
Read [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html).
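The question's own loop shows the effect; a quick sketch of what happens for 29:

```python
x = float(29) / 100         # the nearest representable double is just below 0.29
print(x * 100)              # slightly less than 29
print(int(x * 100))         # 28: truncation drops the fractional part
print(int(round(x * 100)))  # 29: rounding first recovers the integer
```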
|
It's well known, due to the nature of [floating point numbers](http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html).
If you want to do decimal arithmetic rather than floating point arithmetic, there are [libraries](http://docs.python.org/library/decimal.html) to do this.
E.g.,
```
>>> from decimal import Decimal
>>> Decimal(29)/Decimal(100)
Decimal('0.29')
>>> Decimal('0.29')*100
Decimal('29')
>>> int(Decimal('29'))
29
```
In general decimal is probably going overboard and still will have rounding errors in rare cases when the number does not have a finite decimal representation (for example any fraction where the denominator is not 1 or divisible by 2 or 5 - the factors of the decimal base (10)). For example:
```
>>> s = Decimal(7)
>>> Decimal(1)/s/s/s/s/s/s/s*s*s*s*s*s*s*s
Decimal('0.9999999999999999999999999996')
>>> int(Decimal('0.9999999999999999999999999996'))
0
```
So it's best to always just round before casting floating points to ints, unless you want a floor function.
```
>>> int(1.9999)
1
>>> int(round(1.999))
2
```
Another alternative is to use the Fraction class from the [fractions](http://docs.python.org/library/fractions.html) library, which doesn't approximate. (It just keeps adding/subtracting and multiplying the integer numerators and denominators as necessary.)
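For instance, a small sketch of the same 29/100 round trip with `Fraction`:

```python
from fractions import Fraction

d = Fraction(29, 100)       # stored exactly as numerator/denominator
print(d * 100)              # 29, no rounding error
print(int(d * 100))         # 29
print(Fraction(1, 7) * 7)   # exactly 1, where floats would drift
```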
| 15,378
|
25,288,927
|
I have a dict and behind each key is a list stored.
Looks like this:
```
dict with values: {
u'New_York': [(u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 4), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 3)],
u'Jersy': [(u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 6), (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7)],
u'Alameda': [(u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 2), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1)]
}
```
What I want is to iterate through the dict's lists and return, for each KEY, the maximum over a certain position in the list. The result should contain the KEY and the whole list element holding the max value. Perfect would be to store the returned element in a dict as well.
Example:
The max here is taken over the value in the last position of each tuple.
```
somedic = {
u'New_York': (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10)
u'Jersy': (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7)
u'Alameda': (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3)
}
```
I tried a couple of things by looking at these threads:
[Getting key with maximum value in dictionary?](https://stackoverflow.com/questions/268272/getting-key-with-maximum-value-in-dictionary)
[Max/Min value of Dictionary of List](https://stackoverflow.com/questions/14884376/max-min-value-of-dictionary-of-list)
[Python miminum value in dictionary of lists](https://stackoverflow.com/questions/11729670/python-miminum-value-in-dictionary-of-lists?lq=1)
[Python miminum length/max value in dictionary of lists](https://stackoverflow.com/questions/11730552/python-miminum-length-max-value-in-dictionary-of-lists)
But I cannot get my head around it. It is simply above my abilities. I just started to learn python. I tried something like this:
```
import operator
maxvalues = {}
maxvalues = max(countylist.iteritems(), key=operator.itemgetter(1))[0]
print "should be max values here: ", maxvalues
#gave me New York
```
Is that possible?
I am working on Python 2.7
It would be awesome if the code snippet one posts as an answer could be explained, since I want to learn something!
Btw I am not looking for ready-to-use code. Some hints and code snippets work for me. I will work my way through it from there. That's how I will learn the most.
|
2014/08/13
|
[
"https://Stackoverflow.com/questions/25288927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3935035/"
] |
[`max()`](https://docs.python.org/2/library/functions.html#max) takes a second parameter, a callable `key` that lets you specify how to calculate the maximum. It is called for each entry in the input iterable and its return value is used to find the highest value. You need to apply this against each *value* in your dictionary; you are finding the maximum of each individual *list* here, not the maximum of all values in a dictionary.
Use that against the values; the rest is just formatting for the output; I've used a [dict comprehension](https://docs.python.org/2/reference/expressions.html#dictionary-displays) here to process each of your key-value pairs from the input and produce a dictionary again for the output:
```
{k: max(v, key=lambda i: i[-1]) for k, v in somedic.iteritems()}
```
You could also use the [`operator.itemgetter()` function](https://docs.python.org/2/library/operator.html#operator.itemgetter) to produce a callable for you, instead of using a `lambda`:
```
from operator import itemgetter
{k: max(v, key=itemgetter(-1)) for k, v in somedic.iteritems()}
```
Both grab the last element of each input tuple.
Demo:
```
>>> import datetime
>>> from pprint import pprint
>>> somedic = {
... u'New_York': [(u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 4), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 3)],
... u'Jersy': [(u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 6), (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7)],
... u'Alameda': [(u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 2), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1)]
... }
>>> {k: max(v, key=lambda i: i[-1]) for k, v in somedic.iteritems()}
{u'New_York': (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), u'Jersy': (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7), u'Alameda': (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3)}
>>> pprint(_)
{u'Alameda': (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3),
u'Jersy': (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7),
u'New_York': (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10)}
```
|
Can you have a look at : <https://wiki.python.org/moin/HowTo/Sorting#Sorting_Mini-HOW_TO>
The answer may be :
```
In [2]: import datetime
In [3]: d = {
u'New_York': [(u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 4), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 3)],
u'Jersy': [(u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 6), (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7)],
u'Alameda': [(u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 2), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1)]
}
In [4]: def give_max_values(d):
...: res = {}
...: for key, vals in d.iteritems():
...: res[key] = max(vals, key=lambda x: x[3])
...: return res
In [5]: somedic = give_max_values(d)
In [6]: print somedic
{u'New_York': (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), u'Jersy': (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7), u'Alameda': (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3)}
```
| 15,379
|
1,027,990
|
I'm trying to build my first facebook app, and it seems that the python facebook ([pyfacebook](http://code.google.com/p/pyfacebook/)) wrapper is really out of date, and the most relevant functions, like stream functions, are not implemented.
Are there any mature python frontends for facebook? If not, what's the best language for facebook development?
|
2009/06/22
|
[
"https://Stackoverflow.com/questions/1027990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51197/"
] |
The updated location of pyfacebook is [on github](http://github.com/sciyoshi/pyfacebook/tree/master). Plus, as [arstechnica](http://arstechnica.com/open-source/news/2009/04/how-to-using-the-new-facebook-stream-api-in-a-desktop-app.ars) well explains:
>
> PyFacebook is also very easy to extend
> when new Facebook API methods are
> introduced. Each Facebook API method
> is described in the PyFacebook library
> using a simple data structure that
> specifies the method's name and
> parameter types.
>
>
>
so, even should you be using a pyfacebook version that doesn't yet implement some brand-new thing you need, it's easy to add said thing, as Ryan Paul shows [here](http://bazaar.launchpad.net/~segphault/gwibber/template-facebook-stream/revision/289#gwibber/microblog/support/facelib.py) regarding some of the stream functions (back in April right after they were launched).
|
Try [this site](http://github.com/sciyoshi/pyfacebook/tree/master) instead.
It's pyfacebook's site on GitHub. The one you have is outdated.
| 15,380
|
42,802,194
|
Hi I am really new to JSON and Python, here is my dilemma, it's been bugging me for two days.
Here is the sample json that I want to parse.
```
{
"Tag1":"{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}"
}
```
Here is my python code:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import json
import urllib2
jaysonData = json.load(urllib2.urlopen('URL'))
print jaysonData["Tag1"]
```
How can I get values of TagB and TagC?
When I try to access them with
```
jaysonData = json.load(urllib2.urlopen('URL'))
print jaysonData["Tag1"]["TagX"]["TagB"]
```
The output is:
```
TypeError: string indices must be integers
```
When I do this:
```
print jaysonData["Tag1"]
```
The output is:
```
{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}"
```
I need to reach TagX, TagD, TagE but the below code gives this error:
```
print jaysonData["Tag1"]["TagX"]
```
prints
```
print jaysonData["Tag1"]["TagX"]
TypeError: string indices must be integers
```
How can I access TagA to TagF with python?
Thanks in advance.
|
2017/03/15
|
[
"https://Stackoverflow.com/questions/42802194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7672271/"
] |
In the returned JSON, the value of `Tag1` is a string, not more JSON. It does appear to be JSON encoded as a string though, so decode that string with `json.loads()` (note the `s`: `json.load()` reads from a file object, while `json.loads()` parses a string):
```
jaysonData = json.load(urllib2.urlopen('URL'))
tag1JaysonData = json.loads(jaysonData['Tag1'])
print tag1JaysonData["TagX"]
```
Also note that `TagX` is a list, not a dictionary, so there are multiple `TagB`s in it:
```
print [x['TagB'] for x in tag1JaysonData['TagX']]
```
|
`TagX` is a `list` consisting of 2 dictionaries, so index into the list first and then look up `TagB` in the chosen dictionary.
```
print jaysonData["Tag1"]["TagX"][0]["TagB"]
```
You need to remove double quotations before and after the curly braces of `Tag1`.
```
{
    "Tag1": {
        "TagX": [
            {
                "TagA": "A",
                "TagB": 1.6,
                "TagC": 1.4,
                "TagD": 3.5,
                "TagE": "01",
                "TagF": null
            },
            {
                "TagA": "A",
                "TagB": 1.6,
                "TagC": 1.4,
                "TagD": 3.5,
                "TagE": "02",
                "TagF": null
            }
        ],
        "date": "10.03.2017 21:00:00"
    }
}
```
| 15,385
|
19,868,457
|
I have renamed a css class name in a number of (python-django) templates. The css files however are wide-spread across multiple files in multiple directories. I have a python snippet to start renaming from the root dir and then recursively rename all the css files.
```
from os import walk, curdir
import subprocess
COMMAND = "find %s -iname *.css | xargs sed -i s/[Ff][Oo][Oo]/bar/g"
test_command = 'echo "This is just a test. DIR: %s"'
def renamer(command):
    print command  # Please ignore the print commands.
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    op = process.communicate()[0]
    print op

for root, dirs, files in walk(curdir):
    if root:
        command = COMMAND % root
        renamer(command)
```
It doesn't work, gives:
```
find ./cms/djangoapps/contentstore/management/commands/tests -iname *.css | xargs sed -i s/[Ee][Dd][Xx]/gurukul/g
find: paths must precede expression: |
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
find ./cms/djangoapps/contentstore/views -iname *.css | xargs sed -i s/[Ee][Dd][Xx]/gurukul/g
find: paths must precede expression: |
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
```
When I copy and run the same command (printed above), `find` doesn't error out and sed either gets no input files or it works.
What is wrong with the python snippet?
|
2013/11/08
|
[
"https://Stackoverflow.com/questions/19868457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/860421/"
] |
You're not trying to run a single command, but a shell pipeline of multiple commands, and you're trying to do it without invoking the shell. That can't possibly work. The way you're doing this, `|` is just one of the arguments to `find`, which is why `find` is telling you that it doesn't understand that argument with that "paths must precede expression: |" error.
You *can* fix that by adding `shell=True` to your `Popen`.
But a better solution is to do the pipeline in Python and keep the shell out of it. See [Replacing Older Functions with the `subprocess` Module](http://docs.python.org/2/library/subprocess.html#replacing-older-functions-with-the-subprocess-module) in the docs for an explanation, but I'll show an example.
Meanwhile, you should never use `split` to split a command line. The best solution is to write the list of separate arguments instead of joining them up into a string just to split them out. If you must do that, use the `shlex` module; that's what it's for. But in your case, even that won't help you, because you're inserting random strings verbatim, which could easily have spaces or quotes in them, and there's no way anything—`shlex` or otherwise—can reconstruct the data in the first place.
So:
```
from subprocess import Popen, PIPE

pfind = Popen(['find', root, '-iname', '*.css'], stdout=PIPE)
pxargs = Popen(['xargs', 'sed', '-i', 's/[Ff][Oo][Oo]/bar/g'],
stdin=pfind.stdout, stdout=PIPE)
pfind.stdout.close()
output = pxargs.communicate()
```
---
But there's an even better solution here.
Python has `os.walk` to do the same thing as `find`, you can simulate `xargs` easily, but there's really no need to do so, and it has its own `re` module to use instead of `sed`. So, why not use them?
Or, conversely, bash is much better at driving and connecting up simple commands than Python, so if you'd rather use `find` and `sed` instead of `os.walk` and `re.sub`, why write the driving script in Python in the first place?
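For what it's worth, a hedged sketch of the pure-Python route (the function name and the `[Ff][Oo][Oo]` to `bar` pattern just mirror the question's sed command):

```python
import os
import re

def rename_css_class(root, pattern=r'[Ff][Oo][Oo]', replacement='bar'):
    """Walk root, rewriting every .css file in place with re.sub."""
    rx = re.compile(pattern)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith('.css'):
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    text = f.read()
                with open(path, 'w') as f:
                    f.write(rx.sub(replacement, text))
```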
|
The problem is the pipe. To use a pipe with the subprocess module, you have to pass `shell=True`.
| 15,388
|
28,125,214
|
I'm very new to python and I'm trying to print the URL of an open website in Chrome. Here is what I could gather from this page and googling a bit:
```
import win32gui, win32con
def getWindowText(hwnd):
    buf_size = 1 + win32gui.SendMessage(hwnd, win32con.WM_GETTEXTLENGTH, 0, 0)
    buf = win32gui.PyMakeBuffer(buf_size)
    win32gui.SendMessage(hwnd, win32con.WM_GETTEXT, buf_size, buf)
    return str(buf)

hwnd = win32gui.FindWindow(None, "Chrome_WidgetWin_1")
omniboxHwnd = win32gui.FindWindowEx(hwnd, 0, 'Chrome_OmniboxView', None)
print(getWindowText(hwnd))
```
I get this as a result:
```
<memory at 0x00CA37A0>
```
I don't really know what goes wrong: whether it finds the window and my way of printing the text is wrong, or whether it doesn't find the window at all.
Thanks for the help
|
2015/01/24
|
[
"https://Stackoverflow.com/questions/28125214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4489629/"
] |
You are exactly right: at that point the view will not have been drawn yet, so you have to use a `ViewTreeObserver`, like this:
```
final ViewTreeObserver treeObserver = viewToMesure.getViewTreeObserver();
treeObserver.addOnGlobalLayoutListener(new OnGlobalLayoutListener() {
    @Override
    public void onGlobalLayout() {
        System.out.println(viewToMesure.getHeight());
        heith = viewToMesure.getHeight();
    }
});
```
This `ViewTreeObserver` callback is invoked after the view has been created and laid out; at that point you can measure it and make changes based on its size.
|
You can use a custom view to set the view's height to be tha same as its width by overriding onMeasure:
```
public class SquareButton extends Button {
    public SquareButton(Context context) {
        super(context);
    }

    public SquareButton(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public SquareButton(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        setMeasuredDimension(getMeasuredWidth(), getMeasuredWidth());
    }
}
```
All you have to do is use the custom button in your xml layout and you don't have to do anything in the activity:
```
<com.package.name.SquareButton
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_weight="1" />
```
| 15,389
|
70,945,717
|
I have Snowflake tasks that run every 30 minutes. Currently, when a task fails due to an underlying data issue in the stored procedure that the task calls, there is no way to notify the users of the failure.
```
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY());
```
How can notifications be set up for a Snowflake Task failure? The design plan I have in mind is to build a python application that runs every 30 minutes and looks for any error in the TASK\_HISTORY table. Please advise if there are any better approaches to handling failure notifications.
|
2022/02/01
|
[
"https://Stackoverflow.com/questions/70945717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11252662/"
] |
I think currently a python script would be the best way to address this.
You can use this SQL to query the last runs, read them into a data frame and filter out errors:
```
select *
from table(information_schema.task_history(scheduled_time_range_start=>dateadd(minutes, -30,current_timestamp())))
```
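A minimal sketch of the script's core, assuming the rows from the query above have been fetched into dicts (the column names `NAME`, `STATE` and `ERROR_MESSAGE` follow TASK_HISTORY's output; the alerting hook is a placeholder, not a real API):

```python
def failed_tasks(rows):
    """Given TASK_HISTORY rows as dicts, return those that ended in failure."""
    return [r for r in rows if r.get('STATE') == 'FAILED']

# Illustrative rows, as a DB-API cursor with a dict row factory might return them
history = [
    {'NAME': 'LOAD_ORDERS', 'STATE': 'SUCCEEDED', 'ERROR_MESSAGE': None},
    {'NAME': 'REFRESH_MART', 'STATE': 'FAILED',
     'ERROR_MESSAGE': 'Numeric value is out of range'},
]

for task in failed_tasks(history):
    # send_alert(...) is a stand-in for your email/Slack/pager integration
    print(task['NAME'], task['ERROR_MESSAGE'])
```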
|
It is possible to create a Notification Integration and send a message when an error occurs. As of May 2022 this feature is in preview, supported by accounts on Amazon Web Services.
#### [Enabling Error Notifications for Tasks](https://docs.snowflake.com/en/user-guide/tasks-errors.html#enabling-error-notifications-for-tasks)
>
> This topic provides instructions for configuring error notification support for tasks using cloud messaging. **This feature triggers a notification describing the errors encountered when a task executes SQL code**
>
>
>
> ```
> Currently, error notifications rely on cloud messaging provided by the
> Amazon Simple Notification Service service;
> support for Google Cloud Pub/Sub queue and Microsoft Azure Event Grid is planned.
>
> ```
>
> New Tasks
>
>
> Create a new task using CREATE TASK. For descriptions of all available task parameters, see the SQL command topic:
>
>
>
> ```
> CREATE TASK <name>
> [...]
> ERROR_INTEGRATION = <integration_name>
> AS <sql>
>
> ```
>
> Existing tasks:
>
>
>
> ```
> ALTER TASK <name> SET ERROR_INTEGRATION = <integration_name>;
>
> ```
>
>
| 15,392
|
67,030,593
|
**Problem**
When trying to access and purchase a specific item from store X, which releases limited quantities randomly throughout the week, trying to load the page via the browser is essentially pointless. 99 out of 100 requests time out. By the time 1 page loads, the stock is sold out.
**Question**
What would be the fastest way to load these pages from a website -- one that is currently under high amounts of stress and timing out regularly -- programmatically, or even via the browser?
For example, is it better to send multiple requests and wait until a "timed out" response is received? Is it best to retry the request after X seconds has passed regardless? Etc, etc.
**Tried**
I've tried both solutions above in browser without much luck, so I'm thinking of putting together a python or javascript solution in order to better my chances, but couldn't find an answer to my question via Google.
EDIT:
Just to clarify, the website in question doesn't sporadically time out -- it is strictly when new stock is released and the website is bombarded with visitors. Once stock is bought up, the site returns to normal. New stock releases last anywhere from 5 minutes to 25 minutes.
|
2021/04/10
|
[
"https://Stackoverflow.com/questions/67030593",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15596095/"
] |
You don't need to `JSON.stringify` your 'params'.
Change it to:
```js
const params = {
email: "someEmail@gmail.com",
name: "Some Name",
password: "123qwe!!"
}
```
|
use this one:

```js
axios.post('http://localhost:8080/users', params, {
  headers: {
    'Content-Type': 'application/json'
  }
})
```
| 15,395
|
4,135,261
|
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client:
```
$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko
>>> ssh = paramiko.SSHClient()
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("123.0.0.1", username="root", password=None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth
raise saved_exception
paramiko.AuthenticationException: Authentication failed.
>>>
```
When I use ssh from the command line, it works fine:
```
ssh root@123.0.0.1
BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash)
Enter 'help' for a list of built-in commands.
#
```
Anyone seen this before?
**Edit 1**
Here is the verbose output of the ssh command:
```
:~$ ssh -v root@123.0.0.1
OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22.
debug1: Connection established.
debug1: identity file /home/waffleman/.ssh/identity type -1
debug1: identity file /home/waffleman/.ssh/id_rsa type -1
debug1: identity file /home/waffleman/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1
debug1: match: OpenSSH_5.1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '123.0.0.1' is known and matches the RSA host key.
debug1: Found key in /home/waffleman/.ssh/known_hosts:3
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentication succeeded (none).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.utf8
```
**Edit 2**
Here is the python output with debug output:
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko, os
>>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)
>>> ssh = paramiko.SSHClient()
>>> ssh.load_system_host_keys()
>>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("123.0.0.1", username='root', password=None)
DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1)
DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False
DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth
raise saved_exception
paramiko.AuthenticationException: Authentication failed.
>>>
```
|
2010/11/09
|
[
"https://Stackoverflow.com/questions/4135261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/197108/"
] |
As a very late follow-up on this matter, I believe I was running into the same issue as waffleman, in a context of a confined network.
The hint about using `auth_none` on the `Transport` object turned out quite helpful, but I found myself a little puzzled as to how to implement that. Thing is, as of today at least, I can't get the `Transport` object of an `SSHClient` object until it has connected; but it won't connect in the first place...
So, in case this is useful to others, my workaround is below. I just override the `_auth` method.
OK, this is fragile, as `_auth` is a private thing. My other alternatives were - actually still are - to manually create the `Transport` and `Channel` objects, but for the time being I feel like I'm much better off with all this still under the hood.
```
from paramiko import SSHClient, BadAuthenticationType

class SSHClient_try_noauth(SSHClient):
    def _auth(self, username, *args):
        try:
            self._transport.auth_none(username)
        except BadAuthenticationType:
            super()._auth(username, *args)
```
|
[paramiko's SSHClient](http://www.lag.net/paramiko/docs/paramiko.SSHClient-class.html) has [`load_system_host_keys`](http://www.lag.net/paramiko/docs/paramiko.SSHClient-class.html#load_system_host_keys) method which you could use to load user specific set of keys. As example in the docs explain, it needs to be run before connecting to a server.
| 15,397
|
73,353,608
|
My script takes `-d`, `--delimiter` as argument:
```
parser.add_argument('-d', '--delimiter')
```
but when I pass it `--` as delimiter, it is empty
```
script.py --delimiter='--'
```
I know `--` is special in argument/parameter parsing, but I am using it in the form `--option='--'` and quoted.
Why does it not work?
I am using Python 3.7.3
Here is test code:
```
#!/bin/python3
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--delimiter')
parser.add_argument('pattern')
args = parser.parse_args()
print(args.delimiter)
```
When I run it as `script --delimiter=-- AAA` it prints empty `args.delimiter`.
|
2022/08/14
|
[
"https://Stackoverflow.com/questions/73353608",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7287412/"
] |
Existing bug report
-------------------
Patches have been suggested, but none has been applied: [Argparse incorrectly handles '--' as argument to option](https://github.com/python/cpython/issues/58572)
Some simple examples:
---------------------
```
In [1]: import argparse
In [2]: p = argparse.ArgumentParser()
In [3]: a = p.add_argument('--foo')
In [4]: p.parse_args(['--foo=123'])
Out[4]: Namespace(foo='123')
```
The unexpected case:
```
In [5]: p.parse_args(['--foo=--'])
Out[5]: Namespace(foo=[])
```
Fully quote passes through - but I won't get into how you might achieve this via shell call:
```
In [6]: p.parse_args(['--foo="--"'])
Out[6]: Namespace(foo='"--"')
```
'--' as separate string:
```
In [7]: p.parse_args(['--foo','--'])
usage: ipython3 [-h] [--foo FOO]
ipython3: error: argument --foo: expected one argument
...
```
another example of the double quote:
```
In [8]: p.parse_args(['--foo','"--"'])
Out[8]: Namespace(foo='"--"')
```
In `_parse_known_args`, the input is scanned and classified as "O" or "A". The '--' is handled as
```
# all args after -- are non-options
if arg_string == '--':
arg_string_pattern_parts.append('-')
for arg_string in arg_strings_iter:
arg_string_pattern_parts.append('A')
```
I think the '--' are stripped out after that, but I haven't found that part of the code yet. I'm also not finding where the '--foo=...' version is handled.
I vaguely recall some bug/issues over handling of multiple occurrences of '--'. With the migration to GitHub, I'm not following `argparse` developments as much as I used to.
edit
----
`get_values` starts with:
```
def _get_values(self, action, arg_strings):
# for everything but PARSER, REMAINDER args, strip out first '--'
if action.nargs not in [PARSER, REMAINDER]:
try:
arg_strings.remove('--')
except ValueError:
pass
```
Why that results in an empty list will require more thought and testing.
The '=' is handled in `_parse_optional`, which is used during the first scan:
```
# if the option string before the "=" is present, return the action
if '=' in arg_string:
option_string, explicit_arg = arg_string.split('=', 1)
if option_string in self._option_string_actions:
action = self._option_string_actions[option_string]
return action, option_string, explicit_arg
```
old bug issues
--------------
[argparse handling multiple "--" in args improperly](https://bugs.python.org/issue13922)
[argparse: Allow the use of -- to break out of nargs and into subparser](https://github.com/python/cpython/issues/53780)
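Until the upstream fix lands, one hedged workaround is to substitute a sentinel for a literal `--` option value before parsing and restore it afterwards. This is a sketch, not part of argparse itself; the sentinel string is an arbitrary choice:

```python
import argparse

SENTINEL = '\x00DOUBLE_DASH\x00'  # arbitrary value unlikely to appear in real input

def parse(argv):
    # replace a literal '--' option value with the sentinel so argparse
    # never sees a bare '--' and therefore never strips it
    patched = [a.replace('=--', '=' + SENTINEL) if a.endswith('=--') else a
               for a in argv]
    parser = argparse.ArgumentParser()
    parser.add_argument('-d', '--delimiter')
    args = parser.parse_args(patched)
    if args.delimiter == SENTINEL:
        args.delimiter = '--'
    return args

print(parse(['--delimiter=--']).delimiter)  # --
print(parse(['--delimiter=,']).delimiter)   # ,
```

This sidesteps the `_get_values` stripping entirely, at the cost of a pre/post pass over `sys.argv`.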
|
It calls `parse_args` which calls `parse_known_args` which calls `_parse_known_args`.
Then, on line 2078 (or something similar), it does this (inside a while loop going through the string):
```py
start_index = consume_optional(start_index)
```
which calls the `consume_optional` (which makes sense, because this is an optional argument it is parsing right now) defined earlier in the method `_parse_known_args`. When given `--delimiter='--'`, it will make this `action_tuples`:
```py
# if the action expect exactly one argument, we've
# successfully matched the option; exit the loop
elif arg_count == 1:
stop = start_index + 1
args = [explicit_arg]
action_tuples.append((action, args, option_string))
break
##
## The above code gives you the following:
##
action_tuples=[(_StoreAction(option_strings=['-d', '--delimiter'], dest='delimiter', nargs=None, const=None, default=None, type=None, choices=None, help=None, metavar=None), ['--'], '--delimiter')]
```
That is then iterated to, and is then fed to `take_action` on line 2009:
```py
assert action_tuples
for action, args, option_string in action_tuples:
take_action(action, args, option_string)
return stop
```
The `take_action` function will then call `self._get_values(action, argument_strings)` on line 1918, which, as mentioned in the answer by @hpaulj, removes the `--`. Then, you're left with the empty list.
| 15,407
|
18,219,529
|
In python, logging to syslog is fairly trivial:
```
syslog.openlog("ident")
syslog.syslog(0, "spilled beer on server")
syslog.closelog()
```
Is there an equivalently simple way in Java? After quite a bit of googling, I've been unable to find an easy to understand method that doesn't require reconfiguring rsyslogd or syslogd. If there's no equivalent, what is the simplest way to log to syslog?
|
2013/08/13
|
[
"https://Stackoverflow.com/questions/18219529",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/643675/"
] |
One way to go direct to the log without udp is with [syslog4j](http://www.syslog4j.org/). I wouldn't necessarily say it's simple, but it doesn't require reconfiguring syslog, at least.
|
The closest I can think of, would be using [Log4J](https://logging.apache.org/log4j/) and configuring the [SyslogAppender](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/net/SyslogAppender.html) so it writes to syslog. Sorry, that's not as easy as in Python!
| 15,410
|
48,102,393
|
I have 1000 files each having one million lines. Each line has the following form:
```
a number,a text
```
I want to remove all of the numbers from the beginning of every line of every file. including the ,
Example:
```
14671823,aboasdyflj -> aboasdyflj
```
What I'm doing is:
```
os.system("sed -i -- 's/^.*,//g' data/*")
```
and it works fine but it's taking a huge amount of time.
What is the fastest way to do this?
I'm coding in python.
|
2018/01/04
|
[
"https://Stackoverflow.com/questions/48102393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5120089/"
] |
This is much faster:
```
cut -f2 -d ',' data.txt > tmp.txt && mv tmp.txt data.txt
```
On a file with 11 million rows it took less than one second.
To use this on several files in a directory, use:
```sh
TMP=/pathto/tmpfile
for file in dir/*; do
cut -f2 -d ',' "$file" > $TMP && mv $TMP "$file"
done
```
A thing worth mentioning is that it often takes much longer time to do stuff in place rather than using a separate file. I tried your sed command but switched from in place to a temporary file. Total time went down from 26s to 9s.
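If staying in pure Python is preferable to shelling out, an equivalent pass over each file can be sketched like this (assuming, as in the question, that the leading number never itself contains a comma):

```python
import glob
import os

def strip_leading_number(path):
    # stream line by line; split on the first comma only and keep the text part
    tmp = path + '.tmp'
    with open(path) as src, open(tmp, 'w') as dst:
        for line in src:
            dst.write(line.split(',', 1)[1] if ',' in line else line)
    os.replace(tmp, path)  # atomic rename on POSIX

for path in glob.glob('data/*'):
    strip_leading_number(path)
```

Like the shell version, this writes to a temporary file and renames, which avoids the in-place slowdown mentioned above.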
|
I would use GNU `awk` (to leverage the `-i inplace` editing of file) with `,` as the field separator, *no expensive Regex manipulation*:
```
awk -F, -i inplace '{print $2}' file.txt
```
For example, if the filenames have a common prefix like `file`, you can use shell globbing:
```
awk -F, -i inplace '{print $2}' file*
```
`awk` will treat each file as different argument while applying the in-place modifications.
---
As a side note, you could simply run the shell command in the shell directly instead of wrapping it in `os.system()` which is insecure and deprecated BTW in favor of `subprocess`.
| 15,413
|
61,380,617
|
When I'm trying to open a website with the urllib library I'm getting the error. I don't understand why this error occurs. Currently I'm using Python 3.6. Is this a problem with the version?
```
url = 'https://example.com'
html = urllib.request.urlopen(url).read().decode('utf-8')
text = get_text(html)
data = text.split()
print(data)
```
|
2020/04/23
|
[
"https://Stackoverflow.com/questions/61380617",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13139499/"
] |
You should not have duplicate column names in the dataframe, we correct that using `make.unique`.
```
names(df) <- make.unique(names(df))
```
We can then remove empty rows and get data in long format using `pivot_longer`.
```
library(dplyr)
library(tidyr)
df %>%
filter(orig != '' | dest != '') %>%
pivot_longer(cols = -c(orig, dest),
names_to = c('.value', 'index'),
names_sep = '\\.') %>%
select(-index)
```
---
For the updated dataset we can use :
```
df %>%
pivot_longer(cols = -c(orig, dest), names_to = 'year') %>%
mutate(.copy = c('cartrip', 'walking')[.copy]) %>%
pivot_wider(names_from = .copy, values_from = value)
# orig dest year cartrip walking
# <fct> <fct> <chr> <int> <int>
# 1 Seoul Inchon 1997 543 543
# 2 Seoul Inchon 2002 524 524
# 3 Seoul Inchon 2006 364 364
# 4 Seoul Inchon 2010 452 452
# 5 Seoul Inchon 2016 845 845
# 6 Seoul Gyeongi 1997 543 543
# 7 Seoul Gyeongi 2002 524 524
# 8 Seoul Gyeongi 2006 364 364
# 9 Seoul Gyeongi 2010 452 452
#10 Seoul Gyeongi 2016 845 845
#11 Inchon Seoul 1997 543 543
#12 Inchon Seoul 2002 524 524
#13 Inchon Seoul 2006 364 364
#14 Inchon Seoul 2010 452 452
#15 Inchon Seoul 2016 845 845
```
|
a `data.table` solution. You might need to play around the `year`. As `melt` now in `data.table` cannot handle the `year` in your question correctly. I guess `pivot_longer` from `tidyr` can do this in one shot.
```r
library(data.table)
df <- fread('orig dest cartrip cartrip cartrip cartrip cartrip walking walking walking walking walking
1997 2002 2006 2010 2016 1997 2002 2006 2010 2016
Seoul Inchon 543 524 364 452 845 543 524 364 452 845
Seoul Gyeongi 543 524 364 452 845 543 524 364 452 845
Inchon Seoul 543 524 364 452 845 543 524 364 452 845
')
result <- melt(df[orig!="",],measure.vars = patterns(walking="^walking",cartrip="^cartrip"),variable.name = "year")
result[,year:=forcats::lvls_revalue(year,c("1997", "2002", "2006", "2010", "2016")
)]
result[order(orig,dest)][,.(year,orig,dest,cartrip,walking)]
#> year orig dest cartrip walking
#> 1: 1997 Inchon Seoul 543 543
#> 2: 2002 Inchon Seoul 524 524
#> 3: 2006 Inchon Seoul 364 364
#> 4: 2010 Inchon Seoul 452 452
#> 5: 2016 Inchon Seoul 845 845
#> 6: 1997 Seoul Gyeongi 543 543
#> 7: 2002 Seoul Gyeongi 524 524
#> 8: 2006 Seoul Gyeongi 364 364
#> 9: 2010 Seoul Gyeongi 452 452
#> 10: 2016 Seoul Gyeongi 845 845
#> 11: 1997 Seoul Inchon 543 543
#> 12: 2002 Seoul Inchon 524 524
#> 13: 2006 Seoul Inchon 364 364
#> 14: 2010 Seoul Inchon 452 452
#> 15: 2016 Seoul Inchon 845 845
```
Created on 2020-04-23 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)
| 15,418
|
15,118,974
|
I'm trying to learn ruby, so I'm following an exercise of google dev. I'm trying to parse some links. In the case of successful redirection (considering that I know that it its possible only to get redirected once), I get redirect forbidden. I noticed that I go from a http protocol link to an https protocol link. Any concrete idea how could I implement in this in ruby because google's exercise is for python?
error:
```
ruby fix.rb
redirection forbidden: http://code.google.com/edu/languages/google-python-class/images/puzzle/p-bija-baei.jpg -> https://developers.google.com/edu/python/images/puzzle/p-bija-baei.jpg?csw=1
```
code that should achieve what I'm looking for:
```
def acquireData(urls, imgs) #List item urls list of valid urls !checked, imgs list of the imgs I'll download afterwards.
begin
urls.each do |url|
page = Nokogiri::HTML(open(url))
puts page.body
end
rescue Exception => e
puts e
end
end
```
|
2013/02/27
|
[
"https://Stackoverflow.com/questions/15118974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1388172/"
] |
Ruby's [OpenURI](http://www.ruby-doc.org/stdlib-1.9.3/libdoc/open-uri/rdoc/OpenURI.html) will automatically handle redirects for you, as long as they're not "[meta-refresh](http://en.wikipedia.org/wiki/Meta_refresh)" that occur inside the HTML itself.
For instance, this follows a redirect automatically:
```
irb(main):008:0> page = open('http://www.example.org')
#<StringIO:0x00000002ae2de0>
irb(main):009:0> page.base_uri.to_s
"http://www.iana.org/domains/example"
```
In other words, the request to "www.example.org" got redirected to "www.iana.org" and OpenURI tracked it correctly.
If you are trying to learn HOW to handle redirects, read the [Net::HTTP](http://ruby-doc.org/stdlib-1.9.3/libdoc/net/http/rdoc/Net/HTTP.html) documentation. Here is the example how to do it from the document:
>
> Following Redirection
>
>
> Each Net::HTTPResponse object belongs to a class for its response code.
>
>
> For example, all 2XX responses are instances of a Net::HTTPSuccess subclass, a 3XX response is an instance of a Net::HTTPRedirection subclass and a 200 response is an instance of the Net::HTTPOK class. For details of response classes, see the section “HTTP Response Classes” below.
>
>
> Using a case statement you can handle various types of responses properly:
>
>
>
```
def fetch(uri_str, limit = 10)
# You should choose a better exception.
raise ArgumentError, 'too many HTTP redirects' if limit == 0
response = Net::HTTP.get_response(URI(uri_str))
case response
when Net::HTTPSuccess then
response
when Net::HTTPRedirection then
location = response['location']
warn "redirected to #{location}"
fetch(location, limit - 1)
else
response.value
end
end
print fetch('http://www.ruby-lang.org')
```
If you want to handle meta-refresh statements, reflect on this:
```
require 'nokogiri'
doc = Nokogiri::HTML(%[<meta http-equiv="refresh" content="5;URL='http://example.com/'">])
meta_refresh = doc.at('meta[http-equiv="refresh"]')
if meta_refresh
puts meta_refresh['content'][/URL=(.+)/, 1].gsub(/['"]/, '')
end
```
Which outputs:
```
http://example.com/
```
|
Basically the URL in code.google that you're trying to open redirects to an https URL. You can see that by yourself if you paste `http://code.google.com/edu/languages/google-python-class/images/puzzle/p-bija-baei.jpg` into your browser
Check the following [bug report](http://bugs.ruby-lang.org/issues/859) that explains why open-uri can't redirect to https;
So the solution to your problem is simply: use a different set of urls (that don't redirect to https)
| 15,419
|
29,656,173
|
I'm a student doing a computer science course and for part of the assessment we have to write a program that will take 10 digits from the user and use them to calculate an 11th number in order to produce an ISBN. The numbers that the user inputs HAVE to be limited to one digit, and an error message should be displayed if more than one digit is entered. This is the code that I am using:
```
print('Please enter your 10 digit number')
a = int(input("FIRST NUMBER: "))
aa = (a*11)
if len(a) > 1:
print ("Error. Only 1 digit allowed!")
b = int(input("SECOND NUMBER: "))
bb = (b*10)
if len(a) > 1:
print ("Error. Only 1 digit allowed!")
```
etc.
I have to keep the inputs as integers so that some of the calculations in the rest of the program work, but when I run the program, I get an error saying "object of type 'int' has no len()". I'm assuming that it is referring to the fact that it is an integer and has no length. Is there any way that I can keep 'a' as an integer but limit the length to 1 digit?
(Also I understand that there is probably a more efficient way of writing the program, but I have a fairly limited knowledge of python)
|
2015/04/15
|
[
"https://Stackoverflow.com/questions/29656173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4678142/"
] |
You have to convert the int to a string because int does not have a length property. Also You were checking if the digit was longer than 1 for a twice so I switched the SECOND NUMBER check to b
```
print('Please enter your 10 digit number')
a = raw_input("FIRST NUMBER: ")
if len(a) > 1:
print ("Error. Only 1 digit allowed!")
a = int(a)
aa = (a*10)
b = raw_input("SECOND NUMBER: ")
if len(b) > 1:
print ("Error. Only 1 digit allowed!")
b = int(b)
bb = (b*10)
```
### Or more simply:
You could ask for the number and keep asking until the length is 10 and the input is a number
```
num = raw_input('Please enter your 10 digit number:')
while len(num) != 10 or (not num.isdigit()):
print 'Not a 10 digit number'
num = raw_input('Please enter your 10 digit number:')
num = int(num)
print 'The final number is: ', num
```
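Once a valid 10-digit string is in hand, the check digit itself can be computed without one variable per digit. A sketch assuming the weighting implied by the question's code (weights 11 down to 2, modulo 11):

```python
def check_digit(digits):
    # digits: a string of exactly 10 characters '0'-'9'
    # weighted sum with weights 11, 10, ..., 2, then mod-11 check
    total = sum(int(d) * w for d, w in zip(digits, range(11, 1, -1)))
    return (11 - total % 11) % 11

print(check_digit('1234567890'))  # 9
```

The function name and the sample input are my own; adjust the weights if your coursework specifies a different scheme.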
|
Firstly, I'm assuming you are using 3.x. Secondly, if you are using 2.x, you can't use `len` on numbers.
This is what I would suggest:
```
print('Please enter your 10 digit number')
number = ''
for x in range(1,11):
digit = input('Please enter digit ' + str(x) + ': ')
while len(digit) != 1:
# digit is either empty or not a single digit so keep asking
digit = input('That was not 1 digit. Please enter digit ' + str(x) + ': ')
number += digit # digit is a single digit so add to number
# do the rest
```
It makes more sense to keep all the numbers in a `str` as you can then split them out later as you need them e.g. `number[0]` will be the first digit, `number[1]` will be the second.
If you can adapt your program to not have to explicitly use a, b, c ,d etc. and instead use slicing, it will be quite simple to construct.
Obviously if you can use a whole 10 digit number than the best method would be:
```
number = input('Please enter your 10 digit number: ')
while len(number) != 10:
number = input('That was not a 10 digit number. Please enter your 10 digit number ')
```
As a last resort, if you absolutely have to have individual variable names per digit, you can use `exec` and `eval`:
```
var_names = [('a', 1), ('b', 2), ('c', 3), ('d', 4)] # add more as needed
for var, num in var_names:
exec(var + ' = input("Please enter digit " + str(num) + ": ")')
while eval('len(' + var + ')') != 1:
exec(var + ' = input("That was not a single digit. Please enter digit " + str(num) + ": ")')
```
That would give you vars a,b,c and d equalling the digits given.
| 15,420
|
66,533,544
|
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-1-b7eb239f86a7> in <module>
1 # Initialize path to SQLite database
2 path = 'data/classic_rock.db'
----> 3 con = sq3.Connection(path)
4
5
NameError: name 'sq3' is not defined
```
|
2021/03/08
|
[
"https://Stackoverflow.com/questions/66533544",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14204983/"
] |
It looks like from the traceback that you are attempting to use `sq3` and you either did not `import` the library or did not correctly alias the library in question. Cannot know for sure without your code though.
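A minimal sketch of the likely missing line, assuming `sq3` was meant as an alias for the standard-library `sqlite3` module (the alias name comes from the traceback; the in-memory database here is just for illustration):

```python
import sqlite3 as sq3  # defines the name `sq3` used later

con = sq3.Connection(':memory:')  # same call shape as the traceback, on a throwaway DB
print(con.execute('SELECT 1').fetchone()[0])  # 1
con.close()
```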
|
`sq3` is not defined.
That's why. Somewhere in your code you're expecting a name called `sq3` but it doesn't exist; most likely an `import sqlite3 as sq3` line is missing.
| 15,424
|
36,467,658
|
I installed firewalld on my centos server but as I tried to start it I got this:
```
$ sudo systemctl start firewalld
Job for firewalld.service failed. See 'systemctl status firewalld.service' and 'journalctl -xn' for details.
```
here is the systemctl status:
```
sudo systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: failed (Result: exit-code) since پنجشنبه 2016-04-07 05:36:17 UTC; 9s ago
Process: 929 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=1/FAILURE)
Main PID: 929 (code=exited, status=1/FAILURE)
آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: firewalld.service: main process exited, code=exited, status=1/FAILURE
آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Failed to start firewalld - dynamic firewall daemon.
آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Unit firewalld.service entered failed state.
```
and firewall-cmd status:
```
sudo firewall-cmd --stat
Traceback (most recent call last):
File "/bin/firewall-cmd", line 24, in <module>
from gi.repository import GObject
File "/usr/lib64/python2.7/site-packages/gi/__init__.py", line 37, in <module>
from . import _gi
ImportError: /usr/lib64/python2.7/site-packages/gi/_gi.so: undefined symbol: g_type_check_instance_is_fundamentally_a
```
I can't see the relation between firewalld and these GTK Python extensions!
|
2016/04/07
|
[
"https://Stackoverflow.com/questions/36467658",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3383936/"
] |
This worked for me:
```
systemctl stop firewalld
pkill -f firewalld
systemctl start firewalld
```
|
I know that it is an old thread, but I was facing this problem and I just fixed it; figured it will help someone in the near future.
I thought the problem was in my code or that I had misplaced the file.
Well, sadly this file is corrupted (perhaps misplaced, or badly compiled):
`/usr/lib/python2.7/site-packages/gi/_gi.so`
What you need is to update **GLib 2**, since the update will overwrite and fix it. You can do this using **yum**.
Try `yum update glib2`
I tested the above using **CentOS Linux release 7.1.1503 (Core)**
Cheers
| 15,425
|
38,282,659
|
I have two data points `x` and `y`:
```
x = 5 (value corresponding to 95%)
y = 17 (value corresponding to 102.5%)
```
No I would like to calculate the value for `xi` which should correspond to 100%.
```
x = 5 (value corresponding to 95%)
xi = ?? (value corresponding to 100%)
y = 17 (value corresponding to 102.5%)
```
How should I do this using python?
|
2016/07/09
|
[
"https://Stackoverflow.com/questions/38282659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6306448/"
] |
Is that what you want?
```
In [145]: s = pd.Series([5, np.nan, 17], index=[95, 100, 102.5])
In [146]: s
Out[146]:
95.0 5.0
100.0 NaN
102.5 17.0
dtype: float64
In [147]: s.interpolate(method='index')
Out[147]:
95.0 5.0
100.0 13.0
102.5 17.0
dtype: float64
```
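For a one-off value, `numpy.interp` gives the same result without building a Series (a side note beyond the pandas approach above):

```python
import numpy as np

# interpolate y at x=100 given the two known points (95, 5) and (102.5, 17)
print(np.interp(100, [95, 102.5], [5, 17]))  # 100% corresponds to 13
```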
|
We can easily plot this on a graph without Python:
[](https://i.stack.imgur.com/PW6fy.png)
This shows us what the answer should be (13).
But how do we calculate this? First, we find the gradient with this:
[](https://i.stack.imgur.com/zQVKFs.png)
The numbers substituted into the equation give this:
[](https://i.stack.imgur.com/cgsQy.png)
So we know for 0.625 we increase the Y value by, we increase the X value by 1.
We've been given that Y is 100. We know that 102.5 relates to 17. `100 - 102.5 = -2.5`. `-2.5 / 0.625 = -4` and then `17 + -4 = 13`.
This also works with the other numbers: `100 - 95 = 5`, `5 / 0.625 = 8`, `5 + 8 = 13`.
We can also go backwards using the reciprocal of the gradient (`1 / m`).
We've been given that X is 13. We know that 102.5 relates to 17. `13 - 17 = -4`. `-4 / 0.625 = -2.5` and then `102.5 + -2.5 = 100`.
How do we do this in python?
```
def findXPoint(xa,xb,ya,yb,yc):
m = (xa - xb) / (ya - yb)
xc = (yc - yb) * m + xb
    return xc
```
And to find a Y point given the X point:
```
def findYPoint(xa,xb,ya,yb,xc):
m = (ya - yb) / (xa - xb)
yc = (xc - xb) * m + yb
return yc
```
This function will also extrapolate from the data points.
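Putting it together with the numbers from the question as a self-contained check (the function name here is my own):

```python
def lerp(x0, y0, x1, y1, x):
    # straight-line interpolation: y = y0 + (x - x0) * gradient
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

# 95% -> 5 and 102.5% -> 17, so 100% should give 13
print(lerp(95, 5, 102.5, 17, 100))
```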
| 15,430
|
34,778,397
|
I am currently creating a music player in python 3.3 and I have a way of opening the mp3/wav files, namely using 'os.startfile()', but, this way of running the files means that if I run more than one, the second cancels the first, and the third cancels the second, and so on and so forth, so I only end up running the last file. So, basically, I would like a way of reading the mp3 file length so that I can use 'time.sleep(SongLength)' between the start of each file.
Thanks in advance.
**EDIT:**
I forgot to mention, but I would prefer to do this using only pre-installed libraries, as i am hoping to publish this online as a part of a (much) larger program
|
2016/01/13
|
[
"https://Stackoverflow.com/questions/34778397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2966288/"
] |
I've managed to do this using an external module; after ages of trying to do it without any, I gave up and used [tinytag](https://pypi.python.org/pypi/tinytag/), as it is easy to install and use.
|
Nothing you can do without external libraries, as far as I know. Try using [pymad](http://spacepants.org/src/pymad/).
Use it like this:
```
import mad
SongFile = mad.MadFile("something.mp3")
SongLength = SongFile.total_time()
```
| 15,432
|
22,490,833
|
I have this string:
```
Email: promo@elysianrealestate.com
```
I want to get the email address:
### I tried this
```
Email:.*
```
but I got the whole string, not just the email
help please
### i am using scrapy with python
|
2014/03/18
|
[
"https://Stackoverflow.com/questions/22490833",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2038257/"
] |
If your string always finish with the email, you use:
```
r'Email:\s*(.*)'
```
I got the idea from [here](http://doc.scrapy.org/en/0.7/topics/selectors.html#using-selectors-with-regular-expressions) but I can't test it as I don't have a scrapy shell available at the moment.
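Outside scrapy, the same idea can be checked with plain `re` (using `\S+` instead of `.*` so the match stops at whitespace, which is an assumption beyond the original pattern):

```python
import re

text = 'Email: promo@elysianrealestate.com'
match = re.search(r'Email:\s*(\S+)', text)
print(match.group(1))  # promo@elysianrealestate.com
```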
|
This should capture your emails, it ensures that you only capture correctly formed emails:
```
Email:\s+(\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}\b)
```
Here's how I tested it:
```
>>> import re
>>> txt = """
I have this string:
Email: promo@elysianrealestate.com foo bar baz
I want to get the email address:"""
>>> re.findall(r"""
Email:\s+
(\b # edge of first part
[A-Za-z0-9._%+-]+   # name, can be dotted
@ # @
[A-Za-z0-9.-]+ # domain, e.g. something.something
\. # .
[A-Za-z]{2,4}\b) # any lettered end, 2 to 4 letters long
""", txt, re.VERBOSE)
['promo@elysianrealestate.com']
```
| 15,433
|
57,854,020
|
**My Problem**
I am trying to create a column in python which is the conditional smoothed moving 14 day average of another column. The condition is that I only want to include positive values from another column in the rolling average.
I am currently using the following code which works exactly how I want it to, but it is really slow because of the loops. I want to try and re-do it without using loops. The dataset is simply the last closing price of a stock.
**Current Working Code**
```
import numpy as np
import pandas as pd
csv1 = pd.read_csv('stock_price.csv', delimiter = ',')
df = pd.DataFrame(csv1)
df['delta'] = df.PX_LAST.pct_change()
df.loc[df.index[0], 'avg_gain'] = 0
for x in range(1,len(df.index)):
if df["delta"].iloc[x] > 0:
df["avg_gain"].iloc[x] = ((df["avg_gain"].iloc[x - 1] * 13) + df["delta"].iloc[x]) / 14
else:
df["avg_gain"].iloc[x] = ((df["avg_gain"].iloc[x - 1] * 13) + 0) / 14
df
```
**Correct Output Example**
```
Dates PX_LAST delta avg_gain
03/09/2018 43.67800 NaN 0.000000
04/09/2018 43.14825 -0.012129 0.000000
05/09/2018 42.81725 -0.007671 0.000000
06/09/2018 43.07725 0.006072 0.000434
07/09/2018 43.37525 0.006918 0.000897
10/09/2018 43.47925 0.002398 0.001004
11/09/2018 43.59750 0.002720 0.001127
12/09/2018 43.68725 0.002059 0.001193
13/09/2018 44.08925 0.009202 0.001765
14/09/2018 43.89075 -0.004502 0.001639
17/09/2018 44.04200 0.003446 0.001768
```
**Attempted Solutions**
I tried to create a new column that only comprises of the positive values and then tried to create the smoothed moving average of that new column but it doesn't give me the right answer
```
df['new_col'] = df['delta'].apply(lambda x: x if x > 0 else 0)
df['avg_gain'] = df['new_col'].ewm(14,min_periods=1).mean()
```
The maths behind it as follows...
Avg\_Gain = ((Avg\_Gain(t-1) \* 13) + (New\_Col \* 1)) / 14
where New\_Col only equals the positive values of Delta
Does anyone know how I might be able to do it?
Cheers
|
2019/09/09
|
[
"https://Stackoverflow.com/questions/57854020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12041022/"
] |
This should speed up your code:
`df['avg_gain'] = df[df['delta'] > 0]['delta'].rolling(14).mean()`
Does your current code converge to zero? If you can provide the data, it would be easier for folks to do some analysis.
|
I would suggest you add a column which is 0 if the value is < 0 and instead has the same value as the one you want to consider if it is >= 0. Then you take the running average of this new column.
```
df['new_col'] = df['delta'].apply(lambda x: x if x >= 0 else 0)
df['avg_gain'] = df['new_col'].rolling(14).mean()
```
This would take into account zeros instead of just discarding them.
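For the exact recursion in the question, `avg = (prev * 13 + gain) / 14`, pandas' exponentially weighted mean with `alpha=1/14` and `adjust=False` reproduces it in vectorized form. A sketch assuming the first `delta` is `NaN` as in the sample data, so the series starts at 0:

```python
import pandas as pd

delta = pd.Series([float('nan'), -0.012129, -0.007671, 0.006072, 0.006918])
gains = delta.clip(lower=0).fillna(0)             # keep only positive moves
avg_gain = gains.ewm(alpha=1/14, adjust=False).mean()
print(avg_gain.round(6).tolist())  # [0.0, 0.0, 0.0, 0.000434, 0.000897]
```

With `adjust=False`, `ewm` computes `y[t] = (1 - alpha) * y[t-1] + alpha * x[t]`, which is exactly the loop's `(prev * 13 + gain) / 14`, and the result matches the expected output in the question.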
| 15,436
|
44,934,948
|
I try to get all lattitudes and longtitudes from this json.
Code:
```
import urllib.parse
import requests
raw_json = 'http://live.ksmobile.net/live/getreplayvideos?userid='
print()
userid = 735890904669618176
#userid = input('UserID: ')
url = raw_json + urllib.parse.urlencode({'userid=': userid}) + '&page_size=1000'
print(url)
json_data = requests.get(url).json()
print()
for coordinates in json_data['data']['video_info']:
print(coordinates['lat'], coordinates['lnt'])
print()
```
Error:
```
/usr/bin/python3.6 /media/anon/3D0B8DD536C9574F/PythonProjects/getLocation/getCoordinates
http://live.ksmobile.net/live/getreplayvideos?userid=userid%3D=735890904669618176&page_size=1000
Traceback (most recent call last):
File "/media/anon/3D0B8DD536C9574F/PythonProjects/getLocation/getCoordinates", line 17, in <module>
for coordinates in json_data['data']['video_info']:
TypeError: list indices must be integers or slices, not str
Process finished with exit code 1
```
Where do I go wrong?
In advance, thanks for your help and time.
I just post some of the json to show what it looks like.
The json looks like this:
```
{
"status": "200",
"msg": "",
"data": {
"time": "1499275646",
"video_info": [
{
"vid": "14992026438883533757",
"watchnumber": "38",
"topicid": "0",
"topic": "",
"vtime": "1499202678",
"title": "happy 4th of july",
"userid": "735890904669618176",
"online": "0",
"addr": "",
"isaddr": "2",
"lnt": "-80.1282576",
"lat": "26.2810628",
"area": "A_US",
"countryCode": "US",
"chatSystem": "1",
},
```
Full json: <https://pastebin.com/qJywTqa1>
|
2017/07/05
|
[
"https://Stackoverflow.com/questions/44934948",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8258990/"
] |
Your URL construction is incorrect. The URL you have built (as shown in the output of your script) is:
```
http://live.ksmobile.net/live/getreplayvideos?userid=userid%3D=735890904669618176&page_size=1000
```
Where you actually want this:
```
http://live.ksmobile.net/live/getreplayvideos?userid=735890904669618176&page_size=1000
```
So your were actually getting this JSON in your response:
```
{
"status": "200",
"msg": "",
"data": []
}
```
Which is why you were seeing that error.
Here is the corrected script:
```
import urllib.parse
import requests
raw_json = 'http://live.ksmobile.net/live/getreplayvideos?'
print()
userid = 735890904669618176
#userid = input('UserID: ')
url = raw_json + urllib.parse.urlencode({'userid': userid}) + '&page_size=1000'
print(url)
json_data = requests.get(url).json()
print()
for coordinates in json_data['data']['video_info']:
print(coordinates['lat'], coordinates['lnt'])
print()
```
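As a quick check of the fix, `urlencode` on a plain `{'userid': ...}` mapping produces the query string without the stray `%3D=`:

```python
import urllib.parse

query = urllib.parse.urlencode({'userid': 735890904669618176, 'page_size': 1000})
print(query)  # userid=735890904669618176&page_size=1000
```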
|
According to your posted json, you have problem in this statement-
`print(coordinates['lat'], coordinates['lnt'])`
Here `coordinates` is a list having only one item which is dictionary. So your statement should be-
`print(coordinates[0]['lat'], coordinates[0]['lnt'])`
| 15,437
|
63,301,691
|
My code works perfectly in Python 3.8, but when I switch to Python 3.5 on the same operating system, with the same code and everything else, it starts throwing "SyntaxError: invalid syntax".
Here is the error, and the part of the code that I think relates to the error:
```
Traceback (most recent call last):
File "pwb.py", line 390, in <module>
if not main():
File "pwb.py", line 385, in main
file_package)
File "pwb.py", line 100, in run_python_file
exec(compile(source, filename, 'exec', dont_inherit=True),
File ".\scripts\signbot.py", line 83
namespace: int
^
SyntaxError: invalid syntax
CRITICAL: Exiting due to uncaught exception <class 'SyntaxError'>
```
And here is the part of the code :
```
@dataclass
class RevisionInfo:
namespace: int
title: str
type: str
bot: bool
comment: str
user: str
oldRevision: Optional[int]
newRevision: int
timestamp: int
```
Sorry if the question title is not specific, but I'm having trouble getting this code working in Python 3.5. The server I'm going to run this code on only supports Python 3.5, so I need to get this working with 3.5. Thanks.
|
2020/08/07
|
[
"https://Stackoverflow.com/questions/63301691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10214891/"
] |
There are at least two issues here:
1. [Variable annotations](https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep526) were new in Python 3.6.
2. The [`dataclasses`](https://docs.python.org/3/library/dataclasses.html?highlight=dataclass#module-dataclasses) module was new in Python 3.7.
Either use Python 3.7 or greater, or rewrite your code so it doesn't rely on dataclasses and variable annotations.
This is one of many reasons that it's a good idea to use the same version of Python in development as you intend to use in production. You can avoid writing code that won't work on your server.
|
One new and exciting feature that arrived in Python 3.7 is the data class, so you're not able to use it in Python 3.5.
You should use the traditional way and write a constructor:
```
class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)
```
| 15,438
|
46,366,398
|
I am using Pymodm as a mongoDB odm with python flask. I have looked through code and documentation (<https://github.com/mongodb/pymodm> and <http://pymodm.readthedocs.io/en/latest>) but could not find what I was looking for.
I am looking for an easy way to fetch data from the database without converting it to a pymodm object but as plain JSON. Is this possible with pymodm?
Currently, I am overloading the flask JSONEncoder to handle DateTime and ObjectID and use that to convert the pymodm Object to JSON.
|
2017/09/22
|
[
"https://Stackoverflow.com/questions/46366398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3531894/"
] |
It is not obvious from the PyMODM documentation, but here's how to do it:
```
pymodm_obj.to_son().to_dict()
```
Actually, I just re-read your question, and I don't think anything is forcing you to use PyMODM everywhere in your project once you have made the decision to use it. So if you are just looking for the JSON structures, you could just use the base pymongo package functionality.
|
Having:
```
from pymodm import MongoModel, fields
import json

class Foo(MongoModel):
    name = fields.CharField(required=True)

a = Foo()
```
You can do:
```
jsonFooString=json.dumps(a.to_son().to_dict())
```
| 15,439
|
15,059,082
|
This is my code. In the first function, I made it return column\_choose, and I want to use column\_choose's value in the second function (get\_data\_list). What can I do? I have tried many times, but IDLE always shows: global name 'column\_choose' is not defined.
How can I use column\_choose's value in the second function?
By the way, I use Python 3.2.
```
def get_column_number():
    while True:
        column_choose = input('What column:')
        if column_choose == '1' or column_choose == '2' or column_choose == '3' or column_choose == '4' or column_choose == '5' or column_choose == '6':
            column_choose = int(column_choose)
            return column_choose
            break
        else:
            print('bad column number, try again')

def get_data_list(column_number):
    new_column_number = column_number.split(',')
    date_column = new_column_number[0]
    choose_column = new_column_number[column_choose-1]
    return result

def main():
    #Call get_input_descriptor
    get_input_descriptor()
    #Call get_column_number
    get_column_number()
    file_obj = open("table.csv", "r")
    for column_number in file_obj:
        column_number = (column_number.strip())
        result = get_data_list(column_number)
        print(result)
    file_obj.close()

main()
```
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15059082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2011210/"
] |
Using `float: left` on the elements will cause them to ignore the `nowrap` rule. Since you are already using `display: inline-block`, you don't need to float the elements to have them display side-by-side. Just remove `float: left`
|
Was because of the float:left;, once i removed that, fine. Spotted it after typing out question sorry.
| 15,442
|
70,647,836
|
Very new to Python. I am trying to iterate over a list of floating points and append elements to a new list based on a condition. Every time I populate the list I get double the output; for example, a list with three floating points gives me an output of six elements in the new list.
```
tempretures = [39.3, 38.2, 38.1]
new_list = []
i=0
while i <len(tempretures):
    for tempreture in range(len(tempretures)):
        if tempreture < 38.3:
            new_list = new_list + ['L']
        elif tempreture > 39.2:
            new_list = new_list + ['H']
        else: tempreture (38.3>=39.2)
        new_list = new_list + ['N']
        i=i+1
print (new_list)
print (i)
```
```
['L', 'N', 'L', 'N', 'L', 'N']
3
```
|
2022/01/10
|
[
"https://Stackoverflow.com/questions/70647836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17887798/"
] |
The issue is with the indentation of the line `new_list = new_list + ['N']`. Because it is under-indented, it runs on every iteration.
If I can suggest an easier syntax:
```
temperatures = [39.3, 38.2, 38.1]
new_list = []

for temperature in temperatures:
    if temperature < 38.3:
        new_list.append('L')
    elif temperature > 39.2:
        new_list.append('H')
    else:
        new_list.append('N')

print(new_list)
print(len(new_list))
```
->
```
['H', 'L', 'L']
3
```
|
Hope it will solve your issue:
```
tempretures = [39.3, 38.2, 38.1]
new_list = []

for temperature in tempretures:
    if temperature > 39.2:
        new_list.append('H')
    elif temperature>=38.3 and temperature<39.2:
        new_list.append('N')
    else:
        new_list.append('L')

print (new_list)
```
sample output: `['H', 'L', 'L']`
| 15,443
|
57,010,692
|
I am trying to extract specific data from a requested JSON file.
After passing Authorization and using requests.get I got my request; I think it is called a dictionary by Python coders and JSON by JavaScript coders.
It contains too much information that I don't need, and I would like to extract only one or two fields,
for example {"bio" : " hello world " },
and that JSON file contains more than one "bio".
For example, I am scraping 100 accounts and I would like to extract all the "bio" fields with one piece of code.
So I tried this:
```py
from bs4 import BeautifulSoup
import requests

headers = {"Authorization" : "xxxx"}
req = requests.get('website', headers = headers)
data = req.text
soup = BeautifulSoup(data,'html.parser')
titles = soup.find_all('span',{'class':'bio'})
for title in titles :
    print(title.text)
```
and it didn't work; I tried multiple ideas with no success.
If possible, please write code that I can understand, since I am trying to learn from my mistakes.
Thanks.
|
2019/07/12
|
[
"https://Stackoverflow.com/questions/57010692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9952973/"
] |
The `Aphid` library I created is perfect for this.
From the command prompt:
```py
py -m pip install Aphid
```
Then its just as easy as loading your json data and searching it with aphid.
```
import json
import requests
import Aphid

resp = requests.get(yoururl)
data = json.loads(resp.text)
results = Aphid.findall(data, 'bio')
```
`results` is now equal to a list of tuples(key, value), of every occurence of the 'bio' key.
|
After you get your request either:
* you get a simple json file (in which case you import it to python using [json](https://docs.python.org/3/library/json.html)) **or**
* you get an html file from which you can extract the json code (using BeautifulSoup) which in turn you will parse using json library.
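A minimal sketch of the plain-JSON route described above, with a hypothetical inline payload standing in for the real response body:

```python
import json

# hypothetical response body: a list of account objects, each with a "bio"
payload = '[{"bio": "hello world"}, {"bio": "second bio"}]'
accounts = json.loads(payload)
bios = [account["bio"] for account in accounts]
print(bios)  # -> ['hello world', 'second bio']
```

With a real request, `accounts = resp.json()` would replace the inline payload.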
| 15,446
|
16,956,523
|
[Using Python3] I have a csv file that has two columns (an email address and a country code; script is made to actually make it two columns if not the case in the original file - kind of) that I want to split out by the value in the second column and output in separate csv files.
```
eppetj@desrfpkwpwmhdc.com us ==> output-us.csv
uheuyvhy@zyetccm.com de ==> output-de.csv
avpxhbdt@reywimmujbwm.com es ==> output-es.csv
gqcottyqmy@romeajpui.com it ==> output-it.csv
qscar@tpcptkfuaiod.com fr ==> output-fr.csv
qshxvlngi@oxnzjbdpvlwaem.com gb ==> output-gb.csv
vztybzbxqq@gahvg.com us ==> output-us.csv
... ... ...
```
Currently my code kind of does this, but instead of writing each email address to the csv it overwrites the email placed before that. Can someone help me out with this?
I am very new to programming and Python and I might not have written the code in the most pythonic way, so I would really appreciate any feedback on the code in general!
Thanks in advance!
Code:
```
import csv

def tsv_to_dict(filename):
    """Creates a reader of a specified .tsv file."""
    with open(filename, 'r') as f:
        reader = csv.reader(f, delimiter='\t') # '\t' implies tab
        email_list = []
        # Checks each list in the reader list and removes empty elements
        for lst in reader:
            email_list.append([elem for elem in lst if elem != '']) # List comprehension
    # Stores the list of lists as a dict
    email_dict = dict(email_list)
    return email_dict

def count_keys(dictionary):
    """Counts the number of entries in a dictionary."""
    return len(dictionary.keys())

def clean_dict(dictionary):
    """Removes all whitespace in keys from specified dictionary."""
    return { k.strip():v for k,v in dictionary.items() } # Dictionary comprehension

def split_emails(dictionary):
    """Splits out all email addresses from dictionary into output csv files by country code."""
    # Creating a list of unique country codes
    cc_list = []
    for v in dictionary.values():
        if not v in cc_list:
            cc_list.append(v)
    # Writing the email addresses to a csv based on the cc (value) in dictionary
    for key, value in dictionary.items():
        for c in cc_list:
            if c == value:
                with open('output-' +str(c) +'.csv', 'w') as f_out:
                    writer = csv.writer(f_out, lineterminator='\r\n')
                    writer.writerow([key])
|
2013/06/06
|
[
"https://Stackoverflow.com/questions/16956523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2445114/"
] |
You can simplify this a lot by using a `defaultdict`:
```
import csv
from collections import defaultdict

emails = defaultdict(list)

with open('email.tsv','r') as f:
    reader = csv.reader(f, delimiter='\t')
    for row in reader:
        if row:
            if '@' in row[0]:
                emails[row[1].strip()].append(row[0].strip()+'\n')

for key,values in emails.items():
    with open('output-{}.csv'.format(key), 'w') as f:
        f.writelines(values)
As your separated files are not comma separated, but single columns - you don't need the csv module and can simply write the rows.
The `emails` dictionary contains a key for each country code, and a list of all the matching email addresses. To make sure the email addresses are printed correctly, we remove any whitespace and add a line break (this is so we can use `writelines` later).
Once the dictionary is populated, it's simply a matter of stepping through the keys to create the files and then writing out the resulting list.
|
Not a Python answer, but maybe you can use this Bash solution.
```
$ while read email country
do
echo $email >> output-$country.csv
done < in.csv
```
This reads the lines from `in.csv`, splits them into two parts `email` and `country`, and appends (`>>`) the `email` to the file called `output-$country.csv`.
| 15,447
|
71,117,916
|
I'm looking to use the Kubernetes python client to delete a deployment, but then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function something like follows.
```
try:
    # try to delete if exists
    AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod")
except Exception:
    # handle exception
    pass

# wait for all pods associated with deployment to be deleted.
for e in w.stream(
        v1.list_namespaced_pod, namespace="default",
        label_selector="mylabel=my-value",
        timeout_seconds=300):
    pod_name = e['object'].metadata.name
    print("pod_name", pod_name)
    if e['type'] == 'DELETED':
        w.stop()
        break
```
However, I see two problems with this.
1. If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events.
2. Upon seeing events in the event stream for the pod activity, how do know all the pods got deleted? Seems fragile to count them.
I'm basically looking to replace the `kubectl delete --wait` functionality with a python script.
Thanks for any insights into this.
|
2022/02/14
|
[
"https://Stackoverflow.com/questions/71117916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/226081/"
] |
```css
.card {
  display: flex;
  flex-direction: column;
  flex-wrap: wrap;
  justify-content: center;
  align-items: center;
  height: 400px;
}

.card img {
  height: 400px;
  max-width: 50%;
}
```
```html
<div class="container">
  <div class="card">
    <img src="https://www.unfe.org/wp-content/uploads/2019/04/SM-placeholder.png">
    <h2>Gift Cards</h2>
    <p>
      Lorem ipsum dolor, sit amet consectetur adipisicing elit. Neque expedita tempore quasi omnis a aut et totam illo fuga accusamus dolorum vero, ut harum consectetur. Minima molestias officiis culpa non sed dicta itaque. Et aliquam illo obcaecati molestias veritatis porro.
    </p>
    <p>Already have an Orange MyTunes Music Gift Card?</p>
    <hr>
    <a href="#">>Redeem</a>
  </div>
</div>
```
Not the best approach, however;
I suggest you change the HTML code to be like this:
```css
.card {
  display: flex;
  justify-content: center;
  align-items: center;
}

.card-img {
  width: 50%;
  display: flex;
  align-items: center;
}

.card-img img {
  width: 100%;
}

.card-body {
  width: 50%;
}
```
```html
<div class="container">
  <div class="card">
    <div class="card-img">
      <img src="https://s6.uupload.ir/files/giftcard_scrj.png">
    </div>
    <div class="card-body">
      <h2>Gift Cards</h2>
      <p>
        Lorem ipsum dolor, sit amet consectetur adipisicing elit. Neque expedita tempore quasi omnis a aut et totam illo fuga accusamus dolorum vero, ut harum consectetur. Minima molestias officiis culpa non sed dicta itaque. Et aliquam illo obcaecati molestias veritatis porro.
      </p>
      <p>Already have an Orange MyTunes Music Gift Card?</p>
      <hr>
      <a href="#">>Redeem</a>
    </div>
  </div>
</div>
```
|
You need to use `flex`:
```css
.card {
  display: flex;
  justify-content: center;
  gap: 20px;
  margin-top: 20px;
}

.img {
  width: 40%;
}

img {
  width: 100%;
}

.text {
  width: 40%;
}

.text p {
  font-size: 12px;
}
```
```html
<div class="card">
  <div class="img">
    <img src="https://s6.uupload.ir/files/magearray-giftcard-icon_n30.png">
  </div>
  <div class="text">
    <h2>Gift Cards</h2>
    <p>
      Lorem ipsum dolor, sit amet consectetur adipisicing elit. Neque expedita tempore quasi omnis a aut et totam illo fuga accusamus dolorum vero, ut harum consectetur. Minima molestias officiis culpa non sed dicta itaque. Et aliquam illo obcaecati molestias veritatis porro.
    </p>
    <p>Already have an Orange MyTunes Music Gift Card?</p>
    <hr>
    <a href="#">>Redeem</a>
  </div>
</div>
```
| 15,450
|
50,259,795
|
I just installed the discord.py rewrite branch, but attempting to use `import discord` or `from discord.ext import commands` simply results in a TypeError.
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/site-packages/discord/__init__.py", line 20, in <module>
    from .client import Client, AppInfo
  File "/usr/local/lib/python3.6/site-packages/discord/client.py", line 30, in <module>
    from .guild import Guild
  File "/usr/local/lib/python3.6/site-packages/discord/guild.py", line 39, in <module>
    from .channel import *
  File "/usr/local/lib/python3.6/site-packages/discord/channel.py", line 31, in <module>
    from .webhook import Webhook
  File "/usr/local/lib/python3.6/site-packages/discord/webhook.py", line 27, in <module>
    import aiohttp
  File "/usr/local/lib/python3.6/site-packages/aiohttp/__init__.py", line 6, in <module>
    from .client import * # noqa
  File "/usr/local/lib/python3.6/site-packages/aiohttp/client.py", line 15, in <module>
    from . import connector as connector_mod
  File "/usr/local/lib/python3.6/site-packages/aiohttp/connector.py", line 17, in <module>
    from .client_proto import ResponseHandler
  File "/usr/local/lib/python3.6/site-packages/aiohttp/client_proto.py", line 6, in <module>
    from .http import HttpResponseParser, StreamWriter
  File "/usr/local/lib/python3.6/site-packages/aiohttp/http.py", line 8, in <module>
    from .http_parser import (HttpParser, HttpRequestParser, HttpResponseParser,
  File "/usr/local/lib/python3.6/site-packages/aiohttp/http_parser.py", line 15, in <module>
    from .http_writer import HttpVersion, HttpVersion10
  File "/usr/local/lib/python3.6/site-packages/aiohttp/http_writer.py", line 304, in <module>
    class URL(yarl.URL):
  File "/usr/local/lib/python3.6/site-packages/yarl/__init__.py", line 232, in __init_subclass__
    "is forbidden".format(cls))
TypeError: Inheritance a class <class 'aiohttp.http_writer.URL'> from URL is forbidden
```
Although the error is technically from yarl rather than from discord.py itself, the error only occurs upon trying to import the modules.
I've already tried reinstalling python as well as the discord.py rewrite branch, and if it makes any difference am running on a RPi 3 B+
|
2018/05/09
|
[
"https://Stackoverflow.com/questions/50259795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9766666/"
] |
Your aiohttp package might be out of date.
Try
```
pip install --upgrade aiohttp
```
|
I tried to install discord.py on my Python 3.7 and it didn't work.
I had to install Python 3.6.6 to make it work. Maybe you are using Python 3.7; if so, you should try rolling back to Python 3.6.6.
| 15,451
|
66,306,167
|
Can you please explain how the Python code below evaluates to True?
```
if 50 == 10 or 30:
print('True')
else:
print('False')
```
Output: True
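Not part of the original post, but for reference: `==` binds tighter than `or`, so the condition parses as `(50 == 10) or 30`, and `or` returns the truthy `30`:

```python
result = (50 == 10) or 30  # == is evaluated first, then or
print(result)        # -> 30
print(bool(result))  # -> True, so the if-branch runs
```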
|
2021/02/21
|
[
"https://Stackoverflow.com/questions/66306167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7121586/"
] |
Replace the internals of the while-loop with a simpler version:
```
while leftIndex < left.count && rightIndex < right.count {
    if left[leftIndex] <= right[rightIndex] {
        mergedArr.append(left[leftIndex])
        leftIndex += 1
    } else {
        mergedArr.append(right[rightIndex])
        rightIndex += 1
    }
}
```
and you forgot about the rest of arrays:
```
while leftIndex < left.count {
    mergedArr.append(left[leftIndex])
    leftIndex += 1
}
while rightIndex < right.count {
    mergedArr.append(right[rightIndex])
    rightIndex += 1
}
```
Why? When you merge two arrays, the tail of one array may not have been processed yet. For example, you merge `[1 3 5] and [1 2]`. After copying `[1 1 2]` to the result, the second array is finished (`rightIndex` becomes equal to `right.count`) and the main while-loop stops. But what about the `[3,5]` piece?
|
Consider if the data remains on the `left` or `right` only.
```
public func mergeSort<T: Comparable>(_ array: [T]) -> [T] {
    if array.count < 2 {
        return array
    }
    let mid = array.count / 2
    let left = [T](array[0..<mid])
    let right = [T](array[mid..<array.count])
    return merge(mergeSort(left), mergeSort(right)) // sort both halves before merging
}

private func merge<T: Comparable>(_ left: [T], _ right: [T]) -> [T] {
    var leftIndex = 0
    var rightIndex = 0
    var tempList = [T]()

    while leftIndex < left.count && rightIndex < right.count {
        if left[leftIndex] < right[rightIndex] {
            tempList.append(left[leftIndex])
            leftIndex += 1
        } else if left[leftIndex] > right[rightIndex] {
            tempList.append(right[rightIndex])
            rightIndex += 1
        } else {
            tempList.append(left[leftIndex])
            tempList.append(right[rightIndex])
            leftIndex += 1
            rightIndex += 1
        }
    }

    //Don't miss this part.
    while leftIndex < left.count {
        tempList.append(left[leftIndex])
        leftIndex += 1
    }
    while rightIndex < right.count {
        tempList.append(right[rightIndex])
        rightIndex += 1
    }
    return tempList
}
```
| 15,452
|
13,907,949
|
I'm having an issue and I have no idea why this is happening and how to fix it. I'm working on developing a Videogame with python and pygame and I'm getting this error:
```
  File "/home/matt/Smoking-Games/sg-project00/project00/GameModel.py", line 15, in Update
    self.imageDef=self.values[2]
TypeError: 'NoneType' object has no attribute '__getitem__'
```
The code:
```
import pygame,components
from pygame.locals import *

class Player(components.Entity):
    def __init__(self,images):
        components.Entity.__init__(self,images)
        self.values=[]

    def Update(self,events,background):
        move=components.MoveFunctions()
        self.values=move.CompleteMove(events)
        self.imageDef=self.values[2]
        self.isMoving=self.values[3]

    def Animation(self,time):
        if(self.isMoving and time==1):
            self.pos+=1
            if (self.pos>(len(self.anim[self.imageDef])-1)):
                self.pos=0
        self.image=self.anim[self.imageDef][self.pos]
```
Can you explain to me what that error means and why it is happening so I can fix it?
|
2012/12/17
|
[
"https://Stackoverflow.com/questions/13907949",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1908896/"
] |
BrenBarn is correct. The error means you tried to do something like `None[5]`. In the backtrace, it says `self.imageDef=self.values[2]`, which means that your `self.values` is `None`.
You should go through all the functions that update `self.values` and make sure you account for all the corner cases.
|
The function `move.CompleteMove(events)` that you use within your class probably doesn't contain a `return` statement. So nothing is returned to `self.values` (==> None). Use `return` in `move.CompleteMove(events)` to return whatever you want to store in `self.values` and it should work. Hope this helps.
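A minimal illustration of this, with a hypothetical stand-in for `move.CompleteMove` (not the asker's real code): a function without `return` yields `None`, and subscripting `None` raises exactly the TypeError from the question:

```python
def complete_move_broken(events):
    values = [0, 0, "walk", True]
    # forgot: return values

def complete_move_fixed(events):
    values = [0, 0, "walk", True]
    return values

broken = complete_move_broken(None)
print(broken)  # -> None (so broken[2] would raise the TypeError)

fixed = complete_move_fixed(None)
print(fixed[2])  # -> walk
```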
| 15,454
|
58,500,923
|
I have an array of elements [a\_1, a\_2, ..., a\_n] and an array of probabilities associated with these elements [p\_1, p\_2, ..., p\_n].
I want to choose "k" elements from [a\_1, ..., a\_n], k << n, according to the probabilities [p\_1, p\_2, ..., p\_n].
How can I code this in Python? Thank you very much; I am not experienced at programming.
|
2019/10/22
|
[
"https://Stackoverflow.com/questions/58500923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12256362/"
] |
use `numpy.random.choice`
example:
```
import numpy as np
from numpy.random import choice

sample_space = np.array([a_1, a_2, ... a_n]) # substitute the a_i's
discrete_probability_distribution = np.array([p_1, p_2, ..., p_n]) # substitute the p_i's

# picking N samples
N = 10
for _ in range(N):
    print(choice(sample_space, p=discrete_probability_distribution))
```
|
Perhaps you want something similar to this?
```
import random

data = ['a', 'b', 'c', 'd']
probabilities = [0.5, 0.1, 0.9, 0.2]

for _ in range(10):
    print([d for d,p in zip(data,probabilities) if p>random.random()])
```
The above would output something like:
```
['c']
['c']
['a', 'c']
['a', 'c']
['a', 'c']
[]
['a', 'c']
['c', 'd']
['a', 'c']
['d']
```
| 15,457
|
15,305,634
|
Today I progressed further into [this Python roguelike tutorial](http://roguebasin.roguelikedevelopment.org/index.php?title=Complete_Roguelike_Tutorial,_using_python%2Blibtcod), and got to the inventory. As of now, I can pick up items and use them. The only problem is, when accessing the inventory, it's only visible for a split second, even though I used the `console_wait_for_keypress(True)` function. I'm not sure as to why it disappears. Here's the code that displays a menu(in this case, the inventory):
```
def menu(header,options,width):
    if len(options)>26: raise ValueError('Cannot have a menu with more than 26 options.')
    header_height=libtcod.console_get_height_rect(con,0,0,width,SCREEN_HEIGHT,header)
    height=len(options)+header_height
    window=libtcod.console_new(width,height)
    libtcod.console_set_default_foreground(window,libtcod.white)
    libtcod.console_print_rect_ex(window,0,0,width,height,libtcod.BKGND_NONE,libtcod.LEFT,header)
    y=header_height
    letter_index=ord('a')
    for option_text in options:
        text='('+chr(letter_index)+')'+option_text
        libtcod.console_print_ex(window,0,y,libtcod.BKGND_NONE,libtcod.LEFT,text)
        y+=1
        letter_index+=1
    x=SCREEN_WIDTH/2-width/2
    y=SCREEN_HEIGHT/2-height/2
    libtcod.console_blit(window,0,0,width,height,0,x,y,1.0,0.7)
    libtcod.console_flush()
    key=libtcod.console_wait_for_keypress(True)
    index=key.c-ord('a')
    if index>=0 and index<len(options): return index
    return None
```
I'd appreciate anyone's help or input to this problem.
|
2013/03/09
|
[
"https://Stackoverflow.com/questions/15305634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2138040/"
] |
Your code is messy, there may be multiple issues but I think this line is your problem
```
txView.SetBackgroundResource(Resource.Color.PrimaryColor);
```
As you can see [here](http://developer.android.com/reference/android/view/View.html#setBackgroundResource%28int%29) in the documentation you should only pass a reference to a `Drawable` as the parameter not a `Color`.
You need to use this method [here](http://developer.android.com/reference/android/view/View.html#setBackgroundColor%28int%29) like so:
`txView.SetBackgroundColor(Resource.Color.PrimaryColor);`
Also the code in the question won't compile, where does the txView variable come from? I'm assuming it's meant to be titleView?
And another thing; the log entry you have posted will be displayed every time you run your project, you can ignore it. See [here](http://mono-for-android.1047100.n5.nabble.com/Runtime-version-supported-by-this-application-is-unavailable-tp5677802p5678753.html) for more info. The actual log entry you should have posted would have come much later (and only *after* you click Continue in Visual Studio)
|
You need to tell which view to find id in. So instantiate a `view` after get the `factory`.
```
var view = factory.Inflate(Resource.Layout.DialogRegister, null);
```
Because the `titleView` would reference to null, it causes the crash,
Then, you can find the `title` using the `view` you just created. One thing to point out, `Android.Resource` references to resources of Android framework in Mono for Android, and `Resource` is actually the reference to your layouts, ids etc. So, the code would be like:
```
var titleView = view.FindViewById<TextView>(Resource.Id.title);
```
[`SetBackgroundResource`](http://developer.android.com/reference/android/view/View.html#setBackgroundResource%28int%29) can only take drawable as an effective parameter, so color won't work in this case. However `SetBackgroundColor` would work because `Android.Graphics.Color.Red` is a [`Color`](http://developer.android.com/reference/android/graphics/Color.html) object.
Also, you can `SetView(view)` while building the dialog.
| 15,458
|
56,324,750
|
I want to get the docker host machine's IP address and interface names, i.e. the ifconfig of the docker host machine. Instead I'm getting the docker container's IP address when doing ifconfig in the docker container.
It would be great if someone could tell me how to fetch the IP address of the docker host machine from a docker container.
I have tried doing `ifconfig dockerhostname`;
as a result I get the error:
dockerhostmachi: error fetching interface information: Device not found
This is my Dockerfile:
```
FROM ubuntu:14.04

# Install dependencies
RUN apt-get update && apt-get install -y \
    python-dev \
    libffi-dev \
    libssl-dev \
    python-enum \
    apache2 \
    libapache2-mod-wsgi \
    python-pip \
    python-qt4

RUN chmod -R 777 /var/log/apache2/error.log
RUN chmod -R 777 /var/log/apache2/other_vhosts_access.log
RUN chmod -R 777 /var/log/apache2/access.log
RUN chmod -R 777 /var/run/apache2/

RUN a2enmod wsgi
```
I need to get the ifconfig result for the docker host machine from a docker container, or when I run the docker image.
|
2019/05/27
|
[
"https://Stackoverflow.com/questions/56324750",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11361438/"
] |
This is an [XY problem](https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem)
Why? Because the issue is not the amount of data not fitting the index, that's the *symptom*.
Real problem
------------
Real problem is how to stop duplicate email entries.
Attempted solution
------------------
Attempted solution is to create a `UNIQUE` index on the `email` column. It's a great attempt at solving the issue except - emails can be unusually large, your index length will vary. Sometimes it might be a 10 bytes, sometimes 30, sometimes 50.. and sometimes 255 - that's *not good*.
Back to the drawing board
-------------------------
What if all emails had fixed length? That's a much easier problem to tackle. You don't have to worry about index size limitation, all you need to make sure is that it's below the default limit of 767 bytes.
Better solution
---------------
Let's not index the `email` field. Let's create another column, `email_unique` and let's store the **hash** of the email there. Then, make the hash a `UNIQUE` index.
Benefits:
- always fixed width
- always falls within default index length of 767 bytes
- no worrying about utf8
How to do it to waste as little space as possible
-------------------------------------------------
* choose a hashing algorithm. `sha1` is completely fine, although you can go for `sha256`. I'll use the 256-bit version of the `SHA-2` algorithm
* create a `binary(32)` field. It will hold the raw value of our hashing function. It will always be fixed width for any kind of email
* I'll use triggers, `before insert` and `before update`, to maintain the value of the hash so I don't have to worry about it in my language's logic.
### Add the binary column
```
ALTER TABLE admins ADD email_hash BINARY(32) AFTER email;
```
### Add the before insert trigger
```
DELIMITER $$
CREATE TRIGGER `admins_before_insert` BEFORE INSERT
ON admins
FOR EACH ROW BEGIN
SET NEW.email_hash = UNHEX(SHA2(NEW.email, 256)); -- this creates a binary representation of a sha-256 hashed email column
END$$
DELIMITER ;
```
### Add before update trigger
```
DELIMITER $$
CREATE TRIGGER `admins_before_update` BEFORE UPDATE
ON admins
FOR EACH ROW BEGIN
SET NEW.email_hash = UNHEX(SHA2(NEW.email, 256)); -- this creates a binary representation of a sha-256 hashed email column
END$$
DELIMITER ;
```
Final words
-----------
I added trigger example code but I didn't test it. The idea is to be able to add and update emails and have MySQL tell you if there's a duplicate. Some people don't like using triggers. That's why the step with triggers is optional and an example how to achieve the effect if you prefer that route.
You can, of course, increase the amount of bytes MySQL will accept for indexing. However, that's not the optimal solution as you can quickly fill up your memory and basically waste resources. Down the line, you might exceed newly set limit.
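As a quick sanity check of the fixed-width property the answer relies on (my own sketch, not part of the original answer): the raw SHA-256 digest is always 32 bytes, whatever the email's length.

```python
import hashlib

emails = ["a@b.c", "a-much-longer-address@some-subdomain.example.com"]
digest_sizes = [len(hashlib.sha256(e.encode("utf-8")).digest()) for e in emails]
print(digest_sizes)  # -> [32, 32]
```

The `UNHEX(SHA2(..., 256))` expression in the triggers produces the same 32 raw bytes on the MySQL side.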
|
You can change innodb\_large\_prefix in your config file to ON. That will allow index key prefixes up to 3072 bytes, as the MySQL [doc](https://dev.mysql.com/doc/refman/5.6/en/innodb-restrictions.html) says.
```
[mysqld]
innodb_large_prefix = 1
```
| 15,461
|
63,320,723
|
I'm a beginner in Python and I'm currently working on a problem on Codeforces called Lecture Sleep. The question gives you 3 lines of input:
```
6 3
1 3 5 2 5 4
1 1 0 1 0 0
```
I'm trying to figure out how to link the second array of numbers `(1 3 5 2 5 4)` to the 3rd array of numbers `(1 1 0 1 0 0)`. So that `1 = 1, 3 = 1, 5 = 0, 2 = 1, 5 = 0, 4 = 0`.
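One common way to pair the two rows element by element is `zip` (my own sketch, assuming the rows have already been read and split into lists of ints):

```python
values = [1, 3, 5, 2, 5, 4]
awake = [1, 1, 0, 1, 0, 0]

pairs = list(zip(values, awake))
print(pairs)  # -> [(1, 1), (3, 1), (5, 0), (2, 1), (5, 0), (4, 0)]
```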
|
2020/08/08
|
[
"https://Stackoverflow.com/questions/63320723",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14072979/"
] |
It might not be the solution for you, but I'll tell you what we do.
1. Prefix the package names, and using namespaces (eg. `company.product.tool`).
2. When we install our packages (including their in-house dependencies), we use a `requirements.txt` file including our PyPI URL. We run everything in container(s) and we install all public dependencies in them when we are building the images.
|
Your company could redirect all requests to pypi to a service you control first (perhaps just at your build servers' `hosts` file(s))
This would potentially allow you to
* prefer/override arbitrary packages with local ones
* detect such cases
* cache common/large upstream packages locally
* reject suspect/non-known versions/names of [upstream packages](https://en.wikipedia.org/wiki/Supply_chain_attack)
| 15,462
|
12,468,022
|
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it:
```
if not os.path.isdir(mydir):
    os.makedirs(mydir)
```
but this yields the error:
```
    os.makedirs(self.log_dir)
  File "/usr/lib/python2.6/os.py", line 157, in makedirs
    mkdir(name,mode)
OSError: [Errno 17] File exists
```
I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided?
I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
|
2012/09/17
|
[
"https://Stackoverflow.com/questions/12468022",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Any time code can execute between when you check something and when you act on it, you will have a race condition. One way to avoid this (and the usual way in Python) is to just try and then handle the exception
```
import errno
import os

while True:
    mydir = next_dir_name()
    try:
        os.makedirs(mydir)
        break
    except OSError, e:
        if e.errno != errno.EEXIST:
            raise
        # time.sleep might help here
        pass
```
If you have a lot of threads trying to make a predictable series of directories this will still raise a lot of exceptions, but you will get there in the end. Better to just have one thread creating the dirs in that case
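The question targets Python 2.6, but for anyone on Python 3.2 or newer the loop can be avoided entirely with `exist_ok=True`; a minimal sketch (the temporary parent directory is only for the demo):

```python
import os
import tempfile

base = tempfile.mkdtemp()            # throwaway parent, just for the demo
target = os.path.join(base, "mydir")

os.makedirs(target, exist_ok=True)   # first call creates the directory
os.makedirs(target, exist_ok=True)   # second call is a harmless no-op
print(os.path.isdir(target))         # → True
```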
|
Catch the exception and, if the errno is 17, ignore it. That's the only thing you can do if there's a race condition between the `isdir` and `makedirs` calls.
However, it could also be possible that a *file* with the same name exists - in that case `os.path.exists` would return `True` but `os.path.isdir` returns false.
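A minimal sketch of that pattern, using `errno.EEXIST` rather than the bare 17, and double-checking for a same-named file (the path here is a placeholder built in a temporary directory):

```python
import errno
import os
import tempfile

mydir = os.path.join(tempfile.mkdtemp(), "mydir")  # placeholder path

try:
    os.makedirs(mydir)
except OSError as e:
    if e.errno != errno.EEXIST:   # errno.EEXIST == 17
        raise

# A *file* named mydir would also trigger EEXIST, so verify it is a directory:
if not os.path.isdir(mydir):
    raise RuntimeError("%s exists but is not a directory" % mydir)
```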
| 15,466
|
63,783,587
|
My goal is to install a package to a specific directory on my machine so I can package it up to be used with AWS Lambda.
Here is what I have tried:
`pip install snowflake-connector-python -t .`
`pip install --system --target=C:\Users\path2folder --install-option=--install-scripts=C:\Users\path2folder --upgrade snowflake-connector-python`
Both of these options have returned the following error message:
`ERROR: Can not combine '--user' and '--target'`
In order for the AWS Lambda function to work, I need to have my dependencies installed in a specific directory to create a .zip file for deployment. I have searched through Google and StackOverflow, but have not seen a thread that has answered this issue.
Update: This does not seem to be a problem on Mac. The issue described is on Windows 10.
|
2020/09/07
|
[
"https://Stackoverflow.com/questions/63783587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12655086/"
] |
We encountered the same issue when running `pip install --target ./py_pkg -r requirements.txt --upgrade` with Microsoft store version of Python 3.9.
Adding `--no-user` to the end of it seems to solve the issue. Maybe you can try that in your command and let us know if this solution works?
`pip install --target ./py_pkg -r requirements.txt --upgrade --no-user`
|
We had the same issue in a Python course: the error comes up if Python was installed as an app from the Microsoft Store. In our case it was resolved after re-installing Python using the installation package downloaded directly from the Python website.
| 15,475
|
56,578,199
|
I am trying to save the output of an AWS CLI command into a Python variable (a list). The problem is that the following code prints the result I want but doesn't save it into the variable; `os.system` only gives back the exit code.
```
import os
bashCommand = 'aws s3api list-buckets --query "Buckets[].Name"'
f = [os.system(bashCommand)]
print(f)
```
output:
```
[
"bucket1",
"bucket2",
"bucket3"
]
[0]
```
desired output:
```
[
"bucket1",
"bucket2",
"bucket3"
]
("bucket1", "bucket2", "bucket3")
```
|
2019/06/13
|
[
"https://Stackoverflow.com/questions/56578199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10941134/"
] |
If you are using Python and you wish to list buckets, then it would be better to use the AWS SDK for Python, which is `boto3`:
```
import boto3
s3 = boto3.resource('s3')
buckets = [bucket.name for bucket in s3.buckets.all()]
```
See: [S3 — Boto 3 Docs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html)
This is much more sensible than calling out to the AWS CLI (which itself uses boto!).
|
I use this command to create a Python list of all buckets:
```
bucket_list = eval(subprocess.check_output('aws s3api list-buckets --query "Buckets[].Name"').translate(None, '\r\n '))
```
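Since the CLI emits JSON, `json.loads` parses it without resorting to `eval`; a hedged sketch (the `parse_buckets` helper is a made-up name, and the real `subprocess` call is left commented out because it needs AWS credentials):

```python
import json
import subprocess  # used when actually shelling out to the CLI

def parse_buckets(cli_output):
    """Parse the JSON list printed by `aws s3api list-buckets --query "Buckets[].Name"`."""
    return json.loads(cli_output)

# With credentials configured, the output would be captured like this:
# raw = subprocess.check_output(
#     ["aws", "s3api", "list-buckets", "--query", "Buckets[].Name"]
# )
# bucket_list = parse_buckets(raw)

# Demonstration with a captured sample of the CLI's output:
sample = '[\n    "bucket1",\n    "bucket2",\n    "bucket3"\n]'
bucket_list = parse_buckets(sample)
print(bucket_list)  # → ['bucket1', 'bucket2', 'bucket3']
```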
| 15,478
|
63,993,912
|
python version 3.8.3
```
import telegram  # import the methods we need
from telegram.ext import Updater, CommandHandler
import requests
from telegram import ReplyKeyboardMarkup, KeyboardButton
from telegram.ext.messagehandler import MessageHandler
import json
# the function below defines the buttons to be returned
def start(bot, update):
button1 = KeyboardButton("hello")
button2 = KeyboardButton("by")
keyboard = [button1, button2]
reply_markup = telegram.ReplyKeyboardMarkup(keyboard)
chat_id = update.message.chat_id
    bot.send_message(chat_id=chat_id, text='please choose USD or EUR', reply_markup = reply_markup) # it works and returns text if the reply_markup parameter is disabled.
def main():
updater = Updater('my token')
dp = updater.dispatcher
dp.add_handler(CommandHandler('start', start))
updater.start_polling()
updater.idle()
main()
```
The buttons are not working. Please help me figure out the reason for this issue.
|
2020/09/21
|
[
"https://Stackoverflow.com/questions/63993912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14315167/"
] |
After a great effort, I have found a solution for this.
```
class LinearProgressWithTextWidget extends StatelessWidget {
final Color color;
final double progress;
LinearProgressWithTextWidget({Key key,@required this.color, @required this.progress}) : super(key: key);
@override
Widget build(BuildContext context) {
double totalWidth = ((MediaQuery.of(context).size.width/2)-padding);
return Container(
child: Column(
children: [
Transform.translate(
offset: Offset((totalWidth * 2 * progress) - totalWidth, -5),
child: Container(
padding: EdgeInsets.only(left: 4, right: 4, top: 4, bottom: 4),
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(2.0),
color: color,
),
child: Text(
"${progress * 100}%",
style: TextStyle(
fontSize: 12.0,
fontFamily: 'Kanit-Medium',
color: Colors.white,
height: 0.8
),
),
),
),
LinearPercentIndicator(
padding: EdgeInsets.zero,
lineHeight: 15,
backgroundColor: HexColor("#F8F8F8"),
percent: progress,
progressColor: color,
),
],
)
);
}
}
```
|
I added the loading indicator inside of a stack and wrapped the whole widget with a `LayoutBuilder`, which will give you the BoxConstraints of the current widget. You can use that to calculate the position of the percent indicator and place a widget (text) above it.
[](https://i.stack.imgur.com/lUP37.png)
```dart
class MyProgressIndicator extends StatelessWidget {
const MyProgressIndicator({
Key key,
}) : super(key: key);
@override
Widget build(BuildContext context) {
double percent = .5;
return LayoutBuilder(
builder: (context, constraints) {
return Container(
child: Stack(
fit: StackFit.expand,
overflow: Overflow.visible,
children: [
Positioned(
top: 0, // you can adjust this through negatives to raise your child widget
left: (constraints.maxWidth * percent) - (50 / 2), // child width / 2 (this is to get the center of the widget),
child: Center(
child: Container(
width: 50,
alignment: Alignment.topCenter,
child: Text('${percent * 100}%'),
),
),
),
Positioned(
top: 0,
right: 0,
left: 0,
bottom: 0,
child: LinearPercentIndicator(
padding: EdgeInsets.zero,
lineHeight: 15,
width: constraints.maxWidth,
backgroundColor: Colors.black,
percent: percent,
progressColor: Colors.yellow,
),
),
],
),
);
},
);
}
}
```
| 15,481
|
40,840,480
|
I would like to install my own package in my local dir (I do not have the root privilege). How to install python3-dev locally?
I am using ubuntu 16.04.
|
2016/11/28
|
[
"https://Stackoverflow.com/questions/40840480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5651936/"
] |
You have a widget for this:
```
{{ form_errors(form) }}
```
|
Accessing errors from **TWIG**
Displays all errors in template
```
{{ form_errors(form) }}
```
Access error for specific field
```
{{ form_errors(form.username) }}
```
Read More: [How to get error message of each field from form object in symfony2?](https://stackoverflow.com/a/40712685/2689199)
| 15,482
|
48,965,221
|
I have a dataset like the following:
```
customer products Sales
1 a 10
1 a 10
2 b 20
3 c 30
```
How can I reshape it like this in Python and pandas? I've tried the pivot functions, but since I have duplicated customer IDs it's not working...
```
Products
customerID a b c
1 10
1 10
2 20
3 30
{' update': {209: 'Originator', 211: 'Originator', 212: 'Originator',
             213: 'Originator', 214: 'Originator'},
 'CUSTOMER ID': {209: 1000368, 211: 1000368, 212: 1000968,
                 213: 1000968, 214: 1000968},
 'NET SALES VALUE SANOFI': {209: 426881.0, 211: 332103.0, 212: 882666.0,
                            213: 882666.0, 214: 294222.0},
 'PRODUCT FAMILY': {209: 'APROVEL', 211: 'APROVEL', 212: 'APROVEL',
                    213: 'APROVEL', 214: 'APROVEL'},
 'CHANNEL DEFINITION': {209: 'PHARMACY', 211: 'PHARMACY', 212: 'PHARMACY',
                        213: 'PHARMACY', 214: 'PHARMACY'},
 'index': {209: 209, 211: 211, 212: 212, 213: 213, 214: 214}}

CUSTOMER ID             1228675 non-null int64
DISTRIBUTOR ID          1228675 non-null float64
PRODUCT FAMILY          1228675 non-null object
GROSS SALES QUANTITY    1228675 non-null int64
GROSS SALES VALUE       1228675 non-null int64
NET SALES VALUE         1228675 non-null int64
DISCOUNT VALUES         1228675 non-null int64
CHANNEL DEFINITION      1228675 non-null object
```
What I also tried: `ONLY_PHARMA.pivot_table(values = "NET SALES VALUE ", index = ["CUSTOMER ID"], columns = "PRODUCT FAMILY").reset_index()`
What I'm getting now is a mix of floats and ints... why?
```
ID A B C
1000167 NaN 2.380122e+05 244767.466667
```
or I'm getting:
```
ValueError: negative dimensions are not allowed
```
OR I've tried the following, which also returns a mix of floats and ints:
```
pvt = pd.pivot_table(ONLY_PHARMA.reset_index(), index=['CUSTOMER ID'],
columns='PRODUCT FAMILY', values='NET SALES VALUE' , fill_value='') \
.reset_index()
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48965221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9406236/"
] |
You can use [`cumcount`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html) with [`set_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html) + [`unstack`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html) for reshape:
```
g = df.groupby(['customer', 'products']).cumcount()
df = (
df.set_index([g, 'customer', 'products'])['Sales']
.unstack().sort_index(level=1)
.reset_index(level=0, drop=True)
)
print (df)
products a b c
customer
1 10.0 NaN NaN
1 10.0 NaN NaN
2 NaN 20.0 NaN
3 NaN NaN 30.0
```
Notice:
If duplicated values, maybe need aggregation, check [how to pivot a dataframe](https://stackoverflow.com/questions/47152691/how-to-pivot-a-dataframe)
|
Your question is unclear. In case of duplicate keys, we usually aggregate values. Is that what you want? Try this:
```
df.pivot_table(index='customer', columns='products', values ='Sales', aggfunc='sum')
products     a     b     c
customer
1         20.0   NaN   NaN
2          NaN  20.0   NaN
3          NaN   NaN  30.0
```
| 15,483
|
26,409,964
|
The [pickle documentation](https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled) states that "when class instances are pickled, their class’s data are not pickled along with them. Only the instance data are pickled." Can anyone provide a recipe for including class variables as well as instance variables when pickling and unpickling?
|
2014/10/16
|
[
"https://Stackoverflow.com/questions/26409964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4147103/"
] |
Use `dill` instead of pickle, and code exactly as you probably have already.
```
>>> class A(object):
... y = 1
... x = 0
... def __call__(self, x):
... self.x = x
... return self.x + self.y
...
>>> b = A()
>>> b.y = 4
>>> b(2)
6
>>> b.z = 5
>>> import dill
>>> _b = dill.dumps(b)
>>> b_ = dill.loads(_b)
>>>
>>> b_.z
5
>>> b_.x
2
>>> b_.y
4
>>>
>>> A.y = 100
>>> c = A()
>>> _c = dill.dumps(c)
>>> c_ = dill.loads(_c)
>>> c_.y
100
```
|
You can do this easily using the standard library functions by using `__getstate__` and `__setstate__`:
```
class A(object):
y = 1
x = 0
def __getstate__(self):
ret = self.__dict__.copy()
ret['cls_x'] = A.x
ret['cls_y'] = A.y
return ret
def __setstate__(self, state):
A.x = state.pop('cls_x')
A.y = state.pop('cls_y')
self.__dict__.update(state)
```
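A quick round-trip demonstrating the effect; the class is repeated so the snippet runs standalone, and the attribute values are arbitrary:

```python
import pickle

class A(object):
    y = 1
    x = 0
    def __getstate__(self):
        ret = self.__dict__.copy()
        ret['cls_x'] = A.x
        ret['cls_y'] = A.y
        return ret
    def __setstate__(self, state):
        A.x = state.pop('cls_x')
        A.y = state.pop('cls_y')
        self.__dict__.update(state)

a = A()
A.y = 42                      # change a class attribute after creating the instance
data = pickle.dumps(a)        # __getstate__ captures A.x and A.y

A.y = 1                       # simulate a fresh process with the default value
restored = pickle.loads(data) # __setstate__ writes the saved values back
print(A.y)                    # → 42
```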
| 15,488
|
25,384,922
|
I've installed some packages during the execution of my script as a user. Those packages were the first user packages, so python didn't add `~/.local/lib/python2.7/site-packages` to the `sys.path` before script run. I want to import those installed packages. But I cannot because they are not in `sys.path`.
How can I refresh `sys.path`?
I'm using python 2.7.
|
2014/08/19
|
[
"https://Stackoverflow.com/questions/25384922",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2108548/"
] |
As explained in [What sets up sys.path with Python, and when?](https://stackoverflow.com/questions/4271494/what-sets-up-sys-path-with-python-and-when) `sys.path` is populated with the help of builtin `site.py` module.
So you just need to reload it. You cannot do it in one step because you don't have `site` in your namespace yet. To sum up:
```
import site
from importlib import reload
reload(site)
```
That's it. (On Python 2, `reload` is a builtin, so the `importlib` import is unnecessary.)
|
It might be better to add it directly to your `sys.path` with:
```
import sys
sys.path.append("/your/new/path")
```
Or, if it needs to be found first:
```
import sys
sys.path.insert(1, "/your/new/path")
```
| 15,490
|
18,621,624
|
[I'm taking an intro to python class online](http://cscircles.cemc.uwaterloo.ca/8-remix/) and the site is designed to auto-enter input() data into the program that you write to resolve various python logic problems.
[Please see this page to see how the online class's tool uses input entries](http://cscircles.cemc.uwaterloo.ca/visualize/#code=lis%20%3D%20%5B%27Text%27%2C%20%27in%27%2C%20%27the%27%2C%20%27middle!%27%2C%20%27END%27%5D%0Awidth%20%3D%2013%0Afor%20s1%20in%20lis%3A%0A%20%20%20%20L%20%3D%20len%28s1%29%0A%20%20%20%20periods_rtside%20%3D%20%28width%20-%20L%29%2F%2F2%0A%20%20%20%20periods_leftside%20%3D%20width%20-%20periods_rtside%20-%20L%0A%20%20%20%20periods_rt_str%20%3D%20%27.%27%20%2a%20periods_rtside%0A%20%20%20%20periods_left_str%20%3D%20%27.%27%20%2a%20periods_leftside%0A%20%20%20%20line1%20%3D%20periods_left_str%20%2B%20s1%20%2B%20periods_rt_str%0A%20%20%20%20if%20s1%20%3D%3D%20%27END%27%3A%0A%20%20%20%20%20%20%20%20%20break%0A%20%20%20%20print%28line1%29%0A%20%20%20%20)
For example the class randomly inputs the following:
```
30
centered
text
is
great
testing
is
great
for
python!
END
```
Obviously, I would have to convert the 30 to an int. How do I convert the rest into a usable list or array?
```
width = int(input())
lis = ['centered', 'text', 'is', 'great', 'END']
```
|
2013/09/04
|
[
"https://Stackoverflow.com/questions/18621624",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2609312/"
] |
It sounds like you want to call `input` in a loop. Here's one way to do it:
```
lst = []
s = input()
while s != 'END':
lst.append(s)
s = input()
```
There are other options for how to set up the condition on the loop, but I think this is the most straightforward. If the calculation for when to stop looping were more complicated, an alternative design might be to make the loop unconditional (with `while True`) and then `break` when the right conditions are met.
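For completeness, here is a sketch of that `while True` / `break` variant; the `get_line` parameter is a hypothetical hook that lets the snippet be exercised without a real terminal (it defaults to the builtin `input`):

```python
def read_until_end(get_line=input):
    """Collect lines until the sentinel 'END' is seen."""
    lst = []
    while True:
        s = get_line()
        if s == 'END':
            break
        lst.append(s)
    return lst

# Simulating the grader's input stream:
lines = iter(['centered', 'text', 'is', 'great', 'END'])
result = read_until_end(lambda: next(lines))
print(result)  # → ['centered', 'text', 'is', 'great']
```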
|
You can create a list and read each string one by one, and add it to the list:
```
width=int(input())
lis=[]
tmp=''
while tmp!='END':
tmp=input() #receives a string, in python 3.0+
lis.append(tmp)
```
| 15,491
|
65,643,645
|
I'm pretty new to python and to programming in general. I'm trying to make the game Bounce. The game runs as expected but as soon as I close the window, it shows an error.
This is the code:
```
from tkinter import *
import random
import time
# Creating the window:
window = Tk()
window.title("Bounce")
window.geometry('600x600')
window.resizable(False, False)
# Creating the canvas containing the game:
canvas = Canvas(window, width = 450, height = 450, bg = "black")
canvas.pack(padx = 50, pady= 50)
score = canvas.create_text(10, 20, fill = "white")
window.update()
# Creating the ball:
class Ball:
def __init__(self, canvas1, paddle1, color):
self.canvas = canvas1
self.paddle = paddle1
self.id = canvas1.create_oval(10, 10, 25, 25, fill = color) # The starting point of the ball
self.canvas.move(self.id, 190, 160)
starting_direction = [-3, -2, -1, 0, 1, 2, 3]
random.shuffle(starting_direction)
self.x = starting_direction[0]
self.y = -3
self.canvas_height = self.canvas.winfo_height()
self.canvas_width = self.canvas.winfo_width()
# Detecting the collision between the ball and the paddle:
def hit_paddle(self, ballcoords):
paddle_pos = self.canvas.coords(self.paddle.id)
if ballcoords[0] <= paddle_pos[2] and ballcoords[2] >= paddle_pos[0]:
if paddle_pos[3] >= ballcoords[3] >= paddle_pos[1]:
return True
return False
# Detecting the collision between the the ball and the canvas sides:
def draw(self):
self.canvas.move(self.id, self.x, self.y)
ballcoords = self.canvas.coords(self.id)
if ballcoords[1] <= 0:
self.y = 3
if ballcoords[3] >= self.canvas_height:
self.y = 0
self.x = 0
self.canvas.create_text(225, 150, text = "Game Over!", font = ("Arial", 16), fill = "white")
if ballcoords[0] <= 0:
self.x = 3
if ballcoords[2] >= self.canvas_width:
self.x = -3
if self.hit_paddle(ballcoords):
self.y = -3
class Paddle:
def __init__(self, canvas1, color):
self.canvas1 = canvas
self.id = canvas.create_rectangle(0, 0, 100, 10, fill = color)
self.canvas1.move(self.id, 180, 350)
self.x = 0
self.y = 0
self.canvas1_width = canvas1.winfo_width()
self.canvas1.bind_all("<Left>", self.left)
self.canvas1.bind_all("<Right>", self.right)
def draw(self):
self.canvas1.move(self.id, self.x, 0)
paddlecoords = self.canvas1.coords(self.id)
if paddlecoords[0] <= 0:
self.x = 0
if paddlecoords[2] >= self.canvas1_width:
self.x = 0
def right(self, event):
self.x = 3
def left(self, event):
self.x = -3
paddle = Paddle(canvas, color = "white")
ball = Ball(canvas, paddle, color = "red")
while True:
ball.draw()
paddle.draw()
window.update_idletasks()
window.update()
time.sleep(0.001)
```
This is the error:
```
Traceback (most recent call last):
File "D:\CSCI201\Arcade Games Project\Bounce\Bounce_Game.py", line 111, in <module>
ball.draw()
File "D:\CSCI201\Arcade Games Project\Bounce\Bounce_Game.py", line 64, in draw
self.canvas.move(self.id, self.x, self.y)
File "C:\Users\M.Youssry\AppData\Local\Programs\Python\Python39\lib\tkinter\__init__.py", line 2916, in move
self.tk.call((self._w, 'move') + args)
_tkinter.TclError: invalid command name ".!canvas"
```
I've tried inserting .mainloop() as suggested to another user having the same problem but it hasn't worked for me.
|
2021/01/09
|
[
"https://Stackoverflow.com/questions/65643645",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14915849/"
] |
It is caused by the close button in the top-right corner of the window, which is the only way you have to stop the script. After you click the close button, the window is destroyed, so no widgets, like the canvas, exist anymore.
You can set a flag that tells the while loop to stop, and flip it in the handler for the window-close event.
```py
window.protocol("WM_DELETE_WINDOW", handler)
```
With this in place, you can exit the script at any time by clicking the window's close button.
```py
from tkinter import *
import random
import time
# Creating the window:
window = Tk()
window.title("Bounce")
window.geometry('600x600')
window.resizable(False, False)
# Creating the canvas containing the game:
canvas = Canvas(window, width = 450, height = 450, bg = "black")
canvas.pack(padx = 50, pady= 50)
score = canvas.create_text(10, 20, fill = "white")
window.update()
# Creating the ball:
class Ball:
def __init__(self, canvas1, paddle1, color):
self.canvas = canvas1
self.paddle = paddle1
self.id = canvas1.create_oval(10, 10, 25, 25, fill = color) # The starting point of the ball
self.canvas.move(self.id, 190, 160)
starting_direction = [-3, -2, -1, 0, 1, 2, 3]
random.shuffle(starting_direction)
self.x = starting_direction[0]
self.y = -3
self.canvas_height = self.canvas.winfo_height()
self.canvas_width = self.canvas.winfo_width()
# Detecting the collision between the ball and the paddle:
def hit_paddle(self, ballcoords):
paddle_pos = self.canvas.coords(self.paddle.id)
if ballcoords[0] <= paddle_pos[2] and ballcoords[2] >= paddle_pos[0]:
if paddle_pos[3] >= ballcoords[3] >= paddle_pos[1]:
return True
return False
# Detecting the collision between the the ball and the canvas sides:
def draw(self):
self.canvas.move(self.id, self.x, self.y)
ballcoords = self.canvas.coords(self.id)
if ballcoords[1] <= 0:
self.y = 3
if ballcoords[3] >= self.canvas_height:
self.y = 0
self.x = 0
self.canvas.create_text(225, 150, text = "Game Over!", font = ("Arial", 16), fill = "white")
if ballcoords[0] <= 0:
self.x = 3
if ballcoords[2] >= self.canvas_width:
self.x = -3
if self.hit_paddle(ballcoords):
self.y = -3
class Paddle:
def __init__(self, canvas1, color):
self.canvas1 = canvas
self.id = canvas.create_rectangle(0, 0, 100, 10, fill = color)
self.canvas1.move(self.id, 180, 350)
self.x = 0
self.y = 0
self.canvas1_width = canvas1.winfo_width()
self.canvas1.bind_all("<Left>", self.left)
self.canvas1.bind_all("<Right>", self.right)
def draw(self):
self.canvas1.move(self.id, self.x, 0)
paddlecoords = self.canvas1.coords(self.id)
if paddlecoords[0] <= 0:
self.x = 0
if paddlecoords[2] >= self.canvas1_width:
self.x = 0
def right(self, event):
self.x = 3
def left(self, event):
self.x = -3
paddle = Paddle(canvas, color = "white")
ball = Ball(canvas, paddle, color = "red")
# New code after here
def handler():
global run
run = False
window.protocol("WM_DELETE_WINDOW", handler)
run = True
while run:
# New code before here
ball.draw()
paddle.draw()
window.update_idletasks()
window.update()
time.sleep(0.01)
window.destroy() # should always destroy window before exit
```
|
I had this problem and solved it by restarting my IPython console (Spyder).
| 15,493
|