| qid (int64, 46k-74.7M) | question (string, length 54-37.8k) | date (string, length 10) | metadata (list, length 3) | response_j (string, length 29-22k) | response_k (string, length 26-13.4k) | __index_level_0__ (int64, 0-17.8k) |
|---|---|---|---|---|---|---|
57,712,218
|
I am trying to use `requests` to fetch a file from a URL. It works fine locally, but it doesn't work with nameko.
I tried three libraries on Python 3.7, and all of them raise the same error:
`import urllib.request, urllib3, requests`
It works fine locally like this:
```py
import requests
url = "https://www.python.org/static/img/python-logo.png"
r = requests.get(url)
print(r.content)
```
But it fails under nameko:
```py
import requests
from nameko.web.handlers import http

@http("POST", "/import")
def testurl(self, request):
    url = "https://www.python.org/static/img/python-logo.png"
    r = requests.get(url)
    print(r.content)
```
```py
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/nameko/rpc.py", line 373, in __call__
    return reply.result()
  File "/usr/local/lib/python3.7/site-packages/nameko/rpc.py", line 331, in result
    raise deserialize(error)
nameko.exceptions.RemoteError: Exception Error on testurl: Cause : wrap_socket() got an unexpected keyword argument '_context'
```
|
2019/08/29
|
[
"https://Stackoverflow.com/questions/57712218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11994979/"
] |
It is an eventlet bug. If possible, downgrade to Python 3.6.
<https://github.com/eventlet/eventlet/issues/526>
Nameko has a PR for this issue, which is on hold until the above is fixed.
<https://github.com/nameko/nameko/pull/644>
|
I hit the same error with Python 3.7, eventlet 0.25.2, and requests 2.24.0.
It works fine with requests 2.23.0.
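Until the upstream fix lands, one workaround consistent with the report above is pinning the known-good requests release in your requirements file (version number taken from the comment above):

```
requests==2.23.0
```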
| 6,944
|
6,918,719
|
Whenever I try to create a table using python and sqlite3, it gives me the following error:
```
Traceback (most recent call last):
  File "directory.py", line 14, in <module>
    'Children' TEXT, 'Other' TEXT, 'Masul' TEXT);''')
sqlite3.OperationalError: near ")": syntax error
```
The way I'm trying to create the table is:
```
conn.execute('''create table Jamaat
               (id integer primary key,
                Email TEXT,
                LastName TEXT,
                Address1 TEXT,
                Apt TEXT,
                Address2 TEXT,
                City TEXT,
                State TEXT,
                Zip TEXT,
                HomePhone TEXT,
                FaxNumber TEXT,
                Primary TEXT,
                Spouse TEXT,
                Children TEXT,
                Other TEXT,
                Masul TEXT);''')
conn.commit()
```
I'm using Python 2.7 and trying to import a CSV spreadsheet into sqlite3.
Thanks in advance.
EDIT: I've tried the code without the trailing comma and it still doesn't work...
|
2011/08/02
|
[
"https://Stackoverflow.com/questions/6918719",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/764746/"
] |
Often when you get this kind of error, it is because you are using a keyword as a column (or table) name.
I see that you have a column called `primary`.
You will want to put backticks around it or rename it because [it is a keyword in SQLite](http://www.sqlite.org/lang_keywords.html); e.g.:
```
...
`Primary` TEXT,
...
```
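A minimal sketch with Python's built-in `sqlite3` (column list shortened for brevity) showing that quoting the keyword column lets the statement run; SQLite also accepts standard SQL double quotes for this, not only backticks:

```py
import sqlite3

conn = sqlite3.connect(":memory:")
# quoting "Primary" makes SQLite treat it as an identifier, not the keyword
conn.execute('''CREATE TABLE Jamaat
                   (id INTEGER PRIMARY KEY,
                    "Primary" TEXT,
                    Spouse TEXT)''')
conn.execute('INSERT INTO Jamaat ("Primary", Spouse) VALUES (?, ?)',
             ("Alice", "Bob"))
print(conn.execute('SELECT "Primary" FROM Jamaat').fetchone()[0])  # Alice
```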
|
You have a trailing "," before the closing parenthesis.
| 6,945
|
2,811,822
|
In other languages (Ruby, Python, ...) I can use `zip(list1, list2)`, which works like this:
If `list1` is `{1,2,3,4}` and `list2` is `{a,b,c}`,
then `zip(list1, list2)` would return: `{(1,a), (2,b), (3,c), (4,null)}`
Is such a method available in .NET's LINQ extensions?
|
2010/05/11
|
[
"https://Stackoverflow.com/questions/2811822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11545/"
] |
.NET 4 gives us a [`Zip`](http://msdn.microsoft.com/en-us/library/dd267698.aspx) method but it is not available in .NET 3.5. If you are curious, [Eric Lippert provides an implementation of `Zip`](http://blogs.msdn.com/ericlippert/archive/2009/05/07/zip-me-up.aspx) that you may find useful.
|
Neither implementation will fill in the missing values (or check that the lengths are the same), as the question asked.
Here is an implementation that can:
```
public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(
    this IEnumerable<TFirst> first, IEnumerable<TSecond> second,
    Func<TFirst, TSecond, TResult> selector,
    bool checkLengths = true, bool fillMissing = false)
{
    if (first == null) { throw new ArgumentNullException("first"); }
    if (second == null) { throw new ArgumentNullException("second"); }
    if (selector == null) { throw new ArgumentNullException("selector"); }

    using (IEnumerator<TFirst> e1 = first.GetEnumerator())
    using (IEnumerator<TSecond> e2 = second.GetEnumerator())
    {
        while (true)
        {
            bool more1 = e1.MoveNext();
            bool more2 = e2.MoveNext();
            if (!more1 || !more2) // one sequence finished
            {
                if (checkLengths && !fillMissing && (more1 || more2))
                {
                    // checking lengths, not filling in missing values, and one sequence is not finished
                    throw new Exception("Enumerables have different lengths (" + (more1 ? "first" : "second") + " is longer)");
                }
                // fill in the leftover elements with default(Tx) if asked to;
                // note the do/while so the element already fetched by MoveNext is not skipped
                if (fillMissing)
                {
                    if (more1)
                    {
                        do { yield return selector(e1.Current, default(TSecond)); } while (e1.MoveNext());
                    }
                    else
                    {
                        do { yield return selector(default(TFirst), e2.Current); } while (e2.MoveNext());
                    }
                }
                yield break;
            }
            yield return selector(e1.Current, e2.Current);
        }
    }
}
```
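Incidentally, the fill-in behaviour the question describes matches Python's `itertools.zip_longest` rather than plain `zip`, which this C# version mirrors with `fillMissing = true`:

```py
from itertools import zip_longest

list1 = [1, 2, 3, 4]
list2 = ['a', 'b', 'c']

# zip stops at the shorter sequence; zip_longest pads with a fill value
print(list(zip(list1, list2)))          # [(1, 'a'), (2, 'b'), (3, 'c')]
print(list(zip_longest(list1, list2)))  # [(1, 'a'), (2, 'b'), (3, 'c'), (4, None)]
```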
| 6,947
|
31,823,262
|
I just started Python a few days ago and have been working on a calculator (not extremely basic, but also not advanced). The problem doesn't prevent the code from running or anything; it is just a visual thing.
Output in the console looks like this (the text in parentheses explains what is happening and is not actually part of the output):
```
4 (user prompted for first number, press enter afterwards)
+ (user prompted for an operator, press enter afterwards)
5 (user prompted for second number, press enter afterwards)
9.00000 (answer is printed)
Process finished with exit code 0
```
Basically what I want it to look like is this when I'm entering it into the console:
```
4+5
9.00000
```
I don't want it to start a new line after I enter a number or operator; it looks more like an actual calculator when everything prints along one line. Is this possible, and if so, how? By the way, I know `end=""` works with `print` but not with `input`, since `input` doesn't take that argument. Also, I know the whole calculator thing is kind of redundant considering you can do calculations really easily in the Python IDLE, but I thought it was a good way for me to learn. Here is the entire code if you need it:
```
import math

while True:
    try:
        firstNumber = float(input())
        break
    except ValueError:
        print("Please enter a number... ", end="")

while True:
    operators = ['+', '-', '*', '/', '!', '^']
    userOperator = str(input())
    if userOperator in operators:
        break
    else:
        print("Enter a valid operator... ", end="")

if userOperator == operators[4]:
    answer = math.factorial(firstNumber)
    print(answer)
    pause = input()
    raise SystemExit

while True:
    try:
        secondNumber = float(input())
        break
    except ValueError:
        print("Please enter a number... ", end="")

if userOperator == operators[0]:
    answer = firstNumber + secondNumber
    print('%.5f' % round(answer, 5))
elif userOperator == operators[1]:
    answer = firstNumber - secondNumber
    print('%.5f' % round(answer, 5))
elif userOperator == operators[2]:
    answer = firstNumber * secondNumber
    print('%.5f' % round(answer, 5))
elif userOperator == operators[3]:
    answer = firstNumber / secondNumber
    print('%.5f' % round(answer, 5))
elif userOperator == operators[5]:
    answer = firstNumber ** secondNumber
    print('%.5f' % round(answer, 5))

pause = input()
raise SystemExit
```
|
2015/08/05
|
[
"https://Stackoverflow.com/questions/31823262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5166610/"
] |
Your problem is that you're asking for `input()` without specifying a prompt. So if you take a look at the first one, `firstNumber = float(input())`, it executes properly, but only after you hit `enter` and it raises an error do you finally print what you want.
Try replacing with these:
```
...
try:
    firstNumber = float(input("Please enter a number... "))
...
userOperator = str(input("Enter a valid operator... "))
...
secondNumber = float(input("Please enter a number... "))
```
Is that what you're looking for?
Using the method I suggested:
```
Please enter a number... 5
Enter a valid operator... +
Please enter a number... 6
11.00000
```
Using your method:
```
5
+
6
11.00000
```
Here the prompts only appear after invalid input and every entry lands on its own line, with the extra newlines, which is what I'm assuming you're referring to.
|
That's a [nice exercise](http://alfasin.com/2015/08/05/a-simple-calculator-in-python/), and as I wrote in the comments, I would ignore whitespace, take the expression as a whole from the user, and then parse it and calculate the result. Here's a small demo:
```
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

def calc(expr):
    if is_number(expr):
        return float(expr)
    arr = expr.split('+')
    if len(arr) > 1:
        return sum(map(calc, arr))
    arr = expr.split('-')
    if len(arr) > 1:
        return reduce(lambda x, y: x - y, map(calc, arr))
    arr = expr.split('*')
    if len(arr) > 1:
        return reduce(lambda x, y: x * y, map(calc, arr), 1)
    arr = expr.split('/')
    if len(arr) > 1:
        return reduce(lambda x, y: x / y, map(calc, arr))

print calc("3+4-2 *2/ 2") # 5
```
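The demo above is Python 2 (`print` statement, builtin `reduce`); the same sketch ported to Python 3 needs `reduce` imported from `functools`:

```py
from functools import reduce

def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

def calc(expr):
    # recursively split on operators in order of lowest precedence first
    if is_number(expr):
        return float(expr)
    arr = expr.split('+')
    if len(arr) > 1:
        return sum(map(calc, arr))
    arr = expr.split('-')
    if len(arr) > 1:
        return reduce(lambda x, y: x - y, map(calc, arr))
    arr = expr.split('*')
    if len(arr) > 1:
        return reduce(lambda x, y: x * y, map(calc, arr), 1)
    arr = expr.split('/')
    if len(arr) > 1:
        return reduce(lambda x, y: x / y, map(calc, arr))

print(calc("3+4-2 *2/ 2"))  # 5.0
```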
| 6,948
|
12,632,421
|
I'm trying to write python code to print the [powerset](/tags/powerset/info) of a string, but am running into some bugs. Here's what I've got:
```
def getperm(string):
    perm = []
    if len(string) == 0:
        perm.append("")
        return perm
    #if len(string) == 1:
    #    perm.append(string)
    #    perm.append("")
    first = string[0]
    print "first = " + str(first)
    rem = string[1:len(string)]
    print "rem = " + str(rem)
    words = getperm(rem)
    for word in words:
        for i in range(len(word)):
            temp = string[0:i] + first + string[i:len(string)]
            print "temp = " + str(temp)
            perm.append(temp)
    return perm

if __name__ == "__main__":
    a = "ab"
    mag = getperm(a)
    print mag
```
My expected output would be:
```
['', 'a', 'b', 'ab']
```
My actual output is:
```
[]
```
Can anyone help me figure out what's going on? Is this some nuance of Python, or is there a bug in my code? I think my code should be ok -- I'm going off the fifth edition of Cracking the Coding Interview.
Thank you!
|
2012/09/28
|
[
"https://Stackoverflow.com/questions/12632421",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1390268/"
] |
You're overthinking it.
This part is trying to do too much:
```
for word in words:
    for i in range(len(word)):
        temp = string[0:i] + first + string[i:len(string)]
        print "temp = " + str(temp)
        perm.append(temp)
```
See how simple it really can be:
```
def get_powerset(string):
    perm = []
    if len(string) == 0:
        perm.append("")
        return perm
    #if len(string) == 1:
    #    perm.append(string)
    #    perm.append("")
    first = string[0]
    print "first = " + str(first)
    rem = string[1:len(string)]
    print "rem = " + str(rem)
    words = get_powerset(rem)
    perm.extend(words)
    for word in words:
        perm.append(first + word)
    return perm

if __name__ == "__main__":
    a = "ab"
    mag = get_powerset(a)
    print mag
```
Now you should be able to make the code look a lot nicer with a little refactoring.
|
Here's a refactored iterative solution **without** the `itertools` module:
```
def powerset(s):
    a = ['']
    for i, c in enumerate(s):
        for k in range(2**i):
            a.append(a[k] + c)
    return a
```
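For example, the iterative version produces exactly the subsets the question expects (Python 3 here, unlike the Python 2 code above):

```py
def powerset(s):
    # each new character c doubles the set: every existing subset,
    # plus every existing subset with c appended
    a = ['']
    for i, c in enumerate(s):
        for k in range(2**i):
            a.append(a[k] + c)
    return a

print(powerset("ab"))   # ['', 'a', 'b', 'ab']
print(powerset("abc"))  # ['', 'a', 'b', 'ab', 'c', 'ac', 'bc', 'abc']
```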
| 6,949
|
9,887,319
|
I am running a server with CherryPy and a Python script. Currently, there is a web page containing data for a list of people, which I need to get. The format of the web page is as follows:
```
www.url1.com, firstName_1, lastName_1
www.url2.com, firstName_2, lastName_2
www.url3.com, firstName_3, lastName_3
```
I wish to display the list of names on my own webpage, with each name hyperlinked to its corresponding website.
I have read the webpage into a list with the following method:
```
@cherrypy.expose
def receiveData(self):
    """ Get a list, one per line, of currently known online addresses,
    separated by commas.
    """
    method = "whoonline"
    fptr = urllib2.urlopen("%s/%s" % (masterServer, method))
    data = fptr.readlines()
    fptr.close()
    return data
```
But I don't know how to break the list into a list of lists, splitting where the commas are. Each smaller list should have three elements: URL, first name, and last name. So I was wondering if anyone could help.
Thank you in advance!
|
2012/03/27
|
[
"https://Stackoverflow.com/questions/9887319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1008340/"
] |
You need the `split(',')` method on each string:
```
data = [ line.split(',') for line in fptr.readlines() ]
```
|
```
lists = []
for line in data:
    lists.append([x.strip() for x in line.split(',')])
```
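A self-contained sketch of the same idea, including stripping whitespace and unpacking the three fields into hyperlinks (the sample data stands in for the live page):

```py
data = [
    "www.url1.com, firstName_1, lastName_1\n",
    "www.url2.com, firstName_2, lastName_2\n",
]

# split each line on commas and strip surrounding whitespace/newlines
rows = [[field.strip() for field in line.split(',')] for line in data]

for url, first, last in rows:
    print('<a href="http://{}">{} {}</a>'.format(url, first, last))
```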
| 6,959
|
46,188,797
|
I use a script to parse some sites and get news from them.
Each function in this script parses one site and returns a list of articles, and then I want to combine them all into one big list.
Parsing site by site takes too long, so I decided to use multithreading.
I found a sample like the one at the bottom, but it doesn't seem pythonic to me.
If I add one more function to parse one more site, I will need to add the same block of code each time:
```
qN = Queue()
Thread(target=wrapper, args=(last_news_from_bar, qN)).start()
news_from_N = qN.get()
for new in news_from_N:
    news.append(new)
```
Is there another solution to do this kind of stuff?
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
from queue import Queue
from threading import Thread

def wrapper(func, queue):
    queue.put(func())

def last_news_from_bar():
    ...
    return list_of_articles  # [['title1', 'http://someurl1', '2017-09-13'], ['title2', 'http://someurl2', '2017-09-13']]

def last_news_from_foo():
    ...
    return list_of_articles

q1, q2 = Queue(), Queue()
Thread(target=wrapper, args=(last_news_from_bar, q1)).start()
Thread(target=wrapper, args=(last_news_from_foo, q2)).start()
news_from_bar = q1.get()
news_from_foo = q2.get()

all_news = []
for new in news_from_bar:
    all_news.append(new)
for new in news_from_foo:
    all_news.append(new)
print(all_news)
```
|
2017/09/13
|
[
"https://Stackoverflow.com/questions/46188797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6217484/"
] |
Solution without `Queue`:
```
from threading import Thread, Lock

NEWS = []
LOCK = Lock()

def gather_news(url):
    # news_from is your per-site parser; it returns a list of
    # articles, or something falsy when there is nothing left
    while True:
        news = news_from(url)
        if not news:
            break
        with LOCK:
            NEWS.append(news)

if __name__ == '__main__':
    threads = []
    for url in ['url1', 'url2', 'url3']:
        t = Thread(target=gather_news, args=(url,))
        t.start()
        threads.append(t)
    # wait until all threads are done
    for t in threads:
        t.join()
    print(NEWS)
```
|
All you need to do is use a single queue and extend your result list:
```
q1 = Queue()
Thread(target=wrapper, args=(last_news_from_bar, q1)).start()
Thread(target=wrapper, args=(last_news_from_foo, q1)).start()
all_news = []
all_news.extend(q1.get())
all_news.extend(q1.get())
print(all_news)
```
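Extending that idea, one queue can serve any number of parser functions without repeating the per-site boilerplate; a runnable sketch (the two parser functions are stand-ins for the real ones):

```py
from queue import Queue
from threading import Thread

def wrapper(func, queue):
    queue.put(func())

def last_news_from_bar():
    return [['title1', 'http://someurl1', '2017-09-13']]

def last_news_from_foo():
    return [['title2', 'http://someurl2', '2017-09-13']]

# add new parsers here and nothing else changes
parsers = [last_news_from_bar, last_news_from_foo]

q = Queue()
threads = [Thread(target=wrapper, args=(p, q)) for p in parsers]
for t in threads:
    t.start()

all_news = []
for _ in parsers:          # one q.get() per parser
    all_news.extend(q.get())
print(all_news)
```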
| 6,964
|
1,874,592
|
As PEP 8 suggests keeping your Python program below the 80-column limit, how can I abide by that with long strings? I.e.:
```
s = "this is my really, really, really, really, really, really, really long string that I'd like to shorten."
```
How would I go about breaking this across lines, i.e.:
```
s = "this is my really, really, really, really, really, really" +
"really long string that I'd like to shorten."
```
|
2009/12/09
|
[
"https://Stackoverflow.com/questions/1874592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/154280/"
] |
Since neighboring string constants are automatically concatenated, you can code it like this:
```
s = ("this is my really, really, really, really, really, really, "
"really long string that I'd like to shorten.")
```
Note no plus sign, and I added the extra comma and space to follow the formatting of your example.
Personally I don't like the backslashes, and I recall reading somewhere that their use is actually deprecated in favor of this form, which is more explicit. Remember, "Explicit is better than implicit."
I consider the backslash less clear and less useful because it is actually escaping the newline character. It's not possible to put a line-end comment after it if one should be necessary. It is possible to do this with concatenated string constants:
```
s = ("this is my really, really, really, really, really, really, " # comments ok
"really long string that I'd like to shorten.")
```
---
I used a Google search of "python line length" which returns the PEP8 link as the first result, but also links to another good StackOverflow post on this topic: "[Why should Python PEP-8 specify a maximum line length of 79 characters?](https://stackoverflow.com/questions/88942/why-should-python-pep-8-specify-a-maximum-line-length-of-79-characters)"
Another good search phrase would be "python line continuation".
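Both continuation styles produce the same single string; the parenthesized form is simply easier to extend and comment (a quick check, assuming the strings from the example):

```py
s_parens = ("this is my really, really long string "
            "that I'd like to shorten.")  # comments allowed here

s_backslash = "this is my really, really long string " \
              "that I'd like to shorten."

print(s_parens == s_backslash)  # True
```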
|
I've used textwrap.dedent in the past. It's a little cumbersome, so I prefer line continuations now, but if you really want the block indent, I think this is great.
Example code (the backslash after the opening `"""` keeps the string from starting with a newline):
```
import textwrap as tw
x = """\
    This is yet another test.
    This is only a test"""
print(tw.dedent(x))
```
Explanation:
dedent removes the leading whitespace that is common to all lines of the text. If you wanted to tweak it, you could easily reimplement it using the `re` module.
This method has the limitation that very long lines may still be longer than you want, in which case the other methods that concatenate strings are more suitable.
| 6,965
|
43,540,159
|
I am trying to build an executable out of my .py script using Pyinstaller. The problem is that it builds it using Python 2.7 instead of Python 3.5, so my executable won't even run.
```
cali@californiki-pc ~/Desktop $ pyinstaller --onefile Vocabulary.py
25 INFO: PyInstaller: 3.2.1
25 INFO: Python: 2.7.12
26 INFO: Platform: Linux-4.4.0-72-generic-x86_64-with-LinuxMint-18.1-serena
26 INFO: wrote /home/cali/Desktop/Vocabulary.spec
31 INFO: UPX is not available.
32 INFO: Extending PYTHONPATH with paths
['/home/cali/Desktop', '/home/cali/Desktop']
32 INFO: checking Analysis
33 INFO: Building Analysis because out00-Analysis.toc is non existent
33 INFO: Initializing module dependency graph...
34 INFO: Initializing module graph hooks...
139 INFO: running Analysis out00-Analysis.toc
160 INFO: Caching module hooks...
164 INFO: Analyzing /home/cali/Desktop/Vocabulary.py
246 INFO: Processing pre-safe import module hook _xmlplus
1991 INFO: Processing pre-find module path hook distutils
2209 INFO: Loading module hooks...
2209 INFO: Loading module hook "hook-distutils.py"...
2210 INFO: Loading module hook "hook-xml.py"...
2959 INFO: Loading module hook "hook-httplib.py"...
2960 INFO: Loading module hook "hook-encodings.py"...
3427 INFO: Looking for ctypes DLLs
3428 INFO: Analyzing run-time hooks ...
3435 INFO: Looking for dynamic libraries
3663 INFO: Looking for eggs
3663 INFO: Python library not in binary depedencies. Doing additional searching...
3707 INFO: Using Python library /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0
3710 INFO: Warnings written to /home/cali/Desktop/build/Vocabulary/warnVocabulary.txt
3768 INFO: checking PYZ
3768 INFO: Building PYZ because out00-PYZ.toc is non existent
3768 INFO: Building PYZ (ZlibArchive) /home/cali/Desktop/build/Vocabulary/out00-PYZ.pyz
4122 INFO: Building PYZ (ZlibArchive) /home/cali/Desktop/build/Vocabulary/out00-PYZ.pyz completed successfully.
4172 INFO: checking PKG
4172 INFO: Building PKG because out00-PKG.toc is non existent
4172 INFO: Building PKG (CArchive) out00-PKG.pkg
7322 INFO: Building PKG (CArchive) out00-PKG.pkg completed successfully.
7336 INFO: Bootloader /usr/local/lib/python2.7/dist-packages/PyInstaller/bootloader/Linux-64bit/run
7336 INFO: checking EXE
7336 INFO: Building EXE because out00-EXE.toc is non existent
7336 INFO: Building EXE from out00-EXE.toc
7337 INFO: Appending archive to ELF section in EXE /home/cali/Desktop/dist/Vocabulary
7352 INFO: Building EXE from out00-EXE.toc completed successfully.
```
How can I overcome the issue?
EDIT:
I tried to install PyInstaller using `pip3 install pyinstaller` as Claudio suggested, but I am getting:
```
cali@californiki-pc ~/Desktop $ pip3 install pyinstaller
Collecting pyinstaller
Using cached PyInstaller-3.2.1.tar.bz2
Collecting setuptools (from pyinstaller)
Using cached setuptools-35.0.1-py2.py3-none-any.whl
Collecting appdirs>=1.4.0 (from setuptools->pyinstaller)
Using cached appdirs-1.4.3-py2.py3-none-any.whl
Collecting packaging>=16.8 (from setuptools->pyinstaller)
Using cached packaging-16.8-py2.py3-none-any.whl
Collecting six>=1.6.0 (from setuptools->pyinstaller)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting pyparsing (from packaging>=16.8->setuptools->pyinstaller)
Using cached pyparsing-2.2.0-py2.py3-none-any.whl
Building wheels for collected packages: pyinstaller
Running setup.py bdist_wheel for pyinstaller ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-42qpk7iy/pyinstaller/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /tmp/tmpmmn5007ppip-wheel- --python-tag cp35:
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
Failed building wheel for pyinstaller
Running setup.py clean for pyinstaller
Failed to build pyinstaller
Installing collected packages: appdirs, six, pyparsing, packaging, setuptools, pyinstaller
Running setup.py install for pyinstaller ... done
Successfully installed appdirs-1.4.3 packaging-16.8 pyinstaller-3.2.1 pyparsing-2.2.0 setuptools-35.0.1 six-1.10.0
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
|
2017/04/21
|
[
"https://Stackoverflow.com/questions/43540159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
To overcome the problem you face, install PyInstaller using:
>
> pip3 install pyinstaller
>
>
>
Then take care that you run the right one (there will then be two of them in different locations: one in the path of the Python 2.7 modules and one in the path of the Python 3.5 modules).
I just installed PyInstaller for Python 3.5 on my machine:
```
$ pip3 install pyinstaller
Collecting pyinstaller
Collecting setuptools (from pyinstaller)
Using cached setuptools-35.0.1-py2.py3-none-any.whl
Collecting six>=1.6.0 (from setuptools->pyinstaller)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting appdirs>=1.4.0 (from setuptools->pyinstaller)
Using cached appdirs-1.4.3-py2.py3-none-any.whl
Collecting packaging>=16.8 (from setuptools->pyinstaller)
Using cached packaging-16.8-py2.py3-none-any.whl
Collecting pyparsing (from packaging>=16.8->setuptools->pyinstaller)
Using cached pyparsing-2.2.0-py2.py3-none-any.whl
Installing collected packages: six, appdirs, pyparsing, packaging, setuptools, pyinstaller
Successfully installed appdirs-1.4.3 packaging-16.8 pyinstaller-3.2.1 pyparsing-2.0.3 setuptools-20.7.0 six-1.10.0
```
It installs without problems ... Hmmm ...
Try:
```
sudo -H pip3 install setuptools --upgrade
```
(see for more [here](https://stackoverflow.com/questions/34819221/why-is-python-setup-py-saying-invalid-command-bdist-wheel-on-travis-ci) - you are not alone with this problem)
|
Just set the `--upx-dir` option in pyinstaller, specifying your path for Python 3.5. It can be the virtual environment too. For instance:
```
pyinstaller --upx-dir="$HOME/virtual-envs/<your-virtual-env>/lib/python3.5/site-packages/" <your-script>.py
```
| 6,975
|
61,250,928
|
```
array = []
total = 0
text = int(input("How many students in your class: "))
print("\n")

while True:
    for x in range(text):
        score = int(input("Input score {} : ".format(x+1)))
        if score <= 0 & score >= 101:
            break
        print(int(input("Invalid score, please re-enter: ")))
        array.append(score)
    print("\n")
    print("Maximum: {}".format(max(array)))
    print("Minimum: {}".format(min(array)))
    print("Average: {}".format(sum(array)/text))
```
I tried to make a Python program to validate the score, but it still has a mistake. I want the program to ask to re-enter the score if I enter a score less than 0, and likewise if I input more than 100. Where is my error?
|
2020/04/16
|
[
"https://Stackoverflow.com/questions/61250928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13111498/"
] |
Change the if statement:
```
array = []
total = 0
text = int(input("How many students in your class: "))
print("\n")

for x in range(text):
    score = int(input("Input score {} : ".format(x+1)))
    while True:
        if 0 <= score <= 100:
            break
        score = int(input("Invalid score, please re-enter: "))
    array.append(score)

print("\n")
print("Maximum: {}".format(max(array)))
print("Minimum: {}".format(min(array)))
print("Average: {}".format(sum(array)/text))
```
---
Here, the `score` can't be less than 0 and greater than 100 at the same time. Since you want to break out when the score is between 0 and 100, we use `0 <= score <= 100` as the `break` condition.
Also, the loops were reversed; with the original order you won't get what you expected.
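Python's chained comparison makes the bounds check a single expression:

```py
def is_valid(score):
    # equivalent to: 0 <= score and score <= 100
    return 0 <= score <= 100

print(is_valid(-1), is_valid(0), is_valid(100), is_valid(101))  # False True True False
```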
|
Try this one:
```
array = []
total = 0
num_of_students = int(input("How many students in your class: "))
print("\n")

for x in range(num_of_students):
    score = int(input("Input score {} : ".format(x + 1)))
    while True:
        if score < 0 or score > 100:
            score = int(input("Invalid score, please re-enter: "))
        else:
            array.append(score)
            break

print("\n")
print("Maximum: {}".format(max(array)))
print("Minimum: {}".format(min(array)))
print("Average: {}".format(sum(array)/num_of_students))
```
| 6,977
|
62,214,293
|
I usually start all my scripts with the shebang line
```
#!/usr/bin/env python
```
However, our production server has Python 2 as the default `python`, while all of our new scripts and programs are being built under Python 3. To help keep people from accidentally running the script with the default Python 2, I am considering switching all my shebangs from now on to this:
```
#!/usr/bin/env python3
```
On our server, `python3` indeed points to Python 3, and our basic scripts run correctly with it. However, I am not clear whether this is specific to our installation, or if `python3` is *always* available when Python 3 is installed.
I know this probably will not help a user who runs `$ python myscript.py` when the default Python is loaded, but it's better than nothing and is clear enough to let a user who inspects the script realize they are using the wrong Python version. Though now I also realize that, with Python being on version 3.8, a Python 4 is imminent... at the same time, I am not sure I am ready to embed code in every single script to check that Python >= 3 is loaded...
|
2020/06/05
|
[
"https://Stackoverflow.com/questions/62214293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5359531/"
] |
Yes, this is a safe bet.
[PEP 394](https://www.python.org/dev/peps/pep-0394/) recommends Python 3 be available under the binary name `python3` and most Linux distributions follow this recommendation. In fact, this is the *only* name under which Python 3 has been available in most distributions (the only outlier being Arch Linux, but even that also provides a `python3` binary), and plans to make the ‘plain’ `python` binary also refer to Python 3 have only been made quite recently. The article [‘Revisiting PEP 394’](https://lwn.net/Articles/780737/) on LWN.net has more details.
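If you do want the belt-and-braces in-script check the question mentions, a two-line guard at the top of the script is enough (a sketch, not a required pattern):

```py
import sys

# abort early when run under anything older than Python 3
if sys.version_info < (3,):
    sys.exit("This script requires Python 3; run it with python3.")

print("running under Python %d.%d" % sys.version_info[:2])
```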
|
I believe that a Python 3 install only provides `python3` if there is already another version of Python installed, no matter whether it is Python 2 or Python 3, because the standard `python` command would then not work properly for the new version.
But please correct me if I'm wrong!
| 6,982
|
38,811,966
|
I'm trying to create an exe for my Python script using PyInstaller, but each time it runs into errors, which can be found in a pastebin [here](http://pastebin.com/DJrZjVkv).
Also, when I double-click the exe file it shows this error:
>
> C:\Users\Afro\AppData\Local\Temp\_MEI51322\VCRUNTIME140.dll is either not designed to run on Windows or it contains an error. Try installing the program again using the original installation media or contact your system administrator or the software vendor for support. Error status 0xc000007b
>
>
>
and then this:
>
> Error loading Python DLL:
> C:\Users\Afro\AppData\Local\Temp\_MEI51322\python35.dll(error code 193)
>
>
>
What's wrong, please?
|
2016/08/07
|
[
"https://Stackoverflow.com/questions/38811966",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6483094/"
] |
I was haunted by a similar issue. It might be that in your case UPX is breaking vcruntime140.dll.
The solution is to turn off UPX: just add **--noupx** to your pyinstaller call.
```
pyinstaller --noupx --onedir --onefile --windowed get.py
```
Long explanation here: [UPX breaking vcruntime140.dll (64bit)](https://github.com/pyinstaller/pyinstaller/issues/1565)
|
In my case it was:
```
pyinstaller --clean --win-private-assemblies --noupx --onedir --onefile script.py
```
**--windowed** caused problems with wxWidgets
| 6,983
|
45,628,653
|
I have written a small program in `tkinter` in Python 3.5.
I'm making an executable out of it using `pyinstaller`.
I have included a custom icon for the window to replace the default feather icon of tkinter:
```
from tkinter import *
from tkinter import messagebox
import webbrowser
calculator = Tk()
calculator.title("TBE Calculator")
calculator.resizable(0, 0)
iconFile = 'calculator.ico'
calculator.iconbitmap(default=iconFile)
```
The icon works fine when running the `program.py` file directly.
But when making it an executable using
```
pyinstaller --onefile --windowed --icon=program.ico program.py
```
and running program.exe from the dist directory, it gives the error:
```
failed to execute script program
```
I also tried with
```
pyinstaller --onefile --windowed --icon=program.ico --add-data="calculator.ico;ico" program.py
```
But I still get the same error.
**program.spec** file
```
# -*- mode: python -*-

block_cipher = None

a = Analysis(['program.py'],
             pathex=['C:\\Users\\anuj\\PycharmProjects\\YouTubePlayer\\Program'],
             binaries=[],
             datas=[],
             hiddenimports=[],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data,
          cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          a.binaries,
          a.zipfiles,
          a.datas,
          name='calculator',
          debug=False,
          strip=False,
          upx=True,
          console=False, icon='program.ico')
```
Removing the line `calculator.iconbitmap(default=iconFile)` makes it work fine, but with the default feather icon.
**How do I include the window icon file with the .exe executable?**
|
2017/08/11
|
[
"https://Stackoverflow.com/questions/45628653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3719167/"
] |
Uninstalling and reinstalling Visual Studio with the Azure Service Fabric tools resolved the problem.
|

I installed only the runtime this way. Hope this helps you install the runtime only.
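For the original `--add-data` question, a common pattern (not specific to this answer) is to resolve bundled files through PyInstaller's temporary extraction directory at runtime; `sys._MEIPASS` only exists inside a frozen app, so the helper falls back to the current directory during development:

```py
import os
import sys

def resource_path(relative):
    """Return an absolute path to a bundled resource, frozen or not."""
    base = getattr(sys, '_MEIPASS', os.path.abspath('.'))
    return os.path.join(base, relative)

# e.g. calculator.iconbitmap(default=resource_path('calculator.ico'))
print(resource_path('calculator.ico'))
```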
| 6,988
|
52,287,641
|
I have the below `df1`:
```
Date        Tickers  Qty
01-01-2018  ABC      25
02-01-2018  BCD      25
02-01-2018  XYZ      31
05-01-2018  XYZ      25
```
and another `df2` as below
```
Date        ABC  BCD  XYZ
01-01-2018  123    5   78
02-01-2018  125    7   79
03-01-2018  127    6   81
04-01-2018  126    7   82
05-01-2018  124    6   83
```
I want a resultant column in `df1` which is the product of the matching column and row in `df2`, getting the right ticker's rate on the given date, and letting the other dates have NaN within `df1`:
```
Date        df1['Product']
01-01-2018  3075
02-01-2018  175
02-01-2018  2449
03-01-2018  nan
04-01-2018  nan
05-01-2018  2075
```
This seems like a standard Python operation, but I am unable to achieve it without writing a loop, which takes a very long time to execute.
I merged the above 2 tables on `Date` and then ran the below loop:
```
for i in range(len(df1)):
    try:
        df1['Product'][i] = df1[df1['Ticker'][i]][i]
    except ValueError:
        df['Product'][i] = np.nan
```
Is there any better, more pythonic way of achieving this without writing this loop, please?
|
2018/09/12
|
[
"https://Stackoverflow.com/questions/52287641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6577574/"
] |
Use:
```
df11 = df1.pivot('Date', 'Tickers','Qty')
df22 = df2.set_index('Date')
s = df22.mul(df11).bfill(axis=1).iloc[:, 0]
print (s)
Date
01-01-2018 3075.0
02-01-2018 175.0
03-01-2018 NaN
04-01-2018 NaN
05-01-2018 2075.0
Name: ABC, dtype: float64
```
Solution for add new column to `df1`:
```
df11 = df1.pivot('Date', 'Tickers','Qty')
df22 = df2.set_index('Date')
df = df1.join(df22.mul(df11).stack().rename('new'), on=['Date','Tickers'], how='left')
print (df)
Date Tickers Qty new
0 01-01-2018 ABC 25 3075.0
1 02-01-2018 BCD 25 175.0
2 05-01-2018 XYZ 25 2075.0
```
EDIT:
If pairs `Date`s with `Tickers` are duplicated, solution above is not possible use.
```
print (df1)
Date Tickers Qty
0 01-01-2018 ABC 25
1 01-01-2018 ABC 20 <-added duplicated pairs 01-01-2018 and ABC
2 02-01-2018 XYZ 31
3 02-01-2018 BCD 25
4 05-01-2018 XYZ 25
df3 = df1[['Date']].copy()
#add new values to column
df3['new'] = df2.set_index('Date').lookup(df1['Date'], df1['Tickers']) * df1['Qty']
#add missing values to duplicated Dates
df3 = df2[['Date']].drop_duplicates().merge(df3, how='left')
print (df3)
Date new
0 01-01-2018 3075.0
1 01-01-2018 2460.0
2 02-01-2018 2449.0
3 02-01-2018 175.0
4 03-01-2018 NaN
5 04-01-2018 NaN
6 05-01-2018 2075.0
```
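Note that `DataFrame.lookup` used above was deprecated in pandas 1.2 and removed in 2.0; the same per-row pick can be done with positional indexers (a sketch on a trimmed copy of the question's frames):

```python
import pandas as pd

df1 = pd.DataFrame({"Date": ["01-01-2018", "02-01-2018"],
                    "Tickers": ["ABC", "XYZ"],
                    "Qty": [25, 31]})
df2 = pd.DataFrame({"Date": ["01-01-2018", "02-01-2018"],
                    "ABC": [123, 125],
                    "XYZ": [78, 79]})

rates = df2.set_index("Date")
# Integer positions of each row's Date and Ticker in the rate table
rows = rates.index.get_indexer(df1["Date"])
cols = rates.columns.get_indexer(df1["Tickers"])
df1["new"] = rates.to_numpy()[rows, cols] * df1["Qty"].to_numpy()
```

This picks one cell per row of `df1`, exactly what `lookup` used to do.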
|
You need to set 'Date' as the index and multiply:
```
df1=df1.set_index('Date')
df2=df2.set_index('Date')
df3=(df2['ABC']*df1['Qty']).reset_index()
print(df3)
Date 0
0 01-01-2018 3075.0
1 02-01-2018 3125.0
2 03-01-2018 NaN
3 04-01-2018 NaN
4 05-01-2018 3100.0
```
| 6,989
|
55,514,933
|
I am trying to write a loop that gets JSON from a URL via requests, then writes the JSON to a .csv file. Then I need it to repeat over and over until my list of names (a .txt file, 89 lines) is finished. I can't get it to iterate over the list; I just get the error:
```py
AttributeError: module 'response' has no attribute 'append'
```
I can't find the issue; if I change 'response' to 'responses' I also get an error:
```py
with open('listan-{}.csv'.format(pricelists), 'w') as outf:
OSError: [Errno 22] Invalid argument: "listan-['A..
```
I can't seem to find a loop fitting my purpose. Since I am a total beginner in Python, I hope I can get some help here and learn more.
My code so far.
```py
#Opens the file with pricelists
pricelists = []
with open('prislistor.txt', 'r') as f:
for i, line in enumerate(f):
pricelists.append(line.strip())
# build responses
responses = []
for pricelist in pricelists:
response.append(requests.get('https://api.example.com/3/prices/sublist/{}/'.format(pricelist), headers=headers))
#Format each response
fullData = []
for response in responses:
parsed = json.loads(response.text)
listan=(json.dumps(parsed, indent=4, sort_keys=True))
#Converts and creates a .csv file.
fullData.append(parsed['Prices'])
with open('listan-{}.csv'.format(pricelists), 'w') as outf:
dw.writeheader()
for data in fullData:
dw = csv.DictWriter(outf, data[0].keys())
for row in data:
dw.writerow(row)
print ("The file list-{}.csv is created!".format(pricelists))
```
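As a side note, two naming issues stand out in the code above: the loop appends to `response` while the list is called `responses`, and the filename is formatted with the whole `pricelists` list instead of the single `pricelist`. A minimal sketch of the intended pattern (with the network call stubbed out):

```python
pricelists = ["A123", "B456"]

# Append to the list `responses`, not `response`
responses = []
for pricelist in pricelists:
    responses.append("payload-for-" + pricelist)  # stand-in for requests.get(...)

# Name each output file after the individual pricelist, not the whole list
filenames = ["listan-{}.csv".format(p) for p in pricelists]
```

Formatting with the whole list is what produces the invalid filename `listan-['A...` in the second error.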
|
2019/04/04
|
[
"https://Stackoverflow.com/questions/55514933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6553605/"
] |
VS2019 also introduced new "enhanced" colors for .NET languages, for which there is a separate option to toggle on and off:
[](https://i.stack.imgur.com/IpRoV.png)
The same checkbox is listed for both C# and Basic (VB).
|
It is possible to change it in `Options->Environment->Fonts and Colors`. There is a list with different `User Members - ...` and `User Types - ...` entries that define these colors.
[](https://i.stack.imgur.com/D6XSR.png)
I have actually changed `User Members - Fields` and `User Members - Properties` to be the same color as `User Members - Parameters`. It became much better; white and yellow did not work too well for me :)
Now it's almost like Visual Studio Code
| 6,990
|
72,066,195
|
I want to insert zero at certain locations in an array, but the index position of the location exceeds the size of the array
I wanted the array X to grow as the numbers get inserted one by one, so that by the time the loop reaches index 62 it would not produce that error.
```
import numpy as np
X = np.arange(0,57,1)
desired_location = [ 0, 1, 24, 25, 26, 27, 62, 63]
for i in desired_location:
X_new = np.insert(X,i,0)
print(X_new)
```
output
```
File "D:\python programming\random python files\untitled4.py", line 15, in <module>
X_new = np.insert(X,i,0)
File "<__array_function__ internals>", line 6, in insert
File "D:\spyder\pkgs\numpy\lib\function_base.py", line 4560, in insert
"size %i" % (obj, axis, N))
IndexError: index 62 is out of bounds for axis 0 with size 57
```
|
2022/04/30
|
[
"https://Stackoverflow.com/questions/72066195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18746410/"
] |
Make a copy of `X` into `X_new` so the array gets longer in the loop, as you desire.
```
X_new = X.copy()
for i in desired_location:
X_new = np.insert(X_new, i, 0)
```
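The whole thing is also possible in a single call: `np.insert` accepts a list of indices, but they are interpreted relative to the original array, so subtracting the running offset from the desired final positions reproduces the loop above (a sketch):

```python
import numpy as np

X = np.arange(0, 57, 1)
desired = np.array([0, 1, 24, 25, 26, 27, 62, 63])
# Positions passed to np.insert refer to the original array, so remove
# the shift that the earlier insertions would have introduced.
X_new = np.insert(X, desired - np.arange(len(desired)), 0)
```

The k-th inserted zero ends up at `obj[k] + k`, which is exactly the desired final position.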
|
how silly I was.
```
import numpy as np
X = np.arange(0,57,1)
desired_location = [ 0, 1, 24, 25, 26, 27, 62, 63]
for i in desired_location:
X = np.insert(X,i,0)
print(X)
```
| 6,996
|
2,686,520
|
From what I've read I need Python-Dev; how do I install it on OS X?
I think the problem is that my Xcode was not properly installed, and I don't have the paths where I should.
This previous question:
[Where is gcc on OSX? I have installed Xcode already](https://stackoverflow.com/questions/2685887/where-is-gcc-on-osx-i-have-installed-xcode-already)
Was about I couldn't find gcc, now I can't find Python.h
Should I just link my /Developer directory to somewhere in /usr/?
This is my output:
```
$ sudo easy_install mercurial
Password:
Searching for mercurial
Reading http://pypi.python.org/simple/mercurial/
Reading http://www.selenic.com/mercurial
Best match: mercurial 1.5.1
Downloading http://mercurial.selenic.com/release/mercurial-1.5.1.tar.gz
Processing mercurial-1.5.1.tar.gz
Running mercurial-1.5.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-_7RaTq/mercurial-1.5.1/egg-dist-tmp-l7JP3u
mercurial/base85.c:12:20: error: Python.h: No such file or directory
...
```
Thanks in advance.
|
2010/04/21
|
[
"https://Stackoverflow.com/questions/2686520",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20654/"
] |
It might depend on what version of Mac OS X you have; I have it in these spots:
```
/Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h
/System/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5/Python.h
```
Also, I believe the version of Python that comes with Xcode is a custom build that plays well with Xcode, but you have to jump through some hoops if you use another dev environment.
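For what it's worth, on modern Pythons the running interpreter can report where its own headers (including Python.h) should live (a small sketch; `sysconfig` ships with the standard library):

```python
import os
import sysconfig

# Directory that should contain Python.h for this interpreter
include_dir = sysconfig.get_paths()["include"]
header = os.path.join(include_dir, "Python.h")
print(header)
```

If that file is missing, the dev headers for that interpreter were not installed.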
|
Are you sure you want to build Mercurial from source? There are [binary packages available](https://www.mercurial-scm.org/downloads), including the nice [MacHg](http://jasonfharris.com/machg/) which comes with a bundled Mercurial.
| 6,999
|
67,211,732
|
I'm trying to update my Heroku DB from a Python script I have on my computer. I set up my app on Heroku with NodeJS (because I just like Javascript for that sort of thing), and I'm not sure I can add in a Python script to manage everything. I was able to fill out the DB once, with the script, and it had no hangups. When I try to update it, I get the following statement in my console:
```
Traceback (most recent call last):
File "/home/alan/dev/python/smog_usage_stats/scripts/DBManager.py", line 17, in <module>
CONN = pg2.connect(
File "/home/alan/dev/python/smog_usage_stats/venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: role "alan" does not exist
```
and this is my script:
```py
#DBManager.py
import os
import zipfile
import psycopg2 as pg2
from os.path import join, dirname
from dotenv import load_dotenv
# -------------------------------
# Connection variables
# -------------------------------
dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)
# -------------------------------
# Connection to database
# -------------------------------
# Server connection
CONN = pg2.connect(
database = os.environ.get('PG_DATABASE'),
user = os.environ.get('PG_USER'),
password = os.environ.get('PG_PASSWORD'),
host = os.environ.get('PG_HOST'),
port = os.environ.get('PG_PORT')
)
# Local connection
# CONN = pg2.connect(
# database = os.environ.get('LOCAL_DATABASE'),
# user = os.environ.get('LOCAL_USER'),
# password = os.environ.get('LOCAL_PASSWORD'),
# host = os.environ.get('LOCAL_HOST'),
# port = os.environ.get('LOCAL_PORT')
# )
print("Connected to POSTGRES!")
global CUR
CUR = CONN.cursor()
# -------------------------------
# Database manager class
# -------------------------------
class DB_Manager:
def __init__(self):
self.table_name = "smogon_usage_stats"
try:
self.__FILE = os.path.join(
os.getcwd(),
"data/statsmaster.csv"
)
except:
print('you haven\'t downloaded any stats')
# -------------------------------
# Create the tables for the database
# -------------------------------
def construct_tables(self):
master_file = open(self.__FILE)
columns = master_file.readline().strip().split(",")
sql_cmd = "DROP TABLE IF EXISTS " + self.table_name + ";\n"
sql_cmd += "CREATE TABLE " + self.table_name + " (\n"
sql_cmd += (
"id_ SERIAL PRIMARY KEY,\n"
+ columns[0] + " INTEGER,\n"
+ columns[1] + " VARCHAR(50),\n"
+ columns[2] + " FLOAT,\n"
+ columns[3] + " INTEGER,\n"
+ columns[4] + " FLOAT,\n"
+ columns[5] + " INTEGER,\n"
+ columns[6] + " FLOAT,\n"
+ columns[7] + " INTEGER,\n"
+ columns[8] + " VARCHAR(10),\n"
+ columns[9] + " VARCHAR(50));"
)
CUR.execute(sql_cmd)
CONN.commit()
# -------------------------------
# Copy data from CSV files created in smogon_pull.py into database
# -------------------------------.
def fill_tables(self):
master_file = open(self.__FILE, "r")
columns = tuple(master_file.readline().strip().split(","))
CUR.copy_from(
master_file,
self.table_name,
columns=columns,
sep=","
)
CONN.commit()
# -------------------------------
# Disconnect from database.
# -------------------------------
def close_db(self):
CUR.close()
print("Cursor closed.")
CONN.close()
print("Connection to server closed.")
if __name__ == "__main__":
manager = DB_Manager()
print("connected")
manager.construct_tables()
print("table made")
manager.fill_tables()
print("filled")
```
As I said, everything worked fine before, but now I'm getting this unexpected error and am not sure how to trace it back. The name `"alan"` is not in any of my credentials, which confuses me.
I'm not running it via CLI, but through my text editor (in this case VS code).
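One observation (an assumption, not from the original post): if the `.env` file isn't found at the path built from `__file__`, then `os.environ.get('PG_USER')` silently returns `None`, and libpq falls back to the operating-system login name, which would explain where `"alan"` comes from. A quick way to see the silent `None`:

```python
import os

# get() on a missing variable returns None instead of raising;
# psycopg2/libpq then default the role to the OS login name.
user = os.environ.get("PG_USER_SURELY_UNSET_12345")
print(user)  # None when the variable is unset
```

Printing the connection parameters before calling `pg2.connect` is a cheap way to confirm which ones actually loaded.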
|
2021/04/22
|
[
"https://Stackoverflow.com/questions/67211732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11790979/"
] |
The "month" view provided by fullCalendar does not have this flexibility - it always starts at the beginning of the month, like a traditional paper calendar.
IMHO it would be confusing to many users if it appeared differently.
Other types of view are more flexible - they will respond to the `visibleRange` setting if you do not specify the time range in the view name, such as if you specify `timeGrid` rather than `timeGridWeek` for example.
|
Change **type: 'dayGridMonth'** to **type: 'dayGrid'**
| 7,002
|
66,214,454
|
Ansible version: 2.8.3 or Any
I'm using Ansible's **ad-hoc** command with `-m <module>` to ensure the following package is installed, **or** alternatively I have a task that installs a few yum packages, like the following (i.e. how can I do the same within a task, preferably when ***I'm not using Ansible's shell/command*** modules):
```
- name: Installing necessary yum dependencies
yum:
name:
- wget
- python-devel
- openssl-devel
state: latest
```
**It works,** but how can I get the output of the whole `yum` operation in a nice format (instead of a one-line format with a bunch of `\n` characters embedded in it), like what we usually get when we run the same command (`yum install <some_package>`) at the Linux command prompt?
**I want Ansible to retain the command's output in its original format.**
The line I want to see in a more linted/beautified way is the one starting with `Loaded plugins: ...`:
```
[root@ansible-server test_ansible]# ansible -i hosts all -m yum -a 'name=ncdu state=present'
host1 | SUCCESS => {
"changed": true,
"msg": "",
"rc": 0,
"results": [
"Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: mirror.netsite.dk\n * elrepo: mirrors.xservers.ro\n * epel: fedora.mirrors.telekom.ro\n * extras: centos.mirrors.telekom.ro\n * remi-php70: remi.schlundtech.de\n * remi-safe: remi.schlundtech.de\n * updates: centos.mirror.iphh.net\nResolving Dependencies\n--> Running transaction check\n---> Package ncdu.x86_64 0:1.14-1.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n ncdu x86_64 1.14-1.el7 epel 51 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 51 k\nInstalled size: 87 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : ncdu-1.14-1.el7.x86_64 1/1 \n Verifying : ncdu-1.14-1.el7.x86_64 1/1 \n\nInstalled:\n ncdu.x86_64 0:1.14-1.el7 \n\nComplete!\n"
]
}
```
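Note that the `\n` sequences in the JSON result are real newline characters in the registered string; once the string is printed (or split), the original yum layout comes back (a sketch):

```python
raw = ("Loaded plugins: fastestmirror\n"
       "Resolving Dependencies\n"
       "Complete!\n")

# splitlines() recovers the per-line layout that yum itself printed
lines = raw.splitlines()
print("\n".join(lines))
```

So a `debug` task printing the registered `results` string already restores the multi-line view; the `\n` only appear escaped in the JSON rendering.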
An example of the aligned/beautified format that I'm looking for is shown below, from executing **# yum install ncdu**. We need the exact same output: per line and easy to read/visualize on stdout.
```
Loaded plugins: amazon-id, langpacks, product-id, search-disabled-repos, subscription-manager
This system is registered with an entitlement server, but is not receiving updates. You can use subscription-manager to assign subscriptions.
*** WARNING ***
The subscription for following product(s) has expired:
- Oracle Java (for RHEL Server)
- Red Hat Ansible Engine
- Red Hat Beta
- Red Hat CodeReady Linux Builder for x86_64
- Red Hat Container Images
- Red Hat Container Images Beta
- Red Hat Developer Tools (for RHEL Server)
- Red Hat Developer Tools Beta (for RHEL Server)
- Red Hat Developer Toolset (for RHEL Server)
- Red Hat Enterprise Linux Atomic Host
- Red Hat Enterprise Linux Atomic Host Beta
- Red Hat Enterprise Linux Server
- Red Hat Enterprise Linux for x86_64
- Red Hat Software Collections (for RHEL Server)
- Red Hat Software Collections Beta (for RHEL Server)
- dotNET on RHEL (for RHEL Server)
- dotNET on RHEL Beta (for RHEL Server)
You no longer have access to the repositories that provide these products. It is important that you apply an active subscription in order to resume access to security and other critical updates. If you don't have other active subscriptions, you can renew the expired subscription.
Resolving Dependencies
--> Running transaction check
---> Package ncdu.x86_64 0:1.15.1-1.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================================================================================
Installing:
ncdu x86_64 1.15.1-1.el7 epel 52 k
Transaction Summary
==============================================================================================================================================================================================================================================================
Install 1 Package
Total download size: 52 k
Installed size: 88 k
Is this ok [y/d/N]: y
Downloading packages:
ncdu-1.15.1-1.el7.x86_64.rpm | 52 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ncdu-1.15.1-1.el7.x86_64 1/1
Verifying : ncdu-1.15.1-1.el7.x86_64 1/1
Installed:
ncdu.x86_64 0:1.15.1-1.el7
Complete!
```
|
2021/02/15
|
[
"https://Stackoverflow.com/questions/66214454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1499296/"
] |
You should add this line to your **MouseArea** for it to work:
```
anchors.fill: parent
```
|
Thanks @Farshid616 for the help. The problem was that my MouseArea wasn't inside the Rectangle. All I needed to do was open the code and move the MouseArea into the Rectangle, so that the MouseArea becomes a child of the Rectangle.
| 7,003
|
67,484,068
|
I have a model with a unique "code" field and a form for the same model where the "code" field is hidden. I need to set the "code" value in the view after the user has filled in the form, but I get an IntegrityError exception.
**model**
```
class Ticket(models.Model):
codice = models.CharField(unique=True, max_length = 13, default = '')
```
**form**
```
class NewTicketForm(forms.ModelForm):
codice = forms.CharField(widget = forms.HiddenInput(), required=False)
```
**view**
```
if request.method == 'POST':
form = NewTicketForm(request.POST)
form.codice = 'MC-PR' + get_random_string(length=8, allowed_chars='0123456789')
if form.is_valid():
while True:
try:
codice = 'MC-PR' + get_random_string(length=8, allowed_chars='0123456789')
form.codice = codice
form.save()
except:
break
form.save()
return redirect('ticket-homepage')
else:
form = NewTicketForm()
context = {
'form': form
}
return render(request, 'ticket/new_ticket_form.html', context)
```
I also tried to set form.codice before form.is_valid(), but the exception is raised anyway. Technically there shouldn't be any problem, because I generate the value with get_random_string and the try-except allows me to retry as long as the value is not unique.
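One thing worth noting (an observation, not from the original post): `form.codice = ...` only sets a plain Python attribute on the form object; `ModelForm.save()` takes field values from `cleaned_data`, so the model's empty default `''` is what actually hits the unique column, hence the duplicate-`''` error. Setting the value on the model instance (e.g. `obj = form.save(commit=False); obj.codice = ...; obj.save()`) sidesteps that. The retry-until-unique generation itself can be sketched in plain Python:

```python
import random
import string

def make_codice(existing):
    # Retry until the 13-character code is not already taken
    while True:
        codice = "MC-PR" + "".join(random.choices(string.digits, k=8))
        if codice not in existing:
            return codice

taken = {"MC-PR00000000"}
codice = make_codice(taken)
```

In the real view, `existing` would be a membership check against the database rather than a set.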
**traceback**
```
Traceback (most recent call last):
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/backends/mysql/base.py", line 73, in execute
return self.cursor.execute(query, args)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/cursors.py", line 148, in execute
result = self._query(query)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/cursors.py", line 310, in _query
conn.query(q)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/connections.py", line 548, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/connections.py", line 775, in _read_query_result
result.read()
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/connections.py", line 1156, in read
first_packet = self.connection._read_packet()
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/connections.py", line 725, in _read_packet
packet.raise_for_error()
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/protocol.py", line 221, in raise_for_error
err.raise_mysql_exception(self._data)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/err.py", line 143, in raise_mysql_exception
raise errorclass(errno, errval)
The above exception ((1062, "Duplicate entry '' for key 'ticket_ticket.ticket_ticket_codice_f619a2bb_uniq'")) was the direct cause of the following exception:
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/var/www/framework_mc/ticket/views.py", line 34, in createNewTicket
form.save()
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/forms/models.py", line 460, in save
self.instance.save()
File "/var/www/framework_mc/ticket/models.py", line 72, in save
return super(Ticket, self).save(*args, **kwargs)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/models/base.py", line 753, in save
self.save_base(using=using, force_insert=force_insert,
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/models/base.py", line 790, in save_base
updated = self._save_table(
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/models/base.py", line 895, in _save_table
results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/models/base.py", line 933, in _do_insert
return manager._insert(
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/models/query.py", line 1254, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1397, in execute_sql
cursor.execute(sql, params)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/backends/utils.py", line 98, in execute
return super().execute(sql, params)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/django/db/backends/mysql/base.py", line 73, in execute
return self.cursor.execute(query, args)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/cursors.py", line 148, in execute
result = self._query(query)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/cursors.py", line 310, in _query
conn.query(q)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/connections.py", line 548, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/connections.py", line 775, in _read_query_result
result.read()
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/connections.py", line 1156, in read
first_packet = self.connection._read_packet()
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/connections.py", line 725, in _read_packet
packet.raise_for_error()
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/protocol.py", line 221, in raise_for_error
err.raise_mysql_exception(self._data)
File "/var/www/framework_mc/framework_mc/lib/python3.8/site-packages/pymysql/err.py", line 143, in raise_mysql_exception
raise errorclass(errno, errval)
Exception Type: IntegrityError at /ticket/new-ticket/
Exception Value: (1062, "Duplicate entry '' for key 'ticket_ticket.ticket_ticket_codice_f619a2bb_uniq'")
```
|
2021/05/11
|
[
"https://Stackoverflow.com/questions/67484068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14293891/"
] |
To check for email and operate on it you are probably better off using Spring Integration, which has out-of-the-box [email support](https://docs.spring.io/spring-integration/docs/current/reference/html/mail.html).
Regarding your question, I suspect you misunderstood the [`ApplicationListener`](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/ApplicationListener.html). It can be used to receive events fired through the internal event API of Spring. It won't start a listener thread like the listeners for Kafka or JMS would. Although all are listeners in their own sense, they are quite different beasts.
But as mentioned, you are probably better off using the [email support](https://docs.spring.io/spring-integration/docs/current/reference/html/mail.html) of Spring Integration, which saves you from writing your own.
|
I think you misunderstood the concept of Spring context listeners.
Spring's application context starts during the startup of a Spring-driven application.
It has various "hooks" - points that you could listen to and be notified once they happen. So yes, the context can be refreshed, and when that happens the listener "fires" and your code is executed. But that's it - it won't actually do anything useful afterwards, once the context has started. You can read about the context events [here](https://www.baeldung.com/spring-context-events) for example.
So with this in mind, basically your application context gets refreshed once, and the listener will be called only once, during the application startup.
Now, I didn't understand how the RMI server is related to all this :) You can remove the `@Component` annotation from the listener and restart the application; probably it will still search for the RMI server (please try that and update in a comment or something).
So first off, I think you should try to understand how RMI is related to all this. I think there is some other component in the system that gets loaded by Spring Boot and searches for the RMI server. Maybe you can take a thread dump or something and see which thread really works with RMI.
Getting an event "when the message is added" is an entirely different thing; you will probably have to implement it yourself, or if the third party that handles the email has such functionality, try to find how to achieve that with that third-party API. But it is certainly not a ContextRefreshed event.
| 7,004
|
1,487,022
|
Hi all,
Has anybody been able to extract the device tokens from the binary data that the iPhone APNS feedback service returns, using PHP? I am looking for something similar to what has been implemented in Python here:
[http://www.google.com/codesearch/p?hl=en&sa=N&cd=2&ct=rc#m5eOMDWiKUs/APNSWrapper/__init__.py&q=feedback.push.apple.com](http://www.google.com/codesearch/p?hl=en&sa=N&cd=2&ct=rc#m5eOMDWiKUs/APNSWrapper/__init__.py&q=feedback.push.apple.com)
As per the Apple documentation, I know that the first 4 bytes are timestamp, next 2 bytes is the length of the token and rest of the bytes are the actual token in binary format. (<http://developer.apple.com/IPhone/library/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/CommunicatingWIthAPS/CommunicatingWIthAPS.html#//apple_ref/doc/uid/TP40008194-CH101-SW3>)
I am successfully able to extract the timestamp from the data the feedback service returns, but the device token I get after converting to hexadecimal with PHP's built-in bin2hex() is different from the original device token. I must be doing something silly in the conversion. Can anybody help me out if they have already implemented the APNS feedback service in PHP?
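For reference, the record layout described above (4-byte big-endian timestamp, 2-byte token length, then the token bytes) unpacks like this in Python; the same offsets apply when porting to PHP (a sketch):

```python
import struct

def parse_feedback(buf):
    # Each record: 4-byte timestamp, 2-byte token length, then the token itself
    records = []
    offset = 0
    while offset + 6 <= len(buf):
        ts, length = struct.unpack_from("!IH", buf, offset)
        offset += 6
        token_hex = buf[offset:offset + length].hex()
        offset += length
        records.append((ts, token_hex))
    return records
```

Each token comes back as lowercase hex, which is what bin2hex() should also produce once the 6-byte header is skipped correctly.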
TIA,
-Anish
|
2009/09/28
|
[
"https://Stackoverflow.com/questions/1487022",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/130985/"
] |
[PHP technique to query the APNs Feedback Server](https://stackoverflow.com/questions/1278834/php-technique-to-query-the-apns-feedback-server)
|
The best place to go for this is actually the Apple developer forums internal to the iPhone portal - they have a bunch of examples in different languages for working with these push requests.
I'm also currently at a 360iDev push session, and they noted an open source PHP server can be found at:
<http://code.google.com/p/php-apns/>
| 7,005
|
13,336,623
|
I'm using Python 2.6.2. I have a list of tuples `pair` which I'd like to sort using two nested conditions.
1. The tuples are first sorted in descending count order of `fwd_count`,
2. If the value of count is the same for more than one tuple in `fwd_count`, only those tuples having equal count need to be sorted in descending order based on values in `rvs_count`.
3. The order does not matter and the positioning can be ignored, if
a) tuples have the same count in `fwd_count` and also in `rvs_count`, or
b) tuples have the same count in `fwd_count` and do not exist in `rvs_count`
I managed to write the following code:
```
pair=[((0, 12), (0, 36)), ((1, 12), (0, 36)), ((2, 12), (1, 36)), ((3, 12), (1, 36)), ((1, 36), (4, 12)), ((0, 36), (5, 12)), ((1, 36), (6, 12))]
fwd_count = {}
rvs_count = {}
for link in sorted(pair):
fwd_count[link[0]] = 0
rvs_count[link[1]] = 0
for link in sorted(pair):
fwd_count[link[0]] += 1
rvs_count[link[1]] += 1
#fwd_count {(6, 12): 1, (5, 12): 1, (4, 12): 1, (1, 36): 2, (0, 36): 2}
#rvs_count {(3, 12): 1, (1, 12): 1, (1, 36): 2, (0, 12): 1, (2, 12): 1, (0, 36): 1}
fwd_count_sort=sorted(fwd_count.items(), key=lambda x: x[1], reverse=True)
rvs_count_sort=sorted(rvs_count.items(), key=lambda x: x[1])
#fwd_count_sort [((1, 36), 2), ((0, 36), 2), ((6, 12), 1), ((5, 12), 1), ((4, 12), 1)]
#rvs_count_sort [((3, 12), 1), ((1, 12), 1), ((1, 36), 2), ((0, 12), 1), ((2, 12), 1), ((0, 36), 1)]
```
The result I am looking for is:
```
#fwd_count_sort_final [((0, 36), 2), ((1, 36), 2), ((6, 12), 1), ((5, 12), 1), ((4, 12), 1)]
```
Where the position of `(1, 36)` and `(0, 36)` have swapped position from the one in `fwd_count_sort`.
Question:
1. Is there a better way to do multi condition sorting using `fwd_count` and `rvs_count` information at the same time? (Only the tuples are important, the sort value need not be recorded.), or
2. Would I need to sort it individually for each conditions (as I did above) and try to find mean to integrate it to get the result I wanted?
I am currently working on item #2 above, but am trying to learn if there is any simpler method.
This is the closest I can get to what I am looking for "Bidirectional Sorting with Numeric Values" at <http://stygianvision.net/updates/python-sort-list-object-dictionary-multiple-key/> but not sure I can use that if I create a new dictionary with {tuple: {fwd\_count : rvs\_count}} relationship.
**Update: 12 November 2012 -- SOLVED**
I managed to solve this using a list. The code is below; hope it is useful for those who are working on sorting multi-condition lists.
```
#pair=[((0, 12), (0, 36)), ((1, 12), (1, 36)), ((2, 12), (0, 36)), ((3, 12), (1, 36)), ((1, 36), (4, 12)), ((0, 36), (5, 12)), ((1, 36), (6, 12))]
rvs_count = {}
fwd_count = {}
for link in sorted(pair):
rvs_count[link[0]] = 0
fwd_count[link[1]] = 0
for link in sorted(pair):
rvs_count[link[0]] += 1
fwd_count[link[1]] += 1
keys = []
for link in pair:
if link[0] not in keys:
keys.append(link[0])
if link[1] not in keys:
keys.append(link[1])
aggregated = []
for k in keys:
a = -1
d = -1
if k in fwd_count.keys():
a = fwd_count[k]
if k in rvs_count.keys():
d = rvs_count[k]
aggregated.append(tuple((k, tuple((a,d)) )))
def compare(x,y):
a1 = x[1][0]
d1 = x[1][1]
a2 = y[1][0]
d2 = y[1][1]
if a1 > a2:
return - a1 + a2
elif a1 == a2:
if d1 > d2:
return d1 - d2
elif d1 == d2:
return 0
else:
return d1 - d2
else:
return - a1 + a2
s = sorted(aggregated, cmp=compare)
print(s)
j = [v[0] for v in s]
print(j)
```
Thanks to Andre Fernandes, Brian, and Duke for their comments on my work
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13336623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1611813/"
] |
If you require to swap *all* first (of pair) elements (and not just `(1, 36)` and `(0, 36)`), you can do
`fwd_count_sort=sorted(rvs_count.items(), key=lambda x: (x[0][1],-x[0][0]), reverse=True)`
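As a quick, self-contained check (reconstructing the question's `rvs_count` dictionary), this key function puts `(0, 36)` ahead of `(1, 36)` while keeping the descending order on the second element:

```python
# Reconstruction of the question's rvs_count dictionary.
rvs_count = {(3, 12): 1, (1, 12): 1, (1, 36): 2, (0, 12): 1, (2, 12): 1, (0, 36): 1}

# Descending on the second element of each key; the negated first element,
# combined with reverse=True, makes the first element ascending within ties.
fwd_count_sort = sorted(rvs_count.items(), key=lambda x: (x[0][1], -x[0][0]), reverse=True)
print(fwd_count_sort)  # starts with ((0, 36), 1), ((1, 36), 2), ...
```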
|
I'm not exactly sure on the definition of your sorting criteria, but this is a method to sort the `pair` list according to the values in `fwd_count` and `rvs_count`. Hopefully you can use this to get to the result you want.
```
def keyFromPair(pair):
"""Return a tuple (f, r) to be used for sorting the pairs by frequency."""
global fwd_count
global rvs_count
first, second = pair
countFirstInv = -fwd_count[first] # use the negative to reverse the sort order
countSecond = rvs_count[second]
    return (countFirstInv, countSecond)
pairs_sorted = sorted(pair, key = keyFromPair)
```
The basic idea is to use Python's in-built tuple ordering mechanism to sort on multiple keys, and to invert one of the values in the tuple so make it a reverse-order sort.
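As a standalone illustration of that mechanism (with made-up data, not the question's counts): negating one numeric component of the key tuple reverses the sort direction for that component only:

```python
pairs = [(1, 36), (0, 36), (6, 12), (5, 12), (4, 12)]

# Descending by the second element, ascending by the first:
# negate the second component so the default ascending sort reverses it.
ordered = sorted(pairs, key=lambda p: (-p[1], p[0]))
print(ordered)  # [(0, 36), (1, 36), (4, 12), (5, 12), (6, 12)]
```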
| 7,007
|
69,281,148
|
I have around 30,000 URLs in my CSV. I need to check, for each URL, whether meta content is present or not. I am using requests\_cache to cache the responses to a SQLite db. It was taking about 24 hours even with caching, so I moved to concurrency. I think I have done something wrong with `out = executor.map(download_site, sites, headers)`, and I don't know how to fix it.
AttributeError: 'str' object has no attribute 'items'
```
import concurrent.futures
import requests
import threading
import time
import pandas as pd
import requests_cache
from PIL import Image
from io import BytesIO
thread_local = threading.local()
df = pd.read_csv("test.csv")
sites = []
for row in df['URLS']:
sites.append(row)
# print("URL is shortened")
user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'
headers={'User-Agent':user_agent,}
requests_cache.install_cache('network_call', backend='sqlite', expire_after=2592000)
def getSess():
if not hasattr(thread_local, "session"):
thread_local.session = requests.Session()
return thread_local.session
def networkCall(url, headers):
print("In Download site")
session = getSess()
with session.get(url, headers=headers) as response:
print(f"Read {len(response.content)} from {url}")
return response.content
out = []
def getMeta(meta_res):
print("Get data")
for each in meta_res:
meta = each.find_all('meta')
for tag in meta:
if 'name' in tag.attrs.keys() and tag.attrs['name'].strip().lower() in ['description', 'keywords']:
content = tag.attrs['content']
if content != '':
out.append("Absent")
else:
out.append("Present")
return out
def allSites(sites):
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
out = executor.map(networkCall, sites, headers)
return list(out)
if __name__ == "__main__":
sites = [
"https://www.jython.org",
"http://olympus.realpython.org/dice",
] * 15000
start_time = time.time()
list_meta = allSites(sites)
print("META ", list_meta)
duration = time.time() - start_time
print(f"Downloaded {len(sites)} in {duration} seconds")
output = getMeta(list_meta)
df["is it there"] = pd.Series(output)
df.to_csv('new.csv',index=False, header=True)
```
|
2021/09/22
|
[
"https://Stackoverflow.com/questions/69281148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16433326/"
] |
I have tried to emulate your functionality. The following code executes in under 4 minutes:-
```
from bs4 import BeautifulSoup as BS
import concurrent.futures
import time
import queue
import requests
URLs = [
"https://www.jython.org",
"http://olympus.realpython.org/dice"
] * 15_000
user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'
headers = {'User-Agent': user_agent}
class SessionCache():
def __init__(self, cachesize=20):
self.cachesize = cachesize
self.sessions = 0
self.q = queue.Queue()
def getSession(self):
try:
return self.q.get(block=False)
except queue.Empty:
pass
if self.sessions < self.cachesize:
self.q.put(requests.Session())
self.sessions += 1
return self.q.get()
def putSession(self, session):
self.q.put(session)
CACHE = SessionCache()
def doGet(url):
try:
session = CACHE.getSession()
response = session.get(url, headers=headers)
response.raise_for_status()
soup = BS(response.text, 'lxml')
for meta in soup.find_all('meta'):
if (name := meta.attrs.get('name', None)):
if name.strip().lower() in ['description', 'keywords']:
if meta.attrs.get('content', '') != '':
return url, 'Present'
return url, 'Absent'
except Exception as e:
return url, str(e)
finally:
CACHE.putSession(session)
def main():
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as executor:
for r in executor.map(doGet, URLs):
print(f'{r[0]} -> {r[1]}')
end = time.perf_counter()
print(f'Duration={end-start:.4f}s')
if __name__ == '__main__':
main()
```
|
AttributeError: 'str' object has no attribute 'items'
=====================================================
This error is happening in `requests.models.PreparedRequest.prepare_headers()`. When you call `executor.map(networkCall, sites, headers)`, `map` iterates over `headers` (yielding its keys), so you end up with `request.headers = 'User-Agent'` instead of `request.headers = {'User-Agent': '...'}`.
Since it looks like the headers aren't actually changing, you can make that a constant and remove it as an argument from `networkCall()`:
```py
HEADERS = {'User-Agent':user_agent}
...
def networkCall(url):
session = getSess()
with session.get(url, headers=HEADERS) as response:
print(f"Read {len(response.content)} from {url}")
return response.content
...
def allSites(sites):
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
out = executor.map(networkCall, sites)
return list(out)
```
sqlite3.OperationalError: database is locked
============================================
Another thing worth noting is that `requests_cache.install_cache()` is **not** thread-safe, which causes the `sqlite3.OperationalError` you got earlier. You can remove `install_cache()` and use `requests_cache.CachedSession` instead, which **is** thread-safe:
```py
def getSess():
if not hasattr(thread_local, "session"):
thread_local.session = requests_cache.CachedSession(
'network_call',
backend='sqlite',
expire_after=2592000,
)
return thread_local.session
```
For reference, there's more info in the requests-cache user guide on the [differences between sessions and patching](https://requests-cache.readthedocs.io/en/stable/user_guide/general.html).
Performance
===========
A note on performance: Since you're doing lots of concurrent writes, SQLite isn't ideal. It's fast for concurrent reads, but for writes it internally queues operations and writes data in serial, not in parallel. If possible, try using Redis or one of the other [requests-cache backends](https://requests-cache.readthedocs.io/en/stable/user_guide/backends.html), which are better optimized for concurrent writes.
| 7,008
|
18,937,057
|
I am searching for items that are not repeated in a list in python.
The current way I do it is,
```
python -mtimeit -s'l=[1,2,3,4,5,6,7,8,9]*99' '[x for x in l if l.count(x) == 1]'
100 loops, best of 3: 12.9 msec per loop
```
Is it possible to do it faster?
This is the output.
```
>>> l = [1,2,3,4,5,6,7,8,9]*99+[10,11]
>>> [x for x in l if l.count(x) == 1]
[10, 11]
```
|
2013/09/21
|
[
"https://Stackoverflow.com/questions/18937057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/994176/"
] |
You can use [the `Counter` class](http://docs.python.org/dev/library/collections.html#collections.Counter) from `collections`:
```
from collections import Counter
...
[item for item, count in Counter(l).items() if count == 1]
```
My results:
```none
$ python -m timeit -s 'from collections import Counter; l = [1, 2, 3, 4, 5, 6, 7, 8, 9] * 99' '[item for item, count in Counter(l).items() if count == 1]'
1000 loops, best of 3: 366 usec per loop
$ python -mtimeit -s'l=[1,2,3,4,5,6,7,8,9]*99' '[x for x in l if l.count(x) == 1]'
10 loops, best of 3: 23.4 msec per loop
```
|
Basically you want to remove duplicate entries, so there are some answers here:
* [How do you remove duplicates from a list in Python whilst preserving order?](https://stackoverflow.com/questions/480214/how-do-you-remove-duplicates-from-a-list-in-python-whilst-preserving-order)
* [Remove duplicates in a list while keeping its order (Python)](https://stackoverflow.com/a/1549550/170413)
Using `in` as opposed to `count()` should be a little quicker because the search stops as soon as it finds the first instance.
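For reference, the linked order-preserving dedup approaches boil down to a sketch like this (note: this keeps one copy of every item, which is not quite the same as the question's "items appearing exactly once"):

```python
def dedup_preserving_order(items):
    # Membership tests on a set are O(1), unlike list.count(),
    # which rescans the whole list on every call.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(dedup_preserving_order([1, 2, 1, 3, 2]))  # [1, 2, 3]
```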
| 7,011
|
66,093,541
|
Is there a way in C++ to pass arguments by name like in python?
For example I have a function:
```
void foo(int a, int b = 1, int c = 3, int d = 5);
```
Can I somehow call it like:
```
foo(5 /* a */, c = 5, d = 8);
```
Or
```
foo(5, /* a */, d = 1);
```
|
2021/02/07
|
[
"https://Stackoverflow.com/questions/66093541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15114707/"
] |
There are no named function parameters in C++, but you can achieve a similar effect with designated initializers from C++20.
Take all the function parameters and put them into a struct:
```
struct S
{
int a{}, b{}, c{}, d{};
};
```
Now modify your function to take an instance of that struct (by `const&` for efficiency)
```
void foo(const S& s)
{
std::cout << s.a << " " << s.b << " " << s.c << " " << s.d; // for example
}
```
and now you can call the function like this:
```
foo({.a = 2, .c = 3}); // prints 2 0 3 0
// b and d get default values of 0
```
Here's a [demo](https://godbolt.org/z/neTdeP)
|
**No**
You have to pass the arguments by order, so, to specify a value for *d*, you must also specify one for *c* since it's declared before it, for example
| 7,012
|
64,972,907
|
Output:
```none
pygame 2.0.0 (SDL 2.0.12, python 3.8.6)
Hello from the pygame community. https://www.pygame.org/contribute.html
Traceback (most recent call last):
File "C:\Users\New User\Python Projects\Aliens Invasion Game\alien_invasion.py", line 5, in <module>
class AlienInvasion:
File "C:\Users\New User\Python Projects\Aliens Invasion Game\alien_invasion.py", line 26, in AlienInvasion
ai = AlienInvasion()
NameError: name 'AlienInvasion' is not defined
```
```py
import sys
import pygame
class AlienInvasion:
"""Overall class to manage game assets and behaviour."""
def __init__(self):
"""Initialize the game, and create game resources."""
pygame.init()
self.screen = pygame.display.set_mode((1200, 800))
pygame.display.set_caption("Alien Invasion")
def run_game(self):
"""Start the main loop for the game."""
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
#make the most recently drawn screen visible.
pygame.display.flip()
if __name__ == '__main__':
#make a game instance and run the game.
ai = AlienInvasion()
ai.run_game()
```
Why am I getting `NameError` on `AlienInvasion.py`?
|
2020/11/23
|
[
"https://Stackoverflow.com/questions/64972907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14625233/"
] |
The original code you posted has an indentation issue - your `if __name__ == '__main__'` block is indented, meaning it's actually considered to be within the scope of `class AlienInvasion`:
```
import ...
class AlienInvasion:
def __init__(self):
...
def run_game(self):
...
# IMPROPERLY INDENTED:
if __name__ == '__main__':
#make a game instance and run the game.
ai = AlienInvasion()
ai.run_game()
```
Since you're trying to use `AlienInvasion` in the middle of its own definition, you'll therefore get something like this error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 16, in AlienInvasion
NameError: name 'AlienInvasion' is not defined
```
Thus, in order to fix the error, you have to fix your indentation, so that your `main` block is outside of `AlienInvasion`:
```
import sys
import pygame
class AlienInvasion:
"""Overall class to manage game assets and behaviour."""
def __init__(self):
"""Initialize the game, and create game resources."""
pygame.init()
self.screen = pygame.display.set_mode((1200, 800))
pygame.display.set_caption("Alien Invasion")
def run_game(self):
"""Start the main loop for the game."""
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
#make the most recently drawn screen visible.
pygame.display.flip()
if __name__ == '__main__':
#make a game instance and run the game.
ai = AlienInvasion()
ai.run_game()
```
|
You should remove your main block from the definition of the `AlienInvasion` class.
Your `.py` file should look like this:
```
import sys
import pygame
class AlienInvasion:
"""Overall class to manage game assets and behaviour."""
def __init__(self):
"""Initialize the game, and create game resources."""
pygame.init()
self.screen = pygame.display.set_mode((1200, 800))
pygame.display.set_caption("Alien Invasion")
def run_game(self):
"""Start the main loop for the game."""
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
#make the most recently drawn screen visible.
pygame.display.flip()
if __name__ == '__main__':
#make a game instance and run the game.
ai = AlienInvasion()
ai.run_game()
```
| 7,013
|
62,657,469
|
I want to search for a word in a file and print the value that follows it, using Python.
Following is the code:
```
def matchTest(testsuite, testList):
hashfile = open("/auto/file.txt", 'a')
with open (testsuite, 'r') as suite:
for line in suite:
remove_comment=line.split('#')[0]
for test in testList:
if re.search(test, remove_comment, re.IGNORECASE):
hashfile.writelines(remove_comment)
search_word=remove_comment.split(':component=>"', maxsplit=1)[-1].split(maxsplit=1)
print(search_word)
hashfile.close()
```
`remove_comment` has the following lines:
```
{:component=>"Cloud Tier Mgmt", :script=>"b.py", :testname=>"c", --clients=$LOAD_CLIENT --log_level=DEBUG --config_file=a.yaml"}
{:skipfilesyscheck=>1, :component=>"Content Store", :script=>"b.py", --clients=$LOAD_CLIENT --log_level=DEBUG --config_file=a.yaml -s"}
{:script=>"b.py", :params=>"--ddrs=$DDRS --clients=$LOAD_CLIENT --log_level=DEBUG --config_file=a.yaml", :numddr=>1, :timeout=>10000, :component=>"Cloud-Connectivity" }
```
So now I want the output to be only the component values, as follows:
```
Cloud Tier Mgmt
Content Store
Cloud-Connectivity
```
Can anyone please help?
|
2020/06/30
|
[
"https://Stackoverflow.com/questions/62657469",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11032819/"
] |
```
def matchTest(testsuite, testList):
hashfile = open("hash.txt", 'a')
with open ('text.txt', 'r') as suite:
lines = [line.strip() for line in suite.readlines() if line.strip()]
print(lines)
for line in lines:
f = re.search(r'component=>"(.*?)"', line) # find part you need
hashfile.write(f.group(1)+'\n')
hashfile.close()
```
**Output written to file:**
```
Cloud Tier Mgmt
Content Store
Cloud-Connectivity
```
|
Try this:
```
import re
remove_comment = '''{:component=>"Cloud Tier Mgmt", :script=>"b.py", :testname=>"c", --clients=$LOAD_CLIENT --log_level=DEBUG --config_file=a.yaml"}
{:skipfilesyscheck=>1, :component=>"Content Store", :script=>"b.py", --clients=$LOAD_CLIENT --log_level=DEBUG --config_file=a.yaml -s"}
{:script=>"b.py", :params=>"--ddrs=$DDRS --clients=$LOAD_CLIENT --log_level=DEBUG --config_file=a.yaml", :numddr=>1, :timeout=>10000, :component=>"Cloud-Connectivity" }'''
data = [x.split('"')[1] for x in re.findall(r'component=>[^,:}]*', remove_comment)]
print(data)
```
**Output:**
```
['Cloud Tier Mgmt', 'Content Store', 'Cloud-Connectivity']
```
| 7,014
|
52,451,119
|
I have few text files which contain URLs. I am trying to create a SQLite database to store these URLs in a table. The URL table has two columns i.e. primary key(INTEGER) and URL(TEXT).
I try to insert 100,000 entries in one insert command and loop until I finish the URL list. Basically, I read all the text files' content into a list, then create smaller lists of 100,000 entries each and insert them into the table.
Total URLs in the text files are 4,591,415 and total text file size is approx 97.5 MB.
**Problems**:
1. When I chose a file database, it takes around **7-7.5 minutes** to insert. I feel this is not very fast given that I have a solid-state drive with fast read/write. Along with that, I have approximately 10GB of RAM available as seen in Task Manager. The processor is an i5-6300U 2.4GHz.
2. The total size of the text files is approx. 97.5 MB, but after I insert the URLs, the SQLite database is approximately 350MB, i.e. almost 3.5 times the original data size. Since the database doesn't contain any other tables, indexes, etc., this database size looks a little odd.
For problem 1, I tried playing with parameters and came up with as best ones based on test runs with different parameters.
```css
table, th, td {
border: 1px solid black;
border-collapse: collapse;
}
th, td {
padding: 15px;
text-align: left;
}
```
```html
<table style="width:100%">
<tr>
<th>Configuration</th>
<th>Time</th>
</tr>
<tr><th>50,000 - with journal = delete and no transaction </th><th>0:12:09.888404</th></tr>
<tr><th>50,000 - with journal = delete and with transaction </th><th>0:22:43.613580</th></tr>
<tr><th>50,000 - with journal = memory and transaction </th><th>0:09:01.140017</th></tr>
<tr><th>50,000 - with journal = memory </th><th>0:07:38.820148</th></tr>
<tr><th>50,000 - with journal = memory and synchronous=0 </th><th>0:07:43.587135</th></tr>
<tr><th>50,000 - with journal = memory and synchronous=1 and page_size=65535 </th><th>0:07:19.778217</th></tr>
<tr><th>50,000 - with journal = memory and synchronous=0 and page_size=65535 </th><th>0:07:28.186541</th></tr>
<tr><th>50,000 - with journal = delete and synchronous=1 and page_size=65535 </th><th>0:07:06.539198</th></tr>
<tr><th>50,000 - with journal = delete and synchronous=0 and page_size=65535 </th><th>0:07:19.810333</th></tr>
<tr><th>50,000 - with journal = wal and synchronous=0 and page_size=65535 </th><th>0:08:22.856690</th></tr>
<tr><th>50,000 - with journal = wal and synchronous=1 and page_size=65535 </th><th>0:08:22.326936</th></tr>
<tr><th>50,000 - with journal = delete and synchronous=1 and page_size=4096 </th><th>0:07:35.365883</th></tr>
<tr><th>50,000 - with journal = memory and synchronous=1 and page_size=4096 </th><th>0:07:15.183948</th></tr>
<tr><th>1,00,000 - with journal = delete and synchronous=1 and page_size=65535 </th><th>0:07:13.402985</th></tr>
</table>
```
I was checking online and saw this link <https://adamyork.com/2017/07/02/fast-database-inserts-with-python-3-6-and-sqlite/> where the system is much slower than mine but still performs very well.
Two things, that stood out from this link were:
1. The table in the link had more columns than what I have.
2. The database file didn't grow 3.5 times.
I have shared the python code and the files here: <https://github.com/ksinghgithub/python_sqlite>
Can someone guide me on optimizing this code. Thanks.
Environment:
1. Windows 10 Professional on i5-6300U and 20GB RAM and 512 SSD.
2. Python 3.7.0
***Edit 1: New performance chart based on the feedback received on the UNIQUE constraint and my experiments with the cache size value.***
```
self.db.execute('CREATE TABLE blacklist (id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, url TEXT NOT NULL UNIQUE)')
```
```css
table, th, td {
border: 1px solid black;
border-collapse: collapse;
}
th, td {
padding: 15px;
text-align: left;
}
```
```html
<table>
<tr>
<th>Configuration</th>
<th>Action</th>
<th>Time</th>
<th>Notes</th>
</tr>
<tr><th>50,000 - with journal = delete and synchronous=1 and page_size=65535 cache_size = 8192</th><th>REMOVE UNIQUE FROM URL</th><th>0:00:18.011823</th><th>Size reduced to 196MB from 350MB</th><th></th></tr>
<tr><th>50,000 - with journal = delete and synchronous=1 and page_size=65535 cache_size = default</th><th>REMOVE UNIQUE FROM URL</th><th>0:00:25.692283</th><th>Size reduced to 196MB from 350MB</th><th></th></tr>
<tr><th>100,000 - with journal = delete and synchronous=1 and page_size=65535 </th><th></th><th>0:07:13.402985</th><th></th></tr>
<tr><th>100,000 - with journal = delete and synchronous=1 and page_size=65535 cache_size = 4096</th><th></th><th>0:04:47.624909</th><th></th></tr>
<tr><th>100,000 - with journal = delete and synchronous=1 and page_size=65535 cache_size = 8192</th><th></th><th>0:03:32.473927</th><th></th></tr>
<tr><th>100,000 - with journal = delete and synchronous=1 and page_size=65535 cache_size = 8192</th><th>REMOVE UNIQUE FROM URL</th><th>0:00:17.927050</th><th>Size reduced to 196MB from 350MB</th><th></th></tr>
<tr><th>100,000 - with journal = delete and synchronous=1 and page_size=65535 cache_size = default </th><th>REMOVE UNIQUE FROM URL</th><th>0:00:21.804679</th><th>Size reduced to 196MB from 350MB</th><th></th></tr>
<tr><th>100,000 - with journal = delete and synchronous=1 and page_size=65535 cache_size = default </th><th>REMOVE UNIQUE FROM URL & ID</th><th>0:00:14.062386</th><th>Size reduced to 134MB from 350MB</th><th></th></tr>
<tr><th>100,000 - with journal = delete and synchronous=1 and page_size=65535 cache_size = default </th><th>REMOVE UNIQUE FROM URL & DELETE ID</th><th>0:00:11.961004</th><th>Size reduced to 134MB from 350MB</th><th></th></tr>
</table>
```
|
2018/09/21
|
[
"https://Stackoverflow.com/questions/52451119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/139406/"
] |
SQLite uses auto-commit mode by default. This permits `begin transaction` to be omitted. But here we want all the inserts to be in a transaction, and the only way to do that is to start a transaction with `begin transaction` so that all the statements that are going to be run are all in that transaction.
The method `executemany` is only a loop over `execute` done outside Python that calls the SQLite prepare statement function only once.
The following is a really bad way to remove the last N items from a list:
```
templist = []
i = 0
while i < self.bulk_insert_entries and len(urls) > 0:
templist.append(urls.pop())
i += 1
```
It is better to do this:
```
templist = urls[-self.bulk_insert_entries:]
del urls[-self.bulk_insert_entries:]
i = len(templist)
```
The slice and del slice work even on an empty list.
Both might have the same complexity but 100K calls to append and pop costs a lot more than letting Python do it outside the interpreter.
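A quick sanity check on a small hypothetical list: both approaches leave the same remaining list and remove the same elements (the pop loop just yields the batch in reverse order):

```python
def pop_tail(urls, n):
    # Question-style approach: repeated pop() calls.
    templist = []
    i = 0
    while i < n and len(urls) > 0:
        templist.append(urls.pop())
        i += 1
    return templist

def slice_tail(urls, n):
    # Suggested approach: one slice plus one del.
    templist = urls[-n:]
    del urls[-n:]
    return templist

a, b = list(range(10)), list(range(10))
batch_a, batch_b = pop_tail(a, 4), slice_tail(b, 4)
print(a == b)                      # True: same elements remain
print(sorted(batch_a) == batch_b)  # True: same batch, reversed by pop()
```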
|
The UNIQUE constraint on column "url" is creating an implicit index on the URL. That would explain the size increase.
I don't think you can populate the table and afterwards add the unique constraint.
Your bottleneck is surely the CPU. Try the following:
1. Install toolz: `pip install toolz`
2. Use this method:
```
from toolz import partition_all
def add_blacklist_url(self, urls):
# print('add_blacklist_url:: entries = {}'.format(len(urls)))
start_time = datetime.now()
for batch in partition_all(100000, urls):
try:
start_commit = datetime.now()
self.cursor.executemany('''INSERT OR IGNORE INTO blacklist(url) VALUES(:url)''', batch)
end_commit = datetime.now() - start_commit
            print('add_blacklist_url:: total time for INSERT OR IGNORE INTO blacklist {} entries = {}'.format(len(batch), end_commit))
except sqlite3.Error as e:
print("add_blacklist_url:: Database error: %s" % e)
except Exception as e:
print("add_blacklist_url:: Exception in _query: %s" % e)
self.db.commit()
time_elapsed = datetime.now() - start_time
    print('add_blacklist_url:: total time for {} entries = {}'.format(len(urls), time_elapsed))
```
The code was not tested.
| 7,020
|
26,214,328
|
After long debugging I found why my application using python regexps is slow. Here is something I find surprising:
```
import datetime
import re
pattern = re.compile('(.*)sol(.*)')
lst = ["ciao mandi "*10000 + "sol " + "ciao mandi "*10000,
"ciao mandi "*1000 + "sal " + "ciao mandi "*1000]
for s in lst:
print "string len", len(s)
start = datetime.datetime.now()
re.findall(pattern,s)
print "time spent", datetime.datetime.now() - start
print
```
The output on my machine is:
```
string len 220004
time spent 0:00:00.002844
string len 22004
time spent 0:00:05.339580
```
The first test string is 220K long, matches, and the matching is quite fast. The second test string is 20K long, does not match and it takes 5 seconds to compute!
I knew this report <http://swtch.com/~rsc/regexp/regexp1.html> which says that the regexp implementation in Python, Perl, and Ruby is somewhat non-optimal... Is this the reason? I didn't think it could happen with such a simple expression.
**added**
My original task is to split a string trying different regex in turn. Something like:
```
for regex in ['(.*)sol(.*)', '\emph{([^{}]*)}(.*)', .... ]:
lst = re.findall(regex, text)
if lst:
assert len(lst) == 1
assert len(lst[0]) == 2
return lst[0]
```
This is to explain why I cannot use `split`. I solved my issue by replacing `(.*)sol(.*)` with `(.*?)sol(.*)` as suggested by Martijn.
Probably I should use `match` instead of `findall`... but I don't think this would have solved the issue since the regexp is going to match the entire input and hence findall should stop at first match.
Anyway my question was more about how easy is to fall in this problem for a regexp newbie... I think it is not so simple to understand that `(.*?)sol(.*)` is the solution (and for example `(.*?)sol(.*?)` is not).
|
2014/10/06
|
[
"https://Stackoverflow.com/questions/26214328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1221660/"
] |
The Thompson NFA approach changes regular expressions from default greedy to default non-greedy. Normal regular expression engines can do the same; simply change `.*` to `.*?`. You should not use greedy expressions when non-greedy will do.
Someone already built an NFA regular expression parser for Python: <https://github.com/xysun/regex>
It indeed outperforms the default Python regular expression parser for the pathological cases. *However*, it **under**-performs on *everything else*:
>
> This regex engine underperforms Python's re module on normal inputs (using Glenn Fowler's test suite -- see below)
>
>
>
Fixing the pathological case at the expense of the typical is probably a good reason not to use the NFA approach as a default engine, not when the pathological case can simply be avoided instead.
Another reason is that certain features (such as back references) are either very hard or impossible to implement using the NFA approach. Also see the [response on the Python Ideas mailing list](https://mail.python.org/pipermail/python-ideas/2007-April/000407.html).
As such, your test can be made to perform much better if you made at least one of the patterns non-greedy to avoid the catastrophic backtracking:
```
pattern = re.compile('(.*?)sol(.*)')
```
or don't use a regex at all; you could use `str.partition()` to get the prefix and postfix instead:
```
before, sol, after = s.partition('sol')
```
e.g. not all text problems are regular-expression shaped, so put down that hammer and look at the rest of your toolbox!
In addition, you could perhaps look at the alternative `re` module, [`regex`](https://pypi.python.org/pypi/regex). This module implements some basic checks for pathological cases and avoids them deftly, without having to resort to a Thompson NFA implementation. Quoting [an entry to a Python bug report tracking `regex`](https://bugs.python.org/issue2636#msg90954):
>
> The internal engine no longer interprets a form of bytecode but instead
> follows a linked set of nodes, and it can work breadth-wise as well as
> depth-first, which makes it perform much better when faced with one of
> those 'pathological' regexes.
>
>
>
This engine can run your pathological case more than 100 thousand times faster:
```
>>> import re, regex
>>> import timeit
>>> p_re = re.compile('(.*)sol(.*)')
>>> p_regex = regex.compile('(.*)sol(.*)')
>>> s = "ciao mandi "*1000 + "sal " + "ciao mandi "*1000
>>> timeit.timeit("p.findall(s)", 'from __main__ import s, p_re as p', number=1)
2.4578459990007104
>>> timeit.timeit("p.findall(s)", 'from __main__ import s, p_regex as p', number=100000)
1.955532732012216
```
Note the numbers; I limited the `re` test to 1 run and it took 2.46 seconds, while the `regex` test runs 100k times in under 2 seconds.
|
```
^(?=(.*?sol))\1(.*)$
```
You can try this. It reduces backtracking and fails faster. Try your string here:
<http://regex101.com/r/hQ1rP0/22>
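A quick Python check of what this pattern captures (note that, unlike `(.*)sol(.*)`, the first group here includes the `sol` delimiter itself — worth verifying against your needs):

```python
import re

pattern = re.compile(r'^(?=(.*?sol))\1(.*)$')

m = pattern.match("ciao sol mandi")
print(m.groups())  # ('ciao sol', ' mandi')

# A string without 'sol' fails without heavy backtracking.
print(pattern.match("ciao sal mandi"))  # None
```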
| 7,021
|
48,568,283
|
Here's an example to find the greatest common divisor of positive integers `a` and `b`, where `a <= b`. I start from the smaller number `a` and decrement by one, checking whether the value divides both numbers.
```
def gcdFinder(a, b):
testerNum = a
def tester(a, b):
if b % testerNum == 0 and a % testerNum == 0:
return testerNum
else:
testerNum -= 1
tester(a, b)
return tester(a, b)
print(gcdFinder(9, 15))
```
Then, I got error message,
`UnboundLocalError: local variable 'testerNum' referenced before assignment`.
After using `global testerNum`, it successfully showed the answer `3` in Spyder console...
[](https://i.stack.imgur.com/aJAcD.png)
but in pythontutor.com, it said `NameError: name 'testerNum' is not defined` ([link](http://pythontutor.com/visualize.html#code=def%20gcdFinder%28a,%20b%29%3A%0A%0A%20%20%20%20testerNum%20%3D%20a%20%20%20%0A%0A%20%20%20%20def%20tester%28a,%20b%29%3A%0A%20%20%20%20%20%20%20%20global%20testerNum%0A%20%20%20%20%20%20%20%20if%20b%20%25%20testerNum%20%3D%3D%200%20and%20a%20%25%20testerNum%20%3D%3D%200%3A%0A%20%20%20%20%20%20%20%20%20%20%20%20return%20testerNum%0A%20%20%20%20%20%20%20%20else%3A%0A%20%20%20%20%20%20%20%20%20%20%20%20testerNum%20-%3D%201%0A%20%20%20%20%20%20%20%20%20%20%20%20tester%28a,%20b%29%0A%0A%20%20%20%20return%20tester%28a,%20b%29%0A%0Aprint%28gcdFinder%289,%2015%29%29&cumulative=false&curInstr=8&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)).
[](https://i.stack.imgur.com/6SFHd.png)
~~**Q1:** In Spyder, I think that the `global testerNum` is a problem since `testerNum = a` is not in global scope. It's inside the scope of function `gcdFinder`. Is this description correct? If so, how did Spyder show the answer?~~
**Q2:** In pythontutor, say the last screenshot, how to solve the NameError problem in pythontutor?
~~**Q3:** Why there's difference between the results of Spyder and pythontutor, and which is correct?~~
**Q4:** Is it better not to use `global` method?
--
UPDATE: The Spyder issue was due to a value stored from a previous run, so `testerNum` was already defined as `9`. This made the `global testerNum` work. I've deleted Q1 & Q3.
|
2018/02/01
|
[
"https://Stackoverflow.com/questions/48568283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4860812/"
] |
Answers to **Q2** and **Q4**.
As I wrote in the comments, you can pass `testerNum` as a parameter.
Your code would then look like this:
```
def gcdFinder(a, b):
testerNum = a
def tester(a, b, testerNum):
if b % testerNum == 0 and a % testerNum == 0:
return testerNum
else:
testerNum -= 1
return tester(a, b, testerNum) # you have to return this in order for the code to work
return tester(a, b, testerNum)
print(gcdFinder(9, 15))
```
edit: (see comment)
|
To address only question 4: yes, it's better not to use `global` at all. `global` generally highlights poor code design.
You're going to a lot of trouble; I strongly recommend that you look up standard methods of calculating the GCD, and implement Euclid's Algorithm instead. Coding details left as an exercise for the student.
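For reference, a minimal sketch of Euclid's algorithm (the standard iterative version, independent of the question's code):

```python
def gcd(a, b):
    # Euclid's algorithm: replace (a, b) with (b, a % b)
    # until the remainder is zero; the last nonzero value is the GCD.
    while b:
        a, b = b, a % b
    return a

print(gcd(9, 15))  # 3
```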
| 7,024
|
70,643,142
|
Say I have this array:
```python
array = np.array([[1,2,3],[4,5,6],[7,8,9]])
```
Returns:
```none
123
456
789
```
How should I go about getting it to return something like this?
```none
111222333
111222333
111222333
444555666
444555666
444555666
777888999
777888999
777888999
```
|
2022/01/09
|
[
"https://Stackoverflow.com/questions/70643142",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16809168/"
] |
You'd have to use [`np.repeat`](https://numpy.org/doc/stable/reference/generated/numpy.repeat.html#numpy-repeat) twice here.
```
np.repeat(np.repeat(array, 3, axis=1), 3, axis=0)
# [[1 1 1 2 2 2 3 3 3]
# [1 1 1 2 2 2 3 3 3]
# [1 1 1 2 2 2 3 3 3]
# [4 4 4 5 5 5 6 6 6]
# [4 4 4 5 5 5 6 6 6]
# [4 4 4 5 5 5 6 6 6]
# [7 7 7 8 8 8 9 9 9]
# [7 7 7 8 8 8 9 9 9]
# [7 7 7 8 8 8 9 9 9]]
```
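For completeness, the same block repetition can also be obtained with a Kronecker product against a block of ones (a sketch equivalent to the nested `repeat` above):

```python
import numpy as np

array = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# np.kron multiplies each element of `array` into a 3x3 block of itself.
blocks = np.kron(array, np.ones((3, 3), dtype=array.dtype))
```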
|
For fun (because the nested `repeat` will be more efficient), you could use `einsum` on the input array and an array of `ones` that has extra dimensions to create a multidimensional array with the dimensions in an ideal order to `reshape` to the expected 2D shape:
```
np.einsum('ij,ikjl->ikjl', array, np.ones((3,3,3,3), dtype=int)).reshape(9,9)
```
The generic method being:
```
i,j = array.shape
k = 3 # extra rows
l = 3 # extra cols
np.einsum('ij,ikjl->ikjl', array, np.ones((i,k,j,l), dtype=int)).reshape(i*k,j*l)
```
Output:
```
array([[1, 1, 1, 2, 2, 2, 3, 3, 3],
[1, 1, 1, 2, 2, 2, 3, 3, 3],
[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6],
[4, 4, 4, 5, 5, 5, 6, 6, 6],
[4, 4, 4, 5, 5, 5, 6, 6, 6],
[7, 7, 7, 8, 8, 8, 9, 9, 9],
[7, 7, 7, 8, 8, 8, 9, 9, 9],
[7, 7, 7, 8, 8, 8, 9, 9, 9]])
```
What is however nice with this method, is that it's quite easy to change the order to obtain other patterns or work with higher dimensions.
Example with other patterns:
```
>>> np.einsum('ij,iklj->iklj', array, np.ones((3,3,3,3), dtype=int)).reshape(9,9)
array([[1, 2, 3, 1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3, 1, 2, 3],
[4, 5, 6, 4, 5, 6, 4, 5, 6],
[4, 5, 6, 4, 5, 6, 4, 5, 6],
[4, 5, 6, 4, 5, 6, 4, 5, 6],
[7, 8, 9, 7, 8, 9, 7, 8, 9],
[7, 8, 9, 7, 8, 9, 7, 8, 9],
[7, 8, 9, 7, 8, 9, 7, 8, 9]])
>>> np.einsum('ij,kjil->kjil', array, np.ones((3,3,3,3), dtype=int)).reshape(9,9)
array([[1, 1, 1, 4, 4, 4, 7, 7, 7],
[2, 2, 2, 5, 5, 5, 8, 8, 8],
[3, 3, 3, 6, 6, 6, 9, 9, 9],
[1, 1, 1, 4, 4, 4, 7, 7, 7],
[2, 2, 2, 5, 5, 5, 8, 8, 8],
[3, 3, 3, 6, 6, 6, 9, 9, 9],
[1, 1, 1, 4, 4, 4, 7, 7, 7],
[2, 2, 2, 5, 5, 5, 8, 8, 8],
[3, 3, 3, 6, 6, 6, 9, 9, 9]])
```
| 7,027
|
53,080,894
|
I am trying to make a GUI where the quantity of tkinter entries is decided by the user.
My Code:
```
from tkinter import*
root = Tk()
def createEntries(quantity):
for num in range(quantity):
usrInput = Entry(root, text = num)
usrInput.pack()
createEntries(10)
root.mainloop()
```
This code is based on [this tutorial](http://usingpython.com/dynamically-creating-widgets/) i found:
```
for num in range(10):
btn = tkinter.Button(window, text=num)
btn.pack(side=tkinter.LEFT)
```
The problem is that I can only access the input in the latest created widget, because they all have the same name. Is there a way of dynamically creating widgets with unique names?
Any advice would be greatly appreciated
|
2018/10/31
|
[
"https://Stackoverflow.com/questions/53080894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10585009/"
] |
The solution is to store the widgets in a data structure such as a list or dictionary. For example:
```
entries = []
for num in range(quantity):
usrInput = Entry(root, text = num)
usrInput.pack()
entries.append(usrInput)
```
Later, you can iterate over this list to get the values:
```
for entry in entries:
value = entry.get()
print("value: {}".format(value))
```
And, of course, you can access specific entries by number:
```
print("first item: {}".format(entries[0].get()))
```
|
With the following code you can adjust the number of `Button`s and `Entry`s depending on the variable `fields`. I hope it helps.
```
from tkinter import *
fields = 'Last Name', 'First Name', 'Job', 'Country'
def fetch(entries):
for entry in entries:
field = entry[0]
text = entry[1].get()
print('%s: "%s"' % (field, text))
def makeform(root, fields):
entries = []
for field in fields:
row = Frame(root)
lab = Label(row, width=15, text=field, anchor='w')
ent = Entry(row)
row.pack(side=TOP, fill=X, padx=5, pady=5)
lab.pack(side=LEFT)
ent.pack(side=RIGHT, expand=YES, fill=X)
entries.append((field, ent))
return entries
if __name__ == '__main__':
root = Tk()
ents = makeform(root, fields)
root.bind('<Return>', (lambda event, e=ents: fetch(e)))
b1 = Button(root, text='Show',
command=(lambda e=ents: fetch(e)))
b1.pack(side=LEFT, padx=5, pady=5)
b2 = Button(root, text='Quit', command=root.quit)
b2.pack(side=LEFT, padx=5, pady=5)
root.mainloop()
```
| 7,028
|
67,878,084
|
I'm trying to teach myself Python and I am stuck on for/while loops. I know the difference between the two, but once nested loops get involved I'm left feeling all over the place in terms of determining the hierarchy of the loops. Is there a way I can get better at this? And how do I troubleshoot my loops once they give me an infinite loop? :(
For example, this code I was writing, to count the number of specific colored cars on each level of a garage:
```
level=1
red=0
blue=0
black=0
white=0
total=0
done=0
totalperlevel=0
sumofcars=0
currentlevel=0
cars=0
while level < 6:
cars=input(f"Enter the colour of a car on level {level}:").lower()
level+=1
while cars != done:
if cars == red:
totalperlevel=totalperlevel+1
red=red+1
elif cars == blue:
blue=blue+1
totalperlevel=totalperlevel+1
elif cars == white:
totalperlevel=totalperlevel+1
white=whitecar+1
elif cars == black:
black=black+1
totalperlevel=totalperlevel+1
else:
totalperlevel=totalperlevel
sumofcars= red+blue+white+black
cars=input(f"Enter the colour of a car on level {level}:").lower()
print(f"Total number of the 4-colours cars on Level {level} is {totalperlevel}")
print(f"Total number of red cars in the garage: {red}")
print(f"Total number of blue cars in the garage: {blue}")
print(f"Total number of black cars in the garage: {black}")
print(f"Total number of white cars in the garage: {white}")
print(f"Total number of the 4-colours cars altogether: {sumofcars}")
```
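One way to keep the hierarchy manageable is to separate data collection from counting. A sketch of the same tally (the `tally_cars` helper and its list-of-lists input are invented for illustration, not part of the original code):

```python
from collections import Counter

COLOURS = {"red", "blue", "black", "white"}

def tally_cars(levels):
    """levels: one list of colour strings per garage level."""
    totals = Counter()
    for level_num, cars in enumerate(levels, start=1):
        # Keep only the four tracked colours, case-insensitively.
        matching = [c.lower() for c in cars if c.lower() in COLOURS]
        totals.update(matching)
        print(f"Level {level_num}: {len(matching)} of the 4 colours")
    return totals

counts = tally_cars([["red", "Blue"], ["white", "green", "red"]])
```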
|
2021/06/07
|
[
"https://Stackoverflow.com/questions/67878084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16078295/"
] |
If you want it to be a `dm` command we can use the restriction `@commands.dm_only()`
However we then also have to check where the `answer` was given, we do that via some kind of custom `check`. I have modified your command a bit, but you can make the changes again personally.
**Take a look at the following code:**
```py
@commands.dm_only() # Make it a DM-only command
@commands.command()
async def verify(self, ctx, length=10):
def check(m):
return ctx.author == m.author and isinstance(m.channel, discord.DMChannel)
verify_characters = []
for _ in range(length):
verify_characters.append(random.choice(string.ascii_letters + string.digits + "!§$%&/()=?`-.<>"))
verify_msg = "".join(verify_characters)
print(verify_msg)
await ctx.author.send(f"Verify with the number: {verify_msg}")
try: # Try to get the answer
answer = await bot.wait_for("message", check=check)
print(answer.content)
if verify_msg == answer.content:
await ctx.author.send("Verified!")
else:
await ctx.author.send("Verify again!")
except: # Used if you for example want to set a timelimit
pass
```
|
Edited to show full answer.
Hey ho done it lol.
Basically the message object contains a lot of data, so you need to pull the content of the message using `answer.content`.
<https://discordpy.readthedocs.io/en/latest/api.html?highlight=message#discord.Message.content> for reference
```
@bot.command()
async def verify(ctx):
length=10
verify_characters = []
for _ in range(length):
verify_characters.append(random.choice(string.ascii_letters + string.digits + "!§$%&/()=?`-.<>"))
verify_msg = "".join(verify_characters)
print(verify_msg)
await ctx.author.send(f"Verify with the number {verify_msg}")
answer = await bot.wait_for('message')
print(answer.content)
print("done")
if verify_msg == answer.content:
await ctx.author.send("Verified")
else:
await ctx.author.send(f"Verify again!")
```
Give that a test run and let me know what happens ;)
| 7,029
|
1,205,449
|
Can anyone explain to me how to handle more complex data sets like team stats, weather, dice, or complex number types?
I understand all the math and how everything works; I just don't know how to input more complex data, and how to read the data it spits out.
If someone could provide examples in Python, that would be a big help.
|
2009/07/30
|
[
"https://Stackoverflow.com/questions/1205449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/147602/"
] |
You have to encode your input and your output into something that can be represented by the neural network units (for example, 1 for "x has property p" and -1 for "x doesn't have property p" if your units' range is [-1, 1]).
The way you encode your input and the way you decode your output depends on what you want to train the neural network for.
Moreover, there are many "neural network" algorithms and learning rules for different tasks (back-propagation, Boltzmann machines, self-organizing maps).
|
More complex data usually means adding more neurons in the input and output layers.
You can feed each "field" of your record, properly encoded as a real value (normalized, etc.), to an input neuron, or you can decompose even further into bit fields, assigning saturated inputs of 1 or 0 to the neurons... for the output, it depends on how you train the neural network; it will try to mimic the training-set outputs.
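For example, a categorical field (say, a die face) can be encoded with one input neuron per category. A minimal one-hot sketch (the `one_hot` helper is illustrative):

```python
def one_hot(value, categories):
    """Encode a categorical value as one 1.0 among 0.0s, one slot per category."""
    return [1.0 if c == value else 0.0 for c in categories]

faces = [1, 2, 3, 4, 5, 6]
encoded = one_hot(3, faces)  # [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
```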
| 7,030
|
2,120,332
|
I think this is more a Python question than a Django one.
But basically, in model A I'm doing:
```
from myproject.modelb.models import ModelB
```
and in model B:
```
from myproject.modela.models import ModelA
```
Result:
>
> cannot import name ModelA
>
>
>
Am I doing something forbidden? Thanks
|
2010/01/22
|
[
"https://Stackoverflow.com/questions/2120332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/234167/"
] |
A Python module is imported by executing it top to bottom in a new namespace. When module A imports module B, the evaluation of A.py is paused until module B is loaded. When module B then imports module A, it gets the partly-initialized namespace of module A -- in your case, it lacks the `ModelA` class because the import of `myproject.modelb.models` happens before the definition of that class.
In Django you can fix this by referring to a model by name instead of by class object. So, instead of saying
```
from myproject.modela.models import ModelA
class ModelB(models.Model):
a = models.ForeignKey(ModelA)
```
you would use (without the import):
```
class ModelB(models.Model):
a = models.ForeignKey('ModelA')
```
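The partial-initialization behaviour described above can be reproduced outside Django with two throwaway modules (the module names `mod_a`/`mod_b` are invented for this demo):

```python
import os
import sys
import tempfile
import textwrap

# Write two modules that import each other at the top level.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "mod_a.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from mod_b import B   # runs before class A below exists
        class A: pass
    """))
with open(os.path.join(tmp, "mod_b.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from mod_a import A   # mod_a is only partly initialized here
        class B: pass
    """))

sys.path.insert(0, tmp)
error = None
try:
    import mod_a  # triggers mod_b, which fails to find mod_a.A
except ImportError as e:
    error = e
print(error)
```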
|
Mutual imports usually mean you've designed your models incorrectly.
When A depends on B, you should not have B also depending on A.
Break B into two parts.
B1 - depends on A.
B2 - does not depend on A.
A depends on B1. B1 depends on B2. Circularity removed.
| 7,035
|
5,425,725
|
I have created an Objective-C framework that I would like to import and access through a Python script. I understand how to import things in Python, but what do I need to do on the Objective-C side to make that framework importable?
Thanks
|
2011/03/24
|
[
"https://Stackoverflow.com/questions/5425725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/202431/"
] |
You can just use [PyObjC](http://pyobjc.sourceforge.net/), which is included in Mac OS X 10.5 and later.
|
I'm not sure if this particular combination works, but you might be able to use [SWIG](http://swig.org/) to create a Python module out of your Objective-C which can then be imported into Python.
| 7,036
|
11,842,202
|
I am trying to upgrade my Plone version from 3.3.5 to 4.0. For this I went to this site: [updating plone](http://plone.org/documentation/manual/upgrade-guide/version/upgrading-plone-3-x-to-4.0/buildout-3-4). But I got stuck at the first step. Plone 3 runs on Python 2.4, but for Plone 4.x I will need Python 2.6. How do I upgrade my Python version? In my buildout.cfg I have:
```
$extra-paths = ${instance:zope2-location}/lib/py
```
and in my versions.cfg, I have external dependencies and in that section I have `python-openid = 2.2.4`.
|
2012/08/07
|
[
"https://Stackoverflow.com/questions/11842202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/596757/"
] |
How to upgrade Python really depends on the underlying OS (operating system). If an OS-specific upgrade fails, download Python from <http://www.python.org/download/> and install it from source.
You might have to update some of the paths in your buildout.cfg.
|
You don't upgrade Python, you install a new version in parallel.
| 7,039
|
26,230,028
|
I just started using Crossbar.io to implement a live stats page. I've looked at a lot of code examples, but I can't figure out how to do this:
I have a Django service (to avoid confusion, you can assume I'm talking about a function in views.py) and I'd like it to publish messages to a specific topic whenever it gets called. I've seen these approaches: (1) [extending ApplicationSession](http://crossbar.io/docs/Python-Application-Components/) and (2) [using an Application instance that is "run"](https://github.com/tavendo/AutobahnPython/blob/master/examples/twisted/wamp/app/hello/hello.py).
None of them works for me, because the Django service doesn't live inside a class and is not executed as a stand-alone Python file either, so I can't find a way to call the "publish" method (which is the only thing I want to do on the server side).
I tried to get an instance of "StatsBackend", which extends ApplicationSession, and publish something... But StatsBackend.\_instance is always None (even when I execute 'crossbar start' and `StatsBackend.__init__()` is called).
StatsBackend.py:
```
from twisted.internet.defer import inlineCallbacks
from autobahn import wamp
from autobahn.twisted.wamp import ApplicationSession
class StatsBackend(ApplicationSession):
_instance = None
def __init__(self, config):
ApplicationSession.__init__(self, config)
StatsBackend._instance = self
@classmethod
def update_stats(cls, amount):
if cls._instance:
cls._instance.publish('com.xxx.statsupdate', {'amount': amount})
@inlineCallbacks
def onJoin(self, details):
res = yield self.register(self)
print("CampaignStatsBackend: {} procedures registered!".format(len(res)))
```
test.py:
```
import StatsBackend
StatsBackend.update_stats(100) #Doesn't do anything, StatsBackend._instance is None
```
|
2014/10/07
|
[
"https://Stackoverflow.com/questions/26230028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1445416/"
] |
Django is a blocking WSGI application, and that does not blend well with AutobahnPython, which is non-blocking (runs on top of Twisted or asyncio).
However, Crossbar.io has a built-in REST bridge, which includes a [HTTP Pusher](https://github.com/crossbario/crossbar/wiki/HTTP%20Pusher%20Service) to which you can submit events via any HTTP/POST capable client. Crossbar.io will forward those events to regular WAMP subscribers (eg via WebSocket in real-time).
Crossbar.io also comes with a complete application template to demonstrate the above functionality. To try:
```
cd ~/test1
crossbar init --template pusher
crossbar start
```
Open your browser at `http://localhost:8080` (open the JS console) and in a second terminal
```
curl -H "Content-Type: application/json" \
-d '{"topic": "com.myapp.topic1", "args": ["Hello, world"]}' \
http://127.0.0.1:8080/push
```
You can then do the publish from within a blocking application like Django.
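The same POST can be issued from Python with just the standard library. A sketch mirroring the curl example (the endpoint URL and topic are taken from it; an actual call needs a running router):

```python
import json
import urllib.request

def build_payload(topic, args):
    """JSON body in the shape the /push endpoint expects."""
    return json.dumps({"topic": topic, "args": args}).encode("utf-8")

def push_event(url, topic, args):
    """POST a publish request to the router's HTTP pusher endpoint."""
    req = urllib.request.Request(
        url,
        data=build_payload(topic, args),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Equivalent to the curl call above (requires a running router):
# push_event("http://127.0.0.1:8080/push", "com.myapp.topic1", ["Hello, world"])
```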
|
I found what I needed: it is possible to publish on a topic with an HTTP POST request.
You can read the doc for more information: <https://github.com/crossbario/crossbar/wiki/Using-the-REST-to-WebSocket-Pusher>
| 7,040
|
21,834,702
|
Like I said in the title, my script only seems to work on the first line.
Here is my script:
```
#!/usr/bin/python
import sys
def main():
a = sys.argv[1]
f = open(a,'r')
lines = f.readlines()
w = 0
for line in lines:
spot = 0
cp = line
for char in reversed(cp):
x = -1
if char == ' ':
del line[x]
w += 0
if char != '\n' or char != ' ':
lines[spot] = line
spot += 1
break
x += 1
f.close()
f = open(a,'w')
f.writelines(lines)
print("White Space deleted: "+str(w))
if __name__ == "__main__":
main()
```
I'm not too experienced when it comes to loops.
|
2014/02/17
|
[
"https://Stackoverflow.com/questions/21834702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2353168/"
] |
The following script do the same thing as your program, more compactly:
```
import fileinput
deleted = 0
for line in fileinput.input(inplace=True):
stripped = line.rstrip()
    deleted += len(line.rstrip('\n')) - len(stripped)  # don't count the newline
print(stripped)
print("Whitespace deleted: {}".format(deleted))
```
Here [`str.rstrip()`](http://docs.python.org/3/library/stdtypes.html#str.rstrip) removes *all* whitespace from the end of a line (newlines, spaces and tabs).
The [`fileinput` module](http://docs.python.org/3/library/fileinput.html) takes care of handling `sys.argv` for you, opening files one by one if you name more than one file.
Using `print()` will add the newline back on to the end of the stripped lines.
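The same stripping-and-counting logic, factored into a pure function that is easy to test (the `strip_trailing` name is illustrative):

```python
def strip_trailing(text):
    """Return (cleaned_text, chars_removed) with trailing blanks stripped per line."""
    removed = 0
    cleaned = []
    for line in text.splitlines():  # splitlines() drops the newlines for us
        stripped = line.rstrip()
        removed += len(line) - len(stripped)
        cleaned.append(stripped)
    return "\n".join(cleaned), removed

print(strip_trailing("foo  \nbar\t\nbaz"))  # ('foo\nbar\nbaz', 3)
```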
|
`rstrip()` is probably what you want to use to achieve this.
```
>>> 'Here is my string '.rstrip()
'Here is my string'
```
A more compact way to iterate backwards over strings is
```
>>> for c in 'Thing'[::-1]:
print(c)
g
n
i
h
T
```
`[::-1]` is slice notation. Slice notation is written as `[start:stop:step]`. In my example, a `-1` for the step means it will step backwards from the end by one index. `[x:y:z]` will start at index `x`, stop before index `y`, and move forward `z` places each step.
| 7,041
|
56,696,940
|
I have installed CMake, but dlib, which is required for installing the face\_recognition module, still won't install.
I get the error below whenever I try to install dlib using `pip install dlib`:
```
ERROR: Complete output from command 'c:\users\sunil\appdata\local\programs\python\python37\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\sunil\AppData\Local\Temp\pip-wheel-2fd_0qt9' --python-tag cp37:
ERROR: running bdist_wheel
running build
running build_py
package init file 'dlib\__init__.py' not found (or not a regular file)
running build_ext
Building extension for Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)]
Invoking CMake setup: 'cmake C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\tools\python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\build\lib.win-amd64-3.7 -DPYTHON_EXECUTABLE=c:\users\sunil\appdata\local\programs\python\python37\python.exe -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\build\lib.win-amd64-3.7 -A x64'
-- Building for: NMake Makefiles
CMake Error in CMakeLists.txt:
Generator
NMake Makefiles
does not support platform specification, but platform
x64
was specified.
CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
See also "C:/Users/sunil/AppData/Local/Temp/pip-install-oufh_gcl/dlib/build/temp.win-amd64-3.7/Release/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\setup.py", line 261, in <module>
'Topic :: Software Development',
File "c:\users\sunil\appdata\local\programs\python\python37\lib\site-packages\setuptools\__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "c:\users\sunil\appdata\local\programs\python\python37\lib\site-packages\wheel\bdist_wheel.py", line 192, in run
self.run_command('build')
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\command\build.py", line 135, in run
self.run_command(cmd_name)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\setup.py", line 135, in run
self.build_extension(ext)
File "C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\setup.py", line 172, in build_extension
subprocess.check_call(cmake_setup, cwd=build_folder)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\subprocess.py", line 328, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', 'C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\tools\\python', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\build\\lib.win-amd64-3.7', '-DPYTHON_EXECUTABLE=c:\\users\\sunil\\appdata\\local\\programs\\python\\python37\\python.exe', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\build\\lib.win-amd64-3.7', '-A', 'x64']' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib
Running setup.py install for dlib ... error
ERROR: Complete output from command 'c:\users\sunil\appdata\local\programs\python\python37\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\sunil\AppData\Local\Temp\pip-record-89jcoq15\install-record.txt' --single-version-externally-managed --compile:
ERROR: running install
running build
running build_py
package init file 'dlib\__init__.py' not found (or not a regular file)
running build_ext
Building extension for Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)]
Invoking CMake setup: 'cmake C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\tools\python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\build\lib.win-amd64-3.7 -DPYTHON_EXECUTABLE=c:\users\sunil\appdata\local\programs\python\python37\python.exe -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\build\lib.win-amd64-3.7 -A x64'
-- Building for: NMake Makefiles
CMake Error in CMakeLists.txt:
Generator
NMake Makefiles
does not support platform specification, but platform
x64
was specified.
CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
See also "C:/Users/sunil/AppData/Local/Temp/pip-install-oufh_gcl/dlib/build/temp.win-amd64-3.7/Release/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\setup.py", line 261, in <module>
'Topic :: Software Development',
File "c:\users\sunil\appdata\local\programs\python\python37\lib\site-packages\setuptools\__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "c:\users\sunil\appdata\local\programs\python\python37\lib\site-packages\setuptools\command\install.py", line 61, in run
return orig.install.run(self)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\command\install.py", line 545, in run
self.run_command('build')
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\command\build.py", line 135, in run
self.run_command(cmd_name)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\setup.py", line 135, in run
self.build_extension(ext)
File "C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\setup.py", line 172, in build_extension
subprocess.check_call(cmake_setup, cwd=build_folder)
File "c:\users\sunil\appdata\local\programs\python\python37\lib\subprocess.py", line 328, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', 'C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\tools\\python', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\build\\lib.win-amd64-3.7', '-DPYTHON_EXECUTABLE=c:\\users\\sunil\\appdata\\local\\programs\\python\\python37\\python.exe', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\build\\lib.win-amd64-3.7', '-A', 'x64']' returned non-zero exit status 1.
----------------------------------------
ERROR: Command "'c:\users\sunil\appdata\local\programs\python\python37\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\sunil\\AppData\\Local\\Temp\\pip-install-oufh_gcl\\dlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\sunil\AppData\Local\Temp\pip-record-89jcoq15\install-record.txt' --single-version-externally-managed --compile" failed with error code 1 in C:\Users\sunil\AppData\Local\Temp\pip-install-oufh_gcl\dlib\
```
can anyone tell me the easiest way to install the face\_recognition module for my windows 10
|
2019/06/21
|
[
"https://Stackoverflow.com/questions/56696940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9628342/"
] |
**If you have conda installed in your system then follow these steps:**
* conda create -n py36 python=3.6
* activate py36
* conda config --add channels conda-forge
* conda install numpy
* conda install scipy
* conda install dlib
* pip install --no-dependencies face\_recognition
|
If your OS is Windows 7:
1. download and install a dlib `.whl` (x64 or x86)
2. download and install the CMake app and add it to PATH
3. pip install cmake
4. pip install dlib (only on Python 3.6 to 3.7, with the ".whl" file)
5. pip install face\_recognition
enjoy face\_recognition
| 7,043
|
71,822,376
|
Is there any way I can make **Excel** add-ins/extensions using Python?
I have tried JavaScript but haven't found anything about making add-ins in Python.
|
2022/04/11
|
[
"https://Stackoverflow.com/questions/71822376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17767517/"
] |
Try this code
```
nav .wrapper{
display: flex;
justify-content: space-between;
}
nav ul{
display: flex;
}
```
|
It should contain http
For example
href="https://classroom.udacity.com/nanodegrees/nd004-1mac-v2/dashboard/overview"
| 7,053
|
68,384,185
|
Hi, I'm trying to build out a basic Django app within a python/alpine image.
I am getting an error telling me that there is no matching distribution for the version of Django that I am looking for.
The Dockerfile in using a `python:3.9-alpine3.14` image and my requirements file is targeting `Django>=3.2.5,<3.3`.
From what I understand these should be compatible: Django >= 3 and Python 3.9.
When I run `docker-compose build` the RUN command gets so far as the `apk add` commands but fails on `pip`. I did try changing this to `pip3` but this had no effect.
Any idea what I am missing here that will fix this issue?
`requirements.txt`
```
Django>=3.2.5,<3.3
uWSGI>=2.0.19.1,<2.1
```
`Dockerfile`
```
FROM python:3.9-alpine3.14
LABEL maintainer="superapp"
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /requirements.txt
RUN mkdir /app
COPY ./app /app
COPY ./scripts /scripts
WORKDIR /app
EXPOSE 8000
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
apk add --update --no-cache --virtual .build-deps \
build-base \
gcc \
linux-headers && \
/py/bin/pip install -r /requirements.txt && \
apk del .build-deps && \
adduser --disabled-password --no-create-home rsecuser && \
mkdir -p /vol/web/static && \
chown -R rsecuser:rsecuser /vol && \
chmod -R 755 /vol && \
chmod -R +x /scripts
ENV PATH="/scripts:/py/bin:$PATH"
USER rsecuser
CMD ["run.sh"]
```
`docker-compose.yml`
```
version: "3.9"
services:
app:
build:
context: .
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
ports:
- 8000:8000
volumes:
- ./app:/app
- ./data/web:/vol/web
environment:
- SECRET_KEY=development_key
- DEBUG=1
```
Error
```
...
...
...
(20/22) Installing build-base (0.5-r2)
(21/22) Installing linux-headers (5.10.41-r0)
(22/22) Installing .build-deps (20210714.193049)
Executing busybox-1.33.1-r2.trigger
OK: 212 MiB in 58 packages
ERROR: Could not find a version that satisfies the requirement Django<3.3,>=3.2.5 (from versions: none)
ERROR: No matching distribution found for Django<3.3,>=3.2.5
ERROR: Service 'app' failed to build : The command '/bin/sh -c python -m venv /py && /py/bin/pip install --upgrade pip && apk add --update --no-cache --virtual .build-deps build-base gcc linux-headers && /py/bin/pip install -r /requirements.txt && apk del .build-deps && adduser --disabled-password --no-create-home rsecuser && mkdir -p /vol/web/static && chown -R rsecuser:rsecuser /vol && chmod -R 755 /vol && chmod -R +x /scripts' returned a non-zero code: 1(20/22) Installing build-base (0.5-r2)
(21/22) Installing linux-headers (5.10.41-r0)
(22/22) Installing .build-deps (20210714.193049)
Executing busybox-1.33.1-r2.trigger
OK: 212 MiB in 58 packages
ERROR: Could not find a version that satisfies the requirement Django<3.3,>=3.2.5 (from versions: none)
ERROR: No matching distribution found for Django<3.3,>=3.2.5
ERROR: Service 'app' failed to build : The command '/bin/sh -c python -m venv /py && /py/bin/pip install --upgrade pip && apk add --update --no-cache --virtual .build-deps build-base gcc linux-headers && /py/bin/pip install -r /requirements.txt && apk del .build-deps && adduser --disabled-password --no-create-home rsecuser && mkdir -p /vol/web/static && chown -R rsecuser:rsecuser /vol && chmod -R 755 /vol && chmod -R +x /scripts' returned a non-zero code: 1
```
As per the comments below, here is the same command run with `/py/bin/pip -vv install -r /requirements.txt` in the Dockerfile.
```
(21/22) Installing linux-headers (5.10.41-r0)
(22/22) Installing .build-deps (20210714.195119)
Executing busybox-1.33.1-r2.trigger
OK: 212 MiB in 58 packages
Using pip 21.1.3 from /py/lib/python3.9/site-packages/pip (python 3.9)
Non-user install because user site-packages disabled
Created temporary directory: /tmp/pip-ephem-wheel-cache-gyetbpa_
Created temporary directory: /tmp/pip-req-tracker-og42yr4p
Initialized build tracking at /tmp/pip-req-tracker-og42yr4p
Created build tracker: /tmp/pip-req-tracker-og42yr4p
Entered build tracker: /tmp/pip-req-tracker-og42yr4p
Created temporary directory: /tmp/pip-install-p5b3_acb
1 location(s) to search for versions of django:
* https://pypi.org/simple/django/
Fetching project page and analyzing links: https://pypi.org/simple/django/
Getting page https://pypi.org/simple/django/
Found index url https://pypi.org/simple
Looking up "https://pypi.org/simple/django/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/django/ HTTP/1.1" 200 42407
Could not fetch URL https://pypi.org/simple/django/: connection error: HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. - skipping
Skipping link: not a file: https://pypi.org/simple/django/
Given no hashes to check 0 links for project 'django': discarding no candidates
ERROR: Could not find a version that satisfies the requirement Django<3.3,>=3.2.5 (from versions: none)
ERROR: No matching distribution found for Django<3.3,>=3.2.5
Exception information:
Traceback (most recent call last):
File "/py/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 341, in resolve
name, crit = self._merge_into_criterion(r, parent=None)
File "/py/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _merge_into_criterion
raise RequirementsConflicted(criterion)
pip._vendor.resolvelib.resolvers.RequirementsConflicted: Requirements conflict: SpecifierRequirement('Django<3.3,>=3.2.5')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/py/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 127, in resolve
result = self._result = resolver.resolve(
File "/py/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 473, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/py/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 343, in resolve
raise ResolutionImpossible(e.criterion.information)
pip._vendor.resolvelib.resolvers.ResolutionImpossible: [RequirementInformation(requirement=SpecifierRequirement('Django<3.3,>=3.2.5'), parent=None)]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/py/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 180, in _main
status = self.run(options, args)
File "/py/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/py/lib/python3.9/site-packages/pip/_internal/commands/install.py", line 318, in run
requirement_set = resolver.resolve(
File "/py/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 136, in resolve
raise error from e
pip._internal.exceptions.DistributionNotFound: No matching distribution found for Django<3.3,>=3.2.5
Removed build tracker: '/tmp/pip-req-tracker-og42yr4p'
ERROR: Service 'app' failed to build : The command '/bin/sh -c python -m venv /py && /py/bin/pip install --upgrade pip && apk add --update --no-cache --virtual .build-deps build-base gcc linux-headers && /py/bin/pip -vv install -r /requirements.txt && apk del .build-deps && adduser --disabled-password --no-create-home rsecuser && mkdir -p /vol/web/static && chown -R rsecuser:rsecuser /vol && chmod -R 755 /vol && chmod -R +x /scripts' returned a non-zero code: 1
```
|
2021/07/14
|
[
"https://Stackoverflow.com/questions/68384185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3346752/"
] |
Continuing a list over multiple sections is not a standard task; I think the clean way to go is definitely with a [counter](https://docs.asciidoctor.org/asciidoc/latest/attributes/counters/).
Instead of an ordered list you could instead use a [Description list](https://docs.asciidoctor.org/asciidoc/latest/lists/description/), it does not look the same but brings some flexibility and the main idea is there:
```
== Section A
{counter:list-counter}):: item 1
{counter:list-counter}):: item 2
== Section B
{counter:list-counter}):: item 3
{counter:list-counter}):: item 4
== Section C
{counter:list-counter}):: item 5
{counter:list-counter}):: item 6
```
|
Here's another hack:
```
= Document
== Section A
. item 1
. item 2
+
[discrete]
== Section B
. item 3
. item 4
+
[discrete]
== Section C
. item 5
. item 6
```
That gets the list items to have the correct item numbers, but the "discrete" headings are indented. You could use some CSS customization (say via [docinfo files](https://docs.asciidoctor.org/asciidoctor/latest/docinfo/)) to outdent them.
| 7,055
|
41,204,071
|
I have implemented the [python-social-auth](https://github.com/python-social-auth) library for Google OAuth2 in my Django project, and am successfully able to log users in with it. The library stores the `access_token` received in the response for Google's OAuth2 flow.
My question is: use of the [google-api-python-client](https://github.com/google/google-api-python-client) seems to rely on creating and authorizing a `credentials` object, then using it to build an API `service` like so:
```
...
# send user to Google consent URL, get auth_code in response
credentials = flow.step2_exchange(auth_code)
http_auth = credentials.authorize(httplib2.Http())
from apiclient.discovery import build
service = build('gmail', 'v1', http=http_auth)
# use service for API calls...
```
Since I'm starting with an `access_token` provided by python-social-auth, how do I create & authorize the API client `service` for future API calls?
**Edit**: to clarify, the code above is from the examples provided by Google.
|
2016/12/17
|
[
"https://Stackoverflow.com/questions/41204071",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/306374/"
] |
Given you already have the OAuth2 access token you can use the [`AccessTokenCredentials`](https://developers.google.com/api-client-library/python/guide/aaa_oauth#AccessTokenCredentials) class.
>
> The oauth2client.client.AccessTokenCredentials class is used when you have already obtained an access token by some other means. You can create this object directly without using a Flow object.
>
>
>
Example:
```
import httplib2
from googleapiclient.discovery import build
from oauth2client.client import AccessTokenCredentials
credentials = AccessTokenCredentials(access_token, user_agent)
http = httplib2.Http()
http = credentials.authorize(http)
service = build('gmail', 'v1', http=http)
```
|
You can check the examples provided in the Google API guides, for example sending email via the Gmail API: <https://developers.google.com/gmail/api/guides/sending>
| 7,056
|
52,221,769
|
I have two floats `no_a` and `no_b` and a couple of ranges represented as two element lists holding the lower and upper border.
I want to check if the numbers are both in one of the following ranges: `[0, 0.33]`, `[0.33, 0.66]`, or `[0.66, 1.0]`.
How can I write that statement neatly in python code?
|
2018/09/07
|
[
"https://Stackoverflow.com/questions/52221769",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5682455/"
] |
If you just want to get a `True` or `False` result, consider the following.
```
>>> a = 0.4
>>> b = 0.6
>>>
>>> ranges = [[0,0.33], [0.33,0.66], [0.66,1.0]]
>>>
>>> any(low <= a <= high and low <= b <= high for low, high in ranges)
True
```
If you have an arbitrary amount of numbers to check (not just `a` and `b`) you can generalize this to:
```
>>> numbers = [0.4, 0.6, 0.34]
>>> any(all(low <= x <= high for x in numbers) for low, high in ranges)
True
```
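If you also need to know *which* range both numbers fall into, not just whether one exists, a small helper built on the same `all` test works. This is a sketch; `shared_range` is a hypothetical name for illustration:

```python
ranges = [[0, 0.33], [0.33, 0.66], [0.66, 1.0]]

def shared_range(numbers, ranges):
    # Return the first range containing every number, or None if there is none.
    return next(
        ([low, high] for low, high in ranges
         if all(low <= x <= high for x in numbers)),
        None,
    )

print(shared_range([0.4, 0.6], ranges))  # [0.33, 0.66]
print(shared_range([0.1, 0.9], ranges))  # None
```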
|
Have a look [here](https://docs.scipy.org/doc/numpy/reference/generated/numpy.all.html).
Put your `no_a` and `no_b` into an array and check whether all entries pass your statement.
---
Second Edit:
As pointed out, the built-in `all` function outperforms the numpy version for this small dataset, so the usage of numpy has been removed:
```
ranges = [[0,0.33], [0.33,0.66], [0.66,1.0]]
for i in range(len(ranges)):
if all([ranges[i][0] < no_a < ranges[i][1],
ranges[i][0] < no_b < ranges[i][1]]):
print('Both values are in the interval of %s' %ranges[i])
```
which will print out the range that both values fall into.
| 7,058
|
17,604,130
|
I need help figuring out this code. This is my first programming class and we have an exam next week, and I am trying to do the old exams.
There is one class with nested list that I am having trouble understanding. It basically says to convert `(list of [list of ints]) -> int`.
Basically, given a list of lists, return the index of the first sublist that has an even number (in this case 0 counts as even), and if there are no even numbers return -1.
Also we are given three examples
```
>>> first_even([[9, 1, 3], [2, 5, 7], [9, 9, 7, 2]])
1
>>> first_even([[1, 3, 5], [7, 9], [1, 0]])
2
>>> first_even([[1, 3, 5]])
-1
```
We are using Python 3 in our class, and I kind of have an idea of where to begin, but I know it's wrong. I'll give it a try:
```
def first_even(L1):
count = 0
for i in range(L1):
if L1[i] % 2 = 0:
count += L1
return count
```
I thought this was it but it didn't work out.
If you guys could please help me out with hints or solution to this it would be helpful to me.
|
2013/07/11
|
[
"https://Stackoverflow.com/questions/17604130",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2574430/"
] |
If I understand correctly and you want to return the index of the first list that contains at least one even number:
```
In [1]: def first_even(nl):
...: for i, l in enumerate(nl):
...: if not all(x%2 for x in l):
...: return i
...: return -1
...:
In [2]: first_even([[9, 1, 3], [2, 5, 7], [9, 9, 7, 2]])
Out[2]: 1
In [3]: first_even([[1, 3, 5], [7, 9], [1, 0]])
Out[3]: 2
In [4]: first_even([[1, 3, 5]])
Out[4]: -1
```
[`enumerate`](http://docs.python.org/3/library/functions.html#enumerate) is a convenient built-in function that gives you both the index and the item of an iterable, so you don't need to mess with the ugly `range(len(L1))` and indexing.
[`all`](http://docs.python.org/3/library/functions.html#all) is another built-in. If all remainders are non-zero (and thus evaluate to `True`) then the list doesn't contain any even numbers.
|
There are some minor problems with your code:
* `L1[i] % 2 = 0` is using the wrong operator. `=` is for assigning variables a value, while `==` is used for equality.
* You probably meant `range(len(L1))`, as range expects an integer.
* Lastly, you're adding the whole list to the count, when you only wanted to add the index. This could be achieved with `.index()`, but this doesn't work for duplicates in the list. You can use `enumerate`, as I'm about to show below.
If you're ever working with indexes, `enumerate()` is your function:
```
def first_even(L):
for x, y in enumerate(L):
        if any(z % 2 == 0 for z in y): # If any of the numbers in the sublists are even
            return x # Return the index; the function exits here
return -1 # No even numbers found. Return -1
```
| 7,063
|
62,984,417
|
I am trying to format a string in python, but the values are not being replaced.
Here is my example...
```
uid = results[0][0]
query = """
SELECT
m.whiteUid,
m.blackUid,
u1.displayName AS whiteDisplayName,
u2.displayName AS blackDisplayName,
m.created,
m.modified
FROM matches m
INNER JOIN users u1 ON u1.uid = m.whiteUid
INNER JOIN users u2 ON u2.uid = m.blackUid
WHERE
m.whiteUid = {uid} OR m.blackUid = {uid} OR m.id = {uid}
"""
query.format(uid=uid)
```
When I run this query the string {uid} still exists in all locations.
|
2020/07/19
|
[
"https://Stackoverflow.com/questions/62984417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3321579/"
] |
You should do this:
`query = query.format(...)`.
The `format` method just returns the formatted string; it doesn't modify the string it's called on.
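A minimal demonstration of the point, using a short template rather than the full query:

```python
template = "SELECT * FROM matches WHERE whiteUid = {uid}"

template.format(uid=42)          # result thrown away -- template is unchanged
print(template)                  # SELECT * FROM matches WHERE whiteUid = {uid}

query = template.format(uid=42)  # capture the returned string instead
print(query)                     # SELECT * FROM matches WHERE whiteUid = 42
```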
|
Strings are immutable, so `format` returns a new string rather than changing the original.
Alternatively, use an f-string; it's the recommended style:
```
query = f"""
SELECT
m.whiteUid,
m.blackUid,
u1.displayName AS whiteDisplayName,
u2.displayName AS blackDisplayName,
m.created,
m.modified
FROM matches m
INNER JOIN users u1 ON u1.uid = m.whiteUid
INNER JOIN users u2 ON u2.uid = m.blackUid
WHERE
m.whiteUid = {uid} OR m.blackUid = {uid} OR m.id = {uid}
"""
```
| 7,066
|
37,724,694
|
I am just trying to pass arguments to the Python script below.
Code:
```
import json,sys,os,subprocess
arg1 = 'Site1'
arg2 = "443"
arg3 = 'admin@example.com'
arg4 = 'example@123'
arg5 = '--output req.txt'
arg6 = '-h'
obj=json.load(sys.stdin)
for i in range(len(obj['data'])):
print obj['data'][i]['value']
subprocess.call(['./malopinfo.py', arg1, arg2, arg3, arg4, obj , arg5])
```
In the above code, the variable `obj` will change at runtime, but apart from that all arguments are static.
Error:
```
root@prabhu:/home/teja/MalopInfo/version1/MalopInfo# ./crver1.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 116 0 116 0 0 5313 0 --:--:-- --:--:-- --:--:-- 5523
11.945403842773683082
Traceback (most recent call last):
File "qjson.py", line 15, in <module>
subprocess.call(['./malopinfo.py', arg1, arg2, arg3, arg4, obj])
File "/usr/lib/python2.7/subprocess.py", line 522, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
```
What I am trying to execute is:
```
python malopinfo.py Site1 443 admin@example.com example@123 11.945403842773683082 --output req.txt
```
Please help me on this.
|
2016/06/09
|
[
"https://Stackoverflow.com/questions/37724694",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5005270/"
] |
It looks to me like you're passing the entire `obj` dictionary into the command. To get the desired invocation, pass `obj['data'][i]['value']` in the arguments list to `subprocess.call`. So, the final line of your script should be
```
subprocess.call(['./malopinfo.py', arg1, arg2, arg3, arg4, obj['data'][i]['value'], arg5])
```
Or, you can make a variable to contain that on each loop iteration, whatever works.
|
You are passing the object directly. You need to convert it to a string first, as `subprocess.call` expects each argument to be a string. Get the string value of one of the object's properties, as you already have done with `obj['data'][i]['value']`, and pass that into your `subprocess.call`.
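A small sketch of the conversion, using `sys.executable` with an inline `-c` program as a stand-in for the real `malopinfo.py` script so it can run anywhere:

```python
import subprocess
import sys

value = 11.945403842773683082  # e.g. obj['data'][i]['value']

# Every element of the argument list must be a string, so wrap it in str():
ret = subprocess.call(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", str(value)]
)
print(ret)  # 0 on success
```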
| 7,069
|
51,492,621
|
I've recently attempted google's iot end-to-end example (<https://cloud.google.com/iot/docs/samples/end-to-end-sample>) out of pure interest. However, towards the final part of the process where I had to connect devices, I kept running into a run time error.
```
Creating JWT using RS256 from private key file rsa_private.pem
Connection Result: 5: The connection was refused.
Disconnected: 5: The connection was refused.
Connection Result: 5: The connection was refused.
Disconnected: 5: The connection was refused.
Traceback (most recent call last):
File "cloudiot_pubsub_example_mqtt_device.py", line 259, in <module>
main()
File "cloudiot_pubsub_example_mqtt_device.py", line 234, in main
device.wait_for_connection(5)
File "cloudiot_pubsub_example_mqtt_device.py", line 100, in
wait_for_connection
raise RuntimeError('Could not connect to MQTT bridge.')
RuntimeError: Could not connect to MQTT bridge.
```
Above is the error obtained after running the command string that was on the clipboard. Below is more detail on how I got to the error.
Regarding the device ID, I manually created it on the Google IoT platform in the registry. For the private/public RSA key pair, I generated them following Google's instructions, pasted the public key into the device's public key field, and copied the private key into the same directory as the Python files.
Thanks.
|
2018/07/24
|
[
"https://Stackoverflow.com/questions/51492621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10126223/"
] |
To solve this, just pass the correct cloud region parameter to the command: `--cloud_region=asia-east1`
|
You can try giving the cloud region when running the device script, e.g. `--cloud_region=asia-east1`:

```
python cloudiot_pubsub_example_mqtt_device.py --project_id=applied-grove-246108 --registry_id=my-registry --device_id=my-device --private_key_file=rsa_private.pem --algorithm=RS256 --cloud_region=asia-east1
```
| 7,070
|
30,103,965
|
Not by word boundaries, that is solvable.
Example:
```
#!/usr/bin/env python3
text = 'เมื่อแรกเริ่ม'
for char in text:
print(char)
```
This produces:
เ
ม
อ
แ
ร
ก
เ
ร
ม
Which obviously is not the desired output. Any ideas?
A portable representation of text is:
```
text = u'\u0e40\u0e21\u0e37\u0e48\u0e2d\u0e41\u0e23\u0e01\u0e40\u0e23\u0e34\u0e48\u0e21'
```
|
2015/05/07
|
[
"https://Stackoverflow.com/questions/30103965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2397101/"
] |
tl;dr: Use `\X` regular expression to extract user-perceived characters:
```
>>> import regex # $ pip install regex
>>> regex.findall(u'\\X', u'เมื่อแรกเริ่ม')
['เ', 'มื่', 'อ', 'แ', 'ร', 'ก', 'เ', 'ริ่', 'ม']
```
---
While I do not know Thai, I know a little French.
Consider the letter `è`. Let `s` and `s2` equal `è` in the Python shell:
```
>>> s
'è'
>>> s2
'è'
```
Same letter? To a French speaker visually, oui. To a computer, no:
```
>>> s==s2
False
```
You can create the same letter either using the actual code point for `è` or by taking the letter `e` and adding a combining code point that adds that accent character. They have different encodings:
```
>>> s.encode('utf-8')
b'\xc3\xa8'
>>> s2.encode('utf-8')
b'e\xcc\x80'
```
And different lengths:
```
>>> len(s)
1
>>> len(s2)
2
```
But visually both encodings result in the 'letter' `è`. This is called a [grapheme](http://en.wikipedia.org/wiki/Grapheme), or what the end user considers one character.
You can demonstrate the same looping behavior you are seeing:
```
>>> [c for c in s]
['è']
>>> [c for c in s2]
['e', '̀']
```
Your string has several combining characters in it. Hence a 9-grapheme Thai string to your eyes becomes a 13-character string to Python.
The solution in French is to normalize the string based on Unicode [equivalence](http://en.wikipedia.org/wiki/Unicode_equivalence):
```
>>> from unicodedata import normalize
>>> normalize('NFC', s2) == s
True
```
That does not work for many non Latin languages though. An easy way to deal with unicode strings that may be multiple code points composing a [single grapheme](http://www.regular-expressions.info/unicode.html) is with a regex engine that correctly deals with this by supporting `\X`. Unfortunately Python's included `re` module [doesn't](http://bugs.python.org/issue12733) yet.
The proposed replacement, [regex](https://pypi.python.org/pypi/regex), does support `\X` though:
```
>>> import regex
>>> text = 'เมื่อแรกเริ่ม'
>>> regex.findall(r'\X', text)
['เ', 'มื่', 'อ', 'แ', 'ร', 'ก', 'เ', 'ริ่', 'ม']
>>> len(_)
9
```
|
I cannot exactly reproduce, but here is a slightly modified version of your script, with the output on IDLE 3.4 on a Windows 7 64-bit system:
```
>>> for char in text:
print(char, hex(ord(char)), unicodedata.name(char),'-',
unicodedata.category(char), '-', unicodedata.combining(char), '-',
unicodedata.east_asian_width(char))
เ 0xe40 THAI CHARACTER SARA E - Lo - 0 - N
ม 0xe21 THAI CHARACTER MO MA - Lo - 0 - N
ื 0xe37 THAI CHARACTER SARA UEE - Mn - 0 - N
่ 0xe48 THAI CHARACTER MAI EK - Mn - 107 - N
อ 0xe2d THAI CHARACTER O ANG - Lo - 0 - N
แ 0xe41 THAI CHARACTER SARA AE - Lo - 0 - N
ร 0xe23 THAI CHARACTER RO RUA - Lo - 0 - N
ก 0xe01 THAI CHARACTER KO KAI - Lo - 0 - N
เ 0xe40 THAI CHARACTER SARA E - Lo - 0 - N
ร 0xe23 THAI CHARACTER RO RUA - Lo - 0 - N
ิ 0xe34 THAI CHARACTER SARA I - Mn - 0 - N
่ 0xe48 THAI CHARACTER MAI EK - Mn - 107 - N
ม 0xe21 THAI CHARACTER MO MA - Lo - 0 - N
>>>
```
I really do not know what those characters can be (my Thai is **very** poor :-)) but it shows that:
* text is acknowledged to be Thai ...
* output is coherent with `len(text)` (`13`)
* category and combining are different when characters are combined
If this is the expected output, it proves that your problem is not in Python but in the *console* where you display it. You should try redirecting output to a file, and then open the file in a Unicode editor supporting Thai characters.
If the expected output is only 9 characters, that is, if you do not want to decompose composed characters, and provided there are no other combining rules to consider, you could use something like:
```
def Thaidump(t):
old = None
for i in t:
if unicodedata.category(i) == 'Mn':
if old is not None:
old = old + i
else:
if old is not None:
print(old)
old = i
print(old)
```
That way :
```
>>> Thaidump(text)
เ
มื่
อ
แ
ร
ก
เ
ริ่
ม
>>>
```
| 7,071
|
20,294,693
|
In the following example, I want to change the `a1` key of `d` in place by calling the `set_x()` function of the class `A`. But I don't see how to access a key in a `dict`.
```
#!/usr/bin/env python
class A(object):
def __init__(self, data=''):
self.data = data
self.x = ''
def set_x(self, x):
self.x = x
def __repr__(self):
return 'A(%s:%s)' % (self.data, self.x)
def __eq__(self, another):
return hasattr(another, 'data') and self.data == another.data
def __hash__(self):
return hash(self.data)
a1 = A('foo')
d = {a1: 'foo'}
print d #{A(foo:): 'foo'}
```
I want to change `d`, so that `d` will print as `{A(foo:word): 'foo'}`. Of course, the following does not work. Also I don't want to reassign the same values. Does anybody know a way to modify a key in place by the calling the key's member function? Thanks.
```
a2 = A('foo')
a2.set_x('xxxx')
d[a2]='foo'
print d
```
|
2013/11/29
|
[
"https://Stackoverflow.com/questions/20294693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1424739/"
] |
You need to refer to the object itself, and modify it there.
Take a look at this console session:
```
>>> a = A("foo")
>>> d = {a:10}
>>> d
{A(foo:): 10}
>>> a.set_x('word')
>>> d
{A(foo:word): 10}
```
You can also get the key-value pair from `dict.items()`:
```
a, v = d.items()[0]
a.set_x("word")
```
Hope this helps!
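Note that `d.items()[0]` works in Python 2 (which the question uses), but in Python 3 `items()` returns a view that cannot be indexed. A self-contained Python 3 sketch of the same idea, retrieving the key from the dict and mutating it in place:

```python
class A(object):
    def __init__(self, data=''):
        self.data = data
        self.x = ''
    def __repr__(self):
        return 'A(%s:%s)' % (self.data, self.x)
    def __eq__(self, other):
        return hasattr(other, 'data') and self.data == other.data
    def __hash__(self):
        return hash(self.data)

d = {A('foo'): 'foo'}

# d.items()[0] raises TypeError in Python 3; take the first pair via an iterator:
key, value = next(iter(d.items()))
key.x = 'word'        # mutate the key object in place
print(d)              # {A(foo:word): 'foo'}
```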
|
You can keep a reference to the object and modify it. If you can't keep a reference to the key object, you can still iterate over the dict using `for k, v in d.items():` and then use the value to know which key you have (although this is somewhat backwards for a dict and highly inefficient).
```
a1 = A('foo')
d = {a1: 'foo'}
print(d) # {A(foo:): 'foo'}
a1.set_x('hello')
print(d) # {A(foo:hello): 'foo'}
```
| 7,073
|
59,876,292
|
I want to run gs command to copy data using python function in cloud function, is it possible to run a shell command inside the cloud function??.
|
2020/01/23
|
[
"https://Stackoverflow.com/questions/59876292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12309624/"
] |
According to the official documentation [Cloud Functions Execution Environment](https://cloud.google.com/functions/docs/concepts/exec):
>
> Cloud Functions run in a fully-managed, serverless environment where
> Google handles infrastructure, operating systems, and runtime
> environments completely on your behalf. Each Cloud Function runs in
> its own isolated secure execution context, scales automatically, and
> has a lifecycle independent from other functions.
>
>
>
These are the runtimes that cloud functions supports:
```
Node.js 8, Node.js 10 (Beta), Python, Go 1.11, Go 1.13
```
Currently, it is not possible to run shell commands inside a Google Cloud Function.
However, assuming you would like to copy data to or from Cloud Storage, you can use [Cloud Storage Client Libraries for Python](https://cloud.google.com/storage/docs/reference/libraries#client-libraries-install-python)
|
Have you tried using the [subprocess](https://docs.python.org/3/library/subprocess.html) module to see if it helps you achieve what you need? I haven't tried this myself so I can't be sure it will work.
```
import subprocess
subprocess.run(["ls", "-l"])
```
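If you also need the command's output back in Python, rather than just running it, `subprocess.run` can capture it. This sketch extends the `ls -l` example above (it assumes a Unix-like runtime, which Cloud Functions provides):

```python
import subprocess

# capture_output=True collects stdout/stderr; text=True decodes bytes to str
result = subprocess.run(["ls", "-l"], capture_output=True, text=True)

print(result.returncode)   # 0 on success
print(result.stdout)       # the directory listing as a string
```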
Alternatively, you can also use Cloud Run to run a [Docker image with the gcloud SDK](https://github.com/GoogleCloudPlatform/cloud-sdk-docker), which could help you execute shell commands directly (rather than going via Python).
| 7,074
|
26,292,102
|
Because of inherited html parts when using template engines such as twig (PHP) or jinja2 (python), I may need to nest rows like below:
```
<div class="container">
<div class="row">
<div class="row">
</div>
...
<div class="row">
</div>
</div>
<div class="row">
...
</div>
</div>
```
Then should I wrap inner rows in column div like below:
```
<div class="container">
<div class="row">
<div class="col-xs-12">
<div class="row">
</div>
...
<div class="row">
</div>
</div>
</div>
<div class="row">
...
</div>
</div>
```
Or should they be wrapped in a container again?
|
2014/10/10
|
[
"https://Stackoverflow.com/questions/26292102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/85443/"
] |
You shouldn't wrap the nested rows in `.container` elements, but you *should* nest them in columns. Bootstrap's `row` class has negative left and right margins that are negated by the `col-X` classes' positive left and right margins. If you nest two `row` classes without intermediate `col-X` classes, you get double the negative margins.
This example demonstrates the double negative margins:
```html
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet">
<!-- GOOD! Second "row" wrapped in "col" to negate negative margins. -->
<div class="container">
<div class="row">
<div class="col-xs-12" style="background: lime;">
<div class="row">
Here's my text!
</div>
</div>
</div>
</div>
<!-- BAD! Second "row" missing wrapping "col", gets double negative margins -->
<div class="container">
<div class="row">
<div class="row" style="background: tomato;">
Where's my text?
</div>
</div>
</div>
```
For further reading, [The Subtle Magic Behind Why the Bootstrap 3 Grid Works](http://www.helloerik.com/the-subtle-magic-behind-why-the-bootstrap-3-grid-works) explains the column system in great and interesting detail.
|
You shouldn't wrap them in another [container](http://getbootstrap.com/css/#grid) - containers are designed for a typical one-page layout. Unless it would look good / work well with your layout, you may want to look into `container-fluid` if you really want to do this.
**tl;dr** don't wrap in another container.
| 7,076
|
30,631,299
|
First, I'm developing a Django app. When I try to run the server with:

```
python manage.py runserver 0.0.0.0:8000
```
The terminal shows:
```
"django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb"
```
So, I need to install that package:
```
(app1)Me% pip install MySQL-python
```
Errors:
```
Collecting mysql-python
Using cached MySQL-python-1.2.5.zip
Building wheels for collected packages: mysql-python
Running setup.py bdist_wheel for mysql-python
Complete output from command /Users/GFTecla/Documents/shoutout/bin/python -c "import setuptools;__file__='/private/var/folders/9g/1qrws9yj3wn7lghlrpgyh9cc0000gn/T/pip-build-ZMQOQm/mysql-python/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bdist_wheel -d /var/folders/9g/1qrws9yj3wn7lghlrpgyh9cc0000gn/T/tmp1JfTpfpip-wheel-:
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.5-intel-2.7
copying _mysql_exceptions.py -> build/lib.macosx-10.5-intel-2.7
creating build/lib.macosx-10.5-intel-2.7/MySQLdb
copying MySQLdb/__init__.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb
copying MySQLdb/converters.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb
copying MySQLdb/connections.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb
copying MySQLdb/cursors.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb
copying MySQLdb/release.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb
copying MySQLdb/times.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb
creating build/lib.macosx-10.5-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/__init__.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/CR.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/ER.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/FLAG.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/REFRESH.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb/constants
copying MySQLdb/constants/CLIENT.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb/constants
running build_ext
building '_mysql' extension
creating build/temp.macosx-10.5-intel-2.7
gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I/usr/local/Cellar/mysql/5.6.23/include/mysql -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.5-intel-2.7/_mysql.o -g -fno-omit-frame-pointer -fno-strict-aliasing
In file included from _mysql.c:44:
/usr/local/Cellar/mysql/5.6.23/include/mysql/my_config.h:348:11: warning: 'SIZEOF_SIZE_T' macro redefined
#define SIZEOF_SIZE_T SIZEOF_LONG
^
/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pymacconfig.h:43:17: note: previous definition is here
# define SIZEOF_SIZE_T 8
^
In file included from _mysql.c:44:
/usr/local/Cellar/mysql/5.6.23/include/mysql/my_config.h:442:9: warning: 'HAVE_WCSCOLL' macro redefined
#define HAVE_WCSCOLL
^
/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h:911:9: note: previous definition is here
#define HAVE_WCSCOLL 1
^
_mysql.c:1589:10: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
if (how < 0 || how >= sizeof(row_converters)) {
~~~ ^ ~
3 warnings generated.
gcc -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -isysroot / -Qunused-arguments -Qunused-arguments build/temp.macosx-10.5-intel-2.7/_mysql.o -L/usr/local/Cellar/mysql/5.6.23/lib -lmysqlclient -lssl -lcrypto -o build/lib.macosx-10.5-intel-2.7/_mysql.so
ld: library not found for -lbundle1.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for mysql-python
Failed to build mysql-python
Installing collected packages: mysql-python
Running setup.py install for mysql-python
Complete output from command /Users/GFTecla/Documents/shoutout/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/9g/1qrws9yj3wn7lghlrpgyh9cc0000gn/T/pip-build-ZMQOQm/mysql-python/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/9g/1qrws9yj3wn7lghlrpgyh9cc0000gn/T/pip-YOrOKA-record/install-record.txt --single-version-externally-managed --compile --install-headers /Users/GFTecla/Documents/shoutout/bin/../include/site/python2.7/mysql-python:
running install
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.5-intel-2.7/MySQLdb
running build_ext
building '_mysql' extension
gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I/usr/local/Cellar/mysql/5.6.23/include/mysql -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.5-intel-2.7/_mysql.o -g -fno-omit-frame-pointer -fno-strict-aliasing
In file included from _mysql.c:44:
/usr/local/Cellar/mysql/5.6.23/include/mysql/my_config.h:348:11: warning: 'SIZEOF_SIZE_T' macro redefined
#define SIZEOF_SIZE_T SIZEOF_LONG
^
/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pymacconfig.h:43:17: note: previous definition is here
# define SIZEOF_SIZE_T 8
^
In file included from _mysql.c:44:
/usr/local/Cellar/mysql/5.6.23/include/mysql/my_config.h:442:9: warning: 'HAVE_WCSCOLL' macro redefined
#define HAVE_WCSCOLL
^
/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h:911:9: note: previous definition is here
#define HAVE_WCSCOLL 1
^
_mysql.c:1589:10: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
if (how < 0 || how >= sizeof(row_converters)) {
~~~ ^ ~
3 warnings generated.
gcc -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -isysroot / -Qunused-arguments -Qunused-arguments build/temp.macosx-10.5-intel-2.7/_mysql.o -L/usr/local/Cellar/mysql/5.6.23/lib -lmysqlclient -lssl -lcrypto -o build/lib.macosx-10.5-intel-2.7/_mysql.so
ld: library not found for -lbundle1.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/Users/GFTecla/Documents/shoutout/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/9g/1qrws9yj3wn7lghlrpgyh9cc0000gn/T/pip-build-ZMQOQm/mysql-python/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/9g/1qrws9yj3wn7lghlrpgyh9cc0000gn/T/pip-YOrOKA-record/install-record.txt --single-version-externally-managed --compile --install-headers /Users/GFTecla/Documents/shoutout/bin/../include/site/python2.7/mysql-python" failed with error code 1 in /private/var/folders/9g/1qrws9yj3wn7lghlrpgyh9cc0000gn/T/pip-build-ZMQOQm/mysql-python
```
I have OS X 10.10.2
Django 1.8.2
Python 2.7
|
2015/06/03
|
[
"https://Stackoverflow.com/questions/30631299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2544076/"
] |
The solution was in reinstalling the developer tools:
```
xcode-select --install
```
|
What fixed it for me was:
`sudo pip install --upgrade setuptools`
Make sure you have mysql installed:
```
brew install mysql
```
| 7,077
|
69,685,355
|
I'm sorry if that didn't make any sense! I'm very new to python and I could really use some help.
I don't want the question to be solved for me, but I would appreciate some advice as a starting point.
```
listA = [("Aleah", [74, 100, 120, 67]),
("Hannah", [95, 110, 110, 67]),
("Timothy", [71, 111, 98, 106])]
```
Essentially I need to find which person has the fastest average driving speed and then print their name.
How do I calculate the average of the second element in the list (also a list) while keeping it associated with the first element in the list (their name)?
I don't even know where to begin, so any advice would be much appreciated. Thank you!
|
2021/10/23
|
[
"https://Stackoverflow.com/questions/69685355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17225072/"
] |
You put your list element inside the same container, so expanding the list also expands everything else.
You can put your list element outside and/or give it an absolute position.
Minimal changes can be found here: [codepen](https://codepen.io/coyer/pen/KKvapbv)
Basically I wrapped the input field in a relatively-positioned wrapper to keep that position; then I changed the ul to `position:absolute`.
|
Try This
--------
---
```
ul {
max-height: 250px;
overflow-y: scroll;
}
```
| 7,080
|
47,822,740
|
I'm using Ubuntu 16.04, which comes with Python 2.7 and Python 3.5. I've installed Python 3.6 on it and aliased python3 to python3.6 through `alias python3=python3.6`.
Then, I've installed `virtualenv` using `sudo -H pip3 install virtualenv`. When I checked, the virtualenv got installed in `"/usr/local/lib/python3.5/dist-packages"` location, so when I'm trying to create virtualenv using `python3 -m venv ./venv1` it's throwing me errors:
```
Error Command: ['/home/wgetdj/WorkPlace/Programming/Python/myvenv/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip']
```
What should I do?
|
2017/12/14
|
[
"https://Stackoverflow.com/questions/47822740",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6609613/"
] |
We usually use `$ python3 -m venv myvenv` to create a new virtualenv (Here `myvenv` is the name of our virtualenv).
Similar to my case, if you have both `python3.5` as well as `python3.6` on your system, then you might get some errors.
**NOTE:** On some versions of Debian/Ubuntu you may receive the following error:
```
The virtual environment was not created successfully because ensurepip is not available. On Debian/Ubuntu systems, you need to install the python3-venv package using the following command.
apt-get install python3-venv
You may need to use sudo with that command. After installing the python3-venv package, recreate your virtual environment.
```
In this case, follow the instructions above and install the python3-venv package:
```
$ sudo apt-get install python3-venv
```
**NOTE:** On some versions of Debian/Ubuntu initiating the virtual environment like this currently gives the following error:
```
Error Command: ['/home/wgetdj/WorkPlace/Programming/Python/myvenv/bin/python3', '-Im', 'ensurepip', '--upgrade', '--default-pip']
```
To get around this, use the virtualenv command instead.
```
$ sudo apt-get install python-virtualenv
$ virtualenv --python=python3.6 myvenv
```
**NOTE:** If you get an error like
>
> E: Unable to locate package python3-venv
>
>
>
then instead run:
```
sudo apt install python3.6-venv
```
|
Installing `python3.6` and `python3.6-venv` via `ppa:deadsnakes/ppa` instead of `ppa:jonathonf/python-3.6` worked for me
```
apt-get update \
&& apt-get install -y software-properties-common curl \
&& add-apt-repository ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install -y python3.6 python3.6-venv
```
| 7,085
|
14,110,709
|
Using the Python library matplotlib, I've found what seems to be a solution to this question:
[Displaying (nicely) an algebraic expression in PyQt](https://stackoverflow.com/questions/14097463/displaying-nicely-an-algebraic-expression-in-pyqt) by utilising matplotlib's [TeX markup](http://matplotlib.org/users/mathtext.html).
What I'd like to do is take TeX code from my Python program which represents a mathematical expression, and save it to an image that can be displayed in my PyQt GUI, rather than displaying the equation in ugly plain text.
Something like this essentially...
```
import matplotlib.pyplot as plt
formula = '$x=3^2$'
fig = plt.figure()
fig.text(0,0,formula)
fig.savefig('formula.png')
```
However, the pyplot module is primarily for displaying graphs and plots, not the samples of text like I need. The result of that code is usually a tiny bit of text in the bottom left corner of a huge, white, blank image.
If my formula involves fractions, and thus requires downward space, it is truncated, like in the image below.
**Note that this appears as a blank image; look to the left side of the display**
[Fraction at coordinate (0,0) truncated and surrounded by whitespace](https://i.stack.imgur.com/gbxek.png)
I believe I could create a very large (space wise) figure, write the formula in the middle of the blank plot, save it, and use pixel analysis to trim it to as small an image as possible, but this seems rather crude.
Are plots the only intended output of matplotlib?
Is there nothing devoted to just outputting equations, that won't require me to worry about all the extra space or position of the text?
Thanks!
|
2013/01/01
|
[
"https://Stackoverflow.com/questions/14110709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/848292/"
] |
The trick is to render the text, then get its bounding box, and finally adjust the figure size and the vertical positioning of text in the new figure. This saves the figure twice, but as is common in any text engine, the correct bounding box and other parameters can only be correctly obtained after the text has been rendered.
```
import pylab
formula = r'$x=3^2, y = \frac{1}{\frac{2}{3}}, %s$' % ('test' * 20)
fig = pylab.figure()
text = fig.text(0, 0, formula)
# Saving the figure will render the text.
dpi = 300
fig.savefig('formula.png', dpi=dpi)
# Now we can work with text's bounding box.
bbox = text.get_window_extent()
width, height = bbox.size / float(dpi) + 0.005
# Adjust the figure size so it can hold the entire text.
fig.set_size_inches((width, height))
# Adjust text's vertical position.
dy = (bbox.ymin/float(dpi))/height
text.set_position((0, -dy))
# Save the adjusted text.
fig.savefig('formula.png', dpi=dpi)
```
The `0.005` constant was added to `width` and `height` because, apparently, for certain texts Matplotlib is returning a slightly underestimated bounding box, i.e., smaller than required.
|
what about
```
import matplotlib.pyplot as plt
params = {
'figure.figsize': [2,2],
}
plt.rcParams.update(params)
formula = r'$x=\frac{3}{100}$'
fig = plt.figure()
fig.text(0.5,0.5,formula)
plt.savefig('formula.png')
```
The first two arguments of the matplotlib text() function set the position of the text (between 0 & 1, so 0.5 for both puts your text in the middle).
You can change all kinds of things like the font and text size by setting the rc parameters. See <http://matplotlib.org/users/customizing.html>. I've edited the rc params for figure size, but you can change the defaults so you don't have to do this every time.
| 7,094
|
23,012,931
|
How to generate something like
```
[(), (1,), (1,2), (1,2,3)..., (1,2,3,...n)]
```
and
```
[(), (4,), (4,5), (4,5,6)..., (4,5,6,...m)]
```
then take the product of them and merge into
```
[(), (1,), (1,4), (1,4,5), (1,4,5,6), (1,2), (1,2,4)....(1,2,3,...n,4,5,6,...m)]
```
?
For the first two lists I've tried the powerset recipe in <https://docs.python.org/2/library/itertools.html#recipes> , but there will be something I don't want, like `(1,3), (2,3)`
For the product I've tested with `chain` and `product`, but I just can't merge the combinations of tuples into one.
Any idea how to do this nice and clean? Thanks!
|
2014/04/11
|
[
"https://Stackoverflow.com/questions/23012931",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1886382/"
] |
Please note that single-element tuples are denoted like this: `(1,)`.
```
a = [(), (1,), (1, 2), (1, 2, 3)]
b = [(), (4,), (4, 5), (4, 5, 6)]
from itertools import product
for item1, item2 in product(a, b):
print item1 + item2
```
**Output**
```
()
(4,)
(4, 5)
(4, 5, 6)
(1,)
(1, 4)
(1, 4, 5)
(1, 4, 5, 6)
(1, 2)
(1, 2, 4)
(1, 2, 4, 5)
(1, 2, 4, 5, 6)
(1, 2, 3)
(1, 2, 3, 4)
(1, 2, 3, 4, 5)
(1, 2, 3, 4, 5, 6)
```
If you want them in a list, you can use list comprehension like this
```
from itertools import product
print [sum(items, ()) for items in product(a, b)]
```
Or even simpler,
```
print [items[0] + items[1] for items in product(a, b)]
```
|
If you don't want to use any special imports:
```
start = 1; limit = 10
[ range(start, start + x) for x in range(limit) ]
```
With `start = 1` the output is:
`[[], [1], [1, 2], [1, 2, 3], [1, 2, 3, 4], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7], [1, 2, 3, 4, 5, 6, 7, 8], [1, 2, 3, 4, 5, 6, 7, 8, 9]]`
If you want to take the product, maybe using `itertools` might be most elegant.
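For instance, combining the two prefix lists via `itertools.product` could look like this (`n` and `m` here are just example bounds, not values from the question):

```python
from itertools import product

n, m = 3, 3
# Prefix tuples: [(), (1,), (1, 2), (1, 2, 3)] and [(), (4,), (4, 5), (4, 5, 6)]
a = [tuple(range(1, 1 + i)) for i in range(n + 1)]
b = [tuple(range(4, 4 + j)) for j in range(m + 1)]
# Merge each pair by tuple concatenation.
merged = [s + t for s, t in product(a, b)]
print(merged[:4])  # [(), (4,), (4, 5), (4, 5, 6)]
```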
| 7,097
|
15,920,413
|
I have a large ASCII file (~100GB) which consists of roughly 1.000.000 lines of known formatted numbers which I try to process with Python. The file is too large to read completely into memory, so I decided to process the file line by line:
```
fp = open(file_name)
for count,line in enumerate(fp):
data = np.array(line.split(),dtype=np.float)
#do stuff
fp.close()
```
It turns out that I spend most of the run time of my program in the `data =` line. Are there any ways to speed up that line? Also, the execution speed seems much slower than what I could get from a native FORTRAN program with formatted reads (see this [question](https://stackoverflow.com/questions/15834327/strange-accuracy-difference-between-ipython-and-ipython-notebook-then-using-fort); I've implemented a FORTRAN string processor and used it with f2py, but the run time was only comparable with the `data =` line. I guess the I/O handling and type conversions between Python/FORTRAN killed what I gained from FORTRAN).
Since I know the formatting, shouldn't there be a better and faster way than to use `split()`? Something like:
```
data = readf(line,'(1000F20.10)')
```
I tried the [fortranformat](https://pypi.python.org/pypi/fortranformat) package, which worked well, but in my case was three times slower than the `split()` approach.
P.S. As suggested by ExP and root I tried np.fromstring and made this quick and dirty benchmark:
```
t1 = time.time()
for i in range(500):
data=np.array(line.split(),dtype=np.float)
t2 = time.time()
print (t2-t1)/500
print data.shape
print data[0]
0.00160977363586
(9002,)
0.0015162509
```
and:
```
t1 = time.time()
for i in range(500):
data = np.fromstring(line,sep=' ',dtype=np.float,count=9002)
t2 = time.time()
print (t2-t1)/500
print data.shape
print data[0]
0.00159792804718
(9002,)
0.0015162509
```
so `fromstring` is in fact not noticeably faster in my case.
|
2013/04/10
|
[
"https://Stackoverflow.com/questions/15920413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2010845/"
] |
Have you tried [`numpy.fromstring`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromstring.html)?
```
np.fromstring(line, dtype=np.float, sep=" ")
```
|
The [*np.genfromtxt*](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html) function is a speed champion if you can get it to match your input format.
If not, then you may already be using the fastest method. Your line-by-line split-into-array approach exactly matches the [SciPy Cookbook examples](http://www.scipy.org/Cookbook/InputOutput).
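For illustration, a hedged sketch of using `np.genfromtxt` on whitespace-separated floats (assuming NumPy is installed; note that one call parses the whole input it is given, so for a 100 GB file you would still want to feed it an iterable of lines or chunks):

```python
import io
import numpy as np

# Two rows of whitespace-separated floats, parsed in a single call.
text = io.StringIO(u"1.0 2.5 3.0\n4.0 5.5 6.0\n")
data = np.genfromtxt(text, dtype=np.float64)
print(data.shape)  # (2, 3)
```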
| 7,099
|
28,031,210
|
I have code that looks like this:
```
# step 1 remove from switch
for server in server_list:
remove_server_from_switch(server)
logger.info("OK : Removed %s", server)
# step 2 remove port
for port in port_list:
remove_ports_from_switch(port)
logger.info("OK : Removed port %s", port)
# step 3 execute the other operations
for descr in pairs:
move_descr(descr)
# step 4 add server back to switch
for server in server_list:
add_server_to_switch(server)
logger.info("OK : server added %s", server)
# step 5 add back port
for port in port_list:
add_ports_to_switch(port)
logger.info("OK : Added port %s", port)
```
Functions inside the for loops can raise exceptions, or the user can interrupt the script with Ctrl+C.
But I would like to enter a roll-back mode and undo the changes already made if an exception is raised during execution.
I mean, if an exception is raised during step 3, I have to roll back steps 1 and 2 (by executing the actions in steps 4 and 5).
Or if a user tries to stop the script with Ctrl+C in the middle of the for loop in step 1, I would like to roll back the actions and add back the servers already removed.
How can this be done in a good Pythonic way using exceptions? :)
|
2015/01/19
|
[
"https://Stackoverflow.com/questions/28031210",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4471200/"
] |
This is what context managers are for. Read up on the [with statement](https://docs.python.org/2/reference/compound_stmts.html#with) for details, but the general idea is you need to write context manager classes where the `__enter__` and `__exit__` functions do the removal/re-addition of your servers/ports. Then your code structure becomes something like:
```
with RemoveServers(server_list):
with RemovePorts(port_list):
do_stuff
# exiting the with blocks will undo the actions
```
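A minimal sketch of what one of those context manager classes might look like (the `remove_server_from_switch`/`add_server_to_switch` stubs below just record actions for demonstration and stand in for your real helpers):

```python
actions = []  # records what happened, for demonstration only

def remove_server_from_switch(server):  # stand-in for the real helper
    actions.append(('removed', server))

def add_server_to_switch(server):  # stand-in for the real helper
    actions.append(('added', server))

class RemoveServers(object):
    """Remove servers on entry; guarantee they are re-added on exit,
    even if an exception (or Ctrl+C) interrupts the with block."""
    def __init__(self, server_list):
        self.server_list = server_list
        self.removed = []

    def __enter__(self):
        for server in self.server_list:
            remove_server_from_switch(server)
            self.removed.append(server)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Undo in reverse order; returning None re-raises any exception.
        for server in reversed(self.removed):
            add_server_to_switch(server)

try:
    with RemoveServers(['web1', 'web2']):
        raise RuntimeError('simulated failure in step 3')
except RuntimeError:
    pass

print(actions)
# [('removed', 'web1'), ('removed', 'web2'), ('added', 'web2'), ('added', 'web1')]
```

Because `__exit__` runs on any exit path, KeyboardInterrupt from Ctrl+C is rolled back the same way as an ordinary exception.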
|
Maybe something like this will work:
```
undo_dict = {remove_server_from_switch: add_server_to_switch,
remove_ports_from_switch: add_ports_to_switch,
add_server_to_switch: remove_server_from_switch,
add_ports_to_switch: remove_ports_from_switch}
def undo_action(action):
args = action[1:]
func = action[0]
undo_dict[func](*args)
try:
#keep track of all successfully executed actions
action_list = []
# step 1 remove from switch
for server in server_list:
remove_server_from_switch(server)
logger.info("OK : Removed %s", server)
action_list.append((remove_server_from_switch, server))
# step 2 remove port
for port in port_list:
remove_ports_from_switch(port)
logger.info("OK : Removed port %s", port)
action_list.append((remove_ports_from_switch, port))
# step 3 execute the other operations
for descr in pairs:
move_descr(descr)
# step 4 add server back to switch
for server in server_list:
add_server_to_switch(server)
logger.info("OK : server added %s", server)
action_list.append((add_server_to_switch, server))
# step 5 add back port
for port in port_list:
add_ports_to_switch(port)
logger.info("OK : Added port %s", port)
action_list.append((add_ports_to_switch, port))
except Exception:
for action in reversed(action_list):
undo_action(action)
logger.info("ERROR Recovery : undoing %s%s", action[0].__name__, tuple(action[1:]))
finally:
del action_list
```
EDIT: [As tzaman said below](https://stackoverflow.com/a/28031889/2437514), the best thing to do in a situation like this is to wrap the entire thing into a context manager, and use the `with` statement. Then it doesn't matter whether or not there was an error encountered- all your actions are undone at the end of the `with` block.
Here's what it might look like:
```
class ActionManager():
def __init__(self, undo_dict):
self.action_list = []
self.undo_dict = undo_dict
def action_pop(self):
yield self.action_list.pop()
def action_add(self, *args):
self.action_list.append(args)
def undo_action(self, action):
args = action[1:]
func = action[0]
self.undo_dict[func](*args)
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
for action in reversed(self.action_list):
self.undo_action(action)
logger.info("Action Manager Cleanup : undoing %s%s", action[0].__name__, tuple(action[1:]))
```
Now you can just do this:
```
#same undo_dict as before
with ActionManager(undo_dict) as am:
# step 1 remove from switch
for server in server_list:
remove_server_from_switch(server)
logger.info("OK : Removed %s", server)
am.action_add(remove_server_from_switch, server)
# step 2 remove port
for port in port_list:
remove_ports_from_switch(port)
logger.info("OK : Removed port %s", port)
am.action_add(remove_ports_from_switch, port)
# step 3 execute the other operations
for descr in pairs:
move_descr(descr)
# steps 4 and 5 occur automatically
```
Another way to do it - and probably a lot better - would be to add the servers/ports in the `__enter__` method. You could subclass the `ActionManager` above and add the port addition and removal logic inside of it.
The `__enter__` method doesn't even have to return an instance of the `ActionManager` class - if it makes sense to do so, you could even write it so that `with SwitchManager(servers,ports)` returns your pairs object, and you could end up doing this:
```
with SwitchManager(servers, ports) as pairs:
for descr in pairs:
move_descr(descr)
```
| 7,100
|
2,157,665
|
I have created a templatetag that loads a yaml document into a python list. In my template I have `{% get_content_set %}`, this dumps the raw list data. What I want to be able to do is something like
```
{% for items in get_content_list %}
<h2>{{items.title}}</h2>
{% endfor %}
```
|
2010/01/28
|
[
"https://Stackoverflow.com/questions/2157665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/245889/"
] |
If the list is in a python variable X, then add it to the template context `context['X'] = X` and then you can do
```
{% for items in X %}
{{ items.title }}
{% endfor %}
```
A template tag is designed to render output, so won't provide an iterable list for you to use. But you don't need that as the normal context + for loop are fine.
|
Since writing complex templatetags is not an easy task (well documented though), I would take the {% with %} tag source and adapt it for my needs, so it looks like
```
{% get_content_list as content %}
{% for items in content %}
<h2>{{items.title}}</h2>
{% endfor %}
```
| 7,103
|
58,578,181
|
I'm trying to create a Python package in **3.6**, but I also want backward compatibility with **2.7**. How can I write code for both **3.6** and **2.7**?
For example I have method called `geo_point()`.
```
def geo_point(lat: float, lng: float):
pass
```
This function works fine in **3.6** but not in **2.7**; it shows a syntax error. I think **2.7** does not support type hinting. So I want to write another function which is supported by **2.7**, and when a user runs my package on **2.7** it should ignore all the functions which are not supported
For example
```
@python_version(2.7)
def geo_point(lat, lng):
pass
```
Is it possible to have both functions and let Python decide which one is compatible?
|
2019/10/27
|
[
"https://Stackoverflow.com/questions/58578181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12280920/"
] |
If type hinting is the only issue you have with your code, then look at SO question [Type hinting in Python 2](https://stackoverflow.com/questions/35230635/type-hinting-in-python-2)
It says that Python 3 also respects type hints written in comment lines.
Python 2 will ignore them and Python 3 respects this alternative syntax. It has specifically been designed for code that still had to be Python 2 compatible.
However, please note that just because the code compiles with Python 2, it doesn't mean it will yield the correct result.
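For example, the `geo_point` function from the question can keep its annotations as a PEP 484 type comment, which is syntactically valid on both 2.7 and 3.x:

```python
def geo_point(lat, lng):
    # type: (float, float) -> None
    # The interpreter ignores this comment; type checkers like mypy read it.
    pass
```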
If you have more compatibility issues I strongly propose to look at the `future` module (not to be mixed up with the `from __future__ import xxx` statement).
You can install future ( <https://pypi.org/project/future/> ) with `pip install future`.
As you don't show any other code that causes problems I cannot advise on specific issues.
The url <https://python-future.org/compatible_idioms.html> shows a quite extensive list of potential python 2 / python 3 issues and how you might resolve them.
For example, opening files in Python 2 with fewer encoding/unicode issues can be done by importing an alternative version of open with the line
```
from io import open
```
<https://python-future.org/compatible_idioms.html>
**Addendum**:
If you really are in the need of declaring one function for python3 and one function for python2 you can do:
```
import sys
if sys.version_info < (3, 0):
def myfunc(a, b): # python 2 compatible function
pass
else:
def myfunc(a, b): # python 3 compatible function
pass
```
**However:** Both functions **must** be syntactically correct for Python 2 and Python 3
If you really want to have functions, which are only python2 or only python3 syntactically correct (e.g. print statement or await), then you could do something like:
```
import sys
if sys.version_info < (3, 0):
from myfunc_py2 import myfunc # the file myfunc_py2.py does not have to be python3 compatible
else:
from myfunc_py3 import myfunc # the file myfunc_py3.py does not have to be python2 compatible
```
|
I doubt it's worth the trouble, but as a proof of concept: You could use a combination of a decorator and the built-in `exec()` function. Using `exec()` is a way to avoid syntax errors due to language differences.
Here's what I mean:
```
import sys
sys_vers_major, sys_vers_minor, sys_vers_micro = sys.version_info[:3]
sys_vers = sys_vers_major + sys_vers_minor*.1 # + sys_vers_micro*.01
print('sys_vers: %s' % sys_vers)
def python_version(vers, code1, code2):
lcls = {} # Dictionary to temporarily store version of function defined.
if sys_vers == vers:
exec(code1, globals(), lcls)
else:
exec(code2, globals(), lcls)
def decorator(func):
return lcls[func.__name__]
return decorator
@python_version(2.7,
"""
def geo_point(lat, lng):
print('code1 version')
""",
"""
def geo_point(lat: float, lng: float):
print('code2 version')
""")
def geo_point(): pass # Needed to know name of function that's being defined.
geo_point(121, 47) # Show which version was defined.
```
| 7,104
|
4,838,740
|
Imagine that I have a model that describes the printers that an office has. They could be ready to work or not (maybe in the storage area, or bought but not yet in the office ...). The model must have a field that represents the physical location of the printer ("Secretary's office", "Reception", ...). There cannot be two repeated locations, and if a printer is not working it should not have a location.
I want to have a list in which all printers appear, and for each one the location where it is (if it has one). Something like this:
```
ID | Location
1 | "Secretary's office"
2 |
3 | "Reception"
4 |
```
With this I can know that there are two printers that are working (1 and 3), and the others are offline (2 and 4).
The first approach for the model, should be something like this:
```
class Printer(models.Model):
brand = models.CharField( ...
...
location = models.CharField( max_length=100, unique=True, blank=True )
```
But this doesn't work properly. You can only store one register with one blank location. It is stored as an empty string in the database and it doesn't allow you to insert more than once (the database says that there is already an empty string for that field). If you add the "null=True" parameter as well, it behaves in the same way. This is because, instead of inserting a NULL value in the corresponding column, the default value stored is an empty string.
Searching the web I have found <http://www.maniacmartin.com/2010/12/21/unique-nullable-charfields-django/>, which tries to resolve the problem in different ways. He says that probably the cleanest is the last one, in which he subclasses the CharField class and overrides some methods to store different values in the database. Here is the code:
```
from django.db import models
class NullableCharField(models.CharField):
description = "CharField that obeys null=True"
def to_python(self, value):
if isinstance(value, models.CharField):
return value
return value or ""
def get_db_prep_value(self, value):
return value or None
```
This works fine. You can store multiple registers with no location, because instead of inserting an empty string, it stores a NULL. The problem with this is that it shows the blank locations as None instead of an empty string.
```
ID | Location
1 | "Secretary's office"
2 | None
3 | "Reception"
4 | None
```
I suppose there is a method (or several) where one must specify how the data is converted between the model and the database in both directions (database to model and model to database).
Is this the best way to have a unique, blank CharField?
Thanks,
|
2011/01/29
|
[
"https://Stackoverflow.com/questions/4838740",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/454760/"
] |
There is a [`Queue`](http://docs.python.org/library/multiprocessing.html#multiprocessing.Queue) class within the `multiprocessing` module specifically for this purpose.
Edit: If you are looking for a complete framework for parallel computing which features a `map()` function using a task queue, have a look at the parallel computing facilities of [IPython](http://ipython.scipy.org/). In particular, you can use the [`TaskClient.map()`](http://ipython.scipy.org/doc/stable/html/parallel/parallel_task.html#parallel-map) function to get a load-balanced mapping to the available processors.
|
About queue implementations. There are some.
Look at the Celery project. <http://celeryproject.org/>
So, in your case, you can run 12 conversions (one on each CPU) as Celery tasks, add a callback function (to the conversion or to the task) and in that callback function add a new conversion task running when one of the previous conversions is finished.
| 7,107
|
4,240,266
|
I have a little module that creates a window (program1). I've imported this into another python program of mine (program2).
How do I make program 2 get self.x and x that's in program1?
This is program1.
```
import Tkinter
class Class(Tkinter.Tk):
def __init__(self, parent):
Tkinter.Tk.__init__(self, parent)
self.parent = parent
self.Main()
def Main(self):
self.button= Tkinter.Button(self,text='hello')
self.button.pack()
self.x = 34
x = 62
def run():
app = Class(None)
app.mainloop()
if __name__ == "__main__":
run()
```
|
2010/11/21
|
[
"https://Stackoverflow.com/questions/4240266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/433417/"
] |
You can access the variable `self.x` as a member of an instance of `Class`:
```
c = Class(parent)
print(c.x)
```
You cannot access the local variable - it goes out of scope when the method call ends.
|
I'm not sure exactly what the purpose of 'self.x' and 'x' are but one thing to note in the 'Main' method of class Class
```
def Main(self):
self.button= Tkinter.Button(self,text='hello')
self.button.pack()
self.x = 34
x = 62
```
is that 'x' and 'self.x' are two different variables. The variable 'x' is a local variable for the method 'Main' and 'self.x' is an instance variable. Like Mark says you can access the instance variable 'self.x' as an attribute of an instance of Class, but the method variable 'x' is only accessible from within the 'Main' method. If you would like to have the ability to access the method variable 'x' then you could change the signature of the 'Main' method as follows.
```
def Main(self,x=62):
self.button= Tkinter.Button(self,text='hello')
self.button.pack()
self.x = 34
return x
```
This way you can set the value of the method variable 'x' when you call the 'Main' method from an instance of Class
```
>>> c = Class(None)
>>> c.Main(4)
4
```
or just keep the default
```
>>> c.Main()
62
```
and as before like Mark said you will have access to the instance variable 'self.x'
```
>>> c.x
34
```
| 7,116
|
6,372,159
|
Can anyone suggest the most Pythonic way to import modules in Python?
Let me explain - I have read a lot of Python code and found several different ways of importing modules, or to be more precise - when to import:
1. Use one or several modules which include all the imports (third-party modules) necessary for the entire project, so all of the imports are concentrated within a few modules and are easy to maintain. When any single module requires an import, it asks the references module for it.
For example, in our project we have a separate level named 'references' which contains modules like 'system.py' (contains references to all system libraries), 'platform.py' (contains references to all platform libraries), 'devexpress.py' (contains references to all devexpress libraries) and so on. These modules look like:
2. Each module imports all necessary classes and functions at the top of the module - e.g. there is a section with imports within each module in the project
3. Each function/class imports locally, e.g. right after its definition, and imports only the things it really needs.
Please find samples below.
1 sample import module - only 'import' and 'from ... import ...' statements (no methods or classes):
=======================================================================================================
```
#references.py
import re
import clr
import math
import System
import System.Text.RegularExpressions
import System.Random
import System.Threading
import System.DateTime
# System assemblies
clr.AddReference("System.Core")
clr.AddReference("System.Data")
clr.AddReference("System.Drawing")
...
#test.py
from references.syslibs import (Array, DataTable, OleDbConnection, OleDbDataAdapter,
OleDbCommand, OleDbSchemaGuid)
def get_dict_from_data_table(dataTable):
pass
```
2 module with 'import' and 'from ... import ...' as well as methods and classes:
================================================================================
```
from ... import ...
from ... import ...
def Generate(param, param1 ...):
pass
```
3. A module with 'import' and 'from ... import ...' statements used inside methods and classes:
=========================================================================================================
```
import clr
clr.AddReference("assembly")
from ... import ...
...
def generate_(txt, param1, param2):
from ... import ...
from ... import ...
from ... import ...
if not cond(param1): res = "text"
if not cond(param2): name = "default"
```
So what is the most Pythonic way to import modules in Python?
|
2011/06/16
|
[
"https://Stackoverflow.com/questions/6372159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/781150/"
] |
It *really* doesn't matter, so long as you don't `from ... import *`. The rest is all taste and getting around cyclic import issues. [PEP 8](http://www.python.org/dev/peps/pep-0008/) states that you should import at the top of the script, but even that isn't set in stone.
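A quick sketch of why `from ... import *` is singled out: a later star-import silently shadows earlier names. (The `math`/`cmath` pair here is just an illustrative choice; both export a `sqrt`, and whichever is imported last wins.)

```python
# Two star-imports that both export `sqrt`; the second silently
# shadows the first:
from math import *
from cmath import *   # cmath.sqrt now replaces math.sqrt

print(sqrt(-1))  # 1j -- the complex version, which may surprise you
```

With explicit `import math` / `import cmath`, the qualified names `math.sqrt` and `cmath.sqrt` can never collide.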
|
Python's "import" loads a Python module into its own namespace, so that you have to add the module name followed by a dot in front of references to any names from the imported module
```
import animals
animals.Elephant()
```
"from" loads a Python module into the current namespace, so that you can refer to it without the need to mention the module name again
```
from animals import Elephant
Elephant()
```
or
```
from animals import *
Elephant()
```
Using `from` is fine (but a wildcard import is discouraged). However, if you have a big project, importing from different modules may cause naming conflicts: importing an **Elephant()** function from two different modules will cause a problem (just like wildcard imports with **\***).
So, if you have a large project where you import many different things from other modules, it is better to use `import` and refer to imported things as **module\_name.your\_class\_or\_function**. Otherwise, use the `from` notation...
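A small sketch of that conflict point (the `json`/`pickle` pair is my own example, not from the answer): both modules define a `dumps`, so qualified access via `import` keeps them unambiguous where a `from ... import dumps` from each would clash.

```python
import json
import pickle

# Both modules expose `dumps`, but the module prefix keeps them distinct:
print(json.dumps([1, 2]))          # the JSON text '[1, 2]' (a str)
print(type(pickle.dumps([1, 2])))  # <class 'bytes'>
```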
| 7,117
|
50,105,459
|
Hello i have been playing around with python recently and have been trying to learn how to control external peripherals and i/o ports on my laptop.
I have been trying to disable USB ports and disable my network adapter. However, when I run my program it does not work: the code has no syntax error, but when it is run nothing happens.
```
import subprocess
def main():
print("PROGRAM STARTED")
subprocess.call(["runas", "/user:Administrator", "cmd.exe /c netsh interface set interface '*' admin=disable"])
print("Program Exited")
if __name__ == "__main__":
main()
```
|
2018/04/30
|
[
"https://Stackoverflow.com/questions/50105459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7802263/"
] |
I think you should try to run such commands as admin in windows. This might help: <https://social.technet.microsoft.com/Forums/windows/en-US/05cce5f6-3c3a-4bb8-8b72-8c1ce4b5eff1/how-to-run-a-program-as-adminitrator-via-the-command-line?forum=w7itproappcompat>
You can also modify your command to print the output to stdout to debug easily:
`print(subprocess.check_output(["runas", "/user:Bradley", "cmd.exe /c netsh interface set interface '*' admin=disable"]))`
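A sketch of capturing a subprocess's output and exit code for debugging (Python 3.7+ `subprocess.run`; the command below is a harmless cross-platform stand-in for the `netsh` call, which only works elevated on Windows):

```python
import subprocess
import sys

# Run a child process and capture stdout/stderr as text so failures
# are visible instead of silent:
result = subprocess.run(
    [sys.executable, "-c", "print('adapter disabled')"],
    capture_output=True, text=True,
)
print(result.returncode)      # 0 on success
print(result.stdout.strip())  # adapter disabled
```

Inspecting `result.stderr` the same way usually reveals *why* a command like `netsh` did nothing (e.g. "access denied" when not elevated).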
|
I found the issue with the code. To start with, I was using the `subprocess.call` function; instead of trying to run the program as Administrator through Python, run it from an elevated command prompt and use this line of code instead:
```
subprocess.run(["powershell","Disable-NetAdapter -Name '*'"])
```
Note: yes, I changed from cmd to PowerShell, because the command was easier to use.
| 7,122
|
67,915,722
|
I am fighiting with some listing all possibilities of command with optional and mandatory parameters in python. I need it to generate some autocomplete script in bash based on help output from some script.
E.g. fictional command:
```
add disk -pool <name> { -diskid <diskid> | -diskid auto [-fx | -tdr] } [-capacity_saving] [-status { enable | disable } ]
```
*Where: {} mandatory, [] optional, | or*
Expected result as all (24) possibilities of above command:
```
add disk -pool <name> -diskid <diskid>
add disk -pool <name> -diskid <diskid> -capacity_saving
add disk -pool <name> -diskid <diskid> -capacity_saving -status enable
add disk -pool <name> -diskid <diskid> -capacity_saving -status disable
add disk -pool <name> -diskid <diskid> -status enable
add disk -pool <name> -diskid <diskid> -status disable
add disk -pool <name> -diskid auto
add disk -pool <name> -diskid auto -capacity_saving
add disk -pool <name> -diskid auto -capacity_saving -status enable
add disk -pool <name> -diskid auto -capacity_saving -status disable
add disk -pool <name> -diskid auto -status enable
add disk -pool <name> -diskid auto -status disable
add disk -pool <name> -diskid auto -fx
add disk -pool <name> -diskid auto -fx -capacity_saving
add disk -pool <name> -diskid auto -fx -capacity_saving -status enable
add disk -pool <name> -diskid auto -fx -capacity_saving -status disable
add disk -pool <name> -diskid auto -fx -status enable
add disk -pool <name> -diskid auto -fx -status disable
add disk -pool <name> -diskid auto -tdr
add disk -pool <name> -diskid auto -tdr -capacity_saving
add disk -pool <name> -diskid auto -tdr -capacity_saving -status enable
add disk -pool <name> -diskid auto -tdr -capacity_saving -status disable
add disk -pool <name> -diskid auto -tdr -status enable
add disk -pool <name> -diskid auto -tdr -status disable
```
I've tried `import itertools` + `product()`, but it only works for less complex commands like `{ -diskid <diskid> | -diskid auto }`, i.e. when there are no parentheses nested inside parentheses, like the command below with its output:
```
# add disk -pool <name> { -diskid <diskid> | -diskid auto } [-fx]
command = [ ['add'], ['disk'], ['-pool <name>'], ['-diskid <diskid>', '-diskid auto'], ['', '-fx']]
print(list(itertools.product(*command)))
print(len(list(itertools.product(*command))))
```
Output:
```
[('add', 'disk', '-pool <name>', '-diskid <diskid>', ''),
('add', 'disk', '-pool <name>', '-diskid <diskid>', '-fx'),
('add', 'disk', '-pool <name>', '-diskid auto', ''),
('add', 'disk', '-pool <name>', '-diskid auto', '-fx')]
4
```
How can I get the expected result? :c
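A flat `itertools.product` cannot express groups nested inside groups, but a small recursive expander can (a sketch of one approach - the nested tuple/list encoding of the grammar is my own convention, not from the post: a tuple is a sequence to concatenate, a list is a set of alternatives, `""` marks an omitted optional part):

```python
import itertools

def expand(part):
    """Expand a grammar node into all concrete command strings."""
    if isinstance(part, str):
        return [part]
    if isinstance(part, tuple):  # sequence: cross product of child expansions
        combos = itertools.product(*(expand(p) for p in part))
        return [" ".join(t for t in c if t) for c in combos]
    # list: alternatives -- union of child expansions
    return [variant for alt in part for variant in expand(alt)]

command = (
    "add", "disk", "-pool <name>",
    ["-diskid <diskid>", ("-diskid auto", ["", "-fx", "-tdr"])],
    ["", "-capacity_saving"],
    ["", "-status enable", "-status disable"],
)
variants = expand(command)
print(len(variants))  # 24
```

Because `expand` calls itself on tuples and lists, arbitrarily deep nesting (like `{ ... [ ... | ... ] ... }`) falls out for free.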
|
2021/06/10
|
[
"https://Stackoverflow.com/questions/67915722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11387742/"
] |
The question is pretty light on what exactly should be retrieved from Kubernetes, but I think I can provide a good baseline.
When you use Kubernetes, you are most probably using `kubectl` to interact with `kubeapi-server`.
Some of the commands you can use to retrieve the information from the cluster:
* `$ kubectl get RESOURCE --namespace NAMESPACE RESOURCE_NAME`
* `$ kubectl describe RESOURCE --namespace NAMESPACE RESOURCE_NAME`
---
### Example:
Let's assume that you have a `Service` of type `LoadBalancer` (I've redacted some output to be more readable):
* `$ kubectl get service nginx -o yaml`
```
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: default
spec:
clusterIP: 10.2.151.123
externalTrafficPolicy: Cluster
ports:
- nodePort: 30531
port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: A.B.C.D
```
Getting a `nodePort` from this output could be done like this:
* `kubectl get svc nginx -o jsonpath='{.spec.ports[].nodePort}'`
```sh
30531
```
Getting a `loadBalancer IP` from this output could be done like this:
* `kubectl get svc nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}"`
```
A.B.C.D
```
You can also use `kubectl` with `custom-columns`:
* `kubectl get service -o=custom-columns=NAME:metadata.name,IP:.spec.clusterIP`
```sh
NAME IP
kubernetes 10.2.0.1
nginx 10.2.151.123
```
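The same fields can also be pulled out programmatically from `kubectl get svc NAME -o json` output with plain `json` parsing (a sketch; the sample document below is a trimmed version of the Service shown above, not live cluster output):

```python
import json

def service_fields(svc_json):
    """Extract nodePort and load-balancer IP from a Service's JSON."""
    svc = json.loads(svc_json)
    node_port = svc["spec"]["ports"][0].get("nodePort")
    ingress = svc.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    lb_ip = ingress[0].get("ip") if ingress else None
    return node_port, lb_ip

sample = ('{"spec": {"ports": [{"port": 80, "nodePort": 30531}]},'
          ' "status": {"loadBalancer": {"ingress": [{"ip": "A.B.C.D"}]}}}')
print(service_fields(sample))  # (30531, 'A.B.C.D')
```

In a script you would feed it `subprocess.check_output(["kubectl", "get", "svc", name, "-o", "json"])` instead of the hard-coded sample.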
---
There are a lot of possible ways to retrieve data with `kubectl` which you can read more by following the:
* `kubectl get --help`:
>
> -o, --output='': Output format. One of:
> json|yaml|wide|name|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...
> See [custom columns](http://kubernetes.io/docs/user-guide/kubectl-overview/#custom-columns), [golang template](http://golang.org/pkg/text/template/#pkg-overview) and [jsonpath template](http://kubernetes.io/docs/user-guide/jsonpath).
>
>
>
* *[Kubernetes.io: Docs: Reference: Kubectl: Cheatsheet: Formatting output](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output)*
---
Additional resources:
* *[Kubernetes.io: Docs: Reference: Kubectl: Overview](https://kubernetes.io/docs/reference/kubectl/overview/)*
* *[Github.com: Kubernetes client: Python](https://github.com/kubernetes-client/python)* - if you would like to retrieve this information with Python
* *[Stackoverflow.com: Answer: How to parse kubectl describe output and get the required field value](https://stackoverflow.com/a/53669973/12257134)*
|
If you want to extract just single values, perhaps as part of scripts, then what you are searching for is `-ojsonpath` such as this example:
```
kubectl get svc service-name -ojsonpath='{.spec.ports[0].port}'
```
which will extract jus the value of the first port listed into the service **specs**.
docs - <https://kubernetes.io/docs/reference/kubectl/jsonpath/>
---
If you want to extract the whole definition of an object, such as a service, then what you are searching for is `-oyaml` such as this example:
```
kubectl get svc service-name -oyaml
```
which will output the whole service definition, all in yaml format.
---
If you want to get a more user-friendly description of a resource, such as a service, then you are searching for a describe command, such as this example:
```
kubectl describe svc service-name
```
---
docs - <https://kubernetes.io/docs/reference/kubectl/overview/#output-options>
| 7,123
|
35,934,735
|
I'd like to bind a class method to the object instance so that when the method is invoke as callback it can still access the object instance. I am using an event emitter to generate and fire events.
This is my code:
```
#!/usr/bin/env python3
from pyee import EventEmitter
class Component(object):
_emiter = EventEmitter()
def emit(self, event_type, event):
Component._emiter.emit(event_type, event)
def listen_on(event):
def listen_on_decorator(func):
print("set event")
Component._emiter.on(event, func)
def method_wrapper(*args, **kwargs):
return func(*args, **kwargs)
return method_wrapper
return listen_on_decorator
class TestComponent(Component):
@listen_on('test')
def on_test(self, event):
print("self is " + str(self))
print("FF" + str(event))
if __name__ == '__main__':
t = TestComponent()
t.emit('test', { 'a': 'dfdsf' })
```
If you run this code, an error is thrown:
```
File "component.py", line 29, in <module> [0/1889]
t.emit('test', { 'a': 'dfdsf' })
File "component.py", line 8, in emit
Component._emiter.emit('test', event)
File "/Users/giuseppe/.virtualenvs/Forex/lib/python3.4/site-packages/pyee/__init__.py", line 117, in emit
f(*args, **kwargs)
File "component.py", line 14, in method_wrapper
return func(*args, **kwargs)
TypeError: on_test() missing 1 required positional argument: 'event'
```
This is caused by the missing self when the method `on_test` is called.
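A minimal sketch of the problem and one common fix, using a tiny stand-in emitter instead of pyee (an assumption to keep the example self-contained): the decorator only *marks* the method, and `__init__` registers the *bound* method per instance, so `self` is supplied when the event fires. Registering at decoration time hands the emitter an unbound function, hence the missing-argument error above.

```python
class Emitter:
    """Tiny pyee-like stand-in: event name -> list of callbacks."""
    def __init__(self):
        self._handlers = {}

    def on(self, event, func):
        self._handlers.setdefault(event, []).append(func)

    def emit(self, event, payload):
        for func in self._handlers.get(event, []):
            func(payload)

def listen_on(event):
    def decorator(func):
        func._listen_event = event  # tag only; bind later
        return func
    return decorator

class Component:
    _emitter = Emitter()

    def __init__(self):
        # Register *bound* methods, so each handler carries its instance.
        for name in dir(self):
            attr = getattr(self, name)
            if getattr(attr, "_listen_event", None) is not None:
                Component._emitter.on(attr._listen_event, attr)

    def emit(self, event, payload):
        Component._emitter.emit(event, payload)

class TestComponent(Component):
    received = []

    @listen_on("test")
    def on_test(self, event):
        self.received.append(event)

t = TestComponent()
t.emit("test", {"a": "dfdsf"})
print(t.received)  # [{'a': 'dfdsf'}]
```

(As a sketch it re-registers handlers for every instance created, since the emitter is class-level; a real implementation would guard against that.)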
|
2016/03/11
|
[
"https://Stackoverflow.com/questions/35934735",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1022525/"
] |
Get start day && end day:
```
$date = date('Y-m-d');
$startDate = new \DateTime($date);
$endDate = new \DateTime($date);
$endDate->modify("+1 day -1 second");
echo $startDate->format('Y-m-d H:i:s');
return dd($endDate);
```
|
change your output to:
```
echo $StartDate->format('Y-m-d H:i:s');
```
Here's a list of all the formatting characters that can be used to customize your output [Link](http://www.w3schools.com/php/func_date_date.asp)
| 7,124
|
63,345,326
|
I am new to OPC-UA and Eclipse Milo and I am trying to construct a client that can connect to the OPC-UA server of a machine we have just acquired.
I have been able to set up a simple OPC-UA server on my laptop by using this python tutorial series: <https://www.youtube.com/watch?v=NbKeBfK3pfk>. Additionally, I have been able to use the Eclipse Milo examples to run the subscription example successfully to read some values from this server.
However, I have been having difficulty connecting to the OPC-UA server of the machine we have just received. I have successfully connected to this server using the UaExpert client, but we want to build our own client using Eclipse Milo. I can see that some warnings come up when using UaExpert to connect to the server which appear to give clues about the issue but I have too little experience in server-client communications/OPC-UA and would appreciate some help. I will explain what happens when I use the UaExpert client as I have been using this to try and diagnose what is going on.
I notice that when I first launch UaExpert I get the following errors which could be relevant:
```
Discovery FindServersOnNetwork on opc.tcp://localhost:4840 failed (BadTimeout), falling back to FindServers
Discovery FindServers on opc.tpc://localhost:4840 failed (BadTimeout)
Discovery GetEndpoints on opc.tcp://localhost:4840 failed
```
I am really new to networking so not sure exactly what this means.
I will outline the process I have followed when trying to get the SubscriptionExample of Eclipse Milo working with this machine's server. Firstly, I change the getEndpointUrl() method to the ip address of the device we are using: return "opc.tcp://11.23.1.1:4840". I can successfully ping the device using ping 11.23.1.1 from my laptop. When I try to run the SubscriptionExample with this address I get the following error:
```
[NonceUtilSecureRandom] INFO o.e.m.o.stack.core.util.NonceUtil - SecureRandom seeded in 0ms.
18:36:23.879 [main] ERROR o.e.m.e.client.ClientExampleRunner - Error running client example: java.net.UnknownHostException: br-automation
java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation
at java.util.concurrent.CompletableFuture.reportGet(Unknown Source)
at java.util.concurrent.CompletableFuture.get(Unknown Source)
at org.eclipse.milo.examples.client.SubscriptionExample.run(SubscriptionExample.java:50)
at org.eclipse.milo.examples.client.ClientExampleRunner.run(ClientExampleRunner.java:120)
at org.eclipse.milo.examples.client.SubscriptionExample.main(SubscriptionExample.java:42)
Caused by: java.net.UnknownHostException: br-automation
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(Unknown Source)
at java.net.InetAddress.getAddressesFromNameService(Unknown Source)
at java.net.InetAddress.getAllByName0(Unknown Source)
at java.net.InetAddress.getAllByName(Unknown Source)
at java.net.InetAddress.getAllByName(Unknown Source)
at java.net.InetAddress.getByName(Unknown Source)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:148)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:145)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:145)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:984)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Unknown Source)
18:36:23.881 [ForkJoinPool.commonPool-worker-1] ERROR o.e.m.e.client.ClientExampleRunner - Error running example: java.net.UnknownHostException: br-automation
java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation
at java.util.concurrent.CompletableFuture.reportGet(Unknown Source)
at java.util.concurrent.CompletableFuture.get(Unknown Source)
at org.eclipse.milo.examples.client.SubscriptionExample.run(SubscriptionExample.java:50)
at org.eclipse.milo.examples.client.ClientExampleRunner.run(ClientExampleRunner.java:120)
at org.eclipse.milo.examples.client.SubscriptionExample.main(SubscriptionExample.java:42)
Caused by: java.net.UnknownHostException: br-automation
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(Unknown Source)
at java.net.InetAddress.getAddressesFromNameService(Unknown Source)
at java.net.InetAddress.getAllByName0(Unknown Source)
at java.net.InetAddress.getAllByName(Unknown Source)
at java.net.InetAddress.getAllByName(Unknown Source)
at java.net.InetAddress.getByName(Unknown Source)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:148)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:145)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:145)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:984)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Unknown Source)
```
When using UaExpert, "opc.tcp://11.23.1.1:4840" is the address of the server that I input when adding a new server when using Custom Discovery. When I enter this, a device appears as a dropdown of this server called B&R Embedded OPC-UA Server as the OPC-UA server is hosted on a B&R device in the machine. When I select this device to connect to, I get the following message:
*The hostname of the discovery URL used to call GetEndpoints (br-automation) was replaced by the hostname used to call FindServers (11.23.1.1). Do you also want to replace the hostnames of the EndpointURLs with this hostname?*
I have to accept this message for the server to be found, but I am confused exactly what is going on. I assume there is a difference in the endpoint used to find the server and the endpoint used for something else? I have found the resources online very difficult to understand. In the UaExpert logs there are three lines of logs in a row which report "Adding Url: opc.tcp://br-automation:4840". It also then reports the endpoint: "opc.tcp://br-automation:4840", the application Uri and the security policy (none). If I try to change the address in the client's getEndpointUrl method to opc.tcp://br-automation:4840 then I get the following error:
```
[main] INFO o.e.m.opcua.sdk.client.OpcUaClient - Eclipse Milo OPC UA Client SDK version: 0.4.3-SNAPSHOT
18:37:46.035 [main] ERROR o.e.m.e.client.ClientExampleRunner - Error getting client: java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation
org.eclipse.milo.opcua.stack.core.UaException: java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation
at org.eclipse.milo.opcua.sdk.client.OpcUaClient.lambda$create$1(OpcUaClient.java:204)
at java.util.Optional.orElseGet(Unknown Source)
at org.eclipse.milo.opcua.sdk.client.OpcUaClient.create(OpcUaClient.java:204)
at org.eclipse.milo.opcua.sdk.client.OpcUaClient.create(OpcUaClient.java:201)
at org.eclipse.milo.examples.client.ClientExampleRunner.createClient(ClientExampleRunner.java:73)
at org.eclipse.milo.examples.client.ClientExampleRunner.run(ClientExampleRunner.java:94)
at org.eclipse.milo.examples.client.SubscriptionExample.main(SubscriptionExample.java:42)
Caused by: java.util.concurrent.ExecutionException: java.net.UnknownHostException: br-automation
at java.util.concurrent.CompletableFuture.reportGet(Unknown Source)
at java.util.concurrent.CompletableFuture.get(Unknown Source)
at org.eclipse.milo.opcua.sdk.client.OpcUaClient.create(OpcUaClient.java:180)
... 4 common frames omitted
Caused by: java.net.UnknownHostException: br-automation
at java.net.InetAddress.getAllByName0(Unknown Source)
at java.net.InetAddress.getAllByName(Unknown Source)
at java.net.InetAddress.getAllByName(Unknown Source)
at java.net.InetAddress.getByName(Unknown Source)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:148)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:145)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:145)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:32)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:108)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:200)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:984)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:504)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Unknown Source)
```
I don't know if this is enough information to diagnose the problem, but I would appreciate any help on how I can get the Eclipse Milo server to perform the same process and connect to the machine's server.
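The `java.net.UnknownHostException: br-automation` means the hostname advertised in the server's endpoints is not resolvable from the client machine - exactly the situation UaExpert's hostname-replacement prompt works around. A quick way to reproduce the DNS side of the failure outside of Milo (a diagnostic sketch; the hostnames come from the logs above):

```python
import socket

def resolvable(host):
    """True if the local resolver can map `host` to an IP address."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

print(resolvable("localhost"))      # True
print(resolvable("br-automation"))  # False unless DNS or your hosts file maps it
```

If `br-automation` does not resolve, either add it to the client machine's hosts file pointing at 11.23.1.1, or rewrite the endpoint URLs' hostnames in the client before connecting (the same replacement UaExpert offers).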
|
2020/08/10
|
[
"https://Stackoverflow.com/questions/63345326",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8381207/"
] |
I see a couple things to try.
First, make sure to set your custom js file to have 'slick-js' as a dependency. This way it loads *after* slick slider does.
Also, jQuery is already part of WordPress, so you **do not** need to enqueue it again. However, it should be a dependency for both your custom script and slick:
```
wp_enqueue_script('main', get_stylesheet_directory_uri() . '/js/custom.js', array('slick-js', 'jquery'), NULL, true ) ;
```
Second, I'm not sure what val-slider is, but it could be conflicting with slick slider. I suggest only using one JavaScript slider for your theme. Slick is very powerful and customizable, so it's a good choice.
Third, slick slider typically also has a theme-styles.css file that you should include. This pretties up the slider and puts arrows/dots in the right place.
Fourth, I'm not sure what your HTML looks like, but make sure the div with class `.slideshow` is the immediate parent to your slides (typically a for or foreach loop.) If there is another surrounding div in there then slick will interpret that as one slide. Here's an example:
```
<div class="slideshow">
<?php
foreach($slides as $slide) {
echo '<div class="slide">'; //this class name is unimportant
//slide content here
echo '</div>';
};
?>
</div>
```
Fifth, not sure if this is a copy/paste error, but you're missing the closing `});` in your javascript.
Last thing - this won't break slick slider, but it could cause some weird things to happen: you have `slidesToScroll: 10` but are only showing one slide (`slidesToShow: 1`). I think it's a good practice to make these numbers the same.
|
Thank you very much, now it finally works. One other thing that I was not aware of was that I had to replace `$` with `jQuery`, so my custom.js looks like this:
```
jQuery('.slider').slick({
    arrows: false,
    slidesToShow: 1,
    slidesToScroll: 1,
    focusOnSelect: false,
});
```
Now it is finally running
| 7,125
|
43,037,588
|
I have a CSV file in the same directory as my Python script, and I would like to take that data and turn it into a list that I can use later. I would prefer to use Python's CSV module. After reading the the module's documentation and questions regarding it, I have still not found any help.
### Code
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import csv
inputfile = 'inputfile.csv'
inputm = []
reader = csv.reader(inputfile)
for row in reader:
inputm.append(row)
```
### inputfile.csv
```
point1,point2,point3,point4
0,1,4,1
0,1,1,1
0,1,4,1
0,1,0,0
0,1,2,1
0,0,0,0
0,0,0,0
0,1,3,1
0,1,4,1
```
It only returns the string of the filename I provide it.
`[['i'], ['n'], ['p'], ['u'], ['t'], ['f'], ['i'], ['l'], ['e']]`
I would like it to return each row of the CSV file as a sub-list instead of each letter of the filename as a sub-list.
|
2017/03/27
|
[
"https://Stackoverflow.com/questions/43037588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6417530/"
] |
You need to open the file in read mode, read the contents!
That is,
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import csv
inputfile = 'inputfile.csv'
inputm = []
with open(inputfile) as f:
    reader = csv.reader(f)
    for row in reader:
        inputm.append(row)
```
Output:
```
[['point1', 'point2', 'point3', 'point4'], ['0', '1', '4', '1'], ['0', '1', '1', '1'], ['0', '1', '4', '1'], ['0', '1', '0', '0'], ['0', '1', '2', '1'], ['0', '0', '0', '0'], ['0', '0', '0', '0'], ['0', '1', '3', '1'], ['0', '1', '4', '1']]
```
|
You actually need to `open()` the file:
```
inputfile = open('inputfile.csv')
```
You may want to look at the `with` statement:
```
with open('inputfile.csv') as inputfile:
reader = csv.reader(inputfile)
inputm = list(reader)
```
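If the header row should label the fields, `csv.DictReader` maps each row to the column names instead of positions (a small related sketch; `io.StringIO` stands in for the opened file so the snippet is self-contained):

```python
import csv
import io

# DictReader uses the first row as field names:
data = """point1,point2,point3,point4
0,1,4,1
0,1,1,1
"""
rows = list(csv.DictReader(io.StringIO(data)))
print(rows[0]["point3"])  # 4
```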
| 7,126
|
11,306,641
|
Here on SO people sometimes say something like "you cannot parse X with regular expressions, because X is not a regular language". From my understanding however, modern regular expressions engines can match more than just regular languages in [Chomsky's sense](http://en.wikipedia.org/wiki/Chomsky_hierarchy). My questions:
given a regular expression engine that supports
* backreferences
* lookaround assertions of unlimited width
* recursion, like `(?R)`
what kind of languages can it parse? Can it parse any context-free language, and if not, what would be the counterexample?
(To be precise, by "parse" I mean "build a single regular expression that would accept all strings generated by the grammar X and reject all other strings").
Add.: I'm particularly interested to see an example of a context-free language that modern regex engines (Perl, Net, python regex module) would be unable to parse.
|
2012/07/03
|
[
"https://Stackoverflow.com/questions/11306641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/989121/"
] |
Modern regex engines can certainly parse a bigger set of languages than the regular languages. That said, none of the four classic Chomsky sets is exactly recognized by regexes. All regular languages are clearly recognized by regexes. Some classic context-free languages cannot be recognized by regexes, such as the balanced language `a^n b^n`, unless backreferences with counting are available. However, a regex can parse the language `ww`, which is context-sensitive.
Actually, regular expressions in formal language theory are only loosely related to regexes. Matching regexes with unlimited backreferences is NP-complete in the most general case, so all pattern-matching algorithms for sufficiently powerful regexes are exponential, at least in the general case. However, most of the time, for most input, they are quite fast. It is known that matching context-free languages can be done in somewhat better than `n^3` time, so there are languages matchable by regexes that are not context-free (like `ww`), but not all context-free languages can be parsed by regexes. Type-0 languages are undecidable in general, so regexes don't get there.
So, as a not very conclusive conclusion: regexes can parse a broad set of languages that includes all regular languages plus some context-free and context-sensitive ones, but that set is not exactly equal to any of those classes. There are other categories of languages, and other taxonomies, where you could find a more precise answer, but no taxonomy that includes the context-free languages as a level of its hierarchy can have a level exactly recognized by regexes, because regexes only partially overlap with the context-free languages, and neither is a proper subset of the other.
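A concrete illustration of these boundaries, sketched with Python's standard `re` module (backreferences only; recursion like `(?R)` would need the third-party `regex` module):

```python
import re

# The context-sensitive "copy language" ww IS matchable with a backreference:
# group 1 captures some w, and \1 demands an identical copy right after it.
copy_lang = re.compile(r'^(.+)\1$')

# A plain regular expression (no backreferences) famously cannot pin down
# a^n b^n; this approximation accepts any a-run followed by any b-run.
approx = re.compile(r'^a+b+$')

print(bool(copy_lang.match('abcabc')))  # True  -- 'abc' repeated
print(bool(copy_lang.match('abcab')))   # False -- odd length, no w exists
print(bool(approx.match('aaabb')))      # True, although the counts differ (3 vs 2)
```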
|
You can read about regexes in *[An Introduction to Language And Linguistics
By Ralph W. Fasold, Jeff Connor-Linton P.477](http://books.google.com/books?id=dlzthEZGkmsC&pg=PA477#v=onepage&q&f=false)*
**Chomsky Hierarchy**:
Type0 >= Type1 >= Type2 >= Type3
Computational Linguistics mainly features Type 2 & 3 Grammars
• *Type 3 grammars*:
–Include **regular expressions** and finite state automata (aka, finite state machines)
–The focal point of the rest of this talk
• *Type 2 grammars*:
–Commonly used for natural language parsers
–Used to model syntactic structure in many linguistic theories (often supplemented by other mechanisms)
–They will play a key role in the next talk on parsing.
---
Most XML dialects that contain inter-relational links, like [Microsoft DGML](http://en.wikipedia.org/wiki/DGML) (Directed Graph Markup Language), are examples where regexes are useless.
---
and these three answers may be useful:
1 - [does-lookaround-affect-which-languages-can-be-matched-by-regular-expressions](https://stackoverflow.com/questions/2974210/does-lookaround-affect-which-languages-can-be-matched-by-regular-expressions/2991587#2991587)
2 - [regular-expressions-arent](https://cstheory.stackexchange.com/questions/448/regular-expressions-arent)
3 - [where-do-most-regex-implementations-fall-on-the-complexity-scale](https://cstheory.stackexchange.com/questions/1047/where-do-most-regex-implementations-fall-on-the-complexity-scale)
| 7,127
|
34,722,459
|
Is there a way to generate a file on HDFS directly?
I want to avoid generating a local file and then copying it to HDFS with a command line call like:
`hdfs dfs -put - "file_name.csv"`
Or is there any python library?
|
2016/01/11
|
[
"https://Stackoverflow.com/questions/34722459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5773478/"
] |
Have you tried with [HdfsCli](http://hdfscli.readthedocs.org/en/latest/quickstart.html)?
To quote the paragraph [Reading and Writing files](http://hdfscli.readthedocs.org/en/latest/quickstart.html#reading-and-writing-files):
```
# Loading a file in memory.
with client.read('features') as reader:
features = reader.read()
# Directly deserializing a JSON object.
with client.read('model.json', encoding='utf-8') as reader:
from json import load
model = load(reader)
```
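Since the question asks about writing, it's worth noting the same library can also stream a file straight to HDFS; a minimal sketch (the host, port, user, and paths are illustrative assumptions):

```python
import csv
import io

def rows_to_csv(rows):
    """Serialize rows to CSV text entirely in memory (no local file)."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def write_csv_to_hdfs(client, hdfs_path, rows):
    # client.write() streams the payload straight into the HDFS file.
    with client.write(hdfs_path, encoding='utf-8', overwrite=True) as writer:
        writer.write(rows_to_csv(rows))

# Usage against a live cluster (WebHDFS host/port/user are assumptions):
#   from hdfs import InsecureClient  # pip install hdfs
#   client = InsecureClient('http://namenode:50070', user='hdfs')
#   write_csv_to_hdfs(client, '/tmp/file_name.csv', [[1, 'a'], [2, 'b']])
```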
|
The write method is extremely slow when I use hdfscli. Is there any way to speed it up?
```
with client.write(conf.hdfs_location + '/' + conf.filename, encoding='utf-8', buffersize=10000000) as f:
    writer = csv.writer(f, delimiter=conf.separator)
    for i in tqdm(range(10000000000)):
        row = [column.get_value() for column in conf.columns]
        writer.writerow(row)
```
Thanks a lot.
| 7,130
|
63,170,922
|
Is there a way to **try to** decode a bytearray without raising an error if the encoding fails?
**EDIT**: The solution needn't use bytearray.decode(...). Anything library (preferably standard) that does the job would be great.
**Note**: I don't want to ignore errors, (which I could do using `bytearray.decode(errors='ignore')`). I also don't want an exception to be raised. Preferably, I would like the function to return None, for example.
```py
my_bytearray = bytearray('', encoding='utf-8')
# ...
# Read some stream of bytes into my_bytearray.
# ...
text = my_bytearray.decode()
```
If my\_bytearray doesn't contain valid UTF-8 text, the last line will raise an error.
**Question**: Is there a way to perform the validation but without raising an error?
(I realize that raising an error is considered "pythonic". Let's assume this is undesirable for some or other good reason.)
I don't want to use a try-catch block because this code gets called thousands of times and I don't want my IDE to stop every time this exception is raised (whereas I do want it to pause on other errors).
|
2020/07/30
|
[
"https://Stackoverflow.com/questions/63170922",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4093278/"
] |
You could use the [suppress](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) context manager to suppress the exception and have slightly prettier code than with try/except/pass:
```py
import contextlib
...
return_val = None
with contextlib.suppress(UnicodeDecodeError):
    return_val = my_bytearray.decode('utf-8')
```
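Wrapped as a small helper, this gives exactly the asked-for behaviour: `None` on failure, with no exception ever surfacing (and no `except` clause for an IDE to break on):

```python
import contextlib

def try_decode(data, encoding='utf-8'):
    """Decode bytes/bytearray, returning None instead of raising."""
    with contextlib.suppress(UnicodeDecodeError):
        return data.decode(encoding)
    return None  # reached only when the decode was suppressed

print(try_decode(bytearray(b'hello')))     # hello
print(try_decode(bytearray(b'\xff\xfe')))  # None
```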
|
The `chardet` module can be used to detect the encoding of a bytearray before calling `bytearray.decode(...)`.
**The Code:**
```py
import chardet
analysis = chardet.detect(my_bytearray)
```
The method `chardet.detect(...)` returns a dictionary with the following format:
```
{
    'confidence': 0.99,
    'encoding': 'ascii',
    'language': ''
}
```
One could check `analysis['encoding']` to confirm that `my_bytearray` is compatible with an expected set of text encoding before calling `my_bytearray.decode()`.
One consideration of using this approach is that the encoding indicated by the analysis might indicate one of a number of equivalent encodings. In this case, for instance, the analysis indicates that the encoding is ASCII whereas it could equivalently be UTF-8.
(Credit to @simon who pointed this out on StackOverflow [here](https://stackoverflow.com/a/49480024/4093278).)
| 7,133
|
41,801,225
|
I'm very new to python. I am writing code to generate an array of numbers, but the output is not as I want.
The code is as follows:
```
import numpy as np
n_zero=input('Insert the amount of 0: ')
n_one =input('Insert the amount of 1: ')
n_two =input('Insert the amount of 2: ')
n_three = input('Insert the amount of 3: ')
data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three
np.random.shuffle(data)
print(data)
```
The output is as follows :
```
Insert the amount of 0: 10
Insert the amount of 1: 3
Insert the amount of 2: 3
Insert the amount of 3: 3
[0, 0, 3, 1, 0, 3, 2, 0, 3, 0, 2, 0, 2, 1, 1, 0, 0, 0, 0]
```
I want the following output:
```
0031032030202110000
```
Thank you
|
2017/01/23
|
[
"https://Stackoverflow.com/questions/41801225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7456346/"
] |
It's better to use the latest Android versions, but you can resolve this by replacing the code below in your `app/build.gradle` file:
```
android {
    compileSdkVersion 24
    buildToolsVersion "24.2.1"
    defaultConfig {
        minSdkVersion 19
        targetSdkVersion 24
        ...
    }
```
Dependencies as follows:
```
compile 'com.android.support:appcompat-v7:24.2.1'
compile 'com.android.support:support-v4:24.2.1'
compile 'com.android.support:design:24.2.1'
```
|
I have just used the Maven Google repository in `build.gradle` (project):
```
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    repositories {
        jcenter()
        google()
        maven {
            url 'https://maven.google.com'
        }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:4.0.1'
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        jcenter()
        maven {
            url 'https://maven.google.com'
        }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
```
| 7,134
|
70,771,156
|
I've converted my python script with the tkinter module to a standalone executable file with PyInstaller, but it doesn't work without the image.png file in the same path. How can I add this .png file to my app? And why does the .exe file have an enormous size of ~350 MB?
|
2022/01/19
|
[
"https://Stackoverflow.com/questions/70771156",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17971397/"
] |
I had the exact same situation with Tkinter and a single image needed in the GUI.
I combined Aleksandr Tyshkevich's answer here and Jonathon Reinhart's answer from [Pyinstaller adding data files](https://stackoverflow.com/questions/41870727/pyinstaller-adding-data-files), as I need to send just the exe file to others, so it needs to work from any location.
I have one python file named gui.py. In gui.py I included the code:
```
import sys
import os
from PIL import Image
def resource_path(relative_path):
    """ Get absolute path to resource, works for dev and for PyInstaller """
    base_path = getattr(sys, '_MEIPASS', os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base_path, relative_path)
path = resource_path("logo.png")
# When opening image
logo = Image.open(path)
```
In the terminal I used:
```
pyinstaller --onefile --add-data "logo.png;." gui.py
```
I ran into a problem when I tried to use a colon in the line above: since I'm on Windows, I needed to use a semi-colon.
|
It works:
```py
import os
import sys
import tkinter as tk

def resource_path(relative_path):
    try:
        base_path = sys._MEIPASS
    except Exception:
        base_path = os.path.abspath(".")
    return os.path.join(base_path, relative_path)
path = resource_path("image.png")
photo = tk.PhotoImage(file=path)
```
| 7,144
|
34,712,248
|
Trying to download the website with python, but getting errors. My intention is to download the website, extract relevant information from it using python, and save the result to another file on my hard disk. I'm having trouble on step 1. The other steps were working until some strange SSL error. I am using python 2.7.
```
import urllib
testsite = urllib.URLopener()
testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html")
```
This is what happens:
```
Traceback (most recent call last):
  File "C:\Users\Xaero\Desktop\Python\class related\scratch.py", line 10, in <module>
    testsite.retrieve("https://thepiratebay.se/top/207", "C:\file.html")
  File "C:\Python27\lib\urllib.py", line 237, in retrieve
    fp = self.open(url, data)
  File "C:\Python27\lib\urllib.py", line 205, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 435, in open_https
    h.endheaders(data)
  File "C:\Python27\lib\httplib.py", line 940, in endheaders
    self._send_output(message_body)
  File "C:\Python27\lib\httplib.py", line 803, in _send_output
    self.send(msg)
  File "C:\Python27\lib\httplib.py", line 755, in send
    self.connect()
  File "C:\Python27\lib\httplib.py", line 1156, in connect
    self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file)
  File "C:\Python27\lib\ssl.py", line 342, in wrap_socket
    ciphers=ciphers)
  File "C:\Python27\lib\ssl.py", line 121, in __init__
    self.do_handshake()
  File "C:\Python27\lib\ssl.py", line 281, in do_handshake
    self._sslobj.do_handshake()
IOError: [Errno socket error] [Errno 1] _ssl.c:499: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error
```
Did some research online, and it turns out Piratebay is very python-unfriendly. I found some code that gives it a different user agent, and makes it load the page, but this too stopped working very recently. >\_<
Generates the same error:
```
import urllib2
import os
import datetime
import time
from urllib import FancyURLopener
from random import choice
today = datetime.datetime.today()
today = today.strftime('%Y.%m.%d')
user_agents = [
'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11',
'Opera/9.25 (Windows NT 5.1; U; en)',
'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)',
'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12']
class MyOpener(FancyURLopener, object):
    version = choice(user_agents)
myopener = MyOpener()
page = myopener.retrieve('https://thepiratebay.se/top/207', 'C:\TPB.HDMovies' + today + '.html')
```
Is anyone out there able to do this successfully?
|
2016/01/10
|
[
"https://Stackoverflow.com/questions/34712248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5771257/"
] |
Okay, let's start simple.
First you need to get the unique user\_id/dev\_id combinations:
```
select distinct dev_id,user_id from reports
```
Result will be
```
dev_id user_id
------------------
111 1
222 2
111 2
333 3
```
After that you should get the number of distinct user\_ids per dev\_id:
```
select dev_id, c from (
    SELECT
        dev_id,
        count(*)-1 AS c
    FROM
        (select distinct user_id, dev_id from reports) as fixed_reports
    GROUP BY dev_id
) as counts
```
Result of such query will be
```
dev_id c
-----------------
111 1
222 0
333 0
```
Now you should show the users which have such a dev\_id. For that, you should join the dev\_id list with the table from step 1 (which shows which user\_id/dev\_id pairs exist):
```
select distinct fixed_reports2.user_id, counts.c from (
    SELECT
        dev_id,
        count(*)-1 AS c
    FROM
        (select distinct user_id, dev_id from reports) as fixed_reports
    GROUP BY dev_id
) as counts
join
    (select distinct user_id, dev_id from reports) as fixed_reports2
    on fixed_reports2.dev_id = counts.dev_id
where counts.c > 0 and counts.c is not null
```
"Distinct" here need to skip same rows.
Result should be for internal query
```
dev_id c
-----------------
111 1
```
For the whole query:
```
user_id c
------------------
1 1
2 1
```
If you are sure you also need the rows with c=0, then you need to do a "left join" of fixed\_reports2 and the large query; that way you will get all rows, and rows with c=null will be the rows with 0 (this can be changed with a case/when statement).
|
I think the following SQL query should solve your problem:
```
SELECT t1.user_id, t1.dev_id, count(t2.user_id) as qu
FROM (Select Distinct * from reports) t1
Left Join (Select Distinct * from reports) t2
on t1.user_id != t2.user_id and t2.dev_id = t1.dev_id
group by t1.user_Id, t1.dev_id
```
[SQL Fiddle Link](http://sqlfiddle.com/#!9/86d89ff/9)
| 7,145
|
45,510,287
|
I want to split a long math equation by its multipliers.
The expression is given as a string where whitespaces are allowed.
For example:
```
"((a*b>0) * (e>500)) * (abs(j)>2.0) * (n>1)"
```
Should return:
```
['a*b>0', 'e>500', 'abs(j)>2.0', 'n>1']
```
If division is used, things get even more complicated, but let's assume there is no division for a start. What would be the most pythonic way to solve this?
|
2017/08/04
|
[
"https://Stackoverflow.com/questions/45510287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3152072/"
] |
```
import re

string = "((a-b>0) * (e + 10>500)) * (abs(j)>2.0) * (n>1)"
signals = {'+', '*', '/', '-'}

def splitString(string):
    arr_equations = re.split(r'([\)]+(\*|\-|\+|\/)+[\(])', string.replace(" ", ""))
    new_array = []
    for each_equa in arr_equations:
        each_equa = each_equa.strip("()")
        if each_equa not in signals:
            new_array.append(each_equa)
    return new_array

print(splitString(string))
```
|
You can simply use the `split()` function:
```
ans_list = your_string.split(" * ")
```
Note the spaces around the multiplication sign. This assumes that your string is exactly as you say.
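If the input is not guaranteed to be exactly that regular, a small parser that tracks parenthesis depth is still fairly pythonic; a sketch (assuming only `*` joins the factors, as in the question):

```python
def split_top_level(expr, op='*'):
    """Split expr on op occurrences that sit at parenthesis depth 0."""
    parts, cur, depth = [], [], 0
    for ch in expr:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        if ch == op and depth == 0:
            parts.append(''.join(cur).strip())
            cur = []
        else:
            cur.append(ch)
    parts.append(''.join(cur).strip())
    return parts

def strip_outer_parens(s):
    """Remove parentheses that wrap the entire string, possibly nested."""
    s = s.strip()
    while s.startswith('(') and s.endswith(')'):
        depth = 0
        for i, ch in enumerate(s):
            depth += ch == '('
            depth -= ch == ')'
            if depth == 0 and i < len(s) - 1:
                return s  # the leading '(' closes before the end
        s = s[1:-1].strip()
    return s

def factors(expr):
    """Recursively collect the parenthesized factors of a product."""
    inner = strip_outer_parens(expr)
    parts = split_top_level(inner)
    # Only recurse while every piece is itself parenthesized, so a '*'
    # inside a comparison like a*b>0 is left alone.
    if len(parts) > 1 and all(p.startswith('(') and p.endswith(')') for p in parts):
        return [f for p in parts for f in factors(p)]
    return [inner]

print(factors("((a*b>0) * (e>500)) * (abs(j)>2.0) * (n>1)"))
# ['a*b>0', 'e>500', 'abs(j)>2.0', 'n>1']
```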
| 7,155
|
57,302,048
|
How do I fix this error? I tried visiting all the forums searching for answers to rectify this issue.
Here I am trying to perform multi-label classification using Keras.
```
from keras.preprocessing.text import Tokenizer
from keras.models import Sequential
from keras.layers import Dense
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, Dense, Dropout, Embedding, LSTM, Flatten
from keras.models import Model
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint
MAX_LENGTH = 500
tokenizer = Tokenizer()
tokenizer.fit_on_texts(df.overview.values)
post_seq = tokenizer.texts_to_sequences(df.overview.values)
post_seq_padded = pad_sequences(post_seq, maxlen=MAX_LENGTH)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(post_seq_padded, train_encoded, test_size=0.3)
vocab_size = len(tokenizer.word_index) + 1
inputs = Input(shape=(MAX_LENGTH, ))
embedding_layer = Embedding(vocab_size, 128, input_length=MAX_LENGTH)(inputs)
x = Dense(64, input_shape=(None,), activation='relu')(embedding_layer)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=[inputs], outputs=predictions)
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
model.summary()
filepath="weights.hdf5"
checkpointer = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
history = model.fit(X_train, batch_size=64, y=to_categorical(y_train), verbose=1, validation_split=0.25, shuffle=True, epochs=10, callbacks=[checkpointer])
```
**ValueError Traceback (most recent call last)**
```
<ipython-input-11-7fdc4bff9648> in <module>
2 checkpointer = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
3 history = model.fit(X_train, batch_size=64, y=to_categorical(y_train), verbose=1, validation_split=0.25,
**---->** 4 shuffle=True, epochs=10, callbacks=[checkpointer])
```
***ValueError: Error when checking target: expected dense\_3 to have shape (500, 4) but got array with shape (4, 2)***
I expected the output shape to be (500, 3) but I am getting (4, 2), which does not match, so I cannot proceed further.
|
2019/08/01
|
[
"https://Stackoverflow.com/questions/57302048",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11050535/"
] |
Adding an index along the lines of the following might help the performance:
```
CREATE INDEX idx ON table_e (Phone_number, bill_date, col1, col2);
```
Here `col1` and `col2` are the other two columns which might appear in the `SELECT` clause. The strategy of this index, if used, would be to scan the relatively small `table_a`, which only has 309 records. For each record in `a`, MySQL would then use the above index to rapidly find the matching records in the `e` table.
|
If you can update, then it is better to create an `index` on `a.invoice_date`. Please find the [link](https://dev.mysql.com/doc/refman/8.0/en/index-hints.html) for the same.
| 7,157
|
26,535,493
|
I wrote a custom python module. It consists of several functions divided thematically between 3 .py files, which are all in the same directory called `microbiome` in my home directory. So the whole path to my custom module directory is:
```
/Users/drosophila/microbiome
```
I'm working on OsX Mavericks. I want to import this module in python scripts which I run from a different directory.
I tried adding the `microbiome` directory to the path by editing `/etc/paths`:
```
sudo nano /etc/paths
```
Then in `/etc/paths` I write:
```
/usr/bin
/bin
/usr/sbin
/sbin
/usr/local/bin
/Users/drosophila/blast-2.2.22/bin
/Users/drosophila/blast-2.2.22/
/Users/drosophila/microbiome
```
I also tried editing `.bash_profile` as follows:
```
export PATH="/Users/drosophila/microbiome:/Users/drosophila/anaconda/bin:$PATH"
```
It seems that the 'microbiome' directory is added to the path successfully, since `echo $PATH` shows the directory is in there:
```
/Users/drosophilq/anaconda/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usearch:/Users/drosophila/blast-2.2.22/bin:/Users/drosophila/blast-2.2.22/:/Users/drosophila/microbiome:/opt/X11/bin:/usr/texbin
```
However, when I try to import the microbiome module in python, it insists that such a module doesn't exist. I have Python 3.4.1 |Anaconda 2.0.1
The 'microbiome' directory contains an empty `__init__.py` file.
What am I doing wrong?
|
2014/10/23
|
[
"https://Stackoverflow.com/questions/26535493",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1954277/"
] |
The *right* way to do this, as explained in the [Python Packaging User Guide](https://packaging.python.org/en/latest/), is to create a `setup.py`-based project.
Then, you can just install your code for any particular Python installation (or virtual environment) by using, e.g., `pip3 install .` from the root directory of the project. That makes sure everything gets copied, with the proper layout, into some appropriate site-packages directory, where it will be available for that Python installation to import.
Trying to do what the standard tools do yourself is just making things harder on yourself.
---
That being said, if you really, really want to, the key is that you need to get your new directory into the `sys.path` for the Python installation you want. Modifying `PATH` or `/etc/paths` won't do that. Modifying `PYTHONPATH` will, but it will affect *every* installation. The way to do this is to add a `.pth` file and/or a `sitecustomize.py` file, as described in the docs for the [`site`](https://docs.python.org/3/library/site.html) module.
I don't know where your Anaconda site-packages is (you can find out by `import sys; print(sys.path)` from within Python), but let's say it's `/usr/local/anaconda/lib/python3.4/site-packages`. So, you can create a `microbiome.pth` file in that directory, with the absolute path to your `/Users/drosophila/microbiome` directory. Then, every time you start Python, that directory will be added to `sys.path`, and your import will work.
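As a sketch of that `.pth` setup (paths mirror the question; `site.getsitepackages()` locates the running interpreter's site-packages, and the actual file write is left commented out). Note that for `import microbiome` itself to resolve, the `.pth` line must name the directory that *contains* the package, i.e. its parent:

```python
import os
import site

def pth_file_for(package_dir, name='microbiome.pth'):
    """Compute where a .pth file should live for this interpreter, and
    the single line to write into it (the package's parent directory)."""
    site_packages = site.getsitepackages()[0]
    parent = os.path.dirname(package_dir.rstrip('/'))
    return os.path.join(site_packages, name), parent + '\n'

path, line = pth_file_for('/Users/drosophila/microbiome')
# Uncomment to actually create it (may need sudo for a system Python):
# with open(path, 'w') as f:
#     f.write(line)
```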
---
It's also worth noting that if you just want to reuse a directory as if it were part of a handful of different projects, and you don't want to even think about "installation" or anything like that, there are even simpler ways to do it: Symlink the directory into your different projects. Or, if you're using version control, create a git submodule. Or various other similar equivalents. Then, it looks like each project just includes `microbiome` as part of that project, and you don't have to worry about paths or anything else.
|
As you've discovered, `/etc/paths` affects `$PATH`. But `$PATH` does not affect where Python looks for modules. Try `$PYTHONPATH` instead. See `man python` for details.
| 7,158
|
56,973,032
|
So I am trying to install plaidML-keras so I can do tensorflow stuff on my MacBook Pro's GPU (Radeon Pro 560X). From my research, it can be done using plaidML-Keras ([installation instructions](https://github.com/plaidml/plaidml/blob/master/docs/install.rst#macos)). When I run `pip install -U plaidml-keras` it works fine, but the next step, `plaidml-setup`, returns the following error.
```
Traceback (most recent call last):
  File "/usr/local/bin/plaidml-setup", line 6, in <module>
    from plaidml.plaidml_setup import main
  File "/usr/local/lib/python3.7/site-packages/plaidml/__init__.py", line 50, in <module>
    import plaidml.settings
  File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 33, in <module>
    _setup_config('PLAIDML_EXPERIMENTAL_CONFIG', 'experimental.json')
  File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config
    'Could not find PlaidML configuration file: "{}".'.format(filename))
plaidml.exceptions.PlaidMLError: Could not find PlaidML configuration file: "experimental.json".
```
From my limited understanding of the error message, it is saying that I am missing a configuration file, but I don't know where to put it or what to put in it. I am guessing that it has something to do with the following (vague) line from the instructions.
>
> Finally, set up PlaidML to use a preferred computing device
>
>
>
But how do I specify that I want it to use the Radeon Pro 560X? Also, I did check and my Mac is compatible with OpenCL 1.2 (required for PlaidML).
|
2019/07/10
|
[
"https://Stackoverflow.com/questions/56973032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8565630/"
] |
Disclaimer: I'm on the PlaidML team, and we're actively working to improve the setup experience and documentation around it. We're sorry you were stuck on this. For now, here are some instructions to get you back on track.
1. Find out where plaidml-setup was installed. Typically, this is some variant of `/usr/local/bin` or a path to your virtual environment. The prefix of this path (i.e. `/usr/local`) is important to note for the next step.
2. Find the plaidml `share` directory. It's within the same prefix as plaidml-setup, i.e. `/usr/local/share/plaidml`.
3. Within the plaidml `share` directory, there should be a few files: at a minimum, `config.json` and `experimental.json` should be in there. If they're not in there, you can copy [the files here](https://github.com/plaidml/plaidml/tree/master/plaidml/configs) to your plaidml `share` directory.
After copying those json files over, you should be able to run `plaidml-setup` with no issue.
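Steps 1 and 2 above can be sketched as a tiny helper (the `/usr/local` prefix is only an example; use whatever prefix step 1 actually reports on your machine):

```python
import os

def plaidml_share_dir(setup_path):
    """Derive the expected plaidml share directory from the location of
    the plaidml-setup executable (prefix/bin/plaidml-setup)."""
    prefix = os.path.dirname(os.path.dirname(setup_path))
    return os.path.join(prefix, 'share', 'plaidml')

print(plaidml_share_dir('/usr/local/bin/plaidml-setup'))
# '/usr/local/share/plaidml' on a POSIX system
```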
|
I'm facing the same problem and answers online are not very helpful. In this case, I'd suggest debugging yourself.
Since this is where the problem is:
```
File "/usr/local/lib/python3.7/site-packages/plaidml/settings.py", line 30, in _setup_config
'Could not find PlaidML configuration file: "{}".'.format(filename))
```
You can `vim /usr/local/lib/python3.7/site-packages/plaidml/settings.py`, and read the code. Basically it's trying to use function `_find_config` to get config files.
After `cfg_path = os.path.join(prefix, 'share', 'plaidml', name)`, I added `print(cfg_path)` to see what path it's looking for. And I got:
```
/usr/local/Caskroom/miniconda/base/share/plaidml/experimental.json
/usr/local/Caskroom/miniconda/base/share/plaidml/config.json
```
This is why it's hard to tell you where to put the files: it depends on your system setup. Not everyone is using `cask` and `conda` like me, so I assume it should be different in your OS.
@Denise Kutnick: thanks for your hard work, maybe either print cfg\_path when there's a problem, or try to add `.` as a search path, so that it would be easier for users to get some clue?
| 7,159
|
16,580,285
|
I am writing a python script to keep a buggy program open, and I need to figure out if the program is not responding and close it on Windows. I can't quite figure out how to do this.
|
2013/05/16
|
[
"https://Stackoverflow.com/questions/16580285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2125510/"
] |
On Windows you can do this:
```
import os
def isresponding(name):
    os.system('tasklist /FI "IMAGENAME eq %s" /FI "STATUS eq running" > tmp.txt' % name)
    tmp = open('tmp.txt', 'r')
    a = tmp.readlines()
    tmp.close()
    if a[-1].split()[0] == name:
        return True
    else:
        return False
```
It is more robust to use the PID though:
```
def isrespondingPID(PID):
    os.system('tasklist /FI "PID eq %d" /FI "STATUS eq running" > tmp.txt' % PID)
    tmp = open('tmp.txt', 'r')
    a = tmp.readlines()
    tmp.close()
    if int(a[-1].split()[1]) == PID:
        return True
    else:
        return False
```
From `tasklist` you can get more information than that. To get the "NOT RESPONDING" processes directly, just change "running" by "not responding" in the functions given. [See more info here](http://www.gossamer-threads.com/lists/python/python/796145).
|
Building on the awesome answer from @Saullo GP Castro, this is a version using `subprocess.Popen` instead of `os.system` to avoid creating a temporary file.
```py
import subprocess
def isresponding(name):
    """Check if a program (based on its name) is responding"""
    cmd = 'tasklist /FI "IMAGENAME eq %s" /FI "STATUS eq running"' % name
    status = subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout.read()
    return name in str(status)
```
The corresponding PID version is:
```
def isresponding_PID(pid):
    """Check if a program (based on its PID) is responding"""
    cmd = 'tasklist /FI "PID eq %d" /FI "STATUS eq running"' % pid
    status = subprocess.Popen(cmd, stdout=subprocess.PIPE).stdout.read()
    return str(pid) in str(status)
```
The usage of `timeit` showed that the usage of `subprocess.Popen` is twice as fast (mainly because we don't need to go through a file):
```
+-----------------------------+---------------------------+
| Function | Time in s (10 iterations) |
+-----------------------------+---------------------------+
| isresponding_os | 8.902 |
+-----------------------------+---------------------------+
| isrespondingPID_os | 8.318 |
+-----------------------------+---------------------------+
| isresponding_subprocess | 4.852 |
+-----------------------------+---------------------------+
| isresponding_PID_subprocess | 4.868 |
+-----------------------------+---------------------------+
```
Surprisingly, with the `os.system` implementation the PID version is a bit faster, while with `subprocess.Popen` there is not much difference.
Hope it can help.
| 7,165
|
5,230,699
|
```
gardai-plan-crackdown-on-troublemakers-at-protest-2438316.html': {'dail': 1, 'focus': 1, 'actions': 1, 'trade': 2, 'protest': 1, 'identify': 1, 'previous': 1, 'detectives': 1, 'republican': 1, 'group': 1, 'monitor': 1, 'clashes': 1, 'civil': 1, 'charge': 1, 'breaches': 1, 'travelling': 1, 'main': 1, 'disrupt': 1, 'real': 1, 'policing': 3, 'march': 6, 'finance': 1, 'drawn': 1, 'assistant': 1, 'protesters': 1, 'emphasised': 1, 'department': 1, 'traffic': 2, 'outbreak': 1, 'culprits': 1, 'proportionate': 1, 'instructions': 1, 'warned': 2, 'commanders': 1, 'michael': 2, 'exploit': 1, 'culminating': 1, 'large': 2, 'continue': 1, 'team': 1, 'hijack': 1, 'disorder': 1, 'square': 1, 'leaders': 1, 'deal': 2, 'people': 3, 'streets': 1, 'demonstrations': 2, 'observed': 1, 'street': 2, 'college': 1, 'organised': 1, 'operation': 1, 'special': 1, 'shown': 1, 'attendance': 1, 'normal': 1, 'unions': 2, 'individuals': 1, 'safety': 2, 'prosecuted': 1, 'ira': 1, 'ground': 1, 'public': 2, 'told': 1, 'body': 1, 'stewards': 2, 'obey': 1, 'business': 1, 'gathered': 1, 'assemble': 1, 'garda': 5, 'sinn': 1, 'broken': 1, 'fachtna': 1, 'management': 2, 'possibility': 1, 'groups': 3, 'put': 1, 'affiliated': 1, 'strong': 2, 'security': 1, 'stage': 1, 'behaviour': 1, 'involved': 1, 'route': 2, 'violence': 1, 'dublin': 3, 'fein': 1, 'ensure': 2, 'stand': 1, 'act': 2, 'contingency': 1, 'troublemakers': 2, 'facilitate': 2, 'road': 1, 'members': 1, 'prepared': 1, 'presence': 1, 'sullivan': 2, 'reassure': 1, 'number': 3, 'community': 1, 'strategic': 1, 'visible': 2, 'addressed': 1, 'notify': 1, 'trained': 1, 'eirigi': 1, 'city': 4, 'gpo': 1, 'from': 3, 'crowd': 1, 'visit': 1, 'wood': 1, 'editor': 1, 'peaceful': 4, 'expected': 2, 'today': 1, 'commissioner': 4, 'quay': 1, 'ictu': 1, 'advance': 1, 'murphy': 2, 'gardai': 6, 'aware': 1, 'closures': 1, 'courts': 1, 'branch': 1, 'deployed': 1, 'made': 1, 'thousands': 1, 'socialist': 1, 'work': 1, 'supt': 2, 'feehan': 1, 'mr': 1, 'briefing': 1, 'visited': 
1, 'manner': 1, 'irish': 2, 'metropolitan': 1, 'spotters': 1, 'organisers': 1, 'in': 13, 'dissident': 1, 'evidence': 1, 'tom': 1, 'arrangements': 3, 'experience': 1, 'allowed': 1, 'sought': 1, 'rally': 1, 'connell': 1, 'officers': 3, 'potential': 1, 'holding': 1, 'units': 1, 'place': 2, 'events': 1, 'dignified': 1, 'planned': 1, 'independent': 1, 'added': 2, 'plans': 1, 'congress': 1, 'centre': 3, 'comprehensive': 1, 'measures': 1, 'yesterday': 2, 'alert': 1, 'important': 1, 'moving': 1, 'plan': 2, 'highly': 1, 'law': 2, 'senior': 2, 'fair': 1, 'recent': 1, 'refuse': 1, 'attempt': 1, 'brady': 1, 'liaising': 1, 'conscious': 1, 'light': 1, 'clear': 1, 'headquarters': 1, 'wing': 1, 'chief': 2, 'maintain': 1, 'harcourt': 1, 'order': 2, 'left': 1}}
```
I have a python script that extracts words from text files and counts the number of times they occur in each file.
I want to add them to an ".ARFF" file to use for weka classification.
Above is an example output of my python script.
How do I go about inserting them into an ARFF file, keeping each text file separate? Each file is differentiated by its filename key, with its words inside `{...}`.
|
2011/03/08
|
[
"https://Stackoverflow.com/questions/5230699",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/515263/"
] |
There are details on the [ARFF file format here](http://www.cs.waikato.ac.nz/~ml/weka/arff.html) and it's very simple to generate. For example, using a cut-down version of your Python dictionary, the following script:
```
import re
d = { 'gardai-plan-crackdown-on-troublemakers-at-protest-2438316.html':
{'dail': 1,
'focus': 1,
'actions': 1,
'trade': 2,
'protest': 1,
'identify': 1 }}
for original_filename in d.keys():
    m = re.search(r'^(.*)\.html$', original_filename)
if not m:
print "Ignoring the file:", original_filename
continue
output_filename = m.group(1)+'.arff'
with open(output_filename,"w") as fp:
fp.write('''@RELATION wordcounts
@ATTRIBUTE word string
@ATTRIBUTE count numeric
@DATA
''')
for word_and_count in d[original_filename].items():
fp.write("%s,%d\n" % word_and_count)
```
Generates output of the form:
```
@RELATION wordcounts
@ATTRIBUTE word string
@ATTRIBUTE count numeric
@DATA
dail,1
focus,1
actions,1
trade,2
protest,1
identify,1
```
... in a file called `gardai-plan-crackdown-on-troublemakers-at-protest-2438316.arff`. If that's not exactly what you want, I'm sure you can easily alter it. (For example, if the "words" might have spaces or other punctuation in them, you probably want to quote them.)
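As noted above, words containing spaces or punctuation should be quoted before being written out. A minimal sketch of such a quoting helper (the function name and escaping rules here are my own assumptions, following ARFF's single-quote convention):

```python
def arff_quote(word):
    # Hypothetical helper: wrap a value in single quotes if it contains
    # whitespace, commas or quote characters, escaping embedded quotes.
    if any(ch in word for ch in " ,'\""):
        return "'" + word.replace("'", "\\'") + "'"
    return word

print(arff_quote("real policing"))  # → 'real policing'
```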
|
I know it's pretty easy to generate an arff file on your own, but I still wanted to make it simpler so I wrote a python package
<https://github.com/ubershmekel/arff>
It's also on pypi so `easy_install arff`
| 7,166
|
18,507,559
|
Since I too have also seen this question on SO, so this might be a duplicate for many, but I've not found an answer to this question.
I want select an item from the navigation bar and show the content inside another tag by replacing the current data with AJAX-generated data.
Currently I'm able to post the data into the python services, it processes it and finally returns it back to the client. This last part of changing the data into the `div` is not happening.
Here is my code.
**python service**
```
def dashboard(request):
if request.is_ajax():
which_nav_bar = request.POST['which_nav_bar']
print which_nav_bar # prints which_nav_bar inside the terminal
ctx = {'result' : "hellloooo"}
return render_to_response('dashboard/dashboard1.html',ctx, context_instance = RequestContext(request))
```
**JS file**
```
$(function (){
$("#nav_bar li").click(function(){
alert($(this).text()); // works fine
$.ajax({
type: "POST", //able to post the data behind the scenes
url: "/dashboard/",
data : { 'which_nav_bar' : $(this).text() },
success: function(result){
$(".container-fluid").html(result);
}
});
});
});
```
**HTML**
```
<div class="navbar">
<div class="navbar-inner">
<ul class="nav" id="nav_bar">
<li class="active"><a href="#">Home</a></li>
<li><a href="#">Device</a></li>
<li><a href="#">Content</a></li>
<li><a href="#">About</a></li>
<li><a href="#">Help</a></li>
</ul>
</div>
</div>
<div class="container-fluid"></div>
```
**OUTPUT**
On `$(".container-fluid").html(result);`, [this](http://www.flickr.com/photos/37280036@N04/9618885201/) is what I actually get. I instead want my python code to return something (in this case `ctx`) and print the ctx variable.
|
2013/08/29
|
[
"https://Stackoverflow.com/questions/18507559",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1162512/"
] |
Change it to an id:
```
<div id="container-fluid"></div>
```
This is the id selector: `$("#container-fluid")`
[id-selector](http://api.jquery.com/id-selector/)
if you want to access by class you can use
`$(".container-fluid")`
[class-selector](http://api.jquery.com/class-selector/)
|
try
```
$("#nav_bar li").click(function(){
var text_input = $(this).text(); // works fine
$.ajax({
type: "POST", //able to post the data behind the scenes
url: "/dashboard/",
data : { 'which_nav_bar' : text_input },
success: function(result){
$("#container-fluid").text(result);
}
});
});
```
in your code you use
```
data : { 'which_nav_bar' : $(this).text()},
```
but in your ajax request `$(this).text()` would be undefined ,
so assign it to a variable and use it inside the `data{}`
and also make sure a comma separates the `data` and `success` options
| 7,169
|
44,830,396
|
I am parsing a websocket message and due do a bug in a specific socket.io version (Unfortunately I don't have control over the server side), some of the payload is double encoded as utf-8:
The correct value would be **Wrocławskiej** (note the l letter which is LATIN SMALL LETTER L WITH STROKE) but I actually get back **WrocÅawskiej**.
I already tried to decode/encode it again with java
```
String str = new String(wrongEncoded.getBytes(StandardCharsets.UTF_8), StandardCharsets.UTF_8);
```
Unfortunately the string stays the same. Any idea on how to do a double decoding in Java? I saw a python version where they convert it to `raw_unicode` first and then parse it again, but I don't know if this works or if there is a similar solution for Java.
I already read through a couple of posts on that topic, but none helped.
Edit: To clarify in Fiddler I receive the following byte sequence for the above mentionend word:
```
WrocÃÂawskiej
byte[] arrOutput = { 0x57, 0x72, 0x6F, 0x63, 0xC3, 0x85, 0xC2, 0x82, 0x61, 0x77, 0x73, 0x6B, 0x69, 0x65, 0x6A };
```
|
2017/06/29
|
[
"https://Stackoverflow.com/questions/44830396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3450689/"
] |
Your text was encoded to UTF-8, those bytes were then interpreted as ISO-8859-1 and re-encoded to UTF-8.
`Wrocławskiej` is unicode: 0057 0072 006f 0063 **0142** 0061 0077 0073 006b 0069 0065 006a
Encoding to UTF-8 it is: 57 72 6f 63 **c5 82** 61 77 73 6b 69 65 6a
In [ISO-8859-1](https://en.wikipedia.org/wiki/ISO/IEC_8859-1#Codepage_layout), `c5` is `Å` and `82` is *undefined*.
As ISO-8859-1, those bytes are: `WrocÅawskiej`
Encoding to UTF-8 it is: 57 72 6f 63 **c3 85 c2 82** 61 77 73 6b 69 65 6a
Those are likely the bytes you are receiving.
So, to undo that, you need:
```
String s = new String(bytes, StandardCharsets.UTF_8);
// fix "double encoding"
s = new String(s.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
```
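The question mentions a Python variant of the same trick; for reference, here is the equivalent round trip in Python 3, using the exact byte sequence from the question:

```python
# Bytes received over the wire (from the question)
raw = bytes([0x57, 0x72, 0x6F, 0x63, 0xC3, 0x85, 0xC2, 0x82,
             0x61, 0x77, 0x73, 0x6B, 0x69, 0x65, 0x6A])

mojibake = raw.decode('utf-8')  # first decode still looks wrong
# Undo the double encoding: reinterpret as ISO-8859-1 bytes, decode again
fixed = mojibake.encode('iso-8859-1').decode('utf-8')
print(fixed)  # → Wrocławskiej
```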
|
Well, double encoding may not be the only issue to deal with. Here is a solution that accounts for more than one cause:
```
String myString = "heartbroken ð";
myString = new String(myString.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
String cleanedText = StringEscapeUtils.unescapeJava(myString);
byte[] bytes = cleanedText.getBytes(StandardCharsets.UTF_8);
String text = new String(bytes, StandardCharsets.UTF_8);
Charset charset = Charset.forName("UTF-8");
CharsetDecoder decoder = charset.newDecoder();
decoder.onMalformedInput(CodingErrorAction.IGNORE);
decoder.onUnmappableCharacter(CodingErrorAction.IGNORE);
CharsetEncoder encoder = charset.newEncoder();
encoder.onMalformedInput(CodingErrorAction.IGNORE);
encoder.onUnmappableCharacter(CodingErrorAction.IGNORE);
try {
// The new ByteBuffer is ready to be read.
ByteBuffer bbuf = encoder.encode(CharBuffer.wrap(text));
// The new ByteBuffer is ready to be read.
CharBuffer cbuf = decoder.decode(bbuf);
String str = cbuf.toString();
} catch (CharacterCodingException e) {
logger.error("Error Message if you want to");
}
```
| 7,171
|
63,550,237
|
Here z is a list of dict().
```
z = [{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},
{'loss': [1, 2, 2] , 'val_loss':[2,4,5], 'accuracy':[3,8,9], 'val_accuracy':[5,9,7]},]
```
I want to append all dictionary values of 'loss' in a separate list and similarly 'val\_loss', 'accuracy', 'val\_accuracy'.
For that, I tried to write the below python code:
```
a = b = c = d = []
for lis in z:
a.append(lis['loss'])
b.append(lis['val_loss'])
c.append(lis['accuracy'])
d.append(lis['val_accuracy'])
```
But when I am trying to print the length of the list `print(len(a))` the output is 40 instead of 10?
I just want to append all `'loss'` into `a`.
|
2020/08/23
|
[
"https://Stackoverflow.com/questions/63550237",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3168982/"
] |
The notation `a = b = c = d = []` assigns a new list to `d`, then binds the three other variables to that same list, so you have 4 variables pointing to one single list, and you put 4\*10 items into it.
Do :
```
a, b, c, d = [], [], [], []
```
Using `map` and `itemgetter` you can do
```
from operator import itemgetter
loss = list(map(itemgetter("loss"), z))
val_loss = list(map(itemgetter("val_loss"), z))
accuracy = list(map(itemgetter("accuracy"), z))
val_accuracy = list(map(itemgetter("val_accuracy"), z))
```
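The aliasing described above can be checked directly with the `is` operator:

```python
a = b = c = d = []
assert a is b is c is d      # one list, four names
a.append('loss')
assert d == ['loss']         # a mutation via one name is visible through all

a, b, c, d = [], [], [], []  # four distinct lists
assert a is not b
```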
|
In python, when you create a variable, you are just creating a pointer to an object, and not a copy of the object. In this case, when you are initializing your list with `a = b = c = d = []`, you are actually making a, b, c, and d point to the same list instead of creating four different lists.
Take the following example:
```
In [1]: a = b = c = d = []
In [2]: a.append("a")
In [3]: b
Out[3]: ['a']
```
What you actually need to do is initialize four different lists:
```
a = []
b = []
c = []
d = []
for lis in z:
a.append(lis['loss'])
b.append(lis['val_loss'])
c.append(lis['accuracy'])
d.append(lis['val_accuracy'])
```
| 7,174
|
300,925
|
So, I've spent enough time using ASP.NET webforms to know that I'd almost rather go back to doing classic ASP than use them. But I'm hesitant to move to ASP.NET MVC until it becomes more mature. Are there any open source alternatives?
The main thing I'm looking for is something that's easy to learn and to get a prototype up and running with. If it's any help, my main preference is a python "mix and match" approach (like say web.py/SQLAlchemy/whatever templating engine suits my fancy at the time).
|
2008/11/19
|
[
"https://Stackoverflow.com/questions/300925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
Another alternative, with which I have no experience, is [ProMesh](http://www.codeplex.com/promesh). Personally, I'm moving to ASP.NET MVC.
|
One alternative that seems interesting is [MonoRail](http://www.castleproject.org/monorail/index.html) although I haven't tested it out fully.
| 7,175
|
44,183,891
|
Hello community / developers,
I am currently trying to install SCIP with python and found that there is Windows Support and a pip installer based on <https://github.com/SCIP-Interfaces/PySCIPOpt/blob/master/INSTALL.md>.
Nevertheless I run into a problem "Cannot open include file"
Below is a list of the things I performed to get to this step.
1. Download Python Anaconda 2.7 64 bit
2. Install with all checkboxes as they are
3. Download PyCharm Community edition
4. Click 64 bit desktop link, and associate with .py checkboxes
5. Open CMD > write: easy\_install -U pip
6. Download Visual C++ Compiler for Python 2.7
7. Setup folder structure and downloaded header files
8. CMD > pip install pyscipopt leads to error:
C:\Users\UserName\Downloads\SCIPOPTDIR\include\scip/def.h(32) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory
error: command 'C:\Users\UserName\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe' failed with exit status 2
My environment variables and folder directory can be found here:
[http://imgur.com/a/mJRva](https://imgur.com/a/mJRva)
Help is very much appreciated,
Kind regards
|
2017/05/25
|
[
"https://Stackoverflow.com/questions/44183891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7646564/"
] |
This looks like a `UNION` of two `INNER JOIN`s. One gets the information from `stock` and has `NULL` values in the columns from `sales_item`, the other gets information from `sales_item` and has `NULL` for the columns from `stock`.
```
SELECT i.item_id, i.name, s.stock_id, s.quantity, NULL AS sales_item_id, NULL AS sales_id, NULL AS price
FROM items AS i
JOIN stocks AS s ON s.item_id = i.item_id
UNION
SELECT i.item_id, i.name, NULL, NULL, si.sales_item_id, si.sales_id, si.price
FROM items AS i
JOIN sales_item AS si ON i.item_id = si.item_id
```
|
All the nulls in your example show that you are trying to join together two completely different result sets: join the items with the stock and get all that data, then join the items with the sales and return all that data. The trickiness is that you have two different kinds of results in your desired join table. The only thing both sets have in common is an item id and name.
A union (which is what you are going to have to use) requires both sets to have the same number of columns. However, your sets have a different number of columns, and very little overlap. As a result you have to explicitly "select" some non-existent columns as placeholders.
Working through the parts, you have two join statements:
```
SELECT items.item_id, items.name, stocks.stock_id, stocks.quantity FROM items JOIN stocks ON stocks.item_id=items.item_id
```
That gets your items + stock information. Then another join to get sales data:
```
SELECT items.item_id, items.name, sales.sales_item_id, sales.sales_id, sales.price FROM items JOIN sales_item ON sales_item.item_id=items.item_id
```
These are your two queries that need to be joined. Of course, you could write these a bit shorter by leaving out some table references in the select and using a different join syntax, but you get the idea. The exact type of join you want to do here will depend on your underlying problem, so I'm just guessing at a plain JOIN (which is an alias for INNER JOIN).
Now you need to put them together with a UNION, and when you do that you have to add in some more columns as placeholders:
```
SELECT items.item_id, items.name, stocks.stock_id, stocks.quantity, null AS sales_item_id, null AS sales_id, null AS price FROM items JOIN stocks ON stocks.item_id=items.item_id
UNION ALL
SELECT items.item_id, items.name, null AS stock_id, null AS quantity, sales.sales_item_id, sales.sales_id, sales.price FROM items JOIN sales_item ON sales_item.item_id=items.item_id
```
And then (if needed) you can throw in a sort:
```
SELECT * FROM (
SELECT items.item_id, items.name, stocks.stock_id, stocks.quantity, null AS sales_item_id, null AS sales_id, null AS price FROM items JOIN stocks ON stocks.item_id=items.item_id
UNION ALL
SELECT items.item_id, items.name, null AS stock_id, null AS quantity, sales.sales_item_id, sales.sales_id, sales.price FROM items JOIN sales_item ON sales_item.item_id=items.item_id
) U order by item_id ASC
```
| 7,184
|
17,053,103
|
I saw [this question](https://stackoverflow.com/questions/903557/pythons-with-statement-versus-with-as), and I understand when you would want to use `with foo() as bar:`, but I don't understand when you would just want to do:
```
bar = foo()
with bar:
....
```
Doesn't that just remove the tear-down benefits of using `with ... as`, or am I misunderstanding what is happening? Why would someone want to use just `with`?
|
2013/06/11
|
[
"https://Stackoverflow.com/questions/17053103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1388603/"
] |
For example when you want to use `Lock()`:
```
from threading import Lock
myLock = Lock()
with myLock:
...
```
You don't really need the `Lock()` object. You just need to know that it is on.
|
Using `with` without `as` still gets you the exact same teardown; it just doesn't get you a new local object representing the context.
The reason you want this is that sometimes the context itself isn't directly useful—in other words, you're only using it for the side effects of its context enter and exit.
For example, with a `Lock` object, you have to already have the object for the `with` block to be useful—so, even if you need it within the block, there's no reason to rebind it to another name. The same is true when you use `contextlib.closing` on an object that isn't a context manager—you already have the object itself, so who cares what `closing` yields?
With something like `sh.sudo`, there isn't even an object that you'd have any use for, period.
There are also cases where the point of the context manager is just there to stash and auto-restore some state. For example, you might want to write a `termios.tcsetattr`-stasher, so you can call `tty.setraw()` inside the block. You don't care what the stash object looks like, all you care about is that it gets auto-restored.
`decimal.localcontext` can work in any of these ways—you can pass it an object you already have (and therefore don't need a new name for), or pass it an unnamed temporary object, or have it just stash the current context to be auto-restored. But in any of those cases, the `as` clause only matters if you actually need the context object inside the block.
There are some hybrid cases where sometimes you want the context, sometimes you don't. For example, if you just want a database transaction to auto-commit, you might write `with autocommit(db.begin()):`, because you aren't going to access it inside the block. But if you want it to auto-rollback unless you explicitly commit it, you'd probably write `with autorollback(db.begin()) as trans:`, so you can `trans.commit()` inside the block. (Of course often, you'd actually want a transaction that commits on normal exit and rolls back on exception, as in [PEP 343](http://www.python.org/dev/peps/pep-0343/)'s `transaction` example. But I couldn't think of a better hybrid example here…)
[PEP 343](http://www.python.org/dev/peps/pep-0343/) and its predecessors (PEP 310, PEP 340, and other things linked from 343) explains all of this to some extent, but it's understandable that you wouldn't pick that up on a casual read—there's so much information that isn't relevant, and it mainly just explains the mile-high overview and then the implementation-level details, skipping over everything in between.
| 7,185
|
52,844,036
|
I have been trying to create a folder inside a Jenkins pipeline with the following code:
```
pipeline {
agent {
node {
label 'python'
}
}
stages{
stage('Folder'){
steps{
folder 'New Folder'
}
}
}
}
```
But I get the following error message
java.lang.NoSuchMethodError: No such DSL method 'folder' found among steps
Jenkins already has installed the Cloudbees-Folder plugin so not sure why it is happening.
|
2018/10/16
|
[
"https://Stackoverflow.com/questions/52844036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9131570/"
] |
You can pass an updater function to setState. Here's an example of how this might work.
The object returned from the updater function will be merged into the previous state.
```
const updateBubble = ({y, vy, ...props}) => ({y: y + vy, vy: vy + 0.1, ...props})
this.setState(state => ({bubbles: state.bubbles.map(updateBubble)}))
```
Change the updateBubble function to add bouncing and so on.
|
I would suggest changing your approach: keep the state management exactly as you first showed it, and then, in a parent component, render that component multiple times.
You could use something like this:
```
import React from 'react'
import Box from './box'
export default class Boxes extends React.Component {
constructor(props) {
super(props);
this.state = {
started:[false,false,false,false] /*these flags will be changed on your "onClick" event*/
}
}
render() {
const {started} = this.state
return(
<div>
{started[0] && <Box />}
{started[1] && <Box />}
{started[2] && <Box />}
{started[3] && <Box />}
</div>
)
}
}
```
| 7,188
|
39,427,946
|
I'm wondering how I can import the six library to python 2.5.2? It's not possible for me to install using pip, as it's a closed system I'm using.
I have tried to add the six.py file into the lib path and then use "import six". However, it doesn't seem to be picking up the library from this path.
|
2016/09/10
|
[
"https://Stackoverflow.com/questions/39427946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/264975/"
] |
According to project history, [version 1.9.0](https://bitbucket.org/gutworth/six/src/a9b120c9c49734c1bd7a95e7f371fd3bf308f107?at=1.9.0) supports Python 2.5. Compatibility broke with 1.10.0 release.
>
> Six supports every Python version since 2.5. It is contained in only
> one Python file, so it can be easily copied into your project. (The
> copyright and license notice must be retained.)
>
>
>
[There is a commit in the version control system which mentions the change of the minimum supported version](https://bitbucket.org/gutworth/six/commits/2dfeb4ba983d8d5985b5efae3859417d2a57e487).
Note that pip is able to install fixed version of package if you want to.
```
pip install six==1.9.0
```
|
You can't use `six` on Python 2.5; it requires Python 2.6 or newer.
From the [`six` project homepage](https://bitbucket.org/gutworth/six):
>
> Six supports every Python version since 2.6.
>
>
>
Trying to install `six` on Python 2.5 anyway fails as the included `setup.py` tries to import the `six` module, which tries to access objects not available in Python 2.5:
```
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/private/tmp/test/build/six/setup.py", line 8, in <module>
import six
File "six.py", line 604, in <module>
viewkeys = operator.methodcaller("viewkeys")
AttributeError: 'module' object has no attribute 'methodcaller'
```
| 7,191
|
36,610,806
|
Here is a [link](https://drive.google.com/folderview?id=0B0bHr4crS9cpaWlockpxcmJxelE&usp=drive_web) to a project and output that you can use to reproduce the problem I describe below.
I'm using **coverage** with **tox** against multiple versions of python. My tox.ini file looks something like this:
```
[tox]
envlist =
py27
py34
[testenv]
deps =
coverage
commands =
coverage run --source=modules/ -m pytest
coverage report -m
```
My problem is that coverage will run using only one version of python (in my case, py27), not both py27 and py34. This is a problem whenever I have code execution dependent on the python version, e.g.:
```
def add(a, b):
import sys
if sys.version.startswith('2.7'):
print('2.7')
if sys.version.startswith('3'):
print('3')
return a + b
```
Running coverage against the above code will incorrectly report that line 6 ("print('3')") is "Missing" for both py27 and py34. It should only be Missing for py34.
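A side note on the version check itself: comparing `sys.version_info` is more robust than matching string prefixes, e.g.:

```python
import sys

def add(a, b):
    # tuple comparison instead of string prefixes
    if sys.version_info[0] == 2:
        print('2')
    else:
        print('3')
    return a + b

assert add(2, 3) == 5
```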
I know why this is happening: coverage is installed on my base OS (which uses python2.7). Thus, when **tox** is run, it notices that coverage is already installed and inherits coverage from the base OS rather than installing it in the virtualenv it creates.
This is fine and dandy for py27, but causes incorrect results in the coverage report for py34. I have a hacky, temporary work-around: I require a slightly earlier version of coverage (relative to the one installed on my base OS) so that tox will be forced to install a separate copy of coverage in the virtualenv. E.g.
```
[testenv]
deps =
coverage==4.0.2
pytest==2.9.0
py==1.4.30
```
I don't like this workaround, but it's the best I've found for now. Any suggestions on a way to force tox to install the current version of coverage in its virtualenv's, even when I already have it installed on my base OS?
|
2016/04/13
|
[
"https://Stackoverflow.com/questions/36610806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3188632/"
] |
I came upon this problem today, but couldn't find an easy answer. So, for future reference, here is the solution that I came up with.
1. Create an `envlist` that contains each version of Python that will be tested and a custom env for `cov`.
2. For all versions of Python, set the `COVERAGE_FILE` environment variable to store the `.coverage` file in `{envdir}`.
3. For the `cov` env I use two commands.
1. `coverage combine` that combines the reports, and
2. `coverage html` to generate the report and, if necessary, fail the test.
4. Create a `.coveragerc` file that contains a `[paths]` section that lists the `source=` locations.
1. The first line is where the actual source code is found.
 2. The subsequent lines are the subpaths that will be eliminated by `coverage combine`.
**tox.ini:**
```
[tox]
envlist=py27,py36,py35,py34,py33,cov
[testenv]
deps=
pytest
pytest-cov
pytest-xdist
setenv=
py{27,36,35,34,33}: COVERAGE_FILE={envdir}/.coverage
commands=
py{27,36,35,34,33}: python -m pytest --cov=my_project --cov-report=term-missing --no-cov-on-fail
cov: /usr/bin/env bash -c '{envpython} -m coverage combine {toxworkdir}/py*/.coverage'
cov: coverage html --fail-under=85
```
**.coveragerc:**
```
[paths]
source=
src/
.tox/py*/lib/python*/site-packages/
```
The most peculiar part of the configuration is the invocation of `coverage combine`. Here's a breakdown of the command:
* `tox` does not handle Shell expansions `{toxworkdir}/py*/.coverage`, so we need to invoke a shell (`bash -c`) to get the necessary expansion.
+ If one were inclined, you could just type out all the paths individually and not jump through all of these hoops, but that would add maintenance and `.coverage` file dependency for each `pyNN` env.
* `/usr/bin/env bash -c '...'` to ensure we get the correct version of `bash`. Using the fullpath to `env` avoids the need for setting `whitelist_externals`.
* `'{envpython} -m coverage ...'` ensures that we invoke the correct `python` and `coverage` for the `cov` env.
* **NOTE:** The unfortunate problem of this solution is that the `cov` env is dependent on the invocation of `py{27,36,35,34,33}` which has some not so desirable side effects.
+ My suggestion would be to only invoke `cov` through `tox`.
+ Never invoke `tox -ecov` because, either
- It will likely fail due to a missing `.coverage` file, or
- It could give bizarre results (combining differing tests).
+ If you must invoke it as a subset (`tox -epy27,py36,cov`), then wipe out the `.tox` directory first (`rm -rf .tox`) to avoid the missing `.coverage` file problem.
|
I don't understand why tox wouldn't install coverage in each virtualenv properly. You should get two different coverage reports, one for py27 and one for py35. A nicer option might be to produce one combined report. Use `coverage run -p` to record separate data for each run, and then `coverage combine` to combine them before reporting.
| 7,192
|
29,419,322
|
I am getting the following error while executing the below code snippet, exactly at the line `if uID in repo.git.log():`.
The problem is in `repo.git.log()`. I have looked at all the similar questions on Stack Overflow, which suggest using `decode("utf-8")`.
How do I apply `decode("utf-8")` to the output of `repo.git.log()`?
```
UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte
```
Relevant code:
```
..................
uID = gerritInfo['id'].decode("utf-8")
if uID in repo.git.log():
inwslist.append(gerritpatch)
.....................
Traceback (most recent call last):
File "/prj/host_script/script.py", line 1417, in <module>
result=main()
File "/prj/host_script/script.py", line 1028, in main
if uID in repo.git.log():
File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 431, in <lambda>
return lambda *args, **kwargs: self._call_process(name, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 802, in _call_process
return self.execute(make_call(), **_kwargs)
File "/usr/local/lib/python2.7/dist-packages/git/cmd.py", line 610, in execute
stdout_value = stdout_value.decode(defenc)
File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 377826: invalid start byte
```
|
2015/04/02
|
[
"https://Stackoverflow.com/questions/29419322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
0x92 is a smart quote (’) in Windows-1252. It is not a valid UTF-8 byte sequence, therefore it can't be decoded as UTF-8.
Maybe your file was edited on a Windows machine, which caused this problem?
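A small sketch of the two usual ways to handle such a byte in Python: decode with the encoding the bytes actually came from, or replace undecodable bytes:

```python
data = b'caf\x92'   # 0x92 is the Windows-1252 right single quote

# Option 1: decode with the real source encoding
assert data.decode('cp1252') == 'caf\u2019'   # "caf’"

# Option 2: tolerate bad bytes, substituting U+FFFD for each one
assert data.decode('utf-8', errors='replace') == 'caf\ufffd'
```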
|
After good research, I got the solution. In my case, **`datadump.json`** file was having the issue.
* Simply Open the file in notepad format
* Click on save as option
* Go to encoding section below & Click on "UTF-8"
* Save the file.
Now you can try running the command. You are good to go :)
For your reference, I have attached images below.
[Step1](https://i.stack.imgur.com/tsLiH.png)
[Step2](https://i.stack.imgur.com/usyPh.png)
[Step3](https://i.stack.imgur.com/L0K19.png)
| 7,193
|
32,586,612
|
I was getting started with **AWS' Elastic Beanstalk**.
I am following this [tutorial](https://realpython.com/blog/python/deploying-a-django-app-to-aws-elastic-beanstalk/) to **deploy a Django/PostgreSQL app**.
I did everything before the 'Configuring a Database' section. The deployment was also successful but I am getting an Internal Server Error.
Here's the traceback from the logs:
```
mod_wsgi (pid=30226): Target WSGI script '/opt/python/current/app/polly/wsgi.py' cannot be loaded as Python module.
[Tue Sep 15 12:06:43.472954 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] mod_wsgi (pid=30226): Exception occurred processing WSGI script '/opt/python/current/app/polly/wsgi.py'.
[Tue Sep 15 12:06:43.474702 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] Traceback (most recent call last):
[Tue Sep 15 12:06:43.474727 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] File "/opt/python/current/app/polly/wsgi.py", line 12, in <module>
[Tue Sep 15 12:06:43.474777 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] from django.core.wsgi import get_wsgi_application
[Tue Sep 15 12:06:43.474799 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] ImportError: No module named django.core.wsgi
```
Any idea what's wrong?
|
2015/09/15
|
[
"https://Stackoverflow.com/questions/32586612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4201498/"
] |
Have you created a `requirements.txt` in the root of your application? [Elastic Beanstalk will automatically install the packages from this file upon deployment.](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/python-configuration-requirements.html) (Note it might need to be checked into source control to be deployed.)
`pip freeze > requirements.txt`
(You will probably want to do that from within a virtualenv so that you only pick up the packages your app actually needs to run. Doing that with your system Python will pick up every package you've ever installed system-wide.)
|
The answer (<https://stackoverflow.com/a/47209268/6169225>) by [carl-g](https://stackoverflow.com/users/39396/carl-g) is correct. One thing that got me was that `requirements.txt` was in the wrong directory. Let's say you created a django project called `mysite`. This is the directory in which you run the `eb` command(s) --> make sure `requirements.txt` is in this directory.
| 7,198
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
It may be because you are using a virtual environment inside PyCharm, while you might have installed the library using the system's default pip.
Check `Preferences->Project->Python Interpreter` inside Pycharm, and see if your library is listed there. If not, install it using **`+`** icon. Normally, if you use pyCharm's inbuilt terminal, it is already using the same virtual env as your project. So using pip there may help.
Usage syntax is as below:
```
import mysql.connector
conn = mysql.connector.connect(
user='root',
password='#####',
host='127.0.0.1',
database='some_db')
conn.close()
```
|
Go to the project interpreter and download mysql-connector. You need to install it in PyCharm as well.
| 7,201
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C".)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables (i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL dynamically create the header line we remove the headache of having to manually manage/coordinate/rollout the SQL changes to all affected parties.
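As an aside (not pure SQL, so it may not satisfy the stated constraint): the DB-API already ships the column names with every result set via `cursor.description`, which is one way to build the self-adjusting header line client-side. A minimal sketch, using sqlite3 as a stand-in for the pyodbc/SQL Server backend and a hypothetical `MyTable`:

```python
import csv
import io
import sqlite3

# Sketch: derive the CSV header from the DB-API cursor.description instead of SQL.
# sqlite3 stands in here for the real pyodbc/SQL Server backend; the column
# names travel with the result set, so the header tracks schema changes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (A TEXT, B TEXT, C TEXT)")
cur = conn.execute("SELECT * FROM MyTable")

header = [col[0] for col in cur.description]   # ['A', 'B', 'C'], in column order

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)   # the header line the users need
writer.writerows(cur)     # then the (here empty) result rows
```

Because `cursor.description` reflects whatever columns the query actually returned, the header adapts automatically when fields are added or removed.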
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
**EDIT:**
The answer is incorrect, because it is one type of recursion. It is called indirect recursion <https://en.wikipedia.org/wiki/Recursion_(computer_science)#Indirect_recursion>.
~~I think the simplest way to do this without recursion is the following:~~
```
import java.util.LinkedList;
import java.util.List;
interface Handler {
void handle(Chain chain);
}
interface Chain {
void process();
}
class FirstHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("first handler");
chain.process();
}
}
class SecondHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("second handler");
chain.process();
}
}
class Runner implements Chain {
private List<Handler> handlers;
private int size = 5000; // change this parameter to avoid stackoverflowerror
private int n = 0;
public static void main(String[] args) {
Runner runner = new Runner();
runner.setHandlers();
runner.process();
}
private void setHandlers() {
handlers = new LinkedList<>();
int i = 0;
while (i < size) {
// there can be different implementations of handler interface
handlers.add(new FirstHandler());
handlers.add(new SecondHandler());
i += 2;
}
}
public void process() {
if (n < size) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
```
At first glance this example looks a little crazy, but it's not as unrealistic as it seems.
The main idea of this approach is the **chain of responsibility** pattern. You can reproduce this exception in real life by implementing the chain of responsibility pattern. For instance, you have some objects, and every object, after doing some logic, calls the next object in the chain and passes the results of its job to the next one.
You can see this in java filter ([javax.servlet.Filter](https://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html)).
I don't know the detailed mechanism of this class, but it calls the next filter in the chain using the doFilter method, and after all filters/servlets have processed the request, it continues working in the same method below doFilter.
In other words, it intercepts the request/response before the servlets and before the response is sent to the client. It is a dangerous piece of code because all the called methods are on the same stack in the same thread. Thus, **it may trigger a StackOverflowError if the chain is too big, or if you call the doFilter method at a deep level, which produces the same situation. During debugging you might see the chain of calls in one thread**, and that can be the cause of the StackOverflowError.
Also, you can take the chain of responsibility pattern example from the links below and add a **collection of elements** instead of just a few, and you will also get a StackOverflowError.
Links with the pattern:
[<https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java>](https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java)
[<https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern>](https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern)
I hope it was helpful for you.
|
Since the question is very interesting, I have tried to simplify the answer of *hide* :
```
public class Stackoverflow {
static class Handler {
void handle(Chain chain){
chain.process();
System.out.println("yeah");
}
}
static class Chain {
private List<Handler> handlers = new ArrayList<>();
private int n = 0;
private void setHandlers(int count) {
int i = 0;
while (i++ < count) {
handlers.add(new Handler());
}
}
public void process() {
if (n < handlers.size()) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
public static void main(String[] args) {
Chain chain = new Chain();
chain.setHandlers(10000);
chain.process();
}
}
```
It's important to note that if a StackOverflowError occurs, the string "yeah" will never be output.
| 7,209
|
59,707,234
|
I'm unable to install pygraphviz even after installing graphviz and ensuring that cgraph.h is present in the directory.
I've also manually specified the directory for install. e.g. install-path
fatal error C1083: Cannot open include file: 'graphviz/cgraph.h': No such file or directory
Looking for any and all suggestions. Using Windows.
```
C:\Users\mmcgown\Desktop\School\MSDS452\pygraphviz-1.5>python setup.py install --prefix=C:\Program_Files_(x86)\Graphviz2.38 --include-path=C:\Program_Files_(x86)\Graphviz2.38\include\ --library-path=C:\Program_Files_(x86)\Graphviz2.38\lib\
```
```
running install
running build
running build_py
running egg_info
writing pygraphviz.egg-info\PKG-INFO
writing dependency_links to pygraphviz.egg-info\dependency_links.txt
writing top-level names to pygraphviz.egg-info\top_level.txt
reading manifest file 'pygraphviz.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'doc'
warning: no files found matching '*.html' under directory 'doc'
warning: no files found matching '*.txt' under directory 'doc'
warning: no files found matching '*.css' under directory 'doc'
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.svn' found anywhere in distribution
no previously-included directories found matching 'doc\build'
writing manifest file 'pygraphviz.egg-info\SOURCES.txt'
running build_ext
building 'pygraphviz._graphviz' extension
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MT -IC:\Program_Files_(x86)\Graphviz2.38\include\ -IC:\Users\mmcgown\AppData\Local\Continuum\anaconda3\include -IC:\Users\mmcgown\AppData\Local\Continuum\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt" /Tcpygraphviz/graphviz_wrap.c /Fobuild\temp.win-amd64-3.7\Release\pygraphviz/graphviz_wrap.obj
graphviz_wrap.c
pygraphviz/graphviz_wrap.c(2987): fatal error C1083: Cannot open include file: 'graphviz/cgraph.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.24.28314\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
```
|
2020/01/12
|
[
"https://Stackoverflow.com/questions/59707234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9682236/"
] |
On Ubuntu please do
`sudo apt install graphviz-dev`
|
For those who visit this page: you may already have come across this [fix](https://github.com/tan-wei/pygraphviz) or this [issue](https://github.com/pygraphviz/pygraphviz/issues/155) on GitHub and tried to install GraphViz 2.38 manually. But neither will work, since GraphViz and PyGraphviz are two different libraries.
Mac and Ubuntu already have their solutions on GitHub; however, Win10 64-bit has not received any fix since 2018. [Installing pygraphviz on Windows 10 64-bit, Python 3.6](https://stackoverflow.com/questions/45093811/installing-pygraphviz-on-windows-10-64-bit-python-3-6)
Someone has created a build of PyGraphviz 1.5 on his [Anaconda channel](https://anaconda.org/alubbock/pygraphviz) for Windows 64-bit running Python 3.6, Python 3.7 or Python 3.8. If you're running Anaconda, you can install it with:
```
conda install -c alubbock pygraphviz
```
Please mark this as a possible duplicate of [this question](https://stackoverflow.com/questions/40809758/howto-install-pygraphviz-on-windows-10-64bit) if someone sees it.
| 7,219
|
14,119,978
|
I'm a newbie and was trying something in Python 2.7.2 with NumPy that wasn't working as expected, so I wanted to check whether there was something basic I was misunderstanding.
I was calculating a value for a triangle (trinormals) and then updating a value per point of the triangle (vertnormals) using an array of triangle indexes (trivertexidx). As a loop I was calculating:
```
for itri in range(ntriangles) :
vertnormals[(trivertidx[itri,0]),:] += trinormals[itri,:]
vertnormals[(trivertidx[itri,1]),:] += trinormals[itri,:]
vertnormals[(trivertidx[itri,2]),:] += trinormals[itri,:]
```
As this was a little slow I thought it could be modified to :
```
vertnormals[(trivertidx[:,0]),:] += trinormals[:,:]
vertnormals[(trivertidx[:,1]),:] += trinormals[:,:]
vertnormals[(trivertidx[:,2]),:] += trinormals[:,:]
```
However this doesn't give the same results. Is there another simpler way to write the loop? Any pointers appreciated. Note the intent here was to get a single value for each entry in vertnormals and then normalise the result.
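For what it's worth, one loop-free formulation uses `np.add.at` (an unbuffered in-place add), which accumulates correctly even when an index appears more than once; this is exactly where fancy-indexed `+=` silently drops contributions. A small sketch with made-up sample data (4 triangles, 4 vertices):

```python
import numpy as np

# Hypothetical sample data: 4 triangles, 4 vertices.
trinormals = np.arange(12, dtype=float).reshape(4, 3)           # one normal per triangle
trivertidx = np.array([[0, 1, 2], [0, 2, 3], [1, 2, 3], [0, 1, 3]])
vertnormals = np.zeros((4, 3))

# np.add.at performs an unbuffered vertnormals[idx] += trinormals,
# so repeated vertex indices accumulate instead of being overwritten.
for k in range(3):                      # the three vertex slots of each triangle
    np.add.at(vertnormals, trivertidx[:, k], trinormals)
```

With `np.add.at` each vertex receives the sum of the normals of every triangle that references it, matching the original per-triangle loop.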
|
2013/01/02
|
[
"https://Stackoverflow.com/questions/14119978",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1942439/"
] |
First, the column in which you store the date should be of long type. This column will store the milliseconds since the epoch for the date.
Now for the query:
```
Calendar calendar = Calendar.getInstance(); // This will give you the current time.
// Removing the timestamp from current time to point to todays date
calendar.set(Calendar.HOUR_OF_DAY, 0);
calendar.set(Calendar.MINUTE, 0);
calendar.set(Calendar.SECOND, 0);
calendar.set(Calendar.MILLISECOND, 0);
calendar.add(Calendar.DATE, -3); // Will subtract 3 days from today.
Date beforeThreeDays = calendar.getTime();
calendar.add(Calendar.DATE, 6); // Will be your 3 days after today
Date afterThreeDays = calendar.getTime();
db.query("Table", null, "YOUR_DATE_COLUMN >= ? AND YOUR_DATE_COLUMN <= ?", new String[] { beforeThreeDays.getTime() + "", afterThreeDays.getTime() + "" }, null, null, null);
```
|
```
Select *
From TABLE_NAME
WHERE DATEDIFF(day, GETDATE(), COLUMN_TABLE) <= 3
```
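Note that `DATEDIFF(day, ...)` with that three-argument signature is SQL Server syntax; SQLite does not have it. A sketch of a SQLite equivalent using `julianday()`, with a hypothetical `events` table storing ISO-8601 TEXT dates:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical table: dates stored as ISO-8601 TEXT, filtered with julianday().
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, event_date TEXT)")

today = datetime.now()
rows = [
    ("yesterday", (today - timedelta(days=1)).strftime("%Y-%m-%d")),
    ("next_week", (today + timedelta(days=7)).strftime("%Y-%m-%d")),
]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

# Keep only events within +/- 3 days of now.
cur = conn.execute(
    "SELECT name FROM events "
    "WHERE julianday(event_date) BETWEEN julianday('now', '-3 days') "
    "AND julianday('now', '+3 days')"
)
names = [r[0] for r in cur]
```

`julianday('now')` works in UTC, so for exact local-day boundaries you may want the `'localtime'` modifier.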
| 7,221
|
41,914,398
|
I've written this very short spider to go to a U.S. News link and take the names of the colleges listed there.
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
import scrapy
class CollegesSpider(scrapy.Spider):
name = "colleges"
start_urls = [
'http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities?_mode=list&acceptance-rate-max=20'
]
def parse(self, response):
for school in response.css('div.items'):
yield {
'name': school.xpath('//*[@id="view-1c4ddd8a-8b04-4c93-8b68-9b7b4e5d8969"]/div/div[1]/div[1]/h3/a').extract_first(),
}
```
However, when I run this spider and ask for the names to be stored in a file named schools.json, the file comes out blank. What am I doing wrong?
|
2017/01/28
|
[
"https://Stackoverflow.com/questions/41914398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5535448/"
] |
Got it! It is because of robot detection.
Send a browser User-Agent header:
```
>>> r = requests.get('http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities?_mode=list&acceptance-rate-max=20', headers={'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'})
>>> r.status_code
200
```
Then you will have all the content you need. Do whatever parsing or extraction you need. The procedure to set a header should be very similar in Scrapy.
[scrapy doc for request with headers](https://doc.scrapy.org/en/latest/topics/request-response.html)
User agent for Chrome
```
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36
```
|
I am on my mobile so I don't remember the exact variable name, but it should be `ROBOTSTXT_OBEY` (in your project's settings.py).
Set it to False
| 7,222
|
28,610,556
|
I'm a beginner in Python, and I'm trying to write a program that makes a call to the Weibo (Chinese Twitter) API and receives a JSON response. It's just a basic keyword-search-and-fetch-results example.
But the problem is I don't know how to make an API call from Python, so I keep getting error messages. The API I'm trying to use is <http://open.weibo.com/wiki/2/search/topics>
It's in Chinese but basically it says the API URL, method -> GET, and the list of parameters I need. My guess is that I messed up the parameters: method: GET shouldn't be treated as a parameter but handled in some other way, which I don't know. Can somebody help?
Below is what I tried. I'm just pasting the relevant part, before this part there is a api authorization codes.
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# sudo pip install sinaweibopy
import sys
import urllib, urllib2
from weibo import APIClient
import webbrowser
APP_KEY = '1234' # there are real values here in the actual code
APP_SECRET = '1234'
CALLBACK_URL = 'http://111.111'
def get_auth():
# some code here, not pasted
def get_data():
access_token = '1234'
expires_in = '1234'
# This works fine
client = APIClient(app_key=APP_KEY, app_secret=APP_SECRET, redirect_uri=CALLBACK_URL)
client.set_access_token(access_token, expires_in)
r = client.statuses.user_timeline.get()
for st in r.statuses:
print st.text.encode('utf-8')
# This doesn't work
# statuses = client.search.topics.get(q=u'eland')
# This also doesn't work
# url = 'https://api.weibo.com/2/search/topics.json'
# params = {'method': GET, 'source': APP_KEY, 'access_token': access_token, 'q': 'new balance', 'count' : 50}
# request = urllib2.Request(url, urllib.urlencode(params))
# response = urllib2.urlopen(request)
```
error message (url call):
```
Traceback (most recent call last):
File "weibopr.py", line 85, in <module>
elif opt == '2': get_data()
File "weibopr.py", line 57, in get_data
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2015/02/19
|
[
"https://Stackoverflow.com/questions/28610556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4584513/"
] |
If "if clause" has to span more than one lines of code you have to surround it with curly brackets "{}". Change your code to :
```
using System;
class Program
{
static void Main(string[] args)
{
        int tal1 = 0, tal2; // initialize tal1; C# requires definite assignment before use
int slinga;
tal2 = Convert.ToInt32(Console.ReadLine());
for (slinga = 0; slinga < 2; slinga++)
{
if (tal1 == 56)
{
Console.WriteLine(Addera(slinga, tal1));
tal2--;
}
else
tal1 = 56;
}
}
static int Addera(int tal1, int tal2)
{
return tal1 + tal2;
}
}
```
|
The following code lines are wrong:
```
if (tal1 == 56)
Console.WriteLine(Addera(slinga, tal1));
tal2--;
else tal1 = 56;
```
You need to update it to
```
if (tal1 == 56){
Console.WriteLine(Addera(slinga, tal1));
tal2--;
}
else {
tal1 = 56;
}
```
Reason: you need `{ }` for multi-line `if-else` bodies.
[MSDN says](https://msdn.microsoft.com/en-us/library/5011f09h.aspx)
>
> Both the then-statement and the else-statement can consist of a single statement or multiple statements that are enclosed in braces ({}). For a single statement, the braces are optional but recommended.
>
>
>
So you need `{ }` since it's multi-line.
| 7,224
|
27,270,530
|
I am worried that this might be a really stupid question. However, I can't find a solution.
I want to do the following operation in python without using a loop, because I am dealing with large size arrays.
Is there any suggestion?
```
import numpy as np
a = np.array([1,2,3,..., N]) # arbitrary 1d array
b = np.array([[1,2,3],[4,5,6],[7,8,9]]) # arbitrary 2d array
c = np.zeros((N,3,3))
c[0,:,:] = a[0]*b
c[1,:,:] = a[1]*b
c[2,:,:] = a[2]*b
c[3,:,:] = ...
...
...
c[N-1,:,:] = a[N-1]*b
```
|
2014/12/03
|
[
"https://Stackoverflow.com/questions/27270530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3683468/"
] |
To avoid Python-level loops, you could use `np.newaxis` to expand `a` (or None, which is the same thing):
```
>>> a = np.arange(1,5)
>>> b = np.arange(1,10).reshape((3,3))
>>> a[:,None,None]*b
array([[[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9]],
[[ 2, 4, 6],
[ 8, 10, 12],
[14, 16, 18]],
[[ 3, 6, 9],
[12, 15, 18],
[21, 24, 27]],
[[ 4, 8, 12],
[16, 20, 24],
[28, 32, 36]]])
```
Or [`np.einsum`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html), which is overkill here, but is often handy and makes it very explicit what you want to happen with the coordinates:
```
>>> c2 = np.einsum('i,jk->ijk', a, b)
>>> np.allclose(c2, a[:,None,None]*b)
True
```
|
I didn't understand this multiplication, but here is a way to do matrix multiplication in Python using NumPy:
```
import numpy as np
a = np.matrix([1, 2])
b = np.matrix([[1, 2], [3, 4]])
result = a*b
print(result)
>>>result
matrix([[ 7, 10]])
```
| 7,226
|
36,774,171
|
While building Python from source on macOS, I accidentally overwrote the Python that came with macOS, and now it doesn't have SSL. I tried to build again by running with the `--with-ssl` option
```
./configure --with-ssl
```
but when I subsequently ran `make`, it said this
```
Python build finished, but the necessary bits to build these modules were not found:
_bsddb _ssl dl
imageop linuxaudiodev ossaudiodev
readline spwd sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
```
It's not clear to me from looking at `setup.py` what I'm supposed to do to find the "necessary bits". What can I do to build python with SSL on MacOS?
|
2016/04/21
|
[
"https://Stackoverflow.com/questions/36774171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/577455/"
] |
Just open `setup.py` and find method `detect_modules()`. It has some lines like (2.7.11 for me):
```
# Detect SSL support for the socket module (via _ssl)
search_for_ssl_incs_in = [
'/usr/local/ssl/include',
'/usr/contrib/ssl/include/'
]
ssl_incs = find_file('openssl/ssl.h', inc_dirs,
search_for_ssl_incs_in
)
if ssl_incs is not None:
krb5_h = find_file('krb5.h', inc_dirs,
['/usr/kerberos/include'])
if krb5_h:
ssl_incs += krb5_h
ssl_libs = find_library_file(self.compiler, 'ssl',lib_dirs,
['/usr/local/ssl/lib',
'/usr/contrib/ssl/lib/'
] )
if (ssl_incs is not None and
ssl_libs is not None):
exts.append( Extension('_ssl', ['_ssl.c'],
include_dirs = ssl_incs,
library_dirs = ssl_libs,
libraries = ['ssl', 'crypto'],
depends = ['socketmodule.h']), )
else:
missing.append('_ssl')
```
So it seems that you need SSL and Kerberos. Kerberos comes installed with Mac. So You need to install `openssl`. You can do it with `brew`:
```
brew install openssl
```
The `openssl` headers could be installed in a path different from the one Python will search. So issue
```
locate ssl.h
```
and add the path to `search_for_ssl_incs_in`. For example for me it is:
```
/usr/local/Cellar/openssl/1.0.2d_1/include/openssl/ssl.h
```
So I should add `/usr/local/Cellar/openssl/1.0.2d_1/include/` to `search_for_ssl_incs_in`.
Don't forget that these are for Python 2.7.11, but the process should be the same.
Hope that helps.
|
First of all, MacOS only includes LibreSSL 2.2.7 libraries and no headers, you really want to install OpenSSL using homebrew:
```
$ brew install openssl
```
The openssl formula is a *keg-only* formula because the LibreSSL library is shadowing OpenSSL and Homebrew will not interfere with this. This means that you can find OpenSSL not in `/usr/local` but in `/usr/local/opt/openssl`. But Homebrew includes the necessary command-line tools to figure out what path to use.
You then need to tell `configure` about these. If you are building Python 3.7 or newer, use the `--with-openssl` switch:
```
./configure --with-openssl=$(brew --prefix openssl)
```
If you are building an older release, set the `CPPFLAGS` and `LDFLAGS` environment variables:
```
CPPFLAGS="-I$(brew --prefix openssl)/include" \
LDFLAGS="-L$(brew --prefix openssl)/lib" \
./configure
```
and the Python configuration infrastructure takes it from there.
Know that *now ancient* Python releases (2.6 or older, 3.0-3.4) only work with OpenSSL 1.0.x and before, which no longer is installable from homebrew core.
| 7,228
|