| qid (int64, 46k-74.7M) | question (string, 54-37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 29-22k chars) | response_k (string, 26-13.4k chars) | __index_level_0__ (int64, 0-17.8k) |
|---|---|---|---|---|---|---|
45,300,287
|
I'm new to Python programming and I'm having some issues developing a specific part of my GUI with Tkinter.
What I'm trying to do is make a space where the user can enter (type) a math equation, and the software performs the calculation with the variables previously calculated.
I've found a lot of calculators for Tkinter, but none of them is what I'm looking for. And I don't have much experience with class definitions.
I made this simple layout to explain better what I want to do:
```
import tkinter as tk
root = tk.Tk()
Iflabel = tk.Label(root, text = "If...")
Iflabel.pack()
IfEntry = tk.Entry(root)
IfEntry.pack()
thenlabel = tk.Label(root, text = "Then...")
thenEntry = tk.Entry(root)
thenlabel.pack()
thenEntry.pack()
elselabel = tk.Label(root, text = "else..")
elseEntry = tk.Entry(root)
elselabel.pack()
elseEntry.pack()
applybutton = tk.Button(root, text = "Calculate")
applybutton.pack()
root.mainloop()
```
This simple code for Python 3 has 3 Entry spaces:
1st) If...
2nd) Then...
3rd) Else...
So, the user will enter their conditional expression and the software will do the job. In my mind, another important thing is that if the user leaves the "If" space blank, they can just type their expression inside the "Then..." Entry and press the "Calculate" button, or build the whole expression with the statements.
If someone could give some ideas about how and what to do...
(without classes, if possible)
I'll give some situations for exemplification.
1st) Using statements:
```
var = the variable previously calculated and stored in the script
out = output
if var >= 10
then out = 4
else out = 2
```
2nd) Without using statements, the user will type in the "Then" Entry the expression they want to calculate, which would be:
```
Then: Out = (((var)**2) +(2*var))**(1/2)
```
Again, it's just for exemplification... I don't need this specific layout. If anyone has an idea of how to construct it better, it's welcome.
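To make the goal concrete, here is a rough, untested sketch of what I imagine the "Calculate" button doing (assuming the previously calculated variables live in a dict; I know eval/exec is unsafe for untrusted input, this is just to illustrate):
```
def calculate():
    names = {"var": 12, "out": 0}  # variables previously calculated in the script
    cond = IfEntry.get().strip()
    # if the "If..." Entry is blank, just run the "Then..." expression
    if not cond or eval(cond, {}, names):
        exec(thenEntry.get(), {}, names)
    else:
        exec(elseEntry.get(), {}, names)
    print(names["out"])

applybutton = tk.Button(root, text="Calculate", command=calculate)
```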
Thanks all.
|
2017/07/25
|
[
"https://Stackoverflow.com/questions/45300287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8267211/"
] |
For a start, a WebView consumes memory because it has to load and render HTML data. Rather than using a WebView in a RecyclerView, I think it would be better if you implemented it in either of these two ways:
1. You handle the list of data in HTML, send it into the WebView, and remove the RecyclerView completely.
2. You draw the layout of the expected contents in XML and inflate it directly into the RecyclerView, and remove the WebView completely. (Note: you can inflate different views into a RecyclerView depending on the adapter position and the data at that position.)
Using a WebView might seem the easy way to implement whatever you are trying to do but, trust me, the drawbacks outweigh the benefits. So it's just best to avoid it.
|
This problem can arise for many possible reasons:
* You scroll very fast
RecyclerView is based on inflating views a minimal number of times and reusing the existing views. This means that while you are scrolling, when a view (an item) exits your screen, the same view is brought in below just by changing its contents. When you load from the internet, it's always better to first download all the data and then display it. WebViews consume a lot of data, and it's totally against the design principle to have them in a RecyclerView.
To repair this you could possibly add a button to reload the data, or refresh each time you display the view.
* Nougat removed some functions from the HttpURLConnection class
I am not sure about this one, but in one of the Google developer videos I had seen something about the deprecation of some functions and methods.
Hope you find this helpful.
| 12,233
|
51,927,893
|
So I started learning Python 3 and I wanted to run some very simple code on Ubuntu:
```
print type("Hello World")
^
SyntaxError: invalid syntax
```
When I tried to run that with the command `python3 hello.py` in the terminal, it gave me the error above, but when I used `python hello.py` (I think that means it uses Python 2 instead of 3) it was all fine. The same happens when using the Python 3 and 2 shells in the terminal.
It seems like I'm missing something really stupid, because I did some research and found no one with the same issue.
|
2018/08/20
|
[
"https://Stackoverflow.com/questions/51927893",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10140067/"
] |
In Python 3, `print` [was changed](https://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function) from a statement to a function (called with parentheses):
i.e.
```
# In Python 2.x
print type("Hello World")
# In Python 3.x
print(type("Hello World"))
```
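If the same file has to run under both versions, Python 2.6+ can opt in to the function form with a `__future__` import:
```
# at the top of the file; makes print() a function on Python 2.6+ too
from __future__ import print_function
print(type("Hello World"))
```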
|
In Python 3.x [`print()`](https://docs.python.org/3/library/functions.html#print) is a function, while in 2.x it was a statement. The correct syntax in Python 3 would be:
```
print(type("Hello World"))
```
| 12,238
|
10,352,538
|
I have been stuck on this problem for the past few hours.
This is what the XML looks like:
```
<xmlblock>
<data1>
<username>someusername</username>
<id>12345</id>
</data1>
<data2>
<username>username</username>
<id>11111</id>
</data2>
</xmlblock>
```
The problem is this:
I need the username when it matches a given id.
I am not sure how to do a double search using iterfind or any other lxml method in python.
Any help will be greatly appreciated. Thanks!
|
2012/04/27
|
[
"https://Stackoverflow.com/questions/10352538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/897906/"
] |
A (probably not the best) solution:
```
>>> from xml.etree import cElementTree
>>> id_to_match = '12345'  # element text is a string, so compare strings
>>> for event, element in cElementTree.iterparse('xmlfile.xml'):
... if 'data' in element.tag:
... for data in element:
... if data.tag == 'username':
... username = data.text
... if data.tag == 'id':
... if data.text == id_to_match:
... print username
someusername
```
|
If you are ok with using [minidom](http://docs.python.org/library/xml.dom.minidom.html), the following should work:
```
from xml.dom import minidom
doc = minidom.parseString('<xmlblock><data1><username>someusername</username><id>12345</id></data1><data2><username>username</username><id>11111</id></data2></xmlblock>')
username = [elem.parentNode.getElementsByTagName('username') for elem in doc.getElementsByTagName('id') if elem.firstChild.data == '12345'][0][0].firstChild.data
print username
```
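Since the question mentions lxml, an XPath sketch is also possible (assuming the corrected XML above, saved as `xmlfile.xml`, a filename used here only for illustration):
```
from lxml import etree

tree = etree.parse('xmlfile.xml')
# select the <username> that is a sibling of the <id> with the matching text
matches = tree.xpath("//*[id='12345']/username/text()")
print matches[0]  # 'someusername'
```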
| 12,241
|
66,910,159
|
Consider the below set of lists, each containing two strings.
The pairing of two strings in a given list means the values they represent are equal. So item A is the same as items B and C, and so on.
```
l1 = [ 'A' , 'B' ]
l2 = [ 'A' , 'C' ]
l3 = [ 'B' , 'C' ]
```
What is the most efficient/pythonic way to collect these relationships into a dict such as the one below:
```
{ "A" : [ "B" , "C" ] }
```
p.s. Apologies for the poor title, I did not know how to describe the problem!
**EDIT:**
To make the problem clearer, I am trying to filter out duplicated samples from thousands of records.
I have pairwise comparisons for each sample in the data set which indicate if they are a duplicate of one another.
Sometimes a sample may appear in the data set in triplicate/quadruplicate with a different identifier.
It is important to keep just ONE of the duplicated samples.
Hence I want a dict or similar structure which contains the selected sample as a key and a list of its duplicates in the dataset as values.
|
2021/04/01
|
[
"https://Stackoverflow.com/questions/66910159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3234810/"
] |
I would use a `defaultdict` from `collections`, so the dictionary expands and just takes whatever you throw into it. If this is a one-way correspondence, I would use this code (assuming you make a list of the lists):
```
import collections

dd = collections.defaultdict(list)
for line in lists:
dd[line[0]].append(line[1])
```
For the given set this will make the dictionary
```
{"A" : ["B", "C"], "B" : ["C"]}
```
This should be a good starting point.
|
---
Assuming you can get a list of those lists like so:
```
[[ 'A' , 'B' ], [ 'A' , 'C' ],[ 'B' , 'C' ]]
```
This should give you the relation you want:
```
d = {}
l1 = [[ 'A' , 'B' ],[ 'A' , 'C' ],[ 'B' , 'C' ]]
for sublist in l1:
    if sublist[0] not in d:  # check if first value is already a key in the dict
        d[sublist[0]] = []  # init new key with empty list
    d[sublist[0]].append(sublist[1])  # append second value
```
output:
```
{'A': ['B', 'C'], 'B': ['C']}
```
| 12,243
|
17,039,457
|
I want to convert the first column of data from a text file into a list in python
```
data = open ('data.txt', 'r')
data.read()
```
provides
```
'12 45\n13 46\n14 47\n15 48\n16 49\n17 50\n18 51'
```
Any help, please.
|
2013/06/11
|
[
"https://Stackoverflow.com/questions/17039457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2473267/"
] |
You can use `str.split` and a `list comprehension` here:
```
with open('data.txt') as f:
lis = [int(line.split()[0]) for line in f]
>>> lis
[12, 13, 14, 15, 16, 17, 18]
```
If you want the numbers to be strings:
```
>>> with open('abc') as f:
lis = [line.split()[0] for line in f]
>>> lis
['12', '13', '14', '15', '16', '17', '18']
```
Simplified version:
```
>>> with open('abc') as f: #Always use with statement for handling files
... lis = []
... for line in f: # for loop on file object returns one line at a time
... spl = line.split() # split the line at whitespaces, str.split returns a list
... lis.append(spl[0]) # append the first item to the output list, use int() get an integer
... print lis
...
['12', '13', '14', '15', '16', '17', '18']
```
Help and an example for `str.split`:
```
>>> strs = "a b c d ef gh i"
>>> strs.split()
['a', 'b', 'c', 'd', 'ef', 'gh', 'i']
>>> print str.split.__doc__
S.split([sep [,maxsplit]]) -> list of strings
Return a list of the words in the string S, using sep as the
delimiter string. If maxsplit is given, at most maxsplit
splits are done. If sep is not specified or is None, any
whitespace string is a separator and empty strings are removed
from the result.
```
|
```
import csv
with open ('data.txt', 'rb') as f:
print [row[0] for row in csv.reader(f, delimiter=' ')]
```
---
```
['12', '13', '14', '15', '16', '17', '18']
```
| 12,245
|
27,801,200
|
How do I put this in a loop in python so that it keeps asking if player 1 has won the game, until it reaches the number of games in the match? I tried a while loop but it didn't work :(
```
Y="yes"
N="no"
PlayerOneScore=0
PlayerTwoScore=0
NoOfGamesInMatch=int(input("How many games? :- "))
while PlayerOneScore < NoOfGamesInMatch:
PlayerOneWinsGame=str(input("Did Player 1 win the game?\n(Enter Y or N): "))
if PlayerOneWinsGame== "Y":
PlayerOneScore= PlayerOneScore+1
else:
PlayerTwoScore= PlayerTwoScore+1
print("Player 1: " ,PlayerOneScore)
print("Player 2: " ,PlayerTwoScore)
print("\n\nPress the RETURN key to end")
```
|
2015/01/06
|
[
"https://Stackoverflow.com/questions/27801200",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4424276/"
] |
You can have a `preRenderView` event set on the *listLabelsPage.xhtml* page you're loading:
```
<f:event type="preRenderView" listener="#{yourBean.showGrowl}" />
```
and a `showGrowl` method containing only:
```
public void showGrowl() {
FacesContext context = FacesContext.getCurrentInstance();
context.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_INFO, "Success!", "Label has been added with success!"));
}
```
|
I am posting an answer to my own question in order to help other people who face the same problem I did:
```
public String addLabelInDB() {
try {
//some logic to insert in db
//below I set a flag on context which helps me to display a growl message only when the insertion was done with success
ExternalContext ec = FacesContext.getCurrentInstance().getExternalContext();
ec.getRequestMap().put("addedWithSuccess","true");
} catch (Exception e) {
logger.debug(e.getMessage());
}
return "listLabelsPage";
}
public void showGrowl() {
ExternalContext ec = FacesContext.getCurrentInstance().getExternalContext();
String labelAddedWithSuccess = (String) ec.getRequestMap().get("addedWithSuccess");
//if the flag on context is true show the growl message
if (labelAddedWithSuccess!=null && labelAddedWithSuccess.equals("true")) {
FacesContext context = FacesContext.getCurrentInstance();
context.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_INFO, "Success!", "Label has been added with success!"));
}
}
```
and in my xhtml I have:
```
<f:event type="preRenderView" listener="#{labelsManager.showGrowl}" />
```
| 12,246
|
51,413,816
|
Before I begin, I'd like to preface that I'm relatively new to python and haven't had to use it much before this little project of mine. I'm trying to make a Twitter bot as part of an art project, and I can't seem to get tweepy to import. I'm using macOS High Sierra and Python 3.7. I first installed tweepy using
```
pip3 install tweepy
```
and this appeared to work, as I'm able to find the tweepy files in Finder. However, when I simply input
```
import tweepy
```
into the IDLE, I get this error:
```
Traceback (most recent call last):
File "/Users/jacobhill/Documents/CicadaCacophony.py", line 1, in <module>
import tweepy
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tweepy/__init__.py", line 17, in <module>
from tweepy.streaming import Stream, StreamListener
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tweepy/streaming.py", line 358
def _start(self, async):
^
SyntaxError: invalid syntax
```
Any idea on how to remedy this? I've looked at other posts on here and the other errors seem to be along the lines of "tweepy module not found", so I don't know what to do with my error. Thanks!
|
2018/07/19
|
[
"https://Stackoverflow.com/questions/51413816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10102844/"
] |
Using `async` as an identifier [has been deprecated since Python 3.5, and became an error in Python 3.7](https://www.python.org/dev/peps/pep-0492/#deprecation-plans), because it's a keyword.
This Tweepy bug was [reported on 16 Mar](https://github.com/tweepy/tweepy/issues/1017), and [fixed on 12 May](https://github.com/tweepy/tweepy/pull/1042), but [there hasn't been a new release yet](https://github.com/tweepy/tweepy/issues/1063). Which is why, as [the repo's main page says](https://github.com/tweepy/tweepy):
>
> Python 2.7, 3.4, 3.5 & 3.6 are supported.
>
>
>
---
For the time being, you can install the development version:
```
pip3 install git+https://github.com/tweepy/tweepy.git
```
Or, since you've already installed an earlier version:
```
pip3 install --upgrade git+https://github.com/tweepy/tweepy.git
```
---
You could also follow the instructions from the repo:
```
git clone https://github.com/tweepy/tweepy.git
cd tweepy
python3 setup.py install
```
However, this will mean `pip` may not fully understand what you've installed.
|
In Python 3.7, [`async`](https://docs.python.org/3/reference/compound_stmts.html#async) became a reserved word (as can be seen in the *What's New* section [here](https://docs.python.org/3/whatsnew/3.7.html)) and therefore cannot be used as an identifier. This is why the `SyntaxError` is raised.
That said, and following `tweepy`'s official GitHub ([here](https://github.com/tweepy/tweepy)), only
>
> Python 2.7, 3.4, 3.5 & 3.6 are supported.
>
>
>
---
However, if you really must use Python3.7, there is a workaround. Following [this](https://github.com/tweepy/tweepy/issues/1017) suggestion, you can
>
> open streaming.py and replace `async` with `async_`
>
>
>
and it should work.
| 12,249
|
48,143,394
|
So I play a game in which I have 12 pieces of gear. Each piece of gear (for the purposes of my endeavor) has four buffs I am interested in: power, haste, critical damage, critical rating.
I have a formula in which I can enter the total power, haste, CD, and CR and generate the expected damage per second output.
However, not every piece of gear has all four buffs. Currently I am interested in two case scenarios: gear that only has one of the four, and gear that has three of the four.
In the first scenario, each of the twelve pieces of gear will have a single buff on it that can be any of the four. What I want to do is write a program that finds which arrangement outputs the most damage.
So then what I need to do is write a program that tries every possible arrangement in this scenario. If each of the twelve pieces can have one of four values, that's 4^12 possible arrangements to test, or 16,777,216: easy-peasy for a machine, right?
However, I have to loop through all these arrangements, and at the moment I can only imagine 12 nested FOR loops of values 1-4 each, with the formula in the middle.
This seems un-pythonic in terms of readability and just duplication of effort.
Is there a better, more pythonic way to check which arrangement my formula likes best (generates max damage), or is 12 nested FOR loops, as excessive as that seems, the best and clearest way?
|
2018/01/08
|
[
"https://Stackoverflow.com/questions/48143394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5208967/"
] |
Use an iterator to replace the for-loops:
```
import itertools

keys = ['p', 'h', 'cd', 'cr']
iter_keys = itertools.product(*([keys] * 12))  # same as itertools.product(keys, repeat=12)
for item in iter_keys:
print item
```
Output:
```
('p', 'p', 'p', 'p', 'p', 'cr', 'cd', 'h', 'cr', 'p', 'h', 'cr')
('p', 'p', 'p', 'p', 'p', 'cr', 'cd', 'h', 'cr', 'p', 'cd', 'p')
('p', 'p', 'p', 'p', 'p', 'cr', 'cd', 'h', 'cr', 'p', 'cd', 'h')
('p', 'p', 'p', 'p', 'p', 'cr', 'cd', 'h', 'cr', 'p', 'cd', 'cd')
('p', 'p', 'p', 'p', 'p', 'cr', 'cd', 'h', 'cr', 'p', 'cd', 'cr')
....
('cr', 'cr', 'cr', 'cr', 'cr', 'cr', 'cr', 'cr', 'cr', 'cr', 'cr', 'cr')
```
|
If you have 12 nested for loops, you probably need a recursive design like this:
```
def loops(values, num, current_list):
    if num > 0:
        for v in values:
            loops(values, num - 1, current_list + [v])
    else:
        print current_list

loops(('a', 'b', 'c', 'd'), 12, [])
```
Then you will probably rewrite it in a pythonic way like Mad Lee's, but this shows the principle.
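Either way, once you can iterate over every arrangement, picking the best one is short. A sketch, where `damage()` is a hypothetical function that applies your DPS formula to one 12-tuple:
```
import itertools

keys = ['p', 'h', 'cd', 'cr']
# damage(arrangement) is your scoring formula (hypothetical name)
best = max(itertools.product(keys, repeat=12), key=damage)
```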
| 12,250
|
12,969,897
|
I have some questions about encoding in python 2.7.
1. The python code is as below:
```
#s = u"严"
s = u'\u4e25'
print 's is:', s
print 'len of s is:', len(s)
s1 = "a" + s
print 's1 is:', s1
print 'len of s1 is:', len(s1)
```
the output is:
```
s is: 严
len of s is: 1
s1 is: a严
len of s1 is: 2
```
I am confused about why the length of `s` is 1. How could `4e25` be stored in 1 byte? I also notice that UCS-2 is 2 bytes long and UCS-4 is 4 bytes long, so why is the Unicode string `s`'s length 1?
2.
(1) Create a new file named `a.py` with Notepad++ (Windows 7) and set the file's encoding to `ANSI`; the code in `a.py` is as below:
```
# -*- encoding:utf-8 -*-
import sys
print sys.getdefaultencoding()
s = "严"
print "s:", s
print "type of s:", type(s)
```
the output is:
```
ascii
s: 严
type of s: <type 'str'>
```
(2) Create a new file named `b.py` with Notepad++ (Windows 7) and set the file's encoding to `UTF-8`; the code in `b.py` is as below:
```
# -*- encoding:gbk -*-
import sys
print sys.getdefaultencoding()
s = "严"
print "s:", s
print "type of s:", type(s)
```
the output is:
```
File "D:\pyws\code\\b.py", line 1
SyntaxError: encoding problem: utf-8
```
(3) Change file `b.py` as below (the file's encoding is `UTF-8`):
```
import sys
print sys.getdefaultencoding()
s = "严"
print "s:", s
print "type of s:", type(s)
```
the output is:
```
ascii
s: 涓
type of s: <type 'str'>
```
(4) Change file `a.py` as below (the file's encoding is `ANSI`):
```
import sys
print sys.getdefaultencoding()
s = "严"
print "s:", s
print "type of s:", type(s)
```
the output is:
```
File "D:\pyws\code\a1.py", line 3
SyntaxError: Non-ASCII character '\xd1' in file D:\pyws\code\a1.py on
line 3, but no encoding declared; see http://www.python.org/peps/pep-0263.html f
or details
```
Why are the outputs of these 4 cases in question 2 different? Can anybody explain it in detail?
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12969897",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1477871/"
] |
>
> I am confused that why the len of s is 1, how could 4e25 be stored in 1 byte? I also notice that USC-2 is 2-bytes long and USC-4 is 4-bytes long, why unicode string s's length is 1?
>
>
>
The whole point of unicode strings is to do this. The length of a unicode string is the number of *characters* (i.e., code points), not the number of bytes. The number of bytes may vary depending on the encoding, but the number of characters is an abstract invariant that doesn't change with encoding.
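For example, in Python 2.7:
```
>>> s = u'\u4e25'
>>> len(s)  # one character (code point)
1
>>> len(s.encode('utf-8'))  # three bytes in UTF-8
3
>>> len(s.encode('utf-16-le'))  # two bytes in UTF-16
2
```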
As for your second question, the answer is that in setting a file's encoding, you are telling Python how to map bytes in that file to characters. If you specify an encoding (with the `# encoding` syntax) that is inconsistent with the file's actual encoding, you will get unpredictable behavior, because Python is trying to interpret the bytes one way, but the file is set up so the bytes actually mean something else.
The kind of behavior you get will depend on the specifics of the encodings you use. Some possibilities are:
1. You'll get lucky and it will work even though you use conflicting encodings; this is what happened in your first case.
2. It will raise an error because the bytes in the file aren't consistent with the specified encoding; this is what happened in your second case.
3. It will seem to work, but produce different characters, because the bytes in the file's actual encoding mean something else when interpreted with the specified encoding. This seems to be what happened in your third case, although it ought to raise an error since that character isn't ASCII. (By "the file's encoding style is UTF-8" did you mean you set an `# encoding` directive to that effect in the file?)
4. If you don't specify any encoding, you'll get an error if you try to use any bytes that aren't in plain ASCII. This is what happened in your last case.
Also, the type of the string is `str` in all cases, because you didn't specify the string as being unicode (e.g., with `u"..."`). Specifying a file encoding doesn't make strings unicode. It just tells Python how to interpret the characters in the file.
However, there's a bigger question here, which is: why are you playing those games with encodings in your examples? There is no reason whatsoever to use an `# encoding` marker to specify an encoding other than the one the file is actually encoded in, and doing so is guaranteed to cause problems. Don't do it. You have to know what encoding the file is in, and specify that same encoding in the `# encoding` marker.
|
### Answer to Question 1:
In Python versions < 3.3, the length of a Unicode string `u''` is the number of UTF-16 or UTF-32 code units used (depending on build flags), not the number of bytes. `\u4e25` is one code unit, but not all characters are represented by one code unit when UTF-16 (the default on Windows) is used.
```
>>> len(u'\u4e25')
1
>>> len(u'\U00010123')
2
```
In Python 3.3, the above will return 1 for both strings.
Also Unicode characters can be composed of combining code units, such as `é`. The `normalize` function can be used to generate the combined or decomposed form:
```
>>> import unicodedata as ud
>>> ud.name(u'\xe9')
'LATIN SMALL LETTER E WITH ACUTE'
>>> ud.normalize('NFD',u'\xe9')
u'e\u0301'
>>> ud.normalize('NFC',u'e\u0301')
u'\xe9'
```
So even in Python 3.3, a single display character can have 1 or more code units, and it is best to normalize to one form or another for consistent answers.
### Answer to Question 2:
The encoding declared at the top of the file **must** agree with the encoding in which the file is saved. The declaration lets Python know how to interpret the bytes in the file.
For example, the character `严` is saved as 3 bytes in a file saved as UTF-8, but two bytes in a file saved as GBK:
```
>>> u'严'.encode('utf8')
'\xe4\xb8\xa5'
>>> u'严'.encode('gbk')
'\xd1\xcf'
```
If you declare the wrong encoding, the bytes are interpreted incorrectly and Python either displays the wrong characters or throws an exception.
**Edit per comment**
**2(1)** - This is system dependent due to ANSI being the system locale default encoding. On my system that is `cp1252` and Notepad++ can't display a Chinese character. If I set my system locale to `Chinese(PRC)` then I get your results on a console terminal. The reason it works correctly in that case is a byte string is used and the bytes are just sent to the terminal. Since the file was encoded in `ANSI` on a `Chinese(PRC)` locale, the bytes the byte string contains are correctly interpreted by the `Chinese(PRC)` locale terminal.
**2(2)** - The file is encoded in UTF-8 but the encoding is declared as GBK. When Python reads the encoding it tries to interpret the file as GBK and fails. You've chosen `UTF-8` as the encoding, which on Notepad++ also includes a UTF-8 encoded byte order mark (BOM) as the first character in the file and the GBK codec doesn't read it as a valid GBK-encoded character, so fails on line 1.
**2(3)** - The file is encoded in UTF-8 (with BOM), but missing an encoding declaration. Python recognizes the UTF-8-encoded BOM and uses UTF-8 as the encoding, but the terminal is GBK. Since a byte string was used, the UTF-8-encoded bytes are sent to the GBK terminal and you get:
```
>>> u'严'.encode('utf8')
'\xe4\xb8\xa5'
>>> '\xe4\xb8'.decode('gbk')
u'\u6d93'
>>> print '\xe4\xb8'.decode('gbk')
涓
```
In this case I am surprised, because Python is ignoring the byte `\xa5`; as you can see below, when I explicitly decode incorrectly, Python throws an exception:
```
>>> u'严'.encode('utf8').decode('gbk')
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa5 in position 2: incomplete multibyte sequence
```
**2(4)** - In this case the encoding is ANSI (GBK), but no encoding is declared, and there is no BOM (as in UTF-8) to give Python a hint, so it assumes ASCII and can't handle the GBK-encoded character on line 3.
| 12,251
|
13,623,634
|
Why doesn't the following code print anything?
```
#!/usr/bin/python3
class test:
def do_someting(self,value):
print(value)
return value
def fun1(self):
map(self.do_someting,range(10))
if __name__=="__main__":
t = test()
t.fun1()
```
I'm executing the above code in Python 3. I think I'm missing something very basic but am not able to figure it out.
|
2012/11/29
|
[
"https://Stackoverflow.com/questions/13623634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1651941/"
] |
[`map()` returns an iterator](http://docs.python.org/3/library/functions.html#map), and will not process elements until you ask it to.
Turn it into a list to force all elements to be processed:
```
list(map(self.do_someting,range(10)))
```
or use `collections.deque()` with the length set to 0 to not produce a list if you don't need the map output:
```
from collections import deque
deque(map(self.do_someting, range(10)), maxlen=0)
```
but note that simply using a `for` loop is far more readable for any future maintainers of your code:
```
for i in range(10):
self.do_someting(i)
```
|
Before Python 3, `map()` returned a list, not an iterator, so your example would work in Python 2.7.
`list()` creates a new list by iterating over its argument. (`list()` is NOT just a type conversion from, say, tuple to list: `list(list((1,2)))` returns a new list `[1,2]`.) So `list(map(...))` is backwards compatible with Python 2.7.
| 12,252
|
48,055,372
|
I converted a CSV file to a list:
```
import csv
with open('DataAnalizada.csv', 'rb') as f:
reader = csv.reader(f)
a = list(reader)
```
I need to analyze the information in that list, grouped by customer and date: first customer AAA on 12/27/2017, then AAA on 12/28/2017, BBB on 12/27/2017, BBB on 12/28/2017, CCC on 12/27/2017 and CCC on 12/28/2017. Within each of these groups the Analisis column is taken into account (Stable, Alert or Increment, which are the 3 values that can be present). In this case, if for the AAA client on 12/27/2017 all the Analisis values were Estable, I want the new csv file to contain: AAA, 12/27/2017, The client's performance was Stable, and so on for each client and date!
I need some conditional function so that, for each group of rows where the client and the date are equal, it analyzes the Analisis column, and according to this, if they are all Estable, outputs AAA, 12/27/2017, Estable: The client's performance was Estable, and if not, AAA, 12/27/2017, No Analized.
I'm fairly new to python and I sincerely cannot do it on my own. I do not know how to go through a nested list and group it as I asked earlier. My apologies for the lack of code in the question.
```
a = [['Cliente', 'Fecha', 'Variables', 'Dia Previo', 'Mayor/Menor', 'Dia a Analizar', 'Analisis'],
['AAA', '27/12/2017', 'ECPM_medio', '0.41', 'Dentro del Margen', '0.35', 'Estable'],
['AAA', '27/12/2017', 'Fill_rate', '2.25', 'Dentro del Margen', '2.7', 'Estable'],
['AAA', '27/12/2017', 'Importe_a_pagar_a_medio', '62.4', 'Dentro del Margen', '61.21', 'Estable'],
['AAA', '27/12/2017', 'Impresiones_exchange', '153927.0', 'Dentro del Margen', '173663.0', 'Estable'],
['AAA', '27/12/2017', 'Subastas', '6827946.0', 'Dentro del Margen', '6431093.0', 'Estable'],
['BBB', '27/12/2017', 'ECPM_medio', '1.06', 'Dentro del Margen', '1.06', 'Alerta'],
['BBB', '27/12/2017', 'Fill_rate', '26.67', 'Dentro del Margen', '27.2', 'Alerta'],
['BBB', '27/12/2017', 'Importe_a_pagar_a_medio', '11.34', 'Dentro del Margen', '12.77', 'Estable'],
['BBB', '27/12/2017', 'Impresiones_exchange', '10648.0', 'Dentro del Margen', '12099.0', 'Estable'],
['BBB', '27/12/2017', 'Subastas', '39930.0', 'Dentro del Margen', '44479.0', 'Estable'],
['AAA', '28/12/2017', 'ECPM_medio', '0.41', 'Dentro del Margen', '0.35', 'Estable'],
['AAA', '28/12/2017', 'Fill_rate', '2.25', 'Dentro del Margen', '2.7', 'Estable'],
['AAA', '28/12/2017', 'Importe_a_pagar_a_medio', '62.4', 'Dentro del Margen', '61.21', 'Estable'],
['AAA', '28/12/2017', 'Impresiones_exchange', '153927.0', 'Dentro del Margen', '173663.0', 'Estable'],
['AAA', '28/12/2017', 'Subastas', '6827946.0', 'Dentro del Margen', '6431093.0', 'Estable'],
['BBB', '28/12/2017', 'ECPM_medio', '1.06', 'Dentro del Margen', '1.06', 'Estable'],
['BBB', '28/12/2017', 'Fill_rate', '26.67', 'Dentro del Margen', '27.2', 'Estable'],
['BBB', '28/12/2017', 'Importe_a_pagar_a_medio', '11.34', 'Dentro del Margen', '12.77', 'Estable'],
['BBB', '28/12/2017', 'Impresiones_exchange', '10648.0', 'Dentro del Margen', '12099.0', 'Estable'],
['BBB', '28/12/2017', 'Subastas', '39930.0', 'Dentro del Margen', '44479.0', 'Estable']]
```
An example of the New csv I need:
```
Cliente,Fecha,Analisis
AAA,27/12/2017,Stable: The client's performance was Stable
AAA,28/12/2017,Stable: The client's performance was Stable
BBB,27/12/2017,Stable: The client's performance was Stable
BBB,28/12/2017, Stable: The client's performance was Stable
CCC,27/12/2017,Stable: The client's performance was Stable
CCC,28/12/2017,Stable: The client's performance was Stable
```
|
2018/01/02
|
[
"https://Stackoverflow.com/questions/48055372",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8717507/"
] |
I thought this might lead you to what you want; however, I'm not sure it will be totally helpful. Since you don't have a condition to filter the data, I tried the following way to just get your desired output. Note, this is just an attempt to guide you towards pandas.
`pandas` would be the best way to go about this, as you can manipulate the data the way you want. To read a csv in [pandas](http://pythonhow.com/data-analysis-with-python-pandas/).
I did the following to get your data into a pandas data frame:
```
import pandas as pd
headers = a.pop(0)
df = pd.DataFrame(a, columns = headers)
df
```
output:
```
Cliente Fecha Variables Dia Previo Mayor/Menor Dia a Analizar Analisis
0 AAA 27/12/2017 ECPM_medio 0.41 Dentro del Margen 0.35 Estable
1 AAA 27/12/2017 Fill_rate 2.25 Dentro del Margen 2.7 Estable
...
```
After this, I created a new column with the status (I still don't know the exact condition):
```
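# note: `df['Status'] = ...` below assigns the WHOLE column each time it runs,
# not just the current row; treat it as a placeholder until the real
# per-group condition is known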
for i in df['Analisis']:
if i == 'Estable' or i == 'Alerta':
df['Status'] = 'Stable: The client''s performance was Stable'
```
Now you can use the `groupby` function in pandas to create your desired output:
```
df1= df.groupby(['Cliente','Fecha', 'Status']).size()
df1
```
output,
```
Cliente Fecha Status
AAA 27/12/2017 Stable: The clients performance was Stable 5
28/12/2017 Stable: The clients performance was Stable 5
BBB 27/12/2017 Stable: The clients performance was Stable 5
28/12/2017 Stable: The clients performance was Stable 5
```
When you use `groupby` you have to use an aggregate function; I used `.size()`.
Now you can write the dataframe `df1` to a csv. You can always wrap these steps into a function as well. Hope this leads you to an efficient method of analysis for your purposes.
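For the exact condition the question asks for (a group counts as Stable only when every Analisis value for that client and date is 'Estable'), a sketch using `groupby` with `all()` could look like this; the output filename is just an example:
```
status = df.groupby(['Cliente', 'Fecha'])['Analisis'].apply(
    lambda s: "Stable: The client's performance was Stable"
              if (s == 'Estable').all() else 'No Analized')
status.reset_index().to_csv('NewAnalysis.csv', index=False)
```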
|
The pandas package has the tools you need. However, I would recommend starting with [scipy](https://www.scipy.org/about.html "scipy") and [anaconda](https://www.anaconda.com/download/#linux "anaconda"), since I found installing pandas on its own to be quite difficult.
| 12,255
|
52,571,930
|
I have a 2D array A:
```
28 39 52
77 80 66
7 18 24
9 97 68
```
And a vector array of column indexes B:
```
1
0
2
0
```
How, in a pythonic way, using base Python or NumPy, can I select the elements from A which do NOT correspond to the column indexes in B?
I should get this 2D array, which contains the elements of A not corresponding to the column indexes stored in B:
```
28 52
80 66
7 18
97 68
```
|
2018/09/29
|
[
"https://Stackoverflow.com/questions/52571930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10434696/"
] |
You can make use of broadcasting and a row-wise mask to select, for each row, the elements whose column index is not in `B`:
***Setup***
```
import numpy as np

# A is the 4x3 array from the question
B = np.array([1, 0, 2, 0])
cols = np.arange(A.shape[1])
```
---
Now use broadcasting to create a mask, and index your array.
```
mask = B[:, None] != cols
A[mask].reshape(-1, 2)
```
```
array([[28, 52],
[80, 66],
[ 7, 18],
[97, 68]])
```
|
A spin-off of my answer to your other question,
[Replace 2D array elements with zeros, using a column index vector](https://stackoverflow.com/questions/52573733/replace-2d-array-elements-with-zeros-using-a-column-index-vector)
We can make a boolean `mask` with the same indexing used before:
```
In [124]: mask = np.ones(A.shape, dtype=bool)
In [126]: mask[np.arange(4), B] = False
In [127]: mask
Out[127]:
array([[ True, False, True],
[False, True, True],
[ True, True, False],
[False, True, True]])
```
Indexing an array with a boolean mask produces a 1d array, since in the most general case such a mask could select a different number of elements in each row.
```
In [128]: A[mask]
Out[128]: array([28, 52, 80, 66, 7, 18, 97, 68])
```
In this case the result can be reshaped back to 2d:
```
In [129]: A[mask].reshape(4,2)
Out[129]:
array([[28, 52],
[80, 66],
[ 7, 18],
[97, 68]])
```
Since you allowed for 'base Python', here's a list-comprehension answer:
```
In [136]: [[y for i,y in enumerate(x) if i!=b] for b,x in zip(B,A)]
Out[136]: [[28, 52], [80, 66], [7, 18], [97, 68]]
```
If all the 0's in the other `A` come from the insertion, then we can also get the `mask` (`Out[127]`) with
```
In [142]: A!=0
Out[142]:
array([[ True, False, True],
[False, True, True],
[ True, True, False],
[False, True, True]])
```
| 12,257
|
55,808,362
|
I have just started working with python 3.7 and I am trying to create a series, e.g. from 0 to 23, and then repeat it. Using
```
rep1 = pd.Series(range(24))
```
I figured out how to make the first 24 values, and I wanted to "copy-paste" them many times so that the final series is the original 5 times, one after the other. The result with `rep = pd.Series.repeat(rep1, 5)` looks like this, and it's not what I want:
```
0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 ...
```
What I'm seeking is the 0-23 range multiple times. Any advice?
|
2019/04/23
|
[
"https://Stackoverflow.com/questions/55808362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11167002/"
] |
You can try this:
```
pd.concat([rep1]*5)
```
This will repeat your series 5 times.
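Note that this keeps the original index (0 to 23, five times over); if you want a fresh 0 to 119 index instead, `pd.concat` accepts `ignore_index`:
```
pd.concat([rep1] * 5, ignore_index=True)
```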
|
You could use a list to generate your Series directly:
```
rep = pd.Series(list(range(24))*5)
```
| 12,258
|
6,474,923
|
I'm getting errors building App Store and Ad Hoc distributions of my project. I'm using the latest version of three20, which I integrated into my Xcode 4 project using the provided python script.
The Release and Debug versions of the project build just fine without any build errors.
Here's a summary of the errors:
error: Three20/Three20.h: No such file or directory
.. cannot find interface declaration for 'TTDefaultStyleSheet', superclass of 'MyTTStyleSheet'
|
2011/06/25
|
[
"https://Stackoverflow.com/questions/6474923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/539115/"
] |
I have figured out what's going on here. The python script sets the header search paths for three20 to:
```
$(BUILT_PRODUCTS_DIR)/../three20
$(BUILT_PRODUCTS_DIR)/../../three20
../../libs/external/three20/Build/Products/three20
```
These paths work fine for Debug and Release builds, as the macros expand to paths without any spaces (like build/Debug-iphoneos and build/Release-iphoneos). Xcode 4 doesn't seem to like the Ad Hoc and App Store distribution build folders, since they have spaces in them: build/Ad Hoc Distribution-iphoneos and build/Appstore Distribution-iphoneos. Double-quoting the build path strings fixed these issues.
Set your header search paths for three20 to:
```
"$(BUILT_PRODUCTS_DIR)/../three20"
"$(BUILT_PRODUCTS_DIR)/../../three20"
"../../libs/external/three20/Build/Products/three20"
```
|
It might have happened because you added these 2 new targets AFTER you used the python script to add the three20 project.
You will need to run the python script again to add three20 to your new targets:
```
python three20/src/scripts/ttmodule.py -p ProjectName/ProjectName.xcodeproj -c NEW_TARGET_NAME Three20
```
| 12,263
|
35,562,234
|
I have a python script that displays the date, hour and IP address for each attack in a log file. My issue is that I need to count how many attacks occur per hour per day, but when I implement a count it just counts the total, which is not what I want.
**The log file looks like this:**
```
Feb 3 08:50:39 j4-be02 sshd[620]: Failed password for bin from 211.167.103.172 port 39701 ssh2
Feb 3 08:50:45 j4-be02 sshd[622]: Failed password for invalid user virus from 211.167.103.172 port 41354 ssh2
Feb 3 08:50:49 j4-be02 sshd[624]: Failed password for invalid user virus from 211.167.103.172 port 42994 ssh2
Feb 3 13:34:00 j4-be02 sshd[666]: Failed password for root from 85.17.188.70 port 45481 ssh2
Feb 3 13:34:01 j4-be02 sshd[670]: Failed password for root from 85.17.188.70 port 46802 ssh2
Feb 3 13:34:03 j4-be02 sshd[672]: Failed password for root from 85.17.188.70 port 47613 ssh2
Feb 3 13:34:05 j4-be02 sshd[676]: Failed password for root from 85.17.188.70 port 48495 ssh2
Feb 3 21:45:18 j4-be02 sshd[746]: Failed password for invalid user test from 62.45.87.113 port 50636 ssh2
Feb 4 08:39:46 j4-be02 sshd[1078]: Failed password for root from 1.234.51.243 port 60740 ssh2
Feb 4 08:39:55 j4-be02 sshd[1082]: Failed password for root from 1.234.51.243 port 34124 ssh2
```
**The code I have so far is:**
```
import re
myAuthlog=open('auth.log', 'r') #open the auth.log for reading
for line in myAuthlog: #go through each line of the file and return it to the variable line
ip_addresses = re.findall(r'([A-Z][a-z]{2}\s\s\d\s\d\d).+Failed password for .+? from (\S+)', line)
print ip_addresses
```
**The outcome is as shown:**
```
[('Feb 5 08', '5.199.133.223')]
[]
[('Feb 5 08', '5.199.133.223')]
[]
[('Feb 5 08', '5.199.133.223')]
[]
[('Feb 5 08', '5.199.133.223')]
[]
[('Feb 5 08', '5.199.133.223')]
```
|
2016/02/22
|
[
"https://Stackoverflow.com/questions/35562234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4671509/"
] |
The python function [`groupby()`](https://docs.python.org/2/library/itertools.html#itertools.groupby) will group your items according to any criteria you specify.
This code will print the number of attacks per hour, per day:
```
from itertools import groupby
with open('auth.log') as myAuthlog:
for key, group in groupby(myAuthlog, key = lambda x: x[:9]):
print "%d attacks in hour %s"%(len(list(group)), key)
```
Or, with an additional requirement from the comments:
```
from itertools import groupby
with open('auth.log') as myAuthlog:
myAuthlog = (line for line in myAuthlog if "Failed password for" in line)
for key, group in groupby(myAuthlog, key = lambda x: x[:9]):
print "%d attacks in hour %s"%(len(list(group)), key)
```
Or, with different formatting:
```
from itertools import groupby
with open('auth.log') as myAuthlog:
myAuthlog = (line for line in myAuthlog if "Failed password for" in line)
for key, group in groupby(myAuthlog, key = lambda x: x[:9]):
month, day, hour = key[0:3], key[4:6], key[7:9]
print "%s:00 %s-%s: %d"%(hour, day, month, len(list(group)))
```
|
```
import collections
from datetime import datetime as dt
answer = collections.defaultdict(int)
with open('path/to/logfile') as infile:
for line in infile:
        stamp = line[:9]  # e.g. "Feb  3 08" (month, day, hour)
        t = dt.strptime(stamp, "%b %d %H")  # fields are space-separated, not tabs
answer[t] += 1
```
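A sketch of printing the collected counts afterwards, sorted by time (strptime defaults the year to 1900, which is harmless here):
```
for t in sorted(answer):
    print "%s:00 - %d attacks" % (t.strftime("%b %d %H"), answer[t])
```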
| 12,264
|
70,088,746
|
In python I am returning numbers, but I only want the last 10 digits.
Ex: 221234567890 should return 1234567890.
In Excel it would look like `IF(LEN(cell) > 10, RIGHT(cell, 10), ...)`, but I don't know how to do this in python.
|
2021/11/23
|
[
"https://Stackoverflow.com/questions/70088746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17492723/"
] |
```py
a = 221234567890
result = a % 10000000000
```
This should work for ints, and
```py
a = "221234567890"
result = a[-10:]
```
should work for strings
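A small helper mirroring the Excel formula from the question (a sketch; it accepts either an int or a string):
```py
def last_10_digits(value):
    s = str(value)
    # like IF(LEN(cell) > 10, RIGHT(cell, 10), cell)
    return s[-10:] if len(s) > 10 else s

print(last_10_digits(221234567890))  # prints 1234567890
```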
|
```
a = '123456789abcdefg'
a[-10:]
```
| 12,265
|
70,951,954
|
So I'm trying to make a script using lexing and parsing. I wanted to try to change the color of the text when a user inputs something.
Say in python, when I do:
'''print("Hello")'''
it changes the color of print, the string, the parens, etc. I just wanted to know how to do it,
and/or the code to do it.
Say my user types:
'''hello NAME'''
It will change the color of "hello" to green or something. Does anyone know how?
|
2022/02/02
|
[
"https://Stackoverflow.com/questions/70951954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18097622/"
] |
You can do it like this:
```
import colorama
from colorama import Fore

colorama.init()  # translates ANSI codes on Windows terminals
print(Fore.RED + 'This text is red in color')
```
|
Try using colorama or termcolor in python.
Here is a link:
<https://www.geeksforgeeks.org/print-colors-python-terminal/>
| 12,266
|
63,345,648
|
I have been able to filter all the image urls from a page and display them one after the other:
```
import requests
import shutil
import cv2
import numpy as np
from bs4 import BeautifulSoup

article_URL = "https://medium.com/bhavaniravi/build-your-1st-python-web-app-with-flask-b039d11f101c"
response = requests.get(article_URL)
soup = BeautifulSoup(response.text, 'html.parser')
images = soup.find('body').find_all('img')
i = 0
image_url = []
for im in images:
    print(im)
    i += 1
    url = im.get('src')
    image_url.append(url)
    print('Downloading: ', url)
    try:
        response = requests.get(url, stream=True)
        with open(str(i) + '.jpg', 'wb') as out_file:
            shutil.copyfileobj(response.raw, out_file)
        del response
    except:
        print('Could not download: ', url)

new = [x for x in image_url if x is not None]
for url in new:
    resp = requests.get(url, stream=True).raw
    image = np.asarray(bytearray(resp.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    # height, width, channels = image.shape
    height, width, _ = image.shape
    dimension = []
    for items in height, width:
        dimension.append(items)
    # print(height, width)
    print(dimension)
```
**I want to print the image with the largest dimension from the list of urls.**
This is the result I have from the list, which is not good enough:
```
[72, 72]
[95, 96]
[13, 60]
[227, 973]
[17, 60]
[229, 771]
```
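For the selection step, a sketch of what I have in mind (assuming the (height, width) pairs are collected together with their urls; the urls below are placeholders):
```
sizes = [((229, 771), 'http://example.com/a.jpg'),
         ((95, 96), 'http://example.com/b.jpg')]
# pick the entry with the largest height * width
(h, w), biggest_url = max(sizes, key=lambda t: t[0][0] * t[0][1])
print(biggest_url, (h, w))
```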
|
2020/08/10
|
[
"https://Stackoverflow.com/questions/63345648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8053783/"
] |
There is no native javascript api that allows you to find event listeners that were added using `eventTarget.addEventListener`.
You can still get events added using the `onclick` attribute, whether the attribute was set using javascript or inline through html. In this case you are not getting the event listener, but the value of the `onclick` attribute, which are two different things.
Javascript offers no api for doing so, because dom elements can be removed while event listeners still reference them.
If you want to keep track of event listeners attached to dom elements, you have to do that yourself.
Apart from that, chrome has a `getEventListeners` command line api which works with dom elements; however, it is a developer tools command line api and so it only works when called from developer tools.
|
There is no way to do this directly with JavaScript.
However, you can use this approach and add an attribute while binding events to the elements.
```js
document.getElementById('test2').addEventListener('keypress', function() {
this.setAttribute("event", "yes");
console.log("foo");
}
)
document.querySelectorAll('.test3').forEach(item => {
  item.addEventListener('click', event => {
    item.setAttribute("event", "yes"); // arrow functions have no own `this`
    console.log("bar");
  })
})
document.getElementById('test4').onclick = function(event) {
  let target = event.target;
  this.setAttribute("event", "yes");
  if (target.tagName != 'LI') { // tagName is uppercase
    target.classList.add('highlight');
  }
};
```
And this is how you can find the elements that have events bound to them:
```js
var eventElements = document.querySelectorAll("[event='yes']");
var countEventElements = eventElements.length;
```
| 12,267
|
67,137,419
|
I have started to use AWS SAM for python. When testing my functions locally I run:
```
sam build --use-container
sam local start-api
You can now browse to the above endpoints to invoke your functions. You do **not** need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically.
sam local invoke ...
```
Then, if I make changes to the code, they are not reflected when I invoke my function again unless I rebuild. Is there a trick that I am missing here? The prompt above is not clear to me.
|
2021/04/17
|
[
"https://Stackoverflow.com/questions/67137419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6281479/"
] |
I believe your goal is as follows.
* You want to reduce the processing cost of the following script.
```
resultsSheet.hideColumns(11);
resultsSheet.hideColumns(18);
resultsSheet.hideColumns(19);
resultsSheet.showColumns(26);
resultsSheet.showColumns(27);
resultsSheet.showColumns(28);
resultsSheet.showColumns(29);
```
### Modification points:
* When `hideColumns` and `showColumns` are used, the arguments are `columnIndex, numColumns`. When the columns you want to hide or show are contiguous, like columns "A", "B" and "C", you can achieve this with a single `hideColumns` or `showColumns` call. But when the columns you want to hide and show are scattered, multiple `hideColumns` and `showColumns` calls are required.
* In this case, in order to achieve your goal with one call, I would like to propose using the batchUpdate method of the Sheets API. With the Sheets API, both hiding and showing columns can be done in one API call.
The sample script is as follows.
### Sample script:
Please copy and paste the following script to the script editor of Spreadsheet you want to use. Before you use this script, [please enable Sheets API at Advanced Google services](https://developers.google.com/apps-script/guides/services/advanced#enable_advanced_services). And, please set the sheet name.
```
function myFunction() {
const hideColumns = [11, 18, 19];
const showColumns = [26, 27, 28, 29];
const sheetName = "Sheet1";
const ss = SpreadsheetApp.getActiveSpreadsheet();
const sheetId = ss.getSheetByName(sheetName).getSheetId();
const requests = [];
// Create requests for the hide columns.
if (hideColumns.length > 0) {
hideColumns.forEach(c =>
requests.push({ updateDimensionProperties: { properties: { hiddenByUser: true }, range: { sheetId: sheetId, dimension: "COLUMNS", startIndex: c - 1, endIndex: c }, fields: "hiddenByUser" } })
);
}
// Create requests for the show columns.
if (showColumns.length > 0) {
showColumns.forEach(c =>
requests.push({ updateDimensionProperties: { properties: { hiddenByUser: false }, range: { sheetId: sheetId, dimension: "COLUMNS", startIndex: c - 1, endIndex: c }, fields: "hiddenByUser" } })
);
}
// Request to Sheets API using the created requests.
if (requests.length > 0) Sheets.Spreadsheets.batchUpdate({requests: requests}, ss.getId());
}
```
* In the above sample script, when only `hideColumns` is declared, the columns are hidden; when only `showColumns` is declared, the columns are shown. When both `hideColumns` and `showColumns` are declared, the columns are hidden and shown accordingly.
+ The above process is done with one API call.
* The values of `hideColumns` and `showColumns` are taken from your script.
### References:
* [hideColumns(columnIndex, numColumns)](https://developers.google.com/apps-script/reference/spreadsheet/sheet#hideColumns(Integer,Integer))
* [showColumns(columnIndex, numColumns)](https://developers.google.com/apps-script/reference/spreadsheet/sheet#showcolumnscolumnindex,-numcolumns)
* [Method: spreadsheets.batchUpdate](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/batchUpdate)
* [UpdateDimensionPropertiesRequest](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/request#updatedimensionpropertiesrequest)
|
To do this faster, you can hide and show columns in groups, with one SpreadsheetApp call required per group. For example, you can hide the seven columns listed in your code sample with this:
`hideColumns_(resultSheet, [11, 18, 19, 26, 27, 28, 29]);`
To show columns, use a similar pattern with `showColumns_()`. Here's the code:
```
/**
* Shows and hides columns that have a magic value in a given row.
*/
function showAndHideColumnsByValue() {
const rowToWatch = 2; // 1-indexed
const valueShow = 'show';
const valueHide = 'hide';
const sheet = SpreadsheetApp.getActive().getActiveSheet();
const columnStart = sheet.getFrozenColumns() + 1;
const showOrHideValues = sheet.getRange(rowToWatch, columnStart, 1, sheet.getLastColumn())
.getValues()
.flat();
const columnsToShow = showOrHideValues
.map((value, index) => value === valueShow ? columnStart + index : 0)
.filter(Number);
showColumns_(sheet, columnsToShow);
const columnsToHide = showOrHideValues
.map((value, index) => value === valueHide ? columnStart + index : 0)
.filter(Number);
hideColumns_(sheet, columnsToHide);
}
/**
* Shows columns fast by grouping them before unhiding.
*
 * @param {SpreadsheetApp.Sheet} sheet The sheet where to show columns.
* @param {Number[]} columnsToShow The 1-indexed column numbers of columns to show.
*/
function showColumns_(sheet, columnsToShow) {
countConsecutives_(columnsToShow.sort((a, b) => a - b))
.forEach(group => sheet.showColumns(group[0], group[1]));
}
/**
* Hides columns fast by grouping them before hiding.
*
 * @param {SpreadsheetApp.Sheet} sheet The sheet where to hide columns.
* @param {Number[]} columnsToHide The 1-indexed column numbers of columns to hide.
*/
function hideColumns_(sheet, columnsToHide) {
countConsecutives_(columnsToHide.sort((a, b) => a - b))
.forEach(group => sheet.hideColumns(group[0], group[1]));
}
/**
* Counts consecutive numbers in an array and returns a 2D array that
* lists the first number of each run and the count of numbers in each run.
* Duplicate values in numbers will give duplicates in result.
*
* The numbers array [1, 2, 3, 5, 8, 9, 11, 12, 13, 5, 4] will get
* the result [[1, 3], [5, 1], [8, 2], [11, 3], [5, 1], [4, 1]].
*
* Typical usage:
* const runLengths = countConsecutives_(numbers.sort((a, b) => a - b));
*
* @param {Number[]} numbers The numbers to group into runs.
* @return {Number[][]} The numbers grouped into runs.
*/
function countConsecutives_(numbers) {
return numbers.reduce(function (acc, value, index) {
if (!index || value !== 1 + numbers[index - 1]) {
acc.push([value]);
}
acc[acc.length - 1][1] = (acc[acc.length - 1][1] || 0) + 1;
return acc;
}, []);
}
```
This will use the minimum number of hide operations when using the SpreadsheetApp API, but it may still be slow if many of the columns are separated by other columns. The Sheets API lets you do the same with just one call — see the answer by @Tanaike in this thread.
| 12,268
|
6,576,829
|
I'am looking for python async SMTP client to connect it with Torando IoLoop. I found only simple implmementation (<http://tornadogists.org/907491/>) but it's a blocking solution so it might bring performance issues.
Does anyone encountered non blocking SMTP client for Tornado? Some code snippet would be also very useful.
|
2011/07/04
|
[
"https://Stackoverflow.com/questions/6576829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/747179/"
] |
I wrote solution based on threads and queue. One thread per tornado process. This thread is a worker, gets email from queue and then send it via SMTP. You send emails from tornado application by adding it to queue. Simple and easy.
Here is sample code on GitHub: [link](https://github.com/marcinc81/quemail)
|
Just FYI - I just whipped up a ioloop based smtp client. While I can't say it's production tested, it will be in the near future.
<https://gist.github.com/1358253>
| 12,269
|
51,307,411
|
I'm trying to make an API for Pokemon, and I was thinking of packaging it, but no matter what I do, as soon as I try to import from this file it comes up with this error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/student/anaconda3/lib/python3.6/site-packages/pokeapi/__init__.py", line 1, in <module>
from Pokemon import *
ModuleNotFoundError: No module named 'Pokemon'
```
The directory is like this:
```
/pokeapi
/pokeapi
__init__.py
Pokemon.py
setup.py
```
I install it with pip, and that error comes up.
Code for `__init__.py`:
```
from Pokemon import *
```
Code for Pokemon.py: <https://hastebin.com/qegupucuma.py>
I don't know what I'm doing wrong
|
2018/07/12
|
[
"https://Stackoverflow.com/questions/51307411",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4503723/"
] |
I think the `if btnState == 2` statement is in the wrong block.
Also, what is `questionNumber`, and when is it incremented? How does it relate to `self.turn`?
You could try this:
```
@IBAction func btnUncoverQuestion(_ sender: RoundedButton) {
btnUncoverQuestion.setTitle(questionArray?[questionNumber].title ?? "No question here", for: .normal)
btnState = btnState + 1
if btnState == 1 {
sender.setTitle("Başlat", for: .normal)
sender.setTitleColor(UIColor.flatWhite, for: .normal)
startTimer()
} else if btnState == 2 {
btnState = 0
sender.setTitle("Tamam", for: .normal)
sender.setTitleColor(UIColor.flatWhite, for: .normal)
stopTimer()
let alert = UIAlertController(title: "Sizce cevap doğru mu", message: "Seçimin yap", preferredStyle: .alert)
alert.addAction(UIAlertAction(title: "Doğru", style: .default, handler: { (trueAction) in
self.playerArray[self.turn].point += 1
self.turn += 1
self.updateQuestionScreen(button: sender)
}))
alert.addAction(UIAlertAction(title: "Yanlış", style: .default, handler: { (wrongAction) in
self.turn += 1
self.updateQuestionScreen(button: sender)
}))
}
}
```
|
Please use tags for the different events. For example, on the 1st press you set `button.tag = 100`, and on the 2nd press you set the tag to 200.
Check the tag in the button action:
```
@IBAction func btnAction(_ sender: UIButton) {
switch sender.tag {
case 100:
//your action
sender.tag = 200
case 200:
//your action
sender.tag = 300
case 300:
//your action
sender.tag = 100
default:
break
}
}
```
Hope it helps you. Thank you.
| 12,273
|
22,391,419
|
What is the difference between curly braces and square brackets in python?
```
A ={1,2}
B =[1,2]
```
When I print `A` and `B` on my terminal, they look the same. Is that right?
And sometimes, I have noticed some code uses `{}` and `[]` to initialize different variables,
e.g. `A=[]`, `B={}`.
Is there any difference there?
|
2014/03/13
|
[
"https://Stackoverflow.com/questions/22391419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2911587/"
] |
Curly braces create [dictionaries](https://docs.python.org/3/library/stdtypes.html#mapping-types-dict) or [sets](https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset). Square brackets create [lists](https://docs.python.org/3/library/stdtypes.html#sequence-types-list-tuple-range).
They are called *literals*; a set literal:
```
aset = {'foo', 'bar'}
```
or a dictionary literal:
```
adict = {'foo': 42, 'bar': 81}
empty_dict = {}
```
or a list literal:
```
alist = ['foo', 'bar', 'bar']
empty_list = []
```
To create an empty set, you can only use `set()`.
Sets are collections of *unique* elements and you cannot order them. Lists are ordered sequences of elements, and values can be repeated. Dictionaries map keys to values, keys must be unique. Set and dictionary keys must meet other restrictions as well, so that Python can actually keep track of them efficiently and know they are and will remain unique.
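For instance, keys and set elements must be hashable, so a mutable list is rejected:

```
>>> d = {}
>>> d[(1, 2)] = 'ok'   # tuples are hashable and can be keys
>>> d[[1, 2]] = 'boom'
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'
```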
There is also the [`tuple` type](https://docs.python.org/3/library/stdtypes.html#tuple), using a comma for 1 or more elements, with parentheses being optional in many contexts:
```
atuple = ('foo', 'bar')
another_tuple = 'spam',
empty_tuple = ()
WARNING_not_a_tuple = ('eggs')
```
Note the comma in the `another_tuple` definition; it is that comma that makes it a `tuple`, not the parenthesis. `WARNING_not_a_tuple` is not a tuple, it has no comma. Without the parentheses all you have left is a string, instead.
See the [data structures chapter](http://docs.python.org/3/tutorial/datastructures.html) of the Python tutorial for more details; lists are introduced in the [introduction chapter](http://docs.python.org/3/tutorial/introduction.html#lists).
Literals for containers such as these are also called [displays](https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries) and the syntax allows for procedural creation of the contents based of looping, called *comprehensions*.
|
They create different types.
```
>>> type({})
<type 'dict'>
>>> type([])
<type 'list'>
>>> type({1, 2})
<type 'set'>
>>> type({1: 2})
<type 'dict'>
>>> type([1, 2])
<type 'list'>
```
| 12,274
|
17,904,216
|
I've done some searches, but I'm actually not sure how to word what I want to take place, so I started a question. I'm sure it's been covered before, so my apologies.
The code below doesn't work, but hopefully it illustrates what I'm trying to do.
```
sieve[i*2::i] *= ((i-1) / i):
```
I want to take a list, step through it in strides of "i", and change each visited item's value by multiplying it by (i-1)/i.
So for example if I had a list
```
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I want to start at 2 and change every 2nd item in the list, by itself * (2 - 1) / 2. So after it would look like
```
[1, 2, 3, 2, 5, 3, 7, 4, 9, 5]
```
How do I do that pythonically?
Thank you very much!
EDIT to add:
Sorry, I see where my poor wording has caused some confusion (I've changed it in the above).
I don't want to change every multiple of 2; I want to change every second item in the list, even if it's not a multiple of 2. So I can't use x % 2 == 0. Sorry!
|
2013/07/28
|
[
"https://Stackoverflow.com/questions/17904216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2209860/"
] |
You can do:
```
>>> def sieve(L, i):
... temp = L[:i]
... for x, y in zip(L[i::2], L[i+1::2]):
... temp.append(x)
... temp.append(y/2)
... return temp
...
>>> sieve([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 2)
[1, 2, 3, 2, 5, 3, 7, 4, 9, 5]
```
Note that `itself * (2 - 1 ) / 2` is equivalent to `itself * 1 / 2` which is equivalent to `itself / 2`.
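For reference, the slice-assignment form the question was reaching for can be sketched like this (start index and stride inferred from the example above):

```
L = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
i = 2
# replace every second item starting just past index i; // keeps integer results
L[i+1::2] = [x * (i - 1) // i for x in L[i+1::2]]
print(L)  # [1, 2, 3, 2, 5, 3, 7, 4, 9, 5]
```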
|
```
map(lambda x : x * (2 - 1) / 2 if x % 2 == 0 else x, list)
```
This should do what you want it to.
**Edit:**
Alternately in style, you could use list comprehensions for this as follows:
```
i = 2
list[:i] + [x * (i - 1) / i if x % i == 0 else x for x in list[i:]]
```
| 12,277
|
50,567,475
|
I am upgrading my django application from `Django 1.5` to `Django 1.7`. While upgrading I am getting a `django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet` error. I tried some solutions I found by searching, but nothing worked for me. I think it is because of one of my models. Please help me to fix this.
```
Traceback (most recent call last):
File "./manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute
django.setup()
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/__init__.py", line 21, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/apps/config.py", line 197, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/venkat/sample-applications/wfmis-django-upgrade/wfmis-upgrade/django-pursuite/apps/admin/models/__init__.py", line 14, in <module>
from occupational_standard import *
File "/home/venkat/sample-applications/wfmis-django-upgrade/wfmis-upgrade/django-pursuite/apps/admin/models/occupational_standard.py", line 160, in <module>
admin.site.register(OccupationalStandard, OccupationalStandardAdmin)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/contrib/admin/sites.py", line 99, in register
admin_class.check(model)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/contrib/admin/options.py", line 153, in check
return cls.checks_class().check(cls, model, **kwargs)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/contrib/admin/checks.py", line 497, in check
errors.extend(self._check_list_filter(cls, model))
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/contrib/admin/checks.py", line 668, in _check_list_filter
for index, item in enumerate(cls.list_filter)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/contrib/admin/checks.py", line 713, in _check_list_filter_item
get_fields_from_path(model, field)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/contrib/admin/utils.py", line 457, in get_fields_from_path
fields.append(parent._meta.get_field_by_name(piece)[0])
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/db/models/options.py", line 416, in get_field_by_name
cache = self.init_name_map()
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/db/models/options.py", line 445, in init_name_map
for f, model in self.get_all_related_m2m_objects_with_model():
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/db/models/options.py", line 563, in get_all_related_m2m_objects_with_model
cache = self._fill_related_many_to_many_cache()
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/db/models/options.py", line 577, in _fill_related_many_to_many_cache
for klass in self.apps.get_models():
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/utils/lru_cache.py", line 101, in wrapper
result = user_function(*args, **kwds)
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 168, in get_models
self.check_models_ready()
File "/home/venkat/sample-applications/wfmis-django-upgrade/venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 131, in check_models_ready
raise AppRegistryNotReady("Models aren't loaded yet.")
```
This is my project structure.
[](https://i.stack.imgur.com/K8C1p.jpg)
settings.py
```
INSTALLED_APPS = (
'admin.apps.AdminConfig',
'account.apps.AccountConfig',
'...............'
)
```
wsgi.py
```
import os
import sys
PROJECT_ROOT = os.path.dirname(__file__)
sys.path.insert(0, os.path.join(PROJECT_ROOT, '../apps'))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "pursuite.settings.production")
import settings
import django.core.management
django.core.management.setup_environ(settings) # Setup settings for core mgmt
utility = django.core.management.ManagementUtility()
command = utility.fetch_command('runserver')
command.validate()
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```
apps/admin/apps.py
```
from django.apps import AppConfig
class AdminConfig(AppConfig):
name = 'apps.admin'
label = 'wfmis_admin'
```
apps/admin/models/occupational\_standard.py
```
from tinymce.models import HTMLField
from django.db import models
from django.contrib import admin
from django.contrib import messages
from django.core.urlresolvers import reverse
from django.core.exceptions import ValidationError
from haystack import indexes
from .validators import validate_os_code, validate_version
import admin.common as common
__all__ = ['OccupationalStandard', 'OccupationalStandardIndex']
class OccupationalStandard(models.Model):
'''
Occupational Standard
'''
class Meta:
'''
Meta properties for this model
'''
app_label = 'wfmis_admin'
unique_together = ('code', 'version')
code = models.CharField(
max_length=9, default=None, validators=[validate_os_code],
db_index=True,
)
version = models.CharField(
max_length=8, default=None, validators=[validate_version],
db_index=True,
)
is_draft = models.BooleanField(default=True, verbose_name="Draft")
sub_sector = models.ForeignKey(
'SubSector', db_index=True, verbose_name="Industry Sub-sector",
)
title = models.CharField(
max_length=50, default=None, db_index=True, verbose_name="Unit Title",
)
description = models.TextField(default=None)
scope = HTMLField(default=None)
performace_criteria = HTMLField(default=None)
knowledge = HTMLField(default=None)
skills = HTMLField(default=None)
attachment = models.FileField(upload_to='os_attachments')
drafted_on = models.DateTimeField(auto_now_add=True)
last_reviewed_on = models.DateTimeField(auto_now=True) # Write date
next_review_on = models.DateField()
def __unicode__(self):
'''
Returns object display name. This comprises code and version.
For example: SSC/O2601-V0.1
'''
return "%s-V%s%s (%s)" % (
self.code, self.version, "draft" if self.is_draft else "",
self.title,
)
@property
def sector(self):
"""
Returns sector corresponding to occupational standard.
"""
return self.sub_sector.sector
def get_absolute_url(self):
'''
get absolute url
'''
return reverse('occupational_standard', args=(self.code,))
def clean(self):
'''
Validate model instance
'''
if OccupationalStandard.objects.filter(code=self.code, is_draft=True) \
.exclude(pk=self.pk):
# Check one OS should have one version in draft
raise ValidationError(
'There is already a version in draft for %s' % self.code
)
```
Reference links : [Django 1.7 throws django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet](https://stackoverflow.com/questions/25537905/django-1-7-throws-django-core-exceptions-appregistrynotready-models-arent-load)
|
2018/05/28
|
[
"https://Stackoverflow.com/questions/50567475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5939865/"
] |
From the traceback I see the following:
```
File "/home/venkat/sample-applications/wfmis-django-upgrade/wfmis-upgrade/django-pursuite/apps/admin/models/__init__.py", line 14, in <module>
from occupational_standard import *
File "/home/venkat/sample-applications/wfmis-django-upgrade/wfmis-upgrade/django-pursuite/apps/admin/models/occupational_standard.py", line 160, in <module>
admin.site.register(OccupationalStandard, OccupationalStandardAdmin)
```
There is a call to `admin.site.register` in the `models`. Registering models should happen in `admin` not in `models`.
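A minimal sketch of that split, using Django's conventional layout (fields trimmed for illustration):

```
# apps/admin/models/occupational_standard.py -- model code only
from django.db import models

class OccupationalStandard(models.Model):
    code = models.CharField(max_length=9)

# apps/admin/admin.py -- all admin registration lives here
from django.contrib import admin
from .models import OccupationalStandard

class OccupationalStandardAdmin(admin.ModelAdmin):
    pass

admin.site.register(OccupationalStandard, OccupationalStandardAdmin)
```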
|
Deactivate the venv before upgrading Django.
Stop the server before upgrading.
Update to the 1.7-style WSGI handler.
Also, use pip to manage and upgrade packages; your script is bound to break the packages otherwise.
| 12,285
|
11,055,165
|
From a file, I have taken a line and split it into 5 columns using `split()`. Now I have to write those columns as tab-separated values in an output file.
Let's say that I have `l[1], l[2], l[3], l[4], l[5]`... a total of 5 entries. How can I achieve this using Python? Also, I am not able to write the `l[1], l[2], l[3], l[4], l[5]` values to an output file.
I tried both these codes, both not working (I am using Python 2.6):
code 1:
```
with open('output', 'w'):
print l[1], l[2], l[3], l[4], l[5] > output
```
code 2:
```
with open('output', 'w') as outf:
outf.write(l[1], l[2], l[3], l[4], l[5])
```
|
2012/06/15
|
[
"https://Stackoverflow.com/questions/11055165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1411416/"
] |
The `write()` method takes a string as its first argument (not a variable number of strings). Try this:
```
outf.write(l[1] + l[2] + l[3] + l[4] + l[5])
```
or better yet:
```
outf.write('\t'.join(l) + '\n')
```
|
```
outf.write('{0[1]}\t{0[2]}\t{0[3]}\t{0[4]}\t{0[5]}\n'.format(l))
```
will write the data to the file tab separated. Note that write doesn't automatically append a `\n`, so if you need it you'll have to supply it yourself.
Also, it's better to open the file using `with`:
```
with open('output', 'w') as outf:
    outf.write('{0[1]}\t{0[2]}\t{0[3]}\t{0[4]}\t{0[5]}\n'.format(l))
```
as this will automatically close your file for you when you are done or an exception is encountered.
| 12,286
|
56,899,892
|
I am following along with this pycon video on python packaging.
I have a directory:
* `mypackage/`
+ `__init__.py`
+ `mypackage.py`
* `readme.md`
* `setup.py`
The contents of `mypackage.py`:
```
class MyPackage():
'''
My Damn Package
'''
def spam(self):
return "eggs"
```
The contents of `setup.py`:
```
import setuptools
setuptools.setup(
name='mypackage',
version='0.0.1',
description='My first package',
packages=setuptools.find_packages()
)
```
Now I create a virtual env and install the package with:
```
pip install -e .
```
Now I do:
```
python
>>> import mypackage
>>> mypackage.MyPackage().spam()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'mypackage' has no attribute 'MyPackage'
```
Why is this not working as per the guy's tutorial?
|
2019/07/05
|
[
"https://Stackoverflow.com/questions/56899892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2118666/"
] |
First, you can usually find the root cause in the **last** `Caused by` statement when debugging.
Therefore, according to the error log you posted, `Caused by: org.hibernate.HibernateException: Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set` is the key!
Although Hibernate is database agnostic, we can specify the current database dialect to let it generate better SQL queries for that database. Therefore, this exception can be solved by simply specifying `hibernate.dialect` in your properties file as follows:
For application.properties:
```
spring.jpa.database-platform=org.hibernate.dialect.MySQL5Dialect
```
For application.yml:
```
spring:
jpa:
database-platform: org.hibernate.dialect.MySQL5Dialect
```
|
In my case:
1. I created a **separate new project** with the **same code** in the **same workspace**.
2. Started the application.
3. This time Tomcat started successfully on the first attempt.
| 12,289
|
8,077,756
|
In my views.py I obtain 5 dicts, which are all something like {date: value}.
All 5 dicts have the same length, and in my template I want to build some URLs based on these dicts, with the common field being the date, as you would in an SQL query when joining 5 tables on a common column.
in python you would do something like:
```
for key, value in loc.items():
print key, loc[key], ctg[key], sctg[key], title[key], id[key]
```
but in django templates all i could come up with is this:
```
{% for lock, locv in loc.items %}
{% for ctgk, ctgv in ctg.items %}
{% for subctgk, subctgv in subctg.items %}
{% for titlek, titlev in titlu.items %}
{% for idk, idv in id.items %}
{% ifequal lock ctgk %}
{% ifequal ctgk subctgk %}
{% ifequal subctgk titlek %}
{% ifequal titlek idk %}
<br />{{ lock|date:"d b H:i" }} - {{ locv }} - {{ ctgv }} - {{ subctgv }} - {{ titlev }} - {{idv }}
.... {% endifequals & endfors %}
```
which of course is ugly and takes a lot of time to render.
Right now I am looking into building a custom tag, but I was wondering if you have any feedback on this topic?
|
2011/11/10
|
[
"https://Stackoverflow.com/questions/8077756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1023857/"
] |
Call `youredittext.requestFocus()` from the activity's `onCreate()` method.
|
You can write your code like this:
```
if (TextUtils.isEmpty(username)) {
    editTextUserName.setError("Please enter username");
    editTextUserName.requestFocus();
    return;
}
if (TextUtils.isEmpty(password)) {
    editTextPassword.setError("Enter a password");
    editTextPassword.requestFocus();
    return;
}
```
| 12,296
|
34,169,770
|
I am trying to select sensors by placing a box around their geographic coordinates:
```
In [1]: lat_min, lat_max = lats(data)
lon_min, lon_max = lons(data)
print(np.around(np.array([lat_min, lat_max, lon_min, lon_max]), 5))
Out[1]: [ 32.87248 33.10181 -94.37297 -94.21224]
In [2]: select_sens = sens[(lat_min<=sens['LATITUDE']) & (sens['LATITUDE']<=lat_max) &
(lon_min<=sens['LONGITUDE']) & (sens['LONGITUDE']<=lon_max)].copy()
Out[2]: ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-7881f6717415> in <module>()
4 lon_min, lon_max = lons(data)
5 select_sens = sens[(lat_min<=sens['LATITUDE']) & (sens['LATITUDE']<=lat_max) &
----> 6 (lon_min<=sens['LONGITUDE']) & (sens['LONGITUDE']<=lon_max)].copy()
7 sens_data = data[data['ID'].isin(select_sens['ID'])].copy()
8 sens_data.describe()
/home/kartik/miniconda3/lib/python3.5/site-packages/pandas/core/ops.py in wrapper(self, other, axis)
703 return NotImplemented
704 elif isinstance(other, (np.ndarray, pd.Index)):
--> 705 if len(self) != len(other):
706 raise ValueError('Lengths must match to compare')
707 return self._constructor(na_op(self.values, np.asarray(other)),
TypeError: len() of unsized object
```
Of course, `sens` is a pandas DataFrame. Even when I use `.where()` it raises the same error. I am completely stumped, because it is a simple comparison that shouldn't raise any errors. Even the data types match:
```
In [3]: sens.dtypes
Out[3]: ID object
COUNTRY object
STATE object
COUNTY object
LENGTH float64
NUMBER object
NAME object
LATITUDE float64
LONGITUDE float64
dtype: object
```
So what is going on?!?
**-----EDIT------**
As per Ethan Furman's answer, I made the following changes:
```
In [2]: select_sens = sens[([lat_min]<=sens['LATITUDE']) & (sens['LATITUDE']<=[lat_max]) &
([lon_min]<=sens['LONGITUDE']) & (sens['LONGITUDE']<=[lon_max])].copy()
```
And (drumroll) it worked... But why?
|
2015/12/09
|
[
"https://Stackoverflow.com/questions/34169770",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3765319/"
] |
I'm not familiar with either NumPy or Pandas, but the error is saying that one of the objects in the comparison `if len(self) != len(other)` does not have a `__len__` method and therefore has no length.
Try doing `print(sens_data)` to see if you get a similar error.
|
I found a similar issue and think the problem may be related to the Python version you are using.
I wrote my code in Spyder
**Python 3.6.1 |Anaconda 4.4.0 (64-bit)**
but then passed it to someone using Spyder with
**Python 3.5.2 |Anaconda 4.2.0 (64-bit)**
I had one numpy.float64 object (as far as I understand, similar to `lat_min`, `lat_max`, `lon_min` and `lon_max` in your code), `MinWD.MinWD[i]`
```
In [92]: type(MinWD.MinWD[i])
Out[92]: numpy.float64
```
and a Pandas data frame `WatDemandCur` with one column called `Percentages`
```
In [96]: type(WatDemandCur)
Out[96]: pandas.core.frame.DataFrame
In [98]: type(WatDemandCur['Percentages'])
Out[98]: pandas.core.series.Series
```
and i wanted to do the following comparison
```
In [99]: MinWD.MinWD[i]==WatDemandCur.Percentages
```
There was no problem with this line when running the code in my machine (**Python 3.6.1**)
But my friend got something similar to you in (**Python 3.5.2**)
```
MinWD.MinWD[i]==WatDemandCur.Percentages
Traceback (most recent call last):
File "<ipython-input-99-3e762b849176>", line 1, in <module>
MinWD.MinWD[i]==WatDemandCur.Percentages
File "C:\Program Files\Anaconda3\lib\site-packages\pandas\core\ops.py", line 741, in wrapper
if len(self) != len(other):
TypeError: len() of unsized object
```
My solution to his problem was to change the code to
```
[MinWD.MinWD[i]==x for x in WatDemandCur.Percentages]
```
and it worked in both versions!
With this and your evidence, I would assume that it is not possible to compare numpy.float64 (and perhaps numpy integer) objects with Pandas Series, and this could be partly related to the fact that the former have no `__len__` function.
Just out of curiosity, I did some tests with plain float and int objects (note the difference from the numpy.float64 object):
```
In [122]: Temp=1
In [123]: Temp2=1.0
In [124]: type(Temp)
Out[124]: int
In [125]: type(Temp2)
Out[125]: float
In [126]: len(Temp)
Traceback (most recent call last):
File "<ipython-input-126-dc80ab11ca9c>", line 1, in <module>
len(Temp)
TypeError: object of type 'int' has no len()
In [127]: len(Temp2)
Traceback (most recent call last):
File "<ipython-input-127-a1b836f351d2>", line 1, in <module>
len(Temp2)
TypeError: object of type 'float' has no len()
Temp==WatDemandCur.Percentages
Temp2==WatDemandCur.Percentages
```
Both worked!
Conclusions
1. In another Python version your code should work!
2. The problem with the comparison is specific to numpy floats and perhaps numpy integers.
3. When you include [] (or when I create the list in my solution), the object is changed from a numpy.float64 to a list, and in this way it works fine.
4. Although the problem seems related to the fact that numpy.float64 objects have no `__len__` function, plain floats and integers, which do not have a `__len__` function either, do work.
Hope some of this works for you or someone else facing a similar issue.
| 12,306
|
26,593,344
|
I'm writing a pandas Dataframe to a Postgres database:
```
from sqlalchemy import create_engine, MetaData
engine = create_engine(r'postgresql://user:password@localhost:5432/db')
meta = MetaData(engine, schema='data_quality')
meta.reflect(engine, schema='data_quality')
pdsql = pd.io.sql.PandasSQLAlchemy(engine, meta=meta)
pdsql.to_sql(dataframe, table_name)
```
It was working perfectly, but now SQLAlchemy is throwing the following error at the 5th line:
```
AttributeError: 'module' object has no attribute 'PandasSQLAlchemy'
```
I'm not sure if it's related, but Pandas broke at the same time - exactly like in this google-api-python-client issue:
[Could not Import Pandas: TypeError](https://stackoverflow.com/questions/26481285/could-not-import-pandas-)
I installed the google-api-python-client yesterday and uninstalling it fixed the problem with Pandas, but SQLAlchemy still doesn't work.
|
2014/10/27
|
[
"https://Stackoverflow.com/questions/26593344",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3591836/"
] |
I suppose you are using pandas 0.15. `PandasSQLAlchemy` was not yet really public, and was renamed in pandas 0.15 to `SQLDatabase`. So if you replace that in your code, it should work (so `pdsql = pd.io.sql.SQLDatabase(engine, meta=meta)`).
However, starting from pandas 0.15, there is also schema support in the `read_sql_table` and `to_sql` functions, so it should not be needed to make a `MetaData` and `SQLDatabase` object manually. Instead, this should do it:
```
dataframe.to_sql(table_name, engine, schema='data_quality')
```
See the 0.15 release notes: <http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#improvements-in-the-sql-io-module>
|
I encountered this error recently and it was solved by removing the .pyc files located in the same directory as the .py files. These .pyc files hold bytecode from a previous version of the code.
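If there are many of them, a small sketch like this clears them recursively (the starting directory is an assumption):

```
# remove stale .pyc files under the current directory
import os

for root, _, files in os.walk('.'):
    for name in files:
        if name.endswith('.pyc'):
            os.remove(os.path.join(root, name))
```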
| 12,307
|
67,220,607
|
I'm trying to save a loop's output into a text file with Python. However, when I try to do so, only the first line of the result gets printed to the file.
This is the code whose result I want to print:
```
with open('myfile.txt','w') as f_output:
f_output.write(
for k, v in mydic.items():
print(f"{k:11}{v[0]}{v[1]:12}"))
```
This prints only the first line of the result.
My dict looks like this:
```
mydic = {'1': [22, 23], '2': [33,24], '3': [44,25]}
```
I need it to print this to the file:
```
1 22 23
2 33 24
3 44 25
```
How can I do this?
|
2021/04/22
|
[
"https://Stackoverflow.com/questions/67220607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12763792/"
] |
Put the `write` call inside the loop; here the file is opened in append mode with `a`:
```py
mydic = {'1': [22, 23], '2': [33,24], '3': [44,25]}
with open('myfile.txt','a') as f_output:
for k, v in mydic.items():
# Also need `\n` for newlines:
f_output.write(f"{k:11}{v[0]}{v[1]:12}\n")
```
Output:
```
1 22 23
2 33 24
3 44 25
```
|
Change the argument from 'w' (write) to 'a' (append).
```py
mydic = {'1': [22, 23], '2': [33, 24], '3': [44, 25]}
with open('myfile.txt','a') as f_output:
for k, v in mydic.items():
res=f"{k:11}{v[0]}{v[1]:12}"
f_output.write(f"{res}\n")
print(res)
```
| 12,308
|
45,317,767
|
```
def generate_n_chars(n,s="."):
res=""
count=0
while count < n:
count=count+1
res=res+s
return res
print generate_n_chars(raw_input("Enter the integer value : "),raw_input("Enter the character : "))
```
I am a beginner in Python and I don't know why this loop runs forever. Please can someone correct my program?
|
2017/07/26
|
[
"https://Stackoverflow.com/questions/45317767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7504540/"
] |
I personally have another mental model which doesn't deal directly with identity and memory and whatnot.
`prvalue` comes from "pure rvalue" while `xvalue` comes from "expiring value" and is this information I use in my mental model:
**Pure rvalue** refers to an object that is a temporary in the "pure sense": an expression for which the compiler can tell with absolute certainty that its evaluation is an object that is a temporary that has just been created and that is immediately expiring (unless we intervene to prolong it's lifetime by reference binding). The object was created during the evaluation of the expression and it will die according to the rules of the "mother expression".
By contrast, an **expiring value** is an expression that evaluates to a reference to an object that is *promised* to expire soon. That is, it gives you a promise that you can do whatever you want to this object because it will be destroyed next anyway. But you don't know when this object was created, or when it is supposed to be destroyed. You just know that you "intercepted" it as it is just about to die.
In practice:
```
struct X;
auto foo() -> X;
```
```
X x = foo();
^~~~~
```
in this example evaluating `foo()` will result in a `prvalue`. Just by looking at this expression you know that this object was created as part of the return of `foo` and will be destroyed at the end of this full expression. Because you know all of these things, you can prolong its lifetime:
```
const X& rx = foo();
```
now the object returned by foo has its lifetime prolonged to the lifetime of `rx`
```
auto bar() -> X&&
```
```
X x = bar();
^~~~
```
In this example evaluating `bar()` will result in an `xvalue`. `bar` *promises* you that it is giving you an object that is about to expire, but you don't know when this object was created. It can be created way before the call to `bar` (as a temporary or not) and then `bar` gives you an `rvalue reference` to it. The advantage is that you know you can do whatever you want with it because it won't be used afterwards (e.g. you can move from it). But you don't know when this object is supposed to be destroyed. As such you cannot extend its lifetime, because you don't know what its original lifetime is in the first place:
```
const X& rx = bar();
```
this won't extend the lifetime.
|
When calling a `func(T&& t)` the caller is saying "there's a t here" and also "I don't care what you do to it". C++ does not specify the nature of "here".
On a platform where reference parameters are implemented as addresses, this means there must be an object present somewhere. On that platform identity == address. However this is not a requirement of the language, but of the platform calling convention.
A platform could implement references simply by arranging for the objects to be enregistered in a particular manner in both the caller and callee. Here an identity could be "register edi".
| 12,309
|
25,618,016
|
I have the following script which just isn't working for me :(. I essentially want to create 10 threads to port scan a range of 100 ports. It should seem simple but I don't know where I am going wrong. I'm new to Python and have been looking at how to get this working for the past two weeks, and I now give up. When executed it just does nothing. Help please :).
EDIT: Updated the code, but now it prints None when I run it.
```
#Import Modules
from scapy.all import *
from Queue import Queue
from threading import Thread
#Set Variables
threadCount = 10
destIP = "192.168.136.131"
portLength = 100
q = Queue(maxsize=0)
#Empty Arrays
openPorts = []
closedPorts = []
threads = []
def qProcessor(q):
while True:
try:
print q.get()
#getQ = q.get()
#getQ()
#if getQ() is None:
# break
#else:
q.task_done()
except Exception as e:
print 'error in qprocessor function'
print e
def portScan(port, dstIP):
scan = sr1(IP(dst=dstIP)/TCP(dport=port,flags="S"), verbose=0)
if scan.getlayer(TCP).flags == 0x12:
openPorts.append("IP: %s \t Port: %s"%(scan.getlayer(IP).src, scan.getlayer(TCP).sport))
sr1(IP(dst=dstIP)/TCP(dport=port,flags="R"),verbose=0)
if scan.getlayer(TCP).flags == 0x14:
closedPorts.append("IP: %s \t Port: %s"%(scan.getlayer(IP).src, scan.getlayer(TCP).sport))
def main():
try:
for i in range(threadCount):
worker = Thread(target=qProcessor, args =(q,))
worker.setDaemon(True)
worker.start()
except Exception as e:
print "error in worker section"
print e
for p in range(portLength):
q.put(portScan(p, destIP))
q.join()
if __name__ == '__main__':
main()
for port in openPorts:
print port
```
So I found the answer. This has killed me for two weeks, and I ended up debugging the application with the pdb module and the '-v' switch. I have learnt a lot from this exercise and want to kill Python after this, lol. But working it out with the little hints from Stack Overflow has been awesome. Here is the final script. I have commented out the line that was giving me issues whilst I work out a way around it. BTW, this works fine without threading.
```
#Import Modules
from scapy.all import *
from Queue import Queue
from threading import Thread
#Set Variables
threadCount = 10
destIP = "192.168.136.131"
portLength = 100
q = Queue(maxsize=0)
#Empty Arrays
openPorts = []
closedPorts = []
threads = []
def main():
try:
for i in range(threadCount):
worker = Thread(target=qProcessor, args =(q,))
worker.setDaemon(True)
worker.start()
except Exception as e:
print "error in worker section"
print e
for p in range(portLength):
q.put(portScan(p, destIP))
q.join()
def qProcessor(q):
while True:
try:
q.get()
q.task_done()
except Exception as e:
print 'error in qprocessor function'
print e
def portScan(port, dstIP):
scan = sr1(IP(dst=dstIP)/TCP(dport=port,flags="S"), verbose=0)
if scan.getlayer(TCP).flags == 0x12:
openPorts.append("IP: %s \t Port: %s"%(scan.getlayer(IP).src, scan.getlayer(TCP).sport))
# sr1(IP(dst=dstIP)/TCP(dport=port,flags="R"),verbose=0)
if scan.getlayer(TCP).flags == 0x14:
closedPorts.append("IP: %s \t Port: %s"%(scan.getlayer(IP).src, scan.getlayer(TCP).sport))
else:
pass
if __name__ == '__main__':
main()
for port in openPorts:
print port
```
|
2014/09/02
|
[
"https://Stackoverflow.com/questions/25618016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3999393/"
] |
You have an extra space after `resource`; have you tried removing it?
```
'resource ': {
```
|
It looks like you're doing extra encoding of the inserted rows. They should be sent as raw json, rather than encoding the whole row as a string. That is, something like this:
```
'rows': [
{
'insertId': 123456,
'json': {'id': 123,'name':'test1'}
}
]
```
(note: the only difference from what you have above is that the `{'id': 123,'name':'test1'}` line isn't quoted.)
| 12,310
|
70,943,395
|
I did the following in a Google Colab notebook and got an error. Any idea?
```
%pip install pyenchant
import enchant
```
and get the following error:
```
ImportError                               Traceback (most recent call last)
in ()
----> 1 import enchant

1 frames
/usr/local/lib/python3.7/dist-packages/enchant/_enchant.py in ()
    159     """
    160     )
--> 161     raise ImportError(msg)
    162
    163

ImportError: The 'enchant' C library was not found and needs to be installed.
See https://pyenchant.github.io/pyenchant/install.html
for details
```
|
2022/02/01
|
[
"https://Stackoverflow.com/questions/70943395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18092070/"
] |
After lots of research, I found the solution [here](https://github.com/googlecolab/colabtools/issues/441).
Run this code in Google Colab before `import enchant`:
```
!apt update
!apt install enchant --fix-missing
!apt install -qq enchant
!pip install pyenchant
```
|
Yes, enchant doesn't work on Google Colab because of its C libraries. You can use a Jupyter notebook for this library and it will work just fine.
| 12,311
|
43,094,861
|
I would like to split a string into separate, single strings and save each in a new variable. That's the use case:
* Direct user input with `BC1 = input("BC1: ")` in the following format: `'17899792270101010000000000', '17899792270102010000000000', '17899792270103010000000000'`
* Now I want each number - and just the number in a single variable:
```
a = 17899792270101010000000000
b = 17899792270102010000000000
c = 17899792270103010000000000
```
How can I realise that in Python 3?
Sadly I'm not able to create a suitable regular expression for it, nor find a way to save the string parts in separate variables. I hope someone could help me.
Thanks already in advance!
|
2017/03/29
|
[
"https://Stackoverflow.com/questions/43094861",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6051700/"
] |
Look into [re](https://docs.python.org/2/library/re.html#re.findall)
```
import re
input = "'17899792270101010000000000', '17899792270102010000000000', '17899792270103010000000000'"
matches = re.findall('(\d+)', input)
# matches = ['17899792270101010000000000', '17899792270102010000000000', '17899792270103010000000000']
a, b, c = re.findall('(\d+)', input)
# a = '17899792270101010000000000'
# b = '17899792270102010000000000'
# c = '17899792270103010000000000'
```
edit:
If you want `int` not `str`, you can `map(int, matches)`
|
If it's already formatted at the input:
```
>>> BC1 = input("BC1: ")
BC1: '17899792270101010000000000', '17899792270102010000000000', '17899792270103010000000000'
>>> a=int(BC1[0])
>>> b=int(BC1[1])
>>> c=int(BC1[2])
>>> a
17899792270101010000000000
>>> b
17899792270102010000000000
>>> c
17899792270103010000000000
```
| 12,312
|
38,550,322
|
Hello everyone!
I'm new to Python network programming.
My development environments are as below.
* Windows 7
* Python 3.4
I am studying with "Python Network Programming Cookbook". In this book, there's an example of a **ThreadingMixIn** socket server application.
The book's code is written in Python 2.7, so I've modified it for Python 3.4.
The code is...
```
# coding: utf-8
import socket
import threading
import socketserver
SERVER_HOST = 'localhost'
SERVER_PORT = 0 # tells the kernel to pick up a port dynamically
BUF_SIZE = 1024
def client(ip, port, message):
""" A client to test threading mixin server"""
# Connect to the server
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ip, port))
try:
message = bytes(message, encoding="utf-8")
sock.sendall(message)
response = sock.recv(BUF_SIZE)
print("Client received: {0}".format(response))
finally:
sock.close()
class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
""" An example of threaded TCP request handler """
def handle(self):
data = self.request.recv(1024)
current_thread = threading.current_thread()
response = "{0}: {0}".format(current_thread.name, data)
response = bytes(response, encoding="utf-8")
self.request.sendall(response)
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
"""Nothing to add here, inherited everything necessary from parents"""
pass
if __name__ == "__main__":
# Run server
server = ThreadedTCPServer((SERVER_HOST, SERVER_PORT),
ThreadedTCPRequestHandler)
ip, port = server.server_address # retrieve ip address
# Start a thread with the server -- one thread per request
server_thread = threading.Thread(target=server.serve_forever)
# Exit the server thread when the main thread exits
server_thread.daemon = True
server_thread.start()
print("Server loop running on thread: {0}".format(server_thread))
# Run clients
client(ip, port, "Hello from client 1")
client(ip, port, "Hello from client 2")
client(ip, port, "Hello from client 3")
```
This code works perfectly. Every client request is processed by a new thread. And when the clients' requests are over, the program ends.
**I want to make the server serve forever.** So when an additional client request comes, the server sends its response to that client.
What should I do?
Thank you for reading my question.
P.S: Oh, one more. I always say hello at the top of my Stack Overflow posts. In the preview it shows normally, but when the post is saved the first line is always gone. Please can anyone help me XD
|
2016/07/24
|
[
"https://Stackoverflow.com/questions/38550322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6192555/"
] |
Your program exits because your server thread is a daemon:
```
# Exit the server thread when the main thread exits
server_thread.daemon = True
```
You can either remove that line or add `server_thread.join()` at the bottom of the code to prevent the main thread from exiting early.
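A sketch of the second option, reusing the names from the question's `__main__` block:

```
server_thread.start()
print("Server loop running on thread: {0}".format(server_thread.name))
try:
    server_thread.join()        # main thread waits here instead of exiting
except KeyboardInterrupt:
    server.shutdown()           # stop serve_forever() cleanly on Ctrl-C
    server.server_close()
```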
|
You will have to run an infinite loop and on each iteration wait for some data to come from the client. This way the connection will be kept alive.
The same infinite-loop idea applies to the server accepting more clients.
However, you will have to somehow detect when a client closes the connection, because most of the time the server won't be notified.
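A minimal sketch of such a handler, an echo variant of the `handle` method above in which the empty read is the disconnect signal:

```
def handle(self):
    while True:
        data = self.request.recv(1024)
        if not data:                 # b'' means the client closed the socket
            break
        self.request.sendall(data)   # keep answering until disconnect
```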
| 12,315
|
53,312,339
|
I need to install COCOAPI for Python 3.5 on my Linux machine, but when I run "make" it automatically installs it for 2.7. Is there an option to choose a Python version while using "make"?
**EDIT 1 :**
Going to the PythonAPI folder and installing it via python3 setup.py install gives the following error.
```
sudo python3 setup.py install
running install
running bdist_egg
running egg_info
creating pycocotools.egg-info
writing requirements to pycocotools.egg-info/requires.txt
writing dependency_links to pycocotools.egg-info/dependency_links.txt
writing top-level names to pycocotools.egg-info/top_level.txt
writing pycocotools.egg-info/PKG-INFO
writing manifest file 'pycocotools.egg-info/SOURCES.txt'
reading manifest file 'pycocotools.egg-info/SOURCES.txt'
writing manifest file 'pycocotools.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/coco.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/__init__.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/mask.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/cocoeval.py -> build/lib.linux-x86_64-3.5/pycocotools
running build_ext
cythoning pycocotools/_mask.pyx to pycocotools/_mask.c
/home/pradyumn/.local/lib/python3.5/site-packages/Cython/Compiler/Main.py:367: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /home/pradyumn/cocoapi-master/PythonAPI/pycocotools/_mask.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
building 'pycocotools._mask' extension
creating build/common
creating build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/pycocotools
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/pradyumn/.local/lib/python3.5/site-packages/numpy/core/include -I../common -I/usr/include/python3.5m -c ../common/maskApi.c -o build/temp.linux-x86_64-3.5/../common/maskApi.o -Wno-cpp -Wno-unused-function -std=c99
../common/maskApi.c: In function ‘rleToBbox’:
../common/maskApi.c:141:31: warning: ‘xp’ may be used uninitialized in this function [-Wmaybe-uninitialized]
if(j%2==0) xp=x; else if(xp<x) { ys=0; ye=h-1; }
^
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/pradyumn/.local/lib/python3.5/site-packages/numpy/core/include -I../common -I/usr/include/python3.5m -c pycocotools/_mask.c -o build/temp.linux-x86_64-3.5/pycocotools/_mask.o -Wno-cpp -Wno-unused-function -std=c99
pycocotools/_mask.c:4:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
```
**Edit 2**
I tried installing cython and then pycocotools but that gives me the following error.
```
pip3 install pycocotools
Collecting pycocotools
Using cached https://files.pythonhosted.org/packages/96/84/9a07b1095fd8555ba3f3d519517c8743c2554a245f9476e5e39869f948d2/pycocotools-2.0.0.tar.gz
Building wheels for collected packages: pycocotools
Running setup.py bdist_wheel for pycocotools ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-p1jp5iir/pycocotools/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-xirt_d3c --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/__init__.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/coco.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/mask.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/cocoeval.py -> build/lib.linux-x86_64-3.5/pycocotools
running build_ext
building 'pycocotools._mask' extension
creating build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/pycocotools
creating build/temp.linux-x86_64-3.5/common
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/pradyumn/.local/lib/python3.5/site-packages/numpy/core/include -Icommon -I/usr/include/python3.5m -c pycocotools/_mask.c -o build/temp.linux-x86_64-3.5/pycocotools/_mask.o -Wno-cpp -Wno-unused-function -std=c99
pycocotools/_mask.c:32:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pycocotools
Running setup.py clean for pycocotools
Failed to build pycocotools
Installing collected packages: pycocotools
Running setup.py install for pycocotools ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-p1jp5iir/pycocotools/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-pipcgtu8/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/__init__.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/coco.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/mask.py -> build/lib.linux-x86_64-3.5/pycocotools
copying pycocotools/cocoeval.py -> build/lib.linux-x86_64-3.5/pycocotools
running build_ext
building 'pycocotools._mask' extension
creating build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/pycocotools
creating build/temp.linux-x86_64-3.5/common
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/pradyumn/.local/lib/python3.5/site-packages/numpy/core/include -Icommon -I/usr/include/python3.5m -c pycocotools/_mask.c -o build/temp.linux-x86_64-3.5/pycocotools/_mask.o -Wno-cpp -Wno-unused-function -std=c99
pycocotools/_mask.c:32:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-p1jp5iir/pycocotools/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-pipcgtu8/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-p1jp5iir/pycocotools/
```
**EDIT 3**
To solve the error above, run the command below for the appropriate version of Python.
```
sudo apt-get install python3.5-dev
```
Then do the following:
```
pip3 install pycocotools --user
```
And then rerun the following:
```
sudo python3 setup.py install
```
|
2018/11/15
|
[
"https://Stackoverflow.com/questions/53312339",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8715275/"
] |
Clone the repo using `git clone https://github.com/cocodataset/cocoapi.git` then enter the dir where it is located and type `python3 setup.py install` which should install using python3.
If the above doesn't work, try `pip3 install cython` followed by `pip3 install pycocotools` (you can add the `--user` flag to all of these if necessary)
|
You can make sure the default `python` resolves to the one you are interested in, for example by re-pointing the `python` symlink (which typically targets `python2`) wherever it is defined (possibly `/usr/bin`).
| 12,316
|
70,909,920
|
I'm trying to write code that will search for specific data in multiple report files and write it into columns in a single CSV.
The report lines I'm looking for aren't always at the same position in a file, so I'm looking for the data associated with the lines below:
Estimate file: pog_example.bef
Estimate ID: o1_p1
61078 (100.0%) estimated.
And I want to write the data from each text file into columns in a csv as below:
example.bef, o1_p1, 61078 (100.0%) estimated
So far I have this script, which will list out the first of my criteria, but I can't figure out how to loop through to find my second and third lines and populate the second and third columns:
```
from glob import glob
import fileinput
import csv
with open('percentage_estimated.csv', 'w', newline='') as est_report:
writer = csv.writer(est_report)
for line in fileinput.input(glob('*.bef*')):
if 'Estimate file' in line:
writer.writerow([line.split('pog_')[1].strip()])
```
I'm pretty new to python so any help would be appreciated!
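One possible shape for that loop, sketched against the three sample lines above (the `Estimate ID` and percentage patterns are assumptions):

```
import csv
from glob import glob

with open('percentage_estimated.csv', 'w', newline='') as est_report:
    writer = csv.writer(est_report)
    for path in glob('*.bef*'):
        row = ['', '', '']
        with open(path) as report:
            for line in report:
                if 'Estimate file' in line:
                    row[0] = line.split('pog_')[1].strip()
                elif 'Estimate ID' in line:
                    row[1] = line.split(':', 1)[1].strip()
                elif 'estimated.' in line:
                    row[2] = line.strip().rstrip('.')
        writer.writerow(row)
```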
|
2022/01/29
|
[
"https://Stackoverflow.com/questions/70909920",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17361896/"
] |
Liquid is not going to work on JSON like this. If you want to iterate through an array of JSON objects, use Javascript.
|
As lov2code points out, adding (-) trims unnecessary whitespace from the output, which enables you to traverse the JSON array.
| 12,317
|
28,228,238
|
Hello guys, I am really new to Python and I am trying to sort the /etc/passwd file using Python 3.4 based on the following criteria:
Input (a regular /etc/passwd file on a Linux system):
```
raj:x:501:512::/home/raj:/bin/ksh
ash:x:502:502::/home/ash:/bin/zsh
jadmin:x:503:503::/home/jadmin:/bin/sh
jwww:x:504:504::/htdocs/html:/sbin/nologin
wwwcorp:x:505:511::/htdocs/corp:/sbin/nologin
wwwint:x:506:507::/htdocs/intranet:/bin/bash
scpftp:x:507:507::/htdocs/ftpjail:/bin/bash
rsynftp:x:508:512::/htdocs/projets:/bin/bash
mirror:x:509:512::/htdocs:/bin/bash
jony:x:510:511::/home/jony:/bin/ksh
amyk:x:511:511::/home/amyk:/bin/ksh
```
Output that I am looking for either to the file or returned to the screen:
```
Group 511 : jony, amyk, wwwcorp
Group 512 : mirror, rsynftp, raj
Group 507 : wwwint, scpftp
and so on
```
Here is my plan:
```
1) Open and read the whole file or do it line by line
2) Loop through the file using python regex
3) Write it into temp file or create a dictionary
4) Print the dictionary keys and values
```
I would really appreciate an example of how it can be done efficiently, or any applicable sorting algorithm.
Thanks!
|
2015/01/30
|
[
"https://Stackoverflow.com/questions/28228238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2214003/"
] |
You can open the file, read it into a list, and then put all the users into a dict keyed by group:
```
with open("/etc/passwd") as f:
lines = f.readlines()
group_dict = {}
for line in lines:
split_line = line.split(":")
user = split_line[0]
gid = split_line[3]
# If the group id is not in the dict then put it in there with a list of users
# that are under that group
if gid not in group_dict:
group_dict[gid] = [user]
# If the group id does exist then add the new user to the list of users in
# the group
else:
group_dict[gid].append(user)
# Iterate over the groups and users we found. Keys (group) will be the first item in the tuple,
# and the list of users will be the second item. Print out the group and users as we go
for group, users in group_dict.items():  # iteritems() is Python 2 only; the question targets 3.4
print("Group {}, users: {}".format(group, ",".join(users)))
```
|
This should loop through your `/etc/passwd` and sort users by group. You don't have to do anything fancy to solve this problem.
```
with open('/etc/passwd', 'r') as f:
res = {}
for line in f:
parts = line.split(':')
try:
name, gid = parts[0], int(parts[3])
except IndexError:
print("Invalid line.")
continue
try:
res[gid].append(name)
except KeyError:
res[gid] = [name]
for key, value in res.items():
print(str(key) + ': ' + ', '.join(value))
```
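The same grouping can also be sketched with `collections.defaultdict`, which removes the need for the KeyError handling:

```
from collections import defaultdict

res = defaultdict(list)
with open('/etc/passwd') as f:
    for line in f:
        parts = line.split(':')
        if len(parts) > 3:
            res[int(parts[3])].append(parts[0])

for gid, names in res.items():
    print('Group {} : {}'.format(gid, ', '.join(names)))
```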
| 12,319
|
26,643,705
|
So, I have tried this problem what seems like a hundred times this week alone.
It's filling in the blank for the following program...
You entered jackson and ville.
When these are combined, it makes jacksonville.
Taking every other letter gives us jcsnil.
The blanks I have filled in are fine, but I can't figure out the rest. Here they are.
```
x = raw_input("Enter a word: ")
y = raw_input("Enter another word: ")
print("You entered %s and %s." % (x,y))
combined = x + y
print("When these are combined, it makes %s." % combined)
every_other = ""
counter = 0
for __________________ :
if ___________________ :
every_other = every_other + letter
____________
print("Taking every other letter gives us %s." % every_other)
```
I just need three blanks for this program. This is basic Python, so nothing too complicated, something I can match with the twenty options. Please, I appreciate your help!
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26643705",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4196499/"
] |
Assuming that each word is on its own line, you should be reading the file more like...
```
try (Scanner in = new Scanner(new File(fileName))) {
while (in.hasNextLine()) {
String dictionaryword = in.nextLine();
dictionary.add(dictionaryword);
}
}
```
Remember, if you open a resource, you are responsible for closing it. See [The try-with-resources Statement](http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) for more details...
Calculating the metrics can be done after reading the file, but since your here, you could do something like...
```
int totalWordLength = 0;
String longest = "";
while (in.hasNextLine()) {
String dictionaryword = in.nextLine();
totalWordLength += dictionaryword.length();
dictionary.add(dictionaryword);
if (dictionaryword.length() > longest.length()) {
longest = dictionaryword;
}
}
int averageLength = Math.round(totalWordLength / (float)dictionary.size());
```
But you could just as easily loop through the `dictionary` and use the same idea
(nb- I've used local variables, so you will either want to make them class fields or return them wrapped in some kind of "metrics" class - your choice)
|
Set up two counters and a variable that holds the current longest word before you start reading with your while loop. To find the average, have one counter incremented by one each time a line is read and have the second counter add up the number of characters in each word (the total number of characters entered, divided by the total number of words read, as given by the total number of lines, is the average word length).
As for the longest word, set the longest word to the empty string or some dummy value like a single character. Each time you read a line, compare the current word with the previously found longest word (using the `.length()` method on the String) and if it's longer, record it as the new longest word.
Also, if you have all this in a file, I'd use a [buffered reader](http://docs.oracle.com/javase/7/docs/api/java/io/BufferedReader.html) to read in your input data
| 12,320
|
61,160,595
|
I'm trying to write a function that takes in a list and returns true if it contains the numbers 0,0,7 in that order. When I run this code:
```
def prob11(abc):
if 7 and 0 and 0 not in abc:
return False
x = abc.index(0)
elif 7 and 0 and 0 in abc and abc[x + 1] == 0 and abc[x + 2] == 7:
return True
else:
return False
```
I get this error:
```
File "<ipython-input-12-e2879221a9bf>", line 5
elif 7 and 0 and 0 in abc and abc[x + 1] == 0 and abc[x + 2] == 7:
^
SyntaxError: invalid syntax
```
What's wrong with my elif statement?
|
2020/04/11
|
[
"https://Stackoverflow.com/questions/61160595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13201582/"
] |
Your screenshot is showing data in Realtime Database, but your code is querying Firestore. They are completely different databases with different APIs. You can't use the Firestore SDK to query Realtime Database. If you want to work with Realtime Database, use the documentation [here](https://firebase.google.com/docs/database/ios/read-and-write).
|
There is an `author` document between `posts` and the `username` field in your data structure.
Your code assumes that the `username` field sits directly under a specific post.
So code like this works, because `date` is directly under the post:
```
db.collection("posts").whereField("date", isEqualTo: "some-bla-bla-date")
```
In your case you have two options, as I see it:
* duplicate `username` and place this field on the same level as `date` and `guests`;
* re-write your code to check `username` inside the `author` document.
Hope this helps you in your investigation.
| 12,322
|
34,271,807
|
I am trying to write a Python script that operates on several text files inside a subdirectory, e.g.
```
python script.py --inputdir ~/subdirectory
```
which will execute each file inside this subdirectory. How can one use argparse to do this? Should you write a function to access and open each file?
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--inputdir", help="path to your subdirectory",
required=True)
args = parser.parse_args()
```
Now, what do I do with `args.inputdir`? How do I extract files?
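A hedged sketch of one way to walk the directory (`process` is a placeholder for your own logic):

```
import os

for name in sorted(os.listdir(args.inputdir)):
    path = os.path.join(args.inputdir, name)
    if os.path.isfile(path):
        with open(path) as f:
            process(f.read())   # replace process() with whatever each file needs
```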
|
2015/12/14
|
[
"https://Stackoverflow.com/questions/34271807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4596596/"
] |
I think you're mixing up the return value of setInterval.
setInterval returns a handle to the scheduled function, so you can later call clearInterval().
From MDN:
<https://developer.mozilla.org/en-US/docs/Web/API/WindowTimers/setInterval>
```
Syntax
var intervalID = window.setInterval(func, delay[, param1, param2, ...]);
var intervalID = window.setInterval(code, delay);
intervalID is a unique interval ID you can pass to clearInterval().
```
If you then evaluate startTime + 1 in that function, what happens?
Nothing is returned and startTime is not changed; the expression is just evaluated and the function exits.
Counting seconds by calling a function every second and adding 1 is overkill, and it will probably not be precise either.
Try doing it like this:
```
// when the start button is pressed:
startTime = new Date().getTime();
```
When the stop button is pressed:
```
millisecondsPassed = new Date().getTime() - startTime;
secondsPassed = Math.floor(millisecondPassed / 1000);
```
If the only purpose of the function is to increase the time counter, you do not have to call it at all, and there are no intervals to clear.
|
[`setInterval`](https://developer.mozilla.org/en-US/docs/Web/API/WindowTimers/setInterval) returns a reference to the interval you have just created, so that later you can stop the interval.
```
var myRef = setInterval(...);
clearInterval(myRef);
```
Every time `moveBall()` is called and you run this
```
startTime = setInterval(function(){ startTime + 1;}, 1000);
$('#time').html(startTime);
```
you are creating a new interval and outputting its reference. In most browsers that is a simple counter that starts from 1. You should not modify the `startTime` value if you are planning on using it later, as you will have lost the reference to your interval timer.
| 12,324
|
7,513,133
|
From a windows application written in C++ or python, how can I execute arbitrary shell commands?
My installation of Cygwin is normally launched from the following bat file:
```
@echo off
C:
chdir C:\cygwin\bin
bash --login -i
```
|
2011/09/22
|
[
"https://Stackoverflow.com/questions/7513133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/490908/"
] |
From Python, run bash with `os.system`, `os.popen` or `subprocess` and pass the appropriate command-line arguments.
```
os.system(r'C:\cygwin\bin\bash --login -c "some bash commands"')
```
|
Bash should accept a command from args when using the -c flag:
```
C:\cygwin\bin\bash.exe -c "somecommand"
```
Combine that with C++'s [`exec`](http://linux.about.com/library/cmd/blcmdl3_execvp.htm) or python's `os.system` to run the command.
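For instance, a minimal Python sketch (the Cygwin path is taken from the bat file above; the command string is illustrative):
```
import subprocess

# Run an arbitrary shell command inside Cygwin's bash.
output = subprocess.check_output(
    [r"C:\cygwin\bin\bash.exe", "--login", "-c", "uname -a"]
)
print(output)
```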
| 12,325
|
67,915,835
|
My colleague and I are working on a django (python) project and pushing our code to the same branch (let's say branch1). As a beginner I know how to push code to a particular branch, but I have no idea how pull and merge are done. What should I do if I want the full project, including his code and my code together, without overriding the whole file? (Let's say he made views1.py and I made views2.py; then after the merge and everything, the result must be views3.py.)
* Now views3.py contains my code and his code.
Any kind of help would be appreciated.
|
2021/06/10
|
[
"https://Stackoverflow.com/questions/67915835",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13963543/"
] |
For node.js subprocesses there is the [cluster module](https://nodejs.org/api/cluster.html) and I strongly recommend using this. For general subprocesses (e.g. bash scripts as you mentioned) you have to use `child_process` (-> execa). Communication between processes may then be accomplished via grpc. Your approach is fine, so you can consider moving forward with it.
|
I decided to go full with `pm2` for the time being, as they have an excellent [programmatic API](https://pm2.keymetrics.io/docs/usage/pm2-api/) - also (which I only just learned about) you can specify [different interpreters](https://pm2.keymetrics.io/docs/usage/process-management/#start-any-process-type) to run your script. So not only `node` apps are possible but also `bash`, `python`, `php` and so on - which is exactly what I am looking for.
| 12,328
|
36,477,552
|
I've got a python script that's being run from the **if-up** script that's called by the **ppp** program on Linux when the PPP connection is established. The python script basically calls a command line program, parses the result and returns it:
```
import subprocess
result = subprocess.check_output(["fw_printenv", "serialnr"])
result = # some operation
return result
```
Although this code works 100% fine when I run the python script manually from the command line (e.g. `python script.py`), it doesn't work at all when it's run by PPP from if-up. An exception is raised when `subprocess.check_output` is called: `[Errno 2] No such file or directory: 'fw_printenv'`.
I can only get it to work if I change the code to:
```
result = subprocess.check_output("fw_printenv serialnr", shell=True)
```
Why?
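(For reference, a minimal sketch of the two invocation styles; the absolute path below is an assumption, so locate the real one with `which fw_printenv`. One common cause of this symptom is a minimal PATH in hook environments such as if-up, which an absolute path sidesteps:)
```
import subprocess

# List form: the executable name is resolved against this process's PATH,
# unless an absolute path is given.
out1 = subprocess.check_output(["/usr/bin/fw_printenv", "serialnr"])

# String form with shell=True: /bin/sh parses and resolves the command line.
out2 = subprocess.check_output("fw_printenv serialnr", shell=True)
```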
|
2016/04/07
|
[
"https://Stackoverflow.com/questions/36477552",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/286701/"
] |
`int &foo();` declares a function called `foo()` with return type `int&`. If you call this function without providing a body then you are likely to get an undefined reference error.
In your second attempt you provided a function `int foo()`. This has a different return type to the function declared by `int& foo();`. So you have two declarations of the same `foo` that don't match, which violates the One Definition Rule causing undefined behaviour (no diagnostic required).
For something that works, take out the local function declaration. They can lead to silent undefined behaviour as you have seen. Instead, only use function declarations outside of any function. Your program could look like:
```
int &foo()
{
static int i = 2;
return i;
}
int main()
{
++foo();
std::cout << foo() << '\n';
}
```
|
In that context the & means a reference - so foo returns a reference to an int, rather than an int.
I'm not sure if you've worked with pointers yet, but it's a similar idea: you're not actually returning the value out of the function; instead you're passing the information needed to find the location in memory where that int lives.
So to summarize you're not assigning a value to a function call - you're using a function to get a reference, and then assigning the value being referenced to a new value. It's easy to think everything happens at once, but in reality the computer does everything in a precise order.
If you're wondering, the reason you're getting a segfault is that you're returning the numeric literal '2', so it's the exact error you'd get if you were to define a const int and then try to modify its value.
If you haven't learned about pointers and dynamic memory yet then I'd recommend that first as there's a few concepts that I think are hard to understand unless you're learning them all at once.
| 12,329
|
39,637,164
|
How can I use the rt function? As I understand it, names with leading & trailing underscores, like `__and__()`, are reserved for native python behavior, or for when you want to customize behavior in specific situations. How can a user take advantage of this? For example, in the code below, can I use this function at all?
```
class A(object):
    def __rt__(self, r):
        return "Yes special functions"

a = A()
print dir(a)
print a.rt('1')  # AttributeError: 'A' object has no attribute 'rt'
```
But
```
class Room(object):
    def __init__(self):
        self.people = []

    def add(self, person):
        self.people.append(person)

    def __len__(self):
        return len(self.people)

room = Room()
room.add("Igor")
print len(room)  # prints 1
```
|
2016/09/22
|
[
"https://Stackoverflow.com/questions/39637164",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/277603/"
] |
Python doesn't translate one name into another. Specific operations will *under the covers* call a `__special_method__` if it has been defined. For example, the `__and__` method is called by Python to hook into the `&` operator, because the Python interpreter *explicitly looks for that method* and documented how it should be used.
In other words, calling `object.rt()` is not translated to `object.__rt__()` anywhere, not automatically.
Note that Python *reserves* such names; future versions of Python may use that name for a specific purpose and then your existing code using a `__special_method__` name for your own purposes would break.
From the [*Reserved classes of identifiers* section](https://docs.python.org/3/reference/lexical_analysis.html#reserved-classes-of-identifiers):
>
> `__*__`
>
> System-defined names. These names are defined by the interpreter and its implementation (including the standard library). Current system names are discussed in the [Special method names](https://docs.python.org/3/reference/datamodel.html#specialnames) section and elsewhere. More will likely be defined in future versions of Python. Any use of `__*__` names, in any context, that does not follow explicitly documented use, is subject to breakage without warning.
>
>
>
You can ignore that advice of course. In that case, you'll have to write code that actually *calls your method*:
```
class SomeBaseClass:
    def rt(self):
        """Call the __rt__ special method"""
        try:
            return self.__rt__()
        except AttributeError:
            raise TypeError("The object doesn't support this operation")
```
and subclass from `SomeBaseClass`.
Again, Python won't automatically call your new methods. You still need to actually write such code.
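For example, a usage sketch building on the base class above:
```
class A(SomeBaseClass):
    def __rt__(self):
        return "Yes special functions"

a = A()
print(a.rt())  # explicitly routed through SomeBaseClass.rt
```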
|
Because there are builtin methods that you can override and then use, e.g. `__len__` -> `len()`, `__str__` -> `str()`, etc.
Here is the [list of these functions](https://docs.python.org/3/reference/datamodel.html#basic-customization)
>
> The following methods can be defined to customize the meaning of attribute access (use of, assignment to, or deletion of x.name) for class instances.
>
>
>
| 12,339
|
65,571,031
|
I am trying to install a package in a python project but I'm having some issues with the python-Levenshtein library. I'm using a virtual environment in PyCharm, running Python 3.8, and I installed all the libraries in requirements.txt with pip. However, I am not able to install this one.
What I've tried so far:
1. try to install with pip and pip3
2. try to install with anaconda
3. try to install with brew (to make sure it's not like matplotlib [here](https://stackoverflow.com/questions/25701133/how-to-tell-homebrew-to-install-inside-virtualenv))
I share the error message below. Could you help me solve this problem?
```
Collecting python-Levenshtein==0.12.0
Using cached python-Levenshtein-0.12.0.tar.gz (48 kB)
Requirement already satisfied: setuptools in /Users/suleyman/Projects/venv/lib/python3.8/site-packages (from python-Levenshtein==0.12.0) (51.1.1)
Building wheels for collected packages: python-Levenshtein
Building wheel for python-Levenshtein (setup.py): started
Building wheel for python-Levenshtein (setup.py): finished with status 'error'
Running setup.py clean for python-Levenshtein
Failed to build python-Levenshtein
Installing collected packages: python-Levenshtein
Running setup.py install for python-Levenshtein: started
Running setup.py install for python-Levenshtein: finished with status 'error'
ERROR: Command errored out with exit status 1:
```
|
2021/01/04
|
[
"https://Stackoverflow.com/questions/65571031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5331231/"
] |
That index could be used if you wrote the query like this:
```
select rh."EventHistory"
from "RemittanceHistory" rh join "ClaimPaymentHistory" ph
on ph."EventHistory" @> jsonb_build_array(jsonb_build_object('rk',rh."RemittanceRefKey"))
where ph."ClaimRefKey" = 5;
```
However, this is unlikely to have good performance unless "RemittanceHistory" has few rows in it.
>
> ...what would be my best options for indexing?
>
>
>
The obvious choice, if you don't have them already, would be regular (btree) indexes on rh."RemittanceRefKey" and ph."ClaimRefKey".
Also, look at (and show us) the `EXPLAIN (ANALYZE, BUFFERS)` for the original query you want to make faster.
|
I wound up refactoring the table structure. Instead of a join through `RemittanceRefKey`, I added a JSONB column to `RemittanceHistory` called `ClaimRefKeys`. This is simply an array of integer values, and now I can look up the desired rows with:
```
select "EventHistory" from "RemittanceHistory" where "ClaimRefKeys" @> @ClaimRefKey;
```
This, combined with the following index, gives pretty fantastic performance.
```
CREATE INDEX remittance_history_claimrefkeys_gin_idx ON "RemittanceHistory" USING gin ("ClaimRefKeys" jsonb_path_ops);
```
| 12,340
|
57,593,041
|
```
>>> x = 1
>>> def f():
...     print x
...
>>> f()
1
>>> x = 1
>>> def f():
...     x = 3
...     print x
...
>>> f()
3
>>> x
1
>>> x = 1
>>> def f():
...     print x
...     x = 5
...
>>> f()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
UnboundLocalError: local variable 'x' referenced before assignment
>>> x = 1
>>> def f():
...     global x
...     print x
...     x = 5
...     print x
...
>>> f()
1
5
>>> x
5
```
How can I treat the variable "**x**" inside the function as local, without altering the global one, when there is a print statement above the variable assignment?
I expect the value of "**x**" to be 5 inside the function, while the global x remains unaltered at its original value, i.e. 1.
I guess there is no keyword called **local** in python, contrary to **global**:
```
>>> x = 1
>>> def f():
...     print x
...     global x
...     x = 5
...
<stdin>:3: SyntaxWarning: name 'x' is used prior to global declaration
```
|
2019/08/21
|
[
"https://Stackoverflow.com/questions/57593041",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1335601/"
] |
>
> In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a value anywhere within the function’s body, it’s assumed to be a local unless explicitly declared as global.
>
>
>
[Source.](https://docs.python.org/3/faq/programming.html#what-are-the-rules-for-local-and-global-variables-in-python)
It's true there's no `local` keyword in Python; instead, Python has this rule to decide which variables are local.
Any variable in your function is either local or global. It can't be local in one part of the function and global in another. If you have a local variable `x`, then the function can't access the global `x`. If you want a local variable while accessing the global `x`, you can call the local variable some other name.
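A minimal sketch of that renaming approach:
```
x = 1

def f():
    y = 5        # a local under a different name
    print(x)     # reads the global x -> 1
    print(y)     # -> 5

f()
print(x)         # the global x is still 1
```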
|
The behaviour is already what you want. The presence of `x =` inside the function body makes `x` a local variable which entirely shadows the outer variable. You're merely trying to print it before you assign any value to it, which is causing an error. This would cause an error under any other circumstance too; you can't print what you didn't assign.
| 12,341
|
56,184,013
|
Anyone know if Tensorflow Lite has GPU support for Python? I've seen guides for Android and iOS, but I haven't come across anything about Python. If `tensorflow-gpu` is installed and `tensorflow.lite.python.interpreter` is imported, will GPU be used automatically?
|
2019/05/17
|
[
"https://Stackoverflow.com/questions/56184013",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5349476/"
] |
According to [this](https://github.com/tensorflow/tensorflow/issues/31377) thread, it is not.
|
You can force the computation to take place on a GPU:
```
import numpy as np
import tensorflow as tf

with tf.device('/gpu:0'):
    for i in range(10):
        t = np.random.randint(len(x_test))  # x_test is assumed to be your test set
        ...
```
Hope this helps.
| 12,342
|
58,126,489
|
I'm following the Traversy Media tutorial on YouTube. When I run the command
>
> python manage.py migrate
>
>
>
I get an error like this:
```
C:\Users\Acer\Project\djangoproject>python manage.py migrate
Traceback (most recent call last):
File "manage.py", line 21, in <module>
main()
File "manage.py", line 17, in main
execute_from_command_line(sys.argv)
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\core\management\__init__.py", line 381, in execute_from_command_line
utility.execute()
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\core\management\__init__.py", line 357, in execute
django.setup()
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\apps\registry.py", line 114, in populate
app_config.import_models()
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\apps\config.py", line 211, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\importlib\__
init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\contrib\auth\models.py", line 2, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\contrib\auth\base_user.py", line 47, in <module>
class AbstractBaseUser(models.Model):
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\db\models\base.py", line 117, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\db\models\base.py", line 321, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\db\models\options.py", line 204, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length(
))
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\db\__init__.py", line 28, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\db\utils.py", line 201, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\db\utils.py", line 110, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\importlib\__
init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Acer\AppData\Local\Programs\Python\Python37-32\lib\site-package
s\django\db\backends\mysql\base.py", line 36, in <module>
raise ImproperlyConfigured('mysqlclient 1.3.13 or newer is required; you hav
e %s.' % Database.__version__)
django.core.exceptions.ImproperlyConfigured: mysqlclient 1.3.13 or newer is requ
ired; you have 0.9.3.
```
Btw, I already installed the Visual Studio C++ Build Tools because of another error I got, and I also installed **mysqlclient-1.4.4-cp37-cp37m-win32.whl** to get it done. But it still gives me this error.
Please help me, and thank you to those who have already responded to this.
|
2019/09/27
|
[
"https://Stackoverflow.com/questions/58126489",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11803617/"
] |
There are several dedicated packages for this. For example, have a look at the `combine`, `subdocs` or `docmute` packages (a list with even more suggestions can be found at <https://www.ctan.org/recommendations/docmute>).
Here a short example with the `docmute` package
```
\documentclass{book}
\usepackage{lipsum}
\usepackage{docmute}
\begin{document}
\tableofcontents
text
\chapter{imported paper}
\input{test}% assuming your paper is called test.tex
\end{document}
```
|
A LaTeX document cannot have multiple `\documentclass` declarations. One solution would be to split the header/content of your latex document in overleaf:
* Create a `master.tex` with the documentclass, and put all your content (the text between `\begin{document}` and `\end{document}`) in a second `content.tex`. In the master, just `\input{content}`.
* In your dissertation, just copy `content.tex` and its figures, and add `\input{}` in the master file of your University, which has the specific documentclass and bibliography settings.
| 12,345
|
43,736,163
|
I have successfully read a csv file using pandas. When I try to print a particular column from the data frame, I get a KeyError. I am sharing the code together with the error:
```
import pandas as pd
reviews_new = pd.read_csv("D:\\aviva.csv")
reviews_new['review']
```
```
reviews_new['review']
Traceback (most recent call last):
File "<ipython-input-43-ed485b439a1c>", line 1, in <module>
reviews_new['review']
File "C:\Users\30216\AppData\Local\Continuum\Anaconda2\lib\site-packages\pandas\core\frame.py", line 1997, in __getitem__
return self._getitem_column(key)
File "C:\Users\30216\AppData\Local\Continuum\Anaconda2\lib\site-packages\pandas\core\frame.py", line 2004, in _getitem_column
return self._get_item_cache(key)
File "C:\Users\30216\AppData\Local\Continuum\Anaconda2\lib\site-packages\pandas\core\generic.py", line 1350, in _get_item_cache
values = self._data.get(item)
File "C:\Users\30216\AppData\Local\Continuum\Anaconda2\lib\site-packages\pandas\core\internals.py", line 3290, in get
loc = self.items.get_loc(item)
File "C:\Users\30216\AppData\Local\Continuum\Anaconda2\lib\site-packages\pandas\indexes\base.py", line 1947, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas\index.c:4154)
File "pandas\index.pyx", line 159, in pandas.index.IndexEngine.get_loc (pandas\index.c:4018)
File "pandas\hashtable.pyx", line 675, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12368)
File "pandas\hashtable.pyx", line 683, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12322)
KeyError: 'review'
```
Can someone help me with this?
|
2017/05/02
|
[
"https://Stackoverflow.com/questions/43736163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5123815/"
] |
I think the best first step is to investigate what the real column names are; converting them to a list makes stray whitespace or similar issues easier to see:
```
print (reviews_new.columns.tolist())
```
---
I think there can be 2 obvious problems:
**1. Whitespace in column names (maybe in the data also)**
One solution is to [`strip`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html) the whitespace from the column names:
```
reviews_new.columns = reviews_new.columns.str.strip()
```
Or add the parameter `skipinitialspace` to [`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html):
```
reviews_new = pd.read_csv("D:\\aviva.csv", skipinitialspace=True)
```
**2. A different separator than the default `,`**
The solution is to add the parameter `sep`:
```
#sep is ;
reviews_new = pd.read_csv("D:\\aviva.csv", sep=';')
#sep is whitespace
reviews_new = pd.read_csv("D:\\aviva.csv", sep='\s+')
reviews_new = pd.read_csv("D:\\aviva.csv", delim_whitespace=True)
```
EDIT:
You have whitespace in the column names, so you need solution 1:
```
print (reviews_new.columns.tolist())
['Name', ' Date', ' review']
^ ^
```
|
```
import pandas as pd
df=pd.read_csv("file.txt", skipinitialspace=True)
df.head()
df['review']
```
| 12,346
|
28,952,282
|
I'm using SublimeREPL with Sublime Text 3 (latest version as of today) and I'm coding in python 3.4. As far as I understand the documentation on REPL, if I do Tools > SublimeREPL > Python > Python - RUN current file,
then it should run the code I have typed in, using REPL. However, when I do this I get an error pop-up saying:
FileNotFoundError(2, 'The system cannot find the file specified.',None,2)
I get this error whatever the code is (I tried print("Hello World") on its own and also big long programs I've made before).
Can someone please help me with this and explain what the problem is? Thanks :)
|
2015/03/09
|
[
"https://Stackoverflow.com/questions/28952282",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4651552/"
] |
I also had this problem. This is most probably due to the default location of python. If you are running portable python,
```
{
    ...
    "default_extend_env": {"PATH": "{PATH}:\\Programming\\Python\\Portable Python 2.7.6.1\\App\\"}
    ...
}
```
Otherwise,
```
{
    "default_extend_env": {"PATH": "C:\\python27\\"},
}
```
would suffice. This code is to be pasted in:
```
Preferences -> Package Settings -> SublimeREPL -> Settings - User
```
|
I had the same problem when I installed REPL for the first time. Now, this could sound crazy, but the way to solve the problem (at least, the trick that worked for me!) is to restart Sublime Text 3 once.
**Update**: As pointed out by Mark in the comments, apparently you may have to restart Sublime more than once to solve the problem.
| 12,349
|
63,650,186
|
Sorry in advance for what I'm sure will be a very simple question to answer; I'm *very* new to python.
I have a project that I'm working on that takes inputs about the size of a room and cost of materials/installation and outputs costs and amount of materials needed.
I've got everything working, but I can't make the dollar sign in my outputs appear next to the outputs themselves; they always appear a space away ($ 400.00). I know that I can add a plus sign somewhere to smush the two together, but I keep getting errors when I try. I'm not exactly sure what I'm doing wrong, but I would appreciate any input. I'll paste the code that works without error below. I put spaces in between the lines so it can be seen more clearly.
```
wth_room = (int(input('Enter the width of room in feet:')))
lth_room = (int(input('Enter the length of room in feet:')))
mat_cost = (float(input('Enter material cost of tile per square foot:')))
labor = (float(input('Enter labor installation cost of tile per square foot:')))
tot_tile = (float(wth_room * lth_room))
tot_mat = (float(tot_tile * mat_cost))
tot_lab = (float(labor * tot_tile))
project = (float(mat_cost + labor) * tot_tile)
print('Square feet of tile needed:', tot_tile, 'square feet')
print('Material cost of the project: $', tot_mat)
print('Labor cost of the project: $', tot_lab)
print('Total cost of the project: $', project)
```
|
2020/08/29
|
[
"https://Stackoverflow.com/questions/63650186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14188403/"
] |
You can change the separator to the empty string (the default is a space).
```
print('Material cost of the project: $', tot_mat, sep='')
print('Labor cost of the project: $', tot_lab, sep='')
print('Total cost of the project: $', project, sep='')
```
|
Replace `,` with `+`, because `,` inserts a space by default. Note that you must convert the numbers to strings first, for example:
```
print('Material cost of the project: $' + str(tot_mat))
print('Labor cost of the project: $' + str(tot_lab))
print('Total cost of the project: $' + str(project))
```
You can also use `f-strings`, like so:
```
print(f'Material cost of the project: ${tot_mat}')
print(f'Labor cost of the project: ${tot_lab}')
print(f'Total cost of the project: ${project}')
```
| 12,350
|
25,150,502
|
I'm looping through a dictionary using
```
for key, value in mydict.items():
```
And I wondered if there's some pythonic way to also access the loop index / iteration number, i.e. access the index while still maintaining access to the key and value information:
```
for key, value, index in mydict.items():
```
It's because I need to detect the first time the loop runs, so that inside I can have something like
```
if index != 1:
```
|
2014/08/06
|
[
"https://Stackoverflow.com/questions/25150502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1794743/"
] |
You can use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate) function, like this
```
for index, (key, value) in enumerate(mydict.items()):
    print index, key, value
```
The `enumerate` function gives the current index of the item and the actual item itself. In this case, the second value is actually a tuple of key and value. So, we explicitly group them as a tuple, during the unpacking.
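For the first-iteration check mentioned in the question, a small sketch building on this:
```
for index, (key, value) in enumerate(mydict.items()):
    if index == 0:
        print 'first pass:', key, value  # special-case the first iteration
    else:
        print index, key, value
```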
|
If you only need the index to do something special on the first iteration, you could also use `.popitem()`
```
key, val = mydict.popitem()
...
for key, val in mydict.items():
    ...
```
this will remove the first `key, val` pair from `mydict` (but perhaps that's not an issue for you?)
| 12,356
|
47,799,275
|
I need to be able to login into a remote server, switch user and then, do whatever it is required.
I played with ansible and found the "become" tool, so I tried it; after all, it allows dzdo.
My playbook became something like this:
```
- name: Create empty file
  file: path=touchedFile.txt state=touch
  become: true
  become_method: dzdo
  become_user: userid
```
I ran it and got:
"Sorry, user someuser is not allowed to execute '/bin/sh -c echo BECOME-SUCCESS-xklihidlmxpfvxxnbquvsqrgfjlyrsah; /usr/bin/python /tmp/ansible-tmp-1513185770.1-52571838933499/command.py'
Mmm... I thought that maybe it is trying to execute something like this:
```
dzdo touch touchedFile.txt
```
Unfortunately, it doesn't work like that in my company. The policy forces us to log in as ourselves and then switch to the required user like this:
```
dzdo su - userid
```
I did a bit of research and tried running several commands in a single block; my logic was that if I switched users first, then everything else would be executed as the other user. My playbook was updated to look like this:
```
- name: Create empty file
  shell: |
    dzdo su - userid
    touch touchedFile.txt
```
It failed, so then I tried this:
```
- name: Create empty file
  command: "{{ item }}"
  with_items:
    - dzdo su - userid
    - touch touchedFile.txt
```
And it failed again... Both approaches create touchedFile.txt, but as my user and not the one it should be...
Is there a way to do what I need directly with Ansible? Or do I need to start looking for more complex alternatives?
In the past I achieved what I'm trying to do now with a script that mainly used "expect", but it was prone to errors... that's why I'm looking for better alternatives.
**EDIT 2018-01-08:**
I can now use "`sudo su - userid`" without needing a password, but somehow Ansible always expects input from the user, a timeout occurs, and my play fails:
```
fatal: [240]: FAILED! => {
"failed": true,
"msg": "Timeout (12s) waiting for privilege escalation prompt: "
}
```
One thing I noticed is that Ansible is doing the following:
```
EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no
-o 'IdentityFile="./.ssh/fatCamel"' -o KbdInteractiveAuthentication=no
-o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=login_userid -o ConnectTimeout=10
-o ControlPath=/Users/local_userid/.ansible/cp/446eee77f4
-tt server_url '/bin/sh -c '"'"'sudo su - sudo_userid -u root /bin/sh
-c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-hsyxhtaoxiepyjexaffecfiblmjezopu;
/usr/bin/python /u/users/login_userid/.ansible/tmp/ansible-tmp-1515438271.05-219108659465262/command.py;
rm -rf "/u/users/login_userid/.ansible/tmp/ansible-tmp-1515438271.05-219108659465262/"
> /dev/null 2>&1'"'"'"'"'"'"'"'"' && sleep 0'"'"''
```
This part is what caught my attention: **sudo su - sudo\_userid -u root**
If I try to run it on the server (copy&paste) it also fails... Why is Ansible adding the "-u root", and is there a way to prevent it from doing so? I will never be granted ROOT access to any server.
Also, I am setting the **ansible\_become\_pass** variable to the correct value... but it still fails.
By the way, I checked several bugs reported against Ansible (like <https://github.com/ansible/ansible/issues/23921>), and my error is similar, but their workarounds don't work in my case.
Any help will be much appreciated!!
|
2017/12/13
|
[
"https://Stackoverflow.com/questions/47799275",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3803228/"
] |
I have finally found a work-around for my problem, and I'm sharing this answer in case someone finds it useful.
The Ansible become module is great, but for my company's setup it is not working. As I explained in the question, it adds a "**-u root**" at the end of the sudo, which makes the whole command fail.
I was able to make it work with the following snippet:
```
- name: Create empty file as sudo_userid
  command: "sudo su - sudo_userid -c 'touch touchedFile.txt'"
```
I did several tests, and all of them worked! I didn't even get an Ansible warning!
So, cheers everyone!
|
This playbook works for me in ansible 2.4 for your limited test case, I'm not sure how well it would work against larger / more complex tasks or modules. It basically just works around your site's dzdo/sudo limitations.
```
---
- hosts: 127.0.0.1
  become: yes
  become_method: dzdo
  become_flags: "su - root -c"
  gather_facts: no
  tasks:
    - name: Create empty file
      file: path=touchedFile.txt state=touch
```
| 12,359
|
12,113,498
|
I'm trying to take the dot product of two lil\_matrix sparse matrices that are approx. 100,000 x 50,000 and 50,000 x 100,000 respectively.
```
from scipy import sparse
a = sparse.lil_matrix((100000, 50000))
b = sparse.lil_matrix((50000, 100000))
c = a.dot(b)
```
and getting this error:
```
File "/usr/lib64/python2.6/site-packages/scipy/sparse/base.py", line 211, in dot
return self * other
File "/usr/lib64/python2.6/site-packages/scipy/sparse/base.py", line 247, in __mul__
return self._mul_sparse_matrix(other)
File "/usr/lib64/python2.6/site-packages/scipy/sparse/base.py", line 300, in _mul_sparse_matrix
return self.tocsr()._mul_sparse_matrix(other)
File "/usr/lib64/python2.6/site-packages/scipy/sparse/compressed.py", line 290, in _mul_sparse_matrix
indices = np.empty(nnz, dtype=np.intc)
ValueError: negative dimensions are not allowed
```
Any ideas on what might be happening? I'm running this on a machine with about 64GB of RAM, and about 13GB are in use when executing the dot.
|
2012/08/24
|
[
"https://Stackoverflow.com/questions/12113498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623172/"
] |
This is a bad error message, but the "problem" quite simply is that your resulting matrix would be too big (it would have too many nonzero elements; the dimensions themselves are not the issue).
Scipy uses `int32` to store `indptr` and `indices` for the sparse formats. This means that your sparse matrix cannot have more than (approximately) 2^31 nonzero elements. Maybe you could change the code in scipy to use `int64` or `uint32`, if this is not just a toy problem anyway. But maybe sparse matrices are not the best solution for this in the first place?
**EDIT:** This is solved in the newer scipy versions, AFAIK.
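To see the limit concretely, a small sketch (toy dimensions; `indices` is the internal CSR index array):
```
import numpy as np
from scipy import sparse

m = sparse.rand(1000, 1000, density=0.01, format='csr')
# On affected SciPy versions the index arrays are 32-bit, capping the
# number of stored nonzeros at about 2**31 - 1.
print(m.indices.dtype)         # typically int32 on those versions
print(np.iinfo(np.int32).max)  # 2147483647
```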
|
Just to add to @seberg's answer.
There are two issues related to this on github.com/scipy/scipy.
[Issue #1833](https://github.com/scipy/scipy/issues/1833#ref-issue-13651914) (marked closed April 2013) and [Issue #442](https://github.com/scipy/scipy/pull/442) with some pull requests that haven't been merged (Nov 2013 - SciPy version 0.13.1) due to some missing tests etc. You should be able to pull those into your own installation and compile a version that supports larger sparse matrices.
| 12,360
|
32,369,147
|
I want to get the url of the link from the a tag. The element I have fetched by class is of type selenium.webdriver.remote.webelement.WebElement in python:
```
elem = driver.find_elements_by_class_name("_5cq3")
```
and the html is:
```
<div class="_5cq3" data-ft="{"tn":"E"}">
<a class="_4-eo" href="/9gag/photos/a.109041001839.105995.21785951839/10153954245456840/?type=1" rel="theater" ajaxify="/9gag/photos/a.109041001839.105995.21785951839/10153954245456840/?type=1&src=https%3A%2F%2Fscontent.xx.fbcdn.net%2Fhphotos-xfp1%2Ft31.0-8%2F11894571_10153954245456840_9038620401603938613_o.jpg&smallsrc=https%3A%2F%2Fscontent.xx.fbcdn.net%2Fhphotos-prn2%2Fv%2Ft1.0-9%2F11903991_10153954245456840_9038620401603938613_n.jpg%3Foh%3D0c837ce6b0498cd833f83cfbaeb577e7%26oe%3D567D8819&size=651%2C1000&fbid=10153954245456840&player_origin=profile" style="width:256px;">
<div class="uiScaledImageContainer _4-ep" style="width:256px;height:394px;" id="u_jsonp_2_r">
<img class="scaledImageFitWidth img" src="https://fbcdn-photos-h-a.akamaihd.net/hphotos-ak-prn2/v/t1.0-0/s526x395/11903991_10153954245456840_9038620401603938613_n.jpg?oh=15f59e964665efe28943d12bd00cefd9&oe=5667BDBA&__gda__=1448928574_a7c6da855842af4c152c2fdf8096e1ef" alt="9GAG's photo." width="256" height="395">
</div>
</a>
</div>
```
I want the href value of the a tag inside the element with class `_5cq3`.
|
2015/09/03
|
[
"https://Stackoverflow.com/questions/32369147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5159284/"
] |
Why not do it directly?
```
url = driver.find_element_by_class_name("_4-eo").get_attribute("href")
```
And if you need the div element first, you can do it this way:
```
divElement = driver.find_element_by_class_name("_5cq3")
url = divElement.find_element_by_class_name("_4-eo").get_attribute("href")
```
Or another way via xpath (given that there is only one link element inside your `_5cq3` elements):
```
url = driver.find_element_by_xpath("//div[@class='_5cq3']/a").get_attribute("href")
```
|
You can use xpath for the same.
If you want the href of the "a" tag (the 2nd line in your HTML code), use:
```
url = driver.find_element_by_xpath("//div[@class='_5cq3']/a[@class='_4-eo']").get_attribute("href")
```
If you want the href of the "img" tag (the 4th line in your HTML code), use:
```
url = driver.find_element_by_xpath("//div[@class='_5cq3']/a/div/img[@class='scaledImageFitWidth img']").get_attribute("href")
```
| 12,361
|
23,034,781
|
I am using scrapy 0.20 with python 2.7
According to [scrapy architecture](http://doc.scrapy.org/en/latest/topics/architecture.html), the spider sends requests to the engine. Then, after the whole crawling process, the item goes through the item pipeline.
So it would seem the item pipeline has nothing to do when the spider opens or closes, and item pipeline components can't know when the spider opens or closes. So how does the `open_spider` method exist in item pipeline components, according to [this page](http://doc.scrapy.org/en/latest/topics/item-pipeline.html#writing-your-own-item-pipeline)?
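(For reference, a minimal pipeline sketch showing where those hooks live; the file name is illustrative:)
```
class JsonWriterPipeline(object):
    def open_spider(self, spider):
        # The engine fires the spider_opened signal and calls this on
        # every enabled pipeline component when the spider starts.
        self.file = open('items.jl', 'wb')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        return item
```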
|
2014/04/12
|
[
"https://Stackoverflow.com/questions/23034781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2038257/"
] |
I had a similar problem and then figured it out.
It is possible that all of your left-hand side values (V5) are the same. The error is thrown to say that no decision can be made, as the problem is too easy.
My source: <http://kleinfelter.com/learning-r-painful-r-learnings>
|
After removing all 'NA' values, the problem was gone. Also, the first column has to be the index column.
| 12,363
|
10,145,201
|
We moved our SQL Server 2005 database to a new physical server, and since then it has been terminating any connection that persist for 30 seconds.
We are experiencing this in Oracle SQL Developer and when connecting from python using pyodbc.
Everything worked perfectly before, and now python returns this error after 30 seconds:
('08S01', '[08S01] [FreeTDS][SQL Server]Read from the server failed (20004) (SQLExecDirectW)')
|
2012/04/13
|
[
"https://Stackoverflow.com/questions/10145201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/931235/"
] |
First of all, you need to profile the SQL Server to see what activity is happening. Look for slow running queries and for CPU and memory bottlenecks.
Also you can include the timeout in the querystring like this:
"Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=SSPI;Connection Timeout=30";
and extend that number if you want.
But remember "timeout" doesn't means time connection, this is just the time to wait while trying to establish a connection before terminating.
I think this problem is more about database performance or maybe a network issue.
|
Maybe check your remote query timeout? It should default to 600, but maybe it's set to 30? [Info here](http://msdn.microsoft.com/en-us/library/ms189040%28v=sql.90%29.aspx)
| 12,364
|
18,921,141
|
I keep getting an error that there's no such module.
The project name is gmblnew, and I have two subfolders, `core` and `gmblnew`; the app I'm working on is core.
My **urls.py** file is
```
from django.conf.urls import *
# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
    # Examples:
    # url(r'^$', 'gmblnew.views.home', name='home'),
    # url(r'^gmblnew/', include('gmblnew.foo.urls')),
    url(r'^league/', include('core.views.league')),
    # Uncomment the admin/doc line below to enable admin documentation:
    url(r'^admin/doc/', include('django.contrib.admindocs.urls')),
    # Uncomment the next line to enable the admin:
    url(r'^admin/', include(admin.site.urls)),
)
```
This seems to be fine. The **views.py** file is:
```
from django.http import HttpResponse
def league(request):
    from core.models import Division
    response = HttpResponse()
    response['mimetype'] = 'text/plain'
    response.write("<HTML><BODY>\n")
    response.write("<TABLE BORDER=1><CAPTION>League List</CAPTION><TR>\n")
    all_leagues = Division.objects.all()
    for league in all_leagues:
        response.write("<TR>\n")
        response.write("<TD> %s" % league)
        response.write("</TD>\n")
    response.write("</BODY></HTML>")
    return response
```
Traceback:
```
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
103. resolver_match = resolver.resolve(request.path_info)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/urlresolvers.py" in resolve
319. for pattern in self.url_patterns:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/urlresolvers.py" in url_patterns
347. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/urlresolvers.py" in urlconf_module
342. self._urlconf_module = import_module(self.urlconf_name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py" in import_module
35. __import__(name)
File "/Users/chris/Dropbox/Django/gmblnew/gmblnew/urls.py" in <module>
12. url(r'^league/', include('core.views.league')),
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/conf/urls/__init__.py" in include
25. urlconf_module = import_module(urlconf_module)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py" in import_module
35. __import__(name)
Exception Type: ImportError at /admin/
Exception Value: No module named league
```
I've tried a number of variants on the `url(r'^league/', include('core.views.league')),` line, including `gmblnew.core.views.league`, `views.league`, `views.view_league`, etc. I'm obviously missing something super simple on the structure of that line.
|
2013/09/20
|
[
"https://Stackoverflow.com/questions/18921141",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1293222/"
] |
Your problem is here:
```
url(r'^league/', include('core.views.league')),
```
By using `include` you are specifying a module, which does not exist.
[`include` is used to include other url confs](https://docs.djangoproject.com/en/dev/topics/http/urls/#including-other-urlconfs), and not to target view methods
What you want is refer to the view method `league`
```
url(r'^league/$', 'core.views.league'),
```
should work.
Also, note the `$` after `^league/` , which represents the *end* of the URL pattern.
|
`include` takes a path to a url file, not a view. Just write this instead:
```
url(r'^league/', 'core.views.league'),
```
| 12,365
|
8,575,713
|
I've got a following structure:
```
|-- dirBar
| |-- __init__.py
| |-- bar.py
|-- foo.py
`-- test.py
```
bar.py
```
def returnBar():
    return 'Bar'
```
foo.py
```
from dirBar.bar import returnBar

def printFoo():
    print returnBar()
```
test.py
```
from mock import Mock
from foo import printFoo
from dirBar import bar
bar.returnBar = Mock(return_value='Foo')
printFoo()
```
the result of `python test.py` is `Bar`.
How can I mock `returnBar` to make it return `Foo`, so that `printFoo` will print it?
EDIT: Without modifying any file other than test.py.
|
2011/12/20
|
[
"https://Stackoverflow.com/questions/8575713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23457/"
] |
I'm guessing you are going to mock the function `returnBar`, you'd like to use [`patch` decorator](http://www.voidspace.org.uk/python/mock/patch.html):
```
from mock import patch
from foo import printFoo

@patch('foo.returnBar')
def test_printFoo(mockBar):
    mockBar.return_value = 'Foo'
    printFoo()

test_printFoo()
```
|
Just import the `bar` module before the `foo` module and mock it:
```
from mock import Mock
from dirBar import bar
bar.returnBar = Mock(return_value='Foo')
from foo import printFoo
printFoo()
```
When you import `returnBar` in `foo.py`, you bind the module's function to a variable called `returnBar`. This variable is local, so it is put in the closure of the `printFoo()` function when `foo` is imported - and the values in the closure cannot be updated by code from outside it. So, it should have the new value (that is, the mocking function) *before* the import of `foo`.
**EDIT**: the previous solution works but is not robust, since it depends on the ordering of the imports. That is not ideal. Another solution (which occurred to me after the first one) is to import the `bar` module in `foo.py` instead of importing only the `returnBar()` function:
```
from dirBar import bar

def printFoo():
    print bar.returnBar()
```
This will work because `returnBar()` is now retrieved directly from the `bar` module instead of from the closure. So if I update the module, the new function will be retrieved.
| 12,366
|
29,454,002
|
I'm new to python and I'm using the `pydub` module to play an mp3 track.
Here is my simple code to play mp3:
```
#Let's play some mp3 files using python!
from pydub import AudioSegment
from pydub.playback import play
song = AudioSegment.from_mp3("/media/rajendra/0C86E11786E10256/05_I_Like_It_Rough.mp3")
play(song)
```
When I run this program, it says:
```
*/usr/bin/python3.4 /home/rajendra/PycharmProjects/pythonProject5/myProgram.py
/usr/local/lib/python3.4/dist-packages/pydub/utils.py:161: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
/usr/local/lib/python3.4/dist-packages/pydub/utils.py:174: RuntimeWarning: Couldn't find ffplay or avplay - defaulting to ffplay, but may not work
warn("Couldn't find ffplay or avplay - defaulting to ffplay, but may not work", RuntimeWarning)
Traceback (most recent call last):
File "/home/rajendra/PycharmProjects/pythonProject5/myProgram.py", line 11, in <module>
song = AudioSegment.from_mp3("/media/rajendra/0C86E11786E10256/05_I_Like_It_Rough.mp3")
File "/usr/local/lib/python3.4/dist-packages/pydub/audio_segment.py", line 355, in from_mp3
return cls.from_file(file, 'mp3')
File "/usr/local/lib/python3.4/dist-packages/pydub/audio_segment.py", line 339, in from_file
retcode = subprocess.call(convertion_command, stderr=open(os.devnull))
File "/usr/lib/python3.4/subprocess.py", line 533, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib/python3.4/subprocess.py", line 848, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.4/subprocess.py", line 1446, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'
Process finished with exit code 1*
```
Please help me! I've checked every path but it's not working. I'm currently using Ubuntu.
Help would be appreciated!
|
2015/04/05
|
[
"https://Stackoverflow.com/questions/29454002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4750748/"
] |
Like the warning says:
```none
Couldn't find ffplay or avplay - defaulting to ffplay, but may not work
```
You need to have either `ffplay` or `avplay`; however, `ffplay` is part of `ffmpeg`, which is not installable on recent versions of Ubuntu. Install the `libav-tools` package with `apt-get`:
```
sudo apt-get install libav-tools
```
|
Seems like you need ffmpeg, but
```
sudo apt-get install ffmpeg
```
does not work anymore. You can get ffmpeg by:
```
sudo add-apt-repository ppa:jon-severinsson/ffmpeg
sudo apt-get update
sudo apt-get install ffmpeg
```
| 12,369
|
70,375,415
|
For example the original list:
`['k','a','b','c','a','d','e','a','b','e','f','j','a','c','a','b']`
We want to split the list into lists started with `'a'` and ended with `'a'`, like the following:
`['a','b','c','a']`
`['a','d','e','a']`
`['a','b','e','f','j','a']`
`['a','c','a']`
The final output can also be a list of lists. I have tried a double for-loop approach with `'a'` as the condition, but this is inefficient and not pythonic.
|
2021/12/16
|
[
"https://Stackoverflow.com/questions/70375415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4143312/"
] |
One possible solution is using `re` (regex)
```
import re
l = ['k','a','b','c','a','d','e','a','b','e','f','j','a','c','a','b']
r = [list(f"a{_}a") for _ in re.findall("(?<=a)[^a]+(?=a)", "".join(l))]
print(r)
# [['a', 'b', 'c', 'a'], ['a', 'd', 'e', 'a'], ['a', 'b', 'e', 'f', 'j', 'a'], ['a', 'c', 'a']]
```
|
You can do this in one loop:
```
lst = ['k','a','b','c','a','d','e','a','b','e','f','j','a','c','a','b']
out = [[]]
for i in lst:
    if i == 'a':
        out[-1].append(i)
        out.append([])
    out[-1].append(i)
out = out[1:] if out[-1][-1] == 'a' else out[1:-1]
```
Also, using `numpy.split`:
```
import numpy as np

out = [ary.tolist() + ['a'] for ary in np.split(lst, np.where(np.array(lst) == 'a')[0])[1:-1]]
```
Output:
```
[['a', 'b', 'c', 'a'], ['a', 'd', 'e', 'a'], ['a', 'b', 'e', 'f', 'j', 'a'], ['a', 'c', 'a']]
```
| 12,379
|
55,697,976
|
I have this input `<input type="file" id="file" name="file" accept="image/*" multiple>`, which allows the user to select several images, and I need to pass all of them to my `FormData`, so I do this:
```
var formdata = new FormData();
var files = $('#file')[0].files[0];
formdata.append('file',files);
```
But that only takes the first image from the list. How can I take all the images and store all of them in `var files`?
Thanks in advance
**EDIT:** The backend I use is `django/python`, and with the code above it detects only one image from the list, like this: `[<InMemoryUploadedFile: img.png (image/png)>]`; using just `var files = $('#file')[0].files;` shows me nothing.
|
2019/04/15
|
[
"https://Stackoverflow.com/questions/55697976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5727540/"
] |
There are many problems:
1. You can't redeclare the same variable if you want to keep its previous values
2. You need to change the index so that it's not saving to the same spot
3. $("#file") - shouldn't be an array, it's an object so i'm surprised it's not throwing an error
Let's say your code is legit. You could do this:
```
var files = [];
var length = $('#file')[0].files.length;
for (i = 0; i < length; i++) {
    files[i] = $('#file')[0].files[i];
}
formdata.append('file', files);
```
|
This was my solution
```
var formdata = new FormData();
var files = [];
var count = document.getElementById('file').files.length;
for (i = 0; i < count; i++) {
    files[i] = document.getElementById('file').files[i];
    formdata.append('file', files[i]);
}
```
Using `jQuery` for the length only brings me 1 element and gives me more problems, so
I used pure `JavaScript` for this part and it works fine.
| 12,386
|
6,965,431
|
I'm attempting to use GAE TaskQueue's REST API to pull tasks from a queue to an external server (a server not on GAE).
* Is there a library that does this for me?
* The API is simple enough, so I just need to figure out authentication. I examined the request sent by `gtaskqueue_sample` from `google-api-python-client` using `--dump_request` and found the `authorization: OAuth XXX` header. Adding that token to my own request worked, but the token seems to expire periodically (possibly daily), and I can't figure out how to re-generate it. For that matter, gtaskqueue\_sample itself no longer works (the call to `https://accounts.google.com/o/oauth2/token` fails with `No JSON object could be decoded`).
How does one take care of authentication? This is a server app so ideally I could generate a token that I could use from then on.
|
2011/08/06
|
[
"https://Stackoverflow.com/questions/6965431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13055/"
] |
These APIs work only for the GAE server, since the queues can be created only via queue.yaml; in fact, no API is exposed for inserting queues, tasks, or projects.
|
The pull queues page has a [whole section](http://code.google.com/appengine/docs/python/taskqueue/overview-pull.html#Pulling_Tasks_from_Outside_App_Engine) about client libraries and sample code.
| 12,387
|
18,401,385
|
I'm using Eclipse (on the PyDev perspective), and I just installed (using pip) the python 'requests' module.
Eclipse is giving me an error warning on the 'import requests' line, saying that it is an unresolved import, but when I run it, it imports just fine (the error message won't go away, though).
It's really bugging me, and I can't right-click and delete the error either (the option is grayed out).
Is there any way to fix this? (Even if it involves manually removing the error?)
I know that there is a similar problem here:
[Eclipse Pydev - Misdiplayed Import Error](https://stackoverflow.com/questions/18301970/eclipse-pydev-misdiplayed-import-error?rq=1)
but the answer to that was that PyDev had a bug specifically with PIL, so that is why I'm asking a different question.
|
2013/08/23
|
[
"https://Stackoverflow.com/questions/18401385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2258464/"
] |
Sometimes, PyDev is a little buggy... When it happens, I usually right-click on the folder containing the file in the PyDev Package Explorer, then "PyDev->remove error markers". And then re-run code analysis.
If it still doesn't work, try removing and re-adding the directory of your `requests` module in the PyDev path. As I said, PyDev is a little buggy...
|
You should manually configure the properties of your PyDev project.
Right-click on your project name, select **PyDev - PYTHONPATH**, then in the External Libraries tab press **Add source folder** and choose the root directory of your library.
| 12,390
|
71,792,025
|
My bot doesn't queue songs when I use the play command; it just plays them. I'm trying to get all my commands to work before adding Spotify & SoundCloud to the bot. So, when I use the play command and try to queue the songs, I cannot queue them. Can anyone help me? I have checked the wavelink docs but I couldn't find anything (I'm a beginner with this library).
Code:
```
import nextcord
from nextcord.ext import commands
import wavelink
import random
import datetime
from datetime import datetime
import os

bot = commands.Bot(command_prefix=">")

@bot.event
async def on_wavelink_track_end(player: wavelink.Player, track: wavelink.Track, reason):
    ctx = player.ctx
    vc: player = ctx.voice_client
    if vc.loop:
        return await vc.play(track)
    next_song = vc.queue.get()
    await vc.play(next_song)
    emb = nextcord.Embed(description=f"Now playing {next_song.title}", color=nextcord.Color.magenta())
    emb.set_image(url=next_song.thumbnail)
    await ctx.send(embed=emb)

@bot.command(aliases=["p"])
async def play(ctx, *, search: wavelink.YouTubeTrack):
    if not ctx.voice_client:
        vc: wavelink.Player = await ctx.author.voice.channel.connect(cls=wavelink.Player)
    elif not getattr(ctx.author.voice, "channel", None):
        embed = nextcord.Embed(description=f"{ctx.author.mention}: No song(s) are playing.", color=nextcord.Color.blue())
        return await ctx.send(embed=embed)
    else:
        vc: wavelink.Player = ctx.voice_client
    if vc.queue.is_empty and vc.is_playing:
        await vc.play(search)
        embe = nextcord.Embed(description=f"Now playing [{search.title}]({search.uri}) ", color=nextcord.Color.magenta())
        embe.set_image(url=search.thumbnail)
        await ctx.send(embed=embe)
    else:
        await vc.queue.put_wait(search)
        emb = nextcord.Embed(description=f"Added [{search.title}]({search.uri}) to the queue.", color=nextcord.Color.magenta())
        emb.set_image(url=search.thumbnail)
        await ctx.send(embed=emb)
    vc.ctx = ctx
    setattr(vc, "loop", False)

@bot.command()
async def queue(ctx):
    if not ctx.voice_client:
        vc: wavelink.Player = await ctx.author.voice.channel.connect(cls=wavelink.Player)
    elif not getattr(ctx.author.voice, "channel", None):
        embed = nextcord.Embed(description=f"{ctx.author.mention}: No song(s) are playing.", color=nextcord.Color.blue())
        return await ctx.send(embed=embed)
    else:
        vc: wavelink.Player = ctx.voice_client
    if vc.queue.is_empty:
        emb = nextcord.Embed(description=f"{ctx.author.mention}: The queue is empty. Try adding songs.", color=nextcord.Color.red())
        return await ctx.send(embed=emb)
    lp = nextcord.Embed(title="Queue", color=nextcord.Color.blue())
    queue = vc.queue.copy()
    song_count = 0
    for song in queue:
        song_count += 1
        lp.add_field(name=f"[{song_count}] Song", value=f"{song.title}")
    return await ctx.send(embed=lp)

bot.run("TOKEN IS HERE JUST NOT LEAKING IT")
```
Error:
```
Traceback (most recent call last):
File "/home/runner/Solved-Music/venv/lib/python3.8/site-packages/nextcord/client.py", line 417, in _run_event
await coro(*args, **kwargs)
File "main.py", line 181, in on_wavelink_track_end
song_count += 1
File "/home/runner/Solved-Music/venv/lib/python3.8/site-packages/wavelink/queue.py", line 212, in get
raise QueueEmpty("No items in the queue.")
wavelink.errors.QueueEmpty: No items in the queue.
```
|
2022/04/08
|
[
"https://Stackoverflow.com/questions/71792025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18384528/"
] |
It also happened to me. In my case, it was because the `DOMContentLoaded` callback triggered twice.
My fix just makes sure the container is rendered only once.
```js
let container = null;
document.addEventListener('DOMContentLoaded', function(event) {
if (!container) {
container = document.getElementById('root1') as HTMLElement;
const root = createRoot(container)
root.render(
<React.StrictMode>
<h1>ZOO</h1>
</React.StrictMode>
);
}
});
```
|
The answer is inside the warning itself.
>
> You are calling ReactDOMClient.createRoot() on a **container** that has
> already been passed to createRoot() **before**.
>
>
>
The root cause of the warning at my end is that the same DOM element is used to create the root more than once.
To overcome the issue, make sure that `createRoot` is called only once on **one DOM element**; after that, `root.render(reactElement);` can be used to update it, and `createRoot` should not be used again.
| 12,391
|
51,539,051
|
I'm actually trying to send pictures (.jpg) saved in a directory on my computer to an FTP server with a Python script and ftplib.
The path where the images are is: "D:/directory\_image".
I use python 2.7 and the command .storbinary from ftplib to send the .jpg files.
Despite my search, I get an error message that I can't resolve :
```
AttributeError: 'str' object has no attribute 'storbinary'
```
Here's the part of my code that cause problems :
```
from ftplib import FTP
import time
import os
ftp = FTP('Host')
connect= ftp.login('user', 'passwd')
path = "D:/directory_image"
FichList = os.listdir( path )
i = len(FichList)
u = 0
while u < i :
image_name= FichList[u]
jpg_to_send = path + '/' + image_name
file_open = open (image_name, 'rb')
connect.storbinary('STOR '+ jpg_to_send, file_open)
file_open.close()
u = u + 1
```
I know that the file argument in Storbinary () must be an open file object instead of a string... But it's an open file object in my script, isn't it?
Thanks a lot,
Clara
|
2018/07/26
|
[
"https://Stackoverflow.com/questions/51539051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10138893/"
] |
[`@Transactional` can't work on private method](https://stackoverflow.com/questions/4396284/does-spring-transactional-attribute-work-on-a-private-method) because it's applied using an aspect (using dynamic proxies)
Basically you want the `retrieveAndSaveInformationFromBac()` to be a single unit of work, ie. a transaction.
So annotate it with `@Transactional`.
|
Since you are using `Hibernate` , you can handle this by a property:
```
<property name="hibernate.connection.autocommit">false</property>
```
Then you can commit with the transaction, like
```
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
//do stuff and then
tx.commit();
```
| 12,396
|
40,499,481
|
I configured a new debug environment in Visual Studio Code under OS X.
```
{
"name": "Kivy",
"type": "python",
"request": "launch",
"stopOnEntry": false,
"pythonPath": "/Applications/Kivy3.app/Contents/Frameworks/python/3.5.0/bin",
"program": "${file}",
"debugOptions": [
"WaitOnAbnormalExit",
"WaitOnNormalExit",
"RedirectOutput"
]
},
```
When it runs, it says **"Error: spawn EACCES"**.
I assume this is because my current user doesn't have the corresponding permission to this folder, since it is under root rather than my user folder.
I tried these 2 methods; neither works:
1. create a soft link from that folder to my own folder, but still the same error
2. sudo VSC, still the same
How can I solve this problem?
|
2016/11/09
|
[
"https://Stackoverflow.com/questions/40499481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1321025/"
] |
@Albert Gao,
The path you have specified above doesn't contain the name of the python executable. You need to provide the full path to the executable, including the file name. I believe you need to change it as follows:
`"pythonPath": "/Applications/Kivy3.app/Contents/Frameworks/python/3.5.0/bin/python",`
If that doesn't work, then type the following into your command (terminal) window:
- `which python`
- Next copy that path and paste it into `settings.json`
|
If you are getting the spawn error above while using **OpenOCD** for the Raspberry Pi Pico, make sure that your "**cortex-debug.openocdPath**" in "*settings.json*" is set to "*<Path\_to\_openocd\_executable>***/openocd**" for example:
`"cortex-debug.openocdPath": "/home/vbhunt/pico/openocd/src/openocd", "cortex-debug.gdbPath": "/bin/gdb-multiarch"`
This is a specific instance for the Raspberry Pi Pico of @Albert Gao's excellent answer.
| 12,397
|
24,043,499
|
Could anyone please help me convert this to python? I don't know how to translate the conditional (ternary) operator into python.
```
Math.easeInExpo = function (t, b, c, d) {
return (t==0) ? b : c * Math.pow(2, 10 * (t/d - 1)) + b;
};
```
|
2014/06/04
|
[
"https://Stackoverflow.com/questions/24043499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1461304/"
] |
```
def easeInExpo( t, b, c, d ):
return b if t == 0 else c * pow( 2, 10 * (t/d - 1) ) + b
```
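For completeness, a quick sanity check of this function (assuming float arguments, so `t/d` is true division even on Python 2):

```
print(easeInExpo(0.0, 0.0, 1.0, 1.0))  # 0.0  (the t == 0 branch)
print(easeInExpo(0.5, 0.0, 1.0, 1.0))  # 0.03125, i.e. 2**-5
print(easeInExpo(1.0, 0.0, 1.0, 1.0))  # 1.0
```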
|
Use `if` / `else`:
```
return b if t == 0 else c * pow(2, 10 * (t/d - 1)) +b
```
| 12,398
|
3,987,732
|
I have following python code:
```
def scrapeSite(urlToCheck):
html = urllib2.urlopen(urlToCheck).read()
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(html)
tdtags = soup.findAll('td', { "class" : "c" })
for t in tdtags:
print t.encode('latin1')
```
This will return the following HTML code:
```
<td class="c">
<a href="more.asp">FOO</a>
</td>
<td class="c">
<a href="alotmore.asp">BAR</a>
</td>
```
I'd like to get the text inside the a-node (e.g. FOO or BAR), which would be t.contents.contents. Unfortunately it doesn't work that easily :)
Does anyone have an idea how to solve that?
Thanks a lot, any help is appreciated!
Cheers,
Joseph
|
2010/10/21
|
[
"https://Stackoverflow.com/questions/3987732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/123172/"
] |
In this case, you can use `t.contents[1].contents[0]` to get FOO and BAR.
The thing is that contents returns a list with all elements (Tags and NavigableStrings), if you print contents, you can see it's something like
`[u'\n', <a href="more.asp">FOO</a>, u'\n']`
So, to get to the actual tag you need to access `contents[1]` (if you have the exact same contents; this can vary depending on the source HTML). After you've found the proper index you can use `contents[0]` to get the string inside the a tag.
Now, as this depends on the exact contents of the HTML source, it's very fragile. A more generic and robust solution would be to use `find()` again to find the 'a' tag, via `t.find('a')` and then use the contents list to get the values in it `t.find('a').contents[0]` or just `t.find('a').contents` to get the whole list.
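For illustration, a minimal sketch of that `find()`-based approach using the HTML from the question (BeautifulSoup 3 / Python 2, as in the original code):

```
from BeautifulSoup import BeautifulSoup

html = '''
<td class="c"><a href="more.asp">FOO</a></td>
<td class="c"><a href="alotmore.asp">BAR</a></td>
'''
soup = BeautifulSoup(html)
for t in soup.findAll('td', {"class": "c"}):
    print t.find('a').contents[0] # prints FOO, then BAR
```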
|
For your specific example, pyparsing's makeHTMLTags can be useful, since it is tolerant of many variabilities in HTML tags, but provides a handy structure for the results:
```
html = """
<td class="c">
<a href="more.asp">FOO</a>
</td>
<td class="c">
<a href="alotmore.asp">BAR</a>
</td>
<td class="d">
<a href="alotmore.asp">BAZZ</a>
</td>
"""
from pyparsing import *
td,tdEnd = makeHTMLTags("td")
a,aEnd = makeHTMLTags("a")
td.setParseAction(withAttribute(**{"class":"c"}))
pattern = td + a("anchor") + SkipTo(aEnd)("aBody") + aEnd + tdEnd
for t,_,_ in pattern.scanString(html):
print t.aBody, '->', t.anchor.href
```
prints:
```
FOO -> more.asp
BAR -> alotmore.asp
```
| 12,400
|
68,269,165
|
I have a problem. I've created a model called "Flower"; everything works fine, I can create new "Flowers", I can get data from them, etc. The problem is that when I want to use the column "owner\_id" in an SQL query I get an error that this column doesn't exist, even though I can use it to get data from objects (for example flower1.owner\_id). I've deleted my sqlite database several times, made new migrations and ran migrate, but that still did not work. I also changed the name of the column and re-created it, but that still didn't help.
My models.py:
```
from django.db import models
from django.contrib.auth.models import User
class Flower(models.Model):
name = models.CharField(max_length=80)
water_time = models.IntegerField()
owner_id = models.ForeignKey(User, on_delete=models.CASCADE, default=1)
def __str__(self):
return self.name
```
My views.py (view, where i want to use it):
```
class workspaceView(generic.DetailView):
template_name = 'floris/workspace.html'
def get_object(self):
with connection.cursor() as cursor:
cursor.execute(f'SELECT id,name FROM floris_Flower WHERE owner_id = {self.request.user.id}')
row = cursor.fetchall()
object = row
print(object)
if self.request.user.id == object.id:
return object
else:
print('Error')
```
My urls.py:
```
from django.urls import path
from . import views
urlpatterns = [
path('', views.index2, name='index2'),
path('login/', views.loginView, name='login'),
path('register/', views.register, name='register'),
path('addflower/', views.AddPlantView, name='addflower'),
path('index.html', views.logout_view, name='logout'),
path('workspace/', views.workspaceView.as_view(), name='workspace'),
path('profile/<str:username>/', views.ProfileView.as_view(), name='profile'),
]
```
And my error code:
```
OperationalError at /floris/workspace/
no such column: owner_id
Request Method: GET
Request URL: http://127.0.0.1:8000/floris/workspace/
Django Version: 3.2.4
Exception Type: OperationalError
Exception Value:
no such column: owner_id
Exception Location: /home/stazysta-kamil/.local/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py, line 421, in execute
Python Executable: /usr/bin/python3
Python Version: 3.8.10
Python Path:
['/home/stazysta-kamil/Desktop/floris/mysite',
'/usr/lib/python38.zip',
'/usr/lib/python3.8',
'/usr/lib/python3.8/lib-dynload',
'/home/stazysta-kamil/.local/lib/python3.8/site-packages',
'/usr/local/lib/python3.8/dist-packages',
'/usr/lib/python3/dist-packages']
Server time: Tue, 06 Jul 2021 10:43:32 +0000
```
|
2021/07/06
|
[
"https://Stackoverflow.com/questions/68269165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16390472/"
] |
Since `owner_id` is declared as a `ForeignKey`, it will be available in the actual SQL database as `owner_id_id`. The additional `_id` suffix is automatically appended by Django for that relational field. When using the Django ORM, you would just access it via `owner_id` and Django will automatically handle things for you in the background, but if you are using a raw SQL command, then you have to use the actual table column name, which is `owner_id_id`. If you don't want such behavior, set the `db_column` of the model field to the exact name you want, e.g. `owner_id = models.ForeignKey(User, on_delete=models.CASCADE, default=1, db_column="owner_id")`.
As stated in Django documentation:
>
> Behind the scenes, Django appends "\_id" to the field name to create
> its database column name. In the above example, the database table for
> the Car model will have a manufacturer\_id column.
>
>
>
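For illustration, a minimal sketch of the raw query with the actual column name (also switched to a parameterized query instead of an f-string, which is safer against SQL injection; the table name `floris_flower` follows Django's default `app_model` naming assumed from the question):

```
from django.db import connection

def get_user_flowers(user_id):
    # An FK declared as "owner_id" is stored as column "owner_id_id" by default
    with connection.cursor() as cursor:
        cursor.execute(
            'SELECT id, name FROM floris_flower WHERE owner_id_id = %s',
            [user_id],
        )
        return cursor.fetchall()
```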
Related references:
* <https://docs.djangoproject.com/en/3.2/ref/models/fields/#database-representation>
* <https://docs.djangoproject.com/en/3.1/topics/db/models/#field-name-hiding-is-not-permitted>
* <https://docs.djangoproject.com/en/3.1/topics/db/optimization/#use-foreign-key-values-directly>
|
The problem is in the field "owner\_id" of the Flower model. When you define a ForeignKey in a model, it is not necessary to add "\_id" at the end, since Django adds it automatically.
In this case, it should be enough to replace
```
owner_id = models.ForeignKey(User, on_delete=models.CASCADE, default=1)
```
with
```
owner = models.ForeignKey(User, on_delete=models.CASCADE, default=1)
```
| 12,401
|
54,085,972
|
I am trying to run a playbook locally, but I want all the vars in the role's tasks/main.yml file to refer to a group\_var in a specific inventory file.
Unfortunately the playbook is unable to access the group\_vars directory, as it fails to recognize the vars specified in the role.
The command run is the following:
`/usr/local/bin/ansible-playbook --connection=local /opt/ansible/playbooks/create.yml -i ./inventory-file`
but it fails to find the group\_vars in the group\_vars directory at the same directory level as the inventory file:
```
fatal: [127.0.0.1]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'admin_user_name' is undefined\n\nThe error appears to have been in '/opt/roles/create/tasks/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: create org\n ^ here\n"
}
```
This is my configuration:
```
ansible-playbook 2.7.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/opt/ansible-modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0]
```
Any help is appreciated.
Thanks,
dom
|
2019/01/08
|
[
"https://Stackoverflow.com/questions/54085972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3130919/"
] |
So, theoretically adding localhost to the inventory would have been a good solution, but in my specific case (and in general for large deployments) it was not an option.
I also added `--extra-vars "myvar.json"` but that did not work either.
It turns out (evil detail...) that the right way to add a var file via the command line is: `--extra-vars "@myvar.json"`
Posting it here in the hope nobody else struggles for days to find this solution.
Cheers,
dom
|
As per the error, your Ansible is not able to read the group\_vars. Please make sure that your group\_vars directory has an entry matching the group name, e.g. localhost.
Example playbook (the host is localhost):
```
- hosts: localhost
  become: true
  roles:
    - { role: common, tags: [ 'common' ] }
    - { role: docker, tags: [ 'docker' ] }
```
So in group\_vars there should be a folder called **localhost**, and in that folder a file called **main.yml**.
Or
You can create a folder called all in **group\_vars** and create a file called **all.yml**
This should solve the issue
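For reference, a typical layout next to the inventory file (a sketch; the file names are examples, and Ansible accepts either `group_vars/<group>.yml` files or `group_vars/<group>/` folders):

```
inventory-file
group_vars/
    all.yml        # vars applied to every host
    localhost.yml  # vars applied to the "localhost" group
```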
| 12,402
|
61,976,842
|
So I'm looking into text recognition of license plates. I'm using the Google Cloud service for this.
It returns a list of possible matches, but text in the image that is not part of the license plate also gets recognized. So I thought I could just tell Python to take from the list the one text that matches the pattern of a license plate.
For Germany it is like this:
1 to 3 letters. 1 whitespace. 1 or 2 letters. whitespace. up to 4 numbers.
So I have basically 3 parts. In the smallest case it could be something like
H A 4
In the biggest case something like
HHH AB 1234
Hope that's clear. Thanks for any help.
|
2020/05/23
|
[
"https://Stackoverflow.com/questions/61976842",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10685847/"
] |
You could use a regex for this:
```
^[A-Z]{1,3}\s[A-Z]{1,2}\s\d{1,4}$
```
An explanation:
```
----------------------------------------------------------------------
^ the beginning of the string
----------------------------------------------------------------------
[A-Z]{1,3} any character of: 'A' to 'Z' (between 1
and 3 times (matching the most amount
possible))
----------------------------------------------------------------------
\s whitespace (\n, \r, \t, \f, and " ")
----------------------------------------------------------------------
[A-Z]{1,2} any character of: 'A' to 'Z' (between 1
and 2 times (matching the most amount
possible))
----------------------------------------------------------------------
\s whitespace (\n, \r, \t, \f, and " ")
----------------------------------------------------------------------
\d{1,4} digits (0-9) (between 1 and 4 times
(matching the most amount possible))
----------------------------------------------------------------------
$ before an optional \n, and the end of the
string
```
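For illustration, a quick way to filter the OCR candidates with this pattern (`candidates` is a hypothetical list of strings returned by the OCR step):

```
import re

PLATE = re.compile(r'^[A-Z]{1,3}\s[A-Z]{1,2}\s\d{1,4}$')

candidates = ['HHH AB 1234', 'Autohaus Meier', 'H A 4'] # hypothetical OCR output
plates = [c for c in candidates if PLATE.match(c)]
print(plates) # ['HHH AB 1234', 'H A 4']
```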
|
Here's a way:
```
import re
string='frg3453453HHH AB 1234e456 2sf 3245 yKDEH A 4 554YFDN'
print(re.findall('[A-Z]{1,3}\s[A-Z]{1,2}\s\d{1,4}',string))
```
Output:
```
['HHH AB 1234', 'DEH A 4']
```
| 12,403
|
34,677,230
|
Given a list below:
```
snplist = [[1786, 0.0126525], [2463, 0.0126525], [2907, 0.0126525], [3068, 0.0126525], [3086, 0.0126525], [3398, 0.0126525], [5468,0.012654], [5531,0.0127005], [5564,0.0127005], [5580,0.0127005]]
```
I want to do a pairwise comparison of the second element in each sublist of the list, i.e. compare to see `0.0126525` from `[1786, 0.0126525]` is equal to `0.0126525` from `[2463, 0.0126525]` and so forth, if so, print the output as indicated in the code.
Using for loop, I achieve the result:
```
for index, item in enumerate(snplist, 0):
if index < len(snplist)-1:
if snplist[index][1] == snplist[index+1][1]:
print snplist[index][0], snplist[index+1][0], snplist[index][1]
```
When doing pairwise comparisons of the elements of a loop using list index, I always get into the error of `'index out of range'` because of the last element. I solve this problem by adding a condition
```
if index < len(snplist)-1:
```
I don't think this is the best way of doing this. I was wondering if there are more elaborate ways of doing pairwise comparisons of list elements in python?
EDIT: I had not thought about the level of tolerance when comparing floats. I would consider two floats with `0.001` difference as being equal.
|
2016/01/08
|
[
"https://Stackoverflow.com/questions/34677230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1945881/"
] |
You can `zip` the `snplist` with the same list excluding the first element, and do the comparison, like this
```
for l1, l2 in zip(snplist, snplist[1:]):
if l1[1] == l2[1]:
print l1[0], l2[0], l1[1]
```
Since you are comparing floating point numbers, I would recommend using [`math.isclose`](https://docs.python.org/3/library/math.html#math.isclose) function from Python 3.5, like this
```
def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
return abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
```
As you want to have 0.001 tolerance, you can do the comparison like this
```
if isclose(l1[1], l2[1], abs_tol=0.001):
```
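Putting the two together (a sketch using the names from the question and the backported `isclose` above):

```
for l1, l2 in zip(snplist, snplist[1:]):
    if isclose(l1[1], l2[1], abs_tol=0.001):
        print l1[0], l2[0], l1[1]
```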
|
I suggest that you use `izip` for this to create a generator of item-neighbor pairs. Leaving the problem of comparing floating points aside, the code would look like this:
```
>>> from itertools import izip
>>> lst = [[1,2], [3,4], [5,4], [7,8], [9,10], [11, 10]]
>>> for item, next in izip(lst, lst[1:]):
... if item[1] == next[1]:
... print item[0], next[0], item[1]
...
3 5 4
9 11 10
```
Remember to specify a tolerance when comparing floats, do *not* compare them with == !
You could define an `almost_equal` function for this, for example:
```
def almost_equal(x, y, tolerance):
return abs(x-y) < tolerance
```
Then in the code above, use `almost_equal(item[1], next[1], tolerance)` instead of the comparison with ==.
| 12,404
|
73,675,665
|
I know there are three thread mapping model in operating system.
1. One to One
2. Many to One
3. Many to Many
In this question I assume we use **One to One model**.
Let's say, right now I restart my computer, and there are **10** kernel-level threads already running.
After a while, I decide to run a python program which will launch one process with four threads. Three of the threads have to run a function that does a system call.
**Here is the question: what is the correct scenario when I run the python program?**
a) When the python program starts, the kernel will launch another 4 threads in kernel space immediately (so there are 14 threads in kernel space now). When those 3 threads at user level initiate a system call, the kernel will map those user-level threads to 3 of the 4 kernel-level threads that the kernel created when the python program started, which also means we will waste 1 kernel-level thread.
b) When the python program starts, the kernel **will not** launch another 4 threads in kernel space immediately. Instead, the kernel will create new kernel-level threads whenever those 3 user-level threads initiate a system call and are ready to talk with the kernel. In this case the kernel will create exactly 3 threads, which also means we will not waste any kernel-level threads.
c) Very similar to the second scenario, but in this case when those 3 user-level threads are ready to run a system call and talk with the kernel, the kernel makes 3 of the kernel-level threads that were already created stop doing their current job, and then asks them to do the job that the python program asks the kernel to do.
Which means the scheduler will pick 3 random kernel-level threads to stop what they're doing, and then store those tasks' information somewhere. After that, the scheduler will ask those 3 kernel-level threads to finish the python program's job first. In this case we always have only 10 threads at kernel level.
Any reply and suggested material to study is appreciated!
|
2022/09/10
|
[
"https://Stackoverflow.com/questions/73675665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16030398/"
] |
Kernel threads are like a specialized task responsible for doing a specific operation (not meant to last long). They are not threads waiting for incoming requests from user-land threads. Moreover, a system call does not systematically create a kernel thread (see [this post](https://stackoverflow.com/questions/17683067/user-threads-v-s-kernel-threads) for more information and [this one](https://stackoverflow.com/questions/15917544/linux-system-call-flow-sequence) for some context about system calls): a kernel thread is started when a background task is required, for example for dealing with IO requests ([this post](https://stackoverflow.com/questions/72908947/writing-a-small-file-blocks-for-20-ms/73013307#73013307) shows a good practical case, though the description is a bit deep). Basic system calls just run in the same user thread but with higher privileges. Note that kernel functions use a dedicated kernel stack: each user-level thread has 2 stacks on Linux, one for user-land functions and one for kernel-land functions (for the sake of security).
As a result, in practice, I think all answers are wrong in usual cases (ie. assuming the target operation do not require to create a kernel thread). If the target system calls done actually require to create kernel threads, then b) is the correct answer. Indeed, kernel threads are like a one-shot specialized task as previously stated. Creating/Destroying new kernel-threads is not much expensive since it is just basically a relatively lightweight `task_struct` data structure internally.
|
To answer this question directly: you have mixed up kernel threads and threading. They are not completely different concepts, but they differ a little at the OS level. Also, kernel threads may last indefinitely in many cases.
There are at least three types of data,
1. `thread_info` - specific schedulable entity; always exists.
2. `task_struct` - files open and other specific (not for kernel thread)
3. A memory management context (usually, but not always).
For kernel threads, there is no memory management nor user space allocations. Kernel threads do have a separate stack in kernel space. Only kernel code may run in the context of the kernel stack, for example on behalf of user space via a syscall. It can also be borrowed during an interrupt; which may cause a context switch.
The initial 10 kernel threads are just a number to add to the total.
>
> b) When a python program start, the kernel will not launch another 4 threads in kernel space immediately. Instead, kernel will create new kernel-level threads whenever those 3 user-level thread initiate a system call and ready to talk with kernel. In this case kernel will just create 3 threads exactly, which also means we will not waste any kernel-level threads.
>
>
>
This is correct. Your `pthread_create()` will use the same `task_struct`, which allows the threads to share file handles and static memory. Only the user stack and kernel stack are different. The threads share the same memory management structure as well. This is the only difference from a completely separate process. The context switch is light for a thread as there is no 'mm switch' which may incur all sorts of flushing. This `thread_info` structure can allow threads to live on different cores for SMP cases.
The `thread_info` is the only real structure/memory for a kernel thread. `thread_info` is contained in an 8K region (2 pages) which also contains the kernel stack. It is the only 'schedulable' entity by the kernel. The stack itself contains information on how to return to user space, if it is not a kernel thread.
For user space, we have additional structures which the `thread_info` has pointers to. So, it is one-to-one at least as far as `thread_info` goes. It is many-to-one for the other structure/data sets. Ie, the threads share them. For processes, they are **typically** one-to-one (not for `fork()`, but `execv()` type calls).
| 12,405
|
65,175,268
|
The formula below is a special case of the Wasserstein distance/optimal transport when the source and target distributions, `x` and `y` (also called marginal distributions) are 1D, that is, are vectors.
[](https://i.stack.imgur.com/aKURS.jpg)
where **F^{-1}** are inverse probability distribution functions of the cumulative distributions of the marginals `u` and `v`, derived from real data called `x` and `y`, both generated from the normal distribution:
```
import numpy as np
from numpy.random import randn
import scipy.stats as ss
n = 100
x = randn(n)
y = randn(n)
```
How can the integral in the formula be coded in python and scipy? I'm guessing the x and y have to be converted to ranked marginals, which are non-negative and sum to 1, while Scipy's `ppf` could be used to calculate the inverse **F^{-1}**'s?
|
2020/12/07
|
[
"https://Stackoverflow.com/questions/65175268",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11637005/"
] |
Note that when *n* gets large we have that a sorted set of *n* samples approaches the inverse CDF sampled at 1/n, 2/n, ..., n/n. E.g.:
```py
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
plt.plot(norm.ppf(np.linspace(0, 1, 1000)), label="invcdf")
plt.plot(np.sort(np.random.normal(size=1000)), label="sortsample")
plt.legend()
plt.show()
```
[](https://i.stack.imgur.com/PmryH.png)
Also note that your integral from 0 to 1 can be approximated as a sum over 1/n, 2/n, ..., n/n.
Thus we can simply answer your question:
```
def W(p, u, v):
assert len(u) == len(v)
return np.mean(np.abs(np.sort(u) - np.sort(v))**p)**(1/p)
```
Note that if `len(u) != len(v)` you can still apply the method with linear interpolation:
```py
def W(p, u, v):
u = np.sort(u)
v = np.sort(v)
if len(u) != len(v):
if len(u) > len(v): u, v = v, u
us = np.linspace(0, 1, len(u))
vs = np.linspace(0, 1, len(v))
        u = np.interp(vs, us, u)  # np.interp (not np.linalg.interp): resample the shorter sample onto the longer grid
return np.mean(np.abs(u - v)**p)**(1/p)
```
---
An alternative method if you have prior information about the sort of distribution of your data, but not its parameters, is to find the best fitting distribution on your data (e.g. with `scipy.stats.norm.fit`) for both `u` and `v` and then do the integral with the desired precision. E.g.:
```
from scipy.stats import norm as gauss
def W_gauss(p, u, v, num_steps):
ud = gauss(*gauss.fit(u))
vd = gauss(*gauss.fit(v))
z = np.linspace(0, 1, num_steps, endpoint=False) + 1/(2*num_steps)
return np.mean(np.abs(ud.ppf(z) - vd.ppf(z))**p)**(1/p)
```
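A quick usage check with samples like those in the question (output varies with the random draw):

```
from numpy.random import randn

x = randn(100)
y = randn(100)
print(W(2, x, y))             # empirical 2-Wasserstein distance
print(W_gauss(2, x, y, 1000)) # same, under a fitted-Gaussian assumption
```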
|
I guess I am a bit late, but this is what I would do for an exact solution (using only numpy):
```
import numpy as np
from numpy.random import randn
n = 100
m = 80
p = 2
x = np.sort(randn(n))
y = np.sort(randn(m))
a = np.ones(n)/n
b = np.ones(m)/m
# cdfs
ca = np.cumsum(a)
cb = np.cumsum(b)
# points on which we need to evaluate the quantile functions
cba = np.sort(np.hstack([ca, cb]))
# weights for integral
h = np.diff(np.hstack([0, cba]))
# construction of first quantile function
bins = ca + 1e-10 # small tolerance to avoid rounding errors and enforce right continuity
index_qx = np.digitize(cba, bins, right=True) # right=True because the quantile function is
# right continuous
qx = x[index_qx] # quantile function F^{-1}
# construction of second quantile function
bins = cb + 1e-10
index_qy = np.digitize(cba, bins, right=True) # right=True because the quantile function is
# right continuous
qy = y[index_qy] # quantile function G^{-1}
ot_cost = np.sum((qx - qy)**p * h)
print(ot_cost)
```
In case you are interested, here you can find a more detailed numpy-based implementation of the OT problem on the real line, with dual and primal solutions as well: <https://github.com/gnies/1d-optimal-transport>. (I am still working on it, though.)
| 12,406
|
31,357,459
|
I'm trying to understand non-greedy regexes in python, but I don't understand why the following examples give these results:
```
print(re.search('a??b','aaab').group())
ab
print(re.search('a*?b','aaab').group())
aaab
```
I thought it would be 'b' for the first and 'ab' for the second.
Can anyone explain that?
|
2015/07/11
|
[
"https://Stackoverflow.com/questions/31357459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5105884/"
] |
This happens because the matches you are expecting start *further to the right*. If you follow how the matching for `a??b` proceeds from left to right you'll see something like this:
* Try 0 `a` plus `b` vs `aaab`: no match (`b != a`)
* Try 1 `a` plus `b` vs `aaab` : no match (`ab != aa`)
* Try 0 `a` plus `b` vs `aab`: no match (`b != a`) (match position moved to the right by one)
* Try 1 `a` plus `b` vs `aab` : no match (`ab != aa`)
* Try 0 `a` plus `b` vs `ab`: no match (`b != a`) (match position moved to the right by one)
* Try 1 `a` plus `b` vs `ab` : **match** (`ab == ab`)
Similarly for `*?`.
The fact is that the `search` function returns the *leftmost* match. Using `??` and `*?` changes only the behaviour to prefer the *shortest leftmost* match but it will *not* return a shorter match that starts at the right of an already found match.
Also note that the `re` module doesn't return overlapping matches, so even using `findall` or `finditer` you will not be able to find the two matches you are looking for.
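A quick check of that last point with `findall`:

```
>>> import re
>>> re.findall('a??b', 'aaab')
['ab']
>>> re.findall('a*?b', 'aaab')
['aaab']
```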
|
It's because `??` is [*lazy*](http://www.rexegg.com/regex-quantifiers.html#lazy_solution) while `?` is greedy, and a lazy quantifier will match its left token zero or one times; zero if that still allows the overall pattern to match. For example, all the following return an empty string:
```
>>> print(re.search('a??','a').group())
>>> print(re.search('a??','aa').group())
>>> print(re.search('a??','aaaa').group())
```
And the regex `a??b` will match `ab` or `b`:
```
>>> print(re.search('a??b','aaab').group())
ab
>>> print(re.search('a??b','aacb').group())
b
```
And if that doesn't allow the overall pattern to match because there isn't any `b`, it will return None:
```
>>> print(re.search('a??b','aac').group())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'group'
```
As for the second part, you have a non-greedy regex and the result is straightforward: it will match any number of `a`s and then `b`:
```
print(re.search('a*?b','aaab').group())
aaab
```
| 12,407
|
40,762,671
|
I want to run a process on a remote machine and I want it to get terminated when my host program exits.
I have a small test script which looks like this:
```
import time
while True:
print('hello')
time.sleep(1)
```
and I start this process on a remote machine via a script like this one:
```
import paramiko

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('my_machine', username='root', key_filename='some_key')
_in, _out, _err = ssh.exec_command('python /home/me/loop.py')
time.sleep(5) # could do something with in/out/err
ssh.close()
```
But my problem is that the started process keeps running even after I've closed the Python process which started the SSH connection.
Is there a way to force closing the remote session and the remotely started process when the host process gets terminated?
**Edit**:
[This question](https://stackoverflow.com/questions/7734679/paramiko-and-exec-command-killing-remote-process) sounds similar but there is no satisfying answer. I tried to set the `keepalive` but it had no effect:
```
ssh.get_transport().set_keepalive(1)
```
|
2016/11/23
|
[
"https://Stackoverflow.com/questions/40762671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1668622/"
] |
When the SSH connection is closed, it will not kill the running command on the remote host.
The easiest solution is:
```
ssh.exec_command('python /home/me/loop.py', get_pty=True)
# ... do something ...
ssh.close()
```
Then when the SSH connection is closed, the pty (on remote host) will also be closed and the kernel (on remote host) will send the `SIGHUP` signal to the remote command. By default `SIGHUP` will terminate the process so the remote command will be killed.
---
According to the [APUE](http://www.apuebook.com/) book:
>
> *SIGHUP* is sent to *the controlling process (session leader)* associated
> with a *controlling terminal* if a disconnect is detected by the terminal
> interface.
>
>
>
|
Try [`closer`](https://haarcuba.github.io/closer/) - a library I've written specifically for this sort of thing. Doesn't use Paramiko, but perhaps it will work for you anyway.
| 12,409
|
44,354,394
|
how do I get a canvas to actually have a size?
```
root = Tk()
canv = Canvas(root, width=600, height=600)
canv.pack(fill = BOTH, expand = True)
root.after(1, draw)
mainloop()
```
just creates a window with a 1px canvas in the top left corner
edit: I omitted draw because it didn't throw anything, and thus didn't seem relevant. But since it runs perfectly if I step through it (with `n`) via pdb, here's the full code:
```
from tkinter import *
from pdb import set_trace
from math import floor  # needed by RectField.select_tile
class Field: #abstact
def __init__(self, size):
super().__init__()
self.tiles = [[(None, 0) for i in range(size[1])] for j in range(size[0])] #list of columns
def get_tile(self, x,y):
return tiles[x][y]
def inc_tile(self, x,y,player):
t = self.tiles[x][y]
tiles[x][y] = (player, t[1])
class RectField(Field):
def __init__(self, size):
super().__init__(size)
def select_tile(self, x, y):
lx = len(self.tiles)
rx = floor(x*lx)
ly = len(self.tiles[rx])
        ry = floor(y*ly)  # y here (the original had x, which looks like a typo)
return (rx, ry)
def draw(self, canvas):
canvas.delete(ALL)
w = canvas.winfo_width()
h = canvas.winfo_height()
canvas.create_rectangle(0, 0, w, h, fill='#f0f')
sx = w/len(self.tiles)
for i in range(len(self.tiles)):
sy = h/len(self.tiles[i])
for j in range(len(self.tiles[i])):
pl = self.tiles[i][j][0]
cl = '#888' if not pl else ('#f20' if pl == 1 else '#02f')
canvas.create_rectangle(i*sx, j*sy, (i+1)*sx, (j+1)*sy, fill=cl, outline='#000')
###################################################################################################
# MAIN
###################################################################################################
root = Tk()
canv = Canvas(root, width=600, height=600)
canv.pack(fill = BOTH, expand = True)
#set_trace()
field = RectField((4,4))
def init():
canv.create_rectangle(0, 0, 600, 600, fill='#f0f')
set_trace()
field.draw(canv)
root.update()
root.after(1, init)
mainloop()
```
OS is ubuntu 16.04,
python version is the preinstalled python3, run via terminal
|
2017/06/04
|
[
"https://Stackoverflow.com/questions/44354394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4690599/"
] |
Your canvas is not staying minimized. If you were to give the canvas a distinct background color you would see that it immediately fills the whole window, and stays that way.
You are getting `1` for the window width and height because you aren't giving tkinter enough time to draw the window before asking for the size. `winfo_width` and `winfo_height` return the actual size, and the actual size can't be computed until the window is actually visible on the screen.
The simplest solution is to force an update before calling your `init` function. You can do this by calling `update` and then calling `init` rather than using `after`.
Instead of this:
```
root.after(1, init)
```
Do this:
```
root.update()
init()
```
|
I was able to get your canvas to show up and work fine. It looks like your `init` function was the problem. You don't need to define a time to wait when calling your init() function; just call it directly and the program will do the rest.
Also, I have looked over the tkinter documentation for canvas and I do not see anything like `.draw()`; I do not think a function like that exists for tkinter. I could be wrong, but if it does exist the documentation is not obvious.
use this instead:
```
def init():
canv.create_rectangle(0, 0, 600, 600, fill='blue')
# I only changed the color to make sure it was working
#set_trace() # not needed.
#field.draw(canv) # does nothing?
#root.update() # not needed in this case.
init()
```
| 12,410
|
34,035,270
|
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists).
Input example: [6,4,6,2,3,6,9,6,1,6,5]
Output example: [4,2,3,9,1,5,6,6,6,6,6]
So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed:
```
def shift_sixes(nums):
b = []
c = 0
d = []
for i in nums:
if i == 6:
b.insert(len(nums),i)
elif i != 6:
c = c +1
d.insert(c,i)
ans = d + b
return ans
```
I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example -
```
a = [6,4,6,2,3,6,9,6,1,6,5]
def shift_sixes(nums):
for i in nums:
if i == 6:
nums.remove(i)
nums.insert(nums[len(nums)-1], 0)
elif i != 0:
i
shift_sixes(a)
```
Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line:
```
for idx, b in enumerate(a):
a[idx] = ???
```
Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end.
Would appreciate any help on this list traversal / inplace switching issue. Many thanks.
---
EDIT
@eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses
@falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert()
@tdelaney - thank you as well. somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
|
2015/12/02
|
[
"https://Stackoverflow.com/questions/34035270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5476661/"
] |
Iterating the list reverse way, [pop](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) the element if it's 6, then [append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) it.
```
xs = [6,4,6,2,3,6,9,6,1,6,5]
for i in range(len(xs)-1, -1, -1): # 10 to 0
if xs[i] == 6:
xs.append(xs.pop(i))
```
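For the sample input this yields:

```
>>> print xs
[4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6]
```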
|
Why not try something like this?
Basically, the approach is to first count the number of occurrences of the value.
If it is 0, then return (since Python raises a ValueError if the list.index method is called for an element not in the list).
We can then set the first acceptable index for the value to be the length of the list minus the number of times it occurs in the list.
We can then combine list.pop/list.append to traverse the list until all the desired values occur at the end of the list.
```
def shift_value(lst, value):
    counts = lst.count(value) # number of occurrences (5 for the example input)
if not counts:
return lst
value_index = len(lst) - counts
index = lst.index(value)
while index != value_index:
lst.append(lst.pop(index))
index = lst.index(value)
return lst
lst = [6,4,6,2,3,6,9,6,1,6,5]
print(shift_value(lst, 6))
```
EDIT: This is horribly inefficient, better answer suggested above.
This requires O(n^2) time, rather than O(n) time.
| 12,411
|
2,769,516
|
I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me) and I have been reading around a fair bit, with lots of success at rendering basic models, some basic lighting, and 'some' texturing success too.
But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult.
1. How do people get 3D models from their favorite 3D modeling tool into an iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use; is this what people do every time? Or do the "big" tooling suites (3DS, Maya, etc...) have exporting features?
2. Say I have my model in a nice .h file, and all the vertices, texture points, etc... are lined up; how do I make my model (say, of a basic person) walk? Or, more generally, how do you animate "part" of a model (legs only, turn head, etc...)? Does it need to be a massive mash-up of many different tiny models, or can you pre-bake animations "into" models these days (somehow)?
3. Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage that designer/developer workflow? Surely not all the animations, textures, etc... are done programmatically.
I hope these are not stupid questions, and in actual fact, the app I'm trying to investigate how to make is really quite simple: just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on?
Thanks for any help you can give, I appreciate all types of response big or small :)
Cheers,
Mark
|
2010/05/04
|
[
"https://Stackoverflow.com/questions/2769516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/26310/"
] |
1. something many people are surprised with when starting OpenGL development is that there's no such thing as a "OpenGL file format" for models, let alone animated ones. (DirectX for example comes with a .x file format supported right away). This is because OpenGL acts somewhat at a lower level. Of course, as tm1rbrt mentioned, there are plenty of libraries available. You can easily create your own file format though if you only need geometry. Things get more complex when you want to take also animation and shading into account. Take a look at [Collada](http://collada.org) for that sort of things.
2. again, animation can be done in several ways. Characters are often animated with skeletal animation. Have a look at the [cal3d](http://gna.org/projects/cal3d) library as a starting point for this.
3. you definitely want to spend some time creating a good pipeline for your content creation. Artist must have a set of tools to create their models and animations and to test them in the game engine. Artist must also be instructed about the limits of the engine, both in terms of polygons and of shading. Sometimes complex custom editors are coded to create levels, worlds, etc. in a way compatible with your specific needs.
|
1. Write or use a model loading library. Or use an existing graphics library; this will have routines to load models/textures already.
2. Animating models is done with bones in the 3d model editor. Graphics library will take care of moving the vertices etc for you.
3. No, artists create art and programmers create engines.
[This is a link to my favourite graphics engine.](http://irrlicht.sourceforge.net/)
Hope that helps
| 12,421
|
71,868,469
|
I'm trying to save an object using CBVs (I'm new to using them). I'm saving the object via CreateView but am getting this error:
"NOT NULL constraint failed: forum\_question.user\_id"
I would appreciate a beginner-friendly explanation of how to fix this, and maybe tips as well. Thank you!
models.py:
```
class Question(VoteModel, models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
title = models.CharField(max_length=30)
detail = models.TextField()
tags = models.TextField(default='')
add_time = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.title
```
forms.py:
```
class QuestionForm(ModelForm):
class Meta:
model = Question
fields = ['title', 'detail', 'tags']
```
views.py:
```
class AskForm(CreateView):
def post(self):
user = self.request.user
model = Question
form_class = QuestionForm
template_name = 'forum/ask-question.html'
if form_class.is_valid():
form_class.save()
```
exceptions?:
[](https://i.stack.imgur.com/MUXva.png)
edit 3:
[](https://i.stack.imgur.com/SxtFu.png)
extra info:
```
Traceback (most recent call last):
  File "/home/titanium/.local/lib/python3.8/site-packages/nextcord/client.py", line 417, in _run_event
    await coro(*args, **kwargs)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
    response = get_response(request)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/base.py", line 69, in view
    return self.dispatch(request, *args, **kwargs)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/base.py", line 101, in dispatch
    return handler(request, *args, **kwargs)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 174, in post
    return super().post(request, *args, **kwargs)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 144, in post
    return self.form_valid(form)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 127, in form_valid
    self.object = form.save()
  File "/home/titanium/.local/lib/python3.8/site-packages/django/forms/models.py", line 466, in save
    self.instance.save()
  File "/home/titanium/.local/lib/python3.8/site-packages/vote/models.py", line 67, in save
    super(VoteModel, self).save(*args, **kwargs)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 743, in save
    self.save_base(using=using, force_insert=force_insert,
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 780, in save_base
    updated = self._save_table(
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 885, in _save_table
    results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 923, in _do_insert
    return manager._insert(
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/query.py", line 1301, in _insert
    return query.get_compiler(using=using).execute_sql(returning_fields)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1441, in execute_sql
    cursor.execute(sql, params)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 99, in execute
    return super().execute(sql, params)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute
    return self.cursor.execute(sql, params)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute
    return self.cursor.execute(sql, params)
  File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py", line 416, in execute
    return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: NOT NULL constraint failed: forum_question.user_id
[14/Apr/2022 09:58:02] "POST /ask/ HTTP/1.1" 500 175023
```
|
2022/04/14
|
[
"https://Stackoverflow.com/questions/71868469",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17130619/"
] |
A forum Question instance must have a non-null user field, but you are not specifying the user related to the object you're creating. In case you don't want to add the user, update your model's user field to be:
```
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, blank=True, null=True)
```
Or, in your ask form, you can override form\_valid() in order to add the user, sort of like this (note: I have not tested this directly; follow the [documentation here](https://docs.djangoproject.com/en/4.0/topics/class-based-views/generic-editing/#models-and-request-user)):
```
class AskForm(CreateView):
    model = Question
    form_class = QuestionForm
    template_name = 'forum/ask-question.html'

    def form_valid(self, form):
        # attach the logged-in user before the object is saved
        form.instance.user = self.request.user
        return super().form_valid(form)
```
|
I'm not sure if this is still useful; however, I ran into the same error. You can fix the error by deleting your migration files and the database.
The error is due to sending NULL data (no data) to an already existing field in the database, usually after that field has been modified or deleted.
| 12,424
|
58,711,540
|
What is the equivalent of C++ STL set<> in python 3?
If there is no implementation, what should I use in python to:
1) Store a list of numbers
2) Find a not-less-than element in that list, like `lower_bound()` of STL's set?
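(For reference, a minimal sketch of the usual approach with the standard `bisect` module on a sorted list; `bisect_left` behaves like `lower_bound`:)

```
import bisect

nums = sorted([5, 1, 9, 3])      # keep the list sorted
i = bisect.bisect_left(nums, 4)  # first index whose value is >= 4, i.e. lower_bound
if i < len(nums):
    print(nums[i])               # -> 5
```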
|
2019/11/05
|
[
"https://Stackoverflow.com/questions/58711540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6138473/"
] |
The content scripts run in an "isolated world" which is a different context. By default devtools works in the page context so you need to [switch the context selector](https://developers.google.com/web/tools/chrome-devtools/console/reference#context) in devtools console toolbar to your extension:

An alternative solution is to expose the functions in the page context by putting them into a `<script>` element in the web page, but that won't be your content script anymore, it'd be just a normal page script function ([more info](https://stackoverflow.com/a/9517879)).
|
You can access your extension's console by right-clicking on the extension popup and then selecting "Inspect".
| 12,425
|
7,943,751
|
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
|
2011/10/30
|
[
"https://Stackoverflow.com/questions/7943751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/845948/"
] |
Using the 2to3 utility:
```
$ cat try.py
import SimpleHTTPServer
$ 2to3 try.py
RefactoringTool: Skipping implicit fixer: buffer
RefactoringTool: Skipping implicit fixer: idioms
RefactoringTool: Skipping implicit fixer: set_literal
RefactoringTool: Skipping implicit fixer: ws_comma
RefactoringTool: Refactored try.py
--- try.py (original)
+++ try.py (refactored)
@@ -1 +1 @@
-import SimpleHTTPServer
+import http.server
RefactoringTool: Files that need to be modified:
RefactoringTool: try.py
```
Like many \*nix utils, `2to3` accepts `stdin` if the argument passed is `-`. Therefore, you can test without creating any files like so:
```
$ 2to3 - <<< "import SimpleHTTPServer"
```
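So the direct Python 3 equivalent of `python -m SimpleHTTPServer` is simply (it serves the current directory on port 8000 by default):

```
$ python3 -m http.server
```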
|
In one of my projects I run tests against Python 2 and 3. For that I wrote a small script which starts a local server independently:
```
$ python -m $(python -c 'import sys; print("http.server" if sys.version_info[:2] > (2,7) else "SimpleHTTPServer")')
Serving HTTP on 0.0.0.0 port 8000 ...
```
As an alias:
```
$ alias serve="python -m $(python -c 'import sys; print("http.server" if sys.version_info[:2] > (2,7) else "SimpleHTTPServer")')"
$ serve
Serving HTTP on 0.0.0.0 port 8000 ...
```
Please note that I control my Python version via [conda environments](https://conda.io/docs/user-guide/tasks/manage-environments.html), because of that I can use `python` instead of `python3` for using Python 3.
| 12,426
|
7,976,733
|
I am relaying the output of my script to a local port on my system, viz. -
$ python script.py | nc 127.0.0.1 8033
Let's assume that my computer has IP 10.0.0.3.
Now, is it possible for some other computer (say IP 10.0.0.4) to listen to this port via nc or anything else? Please suggest.
|
2011/11/02
|
[
"https://Stackoverflow.com/questions/7976733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/270216/"
] |
Not directly. The program listening on the port must be on the local machine (meaning 10.0.0.3 in your example). You could arrange for a program on the local machine to listen and send the information to another machine, but the socket connection can only be established on the host.
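A minimal sketch of that forwarding idea (hypothetical: a tiny Python relay run on 10.0.0.3 that listens locally on 8033 and forwards each chunk to 10.0.0.4, which is assumed to be listening on port 9000, e.g. with `nc -l`):

```
import socket

# accept the local stream (run on 10.0.0.3; nc from the pipeline connects here)
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 8033))
srv.listen(1)
conn, _ = srv.accept()

# forward everything to the other machine
out = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
out.connect(('10.0.0.4', 9000))

while True:
    data = conn.recv(4096)
    if not data:
        break
    out.sendall(data)
```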
|
I use Perl to do exactly this - you could use python, of course.
In Perl, I use the `IO::Socket::INET` library.
I instantiate a new instance of `INET` with the IP, port, and protocol, and a timeout for the comms. I then use the `recv` method to read data from that socket.
It's not as simple as nc; I wish NC did this - it would be a lot easier :)
Here's an outline of the actual Perl
---
```
use IO::Socket::INET;

my $data;
my $socket;

$socket = IO::Socket::INET->new( PeerAddr => '10.0.0.3', PeerPort => 8033, Proto => 'tcp', Timeout => 1 ) or die "Unable to open port";
$socket->recv($data, 1024); # put your chosen read size instead of 1024
print $data;
```
---
| 12,436
|
21,535,061
|
Is it possible to create a python program that can interact with Google Translate?
I'm thinking of a way that first opens a .txt file, then reads the first line, then interacts with Google Translate and translates the word from a specific language to a specific language, then logs it into a different txt file.
Main question: is it possible to make Python 3.3 interact with Google Translate?
Please tell me if I didn't explain myself enough.
Thank you,
Tharix
|
2014/02/03
|
[
"https://Stackoverflow.com/questions/21535061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2802035/"
] |
Oh, the mind-bending horror of weak memory ordering...
The first snippet is your basic atomic read-modify-write - if someone else touches whatever address `x1` points to, the store-exclusive will fail and it will try again until it succeeds. So far so good. However, this only applies to the address (or more rightly region) covered by the exclusive monitor, so whilst it's good for *atomicity*, it's ineffective for *synchronisation* of anything other than that value.
Consider a case where CPU1 is waiting for CPU0 to write some data to a buffer. CPU1 sits there waiting on some kind of synchronisation object (let's say a semaphore), waiting for CPU0 to update it to signal that new data is ready.
1. CPU0 writes to the data address.
2. CPU0 increments the semaphore (atomically, as you do) which happens to be elsewhere in memory.
3. ???
4. CPU1 sees the new semaphore value.
5. CPU1 reads *some* data, which may or may not be the old data, the new data, or some mix of the two.
Now, what happened at step 3? Maybe it all occurred in order. Quite possibly, the hardware decided that since there was no address dependency it would let the store to the semaphore go ahead of the store to the data address. Maybe the semaphore store hit in the cache whereas the data didn't. Maybe it just did so because of complicated reasons only those hardware guys understand. Either way it's perfectly possible for CPU1 to see the semaphore update *before* the new data has hit memory, thus read back invalid data.
To fix this, CPU0 must have a barrier between steps 1 and 2, to ensure the data has definitely been written *before* the semaphore is written. Having the atomic write *be* a barrier is a nice simple way to do this. However since barriers are pretty performance-degrading you want the lightweight no-barrier version as well for situations where you don't need this kind of full synchronisation.
Now, the *even less* intuitive part is that CPU1 could also reorder its loads. Again since there is no address dependency, it would be free to speculate the data load before the semaphore load irrespective of CPU0's barrier. Thus CPU1 also needs its own barrier between steps 4 and 5.
For the more authoritative, but pretty heavy going, version have a read of ARM's [Barrier Litmus Tests and Cookbook](http://infocenter.arm.com/help/topic/com.arm.doc.genc007826/index.html). Be warned, this stuff can be *confusing* ;)
As an aside, in this case the architectural semantics of acquire/release complicate things further. Since they are only one-way barriers, whilst `OSAtomicAdd32Barrier` adds up to a full barrier relative to code before and after it, it doesn't actually guarantee any ordering relative to the atomic operation itself - see [this discussion from Linux](http://lists.infradead.org/pipermail/linux-arm-kernel/2014-February/229588.html) for more explanation. Of course, that's from the theoretical point of view of the architecture; in reality it's not inconceivable that the A7 hardware has taken the 'simple' option of wiring up `LDAXR` to just do `DMB+LDXR`, and so on, meaning they can get away with this since they're at liberty to code to their own implementation, rather than the specification.
|
I would guess that this is simply a way of reproducing existing architecture-independent semantics for this operation.
With the `ldaxr`/`stlxr` pair, the above sequence will assure correct ordering if the AtomicAdd32 is used as a synchronization mechanism (mutex/semaphore) - regardless of whether the resulting higher-level operation is an acquire or release.
So - this is not about enforcing consistency of the atomic add, but about enforcing ordering between acquiring/releasing a mutex and any operations performed on the resource protected by that mutex.
It is less efficient than the `ldxar`/`stxr` or `ldxr`/`stlxr` you would use in a normal native synchronization mechanism, but if you have existing platform-independent code expecting an atomic add with those semantics, this is probably the best way to implement it.
| 12,437
|
66,357,772
|
django+gunicorn+nginx gives 404 while serving static files
I am trying to deploy a Django project using nginx + gunicorn + postgresql. All the configuration is done; my project's admin-panel static files are served, but other static files return a 404 error (I have run python manage.py collectstatic).
```
my error.log nginx :: "/blogpy/static/home/test.css" failed (2: No such file or directory)"
Structure:
blogpy
-blogpy
-config
-nginx
-nginx.conf
-docker-compose.yml
-Dockerfile
-home
-static
-home
-test.css(not working)
- requirements
-static
-templates
-.env
-docker-compose.yml
-Dockerfile
setting.py:
DEBUG = False
ALLOWED_HOSTS = ['*']
STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'static'
nginx configuration:
---- nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
upstream blogpy{
server blogpy:8000;
}
server {
listen 80;
server_name localhost;
charset utf-8;
location / {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_pass http://blogpy;
}
location /static/ {
alias /blogpy/static/;
}
}
}
```
|
2021/02/24
|
[
"https://Stackoverflow.com/questions/66357772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14751614/"
] |
Try this in nginx.conf:
```
location /static/ {
    autoindex off;
    alias /home/ubuntu/blogpy/static/; # add the full path of the static files directory
}
```
|
To get the files in /blogpy/home/static/ copied into /blogpy/static/ by the collectstatic command, you need to specify the STATICFILES\_DIRS setting:
<https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-STATICFILES_DIRS>
```
STATICFILES_DIRS = [
BASE_DIR / 'home' / 'static',
]
```
| 12,439
|
59,802,608
|
I have this code and it raises an error in Python 3; such a comparison works in Python 2.
How can I change it?
```
import tensorflow as tf
def train_set():
class MyCallBacks(tf.keras.callbacks.Callback):
def on_epoch_end(self,epoch,logs={}):
if(logs.get('acc')>0.95):
print('the training will stop !')
self.model.stop_training=True
callbacks=MyCallBacks()
mnist_dataset=tf.keras.datasets.mnist
(x_train,y_train),(x_test,y_test)=mnist_dataset.load_data()
x_train=x_train/255.0
x_test=x_test/255.0
classifier=tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28,28)),
tf.keras.layers.Dense(512,activation=tf.nn.relu),
tf.keras.layers.Dense(10,activation=tf.nn.softmax)
])
classifier.compile(
optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks])
return history.epoch,history.history['acc'][-1]
train_set()
```
|
2020/01/18
|
[
"https://Stackoverflow.com/questions/59802608",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11214617/"
] |
Tensorflow 2.0
==============
```
DESIRED_ACCURACY = 0.979
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epochs, logs={}) :
if(logs.get('acc') is not None and logs.get('acc') >= DESIRED_ACCURACY) :
print('\nReached 99.9% accuracy so cancelling training!')
self.model.stop_training = True
callbacks = myCallback()
```
|
I had the same problem, and instead of using 'acc' I changed it to 'accuracy' everywhere. So if the lookup returns None, it may be better to try changing 'acc' to 'accuracy'.
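Combining both suggestions, here is a minimal sketch (assuming TensorFlow 2.x, where tf.keras logs the metric under 'accuracy'; older versions used 'acc'):
```
import tensorflow as tf

class StopAtAccuracy(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        acc = logs.get('accuracy')  # use 'acc' on older tf.keras versions
        # Guard against None before comparing; the bare comparison is
        # exactly what raises the TypeError on Python 3.
        if acc is not None and acc > 0.95:
            print('the training will stop !')
            self.model.stop_training = True
```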
| 12,441
|
32,991,119
|
I am writing C extensions for python. I am just experimenting for the time being and I have written a hello world extension that looks like this :
```
#include <Python2.7/Python.h>
static PyObject* helloworld(PyObject* self)
{
return Py_BuildValue("s", "Hello, Python extensions!!");
}
static char helloworld_docs[] = "helloworld( ): Any message you want to put here!!\n";
static PyMethodDef helloworld_funcs[] = {
{"helloworld", (PyCFunction)helloworld, METH_NOARGS, helloworld_docs},
{NULL,NULL,0,NULL}
};
void inithelloworld(void)
{
Py_InitModule3("helloworld", helloworld_funcs,"Extension module example!");
}
```
the code works perfectly fine, after installing it from a setup.py file I wrote, and installing it from command line
```
python setup.py install
```
What I want is the following:
I want to use the C file as a Python extension module without installing it; that is, I want to use it as just another Python file in my project, not a file that I need to install before my Python modules can use its functionality. Is there some way of doing this?
|
2015/10/07
|
[
"https://Stackoverflow.com/questions/32991119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5414031/"
] |
You can simply compile the extension without installing (usually something like `python setup.py build`). Then you have to make sure the interpreter can find the compiled module (for example by copying it next to a script that imports it, or setting `PYTHONPATH`).
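For example, a quick sketch (the module name `helloworld` follows the question's code):
```
$ python setup.py build_ext --inplace   # leaves helloworld.so next to setup.py
$ python -c "import helloworld; print(helloworld.helloworld())"
Hello, Python extensions!!
```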
|
You can create your "own interpreter" by not extending python, but embedding it into your application. In that way, your objects will be always available for the users who are running your program. This is a pretty common thing to do in certain cases, for example look at the Blender project where all the `bpy`, `bmesh` and `bge` modules are already included.
The downside is, your users can't use the `python` command directly, they have to use your `hello_world_python` instead. (But of course you can provide your extension as a module as well.) And that also means, you have to compile and distribute your application for all platforms you want to support -- in case you want to distribute it as a binary, to make your users lives a bit easier.
For further information on embedding python into your program, read the propriate sections of the documentation:
[Embedding Python in Another Application](https://docs.python.org/2/extending/embedding.html)
>
> ***Personal suggestion:*** Use Python 3.5 whenever you can, and stop supporting the old 2.x versions. For more information, read this article: [Should I use Python 2 or Python 3 for my development activity?](https://wiki.python.org/moin/Python2orPython3)
>
>
>
| 12,451
|
31,962,569
|
I am working on MQTT and using python paho-mqtt <https://pypi.python.org/pypi/paho-mqtt>
I am unable to understand how I can publish a message to a specific client or a list of clients.
I'll appreciate your help.
|
2015/08/12
|
[
"https://Stackoverflow.com/questions/31962569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1073780/"
] |
This isn't directly possible with strict MQTT, although some brokers may offer that functionality, or you can construct your application so that the topic design works to do what you need.
|
Although I do agree that in some cases it would be useful to send a message to a particular client (or list of clients) that's simply not how the publish/subscribe messaging paradigm works. [Read more on the publish-subscribe pattern on Wikipedia.](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) If all your system needs to do is send messages to unique clients, then I would perhaps suggest thinking of a different architecture for the system you are designing. That being said, you can leverage off pub/sub to achieve what you want using a clever topic design architecture.
For example, let's assume all clients are part of a group (list), you could think of the following topic design:
Unique per client: *P2P/< client-name >*
List/Group subscription: *LIST/< list-name >*
For example, *P2P/user12345* and *LIST/QA* where only user12345 subscribes to *P2P/user12345* but all users of the QA group subscribe to *LIST/QA*.
It would be the client's responsibility to ensure that it is subscribed to its own topic(s) (or if your broker allows it, you could also add the topics administratively to non-clean clients).
With this design, a publisher would be able to send a message to a specific user or all members of a defined group (list).
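As a rough sketch with the paho-mqtt 1.x API (the broker host and topic names here are assumptions following the design above):
```
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="user12345")
client.connect("broker.example.com", 1883)

# Each client subscribes to its own P2P topic plus its group topic.
client.subscribe("P2P/user12345")
client.subscribe("LIST/QA")

# A publisher targets one client, or every member of a group.
client.publish("P2P/user12345", "message for a single client")
client.publish("LIST/QA", "message for the whole QA group")

client.loop_forever()
```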
| 12,452
|
61,581,612
|
I am in the process of converting some Cython code to Python, and it went well until I came to the bitwise operations. Here is a snippet of the code:
```
in_buf_word = b'\xff\xff\xff\xff\x00'
bits = 8
in_buf_word >>= bits
```
If I run this, it spits out this error:
```
TypeError: unsupported operand type(s) for >>=: 'str' and 'int'
```
How would I fix this?
|
2020/05/03
|
[
"https://Stackoverflow.com/questions/61581612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13462790/"
] |
```
import bitstring
in_buf_word = b'\xff\xff\xff\xff\x00'
bits = 8
in_buf_word = bitstring.BitArray(in_buf_word) >> bits
```
If you don't have it, install it from your terminal:
```
pip3 install bitstring --> python 3
pip install bitstring --> python 2
```
To convert it back into bytes, use the tobytes() method:
```
print(in_buf_word.tobytes())
```
|
Shifting to the right by 8 bits just means cutting off the rightmost byte.
Since you already have a `bytes` object, this can be done more easily:
```
in_buf_word = in_buf_word[:-1]
```
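If a shift by a bit count that is not a multiple of 8 is ever needed, a hedged alternative (Python 3) is to round-trip through an integer:
```
in_buf_word = b'\xff\xff\xff\xff\x00'
bits = 8

n = int.from_bytes(in_buf_word, 'big') >> bits
in_buf_word = n.to_bytes(len(in_buf_word), 'big')  # keeps the original width
print(in_buf_word)  # b'\x00\xff\xff\xff\xff'
```
Note this zero-pads on the left, whereas the slicing approach simply drops the last byte.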
| 12,453
|
10,331,413
|
I am working on Excel parsing using Python.
Until now I have worked with English text, but when I encounter regional languages, I get an error.
Example:
```
IR05 měsíční (monthly)
```
It gives me the error as
```
UnicodeEncodeError: 'ascii' codec can't encode character u'\u011b' in position 6: ordinal not in range(128)
```
How can I parse it, and how can I write it back in the same language in the output files?
my code :
```
for j in val:
print 'j is - ', j
str(j).replace("'", "")
```
I am getting the error at the replace statement.
|
2012/04/26
|
[
"https://Stackoverflow.com/questions/10331413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778942/"
] |
```
>>> "IR05 měsíční (monthly)".decode('utf8')
u'IR05 m\u011bs\xed\u010dn\xed (monthly)'
```
which is a unicode version of your original string (which was encoded in utf8).
Now you can compare it to your other string (from the file), which you decode (from utf8 or latin2 or a different format) and you can compare them.
```
>>> 'IR05 m\xecs\xed\xe8n\xed (monthly)'.decode('latin2')
u'IR05 m\u011bs\xed\u010dn\xed (monthly)'
```
now you can compare the two unicode strings:
```
>>> s_utf8 = "IR05 měsíční (monthly)"
>>> s_latin2 = 'IR05 m\xecs\xed\xe8n\xed (monthly)'
>>> s_utf8.decode('utf8') == s_latin2.decode('latin2')
True
```
To write the string into a file, `encode` it again:
```
>>> s = s_utf8.decode('utf8')
>>> filehandle.write(s.encode('utf8'))
```
|
the error may be caused by `str(j)`, which implicitly encodes the unicode string to ASCII.
Try this:
```
for j in val:
    print 'j is - ', j
    j = j.replace("'", "")  # replace returns a new string; keep the result
```
| 12,455
|
16,269,396
|
I know I've seen clean examples on the proper way to do this, and could even swear it was in one of the standard python libraries. I can't seem to find it now. Could you please point me in the right direction.
Iterator for a list of lists that only returns arbitrary values from the sub-list. The idea is to have this in a list-comprehension.
```
alist = [ [1,2,3,4,5],
[2,4,6,8,10],
[3,6,9,12,15] ]
only_some_values = [[x[2]] for x in alist]
[[3],[6],[9]]
```
But I am quite sure there is a function that does this same thing, but in iterator fashion, thus getting rid of my direct index access on the left side and chaining an additional iterator on the right side of the `for`. Perhaps it was a form of embedded list-comprehension, but I really thought there was a cleaner way (using imap maybe?).
|
2013/04/29
|
[
"https://Stackoverflow.com/questions/16269396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2097818/"
] |
Perhaps you were thinking of `itemgetter`?
```
>>> from operator import itemgetter
>>> map(itemgetter(2), alist)
[3, 6, 9]
```
But that doesn't leave the elements in sublists
```
only_some_values = [[x[2]] for x in alist]
```
Gives your desired output
|
Here is what I had in mind:
```
from operator import itemgetter
alist = [ [1,2,3,4,5],
[2,4,6,8,10],
[3,6,9,12,15] ]
[list(x) for x in zip(map(itemgetter(2),alist),
map(itemgetter(0),alist)) ]
[[3,1], [6,2], [9,3]]
```
The idea is that you keep the left side of the comprehension clean, probably replacing `list(x)` with another important call, and so it is easy to read where the first-level list elements are coming from.
I'm still not sure where I got this from, but its definitely not one of my original ideas.
| 12,456
|
48,252,967
|
I'm currently building a system that scrapes data from Foursquare. Right now I have scraped the reviews from the website using Python and Beautiful Soup, and I have a JSON file like the one below:
```
{"review": "From sunset too variety food u cant liked it.."}{"review": "Byk motor laju2"}{"review": "Good place to chill"}{"review": "If you wan to play bubble and take photo, here is the best place"}{"review": "Scenic view for weekend getaway... coconut shake not taste as original klebang coconut shake..."}{"review": "Getting dirtier"}{"review": "Weekend getaway!"}{"review": "Good for casual walk & watching sunset with loved ones since my last visit macam2 ade kat sini very packed during public holidays"}{"review": "Afternoon time quite dry..beach is normal. Maybe evening/night might be better. The coconut shake they add vanilla ice cream,hmmm"}{"review": "Pantai awesome beb"}{"review": "Nice place for picnic"}{"review": "Cannot mandi here. Good place for recreation.. Calm place for weekdays! Haha"}{"review": "Very bz place. Need to go there early if you want to lepak. If not, no parking for you"}{"review": "So many good attraction here, worth a visit"}{"review": "Beautiful place for sunset"}{"review": "New beach! Like all beaches, awesome view & windy. Some stretch got many small crabs."}{"review": "There is bustel \"hotel in a bus\" can get coconut shake or fried seafood in the evening at 5pm. Bustel rate is from RM80. Bus cafe, bus toilet... Total bus transformation"}{"review": "Too crowded la"}{"review": "Muzium kapal selam closed since 1/3 until further notice..\ud83d\ude29"}{"review": "If you are looking for public toilets, look for a red bus. An old bus was modified and transformed to operate as toilets. Cool."}{"review": "Most of the shops closed after 12 midnight..helloo,this place should be the place for the late nighters..late night supposed to be the peak hour for business..live band bar maybe?? :-P"}
```
My question is: how do I insert the data into a database right away? Can MySQL be used, or should I go with PyMongo instead?
|
2018/01/14
|
[
"https://Stackoverflow.com/questions/48252967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9216577/"
] |
It depends on your usages. Basically, MongoDB is suitable for JSON document so, you will be able to insert your Python object "directly". If you want/need to use MySQL, you will probably need to perform some transformations before inserting. Check this post for more information: [Inserting JSON into MySQL using Python](https://stackoverflow.com/questions/4251124/inserting-json-into-mysql-using-python)
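For instance, a minimal PyMongo sketch (the database and collection names are made up here):
```
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.foursquare

# Assuming the scraper yields one dict per review, as in the question.
reviews = [{"review": "Good place to chill"},
           {"review": "Nice place for picnic"}]
db.reviews.insert_many(reviews)
```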
|
You can convert your json into a string (json.dumps()) and store in a character field.
Or, Django has support for JSONField when using Postgres ([docs](https://docs.djangoproject.com/en/2.0/ref/contrib/postgres/fields/#jsonfield)), this has some additional features like querying inside the json
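A rough illustration of the string-field route (sqlite3 stands in here for any DB-API backend; the table name is an assumption):
```
import json
import sqlite3

conn = sqlite3.connect("reviews.db")
conn.execute("CREATE TABLE IF NOT EXISTS reviews (payload TEXT)")

review = {"review": "Good place to chill"}
conn.execute("INSERT INTO reviews (payload) VALUES (?)",
             (json.dumps(review),))
conn.commit()
```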
| 12,457
|
44,060,080
|
The subject of this study was taken from the [Text processing and detection from a specific dictionary in python](https://stackoverflow.com/questions/43988958/text-processing-and-detection-from-a-specific-dictionary-in-python/43989724#43989724) topic. Perhaps I misunderstood the OP's problem, but I have tried to improve the code, so my question may be a bit different. Before explaining what I wanted to do, let me share the code with you:
```
dict_1={"Liquid Biopsy":"Blood for analysis","cfDNA":"Blood for analysis"}
list_1=[u'Liquid', u'biopsy',u'based', u'on', u'circulating', u'cell-free', u'DNA', u'(cfDNA)', u'analysis', u'are', u'described', u'as', u'surrogate', u'samples', u'for', u'molecular', u'analysis.']
for i in dict_1:
if i.lower() in " ".join(list_1).lower():
print("Key: {}\nValue: {}\n".format(i,dict_1[i]))
```
This code can catch the dictionary keys in the plain text built from `list_1`. However, while working with this code, I wondered what would happen if some dictionary keys repeated in `list_1`. I then wrote the same keys twice in `list_1`, but the above code didn't recognize the repeated ones; the program gave the same result as below.
```
Key: cfDNA
Value: Blood for analysis
Key: Liquid Biopsy
Value: Blood for analysis
Process finished with exit code 0
```
Then I tried to change my method and wrote different code, which is given below:
```
dict_1={"Liquid Biopsy":"Blood for analysis","cfDNA":"Blood for analysis"}
list_1=[u'Liquid', u'biopsy',u'based', u'on', u'circulating', u'cell-free', "cfdna",u'DNA', u'(cfDNA)', u'analysis', u'are', u'described', u'as', u'surrogate', u'samples', u'for', u'molecular', u'analysis.']
for i in list_1:
for j in dict_1:
for k in j.split():
count=0
if k.lower() in i.lower():
count+=1
print("Key: {}\nValue: {}\nCount: {}\nDescription: Came from '{}'\n".format(j, dict_1[j],str(count),i))
```
But it was obvious the last code would give an undesirable result. As can be seen below, the program catches both the `liquid` and `biopsy` words from `list_1`. `cfDNA` was written twice in `list_1`, so the program catches it twice. But is it possible to write the result once and sum up the match counts?
```
Key: Liquid Biopsy
Value: Blood for analysis
Count: 1
Description: Came from 'Liquid'
Key: Liquid Biopsy
Value: Blood for analysis
Count: 1
Description: Came from 'biopsy'
Key: cfDNA
Value: Blood for analysis
Count: 1
Description: Came from 'cfdna'
Key: cfDNA
Value: Blood for analysis
Count: 1
Description: Came from '(cfDNA)'
Process finished with exit code 0
```
I hope you understand what I wanted to do. I want to catch all of the keys written in a text, and I also want to count how many times these keys repeat in the text.
|
2017/05/19
|
[
"https://Stackoverflow.com/questions/44060080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8016168/"
] |
If I understood correctly, you want to find the number of times a "keyword" appears in a text. You can use the "re" module for this.
```
import re
dict_1={"Liquid Biopsy":"Blood for analysis","cfDNA":"Blood for analysis", "asfdafaf":"dunno"}
list_1=[u'Liquid', u'biopsy',u'based', u'on', u'circulating', u'cell-free', "cfdna",u'DNA', u'(cfDNA)', u'analysis', u'are', u'described', u'as', u'surrogate', u'samples', u'for', u'molecular', u'analysis.']
text = ' '.join(list_1).lower()
for key in dict_1:
    # escape the key in case it contains regex metacharacters
    n = len(re.findall(re.escape(key.lower()), text))
    if n > 0:
        print('Key:', key)
        print('Value:', dict_1[key])
        print('n:', n)
        print()
```
|
Recently I learned a new method for counting how many times a dictionary key repeats in a plain text without importing the `re` module, so perhaps it's worth putting another method in this topic.
```
dict_1={"Liquid Biopsy":"Blood for analysis","cfDNA":"Blood for analysis"}
list_1=[u'Liquid', u'biopsy', u'liquid', u'biopsy',u'based',u'cfdna' ,u'on', u'circulating', u'cell-free', u'DNA', u'(cfDNA)', u'analysis', u'are', u'described', u'as', u'surrogate', u'samples', u'for', u'molecular', u'analysis.']
string_1=" ".join(list_1).lower()
for i in dict_1:
if i.lower() in string_1:
print("Key: {}\nValue: {}\nCount: {}\n".format(i,dict_1[i],string_1.count(i.lower())))
```
The above code gives almost the same results as the `re`-module method. The difference is that it doesn't write the keys twice, so it is a bit similar to the first code structure written in the first post.
```
Key: Liquid Biopsy
Value: Blood for analysis
Count: 2
Key: cfDNA
Value: Blood for analysis
Count: 2
Process finished with exit code 0
```
| 12,458
|
54,830,602
|
Preface
=======
I understand that `dict`s/`set`s should be created/updated with hashable objects only due to their implementation, so when this kind of code fails
```
>>> {{}} # empty dict of empty dict
Traceback (most recent call last):
File "<input>", line 1, in <module>
TypeError: unhashable type: 'dict'
```
it's ok and I've seen tons of this kind of messages.
But if I want to check if some unhashable object is in `set`/`dict`
```
>>> {} in {} # empty dict not in empty dict
```
I get error as well
```
Traceback (most recent call last):
File "<input>", line 1, in <module>
TypeError: unhashable type: 'dict'
```
Problem
=======
What is the rationale behind this behavior? I understand that lookup and updating may be logically connected (like in [`dict.setdefault` method](https://docs.python.org/3/library/stdtypes.html#dict.setdefault)), but shouldn't it fail on modification step instead of lookup? Maybe I have some hashable "special" values that I handle in some way, but others (possibly unhashable) -- in another:
```
SPECIAL_CASES = frozenset(range(10)) | frozenset(range(100, 200))
...
def process_json(obj):
if obj in SPECIAL_CASES:
... # handle special cases
else:
... # do something else
```
so with given lookup behavior I'm forced to use one of the options
* [LBYL](https://docs.python.org/3/glossary.html#term-lbyl) way: check if `obj` is hashable and only after that check if it is one of `SPECIAL_CASES` (which is not great since it is based on `SPECIAL_CASES` structure and lookup mechanism restrictions, but can be encapsulated in separate predicate),
* [EAFP](https://docs.python.org/3/glossary.html#term-eafp) way: use some sort of utility for "safe lookup" like
```
def safe_contains(dict_or_set, obj):
try:
return obj in dict_or_set
except TypeError:
return False
```
* use `list`/`tuple` for `SPECIAL_CASES` (which is not `O(1)` on lookups).
Or am I missing something trivial?
|
2019/02/22
|
[
"https://Stackoverflow.com/questions/54830602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5997596/"
] |
I have found a solution and want to share it here so it helps someone else looking to do the same thing. The user running the docker command (without sudo) needs to have the docker group. So I tried adding the service account as a user and gave it the docker group and that's it. `docker login` to gcr worked and so did `docker run`. So the problem is solved but this raises a couple of additional questions.
First, is this the correct way to do it? If it is not, then what is? If this is indeed the correct way, then perhaps a service account selected while creating a VM must be added as a user when it (the VM) is created. I can understand this leads to some complications, such as what happens when the service account is changed. Does the old service account user get deleted, or should it be retained? But I think at least an option can be given to add the service account user to the VM - something like a checkbox in the console - so the end user can take a call. Hope someone from GCP reads this.
|
As stated in this [article](https://docs.docker.com/install/linux/linux-postinstall/), the steps you taken are the correct way to do it. Adding users to the "docker" group will allow the users to run docker commands as non root. If you create a new service account and would like to have that service account run docker commands within a VM instance, then you will have to add that service account to the docker group as well.
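For example (a sketch following the cited post-install docs; the user name is hypothetical and should be the account that runs docker):
```
$ sudo groupadd docker               # only if the group doesn't exist yet
$ sudo usermod -aG docker my-sa-user
$ newgrp docker                      # or log out and back in to apply the group
```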
If you change the service account on a VM instance, then the old service account should still be able to run docker commands as long as the older service account is not removed from the docker group and has not been deleted from Cloud IAM; however, you will still need to add the new service account to the docker group to allow it to run docker commands as non root.
Update: automating the creation of a service account at VM instance creation would be tedious. Within your startup script, you would have to first create the service account using the [gcloud commands](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account) and then add the appropriate [IAM roles](https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource). Once that is done, you would still have to add the service account to the docker group.
It would be much easier to create the service account from the [Console](https://cloud.google.com/iam/docs/creating-managing-service-accounts#iam-service-accounts-create-console) when the VM instance is being created. Once the VM instance is created, you can add the service account to the docker group.
If you would like to request for a new feature within GCE, you can submit a Public Issue Tracker by visiting this [site](https://issuetracker.google.com).
| 12,459
|
32,834,419
|
I received this message:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-23-60bbe78150c2> in <module>()
17 men_only_stats=data[0::4]!="male"
18
---> 19 women_onboard = data[women_only_stats,1].astype(np.float)
20 men_onboard = data[men_only_stats,1].astype(np.float)
21 proportion_women_survive= sum(women_onboard)/size(women_onboard)
IndexError: too many indices for array
```
when I enter my code here:
```
import csv as csv
import numpy as np
csv_file_object = csv.reader(open(r"C:\Users\IT'S OVER 9000\Downloads\train.csv", 'rb'))
header = csv_file_object.next()
data=[]
for row in csv_file_object:
data.append(row)
data=np.array(data)
number_passengers= np.size(data[0::4,1].astype(np.float))
passengers_survived=np.sum(data[0::4,1].astype(np.float))
proportion_survived=passengers_survived/number_passengers
women_only_stats= data[0::4]=="female"
men_only_stats=data[0::4]!="male"
women_onboard = data[women_only_stats,1].astype(np.float)
men_onboard = data[men_only_stats,1].astype(np.float)
proportion_women_survive= sum(women_onboard)/size(women_onboard)
proportion_men_survive= sum(men_onboard)/size(men_onboard)
print proportion_women_survive
print proportion_men_survive
```
Here are two lines of data from my csv file:
```
PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked,,,,
1,0,3,"Braund, Mr. Owen Harris",male,22,1,0,A/5,21171,7.25,,S,,,
2,1,1,"Cumings, Mrs. John Bradley (Florence Briggs Thayer)",female,38,1,0,PC,17599,71.2833,C85,C,,,
```
What did I do wrong, what caused it, and how do I fix it?
|
2015/09/29
|
[
"https://Stackoverflow.com/questions/32834419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5386822/"
] |
You need to use `getChildFragmentManager()` instead of `getFragmentManager()` for placing and managing Fragments inside of a Fragment.
So
```
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
view = inflater.inflate(R.layout.fragment_layout, container, false);
FragmentTransaction fragmentTransaction = getChildFragmentManager().beginTransaction();
fragmentTransaction.replace(R.id.fragmentContainer, subFragment1);
fragmentTransaction.commit();
return view;
}
public void showSubFragment2() {
FragmentTransaction fragmentTransaction = getChildFragmentManager().beginTransaction();
fragmentTransaction.setCustomAnimations(R.animator.slide_in_left, R.animator.slide_out_right);
fragmentTransaction.replace(R.id.fragmentContainer, subFragment2);
fragmentTransaction.commit();
}
public void showSubFragment1() {
FragmentTransaction fragmentTransaction = getChildFragmentManager().beginTransaction();
fragmentTransaction.setCustomAnimations(R.animator.slide_in_right, R.animator.slide_out_left);
fragmentTransaction.replace(R.id.fragmentContainer, subFragment1);
fragmentTransaction.commit();
}
```
|
Try calling `getChildFragmentManager()` instead of `getFragmentManager()` and see if that helps.
| 12,460
|
72,590,538
|
I have 2 models: 1) patientprofile and 2) medInfo. In the first model, patientprofile, I store the patient's information (name and other personal details); in the second model I want to add the patient's medical information. I check whether existing medical information is there for the patient; if so, show and update it, otherwise create and update it. The medInfo model uses a foreign key to patientprofile (id). It works perfectly in the admin panel, but when I try to do it in the UI I get the error below.
Here is the code, views.py:
```
@login_required
def medinfoupdate(request, patid):
# to get patient name and id in medinfo page accessing patientprofile data
patprofileedit = patientprofile.objects.get(id=patid)
try:
med = medInfo.objects.get(pat_ID=patid)
if request.method == 'GET':
form = medInfo_form(instance=med)
return render(request, 'pmp/medinfo.html', {'med': med, 'form':form, 'patprofileedit' : patprofileedit} )
except:
if request.method == 'GET':
return render(request, 'pmp/medinfo.html', {'patprofileedit' : patprofileedit} )
if request.method == 'POST':
try:
form = medInfo_form(request.POST, instance=med)
form.save()
return redirect(patientlist)
except ValueError:
return render(request, 'pmp/medinfo.html', {'form': form, 'error': 'Data entered is wrong!'})
```
Below is the error:
```
UnboundLocalError at /medinfo/pat-11
local variable 'med' referenced before assignment
Request Method: POST
Request URL: http://localhost:8000/medinfo/pat-11
Django Version: 4.0.4
Exception Type: UnboundLocalError
Exception Value:
local variable 'med' referenced before assignment
Exception Location: E:\py\patient_management_project\pmp\views.py, line 143, in medinfoupdate
Python Executable: C:\Users\Lenovo\AppData\Local\Programs\Python\Python310\python.exe
Python Version: 3.10.4
Python Path:
['E:\\py\\patient_management_project',
'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310\\python310.zip',
'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310\\DLLs',
'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310\\lib',
'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310',
'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages']
Server time: Sun, 12 Jun 2022 08:05:07 +0000
```
|
2022/06/12
|
[
"https://Stackoverflow.com/questions/72590538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15837741/"
] |
You have to make sure to understand the difference between string literals [1] and references to exported attribute(s) from the resources [2]. The way you are currently trying to get the output means it will output `aws_subnet.main.availability_zone[*]` as a string literal. To make sure you get the values you just need to remove the double quotes from the start and the end of the string literal:
```
output "list_of_az" {
value = aws_subnet.main[*].availability_zone
}
```
---
[1] <https://www.terraform.io/language/expressions/strings>
[2] <https://www.terraform.io/language/expressions/references>
|
If your goal is to display all the Availability Zones in a region, you don't necessarily need to iterate over the subnets you have created. You can simply display the names from the `data.aws_availability_zones` data source:
```hcl
data "aws_availability_zones" "available" {
state = "available"
}
output "list_of_az" {
value = data.aws_availability_zones.available[*].names
}
```
This will output something like:
```
list_of_az = [
tolist([
"us-east-1a",
"us-east-1b",
"us-east-1c",
"us-east-1d",
"us-east-1e",
"us-east-1f",
]),
]
```
Obviously, the output will depend on your current region.
| 12,461
|
32,833,575
|
I am new to Python and am stuck on this one exercise. I am supposed to enter a sentence and find the longest word. If two or more words share the longest length, it should return the first word. This is what I have so far:
```
def find_longest_word(word_list):
longest_word = ''
for word in word_list:
print(word, len(word))
words = input('Please enter a few words')
word_list = words.split()
find_longest_word(word_list)
```
But I do not know how to compare the word lengths and return the first/longest word.
|
2015/09/28
|
[
"https://Stackoverflow.com/questions/32833575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5386649/"
] |
Use `max` python built-in function, using as `key` parameter the `len` function. It would iterate over `word_list` applying `len` function and then returning the longest one.
```
def find_longest_word(word_list):
longest_word = max(word_list, key=len)
return longest_word
```
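Usage, for instance:
```
>>> find_longest_word(['I', 'am', 'new', 'to', 'python'])
'python'
```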
|
You shouldn't print out the length of each word. Instead, compare the length of the current `word` and the length of `longest_word`. If `word` is longer, you update `longest_word` to `word`. When you have been through all words, the longest word will be stored in `longest_word`.
Then you can print or return it.
```
def find_longest_word(word_list):
    longest_word = ''
    for word in word_list:
        if len(word) > len(longest_word):
            longest_word = word
    print(longest_word)
```
edit:
levi's answer is much more elegant; this is a solution with a simple for loop, and it is somewhat close to the one you tried to make yourself.
| 12,462
|
14,369,739
|
I'm used to using dicts to represent graphs in Python, but I'm running into some serious performance issues with large graphs and complex calculations, so I think I should cross over to using adjacency matrices to bypass the overhead of hash tables. My question is: if I have a graph of the form g: {node: {vertex: weight . . . } . . . }, what would be the most efficient way to convert that into a list-based adjacency matrix?
|
2013/01/16
|
[
"https://Stackoverflow.com/questions/14369739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1427661/"
] |
Probably not the most efficient, but a simple way to convert your format to an adjacency matrix on a list-basis could look like this:
```
g = {1:{2:.5, 3:.2}, 2:{4:.7}, 4:{5:.6, 3:.3}}
hubs = g.items() # list of nodes and outgoing vertices
size=max(map(lambda hub: max(hub[0], max(hub[1].keys())), hubs))+1 # matrix dimension is highest known node index + 1
matrix=[[None]*size for row in range(size)] # set up a matrix of the appropriate size
for node, vertices in hubs: # loop through every node in dictionary
for vertice, weight in vertices.items(): # loop through vertices
matrix[vertice][node] = weight # define adjacency of both nodes by assigning the vertice's weight
```
This works for directed graphs assuming that the nodes are represented simply by their indexes starting with zero. Here is a visualization and the resulting matrix for the graph processed in this sample:

```
0 1 2 3 4 5
------------------------------
0 |
1 |
2 | 0.5
3 | 0.2 0.3
4 | 0.7
5 | 0.6
```
If your graph is in fact undirected, I can think of some opportunities to optimize. In case the dictionary contains every node as a key with all its vertices listed, like `{1:{2:.2, 3:.3}, 2:{1:.2}, 3:{1:.3}}`, you could sort the corresponding list before traversing and limit the inner loop:
```
hubs = sorted(g.items())
for node, vertices in hubs:
for vertice, weight in reversed(sorted(vertices.items())):
if vertice >= node:
matrix[vertice][node] = weight
matrix[node][vertice] = weight
else: # do only care about vertices that haven't been saved before,
break # continue with next node when the current one won't introduce any more vertices
```
You can probably make this more efficient by using [binary search](http://docs.python.org/2/library/bisect.html). Since the resulting matrix will obviously be mirror-symmetric, you could go further and store only one half of it. The easiest way to do this is maybe to flip it on the vertical axis:
```
# unlike the one before, this sample doesn't rely on the dictionary containing every vertice twice
matrix=[[None]*size for row in range(size)]
for node, vertices in hubs:
for vertice, weight in vertices.items():
matrix[vertice][size-node-1] = weight
```
Because one half of the matrix is cut off, not every lookup for the vertice between nodes `(u,v)` will work, so you have to make sure that the column index is greater than the row's for the cell you look up:
```
u,v = sorted((u,v))
weight = matrix[v][u]
```
Good luck!
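If raw speed matters, a hedged numpy variant of the same conversion (assuming non-negative integer node labels, as above):
```
import numpy as np

g = {1: {2: .5, 3: .2}, 2: {4: .7}, 4: {5: .6, 3: .3}}

# Matrix dimension is the highest node index seen anywhere, plus one.
size = max(max(g), max(v for nbrs in g.values() for v in nbrs)) + 1
matrix = np.zeros((size, size))

for node, nbrs in g.items():
    for vertice, weight in nbrs.items():
        matrix[vertice, node] = weight  # same orientation as the list version
```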
|
Well, to implement an adjacency list, you can create two classes; the first one stores the information about each vertex.
```
# Vertex, which will represent each vertex in the graph.Each Vertex uses a dictionary
# to keep track of the vertices to which it is connected, and the weight of each edge.
class Vertex:
# Initialze a object of this class
# we use double underscore
def __init__(self, key):
# we identify the vertex with its key
self.id = key
# this stores the info about the various connections any object
# (vertex) of this class has using a dictionary which is called connectedTo.
# initially its not connected to any other node so,
self.connectedTo={}
# Add the information about connection between vertexes into the dictionary connectedTo
def addNeighbor(self,neighbor,weight=0):
# neighbor is another vertex we update the connectedTo dictionary ( Vertex:weight )
# with the information of this new Edge, the key is the vertex and
# the edge's weight is its value. This is the new element in the dictionary
self.connectedTo[neighbor] = weight
# Return a string containing a nicely printable representation of an object.
def __str__(self):
return str(self.id) + ' connectedTo: ' + str([x.id for x in self.connectedTo])
# Return the vertex's self is connected to in a List
def getConnections(self):
return self.connectedTo.keys()
# Return the id with which we identify the vertex, its name you could say
def getId(self):
return self.id
# Return the value (weight) of the edge (or arc) between self and nbr (two vertices)
def getWeight(self,nbr):
return self.connectedTo[nbr]
```
As you can see from the comments, each vertex stores a 'key' which is used to identify it, and it has a dictionary 'connectedTo' which holds the key-weight pairs of all connections from this vertex: the key of the connected vertex and the weight of the edge.
Now we can store a collection of such vertex's using the Graph class which can be implemented like this,
```
# The Graph class contains a dictionary that maps vertex keys to vertex objects (vertlist) and a count of the number of vertices in the graph
class Graph:
def __init__(self):
self.vertList = {}
self.numVertices = 0
# Returns a vertex which was added to the graph with given key
def addVertex(self,key):
self.numVertices = self.numVertices + 1
# create a vertex object
newVertex = Vertex(key)
# set its key
self.vertList[key] = newVertex
return newVertex
# Return the vertex object corresponding to the key - n
def getVertex(self,n):
if n in self.vertList:
return self.vertList[n]
else:
return None
# Returns boolean - checks if graph contains a vertex with key n
def __contains__(self,n):
return n in self.vertList
# Add's an edge to the graph using addNeighbor method of Vertex
def addEdge(self,f,t,cost=0):
# check if the 2 vertices involved in this edge exists inside
# the graph if not they are added to the graph
# nv is the Vertex object which is part of the graph
# and has key of 'f' and 't' respectively, cost is the edge weight
if f not in self.vertList:
nv = self.addVertex(f)
if t not in self.vertList:
nv = self.addVertex(t)
# self.vertList[f] gets the vertex with f as key, we call this Vertex
# object's addNeighbor with both the weight and self.vertList[t] (the vertice with t as key)
self.vertList[f].addNeighbor(self.vertList[t], cost)
# Return the list of all key's corresponding to the vertex's in the graph
def getVertices(self):
return self.vertList.keys()
# Returns an iterator object, which contains all the Vertex's
def __iter__(self):
return iter(self.vertList.values())
```
Here we have the Graph class, which holds the number of vertices in 'numVertices' and the dictionary 'vertList', which maps each key to the corresponding Vertex object (the class we just made) present in the graph.
We can create a graph and set it up like this:
```
# Now lets make the graph
the_graph=Graph()
print "enter the number of nodes in the graph"
no_nodes=int(raw_input())
# setup the nodes
for i in range(no_nodes):
print "enter the "+str(i+1)+" Node's key"
the_graph.addVertex(raw_input())
print "enter the number of edges in the graph"
no_edges=int(raw_input())
print "enter the maximum weight possible for any of edges in the graph"
max_weight=int(raw_input())
# setup the edges
for i in range(no_edges):
print "For the "+str(i+1)+" Edge, "
print "of the 2 nodes involved in this edge \nenter the first Node's key"
node1_key=raw_input()
print "\nenter the second Node's key"
node2_key=raw_input()
print "\nenter the cost (or weight) of this edge (or arc) - an integer"
cost=int(raw_input())
# add the edge with this info
the_graph.addEdge(node1_key,node2_key,cost)
```
If you want undirected graphs, then add the line `the_graph.addEdge(node2_key,node1_key,cost)`.
Thus the connection will be stored not just as a connected to b, but as a connected to b and b connected to a.
Also mind the indentation in both class implementations above; it might be incorrect.
| 12,465