| qid | question | date | metadata | response_j | response_k | __index_level_0__ |
|---|---|---|---|---|---|---|
5,898,555
|
I'm playing with the [pyflakes plugin for vim](https://github.com/kevinw/pyflakes-vim) and now when I open a python file I get the error messages in the screenshot [here](http://dl.dropbox.com/u/6114719/Screenshot.png)
Any ideas how to fix this?
Thanks in advance...
|
2011/05/05
|
[
"https://Stackoverflow.com/questions/5898555",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91748/"
] |
Could be an issue with the version of Python you're running versus what the package you're using expects. A quick Google for "Module getChildNodes python" got me to the page for the [Python compiler package](http://docs.python.org/library/compiler.html), which has one of those nice little "Deprecated" messages on it. So it might be that the pyflakes plugin is out of sync with the version of Python you have installed. `python -V` will show you what version you're running:
```
C:\projects\fun>python -V
Python 2.7.1
```
|
This is a bug in pyflakes and we cannot help you with this here.
Try filing an issue on [their git repository](https://github.com/kevinw/pyflakes-vim/issues).
| 15,494
|
16,794,663
|
In Python I'm trying to grab multiple inputs from a string using a regular expression; however, I'm having trouble. For the string:
```
inputs = 12 1 345 543 2
```
I tried using:
```
match = re.match(r'\s*inputs\s*=(\s*\d+)+',string)
```
However, this only returns the value `'2'`. I'm trying to capture all the values `'12','1','345','543','2'` but not sure how to do this.
Any help is greatly appreciated!
EDIT: Thank you all for explaining why this does not work and providing alternative suggestions. Sorry if this is a repeat question.
|
2013/05/28
|
[
"https://Stackoverflow.com/questions/16794663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877334/"
] |
You could try something like:
`re.findall("\d+", your_string)`.
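Applied to the exact string from the question, for instance:

```python
import re

s = "inputs = 12 1 345 543 2"
# findall returns every non-overlapping match of \d+ as a list of strings
values = re.findall(r"\d+", s)
print(values)  # ['12', '1', '345', '543', '2']
```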
|
You should look at this answer: <https://stackoverflow.com/a/4651893/1129561>
In short:
>
> In Python, this isn’t possible with a single regular expression: each capture of a group overrides the last capture of that same group (in .NET, this would actually be possible since the engine distinguishes between captures and groups).
>
>
>
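To see the behavior the quote describes, the question's own pattern can be run directly: the repeated group matches five times, but only the last capture survives.

```python
import re

s = "inputs = 12 1 345 543 2"
m = re.match(r"\s*inputs\s*=(\s*\d+)+", s)
# group(1) holds only the final repetition of the capturing group
print(m.group(1).strip())  # prints: 2
```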
| 15,500
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work?**
The idea is that when the camera detects motion, a script is executed. The script just echo's a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also moved the script to different directories, suspecting a permission issue. Still nothing happens. I have tried removing the `;` before the methods; still nothing happens. I have tried sudo, and I have tried executing it as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here's just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
You have to remove the `;`, which comments out the line:
; on\_motion\_detected python /home/pi/Desktop/Python/script.py
should be
```
on_motion_detected python /home/pi/Desktop/Python/script.py
```
|
Check that you have specified the process id properly in motion.conf:
**process\_id\_file /var/run/motion/motion.pid**
Once you have checked that, change the following settings to arbitrarily low values e.g.
**threshold 1**
**noise\_level 1**
| 15,505
|
63,400,324
|
I am relatively new to Python and pandas. I am trying to replicate a Battleship game. My goal is to locate the row and column that contain a 1 and store that location as the battleship location. I created a CSV file and it looks like this:
```
col0,col1,col2,col3,col4,col5
0,0,0,0,0,0
0,0,0,0,0,0
0,0,0,0,0,0
0,1,0,0,0,0
0,0,0,0,0,0
0,0,0,0,0,0
```
This is the code to read the created CSV file as a DataFrame.
```
import pandas
df = pandas.read_csv('/content/pandas_tutorial/My Drive/pandas/myBattleshipmap.csv',)
print(df)
```
How do I use a nested for loop and pandas' `iloc` to locate the row and column that contain the 1 (which is row 3 and column 1) and store that location as the battleship location?
I found that a nested for loop with pandas' `iloc` over the entire DataFrame looks something like this:
```
for x in range(len(df)):
for y in range(len(df)):
df.iloc[:,:]
```
Do I have to put the if statement to locate the row and column?
|
2020/08/13
|
[
"https://Stackoverflow.com/questions/63400324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11763345/"
] |
You really don't need to loop, you can use [`numpy.where`](https://numpy.org/doc/stable/reference/generated/numpy.where.html):
```
import pandas as pd
import numpy as np
df = pd.read_csv('/content/pandas_tutorial/My Drive/pandas/myBattleshipmap.csv',)
r, c = np.where(df.astype(bool))
print(r.tolist())
print(df.columns[c].tolist())
```
Output:
```
[3]
['col1']
```
---
### For loops like you were trying...
```
for x in range(df.shape[0]):
for y in range(df.shape[1]):
if df.iloc[x, y] == 1:
print(f'Row: {x}, Col: {df.columns[y]}')
```
Output:
```
Row: 3, Col: col1
```
|
Use `where` to turn all values that are not 1 to `NaN`, then stack will leave you with a MultiIndex Series, whose index gives you the (row\_label, col\_label) tuples of everything that was 1.
```
df.where(df.eq(1)).stack().index
#MultiIndex([(3, 'col1')],
# )
```
---
If you don't want the column names, get rid of the column labels. If you only expect a single location, you can grab it with `.item()`:
```
df.columns = range(df.shape[1])
print(f'The Battle Ship Location is: {df.where(df.eq(1)).stack().index.item()}')
#The Battle Ship Location is: (3, 1)
```
| 15,513
|
47,145,930
|
I read Susan Fowler's book "Production-Ready Microservices" and in several places (so far) I found:
* "Avoid Versioning Microservices and Endpoints" (page 26),
* "versioning microservices can easily become an organizational nightmare" (page 27),
* "In microservice ecosystems, the versioning of microservices is discouraged" (page 58).
Anyway, I have used all kinds of versioning across different projects: git tags, deb package versioning, Python package versioning, HTTP API versions, and I never had very big problems managing a project's versions. Besides this, I knew exactly which version to roll out in case of failures or bugs from customers.
Does anybody have a clue why this book blames microservice versioning so much, and what advice would you have regarding the topic?
|
2017/11/06
|
[
"https://Stackoverflow.com/questions/47145930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1526119/"
] |
Are you sure that she was not talking about incorporating the version into the **name** of the service or into the **name** of the endpoint?
A service called OrderProcessing\_2\_4\_1 with a versioned endpoint of get\_order\_2\_4\_1 is a very bad idea. OrderProcessing\_2\_4 with a versioned endpoint of get\_order\_2\_4 is a little less evil, but still problematic.
If your calls to microservices have to address endpoints that have a version number in the name, you will have a maintenance (and production) nightmare every time you change a service: every other service and client will have to be checked for references to your updated service.
That is different from having versions for your API, code, or services. Those are required if you are going to actually get many of the benefits of microservices.
Your orchestration function has to be able to find the right version of the service that matches the client's version requirements (Version 2.6.2 of client "Enter New Order" app requires service "OrderProcessing" that is at least version 2.4.0 in order to support the NATO product classification).
You may have multiple versions of a service in production at the same time (2.4.0 being deprecated but still servicing some clients, 2.4.1 being introduced, version 3.0.0 in beta for customers who are testing the newest UI and features prior to GA).
This is particularly true if you run 24/7 and have to dynamically update services.
The orchestration function is the place where services and endpoints are matched up, and when you introduce a new version of a service, you update the orchestration database to describe the versions of other services that are required. (My new version 2.4.1 of OrderProcessing requires version 2.2.0 or later of ProductManager, because that was when the NATO Product Classification was added to the product data.)
|
The author of the book is correct in that it is difficult to update the version of an API, especially a popular one. You will have to either hunt down all the users of the older version and have them upgrade, or support two versions of your software in production at the same time.
But you should still version your APIs, IMHO. The reason to support a version in your API is that you never know when you may have to change it. So add a version, but avoid ever changing it. In many cases it is easier to bring up a new API, or to extend an existing API in a way that does not break the current version contract.
BTW, you never know when new technology or design patterns will materialize that allow two versions of one API to coexist elegantly on the same software instance in production. If and when something like that comes out, you may see more version changes in APIs.
| 15,514
|
12,054,772
|
I'm new to Python (learning for only 2 weeks) and there's something I can't even attempt (I have been googling for an hour and couldn't find anything).
`file1` and `file2` are both CSV files.
I've got a function that looks like:
```
def save(file1, file2):
```
It is for `file2` to end up with the same content as `file1`. For example, when I do:
```
save(file1, file2)
```
`file2` should have the same content as `file1`.
Thanks in advance and sorry for an empty code. Any help would be appreciated!
|
2012/08/21
|
[
"https://Stackoverflow.com/questions/12054772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1614184/"
] |
Python has a standard module [`shutil`](https://docs.python.org/2/library/shutil.html) which is useful for these sorts of things.
If you need to write the code yourself, simply open two files (the input and the output), loop over the input file object reading lines, and write them into the output file.
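For the asker's exact `save(file1, file2)` signature, a minimal sketch with `shutil` (assuming both arguments are path strings):

```python
import shutil

def save(file1, file2):
    # Copy the bytes of file1 into file2, creating file2 if it
    # does not exist and overwriting it if it does
    shutil.copyfile(file1, file2)
```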
|
If you simply want to copy a file you can do this:
```
def save(file1, file2):
with open(file1, 'rb') as infile:
with open(file2, 'wb') as outfile:
outfile.write(infile.read())
```
this copies the file with name `file1` to the file with name `file2`. It really doesn't matter what the content of those files is.
| 15,515
|
38,709,118
|
I'm trying to validate a certificate with a CA bundle file. The original Bash command takes two file arguments like this;
```
openssl verify -CAfile ca-ssl.ca cert-ssl.crt
```
I'm trying to figure out how to run the above command in python subprocess whilst having ca-ssl.ca and cert-ssl.crt as variable strings (as opposed to files).
If I ran the command with variables (instead of files) in bash then this would work;
```
ca_value=$(<ca-ssl.ca)
cert_value=$(<cert-ssl.crt)
openssl verify -CAfile <(echo "$ca_value") <(echo "$cert_value")
```
However, I'm struggling to figure out how to do the above with Python, preferably without needing to use `shell=True`. I have tried the following, but it doesn't work and instead prints openssl's 'help' output:
```
certificate = ''' cert string '''
ca_bundle = ''' ca bundle string '''
def ca_valid(cert, ca):
ca_validation = subprocess.Popen(['openssl', 'verify', '-CAfile', ca, cert], stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=1)
ca_validation_output = ca_validation.communicate()[0].strip()
ca_validation.wait()
ca_valid(certificate, ca_bundle)
```
Any guidance/clues on what I need to look further into would be appreciated.
|
2016/08/01
|
[
"https://Stackoverflow.com/questions/38709118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1165419/"
] |
Bash process substitution `<(...)` in the end is supplying a file path as an argument to `openssl`.
You will need to make a helper function to create this functionality since Python doesn't have any operators that allow you to inline pipe data into a file and present its path:
```
import subprocess
def validate_ca(cert, ca):
with filearg(ca) as ca_path, filearg(cert) as cert_path:
ca_validation = subprocess.Popen(
['openssl', 'verify', '-CAfile', ca_path, cert_path],
stdout=subprocess.PIPE,
)
return ca_validation.communicate()[0].strip()
```
Where `filearg` is a context manager which creates a named temporary file with your desired text, closes it, hands the path to you, and then removes it after the `with` scope ends.
```
import os
import tempfile
from contextlib import contextmanager
@contextmanager
def filearg(txt):
with tempfile.NamedTemporaryFile('w', delete=False) as fh:
fh.write(txt)
try:
yield fh.name
finally:
os.remove(fh.name)
```
Anything accessing this temporary file (like the subprocess) needs to work inside the context manager.
By the way, the `Popen.wait()` is redundant, since `Popen.communicate()` already waits for termination.
|
If you want to use process substitution, you will *have* to use `shell=True`. This is unavoidable. The `<(...)` process substitution syntax is bash syntax; you simply must call bash into service to parse and execute such code.
Additionally, you have to ensure that `bash` is invoked, as opposed to `sh`. On some systems `sh` may refer to an old Bourne shell (as opposed to the Bourne-again shell `bash`) in which case process substitution will definitely not work. On some systems `sh` will invoke `bash`, but process substitution will still not work, because when invoked under the name `sh` the `bash` shell enters something called POSIX mode. Here are some excerpts from the `bash` man page:
>
> ...
>
>
> INVOCATION
>
>
> ... When invoked as sh, bash enters posix mode after the startup files are read. ....
>
>
> ...
>
>
> SEE ALSO
>
>
> ...
>
>
> <http://tiswww.case.edu/~chet/bash/POSIX> -- a description of posix mode
>
>
> ...
>
>
>
From the above web link:
>
> 28. Process substitution is not available.
>
>
>
`/bin/sh` seems to be the default shell in Python, whether you're using `os.system()` or `subprocess.Popen()`. So you'll have to specify the argument `executable='bash'`, or `executable='/bin/bash'` if you want to specify the full path.
This is working for me:
```
subprocess.Popen('printf \'argument: "%s"\\n\' verify -CAfile <(echo ca_value) <(echo cert_value);',executable='bash',shell=True).wait();
## argument: "verify"
## argument: "-CAfile"
## argument: "/dev/fd/63"
## argument: "/dev/fd/62"
## 0
```
---
Here's how you can actually embed the string values from variables:
```
bashEsc = lambda s: "'"+s.replace("'","'\\''")+"'";
ca_value = 'x';
cert_value = 'y';
cmd = 'printf \'argument: "%%s"\\n\' verify -CAfile <(echo %s) <(echo %s);'%(bashEsc(ca_value),bashEsc(cert_value));
subprocess.Popen(cmd,executable='bash',shell=True).wait();
## argument: "verify"
## argument: "-CAfile"
## argument: "/dev/fd/63"
## argument: "/dev/fd/62"
## 0
```
| 15,516
|
62,811,729
|
Mistakenly, I first installed Python 3.6 and then installed pip; then I installed Python 3.8. After that I checked the pip version, and it shows me:
```
pip 20.1.1 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)
```
Can I change it to:
```
pip 20.1.1 from /usr/local/lib/python3.8/dist-packages/pip (python 3.8)
```
|
2020/07/09
|
[
"https://Stackoverflow.com/questions/62811729",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2820094/"
] |
I believe you can do that if you simply remove `col2` from your select and group by. Because `col2` will no longer be returned, you should also remove the having statement. I think it should look something like this:
```sql
select
c.col1,
count(1)
from
table_1 a,
table_2 b,
table_3 c
where
a.key = b.key
and b.no = c.no
group by
c.col1;
```
I hope this helps.
|
Use `sum()` and group by only `col1`:
```
select c.col1, sum(a.col2) as total
from table_1 a, table_2 b, table_3 c
where a.key =b.key and b.no = c.no
group by c.col1;
```
**Output:**
```
c.col1 total
aa1 10
aa2 5
```
| 15,517
|
19,737,844
|
I know that the Jinja2 library allows me to pass datastore models from my Python code to HTML and access this data from inside the HTML code, as shown [in this example](https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates). However, Jinja2 isn't compatible with JavaScript, and I want to access the data inside my JavaScript code. What is the simplest templating library that allows me to iterate over my datastore entities in JavaScript? I've heard about things like Mustache and jQuery; I think they look a bit too complicated. Is there anything simpler?
|
2013/11/02
|
[
"https://Stackoverflow.com/questions/19737844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2559007/"
] |
You should create a Python controller which serves JSON-formatted data, which any JavaScript library (especially jQuery) can consume. Then set up the Jinja2 template to contain some JavaScript which calls, loads, and displays said data.
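As a sketch of the serialization half (framework-agnostic; `entities_to_json` and the sample entities are illustrative assumptions, and on App Engine the list would come from a datastore query):

```python
import json

def entities_to_json(entities):
    # Serialize a list of plain dicts so client-side JavaScript can
    # fetch this payload and iterate over it without any templating
    return json.dumps({"entities": entities})

print(entities_to_json([{"name": "greeting", "content": "hello"}]))
```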
|
It has nothing to do with compatibility. Jinja is server-side templating; you use JavaScript for client-side coding.
Using Jinja you can create HTML, which JavaScript can access like any normal HTML.
To send datastore entities to your client you can use Jinja to pass a Python list, or use a JSON web service.
| 15,520
|
66,412,526
|
I am quite new to python and I have a table of occupancy that looks like this:
```
| room   | free | date       | place  |
| room_1 | 0    | 2021-01-13 | Boston |
| room_2 | 1    | 2021-02-14 | Boston |
| room_2 | 0    | 2021-02-15 | Boston |
```
...
How can I calculate how often a room was free within a timeframe of a month or a week, for each room?
So that I could say: room\_1 was free 95 % of the time in February?
1 = free
0 = not free
I have the data in `csv` format.
|
2021/02/28
|
[
"https://Stackoverflow.com/questions/66412526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10094012/"
] |
Hopefully there aren't many holes in the Unity documentation, but when you do find one you can often find sample code in the [Unity Quickstarts](https://github.com/firebase/quickstart-unity) (as these are also used to validate changes to the Unity SDK).
From [`UIHandler.cs`](https://github.com/firebase/quickstart-unity/blob/master/auth/testapp/Assets/Firebase/Sample/Auth/UIHandler.cs) in the [Auth quickstart](https://github.com/firebase/quickstart-unity/tree/master/auth/testapp):
```
public Task SignInWithGameCenterAsync() {
var credentialTask = Firebase.Auth.GameCenterAuthProvider.GetCredentialAsync();
var continueTask = credentialTask.ContinueWithOnMainThread(task => {
if(!task.IsCompleted)
return null;
if(task.Exception != null)
Debug.Log("GC Credential Task - Exception: " + task.Exception.Message);
var credential = task.Result;
var loginTask = auth.SignInWithCredentialAsync(credential);
return loginTask.ContinueWithOnMainThread(HandleSignInWithUser);
});
return continueTask;
}
```
What's weird about GameCenter is that you don't manage the credential directly, rather you [request it from GameCenter](https://firebase.google.com/docs/reference/unity/class/firebase/auth/game-center-auth-provider#getcredentialasync) ([Here's PlayGames](https://firebase.google.com/docs/reference/unity/class/firebase/auth/play-games-auth-provider#getcredential) by comparison, which you have to [cache the result of `Authenticate` manually](https://firebase.google.com/docs/auth/unity/play-games#authenticate_with_firebase)).
So in your code, I'd go into the `if (success)` block and add:
```
GameCenterAuthProvider.GetCredentialAsync().ContinueWith(task => {
// I used ContinueWith
// be careful, I'm on a background thread here.
// If this is a problem, use ContinueWithOnMainThread
  if (task.Exception == null) {
    FirebaseAuth.DefaultInstance.SignInWithCredentialAsync(task.Result).ContinueWithOnMainThread(task => {
// I used ContinueWithOnMainThread
// You're on the Unity main thread here, so you can add or change scene objects
// If you're comfortable with threading, you can change this to ContinueWith
});
}
});
```
|
<https://firebase.google.com/docs/auth/ios/game-center> This might answer your question. You may have to implement this feature in Xcode when you export the project from Unity.
| 15,527
|
52,459,081
|
Good morning everyone!
I am new to programming and am learning Python. I am trying to create a function that converts each individual char in a string into its corresponding int and displays them one after another. The first error it generates is "c is not defined".
```
c=''
def encode(secret_message):
    for c in secret_message:
        int_message=+ord(c)
    return int_message
```
Example of what I want it to do:
```
secret_message='You' (this is a string)
return: 89 111 117 (this should be an int, not a list)
note: 'Y'=89, 'o'=111, 'u'=117
```
The idea is that encode takes in a parameter secret\_message. It then iterates through each char, converting it from a string to an int. Then it returns the entire message as ints.
I'm also not sure how to get each char to appear in int\_message. As of now, it looks like it will add all the ints together. I want it to simply place them together (like a string). Do I need to convert each value back to a string after I get the int values and then concatenate?
|
2018/09/22
|
[
"https://Stackoverflow.com/questions/52459081",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10401403/"
] |
It was very difficult to understand your code, but since you asked for the logic to improve your understanding, here is pseudocode you can refer to in order to correct your code accordingly.
```
Node delete (index i, Node n) // pass index and head reference node and return head
if (n==null) // if node is null
return null;
if (i==1) // if reached to node, which needs to be deleted, return next node reference.
return n.next;
 n.next = delete(i-1, n.next);
return n; // recursively return current node reference
```
|
After struggling, I managed to solve the problem. Here is the answer, but I am still not sure whether the complexity is O(n) or O(log n).
```
public void delete(int index){
    //check if the index is valid
    if((index<0)||(index>length())){
        System.out.println("Array out of bound!");
        return;
    }
    //pass the value head to temp only in the first run
    if(this.index==0)
        temp=head;
    //if the given index is zero then move the head to the next element and return
    if(index==0){
        head=head.next;
        return;
    }
    //if the list is empty or has only one element then move the head to null
    //(check head for null first to avoid a NullPointerException)
    if(head==null||head.next==null){
        head=null;
        return;
    }
    if(temp!=null){
        prev=temp;
        temp=temp.next;
        this.index=this.index+1;
        //if the given index is reached
        //then link the node prev to the node that comes after temp
        //and unlink temp
        if(this.index==index){
            prev.next=temp.next;
            temp=null;
            return;
        }
        //if not then call the function again
        delete(index);
    }
}
```
| 15,529
|
54,021,168
|
I am using Flask and Python 3 to upload a video to a server. The result is saved as `format(filename)+'result.jpg'`, where `filename` is the video name.
This image result was visible in the browser at the URL /video\_feed before the filename string was prepended to result.jpg.
How can I now access the URL to see result.jpg?
```
@app.route('/video_feed_now')
def video_feed_now():
#return send_file('result.jpg', mimetype='image/gif')
return send_file(format(filename)+'result.jpg', frame)
```
|
2019/01/03
|
[
"https://Stackoverflow.com/questions/54021168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9781290/"
] |
Using a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions):
```
return [x for x in xs if f(x) > 0]
```
Without using a list comprehension:
```
return filter(lambda x: f(x) > 0, xs)
```
Since you said it should return a list:
```
return list(filter(lambda x: f(x) > 0, xs))
```
|
Two solutions are possible using recursion; neither uses loops or comprehensions, which implement the iteration protocol internally.
**Method 1:**
```py
lst = list()
def foo(index):
if index < 0 or index >= len(xs):
return
if f(xs[index]) > 0:
lst.append(xs[index])
# print xs[index] or do something else with the value
foo(index + 1)
# call foo with index = 0
```
**Method 2:**
```py
lst = list()
def foo(xs):
if len(xs) <= 0:
return
if f(xs[0]) > 0:
lst.append(xs[0])
foo(xs[1:])
# call foo with xs
```
Both these methods create a new list consisting of the desired values. The second method uses list slicing; I am not sure whether slicing internally implements the iteration protocol or not.
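Picking an arbitrary predicate `f` and input list `xs` (both hypothetical, chosen only to exercise the sketch), Method 2 runs end to end like this:

```python
# Sample predicate and data, chosen only to exercise the sketch
def f(x):
    return x - 2  # positive exactly when x > 2

xs = [1, 2, 3, 4, 5]

lst = list()
def foo(xs):
    if len(xs) <= 0:
        return
    if f(xs[0]) > 0:
        lst.append(xs[0])
    foo(xs[1:])

foo(xs)
print(lst)  # [3, 4, 5]
```

The result matches the list comprehension `[x for x in xs if f(x) > 0]` from the first answer.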
| 15,531
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this, which somewhat answers my question, but it makes the "print" statement no longer print to the command line:
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
You can add this to your script:
```
import sys
sys.stdout = open('logfile', 'w')
```
This will make the print statements write to `logfile`.
If you want the option of printing to `stdout` and a file, you can try this:
```
class Tee(object):
def __init__(self, *files):
self.files = files
def write(self, obj):
for f in self.files:
f.write(obj)
f = open('logfile', 'w')
backup = sys.stdout
sys.stdout = Tee(sys.stdout, f)
print "hello world" # this should appear in stdout and in file
```
To revert to just printing to console, just restore the "backup"
```
sys.stdout = backup
```
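The question also asks about capturing the user's input, which neither redirection trick covers. One sketch (Python 3 syntax; `logged_input` is a hypothetical helper, not a library function) wraps `input()` so each prompt and reply also lands in the log:

```python
import io

def logged_input(prompt, log, read=input):
    # Read the reply, then record the prompt and reply in the log
    reply = read(prompt)
    log.write(prompt + reply + "\n")
    return reply

# Demo with an in-memory log and a canned "user" instead of a real prompt
log = io.StringIO()
name = logged_input("name? ", log, read=lambda p: "alice")
print(log.getvalue())  # name? alice
```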
|
Here is a program that does what you describe:
```
#! /usr/bin/python3
class Tee:
def write(self, *args, **kwargs):
self.out1.write(*args, **kwargs)
self.out2.write(*args, **kwargs)
def __init__(self, out1, out2):
self.out1 = out1
self.out2 = out2
import sys
sys.stdout = Tee(open("/tmp/log.txt", "w"), sys.stdout)
print("hello")
```
| 15,532
|
54,246,133
|
I am an experienced programmer in Ruby, Python, and JavaScript (specifically back-end Node.js); I have worked in Java, Perl, and C++, and I've used Lisp and Haskell academically, but I'm brand new to Scala and trying to learn some of its conventions.
I have a function that accepts a function as a parameter, similar to how a sort function accepts a comparator function. What is the more idiomatic way of implementing this?
Suppose this is the function that accepts a function parameter y:
```
object SomeMath {
def apply(x: Int, y: IntMath): Int = y(x)
}
```
Should `IntMath` be defined as a `trait` and different implementations of `IntMath` defined in different objects? (let's call this option A)
```
trait IntMath {
def apply(x: Int): Int
}
object AddOne extends IntMath {
def apply(x: Int): Int = x + 1
}
object AddTwo extends IntMath {
def apply(x: Int): Int = x + 2
}
AddOne(1)
// => 2
AddTwo(1)
// => 3
SomeMath(1, AddOne)
// => 2
SomeMath(1, AddTwo)
// => 3
```
Or should `IntMath` be a type alias for a function signature? (option B)
```
type IntMath = Int => Int
object Add {
def one: IntMath = _ + 1
def two: IntMath = _ + 2
}
Add.one(1)
// => 2
Add.two(1)
// => 3
SomeMath(1, Add.one)
// => 2
SomeMath(1, Add.two)
// => 3
```
but which one is more idiomatic scala?
Or are neither idiomatic? (option C)
My previous experience in functional languages leans me towards B, but I have never seen that before in Scala. On the other hand, though the `trait` seems to add unnecessary clutter, I have seen that implementation and it seems to work much more smoothly in Scala (since the object becomes callable with the `apply` function).
[**Update**] Fixed the example code where an `IntMath` type is passed into `SomeMath`. The syntactic sugar Scala provides, where an `object` with an `apply` method becomes callable like a function, gives the illusion that `AddOne` and `AddTwo` are functions and are passed like functions in option A.
|
2019/01/18
|
[
"https://Stackoverflow.com/questions/54246133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5158185/"
] |
Since Scala has explicit function types, I'd say that if you need to pass a function to your function, use the function type, i.e. your option B. They are explicitly there for this purpose.
It is not exactly clear what you mean by saying that you "have never seen that before in scala", BTW. A lot of standard library methods accept functions as their parameters, for example, collection transformation methods. Creating type aliases to convey the semantics of a more complex type is perfectly idiomatic Scala as well, and it does not really matter whether the aliased type is a function or not. For example, one of the core types in my current project is actually a type alias for a function, which conveys the semantics through a descriptive name and also allows conforming functions to be passed easily, without the need to explicitly create a subclass of some trait.
It is also important, I believe, to understand that syntax sugar of the `apply` method does not really have anything to do with how functions or functional traits are used. Indeed, `apply` methods do have this ability to be invoked just with parentheses; however, it does not mean that different types having an `apply` method, even with the same signature, are *interoperable*, and I think that this interoperability is what matters in this situation, as well as an ability to easily construct instances of such types. After all, in your specific example it only matters for *your* code whether you could use the syntax sugar on `IntMath` or not, but for users of your code an ability to easily construct an instance of `IntMath`, as well as an ability to pass some existing thing they already have as `IntMath`, is much more important.
With the `FunctionN` types, you have the benefit of being able to use the anonymous function syntax to construct instances of these types (actually, several syntaxes, at least these: `x => y`, `{ x => y }`, `_.x`, `method _`, `method(_)`). There was even no way to create instances of "Single Abstract Method" types before Scala 2.11, and even then it requires a compiler flag to actually enable this feature. This means that the users of your type will have to write either this:
```
SomeMath(10, _ + 1)
```
or this:
```
SomeMath(10, new IntMath {
def apply(x: Int): Int = x + 1
})
```
Naturally, the former approach is much clearer.
Additionally, `FunctionN` types provide a single common denominator of functional types, which improves interoperability. Functions in math, for example, do not have any kind of "type" except their signature, and you can use any function in place of any other function, as long as their signatures match. This property is beneficial in programming too, because it allows reusing definitions you already have for different purposes, reducing the number of conversions and therefore potential mistakes that you might do when writing these conversions.
Finally, there are indeed situations where you would want to create a separate type for your function-like interface. One example is that with full-fledged types, you have an ability to define a companion object, and companion objects participate in implicits resolution. This means that if instances of your function type are supposed to be provided implicitly, then you can take advantage of having a companion object and define some common implicits there, which would make them available for all users of the type without additional imports:
```
// You might even want to extend the function type
// to improve interoperability
trait Converter[S, T] extends (S => T) {
def apply(source: S): T
}
object Converter {
implicit val intToStringConverter: Converter[Int, String] = new Converter[Int, String] {
def apply(source: Int): String = source.toString
}
}
```
Here having a piece of implicit scope associated with the type is useful, because otherwise users of `Converter` would need to always import contents of some object/package to obtain default implicit definitions; with this approach, however, all implicits defined in `object Converter` will be searched by default.
However, these situations are not very common. As a general rule, I think, you should first try to use the regular function type. Only if you find that you *do* need a separate type, for a practical reason, should you create your own.
|
Another option is not to define anything and keep the function type explicit:
```
def apply(x: Int, y: Int => Int): Int = y(x)
```
This makes the code more readable by making it clear which arguments are data objects and which are function objects. (Purists will say that there is no distinction in a functional language, but I think this is a useful distinction in most real-world code, especially if you come from a more declarative background)
If the arguments or results become more complicated then they can be defined as their own types, but the function itself remains explicit:
```
def apply(x: MyValue, y: MyValue => MyValue2): MyValue2 = y(x)
```
Failing that, `B` is preferable because it allows you to pass functions as lambdas:
```
SomeMath(1, _ + 3)
```
`A` would require you to declare a new instance of `IntMath` which is more cumbersome and less idiomatic.
| 15,541
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits, regardless of how it's called. It'll eventually be started from Python and Java applications, and some form of cleanup is needed when the script exits. Are there any options I should look into, or any way to rewrite the script so it cleans itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
Send a signal to the group.
So instead of `kill 13231` do:
```
kill -- -13231
```
If you're starting from python then have a look at <http://www.pixelbeat.org/libs/subProcess.py>, which shows how to mimic the shell in starting and killing a group.
|
@Patrick's answer almost did the trick, but it doesn't work if the *parent* process of your *current* shell is in the same group (it kills the parent too).
I found this to be better:
`trap 'pkill -P $$' EXIT`
See [here](https://unix.stackexchange.com/a/124148) for more info.
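When the script is launched from Python (as the question anticipates), the same group-wide kill can be done from the parent side. A minimal sketch, assuming a POSIX system and using `start_new_session` plus `os.killpg`; the helper name is mine:

```python
import os
import signal
import subprocess
import time

def run_and_kill_group(cmd, grace=0.5):
    # start_new_session=True puts the child into its own process group,
    # so signalling that group reaches the script *and* its children.
    proc = subprocess.Popen(cmd, start_new_session=True)
    time.sleep(grace)
    # Signal the whole group rather than just the immediate child.
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    proc.wait()
    return proc.returncode

rc = run_and_kill_group(["sleep", "60"])
print(rc)  # negative: the process died from a signal, not a normal exit
```

A negative return code from `wait()` is how `subprocess` reports death by signal, so this doubles as a check that the kill actually landed.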
| 15,542
|
22,415,345
|
I have a unicode string in python code:
```
name = u'Mayte_Martín'
```
I would like to use it with a SPARQL query, which means that I should encode the string using 'utf-8' and use urllib.quote\_plus or requests.quote on it. However, both these quote functions behave strangely, as can be seen when they are used with and without the 'safe' argument.
```
from urllib import quote_plus
```
Without 'safe' argument:
```
quote_plus(name.encode('utf-8'))
Output: 'Mayte_Mart%C3%ADn'
```
With 'safe' argument:
```
quote_plus(name.encode('utf-8'), safe=':/')
Output:
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-164-556248391ee1> in <module>()
----> 1 quote_plus(v, safe=':/')
/usr/lib/python2.7/urllib.pyc in quote_plus(s, safe)
1273 s = quote(s, safe + ' ')
1274 return s.replace(' ', '+')
-> 1275 return quote(s, safe)
1276
1277 def urlencode(query, doseq=0):
/usr/lib/python2.7/urllib.pyc in quote(s, safe)
1264 safe = always_safe + safe
1265 _safe_quoters[cachekey] = (quoter, safe)
-> 1266 if not s.rstrip(safe):
1267 return s
1268 return ''.join(map(quoter, s))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 10: ordinal not in range(128)
```
The problem seems to be with the rstrip function. I tried to make some changes and call it as...
```
quote_plus(name.encode('utf-8'), safe=u':/'.encode('utf-8'))
```
But that did not solve the issue. What could be the issue here?
|
2014/03/14
|
[
"https://Stackoverflow.com/questions/22415345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/464476/"
] |
I'm answering my own question, so that it may help others who face the same issue.
This particular issue arises when you make the following import in the current workspace before executing anything else.
```
from __future__ import unicode_literals
```
This has somehow turned out to be incompatible with the following sequence of code.
```
from urllib import quote_plus
name = u'Mayte_Martín'
quote_plus(name.encode('utf-8'), safe=':/')
```
The same code without importing unicode\_literals works fine.
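For what it's worth, under Python 3 the str/bytes distinction removes the ambiguity, so the equivalent call works regardless of `unicode_literals` (a quick sketch; note `quote_plus` lives in `urllib.parse` there):

```python
# -*- coding: utf-8 -*-
from urllib.parse import quote_plus  # Python 3 location of quote_plus

name = u'Mayte_Martín'
# Bytes in, percent-encoded str out; the 'safe' characters are left alone.
print(quote_plus(name.encode('utf-8'), safe=':/'))  # Mayte_Mart%C3%ADn
```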
|
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import urllib
name = u'Mayte_Martín'
print urllib.quote_plus(name.encode('utf-8'), safe=':/')
```
works without problem for me (Py 2.7.9, Debian)
(I don't know the answer, but I don't have enough reputation to comment, so I'm posting this here)
| 15,552
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
>
> Unhandled exception type FileNotFoundException myclass.java /myproject/src/mypackage
>
>
>
This is a compiler error. Eclipse is telling you that your program does not compile to java byte code (so of course you can't run it). For now, you can fix it by simply declaring that your program may throw this exception. Like so:
```
public static void main(String[] args) throws FileNotFoundException {
```
`FileNotFoundException` is a "checked exception" (google this) which means that the code has to state what the JVM should do if it is encountered. In code, a try-catch block or a throws declaration indicate to the JVM how to handle the exception.
For future reference, please note that the red squiggly underline in Eclipse means there is a compiler error. If you hover the mouse over the problem, Eclipse will usually suggest some very good solutions. In this case, one suggestion would be to "add a throws clause to main".
|
This is a compiler error. Eclipse is telling you that your program does not compile to java byte code (so of course you can't run it). For now, you can fix it by simply declaring that your program may throw this exception. Like so:
```
public static void main(String[] args) throws IOException {
}
```
| 15,555
|
16,696,225
|
How to instantiate a class if its name is given as a string variable (i.e. dynamically instantiate an object of the class)? Or alternatively, how can the following PHP 5.3+ code
```
<?php
namespace Foo;
class Bar {};
$classname = 'Foo\Bar';
$bar = new $classname();
```
be spelled in Python?
**Also see**
[Does python have an equivalent to Java Class.forName()?](https://stackoverflow.com/questions/452969/does-python-have-an-equivalent-to-java-class-forname)
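For reference, one possible Python spelling of the PHP snippet above is an `importlib` lookup (a hedged sketch; the `instantiate` helper is my own name, not a standard API):

```python
import importlib

def instantiate(qualified_name, *args, **kwargs):
    """Instantiate a class given a dotted 'package.module.ClassName' string."""
    module_name, _, class_name = qualified_name.rpartition('.')
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(*args, **kwargs)

obj = instantiate('collections.OrderedDict')
print(type(obj).__name__)  # OrderedDict
```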
|
2013/05/22
|
[
"https://Stackoverflow.com/questions/16696225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/544463/"
] |
What I would do is move the call to `get_int` into the condition of the while loop:
```
int main(void)
{
int integers;
printf("Enter some integers. Enter 0 to end.\n");
while ((integers = get_int()) != 0)
{
printf("%d is a number\n", integers);
}
return(0);
} // end main
```
The problem with your existing code is that between calling `get_int()` and printing the value, you're not checking to see if it's returned your sentinel of `0`.
The other option would be to add an `if (integers == 0) { break; }` condition in between,
but in my mind doing the assignment in the condition is cleaner.
|
You are correct to suspect that you need to retool the `while` loop. Did you try something like this?
```
for (;;)
{
integers = get_int();
if (integers == 0) break;
printf("%d is a number\n", integers);
}
```
Also, your `get_int` would be better written with `fgets` (or `getline` if available) and `strtol`. `scanf` is seductively convenient but almost always more trouble than it's worth.
| 15,565
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google Drive requires the webhook address to be verified, so is it possible to verify an API Gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please advise on how I can make the verification succeed!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
@Atihska, it seems you have setup this API:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification
```
From what I understand, Google Drive's HTML tag verification method will try to verify the metadata in the **home page**. As per Google, the home page here is:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/
```
But the above URL won't work because it doesn't have a stage name (like "prod") in it.
The right way to do this would be to use a custom domain name. So, you will need to buy a domain name like foodomain.com and use [custom domain](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) name option in API Gateway to point to your API. That way, you can make **foodomain.com** (home page) to point to **<https://x8f3>\*\*\*\*\*\*.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification**
Also, you can simply use [Mock integration](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html) instead of Lambda as this is just static content.
|
I don't know for sure how the registration process works for verifying the webhook address, but it is certainly possible to configure the webhook itself in API Gateway.
API Gateway supports [custom domain names](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html "custom domain names") like ***api.example.com*** if Google doesn't accept the default API domain name.
Edit:
Based on those options, it seems like you could use the default endpoint xxxx.execute-api...amazonaws.com if you configure the HTML meta tag.
You could do this by setting up a GET method (on, I guess, the root resource) that is a MOCK integration. That integration response can return static content, so in the integration response section you could paste whatever HTML Google is looking for. You would probably also need to set the response 'Content-Type' header to 'text/html'.
| 15,570
|
17,050,377
|
I'm following the tutorial "Think Python" and I'm supposed to install the package called swampy. I'm running python 2.7.3 although I also have python 3 installed. I extracted the package and placed it in site-packages:
C:\Python27\Lib\site-packages\swampy-2.1.1
C:\Python31\Lib\site-packages\swampy-2.1.1
But when i try to import a module from it within python:
```
import swampy.TurtleWorld
```
I just get no module named swampy.TurtleWorld.
I'd really appreciate it if someone could help me out, here's a link to the lesson if that helps:
<http://www.greenteapress.com/thinkpython/html/thinkpython005.html>
|
2013/06/11
|
[
"https://Stackoverflow.com/questions/17050377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2475605/"
] |
>
> I extracted the package and placed it in site-packages:
>
>
>
No, that's the wrong way of "installing" a package. Python packages come with a `setup.py` script that should be used to install them. Simply do:
```
python setup.py install
```
And the module will be installed correctly in the site-packages of the python interpreter you are using. If you want to install it for a specific python version use `python2`/`python3` instead of `python`.
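After installing, one way to check which interpreter actually sees the package is `importlib.util.find_spec` (a sketch; `where_installed` is just an illustrative helper, and `swampy` will resolve only once it has been installed for that interpreter):

```python
import importlib.util

def where_installed(name):
    """Return the file a top-level module would load from, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print(where_installed("json"))    # a path inside this interpreter's stdlib
print(where_installed("swampy"))  # None until the package is installed here
```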
|
If anyone else is having trouble with this on Windows, I just added my site-packages directory to my PATH variable and it worked like any normal module import.
```
C:\Python34\Lib\site-packages
```
Hope it helps.
| 15,579
|
70,393,570
|
I am trying to solve a question in which I am asked to use the property method to count the number of times circles are created. Below is the code for the same.
```
import os
import sys
#Add Circle class implementation below
class Circle:
counter = 0
def __init__(self,radius):
self.radius = radius
Circle.counter = Circle.counter + 1
def area(self):
return self.radius*self.radius*3.14
def counters():
print(Circle.counter)
no_of_circles = property(counter)
if __name__ == "__main__":
res_lst = list()
lst = list(map(lambda x: float(x.strip()), input().split(',')))
for radius in lst:
res_lst.append(Circle(radius).area())
print(str(res_lst), str(Circle.no_of_circles))
```
The above code gives correct output for the area but counter should be 3 and instead I am getting below output . Below is the output for input = 1,2,3
```
[3.14, 12.56, 28.26] <property object at 0x0000024AB3234D60>
```
I have tried everything but no luck. In the main section of the code no\_of\_circles is called as Circle.no\_of\_circles which suggests me that it will use property method of python. But the output is wrong. Please help me find where I am going wrong.
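For context on the symptom: `property()` expects a getter function, but the code hands it the plain integer attribute, so `Circle.no_of_circles` is just a property object, which is exactly what gets printed. One working arrangement (my sketch, not necessarily the expected solution) exposes the count through a real getter:

```python
class Circle:
    counter = 0

    def __init__(self, radius):
        self.radius = radius
        Circle.counter += 1

    def area(self):
        return self.radius * self.radius * 3.14

    @property
    def no_of_circles(self):
        # A property getter takes the instance; accessing it *via an
        # instance* returns the count instead of the property object.
        return Circle.counter

circles = [Circle(r) for r in (1.0, 2.0, 3.0)]
print(circles[-1].no_of_circles)  # 3
```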
|
2021/12/17
|
[
"https://Stackoverflow.com/questions/70393570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10254216/"
] |
A small calculation may help:
```
...
WHERE
([user_id] = 1) AND
([year] * 100 + [week_number]) BETWEEN 202152 AND 202205
```
or
```
...
WHERE
([user_id] = 1) AND
(202152 <= [year] * 100 + [week_number]) AND
([year] * 100 + [week_number] <= 202205)
```
|
You could...use variables:
```
DECLARE @myUser int = 1,
@startYear int = 2021,
@endYear int = 2022,
@startWeek int = 5,
@endWeek INT = 13;
SELECT *
FROM [db].[dbo].[table]
WHERE [user_id] = @myUser
AND (
(@startYear = [year] AND @startWeek = [week_number] AND @startYear = @endYear AND @startWeek = @endWeek) --the selection is only one week
        OR ([year] = @startYear AND [week_number] >= @startWeek AND [week_number] <= @endWeek AND @startYear = @endYear AND @endWeek > @startWeek) --same year, multiple weeks
OR (@endYear > @startYear -- Spans multiple years
AND (
([year] >= @startYear AND [year] < @endYear AND [week_number] >= @startWeek ) --anything after start week for each year that is before the end year
OR ([year] <= @endYear AND [year] > @startYear AND [week_number] <= @endWeek) --anything before end week for any year after the start year
)
);
```
| 15,580
|
46,032,570
|
I have a Django REST backend, and it has a `/users` endpoint where I can add new users through `POST` method from frontend.
`/users` endpoint url:
`http://192.168.201.211:8024/users/`
In this endpoint I can view all users information and add new user, so I must avoid others entry it except Administrator. I create a superuser `admin` with password `admin123` by `python manage.py createsuperuser`.
My question is, if I want to do an HTTP `POST` from the frontend (I use Angular), I have to pass the Administrator's user name and password, `admin` and `admin123`, along with the `POST` header information. So anyone who checks the frontend source code would learn the user name and password.
Is there any other way to do this `Authentication` without exposing the Administrator's user name and password to others?
|
2017/09/04
|
[
"https://Stackoverflow.com/questions/46032570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2803344/"
] |
From the docs:
You have to download and install the ARCore Services.
>
> You must use a supported, physical device. ARCore does not support virtual devices such as the Android Emulator. To prepare your device:
>
>
> Enable developer options
>
> Enable USB debugging
>
> Download the [ARCore Service](https://github.com/google-ar/arcore-android-sdk/releases/download/sdk-preview/arcore-preview.apk), then install it with the following adb command:
>
> `adb install -r -d arcore-preview.apk`
>
>
>
See: <https://developers.google.com/ar/develop/java/getting-started> Prepare your device
|
I think you need to Install Tango core.
<https://play.google.com/store/apps/details?id=com.google.tango&hl=zh_TW>
| 15,581
|
9,319,767
|
I have a color photo of an apple; how can I show only its outline (inside white, background black) with python/PIL?
|
2012/02/16
|
[
"https://Stackoverflow.com/questions/9319767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1212200/"
] |
Something like this should work.
```
from PIL import Image, ImageFilter
image = Image.open('your_image.png')
image = image.filter(ImageFilter.FIND_EDGES)
image.save('new_name.png')
```
If that doesn't give you the result you are looking for, then you can try implementing Prewitt, Sobel, or Canny edge detection using PIL, Python, and other libraries; see this related [question](https://stackoverflow.com/questions/1603688/python-image-recognition) and the following [example](http://cyroforge.wordpress.com/2012/01/21/canny-edge-detection/).
If you are trying to do particle detection / analysis rather than just edge detection, you can try using [py4ij](http://code.google.com/p/py4ij/) to call the ImageJ method you link to, which should give you the same result, or try another particle-analysis Python library, [EMAN](http://www.msg.ucsf.edu/local/programs/eman/index.html). Alternately, you can write a particle detection algorithm yourself using PIL, SciPy and NumPy.
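Putting the filter and a binarizing step together on a synthetic image gives a white-outline-on-black result end to end (a hedged sketch assuming Pillow; the threshold of 40 would need tuning for a real photo):

```python
from PIL import Image, ImageDraw, ImageFilter

# Synthetic stand-in for the photo: a filled disc on a dark background.
img = Image.new('L', (64, 64), 0)
ImageDraw.Draw(img).ellipse((16, 16, 48, 48), fill=200)

edges = img.filter(ImageFilter.FIND_EDGES)
# Binarize: white outline, black everywhere else.
outline = edges.point(lambda p: 255 if p > 40 else 0)
print(outline.getextrema())  # (0, 255): only pure black and white remain
```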
|
If your object and background have fairly well contrast
```
from PIL import Image
image = Image.open(your_image_file)
mask=image.convert("L")
th=150 # the value has to be adjusted for an image of interest
mask = mask.point(lambda i: i < th and 255)
mask.save(file_where_to_save_result)
```
If the higher contrast is in one band (of the 3 colors), you may split the image into bands instead of converting it to grey scale.
If the image or background is fairly complicated, more sophisticated processing will be required.
| 15,582
|
51,434,996
|
I have the following code:
```
#!/usr/bin/python2.7
import json, re, sys
x = json.loads('''{"status":{"code":"200","msg":"ok","stackTrace":null},"dbTimeCost":11,"totalTimeCost":12,"hasmore":false,"count":5,"result":[{"_type":"Compute","_oid":"555e262fe4b059c7fbd6af72","label":"lvs3b01c-ea7c.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27d8e4b059c7fbd6bab9","label":"lvs3b01c-9073.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27c9e4b059c7fbd6ba7e","label":"lvs3b01c-b14b.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2798e4b0800601a83b0f","label":"lvs3b01c-6ae2.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2693e4b087582f108200","label":"lvs3b01c-a228.stratus.lvs.ebay.com"}]}''')
print x['result'][4]['label']
sys.exit()
```
The desired result should be all the labels. But, when I run it, it only prints the first label. What am I doing wrong here?
And also, before I could figure out that "result" was the key to use, I needed to copy and paste the json data to a site like "jsonlint.com" to reformat it in a readable fashion. Im wondering if there's a better way to do that, preferably without having to copy and paste the json data anywhere.
So two questions:
1. How do I get the above code to list all the labels
2. How do I know the key to the field I want, without having to reformat the given ugly one liner json data
|
2018/07/20
|
[
"https://Stackoverflow.com/questions/51434996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7005590/"
] |
Just change the code that is used to print the label:
```
print x['result'][4]['label'] # here you are printing only the label at index 4
```
to
```
print [i["label"] for i in x['result']]
```
|
Using a list comprehension to get all label
**Ex:**
```
import json, re, sys
x = json.loads('''{"status":{"code":"200","msg":"ok","stackTrace":null},"dbTimeCost":11,"totalTimeCost":12,"hasmore":false,"count":5,"result":[{"_type":"Compute","_oid":"555e262fe4b059c7fbd6af72","label":"lvs3b01c-ea7c.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27d8e4b059c7fbd6bab9","label":"lvs3b01c-9073.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27c9e4b059c7fbd6ba7e","label":"lvs3b01c-b14b.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2798e4b0800601a83b0f","label":"lvs3b01c-6ae2.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2693e4b087582f108200","label":"lvs3b01c-a228.stratus.lvs.ebay.com"}]}''')
print [i["label"] for i in x['result']]
sys.exit()
```
**Output:**
```
[u'lvs3b01c-ea7c.stratus.lvs.ebay.com', u'lvs3b01c-9073.stratus.lvs.ebay.com', u'lvs3b01c-b14b.stratus.lvs.ebay.com', u'lvs3b01c-6ae2.stratus.lvs.ebay.com', u'lvs3b01c-a228.stratus.lvs.ebay.com']
```
You can use `pprint` to view your JSON in a better format.
**Ex:**
```
import pprint
pprint.pprint(x)
```
**Output:**
```
{u'count': 5,
u'dbTimeCost': 11,
u'hasmore': False,
u'result': [{u'_oid': u'555e262fe4b059c7fbd6af72',
u'_type': u'Compute',
u'label': u'lvs3b01c-ea7c.stratus.lvs.ebay.com'},
{u'_oid': u'555e27d8e4b059c7fbd6bab9',
u'_type': u'Compute',
u'label': u'lvs3b01c-9073.stratus.lvs.ebay.com'},
{u'_oid': u'555e27c9e4b059c7fbd6ba7e',
u'_type': u'Compute',
u'label': u'lvs3b01c-b14b.stratus.lvs.ebay.com'},
{u'_oid': u'555e2798e4b0800601a83b0f',
u'_type': u'Compute',
u'label': u'lvs3b01c-6ae2.stratus.lvs.ebay.com'},
{u'_oid': u'555e2693e4b087582f108200',
u'_type': u'Compute',
u'label': u'lvs3b01c-a228.stratus.lvs.ebay.com'}],
u'status': {u'code': u'200', u'msg': u'ok', u'stackTrace': None},
u'totalTimeCost': 12}
```
| 15,585
|
47,422,284
|
How can I elegantly do it in `go`?
In python I could use attribute like this:
```
def function():
function.counter += 1
function.counter = 0
```
Does `go` have the same opportunity?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47422284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5537840/"
] |
For example,
`count.go`:
```
package main
import (
"fmt"
"sync"
"time"
)
type Count struct {
mx *sync.Mutex
count int64
}
func NewCount() *Count {
return &Count{mx: new(sync.Mutex), count: 0}
}
func (c *Count) Incr() {
c.mx.Lock()
c.count++
c.mx.Unlock()
}
func (c *Count) Count() int64 {
c.mx.Lock()
count := c.count
c.mx.Unlock()
return count
}
var fncCount = NewCount()
func fnc() {
fncCount.Incr()
}
func main() {
for i := 0; i < 42; i++ {
go fnc()
}
time.Sleep(time.Second)
fmt.Println(fncCount.Count())
}
```
Output:
```
$ go run count.go
42
```
Also, run the race detector,
```
$ go run -race count.go
42
```
See:
[Introducing the Go Race Detector](https://blog.golang.org/race-detector).
[Benign data races: what could possibly go wrong?](https://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong).
[The Go Memory Model](https://golang.org/ref/mem)
---
And here's a racy solution ([@maerics answer](https://stackoverflow.com/a/47422446/221700) is racy for the same reason),
```
package main
import (
"fmt"
"time"
)
var fncCount = 0
func fnc() {
fncCount++
}
func main() {
for i := 0; i < 42; i++ {
go fnc()
}
time.Sleep(time.Second)
fmt.Println(fncCount)
}
```
Output:
```
$ go run racer.go
39
```
And, with the race detector,
Output:
```
$ go run -race racer.go
==================
WARNING: DATA RACE
Read at 0x0000005b5380 by goroutine 7:
main.fnc()
/home/peter/gopath/src/so/racer.go:11 +0x3a
Previous write at 0x0000005b5380 by goroutine 6:
main.fnc()
/home/peter/gopath/src/so/racer.go:11 +0x56
Goroutine 7 (running) created at:
main.main()
/home/peter/gopath/src/so/racer.go:16 +0x4f
Goroutine 6 (finished) created at:
main.main()
/home/peter/gopath/src/so/racer.go:16 +0x4f
==================
42
Found 1 data race(s)
exit status 66
$
```
|
```
var doThingCounter = 0
func DoThing() {
// Do the thing...
doThingCounter++
}
```
| 15,587
|
65,398,433
|
**Problem description:** I want to create a program that can update one whole row (or cells in this row within given range) in one single line (i.e. one single API request).
This is what've seen in the **documentation**, that was related to my problem:
```py
# Updates A2 and A3 with values 42 and 43
# Note that update range can be bigger than values array
worksheet.update('A2:B4', [[42], [43]])
```
This is how I tried to implement it into **my program**:
```py
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [[str(el)] for el in list_of_auctions[rowNumber]])
```
This is what printing `[[str(el)] for el in list_of_auctions[rowNumber]]` looks like:
```py
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60'], ['urlLink'], ['100'], ['250'], ['20'], [''], [' ,'], ['0']]
```
As far as I can tell, everything here is according to the docs, however I get this **output**:
```
error from callback <bound method SocketHandler.handle_message of <amino.socket.SocketHandler object at 0x00000196F8C46070>>: {'code': 400, 'message': "Requested writing within range ['Sheet1'!A1:H1], but tried writing to row [2]", 'status': 'INVALID_ARGUMENT'}
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\websocket\_app.py", line 344, in _callback
callback(*args)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 80, in handle_message
self.client.handle_socket_message(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\client.py", line 345, in handle_socket_message
return self.callbacks.resolve(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 204, in resolve
return self.methods.get(data["t"], self.default)(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 192, in _resolve_chat_message
return self.chat_methods.get(key, self.default)(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 221, in on_text_message
def on_text_message(self, data): self.call(getframe(0).f_code.co_name, objects.Event(data["o"]).Event)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 209, in call
handler(data)
File "C:\Users\1\Desktop\python bots\auction-bot\bot.py", line 315, in on_text_message
bid_rub(link, data.message.author.nickname, data.message.author.userId, id, linkto)
File "C:\Users\1\Desktop\python bots\auction-bot\bot.py", line 162, in bid_rub
sheet.update(f'A{ifExisting + 1}:H{ifExisting + 1}', [[str(el)] for el in list_of_auctions[ifExisting]])
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\utils.py", line 592, in wrapper
return f(*args, **kwargs)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\models.py", line 1096, in update
response = self.spreadsheet.values_update(
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\models.py", line 235, in values_update
r = self.client.request('put', url, params=params, json=body)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\client.py", line 73, in request
raise APIError(response)
```
**Question:** I cannot figure out what the issue could be here; I have been trying to tackle this for quite a while.
|
2020/12/21
|
[
"https://Stackoverflow.com/questions/65398433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13682294/"
] |
The error indicates that the data you are trying to write has more rows than the rows in the range. Each list inside your list represent single row of data in the spreadsheet.
In your example:
```
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60'], ['urlLink'], ['100'], ['250'], ['20'], [''], [' ,'], ['0']]
```
It represents 8 rows of data.
```
row 1 = ['068222bb-c251-47ad-8c2a-e7ad7bad2f60']
row 2 = ['urlLink']
row 3 = ['100']
row 4 = ['250']
row 5 = ['20']
row 6 = ['']
row 7 = [' ,']
row 8 = ['0']
```
If you want to write into a single row, the format should be:
```
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60', 'urlLink', '100', '250', '20', '', ' ,', '0']]
```
To solve this, remove the `[` and `]` in `[str(el)]` and add another `[` and `]` around `str(el) for el in list_of_auctions[rowNumber]`, or use itertools.chain() to make a single list within a list.
Your code should look like this:
```
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [[str(el) for el in list_of_auctions[rowNumber]]])
or
import itertools
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [list(itertools.chain(str(el) for el in list_of_auctions[rowNumber]))])
```
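The shape difference can be checked without touching the API at all; each inner list is one spreadsheet row:

```python
values = ['068222bb', 'urlLink', '100', '250']

as_rows = [[str(el)] for el in values]     # 4 rows x 1 column -> spills past row 1
as_one_row = [[str(el) for el in values]]  # 1 row x 4 columns -> fits an A:H-style range

print(len(as_rows), len(as_rows[0]))        # 4 1
print(len(as_one_row), len(as_one_row[0]))  # 1 4
```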
Reference:
----------
[Writing to a single range](https://developers.google.com/sheets/api/guides/values#writing_to_a_single_range)
[Python itertools](https://docs.python.org/3/library/itertools.html)
|
Hi, you may try the following:
```
def gs_writer(sheet_name, dataframe, sheet_url, boolean, row, col):
    import gspread
    from gspread_dataframe import get_as_dataframe, set_with_dataframe
    import google.oauth2
    from oauth2client.service_account import ServiceAccountCredentials
    scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']
    credentials = ServiceAccountCredentials.from_json_keyfile_name('gsheet_credentials.json', scope)
    gc = gspread.authorize(credentials)
    sht2 = gc.open_by_url(sheet_url)
    sht = sht2.worksheet(sheet_name)
    set_with_dataframe(sht, dataframe, resize=boolean, row=row, col=col)
```
Now after defining this function, you may simply get your required data in a dataframe and input your row and col number where you want this printed, like this:
```
gs_writer("sheet_name",df,sheet_url,False,1,1)
```
This should write your data starting from A1
| 15,589
|
42,214,228
|
In numpy and tensorflow it's possible to add matrices (or tensors) of different dimensionality if the shape of smaller matrix is a suffix of bigger matrix. This is an example:
```
x = np.ndarray(shape=(10, 7, 5), dtype = float)
y = np.ndarray(shape=(7, 5), dtype = float)
```
For these two matrices operation `x+y` is a shortcut for:
```
for a in range(10):
    for b in range(7):
        for c in range(5):
            result[a,b,c] = x[a,b,c] + y[b,c]
```
In my case however I have matrices of shape `(10,7,5)` and `(10,5)` and likewise I would like to perform the `+` operation using similar logic:
```
for a in range(10):
    for b in range(7):
        for c in range(5):
            result[a,b,c] = x[a,b,c] + y[a,c]  # note: y is indexed by a here, not b
```
In this case however the `x+y` operation fails, as neither numpy nor tensorflow understands what I want to do. Is there any way I can perform this operation efficiently (without writing the python loop myself)?
So far I have figured out that I could introduce a temporary matrix `z` of shape `(10,7,5)` using einsum like this:
```
z = np.einsum('ij,k->ikj', y, np.ones(7))
x + z
```
but this creates an explicit three-dimensional matrix (or tensor) and if possible I would prefer to avoid that.
|
2017/02/13
|
[
"https://Stackoverflow.com/questions/42214228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/766551/"
] |
In NumPy, you could extend `y` to `3D` and then add -
```
x + y[:,None,:]
```
Haven't dealt with `tensorflow` really, but looking into its docs, it seems, we could use [`tf.expand_dims`](https://www.tensorflow.org/api_docs/python/array_ops/shapes_and_shaping#expand_dims) -
```
x + tf.expand_dims(y, 1)
```
The extended version would still be a view into `y` and as such won't occupy any more memory, as tested below -
```
In [512]: np.may_share_memory(y, y[:,None,:])
Out[512]: True
```
|
As correctly pointed out in the accepted answer, the solution is to expand dimensions using the available constructs.
The point is to understand how numpy broadcasts matrices whose dimensions don't match when adding them. The rule is that, aligning the shapes from their trailing dimensions, each pair of dimensions must either be equal or one of them must be 1 (missing leading dimensions are treated as 1).
E.g.
```
A (4d array): 8 x 1 x 6 x 1
B (3d array): 7 x 1 x 5
Result (4d array): 8 x 7 x 6 x 5
```
This example and detailed explanation can be found in scipy docs
<https://docs.scipy.org/doc/numpy-1.10.0/user/basics.broadcasting.html>
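A runnable sketch of the NumPy variant, with shapes matching the question:

```python
import numpy as np

x = np.ones((10, 7, 5))
y = np.arange(50, dtype=float).reshape(10, 5)

# y[:, None, :] has shape (10, 1, 5); the middle axis broadcasts against 7
result = x + y[:, None, :]
print(result.shape)  # (10, 7, 5)

# equivalent to the explicit loop from the question
expected = np.empty((10, 7, 5))
for a in range(10):
    for b in range(7):
        for c in range(5):
            expected[a, b, c] = x[a, b, c] + y[a, c]
print(np.allclose(result, expected))  # True
```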
| 15,590
|
16,918,063
|
We're running into a problem (which is described <http://wiki.python.org/moin/UnicodeDecodeError>) -- read the second paragraph '...Paradoxically...'.
Specifically, we're trying to up-convert a string to unicode and we are receiving a UnicodeDecodeError.
Example:
```
>>> unicode('\xab')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 0: ordinal not in range(128)
```
But of course, this works without any problems
```
>>> unicode(u'\xab')
u'\xab'
```
Of course, this code is only to demonstrate the conversion problem. In our actual code, we are not using string literals and we cannot just prepend the unicode 'u' prefix; instead we are dealing with strings returned from an os.walk(), and the file name includes the above value. Since we cannot coerce the value to unicode without calling the unicode() constructor, we're not sure how to proceed.
One really horrible hack that occurs is to write our own str2uni() method, something like:
```
def str2uni(val):
    r"""brute force coercion of str -> unicode"""
    try:
        return unicode(val)
    except UnicodeDecodeError:
        pass
    res = u''
    for ch in val:
        res += unichr(ord(ch))
    return res
```
But before we do this -- wanted to see if anyone else had any insight?
**UPDATED**
I see everyone is getting focused on HOW I got to the example I posted, rather than the result. Sigh -- ok, here's the code that caused me to spend hours reducing the problem to the simplest form I shared above.
```
for _, _, files in os.walk('/path/to/folder'):
    for fname in files:
        filename = unicode(fname)
```
That piece of code tosses a UnicodeDecodeError exception when the filename has the following value '3\xab Floppy (A).link'
To see the error for yourself, do the following:
```
>>> unicode('3\xab Floppy (A).link')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 1: ordinal not in range(128)
```
**UPDATED**
I really appreciate everyone trying to help. And I also appreciate that most people make some pretty simple mistakes related to string/unicode handling. But I'd like to underline the reference to the **UnicodeDecodeError** exception. We are getting this when calling the unicode() constructor!!!
I believe the underlying cause is described in the aforementioned Wiki article <http://wiki.python.org/moin/UnicodeDecodeError>. Read from the second paragraph on down about how **"Paradoxically, a UnicodeDecodeError may happen when *encoding*..."**. The Wiki article very accurately describes what we are experiencing -- but while it elaborates on the causes, it makes no suggestions for resolutions.
As a matter of fact, the third paragraph starts with the following astounding admission **"Unlike a similar case with UnicodeEncodeError, such a failure cannot be always avoided..."**.
Since I am not used to "cant get there from here" information as a developer, I thought it would be interested to cast about on Stack Overflow for the experiences of others.
|
2013/06/04
|
[
"https://Stackoverflow.com/questions/16918063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/590028/"
] |
I think you're confusing Unicode strings and Unicode encodings (like UTF-8).
`os.walk(".")` returns the filenames (and directory names etc.) as strings that are *encoded* in the current codepage. It will silently *remove* characters that are not present in your current codepage ([see this question for a striking example](https://stackoverflow.com/q/7545511/20670)).
Therefore, if your file/directory names contain characters outside of your encoding's range, then you definitely need to use a Unicode string to specify the starting directory, for example by calling `os.walk(u".")`. Then you don't need to (and shouldn't) call `unicode()` on the results any longer, because they already *are* Unicode strings.
If you don't do this, you first need to *decode* the filenames (as in `mystring.decode("cp850")`) which will give you a Unicode string:
```
>>> "\xab".decode("cp850")
u'\xbd'
```
*Then* you can *encode* that into UTF-8 or any other encoding.
```
>>> _.encode("utf-8")
'\xc2\xbd'
```
If you're still confused why `unicode("\xab")` throws a *decoding* error, maybe the following explanation helps:
`"\xab"` is an *encoded* string. Python has no way of knowing which encoding that is, but before you can convert it to Unicode, it needs to be decoded first. Without any specification from you, `unicode()` assumes that it is encoded in ASCII, and when it tries to decode it under this assumption, it fails because `\xab` isn't part of ASCII. So either you need to find out which encoding is being used by your filesystem and call `unicode("\xab", encoding="cp850")` or whatever, or start with Unicode strings in the first place.
|
```
'\xab'
```
Is a **byte**, number 171.
```
u'\xab'
```
Is a **character**, U+00AB Left-pointing double angle quotation mark («).
`u'\xab'` is a short-hand way of saying `u'\u00ab'`. It's not the same (not even the same datatype) as the byte `'\xab'`; it would probably have been clearer to always use the `\u` syntax in Unicode string literals IMO, but it's too late to fix that now.
To go from bytes to characters is known as a **decode** operation. To go from characters to bytes is known as an **encode** operation. For either direction, you need to know which encoding is used to map between the two.
```
>>> unicode('\xab')
UnicodeDecodeError
```
`unicode` is a character string, so there is an implicit decode operation when you pass bytes to the `unicode()` constructor. If you don't tell it which encoding you want you get the default encoding which is often `ascii`. ASCII doesn't have a meaning for byte 171 so you get an error.
```
>>> unicode(u'\xab')
u'\xab'
```
Since `u'\xab'` (or `u'\u00ab'`) is already a character string, there is no implicit conversion in passing it to the `unicode()` constructor - you get an unchanged copy.
```
res = u''
for ch in val:
    res += unichr(ord(ch))
return res
```
The encoding that maps each input byte to the Unicode character with the same ordinal value is ISO-8859-1. Consequently you could replace this loop with just:
```
return unicode(val, 'iso-8859-1')
```
(However note that if Windows is in the mix, then the encoding you want is probably not that one but the somewhat-similar `windows-1252`.)
>
> One really horrible hack that occurs is to write our own str2uni() method
>
>
>
This isn't generally a good idea. `UnicodeError`s are Python telling you you've misunderstood something about string types; ignoring that error instead of fixing it at source means you're more likely to hide subtle failures that will bite you later.
```
filename = unicode(fname)
```
So this would be better replaced with: `filename = unicode(fname, 'iso-8859-1')` if you know your filesystem is using ISO-8859-1 filenames. If your system locales are set up correctly then it should be possible to find out the encoding your filesystem is using, and go straight to that:
```
filename = unicode(fname, sys.getfilesystemencoding())
```
Though actually if it *is* set up correctly, you can skip all the encode/decode fuss by asking Python to treat filesystem paths as native Unicode instead of byte strings. You do that by passing a Unicode character string into the `os` filename interfaces:
```
for _, _, files in os.walk(u'/path/to/folder'):  # note u'' string
    for fname in files:
        filename = fname  # nothing more to do!
```
PS. The character in `3″ Floppy` should really be U+2033 Double Prime, but there is no encoding for that in ISO-8859-1. Better in the long term to use UTF-8 filesystem encoding so you can include any character.
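The decode/encode round trip above can be sketched with an explicit encoding (the `b''` literal keeps this valid under both Python 2 and 3; ISO-8859-1 is an assumption about the filesystem):

```python
# raw filename bytes as returned by a byte-string os.walk()
raw = b'3\xab Floppy (A).link'

# decode with an explicit encoding instead of relying on the ASCII default
name = raw.decode('iso-8859-1')
print(name)  # 3« Floppy (A).link

# encoding back with the same codec recovers the original bytes
print(name.encode('iso-8859-1') == raw)  # True
```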
| 15,591
|
17,627,193
|
I'm on a fresh Virtualbox install of CentOS 6.4.
After installing zsh 5.0.2 from source using `./configure --prefix=/usr && make && make install` and setting it as the shell with `chsh -s /usr/bin/zsh`, everything is good.
Then some time after, after installing python it seems, it starts acting strange.
1. Happens with PuTTY and iTerm2 over SSH, does not happen on the raw terminal through Virtualbox.
2. typing something, then erasing it: rather than removing the char and moving the cursor back, the cursor moves forward.
3. Typing Ctrl+V then Backspace repeatedly prints out this repeating pattern '^@?'
4. Running cat from zsh works fine. Prints out '^H' if I type that, backspaces like normal if I type normal backspace.
Surely someone's seen this before and knows exactly what the hell it is.
I'm not positive yet, but it seems that installing `oh-my-zsh` can fix this. But I really want to know what the specific issue is here.
|
2013/07/13
|
[
"https://Stackoverflow.com/questions/17627193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/340947/"
] |
OK, I suggest you try adding
`export TERM=xterm`
to your `.zshrc` configuration.
The `TERM` value left over after changing into zsh is what caused the bug.
|
**sigh** I knew I solved this before.
It's too damn easy to forget things.
The solution is to compile and apply the proper terminfo data with `tic`, as I have a custom config with my terminal clients, `xterm-256color-italic`, that confuses zsh.
There appear to be other ways to configure this stuff too; I basically just need it to be properly set up so italics work everywhere (including in tmux) so hopefully I can figure out how to do this more portably than I am currently.
| 15,595
|
58,777,374
|
I was recently studying someone's code and a portion of code given below
```
class Node:
def __init__(self, height=0, elem=None):
self.elem = elem
self.next = [None] * height
```
What does it mean by `[None] * height` in the above code
I know what the `*` operator (as multiplication and unpacking) and `None` mean in python, but this is somehow different.
|
2019/11/09
|
[
"https://Stackoverflow.com/questions/58777374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8401374/"
] |
It means a list of `None`s with a `height` number of elements. e.g., for `height = 3`, it is this list:
```
[None, None, None]
```
|
```
>>> [None] * 5
[None, None, None, None, None]
```
Gives you a list of size `height` in your case
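Worth noting as a follow-up: the repetition is by reference, which only matters for mutable elements; `None` is immutable, so the idiom in the question is safe. A quick sketch:

```python
height = 3
nxt = [None] * height
print(nxt)  # [None, None, None]

# assigning a slot rebinds that index only
nxt[1] = 'node'
print(nxt)  # [None, 'node', None]

# contrast: multiplying a list of lists repeats the *same* inner list
grid = [[]] * height
grid[0].append(1)
print(grid)  # [[1], [1], [1]]
```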
| 15,598
|
55,233,846
|
I'm trying to send an HTTP POST REST API call to the flask server. I'm able to do it in postman, but how can I call it using the python requests module?
payloads are key-project value-daynight and key-file value-
postman request is as shown in the image
[postman](https://i.stack.imgur.com/i8ZMT.png)
When I tried, I ended up getting an internal server error. The problem is with the payload I was passing. How can I call it successfully and get a proper response?
|
2019/03/19
|
[
"https://Stackoverflow.com/questions/55233846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10865641/"
] |
Just recently ran into this. One thing to look out for is to ensure nodeIntegration is set to true when creating your renderer windows.
```
mainWindow = new electron.BrowserWindow({
width: width,
height: height,
webPreferences: {
nodeIntegration: true
}
});
```
|
AFAIU the recommended way is to use `contextBridge` module (in the `preload.js` script). It allows you to keep the context isolation enabled but safely expose your APIs to the context the website is running in.
<https://www.electronjs.org/docs/latest/tutorial/context-isolation>
Following this way, I also found that it was no longer necessary to specify `target` property in the Webpack config.
| 15,600
|
1,054,380
|
I'm trying to analyze my keystrokes over the next month and would like to throw together a simple program to do so. I don't want to exactly log the commands but simply generate general statistics on my key presses.
I am the most comfortable coding this in python, but am open to other suggestions. Is this possible, and if so what python modules should I look at? Has this already been done?
I'm on OSX but would also be interested in doing this on an Ubuntu box and Windows XP.
|
2009/06/28
|
[
"https://Stackoverflow.com/questions/1054380",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/90025/"
] |
Unless you are planning on writing the interfaces yourself, you are going to require some library, since as other posters have pointed out, you need to access low-level key press events managed by the desktop environment.
On Windows, the [PyHook](http://pypi.python.org/pypi/pyHook/1.4/) library would give you the functionality you need.
On Linux, you can use the [Python X Library](http://python-xlib.sourceforge.net/) (assuming you are running a graphical desktop).
Both of these are used to good effect by [pykeylogger](http://pykeylogger.wiki.sourceforge.net/). You'd be best off downloading the source (see e.g. pyxhook.py) to see specific examples of how key press events are captured. It should be trivial to modify this to sum the distribution of keys rather than recording the ordering.
|
Depending on what statistics you want to collect, maybe you do not have to write this yourself; the program [Workrave](http://www.workrave.org/) is a program to remind you to take small breaks and does so by monitoring keyboard and mouse activity. It keeps statistics of this activity which you probably could use (unless you want very detailed/more specific statistics). In worst case you could look at the source (C++) to find how it is done.
| 15,602
|
14,001,578
|
What's the correct way to specify a hg dependency in `tox.ini`. e.g.
```
[testenv]
deps =
hg+https://code.google.com/p/python-progressbar/
```
Unfortunately this does not work, and the following is spewed out:
```
ERROR: invocation failed, logfile: /Users/brad/project/.tox/py33-dj/log/py33-dj-1.log
ERROR: actionid=py33-dj
msg=getenv
cmdargs=[local('/Users/brad/project/.tox/py33-dj/bin/pip'), 'install', '--download-cache=/Users/brad/.pip/downloads', 'hg+https://code.google.com/p/python-progressbar/', 'https://github.com/dag/attest/tarball/master', 'django-attest', 'django-celery', 'coverage', 'https://github.com/django/django/tarball/master']
env={'PYTHONIOENCODING': 'utf_8', 'TERM_PROGRAM_VERSION': '309', 'LOGNAME': 'brad', 'USER': 'brad', 'PATH': '/Users/brad/project/.tox/py33-dj/bin:/usr/local/share/python:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/usr/texbin', 'HOME': '/Users/brad', 'DISPLAY': '/tmp/launch-zayh2U/org.macosforge.xquartz:0', 'TERM_PROGRAM': 'Apple_Terminal', 'LANG': 'en_AU.UTF-8', 'TERM': 'xterm-256color', 'SHLVL': '1', '_': '/usr/local/share/python/tox', 'TERM_SESSION_ID': 'E8FC4113-C18B-4DB4-9594-C0909A132D76', 'SSH_AUTH_SOCK': '/tmp/launch-kia8RP/Listeners', 'SHELL': '/bin/bash', 'TMPDIR': '/var/folders/zr/m_ys6vwd1z19rqh73jd8z88w0000gn/T/', '__CF_USER_TEXT_ENCODING': '0x1F5:0:15', 'PWD': '/Users/brad/project', 'PIP_DOWNLOAD_CACHE': '/Users/brad/.pip/downloads', 'COMMAND_MODE': 'unix2003'}
abort: couldn't find mercurial libraries in [/usr/local/Cellar/mercurial/2.4.1/libexec /Users/brad/project/.tox/py33-dj/lib/python3.3/site-packages/distribute-0.6.31-py3.3.egg /Users/brad/project/.tox/py33-dj/lib/python3.3/site-packages/pip-1.2.1-py3.3.egg /Users/brad/project/.tox/py33-dj/lib/python33.zip /Users/brad/project/.tox/py33-dj/lib/python3.3 /Users/brad/project/.tox/py33-dj/lib/python3.3/plat-darwin /Users/brad/project/.tox/py33-dj/lib/python3.3/lib-dynload /usr/local/Cellar/python3/3.3.0/Frameworks/Python.framework/Versions/3.3/lib/python3.3 /usr/local/Cellar/python3/3.3.0/Frameworks/Python.framework/Versions/3.3/lib/python3.3/plat-darwin /Users/brad/project/.tox/py33-dj/lib/python3.3/site-packages]
(check your install and PYTHONPATH)
Downloading/unpacking hg+https://code.google.com/p/python-progressbar/
Cloning hg https://code.google.com/p/python-progressbar/ to /var/folders/zr/m_ys6vwd1z19rqh73jd8z88w0000gn/T/pip-88u_g0-build
Complete output from command /usr/local/bin/hg clone --noupdate -q https://code.google.com/p/python-progressbar/ /var/folders/zr/m_ys6vwd1z19rqh73jd8z88w0000gn/T/pip-88u_g0-build:
----------------------------------------
Command /usr/local/bin/hg clone --noupdate -q https://code.google.com/p/python-progressbar/ /var/folders/zr/m_ys6vwd1z19rqh73jd8z88w0000gn/T/pip-88u_g0-build failed with error code 255 in None
Storing complete log in /Users/brad/.pip/pip.log
ERROR: could not install deps [hg+https://code.google.com/p/python-progressbar/]
_________________________________________ summary _________________________________________
ERROR: py33-dj: could not install deps [hg+https://code.google.com/p/python-progressbar/]
```
The last line in `pip.log` is:
```
pip.exceptions.InstallationError: Command /usr/local/bin/hg clone --noupdate -q https://code.google.com/p/python-progressbar/ /var/folders/zr/m_ys6vwd1z19rqh73jd8z88w0000gn/T/pip-dakkvs-build failed with error code 255 in None
```
However running `pip install hg+https://code.google.com/p/python-progressbar/` works.
The tox test environment is targeting Python 3.3.
|
2012/12/22
|
[
"https://Stackoverflow.com/questions/14001578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253686/"
] |
It's possible to specify the mercurial dependency in two ways within `deps = …`:
* `-ehg+https://code.google.com/p/python-progressbar/#egg=progressbar` (no space)
* `hg+https://code.google.com/p/python-progressbar/`
tox treats each line in `deps` as a single argument to `pip install` (whitespace included).
pip supports `-e` as either a standalone argument followed by a URL, or the prefix of a single argument, i.e.:
* `['pip', 'install', '-e', 'hg+https://...']`
* `['pip', 'install', '-ehg+https://...']`
but will not handle `['pip', 'install', '-e hg+https://...']` (because the URL is parsed to include a space character)
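Spelled out in `tox.ini` form (the `#egg=progressbar` name is an assumption), the first variant would look like:

```ini
[testenv]
deps =
    -ehg+https://code.google.com/p/python-progressbar/#egg=progressbar
```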
So why didn't it work?
======================
Mercurial was installed under Python 2.7. When tox runs `pip install`, it sets the `PATH` env variable to refer to the Python 3.3 env it created. If you look carefully through the error message you'll see the line:
```
abort: couldn't find mercurial libraries in …
```
which is printed by the `/usr/local/bin/hg` script when `import mercurial` fails. Typically the solution would be to install the library (mercurial) for Python 3.3. Unfortunately mercurial does not support Python 3.
|
Typically this is an artifact of a broken mercurial installation that refers to the env python.
In a virtualenv for Python 3, the env python is simply the worst thing to work with.
| 15,607
|
50,617,233
|
I have a dataset (Product\_ID, date\_time, Sold) which has products sold on various dates. The dates are not consistent: each month contributes a random 13 or more days across 9 months. I have to segregate the data in such a way that, for each product, I get how many units were sold on 1-3 given days, 4-7 given days, 8-15 given days and >16 given days. So how can I code this in python using pandas and other packages?
```
PRODUCT_ID DATE_LOCATION Sold
0E4234 01-08-16 0:00 2
0E4234 02-08-16 0:00 7
0E4234 04-08-16 0:00 3
0E4234 08-08-16 0:00 1
0E4234 09-08-16 0:00 2
.
. (same product for 9 months sold data)
.
0G2342 02-08-16 0:00 1
0G2342 03-08-16 0:00 2
0G2342 06-08-16 0:00 1
0G2342 09-08-16 0:00 1
0G2342 11-08-16 0:00 3
0G2342 15-08-16 0:00 3
.
.
. (goes for 64 products each with 9 months of data)
.
```
I don't know even how to code for this in python
The output needed is
```
PRODUCT_ID Days Sold
0E4234 1-3 9
4-7 3
8-15 16
>16 (remaing values sum)
0G2342 1-3 3
4-7 1
8-15 7
>16 (remaing values sum)
.
.(for 64 products)
.
```
Would be happy if at least someone posted a link to where to start
|
2018/05/31
|
[
"https://Stackoverflow.com/questions/50617233",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9846590/"
] |
You can first convert the dates to datetimes and get days by [`dt.day`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.day.html):
```
df['DATE_LOCATION'] = pd.to_datetime(df['DATE_LOCATION'], dayfirst=True)
days = df['DATE_LOCATION'].dt.day
```
Then binning by [`cut`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html):
```
rng = pd.cut(days, bins=[0,3,7,15,31], labels=['1-3', '4-7','8-15', '>=16'])
print (rng)
0 1-3
1 1-3
2 4-7
3 8-15
4 8-15
5 1-3
6 1-3
7 4-7
8 8-15
9 8-15
10 8-15
Name: DATE_LOCATION, dtype: category
Categories (4, object): [1-3 < 4-7 < 8-15 < >=16]
```
And aggregate `sum` by product and binned `Series`:
```
df = df.groupby(["PRODUCT_ID",rng])['Sold'].sum()
print (df)
PRODUCT_ID DATE_LOCATION
0E4234 1-3 9
4-7 3
8-15 3
0G2342 1-3 3
4-7 1
8-15 7
Name: Sold, dtype: int64
```
If need also count per `year`s:
```
df = df.groupby([df['DATE_LOCATION'].dt.year.rename('YEAR'), "PRODUCT_ID",rng])['Sold'].sum()
print (df)
YEAR PRODUCT_ID DATE_LOCATION
2016 0E4234 1-3 9
4-7 3
8-15 3
0G2342 1-3 3
4-7 1
8-15 7
Name: Sold, dtype: int64
```
|
Assume your dataframe is named df.
```
df["DATE_LOCATION"] = pd.to_datetime(df.DATE_LOCATION)
df["DAY"] = df.DATE_LOCATION.dt.day
def flag(x):
    if 1 <= x <= 3:
        return '1-3'
    elif 4 <= x <= 7:
        return '4-7'
    elif 8 <= x <= 15:
        return '8-15'
    else:
        return '>16'  # maybe you mean '>=16'.
df["Days"] = df.DAY.apply(flag)
df.groupby(["PRODUCT_ID","Days"]).Sold.sum()
```
| 15,608
|
1,062,562
|
I want to call a wrapped C++ function from a python script which is not returning immediately (in detail: it is a function which starts a QApplication window and the last line in that function is QApplication->exec()). So after that function call I want to move on to my next line in the python script but on executing this script and the previous line it hangs forever.
In contrast when I manually type my script line for line in the python command line I can go on to my next line after pressing enter a second time on the non-returning function call line.
So how to solve the issue when executing the script?
Thanks!!
Edit:
My python interpreter is embedded in an application. I want to write an extension for this application as a separate Qt4 window. All the python stuff is only there to make my graphical plugin accessible per script (via boost.python wrapping).
My python script:
```
import imp
import os
Plugin = imp.load_dynamic('Plugin', os.getcwd() + 'Plugin.dll')
qt = Plugin.StartQt4() # it hangs here when executing as script
pl = PluginCPP.PluginCPP() # Creates a QMainWindow
pl.ShowWindow() # shows the window
```
The C++ code for the Qt start function looks like this:
```
class StartQt4
{
public:
    StartQt4()
    {
        int i = 0;
        QApplication* qapp = new QApplication(i, NULL);
        qapp->exec();
    }
};
```
|
2009/06/30
|
[
"https://Stackoverflow.com/questions/1062562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/80480/"
] |
Use a thread ([longer example here](http://www.wellho.net/solutions/python-python-threads-a-first-example.html)):
```
from threading import Thread

class WindowThread(Thread):
    def run(self):
        callCppFunctionHere()

WindowThread().start()
```
|
QApplication::exec() starts the main loop of the application and will only return after the application quits. If you want to run code after the application has been started, you should resort to Qt's event handling mechanism.
From <http://doc.trolltech.com/4.5/qapplication.html#exec> :
>
> To make your application perform idle
> processing, i.e. executing a special
> function whenever there are no pending
> events, use a QTimer with 0 timeout.
> More advanced idle processing schemes
> can be achieved using processEvents().
>
>
>
| 15,609
|
32,867,501
|
I have a python program that takes a .txt file with a list of information. Then the program proceeds to number every line, then remove all returns. Now I want to add returns to the lines that are numbered without double spacing so I can continue to edit the file. Here is my program.
```
import sys
from time import sleep
# request for filename
f = raw_input('filename > ')
print ''
# function for opening file load
def open_load(text):
    for c in text:
        print c,
        sys.stdout.flush()
        sleep(0.5)
print "Opening file",
open_load('...')
sleep(0.1)
# loading contents into global variable
f_ = open(f)
f__ = f_.read()
# contents are now contained in variable 'f__' (two underscores)
print f__
raw_input("File opened. Press enter to number the lines or CTRL+C to quit. ")
print ''
print "Numbering lines",
open_load('...')
sleep(0.1)
# set below used to add numbers to lines
x = f
infile=open(x, 'r')
lines=infile.readlines()
outtext = ['%d %s' % (i, line) for i, line in enumerate (lines)]
f_o = (str("".join(outtext)))
print f_o
# used to show amount of lines
with open(x) as f:
    totallines = sum(1 for _ in f)
print "total lines:", totallines, "\n"
# -- POSSIBLE MAKE LIST OF AMOUNT OF LINES TO USE LATER TO INSERT RETURNS? --
raw_input("Lines numbered. Press enter to remove all returns or CTRL+C to quit. ")
print ''
print "Removing returns",
open_load('...')
sleep(0.1)
# removes all instances of a return
f_nr = f_o.replace("\n", "")
# newest contents are now located in variable f_nr
print f_nr
print ''
raw_input("Returns removed. Press enter to add returns on lines or CTRL+C to quit. ")
print ''
print "Adding returns",
open_load('...')
sleep(0.1)
```
Here is an example of what I need. There are no returns (\n) in the text below. I have the terminal set up so that the lines are in order without having returns (\n).
```
1 07/07/15 Mcdonalds $20 1 123 12345
2 07/07/15 Mcdonalds $20 1 123 12345
3 07/07/15 Mcdonalds $20 1 123 12345
4 07/07/15 Mcdonalds $20 1 123 12345
5 07/07/15 Mcdonalds $20 1 123 12345
```
The numbering, 1-5, needs to be replaced with returns so each row is its own line. This is what it would look like after being edited:
```
# the numbering has been replaced with returns (no double spacing)
07/07/15 Mcdonalds $20 1 123 12345
07/07/15 Mcdonalds $20 1 123 12345
07/07/15 Mcdonalds $20 1 123 12345
07/07/15 Mcdonalds $20 1 123 12345
07/07/15 Mcdonalds $20 1 123 12345
```
|
2015/09/30
|
[
"https://Stackoverflow.com/questions/32867501",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5391570/"
] |
You can inject the `$location` service if you are using html5 mode.
Because you are using URLs without a base root (such as a ! or a #), you need to explicitly add a `<base>` tag that defines the "root" of your application OR you can configure `$locationProvider` to not require a base tag.
HTML:
```
<head>
<base href="/">
...
</head>
<body>
<div ng-controller="ParamsCtrl as vm">
...
</div>
</body>
```
JS:
```
app.config(['$locationProvider', function($locationProvider){
$locationProvider.html5Mode(true);
// or
$locationProvider.html5Mode({
enabled: true,
requireBase: false
});
}]);
app.controller("TestCtrl", ['$location', function($location) {
var params = $location.search();
alert('Foo is: ' + params.foo + ' and bar is: ' + params.bar);
});
```
It is worth noting that you can add a router library such as ngRoute, ui-router or the new, unfinished component router. These routers rely on a base route off of a symbol (! or #) and define everything after the symbol as a client-side route which gets controlled by the router. This will allow your Angular app to function as a Single Page App and will give you access to $routeParams or the ui-router equivalent.
|
You can use `$location.search()`.
```
var parameters = $location.search();
console.log(parameters);
// -> Object {foo: "foovalue", bar: "barvalue"}
```
SO these values will be accessible with `parameters.foo` and `parameters.bar`
| 15,612
|
18,823,139
|
I am new to selenium. I have a script that uploads a file to a server.
In the IDE version, so to speak, it uploads the file, but when I export the test case as python 2 / unittest / webdriver it doesn't upload it..
It doesn't give me any errors, it just doesn't upload it...
The python script is:
```
driver.find_element_by_id("start-upload-button-single").click()
driver.find_element_by_css_selector("input[type=\"file\"]").clear()
driver.find_element_by_css_selector("input[type=\"file\"]").send_keys("C:\\\\Documents and Settings\\\\pcname\\\\Desktop\\\\ffdlt\\\\test.jpeg")
```
I searched for solutions but I haven't found any except integrating it with AutoIt or AutoHotKey...
The first line opens the File Upload Box of Firefox.
|
2013/09/16
|
[
"https://Stackoverflow.com/questions/18823139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2782827/"
] |
Your code works perfectly for me (I tested it with the Firefox and Chrome drivers).
One thing I suspect is excessive backslash (`\`) escaping.
Try the following:
```
driver.find_element_by_id("start-upload-button-single").click()
driver.find_element_by_css_selector('input[type="file"]').clear()
driver.find_element_by_css_selector('input[type="file"]').send_keys("C:\\Documents and Settings\\pcname\\Desktop\\ffdlt\\test.jpeg")
```
or
```
driver.find_element_by_id("start-upload-button-single").click()
driver.find_element_by_css_selector('input[type="file"]').clear()
driver.find_element_by_css_selector('input[type="file"]').send_keys(r"C:\Documents and Settings\pcname\Desktop\ffdlt\test.jpeg")
```
|
If I run the following lines from the IDE it works just fine, it uploads the file.
```
Command | Target | Value
_____________________________________________________________
open | /upload |
click | id=start-upload-button-single |
type | css=input[type="file"] | C:\\Documents and Settings\\cristian\\Desktop\\ffdl\\MyWork.avi
```
But when I export it for Python webdriver it just doesn't upload it, I have tried everything.
The last resort is to make it work with AutoHotKey, but I want it to work.
What I have done is test the solutions that I have found on other sites, to see if the problem is only on the site I am trying to upload to (youtube). The solutions work (e.g. <http://dev.sencha.com/deploy/ext-4.0.0/examples/form/file-upload.html>) and are valid; you can upload a file to most servers, it just doesn't work on that one.
Thank you for your help.
| 15,613
|
16,504,990
|
How is it possible I am getting a permission denied using the below? I am using python 2.7 and ubuntu 12.04
Below is my mapper.py file
```
import sys
import json
for line in sys.stdin:
    line = json.loads(line)
    key = "%s:%s" % (line['user_key'], line['item_key'])
    value = 1
    sys.stdout.write('%s\t%s\n' % (key, value))
```
Below is my data file
```
{"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.568242, "id": "d518b7c5-a180-439b-8036-2bb40ca080cd", "item_key": "heythat:htitem:1000"}
{"action": "click", "user_key": "heythat:htuser:32", "utc": 1368339334.573988, "id": "cc8c35ec-9e67-4ef8-a189-6116c7d0336a", "item_key": "heythat:htitem:1001"}
{"action": "click", "user_key": "heythat:htuser:32", "utc": 1368339334.575226, "id": "6c457f9a-afc2-4b61-be2f-d4ea2863aa69", "item_key": "heythat:htitem:1002"}
{"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.575315, "id": "e0b08c30-459b-4f77-b9a4-05939457ab99", "item_key": "heythat:htitem:1000"}
{"action": "click", "user_key": "heythat:htuser:32", "utc": 1368339334.57538, "id": "90084ea2-75c6-4b8a-bc22-9d9f2da1c0de", "item_key": "heythat:htitem:1002"}
{"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.57538, "id": "2f76a861-2b66-430a-b70d-2af6e1b9f365", "item_key": "heythat:htitem:1001"}
{"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.57538, "id": "282eec8a-7f6d-4ad3-917a-aae049062d87", "item_key": "heythat:htitem:1002"}
{"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.575447, "id": "bc48a6bc-f8f8-420e-9b80-0bd0c2bbde0d", "item_key": "heythat:htitem:1000"}
{"action": "show", "user_key": "that:htuser:32", "utc": 1368339334.575513, "id": "14b49763-e2fe-4beb-bff6-f4b34b3d2ef3", "item_key": "that:htitem:1001"}
{"action": "show", "user_key": "that:htuser:32", "utc": 1368339334.575596, "id": "983cbcf3-4375-4b3b-86ed-a8fbc86ff4b3", "item_key": "that:htitem:1002"}
```
Below is my error
```
cat /home/ubuntu/workspace/logging/data.txt | /home/ubuntu/workspace/logging/mapper.py
bash: /home/ubuntu/workspace/logging/mapper.py: Permission denied
```
|
2013/05/12
|
[
"https://Stackoverflow.com/questions/16504990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1203556/"
] |
Your `mapper.py` file needs to be executable (on some executable partition) so `chmod a+x mapper.py`
The underlying [execve(2)](http://man7.org/linux/man-pages/man2/execve.2.html) syscall is failing with
```
EACCES Execute permission is denied for the file or a script or ELF
interpreter.
EACCES The file system is mounted noexec.
```
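As a quick sanity check (the `/tmp/mapper_demo.py` path here is just an illustration), the fix can be verified like this; note that the script also needs a shebang line such as `#!/usr/bin/env python` as its first line, otherwise the shell will not know to run it with Python:

```shell
# create a stand-in script with a shebang, mark it executable, and confirm the x bit is set
printf '#!/usr/bin/env python\nprint("ok")\n' > /tmp/mapper_demo.py
chmod a+x /tmp/mapper_demo.py
test -x /tmp/mapper_demo.py && echo executable
```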
|
you can add 'python' to the command, like so
```
cat /home/ubuntu/workspace/logging/data.txt | python /home/ubuntu/workspace/logging/mapper.py
```
| 15,622
|
65,647,986
|
I am trying to interface a MicroPython board with Python on my computer using serial reads and writes; however, I can't find a way to read USB serial data in MicroPython that is non-blocking.
Basically I want to call the input function without requiring an input to move on (something like <https://github.com/adafruit/circuitpython/pull/1186>, but for USB).
I have tried using [tasko](https://github.com/WarriorOfWire/tasko), uselect (it failed to import the library and I can't find a download), and await functions. I'm not sure if there is a way to do this, but any help would be appreciated.
|
2021/01/09
|
[
"https://Stackoverflow.com/questions/65647986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13016068/"
] |
Instead of typing `xlxswriter`, type `xlsxwriter` like this:
```
import xlsxwriter
import pandas as pd
from pandas import DataFrame
path = ('mypath.xlsx')
xl = pd.ExcelFile(path)
print(xl.sheet_names)
```
It'll work.
|
The module name is xlsxwriter not xlxswriter, so replace that line with:
```
import xlsxwriter
```
| 15,623
|
26,856,793
|
I am trying to load as a pandas dataframe a file that has Chinese characters in its name.
I've tried:
```
df=pd.read_excel("url/某物2008.xls")
```
and
```
import sys
df=pd.read_excel("url/某物2008.xls", encoding=sys.getfilesystemencoding())
```
But the response is something like: "no such file or directory "url/\xa1\xa92008.xls"
I've also tried changing the names of the files using os.rename, but the filenames aren't even read properly (asking python to just print the filenames yields only question marks or squares).
|
2014/11/11
|
[
"https://Stackoverflow.com/questions/26856793",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2827060/"
] |
```
df=pd.read_excel(u"url/某物2008.xls", encoding=sys.getfilesystemencoding())
```
may work... but you may have to declare an encoding type at the top of the file
|
try this for unicode conversion:
`df=pd.read_excel(u"url/某物2008.xls", encoding='utf-8')`
| 15,624
|
11,204,053
|
I have two classes for example:
```
class Parent(object):
    def hello(self):
        print 'Hello world'
    def goodbye(self):
        print 'Goodbye world'

class Child(Parent):
    pass
```
Class Child must inherit only the hello() method from Parent, and there should be no mention of goodbye().
Is it possible ?
ps yes, I read [this](https://stackoverflow.com/questions/5647118/is-it-possible-to-do-partial-inheritance-with-python)
Important NOTE: I can modify only the Child class (the Parent class should be left exactly as is).
|
2012/06/26
|
[
"https://Stackoverflow.com/questions/11204053",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/438882/"
] |
```
class Child(Parent):
    def __getattribute__(self, attr):
        if attr == 'goodbye':
            raise AttributeError()
        return super(Child, self).__getattribute__(attr)
```
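A quick demonstration of the effect (a sketch using Python 3 print syntax; the `Parent` here just mirrors the question):

```python
class Parent(object):
    def hello(self):
        print('Hello world')
    def goodbye(self):
        print('Goodbye world')

class Child(Parent):
    def __getattribute__(self, attr):
        # intercept every attribute lookup and hide 'goodbye'
        if attr == 'goodbye':
            raise AttributeError(attr)
        return super(Child, self).__getattribute__(attr)

c = Child()
c.hello()                     # prints: Hello world
print(hasattr(c, 'goodbye'))  # False: the lookup raises AttributeError
```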
|
This Python example shows how to design classes to achieve child class inheritance:
```
class HelloParent(object):
    def hello(self):
        print 'Hello world'

class Parent(HelloParent):
    def goodbye(self):
        print 'Goodbye world'

class Child(HelloParent):
    pass
```
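A quick check that the reworked hierarchy behaves as intended (a sketch in Python 3 syntax):

```python
class HelloParent(object):
    def hello(self):
        print('Hello world')

class Parent(HelloParent):
    def goodbye(self):
        print('Goodbye world')

class Child(HelloParent):
    pass

c = Child()
c.hello()                     # inherited from HelloParent
print(hasattr(c, 'goodbye'))  # False: goodbye() lives only on Parent
```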
| 15,625
|
67,350,490
|
I am trying to make a simple Flask app using PuTTY, but it is not working.
Here is my hello.py file:
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
    return 'Hello, World'
```
Commands I am running in PuTTY (in the directory of hello.py):
```
pip install flask
python -c "import flask; print(flask.__version__)"   # Output: 1.1.2
export FLASK_APP=hello
export FLASK_ENV=development
flask run
```
When I go to the IP address <http://127.0.0.1:5000/> I get this:
[enter image description here](https://i.stack.imgur.com/rMjbu.png)
Here is the link to the instructions I am following:
<https://www.digitalocean.com/community/tutorials/how-to-make-a-web-application-using-flask-in-python-3>
If someone could let me know how to make it work, that would be great!
Thank you!
|
2021/05/01
|
[
"https://Stackoverflow.com/questions/67350490",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12333529/"
] |
I used the terminal on my Windows machine instead of PuTTY, which connects to a remote machine.
|
Check your wlan0 inet address by typing `ifconfig` in bash and use it; also try adding the following code to your Flask app:
```
if __name__ == '__main__':
    app.run(debug=True, port=5000, host='<wlan0 IP address>')
```
Run the application by typing `sudo python3 hello.py`.
Did this work?
| 15,628
|
8,408,970
|
>
> **Possible Duplicate:**
>
> [Not getting exact result in python with the values leading zero. Please tell me what is going on there](https://stackoverflow.com/questions/3067409/not-getting-exact-result-in-python-with-the-values-leading-zero-please-tell-me)
>
>
>
I want to create a dictionary whose value begins with 0. However, after I create the dictionary the value has changed. What am I doing wrong?
```
>>> sample={'first_value':0123456}
>>> sample
{'first_value': 42798}
```
|
2011/12/07
|
[
"https://Stackoverflow.com/questions/8408970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/613985/"
] |
The leading zero is telling Python to interpret it as an octal number.
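In Python 3 a bare leading zero on an integer literal is a syntax error and the octal prefix is spelled `0o`, which makes the equivalence easy to verify:

```python
# 0o123456 is the Python 3 spelling of the old Python 2 literal 0123456
sample = {'first_value': 0o123456}
print(sample)             # {'first_value': 42798}
print(int('0123456', 8))  # 42798: parse the digit string as base-8
print('%07o' % 42798)     # 0123456: back to octal digits
```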
|
When you write 0123456, it gets interpreted as base-8 (octal), which is 42798 in decimal.
| 15,629
|
41,782,396
|
I am trying to deploy a [sample app](https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/appengine/flexible/django_cloudsql) to the Google App Engine Flexible Environment based on [this](https://cloud.google.com/python/django/flexible-environment) tutorial. The deployment works, however, the application cannot start up. I get the following error message:
```
Updating service [default]...failed.
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error:
[2017-01-21 17:01:14 +0000] [5] [INFO] Starting gunicorn 19.6.0
[2017-01-21 17:01:14 +0000] [5] [INFO] Listening at: http://0.0.0.0:8080 (5)
[2017-01-21 17:01:14 +0000] [5] [INFO] Using worker: sync
[2017-01-21 17:01:14 +0000] [8] [INFO] Booting worker with pid: 8
[2017-01-21 17:01:14 +0000] [8] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/env/lib/python3.5/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
worker.init_process()
File "/env/lib/python3.5/site-packages/gunicorn/workers/base.py", line 126, in init_process
self.load_wsgi()
File "/env/lib/python3.5/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
self.wsgi = self.app.wsgi()
File "/env/lib/python3.5/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/env/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
return self.load_wsgiapp()
File "/env/lib/python3.5/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
return util.import_app(self.app_uri)
File "/env/lib/python3.5/site-packages/gunicorn/util.py", line 357, in import_app
__import__(module)
ImportError: No module named 'mysite'
[2017-01-21 17:01:14 +0000] [8] [INFO] Worker exiting (pid: 8)
[2017-01-21 17:01:14 +0000] [5] [INFO] Shutting down: Master
[2017-01-21 17:01:14 +0000] [5] [INFO] Reason: Worker failed to boot.
```
As you can see on GitHub (see link above), the **/app.yaml** file looks like this:
```
# [START runtime]
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT mysite.wsgi
beta_settings:
cloud_sql_instances: <your-cloudsql-connection-string>
runtime_config:
python_version: 3
# [END runtime]
```
And the **/mysite/wsgi.py** file like this:
```
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
application = get_wsgi_application()
```
Since the Flexible Environment is in beta, I am not sure if this might be a bug. However, I am using the original app from GitHub without any changes following the official documentation, so I would expect it to work.
I appreciate your help.
|
2017/01/21
|
[
"https://Stackoverflow.com/questions/41782396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1373359/"
] |
`setf` returns the original flag value, so you can simply store that then put it back when you're done.
The same is true of `precision`.
So:
```
// Change flags & precision (storing original values)
const auto original_flags = std::cout.setf(std::ios::fixed | std::ios::showpoint);
const auto original_precision = std::cout.precision(2);
// Do your I/O
std::cout << someValue;
// Reset the precision, and affected flags, to their original values
std::cout.setf(original_flags, std::ios::fixed | std::ios::showpoint);
std::cout.precision(original_precision);
```
Read [some documentation](http://en.cppreference.com/w/cpp/io/ios_base/setf).
|
You can use the `flags()` method to save and restore all flags, or `unsetf()` the ones returned by `setf`:
```
std::ios::fmtflags oldFlags( cout.flags() );
cout.setf(std::ios::fixed);
cout.setf(std::ios::showpoint);
std::streamsize oldPrecision(cout.precision(2));
// output whatever you should.
cout.flags( oldFlags );
cout.precision(oldPrecision);
```
| 15,635
|
68,403,642
|
I need to share well described data and want to do this in a modern way that avoids managing bureaucratic documentation no one will read. Fields require some description or note (eg. "values don't include ABC because XYZ") which I'd like to associate to columns that'll be saved with `pd.to_<whatever>()`, but I don't know of such functionality in pandas.
The format can't [present security concerns](https://docs.python.org/3/library/pickle.html#module-pickle), and should have a practical compromise between data integrity, performance, and file size. [Looks like JSON without index might suit](https://stackoverflow.com/a/33570065/4107349).
[JSON documentation writes of schema annotations](http://json-schema.org/understanding-json-schema/reference/generic.html#annotations), which supports pairing keywords like `description` with strings, but I can't figure out how to use this with options described in [pandas `to_json` documentation](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.to_json.html).
Example df:
```
df = pd.DataFrame({"numbers": [6, 2],"strings": ["foo", "whatever"]})
df.to_json('temptest.json', orient='table', indent=4, index=False)
```
We can edit the JSON to include `description`:
```
"schema": {
    "fields": [
        {
            "name": "numbers",
            "description": "example string",
            "type": "integer"
        },
        ...
```
We can then `df = pd.read_json("temptest.json", orient='table')` but descriptions seem ignored and are lost upon saving.
The [only other answer I found saves separate dicts and dfs](https://stackoverflow.com/a/33113390/4107349) into a single JSON, but I couldn't replicate this without "ValueError: Trailing data". I need something less cumbersome and error prone, and files requiring custom instructions on how to open them aren't appropriate.
How can we can work with and save brief data descriptions with JSON or another format?
|
2021/07/16
|
[
"https://Stackoverflow.com/questions/68403642",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4107349/"
] |
Would the following rough sketch be something you could live with:
**Step 1**: Create json structure out of `df` like you did
```
df.to_json('temp.json', orient='table', indent=4, index=False)
```
**Step 2**: Add the column description to the so produced json-file as you already did (could be done easily in a structured/programatically manner):
```
{
    "schema": {
        "fields": [
            {
                "name": "numbers",
                "type": "integer",
                "description": "example number"
            },
            {
                "name": "strings",
                "type": "string",
                "description": "example string"
            }
        ],
        "pandas_version": "0.20.0"
    },
    "data": [...]
}
```
One way to do that would be to write a little function that uses Pandas `.to_json` as a base output and then adds the desired descriptions to the Pandas json-dump (this is step 1 & 2 together):
```
import json
def to_json_annotated(filepath, df, annotations):
    df_json = json.loads(df.to_json(orient='table', index=False))
    for field in df_json['schema']['fields']:
        field['description'] = annotations.get(field['name'], None)
    with open(filepath, 'w') as file:
        json.dump(df_json, file)
```
As in the example above:
```
annotations = {'numbers': 'example number',
'strings': 'example string'}
to_json_annotated('temp.json', df, annotations)
```
**Step 3**: Reading the information back into Pandas-format:
```
import json
with open('temp.json', 'r') as file:
    json_df = json.load(file)

df_data = pd.json_normalize(json_df, 'data')  # pd.read_json('temp.json', orient='table') works too
df_meta = pd.json_normalize(json_df, ['schema', 'fields'])
```
with the results:
```
df_data:
numbers strings
0 6 foo
1 2 whatever
```
```
df_meta:
name type description
0 numbers integer example number
1 strings string example string
```
|
I don't think there is such an option, but you can just add a description inside each variable:
```
schema = {'first':{'Variable':'x',"description": "example string","value": 2},"second":{"Variable":"y","description": "example string","value": 3}}
```
It creates a table:
```
first second
Variable x y
description example string example string
value 2 3
```
| 15,638
|
53,340,120
|
I would like to install the package : newsapi in Python
I run the command
```
pip3 install newsapi-python
```
The package was successfully installed. But when I import it in Anaconda:
```
from newsapi import NewsApiClient
>> ModuleNotFoundError: No module named 'newsapi'
```
I would like to know how to solve this kind of problem. I think it is linked to some path, but I am not sure.
|
2018/11/16
|
[
"https://Stackoverflow.com/questions/53340120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10651723/"
] |
I think the issue is you aren't importing vue-router into your app. Only into router.js and then you don't import all of router.js into your app, only the createRouter function. Try this:
```
//routes.js
import Home from "../components/Home/Home.vue";
import About from "../components/About.vue";
import FallBackPage from "../components/FallBackPage/FallBackPage.vue";
import MyOrders from "../components/MyOrders/MyOrders.vue";

export const routes = [
  { path: "/", component: Home },
  { path: "/about", component: About },
  { path: "/myorders", component: MyOrders },
  { path: "/*", component: FallBackPage }
]

//main.js
import Vue from 'vue'
import App from './App.vue'
import VueRouter from 'vue-router'
import {routes} from './routes'

Vue.use(VueRouter)

const router = new VueRouter({
  mode: 'history',
  routes
})

new Vue({
  router,
  render: h => h(App)
}).$mount('#app')
Then in your component you use `this.$router.push("/")`
|
To change route
```
this.$router.push({ path: '/' })
this.$router.push({ name: 'Home' })
```
//main.js
```
import Vue from "vue";
import App from "./App.vue";
import VueRouter from "vue-router";
import { routes } from "./router/router";

Vue.use(VueRouter);

export function createApp() {
  const router = new VueRouter({
    mode: "history",
    routes
  });
  const app = new Vue({
    router,
    render: h => h(App)
  }).$mount("#app");
  return { app, router };
}
```
//routes.js
```
import Home from "../components/Home/Home.vue";
import About from "../components/About.vue";
import FallBackPage from "../components/FallBackPage/FallBackPage.vue";
import MyOrders from "../components/MyOrders/MyOrders.vue";

export const routes = [
  { path: "/", component: Home },
  { path: "/about", component: About },
  { path: "/myorders", component: MyOrders },
  { path: "/*", component: FallBackPage }
]
```
On top of the main.js changes, to change the route we need to use the following:
```
this.$router.push({ path: '/' })
this.$router.push({ name: 'Home' })
```
| 15,639
|
51,605,651
|
I'm looking to find the max run of consecutive zeros in a DataFrame with the result grouped by user. I'm interested in running the RLE on usage.
### sample input:
user--day--usage
A-----1------0
A-----2------0
A-----3------1
B-----1------0
B-----2------1
B-----3------0
### Desired output
user---longest\_run
a - - - - 2
b - - - - 1
```
mydata <- mydata[order(mydata$user, mydata$day),]
user <- unique(mydata$user)
d2 <- data.frame(matrix(NA, ncol = 2, nrow = length(user)))
names(d2) <- c("user", "longest_no_usage")
d2$user <- user
for (i in user) {
if (0 %in% mydata$usage[mydata$user == i]) {
run <- rle(mydata$usage[mydata$user == i]) #Run Length Encoding
d2$longest_no_usage[d2$user == i] <- max(run$length[run$values == 0])
} else {
d2$longest_no_usage[d2$user == i] <- 0 #some users did not have no-usage days
}
}
d2 <- d2[order(-d2$longest_no_usage),]
```
This works in R, but I want to do the same thing in Python; I'm totally stumped.
|
2018/07/31
|
[
"https://Stackoverflow.com/questions/51605651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10158469/"
] |
Use [`groupby`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) with [`size`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html) by columns `user`, `usage` and helper `Series` for consecutive values first:
```
print (df)
user day usage
0 A 1 0
1 A 2 0
2 A 3 1
3 B 1 0
4 B 2 1
5 B 3 0
6 C 1 1
df1 = (df.groupby([df['user'],
                   df['usage'].rename('val'),
                   df['usage'].ne(df['usage'].shift()).cumsum()])
         .size()
         .to_frame(name='longest_run'))
print (df1)
longest_run
user val usage
A 0 1 2
1 2 1
B 0 3 1
5 1
1 4 1
C 1 6 1
```
Then filter only `zero` rows, get `max` and add [`reindex`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html) for append non `0` groups:
```
df2 = (df1.query('val == 0')
          .max(level=0)
          .reindex(df['user'].unique(), fill_value=0)
          .reset_index())
print (df2)
user longest_run
0 A 2
1 B 1
2 C 0
```
**Detail**:
```
print (df['usage'].ne(df['usage'].shift()).cumsum())
0 1
1 1
2 2
3 3
4 4
5 5
6 6
Name: usage, dtype: int32
```
|
I think the following does what you are looking for, where the `consecutive_zero` function is an adaptation of the top answer [here](https://codereview.stackexchange.com/questions/138550/count-consecutive-ones-in-a-binary-list).
Hope this helps!
```
import pandas as pd
from itertools import groupby
df = pd.DataFrame([['A', 1], ['A', 0], ['A', 0], ['B', 0], ['B', 1], ['C', 2]],
                  columns=["user", "usage"])

def len_iter(items):
    return sum(1 for _ in items)

def consecutive_zero(data):
    x = list(len_iter(run) for val, run in groupby(data) if val == 0)
    if len(x) == 0:
        return 0
    else:
        return max(x)

df.groupby('user').apply(lambda x: consecutive_zero(x['usage']))
```
Output:
```
user
A 2
B 1
C 0
dtype: int64
```
| 15,640
|
69,339,582
|
I would like to compute the **hash of the contents (sequence of *bits*)** of a file (whose length could be any number of bits, and so not necessarily a multiple of the *trendy* eight) and send that file to a friend along with the hash-value. My friend should be able to compute the same hash from the file contents. I want to use Python 3 to compute the hash, but my friend can't use Python 3 (because I'll wait till next year to send the file and by then Python 3 will be out of style, and he'll want to be using Python++ or whatever). All I can guarantee is that my friend will know how to compute the hash, in a mathematical sense---he might have to write his own code to run on his implementation of the [MIX](https://en.wikipedia.org/wiki/MIX) machine (which he will know how to do).
**What hash do I use**, and, more importantly, **what do I take the hash of**? For example, do I hash the `str` returned from a `read` on the file opened for reading as *text*? Do I hash some `bytes`-like object returned from a *binary* `read`? What if the file has weird end-of-line markers? Do I pad the tail end first so that the thing I am hashing is an appropriate size?
```
import hashlib
FILENAME = "filename"
# Now, what?
```
**I say "sequence of *bits*"** because not all computers are based on the 8-bit byte, and saying "sequence of bytes" is therefore too ambiguous. For example, [GreenArrays, Inc.](http://www.greenarraychips.com/) has designed a [supercomputer on a chip](http://www.greenarraychips.com/home/products/index.html), where each computer has 18-bit (eighteen-bit) words (when these words are used for encoding native instructions, they are composed of three 5-bit "bytes" and one 3-bit byte each). I also understand that before the 1970's, a variety of byte-sizes were used. Although the 8-bit byte may be the most common choice, and may be optimal in some sense, the choice of 8 bits per byte is arbitrary.
See Also
--------
[Is python's hash() portable?](https://stackoverflow.com/questions/31170783/is-pythons-hash-portable)
|
2021/09/26
|
[
"https://Stackoverflow.com/questions/69339582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8341274/"
] |
First of all, the `hash()` function in Python is not the same as **cryptographic hash functions** in general. Here are the differences:
### `hash()`
>
> A hash is an fixed sized integer that identifies a particular value. Each value needs to have its own hash, so for the same value you will get the same hash even if it's not the same object.
>
>
>
>
> Note that the hash of a value only needs to be the same for one run of Python. In Python 3.3 they will in fact change for every new run of Python
>
>
>
[What does hash do in python?](https://stackoverflow.com/questions/17585730/what-does-hash-do-in-python)
### Cryptographic hash functions
>
> A cryptographic hash function (CHF) is a mathematical algorithm that maps data of an arbitrary size (often called the "message") to a bit array of a fixed size
>
>
>
>
> It is deterministic, meaning that the same message always results in the same hash.
>
>
>
<https://en.wikipedia.org/wiki/Cryptographic_hash_function>
---
Now let's come back to your question:
>
> I would like to compute the hash of the contents (sequence of bits) of a file (whose length could be any number of bits, and so not necessarily a multiple of the trendy eight) and send that file to a friend along with the hash-value. My friend should be able to compute the same hash from the file contents.
>
>
>
What you're looking for is one of the **cryptographic hash functions**. Typically, to calculate the file hash, MD5, SHA-1, SHA-256 are used. You want to open the file as **binary** and hash the binary bits, and finally digest it & encode it in hexadecimal form.
```
import hashlib
def calculateSHA256Hash(filePath):
    h = hashlib.sha256()
    with open(filePath, "rb") as f:
        data = f.read(2048)
        while data != b"":
            h.update(data)
            data = f.read(2048)
    return h.hexdigest()
print(calculateSHA256Hash(filePath = 'stackoverflow_hash.py'))
```
The above code takes itself as an input, hence it produced an SHA-256 hash for itself, being `610e15155439c75f6b63cd084c6a235b42bb6a54950dcb8f2edab45d0280335e`. This remains consistent as long as the code is not changed.
Another example would be to hash a txt file, `test.txt` with content `Helloworld`.
This is done by simply changing the last line of the code to "test.txt"
```
print(calculateSHA256Hash(filePath = 'test.txt'))
```
This gives a SHA-256 hash of `5ab92ff2e9e8e609398a36733c057e4903ac6643c646fbd9ab12d0f6234c8daf`.
|
I arrived at `sha256hexdigestFromFile`, an alternative to @Lincoln Yan's `calculateSHA256Hash`, after reviewing the [standard](http://dx.doi.org/10.6028/NIST.FIPS.180-4) for SHA-256. This is also a response to my comment about `2048`.
```
def sha256hexdigestFromFile(filePath, blocks = 1):
    '''Return as a str the SHA-256 message digest of contents of
    file at filePath.

    Reference: Introduction of NIST (2015) Secure Hash
    Standard (SHS), FIPS PUB 180-4. DOI:10.6028/NIST.FIPS.180-4
    '''
    assert isinstance(blocks, int) and 0 < blocks, \
        'The blocks argument must be an int greater than zero.'
    with open(filePath, 'rb') as MessageStream:
        from hashlib import sha256
        from functools import reduce

        def hashUpdated(Hash, MESSAGE_BLOCK):
            Hash.update(MESSAGE_BLOCK)
            return Hash

        def messageBlocks():
            'Return a generator over the blocks of the MessageStream.'
            WORD_SIZE, BLOCK_SIZE = 4, 512  # PER THE SHA-256 STANDARD
            BYTE_COUNT = WORD_SIZE * BLOCK_SIZE * blocks
            block = MessageStream.read(BYTE_COUNT)
            while block:  # keep reading to EOF; a single yield would hash only the first block
                yield block
                block = MessageStream.read(BYTE_COUNT)

        return reduce(hashUpdated, messageBlocks(), sha256()).hexdigest()
```
| 15,643
|
51,797,321
|
I am beginning to program in python. I want to delete elements from array based on the list of index values that I have. Here is my code
```
x = [12, 45, 55, 6, 34, 37, 656, 78, 8, 99, 9, 4]
del_list = [0, 4, 11]
desired output = [45, 55, 6, 37, 656, 78, 8, 99, 9]
```
Here is what I have done
```
x = [12, 45, 55, 6, 34, 37, 656, 78, 8, 99, 9, 4]
index_list = [0, 4, 11]
for element in index_list:
    del x[element]
print(x)
```
I get this error. **I can figure out that because of deleting elements the list shortens and the index goes out of range.** But I am not sure how to fix it.
```
Traceback (most recent call last):
IndexError: list assignment index out of range
```
|
2018/08/11
|
[
"https://Stackoverflow.com/questions/51797321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10211342/"
] |
This question already has an answer here. [How to delete elements from a list using a list of indexes?](https://stackoverflow.com/questions/38647439/how-to-delete-elements-from-a-list-using-a-list-of-indexes).
BTW, this will do it for you:
```
x = [12, 45, 55, 6, 34, 37, 656, 78, 8, 99, 9, 4]
index_list = [0, 4, 11]
value_list = []
for element in index_list:
    value_list.append(x[element])

for element in value_list:
    x.remove(element)  # remove() deletes the first matching value, so this assumes the values are unique
print(x)
```
|
You can sort the `del_list` list in descending order and then use the `list.pop()` method to delete specified indexes:
```
for i in sorted(del_list, reverse=True):
    x.pop(i)
```
so that `x` would become:
```
[45, 55, 6, 37, 656, 78, 8, 99, 9]
```
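An alternative sketch that sidesteps the mutation problem entirely is a list comprehension over `enumerate`:

```python
x = [12, 45, 55, 6, 34, 37, 656, 78, 8, 99, 9, 4]
index_list = {0, 4, 11}  # a set makes the membership test O(1)
x = [value for i, value in enumerate(x) if i not in index_list]
print(x)  # [45, 55, 6, 37, 656, 78, 8, 99, 9]
```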
| 15,644
|
53,501,778
|
I have installed `mysql connector`, which already has a built-in SQL adapter, so I don't need to install `mysqlclient`. But when I run `python manage.py migrate`, it asks me to install `mysqlclient`, and I cannot install `mysqlclient`. Can anyone help me fix this problem? Thanks.
error:
```
File "C:\Program Files (x86)\Python37-32\lib\site-packages\django\db\models\base.py", line 101, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "C:\Program Files (x86)\Python37-32\lib\site-packages\django\db\models\base.py", line 305, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\django\db\models\options.py", line 203, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "C:\Program Files (x86)\Python37-32\lib\site-packages\django\db\__init__.py", line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\django\db\utils.py", line 202, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Program Files (x86)\Python37-32\lib\site-packages\django\db\utils.py", line 110, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Program Files (x86)\Python37-32\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\django\db\backends\mysql\base.py", line 20, in <module>
) from err
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module.
Did you install mysqlclient?
```
|
2018/11/27
|
[
"https://Stackoverflow.com/questions/53501778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Maybe you need to install the mysql-connector-c library first, and then:
```
pip install mysqlclient==1.3.13
```
|
[There are two backends](https://docs.djangoproject.com/en/2.1/ref/databases/#mysql-db-api-drivers) for using MySQL with python
* mysqlclient - recommended by the docs, but can be tricky to install on Windows
* mysql connector - doesn't always support the latest Django version.
First you need to decide which backend you want to use.
If you want to use mysql connector, then you don't need to install `mysqlclient`. You do need to change `ENGINES` in your `DATABASES` setting to use the connector backend.
```
DATABASES = {
    'default': {
        'NAME': 'user_data',
        'ENGINE': 'mysql.connector.django',
        ...
    },
    ...
}
```
If you make this change, it should stop the *did you install mysqlclient* error messages. See [the docs](https://dev.mysql.com/doc/connector-python/en/connector-python-django-backend.html) for more info about using mysql connector.
If you want to use `mysqlclient`, then leave the ENGINE as `'django.db.backends.mysql'` in your `DATABASES` setting. Installing `mysqlclient` on Windows can be tricky, you have a few different options:
1. Install an official wheel. As of December 2018, there are wheels for the latest release mysqlclient 1.3.14 for Python 3.6 and Python 3.7. Install mysqlclient with:
```
pip install mysqlclient==1.3.14
```
2. Install one of the [unofficial wheels](https://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient). This will allow you to use Python 3.7 and a more up-to-date version of mysqlclient.
3. Install mysqlclient from source. This can be tricky on Windows, and I can't offer any tips on how to do that.
| 15,646
|
13,578,923
|
Can anybody figure out how the python code below works and give me a possible way to port it to Objective-C (iOS) to work in my own project?
`month_id = calendar.timegm(datetime(year, month, 1, hour, 0, 0).timetuple()) * 1000`
Thanks a ton!
|
2012/11/27
|
[
"https://Stackoverflow.com/questions/13578923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/981575/"
] |
calendar.timegm converts the given time to time in seconds since epoch of 1970. More information at <http://docs.python.org/3/library/calendar.html?highlight=calendar.timegm#calendar.timegm>
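For example, here is the original expression with the year, month, and hour filled in for November 2012 (UTC); it yields milliseconds since the epoch:

```python
import calendar
from datetime import datetime

# The original expression, with example values for year/month/hour
year, month, hour = 2012, 11, 0
month_id = calendar.timegm(datetime(year, month, 1, hour, 0, 0).timetuple()) * 1000
print(month_id)  # → 1351728000000 (2012-11-01 00:00:00 UTC, in milliseconds)
```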
|
open a python interpreter, then enter this:
```
>>> import calendar
>>> from datetime import datetime
```
then input your line of code, and you should be able to get a pretty good idea.
| 15,647
|
40,514,205
|
I am developing a slack bot with plugins using entry points. I want to dynamically add a plugin during runtime.
I have a project with this structure:
```
+ ~/my_project_dir/
+ my_projects_python_code/
+ plugins/
- plugin1.py
- plugin2.py
- ...
- pluginN.py
- setup.py
- venv/
- install.sh
```
My `setup.py` file looks like this:
```
from setuptools import setup, find_packages
setup(
name="My_Project_plugins",
version="1.0",
packages=['plugins'],
entry_points="""
[my_project.plugins]
plugin1 = plugins.plugin1:plugin1_class
plugin2 = plugins.plugin2:plugin2_class
...
pluginN = plugins.pluginN:pluginN_class
"""
)
```
Running `sudo install.sh` does the following:
1. Copies the needed files to `/usr/share/my_project_dir/`
2. Activate virtualenv at `/usr/share/my_project_dir/venv/bin/activate`
3. Run: `python setup.py develop`
This works as expected and sets up my entry points correctly so that I can use them through the bot.
But I want to be able to add a plugin to `setup.py` and be able to use it while the bot is running. So I want to add a line: `pluginN+1 = plugins.pluginN+1:pluginN+1_class` and have pluginN+1 available to use.
What I've tried/learned:
* After `/usr/share/my_project_dir/venv/bin/activate` I open a Python interactive shell and iterate through `pkg_resources.iter_entry_points()`, which lists everything that was loaded from the initial state of setup.py (i.e. plugin1 through pluginN)
* If I add a line to `setup.py` and run `sudo python setup.py develop` and iterate again with the same Python shell, it doesn't pick up the new plugin but if I exit the shell and reopen it, the new plugin gets picked up.
* I noticed that when I install the bot, part of the output says:
+ `Copying My_Project_plugins-1.0-py2.7.egg to /usr/share/my_project-dir/venv/lib/python2.7/site-packages`
* When I `cd /usr/share/my_project_dir/`, activate my virtualenv, and run `setup.py` from the shell it says:
+ `Creating /usr/local/lib/python2.7/dist-packages/My_Project-plugins.egg-link (link to .)
My_Project-plugins 1.0 is already the active version in easy-install.pth`
|
2016/11/09
|
[
"https://Stackoverflow.com/questions/40514205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1496918/"
] |
I needed to do something similar to load a dummy plugin for test purposes. This differs slightly from your use-case in that I was specifically trying to avoid needing to define the entry points in the package (as it is just test code).
I found I could dynamically insert entries into the pkg\_resources data structures as follows:
```
import pkg_resources
# Create the fake entry point definition
ep = pkg_resources.EntryPoint.parse('dummy = dummy_module:DummyPlugin')
# Create a fake distribution to insert into the global working_set
d = pkg_resources.Distribution()
# Add the mapping to the fake EntryPoint
d._ep_map = {'namespace': {'dummy': ep}}
# Add the fake distribution to the global working_set
pkg_resources.working_set.add(d, 'dummy')
```
This, at run time, added an entry point called 'dummy' to 'namespace', which would be the class 'DummyPlugin' in 'dummy\_module.py'.
This was determined through use of the setuptools docs, and dir() on the objects to get more info as needed.
Docs are here: <http://setuptools.readthedocs.io/en/latest/pkg_resources.html>
You might especially look at <http://setuptools.readthedocs.io/en/latest/pkg_resources.html#locating-plugins> if all you need to do is load a plugin that you have just stored to your local filesystem.
|
It's been at least 5 years since I first asked myself almost the same question, and your question is the impulse to finally figure it out.
I was also interested in whether one can add entry points from the same directory as the script, without installing a package. Though I always knew that the package's only contents might be some metadata with entry points pointing at other packages.
Anyway, here is some setup of my directory:
```
ep_test newtover$ tree
.
├── foo-0.1.0.dist-info
│ ├── METADATA
│ └── entry_points.txt
└── foo.py
1 directory, 3 files
```
Here is the contents of `foo.py`:
```
ep_test newtover$ cat foo.py
def foo1():
print 'foo1'
def foo2():
print 'foo2'
```
Now let's open `ipython`:
```
In [1]: def write_ep(lines): # a helper to update entry points file
...: with open('foo-0.1.0.dist-info/entry_points.txt', 'w') as f1:
...: print >> f1, '\n'.join(lines)
...:
In [2]: write_ep([ # only one entry point under foo.test
...: "[foo.test]",
...: "foo_1 = foo:foo1",
...: ])
In [3]: !cat foo-0.1.0.dist-info/entry_points.txt
[foo.test]
foo_1 = foo:foo1
In [4]: import pkg_resources
In [5]: ws = pkg_resources.WorkingSet() # here is the answer on the question
In [6]: list(ws.iter_entry_points('foo.test'))
Out[6]: [EntryPoint.parse('foo_1 = foo:foo1')]
In [7]: write_ep([ # two entry points
...: "[foo.test]",
...: "foo_1 = foo:foo1",
...: "foo_2 = foo:foo2"
...: ])
In [8]: ws = pkg_resources.WorkingSet() # a new instance of WorkingSet
```
With default parameters `WorkingSet` just revisits each entry in sys.path, but you can narrow the list. `pkg_resources.iter_entry_points` is bound to a global instance of `WorkingSet`.
```
In [9]: list(ws.iter_entry_points('foo.test')) # both are visible
Out[9]: [EntryPoint.parse('foo_1 = foo:foo1'), EntryPoint.parse('foo_2 = foo:foo2')]
In [10]: foos = [ep.load() for ep in ws.iter_entry_points('foo.test')]
In [11]: for func in foos: print 'name is {}'.format(func.__name__); func()
name is foo1
foo1
name is foo2
foo2
```
And the contents of METADATA as well:
```
ep_test newtover$ cat foo-0.1.0.dist-info/METADATA
Metadata-Version: 1.2
Name: foo
Version: 0.1.0
Summary: entry point test
```
**UPD1**: I thought it over once again and now understand that you need an additional step before using the new plugins: you need to reload the modules.
This might be as easy as:
```
In [33]: modules_to_reload = {ep1.module_name for ep1 in ws.iter_entry_points('foo.test')}
In [34]: for module_name in modules_to_reload:
....: reload(__import__(module_name))
....:
```
But if a new version of your plugins package is based on significant changes in other used modules, you might need a particular order of reloading and reloading of those changed modules. This might become a cumbersome task, so that restarting the bot would be the only way to go.
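Note that in Python 3 the built-in `reload` moved to `importlib`; here is a minimal sketch of the same reload-by-name step, using `json` purely as a stand-in for a plugin module:

```python
import importlib

module_name = "json"  # stand-in for a module name taken from the entry points
mod = importlib.import_module(module_name)
mod = importlib.reload(mod)  # re-executes the module so newly added code takes effect
assert mod.loads("[1, 2]") == [1, 2]
```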
| 15,648
|
51,004,898
|
I'm trying to programmatically retrieve ASIN numbers for over 500+ books.
example: Product Catch-22 by Joseph Heller
Amazon URL: [https://www.amazon.com/Catch-22-Joseph-Heller/dp/3866155239](https://rads.stackoverflow.com/amzn/click/com/3866155239)
I can get the product numbers manually by searching for each product through a browser however that's not efficient. I would like to use an API or wget/curl at the worst case, but I'm hitting some stumbling blocks.
The Amazon API is not exactly the easiest to use...(I've been hitting my head against the wall trying to get the Signature Request Hash correct with python to no avail..)
Then I thought googler might be another option, however after 15 requests (even with time.sleep(30)) google locks me out for a few hours [coming from multiple IP sources as well].
How about bing...well they don't show any Amazon results via the API...which is really odd...
I tried writing my own Google Parser with wget but then I would have to import all that into BeautifulSoup and reparse...my sed and awk skills leave a lot to be desired...
Basically...Has anyone come across an easier way of obtaining the ASIN number for a product programmatically?
|
2018/06/23
|
[
"https://Stackoverflow.com/questions/51004898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9751133/"
] |
<https://isbndb.com/> charges for the API :(
so...
Went the Google Web Scrape Route
```
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup as soup
import requests
import time
def get_amazon_link(book_title):
url = 'https://www.google.com/search?q=amazon+novel+'+book_title
print(url)
url = Request(url)
url.add_header('User-Agent', 'Mozilla/5.0')
with urlopen(url) as f:
data = f.readlines()
page_soup = soup(str(data), 'html.parser')
for line in page_soup.findAll('h3',{'class':'r'}):
for item in line.findAll('a', href=True):
item = item['href'].split('=')[1]
item = item.split('&')[0]
return item
def get_wiki_link(book_title):
url = 'https://www.google.com/search?q=wiki+novel+'+book_title
print(url)
url = Request(url)
url.add_header('User-Agent', 'Mozilla/5.0')
with urlopen(url) as f:
data = f.readlines()
page_soup = soup(str(data), 'html.parser')
for line in page_soup.findAll('h3',{'class':'r'}):
for item in line.findAll('a', href=True):
item = item['href'].split('=')[1]
item = item.split('&')[0]
return item
a = open('amazonbookslinks','w')
w = open('wikibooklinks','w')
with open('booklist') as b:
books = b.readlines()
for book in books:
book_title = book.replace(' ','+')
amazon_result = get_amazon_link(book_title)
amazon_msg = book +'@'+ amazon_result
a.write(amazon_msg + '\n')
time.sleep(5)
wiki_result = get_wiki_link(book_title)
wiki_msg = book +'@'+ wiki_result
w.write(wiki_msg + '\n')
time.sleep(5)
a.close()
w.close()
```
Not Pretty but it worked :)
|
According to Amazon's customer service page:
<https://www.amazon.co.uk/gp/help/customer/display.html?nodeId=898182>
>
> ASIN stands for Amazon Standard Identification Number. Almost every
> product on our site has its own ASIN, a unique code we use to identify
> it. For books, the ASIN is the same as the ISBN number, but for all
> other products a new ASIN is created when the item is uploaded to our
> catalogue.
>
>
>
This means that for the book 'Catch 22', its ISBN-10 is `3866155239`.
I suggest that you use a website like <https://isbndb.com/> to find ISBNs for books which will automatically give you the ASINs you are looking for. It also comes with a REST API which you can read about at <https://isbndb.com/apidocs>.
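As an illustration, once you have the ISBN-10 you can build the product link directly — the `/dp/` URL form is a common Amazon pattern, and the helper name below is just a sketch:

```python
def amazon_url_from_isbn10(isbn10):
    """Build an Amazon product URL from a book's ISBN-10 (its ASIN)."""
    return "https://www.amazon.com/dp/" + isbn10

print(amazon_url_from_isbn10("3866155239"))
# → https://www.amazon.com/dp/3866155239
```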
| 15,650
|
17,260,003
|
This question is in python:
```
battleships = [['0','p','0','s'],
['0','p','0','s'],
['p','p','0','s'],
['0','0','0','0']]
def fun(a,b,bships):
c = len(bships)
return bships[c-b][a-1]
print(fun(1,1,battleships))
print(fun(1,2,battleships))
```
first print gives 0
second print gives p
I can't work out why, if you could give an explanation it would be much appreciated.
Thank you to those who help :)
|
2013/06/23
|
[
"https://Stackoverflow.com/questions/17260003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2266115/"
] |
Indexing starts at `0`. So battleships contains items at indexes `0`, `1`, `2`, `3`.
First `len(bships)` gets the length of the list of lists `battleships`, which is 4.
`bships[c-b][a-1]` accesses items in a list through their index value. So with your first call to the function:
```
print(fun(1,1,battleships))
```
It's `bships[4-1][1-1]` which is `bships[3][0]` which is `['0','0','0','0'][0]` which is `0`
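You can check the arithmetic directly:

```python
battleships = [['0','p','0','s'],
               ['0','p','0','s'],
               ['p','p','0','s'],
               ['0','0','0','0']]
c = len(battleships)                 # 4
assert battleships[c - 1][0] == '0'  # fun(1, 1): bships[3][0], last row
assert battleships[c - 2][0] == 'p'  # fun(1, 2): bships[2][0], the row above it
```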
|
You can work it out easily by replacing the calculations with the actual values:
In the first call, you are indexing:
```
bships[c-b][a-1] == bships[4-1][1-1] == bships[3][0]
```
Counting from 0, that's the last row, `['0','0','0','0']`, first element, `'0'`.
The second call evaluates to:
```
bships[c-b][a-1] == bships[4-2][1-1] == bships[2][0]
```
so first cell of the second-last row, `['p','p','0','s']` is a `'p'`.
Note that in Python, you can use negative indices without calculating the `len()` first; remove the `c` from your function and it'll all work just the same:
```
>>> battleships = [['0','p','0','s'],
... ['0','p','0','s'],
... ['p','p','0','s'],
... ['0','0','0','0']]
>>> def fun(a,b,bships):
... return bships[-b][a-1]
...
>>> print(fun(1,1,battleships))
0
>>> print(fun(1,2,battleships))
p
```
That is because Python treats negative indices as counting from the end; internally it'll use the length of the sequence (which is stored with the sequence) to calculate just the same thing but faster.
| 15,651
|
56,434,745
|
I'm trying to load MySQL JDBC driver from a python app. I'm not invoking 'bin/pyspark' or 'spark-submit' program; instead I have a Python script in which I'm initializing 'SparkContext' and 'SparkSession' objects.
I understand that we can pass '--jars' option when invoking 'pyspark', but how do I load and specify jdbc driver in my python app?
|
2019/06/03
|
[
"https://Stackoverflow.com/questions/56434745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1208108/"
] |
I think you want to do something like this
```py
from pyspark.sql import SparkSession
# Creates spark session with JDBC JAR
spark = SparkSession.builder \
.appName('stack_overflow') \
.config('spark.jars', '/path/to/mysql/jdbc/connector') \
.getOrCreate()
# Creates your DataFrame with spark session with JDBC
df = spark.createDataFrame([
(1, 'Hello'),
(2, 'World!')
], ['Index', 'Value'])
df.write.jdbc('jdbc:mysql://host:3306/my_db', 'my_table',
mode='overwrite',
properties={'user': 'db_user', 'password': 'db_pass'})
```
|
The answer is to create the SparkContext like this:
```py
spark_conf = SparkConf().set("spark.jars", "/my/path/mysql_jdbc_driver.jar")
sc = SparkContext(conf=spark_conf)
```
This will load mysql driver into classpath.
| 15,656
|
5,835,043
|
I'm trying to use the ipy.vim script to set up a small python dev environment, but I'm running into a connection problem. When I type ipy\_vimserver.setup("demo") I get this error:
```
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
self.run()
File "/usr/lib/pymodules/python2.6/IPython/Extensions/ipy_vimserver.py", line 109, in serve_me
self.listen()
File "/usr/lib/pymodules/python2.6/IPython/Extensions/ipy_vimserver.py", line 93, in listen
self.socket.bind(self.__sname)
File "<string>", line 1, in bind
error: [Errno 98] Address already in use
```
When I type it a second time, everything is fine, but when I launch gvim the F4/F5 commands do nothing and state that they can't connect to the IPython server.
any suggestion?
|
2011/04/29
|
[
"https://Stackoverflow.com/questions/5835043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/731470/"
] |
I found it! Thank you Anders.
```
function UnixToDateTime(USec: Longint): TDateTime;
const
// Sets UnixStartDate to TDateTime of 01/01/1970
UnixStartDate: TDateTime = 25569.0;
begin
Result := (USec / 86400) + UnixStartDate;
end;
```
|
That is the [unix timestamp](http://en.wikipedia.org/wiki/Unix_time) for Fri, 29 Apr 2011 11:42:31 GMT.
**Edit**
According to [IBS](http://ibs.sourceforge.net/documentation.html#introduction), it uses postgresql as its backend database. You should be able to convert it using [to\_timestamp](http://www.postgresql.org/docs/8.4/interactive/functions-formatting.html).
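A quick sketch of decoding such a timestamp in Python — the value 1304077351 is used here as an illustration, since it corresponds to the date quoted above:

```python
from datetime import datetime, timezone

ts = 1304077351  # illustrative value: Fri, 29 Apr 2011 11:42:31 GMT
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt)  # → 2011-04-29 11:42:31+00:00
```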
| 15,657
|
39,190,714
|
Sorry if this has already been answered using terminology I don't know to search for.
I have one project:
```
project1/
class1.py
class2.py
```
Where `class2` imports some things from `class1`, but each has its own `if __name__ == '__main__'` that uses their respective classes I run frequently. But then, I have a second project which creates a subclass of each of the classes from `project1`. So I would like `project1` to be a package, so that I can import it into `project2` nicely:
```
project2/
project1/
__init__.py
class1.py
class2.py
subclass1.py
subclass2.py
```
However, I'm having trouble with the importing with this. If I make `project1` a package then inside `class2.py` I would want to import `class1.py` code using `from project1.class1 import class1`. This makes `project2` code run correctly. But now when I'm trying to use `project1` not as a package, but just running code from directly within that directory, the `project1` code fails (since it doesn't know what `project1` is). If I set it up for `project1` to work directly within that directory (i.e. the import in `class2` is `from class1 import Class1`), then this import fails when trying to use `project1` as a package from `project2`.
Is there a way to have it both ways (use `project1` both as a package and not as a package)? If there is a way, is it a discouraged way and I should be restructuring my code anyway? Other suggestions on how I should be handling this? Thanks!
**EDIT**
Just to clarify, the problem arises because `subclass2` imports `class2` which in turn imports `class1`. Depending on which way `class2` imports `class1`, the import will fail from `project2` or from `project1`, because one sees `project1` as a package while the other sees it as the working directory.
**EDIT 2**
I'm using Python 3.5. Apparently this works in Python 2, but not in my current version of python.
|
2016/08/28
|
[
"https://Stackoverflow.com/questions/39190714",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1191812/"
] |
EDIT 2: Added code to class2.py to attach the parent directory to the PYTHONPATH to comply with how Python3 module imports work.
```
import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
```
Removed relative import of Class1.
Folder structure:
```
project2
- class3.py
- project1
- __init__.py
- class1.py
- class2.py
```
*project2/project1/class1.py*
```
class Class1(object):
def __init__(self):
super(Class1, self).__init__()
self.name = "DAVE!"
def printname(self):
print(self.name)
def run():
thingamy = Class1()
thingamy.printname()
if __name__ == "__main__":
run()
```
*project2/project1/class2.py*
```
import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from class1 import Class1
class Class2(Class1):
def childMethod(self):
print('Calling child method')
def run():
thingamy = Class2()
thingamy.printname()
thingamy.childMethod()
if __name__ == "__main__":
run()
```
*project2/class3.py*
```
from project1.class2 import Class2
from project1.class1 import Class1
class Class3(Class2):
def anotherChildMethod(self):
print('Calling another child method')
def run():
thingamy = Class3()
thingamy.printname()
thingamy.anotherChildMethod()
if __name__ == "__main__":
run()
```
With this setup each of class1, 2 and 3 can be run as standalone scripts.
|
You could run `class2.py` from inside the `project2` folder, i.e. with the current working directory set to the `project2` folder:
```
user@host:.../project2$ python project1/class2.py
```
On windows that would look like this:
```
C:\...project2> python project1/class2.py
```
Alternatively you could modify the python path inside of `class2.py`:
```
import sys
sys.path.append(".../project2")
from project1.class1 import class1
```
Or modify the `PYTHONPATH` environment variable similarly.
To be able to extend your project and import for example something in `subclass1.py` from `subclass2.py` you should consider starting the import paths always with `project2`, for example in `class2.py`:
```
from project2.project1.class1 import class1
```
Ofcourse you would need to adjust the methods I just showed to match the new path.
| 15,658
|
63,994,247
|
I'm new to Docker. I'm trying to create a dockerfile which basically sets kubectl (Kubernetes client), helm 3 and Python 3.7. I used:
```
FROM python:3.7-alpine
COPY ./ /usr/src/app/
WORKDIR /usr/src/app
```
Now I'm trying to figure out how to add `kubectl` and `helm`. What would be the best way to install those two?
|
2020/09/21
|
[
"https://Stackoverflow.com/questions/63994247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9808098/"
] |
Working Dockerfile. This will install the latest and stable versions of `kubectl` and `helm-3`
```
FROM python:3.7-alpine
COPY ./ /usr/src/app/
WORKDIR /usr/src/app
RUN apk add curl openssl bash --no-cache
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl \
&& curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
&& chmod +x get_helm.sh && ./get_helm.sh
```
|
Python should be available from a python base image I guess.
My take would be something like
```
ENV K8S_VERSION=v1.18.X
ENV HELM_VERSION=v3.X.Y
ENV HELM_FILENAME=helm-${HELM_VERSION}-linux-amd64.tar.gz
```
and then in the Dockerfile
```
RUN curl -L https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \
&& curl -L https://storage.googleapis.com/kubernetes-helm/${HELM_FILENAME} | tar xz && mv linux-amd64/helm /bin/helm && rm -rf linux-amd64
```
But be aware of the availability of curl or wget in the base image; these or other tools and programs may have to be installed before you can use them in the Dockerfile. This always depends on the base image used.
| 15,659
|
61,856,478
|
I'm developing a python script to deploy an Azure Function App. For this reason I can't use another Python version to make this easier.
In azure portal I get this error:
[Azure Function app pyarrow module not found](https://i.stack.imgur.com/QM50w.png)
When I try to install it via VS Code with pip I get this error: [Error installing Pyarrow](https://i.stack.imgur.com/Hv7dV.png)
I managed to make this work using anaconda environment but since my goal is to make it run in an azure function I don't know how to solve this situation.
This is my requirements.txt:
```
arrow-cpp==0.17.*
pyarrow=0.17.*
adal==1.2.2
astroid==2.3.3
attrs==19.3.0
Automat==20.2.0
azure-applicationinsights==0.1.0
azure-batch==4.1.3
azure-common==1.1.24
azure-core==1.5.0
azure-cosmosdb-nspkg==2.0.2
azure-cosmosdb-table==1.0.6
azure-eventgrid==1.3.0
azure-functions==1.2.0
azure-graphrbac==0.40.0
azure-keyvault==1.1.0
azure-loganalytics==0.1.0
azure-mgmt==4.0.0
azure-mgmt-advisor==1.0.1
azure-mgmt-applicationinsights==0.1.1
azure-mgmt-authorization==0.50.0
azure-mgmt-batch==5.0.1
azure-mgmt-batchai==2.0.0
azure-mgmt-billing==0.2.0
azure-mgmt-cdn==3.1.0
azure-mgmt-cognitiveservices==3.0.0
azure-mgmt-commerce==1.0.1
azure-mgmt-compute==4.6.2
azure-mgmt-consumption==2.0.0
azure-mgmt-containerinstance==1.5.0
azure-mgmt-containerregistry==2.8.0
azure-mgmt-containerservice==4.4.0
azure-mgmt-cosmosdb==0.4.1
azure-mgmt-datafactory==0.6.0
azure-mgmt-datalake-analytics==0.6.0
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-datamigration==1.0.0
azure-mgmt-devspaces==0.1.0
azure-mgmt-devtestlabs==2.2.0
azure-mgmt-dns==2.1.0
azure-mgmt-eventgrid==1.0.0
azure-mgmt-eventhub==2.6.0
azure-mgmt-hanaonazure==0.1.1
azure-mgmt-iotcentral==0.1.0
azure-mgmt-iothub==0.5.0
azure-mgmt-iothubprovisioningservices==0.2.0
azure-mgmt-keyvault==1.1.0
azure-mgmt-loganalytics==0.2.0
azure-mgmt-logic==3.0.0
azure-mgmt-machinelearningcompute==0.4.1
azure-mgmt-managementgroups==0.1.0
azure-mgmt-managementpartner==0.1.1
azure-mgmt-maps==0.1.0
azure-mgmt-marketplaceordering==0.1.0
azure-mgmt-media==1.0.0
azure-mgmt-monitor==0.5.2
azure-mgmt-msi==0.2.0
azure-mgmt-network==2.7.0
azure-mgmt-notificationhubs==2.1.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-policyinsights==0.1.0
azure-mgmt-powerbiembedded==2.0.0
azure-mgmt-rdbms==1.9.0
azure-mgmt-recoveryservices==0.3.0
azure-mgmt-recoveryservicesbackup==0.3.0
azure-mgmt-redis==5.0.0
azure-mgmt-relay==0.1.0
azure-mgmt-reservations==0.2.1
azure-mgmt-resource==2.2.0
azure-mgmt-scheduler==2.0.0
azure-mgmt-search==2.1.0
azure-mgmt-servicebus==0.5.3
azure-mgmt-servicefabric==0.2.0
azure-mgmt-signalr==0.1.1
azure-mgmt-sql==0.9.1
azure-mgmt-storage==2.0.0
azure-mgmt-subscription==0.2.0
azure-mgmt-trafficmanager==0.50.0
azure-mgmt-web==0.35.0
azure-nspkg==3.0.2
azure-servicebus==0.21.1
azure-servicefabric==6.3.0.0
azure-servicemanagement-legacy==0.20.6
azure-storage==0.36.0
azure-storage-blob==12.3.1
azure-storage-common==1.4.2
azure-storage-file==1.4.0
azure-storage-file-datalake==12.0.1
azure-storage-queue==1.4.0
beautifulsoup4==4.8.1
bs4==0.0.1
certifi==2019.11.28
cffi==1.14.0
chardet==3.0.4
colorama==0.4.3
constantly==15.1.0
cryptography==2.8
cssselect==1.1.0
hyperlink==19.0.0
idna==2.7
incremental==17.5.0
isodate==0.6.0
isort==4.3.21
lazy-object-proxy==1.4.3
llvmlite==0.32.0
lxml==4.5.0
mccabe==0.6.1
msrest==0.6.11
msrestazure==0.6.2
numba==0.49.0
numpy==1.18.1
oauthlib==3.1.0
pandas==1.0.1
parsel==1.5.2
Protego==0.1.16
proxyscrape==0.3.0
pycparser==2.19
PyHamcrest==2.0.1
PyJWT==1.7.1
pylint==2.4.4
pyOpenSSL==19.1.0
python-dateutil==2.8.1
pytz==2019.3
requests==2.23.0
requests-oauthlib==1.3.0
six==1.14.0
soupsieve==1.9.5
thrift==0.13.0
urllib3==1.23
w3lib==1.21.0
wrapt==1.11.2
zope.interface==4.7.1
```
|
2020/05/17
|
[
"https://Stackoverflow.com/questions/61856478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13562280/"
] |
As you use `conda` as the package manager, you should also use it to install `pyarrow` and `arrow-cpp`. In the output above, VSCode uses `pip` for package management. You should consider reporting this as a bug to VSCode. Your current environment is detected as a `venv` and not as a `conda` environment, as you can see in the Python environment box in the lower left.
The best workaround for now is to go to the terminal and manually type in `conda install pyarrow=0.17 arrow-cpp=0.17`. Note that you don't actually need to supply `0.17.*`, as `conda` automatically expands `0.17` to `0.17.*`.
|
Thank you for your answer.
In fact, I think I can't change the environment because this `venv` is actually the environment used by Azure Functions. Is there any way I can use `conda`-installed packages in the Azure Function?
| 15,660
|
36,465,337
|
I am using a Dymo USB scale with PyUSB and everything is really great apart from the scale's automatic shutdown after three minutes. I would like to keep it running as long as my python program is running. Is there any way to do this using python?
I am new to PyUSB and have followed this tutorial successfully so far: <http://steventsnyder.com/reading-a-dymo-usb-scale-using-python/>.
The auto shutdown can be disabled manually as said here:
<http://www.manualslib.com/manual/472133/Dymo-S100.html?page=7>
but this must be performed every single time which is a problem.
Many thanks in advance for any suggestions!
|
2016/04/07
|
[
"https://Stackoverflow.com/questions/36465337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4083055/"
] |
As Ignacio said, there doesn't seem to be any software way to do this. We eventually managed to stop the automatic shutdown by wiring a timer directly to the button that changes the units mode from grams to ounces. "Pressing" this every few seconds prevents the shutdown, and a little bit of extra coding allows for reading the mass from either mode correctly.
Not as simple a solution as I'd hoped for, but perhaps the idea will help anyone with a weirdly-specific problem similar to this.
|
Here is a hardware description showing how to open/modify the DYMO scale, and code to toggle the scale's button (to avoid auto-shutdown):
<https://learn.adafruit.com/data-logging-iot-weight-scale/code-walkthrough>
| 15,661
|
16,114,358
|
Is there a way to trace all the calls made by a web page when loading it? Say, for example, I went to a video watching site; I would like to trace all the GET calls recursively until I find an mp4/flv file. I know one way to do that would be to follow the URLs recursively, but this solution is not always suitable and is quite limiting (say there are a few thousand links, or the links are in a file which can't be read). Is there a way to do this? Ideally, the implementation could be in Python, but PHP as well as C is fine too.
|
2013/04/19
|
[
"https://Stackoverflow.com/questions/16114358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2300843/"
] |
The following code may be what you are looking for. The object should flash for the total amount of `time`, changing its color every `intervalTime`.
```
using UnityEngine;
using System.Collections;
public class FlashingObject : MonoBehaviour {
private Material mat;
private Color[] colors = {Color.yellow, Color.red};
public void Awake()
{
mat = GetComponent<MeshRenderer>().material;
}
// Use this for initialization
void Start () {
StartCoroutine(Flash(5f, 0.05f));
}
IEnumerator Flash(float time, float intervalTime)
{
float elapsedTime = 0f;
int index = 0;
while(elapsedTime < time )
{
mat.color = colors[index % 2];
elapsedTime += Time.deltaTime;
index++;
yield return new WaitForSeconds(intervalTime);
}
}
}
```
|
The answer above is great, but it does not stop flashing, because it takes far too long for the float to reach 5f.
The trick is to set the first parameter in the **Flash** function to something smaller, like 1f, instead of 5f. Take this script, and attach it to any game object. It will flash between yellow and red for about 2.5 seconds, then stop.
```
using UnityEngine;
using System.Collections;
public class FlashingObject : MonoBehaviour
{
private Material _mat;
private Color[] _colors = { Color.yellow, Color.red };
private float _flashSpeed = 0.1f;
private float _lengthOfTimeToFlash = 1f;
public void Awake()
{
this._mat = GetComponent<MeshRenderer>().material;
}
// Use this for initialization
void Start()
{
StartCoroutine(Flash(this._lengthOfTimeToFlash, this._flashSpeed));
}
IEnumerator Flash(float time, float intervalTime)
{
float elapsedTime = 0f;
int index = 0;
while (elapsedTime < time)
{
_mat.color = _colors[index % 2];
elapsedTime += Time.deltaTime;
index++;
yield return new WaitForSeconds(intervalTime);
}
}
}
```
| 15,662
|
64,354,599
|
What is the best way to create a software client (service) that installs and then runs in the background? When you turn on the device (where the service is installed), this service will automatically send the device's network availability information to a web server. The web server will only hold the state of the devices on the network (on/off).
So what is the optimal solution for creating the software client? Using PowerShell, Python, or something else?
Does anyone have experience with a similar problem?
|
2020/10/14
|
[
"https://Stackoverflow.com/questions/64354599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14449220/"
] |
Use
```
(?m)^<hr>\r?\nBitmap:[\s\S]*?(?=^<hr>$|\Z)
```
See [proof](https://regex101.com/r/i64K0W/4).
**Explanation**
```
--------------------------------------------------------------------------------
(?m) set flags for this block (with ^ and $
matching start and end of line) (case-
sensitive) (with . not matching \n)
(matching whitespace and # normally)
--------------------------------------------------------------------------------
^ the beginning of a "line"
--------------------------------------------------------------------------------
<hr> '<hr>'
--------------------------------------------------------------------------------
\r? '\r' (carriage return) (optional (matching
the most amount possible))
--------------------------------------------------------------------------------
\n '\n' (newline)
--------------------------------------------------------------------------------
Bitmap: 'Bitmap:'
--------------------------------------------------------------------------------
[\s\S]*? any character of: whitespace (\n, \r, \t,
\f, and " "), non-whitespace (all but \n,
\r, \t, \f, and " ") (0 or more times
(matching the least amount possible))
--------------------------------------------------------------------------------
(?= look ahead to see if there is:
--------------------------------------------------------------------------------
^ the beginning of a "line"
--------------------------------------------------------------------------------
<hr> '<hr>'
--------------------------------------------------------------------------------
$ before an optional \n, and the end of a
"line"
--------------------------------------------------------------------------------
| OR
--------------------------------------------------------------------------------
\Z the end of the string
--------------------------------------------------------------------------------
) end of look-ahead
```
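A quick Python sanity check of the pattern (the sample text here is made up):

```python
import re

# Two "Bitmap" blocks, each introduced by its own <hr> line
text = "<hr>\nBitmap: first\nmore lines\n<hr>\nBitmap: second\n"

pattern = re.compile(r"(?m)^<hr>\r?\nBitmap:[\s\S]*?(?=^<hr>$|\Z)")
matches = pattern.findall(text)

assert len(matches) == 2
assert matches[0] == "<hr>\nBitmap: first\nmore lines\n"
assert matches[1] == "<hr>\nBitmap: second\n"
```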
|
This works: `<hr>\nBitmap:.*\n(?:.*\n){1,2}`
See: <https://regex101.com/r/i64K0W/3>
The problem in your regex was the `*`, which is greedy.
| 15,663
|
60,907,340
|
Trying to open a file that exists with Visual Studio Code 1.43.2.
This is the py file:
```
with open('pi_digits.txt') as file_object:
contents = file_object.read()
print(contents)
```
This is the result:
```
PS C:\Users\Osori> & C:/Python/Python38-32/python.exe "c:/Users/Osori/Desktop/python_work/9_files and exceptions/file_reader.py"
Traceback (most recent call last):
File "c:/Users/Osori/Desktop/python_work/9_files and exceptions/file_reader.py", line 1, in <module>
with open('pi_digits.txt') as file_object:
FileNotFoundError: [Errno 2] No such file or directory: 'pi_digits.txt'
```

The file exists and the spelling is correct. If file\_reader.py is run with IDLE, it runs just fine.

|
2020/03/28
|
[
"https://Stackoverflow.com/questions/60907340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13143836/"
] |
Below is for BigQuery Standard SQL
```
#standardSQL
SELECT *,
COUNTIF(New_Session_Flag = 1) OVER(PARTITION BY Fullvisitorid ORDER BY Visitid) Rank_Session_Order
FROM `project.dataset.table`
```
|
The answer by Mikhail Berlyant using a conditional window count is correct and works. I am answering because I find that a window sum is even simpler (and possibly more efficient on a large dataset):
```
select
    t.*,
    sum(new_session_flag) over(partition by fullvisitorid order by visitid) rank_session_order
from mytable t
```
This works because the `new_session_flag` contains `0`s and `1`s only, so counting the `1`s is equivalent to summing all the values.
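The equivalence is easy to sanity-check; a tiny Python illustration of the 0/1 claim:

```python
# new_session_flag contains only 0s and 1s, so SUM(flag) equals COUNTIF(flag = 1)
flags = [1, 0, 0, 1, 1, 0, 1]
assert sum(flags) == flags.count(1)  # both are 4
```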
| 15,664
|
53,042,453
|
I'm currently working on a Mac with Mojave. I have successfully installed python 3.7 with brew
```
brew install python3
```
But I have tried several methods to install pip for python 3.7 (installing with get-pip.py, easy\_install pip, etc.), which had worked for installing pip in the python 2.7 folder, but not in the python 3.7.
Currently when I call
```
pip --version
```
I get
```
pip 18.1 from /Library/Python/2.7/site-packages/pip (python 2.7)
```
And pip3 seems not to exist.
How can I get pip3 installed in the python 3.7 folder? Thanks!
|
2018/10/29
|
[
"https://Stackoverflow.com/questions/53042453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10573864/"
] |
If you want to ensure pip installed for python 3.7, try something like this:
```
wget https://bootstrap.pypa.io/get-pip.py
sudo python3.7 get-pip.py
```
|
Maybe you're using an older version of Brew?
In that case run
```
brew postinstall python3
```
| 15,665
|
40,219,946
|
I have a large data set (millions of rows) in memory, in the form of **numpy arrays** and **dictionaries**.
Once this data is constructed I want to store them into files;
so, later I can load these files into memory quickly, without reconstructing this data from the scratch once again.
**np.save** and **np.load** do the job smoothly for numpy arrays.
But I am facing problems with dict objects.
See the sample below. d2 is the dictionary which was loaded from the file. **See Out[28]: it has been loaded into d2 as a numpy array, not as a dict.** So further dict operations such as `get` are not working.
Is there a way to load the data from the file as dict (instead of numpy array) ?
```
In [25]: d1={'key1':[5,10], 'key2':[50,100]}
In [26]: np.save("d1.npy", d1)
In [27]: d2=np.load("d1.npy")
In [28]: d2
Out[28]: array({'key2': [50, 100], 'key1': [5, 10]}, dtype=object)
In [30]: d1.get('key1') #original dict before saving into file
Out[30]: [5, 10]
In [31]: d2.get('key2') #dictionary loaded from the file
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-31-23e02e45bf22> in <module>()
----> 1 d2.get('key2')
AttributeError: 'numpy.ndarray' object has no attribute 'get'
```
|
2016/10/24
|
[
"https://Stackoverflow.com/questions/40219946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2101900/"
] |
It's a 0-dimensional object array, not a dict. Use `d2.item()` to retrieve the actual dict object first:
```
import numpy as np
d1={'key1':[5,10], 'key2':[50,100]}
np.save("d1.npy", d1)
d2=np.load("d1.npy")
print d1.get('key1')
print d2.item().get('key2')
```
result:
```
[5, 10]
[50, 100]
```
|
[pickle](https://docs.python.org/3/library/pickle.html) module can be used. Example code:
```
from __future__ import print_function  # __future__ imports must come first in the file
from six.moves import cPickle as pickle  # for performance
import numpy as np
def save_dict(di_, filename_):
with open(filename_, 'wb') as f:
pickle.dump(di_, f)
def load_dict(filename_):
with open(filename_, 'rb') as f:
ret_di = pickle.load(f)
return ret_di
if __name__ == '__main__':
g_data = {
'm':np.random.rand(4,4),
'n':np.random.rand(2,2,2)
}
save_dict(g_data, './data.pkl')
g_data2 = load_dict('./data.pkl')
print(g_data['m'] == g_data2['m'])
print(g_data['n'] == g_data2['n'])
```
You may also save multiple python objects in a single pickled file. Each `pickle.load` call will load a single object in that case.
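For the original question — saving a plain dict and getting a dict back — the same pickle round-trip works without numpy at all; a minimal stdlib-only sketch, using an in-memory buffer in place of a file:

```python
import io
import pickle

d1 = {'key1': [5, 10], 'key2': [50, 100]}

buf = io.BytesIO()      # stands in for a real file opened with 'wb'
pickle.dump(d1, buf)

buf.seek(0)             # rewind, as if reopening with 'rb'
d2 = pickle.load(buf)

assert d2.get('key1') == [5, 10]   # a real dict again, .get() works
```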
| 15,673
|
44,696,676
|
Once I have installed miniconda, I am permanently inside the root miniconda environment eg:
```
luc@montblanc:~$ conda info --envs
# conda environments:
#
bunnies /home/luc/miniconda3/envs/bunnies
expose /home/luc/miniconda3/envs/expose
testano /home/luc/miniconda3/envs/testano
testcondaenv /home/luc/miniconda3/envs/testcondaenv
root * /home/luc/miniconda3
```
Which results on the use of this environment python3 executable:
```
luc@montblanc:~$ which python3
/home/luc/miniconda3/bin/python3
```
How can I get out of this root environment without actually uninstalling python. E.g. I want
```
luc@montblanc:~$ which python3
/usr/bin/python3
```
and refer to the miniconda distribution of python explicitly (using full path `/home/luc/miniconda3/bin/python3`) when I need it.
I don't want to achieve any final goal doing this, I just want to understand what is happening and how it works.
|
2017/06/22
|
[
"https://Stackoverflow.com/questions/44696676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4803860/"
] |
Look at your .bashrc file. Miniconda adds its path there and changes the default; find this file and then change or add the path you want, or remove the anaconda/miniconda path.
In your .bashrc (probably ~/.bashrc) you will see something like:
```
# added by Miniconda3 4.3.14 installer
export PATH="/path/to/miniconda3/bin:$PATH"
```
Add your path after this line, change this path or, temporarily, use `export` on the command line.
**Obs.**
* After this, you will probably have to call miniconda by its full path.
* Restart your session after changes to .bashrc.
|
Here is a way to do this on the fly without editing one's init files:
```
(base) ➜ ~ which python
/home/xxx/anaconda3/bin/python
(base) ➜ ~ echo $PATH
/home/xxx/anaconda3/bin:/home/xxx/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
(base) ➜ ~ export PATH=$(echo ${PATH} | awk -v RS=: -v ORS=: '/conda/ {next} {print}' | sed 's/:*$//')
(base) ➜ ~ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
(base) ➜ ~ which python
/usr/bin/python
(base) ➜ ~
```
| 15,674
|
2,032,677
|
How does this import work, what file does it use?
```
import _functools
```
In python 2.5:
```
import _functools
print _functools.__file__
```
Gives:
```
Traceback (most recent call last):
File "D:\zjm_code\mysite\zjmbooks\a.py", line 5, in <module>
print _functools.__file__
AttributeError: 'module' object has no attribute '__file__'
```
How can I get the meaning of partial (from \_functools import partial) if I can't read C code?
|
2010/01/09
|
[
"https://Stackoverflow.com/questions/2032677",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/234322/"
] |
C-coded modules can be built-in (lacking `__file__`) or live in a `.so` or `.pyd` dynamic library (which their `__file__` will indicate) -- that's an implementation detail that you should not care about.
If you want to understand how a Python-callable, C-coded function works by studying code, learning to **read** C is generally best (far less hard than actually productively *coding* in C;-). However, often you'll find (suggestive, non-authoritative) "sample Python implementations" of C-coded functionality, and you can study *those*.
A particularly fruitful repository of Python-coded equivalents to Python standard library functionality that's normally coded in C is the [pypy](http://codespeak.net/pypy/dist/pypy/doc/) project (which does Python implementations coded in Python) -- its sources are browseable [here](http://codespeak.net/svn/pypy/trunk/) and of course you can download and peruse them on your machine.
In particular, this is pypy's [\_functools.py](http://codespeak.net/svn/pypy/trunk/pypy/lib/_functools.py) implementation:
```
""" Supplies the internal functions for functools.py in the standard library """
class partial:
"""
partial(func, *args, **keywords) - new function with partial application
of the given arguments and keywords.
"""
__slots__ = ['func', 'args', 'keywords']
def __init__(self, func, *args, **keywords):
if not callable(func):
raise TypeError("the first argument must be callable")
self.func = func
self.args = args
self.keywords = keywords
def __call__(self, *fargs, **fkeywords):
newkeywords = self.keywords.copy()
newkeywords.update(fkeywords)
return self.func(*(self.args + fargs), **newkeywords)
```
Pretty trivial to read and understand, I hope!
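To see the semantics in action, here is a short usage example of the real `functools.partial` (behaviorally equivalent to the pypy class above):

```python
from functools import partial

def power(base, exp):
    return base ** exp

square = partial(power, exp=2)   # pre-bind the keyword argument
assert square(5) == 25           # power(5, exp=2)

two_to_the = partial(power, 2)   # positional args are prepended
assert two_to_the(10) == 1024    # power(2, 10)
```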
|
Please be more clear when asking questions next time. I assume you want this:
```
>>> import _functools
>>> _functools.__file__
'/usr/lib/python2.6/lib-dynload/_functools.so'
```
| 15,677
|
9,916,367
|
I have a dll named ExpensiveAndLargeObfuscatedFoo.dll.
Lets says it defines a type named ExpensiveAndLargeObfuscatedFooSubClass.
It's been compiled for .NET.
Are there any tools (free, paid, whatever) that will generate c# or vb class files that will do nothing but wrap around *everything* defined in this expensive dll? That way I can add functionality, fix bugs (that CorpFUBAR won't fix), add logging, etc?
Literally, I want output that looks like this
```
namespace easytoread {
public class SubClass {
private ExpensiveAndLargeObfuscatedFoo.SubClass _originalSubClass;
public SubClass() {
this._originalSubClass = new ExpensiveAndLargeObfuscatedFoo.SubClass ();
}
public string StupidBuggyMethod(string param1,int param2) {
return _originalSubClass.StupidBuggyMethod(param1, param2);
}
}
}
```
It would have to handle custom return types as well as primitives
```
namespace easytoread {
public class SubFooClass {
private ExpensiveAndLargeObfuscatedFoo.SubFooClass _originalSubFooClass;
public SubFooClass() {
this._originalSubFooClass= new ExpensiveAndLargeObfuscatedFoo.SubFooClass ();
}
private SubFooClass(ExpensiveAndLargeObfuscatedFoo.SubFooClass orig) {
this._originalSubFooClass = orig;
}
public SubFooClass StupidBuggyMethod(string param1,int param2) {
return new SubFooClass(_originalSubFooClass.StupidBuggyMethod(param1, param2));
}
}
}
```
And so on and so forth for every single defined class.
Basically, poor mans dynamic proxy? (yay, Castle Project is awesome!)
We'd also like to rename some of our wrapper classes, but the tool doesn't need to do that.
Without renaming, we'd be able to replace the old assembly with our new generated one, change using statements and continue on like nothing happened (except the bugs were fixed!)
It just needs to examine the dll and do code generation. the generated code can even be VB.NET, or ironpython, or anything CLR.
This is a slippery slope and I'm not happy that I ended up here, but this seems to be the way to go. I looked at the Castle Project, but unless I'm mistaken that won't work for two reasons: 1) I can't rename anything (don't ask), 2) none of the assemblies methods are declared virtual or even overridable. Even if they were, there's hundreds of types I'd have to override manually, which doesn't sound fun.
|
2012/03/28
|
[
"https://Stackoverflow.com/questions/9916367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/346272/"
] |
[ReSharper](http://www.jetbrains.com/resharper/) can do much of the work for you.
You will need to declare a basic class:
```
namespace easytoread {
public class SubClass {
private ExpensiveAndLargeObfuscatedFoo.SubClass _originalSubClass;
}
}
```
Then, choose ReSharper > Edit > Generate Code (Alt+Ins), select "Delegating Members", select all, and let it generate the code.
It won't wrap return values with custom classes (it will return the original type), so that would still have to be added manually.
|
1. If you have access to the source code, rename and fix in the source code.
2. If you don't have access (and you can do it legally), use a tool like Reflector or [dotPeek](http://www.jetbrains.com/decompiler/) to get the source code, and then go to the first point.
| 15,681
|
40,254,007
|
I am looking for a way to store multiple values for each key (just like we can in Python using a dictionary) using Go.
Is there a way this can be achieved in Go?
|
2016/10/26
|
[
"https://Stackoverflow.com/questions/40254007",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3902984/"
] |
Based on your response in comments I would suggest something like the following using a struct (though if you are only interested in a single value like `name` for each item in your slice then you could just use a `map[int][]string{}`).
```
type Thing struct {
name string
age int
}
myMap := map[int][]Thing{}
```
If you want to add things then you just do...
```
myMap[100] = append(myMap[100], Thing{"sberry", 37})
```
Or if you want to create it in place:
```
myMap := map[int][]Thing{
100: []Thing{Thing{"sberry", 37}},
2: []Thing{Thing{"johndoe", 22}},
}
```
**EDIT:** Adding a "nameIn" function based on comments as demo:
```
func nameIn(things []Thing, name string) bool {
for _, thing := range things {
if thing.name == name {
return true
}
}
return false
}
if nameIn(myMap[100], "John") {
...
```
If the slices are really big and speed is concern then you could keep a reverse lookup map like `map[string]int` where an entry would be `John: 100`, but you will need to most likely use some user defined functions to do your map modifications so it can change the reverse lookup as well. This also limits you by requiring unique names.
Most likely the `nameIn` would work just fine.
|
In go, the key/value collection is called a `map`. You create it with `myMap := map[keyType]valType{}`
Usually something like `mapA := map[string]int{}`. If you want to store multiple values per key, perhaps something like:
`mapB := map[string][]string{}` where each element is itself a slice of strings. You can then add members with something like:
`mapB["foo"] = append(mapB["foo"], "fizzbuzz")`
For a great read see [Go maps in action](https://blog.golang.org/go-maps-in-action)
| 15,684
|
61,496,129
|
I need to run selenium-side-runner in Docker. My Dockerfile installs Google Chrome and ChromeDriver. But when the code is executed, the error is as follows:
```
WebDriverError: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
at Object.throwDecodedError (../../usr/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/error.js:550:15)
at parseHttpResponse (../../usr/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/http.js:560:13)
at Executor.execute (../../usr/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/http.js:486:26)
```
this is my dockerfile
```
FROM python:3.6-stretch
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >>
/etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS
chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt-get install -y nodejs
RUN apt-get install -y npm
RUN npm install -g selenium-side-runner
WORKDIR /app
ENTRYPOINT ["/entrypoint.sh"]
```
this is my code:
```
for ind, file in enumerate(test_file_list):
file_path = os.path.join(self.save_path, file)
start_time = timezone.now()
result = subprocess.run(
['selenium-side-runner', '-c', "goog:chromeOptions.args=[--headless,--nogpu]
browserName=chrome",file_path])
end_time = timezone.now()
result_code = result.returncode
```
|
2020/04/29
|
[
"https://Stackoverflow.com/questions/61496129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13054579/"
] |
A hack
======
Instead of installing a browser by hand, I would recommend using already available images like <https://github.com/SeleniumHQ/docker-selenium>.
A solution (recommended)
========================
From an architectural point of view, each Docker container should have only one purpose. In your case, the container is responsible for running a browser and tests. That's why I propose to split the solution into two containers:
* the first container with `selenium-side-runner` and `.side` files that will run tests
* the second container runs a browser
Based on [the docs](https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/#running-on-selenium-grid), `selenium-side-runner` can work with a Selenium Grid.
So, the solution will look like
1. Prepare `Dockerfile` that installs `selenium-side-runner`, adds `.side` files, and configures an `ENTRYPOINT`. Here you can also set a default URL to Selenium Grid like `selenium-side-runner -w 10 --server http://selenium-browsers:4444/wd/hub` where `selenium-browsers` will be a name of the container with a browser or Selenium Grid.
2. Build your custom Docker image
3. Create Docker network with `docker network create selenium-tests`
4. Run a browser or Selenium Grid
* Option 1: a browser. `docker run -id --shm-size=2g --network selenium-tests --name selenium-browsers selenium/standalone-chrome:3.141.59-2020040` based on [SeleniumHQ/docker-selenium](https://github.com/SeleniumHQ/docker-selenium#running-the-images)
* Option 2: Selenium Grid. [SeleniumHQ/docker-selenium](https://github.com/SeleniumHQ/docker-selenium) provides instructions how to do this. Also, you may look at [Zalenium](https://opensource.zalando.com/zalenium/) or [Selenoid](https://aerokube.com/selenoid/latest/) that provide additional features as video recordings, test execution preview, etc.
5. Run your container with `docker run -it --network selenium-tests --name my-tests <your custom image>`
As both containers are in the same custom network `selenium-tests`, both containers can communicate through the container names. So, `my-tests` container can send the requests to `selenium-browsers` container (see #1 and `--server` option).
If needed, you can create [docker-compose.yaml](https://docs.docker.com/compose/compose-file/) file later to simplify the usage of the solution.
|
You could use [this project](https://github.com/nixel2007/docker-selenium-side-runner), put your .side files in side directory and run `docker-composer up`
| 15,685
|
40,314,960
|
I have a csv file with columns id, name, address, phone. I want to read each row such that I store the data in two variables, `key` and `data`, where data contains name, address and phone. I want to do this because I want to append another column country in each row in a new csv. Can anyone help me with the code in python. I tried this but it didn't work:
```
dict = {}
reader = csv.reader(open('info.csv', 'r'))
for row in reader:
key, *data = row
dict[key] = ','.join(data)
```
|
2016/10/29
|
[
"https://Stackoverflow.com/questions/40314960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2433565/"
] |
`key, *data = row` is Python 3 syntax. For Python 2 you can do this:
```
key, data = row[0], row[1:]
```
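A quick illustration of that unpacking on a hypothetical row:

```python
# A made-up CSV row: id, name, address, phone
row = ["id1", "Alice", "12 Main St", "555-0100"]

key, data = row[0], row[1:]   # Python 2 compatible slicing

assert key == "id1"
assert data == ["Alice", "12 Main St", "555-0100"]
assert ','.join(data) == "Alice,12 Main St,555-0100"
```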
|
You can try the code below. I have put in two options: one makes a list for each row, and the other is comma-separated. Based on the usage you can use either one.
```
import csv
dict = {}
reader = csv.reader(open('info.csv', 'r'))
for row in reader:
data = []
print row
key,data = row[0],row[1:]
dict[key] = data
# dict[key] = ','.join(data)
print dict
```
| 15,686
|
38,290,965
|
I am a beginner in Python.
I use Python 2.7 with ElementTree to parse XML files.
I have a big XML file (~700 MB), which contains multiple root instances, for example:
```
<?xml version="1.0" ?> <foo> <bar> <sometag> Mehdi </sometag> <someothertag> blahblahblah </someothertag> . . . </bar> </foo>
<?xml version="1.0" ?> <foo> <bar> <sometag> Hamidi </sometag> <someothertag> blahblahblah </someothertag> . . . </bar> </foo>
...
...
```
Each XML instance is placed on one line.
I need to parse such a file in Python. I used ElementTree this way:
```
import xml.etree.ElementTree as ET
tree = ET.parse('filename.xml')
root = tree.getroot()
```
but it seems it can only access the first root XML instance line.
What is the proper way to parse all XML instances in this type of file?
|
2016/07/10
|
[
"https://Stackoverflow.com/questions/38290965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4811059/"
] |
You can try [this](https://jsfiddle.net/loginshivam/jq4c4wg7/2/)
```
<input value="0" id="id">
```
give a id to input field
on button click call a function
```
<button onclick="myFunction()">Add +10</button>
<script>
function myFunction() {
document.getElementById("id").value = parseInt(document.getElementById("id").value) +10;
}
</script>
```
|
You can use a click counter like [this](https://stackoverflow.com/questions/22402777/html-javascript-button-click-counter)
and edit it replacing `+= 1` with `+= 10`.
Here my code,
```js
var input = document.getElementById("valor"),
button = document.getElementById("buttonvalue");
var clicks = 0;
button.addEventListener("click", function()
{
clicks += 10;
input.value = clicks;
});
```
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width">
<title>Script</title>
</head>
<body>
<input id="valor" value="0">
<button id="buttonvalue">10</button>
</body>
</html>
```
| 15,687
|
61,679,525
|
**Is it possible to have a variable that is defined in a python file be passed through to a html file in the same directory and used in a 'script' tag?**
Synopsis:
I'm producing a Flask website that pulls data from a database and uses it to create a line chart in Chart.js. Flask uses Jinja to manage templates. The data in the database has been normalised so that it can be read by the chart script (label is in a list, data as a tuple). So far I have produced routes so that each page can be accessed and passed the value into the return\_template parameter as shown here:
```
@app.route('/room2')
def room2():
return render_template('room2.html', title = 'Room2', time = newLabel, count = newData )
```
with the html for the room being called along with the title which is accessed here:
```
{% if title %}
<title>Website title - {{ title }}</title>
{% else%}
<title>Website</title>
{% endif %}
```
The chart is being generated in the HTML file, and I want to take the data (time and count) and use it in the script the same way the title is used. However, the script for Chart.js doesn't take {{ }} (which is essentially a print function anyway). So far I've managed to get the data I want INTO the required HTML file, which is proven by the data being printed out onto the page itself, but I'm not sure how to translate it into a variable to be used by Chart.js.
Any suggestions?
(NOTE: using routes like this may be the wrong starting point but it's one that makes logical sense and hasn't produced any errors beside logical)
Files:
* data.py: Takes information from database and converts it into two variables;
```
label = ['09:00', '10:00', '11:00', '12:00', '13:00', '14:00', '15:00']
data = [4, 7, 2, 3, 9, 9, 5]
```
* routes.py: holds the routes to all the pages,
* layout.html: holds overarching html code for website,
* room.html: holds graph, extends from layout
|
2020/05/08
|
[
"https://Stackoverflow.com/questions/61679525",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13486528/"
] |
For beginners who try to put Flask and Chart.js together, a common approach seems to be to write Jinja code which uses loops to output JavaScript, or manually uses Jinja expressions within the JS. This can quickly become a maintenance nightmare.
Here's my approach which lets you define the data you want charted in Python, place a canvas in your template with some minimal JS, then dynamically update the chart using the Javascript Fetch API.
You can clone the repo [flask-chartjs](https://github.com/vulcan25/flask-chartjs/tree/0.0.1).
You can have one route which renders the page containing the chart:
```
@app.route('/')
def index():
return render_template('index.html', CHART_ENDPOINT = url_for('data'))
```
`CHART_ENDPOINT` in this case will be `/data`, which corresponds to another route which returns the JSON. I also have a helper function which converts epoch times to an ISO 8601 compatible format, which works with moment.
```
import datetime as DT
def conv(epoch):
return DT.datetime.utcfromtimestamp(epoch).isoformat()
@app.route('/data')
def data():
d = {'datasets':
[
{'title': 'From Dict',
'data': [ {'x': conv(1588745371), 'y': 400},
{'x': conv(1588845371), 'y': 500},
{'x': conv(1588946171), 'y': 800} ]
},
]
}
return jsonify(d)
```
Now in the template you can place the chart's canvas element with `data-endpoint` attribute:
```
<canvas id="canvas" data-endpoint='{{CHART_ENDPOINT}}'></canvas>
```
Then I've implemented two JS functions which in that same template allow you to create the chart, and load the data from the provided endpoint:
```
<script type='text/javascript'>
var ctx = document.getElementById('canvas');
myChart = create_chart(ctx);
window.onload = function () {
load_data(myChart)
};
</script>
```
In the [`create_chart` function](https://github.com/vulcan25/flask-chartjs/blob/fe54a7278808ceca635b5ad5ab8801ea09799c32/static/flask_chart.js#L47-L58) the endpoint is obtained from the `data-endpoint` attrib, and added to the `config` object before it is assigned to that chart [(credit)](https://stackoverflow.com/a/44840756/2052575):
>
>
> ```
> config.endpoint = ctx.dataset.endpoint;
> return new Chart(ctx.getContext('2d'), config);
>
> ```
>
>
The `load_data` function then accesses the endpoint from `chart.config.endpoint` meaning it always grabs the correct data for the provided chart.
---
You can also set the [time units](https://www.chartjs.org/docs/latest/axes/cartesian/time.html#time-units) when creating the chart:
```
myChart = create_chart(ctx, 'hour') # defaults to 'day'
```
I've found this usually needs to be tweaked depending on your data range.
It would be trivial to modify that code to obtain this in the same manner as the endpoint, within the `create_chart` function. Something like `config.options.scales.xAxes[0].time.unit = ctx.datasets.unit` if the attribute was `data-unit`. This could also be done for other variables.
---
You can also pass a string from the frontend when loading data (say `dynamicChart` is another chart, created with the method above):
```
load_data(dynamicChart, 'query_string')
```
This would make `'query_string'` available in the flask function as `request.args.get('q')`
This is useful if you want to implement (for example) a text input field which sends the string to the backend, so the backend can process it somehow and return a customised dataset which is rendered on the chart. The `/dynamic` route in the repo kind of demonstrates this.
Here's what it looks like rendered:
[](https://i.stack.imgur.com/c6ABc.png)
As you can see it's possible then to have multiple charts on one page also.
|
You can use the same Jinja templating feature to access your variable in JS too.
Maybe if you want strings you should enclose them like this: `"{{variable}}"`
| 15,689
|
19,351,060
|
What is the most pythonic way to iterate a dictionary and conditionally execute a method on values.
e.g.
```
if dict_key is "some_key":
method_1(dict_keys_value)
else
method_2(dict_keys_value)
```
It can't be a dict comprehension because I am not trying to create a dictionary from the result. It is just iterating a dictionary and executing some method on values of the dictionary.
|
2013/10/13
|
[
"https://Stackoverflow.com/questions/19351060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/462455/"
] |
What you have is perfectly fine, and you can iterate with something like:
```
for key, value in my_dict.items(): # use `iteritems()` in Python 2.x
if key == "some_key": # use `==`, not `is`
method_1(value)
else:
method_2(value)
```
See: [`dict.items()`](http://docs.python.org/3/library/stdtypes.html#dict.items)
---
For your enlightenment, it is possible to condense that into two lines:
```
for key, value in my_dict.items():
(method_1 if key == "some_key" else method_2)(value)
```
but I don't think this gains you anything... it just makes it more confusing. I would *by far* prefer the first approach.
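A quick check that the condensed dispatch behaves as intended (`method_1`/`method_2` here are stand-ins that just record their calls):

```python
calls = []

def method_1(value):
    calls.append(("method_1", value))

def method_2(value):
    calls.append(("method_2", value))

my_dict = {"some_key": 1, "other_key": 2}
for key, value in my_dict.items():
    (method_1 if key == "some_key" else method_2)(value)

assert ("method_1", 1) in calls   # "some_key" went to method_1
assert ("method_2", 2) in calls   # everything else went to method_2
```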
|
Why not create a dict with lambda functions?
```
methods = {
'method1': lambda val: val+1,
'method2': lambda val: val+2,
}
for key, val in dict.iteritems():
methods[key](val)
```
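Note that `iteritems()` is Python 2 only; a Python 3 sketch of the same idea (the method names and sample data are illustrative):

```python
methods = {
    'method1': lambda val: val + 1,
    'method2': lambda val: val + 2,
}
data = {'method1': 10, 'method2': 20}

# items() replaces iteritems() in Python 3
results = {key: methods[key](val) for key, val in data.items()}
assert results == {'method1': 11, 'method2': 22}
```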
| 15,690
|
25,884,534
|
Can anyone tell me how to define a JSON data type for a particular field in models?
I have tried this
```
from django.db import models
import jsonfield
class Test(models.Model):
data = jsonfield.JSONField()
```
but when I say `python manage.py sqlall xyz`
it's taking the data field as text:
```
BEGIN;
CREATE TABLE "xyz_test" (
"data" text NOT NULL
)
;
```
I still tried to insert JSON data into that field but it's giving this error:
```
ERROR: value too long for type character varying(15)
```
someone please help. Thanks in Advance.
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25884534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3323440/"
] |
In the background [JSONField actually is a TextField](https://github.com/dmkoch/django-jsonfield/blob/master/jsonfield/fields.py#L154), so that output from sqlall is not a problem, that's the expected behavior.
Further, I recreated your model and it worked just fine, both when entering the value as a string and as a python dictionary, no character limitation whatsoever. So my best guess it that the issue is a completely unrelated field that *does* have a 15 chars limit.
|
A JSONField data type has been added to Django for certain databases to improve handling of JSON. More info is available in this post: [Django 1.9 - JSONField in Models](https://stackoverflow.com/questions/37007109/django-1-9-jsonfield-in-models)
| 15,693
|
38,043,683
|
I'm writing a git pre-commit hook, but it requires user input and hooks don't run in an interactive terminal. With Python I could do something like this to get access to user input:
```
#!/usr/bin/python
import sys
# This is required because git hooks are run in non-interactive
# mode. You aren't technically supposed to have access to stdin.
# This hack works on MaxOS and Linux. Mileage may vary on Windows.
sys.stdin = open('/dev/tty')
result = input("Gimme some input: ")
```
What is the appropriate way to do this in Crystal?
|
2016/06/26
|
[
"https://Stackoverflow.com/questions/38043683",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13973/"
] |
You may try:
```
STDIN.reopen("/dev/tty")
```
|
This seems to work:
```
file = File.open("/dev/tty")
line = file.gets
p line
```
You can't reassign STDIN and we don't have a reassignable global variable for it. I don't know much about this, maybe reopen and dup can be used for that. But otherwise you can use that `file` instead of `STDIN` in your program, I guess.
| 15,698
|
50,982,990
|
I'm designing an Application where username will be an `AutoIntegerField` and unique.
Here's my model.
```
class ModelA(models.Model):
    username = models.BigAutoField(primary_key=True, db_index=False)
    user_id = models.UUIDField(default=uuid.uuid4, unique=True,
                               editable=False)
```
Initially, I wanted user\_id to be a `primary_key`, but I can't create an AutoField which is not `primary_key`. As a result, I'd to let go off `user_id` as primary\_key and assigned `username` as the primary key.
Now, when I run the migrations, it throws an error saying,
```
django.db.utils.ProgrammingError: operator class "varchar_pattern_ops" does not accept data type bigint
```
**Complete StackTrace:-**
```
Applying users.0005_auto_20180626_0914...Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
psycopg2.ProgrammingError: operator class "varchar_pattern_ops" does not accept data type bigint
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
utility.execute()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/__init__.py", line 356, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/core/management/commands/migrate.py", line 204, in handle
fake_initial=fake_initial,
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/migrations/executor.py", line 115, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/migrations/executor.py", line 145, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/migrations/executor.py", line 244, in apply_migration
state = migration.apply(state, schema_editor)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/migrations/migration.py", line 129, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/migrations/operations/fields.py", line 216, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/schema.py", line 515, in alter_field
old_db_params, new_db_params, strict)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/postgresql/schema.py", line 112, in _alter_field
new_db_params, strict,
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/schema.py", line 684, in _alter_field
params,
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/base/schema.py", line 120, in execute
cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/utils.py", line 80, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: operator class "varchar_pattern_ops" does not accept data type bigint
```
|
2018/06/22
|
[
"https://Stackoverflow.com/questions/50982990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1162512/"
] |
I think the issue is that you still have an old index on your `username` field that clashes with the new type. The `db_index=False` argument has no effect because `primary_key=True` always generates an index.
You might be able to solve this by removing `primary_key=True`, creating a migration, and then re-adding it and creating another migration. Alternatively, you can do it by hand by dropping and re-creating the index. If you want to go down that path, consult [this answer](https://stackoverflow.com/a/20284951/211960).
If your project is still in an early stage and you don't have any valuable data, you can take the easy way out and drop any tables related to your `users` app or even the complete database, delete all migrations in the `users` app and create them from scratch.
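If you go the manual route, the statement itself is simple; note that the index name below is hypothetical, since Django generates index names with a hash suffix. Look up the real name first (e.g. with `\di` in `psql`), as the linked answer describes:

```
-- Hypothetical index name: find the actual one in your database first.
DROP INDEX IF EXISTS users_modela_username_like;
```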
|
In my case I was connecting Django to Postgres on localhost (via pgAdmin).
I first deleted all the migrations except the default one, and also cleared them from `__pycache__`,
then just ran `python manage.py makemigrations` and `python manage.py migrate` in the terminal.
| 15,701
|
716,386
|
I was trying to hack up a tool to visualize shaders for my game and I figured I would try using Python and Cocoa. I have run into a brick wall of sorts, though. Maybe it's my somewhat poor understanding of Objective-C, but I cannot seem to get this code for a view I was trying to write working:
```
from objc import YES, NO, IBAction, IBOutlet
from Foundation import *
from AppKit import *
import gv
class SceneView(NSOpenGLView):
    def __init__(self):
        NSOpenGLView.__init__(self)
        self.renderer = None

    def doinit(self):
        self.renderer = gv.CoreRenderer()

    def initWithFrame_(self, frame):
        self = super(SceneView, self).initWithFrame_(frame)
        if self:
            self.doinit()
            print self.__dict__
        return self

    def drawRect_(self, rect):
        clearColor = [0.0, 0.0, 0.0, 0.0]
        print self.__dict__
        self.renderer.clear(CF_Target | CF_ZBuffer, clearColor)
```
It outputs this when executed:
```
{'renderer': <gv.CoreRenderer; proxy of <Swig Object of type 'GV::CoreRenderer *' at 0x202c7d0> >}
{}
2009-04-03 19:13:30.941 geom-view-edit[50154:10b] An exception has occured:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/PyObjCTools/AppHelper.py", line 235, in runEventLoop
File "/mnt/gilead/amcharg/projects/geom-view-edit/build/Debug/geom-view-edit.app/Contents/Resources/SceneView.py", line 37, in drawRect_
self.renderer.clear(CF_Target|CF_ZBuffer,clearColor)
AttributeError: 'SceneView' object has no attribute 'renderer'
```
It seems to be losing my renderer variable, which is not that surprising considering how funky the initWithFrame\_ code is, but this was something Xcode seemed to write, which I suppose makes sense since Objective-C has the init-separate-from-alloc idiom. It is still strange seeing it in Python, however.
Is there any way to salvage this, or should I take it out behind the code shed, shoot it, and use Qt or wxPython? I considered using Objective-C, but I want to test out these nifty SWIG bindings I just compiled =)
|
2009/04/04
|
[
"https://Stackoverflow.com/questions/716386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84805/"
] |
Depending on what's happening elsewhere in your app, your instance might actually be getting copied.
In this case, implement the `copyWithZone` method to ensure that the new copy gets the renderer as well. (Caveat, while I am a Python developer, and an Objective-C cocoa developer, I haven't used PyObjC myself, so I can't say for certain if you should be implementing `copyWithZone` or `__copy__`).
In fact, shoving a `copyWithZone` method into the class with a print will allow you to tell if the method is being called and if that's the reason renderer appears to have vanished.
---
**Edit**: Base on your feedback, I've pasted your code into a blank xcode python project (just substituting something else for gv.CoreRenderer, since I don't have that), and it works fine with some minor modifications. How are you instantiating your SceneView?
In my case I:
* Created a blank xcode project using the Cocoa-Python template
* Created a new file called `SceneView.py`. I pasted in your code.
* Opened the `MainMenu.xib` file, and dragged an NSOpenGLView box onto the window.
* With the NSOpenGLView box selected, I went to the attributes inspector and changed the class of the box to `SceneView`
* Back in xcode, I added `import SceneView` in the imports in `main.py` so that the class would be available when the xib file is loaded
* I implemented an `awakeFromNib` method in `SceneView.py` to handle setting up `self.renderer`. Note that `__init__`, and `initWithFrame` are not called for nib objects during your program execution... they are considered "serialized" into the nib file, and therefore already instantiated. I'm glossing over some details, but this is why awakeFromNib exists.
* Everything worked on run. The `__dict__` had appropriate values in the `drawRect_` call, and such.
Here's the awakeFromNib function:
```
def awakeFromNib(self):
    print "Awake from nib"
    self.renderer = gv.CoreRenderer()
```
So, I'm guessing there are just some crossed wires somewhere in how your object is being instantiated and/or added to the view. Are you using Interface Builder for your object, or are you manually creating it and adding it to a view later? I'm curious to see that you are getting loggin outputs from initWithFrame, which is why I'm asking how you are creating the SceneView.
|
Even if they weren't serialized, Python's `__init__` constructor isn't supported by the Objective-C bridge. So one needs to overload e.g. `initWithFrame:` for self-created views.
| 15,702
|
23,894,545
|
I would like to use ArangoDB in Django, but I don't know which of the following options is better: using the [ArangoDB Python driver](http://blog.klymyshyn.com/2013/02/arangodb-driver-for-python.html) or building a new API with Foxx. I think that the ArangoDB Python driver is not based on Foxx and I don't know the pros and cons of building a new API from scratch, even if it is made easier by Foxx. In addition, I'm afraid that using javascript in the interface between Foxx and the backend could make things slower. Would it be faster if I used [Guacamole ODM](http://github.com/triAGENS/guacamole) together with Ruby on Rails?
|
2014/05/27
|
[
"https://Stackoverflow.com/questions/23894545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1960092/"
] |
The better option for your case is to use the ArangoDB Python driver.
Here are a couple of reasons:
* easy-to-start - just install driver and move on with development
* some similarity to Django ORM API
* have some documentation
* all your business logic will be in one place and in Python, which should be a great advantage
And here is why Foxx is not the best option for your case:
* you have to build your own API which means:
+ bunch of code in JavaScript
+ some documentation to describe API
* additional logic level (in Foxx and in the Django project), which increases tangling in your project
* it probably won't increase performance, because you still retrieve your data over HTTP
Foxx is a good option when you build a single-page app using ArangoDB as the data layer. Foxx will probably also be great for building high-level APIs, with Foxx as a preprocessed/aggregated data provider.
|
I made a python ArangoDB driver (<https://github.com/saeschdivara/ArangoPy>)
and I created, on top of that, kind of a bridge for Django (<https://github.com/saeschdivara/ArangoDjango>). So you can use kind of an ORM for ArangoDB and still use Django REST framework to create your API.
| 15,703
|
4,205,697
|
My goal is to use to make it easy for non-programmers to execute a Python script with fairly complex options, on a single local machine that I have access to. I'd like to use the browser (specifically Safari on OS X) as a poor man's GUI. A short script would process the form data and then send it on to the main program(s).
I have some basic examples of Python scripts run using the built-in Apache server, by clicking submit on a form whose html is this:

e.g. [here](http://telliott99.blogspot.com/2010/11/simple-server-running-python-script-on_17.html). What I want to do now is do it *without* the server, just getting the form to invoke the script in the same directory. Do I have to learn javascript or ...? I'd be grateful for any leads you have. Thanks.
|
2010/11/17
|
[
"https://Stackoverflow.com/questions/4205697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/215679/"
] |
Use an [AOP framework](http://www.postsharp.com) for this, to inject code when a certain method is hit. You can also do this native via the .NET framework with a [ContextBoundObject](http://msdn.microsoft.com/en-us/library/system.contextboundobject.aspx); which is probably what they've used in the framework.
|
You are thinking about this wrong. It's not that the attribute has a changing value; it's that its interpretation by the code that uses the attribute is based on runtime state. In the example you gave, it is probably the case that the code which checks for that attribute also does the checking of Thread.CurrentPrincipal and takes action based on that. The attribute does nothing but store a few constant fields. It's best if you do the same. Remember also that putting an attribute on a method (or anything else) doesn't magically do anything. You have to have code that by reflection grabs those attributes and interprets the data in them in some useful and relevant way. So it's up to your interpreter code, not the attribute.
| 15,704
|
62,528,247
|
I've updated the initial script with a modified version of Bryan-Oakley's [answer](https://stackoverflow.com/a/6789351/10364425). It now has 2 canvas, 1 with the draggable rectangle, and 1 with the plot. I would like the rectangle to be dragged along the x-axis on the plot if that is possible?
```
import tkinter as tk # python 3
# import Tkinter as tk # python 2
import numpy as np
from matplotlib.backends.backend_tkagg import (
FigureCanvasTkAgg, NavigationToolbar2Tk)
# Implement the default Matplotlib key bindings.
from matplotlib.backend_bases import key_press_handler
from matplotlib.figure import Figure
class Example(tk.Frame):
    """Illustrate how to drag items on a Tkinter canvas"""

    def __init__(self, parent):
        tk.Frame.__init__(self, parent)
        fig = Figure(figsize=(5, 4), dpi=100)
        t = np.arange(0, 3, .01)
        fig.add_subplot(111).plot(t, 2 * np.sin(2 * np.pi * t))

        # create a canvas
        self.canvas = tk.Canvas(width=200, height=300)
        self.canvas.pack(fill="both", expand=True)

        canvas = FigureCanvasTkAgg(fig, master=root)  # A tk.DrawingArea.
        canvas.draw()
        canvas.get_tk_widget().pack(side=tk.TOP, fill=tk.BOTH, expand=1)

        # this data is used to keep track of an
        # item being dragged
        self._drag_data = {"x": 0, "y": 0, "item": None}

        # create a movable object
        self.create_token(100, 150, "black")

        # add bindings for clicking, dragging and releasing over
        # any object with the "token" tag
        self.canvas.tag_bind("token", "<ButtonPress-1>", self.drag_start)
        self.canvas.tag_bind("token", "<ButtonRelease-1>", self.drag_stop)
        self.canvas.tag_bind("token", "<B1-Motion>", self.drag)

    def create_token(self, x, y, color):
        """Create a token at the given coordinate in the given color"""
        self.canvas.create_rectangle(
            x - 5,
            y - 100,
            x + 5,
            y + 100,
            outline=color,
            fill=color,
            tags=("token",),
        )

    def drag_start(self, event):
        """Beginning drag of an object"""
        # record the item and its location
        self._drag_data["item"] = self.canvas.find_closest(event.x, event.y)[0]
        self._drag_data["x"] = event.x
        self._drag_data["y"] = event.y

    def drag_stop(self, event):
        """End drag of an object"""
        # reset the drag information
        self._drag_data["item"] = None
        self._drag_data["x"] = 0
        self._drag_data["y"] = 0

    def drag(self, event):
        """Handle dragging of an object"""
        # compute how much the mouse has moved
        delta_x = event.x - self._drag_data["x"]
        delta_y = 0
        # move the object the appropriate amount
        self.canvas.move(self._drag_data["item"], delta_x, delta_y)
        # record the new position
        self._drag_data["x"] = event.x
        self._drag_data["y"] = event.y


if __name__ == "__main__":
    root = tk.Tk()
    Example(root).pack(fill="both", expand=True)
    root.mainloop()
```
Any help is greatly appreciated.
|
2020/06/23
|
[
"https://Stackoverflow.com/questions/62528247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13796693/"
] |
The issue with your code is that you create two canvases, one for the matplotlib figure and one for the draggable rectangle, while you want both on the same one.
To solve this, I merged the current code of the question with the one before the edit, so the whole matplotlib figure is now embedded in the Tkinter window. The key modification I made to the `DraggableLine` class is that it now takes the canvas as an argument.
```
import tkinter as tk # python 3
# import Tkinter as tk # python 2
import numpy as np
import matplotlib.lines as lines
from matplotlib.backends.backend_tkagg import (
FigureCanvasTkAgg, NavigationToolbar2Tk)
from matplotlib.figure import Figure
class DraggableLine:
    def __init__(self, ax, canvas, XorY):
        self.ax = ax
        self.c = canvas
        self.XorY = XorY

        x = [XorY, XorY]
        y = [-2, 2]
        self.line = lines.Line2D(x, y, color='red', picker=5)
        self.ax.add_line(self.line)
        self.c.draw_idle()
        self.sid = self.c.mpl_connect('pick_event', self.clickonline)

    def clickonline(self, event):
        if event.artist == self.line:
            self.follower = self.c.mpl_connect("motion_notify_event", self.followmouse)
            self.releaser = self.c.mpl_connect("button_press_event", self.releaseonclick)

    def followmouse(self, event):
        self.line.set_xdata([event.xdata, event.xdata])
        self.c.draw_idle()

    def releaseonclick(self, event):
        self.XorY = self.line.get_xdata()[0]
        self.c.mpl_disconnect(self.releaser)
        self.c.mpl_disconnect(self.follower)


class Example(tk.Frame):
    """Illustrate how to drag items on a Tkinter canvas"""

    def __init__(self, parent):
        tk.Frame.__init__(self, parent)
        fig = Figure(figsize=(5, 4), dpi=100)
        t = np.arange(0, 3, .01)
        ax = fig.add_subplot(111)
        ax.plot(t, 2 * np.sin(2 * np.pi * t))

        # create the canvas
        canvas = FigureCanvasTkAgg(fig, master=root)  # A tk.DrawingArea.
        canvas.draw()
        canvas.get_tk_widget().pack(side=tk.TOP, fill=tk.BOTH, expand=1)
        self.line = DraggableLine(ax, canvas, 0.1)


if __name__ == "__main__":
    root = tk.Tk()
    Example(root).pack(fill="both", expand=True)
    root.mainloop()
```
|
I'm not sure how to do it with tkinter or PyQt, but I know how to make something like this with PyGame, which is another GUI solution for Python. I hope this example helps you:
```
import pygame
SCREEN_WIDTH = 430
SCREEN_HEIGHT = 410
WHITE = (255, 255, 255)
RED = (255, 0, 0)
FPS = 30
pygame.init()
screen = pygame.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT))
rectangle = pygame.rect.Rect(176, 134, 17, 170)
rectangle_draging = False
clock = pygame.time.Clock()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            if event.button == 1:
                if rectangle.collidepoint(event.pos):
                    rectangle_draging = True
                    mouse_x, mouse_y = event.pos
                    offset_x = rectangle.x - mouse_x
                    offset_y = rectangle.y - mouse_y
        elif event.type == pygame.MOUSEBUTTONUP:
            if event.button == 1:
                rectangle_draging = False
                print("Line is at: (", rectangle.x, ";", rectangle.y, ")")
        elif event.type == pygame.MOUSEMOTION:
            if rectangle_draging:
                mouse_x, mouse_y = event.pos
                rectangle.x = mouse_x + offset_x
                rectangle.y = mouse_y + offset_y

    screen.fill(WHITE)
    pygame.draw.rect(screen, RED, rectangle)
    pygame.display.flip()
    clock.tick(FPS)

pygame.quit()
```
And when you move the line around, the console prints the location that it is in.
| 15,706
|
29,826,430
|
I want to extract a variable named `value` that is set in a second, arbitrarily chosen, python script.
The process works when I do it manually in Python's interactive mode, but when I run the main script from the command line, `value` is not imported.
The main script's input arguments are already successfully forwarded, but `value` seems to be in the local scope of the executed script.
I already tried to define `value` in the main script, and I also tried to set its accessibility to global.
This is the script I have so far
```py
import sys
import getopt
def main(argv):
    try:
        (opts, args) = getopt.getopt(argv, "s:o:a:", ["script=", "operations=", "args="])
    except getopt.GetoptError as e:
        print(e)
        sys.exit(2)

    # script to be called
    script = ""
    # arguments that are expected by script
    operations = []
    argv = []

    for (opt, arg) in opts:
        if opt in ("-o", "--operations"):
            operations = arg.split(',')
            print("operations = '%s'" % str(operations))
        elif opt in ("-s", "--script"):
            script = arg
            print("script = '%s'" % script)
        elif opt in ("-a", "--args"):
            argv = arg.split(',')
            print("arguments = '%s'" % str(argv))

    # script should define variable 'value'
    exec(open(script).read())
    print("Executed '%s'. Value is printed below." % script)
    print("Value = '%s'" % value)


if __name__ == "__main__":
    main(sys.argv[1:])
```
|
2015/04/23
|
[
"https://Stackoverflow.com/questions/29826430",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1809463/"
] |
In my case I am using Handlebars for Mandrill templates, and I have solved this by sending HTML to the template (`<br/>` instead of `\n`), replacing `\n` in my string with something like: `.Description.Replace("\n", "<br/>");`
And then on the mandrill template I put the variable inside {{{ variable }}} instead of {{ variable }}
<http://blog.mandrill.com/handlebars-for-templates-and-dynamic-content.html>
>
> **HTML escaping**
>
>
> Handlebars HTML-escapes values returned by an
> `{{expression}}`. If you don't want Handlebars to escape a value, use
> the "triple-stash": {{{expression}}}.
>
>
> In your template:
>
>
> `<div> {{{html_tag_content}}} </div>` In your API request:
>
>
>
```
"global_merge_vars": [ {
"name": "html_tag_content",
"content": "This example<br>is all about<br>the magical world of handlebars" } ]
```
>
> The result:
>
>
> This example
>
> is all about
>
> the magical world of handlebars
>
>
>
|
If you want to send more complex content, be it html, or variables with break lines and whatnot, you can first render the template and then send the message, instead of directly using `send-template`.
Render the template with a call to [`templates.render`](https://mandrillapp.com/api/docs/templates.JSON.html#method=render):
```
{
"key": "example key",
"template_name": "Example Template",
"template_content": [
{
"name": "editable",
"content": "<div>content to inject *|MERGE1|*</div>"
}
],
"merge_vars": [
{
"name": "merge1",
"content": "merge1 content\n contains line breaks"
}
]
}
```
Save the result to a variable `rendered_content`.
Send the message using the rendered template with [`messages.send`](https://mandrillapp.com/api/docs/messages.JSON.html#method=send):
```
{
"message": {
"html": rendered_content,
...
...
}
```
| 15,707
|
57,561,119
|
Using python 3, I'm trying to append a sheet from an existing excel file to another excel file.
I have conditional formats in this excel file so I can't just use pandas.
```
from openpyxl import load_workbook
final_wb = load_workbook("my_final_workbook_with_lots_of_sheets.xlsx")
new_wb = load_workbook("workbook_with_one_sheet.xlsx")
worksheet_new = new_wb['Sheet1']
final_wb.create_sheet(worksheet_new)
final_wb.save("my_final_workbook_with_lots_of_sheets.xlsx")
```
This code doesn't work because the `.create_sheet` method only makes a new blank sheet, I want to insert my `worksheet_new` that I loaded from the other file into the `final_wb`
|
2019/08/19
|
[
"https://Stackoverflow.com/questions/57561119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11193105/"
] |
So another approach, without color ranges.
A couple of things are not going right in your code, I think. First, you are drawing the contours on `thresh_binary`, but that already has the outer lines of the other cells as well, which are the very lines you are trying to get rid of. I think that is why you use `opening`(?), while in this case you shouldn't.
To fix things, first a little information on how findContours works. findContours starts looking for white shapes on a black background and then looks for black shapes inside that white contour and so on. That means that the white outline of the cells in the `thresh_binary` are detected as a contour. Inside of it are other contours, including the one you want. [docs with examples](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contours_hierarchy/py_contours_hierarchy.html#contours-hierarchy)
What you should do is first look only for contours that have no contours inside of them. The findContours also returns a hierarchy of contours. It indicates whether a contour has 'children'. If it has none (value: -1), then you look at the size of the contour and disregard the ones that are too small. You could also just look for the largest, as that is probably the one you want. Finally you draw the contour on a black mask.
Result:
[](https://i.stack.imgur.com/YP0aj.png)
Code:
```
import cv2 as cv
import numpy as np
# load image as grayscale
cell1 = cv.imread("PjMQR.png",0)
# threshold image
ret,thresh_binary = cv.threshold(cell1,107,255,cv.THRESH_BINARY)
# findcontours
contours, hierarchy = cv.findContours(image =thresh_binary , mode = cv.RETR_TREE,method = cv.CHAIN_APPROX_SIMPLE)
# create an empty mask
mask = np.zeros(cell1.shape[:2],dtype=np.uint8)
# loop through the contours
for i,cnt in enumerate(contours):
    # if the contour has no other contours inside of it
    if hierarchy[0][i][2] == -1 :
        # if the size of the contour is greater than a threshold
        if cv.contourArea(cnt) > 10000:
            cv.drawContours(mask,[cnt], 0, (255), -1)

# display result
cv.imshow("Mask", mask)
cv.imshow("Img", cell1)
cv.waitKey(0)
cv.destroyAllWindows()
```
Note: I used the image you uploaded, your image probably has far fewer pixels, so a smaller contourArea
Note2: `enumerate` loops through the contours, and returns both a contour and an index for each loop
|
Actually, in your code the 'box' is a legitimate extra contour. And you draw all contours on the final image, so that includes the 'box'. This could cause issues if any of the other colored cells are fully in the image.
A better approach is to separate out the color you want. The code below creates a binary mask that only displays the pixels that are in the defined range of red colors. You can use this mask with `findContours`.
Result:
[](https://i.stack.imgur.com/1pmg4.png)
Code:
```
import cv2
import numpy as np

# load image
img = cv2.imread("PjMQR.png")
# Convert HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of red color in HSV
lower_val = np.array([0,20,0])
upper_val = np.array([15,255,255])
# Threshold the HSV image to get only red colors
mask = cv2.inRange(hsv, lower_val, upper_val)
# display image
cv2.imshow("Mask", mask)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
[This code](https://github.com/twenty-twenty/opencv_basic_color_separator) can help you understand how the different values in this process (HSV with inRange) works. [inRange docs](https://docs.opencv.org/3.4.5/d2/de8/group__core__array.html#ga48af0ab51e36436c5d04340e036ce981)
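To sanity-check a single pixel against such a hue range without OpenCV, the standard library's `colorsys` expresses the same idea. Note the unit difference: `colorsys` returns hue in [0, 1] while OpenCV uses [0, 179], so the `lower_val`/`upper_val` bounds are rescaled here (a rough per-pixel sketch, not a replacement for `inRange`):

```python
import colorsys

def is_reddish(r, g, b):
    """Rough per-pixel equivalent of the inRange check above."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # lower_val = [0, 20, 0], upper_val = [15, 255, 255], rescaled to [0, 1]:
    return h <= 15 / 179.0 and s >= 20 / 255.0

print(is_reddish(200, 30, 30))  # a strong red
print(is_reddish(30, 30, 200))  # blue, outside the range
```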
| 15,708
|
27,829,575
|
I have a python script that calls a system program and reads the output from a file `out.txt`, acts on that output, and loops. However, it doesn't work, and a close investigation showed that the python script just opens `out.txt` once and then keeps on reading from that old copy. How can I make the python script reread the file on each iteration? I saw a similar question here on SO but it was about a python script running alongside a program, not calling it, and the solution doesn't work. I tried closing the file before looping back but it didn't do anything.
EDIT:
I already tried closing and opening, it didn't work. Here's the code:
```
import subprocess, os, sys
filename = sys.argv[1]
file = open(filename,'r')
foo = open('foo','w')
foo.write(file.read().rstrip())
foo = open('foo','a')
crap = open(os.devnull,'wb')
numSolutions = 0
while True:
    subprocess.call(["minisat", "foo", "out"], stdout=crap, stderr=crap)
    out = open('out','r')
    if out.readline().rstrip() == "SAT":
        numSolutions += 1
        clause = out.readline().rstrip()
        clause = clause.split(" ")
        print clause
        clause = map(int,clause)
        clause = map(lambda x: -x,clause)
        output = ' '.join(map(lambda x: str(x),clause))
        print output
        foo.write('\n'+output)
        out.close()
    else:
        break
print "There are ", numSolutions, " solutions."
```
|
2015/01/07
|
[
"https://Stackoverflow.com/questions/27829575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3154996/"
] |
You need to flush `foo` so that the external program can see its latest changes. When you write to a file, the data is buffered in the local process and sent to the system in larger blocks. This is done because updating the system file is relatively expensive. In your case, you need to force a flush of the data so that minisat can see it.
```
foo.write('\n'+output)
foo.flush()
```
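The effect is easy to reproduce with nothing but the standard library (a minimal sketch of write buffering, unrelated to minisat itself):

```python
import os
import tempfile

# A second reader of the same file stands in for the external program.
path = os.path.join(tempfile.mkdtemp(), "foo")

f = open(path, "w")
f.write("hello")              # sits in the process-local buffer
before = open(path).read()    # another reader sees nothing yet
f.flush()                     # push the buffer out to the OS
after = open(path).read()     # now the data is visible
f.close()

print(repr(before))  # ''
print(repr(after))   # 'hello'
```

The same flush happens implicitly when the file is closed, which is why short scripts that open, write, and exit never notice the problem.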
|
You take your file\_var and end the loop with file\_var.close().
```
for ... :
ga_file = open('out.txt', 'r')
... do stuff
ga_file.close()
```
Demo of an implementation below (as simple as possible, this is all of the Jython code needed)...
```
__author__ = ''
import time
var = 'false'
while var == 'false':
    out = open('out.txt', 'r')
    content = out.read()
    time.sleep(3)
    print content
    out.close()
```
generates this output:
```
2015-01-09, 'stuff added'
2015-01-09, 'stuff added' # <-- this is when i just saved my update
2015-01-10, 'stuff added again :)' # <-- my new output from file reads
```
I strongly recommend reading the error messages. They hold quite a lot of information.
I think the full file name should be written for debug purposes.
| 15,709
|
74,448,363
|
I am trying to find out if a hex color is "blue". This might be a very subjective thing when comparing different (lighter/ darker) shades of blue or close to blue colors but in my case it does not have to be very precise. I just want to determine if a color is blue or not.
The more generalized question would be, is there a way to calculate which of the "basic colors" is closest to a given hex value? Basic colors being red, blue ,green, yellow, purple, etc. without being too specific.
I am looking for an implementation or library in python but a general solution would also suffice.
|
2022/11/15
|
[
"https://Stackoverflow.com/questions/74448363",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4644897/"
] |
This is a somewhat complicated question, see more discussion here: <https://graphicdesign.stackexchange.com/questions/92984/how-can-i-tell-basic-color-a-hex-code-is-closest-to>
I don't know of any library or implementation that already exists for this. If you really need this functionality though and don't need it to be insanely precise, what you can do is use a color table with the colors you want to classify, e.g.
```
Black #000000 (0,0,0)
White #FFFFFF (255,255,255)
Red #FF0000 (255,0,0)
Lime #00FF00 (0,255,0)
Blue #0000FF (0,0,255)
Yellow #FFFF00 (255,255,0)
Cyan / Aqua #00FFFF (0,255,255)
Magenta / Fuchsia #FF00FF (255,0,255)
Silver #C0C0C0 (192,192,192)
Gray #808080 (128,128,128)
Maroon #800000 (128,0,0)
Olive #808000 (128,128,0)
Green #008000 (0,128,0)
Purple #800080 (128,0,128)
Teal #008080 (0,128,128)
Navy #000080 (0,0,128)
```
(from <https://www.rapidtables.com/web/color/RGB_Color.html>)
And calculate which of these the given hex color is closest to. This isn't going to be completely accurate because of the way we perceive specific colors on the spectrum, but I don't think this problem has a general solution anyways so approximation may be the only path forward.
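As a hedged sketch of that approximation (the table subset and the plain RGB metric are assumptions — Euclidean distance in RGB ignores perceptual effects), the nearest-color lookup can be as simple as:

```python
# Subset of the basic-color table above, as RGB tuples.
BASIC_COLORS = {
    'black':   (0, 0, 0),
    'white':   (255, 255, 255),
    'red':     (255, 0, 0),
    'lime':    (0, 255, 0),
    'blue':    (0, 0, 255),
    'yellow':  (255, 255, 0),
    'cyan':    (0, 255, 255),
    'magenta': (255, 0, 255),
}

def hex_to_rgb(hex_color):
    """'#RRGGBB' -> (r, g, b) integer tuple."""
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def closest_basic_color(hex_color):
    """Return the table entry with the smallest squared RGB distance."""
    rgb = hex_to_rgb(hex_color)
    return min(BASIC_COLORS,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(BASIC_COLORS[name], rgb)))
```

With this table, `closest_basic_color('#1133EE')` maps to `'blue'`; with the full 16-color table above, darker shades would map to `'navy'` instead, so the answer depends on which "basic colors" you include.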
|
I mean you could do:
```
hex = input()
if hex == '#0000FF':
print('Blue')
else:
print('Not blue')
```
If that is what you are looking for.
| 15,712
|
64,532,869
|
I'm very new to Java, but I have decent experience with C++ and Python. I'm working on an exercise that requires implementing an airplane booking system, which does the following:
1. initialize all seats to not occupied (false)
2. ask for input (economy or first class)
3. check if the seat is not occupied
4. if the seat is not occupied, allocate it; otherwise look for the next seat
5. if economy seats are booked out, ask if the user wants to bump up to first class
6. if the user declines, display the message "next plane is in 3hrs"
Here is my code:
```
package oop;
import java.util.Arrays;
import java.util.Scanner;
public class AirplaneBooking {
private final static int MAX_ECO_SEATS = 5;
private final static int MAX_FIRST_CLASS_SEATS = 3;
private final static boolean[] ECO_SEATS = new boolean[MAX_ECO_SEATS];
private final static boolean[] FIRST_CLASS_SEATS = new boolean[MAX_FIRST_CLASS_SEATS];
private static int current_eco_seat = 0;
private static int current_first_class_seat = 0;
public static void initialilze_seats(boolean[] first_class_seats, boolean[] eco_class_seats){
Arrays.fill(first_class_seats, Boolean.FALSE);
Arrays.fill(eco_class_seats, Boolean.FALSE);
}
public static void display(boolean[] seats){
System.out.print("[");
for(boolean seat : seats){
System.out.print(seat + ",");
}
System.out.println("]");
}
public static void book_seat(boolean [] seats, int current_seat){
seats[current_seat] = true;
current_seat++;
System.out.println(current_seat);
}
public static int user_input() {
Scanner input = new Scanner(System.in);
System.out.print("Enter 1 for Economy class or 2 for First class : ");
int user_seat_prefrence = input.nextInt();
if (user_seat_prefrence == 1){
if(current_eco_seat < MAX_ECO_SEATS){
book_seat(ECO_SEATS, current_eco_seat);
}
else{
System.out.println("Looks like eco seats are full, would you like to book for first class insted(1/0) ?");
Scanner next_input = new Scanner(System.in);
int user_next_seat_prefrence = next_input.nextInt();
if (user_next_seat_prefrence == 1){
book_seat(FIRST_CLASS_SEATS, current_first_class_seat);
user_seat_prefrence = 2;
}
else{
System.out.println("next flight leaves in 3 hrs");
}
}
}
else if (user_seat_prefrence == 2){
if (current_first_class_seat < MAX_FIRST_CLASS_SEATS){
book_seat(FIRST_CLASS_SEATS, current_first_class_seat);
}
else{
System.out.println("Looks like first class seats are full, would you like to book economy instead?(1/0)");
int user_next_seat_prefrence = input.nextInt();
if (user_next_seat_prefrence == 1){
book_seat(ECO_SEATS, current_eco_seat);
user_seat_prefrence = 1;
}
else{
System.out.println("Next flight leaves in 3hrs");
}
}
}
else {
System.out.println("Enter valid option");
}
return user_seat_prefrence;
}
public static void print_boarding_pass(int user_seat_prefrence){
if (user_seat_prefrence == 1){
System.out.println("eco");
System.out.println(current_eco_seat - 1);
}
else{
System.out.println("first class");
System.out.println(current_first_class_seat - 1);
}
}
public static void main(String args[]){
initialilze_seats(FIRST_CLASS_SEATS, ECO_SEATS);
display(FIRST_CLASS_SEATS);
display(ECO_SEATS);
while(true){
int user_seat_prefrence = user_input();
print_boarding_pass(user_seat_prefrence);
display(FIRST_CLASS_SEATS);
display(ECO_SEATS);
System.out.print("book another seat:");
Scanner choice = new Scanner(System.in);
boolean book_another_seat = choice.nextBoolean();
if (book_another_seat == false)
break;
}
}
}
```
The problem I'm having with this code: if the economy-class seats (for example) are full, the program is supposed to ask whether I want to book first class instead and wait for my input. If I press `1`, it should book a first-class seat, but the program does not wait for my input and proceeds to the `else` statement instead.
Also, I use the `static` variables `current_eco_seat` and `current_first_class_seat` to keep track of the current seat being booked. I pass one of them to the `book_seat` function, which books the seat and increments the counter so that the next seat can be booked in the next iteration. But the static variable does not get incremented at all.
These are the only problems I have with the program.
Any help is appreciated, thanks.
|
2020/10/26
|
[
"https://Stackoverflow.com/questions/64532869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7617688/"
] |
Java passes method arguments by value. You are passing the *value* of `current_seat` to the `book_seat` method, so incrementing the parameter inside the method does not affect the caller's variable after the method returns.
To solve it, do not pass the static counters as parameters — they are static, so you can access and increment them directly from the method. For example, for the first-class case:
```
public static void book_seat(boolean[] seats){
    seats[current_first_class_seat] = true;
    current_first_class_seat++;
    System.out.println(current_first_class_seat);
}
```
|
1. Checking the input stream
Not sure whether your question is about "static" variables or more about how to handle the input stream.
Regarding:
>
> if I press 1 it should book in first class but the program does not await for my input and proceeds to else statement instead.
>
>
>
You should think about "flushing" the Input Stream before reading again. [flush-clear-system-in-stdin-before-reading](https://stackoverflow.com/questions/18273751/flush-clear-system-in-stdin-before-reading)
2. Method parameter usage
On the other hand, this method is wrong:
```
public static void book_seat(boolean [] seats, int current_seat){
seats[current_seat] = true;
current_seat++;
System.out.println(current_seat);
}
```
The `current_seat++` here has no effect beyond printing information to the user; the variable used by the caller, `current_eco_seat`, will not change at all.
| 15,713
|
2,284,666
|
I've been able to do this through the Django shell, but it hasn't worked on my actual site. Here is my model:
```
class ShoeReview(models.Model):
def __unicode__(self):
return self.title
title = models.CharField(max_length=200)
slug = models.SlugField(unique=True)
keywords = models.TextField()
author = models.ForeignKey(User, related_name='reviews')
pub_date = models.DateTimeField(auto_now_add='true')
Shoe = models.OneToOneField(Shoe) # make this select those without a shoe review in admin area
owner_reviews = OwnerReview.objects.filter(Shoe__name=Shoe)
```
I'm sure this is something silly, but I've been at it for around 6 hours. It's part of the reason I haven't slept in the last 18 :S
**I'm trying to return all OwnerReview objects from the OwnerReview Model that match the current Shoe:**
```
class OwnerReview(models.Model):
def __unicode__(self):
return self.comments
name = models.CharField(max_length=50)
pub_date = models.DateTimeField(auto_now_add='true')
Shoe = models.ForeignKey(Shoe)
email = models.EmailField()
rating = models.PositiveSmallIntegerField()
comments = models.CharField(max_length=500)
```
Here is what i'm trying to accomplish done through "python manage.py shell":
```
>>> from runningshoesreview.reviews.models import Shoe, OwnerReview
>>> ariake = Shoe.objects.get(pk=1)
>>> ariake.name
u'Ariake'
>>> owner_reviews_for_current_shoe = OwnerReview.objects.filter(Shoe__name=ariake.name)
>>> owner_reviews_for_current_shoe
[<OwnerReview: great pk shoe!>, <OwnerReview: good shoes, sometimes tear easy though.>]
```
|
2010/02/17
|
[
"https://Stackoverflow.com/questions/2284666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/200916/"
] |
```
ariake = Shoe.objects.get(pk=1)
# get the OwnerReviews
ariake.ownerreview_set.all()
# or the ShoeReviews
akiake.shoereview_set.all()
```
Or if you really want to use the OwnerReview class directly
```
OwnerReview.objects.filter(shoe=ariaki)
```
A question for you: did you mean to use `OneToOneField(Shoe)` and not `ForeignKey(Shoe)` in the ShoeReview class? ForeignKey makes more sense to me. I would lowercase the 'Shoe' field in both of your review models and remove the reference to OwnerReview in ShoeReview.
```
class ShoeReview(models.Model):
# ...
shoe = models.ForeignKey(Shoe)
#^^^ lowercase field and ^^^^ capitalized model
# VVV remove this next line VVV
#owner_reviews = OwnerReview.objects.filter(Shoe__name=Shoe)
```
My example assumes that you have switched from 'Shoe' to 'shoe' for the field name in your models.
|
The reason why you're getting an empty QuerySet is because on your ShoeReview model, your filter argument is wrong:
>
> `owner_reviews = OwnerReview.objects.filter(Shoe__name=Shoe)`
>
>
>
Change to this:
>
> `owner_reviews = OwnerReview.objects.filter(Shoe=Shoe) #without __name`
> or you can do like this also:
> `owner_reviews = OwnerReview.objects.filter(Shoe__name=Shoe.name)`
>
>
>
The point is that you can't compare the name attribute to the Shoe object. Either compare an object to an object or an attribute to an attribute.
| 15,715
|
37,209,213
|
I am working on a project with some friends and we're facing a bit of a problem with our implementation of `picamera`.
We're trying to import `cv2` and `picamera` at the start of the program (working with Python 3) and so far importing `cv2` works just fine.
When we're trying to import picamera it tells us this: `ImportError: No module named picamera`.
I made sure we installed `picamera` with "`sudo apt-get install python-picamera`" and with "`...python3-picamera`" and it tells me the modules are installed.
Does anyone know how to fix this problem?
Our goal is to take a photo every 0.5s and use it for our OpenCV-program, and we wanted to do that with `picamera`, and only that.
Might the problem be that our project is located on the Desktop and not in some kind of Project folder or something?
Thanks in advance.
|
2016/05/13
|
[
"https://Stackoverflow.com/questions/37209213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6140273/"
] |
I know this was posted a while ago, but for those who are experiencing the same issue, try this:
```
pip install "picamera[array]"
```
According to [pyimagesearch.com](http://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/ "pyimagesearch.com") it's necessary to install the optional "array" sub-module for OpenCV to interface properly with picamera.
|
This might help:
```
sudo pip3 install picamera
```
I ran this on my desktop and something installed so it should work
if pip isn't installed you may have to run
```
sudo apt-get install python3-pip
```
sources:
[How to install pip with Python 3?](https://stackoverflow.com/questions/6587507/how-to-install-pip-with-python-3)
| 15,716
|
57,524,198
|
The script won't run, and sadly I don't know why.
It's about an EOL error, but I'm not that much into Python, so I need your help.
I've tried different things and none of them worked; my friend, who actually knows Python, tried and failed too.
This is just a menu script for running multiple antivirus checks whenever I want to scan my computer:
```
import sys, string, os, arcgisscripting
def menu():
print ("Welcome To S1MPL3 MENU, an simple made antivirus for open Wi-Fi and Computer Repair.\n 1. Easy File Check \n 2.Total Time Security \n 3.Suspicius Ip Check")
choice = input()
if choice == "1":
print("Checking Files ... (The process wont take long !")
os.chdir 'C:\Users\alexa\Desktop\Core_Files\Projects\S1mpl3 Antivirus\Check\Files\File_Check.vbs\
menu()
if choice == "2":
print("TTS Chosen!")
os.chdir 'C:\Users\alexa\Desktop\Projects\S1mpl3_Antivirus\Check\\Files\Ip_Check\'
menu()
if choice == "3":
print("Checking For Suspicius Ip in your Home Wi-Fi")
os.chdir 'C:\Users\alexa\Desktop\Core_Files\Projects\S1mpl3 Antivirus\Check\Files\Ip_Check\'
menu()
menu()
```
The error should be in the S1m of choice 2
>
> Error: Syntax Error EOL while scanning string literal
>
>
>
|
2019/08/16
|
[
"https://Stackoverflow.com/questions/57524198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11935868/"
] |
You are using backslash characters `\` in your paths. While this is fine on the command line, it is (mostly) not correct in Python source code: the backslash is an escape character that changes the meaning of the following character. In your case the closing apostrophe gets escaped, so the path string is never closed — which is exactly what "EOL while scanning string literal" means.
You can use a raw string like @RonaldAaronson proposed (note that even a raw string must not end in a backslash, so drop the trailing one):
```
r'C:\Users\alexa\Desktop\Projects\S1mpl3_Antivirus\Check\Files\Ip_Check'
```
Or replace all single backslash with double like this:
```
'C:\\Users\\alexa\\Desktop\\Projects\\S1mpl3_Antivirus\\Check\\Files\\Ip_Check\\'
```
Many Windows function also work with the unixoid path separator '/', and `os.chdir()` does so, too:
```
os.chdir('C:/Users/alexa/Desktop/Projects/S1mpl3_Antivirus/Check/Files/Ip_Check')
```
The library `os` has `os.sep` and `os.altsep` that are pathname separators. Use these to write better portable code. Please read the documentation. This is always a good idea.
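The three safe spellings can be checked side by side; the paths below reuse the question's directories purely for illustration.

```python
import os

# Escaped backslashes vs. a raw string: both produce the same literal path.
doubled = 'C:\\Users\\alexa\\Desktop\\Projects'
raw = r'C:\Users\alexa\Desktop\Projects'   # raw strings must not end in '\'

def build_path(*parts):
    # os.path.join inserts the platform's separator (os.sep) for you,
    # so the same code runs unchanged on Windows and Unix.
    return os.path.join(*parts)
```

For example, `build_path('Check', 'Files', 'Ip_Check')` yields `Check\Files\Ip_Check` on Windows and `Check/Files/Ip_Check` elsewhere.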
Second observation: You need to call `os.chdir()` with parentheses.
|
You are missing a single quote at the end of the line:
```
if choice == "1":
print("Checking Files ... (The process wont take long !")
os.chdir 'C:\Users\alexa\Desktop\Core_Files\Projects\S1mpl3 Antivirus\Check\Files\File_Check.vbs\ **<---here**
menu()
```
| 15,717
|
65,273,118
|
I am new to deep learning and I have been trying to install tensorflow-gpu version in my pc in vain for the last 2 days. I avoided installing CUDA and cuDNN drivers since several forums online don't recommend it due to numerous compatibility issues. Since I was already using the conda distribution of python before, I went for the `conda install -c anaconda tensorflow-gpu` as written in their official website here: <https://anaconda.org/anaconda/tensorflow-gpu> .
However even after installing the gpu version in a fresh virtual environment (to avoid potential conflicts with pip installed libraries in the base env), tensorflow doesn't seem to even recognize my GPU for some mysterious reason.
Some of the code snippets I ran(in anaconda prompt) to understand that it wasn't recognizing my GPU:-
1.
```
>>>from tensorflow.python.client import device_lib
>>>print(device_lib.list_local_devices())
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 7692219132769779763
]
```
As you can see it completely ignores the GPU.
2.
```
>>>tf.debugging.set_log_device_placement(True)
>>>a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
2020-12-13 10:11:30.902956: I tensorflow/core/platform/cpu_feature_guard.cc:142] This
TensorFlow
binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU
instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
>>>b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
>>>c = tf.matmul(a, b)
>>>print(c)
tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32)
```
Here, it was supposed to indicate that it ran with a GPU by showing `Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0` (as written here: <https://www.tensorflow.org/guide/gpu>) but nothing like that is present. Also I am not sure what the message after the 2nd line means.
I have also searched for several solutions online including here but almost all of the issues are related to the first manual installation method which I haven't tried yet since everyone recommended this approach.
I don't use cmd anymore since the environment variables somehow got messed up after uninstalling tensorflow-cpu from the base env and on re-installing, it worked perfectly with anaconda prompt but not cmd. This is a separate issue (and widespread also) but I mentioned it in case that has a role to play here. I installed the gpu version in a fresh virtual environment to ensure a clean installation and as far as I understand path variables need to be set up only for manual installation of CUDA and cuDNN libraries.
The card which I use:-(which is CUDA enabled)
```
C:\WINDOWS\system32>wmic path win32_VideoController get name
Name
NVIDIA GeForce 940MX
Intel(R) HD Graphics 620
```
Tensorflow and python version I am using currently:-
```
>>> import tensorflow as tf
>>> tf.__version__
'2.3.0'
Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
```
System information: Windows 10 Home, 64-bit operating system, x64-based processor.
Any help would be really appreciated. Thanks in advance.
|
2020/12/13
|
[
"https://Stackoverflow.com/questions/65273118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14372142/"
] |
I see that your GPU has **[compute capability 5.0](https://developer.nvidia.com/cuda-gpus)** which is OK, TensorFlow should like it. Thus I assume something went wrong during the environment setup. Please try creating a new environment using:
```
conda create --name tf_gpu tensorflow-gpu
```
Then install all other packages you want in ***tf\_gpu*** and try again.
P.S: it is really important that in the environment you have only one TensorFlow package (the gpu one). If you have more than one, there is no guarantee that
```
import tensorflow as tf
```
will import the one you want ...
|
Using `conda` to install TensorFlow is always a better way to manage multiple versions of TensorFlow itself as well as CUDA and cuDNN. I recently created a new conda environment and prepared to install the newest TensorFlow, and I encountered the same issue you mention. I checked the dependency list from `conda install tensorflow-gpu` and found that the `cudatoolkit` and `cudnn` packages are missing. As the latest version of tensorflow-gpu at Anaconda is 2.3, I think the problem was already pointed out by @GZ0's answer at the GitHub issue.
Here I list the output below:
Using `conda install tensorflow-gpu=2.3`:
```
PS > conda install tensorflow-gpu=2.3
## Package Plan ##
environment location: C:\Anaconda3\envs\test_cuda_38
added / updated specs:
- tensorflow-gpu=2.3
The following packages will be downloaded:
package | build
---------------------------|-----------------
absl-py-0.12.0 | py38haa95532_0 176 KB
aiohttp-3.7.4 | py38h2bbff1b_1 513 KB
astunparse-1.6.3 | py_0 17 KB
async-timeout-3.0.1 | py38haa95532_0 14 KB
blas-1.0 | mkl 6 KB
blinker-1.4 | py38haa95532_0 23 KB
brotlipy-0.7.0 |py38h2bbff1b_1003 412 KB
cachetools-4.2.1 | pyhd3eb1b0_0 13 KB
cffi-1.14.5 | py38hcd4344a_0 224 KB
chardet-3.0.4 |py38haa95532_1003 194 KB
click-7.1.2 | pyhd3eb1b0_0 64 KB
coverage-5.5 | py38h2bbff1b_2 272 KB
cryptography-3.4.7 | py38h71e12ea_0 643 KB
cython-0.29.23 | py38hd77b12b_0 1.7 MB
gast-0.4.0 | py_0 15 KB
google-auth-1.29.0 | pyhd3eb1b0_0 76 KB
google-auth-oauthlib-0.4.4 | pyhd3eb1b0_0 18 KB
google-pasta-0.2.0 | py_0 46 KB
grpcio-1.36.1 | py38hc60d5dd_1 1.7 MB
h5py-2.10.0 | py38h5e291fa_0 841 KB
hdf5-1.10.4 | h7ebc959_0 7.9 MB
icc_rt-2019.0.0 | h0cc432a_1 6.0 MB
idna-2.10 | pyhd3eb1b0_0 52 KB
importlib-metadata-3.10.0 | py38haa95532_0 34 KB
intel-openmp-2021.2.0 | haa95532_616 1.8 MB
keras-applications-1.0.8 | py_1 29 KB
keras-preprocessing-1.1.2 | pyhd3eb1b0_0 35 KB
libprotobuf-3.14.0 | h23ce68f_0 1.9 MB
markdown-3.3.4 | py38haa95532_0 144 KB
mkl-2021.2.0 | haa95532_296 115.5 MB
mkl-service-2.3.0 | py38h2bbff1b_1 49 KB
mkl_fft-1.3.0 | py38h277e83a_2 137 KB
mkl_random-1.2.1 | py38hf11a4ad_2 223 KB
multidict-5.1.0 | py38h2bbff1b_2 61 KB
numpy-1.20.1 | py38h34a8a5c_0 23 KB
numpy-base-1.20.1 | py38haf7ebc8_0 4.2 MB
oauthlib-3.1.0 | py_0 91 KB
opt_einsum-3.1.0 | py_0 54 KB
protobuf-3.14.0 | py38hd77b12b_1 242 KB
pyasn1-0.4.8 | py_0 57 KB
pyasn1-modules-0.2.8 | py_0 72 KB
pycparser-2.20 | py_2 94 KB
pyjwt-1.7.1 | py38_0 48 KB
pyopenssl-20.0.1 | pyhd3eb1b0_1 49 KB
pyreadline-2.1 | py38_1 145 KB
pysocks-1.7.1 | py38haa95532_0 31 KB
requests-2.25.1 | pyhd3eb1b0_0 52 KB
requests-oauthlib-1.3.0 | py_0 23 KB
rsa-4.7.2 | pyhd3eb1b0_1 28 KB
scipy-1.6.2 | py38h66253e8_1 13.0 MB
tensorboard-plugin-wit-1.6.0| py_0 630 KB
tensorflow-2.3.0 |mkl_py38h8557ec7_0 6 KB
tensorflow-base-2.3.0 |eigen_py38h75a453f_0 49.5 MB
tensorflow-estimator-2.3.0 | pyheb71bc4_0 271 KB
termcolor-1.1.0 | py38haa95532_1 9 KB
typing-extensions-3.7.4.3 | hd3eb1b0_0 12 KB
typing_extensions-3.7.4.3 | pyh06a4308_0 28 KB
urllib3-1.26.4 | pyhd3eb1b0_0 105 KB
werkzeug-1.0.1 | pyhd3eb1b0_0 239 KB
win_inet_pton-1.1.0 | py38haa95532_0 35 KB
wrapt-1.12.1 | py38he774522_1 49 KB
yarl-1.6.3 | py38h2bbff1b_0 153 KB
------------------------------------------------------------
Total: 210.0 MB
```
Using `conda install tensorflow-gpu=2.1`:
```
PS > conda install tensorflow-gpu=2.1
## Package Plan ##
environment location: C:\Anaconda3\envs\test_cuda
added / updated specs:
- tensorflow-gpu=2.1
The following packages will be downloaded:
package | build
---------------------------|-----------------
_tflow_select-2.1.0 | gpu 3 KB
absl-py-0.12.0 | py37haa95532_0 175 KB
aiohttp-3.7.4 | py37h2bbff1b_1 507 KB
astor-0.8.1 | py37haa95532_0 47 KB
async-timeout-3.0.1 | py37haa95532_0 14 KB
blas-1.0 | mkl 6 KB
blinker-1.4 | py37haa95532_0 23 KB
brotlipy-0.7.0 |py37h2bbff1b_1003 337 KB
cachetools-4.2.1 | pyhd3eb1b0_0 13 KB
cffi-1.14.5 | py37hcd4344a_0 220 KB
chardet-3.0.4 |py37haa95532_1003 192 KB
click-7.1.2 | pyhd3eb1b0_0 64 KB
coverage-5.5 | py37h2bbff1b_2 273 KB
cryptography-3.4.7 | py37h71e12ea_0 641 KB
cudatoolkit-10.1.243 | h74a9793_0 300.3 MB
cudnn-7.6.5 | cuda10.1_0 179.1 MB
cython-0.29.23 | py37hd77b12b_0 1.7 MB
gast-0.2.2 | py37_0 155 KB
google-auth-1.29.0 | pyhd3eb1b0_0 76 KB
google-auth-oauthlib-0.4.4 | pyhd3eb1b0_0 18 KB
google-pasta-0.2.0 | py_0 46 KB
grpcio-1.36.1 | py37hc60d5dd_1 1.7 MB
h5py-2.10.0 | py37h5e291fa_0 808 KB
hdf5-1.10.4 | h7ebc959_0 7.9 MB
icc_rt-2019.0.0 | h0cc432a_1 6.0 MB
idna-2.10 | pyhd3eb1b0_0 52 KB
importlib-metadata-3.10.0 | py37haa95532_0 34 KB
intel-openmp-2021.2.0 | haa95532_616 1.8 MB
keras-applications-1.0.8 | py_1 29 KB
keras-preprocessing-1.1.2 | pyhd3eb1b0_0 35 KB
libprotobuf-3.14.0 | h23ce68f_0 1.9 MB
markdown-3.3.4 | py37haa95532_0 144 KB
mkl-2021.2.0 | haa95532_296 115.5 MB
mkl-service-2.3.0 | py37h2bbff1b_1 48 KB
mkl_fft-1.3.0 | py37h277e83a_2 133 KB
mkl_random-1.2.1 | py37hf11a4ad_2 214 KB
multidict-5.1.0 | py37h2bbff1b_2 85 KB
numpy-1.20.1 | py37h34a8a5c_0 23 KB
numpy-base-1.20.1 | py37haf7ebc8_0 4.1 MB
oauthlib-3.1.0 | py_0 91 KB
opt_einsum-3.1.0 | py_0 54 KB
protobuf-3.14.0 | py37hd77b12b_1 240 KB
pyasn1-0.4.8 | py_0 57 KB
pyasn1-modules-0.2.8 | py_0 72 KB
pycparser-2.20 | py_2 94 KB
pyjwt-1.7.1 | py37_0 49 KB
pyopenssl-20.0.1 | pyhd3eb1b0_1 49 KB
pyreadline-2.1 | py37_1 143 KB
pysocks-1.7.1 | py37_1 28 KB
requests-2.25.1 | pyhd3eb1b0_0 52 KB
requests-oauthlib-1.3.0 | py_0 23 KB
rsa-4.7.2 | pyhd3eb1b0_1 28 KB
scipy-1.6.2 | py37h66253e8_1 12.8 MB
six-1.15.0 | py37haa95532_0 51 KB
tensorboard-plugin-wit-1.6.0| py_0 630 KB
tensorflow-2.1.0 |gpu_py37h7db9008_0 4 KB
tensorflow-base-2.1.0 |gpu_py37h55f5790_0 105.3 MB
tensorflow-estimator-2.1.0 | pyhd54b08b_0 251 KB
tensorflow-gpu-2.1.0 | h0d30ee6_0 3 KB
termcolor-1.1.0 | py37haa95532_1 9 KB
typing-extensions-3.7.4.3 | hd3eb1b0_0 12 KB
typing_extensions-3.7.4.3 | pyh06a4308_0 28 KB
urllib3-1.26.4 | pyhd3eb1b0_0 105 KB
werkzeug-0.16.1 | py_0 258 KB
win_inet_pton-1.1.0 | py37haa95532_0 35 KB
wrapt-1.12.1 | py37he774522_1 49 KB
yarl-1.6.3 | py37h2bbff1b_0 151 KB
------------------------------------------------------------
Total: 745.0 MB
```
Therefore, you may install the latest version (v2.3) of tensorflow-gpu from Anaconda on Windows using the advice from @GZ0 and @geometrikal, or just run `conda install tensorflow-gpu=2.1` to get an environment that ships `cudatoolkit` and `cudnn` correctly.
Notice that tensorflow-gpu v2.1 only supports Python 3.5–3.7.
| 15,719
|
74,365,103
|
I have a small program written in Python, and from an API I would like to go through all of the returned items, not just the first one:
`code = url.json()["data"][0]["name"]`
But I do not know how to do it.
This is my code:
```
import requests
swf = input("write: ")
url = requests.get(f"https://apihabbo.com/api/furnis?hotel=es&name={swf}")
code = url.json()["data"][0]["name"]
print(code)
```
Can someone help me, thank you very much in advance!
this is the url:
<https://apihabbo.com/api/furnis?hotel=es&name=ducha>
I have tried this code, but with no success:
```
response = requests.get("https://apihabbo.com/api/furnis?hotel=es&name=Gorro%20con%20Pomp%C3%B3n")
data = response.json()
for i in data['data'][0]['code']:
print("{}".format(i['code']))
```
|
2022/11/08
|
[
"https://Stackoverflow.com/questions/74365103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20452313/"
] |
`data['data'][0]['code']` is not a list. The list is `data['data']`, you need to loop over that.
```
for d in data['data']:
print(d['code'])
```
|
You have to iterate through the list of received data points.
```
response = requests.get("https://apihabbo.com/api/furnis?hotel=es&name=Gorro%20con%20Pomp%C3%B3n")
data = response.json()
for i in data['data']:
print("{}".format(i['code']))
```
| 15,729
|
61,235,115
|
**Mac OS**: when I try to run anything involving pip, I get
```
-bash: pip: command not found
```
This happened after I accidentally deleted the pip Unix file in `usr/local/bin` while trying to solve a different problem with pip. At this point, I've pretty much given up on solving the problem manually.
>
> Is there a way to just completely uninstall python and pip and start all over again?
>
>
>
|
2020/04/15
|
[
"https://Stackoverflow.com/questions/61235115",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13322285/"
] |
In recent Python versions pip is available as a module rather than as an individual script. Try:
```
python -m pip
```
|
The solution was surprisingly simple: I deleted everything Python-related from my computer:
1. deleted the `Python` app in the `Applications` folder
2. deleted all python- and pip-related files in `usr/local/bin`
3. deleted the `Python.framework` folder in `Library/Frameworks`
4. searched for and deleted all folders named `python`
Then I reinstalled Python via the official python page. Pip seems to work now.
| 15,730
|
11,616,003
|
I install virtualenv with command `sudo /usr/bin/pip-2.6 install virtualenv`
And it says
```
Requirement already satisfied (use --upgrade to upgrade):
virtualenv in /usr/local/lib/python2.6/dist-packages
Cleaning up...
```
Why pip from /usr/bin looks to /usr/local/lib?
I need to install virtualenv scripts directly to /usr/bin, so I write
```
sudo /usr/bin/pip-2.6 install --install-option="--install-scripts=/usr/bin" virtualenv
```
But again it responds with
```
Requirement already satisfied (use --upgrade to upgrade):
virtualenv in /usr/local/lib/python2.6/dist-packages
Cleaning up...
```
Adding --upgrade doesn't help.
How can I install virtualenv scripts to /usr/bin ?
|
2012/07/23
|
[
"https://Stackoverflow.com/questions/11616003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/429873/"
] |
Starting with iOS 6, you MUST set the audio session category to 'playback' before creating the UIWebView. This is all you have to do. It is not necessary to make the session active.
This should be used for html video as well, because if you don't configure the session, your video will be muted when the ringer switch is off.
```
#import <AVFoundation/AVFoundation.h>
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
BOOL ok;
NSError *setCategoryError = nil;
ok = [audioSession setCategory:AVAudioSessionCategoryPlayback
error:&setCategoryError];
if (!ok) {
NSLog(@"%s setCategoryError=%@", __PRETTY_FUNCTION__, setCategoryError);
}
```
Ensure that your target links to the AVFoundation framework.
---
If using Cordova, the file you need to modify is `platforms/ios/MyApp/Classes/AppDelegate.m`, and will end up looking like this:
```
#import "AppDelegate.h"
#import "MainViewController.h"
#import <AVFoundation/AVFoundation.h>
@implementation AppDelegate
- (BOOL)application:(UIApplication*)application didFinishLaunchingWithOptions:(NSDictionary*)launchOptions
{
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
BOOL ok;
NSError *setCategoryError = nil;
ok = [audioSession setCategory:AVAudioSessionCategoryPlayback error:&setCategoryError];
if (!ok) {
NSLog(@"%s setCategoryError=%@", __PRETTY_FUNCTION__, setCategoryError);
}
self.viewController = [[MainViewController alloc] init];
return [super application:application didFinishLaunchingWithOptions:launchOptions];
}
@end
```
Also, as mentioned in the comments, you need to link the AVFoundation Framework, as explained in [this answer](https://stackoverflow.com/questions/19337890/how-to-add-an-existing-framework-in-xcode-5/19337932#19337932):
* Open your project with xcode `open ./platforms/ios/MyApp.xcworkspace/`
* Project navigator > target My App > General
* Scroll to the bottom to find Linked Frameworks and Libraries
|
This plugin will make your app ignore the mute switch. It's basically the same code that's in the other answers but it's nicely wrapped into a plugin so that you don't have to do any manual objective c edits.
<https://github.com/EddyVerbruggen/cordova-plugin-backgroundaudio>
Run this command to add it to your project:
```
cordova plugin add https://github.com/EddyVerbruggen/cordova-plugin-backgroundaudio.git
```
| 15,731
|
15,616,139
|
I am a Python beginner struggling to create and save a list containing tuples from a CSV file in Python.
The code I have so far is:
```
def load_file(filename):
fp = open(filename, 'Ur')
data_list = []
for line in fp:
data_list.append(line.strip().split(','))
fp.close()
return data_list
```
and then I would like to save the file
```
def save_file(filename, data_list):
fp = open(filename, 'w')
for line in data_list:
fp.write(','.join(line) + '\n')
fp.close()
```
Unfortunately, my code returns a list of lists, not a list of tuples... Is there a way to create one list containing multiple tuples without using the csv module?
|
2013/03/25
|
[
"https://Stackoverflow.com/questions/15616139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2207588/"
] |
`split` returns a list, if you want a tuple, convert it to a tuple:
```
data_list.append(tuple(line.strip().split(',')))
```
Please use the `csv` module.
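A hedged sketch of what that looks like in modern Python 3 (`newline=''` is what the csv docs recommend when opening files for the module):

```python
import csv

def load_file(filename):
    # csv.reader handles quoted fields containing commas,
    # which a plain line.split(',') would break on.
    with open(filename, newline='') as fp:
        return [tuple(row) for row in csv.reader(fp)]

def save_file(filename, data_list):
    # csv.writer quotes fields as needed and writes one row per tuple.
    with open(filename, 'w', newline='') as fp:
        csv.writer(fp).writerows(data_list)
```

Round-tripping a list such as `[('food', 'eggs'), ('beverage', 'milk')]` through `save_file` and then `load_file` returns the same list of tuples.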
|
First question: why is a list of lists bad? In the sense of "duck typing" this should be fine, so maybe think about it again.
If you really need a list of tuples, only small changes are needed.
Change the line
```
data_list.append(line.strip().split(','))
```
to
```
data_list.append(tuple(line.strip().split(',')))
```
That's it.
If you ever want to get rid of custom code (less code is better code), you could stick to the [csv](http://docs.python.org/2/library/csv.html)-module. I'd strongly recommend using as many library methods as possible.
To show off some advanced Python features, your `load_file` method could also look like this:
```
def load_file(filename):
with open(filename, 'Ur') as fp:
        data_list = [tuple(line.strip().split(",")) for line in fp]
    return data_list
```
I use a list comprehension here, it's very concise and easy to understand.
Additionally, I use the `with`-statement, which will close your file pointer, even if an exception occurred within your code. Please always use `with` when working with external resources, like files.
| 15,737
|
21,352,457
|
I'm trying to create a program that will launch livestreamer.exe with flags (-example), but cannot figure out how to do so.
When using Windows' built-in "Run" function, I type this:
`livestreamer.exe twitch.tv/streamer best`
And here is my python code so far:
```
import os
streamer=input("Streamer (full name): ")
quality=input("""Quality:
Best
High
Medium
Low
Mobile
: """).lower()
os.chdir("C:\Program Files (x86)\Livestreamer")
os.startfile("livestreamer.exe twitch.tv "+streamer+" "+quality)
```
I understand that the code is looking for a file that is not named livestreamer.exe (hence the FileNotFoundError), but one whose name includes all the arguments as well. Does anyone know how to launch the program with the arguments built in? Thanks.
|
2014/01/25
|
[
"https://Stackoverflow.com/questions/21352457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2925095/"
] |
```
arr = [["food", "eggs"],["beverage", "milk"],["desert", "cake"]]
arr.inject([]) do |hash, (v1, v2)|
hash << { category: v1, item: v2 }
end
```
I used [`inject`](http://ruby-doc.org/core-2.1.0/Enumerable.html#method-i-inject) to keep the code concise.
Next time you may want to show what you have tried in the question, just to demonstrate that you actually tried to do something before asking for code.
|
```
hash = arr.each_with_object({}){|elem, hsh|hsh[elem[0]] = elem[1]}
```
| 15,740
|
47,810,110
|
I have a text file which has 30 multiple choice questions in the following pattern
1. question one goes here ?
A. Option 1
B. Option 2
C. Option 3
D. Option 4
and so on to 30
Number of options is variable; there are minimum two and maximum six options.
I want to practice these questions in an interface like an HTML/PHP quiz that allows me to select options and at the end displays the result.
I tried reading the file in python and then tried to store questions and answers in separate lists but it did not work.
Below is my code:
```
#to prevent IndexError
question = ['','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','']
answers = ['','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','','']
qOrA = "q"
mcq_file = "mcqs.txt"
mcq = open(mcq_file, "r")
data_list = mcq.readlines()
for i in range(len(data_list)):
element = list(data_list[i])
if element[0] == "A" and element[1] == ".":
qOrA = "a"
if qOrA == "q":
question[i] = question[i]+ " " + data_list[i]
elif qOrA == "a":
answers[i] = answers[i]+ " " + data_list[i]
```
mcq.readlines() output upto question no. 3 is given below
Note: there are actually multiple line breaks, so the file is not properly structured.
```
['\n', '1.\n', '\n', ' \n', '\n', 'Which computer component contains all the \n', '\n', 'circuitry necessary for all components or \n', '\n', 'devices to communicate with each other?\n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', 'A. Motherboard\n', '\n', ' \n', '\n', ' \n', '\n', 'B. Hard Drive\n', '\n', ' \n', '\n', ' \n', '\n', 'C. Expansion Bus\n', '\n', ' \n', '\n', ' \n', '\n', 'D. Adapter Card\n', '\n', ' \n', '\n', ' \n', '\n', '\n', '\n', '\n', '2. \n', '\n', 'Which case type is typically \n', '\n', 'used for servers?\n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', 'A.\n', '\n', ' \n', '\n', ' \n', '\n', 'Mini Tower\n', '\n', ' \n', '\n', ' \n', '\n', 'B.\n', '\n', ' \n', '\n', ' \n', '\n', 'Mid Tower\n', '\n', ' \n', '\n', ' \n', '\n', 'C.\n', '\n', ' \n', '\n', ' \n', '\n', 'Full Tower\n', '\n', ' \n', '\n', ' \n', '\n', 'D.\n', '\n', ' \n', '\n', ' \n', '\n', 'desktop\n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', '\n', '\n', '\n', '3.\n', '\n', ' \n', '\n', 'What is the most reliable way for users to buy the \n', '\n', 'correct RAM to upgrade a computer?\n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', 'A.\n', '\n', ' \n', '\n', ' \n', '\n', 'Buy RAM that is the same color as the memory sockets \n', '\n', 'on the motherboard.\n', '\n', ' \n', '\n', ' \n', '\n', 'B.\n', '\n', ' \n', '\n', ' \n', '\n', 'Ensure that the RAM chip is the same size as the ROM chip.\n', '\n', ' \n', '\n', ' \n', '\n', 'C.\n', '\n', ' \n', '\n', ' \n', '\n', 'Ensure that the RAM is \n', '\n', 'compatible\n', '\n', ' \n', '\n', 'with the peripherals \n', '\n', 'installed on the motherboard.\n', '\n', ' \n', '\n', ' \n', '\n', 'D.\n', '\n', ' \n', '\n', ' \n', '\n', 'Check the motherboard manual or manufacturer’s website.\n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', ' \n', '\n', '\n', '\n', '\n']
```
|
2017/12/14
|
[
"https://Stackoverflow.com/questions/47810110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3986321/"
] |
You just need to change your JS to:
```
$(document).ready(function(){
$('section').mouseenter(function(){
var id = $(this).attr('id');
$('a').removeClass('colorAdded');
$("a[href='#"+id+"']").addClass('colorAdded');
});
});
```
The issue was the missing quotation marks when selecting the `a` tag by its `href` attribute.
|
I actually don't know why your CodePen example is not working; I didn't look into your code carefully, but I tried creating simple code like below and it worked.
The thing you should probably check is how you import jQuery into your page:
`<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>`
Hopefully the following code can help you.
```js
$(document).ready(function(){
$('button').mouseenter( function() {
console.log('hahaha');
})
});
```
```html
<html>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<body>
<div class="test">
<button class="test-button" > test button </button>
</div>
</body>
</html>
```
| 15,745
|
17,957,651
|
For those who need to know, I'm running a 64 bit Ubuntu 12.04, and trying to run the problematic script using a pip-installed python3.2
For a project I was writing I wanted to display an image in a tkinter window. To do this I installed Pillow via pip and installed tkinter for python 3 like so:
```
pip-3.2 install pillow #install stuff here
sudo apt-get install python3-tk
```
I then tried to run the following script
```
import tkinter
from PIL import Image, ImageTk
root = tkinter.Tk()
i = Image.open(<path_to_file>)
p = ImageTk.PhotoImage(i)
```
There's more, but that block throws the errors. Anyways, when I try and run this I get the following error output
```
/usr/bin/python3.2 "/home/anish/PycharmProjects/Picture Renamer/default/Main.py"
Traceback (most recent call last):
File "/usr/local/lib/python3.2/dist-packages/PIL/ImageTk.py", line 184, in paste
tk.call("PyImagingPhoto", self.__photo, block.id)
_tkinter.TclError: invalid command name "PyImagingPhoto"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/anish/PycharmProjects/Picture Renamer/default/Main.py", line 26, in <module>
n = Main("/home/anish/Desktop/Images/")
File "/home/anish/PycharmProjects/Picture Renamer/default/Main.py", line 11, in __init__
self.PictureList = self.MakeImageList(self.dir)
File "/home/anish/PycharmProjects/Picture Renamer/default/Main.py", line 21, in MakeImageList
tkim = ImageTk.PhotoImage(temp_im)
File "/usr/local/lib/python3.2/dist-packages/PIL/ImageTk.py", line 123, in __init__
self.paste(image)
File "/usr/local/lib/python3.2/dist-packages/PIL/ImageTk.py", line 188, in paste
from PIL import _imagingtk
ImportError: cannot import name _imagingtk
Process finished with exit code 1
```
No amount of googling has given me a solution; the topics I find concerning this usually say to reinstall tkinter and/or pillow.
Here are the contents of my /usr/lib/python3.2/tkinter/
```
['dialog.py', 'scrolledtext.py', 'simpledialog.py', 'tix.py', 'dnd.py', 'ttk.py', '__main__.py', '_fix.py', 'font.py', '__pycache__', 'messagebox.py', '__init__.py', 'commondialog.py', 'constants.py', 'colorchooser.py', 'filedialog.py']
```
And here are the [many] files within my /usr/local/lib/python3.2/dist-packages/PIL/
```
['OleFileIO.py', 'ImageFileIO.py', 'ImageCms.py', 'GimpGradientFile.py', 'PSDraw.py', 'ImageDraw2.py', 'GimpPaletteFile.py', 'TiffImagePlugin.py', 'ImageChops.py', 'ImageShow.py', 'ImageStat.py', 'FliImagePlugin.py', 'ImageColor.py', 'XpmImagePlugin.py', 'ImageOps.py', 'ExifTags.py', 'FpxImagePlugin.py', 'PngImagePlugin.py', 'ImageFile.py', 'WalImageFile.py', 'PixarImagePlugin.py', 'PsdImagePlugin.py', '_util.py', 'ImageDraw.py', 'GribStubImagePlugin.py', 'ContainerIO.py', 'CurImagePlugin.py', 'JpegPresets.py', '_imagingft.cpython-32mu.so', '_imagingmath.cpython-32mu.so', 'PpmImagePlugin.py', 'BmpImagePlugin.py', 'XbmImagePlugin.py', 'DcxImagePlugin.py', 'PaletteFile.py', 'SunImagePlugin.py', 'BufrStubImagePlugin.py', 'JpegImagePlugin.py', 'SpiderImagePlugin.py', 'ImageEnhance.py', 'TgaImagePlugin.py', 'IcnsImagePlugin.py', 'MspImagePlugin.py', 'ImageSequence.py', 'GifImagePlugin.py', 'ImageTransform.py', 'FontFile.py', 'GbrImagePlugin.py', 'EpsImagePlugin.py', 'XVThumbImagePlugin.py', 'BdfFontFile.py', 'PcdImagePlugin.py', 'TarIO.py', 'FitsStubImagePlugin.py', 'ImageMode.py', 'ArgImagePlugin.py', 'IcoImagePlugin.py', '_imaging.cpython-32mu.so', 'McIdasImagePlugin.py', '_binary.py', '__pycache__', 'ImageQt.py', 'Hdf5StubImagePlugin.py', 'PalmImagePlugin.py', 'ImagePalette.py', 'WebPImagePlugin.py', 'ImageFont.py', 'ImagePath.py', 'TiffTags.py', 'ImImagePlugin.py', 'ImageWin.py', 'ImageFilter.py', '__init__.py', 'SgiImagePlugin.py', 'ImageTk.py', 'ImageMath.py', 'GdImageFile.py', 'WmfImagePlugin.py', 'PcfFontFile.py', 'ImageGrab.py', 'PdfImagePlugin.py', 'IptcImagePlugin.py', 'ImtImagePlugin.py', 'MpegImagePlugin.py', 'MicImagePlugin.py', 'Image.py', 'PcxImagePlugin.py']
```
Can you guys help with this? I'm totally unsure; this has been confusing me for a few days. I'm thinking that MAYBE Ubuntu's python3-tk package is incomplete, but I can't see that being the case. The same goes for pip's pillow. Any ideas?
|
2013/07/30
|
[
"https://Stackoverflow.com/questions/17957651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1543167/"
] |
So after posting an issue on [GitHub](https://github.com/python-imaging/Pillow/issues/322#issuecomment-23053260 "GitHub") I was told I was missing some libraries.
Specifically I needed to
`sudo apt-get install tk8.5-dev tcl8.5-dev`
and then
`pip install -I pillow`
to rebuild pillow. This worked on my Raspberry Pi running Raspbian.
|
I don't have the rep to comment, so I'll answer instead
I too was getting the error `cannot import name _imagingtk`
using python3 on Linux Mint 17
when trying to do a `tk_im = ImageTk(im)`
First I installed the tk8.6-dev and tcl8.6-dev as suggested above
Then I tried the `pip3 install --upgrade` route, which didn't fix the problem.
If I imported PIL in a shell, PIL.PILLOW\_VERSION was 2.7.0.
Using pip3 list however, it claimed to be 2.3.0.
so I did `sudo pip3 uninstall pillow`, followed by
`sudo pip3 install pillow`
Now, both methods of reading the version return 2.7.0
and the program works!
When/how I caught pillow 2.3, why it wouldn't upgrade with pip, and why it showed as 2.7 in the python shell I don't know, but the uninstall/install fixed it.
| 15,746
|
414,896
|
I'm trying to determine what is the fastest way to read in large text files with many rows, do some processing, and write them to a new file. In C#/.net, it appears StreamReader is a seemingly quick way of doing this but when I try to use for this file (reading line by line), it goes about 1/3 the speed of python's I/O (which worries me because I keep hearing that Python 2.6's IO was relatively slow).
If there isn't faster .Net solution for this, would it be possible to write a solution faster than StreamReader or does it already use complicated buffer/algorithm/optimizations that I would never hope to beat?
|
2009/01/05
|
[
"https://Stackoverflow.com/questions/414896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1469014/"
] |
Do you have a code sample of what you're doing, or the format of the file you are reading?
Another good question would be how much of the stream are you keeping in memory at a time?
|
A general note:
High-performance streaming isn't complicated. You usually have to modify the logic that uses the streamed data; **that's** complicated.
Actually, that's it.
| 15,749
|
55,459,783
|
I am currently trying to code Uno in Python for my Computer Science Principles class, and I created a function to draw cards from the deck into the player's hand. Whenever I run the code I keep getting this error. I was just wondering how to fix it because I have tried a couple of things and have gotten nowhere.
I've tried adding items to the player's hand (which starts empty).
I tried using tuples. I've tried making the drawing variable a list. `x` stipulates which player's hand it is, while `y` is how many cards they draw and `z` is what cards are in the deck.
```py
import random
import time
import sys
def draw_cards(x,y,z):
for q in range(y):
draw = random.choice(z)
x = x.insert(0,draw)
z = z.remove(draw)
return x,z
cards_in_deck = ["red 0","red 1", "red 2", "red 3", "red 4", "red 5","red 6","red 7", "red 8", "red 9", "red skip", "red reverse","red +2","wild","yellow 0","yellow 1", "yellow 2", "yellow 3", "yellow 4", "yellow 5","yellow 6","yellow 7", "yellow 8", "yellow 9", "yellow skip", "yellow reverse","yellow +2","wild","green 0","green 1", "green 2", "green 3", "green 4", "green 5","green 6","green 7", "green 8", "green 9", "green skip", "green reverse","green +2","wild","blue 0","blue 1", "blue 2", "blue 3", "blue 4", "blue 5","blue 6","blue 7", "blue 8", "blue 9", "blue skip", "blue reverse","blue +2","wild","red 1", "red 2", "red 3", "red 4", "red 5","red 6","red 7", "red 8", "red 9", "red skip", "red reverse","red +2","wild +4","yellow 1", "yellow 2", "yellow 3", "yellow 4", "yellow 5","yellow 6","yellow 7", "yellow 8", "yellow 9", "yellow skip", "yellow reverse","yellow +2","wild +4","green 1", "green 2", "green 3", "green 4", "green 5","green 6","green 7", "green 8", "green 9", "green skip", "green reverse","green +2","wild +4","blue 1", "blue 2", "blue 3", "blue 4", "blue 5","blue 6","blue 7", "blue 8", "blue 9", "blue skip", "blue reverse","blue +2","wild +4"]
player_hand = []
ai_dusty_hand = []
ai_cutie_hand = []
ai_smooth_hand= []
draw_cards(ai_dusty_hand,7,cards_in_deck)
draw_cards(ai_cutie_hand,7,cards_in_deck)
draw_cards(ai_smooth_hand,7,cards_in_deck)
draw_cards(player_hand,7,cards_in_deck)
```
I expected the result to be each player having a starting hand, but the output ends in an error.
|
2019/04/01
|
[
"https://Stackoverflow.com/questions/55459783",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11293669/"
] |
Lists in Python are mutable. So when you manipulate a list (even within the scope of a function), the change is reflected everywhere that list is referenced.
```
x = x.insert(0,draw)
z = z.remove(draw)
```
These lines of code are assigning the return of the method calls on the list. Both of these method calls don't return anything (therefore they return `None`).
Remove the assignments of the lists in your function.
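To illustrate what's happening (a throwaway sketch, not part of the original code): both methods mutate the list in place and return `None`.

```python
cards = ["red 1", "blue 2"]
result = cards.insert(0, "wild")
print(result)   # None - insert mutates the list and returns nothing
print(cards)    # ['wild', 'red 1', 'blue 2']
removed = cards.remove("wild")
print(removed)  # None again - the card is gone from the list itself
print(cards)    # ['red 1', 'blue 2']
```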
|
The problem comes from these two lines, because `insert` and `remove` do not return the list:
```
x = x.insert(0, draw)
z = z.remove(draw)
```
`insert` and `remove` do not return anything. Do not reassign `x` and `z` and it should work:
```
x.insert(0, draw)
z.remove(draw)
```
In addition, you should return `z` to save the remaining cards:
```
def draw_cards(x,y,z):
for q in range(y):
draw = random.choice(z)
x.insert(0,draw)
z.remove(draw)
return z
cards_in_deck = draw_cards(ai_dusty_hand,7,cards_in_deck)
cards_in_deck = draw_cards(ai_cutie_hand,7,cards_in_deck)
cards_in_deck = draw_cards(ai_smooth_hand,7,cards_in_deck)
cards_in_deck = draw_cards(player_hand,7,cards_in_deck)
```
| 15,759
|
64,657,061
|
I'm trying to create a simple Python calculator to calculate the dose of a medication.
For a patient weighing 60 kg, the dose should be (60*15) divided by 80.
The expected output is 11.25 vials.
However, I'm getting 7.575757575757576e+27. Please help me diagnose the problem here. Thanks.
Here is the sample code that i've used.
```
# This is a program to calculate dose for IV Drug X
print('Hello Doctor!')
print('What is your name?')
myName= input()
print('It is good to meet you, ' 'Dr.' +myName)
print('What is the weight of patient?') # Patient weight
ptWeight = input()
print ('Number of vial is ' + str(int((ptWeight)*15) / 80)+ ' vials.')
```
And Here is the output that I got.
```
Hello Doctor!
What is your name?
Brian
It is good to meet you, Dr.Brian
What is the weight of patient?
60
Number of vial is 7.575757575757576e+27 vials.
```
|
2020/11/03
|
[
"https://Stackoverflow.com/questions/64657061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11362617/"
] |
You're multiplying the string
Try this:
```
print ('Number of vial is ' + str(int(ptWeight)*15 / 80)+ ' vials.')
```
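A quick demonstration of the difference (using a hypothetical weight of 60, not taken from real input):

```python
ptWeight = "60"                  # input() always returns a string
repeated = ptWeight * 15         # string repetition: '60' written 15 times
print(len(repeated))             # 30 characters, not the number 900
print(int(repeated) / 80)        # 7.575757575757576e+27 - the buggy result
print(int(ptWeight) * 15 / 80)   # 11.25 - convert first, then multiply
```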
|
You are multiplying the string. Try this instead:
```
print('Hello Doctor!')
print('What is your name?')
myName= input()
print('It is good to meet you, ' 'Dr.' +myName)
print('What is the weight of patient?') # Patient weight
ptWeight = int(input())
vitals = round(int((ptWeight)*15) / 80,2)
print ('Number of vial is '+ str(vitals) +' vials.')
```
If you want the full, unrounded value, use this instead:
```
vitals = int((ptWeight)*15) / 80
```
round() function syntax
```
round(number, ndigits)
```
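A few hedged examples of `round` in Python 3; note that ties on integers round to the nearest even number (banker's rounding), which occasionally surprises people:

```python
print(round(11.25, 2))  # 11.25 - already at two decimal places
print(round(0.5))       # 0 - a tie rounds to the nearest even integer
print(round(1.5))       # 2
print(round(2.5))       # 2, not 3 - banker's rounding again
```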
| 15,760
|