| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
16,794,663
|
In Python I'm trying to grab multiple inputs from a string using a regular expression; however, I'm having trouble. For the string:
```
inputs = 12 1 345 543 2
```
I tried using:
```
match = re.match(r'\s*inputs\s*=(\s*\d+)+',string)
```
However, this only returns the value `'2'`. I'm trying to capture all the values `'12','1','345','543','2'` but not sure how to do this.
Any help is greatly appreciated!
EDIT: Thank you all for explaining why this does not work and providing alternative suggestions. Sorry if this is a repeat question.
|
2013/05/28
|
[
"https://Stackoverflow.com/questions/16794663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877334/"
] |
You could try something like:
`re.findall("\d+", your_string)`.
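For instance, a quick sketch assuming the sample string from the question is stored in `your_string`:
```
import re

your_string = 'inputs = 12 1 345 543 2'
print(re.findall(r'\d+', your_string))  # ['12', '1', '345', '543', '2']
```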
|
You should look at this answer: <https://stackoverflow.com/a/4651893/1129561>
In short:
>
> In Python, this isn’t possible with a single regular expression: each capture of a group overrides the last capture of that same group (in .NET, this would actually be possible since the engine distinguishes between captures and groups).
>
>
>
|
16,794,663
|
In Python I'm trying to grab multiple inputs from a string using a regular expression; however, I'm having trouble. For the string:
```
inputs = 12 1 345 543 2
```
I tried using:
```
match = re.match(r'\s*inputs\s*=(\s*\d+)+',string)
```
However, this only returns the value `'2'`. I'm trying to capture all the values `'12','1','345','543','2'` but not sure how to do this.
Any help is greatly appreciated!
EDIT: Thank you all for explaining why this does not work and providing alternative suggestions. Sorry if this is a repeat question.
|
2013/05/28
|
[
"https://Stackoverflow.com/questions/16794663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877334/"
] |
You could try something like:
`re.findall("\d+", your_string)`.
|
You cannot do this with a single regex (unless you were using .NET), because each capturing group will only ever return one result even if it is repeated (the last one in the case of Python).
Since variable length lookbehinds are also not possible (in which case you could do `(?<=inputs.*=.*)\d+`), you will have to separate this into two steps:
```
match = re.match(r'\s*inputs\s*=\s*(\d+(?:\s*\d+)+)', string)
integers = re.split(r'\s+',match.group(1))
```
So now you capture the entire list of integers (and the spaces between them), and then you split that capture at the spaces.
The second step could also be done using `findall`:
```
integers = re.findall(r'\d+',match.group(1))
```
The results are identical.
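For illustration, here is a quick run of both variants on the sample input from the question (assuming it is stored in `string`):
```
import re

string = 'inputs = 12 1 345 543 2'
match = re.match(r'\s*inputs\s*=\s*(\d+(?:\s*\d+)+)', string)
print(re.split(r'\s+', match.group(1)))   # ['12', '1', '345', '543', '2']
print(re.findall(r'\d+', match.group(1)))  # ['12', '1', '345', '543', '2']
```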
|
16,794,663
|
In Python I'm trying to grab multiple inputs from a string using a regular expression; however, I'm having trouble. For the string:
```
inputs = 12 1 345 543 2
```
I tried using:
```
match = re.match(r'\s*inputs\s*=(\s*\d+)+',string)
```
However, this only returns the value `'2'`. I'm trying to capture all the values `'12','1','345','543','2'` but not sure how to do this.
Any help is greatly appreciated!
EDIT: Thank you all for explaining why this does not work and providing alternative suggestions. Sorry if this is a repeat question.
|
2013/05/28
|
[
"https://Stackoverflow.com/questions/16794663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877334/"
] |
You could try something like:
`re.findall("\d+", your_string)`.
|
You can embed your regular expression:
```
import re
s = 'inputs = 12 1 345 543 2'
print re.findall(r'(\d+)', re.match(r'inputs\s*=\s*([\s\d]+)', s).group(1))
>>>
['12', '1', '345', '543', '2']
```
Or do it in layers:
```
import re
def get_inputs(s, regex=r'inputs\s*=\s*([\s\d]+)'):
match = re.match(regex, s)
if not match:
return False # or raise an exception - whatever you want
else:
return re.findall(r'(\d+)', match.group(1))
s = 'inputs = 12 1 345 543 2'
print get_inputs(s)
>>>
['12', '1', '345', '543', '2']
```
|
16,794,663
|
In Python I'm trying to grab multiple inputs from a string using a regular expression; however, I'm having trouble. For the string:
```
inputs = 12 1 345 543 2
```
I tried using:
```
match = re.match(r'\s*inputs\s*=(\s*\d+)+',string)
```
However, this only returns the value `'2'`. I'm trying to capture all the values `'12','1','345','543','2'` but not sure how to do this.
Any help is greatly appreciated!
EDIT: Thank you all for explaining why this does not work and providing alternative suggestions. Sorry if this is a repeat question.
|
2013/05/28
|
[
"https://Stackoverflow.com/questions/16794663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877334/"
] |
You cannot do this with a single regex (unless you were using .NET), because each capturing group will only ever return one result even if it is repeated (the last one in the case of Python).
Since variable length lookbehinds are also not possible (in which case you could do `(?<=inputs.*=.*)\d+`), you will have to separate this into two steps:
```
match = re.match(r'\s*inputs\s*=\s*(\d+(?:\s*\d+)+)', string)
integers = re.split(r'\s+',match.group(1))
```
So now you capture the entire list of integers (and the spaces between them), and then you split that capture at the spaces.
The second step could also be done using `findall`:
```
integers = re.findall(r'\d+',match.group(1))
```
The results are identical.
|
You should look at this answer: <https://stackoverflow.com/a/4651893/1129561>
In short:
>
> In Python, this isn’t possible with a single regular expression: each capture of a group overrides the last capture of that same group (in .NET, this would actually be possible since the engine distinguishes between captures and groups).
>
>
>
|
16,794,663
|
In Python I'm trying to grab multiple inputs from a string using a regular expression; however, I'm having trouble. For the string:
```
inputs = 12 1 345 543 2
```
I tried using:
```
match = re.match(r'\s*inputs\s*=(\s*\d+)+',string)
```
However, this only returns the value `'2'`. I'm trying to capture all the values `'12','1','345','543','2'` but not sure how to do this.
Any help is greatly appreciated!
EDIT: Thank you all for explaining why this does not work and providing alternative suggestions. Sorry if this is a repeat question.
|
2013/05/28
|
[
"https://Stackoverflow.com/questions/16794663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/877334/"
] |
You can embed your regular expression:
```
import re
s = 'inputs = 12 1 345 543 2'
print re.findall(r'(\d+)', re.match(r'inputs\s*=\s*([\s\d]+)', s).group(1))
>>>
['12', '1', '345', '543', '2']
```
Or do it in layers:
```
import re
def get_inputs(s, regex=r'inputs\s*=\s*([\s\d]+)'):
match = re.match(regex, s)
if not match:
return False # or raise an exception - whatever you want
else:
return re.findall(r'(\d+)', match.group(1))
s = 'inputs = 12 1 345 543 2'
print get_inputs(s)
>>>
['12', '1', '345', '543', '2']
```
|
You should look at this answer: <https://stackoverflow.com/a/4651893/1129561>
In short:
>
> In Python, this isn’t possible with a single regular expression: each capture of a group overrides the last capture of that same group (in .NET, this would actually be possible since the engine distinguishes between captures and groups).
>
>
>
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work??????**
The idea is that when the camera detects motion, a script is executed. The script just echoes a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also changed the script to different directories, speculating a permission issue. Still nothing happens. I have tried removing the ; before the methods, still nothing happens. I have tried sudo, I have tried executing as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here are just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
You have to remove the `;`, which comments out the line. Instead of
; on\_motion\_detected python /home/pi/Desktop/Python/script.py
use:
```
on_motion_detected python /home/pi/Desktop/Python/script.py
```
|
Check that you have specified the process id properly in motion.conf:
**process\_id\_file /var/run/motion/motion.pid**
Once you have checked that, change the following settings to arbitrarily low values e.g.
**threshold 1**
**noise\_level 1**
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work??????**
The idea is that when the camera detects motion, a script is executed. The script just echoes a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also changed the script to different directories, speculating a permission issue. Still nothing happens. I have tried removing the ; before the methods, still nothing happens. I have tried sudo, I have tried executing as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here are just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
If you use threads (thread1.conf, thread2.conf), try adding the script configuration to those files.
My threads:
thread1.conf
```
videodevice /dev/video0
text_left USBWebcam-1
target_dir /samba/hdd/share/motion/usb
rotate 180
width 1024
height 768
#webcam_port 9081
#on_event_start /bin/bash /root/scripts/tg_msg.sh
on_picture_save /bin/bash /root/scripts/tg_msg.sh %f
#on_motion_detected /bin/bash /root/scripts/tg_msg.sh
#on_area_detected /bin/bash /root/scripts/tg_msg.sh
#on_movie_start /bin/bash /root/scripts/tg_msg.sh
```
and thread2.conf:
```
text_left Foscam IP
netcam_url http://192.168.111.7:8080/video
target_dir /samba/hdd/share/motion/ip
width 1024
height 768
#on_event_start /bin/bash /root/scripts/tg_msg.sh
on_picture_save /bin/bash /root/scripts/tg_msg.sh %f
#on_motion_detected /bin/bash /root/scripts/tg_msg.sh
#on_area_detected /bin/bash /root/scripts/tg_msg.sh
#on_movie_start /bin/bash /root/scripts/tg_msg.sh
```
|
Check that you have specified the process id properly in motion.conf:
**process\_id\_file /var/run/motion/motion.pid**
Once you have checked that, change the following settings to arbitrarily low values e.g.
**threshold 1**
**noise\_level 1**
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work??????**
The idea is that when the camera detects motion, a script is executed. The script just echoes a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also changed the script to different directories, speculating a permission issue. Still nothing happens. I have tried removing the ; before the methods, still nothing happens. I have tried sudo, I have tried executing as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here are just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
I have a simple solution, but someone needs to explain why it works. Start motion as root: add the line 'sudo motion' to /etc/rc.local and reboot. Most events will then work when started as root, e.g. on\_event\_end sudo ...command or script...
|
Check that you have specified the process id properly in motion.conf:
**process\_id\_file /var/run/motion/motion.pid**
Once you have checked that, change the following settings to arbitrarily low values e.g.
**threshold 1**
**noise\_level 1**
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work??????**
The idea is that when the camera detects motion, a script is executed. The script just echoes a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also changed the script to different directories, speculating a permission issue. Still nothing happens. I have tried removing the ; before the methods, still nothing happens. I have tried sudo, I have tried executing as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here are just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
You have to remove the `;`, which comments out the line. Instead of
; on\_motion\_detected python /home/pi/Desktop/Python/script.py
use:
```
on_motion_detected python /home/pi/Desktop/Python/script.py
```
|
sudo chown motion:motion /home/pi/Desktop/Python/script.py
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work??????**
The idea is that when the camera detects motion, a script is executed. The script just echoes a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also changed the script to different directories, speculating a permission issue. Still nothing happens. I have tried removing the ; before the methods, still nothing happens. I have tried sudo, I have tried executing as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here are just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
You have to remove the `;`, which comments out the line. Instead of
; on\_motion\_detected python /home/pi/Desktop/Python/script.py
use:
```
on_motion_detected python /home/pi/Desktop/Python/script.py
```
|
If you use threads (thread1.conf, thread2.conf), try adding the script configuration to those files.
My threads:
thread1.conf
```
videodevice /dev/video0
text_left USBWebcam-1
target_dir /samba/hdd/share/motion/usb
rotate 180
width 1024
height 768
#webcam_port 9081
#on_event_start /bin/bash /root/scripts/tg_msg.sh
on_picture_save /bin/bash /root/scripts/tg_msg.sh %f
#on_motion_detected /bin/bash /root/scripts/tg_msg.sh
#on_area_detected /bin/bash /root/scripts/tg_msg.sh
#on_movie_start /bin/bash /root/scripts/tg_msg.sh
```
and thread2.conf:
```
text_left Foscam IP
netcam_url http://192.168.111.7:8080/video
target_dir /samba/hdd/share/motion/ip
width 1024
height 768
#on_event_start /bin/bash /root/scripts/tg_msg.sh
on_picture_save /bin/bash /root/scripts/tg_msg.sh %f
#on_motion_detected /bin/bash /root/scripts/tg_msg.sh
#on_area_detected /bin/bash /root/scripts/tg_msg.sh
#on_movie_start /bin/bash /root/scripts/tg_msg.sh
```
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work??????**
The idea is that when the camera detects motion, a script is executed. The script just echoes a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also changed the script to different directories, speculating a permission issue. Still nothing happens. I have tried removing the ; before the methods, still nothing happens. I have tried sudo, I have tried executing as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here are just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
You have to remove the `;`, which comments out the line. Instead of
; on\_motion\_detected python /home/pi/Desktop/Python/script.py
use:
```
on_motion_detected python /home/pi/Desktop/Python/script.py
```
|
I have a simple solution, but someone needs to explain why it works. Start motion as root: add the line 'sudo motion' to /etc/rc.local and reboot. Most events will then work when started as root, e.g. on\_event\_end sudo ...command or script...
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work??????**
The idea is that when the camera detects motion, a script is executed. The script just echoes a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also changed the script to different directories, speculating a permission issue. Still nothing happens. I have tried removing the ; before the methods, still nothing happens. I have tried sudo, I have tried executing as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here are just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
If you use threads (thread1.conf, thread2.conf), try adding the script configuration to those files.
My threads:
thread1.conf
```
videodevice /dev/video0
text_left USBWebcam-1
target_dir /samba/hdd/share/motion/usb
rotate 180
width 1024
height 768
#webcam_port 9081
#on_event_start /bin/bash /root/scripts/tg_msg.sh
on_picture_save /bin/bash /root/scripts/tg_msg.sh %f
#on_motion_detected /bin/bash /root/scripts/tg_msg.sh
#on_area_detected /bin/bash /root/scripts/tg_msg.sh
#on_movie_start /bin/bash /root/scripts/tg_msg.sh
```
and thread2.conf:
```
text_left Foscam IP
netcam_url http://192.168.111.7:8080/video
target_dir /samba/hdd/share/motion/ip
width 1024
height 768
#on_event_start /bin/bash /root/scripts/tg_msg.sh
on_picture_save /bin/bash /root/scripts/tg_msg.sh %f
#on_motion_detected /bin/bash /root/scripts/tg_msg.sh
#on_area_detected /bin/bash /root/scripts/tg_msg.sh
#on_movie_start /bin/bash /root/scripts/tg_msg.sh
```
|
sudo chown motion:motion /home/pi/Desktop/Python/script.py
|
35,469,118
|
I have been using my Raspberry Pi 2 to do some motion detection using a USB webcam and the motion package and am incredibly frustrated.
**Can someone explain to me how the on\_motion\_detected method is supposed to work??????**
The idea is that when the camera detects motion, a script is executed. The script just echoes a few words for testing purposes.
The video stream works great on my local network, and I can see the motion writing the JPG and .avi files to the directory.
Even when I try to add my script to the movie start trigger, **nothing happens**.
Some examples I have tried:
* ; on\_motion\_detected python /home/pi/Desktop/Python/script.py
* ; on\_movie\_start python /home/pi/Desktop/Python/script.py
* ; on\_picture\_save sudo python /home/pi/Desktop/Python/script.py
I have also changed the script to different directories, speculating a permission issue. Still nothing happens. I have tried removing the ; before the methods, still nothing happens. I have tried sudo, I have tried executing as a script. Please, can someone offer some help. I have searched years and years of threads and not found an answer anywhere.
*My script is not being executed.*
This question has been asked 1000 times and nobody has answered it. I have been searching for several hours for an answer.
Here are just a few of the threads that have gone unanswered:
<https://unix.stackexchange.com/questions/59091/problems-running-python-script-from-motion>
<https://www.raspberrypi.org/forums/viewtopic.php?t=86534&p=610482>
<https://raspberrypi.stackexchange.com/questions/8273/running-script-in-motion>
|
2016/02/17
|
[
"https://Stackoverflow.com/questions/35469118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4851753/"
] |
I have a simple solution, but someone needs to explain why it works. Start motion as root: add the line 'sudo motion' to /etc/rc.local and reboot. Most events will then work when started as root, e.g. on\_event\_end sudo ...command or script...
|
sudo chown motion:motion /home/pi/Desktop/Python/script.py
|
63,400,324
|
I am relatively new to Python and pandas. I am trying to replicate a battleship game. My goal is to locate the row and column that has a 1 and store that location as the battleship location. I created a CSV file and it looks like this:
```
col0,col1,col2,col3,col4,col5
0,0,0,0,0,0
0,0,0,0,0,0
0,0,0,0,0,0
0,1,0,0,0,0
0,0,0,0,0,0
0,0,0,0,0,0
```
This is the code to read the created CSV file as a DataFrame.
```
import pandas
df = pandas.read_csv('/content/pandas_tutorial/My Drive/pandas/myBattleshipmap.csv',)
print(df)
```
How do I use a nested for loop and pandas' iloc to locate the row and column that has a 1 (which is row 3 and column 1) and store that location as the battleship location?
I found out that a nested for loop with pandas iloc over the entire DataFrame looks something like this:
```
for x in range(len(df)):
for y in range(len(df)):
df.iloc[:,:]
```
Do I have to add an if statement to locate the row and column?
|
2020/08/13
|
[
"https://Stackoverflow.com/questions/63400324",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11763345/"
] |
You really don't need to loop, you can use [`numpy.where`](https://numpy.org/doc/stable/reference/generated/numpy.where.html):
```
import pandas as pd
import numpy as np
df = pd.read_csv('/content/pandas_tutorial/My Drive/pandas/myBattleshipmap.csv',)
r, c = np.where(df.astype(bool))
print(r.tolist())
print(df.columns[c].tolist())
```
Output:
```
[3]
['col1']
```
---
### For loops like you were trying...
```
for x in range(df.shape[0]):
for y in range(df.shape[1]):
if df.iloc[x, y] == 1:
print(f'Row: {x}, Col: {df.columns[y]}')
```
Output:
```
Row: 3, Col: col1
```
|
Use `where` to turn all values that are not 1 to `NaN`, then stack will leave you with a MultiIndex Series, whose index gives you the (row\_label, col\_label) tuples of everything that was 1.
```
df.where(df.eq(1)).stack().index
#MultiIndex([(3, 'col1')],
# )
```
---
If you don't want the column names, you can get rid of the column labels. If you only expect a single location, we can grab it with `.item()`:
```
df.columns = range(df.shape[1])
print(f'The Battle Ship Location is: {df.where(df.eq(1)).stack().index.item()}')
#The Battle Ship Location is: (3, 1)
```
|
47,145,930
|
I read Susan Fowler's book "Production-Ready Microservices" and in a few places (so far) I found:
* (page 26) "Avoid Versioning Microservices and Endpoints",
* "versioning microservices can easily become an organizational nightmare" (page 27),
* In microservice ecosystems, the versioning of microservices is discouraged (page 58)
Anyway, I have used all types of versioning for all kinds of projects: git tags, deb package versioning, Python package versioning, HTTP API versions, and I never had big problems managing a project's versions. Besides that, I knew exactly which version to roll out in case of failures or bugs reported by customers.
Does anybody have a clue why microservice versioning is blamed so much in this book, and what advice would you have regarding the topic?
|
2017/11/06
|
[
"https://Stackoverflow.com/questions/47145930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1526119/"
] |
Are you sure that she was not talking about incorporating the version into the **name** of the service or into the **name** of the endpoint?
A service called OrderProcessing\_2\_4\_1 with a versioned endpoint of get\_order\_2\_4\_1 is a very bad idea. OrderProcessing\_2\_4 with a versioned endpoint of get\_order\_2\_4 is a little less evil but still problematic.
If your calls to microservices have to address endpoints that have a version number in the name, you will have a maintenance (and production) nightmare every time you change a service. Every other service and client will have to be checked to change any reference to your updated service.
That is different from having versions for your API, code or services. Those are required if you are going to actually get many of the benefits of microservices.
Your orchestration function has to be able to find the right version of the service that matches the client's version requirements (Version 2.6.2 of client "Enter New Order" app requires service "OrderProcessing" that is at least version 2.4.0 in order to support the NATO product classification).
You may have multiple versions of a service in production at the same time (2.4.0 being deprecated but still servicing some clients, 2.4.1 being introduced, version 3.0.0 in beta for customers who are testing the newest UI and features prior to GA).
This is particularly true if you run 24/7 and have to dynamically update services.
The orchestration function is the place where services and endpoints are matched up, and when you introduce a new version of a service, you update the orchestration database to describe the versions of other services that are required. (My new version 2.4.1 of OrderProcessing requires version 2.2.0 or later of the ProductManager, because that was when the NATO Product Classification was added to the product data).
|
The author of the book is correct in that it is difficult to update the version of an API, especially if it is popular. This is because you will have to either hunt down all the users of the older version and have them upgrade or you will have to support two versions of your software in production at the same time.
But you should still version your APIs, IMHO; just try never to change the API version. The reason you should support a version in your API is that you never know when you may have to change it. So add a version, but avoid ever changing it. In many cases it is easier to bring up a new API, or extend an existing API, in a way that does not break the current version contract.
BTW, in the future you never know when new technology or design patterns will materialize that allow two versions of one API to coexist elegantly on the same software instance in production. If and when something like this comes out, you may see more version changes in APIs.
|
12,054,772
|
I'm new to Python (learning for only 2 weeks) and there's something I can't even attempt (I have been googling for an hour and couldn't find anything).
`file1` and `file2` are both CSV files.
I've got a function that looks like:
```
def save(file1, file2):
```
it is for `file2` to have the same content as `file1`. For example, when I do:
```
save(file1, file2)
```
`file2` should have the same content as `file1`.
Thanks in advance and sorry for an empty code. Any help would be appreciated!
|
2012/08/21
|
[
"https://Stackoverflow.com/questions/12054772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1614184/"
] |
Python has a standard module [`shutil`](https://docs.python.org/2/library/shutil.html) which is useful for these sorts of things.
If you need to write the code yourself, simply open two files (the input and the output), loop over the input file object, reading lines from it and writing them into the output file.
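For example, a minimal sketch using `shutil` (the file names here are just placeholders):
```
import shutil

def save(file1, file2):
    # copy the contents of the file named file1 into the file named file2
    shutil.copyfile(file1, file2)

save('input.csv', 'output.csv')
```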
|
If you simply want to copy a file you can do this:
```
def save(file1, file2):
with open(file1, 'rb') as infile:
with open(file2, 'wb') as outfile:
outfile.write(infile.read())
```
this copies the file with name `file1` to the file with name `file2`. It really doesn't matter what the content of those files is.
|
38,709,118
|
I'm trying to validate a certificate with a CA bundle file. The original Bash command takes two file arguments like this;
```
openssl verify -CAfile ca-ssl.ca cert-ssl.crt
```
I'm trying to figure out how to run the above command in python subprocess whilst having ca-ssl.ca and cert-ssl.crt as variable strings (as opposed to files).
If I ran the command with variables (instead of files) in bash then this would work;
```
ca_value=$(<ca-ssl.ca)
cert_value=$(<cert-ssl.crt)
openssl verify -CAfile <(echo "$ca_value") <(echo "$cert_value")
```
However, I'm struggling to figure out how to do the above with Python, preferably without needing to use `shell=True`. I have tried the following, but it doesn't work and instead prints openssl's 'help' output:
```
certificate = ''' cert string '''
ca_bundle = ''' ca bundle string '''
def ca_valid(cert, ca):
ca_validation = subprocess.Popen(['openssl', 'verify', '-CAfile', ca, cert], stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=1)
ca_validation_output = ca_validation.communicate()[0].strip()
ca_validation.wait()
ca_valid(certificate, ca_bundle)
```
Any guidance/clues on what I need to look further into would be appreciated.
|
2016/08/01
|
[
"https://Stackoverflow.com/questions/38709118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1165419/"
] |
Bash process substitution `<(...)` in the end is supplying a file path as an argument to `openssl`.
You will need to make a helper function to create this functionality since Python doesn't have any operators that allow you to inline pipe data into a file and present its path:
```
import subprocess
def validate_ca(cert, ca):
with filearg(ca) as ca_path, filearg(cert) as cert_path:
ca_validation = subprocess.Popen(
['openssl', 'verify', '-CAfile', ca_path, cert_path],
stdout=subprocess.PIPE,
)
return ca_validation.communicate()[0].strip()
```
Where `filearg` is a context manager which creates a named temporary file with your desired text, closes it, hands the path to you, and then removes it after the `with` scope ends.
```
import os
import tempfile
from contextlib import contextmanager
@contextmanager
def filearg(txt):
    # write the text to a named temporary file and close it so the data is flushed
    with tempfile.NamedTemporaryFile('w', delete=False) as fh:
        fh.write(txt)
    try:
        yield fh.name
    finally:
        os.remove(fh.name)
```
Anything accessing this temporary file (like the subprocess) needs to work inside the context manager.
By the way, the `Popen.wait(self)` is redundant since `Popen.communicate(self)` waits for termination.
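A hypothetical usage of the two helpers above, reading the PEM data into strings first (the file names are taken from the question):
```
with open('cert-ssl.crt') as f:
    certificate = f.read()
with open('ca-ssl.ca') as f:
    ca_bundle = f.read()

print(validate_ca(certificate, ca_bundle))
```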
|
If you want to use process substitution, you will *have* to use `shell=True`. This is unavoidable. The `<(...)` process substitution syntax is bash syntax; you simply must call bash into service to parse and execute such code.
Additionally, you have to ensure that `bash` is invoked, as opposed to `sh`. On some systems `sh` may refer to an old Bourne shell (as opposed to the Bourne-again shell `bash`) in which case process substitution will definitely not work. On some systems `sh` will invoke `bash`, but process substitution will still not work, because when invoked under the name `sh` the `bash` shell enters something called POSIX mode. Here are some excerpts from the `bash` man page:
>
> ...
>
>
> INVOCATION
>
>
> ... When invoked as sh, bash enters posix mode after the startup files are read. ....
>
>
> ...
>
>
> SEE ALSO
>
>
> ...
>
>
> <http://tiswww.case.edu/~chet/bash/POSIX> -- a description of posix mode
>
>
> ...
>
>
>
From the above web link:
>
> 28. Process substitution is not available.
>
>
>
`/bin/sh` seems to be the default shell in python, whether you're using `os.system()` or `subprocess.Popen()`. So you'll have to specify the argument `executable='bash'`, or `executable='/bin/bash'` if you want to specify the full path.
This is working for me:
```
subprocess.Popen('printf \'argument: "%s"\\n\' verify -CAfile <(echo ca_value) <(echo cert_value);',executable='bash',shell=True).wait();
## argument: "verify"
## argument: "-CAfile"
## argument: "/dev/fd/63"
## argument: "/dev/fd/62"
## 0
```
---
Here's how you can actually embed the string values from variables:
```
bashEsc = lambda s: "'"+s.replace("'","'\\''")+"'";
ca_value = 'x';
cert_value = 'y';
cmd = 'printf \'argument: "%%s"\\n\' verify -CAfile <(echo %s) <(echo %s);'%(bashEsc(ca_value),bashEsc(cert_value));
subprocess.Popen(cmd,executable='bash',shell=True).wait();
## argument: "verify"
## argument: "-CAfile"
## argument: "/dev/fd/63"
## argument: "/dev/fd/62"
## 0
```
|
62,811,729
|
Mistakenly, I first installed Python 3.6 and then installed pip; then I installed Python 3.8. After that I checked the pip version and it shows me:
```
pip 20.1.1 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)
```
Can I change it to:
```
pip 20.1.1 from /usr/local/lib/python3.8/dist-packages/pip (python 3.8)
```
|
2020/07/09
|
[
"https://Stackoverflow.com/questions/62811729",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2820094/"
] |
I believe you can do that if you simply remove `col2` from your select and group by. Because `col2` will no longer be returned, you should also remove the having statement. I think it should look something like this:
```sql
select
c.col1,
count(1)
from
table_1 a,
table_2 b,
table_3 c
where
a.key = b.key
and b.no = c.no
group by
c.col1;
```
I hope this helps.
|
Use sum() and group by only col1:
```
select c.col1, sum(a.col2) as total
from table_1 a, table_2 b, table_3 c
where a.key = b.key and b.no = c.no
group by c.col1;
```
**Output:**
```
c.col1 total
aa1 10
aa2 5
```
|
62,811,729
|
Mistakenly, I first installed Python 3.6 and then installed pip; then I installed Python 3.8. After that I checked the pip version and it shows me:
```
pip 20.1.1 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)
```
Can I change it to:
```
pip 20.1.1 from /usr/local/lib/python3.8/dist-packages/pip (python 3.8)
```
|
2020/07/09
|
[
"https://Stackoverflow.com/questions/62811729",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2820094/"
] |
A "simple" option is to use your current query (rewritten to use `JOIN`s, which is nowadays preferred way of *joining* tables) as an inline view:
```
SELECT col1, SUM (cnt)
FROM ( SELECT c.col1, a.col2, COUNT (*) cnt --> your current query begins here
FROM table_1 a
JOIN table_2 b ON a.key = b.key
JOIN table_3 c ON c.no = b.no
GROUP BY c.col1, a.col2
HAVING COUNT (a.col2) > 1) --> and ends here
GROUP BY col1;
```
---
Or, remove `a.col2` from `select`:
```
SELECT c.col1, COUNT (*) cnt
FROM table_1 a
JOIN table_2 b ON a.key = b.key
JOIN table_3 c ON c.no = b.no
GROUP BY c.col1, a.col2
HAVING COUNT (a.col2) > 1;
```
|
I believe you can do that if you simply remove `col2` from your select and group by. Because `col2` will no longer be returned, you should also remove the having statement. I think it should look something like this:
```sql
select
c.col1,
count(1)
from
table_1 a,
table_2 b,
table_3 c
where
a.key = b.key
and b.no = c.no
group by
c.col1;
```
I hope this helps.
|
62,811,729
|
Mistakenly, I first installed Python 3.6 and then installed pip; then I installed Python 3.8. After that I checked the pip version and it shows me:
```
pip 20.1.1 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)
```
Can I change it to:
```
pip 20.1.1 from /usr/local/lib/python3.8/dist-packages/pip (python 3.8)
```
|
2020/07/09
|
[
"https://Stackoverflow.com/questions/62811729",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2820094/"
] |
A "simple" option is to use your current query (rewritten to use `JOIN`s, which is nowadays preferred way of *joining* tables) as an inline view:
```
SELECT col1, SUM (cnt)
FROM ( SELECT c.col1, a.col2, COUNT (*) cnt --> your current query begins here
FROM table_1 a
JOIN table_2 b ON a.key = b.key
JOIN table_3 c ON c.no = b.no
GROUP BY c.col1, a.col2
HAVING COUNT (a.col2) > 1) --> and ends here
GROUP BY col1;
```
---
Or, remove `a.col2` from `select`:
```
SELECT c.col1, COUNT (*) cnt
FROM table_1 a
JOIN table_2 b ON a.key = b.key
JOIN table_3 c ON c.no = b.no
GROUP BY c.col1, a.col2
HAVING COUNT (a.col2) > 1;
```
|
Use sum() and group by only col1:
```
select c.col1, sum(a.col2) as total
from table_1 a, table_2 b, table_3 c
where a.key = b.key and b.no = c.no
group by c.col1;
```
**Output:**
```
c.col1 total
aa1 10
aa2 5
```
|
19,737,844
|
I know that the Jinja2 library allows me to pass datastore models from my Python code to HTML and access this data from inside the HTML code, as shown [in this example](https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates). However, Jinja2 isn't compatible with JavaScript, and I want to access the data inside my JavaScript code. What is the simplest templating library that allows me to iterate over my datastore entities in JavaScript? I've heard about things like Mustache and jQuery; I think they look a bit too complicated. Is there anything simpler?
|
2013/11/02
|
[
"https://Stackoverflow.com/questions/19737844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2559007/"
] |
You should create a Python controller which serves JSON-formatted data, which any JavaScript library (especially jQuery) can consume. Then set up the Jinja2 template to contain some JavaScript which calls, loads and displays that data.
|
It has nothing to do with compatibility. Jinja is server side templating. You can use javascript for client side coding.
Using Jinja you can create HTML, which can be accessed by javascript like normal HTML.
To send datastore entities to your client you can use Jinja to pass a Python list or use a json webservice.
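As a rough sketch of the JSON webservice approach (using webapp2 as elsewhere in this thread; the handler name, route and sample data are hypothetical), the client-side JavaScript would then fetch `/items.json` and parse the result:
```
import json
import webapp2

class ItemsJsonHandler(webapp2.RequestHandler):
    # hypothetical handler that returns datastore entities as JSON
    def get(self):
        items = [{'v': 4.5, 'c': 3.0}]  # placeholder for a real datastore query
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.dumps(items))

app = webapp2.WSGIApplication([('/items.json', ItemsJsonHandler)])
```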
|
19,737,844
|
I know that the Jinja2 library allows me to pass datastore models from my Python code to HTML and access this data from inside the HTML code, as shown [in this example](https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates). However, Jinja2 isn't compatible with JavaScript, and I want to access the data inside my JavaScript code. What is the simplest templating library that allows me to iterate over my datastore entities in JavaScript? I've heard about things like Mustache and jQuery; I think they look a bit too complicated. Is there anything simpler?
|
2013/11/02
|
[
"https://Stackoverflow.com/questions/19737844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2559007/"
] |
It works. I had to serialize (convert) my datastore entities to JSON format, which JavaScript understands well. I created a function which converts every instance of my datastore model into a dictionary, then gathers all these instances into a list which is converted to JSON using json.dumps. When I pass this result to the JavaScript, I can then easily access my values, as seen below.
```
import json
import os
import webapp2
from google.appengine.ext import db
import jinja2
JINJA_ENVIRONMENT = jinja2.Environment(
loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
extensions=['jinja2.ext.autoescape'],
autoescape=True)
# serialize datastore model to JSON format
def serialize(model):
allInstances = model.all() # fetching every instance of model
itemsList = [] #initial empty list
for p in allInstances:
d = db.to_dict(p)
itemsList.append(d)
return json.dumps(itemsList)
class myModel(db.Model):
v = db.FloatProperty()
c = db.FloatProperty()
tdate = db.DateTimeProperty(auto_now_add=True)
class MainPage(webapp2.RequestHandler):
def get(self):
myModel(v=4.5, c=3.0).put()
#creating template variables
template_values = {
'json_data': serialize(myModel)
}
template = JINJA_ENVIRONMENT.get_template('index.html')
self.response.write(template.render(template_values))
```
Inside my 'index.html' file, I have:
```
{% autoescape true %}
<!DOCTYPE html>
<html>
<head>
<title> webpage </title>
<script type="text/javascript">
// I retrieve my data here
var results = "{{ json_data }}";
for(var i = 0; i < db_results.length; i++) {
document.write("myModel instance:" + i + results[i] + "<br>");
}
</script>
</head>
<body>
</body>
</html>
{% endautoescape %}
```
|
You should create a Python controller which serves JSON-formatted data, which any JavaScript library (especially jQuery) can consume. Then set up the Jinja2 template to contain some JavaScript which calls, loads and displays that data.
|
19,737,844
|
I know that the Jinja2 library allows me to pass datastore models from my Python code to HTML and access this data from inside the HTML code, as shown [in this example](https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates). However, Jinja2 isn't compatible with JavaScript, and I want to access the data inside my JavaScript code. What is the simplest templating library that allows me to iterate over my datastore entities in JavaScript? I've heard about things like Mustache and jQuery; I think they look a bit too complicated. Is there anything simpler?
|
2013/11/02
|
[
"https://Stackoverflow.com/questions/19737844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2559007/"
] |
One more approach to consider: If the Python object is not dynamic, you may want to use json.dumps() to store it as a TextProperty, and simply JSON.parse(unescape(model\_text)) on the JS side. Reduces the overhead, and memory hit which can be important when trying to stay within an F1 limit. For example, I run an instance that very easily runs inside an F1. There is one large dictionary object that we deliver. Were this object to exist as a Python dictionary inside the instance we would kill the instance due to the soft memory limit. Using the TextProperty approach we can pass this large dict to the client without any issues. (Note: we did have to momentarily boost our instance up to an F4 when initially creating this object -- something incredibly easy inside the Admin web page.) With more dynamic objects, answers above apply.
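A rough sketch of the server side of that pattern, assuming the old `db` API used elsewhere in this thread (the model name and sample data are hypothetical):
```
import json
from google.appengine.ext import db

class CachedBlob(db.Model):
    # the large, mostly static Python object stored as pre-serialized JSON text
    payload = db.TextProperty()

big_dict = {'some': 'large', 'static': 'structure'}  # placeholder data
CachedBlob(payload=json.dumps(big_dict)).put()
# On the client, the template emits this text and JavaScript calls JSON.parse on it.
```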
|
It has nothing to do with compatibility. Jinja is server side templating. You can use javascript for client side coding.
Using Jinja you can create HTML, which can be accessed by javascript like normal HTML.
To send datastore entities to your client you can use Jinja to pass a Python list or use a json webservice.
|
19,737,844
|
I know that the Jinja2 library allows me to pass datastore models from my Python code to HTML and access this data from inside the HTML code, as shown [in this example](https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates). However, Jinja2 isn't compatible with JavaScript, and I want to access the data inside my JavaScript code. What is the simplest templating library that allows me to iterate over my datastore entities in JavaScript? I've heard about things like Mustache and jQuery; I think they look a bit too complicated. Is there anything simpler?
|
2013/11/02
|
[
"https://Stackoverflow.com/questions/19737844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2559007/"
] |
Jinja2 and Javascript play fine together. You need to arrange to have template expansion emit your Python data structures into a JS-friendly form.
<https://sites.google.com/a/khanacademy.org/forge/technical/autoescape-in-jinja2-templates> covers it fairly well. (Note the use of the `escapejs` filter.)
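For illustration, a minimal sketch of emitting a Python structure into a JS-friendly form; it uses plain `json.dumps` rather than the `escapejs` filter from the linked page, and the variable names are hypothetical:
```
import json
import jinja2

# Serialize on the Python side, then embed the JSON text in a script block so
# JavaScript receives a native structure when the page is rendered.
template = jinja2.Template(
    '<script type="text/javascript">var entities = {{ entities_json }};</script>')
print(template.render(entities_json=json.dumps([{'v': 4.5, 'c': 3.0}])))
```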
|
It has nothing to do with compatibility. Jinja is server side templating. You can use javascript for client side coding.
Using Jinja you can create HTML, which can be accessed by javascript like normal HTML.
To send datastore entities to your client you can use Jinja to pass a Python list or use a json webservice.
|
19,737,844
|
I know that the Jinja2 library allows me to pass datastore models from my Python code to HTML and access this data from inside the HTML code, as shown [in this example](https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates). However, Jinja2 isn't compatible with JavaScript, and I want to access the data inside my JavaScript code. What is the simplest templating library that allows me to iterate over my datastore entities in JavaScript? I've heard about things like Mustache and jQuery; I think they look a bit too complicated. Is there anything simpler?
|
2013/11/02
|
[
"https://Stackoverflow.com/questions/19737844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2559007/"
] |
It works. I had to serialize (convert) my datastore entities to JSON format, which JavaScript understands well. I created a function which converts every instance of my datastore model into a dictionary, then gathers all these instances into a list which is converted to JSON using json.dumps. When I pass this result to the JavaScript, I can then easily access my values, as seen below.
```
import json
import os
import webapp2
from google.appengine.ext import db
import jinja2
JINJA_ENVIRONMENT = jinja2.Environment(
loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
extensions=['jinja2.ext.autoescape'],
autoescape=True)
# serialize datastore model to JSON format
def serialize(model):
allInstances = model.all() # fetching every instance of model
itemsList = [] #initial empty list
for p in allInstances:
d = db.to_dict(p)
itemsList.append(d)
return json.dumps(itemsList)
class myModel(db.Model):
v = db.FloatProperty()
c = db.FloatProperty()
tdate = db.DateTimeProperty(auto_now_add=True)
class MainPage(webapp2.RequestHandler):
def get(self):
myModel(v=4.5, c=3.0).put()
#creating template variables
template_values = {
'json_data': serialize(myModel)
}
template = JINJA_ENVIRONMENT.get_template('index.html')
self.response.write(template.render(template_values))
```
Inside my 'index.html' file, I have:
```
{% autoescape true %}
<!DOCTYPE html>
<html>
<head>
<title> webpage </title>
<script type="text/javascript">
// I retrieve my data here
var results = "{{ json_data }}";
for(var i = 0; i < results.length; i++) {
document.write("myModel instance:" + i + results[i] + "<br>");
}
</script>
</head>
<body>
</body>
</html>
{% endautoescape %}
```
|
It has nothing to do with compatibility. Jinja is server side templating. You can use javascript for client side coding.
Using Jinja you can create HTML, which can be accessed by javascript like normal HTML.
To send datastore entities to your client you can use Jinja to pass a Python list or use a json webservice.
|
19,737,844
|
I know that the Jinja2 library allows me to pass datastore models from my Python code to HTML and access this data from inside the HTML code, as shown [in this example](https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates). However, Jinja2 isn't compatible with JavaScript, and I want to access the data inside my JavaScript code. What is the simplest templating library that allows me to iterate over my datastore entities in JavaScript? I've heard about things like Mustache and jQuery; I think they look a bit too complicated. Is there anything simpler?
|
2013/11/02
|
[
"https://Stackoverflow.com/questions/19737844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2559007/"
] |
It works. I had to serialize (convert) my datastore entities to JSON format, which JavaScript understands well. I created a function which converts every instance of my datastore model into a dictionary, then gathers all these instances into a list which is converted to JSON using json.dumps. When I pass this result to the JavaScript, I can then easily access my values, as seen below.
```
import json
import os
import webapp2
from google.appengine.ext import db
import jinja2
JINJA_ENVIRONMENT = jinja2.Environment(
loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
extensions=['jinja2.ext.autoescape'],
autoescape=True)
# serialize datastore model to JSON format
def serialize(model):
allInstances = model.all() # fetching every instance of model
itemsList = [] #initial empty list
for p in allInstances:
d = db.to_dict(p)
itemsList.append(d)
return json.dumps(itemsList)
class myModel(db.Model):
v = db.FloatProperty()
c = db.FloatProperty()
tdate = db.DateTimeProperty(auto_now_add=True)
class MainPage(webapp2.RequestHandler):
def get(self):
myModel(v=4.5, c=3.0).put()
#creating template variables
template_values = {
'json_data': serialize(myModel)
}
template = JINJA_ENVIRONMENT.get_template('index.html')
self.response.write(template.render(template_values))
```
Inside my 'index.html' file, I have:
```
{% autoescape true %}
<!DOCTYPE html>
<html>
<head>
<title> webpage </title>
<script type="text/javascript">
// I retrieve my data here
var results = "{{ json_data }}";
for(var i = 0; i < results.length; i++) {
document.write("myModel instance:" + i + results[i] + "<br>");
}
</script>
</head>
<body>
</body>
</html>
{% endautoescape %}
```
|
One more approach to consider: If the Python object is not dynamic, you may want to use json.dumps() to store it as a TextProperty, and simply JSON.parse(unescape(model\_text)) on the JS side. Reduces the overhead, and memory hit which can be important when trying to stay within an F1 limit. For example, I run an instance that very easily runs inside an F1. There is one large dictionary object that we deliver. Were this object to exist as a Python dictionary inside the instance we would kill the instance due to the soft memory limit. Using the TextProperty approach we can pass this large dict to the client without any issues. (Note: we did have to momentarily boost our instance up to an F4 when initially creating this object -- something incredibly easy inside the Admin web page.) With more dynamic objects, answers above apply.
|
19,737,844
|
I know that the Jinja2 library allows me to pass datastore models from my Python code to HTML and access this data from inside the HTML code, as shown [in this example](https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates). However, Jinja2 isn't compatible with JavaScript, and I want to access the data inside my JavaScript code. What is the simplest templating library that allows me to iterate over my datastore entities in JavaScript? I've heard about things like Mustache and jQuery; I think they look a bit too complicated. Is there anything simpler?
|
2013/11/02
|
[
"https://Stackoverflow.com/questions/19737844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2559007/"
] |
It works. I had to serialize (convert) my datastore entities to JSON format, which JavaScript understands well. I created a function which converts every instance of my datastore model into a dictionary, then gathers all these instances into a list which is converted to JSON using json.dumps. When I pass this result to the JavaScript, I can then easily access my values, as seen below.
```
import json
import os
import webapp2
from google.appengine.ext import db
import jinja2
JINJA_ENVIRONMENT = jinja2.Environment(
loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
extensions=['jinja2.ext.autoescape'],
autoescape=True)
# serialize datastore model to JSON format
def serialize(model):
allInstances = model.all() # fetching every instance of model
itemsList = [] #initial empty list
for p in allInstances:
d = db.to_dict(p)
itemsList.append(d)
return json.dumps(itemsList)
class myModel(db.Model):
v = db.FloatProperty()
c = db.FloatProperty()
tdate = db.DateTimeProperty(auto_now_add=True)
class MainPage(webapp2.RequestHandler):
def get(self):
myModel(v=4.5, c=3.0).put()
#creating template variables
template_values = {
'json_data': serialize(myModel)
}
template = JINJA_ENVIRONMENT.get_template('index.html')
self.response.write(template.render(template_values))
```
Inside my 'index.html' file, I have:
```
{% autoescape true %}
<!DOCTYPE html>
<html>
<head>
<title> webpage </title>
<script type="text/javascript">
  // I retrieve my data here
  var results = "{{ json_data }}";
  for (var i = 0; i < results.length; i++) {
    document.write("myModel instance:" + i + results[i] + "<br>");
  }
</script>
</head>
<body>
</body>
</html>
{% endautoescape %}
```
|
Jinja2 and Javascript play fine together. You need to arrange to have template expansion emit your Python data structures into a JS-friendly form.
<https://sites.google.com/a/khanacademy.org/forge/technical/autoescape-in-jinja2-templates> covers it fairly well. (Note the use of the `escapejs` filter.)
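As a rough sketch of that idea -- the linked article registers a custom `escapejs` filter; the hedged example below substitutes Jinja's built-in `tojson` filter (available in Jinja 2.9+) instead, and the variable names are made up:
```
import jinja2

env = jinja2.Environment(autoescape=True)

# |tojson renders the Python structure as a JavaScript literal that is safe inside <script>
page = env.from_string("""
<script type="text/javascript">
  var items = {{ items | tojson }};
  for (var i = 0; i < items.length; i++) {
    console.log(items[i].room, items[i].free);
  }
</script>
""")

print(page.render(items=[{"room": "room_1", "free": 0},
                         {"room": "room_2", "free": 1}]))
```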
|
66,412,526
|
I am quite new to python and I have a table of occupancy that looks like this:
```
| room   | free | date       | place  |
| room_1 | 0    | 2021-01-13 | Boston |
| room_2 | 1    | 2021-02-14 | Boston |
| room_2 | 0    | 2021-02-15 | Boston |
```
...
How can I calculate, for each room, how often it was free within a timeframe of a month and of a week?
So I could say: room\_1 was free 95% of the time in February?
1 = free
0 = not free
I have the data in `csv` format.
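A minimal pandas sketch of one way to compute this, assuming the columns shown above and a hypothetical file name (not part of the original question):
```
import pandas as pd

# hypothetical file name; columns as in the table above
df = pd.read_csv("rooms.csv", parse_dates=["date"])

# share of days each room was free, per calendar month
monthly = (df.groupby(["room", df["date"].dt.to_period("M")])["free"]
             .mean()
             .mul(100)
             .rename("pct_free"))

# same idea per week
weekly = (df.groupby(["room", df["date"].dt.to_period("W")])["free"]
            .mean()
            .mul(100)
            .rename("pct_free"))

print(monthly)  # e.g. room_1  2021-02    95.0
```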
|
2021/02/28
|
[
"https://Stackoverflow.com/questions/66412526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10094012/"
] |
Hopefully there aren't many holes in the Unity documentation, but when you do find one you can often find sample code in the [Unity Quickstarts](https://github.com/firebase/quickstart-unity) (as these are also used to validate changes to the Unity SDK).
From [`UIHandler.cs`](https://github.com/firebase/quickstart-unity/blob/master/auth/testapp/Assets/Firebase/Sample/Auth/UIHandler.cs) in the [Auth quickstart](https://github.com/firebase/quickstart-unity/tree/master/auth/testapp):
```
public Task SignInWithGameCenterAsync() {
  var credentialTask = Firebase.Auth.GameCenterAuthProvider.GetCredentialAsync();
  var continueTask = credentialTask.ContinueWithOnMainThread(task => {
    if (!task.IsCompleted)
      return null;
    if (task.Exception != null)
      Debug.Log("GC Credential Task - Exception: " + task.Exception.Message);

    var credential = task.Result;
    var loginTask = auth.SignInWithCredentialAsync(credential);
    return loginTask.ContinueWithOnMainThread(HandleSignInWithUser);
  });
  return continueTask;
}
```
What's weird about GameCenter is that you don't manage the credential directly, rather you [request it from GameCenter](https://firebase.google.com/docs/reference/unity/class/firebase/auth/game-center-auth-provider#getcredentialasync) ([Here's PlayGames](https://firebase.google.com/docs/reference/unity/class/firebase/auth/play-games-auth-provider#getcredential) by comparison, which you have to [cache the result of `Authenticate` manually](https://firebase.google.com/docs/auth/unity/play-games#authenticate_with_firebase)).
So in your code, I'd go into the `if (success)` block and add:
```
GameCenterAuthProvider.GetCredentialAsync().ContinueWith(task => {
  // I used ContinueWith
  // be careful, I'm on a background thread here.
  // If this is a problem, use ContinueWithOnMainThread
  if (task.Exception == null) {
    FirebaseAuth.DefaultInstance.SignInWithCredentialAsync(task.Result).ContinueWithOnMainThread(signInTask => {
      // I used ContinueWithOnMainThread
      // You're on the Unity main thread here, so you can add or change scene objects
      // If you're comfortable with threading, you can change this to ContinueWith
    });
  }
});
```
|
<https://firebase.google.com/docs/auth/ios/game-center> This might answer your question. You may have to implement this feature in XCode when you export the project from unity
|
66,412,526
|
I am quite new to python and I have a table of occupancy that looks like this:
```
| room   | free | date       | place  |
| room_1 | 0    | 2021-01-13 | Boston |
| room_2 | 1    | 2021-02-14 | Boston |
| room_2 | 0    | 2021-02-15 | Boston |
```
...
How can I calculate, for each room, how often it was free within a timeframe of a month and of a week?
So I could say: room\_1 was free 95% of the time in February?
1 = free
0 = not free
I have the data in `csv` format.
|
2021/02/28
|
[
"https://Stackoverflow.com/questions/66412526",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10094012/"
] |
Hopefully there aren't many holes in the Unity documentation, but when you do find one you can often find sample code in the [Unity Quickstarts](https://github.com/firebase/quickstart-unity) (as these are also used to validate changes to the Unity SDK).
From [`UIHandler.cs`](https://github.com/firebase/quickstart-unity/blob/master/auth/testapp/Assets/Firebase/Sample/Auth/UIHandler.cs) in the [Auth quickstart](https://github.com/firebase/quickstart-unity/tree/master/auth/testapp):
```
public Task SignInWithGameCenterAsync() {
  var credentialTask = Firebase.Auth.GameCenterAuthProvider.GetCredentialAsync();
  var continueTask = credentialTask.ContinueWithOnMainThread(task => {
    if (!task.IsCompleted)
      return null;
    if (task.Exception != null)
      Debug.Log("GC Credential Task - Exception: " + task.Exception.Message);

    var credential = task.Result;
    var loginTask = auth.SignInWithCredentialAsync(credential);
    return loginTask.ContinueWithOnMainThread(HandleSignInWithUser);
  });
  return continueTask;
}
```
What's weird about GameCenter is that you don't manage the credential directly, rather you [request it from GameCenter](https://firebase.google.com/docs/reference/unity/class/firebase/auth/game-center-auth-provider#getcredentialasync) ([Here's PlayGames](https://firebase.google.com/docs/reference/unity/class/firebase/auth/play-games-auth-provider#getcredential) by comparison, which you have to [cache the result of `Authenticate` manually](https://firebase.google.com/docs/auth/unity/play-games#authenticate_with_firebase)).
So in your code, I'd go into the `if (success)` block and add:
```
GameCenterAuthProvider.GetCredentialAsync().ContinueWith(task => {
  // I used ContinueWith
  // be careful, I'm on a background thread here.
  // If this is a problem, use ContinueWithOnMainThread
  if (task.Exception == null) {
    FirebaseAuth.DefaultInstance.SignInWithCredentialAsync(task.Result).ContinueWithOnMainThread(signInTask => {
      // I used ContinueWithOnMainThread
      // You're on the Unity main thread here, so you can add or change scene objects
      // If you're comfortable with threading, you can change this to ContinueWith
    });
  }
});
```
|
To complete Patrick's answer, I found the mechahamster GitHub repo where they use Game Center Auth with Firebase on Unity.
If it can help someone:
<https://github.com/google/mechahamster/blob/b5ab9762cc95a2a36156d0cbd1dc08fb767c4080/Assets/Hamster/Scripts/Menus/GameCenterSignIn.cs#L45>
|
52,459,081
|
Good morning everyone!
I am new to programming and am learning Python. I am trying to create a function that converts each individual char in a string into its corresponding int and displays them one after another. The first error it generates is "c is not defined".
```
c=''
def encode(secret_message):
    for c in secret_message:
        int_message=+ord(c)
    return int_message
Example of what I want it to do:
secret_message='You' (this is a string)
return: 89 111 117 (this should be an int, not a list)
note: 'Y'=89, 'o'=111, 'u'=117
```
the idea is that encode takes in a parameter secret message. It then iterates through each char in c and converts each char from a string to an int. Then it returns the entire message in ints.
I'm also not sure how to get each char to appear in int\_message. As of now, it looks like it will add all the ints together. I want it to simply place them together (like a string). Do I need to convert it back to a string after I get the int values then concatenate?
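A small sketch of the concatenation idea the question is asking about -- build the digits as strings, join them, and convert back to an int at the end (this is an illustrative suggestion, not the original code):
```
def encode(secret_message):
    # 'You' -> '89111117' -> 89111117
    digits = ''.join(str(ord(c)) for c in secret_message)
    return int(digits)

print(encode('You'))  # 89111117
```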
|
2018/09/22
|
[
"https://Stackoverflow.com/questions/52459081",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10401403/"
] |
It was very difficult to understand your code, but since you asked for the logic to improve your understanding, here is pseudocode you can refer to in order to correct your code accordingly.
```
Node delete(index i, Node n)       // pass index and head reference node and return head
    if (n == null)                 // if node is null
        return null;
    if (i == 1)                    // if we reached the node to be deleted, return the next node reference
        return n.next;
    n.next = delete(i - 1, n.next);
    return n;                      // recursively return current node reference
```
|
After struggling I managed to solve the problem; here is the answer, but I am still not sure whether the complexity is O(n) or O(log n).
```
public void delete(int index) {
    // check if the index is valid
    if ((index < 0) || (index > length())) {
        System.out.println("Array out of bound!");
        return;
    }
    // pass the value head to temp only in the first run
    if (this.index == 0)
        temp = head;
    // if the given index is zero then move the head to next element and return
    if (index == 0) {
        head = head.next;
        return;
    }
    // if the array is empty or has only one element then move the head to null
    if (head.next == null || head == null) {
        head = null;
        return;
    }
    if (temp != null) {
        prev = temp;
        temp = temp.next;
        this.index = this.index + 1;
        // if the given index is reached
        // then link the node prev to the node that comes after temp
        // and unlink temp
        if (this.index == index) {
            prev.next = temp.next;
            temp = null;
            return;
        }
        // if not then call the function again
        delete(index);
    }
}
```
|
52,459,081
|
Good morning everyone!
I am new to programming and am learning Python. I am trying to create a function that converts each individual char in a string into its corresponding int and displays them one after another. The first error it generates is "c is not defined".
```
c=''
def encode(secret_message):
    for c in secret_message:
        int_message=+ord(c)
    return int_message
Example of what I want it to do:
secret_message='You' (this is a string)
return: 89 111 117 (this should be an int, not a list)
note: 'Y'=89, 'o'=111, 'u'=117
```
the idea is that encode takes in a parameter secret message. It then iterates through each char in c and converts each char from a string to an int. Then it returns the entire message in ints.
I'm also not sure how to get each char to appear in int\_message. As of now, it looks like it will add all the ints together. I want it to simply place them together (like a string). Do I need to convert it back to a string after I get the int values then concatenate?
|
2018/09/22
|
[
"https://Stackoverflow.com/questions/52459081",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10401403/"
] |
### A Java recursive delete method in a linked list
Ok, let's go through this with an example. It's simplistic, but once you get the hang of it and understand the `delete` recursion algorithm, you can easily make the sample classes `generic`, take care of encapsulation, optimize the code and then go on to production.
#### Classes in this example
Assume, for the sake of example, that the basic Singly `LinkedList` and `Node` classes are very simplistic. The inner static `Node` class only stores primitive `int` types and it only includes a `next` reference to the following `Node` element in the list. The `LinkedList` only includes a `head` node, which is the beginning of the linked list. This is not a doubly linked list and it does not have a reference to the previous node. Traversals are done sequentially from the given `Node` (typically head node) through the `next` reference, one node after the other. I've added a `toString()` implementation to both, which will come handy later:
```
public class LinkedList {

    protected Node head;

    public LinkedList(Node head) {
        super();
        this.head = head;
    }

    static class Node {

        protected int data;
        protected Node next;

        Node(int data, Node next) {
            this.data = data;
            this.next = next;
        }

        @Override
        public String toString() {
            StringBuilder builder = new StringBuilder();
            builder.append("Node ");
            builder.append(data);
            if (null != next)
                builder.append(" -> ");
            return builder.toString();
        }
    }

    @Override
    public String toString() {
        StringBuilder builder = new StringBuilder();
        builder.append("LinkedList [");
        Node node = head;
        while (node != null) {
            builder.append(node);
            node = node.next;
        }
        builder.append("]");
        return builder.toString();
    }
}
```
#### Implementing a recursive delete method
Now, let's add a recursive `delete()` method. Deleting a node in a singly linked list can only be done by un-linking it from the previous node's `next` reference. The only exception to this rule is the `head` node, which we null to delete. Hence, it is obvious that we'll need (in addition to a starting point `current node` reference), a reference to the previous node.
Thus, our recursive `delete()` method signature can be:
```
private LinkedList delete(Node node, Node prev, int key)
```
Although the return type of this method can be omitted altogether (void) it is very useful to support chain-ability, so that API calls can become one-liner, dot separated syntax such as:
```
System.out.println(list.push(0).append(2).deleteAll(1));
```
Hence, for the sake of chain-ability, we'll return a reference to the entire `LinkedList` instance from this method too. As per the arguments list:
The first argument is the current node to check if it matches the given `key`. The next argument is the previous node, in case we need to un-link the current node. The last argument is the key we're looking for in all nodes to be deleted (unlinked).
The method modifier is `private` because it's not meant to be used publicly. We'll wrap it in a user friendly `facade` method, which will start the recursion with `head` as the current node and `null` as the previous node:
```
public LinkedList deleteAll(int key) {
    return delete(head, null, key);
}
```
Now, let's see how we can implement the recursive `delete(...)` method and we begin with the two base conditions that will terminate the recursion; a null current node or a single node in the list, which is also the `head` node:
```
private LinkedList delete(Node node, Node prev, int key) {
    if (node == null)
        return this;
    if (node == head && head.data == key && null == head.next) { // head node w/o next pointer
        head = null;
        return this;
    }
    // ...more code here
}
```
Reaching the first base condition means either that we've reached the end of the linked list (found the `key` or not), or that the linked list is empty. We're done and we return a reference to the linked list.
The second base condition checks to see if our current node is the head node, and that it matches the `key`. In this case we also check if it happens to be the single node in the linked list. In such case the head node requires a 'special' treatment and must be assigned `null` in order to be deleted. Naturally, after deleting the `head` node, the list is empty and we're done, so we return a reference to the linked list.
The next condition checks if the current node matches the `key`, if it's the head node, but is not alone in the list.
```
private LinkedList delete(Node node, Node prev, int key) {
    // ...previous code here
    if (node == head && head.data == key) { // head with next pointer
        head = head.next;
        return delete(head, null, key);
    }
    // ...more code here
}
```
We'll optimize this code later, but for now, in such a case we simply move the `head` reference one step forward, so the old head is effectively deleted (its old reference will be garbage collected), and we recur with the new `head` as the current node and `null` still as the previous node.
The next case covers a regular (middle or tail) node matching the `key`:
```
private LinkedList delete(Node node, Node prev, int key) {
    // ...previous code here
    if (node.data == key) {
        prev.next = node.next;
        return delete(prev, null, key);
    }
    // ...more code here
}
```
In this case we delete the current node by un-linking the previous node's `next` pointer from the current node and assigning it the address of the node after the current one. We essentially 'skip' the current node, which becomes garbage. We then recur with the previous node as the current node and null as the previous node.
In all of these handled cases we had a match for `key`. Finally, we handle the case where there's no match:
```
private LinkedList delete(Node node, Node prev, int key) {
    // ...previous code here
    return delete(node.next, node, key);
}
```
Obviously, we recur with the next node as the current node, and the old current node as the previous node. The `key` remains the same through all recursion calls.
The entire (un-optimized) method now looks like this:
```
private LinkedList delete(Node node, Node prev, int key) {
    if (node == null)
        return this;
    if (node.data == key && node == head && null == node.next) { // head node w/o next pointer
        head = null;
        return this;
    }
    if (node.data == key && node == head) { // head with next pointer
        head = head.next;
        return delete(head, null, key);
    }
    if (node.data == key) { // middle / tail
        prev.next = node.next;
        return delete(prev, null, key);
    }
    return delete(node.next, node, key);
}
```
#### Tail Recursion Optimization
Many compilers (javac included) can optimize recursive methods, if they use a tail-recursion. A recursive method is tail recursive when a recursive call is the last thing executed by the method. The compiler can then replace the recursion with a simple `goto/label` mechanism and save the extra memory space required in run-time for each recursion frame.
We can easily optimize our recursive `delete(...)` method to comply. Instead of returning recursively from each of the handled conditions (cases) we can keep a reference to the current `node` and previous node `prev` and assign them with appropriate values inside each case handling. This way, the only recursive call will happen at the end of the method:
```
private LinkedList delete(Node node, Node prev, int key) {
    if (node == null)
        return this;
    if (node.data == key && head == node && null == node.next) { // head node w/o next pointer
        head = null;
        return this;
    }
    Node n = node.next, p = node;
    if (node.data == key && head == node) { // head with next pointer
        head = head.next;
        n = head;
        p = null;
    } else if (node.data == key) { // middle / tail
        prev.next = node.next;
        n = prev;
        p = null;
    }
    return delete(n, p, key);
}
```
#### To test this recursive method:
I've added a simple `main` test driver method to test the `delete(...)` method implementation, via the facade method `deleteAll(...)`:
```
public static void main(String[] args) {
    LinkedList list = new LinkedList(new Node(0, new Node(1, new Node(1, new Node(2, new Node(2, new Node(3, null)))))));
    System.out.println(list);
    System.out.println(list.deleteAll(6));
    System.out.println(list.deleteAll(1));
    System.out.println(list.deleteAll(3));
    System.out.println(list.deleteAll(2));
    System.out.println(list.deleteAll(0));
}
```
The output (using my supplied `toString()` methods) is:
```
LinkedList [Node 0 -> Node 1 -> Node 1 -> Node 2 -> Node 2 -> Node 3]
LinkedList [Node 0 -> Node 1 -> Node 1 -> Node 2 -> Node 2 -> Node 3]
LinkedList [Node 0 -> Node 2 -> Node 2 -> Node 3]
LinkedList [Node 0 -> Node 2 -> Node 2]
LinkedList [Node 0]
LinkedList []
```
Although it's been 3 years since the initial post, I trust some other beginning Java programmers, if not the OP, will find this explanation useful.
|
After struggling I managed to solve the problem; here is the answer, but I am still not sure whether the complexity is O(n) or O(log n).
```
public void delete(int index) {
    // check if the index is valid
    if ((index < 0) || (index > length())) {
        System.out.println("Array out of bound!");
        return;
    }
    // pass the value head to temp only in the first run
    if (this.index == 0)
        temp = head;
    // if the given index is zero then move the head to next element and return
    if (index == 0) {
        head = head.next;
        return;
    }
    // if the array is empty or has only one element then move the head to null
    if (head.next == null || head == null) {
        head = null;
        return;
    }
    if (temp != null) {
        prev = temp;
        temp = temp.next;
        this.index = this.index + 1;
        // if the given index is reached
        // then link the node prev to the node that comes after temp
        // and unlink temp
        if (this.index == index) {
            prev.next = temp.next;
            temp = null;
            return;
        }
        // if not then call the function again
        delete(index);
    }
}
```
|
54,021,168
|
I am using Flask and Python 3 to upload a video to the server. The result is saved as format(filename)+result.jpg, where filename is the video name.
This image result was visible in the browser at the url /video\_feed before the string was appended to result.jpg.
How can I now access the url to see result.jpg?
```
@app.route('/video_feed_now')
def video_feed_now():
    # return send_file('result.jpg', mimetype='image/gif')
    return send_file(format(filename)+'result.jpg', frame)
```
|
2019/01/03
|
[
"https://Stackoverflow.com/questions/54021168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9781290/"
] |
Using a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions):
```
return [x for x in xs if f(x) > 0]
```
Without using a list comprehension:
```
return filter(lambda x: f(x) > 0, xs)
```
Since you said it should return a list:
```
return list(filter(lambda x: f(x) > 0, xs))
```
|
Two solutions are possible using recursion, which do not use looping or comprehensions - which implement the iteration protocol internally.
**Method 1:**
```py
lst = list()
def foo(index):
if index < 0 or index >= len(xs):
return
if f(xs[index]) > 0:
lst.append(xs[index])
# print xs[index] or do something else with the value
foo(index + 1)
# call foo with index = 0
```
**Method 2:**
```py
lst = list()

def foo(xs):
    if len(xs) <= 0:
        return
    if f(xs[0]) > 0:
        lst.append(xs[0])
    foo(xs[1:])

# call foo with xs
```
Both of these methods create a new list consisting of the desired values. The second method uses list slicing, and I am not sure whether slicing internally implements the iteration protocol or not.
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
You can add this to your script:
```
import sys
sys.stdout = open('logfile', 'w')
```
This will make the print statements write to `logfile`.
If you want the option of printing to `stdout` and a file, you can try this:
```
class Tee(object):
    def __init__(self, *files):
        self.files = files

    def write(self, obj):
        for f in self.files:
            f.write(obj)

f = open('logfile', 'w')
backup = sys.stdout
sys.stdout = Tee(sys.stdout, f)
print "hello world"  # this should appear in stdout and in file
```
To revert to just printing to console, just restore the "backup"
```
sys.stdout = backup
```
|
Here is a program that does what you describe:
```
#! /usr/bin/python3

class Tee:
    def write(self, *args, **kwargs):
        self.out1.write(*args, **kwargs)
        self.out2.write(*args, **kwargs)

    def __init__(self, out1, out2):
        self.out1 = out1
        self.out2 = out2

import sys
sys.stdout = Tee(open("/tmp/log.txt", "w"), sys.stdout)
print("hello")
```
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
You can add this to your script:
```
import sys
sys.stdout = open('logfile', 'w')
```
This will make the print statements write to `logfile`.
If you want the option of printing to `stdout` and a file, you can try this:
```
class Tee(object):
    def __init__(self, *files):
        self.files = files

    def write(self, obj):
        for f in self.files:
            f.write(obj)

f = open('logfile', 'w')
backup = sys.stdout
sys.stdout = Tee(sys.stdout, f)
print "hello world"  # this should appear in stdout and in file
```
To revert to just printing to console, just restore the "backup"
```
sys.stdout = backup
```
|
If you use the built in [logging module](http://docs.python.org/2/library/logging.html) you can configure your loggers with as many outputs as you need: to file, to databases, to emails, etc. However it sounds like you're mixing print for two different uses: logging (recording program flow for later inspection) and prompts. The real work will be to split out those two uses of 'print' into different functions so you get what you need in each place.
A lot of people [replace python's generic sys.stdout and sys.stderr](https://stackoverflow.com/questions/616645/how-do-i-duplicate-sys-stdout-to-a-log-file-in-python) to automatically do stuff to text which is being sent to the console. The real console output always lives in `sys.__stdout__` and `sys.__stderr__` (so you don't need to worry about somehow 'losing' it) but if you stick any object with the same methods as a file into the variables `sys.stdout` and `sys.stderr` you can do whatever you like with the output process.
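As a hedged illustration of the first point -- configuring the standard logging module with both a console handler and a file handler (the logger name and file name below are made up, not from the answer):
```
import logging

logger = logging.getLogger("myscript")       # hypothetical logger name
logger.setLevel(logging.INFO)

# one handler for the console, one for a file
console = logging.StreamHandler()
logfile = logging.FileHandler("run.log")     # hypothetical file name

formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
console.setFormatter(formatter)
logfile.setFormatter(formatter)

logger.addHandler(console)
logger.addHandler(logfile)

logger.info("this line goes to the terminal and to run.log")
```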
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
You can add this to your script:
```
import sys
sys.stdout = open('logfile', 'w')
```
This will make the print statements write to `logfile`.
If you want the option of printing to `stdout` and a file, you can try this:
```
class Tee(object):
    def __init__(self, *files):
        self.files = files

    def write(self, obj):
        for f in self.files:
            f.write(obj)

f = open('logfile', 'w')
backup = sys.stdout
sys.stdout = Tee(sys.stdout, f)
print "hello world"  # this should appear in stdout and in file
```
To revert to just printing to console, just restore the "backup"
```
sys.stdout = backup
```
|
hm... is it hard to implement your own print() function and decorator that will log anything that is passed to your print function?
```
def logger(func):
    def inner(*args, **kwargs):
        log(*args, **kwargs)  # your logging logic
        return func(*args, **kwargs)
    return inner

@logger
def lprint(string_to_print):
    print(string_to_print)
```
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
You can add this to your script:
```
import sys
sys.stdout = open('logfile', 'w')
```
This will make the print statements write to `logfile`.
If you want the option of printing to `stdout` and a file, you can try this:
```
class Tee(object):
    def __init__(self, *files):
        self.files = files

    def write(self, obj):
        for f in self.files:
            f.write(obj)

f = open('logfile', 'w')
backup = sys.stdout
sys.stdout = Tee(sys.stdout, f)
print "hello world"  # this should appear in stdout and in file
```
To revert to just printing to console, just restore the "backup"
```
sys.stdout = backup
```
|
When trying the best answer accepted on python 3.7 I had the following exception:
```
Exception ignored in: <__main__.Logger object at 0x7f04083760f0>
AttributeError: 'Logger' object has no attribute 'flush'
```
I added the following function to make it work:
```
def flush(self):
    pass
```
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
Here is a program that does what you describe:
```
#! /usr/bin/python3

class Tee:
    def write(self, *args, **kwargs):
        self.out1.write(*args, **kwargs)
        self.out2.write(*args, **kwargs)

    def __init__(self, out1, out2):
        self.out1 = out1
        self.out2 = out2

import sys
sys.stdout = Tee(open("/tmp/log.txt", "w"), sys.stdout)
print("hello")
```
|
If you use the built in [logging module](http://docs.python.org/2/library/logging.html) you can configure your loggers with as many outputs as you need: to file, to databases, to emails, etc. However it sounds like you're mixing print for two different uses: logging (recording program flow for later inspection) and prompts. The real work will be to split out those two uses of 'print' into different functions so you get what you need in each place.
A lot of people [replace python's generic sys.stdout and sys.stderr](https://stackoverflow.com/questions/616645/how-do-i-duplicate-sys-stdout-to-a-log-file-in-python) to automatically do stuff to text which is being sent to the console. The real console output always lives in `sys.__stdout__` and `sys.__stderr__` (so you don't need to worry about somehow 'losing' it) but if you stick any object with the same methods as a file into the variables `sys.stdout` and `sys.stderr` you can do whatever you like with the output process.
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
Here is a program that does what you describe:
```
#! /usr/bin/python3

class Tee:
    def write(self, *args, **kwargs):
        self.out1.write(*args, **kwargs)
        self.out2.write(*args, **kwargs)

    def __init__(self, out1, out2):
        self.out1 = out1
        self.out2 = out2

import sys
sys.stdout = Tee(open("/tmp/log.txt", "w"), sys.stdout)
print("hello")
```
|
hm... is it hard to implement your own print() function and decorator that will log anything that is passed to your print function?
```
def logger(func):
    def inner(*args, **kwargs):
        log(*args, **kwargs)  # your logging logic
        return func(*args, **kwargs)
    return inner

@logger
def lprint(string_to_print):
    print(string_to_print)
```
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
Here is a program that does what you describe:
```
#! /usr/bin/python3

class Tee:
    def write(self, *args, **kwargs):
        self.out1.write(*args, **kwargs)
        self.out2.write(*args, **kwargs)

    def __init__(self, out1, out2):
        self.out1 = out1
        self.out2 = out2

import sys
sys.stdout = Tee(open("/tmp/log.txt", "w"), sys.stdout)
print("hello")
```
|
When trying the best answer accepted on python 3.7 I had the following exception:
```
Exception ignored in: <__main__.Logger object at 0x7f04083760f0>
AttributeError: 'Logger' object has no attribute 'flush'
```
I added the following function to make it work:
```
def flush(self):
    pass
```
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
If you use the built in [logging module](http://docs.python.org/2/library/logging.html) you can configure your loggers with as many outputs as you need: to file, to databases, to emails, etc. However it sounds like you're mixing print for two different uses: logging (recording program flow for later inspection) and prompts. The real work will be to split out those two uses of 'print' into different functions so you get what you need in each place.
A lot of people [replace python's generic sys.stdout and sys.stderr](https://stackoverflow.com/questions/616645/how-do-i-duplicate-sys-stdout-to-a-log-file-in-python) to automatically do stuff to text which is being sent to the console. The real console output always lives in `sys.__stdout__` and `sys.__stderr__` (so you don't need to worry about somehow 'losing' it) but if you stick any object with the same methods as a file into the variables `sys.stdout` and `sys.stderr` you can do whatever you like with the output process.
|
hm... is it hard to implement your own print() function and decorator that will log anything that is passed to your print function?
```
def logger(func):
    def inner(*args, **kwargs):
        log(*args, **kwargs)  # your logging logic
        return func(*args, **kwargs)
    return inner

@logger
def lprint(string_to_print):
    print(string_to_print)
```
|
17,866,724
|
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this which somewhat answers my question but makes it where the "print" statement no longer prints to the command line
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger)
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17866724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1464660/"
] |
When trying the best answer accepted on python 3.7 I had the following exception:
```
Exception ignored in: <__main__.Logger object at 0x7f04083760f0>
AttributeError: 'Logger' object has no attribute 'flush'
```
I added the following function to make it work:
```
def flush(self):
    pass
```
|
hm... is it hard to implement your own print() function and decorator that will log anything that is passed to your print function?
```
def logger(func):
    def inner(*args, **kwargs):
        log(*args, **kwargs)  # your logging logic
        return func(*args, **kwargs)
    return inner

@logger
def lprint(string_to_print):
    print(string_to_print)
```
|
54,246,133
|
I am an experienced programmer in ruby, python and javascript (specifically back-end node.js), I have worked in java, perl and c++, and I've used lisp and haskell academically, but I'm brand new to Scala and trying to learn some conventions.
I have a function that accepts a function as a parameter, similar to how a sort function accepts a comparator function. What is the more idiomatic way of implementing this?
Suppose this is the function that accepts a function parameter y:
```
object SomeMath {
  def apply(x: Int, y: IntMath): Int = y(x)
}
```
Should `IntMath` be defined as a `trait` and different implementations of `IntMath` defined in different objects? (let's call this option A)
```
trait IntMath {
  def apply(x: Int): Int
}

object AddOne extends IntMath {
  def apply(x: Int): Int = x + 1
}

object AddTwo extends IntMath {
  def apply(x: Int): Int = x + 2
}
AddOne(1)
// => 2
AddTwo(1)
// => 3
SomeMath(1, AddOne)
// => 2
SomeMath(1, AddTwo)
// => 3
```
Or should `IntMath` be a type alias for a function signature? (option B)
```
type IntMath = Int => Int
object Add {
  def one: IntMath = _ + 1
  def two: IntMath = _ + 2
}
Add.one(1)
// => 2
Add.two(1)
// => 3
SomeMath(1, Add.one)
// => 2
SomeMath(1, Add.two)
// => 3
```
but which one is more idiomatic scala?
Or are neither idiomatic? (option C)
My previous experience in functional languages leans me towards B, but I have never seen that before in Scala. On the other hand, though the `trait` seems to add unnecessary clutter, I have seen that implementation and it seems to work much more smoothly in Scala (since the object becomes callable with the `apply` function).
[**Update**] Fixed the example code where an `IntMath` type is passed into `SomeMath`. The syntactic sugar that Scala provides, where an `object` with an `apply` method becomes callable like a function, gives the illusion that `AddOne` and `AddTwo` are functions and are passed like functions in Option A.
|
2019/01/18
|
[
"https://Stackoverflow.com/questions/54246133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5158185/"
] |
Since Scala has explicit function types, I'd say that if you need to pass a function to your function, use the function type, i.e. your option B. They are explicitly there for this purpose.
It is not exactly clear what you mean by saying that you "have never seen that before in Scala", BTW. A lot of standard library methods accept functions as their parameters, for example, collection transformation methods. Creating type aliases to convey the semantics of a more complex type is perfectly idiomatic Scala as well, and it does not really matter whether the aliased type is a function or not. For example, one of the core types in my current project is actually a type alias for a function, which conveys the semantics using a descriptive name and also allows conforming functions to be passed easily, without the need to explicitly create a subclass of some trait.
It is also important, I believe, to understand that syntax sugar of the `apply` method does not really have anything to do with how functions or functional traits are used. Indeed, `apply` methods do have this ability to be invoked just with parentheses; however, it does not mean that different types having an `apply` method, even with the same signature, are *interoperable*, and I think that this interoperability is what matters in this situation, as well as an ability to easily construct instances of such types. After all, in your specific example it only matters for *your* code whether you could use the syntax sugar on `IntMath` or not, but for users of your code an ability to easily construct an instance of `IntMath`, as well as an ability to pass some existing thing they already have as `IntMath`, is much more important.
With the `FunctionN` types, you have the benefit of being able to use the anonymous function syntax to construct instances of these types (actually, several syntaxes, at least these: `x => y`, `{ x => y }`, `_.x`, `method _`, `method(_)`). There even was no way to create instances of "Single Abstract Method" types before Scala 2.11, and even there it requires a compiler flag to actually enable this feature. This means that the users of your type will have to write either this:
```
SomeMath(10, _ + 1)
```
or this:
```
SomeMath(10, new IntMath {
  def apply(x: Int): Int = x + 1
})
```
Naturally, the former approach is much clearer.
Additionally, `FunctionN` types provide a single common denominator of functional types, which improves interoperability. Functions in math, for example, do not have any kind of "type" except their signature, and you can use any function in place of any other function, as long as their signatures match. This property is beneficial in programming too, because it allows reusing definitions you already have for different purposes, reducing the number of conversions and therefore potential mistakes that you might do when writing these conversions.
Finally, there are indeed situations where you would want to create a separate type for your function-like interface. One example is that with full-fledged types, you have an ability to define a companion object, and companion objects participate in implicits resolution. This means that if instances of your function type are supposed to be provided implicitly, then you can take advantage of having a companion object and define some common implicits there, which would make them available for all users of the type without additional imports:
```
// You might even want to extend the function type
// to improve interoperability
trait Converter[S, T] extends (S => T) {
  def apply(source: S): T
}

object Converter {
  implicit val intToStringConverter: Converter[Int, String] = new Converter[Int, String] {
    def apply(source: Int): String = source.toString
  }
}
```
Here having a piece of implicit scope associated with the type is useful, because otherwise users of `Converter` would need to always import contents of some object/package to obtain default implicit definitions; with this approach, however, all implicits defined in `object Converter` will be searched by default.
However, these situations are not very common. As a general rule, I think you should first try to use the regular function type. Only if you find that you *do* need a separate type, for a practical reason, should you create your own type.
|
Another option is not to define anything and keep the function type explicit:
```
def apply(x: Int, y: Int => Int): Int = y(x)
```
This makes the code more readable by making it clear which arguments are data objects and which are function objects. (Purists will say that there is no distinction in a functional language, but I think this is a useful distinction in most real-world code, especially if you come from a more declarative background)
If the arguments or results become more complicated then they can be defined as their own types, but the function itself remains explicit:
```
def apply(x: MyValue, y: MyValue => MyValue2): MyValue2 = y(x)
```
Failing that, `B` is preferable because it allows you to pass functions as lambdas:
```
SomeMath(1, _ + 3)
```
`A` would require you to declare a new instance of `IntMath` which is more cumbersome and less idiomatic.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
Send a signal to the group.
So instead of `kill 13231` do:
```
kill -- -13231
```
If you're starting from python then have a look at:
<http://www.pixelbeat.org/libs/subProcess.py>
which shows how to mimic the shell in starting
and killing a group
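As a hedged sketch of doing this directly with the Python standard library (rather than the linked helper): start the script in its own process group and signal the whole group. The script path is made up for illustration:
```
import os
import signal
import subprocess

# start the script in a new session so it becomes its own process-group leader
proc = subprocess.Popen(["./test-k.sh"], preexec_fn=os.setsid)

# ... later, terminate the script and every child it spawned
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
```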
|
@Patrick's answer almost did the trick, but it doesn't work if the *parent* process of your *current* shell is in the same group (it kills the parent too).
I found this to be better:
`trap 'pkill -P $$' EXIT`
See [here](https://unix.stackexchange.com/a/124148) for more info.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
Was looking for an elegant solution to this issue and found the following solution elsewhere.
```
trap 'kill -HUP 0' EXIT
```
My own man pages say nothing about what `0` means, but from digging around, it seems to mean the current process group. Since the script gets its own process group, this ends up sending SIGHUP to all the script's children, foreground and background.
|
Just add a line like this to your script:
```
trap "kill $$" SIGINT
```
You might need to change 'SIGINT' to 'INT' on your setup, but this will basically kill your process and all child processes when you hit Ctrl-C.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
@Patrick's answer almost did the trick, but it doesn't work if the *parent* process of your *current* shell is in the same group (it kills the parent too).
I found this to be better:
`trap 'pkill -P $$' EXIT`
See [here](https://unix.stackexchange.com/a/124148) for more info.
|
Just add a line like this to your script:
```
trap "kill $$" SIGINT
```
You might need to change 'SIGINT' to 'INT' on your setup, but this will basically kill your process and all child processes when you hit Ctrl-C.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
I would do something like this:
```
#!/bin/bash
trap : SIGTERM SIGINT
echo $$
find / >/dev/null 2>&1 &
FIND_PID=$!
wait $FIND_PID
if [[ $? -gt 128 ]]
then
kill $FIND_PID
fi
```
Some explanation is in order, I guess. Out the gate, we need to change some of the default signal handling. `:` is a no-op command, since passing an empty string causes the shell to ignore the signal instead of doing something about it (the opposite of what we want to do).
Then, the `find` command is run in the background (from the script's perspective) and we call the `wait` builtin for it to finish. Since we gave a real command to `trap` above, when a signal is handled, `wait` will exit with a status greater than 128. If the process `wait`ed for completes, `wait` will return the exit status of that process.
Last, if the `wait` returns that error status, we want to `kill` the child process. Luckily we saved its PID. The advantage of this approach is that you can log some error message or otherwise identify that a signal caused the script to exit.
As others have mentioned, putting `kill -- -$$` as your argument to `trap` is another option if you don't care about leaving any information around post-exit.
For `trap` to work the way you want, you do need to pair it up with `wait` - the `bash` man page says "If `bash` is waiting for a command to complete and receives a signal for which a `trap` has been set, the `trap` will not be executed until the command completes." `wait` is the way around this hiccup.
You can extend it to more child processes if you want, as well. I didn't really exhaustively test this one out, but it seems to work here.
```
$ ./test-k.sh &
[1] 12810
12810
$ kill 12810
$ ps -ef | grep find
$
```
|
Was looking for an elegant solution to this issue and found the following solution elsewhere.
```
trap 'kill -HUP 0' EXIT
```
My own man pages say nothing about what `0` means, but from digging around, it seems to mean the current process group. Since the script gets its own process group, this ends up sending SIGHUP to all the script's children, foreground and background.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
Send a signal to the group.
So instead of `kill 13231` do:
```
kill -- -13231
```
If you're starting from python then have a look at:
<http://www.pixelbeat.org/libs/subProcess.py>
which shows how to mimic the shell in starting
and killing a group
|
Just add a line like this to your script:
```
trap "kill $$" SIGINT
```
You might need to change 'SIGINT' to 'INT' on your setup, but this will basically kill your process and all child processes when you hit Ctrl-C.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
I would do something like this:
```
#!/bin/bash
trap : SIGTERM SIGINT
echo $$
find / >/dev/null 2>&1 &
FIND_PID=$!
wait $FIND_PID
if [[ $? -gt 128 ]]
then
kill $FIND_PID
fi
```
Some explanation is in order, I guess. Out the gate, we need to change some of the default signal handling. `:` is a no-op command, since passing an empty string causes the shell to ignore the signal instead of doing something about it (the opposite of what we want to do).
Then, the `find` command is run in the background (from the script's perspective) and we call the `wait` builtin for it to finish. Since we gave a real command to `trap` above, when a signal is handled, `wait` will exit with a status greater than 128. If the process `wait`ed for completes, `wait` will return the exit status of that process.
Last, if the `wait` returns that error status, we want to `kill` the child process. Luckily we saved its PID. The advantage of this approach is that you can log some error message or otherwise identify that a signal caused the script to exit.
As others have mentioned, putting `kill -- -$$` as your argument to `trap` is another option if you don't care about leaving any information around post-exit.
For `trap` to work the way you want, you do need to pair it up with `wait` - the `bash` man page says "If `bash` is waiting for a command to complete and receives a signal for which a `trap` has been set, the `trap` will not be executed until the command completes." `wait` is the way around this hiccup.
You can extend it to more child processes if you want, as well. I didn't really exhaustively test this one out, but it seems to work here.
```
$ ./test-k.sh &
[1] 12810
12810
$ kill 12810
$ ps -ef | grep find
$
```
|
@Patrick's answer almost did the trick, but it doesn't work if the *parent* process of your *current* shell is in the same group (it kills the parent too).
I found this to be better:
`trap 'pkill -P $$' EXIT`
See [here](https://unix.stackexchange.com/a/124148) for more info.
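If the cleanup has to happen on the Python side rather than in the shell, a rough equivalent of `pkill -P $$` is to terminate only the direct children at exit; this sketch assumes the third-party `psutil` package is available:

```python
import atexit

import psutil  # third-party; assumed installed via `pip install psutil`

def kill_direct_children():
    # Same idea as `pkill -P $$`: only processes whose parent is us.
    for child in psutil.Process().children(recursive=False):
        child.terminate()

# Note: atexit handlers run on normal interpreter exit,
# not when the process is killed by an unhandled signal.
atexit.register(kill_direct_children)
```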
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
I would do something like this:
```
#!/bin/bash
trap : SIGTERM SIGINT
echo $$
find / >/dev/null 2>&1 &
FIND_PID=$!
wait $FIND_PID
if [[ $? -gt 128 ]]
then
kill $FIND_PID
fi
```
Some explanation is in order, I guess. Right out of the gate, we need to change some of the default signal handling. `:` is a no-op command; we use it rather than an empty string because passing an empty string to `trap` makes the shell ignore the signal entirely instead of handling it (the opposite of what we want).
Then, the `find` command is run in the background (from the script's perspective) and we use the `wait` builtin to wait for it to finish. Since we gave a real command to `trap` above, when a signal is handled, `wait` will exit with a status greater than 128. If the process being `wait`ed for completes, `wait` will return the exit status of that process.
Last, if the `wait` returns that error status, we want to `kill` the child process. Luckily we saved its PID. The advantage of this approach is that you can log some error message or otherwise identify that a signal caused the script to exit.
As others have mentioned, putting `kill -- -$$` as your argument to `trap` is another option if you don't care about leaving any information around post-exit.
For `trap` to work the way you want, you do need to pair it up with `wait` - the `bash` man page says "If `bash` is waiting for a command to complete and receives a signal for which a `trap` has been set, the `trap` will not be executed until the command completes." `wait` is the way around this hiccup.
You can extend it to more child processes if you want, as well. I didn't really exhaustively test this one out, but it seems to work here.
```
$ ./test-k.sh &
[1] 12810
12810
$ kill 12810
$ ps -ef | grep find
$
```
|
The thing you would need to do is trap the kill signal, kill the find command and exit.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
Send a signal to the group.
So instead of `kill 13231` do:
```
kill -- -13231
```
If you're starting from python then have a look at:
<http://www.pixelbeat.org/libs/subProcess.py>
which shows how to mimic the shell in starting
and killing a group
|
The thing you would need to do is trap the kill signal, kill the find command and exit.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
Was looking for an elegant solution to this issue and found the following solution elsewhere.
```
trap 'kill -HUP 0' EXIT
```
My own man pages say nothing about what `0` means, but from digging around, it seems to mean the current process group. Since the script gets its own process group, this ends up sending SIGHUP to all the script's children, foreground and background.
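The same meaning of `0` (the caller's own process group) is exposed in Python through POSIX `kill(2)`; a minimal sketch, POSIX only, that you would typically call from an exit path:

```python
import os
import signal

def hup_my_process_group():
    # kill(2) with pid 0 signals every process in the caller's process
    # group -- the direct equivalent of `kill -HUP 0`.
    # Caveat: if a parent process shares the group, it gets SIGHUP too.
    os.kill(0, signal.SIGHUP)
```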
|
@Patrick's answer almost did the trick, but it doesn't work if the *parent* process of your *current* shell is in the same group (it kills the parent too).
I found this to be better:
`trap 'pkill -P $$' EXIT`
See [here](https://unix.stackexchange.com/a/124148) for more info.
|
1,644,856
|
For testing purposes I have this shell script
```
#!/bin/bash
echo $$
find / >/dev/null 2>&1
```
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
```
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
```
Running it in the background, and killing the shell only will orphan the commands running in the script.
```
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
```
I want this shell script to terminate all its child processes when it exits regardless of how it's called. It'll eventually be started from a python and java application - and some form of cleanup is needed when the script exits - any options I should look into or any way to rewrite the script to clean itself up on exit?
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1644856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/126769/"
] |
I would do something like this:
```
#!/bin/bash
trap : SIGTERM SIGINT
echo $$
find / >/dev/null 2>&1 &
FIND_PID=$!
wait $FIND_PID
if [[ $? -gt 128 ]]
then
kill $FIND_PID
fi
```
Some explanation is in order, I guess. Right out of the gate, we need to change some of the default signal handling. `:` is a no-op command; we use it rather than an empty string because passing an empty string to `trap` makes the shell ignore the signal entirely instead of handling it (the opposite of what we want).
Then, the `find` command is run in the background (from the script's perspective) and we use the `wait` builtin to wait for it to finish. Since we gave a real command to `trap` above, when a signal is handled, `wait` will exit with a status greater than 128. If the process being `wait`ed for completes, `wait` will return the exit status of that process.
Last, if the `wait` returns that error status, we want to `kill` the child process. Luckily we saved its PID. The advantage of this approach is that you can log some error message or otherwise identify that a signal caused the script to exit.
As others have mentioned, putting `kill -- -$$` as your argument to `trap` is another option if you don't care about leaving any information around post-exit.
For `trap` to work the way you want, you do need to pair it up with `wait` - the `bash` man page says "If `bash` is waiting for a command to complete and receives a signal for which a `trap` has been set, the `trap` will not be executed until the command completes." `wait` is the way around this hiccup.
You can extend it to more child processes if you want, as well. I didn't really exhaustively test this one out, but it seems to work here.
```
$ ./test-k.sh &
[1] 12810
12810
$ kill 12810
$ ps -ef | grep find
$
```
|
Just add a line like this to your script:
```
trap "kill $$" SIGINT
```
You might need to change 'SIGINT' to 'INT' on your setup, but this will basically kill your process and all child processes when you hit Ctrl-C.
|
22,415,345
|
I have a unicode string in python code:
```
name = u'Mayte_Martín'
```
I would like to use it with a SPARQL query, which meant that I should encode the string using 'utf-8' and use urllib.quote\_plus or requests.quote on it. However, both these quote functions behave strangely as can be seen when used with and without the 'safe' arguments.
```
from urllib import quote_plus
```
Without 'safe' argument:
```
quote_plus(name.encode('utf-8'))
Output: 'Mayte_Mart%C3%ADn'
```
With 'safe' argument:
```
quote_plus(name.encode('utf-8'), safe=':/')
Output:
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-164-556248391ee1> in <module>()
----> 1 quote_plus(v, safe=':/')
/usr/lib/python2.7/urllib.pyc in quote_plus(s, safe)
1273 s = quote(s, safe + ' ')
1274 return s.replace(' ', '+')
-> 1275 return quote(s, safe)
1276
1277 def urlencode(query, doseq=0):
/usr/lib/python2.7/urllib.pyc in quote(s, safe)
1264 safe = always_safe + safe
1265 _safe_quoters[cachekey] = (quoter, safe)
-> 1266 if not s.rstrip(safe):
1267 return s
1268 return ''.join(map(quoter, s))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 10: ordinal not in range(128)
```
The problem seems to be with rstrip function. I tried to make some changes and call as...
```
quote_plus(name.encode('utf-8'), safe=u':/'.encode('utf-8'))
```
But that did not solve the issue. What could be the issue here?
|
2014/03/14
|
[
"https://Stackoverflow.com/questions/22415345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/464476/"
] |
I'm answering my own question, so that it may help others who face the same issue.
This particular issue arises when you make the following import in the current workspace before executing anything else.
```
from __future__ import unicode_literals
```
This has somehow turned out to be incompatible with the following sequence of code.
```
from urllib import quote_plus
name = u'Mayte_Martín'
quote_plus(name.encode('utf-8'), safe=':/')
```
The same code without importing unicode\_literals works fine.
|
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import urllib
name = u'Mayte_Martín'
print urllib.quote_plus(name.encode('utf-8'), safe=':/')
```
works without a problem for me (Py 2.7.9, Debian)
(I don't know the answer, but I cannot post comments because of my reputation)
|
22,415,345
|
I have a unicode string in python code:
```
name = u'Mayte_Martín'
```
I would like to use it with a SPARQL query, which meant that I should encode the string using 'utf-8' and use urllib.quote\_plus or requests.quote on it. However, both these quote functions behave strangely as can be seen when used with and without the 'safe' arguments.
```
from urllib import quote_plus
```
Without 'safe' argument:
```
quote_plus(name.encode('utf-8'))
Output: 'Mayte_Mart%C3%ADn'
```
With 'safe' argument:
```
quote_plus(name.encode('utf-8'), safe=':/')
Output:
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-164-556248391ee1> in <module>()
----> 1 quote_plus(v, safe=':/')
/usr/lib/python2.7/urllib.pyc in quote_plus(s, safe)
1273 s = quote(s, safe + ' ')
1274 return s.replace(' ', '+')
-> 1275 return quote(s, safe)
1276
1277 def urlencode(query, doseq=0):
/usr/lib/python2.7/urllib.pyc in quote(s, safe)
1264 safe = always_safe + safe
1265 _safe_quoters[cachekey] = (quoter, safe)
-> 1266 if not s.rstrip(safe):
1267 return s
1268 return ''.join(map(quoter, s))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 10: ordinal not in range(128)
```
The problem seems to be with rstrip function. I tried to make some changes and call as...
```
quote_plus(name.encode('utf-8'), safe=u':/'.encode('utf-8'))
```
But that did not solve the issue. What could be the issue here?
|
2014/03/14
|
[
"https://Stackoverflow.com/questions/22415345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/464476/"
] |
I'm answering my own question, so that it may help others who face the same issue.
This particular issue arises when you make the following import in the current workspace before executing anything else.
```
from __future__ import unicode_literals
```
This has somehow turned out to be incompatible with the following sequence of code.
```
from urllib import quote_plus
name = u'Mayte_Martín'
quote_plus(name.encode('utf-8'), safe=':/')
```
The same code without importing unicode\_literals works fine.
|
According to [this bug](https://bugs.python.org/issue23885), here is the workaround:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from urllib import quote_plus
name = u'Mayte_Martín'
quote_plus(name.encode('utf-8'), safe=':/'.encode('utf-8'))
```
You must `encode` both arguments to the `quote` or `quote_plus` method as `utf-8`
|
22,415,345
|
I have a unicode string in python code:
```
name = u'Mayte_Martín'
```
I would like to use it with a SPARQL query, which meant that I should encode the string using 'utf-8' and use urllib.quote\_plus or requests.quote on it. However, both these quote functions behave strangely as can be seen when used with and without the 'safe' arguments.
```
from urllib import quote_plus
```
Without 'safe' argument:
```
quote_plus(name.encode('utf-8'))
Output: 'Mayte_Mart%C3%ADn'
```
With 'safe' argument:
```
quote_plus(name.encode('utf-8'), safe=':/')
Output:
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-164-556248391ee1> in <module>()
----> 1 quote_plus(v, safe=':/')
/usr/lib/python2.7/urllib.pyc in quote_plus(s, safe)
1273 s = quote(s, safe + ' ')
1274 return s.replace(' ', '+')
-> 1275 return quote(s, safe)
1276
1277 def urlencode(query, doseq=0):
/usr/lib/python2.7/urllib.pyc in quote(s, safe)
1264 safe = always_safe + safe
1265 _safe_quoters[cachekey] = (quoter, safe)
-> 1266 if not s.rstrip(safe):
1267 return s
1268 return ''.join(map(quoter, s))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 10: ordinal not in range(128)
```
The problem seems to be with rstrip function. I tried to make some changes and call as...
```
quote_plus(name.encode('utf-8'), safe=u':/'.encode('utf-8'))
```
But that did not solve the issue. What could be the issue here?
|
2014/03/14
|
[
"https://Stackoverflow.com/questions/22415345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/464476/"
] |
According to [this bug](https://bugs.python.org/issue23885), here is the workaround:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from urllib import quote_plus
name = u'Mayte_Martín'
quote_plus(name.encode('utf-8'), safe=':/'.encode('utf-8'))
```
You must `encode` both arguments to the `quote` or `quote_plus` method as `utf-8`
|
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import urllib
name = u'Mayte_Martín'
print urllib.quote_plus(name.encode('utf-8'), safe=':/')
```
works without a problem for me (Py 2.7.9, Debian)
(I don't know the answer, but I cannot post comments because of my reputation)
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
>
> Unhandled exception type FileNotFoundException myclass.java /myproject/src/mypackage
>
>
>
This is a compiler error. Eclipse is telling you that your program does not compile to java byte code (so of course you can't run it). For now, you can fix it by simply declaring that your program may throw this exception. Like so:
```
public static void main(String[] args) throws FileNotFoundException {
```
`FileNotFoundException` is a "checked exception" (google this) which means that the code has to state what the JVM should do if it is encountered. In code, a try-catch block or a throws declaration indicates to the JVM how to handle the exception.
For future reference, please note that the red squiggly underline in Eclipse means there is a compiler error. If you hover the mouse over the problem, Eclipse will usually suggest some very good solutions. In this case, one suggestion would be to "add a throws clause to main".
|
This is a compiler error. Eclipse is telling you that your program does not compile to java byte code (so of course you can't run it). For now, you can fix it by simply declaring that your program may throw this exception. Like so:
```
public static void main(String[] args) throws IOException{
}
```
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
>
> Unhandled exception type FileNotFoundException myclass.java /myproject/src/mypackage
>
>
>
This is a compiler error. Eclipse is telling you that your program does not compile to java byte code (so of course you can't run it). For now, you can fix it by simply declaring that your program may throw this exception. Like so:
```
public static void main(String[] args) throws FileNotFoundException {
```
`FileNotFoundException` is a "checked exception" (google this) which means that the code has to state what the JVM should do if it is encountered. In code, a try-catch block or a throws declaration indicates to the JVM how to handle the exception.
For future reference, please note that the red squiggly underline in Eclipse means there is a compiler error. If you hover the mouse over the problem, Eclipse will usually suggest some very good solutions. In this case, one suggestion would be to "add a throws clause to main".
|
You can fix it by simply declaring that it throws this exception. Like this:
```java
public static void main(String args[]) throws FileNotFoundException{
FileReader reader=new FileReader("db.properties");
Properties p=new Properties();
p.load(reader);
}
```
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
I see that you tried to specify the full path to your file, but failed because of the following **mistake**:
**you haven't declared or tried to catch** `java.io.FileNotFoundException`.
To fix it, you can replace the line
```
FileReader fr = new FileReader("myscript.abc");
```
with the code:
```
try {
FileReader fr =
new FileReader("/home/jason/workspace/myproject/src/mypackage/myscript.abc");
} catch (FileNotFoundException ex) {
Logger.getLogger(myclass.class.getName()).log(Level.SEVERE, null, ex);
}
```
**The following code is successfully compiled, and it should work:**
```
package mypackage;
import java.io.*;
// It's better to use Camel style name for class name, for example: MyClass.
// In such a way it'll be easier to distinguish class name from variable name.
// This is common practice in Java.
public class myclass {
public static void main(String[] args) {
String myfile =
"/home/jason/workspace/myproject/src/mypackage/myscript.abc";
File file1 = new File(myfile);
if (file1.exists()) {
log("File " + myfile + " exists. length : " + myfile.length());
} else {
log("File " + myfile + " does not exist!");
}
try {
FileReader fr = new FileReader(myfile);
} catch (FileNotFoundException ex) {
// Do something with mistake or ignore
ex.printStackTrace();
}
log("\nAbsPath : " + new File(".").getAbsolutePath());
log("\nuser.dir : " + System.getProperty("user.dir"));
}
public static void log(String s) {
System.out.println(s);
}
}
```
|
You can fix it by simply declaring that it throws this exception. Like this:
```java
public static void main(String args[]) throws FileNotFoundException{
FileReader reader=new FileReader("db.properties");
Properties p=new Properties();
p.load(reader);
}
```
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
I see that you tried to specify the full path to your file, but failed because of the following **mistake**:
**you haven't declared or tried to catch** `java.io.FileNotFoundException`.
To fix it, you can replace the line
```
FileReader fr = new FileReader("myscript.abc");
```
with the code:
```
try {
FileReader fr =
new FileReader("/home/jason/workspace/myproject/src/mypackage/myscript.abc");
} catch (FileNotFoundException ex) {
Logger.getLogger(myclass.class.getName()).log(Level.SEVERE, null, ex);
}
```
**The following code is successfully compiled, and it should work:**
```
package mypackage;
import java.io.*;
// It's better to use Camel style name for class name, for example: MyClass.
// In such a way it'll be easier to distinguish class name from variable name.
// This is common practice in Java.
public class myclass {
public static void main(String[] args) {
String myfile =
"/home/jason/workspace/myproject/src/mypackage/myscript.abc";
File file1 = new File(myfile);
if (file1.exists()) {
log("File " + myfile + " exists. length : " + myfile.length());
} else {
log("File " + myfile + " does not exist!");
}
try {
FileReader fr = new FileReader(myfile);
} catch (FileNotFoundException ex) {
// Do something with mistake or ignore
ex.printStackTrace();
}
log("\nAbsPath : " + new File(".").getAbsolutePath());
log("\nuser.dir : " + System.getProperty("user.dir"));
}
public static void log(String s) {
System.out.println(s);
}
}
```
|
You are expecting Eclipse to run the program in the project root directory. Unless you did something special with your "Run" configuration, I'd be surprised if it really starts there.
Try printing out your current working directory to make sure this is the right path.
Then try verifying that the bin / build directory contains your "\*.abc" files, as they are not Java source files and may not have been copied into the compilation output directory.
Assuming that they are in the compilation directory, rewrite your file loader to use a relative path based on the class loader's path. This will work well in expanded collections of .class files in directories (and later in packed JAR files).
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
I see that you tried to specify the full path to your file, but failed because of the following **mistake**:
**you haven't declared or tried to catch** `java.io.FileNotFoundException`.
To fix it, you can replace the line
```
FileReader fr = new FileReader("myscript.abc");
```
with the code:
```
try {
FileReader fr =
new FileReader("/home/jason/workspace/myproject/src/mypackage/myscript.abc");
} catch (FileNotFoundException ex) {
Logger.getLogger(myclass.class.getName()).log(Level.SEVERE, null, ex);
}
```
**The following code is successfully compiled, and it should work:**
```
package mypackage;
import java.io.*;
// It's better to use Camel style name for class name, for example: MyClass.
// In such a way it'll be easier to distinguish class name from variable name.
// This is common practice in Java.
public class myclass {
public static void main(String[] args) {
String myfile =
"/home/jason/workspace/myproject/src/mypackage/myscript.abc";
File file1 = new File(myfile);
if (file1.exists()) {
log("File " + myfile + " exists. length : " + myfile.length());
} else {
log("File " + myfile + " does not exist!");
}
try {
FileReader fr = new FileReader(myfile);
} catch (FileNotFoundException ex) {
// Do something with mistake or ignore
ex.printStackTrace();
}
log("\nAbsPath : " + new File(".").getAbsolutePath());
log("\nuser.dir : " + System.getProperty("user.dir"));
}
public static void log(String s) {
System.out.println(s);
}
}
```
|
Instead of trying to figure out what's going on, why not *print* what's going on...
Make this change to your code:
```
log(myfile.getName() + "(full path=" + myfile.getAbsolutePath() + ") does not exist");
```
You might find it either isn't using the directory you think, or (depending on your filesystem) it might be trying to create a file whose name is literally `"src/mypackage/myscript.abc"` - i.e. a filename with embedded slashes.
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
>
> Unhandled exception type FileNotFoundException myclass.java /myproject/src/mypackage
>
>
>
This is a compiler error. Eclipse is telling you that your program does not compile to java byte code (so of course you can't run it). For now, you can fix it by simply declaring that your program may throw this exception. Like so:
```
public static void main(String[] args) throws FileNotFoundException {
```
`FileNotFoundException` is a "checked exception" (google this) which means that the code has to state what the JVM should do if it is encountered. In code, a try-catch block or a throws declaration indicates to the JVM how to handle the exception.
For future reference, please note that the red squiggly underline in Eclipse means there is a compiler error. If you hover the mouse over the problem, Eclipse will usually suggest some very good solutions. In this case, one suggestion would be to "add a throws clause to main".
|
I see that you tried to specify the full path to your file, but failed because of the following **mistake**:
**you haven't declared or tried to catch** `java.io.FileNotFoundException`.
To fix it, you can replace the line
```
FileReader fr = new FileReader("myscript.abc");
```
with the code:
```
try {
FileReader fr =
new FileReader("/home/jason/workspace/myproject/src/mypackage/myscript.abc");
} catch (FileNotFoundException ex) {
Logger.getLogger(myclass.class.getName()).log(Level.SEVERE, null, ex);
}
```
**The following code is successfully compiled, and it should work:**
```
package mypackage;
import java.io.*;
// It's better to use Camel style name for class name, for example: MyClass.
// In such a way it'll be easier to distinguish class name from variable name.
// This is common practice in Java.
public class myclass {
public static void main(String[] args) {
String myfile =
"/home/jason/workspace/myproject/src/mypackage/myscript.abc";
File file1 = new File(myfile);
if (file1.exists()) {
log("File " + myfile + " exists. length : " + myfile.length());
} else {
log("File " + myfile + " does not exist!");
}
try {
FileReader fr = new FileReader(myfile);
} catch (FileNotFoundException ex) {
// Do something with mistake or ignore
ex.printStackTrace();
}
log("\nAbsPath : " + new File(".").getAbsolutePath());
log("\nuser.dir : " + System.getProperty("user.dir"));
}
public static void log(String s) {
System.out.println(s);
}
}
```
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
>
> Unhandled exception type FileNotFoundException myclass.java /myproject/src/mypackage
>
>
>
This is a compiler error. Eclipse is telling you that your program does not compile to java byte code (so of course you can't run it). For now, you can fix it by simply declaring that your program may throw this exception. Like so:
```
public static void main(String[] args) throws FileNotFoundException {
```
`FileNotFoundException` is a "checked exception" (google this) which means that the code has to state what the JVM should do if it is encountered. In code, a try-catch block or a throws declaration indicates to the JVM how to handle the exception.
For future reference, please note that the red squiggly underline in Eclipse means there is a compiler error. If you hover the mouse over the problem, Eclipse will usually suggest some very good solutions. In this case, one suggestion would be to "add a throws clause to main".
|
You are expecting Eclipse to run the program in the project root directory. Unless you did something special with your "Run" configuration, I'd be surprised if it really starts there.
Try printing out your current working directory to make sure this is the right path.
Then try verifying that the bin / build directory contains your "\*.abc" files, as they are not Java source files and may not have been copied into the compilation output directory.
Assuming that they are in the compilation directory, rewrite your file loader to use a relative path based on the class loader's path. This will work well in expanded collections of .class files in directories (and later in packed JAR files).
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
This is a compiler error. Eclipse is telling you that your program does not compile to java byte code (so of course you can't run it). For now, you can fix it by simply declaring that your program may throw this exception. Like so:
```
public static void main(String[] args) throws IOException{
}
```
|
You can fix it by simply declaring that it throws this exception. Like this:
```java
public static void main(String args[]) throws FileNotFoundException{
FileReader reader=new FileReader("db.properties");
Properties p=new Properties();
p.load(reader);
}
```
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
Use the `File` object that you created and verified before creating the file reader. Also, you will probably run into problems using relative paths. Why is the line with the full path commented out? In any case, here is the code:
```
if(file1.exists()) {
log(myfile + " exists. length : " + myfile.length());
FileReader fr = new FileReader(file1);
}
```
|
Instead of trying to figure out what's going on, why not *print* what's going on...
Make this change to your code:
```
log(myfile.getName() + "(full path=" + myfile.getAbsolutePath() + ") does not exist");
```
You might find it either isn't using the directory you think, or (depending on your filesystem) it might be trying to create a file whose name is literally `"src/mypackage/myscript.abc"` - i.e. a filename with embedded slashes.
|
6,488,345
|
I have tried both terminate() and kill() but both have failed to stop a subprocess I start in my python code.
Is there any other way?
On Windows with Python 2.7
I have also tried the following with no results...
```
os.kill(p.pid, signal.SIGTERM)
```
and
```
import ctypes
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, theprocess.pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
```
|
2011/06/27
|
[
"https://Stackoverflow.com/questions/6488345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/775302/"
] |
Use the `File` object that you created and verified before creating the file reader. Also, you will probably run into problems using relative paths. Why is the line with the full path commented out? In any case, here is the code:
```
if(file1.exists()) {
log(myfile + " exists. length : " + myfile.length());
FileReader fr = new FileReader(file1);
}
```
|
You are expecting Eclipse to run the program in the project root directory. Unless you did something special with your "Run" configuration, I'd be surprised if it really starts there.
Try printing out your current working directory to make sure this is the right path.
Then try verifying that the bin / build directory contains your "\*.abc" files, as they are not Java source files and may not have been copied into the compilation output directory.
Assuming that they are in the compilation directory, rewrite your file loader to use a relative path based on the class loader's path. This will work well in expanded collections of .class files in directories (and later in packed JAR files).
|
16,696,225
|
How do I instantiate a class if its name is given as a string variable (i.e. dynamically instantiate an object of the class)? Alternatively, how can the following PHP 5.3+ code
```
<?php
namespace Foo;
class Bar {};
$classname = 'Foo\Bar';
$bar = new $classname();
```
be spelled in Python?
**Also see**
[Does python have an equivalent to Java Class.forName()?](https://stackoverflow.com/questions/452969/does-python-have-an-equivalent-to-java-class-forname)
|
2013/05/22
|
[
"https://Stackoverflow.com/questions/16696225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/544463/"
] |
What I would do is move the call to `get_int` into the condition of the while loop:
```
int main(void)
{
int integers;
printf("Enter some integers. Enter 0 to end.\n");
while ((integers = get_int()) != 0)
{
printf("%d is a number\n", integers);
}
return(0);
} // end main
```
The problem with your existing code is that between calling `get_int()` and printing the value, you're not checking to see if it's returned your sentinel of `0`.
The other option would be to add an `if (integers == 0) { break; }` condition in between,
but in my mind doing the assignment in the condition is cleaner.
|
You are correct to suspect that you need to retool the `while` loop. Did you try something like this?
```
for (;;)
{
integers = get_int();
if (integers == 0) break;
printf("%d is a number\n", integers);
}
```
Also, your `get_int` would be better written with `fgets` (or `getline` if available) and `strtol`. `scanf` is seductively convenient but almost always more trouble than it's worth.
|
16,696,225
|
How do I instantiate a class if its name is given as a string variable (i.e. dynamically instantiate an object of the class)? Alternatively, how can the following PHP 5.3+ code
```
<?php
namespace Foo;
class Bar {};
$classname = 'Foo\Bar';
$bar = new $classname();
```
be spelled in Python?
**Also see**
[Does python have an equivalent to Java Class.forName()?](https://stackoverflow.com/questions/452969/does-python-have-an-equivalent-to-java-class-forname)
|
2013/05/22
|
[
"https://Stackoverflow.com/questions/16696225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/544463/"
] |
What I would do is move the call to `get_int` into the condition of the while loop:
```
int main(void)
{
int integers;
printf("Enter some integers. Enter 0 to end.\n");
while ((integers = get_int()) != 0)
{
printf("%d is a number\n", integers);
}
return(0);
} // end main
```
The problem with your existing code is that between calling `get_int()` and printing the value, you're not checking to see if it's returned your sentinel of `0`.
The other option would be to add an `if (integers == 0) { break; }` condition in between,
but in my mind doing the assignment in the condition is cleaner.
|
Consider the core of your loop:
```
integers = get_int();
printf("%d is a number\n", integers);
```
No matter what `get_int()` returns, the `printf` line will be executed. That line needs a separate `if`:
```
integers = get_int();
if (integers != 0) printf("%d is a number\n", integers);
```
|
16,696,225
|
How do I instantiate a class if its name is given as a string variable (i.e. dynamically instantiate an object of the class)? Alternatively, how can the following PHP 5.3+ code
```
<?php
namespace Foo;
class Bar {};
$classname = 'Foo\Bar';
$bar = new $classname();
```
be spelled in Python?
**Also see**
[Does python have an equivalent to Java Class.forName()?](https://stackoverflow.com/questions/452969/does-python-have-an-equivalent-to-java-class-forname)
|
2013/05/22
|
[
"https://Stackoverflow.com/questions/16696225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/544463/"
] |
What I would do is move the call to `get_int` into the condition of the while loop:
```
int main(void)
{
int integers;
printf("Enter some integers. Enter 0 to end.\n");
while ((integers = get_int()) != 0)
{
printf("%d is a number\n", integers);
}
return(0);
} // end main
```
The problem with your existing code is that between calling `get_int()` and printing the value, you're not checking to see if it's returned your sentinel of `0`.
The other option would be to add an `if (integers == 0) { break; }` condition in between,
but in my mind doing the assignment in the condition is cleaner.
|
The simplest way is to put your condition and assignment into the while-loop. Right now your code relies on `integer` being set in the loop, then loops again to check if it's zero.
```
while((integer = get_int()) != 0)
```
will let you check at the same time as assigning integer. Don't forget the parentheses, or your integer value will be the result of `integer = (get_int() != 0)`, because `!=` has higher precedence than `=` in C and C++.
|
16,696,225
|
How do I instantiate a class if its name is given as a string variable (i.e. dynamically instantiate an object of the class)? Alternatively, how can the following PHP 5.3+ code
```
<?php
namespace Foo;
class Bar {};
$classname = 'Foo\Bar';
$bar = new $classname();
```
be spelled in Python?
**Also see**
[Does python have an equivalent to Java Class.forName()?](https://stackoverflow.com/questions/452969/does-python-have-an-equivalent-to-java-class-forname)
|
2013/05/22
|
[
"https://Stackoverflow.com/questions/16696225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/544463/"
] |
You are correct to suspect that you need to retool the `while` loop. Did you try something like this?
```
for (;;)
{
integers = get_int();
if (integers == 0) break;
printf("%d is a number\n", integers);
}
```
Also, your `get_int` would be better written with `fgets` (or `getline` if available) and `strtol`. `scanf` is seductively convenient but almost always more trouble than it's worth.
|
Consider the core of your loop:
```
integers = get_int();
printf("%d is a number\n", integers);
```
No matter what `get_int()` returns, the `printf` line will be executed. That line needs a separate `if`:
```
integers = get_int();
if (integers != 0) printf("%d is a number\n", integers);
```
|
16,696,225
|
How do I instantiate a class if its name is given as a string variable (i.e. dynamically instantiate an object of the class)? Alternatively, how can the following PHP 5.3+ code
```
<?php
namespace Foo;
class Bar {};
$classname = 'Foo\Bar';
$bar = new $classname();
```
be spelled in Python?
**Also see**
[Does python have an equivalent to Java Class.forName()?](https://stackoverflow.com/questions/452969/does-python-have-an-equivalent-to-java-class-forname)
|
2013/05/22
|
[
"https://Stackoverflow.com/questions/16696225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/544463/"
] |
The simplest way is to put your condition and assignment into the while-loop. Right now your code relies on `integer` being set in the loop, then loops again to check if it's zero.
```
while((integer = get_int()) != 0)
```
will let you check at the same time as assigning integer. Don't forget the parentheses, or your integer value will be the result of `integer = (get_int() != 0)`, because `!=` has higher precedence than `=` in C and C++.
|
Consider the core of your loop:
```
integers = get_int();
printf("%d is a number\n", integers);
```
No matter what `get_int()` returns, the `printf` line will be executed. That line needs a separate `if`:
```
integers = get_int();
if (integers != 0) printf("%d is a number\n", integers);
```
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide me on how I can make the verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
@Atihska, it seems you have set up this API:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification
```
From what I understand, Google Drive's HTML tag verification method will try to verify the metadata in the **home page**. As per Google, the home page here is:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/
```
But the above URL won't work because it doesn't have a stage name (like "prod") in it.
The right way to do this would be to use a custom domain name. So, you will need to buy a domain name like foodomain.com and use the [custom domain](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) name option in API Gateway to point to your API. That way, you can make **foodomain.com** (the home page) point to **<https://x8f3>\*\*\*\*\*\*.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification**
Also, you can simply use [Mock integration](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html) instead of Lambda as this is just static content.
|
I don't know for sure how the registration process works for verifying the webhook address, but it is certainly possible to configure the webhook itself in API Gateway.
API Gateway supports [custom domain names](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html "custom domain names") like ***api.example.com*** if Google doesn't accept the default API domain name.
Edit:
Based on those options, it seems like you could use the default endpoint xxxx.execute-api...amazonaws.com if you configure the HTML meta tag.
You could do this by setting up a GET method on (I guess) the root resource, backed by a MOCK integration. The integration response can return static content, so in the integration response section you could paste whatever HTML Google is looking for. You would probably also need to set the response 'Content-Type' header to 'text/html'.
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide here of how I can make verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
I don't know for sure how the registration process works for verifying the webhook address, but it is certainly possible to configure the webhook itself in API Gateway.
API Gateway supports [custom domain names](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html "custom domain names") like ***api.example.com*** if Google doesn't accept the default API domain name.
Edit:
Based on those options, it seems like you could use the default endpoint xxxx.execute-api...amazonaws.com if you configure the HTML meta tag.
You could do this by setting up a GET method on (I guess) the root resource, backed by a MOCK integration. The integration response can return static content, so in the integration response section you could paste whatever HTML Google is looking for. You would probably also need to set the response 'Content-Type' header to 'text/html'.
|
@Balaji I was able to figure out the API mapping in order to link a custom subdomain with the API. But I get 'Missing Authentication token' when I use <https://api.example.com>, in this case lambdanotifications.***.com. I also tried lambdanotifications.***.com/notifications and lambdanotifications.\*\*\*.com/notifications/test in a browser, but same thing.
[](https://i.stack.imgur.com/LJNpy.png)
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide here of how I can make verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
I got this working finally.
So as mentioned by @Balaji and @Jack Kohn, I had to use custom domains. I followed this tutorial <http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html>
And the last step of mapping body templates is here:
[](https://i.stack.imgur.com/Ny5IT.png)
Sorry for so much redaction, but I had to hide the values.
|
I don't know for sure how the registration process works for verifying the webhook address, but it is certainly possible to configure the webhook itself in API Gateway.
API Gateway supports [custom domain names](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html "custom domain names") like ***api.example.com*** if Google doesn't accept the default API domain name.
Edit:
Based on those options, it seems like you could use the default endpoint xxxx.execute-api...amazonaws.com if you configure the HTML meta tag.
You could do this by setting up a GET method on (I guess) the root resource, backed by a MOCK integration. The integration response can return static content, so in the integration response section you could paste whatever HTML Google is looking for. You would probably also need to set the response 'Content-Type' header to 'text/html'.
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide here of how I can make verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
I don't know for sure how the registration process works for verifying the webhook address, but it is certainly possible to configure the webhook itself in API Gateway.
API Gateway supports [custom domain names](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html "custom domain names") like ***api.example.com*** if Google doesn't accept the default API domain name.
Edit:
Based on those options, it seems like you could use the default endpoint xxxx.execute-api...amazonaws.com if you configure the HTML meta tag.
You could do this by setting up a GET method on (I guess) the root resource, backed by a MOCK integration. The integration response can return static content, so in the integration response section you could paste whatever HTML Google is looking for. You would probably also need to set the response 'Content-Type' header to 'text/html'.
|
Looks like the question and answers here are quite old. Providing updated references and a typical solution below.
<https://cloud.google.com/monitoring/alerts/using-channels-api>
<https://developers.google.com/drive/api/guides/push>
**Initial request to establish the "channel"**. This will have a default lifetime of 6 hours. If persistence is needed, set it up to re-trigger every 21595 seconds (5 hours 55 minutes). For Lambdas, this is done via CloudWatch.
```
POST https://www.googleapis.com/drive/v3/changes/watch
Authorization: Bearer auth_token_for_current_user
Content-Type: application/json
{
"id": "4ba78bf0-6a47-11e2-bcfd-0800200c9a77", // Your channel ID.
"type": "web_hook",
"address": "domainhere", // Your receiving URL
"token": "target=myApp-myChangesChannelDest", // (Optional) Your channel token.
"expiration": 1426325213000 // (Optional) Your requested channel expiration time.
}
```
Note: this can be split into two separate Lambdas, as long as the Lambda establishing the channel uses the webhook receiver's URL as the channel address. I recommend service accounts acting with the permissions of a user; this avoids 3-step auth for all the APIs I've run into so far (Gmail, Workspace, Sheets, Forms, Calendar, GEO, & Reverse GEO).
**Handle the request**
```
{
"kind": "api#channel",
"id": "01234567-89ab-cdef-0123456789ab"", // ID you specified for this channel.
"resourceId": "o3hgv1538sdjfh", // ID of the watched resource.
"resourceUri": "https://www.googleapis.com/drive/v3/files/o3hgv1538sdjfh", // Version-specific ID of the watched resource.
"token": "target=myApp-myFilesChannelDest", // Present only if one was provided.
"expiration": 1426325213000, // Actual expiration time as Unix timestamp (in ms), if applicable.
"body": "response"
}
```
Refer to the event docs for specifics on what their different payloads look like.
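For a Lambda written in Python, the initial watch request above can be issued with a plain HTTPS POST. This is only a hedged sketch: it assumes a valid OAuth access token and a start page token are already available (see the Drive push docs linked above), and the channel id and webhook address are placeholders:

```python
import json
import uuid

import requests  # third-party; assumed to be bundled with the Lambda

def register_drive_watch(access_token, webhook_url, page_token):
    """Establish a Drive changes watch channel (roughly the POST shown above)."""
    body = {
        "id": str(uuid.uuid4()),   # our channel ID
        "type": "web_hook",
        "address": webhook_url,    # the API Gateway / custom-domain receiver URL
    }
    resp = requests.post(
        "https://www.googleapis.com/drive/v3/changes/watch",
        params={"pageToken": page_token},  # from changes.getStartPageToken
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
        data=json.dumps(body),
    )
    resp.raise_for_status()
    return resp.json()
```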
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide here of how I can make verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
@Atihska, it seems you have set up this API:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification
```
From what I understand, Google Drive's HTML tag verification method will try to verify the metadata in the **home page**. As per Google, the home page here is:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/
```
But the above URL won't work because it doesn't have a stage name (like "prod") in it.
The right way to do this would be to use a custom domain name. So, you will need to buy a domain name like foodomain.com and use [custom domain](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) name option in API Gateway to point to your API. That way, you can make **foodomain.com** (home page) to point to **<https://x8f3>\*\*\*\*\*\*.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification**
Also, you can simply use [Mock integration](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html) instead of Lambda as this is just static content.
|
@Balaji I was able to figure out API mapping in order to link the custom subdomain with the API. But I get 'Missing Authentication token' when I use <https://api.example.com>, in this case lambdanotifications.***.com. I also tried lambdanotifications.***.com/notifications and lambdanotifications.\*\*\*.com/notifications/test in the browser, but the same thing happens.
[](https://i.stack.imgur.com/LJNpy.png)
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide me on how I can make the verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
I got this working finally.
So as mentioned by @Balaji and @Jack Kohn, I have to use custom domains. I followed this tutorial <http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html>
And the last step of mapping body templates is here:
[](https://i.stack.imgur.com/Ny5IT.png)
Sorry for so much cropping, but I had to hide the values provided.
|
@Atihska, it seems you have setup this API:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification
```
From what I understand, Google Drive's HTML tag verification method will try to verify the metadata in the **home page**. As per Google, the home page here is:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/
```
But the above URL won't work because it doesn't have a stage name (like "prod") in it.
The right way to do this would be to use a custom domain name. So, you will need to buy a domain name like foodomain.com and use [custom domain](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) name option in API Gateway to point to your API. That way, you can make **foodomain.com** (home page) to point to **<https://x8f3>\*\*\*\*\*\*.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification**
Also, you can simply use [Mock integration](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html) instead of Lambda as this is just static content.
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide me on how I can make the verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
@Atihska, it seems you have setup this API:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification
```
From what I understand, Google Drive's HTML tag verification method will try to verify the metadata in the **home page**. As per Google, the home page here is:
```
https://x8f3******.execute-api.us-east-1.amazonaws.com/
```
But the above URL won't work because it doesn't have a stage name (like "prod") in it.
The right way to do this would be to use a custom domain name. So, you will need to buy a domain name like foodomain.com and use [custom domain](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html) name option in API Gateway to point to your API. That way, you can make **foodomain.com** (home page) to point to **<https://x8f3>\*\*\*\*\*\*.execute-api.us-east-1.amazonaws.com/prod/google-endpointverification**
Also, you can simply use [Mock integration](http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html) instead of Lambda as this is just static content.
|
Looks like the question and answer to this are quite old. Providing updated references and typ. solution for this.
<https://cloud.google.com/monitoring/alerts/using-channels-api>
<https://developers.google.com/drive/api/guides/push>
**Initial request to establish the "channel"**. This will have a default lifetime of 6 hours. If persistence is needed, set it up to re-trigger every 21595 seconds (5 hours 55 minutes). For Lambdas, this is done via CloudWatch.
```
POST https://www.googleapis.com/drive/v3/changes/watch
Authorization: Bearer auth_token_for_current_user
Content-Type: application/json
{
"id": "4ba78bf0-6a47-11e2-bcfd-0800200c9a77", // Your channel ID.
"type": "web_hook",
"address": "domainhere", // Your receiving URL
"token": "target=myApp-myChangesChannelDest", // (Optional) Your channel token.
"expiration": 1426325213000 // (Optional) Your requested channel expiration time.
}
```
Note: This can be separated into two separate lambdas, as long as the lambda establishing the channel uses the webhook receiver's URL as the endpoint. I recommend service accounts acting with the permissions of a user; this avoids 3-legged auth for all the APIs integrated so far (Gmail, Workspace, Sheets, Forms, Calendar, GEO, & Reverse GEO).
**Handle the request**
```
{
"kind": "api#channel",
"id": "01234567-89ab-cdef-0123456789ab"", // ID you specified for this channel.
"resourceId": "o3hgv1538sdjfh", // ID of the watched resource.
"resourceUri": "https://www.googleapis.com/drive/v3/files/o3hgv1538sdjfh", // Version-specific ID of the watched resource.
"token": "target=myApp-myFilesChannelDest", // Present only if one was provided.
"expiration": 1426325213000, // Actual expiration time as Unix timestamp (in ms), if applicable.
"body": "response"
}
```
Refer to the event docs for specifics on what their different payloads look like.
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide me on how I can make the verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
I got this working finally.
So as mentioned by @Balaji and @Jack Kohn, I have to use custom domains. I followed this tutorial <http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html>
And the last step of mapping body templates is here:
[](https://i.stack.imgur.com/Ny5IT.png)
Sorry for so much cropping, but I had to hide the values provided.
|
@Balaji I was able to figure out API mapping in order to link the custom subdomain with the API. But I get 'Missing Authentication token' when I use <https://api.example.com>, in this case lambdanotifications.***.com. I also tried lambdanotifications.***.com/notifications and lambdanotifications.\*\*\*.com/notifications/test in the browser, but the same thing happens.
[](https://i.stack.imgur.com/LJNpy.png)
|
39,734,278
|
Is it possible to receive google drive push notifications if coded on aws lambda via api gateway?
Google drive requires the webhook address to be verified so is it possible to verify api gateway endpoint?
Here are the possible ways of verifying the endpoint:
1) Upload a file and test via /file and the rest are below:
[](https://i.stack.imgur.com/avpdw.jpg)
Well, here is the image of how google does metatag verification: In order to get the required verification meta tag, I need to first enter which url/endpoint I want to verify. So below image shows the endpoint that I created:
[](https://i.stack.imgur.com/Lb3SR.png)
Then here in webmaster I am verifying the url: But the verification fails.
[](https://i.stack.imgur.com/31DNj.png)
Here is the python code that I added[](https://i.stack.imgur.com/ThNZD.png)
Please guide me on how I can make the verification successful!
|
2016/09/27
|
[
"https://Stackoverflow.com/questions/39734278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2005490/"
] |
I got this working finally.
So as mentioned by @Balaji and @Jack Kohn, I have to use custom domains. I followed this tutorial <http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html>
And the last step of mapping body templates is here:
[](https://i.stack.imgur.com/Ny5IT.png)
Sorry for so much cropping, but I had to hide the values provided.
|
Looks like the question and answer to this are quite old. Providing updated references and typ. solution for this.
<https://cloud.google.com/monitoring/alerts/using-channels-api>
<https://developers.google.com/drive/api/guides/push>
**Initial request to establish the "channel"**. This will have a default lifetime of 6 hours. If persistence is needed, set it up to re-trigger every 21595 seconds (5 hours 55 minutes). For Lambdas, this is done via CloudWatch.
```
POST https://www.googleapis.com/drive/v3/changes/watch
Authorization: Bearer auth_token_for_current_user
Content-Type: application/json
{
"id": "4ba78bf0-6a47-11e2-bcfd-0800200c9a77", // Your channel ID.
"type": "web_hook",
"address": "domainhere", // Your receiving URL
"token": "target=myApp-myChangesChannelDest", // (Optional) Your channel token.
"expiration": 1426325213000 // (Optional) Your requested channel expiration time.
}
```
Note: This can be separated into two separate lambdas, as long as the lambda establishing the channel uses the webhook receiver's URL as the endpoint. I recommend service accounts acting with the permissions of a user; this avoids 3-legged auth for all the APIs integrated so far (Gmail, Workspace, Sheets, Forms, Calendar, GEO, & Reverse GEO).
**Handle the request**
```
{
"kind": "api#channel",
"id": "01234567-89ab-cdef-0123456789ab"", // ID you specified for this channel.
"resourceId": "o3hgv1538sdjfh", // ID of the watched resource.
"resourceUri": "https://www.googleapis.com/drive/v3/files/o3hgv1538sdjfh", // Version-specific ID of the watched resource.
"token": "target=myApp-myFilesChannelDest", // Present only if one was provided.
"expiration": 1426325213000, // Actual expiration time as Unix timestamp (in ms), if applicable.
"body": "response"
}
```
Refer to the event docs for specifics on what their different payloads look like.
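On the receiving side, a hedged sketch of a Lambda handler (assuming an API Gateway proxy integration) that picks the notification metadata out of the X-Goog-* headers could look like this:
```
import json

def lambda_handler(event, context):
    # Google sends the notification metadata in HTTP headers, not the body.
    headers = {k.lower(): v for k, v in (event.get('headers') or {}).items()}
    channel_id = headers.get('x-goog-channel-id')
    resource_state = headers.get('x-goog-resource-state')  # e.g. 'sync', 'change'
    resource_id = headers.get('x-goog-resource-id')

    print(json.dumps({'channel': channel_id,
                      'state': resource_state,
                      'resource': resource_id}))

    # Respond 200 quickly so Google does not retry the notification.
    return {'statusCode': 200, 'body': ''}
```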
|
17,050,377
|
I'm following the tutorial "Think Python" and I'm supposed to install the package called swampy. I'm running python 2.7.3 although I also have python 3 installed. I extracted the package and placed it in site-packages:
C:\Python27\Lib\site-packages\swampy-2.1.1
C:\Python31\Lib\site-packages\swampy-2.1.1
But when I try to import a module from it within Python:
```
import swampy.TurtleWorld
```
I just get no module named swampy.TurtleWorld.
I'd really appreciate it if someone could help me out, here's a link to the lesson if that helps:
<http://www.greenteapress.com/thinkpython/html/thinkpython005.html>
|
2013/06/11
|
[
"https://Stackoverflow.com/questions/17050377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2475605/"
] |
>
> I extracted the package and placed it in site-packages:
>
>
>
No, that's the wrong way of "installing" a package. Python packages come with a `setup.py` script that should be used to install them. Simply do:
```
python setup.py install
```
And the module will be installed correctly in the site-packages of the python interpreter you are using. If you want to install it for a specific python version use `python2`/`python3` instead of `python`.
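As a side note, if pip is available for your Python 2.7 install, the package should also be installable straight from PyPI instead of running setup.py by hand (assuming your pip points at the interpreter you want):
```
pip install swampy
# or, to target a specific interpreter explicitly:
python2.7 -m pip install swampy
```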
|
If anyone else is having trouble with this on Windows, I just added my site-packages directory to my PATH variable and it worked like any normal module import.
```
C:\Python34\Lib\site-packages
```
Hope it helps.
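Another quick workaround, if you just want the import to resolve without installing anything, is to append the extracted folder to sys.path at runtime. This is only a sketch and assumes the extracted swampy-2.1.1 directory contains the `swampy` package folder; adjust the path for your layout:
```
import sys

# Hypothetical path - point this at the folder that contains the "swampy" package.
sys.path.append(r"C:\Python27\Lib\site-packages\swampy-2.1.1")

import swampy.TurtleWorld
```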
|
70,393,570
|
I am trying to solve the question in which I am asked to use property method to count the number of times the circles are created . Below is the code for the same.
```
import os
import sys
#Add Circle class implementation below
class Circle:
counter = 0
def __init__(self,radius):
self.radius = radius
Circle.counter = Circle.counter + 1
def area(self):
return self.radius*self.radius*3.14
def counters():
print(Circle.counter)
no_of_circles = property(counter)
if __name__ == "__main__":
res_lst = list()
lst = list(map(lambda x: float(x.strip()), input().split(',')))
for radius in lst:
res_lst.append(Circle(radius).area())
print(str(res_lst), str(Circle.no_of_circles))
```
The above code gives the correct output for the area, but the counter should be 3; instead I am getting the output below for input = 1,2,3:
```
[3.14, 12.56, 28.26] <property object at 0x0000024AB3234D60>
```
I have tried everything but no luck. In the main section of the code, no\_of\_circles is accessed as Circle.no\_of\_circles, which suggests to me that it will use Python's property mechanism. But the output is wrong. Please help me find where I am going wrong.
|
2021/12/17
|
[
"https://Stackoverflow.com/questions/70393570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10254216/"
] |
A small calculation may help:
```
...
WHERE
([user_id] = 1) AND
([year] * 100 + [week_number]) BETWEEN 202152 AND 202205
```
or
```
...
WHERE
([user_id] = 1) AND
(202152 <= [year] * 100 + [week_number]) AND
([year] * 100 + [week_number] <= 202205)
```
|
You could...use variables:
```
DECLARE @myUser int = 1,
@startYear int = 2021,
@endYear int = 2022,
@startWeek int = 5,
@endWeek INT = 13;
SELECT *
FROM [db].[dbo].[table]
WHERE [user_id] = @myUser
AND (
(@startYear = [year] AND @startWeek = [week_number] AND @startYear = @endYear AND @startWeek = @endWeek) --the selection is only one week
OR ([year] = @startYear AND [week_number] >= @startWeek AND [week_number] <= @endWeek AND @startYear = @endYear AND @endWeek > @startWeek) --same year, multiple weeks
OR (@endYear > @startYear -- Spans multiple years
AND (
([year] >= @startYear AND [year] < @endYear AND [week_number] >= @startWeek ) --anything after start week for each year that is before the end year
OR ([year] <= @endYear AND [year] > @startYear AND [week_number] <= @endWeek) --anything before end week for any year after the start year
)
);
```
|
46,032,570
|
I have a Django REST backend, and it has a `/users` endpoint where I can add new users through `POST` method from frontend.
`/users` endpoint url:
`http://192.168.201.211:8024/users/`
In this endpoint I can view all users information and add new user, so I must avoid others entry it except Administrator. I create a superuser `admin` with password `admin123` by `python manage.py createsuperuser`.
My question is, If I want to do a HTTP `POST` from frontend(I use Angular) I have to pass the Administrator's user name and password, `admin` and `admin123`, along with `POST` head information. So I let others know the user name and password who check the source code of frontend.
Is there any other way to do this `Authentication` without exposing Administrator's user name and password to others?
|
2017/09/04
|
[
"https://Stackoverflow.com/questions/46032570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2803344/"
] |
From the docs:
You have to download and install the ARCore Services.
>
> You must use a supported, physical device. ARCore does not support virtual devices such as the Android Emulator. To prepare your device:
>
>
> Enable developer options
>
> Enable USB debugging
>
> Download the [ARCore Service](https://github.com/google-ar/arcore-android-sdk/releases/download/sdk-preview/arcore-preview.apk), then install it with the following adb command:
>
> `adb install -r -d arcore-preview.apk`
>
>
>
See: <https://developers.google.com/ar/develop/java/getting-started> Prepare your device
|
I think you need to install Tango Core.
<https://play.google.com/store/apps/details?id=com.google.tango&hl=zh_TW>
|
9,319,767
|
I have a color photo of apple, how can I show only its outline (inside white, background black) with python/PIL?
|
2012/02/16
|
[
"https://Stackoverflow.com/questions/9319767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1212200/"
] |
Something like this should work.
```
from PIL import Image, ImageFilter
image = Image.open('your_image.png')
image = image.filter(ImageFilter.FIND_EDGES)
image.save('new_name.png')
```
If that doesn't give you the result you are looking for, then you could try implementing Prewitt, Sobel or Canny edge detection using PIL, Python and other libraries; see this related [question](https://stackoverflow.com/questions/1603688/python-image-recognition) and the following [example](http://cyroforge.wordpress.com/2012/01/21/canny-edge-detection/).
If you are trying to do particle detection/analysis rather than just edge detection, you can try using [py4ij](http://code.google.com/p/py4ij/) to call the ImageJ method you linked to, which should give you the same result, or try another particle analysis Python library such as [EMAN](http://www.msg.ucsf.edu/local/programs/eman/index.html); alternatively, you can write a particle detection algorithm yourself using PIL, SciPy and NumPy.
|
If your object and background have fairly well contrast
```
from PIL import Image
image = Image.open(your_image_file)
mask=image.convert("L")
th=150 # the value has to be adjusted for an image of interest
mask = mask.point(lambda i: i < th and 255)
mask.save(file_where_to_save_result)
```
If the contrast is higher in one of the three color channels, you may split the image into bands instead of converting it into grey scale (see the sketch below).
If the image or background is fairly complicated, more sophisticated processing will be required.
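As a rough sketch of the band-splitting idea (the threshold value and the choice of the red band are assumptions to be tuned per image):
```
from PIL import Image

image = Image.open("your_image_file").convert("RGB")
r, g, b = image.split()                         # work on individual color bands
th = 150                                        # tune this for the image at hand
mask = r.point(lambda i: 255 if i < th else 0)  # white inside, black background
mask.save("mask_from_red_band.png")
```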
|
9,319,767
|
I have a color photo of apple, how can I show only its outline (inside white, background black) with python/PIL?
|
2012/02/16
|
[
"https://Stackoverflow.com/questions/9319767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1212200/"
] |
Something like this should work.
```
from PIL import Image, ImageFilter
image = Image.open('your_image.png')
image = image.filter(ImageFilter.FIND_EDGES)
image.save('new_name.png')
```
If that doesn't give you the result you are looking for, then you could try implementing Prewitt, Sobel or Canny edge detection using PIL, Python and other libraries; see this related [question](https://stackoverflow.com/questions/1603688/python-image-recognition) and the following [example](http://cyroforge.wordpress.com/2012/01/21/canny-edge-detection/).
If you are trying to do particle detection/analysis rather than just edge detection, you can try using [py4ij](http://code.google.com/p/py4ij/) to call the ImageJ method you linked to, which should give you the same result, or try another particle analysis Python library such as [EMAN](http://www.msg.ucsf.edu/local/programs/eman/index.html); alternatively, you can write a particle detection algorithm yourself using PIL, SciPy and NumPy.
|
[Apple vs Lines](https://i.stack.imgur.com/sN9qU.jpg)
You can do it using just PIL and Python in less than 200 lines of code, although it would be easier to use Canny edge detection from a library.
Here are the steps: convert to grayscale for luminance, detect edges with a Sobel kernel, then thin the edges using the magnitude and slope obtained from Sobel.
```
from PIL import Image
import math
def one_to_two_dimension_array(list_,columns):
#use list slice
return [ list_[i:i+columns] for i in range(0, len(list_),columns) ]
def flatten_matrix(matrix):
return [val for sublist in matrix for val in sublist]
def matrix_convole(matrix, kernel_matrix, multiplier):
return_list=[]
return_matrix=[]
border=(len(kernel_matrix) - 1) / 2;border=int(border)
center_kernel_pos=border
for matrix_row in range( len( matrix )):
for matrix_col in range(len( matrix[matrix_row] ) ):
accumulator = 0
if (matrix_row - border)<0 or \
(matrix_col-border)< 0 or \
(matrix_row+border) > (len( matrix )-border) or \
(matrix_col+border) > (len( matrix[matrix_row] )-border):
return_list.append(matrix[matrix_row][matrix_col])
continue
for kernel_row in range(len (kernel_matrix) ):
for kernel_col in range(len (kernel_matrix[kernel_row]) ):
relative_row= kernel_row - center_kernel_pos
relative_col= kernel_col - center_kernel_pos
kernel = kernel_matrix[kernel_row][kernel_col]
pixel = matrix [matrix_row + relative_row] [matrix_col + relative_col]
accumulator += pixel * kernel
return_list.append(accumulator* multiplier )
return_matrix = one_to_two_dimension_array( return_list, len( matrix[0] ) )
return return_matrix
def canny_round_degree(deg):
#0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5, 180
if deg >= 0 and deg <= 22.5:
return 0
elif deg >= 22.5 and deg <= 67.5:
return 45
elif deg > 67.5 and deg <=112.5:
return 90
elif deg > 112.5 and deg <=157.5:
return 135
elif deg >= 157.5 and deg <= 180:
return 0
if deg <= 0 and deg >= -22.5:
return 0
elif deg <= -22.5 and deg >= -67.5:
return 135
elif deg < -67.5 and deg >= -112.5:
return 90
elif deg < -112.5 and deg >= -157.5:
return 45
elif deg <= -157.5 and deg >= -180:
return 0
image_path='apple.jpg'
gaussian_5x5_kernel=[[2,4,5,4,2],[4,9,12,9,4],[5,12,15,12,5],[4,9,12,9,4],[2,4,5,4,2]] #multiplier 1/159
sobel_kernel_gx=[[-1,0,1],[-2,0,2],[-1,0,1]]
sobel_kernel_gy=[[-1,-2,-1],[0,0,0],[1,2,1]]
im_list=list(Image.open(image_path).convert('L').getdata(0)) #grayscale, get first channel
im_width=Image.open(image_path).width
im_height=Image.open(image_path).height
im_matrix = one_to_two_dimension_array(im_list, im_width)
im_matrix_blur=matrix_convole(im_matrix,gaussian_5x5_kernel, 1/159)
sobel_gx_matrix=matrix_convole(im_matrix_blur,sobel_kernel_gx, 1)
sobel_gy_matrix=matrix_convole(im_matrix_blur,sobel_kernel_gy, 1)
sobel_gy_list=flatten_matrix(sobel_gy_matrix)
sobel_gx_list=flatten_matrix(sobel_gx_matrix)
sobel_g_magnitude_list = [math.hypot(gy,gx) for gx,gy in zip(sobel_gx_list,sobel_gy_list)]
sobel_g_angle_list = [ canny_round_degree(math.degrees(math.atan2(gy,gx))) for gx,gy in zip(sobel_gx_list,sobel_gy_list)]
sobel_g_angle_matrix = one_to_two_dimension_array(sobel_g_angle_list, im_width)
sobel_g_magnitude_matrix = one_to_two_dimension_array(sobel_g_magnitude_list, im_width)
suppression_list = []
for s_row in range( len( sobel_g_angle_matrix)):
for s_col in range(len( sobel_g_angle_matrix[s_row] ) ):
if (s_row - 1)<0 or \
(s_col-1)< 0 or \
(s_row+1) > (len( sobel_g_angle_matrix )-1) or \
(s_col+1) > (len( sobel_g_angle_matrix[s_row] )-1):
suppression_list.append(0)
continue
magnitude_in_question = sobel_g_magnitude_matrix[s_row][s_col]
#threshold on the magnitude; the cut-off (36 here) is arbitrary and can be tuned
if magnitude_in_question < 36:
suppression_list.append(0)
continue
angle_in_question = sobel_g_angle_matrix[s_row][s_col]
east_magnitude = sobel_g_magnitude_matrix[s_row][s_col-1]
west_magnitude = sobel_g_magnitude_matrix[s_row][s_col+1]
north_magnitude = sobel_g_magnitude_matrix[s_row-1][s_col]
south_magnitude = sobel_g_magnitude_matrix[s_row+1][s_col]
north_east_magnitude = sobel_g_magnitude_matrix[s_row-1][s_col-1]
north_west_magnitude = sobel_g_magnitude_matrix[s_row-1][s_col+1]
south_east_magnitude = sobel_g_magnitude_matrix[s_row+1][s_col-1]
south_west_magnitude = sobel_g_magnitude_matrix[s_row+1][s_col+1]
if angle_in_question == 0 and magnitude_in_question > east_magnitude \
and magnitude_in_question > west_magnitude:
suppression_list.append(1)
elif angle_in_question == 90 and magnitude_in_question > north_magnitude \
and magnitude_in_question > south_magnitude:
suppression_list.append(1)
elif angle_in_question == 135 and magnitude_in_question > north_west_magnitude \
and magnitude_in_question > south_east_magnitude:
suppression_list.append(1)
elif angle_in_question == 45 and magnitude_in_question > north_east_magnitude \
and magnitude_in_question > south_west_magnitude:
suppression_list.append(1)
else:
suppression_list.append(0)
new_img = Image.new('1', (im_width,im_height)) #bw=1;grayscale =L
new_img.putdata( suppression_list )
new_img.save('apple-lines.png', 'PNG')
```
|
9,319,767
|
I have a color photo of apple, how can I show only its outline (inside white, background black) with python/PIL?
|
2012/02/16
|
[
"https://Stackoverflow.com/questions/9319767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1212200/"
] |
If your object and background have fairly well contrast
```
from PIL import Image
image = Image.open(your_image_file)
mask=image.convert("L")
th=150 # the value has to be adjusted for an image of interest
mask = mask.point(lambda i: i < th and 255)
mask.save(file_where_to_save_result)
```
If the contrast is higher in one of the three color channels, you may split the image into bands instead of converting it into grey scale.
If the image or background is fairly complicated, more sophisticated processing will be required.
|
[Apple vs Lines](https://i.stack.imgur.com/sN9qU.jpg)
You can do it using just PIL and Python in less than 200 lines of code, although it would be easier to use Canny edge detection from a library.
Here are the steps: convert to grayscale for luminance, detect edges with a Sobel kernel, then thin the edges using the magnitude and slope obtained from Sobel.
```
from PIL import Image
import math
def one_to_two_dimension_array(list_,columns):
#use list slice
return [ list_[i:i+columns] for i in range(0, len(list_),columns) ]
def flatten_matrix(matrix):
return [val for sublist in matrix for val in sublist]
def matrix_convole(matrix, kernel_matrix, multiplier):
return_list=[]
return_matrix=[]
border=(len(kernel_matrix) - 1) / 2;border=int(border)
center_kernel_pos=border
for matrix_row in range( len( matrix )):
for matrix_col in range(len( matrix[matrix_row] ) ):
accumulator = 0
if (matrix_row - border)<0 or \
(matrix_col-border)< 0 or \
(matrix_row+border) > (len( matrix )-border) or \
(matrix_col+border) > (len( matrix[matrix_row] )-border):
return_list.append(matrix[matrix_row][matrix_col])
continue
for kernel_row in range(len (kernel_matrix) ):
for kernel_col in range(len (kernel_matrix[kernel_row]) ):
relative_row= kernel_row - center_kernel_pos
relative_col= kernel_col - center_kernel_pos
kernel = kernel_matrix[kernel_row][kernel_col]
pixel = matrix [matrix_row + relative_row] [matrix_col + relative_col]
accumulator += pixel * kernel
return_list.append(accumulator* multiplier )
return_matrix = one_to_two_dimension_array( return_list, len( matrix[0] ) )
return return_matrix
def canny_round_degree(deg):
#0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5, 180
if deg >= 0 and deg <= 22.5:
return 0
elif deg >= 22.5 and deg <= 67.5:
return 45
elif deg > 67.5 and deg <=112.5:
return 90
elif deg > 112.5 and deg <=157.5:
return 135
elif deg >= 157.5 and deg <= 180:
return 0
if deg <= 0 and deg >= -22.5:
return 0
elif deg <= -22.5 and deg >= -67.5:
return 135
elif deg < -67.5 and deg >= -112.5:
return 90
elif deg < -112.5 and deg >= -157.5:
return 45
elif deg <= -157.5 and deg >= -180:
return 0
image_path='apple.jpg'
gaussian_5x5_kernel=[[2,4,5,4,2],[4,9,12,9,4],[5,12,15,12,5],[4,9,12,9,4],[2,4,5,4,2]] #multiplier 1/159
sobel_kernel_gx=[[-1,0,1],[-2,0,2],[-1,0,1]]
sobel_kernel_gy=[[-1,-2,-1],[0,0,0],[1,2,1]]
im_list=list(Image.open(image_path).convert('L').getdata(0)) #grayscale, get first channel
im_width=Image.open(image_path).width
im_height=Image.open(image_path).height
im_matrix = one_to_two_dimension_array(im_list, im_width)
im_matrix_blur=matrix_convole(im_matrix,gaussian_5x5_kernel, 1/159)
sobel_gx_matrix=matrix_convole(im_matrix_blur,sobel_kernel_gx, 1)
sobel_gy_matrix=matrix_convole(im_matrix_blur,sobel_kernel_gy, 1)
sobel_gy_list=flatten_matrix(sobel_gy_matrix)
sobel_gx_list=flatten_matrix(sobel_gx_matrix)
sobel_g_magnitude_list = [math.hypot(gy,gx) for gx,gy in zip(sobel_gx_list,sobel_gy_list)]
sobel_g_angle_list = [ canny_round_degree(math.degrees(math.atan2(gy,gx))) for gx,gy in zip(sobel_gx_list,sobel_gy_list)]
sobel_g_angle_matrix = one_to_two_dimension_array(sobel_g_angle_list, im_width)
sobel_g_magnitude_matrix = one_to_two_dimension_array(sobel_g_magnitude_list, im_width)
suppression_list = []
for s_row in range( len( sobel_g_angle_matrix)):
for s_col in range(len( sobel_g_angle_matrix[s_row] ) ):
if (s_row - 1)<0 or \
(s_col-1)< 0 or \
(s_row+1) > (len( sobel_g_angle_matrix )-1) or \
(s_col+1) > (len( sobel_g_angle_matrix[s_row] )-1):
suppression_list.append(0)
continue
magnitude_in_question = sobel_g_magnitude_matrix[s_row][s_col]
#threshold on the magnitude; the cut-off (36 here) is arbitrary and can be tuned
if magnitude_in_question < 36:
suppression_list.append(0)
continue
angle_in_question = sobel_g_angle_matrix[s_row][s_col]
east_magnitude = sobel_g_magnitude_matrix[s_row][s_col-1]
west_magnitude = sobel_g_magnitude_matrix[s_row][s_col+1]
north_magnitude = sobel_g_magnitude_matrix[s_row-1][s_col]
south_magnitude = sobel_g_magnitude_matrix[s_row+1][s_col]
north_east_magnitude = sobel_g_magnitude_matrix[s_row-1][s_col-1]
north_west_magnitude = sobel_g_magnitude_matrix[s_row-1][s_col+1]
south_east_magnitude = sobel_g_magnitude_matrix[s_row+1][s_col-1]
south_west_magnitude = sobel_g_magnitude_matrix[s_row+1][s_col+1]
if angle_in_question == 0 and magnitude_in_question > east_magnitude \
and magnitude_in_question > west_magnitude:
suppression_list.append(1)
elif angle_in_question == 90 and magnitude_in_question > north_magnitude \
and magnitude_in_question > south_magnitude:
suppression_list.append(1)
elif angle_in_question == 135 and magnitude_in_question > north_west_magnitude \
and magnitude_in_question > south_east_magnitude:
suppression_list.append(1)
elif angle_in_question == 45 and magnitude_in_question > north_east_magnitude \
and magnitude_in_question > south_west_magnitude:
suppression_list.append(1)
else:
suppression_list.append(0)
new_img = Image.new('1', (im_width,im_height)) #bw=1;grayscale =L
new_img.putdata( suppression_list )
new_img.save('apple-lines.png', 'PNG')
```
|
51,434,996
|
I have the following code:
```
#!/usr/bin/python2.7
import json, re, sys
x = json.loads('''{"status":{"code":"200","msg":"ok","stackTrace":null},"dbTimeCost":11,"totalTimeCost":12,"hasmore":false,"count":5,"result":[{"_type":"Compute","_oid":"555e262fe4b059c7fbd6af72","label":"lvs3b01c-ea7c.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27d8e4b059c7fbd6bab9","label":"lvs3b01c-9073.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27c9e4b059c7fbd6ba7e","label":"lvs3b01c-b14b.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2798e4b0800601a83b0f","label":"lvs3b01c-6ae2.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2693e4b087582f108200","label":"lvs3b01c-a228.stratus.lvs.ebay.com"}]}''')
print x['result'][4]['label']
sys.exit()
```
The desired result should be all the labels. But, when I run it, it only prints the first label. What am I doing wrong here?
And also, before I could figure out that "result" was the key to use, I needed to copy and paste the json data to a site like "jsonlint.com" to reformat it in a readable fashion. I'm wondering if there's a better way to do that, preferably without having to copy and paste the json data anywhere.
So two questions:
1. How do I get the above code to list all the labels
2. How do I know the key to the field I want, without having to reformat the given ugly one liner json data
|
2018/07/20
|
[
"https://Stackoverflow.com/questions/51434996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7005590/"
] |
Just change the part of your code that prints the label
```
print x['result'][4]['label'] # here you are printing only the label at index 4 (the last one)
```
to
```
print [i["label"] for i in x['result']]
```
|
Use a list comprehension to get all the labels.
**Ex:**
```
import json, re, sys
x = json.loads('''{"status":{"code":"200","msg":"ok","stackTrace":null},"dbTimeCost":11,"totalTimeCost":12,"hasmore":false,"count":5,"result":[{"_type":"Compute","_oid":"555e262fe4b059c7fbd6af72","label":"lvs3b01c-ea7c.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27d8e4b059c7fbd6bab9","label":"lvs3b01c-9073.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27c9e4b059c7fbd6ba7e","label":"lvs3b01c-b14b.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2798e4b0800601a83b0f","label":"lvs3b01c-6ae2.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2693e4b087582f108200","label":"lvs3b01c-a228.stratus.lvs.ebay.com"}]}''')
print [i["label"] for i in x['result']]
sys.exit()
```
**Output:**
```
[u'lvs3b01c-ea7c.stratus.lvs.ebay.com', u'lvs3b01c-9073.stratus.lvs.ebay.com', u'lvs3b01c-b14b.stratus.lvs.ebay.com', u'lvs3b01c-6ae2.stratus.lvs.ebay.com', u'lvs3b01c-a228.stratus.lvs.ebay.com']
```
You can use `pprint` to view your JSON in a better format.
**Ex:**
```
import pprint
pprint.pprint(x)
```
**Output:**
```
{u'count': 5,
u'dbTimeCost': 11,
u'hasmore': False,
u'result': [{u'_oid': u'555e262fe4b059c7fbd6af72',
u'_type': u'Compute',
u'label': u'lvs3b01c-ea7c.stratus.lvs.ebay.com'},
{u'_oid': u'555e27d8e4b059c7fbd6bab9',
u'_type': u'Compute',
u'label': u'lvs3b01c-9073.stratus.lvs.ebay.com'},
{u'_oid': u'555e27c9e4b059c7fbd6ba7e',
u'_type': u'Compute',
u'label': u'lvs3b01c-b14b.stratus.lvs.ebay.com'},
{u'_oid': u'555e2798e4b0800601a83b0f',
u'_type': u'Compute',
u'label': u'lvs3b01c-6ae2.stratus.lvs.ebay.com'},
{u'_oid': u'555e2693e4b087582f108200',
u'_type': u'Compute',
u'label': u'lvs3b01c-a228.stratus.lvs.ebay.com'}],
u'status': {u'code': u'200', u'msg': u'ok', u'stackTrace': None},
u'totalTimeCost': 12}
```
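If you prefer not to use `pprint`, `json.dumps` with `indent` gives a similar readable view without leaving the json module:
```
import json

print(json.dumps(x, indent=2, sort_keys=True))
```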
|
51,434,996
|
I have the following code:
```
#!/usr/bin/python2.7
import json, re, sys
x = json.loads('''{"status":{"code":"200","msg":"ok","stackTrace":null},"dbTimeCost":11,"totalTimeCost":12,"hasmore":false,"count":5,"result":[{"_type":"Compute","_oid":"555e262fe4b059c7fbd6af72","label":"lvs3b01c-ea7c.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27d8e4b059c7fbd6bab9","label":"lvs3b01c-9073.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e27c9e4b059c7fbd6ba7e","label":"lvs3b01c-b14b.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2798e4b0800601a83b0f","label":"lvs3b01c-6ae2.stratus.lvs.ebay.com"},{"_type":"Compute","_oid":"555e2693e4b087582f108200","label":"lvs3b01c-a228.stratus.lvs.ebay.com"}]}''')
print x['result'][4]['label']
sys.exit()
```
The desired result should be all the labels. But, when I run it, it only prints the first label. What am I doing wrong here?
And also, before I could figure out that "result" was the key to use, I needed to copy and paste the json data to a site like "jsonlint.com" to reformat it in a readable fashion. I'm wondering if there's a better way to do that, preferably without having to copy and paste the json data anywhere.
So two questions:
1. How do I get the above code to list all the labels
2. How do I know the key to the field I want, without having to reformat the given ugly one liner json data
|
2018/07/20
|
[
"https://Stackoverflow.com/questions/51434996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7005590/"
] |
Just change the part of your code that prints the label
```
print x['result'][4]['label'] # here you are printing only the label at index 4 (the last one)
```
to
```
print [i["label"] for i in x['result']]
```
|
You need to loop over all the results. Your result is a list of objects, each of which has a label.
```
labels = [ i.get('label') for i in x.get('result')]
print(labels)
```
Use `.get()`; it will return None instead of raising a KeyError if the key is not available.
|
47,422,284
|
How can I elegantly do it in `go`?
In python I could use attribute like this:
```
def function():
function.counter += 1
function.counter = 0
```
Does `go` have the same opportunity?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47422284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5537840/"
] |
For example,
`count.go`:
```
package main
import (
"fmt"
"sync"
"time"
)
type Count struct {
mx *sync.Mutex
count int64
}
func NewCount() *Count {
return &Count{mx: new(sync.Mutex), count: 0}
}
func (c *Count) Incr() {
c.mx.Lock()
c.count++
c.mx.Unlock()
}
func (c *Count) Count() int64 {
c.mx.Lock()
count := c.count
c.mx.Unlock()
return count
}
var fncCount = NewCount()
func fnc() {
fncCount.Incr()
}
func main() {
for i := 0; i < 42; i++ {
go fnc()
}
time.Sleep(time.Second)
fmt.Println(fncCount.Count())
}
```
Output:
```
$ go run count.go
42
```
Also, run the race detector,
```
$ go run -race count.go
42
```
See:
[Introducing the Go Race Detector](https://blog.golang.org/race-detector).
[Benign data races: what could possibly go wrong?](https://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong).
[The Go Memory Model](https://golang.org/ref/mem)
---
And here's a racy solution ([@maerics answer](https://stackoverflow.com/a/47422446/221700) is racy for the same reason),
```
package main
import (
"fmt"
"time"
)
var fncCount = 0
func fnc() {
fncCount++
}
func main() {
for i := 0; i < 42; i++ {
go fnc()
}
time.Sleep(time.Second)
fmt.Println(fncCount)
}
```
Output:
```
$ go run racer.go
39
```
And, with the race detector,
Output:
```
$ go run -race racer.go
==================
WARNING: DATA RACE
Read at 0x0000005b5380 by goroutine 7:
main.fnc()
/home/peter/gopath/src/so/racer.go:11 +0x3a
Previous write at 0x0000005b5380 by goroutine 6:
main.fnc()
/home/peter/gopath/src/so/racer.go:11 +0x56
Goroutine 7 (running) created at:
main.main()
/home/peter/gopath/src/so/racer.go:16 +0x4f
Goroutine 6 (finished) created at:
main.main()
/home/peter/gopath/src/so/racer.go:16 +0x4f
==================
42
Found 1 data race(s)
exit status 66
$
```
|
```
var doThingCounter = 0
func DoThing() {
// Do the thing...
doThingCounter++
}
```
|
47,422,284
|
How can I elegantly do it in `go`?
In python I could use attribute like this:
```
def function():
function.counter += 1
function.counter = 0
```
Does `go` have the same opportunity?
|
2017/11/21
|
[
"https://Stackoverflow.com/questions/47422284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5537840/"
] |
Let me quote the atomic package documentation:
>
> Package atomic provides low-level atomic memory primitives useful for
> implementing synchronization algorithms.
> <https://golang.org/pkg/sync/atomic/>
>
>
>
Same code, but simpler and safe too.
```
package main
import (
"fmt"
"sync/atomic"
"time"
)
var fncCount uint64
func fnc() {
atomic.AddUint64(&fncCount, 1)
}
func main() {
for i := 0; i < 42; i++ {
go fnc()
}
// this is bad, because it won't wait for the goroutines to finish
time.Sleep(time.Second)
fncCountFinal := atomic.LoadUint64(&fncCount)
fmt.Println(fncCountFinal)
}
```
>
> $ go run -race main.go
>
>
> 42
>
>
>
|
```
var doThingCounter = 0
func DoThing() {
// Do the thing...
doThingCounter++
}
```
|
65,398,433
|
**Problem description:** I want to create a program that can update one whole row (or cells in this row within given range) in one single line (i.e. one single API request).
This is what've seen in the **documentation**, that was related to my problem:
```py
# Updates A2 and A3 with values 42 and 43
# Note that update range can be bigger than values array
worksheet.update('A2:B4', [[42], [43]])
```
This is how I tried to implement it into **my program**:
```py
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [[str(el)] for el in list_of_auctions[rowNumber]])
```
This is what printing `[[str(el)] for el in list_of_auctions[rowNumber]]` looks like:
```py
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60'], ['urlLink'], ['100'], ['250'], ['20'], [''], [' ,'], ['0']]
```
As far as I can tell, everything here is according to the docs, however I get this **output**:
```
error from callback <bound method SocketHandler.handle_message of <amino.socket.SocketHandler object at 0x00000196F8C46070>>: {'code': 400, 'message': "Requested writing within range ['Sheet1'!A1:H1], but tried writing to row [2]", 'status': 'INVALID_ARGUMENT'}
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\websocket\_app.py", line 344, in _callback
callback(*args)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 80, in handle_message
self.client.handle_socket_message(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\client.py", line 345, in handle_socket_message
return self.callbacks.resolve(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 204, in resolve
return self.methods.get(data["t"], self.default)(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 192, in _resolve_chat_message
return self.chat_methods.get(key, self.default)(data)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 221, in on_text_message
def on_text_message(self, data): self.call(getframe(0).f_code.co_name, objects.Event(data["o"]).Event)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\amino\socket.py", line 209, in call
handler(data)
File "C:\Users\1\Desktop\python bots\auction-bot\bot.py", line 315, in on_text_message
bid_rub(link, data.message.author.nickname, data.message.author.userId, id, linkto)
File "C:\Users\1\Desktop\python bots\auction-bot\bot.py", line 162, in bid_rub
sheet.update(f'A{ifExisting + 1}:H{ifExisting + 1}', [[str(el)] for el in list_of_auctions[ifExisting]])
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\utils.py", line 592, in wrapper
return f(*args, **kwargs)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\models.py", line 1096, in update
response = self.spreadsheet.values_update(
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\models.py", line 235, in values_update
r = self.client.request('put', url, params=params, json=body)
File "C:\Users\1\Desktop\ИНФА\pycharm\venv\lib\site-packages\gspread\client.py", line 73, in request
raise APIError(response)
```
**Question:** I cannot figure out what the issue is here; I've been trying to tackle this for quite a while.
|
2020/12/21
|
[
"https://Stackoverflow.com/questions/65398433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13682294/"
] |
The error indicates that the data you are trying to write has more rows than the rows in the range. Each list inside your list represent single row of data in the spreadsheet.
In your example:
```
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60'], ['urlLink'], ['100'], ['250'], ['20'], [''], [' ,'], ['0']]
```
It represents 8 rows of data.
```
row 1 = ['068222bb-c251-47ad-8c2a-e7ad7bad2f60']
row 2 = ['urlLink']
row 3 = ['100']
row 4 = ['250']
row 5 = ['20']
row 6 = ['']
row 7 = [' ,']
row 8 = ['0']
```
If you want to write into a single row, the format should be:
```
[['068222bb-c251-47ad-8c2a-e7ad7bad2f60', 'urlLink', '100', '250', '20', '', ' ,', '0']]
```
To solve this, remove the `[` and `]` around `str(el)` and instead wrap the whole comprehension `str(el) for el in list_of_auctions[rowNumber]` in an extra pair of `[` and `]` (a list inside a list), or use itertools.chain() to build a single list within a list.
Your code should look like this:
```
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [[str(el) for el in list_of_auctions[rowNumber]]])
or
import itertools
sheet.update(f'A{rowNumber + 1}:H{rowNumber + 1}', [list(itertools.chain(str(el) for el in list_of_auctions[rowNumber]))])
```
Reference:
----------
[Writing to a single range](https://developers.google.com/sheets/api/guides/values#writing_to_a_single_range)
[Python itertools](https://docs.python.org/3/library/itertools.html)
|
Hi you may try the following:
```
def gs_writer(sheet_name,dataframe,sheet_url,boolean,row,col):
import gspread
from gspread_dataframe import get_as_dataframe, set_with_dataframe
import google.oauth2
from oauth2client.service_account import ServiceAccountCredentials
scope = ['https://spreadsheets.google.com/feeds','https://www.googleapis.com/auth/drive']
credentials = ServiceAccountCredentials.from_json_keyfile_name('gsheet_credentials.json', scope)
gc = gspread.authorize(credentials)
sht2 = gc.open_by_url(sheet_url)
sht = sht2.worksheet(sheet_name)
set_with_dataframe(sht,dataframe,resize = boolean,row=row,col=col)
```
Now after defining this function, you may simply get your required data in a dataframe and input your row and col number where you want this printed, like this:
```
gs_writer("sheet_name",df,sheet_url,False,1,1)
```
This should write your data starting from A1
|
42,214,228
|
In numpy and tensorflow it's possible to add matrices (or tensors) of different dimensionality if the shape of smaller matrix is a suffix of bigger matrix. This is an example:
```
x = np.ndarray(shape=(10, 7, 5), dtype = float)
y = np.ndarray(shape=(7, 5), dtype = float)
```
For these two matrices operation `x+y` is a shortcut for:
```
for a in range(10):
for b in range(7):
for c in range(5):
result[a,b,c] = x[a,b,c] + y[b,c]
```
In my case however I have matrices of shape `(10,7,5)` and `(10,5)` and likewise I would like to perform the `+` operation using similar logic:
```
for a in range(10):
for b in range(7):
for c in range(5):
result[a,b,c] = x[a,b,c] + y[a,c]
^
```
In this case however `x+y` operation fails as neither numpy nor tensorflow understands what I want to do. Is there any way I can perform this operation efficiently (without writing python loop myself)?
In so far I have figured that I could introduce a temporary matrix `z` of shape `(10,7,5)` using einsum like this:
```
z = np.einsum('ij,k->ikj', y, np.ones(7))
x + z
```
but this creates an explicit three-dimensional matrix (or tensor) and if possible I would prefer to avoid that.
|
2017/02/13
|
[
"https://Stackoverflow.com/questions/42214228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/766551/"
] |
In NumPy, you could extend `y` to `3D` and then add -
```
x + y[:,None,:]
```
Haven't dealt with `tensorflow` really, but looking into its docs, it seems, we could use [`tf.expand_dims`](https://www.tensorflow.org/api_docs/python/array_ops/shapes_and_shaping#expand_dims) -
```
x + tf.expand_dims(y, 1)
```
The extended version would still be a view into `y` and as such won't occupy any more memory, as tested below -
```
In [512]: np.may_share_memory(y, y[:,None,:])
Out[512]: True
```
|
As correctly pointed out in the accepted answer, the solution is to expand the dimensions using the available constructs.
The point is to understand how numpy broadcasts matrices when adding matrices whose dimensions don't match. The rule is that the two matrices must have exactly the same dimensions, with the exception that some dimensions in either matrix can be 1 (or missing from the left).
E.g.
```
A (4d array): 8 x 1 x 6 x 1
B (3d array): 7 x 1 x 5
Result (4d array): 8 x 7 x 6 x 5
```
This example and detailed explanation can be found in scipy docs
<https://docs.scipy.org/doc/numpy-1.10.0/user/basics.broadcasting.html>
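A quick sketch of both cases with dummy arrays (shapes only, no real data) shows how the rule plays out:
```
import numpy as np

x = np.ones((10, 7, 5))
y = np.ones((10, 5))

# Insert a length-1 axis so the shapes line up as (10,7,5) + (10,1,5).
result = x + y[:, None, :]
print(result.shape)   # (10, 7, 5)

# The docs' example: dimensions must match, be 1, or be missing on the left.
a = np.ones((8, 1, 6, 1))
b = np.ones((7, 1, 5))
print((a + b).shape)  # (8, 7, 6, 5)
```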
|
16,918,063
|
We're running into a problem (which is described <http://wiki.python.org/moin/UnicodeDecodeError>) -- read the second paragraph '...Paradoxically...'.
Specifically, we're trying to up-convert a string to unicode and we are receiving a UnicodeDecodeError.
Example:
```
>>> unicode('\xab')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 0: ordinal not in range(128)
```
But of course, this works without any problems
```
>>> unicode(u'\xab')
u'\xab'
```
Of course, this code is to demonstrate the conversion problem. In our actual code, we are not using string literals and we cannot just prepend the unicode 'u' prefix; instead we are dealing with strings returned from an os.walk(), and the file name includes the above value. Since we cannot coerce the value to unicode without calling the unicode() constructor, we're not sure how to proceed.
One really horrible hack that occurs is to write our own str2uni() method, something like:
```
def str2uni(val):
r"""brute force coersion of str -> unicode"""
try:
return unicode(val)
except UnicodeDecodeError:
pass
res = u''
for ch in val:
res += unichr(ord(ch))
return res
```
But before we do this -- wanted to see if anyone else had any insight?
**UPDATED**
I see everyone is getting focused on HOW I got to the example I posted, rather than the result. Sigh -- ok, here's the code that caused me to spend hours reducing the problem to the simplest form I shared above.
```
for _,_,files in os.walk('/path/to/folder'):
for fname in files:
filename = unicode(fname)
```
That piece of code tosses a UnicodeDecodeError exception when the filename has the following value '3\xab Floppy (A).link'
To see the error for yourself, do the following:
```
>>> unicode('3\xab Floppy (A).link')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 1: ordinal not in range(128)
```
**UPDATED**
I really appreciate everyone trying to help. And I also appreciate that most people make some pretty simple mistakes related to string/unicode handling. But I'd like to underline the reference to the **UnicodeDecodeError** exception. We are getting this when calling the unicode() constructor!!!
I believe the underlying cause is described in the aforementioned Wiki article <http://wiki.python.org/moin/UnicodeDecodeError>. Read from the second paragraph on down about how **"Paradoxically, a UnicodeDecodeError may happen when *encoding*..."**. The Wiki article very accurately describes what we are experiencing -- but while it elaborates on the causes, it makes no suggestions for resolutions.
As a matter of fact, the third paragraph starts with the following astounding admission **"Unlike a similar case with UnicodeEncodeError, such a failure cannot be always avoided..."**.
Since I am not used to "can't get there from here" information as a developer, I thought it would be interesting to cast about on Stack Overflow for the experiences of others.
|
2013/06/04
|
[
"https://Stackoverflow.com/questions/16918063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/590028/"
] |
I think you're confusing Unicode strings and Unicode encodings (like UTF-8).
`os.walk(".")` returns the filenames (and directory names etc.) as strings that are *encoded* in the current codepage. It will silently *remove* characters that are not present in your current codepage ([see this question for a striking example](https://stackoverflow.com/q/7545511/20670)).
Therefore, if your file/directory names contain characters outside of your encoding's range, then you definitely need to use a Unicode string to specify the starting directory, for example by calling `os.walk(u".")`. Then you don't need to (and shouldn't) call `unicode()` on the results any longer, because they already *are* Unicode strings.
If you don't do this, you first need to *decode* the filenames (as in `mystring.decode("cp850")`) which will give you a Unicode string:
```
>>> "\xab".decode("cp850")
u'\xbd'
```
*Then* you can *encode* that into UTF-8 or any other encoding.
```
>>> _.encode("utf-8")
'\xc2\xbd'
```
If you're still confused why `unicode("\xab")` throws a *decoding* error, maybe the following explanation helps:
`"\xab"` is an *encoded* string. Python has no way of knowing which encoding that is, but before you can convert it to Unicode, it needs to be decoded first. Without any specification from you, `unicode()` assumes that it is encoded in ASCII, and when it tries to decode it under this assumption, it fails because `\xab` isn't part of ASCII. So either you need to find out which encoding is being used by your filesystem and call `unicode("\xab", encoding="cp850")` or whatever, or start with Unicode strings in the first place.
|
```
'\xab'
```
Is a **byte**, number 171.
```
u'\xab'
```
Is a **character**, U+00AB Left-pointing double angle quotation mark («).
`u'\xab'` is a short-hand way of saying `u'\u00ab'`. It's not the same (not even the same datatype) as the byte `'\xab'`; it would probably have been clearer to always use the `\u` syntax in Unicode string literals IMO, but it's too late to fix that now.
To go from bytes to characters is known as a **decode** operation. To go from characters to bytes is known as an **encode** operation. For either direction, you need to know which encoding is used to map between the two.
```
>>> unicode('\xab')
UnicodeDecodeError
```
`unicode` is a character string, so there is an implicit decode operation when you pass bytes to the `unicode()` constructor. If you don't tell it which encoding you want you get the default encoding which is often `ascii`. ASCII doesn't have a meaning for byte 171 so you get an error.
```
>>> unicode(u'\xab')
u'\xab'
```
Since `u'\xab'` (or `u'\u00ab'`) is already a character string, there is no implicit conversion in passing it to the `unicode()` constructor - you get an unchanged copy.
```
res = u''
for ch in val:
res += unichr(ord(ch))
return res
```
The encoding that maps each input byte to the Unicode character with the same ordinal value is ISO-8859-1. Consequently you could replace this loop with just:
```
return unicode(val, 'iso-8859-1')
```
(However note that if Windows is in the mix, then the encoding you want is probably not that one but the somewhat-similar `windows-1252`.)
>
> One really horrible hack that occurs is to write our own str2uni() method
>
>
>
This isn't generally a good idea. `UnicodeError`s are Python telling you you've misunderstood something about string types; ignoring that error instead of fixing it at source means you're more likely to hide subtle failures that will bite you later.
```
filename = unicode(fname)
```
So this would be better replaced with: `filename = unicode(fname, 'iso-8859-1')` if you know your filesystem is using ISO-8859-1 filenames. If your system locales are set up correctly then it should be possible to find out the encoding your filesystem is using, and go straight to that:
```
filename = unicode(fname, sys.getfilesystemencoding())
```
Though actually if it *is* set up correctly, you can skip all the encode/decode fuss by asking Python to treat filesystem paths as native Unicode instead of byte strings. You do that by passing a Unicode character string into the `os` filename interfaces:
```
for _,_,files in os.walk(u'/path/to/folder'): # note u'' string
for fname in files:
filename = fname # nothing more to do!
```
PS. The character in `3″ Floppy` should really be U+2033 Double Prime, but there is no encoding for that in ISO-8859-1. Better in the long term to use UTF-8 filesystem encoding so you can include any character.
|
16,918,063
|
We're running into a problem (which is described <http://wiki.python.org/moin/UnicodeDecodeError>) -- read the second paragraph '...Paradoxically...'.
Specifically, we're trying to up-convert a string to unicode and we are receiving a UnicodeDecodeError.
Example:
```
>>> unicode('\xab')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 0: ordinal not in range(128)
```
But of course, this works without any problems
```
>>> unicode(u'\xab')
u'\xab'
```
Of course, this code is just to demonstrate the conversion problem. In our actual code we are not using string literals, so we cannot simply prepend the unicode 'u' prefix; instead we are dealing with strings returned from an os.walk(), and the file name includes the above value. Since we cannot coerce the value to unicode without calling the unicode() constructor, we're not sure how to proceed.
One really horrible hack that occurs is to write our own str2uni() method, something like:
```
def str2uni(val):
r"""brute force coersion of str -> unicode"""
try:
return unicode(src)
except UnicodeDecodeError:
pass
res = u''
for ch in val:
res += unichr(ord(ch))
return res
```
But before we do this -- wanted to see if anyone else had any insight?
**UPDATED**
I see everyone is getting focused on HOW I got to the example I posted, rather than the result. Sigh -- ok, here's the code that caused me to spend hours reducing the problem to the simplest form I shared above.
```
for _,_,files in os.walk('/path/to/folder'):
for fname in files:
filename = unicode(fname)
```
That piece of code tosses a UnicodeDecodeError exception when the filename has the following value '3\xab Floppy (A).link'
To see the error for yourself, do the following:
```
>>> unicode('3\xab Floppy (A).link')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 1: ordinal not in range(128)
```
**UPDATED**
I really appreciate everyone trying to help. And I also appreciate that most people make some pretty simple mistakes related to string/unicode handling. But I'd like to underline the reference to the **UnicodeDecodeError** exception. We are getting this when calling the unicode() constructor!!!
I believe the underlying cause is described in the aforementioned Wiki article <http://wiki.python.org/moin/UnicodeDecodeError>. Read from the second paragraph on down about how **"Paradoxically, a UnicodeDecodeError may happen when *encoding*..."**. The Wiki article very accurately describes what we are experiencing -- but while it elaborates on the causes, it makes no suggestions for a resolution.
As a matter of fact, the third paragraph starts with the following astounding admission **"Unlike a similar case with UnicodeEncodeError, such a failure cannot be always avoided..."**.
Since, as a developer, I am not used to "can't get there from here" answers, I thought it would be interesting to cast about on Stack Overflow for the experiences of others.
|
2013/06/04
|
[
"https://Stackoverflow.com/questions/16918063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/590028/"
] |
I think you're confusing Unicode strings and Unicode encodings (like UTF-8).
`os.walk(".")` returns the filenames (and directory names etc.) as strings that are *encoded* in the current codepage. It will silently *remove* characters that are not present in your current codepage ([see this question for a striking example](https://stackoverflow.com/q/7545511/20670)).
Therefore, if your file/directory names contain characters outside of your encoding's range, then you definitely need to use a Unicode string to specify the starting directory, for example by calling `os.walk(u".")`. Then you don't need to (and shouldn't) call `unicode()` on the results any longer, because they already *are* Unicode strings.
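A small sketch of that approach (the path here is just a placeholder); it assumes the filesystem encoding is configured well enough that every name decodes cleanly:

```
import os

for dirpath, dirnames, filenames in os.walk(u'/path/to/folder'):  # note the u'' path
    for fname in filenames:
        # fname is already a unicode string; no further unicode() call is needed
        print repr(fname)
```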
If you don't do this, you first need to *decode* the filenames (as in `mystring.decode("cp850")`) which will give you a Unicode string:
```
>>> "\xab".decode("cp850")
u'\xbd'
```
*Then* you can *encode* that into UTF-8 or any other encoding.
```
>>> _.encode("utf-8")
'\xc2\xbd'
```
If you're still confused why `unicode("\xab")` throws a *decoding* error, maybe the following explanation helps:
`"\xab"` is an *encoded* string. Python has no way of knowing which encoding that is, but before you can convert it to Unicode, it needs to be decoded first. Without any specification from you, `unicode()` assumes that it is encoded in ASCII, and when it tries to decode it under this assumption, it fails because `\xab` isn't part of ASCII. So either you need to find out which encoding is being used by your filesystem and call `unicode("\xab", encoding="cp850")` or whatever, or start with Unicode strings in the first place.
|
As I understand it your issue is that `os.walk(unicode_path)` fails to decode some filenames to Unicode. This problem is fixed in Python 3.1+ (see [PEP 383: Non-decodable Bytes in System Character Interfaces](http://www.python.org/dev/peps/pep-0383/)):
>
> File names, environment variables, and command line arguments are
> defined as being character data in POSIX; the C APIs however allow
> passing arbitrary bytes - whether these conform to a certain encoding
> or not. This PEP proposes a means of dealing with such irregularities
> by embedding the bytes in character strings in such a way that allows
> recreation of the original byte string.
>
>
>
Windows provides Unicode API to access filesystem so there shouldn't be this problem.
Python 2.7 (utf-8 filesystem on Linux):
```
>>> import os
>>> list(os.walk("."))
[('.', [], ['\xc3('])]
>>> list(os.walk(u"."))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/os.py", line 284, in walk
if isdir(join(top, name)):
File "/usr/lib/python2.7/posixpath.py", line 71, in join
path += '/' + b
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: \
ordinal not in range(128)
```
Python 3.3:
```
>>> import os
>>> list(os.walk(b'.'))
[(b'.', [], [b'\xc3('])]
>>> list(os.walk(u'.'))
[('.', [], ['\udcc3('])]
```
Your `str2uni()` function tries to solve the same issue as the "surrogateescape" error handler on Python 3. Use byte strings for filenames on Python 2 if you are expecting filenames that can't be decoded using `sys.getfilesystemencoding()`.
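For comparison, a minimal Python 3 sketch of the "surrogateescape" behaviour: bytes that cannot be decoded survive the round trip as lone surrogates.

```
raw = b'3\xab Floppy (A).link'
name = raw.decode('utf-8', errors='surrogateescape')      # '3\udcab Floppy (A).link'
assert name.encode('utf-8', errors='surrogateescape') == raw
```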
|
16,918,063
|
We're running into a problem (which is described <http://wiki.python.org/moin/UnicodeDecodeError>) -- read the second paragraph '...Paradoxically...'.
Specifically, we're trying to up-convert a string to unicode and we are receiving a UnicodeDecodeError.
Example:
```
>>> unicode('\xab')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 0: ordinal not in range(128)
```
But of course, this works without any problems
```
>>> unicode(u'\xab')
u'\xab'
```
Of course, this code is just to demonstrate the conversion problem. In our actual code we are not using string literals, so we cannot simply prepend the unicode 'u' prefix; instead we are dealing with strings returned from an os.walk(), and the file name includes the above value. Since we cannot coerce the value to unicode without calling the unicode() constructor, we're not sure how to proceed.
One really horrible hack that occurs is to write our own str2uni() method, something like:
```
def str2uni(val):
r"""brute force coersion of str -> unicode"""
try:
return unicode(src)
except UnicodeDecodeError:
pass
res = u''
for ch in val:
res += unichr(ord(ch))
return res
```
But before we do this -- wanted to see if anyone else had any insight?
**UPDATED**
I see everyone is getting focused on HOW I got to the example I posted, rather than the result. Sigh -- ok, here's the code that caused me to spend hours reducing the problem to the simplest form I shared above.
```
for _,_,files in os.walk('/path/to/folder'):
for fname in files:
filename = unicode(fname)
```
That piece of code tosses a UnicodeDecodeError exception when the filename has the following value '3\xab Floppy (A).link'
To see the error for yourself, do the following:
```
>>> unicode('3\xab Floppy (A).link')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 1: ordinal not in range(128)
```
**UPDATED**
I really appreciate everyone trying to help. And I also appreciate that most people make some pretty simple mistakes related to string/unicode handling. But I'd like to underline the reference to the **UnicodeDecodeError** exception. We are getting this when calling the unicode() constructor!!!
I believe the underlying cause is described in the aforementioned Wiki article <http://wiki.python.org/moin/UnicodeDecodeError>. Read from the second paragraph on down about how **"Paradoxically, a UnicodeDecodeError may happen when *encoding*..."**. The Wiki article very accurately describes what we are experiencing -- but while it elaborates on the causes, it makes no suggestions for a resolution.
As a matter of fact, the third paragraph starts with the following astounding admission **"Unlike a similar case with UnicodeEncodeError, such a failure cannot be always avoided..."**.
Since, as a developer, I am not used to "can't get there from here" answers, I thought it would be interesting to cast about on Stack Overflow for the experiences of others.
|
2013/06/04
|
[
"https://Stackoverflow.com/questions/16918063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/590028/"
] |
```
for fname in files:
filename = unicode(fname)
```
The second line will complain if `fname` is not ASCII. If you want to convert the string to Unicode, instead of `unicode(fname)` you should do `fname.decode('<the encoding here>')`.
I would suggest an encoding, but you don't tell us what `\xab` represents in your `.link` file. You can search Google for the right encoding anyway, so the code would look like this:
```
for fname in files:
filename = fname.decode('<encoding>')
```
**UPDATE:** For example, **if** the encoding of your filesystem's names is [ISO-8859-1](http://es.wikipedia.org/wiki/ISO_8859-1), then the `\xab` byte would be "«". To read it into Python you should do:
```
for fname in files:
    filename = fname.decode('latin1')  # 'latin1' is a synonym for ISO-8859-1
```
Hope this helps!
|
```
'\xab'
```
Is a **byte**, number 171.
```
u'\xab'
```
Is a **character**, U+00AB Left-pointing double angle quotation mark («).
`u'\xab'` is a short-hand way of saying `u'\u00ab'`. It's not the same (not even the same datatype) as the byte `'\xab'`; it would probably have been clearer to always use the `\u` syntax in Unicode string literals IMO, but it's too late to fix that now.
To go from bytes to characters is known as a **decode** operation. To go from characters to bytes is known as an **encode** operation. For either direction, you need to know which encoding is used to map between the two.
```
>>> unicode('\xab')
UnicodeDecodeError
```
`unicode` is a character string, so there is an implicit decode operation when you pass bytes to the `unicode()` constructor. If you don't tell it which encoding you want you get the default encoding which is often `ascii`. ASCII doesn't have a meaning for byte 171 so you get an error.
```
>>> unicode(u'\xab')
u'\xab'
```
Since `u'\xab'` (or `u'\u00ab'`) is already a character string, there is no implicit conversion in passing it to the `unicode()` constructor - you get an unchanged copy.
```
res = u''
for ch in val:
res += unichr(ord(ch))
return res
```
The encoding that maps each input byte to the Unicode character with the same ordinal value is ISO-8859-1. Consequently you could replace this loop with just:
```
return unicode(val, 'iso-8859-1')
```
(However note that if Windows is in the mix, then the encoding you want is probably not that one but the somewhat-similar `windows-1252`.)
>
> One really horrible hack that occurs is to write our own str2uni() method
>
>
>
This isn't generally a good idea. `UnicodeError`s are Python telling you you've misunderstood something about string types; ignoring that error instead of fixing it at source means you're more likely to hide subtle failures that will bite you later.
```
filename = unicode(fname)
```
So this would be better replaced with: `filename = unicode(fname, 'iso-8859-1')` if you know your filesystem is using ISO-8859-1 filenames. If your system locales are set up correctly then it should be possible to find out the encoding your filesystem is using, and go straight to that:
```
filename = unicode(fname, sys.getfilesystemencoding())
```
Though actually if it *is* set up correctly, you can skip all the encode/decode fuss by asking Python to treat filesystem paths as native Unicode instead of byte strings. You do that by passing a Unicode character string into the `os` filename interfaces:
```
for _,_,files in os.walk(u'/path/to/folder'): # note u'' string
for fname in files:
filename = fname # nothing more to do!
```
PS. The character in `3″ Floppy` should really be U+2033 Double Prime, but there is no encoding for that in ISO-8859-1. Better in the long term to use UTF-8 filesystem encoding so you can include any character.
|
16,918,063
|
We're running into a problem (which is described <http://wiki.python.org/moin/UnicodeDecodeError>) -- read the second paragraph '...Paradoxically...'.
Specifically, we're trying to up-convert a string to unicode and we are receiving a UnicodeDecodeError.
Example:
```
>>> unicode('\xab')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 0: ordinal not in range(128)
```
But of course, this works without any problems
```
>>> unicode(u'\xab')
u'\xab'
```
Of course, this code is just to demonstrate the conversion problem. In our actual code we are not using string literals, so we cannot simply prepend the unicode 'u' prefix; instead we are dealing with strings returned from an os.walk(), and the file name includes the above value. Since we cannot coerce the value to unicode without calling the unicode() constructor, we're not sure how to proceed.
One really horrible hack that occurs is to write our own str2uni() method, something like:
```
def str2uni(val):
r"""brute force coersion of str -> unicode"""
try:
return unicode(src)
except UnicodeDecodeError:
pass
res = u''
for ch in val:
res += unichr(ord(ch))
return res
```
But before we do this -- wanted to see if anyone else had any insight?
**UPDATED**
I see everyone is getting focused on HOW I got to the example I posted, rather than the result. Sigh -- ok, here's the code that caused me to spend hours reducing the problem to the simplest form I shared above.
```
for _,_,files in os.walk('/path/to/folder'):
for fname in files:
filename = unicode(fname)
```
That piece of code tosses a UnicodeDecodeError exception when the filename has the following value '3\xab Floppy (A).link'
To see the error for yourself, do the following:
```
>>> unicode('3\xab Floppy (A).link')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 1: ordinal not in range(128)
```
**UPDATED**
I really appreciate everyone trying to help. And I also appreciate that most people make some pretty simple mistakes related to string/unicode handling. But I'd like to underline the reference to the **UnicodeDecodeError** exception. We are getting this when calling the unicode() constructor!!!
I believe the underlying cause is described in the aforementioned Wiki article <http://wiki.python.org/moin/UnicodeDecodeError>. Read from the second paragraph on down about how **"Paradoxically, a UnicodeDecodeError may happen when *encoding*..."**. The Wiki article very accurately describes what we are experiencing -- but while it elaborates on the causes, it makes no suggestions for a resolution.
As a matter of fact, the third paragraph starts with the following astounding admission **"Unlike a similar case with UnicodeEncodeError, such a failure cannot be always avoided..."**.
Since, as a developer, I am not used to "can't get there from here" answers, I thought it would be interesting to cast about on Stack Overflow for the experiences of others.
|
2013/06/04
|
[
"https://Stackoverflow.com/questions/16918063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/590028/"
] |
```
for fname in files:
filename = unicode(fname)
```
The second line will complain if `fname` is not ASCII. If you want to convert the string to Unicode, instead of `unicode(fname)` you should do `fname.decode('<the encoding here>')`.
I would suggest an encoding, but you don't tell us what `\xab` represents in your `.link` file. You can search Google for the right encoding anyway, so the code would look like this:
```
for fname in files:
filename = fname.decode('<encoding>')
```
**UPDATE:** For example, **if** the encoding of your filesystem's names is [ISO-8859-1](http://es.wikipedia.org/wiki/ISO_8859-1), then the `\xab` byte would be "«". To read it into Python you should do:
```
for fname in files:
    filename = fname.decode('latin1')  # 'latin1' is a synonym for ISO-8859-1
```
Hope this helps!
|
As I understand it your issue is that `os.walk(unicode_path)` fails to decode some filenames to Unicode. This problem is fixed in Python 3.1+ (see [PEP 383: Non-decodable Bytes in System Character Interfaces](http://www.python.org/dev/peps/pep-0383/)):
>
> File names, environment variables, and command line arguments are
> defined as being character data in POSIX; the C APIs however allow
> passing arbitrary bytes - whether these conform to a certain encoding
> or not. This PEP proposes a means of dealing with such irregularities
> by embedding the bytes in character strings in such a way that allows
> recreation of the original byte string.
>
>
>
Windows provides Unicode API to access filesystem so there shouldn't be this problem.
Python 2.7 (utf-8 filesystem on Linux):
```
>>> import os
>>> list(os.walk("."))
[('.', [], ['\xc3('])]
>>> list(os.walk(u"."))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/os.py", line 284, in walk
if isdir(join(top, name)):
File "/usr/lib/python2.7/posixpath.py", line 71, in join
path += '/' + b
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: \
ordinal not in range(128)
```
Python 3.3:
```
>>> import os
>>> list(os.walk(b'.'))
[(b'.', [], [b'\xc3('])]
>>> list(os.walk(u'.'))
[('.', [], ['\udcc3('])]
```
Your `str2uni()` function tries to solve the same issue as the "surrogateescape" error handler on Python 3. Use byte strings for filenames on Python 2 if you are expecting filenames that can't be decoded using `sys.getfilesystemencoding()`.
|
17,627,193
|
I'm on a fresh Virtualbox install of CentOS 6.4.
After installing zsh 5.0.2 from source using `./configure --prefix=/usr && make && make install` and setting it as the shell with `chsh -s /usr/bin/zsh`, everything is good.
Then some time later, seemingly after installing Python, it starts acting strange.
1. Happens with PuTTY and iTerm2 over SSH, does not happen on the raw terminal through Virtualbox.
2. Typing something, then erasing it: rather than removing the char and moving the cursor back, the cursor moves forward.
3. Typing Ctrl+V then Backspace repeatedly prints out this repeating pattern '^@?'
4. Running cat from zsh works fine. Prints out '^H' if I type that, backspaces like normal if I type normal backspace.
Surely someone's seen this before and knows exactly what the hell it is.
I'm not positive yet, but it seems that installing `oh-my-zsh` can fix this. But I really want to know what the specific issue is here.
|
2013/07/13
|
[
"https://Stackoverflow.com/questions/17627193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/340947/"
] |
OK, I suggest you try adding
`export TERM=xterm`
to your `.zshrc` configuration.
Changing into zsh is what caused the bug.
|
**sigh** I knew I solved this before.
It's too damn easy to forget things.
The solution is to compile and apply the proper terminfo entry with `tic`, because I use a custom `$TERM`, `xterm-256color-italic`, with my terminal clients, and that confuses zsh.
There appear to be other ways to configure this stuff too; I basically just need it to be properly set up so italics work everywhere (including in tmux) so hopefully I can figure out how to do this more portably than I am currently.
|
17,627,193
|
I'm on a fresh Virtualbox install of CentOS 6.4.
After installing zsh 5.0.2 from source using `./configure --prefix=/usr && make && make install` and setting it as the shell with `chsh -s /usr/bin/zsh`, everything is good.
Then some time later, seemingly after installing Python, it starts acting strange.
1. Happens with PuTTY and iTerm2 over SSH, does not happen on the raw terminal through Virtualbox.
2. Typing something, then erasing it: rather than removing the char and moving the cursor back, the cursor moves forward.
3. Typing Ctrl+V then Backspace repeatedly prints out this repeating pattern '^@?'
4. Running cat from zsh works fine. Prints out '^H' if I type that, backspaces like normal if I type normal backspace.
Surely someone's seen this before and knows exactly what the hell it is.
I'm not positive yet, but it seems that installing `oh-my-zsh` can fix this. But I really want to know what the specific issue is here.
|
2013/07/13
|
[
"https://Stackoverflow.com/questions/17627193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/340947/"
] |
**sigh** I knew I solved this before.
It's too damn easy to forget things.
The solution is to compile and apply the proper terminfo entry with `tic`, because I use a custom `$TERM`, `xterm-256color-italic`, with my terminal clients, and that confuses zsh.
There appear to be other ways to configure this stuff too; I basically just need it to be properly set up so italics work everywhere (including in tmux) so hopefully I can figure out how to do this more portably than I am currently.
|
I encountered the same problem when I manually installed zsh without root: the backspace turned into a blank space but still functioned as Backspace. Finally, I found it was because "ncurses" was not installed properly.
>
> tic: error while loading shared libraries: libncurses.so.6: cannot open shared object file: No such file or directory
> ? tic could not build /home/user/ceph-data/soft/ncurses-6.1/share/terminfo
>
>
>
After I reinstalled "ncurses", the zsh backspace problem was solved. Just for your information.
My `$TERM` is `xterm-256color`, by the way.
|
17,627,193
|
I'm on a fresh Virtualbox install of CentOS 6.4.
After installing zsh 5.0.2 from source using `./configure --prefix=/usr && make && make install` and setting it as the shell with `chsh -s /usr/bin/zsh`, everything is good.
Then some time later, seemingly after installing Python, it starts acting strange.
1. Happens with PuTTY and iTerm2 over SSH, does not happen on the raw terminal through Virtualbox.
2. Typing something, then erasing it: rather than removing the char and moving the cursor back, the cursor moves forward.
3. Typing Ctrl+V then Backspace repeatedly prints out this repeating pattern '^@?'
4. Running cat from zsh works fine. Prints out '^H' if I type that, backspaces like normal if I type normal backspace.
Surely someone's seen this before and knows exactly what the hell it is.
I'm not positive yet, but it seems that installing `oh-my-zsh` can fix this. But I really want to know what the specific issue is here.
|
2013/07/13
|
[
"https://Stackoverflow.com/questions/17627193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/340947/"
] |
OK, I suggest you try adding
`export TERM=xterm`
to your `.zshrc` configuration.
Changing into zsh is what caused the bug.
|
I encountered the same problem when I manually installed zsh without root: the backspace turned into a blank space but still functioned as Backspace. Finally, I found it was because "ncurses" was not installed properly.
>
> tic: error while loading shared libraries: libncurses.so.6: cannot open shared object file: No such file or directory
> ? tic could not build /home/user/ceph-data/soft/ncurses-6.1/share/terminfo
>
>
>
After I reinstalled "ncurses", the zsh backspace problem was solved. Just for your information.
My `$TERM` is `xterm-256color`, by the way.
|
58,777,374
|
I was recently studying someone's code, and a portion of it is given below
```
class Node:
def __init__(self, height=0, elem=None):
self.elem = elem
self.next = [None] * height
```
What does `[None] * height` mean in the above code?
I know what the `*` operator does (multiplication and unpacking) and what `None` means in Python, but this usage is somehow different.
|
2019/11/09
|
[
"https://Stackoverflow.com/questions/58777374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8401374/"
] |
It means a list of `None`s with `height` elements, e.g. for `height = 3` it is this list:
```
[None, None, None]
```
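One thing worth knowing about this idiom: list repetition copies references, not objects. With an immutable value like `None` that is harmless, but with a mutable element every slot ends up pointing at the same object:

```
height = 3
slots = [None] * height   # [None, None, None] -- fine, None is immutable
rows = [[]] * height      # three references to the *same* list
rows[0].append('x')
print(rows)               # [['x'], ['x'], ['x']]
```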
|
```
>>> [None] * 5
[None, None, None, None, None]
```
Gives you a list of size `height` in your case
|
58,777,374
|
I was recently studying someone's code, and a portion of it is given below
```
class Node:
def __init__(self, height=0, elem=None):
self.elem = elem
self.next = [None] * height
```
What does `[None] * height` mean in the above code?
I know what the `*` operator does (multiplication and unpacking) and what `None` means in Python, but this usage is somehow different.
|
2019/11/09
|
[
"https://Stackoverflow.com/questions/58777374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8401374/"
] |
If you do -
```
[element] * 3
```
You get -
```
[element, element, element]
```
That's what the code does, `[None] * height`
That is, if -
```
height = 4
[None] * height
# equals [None, None, None, None]
```
|
```
>>> [None] * 5
[None, None, None, None, None]
```
Gives you a list of size `height` in your case
|