Dataset columns:
- qid: int64 (values 46k to 74.7M)
- question: string (lengths 54 to 37.8k)
- date: string (length 10)
- metadata: list (length 3)
- response_j: string (lengths 17 to 26k)
- response_k: string (lengths 26 to 26k)
4,135,261
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client: ``` $ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko >>> ssh = paramiko.SSHClient() >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username="root", password=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ``` When I use ssh from the command line, it works fine: ``` ssh root@123.0.0.1 BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash) Enter 'help' for a list of built-in commands. # ``` Anyone seen this before? **Edit 1** Here is the verbose output of the ssh command: ``` :~$ ssh -v root@123.0.0.1 OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22. debug1: Connection established. debug1: identity file /home/waffleman/.ssh/identity type -1 debug1: identity file /home/waffleman/.ssh/id_rsa type -1 debug1: identity file /home/waffleman/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '123.0.0.1' is known and matches the RSA host key. debug1: Found key in /home/waffleman/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentication succeeded (none). debug1: channel 0: new [client-session] debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 ``` **Edit 2** Here is the python output with debug output: ``` Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import paramiko, os >>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG) >>> ssh = paramiko.SSHClient() >>> ssh.load_system_host_keys() >>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username='root', password=None) DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1) DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK INFO:paramiko.transport:Authentication (publickey) failed. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ```
2010/11/09
[ "https://Stackoverflow.com/questions/4135261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197108/" ]
It's an old issue, but I just got the same error and I think it'll be helpful to list the following info: 1. I'm using paramiko 2.9.1 and python>=3.6; make sure your paramiko>=2.9.0. 2. The command `ssh <hostname>` works fine. 3. The code below gets the error `AuthenticationException: Authentication failed.`: ``` import paramiko client = paramiko.SSHClient() client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) session = client.connect("<hostname>") ``` From <https://github.com/paramiko/paramiko/issues/1984>, I know this is a bug related to auth algorithms. You'll need to add a `disabled_algorithms` param in connect(); see the docs here: <https://www.paramiko.org/changelog.html#2.9.0> But the changelog of 2.9.0 has a typo for disabled\_algorithms; it should be: ``` client.connect("<hostname>", disabled_algorithms={'pubkeys': ['rsa-sha2-256', 'rsa-sha2-512']}) ``` instead of: ``` client.connect("<hostname>", disabled_algorithms={'keys': ['rsa-sha2-256', 'rsa-sha2-512']}) ``` With that, everything works. PS: I just got the error `Unable to agree on a pubkey algorithm for signing a 'ssh-rsa' key!` for other hosts; maybe downgrading paramiko below 2.9.0 is a better way for those.
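To make the fix above concrete, here is a minimal, self-contained sketch of the `disabled_algorithms` workaround; the hostname, username, and key path are placeholders you would replace with your own values.

```python
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Disable the SHA-2 RSA public-key algorithms so Paramiko falls back to plain
# ssh-rsa, which older servers expect (see the linked issue #1984).
client.connect(
    "<hostname>",                                        # placeholder host
    username="<user>",                                   # placeholder user
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),    # placeholder key path
    disabled_algorithms={"pubkeys": ["rsa-sha2-256", "rsa-sha2-512"]},
)

stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```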
Make sure that the permissions on the private key file (and possibly the public key and the containing folder) are set to be very restrictive (e.g. `chmod 600 id_rsa`). The OpenSSH client refuses to use a private key whose permissions are too open, so this is required to use the file as an ssh key. Found this out from my helpful colleague :) Also make sure that you are using the correct username for the given ssh key.
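As a quick way to check the point above, here is a small sketch (the key path is just an example) that inspects and, if needed, tightens the permissions on a private key before handing it to an ssh client.

```python
import os
import stat

key_path = os.path.expanduser("~/.ssh/id_rsa")  # example path

mode = stat.S_IMODE(os.stat(key_path).st_mode)
print(f"current mode of {key_path}: {oct(mode)}")

# Anything readable or writable by group/other is "too open" for OpenSSH.
if mode & (stat.S_IRWXG | stat.S_IRWXO):
    os.chmod(key_path, 0o600)  # equivalent of `chmod 600 id_rsa`
    print("tightened permissions to 0600")
```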
4,135,261
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client: ``` $ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko >>> ssh = paramiko.SSHClient() >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username="root", password=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ``` When I use ssh from the command line, it works fine: ``` ssh root@123.0.0.1 BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash) Enter 'help' for a list of built-in commands. # ``` Anyone seen this before? **Edit 1** Here is the verbose output of the ssh command: ``` :~$ ssh -v root@123.0.0.1 OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22. debug1: Connection established. debug1: identity file /home/waffleman/.ssh/identity type -1 debug1: identity file /home/waffleman/.ssh/id_rsa type -1 debug1: identity file /home/waffleman/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '123.0.0.1' is known and matches the RSA host key. debug1: Found key in /home/waffleman/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentication succeeded (none). debug1: channel 0: new [client-session] debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 ``` **Edit 2** Here is the python output with debug output: ``` Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import paramiko, os >>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG) >>> ssh = paramiko.SSHClient() >>> ssh.load_system_host_keys() >>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username='root', password=None) DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1) DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK INFO:paramiko.transport:Authentication (publickey) failed. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ```
2010/11/09
[ "https://Stackoverflow.com/questions/4135261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197108/" ]
As a very late follow-up on this matter, I believe I was running into the same issue as waffleman, in the context of a confined network. The hint about using `auth_none` on the `Transport` object turned out quite helpful, but I found myself a little puzzled as to how to implement that. The thing is, as of today at least, I can't get the `Transport` object of an `SSHClient` object until it has connected; but it won't connect in the first place... So, in case this is useful to others, my workaround is below. I just override the `_auth` method. OK, this is fragile, as `_auth` is a private method. My other alternatives were - actually still are - to manually create the `Transport` and `Channel` objects, but for the time being I feel like I'm much better off with all this still under the hood. ``` from paramiko import SSHClient, BadAuthenticationType class SSHClient_try_noauth(SSHClient): def _auth(self, username, *args): try: self._transport.auth_none(username) except BadAuthenticationType: super()._auth(username, *args) ```
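For completeness, a hedged usage sketch of the subclass above; the host and username are the placeholders from the question, and it assumes the server really does accept the `none` auth method.

```python
import paramiko

ssh = SSHClient_try_noauth()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Placeholder device from the question; _auth tries auth_none first and only
# falls back to the normal key/password flow if the server rejects "none".
ssh.connect("123.0.0.1", username="root", password=None)

stdin, stdout, stderr = ssh.exec_command("echo hello")
print(stdout.read().decode())
ssh.close()
```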
There could be different reasons on the **server** side (the sshd you're connecting to), so it might be hard to debug from the client side. For example, `tail -f /var/log/secure` : > > Oct 9 15:50:26 pc1udatahgw04 sshd[27501]: Authentication refused: bad > ownership or modes for directory /home/testuser > > > If you run `ls -lad /home/testuser` to see the permissions, you'll see, for example in our case: ``` $ ls -lad /home/testuser drwxrwxr-x 16 testuser testgroup 57344 Oct 9 15:23 /home/testuser ``` Notice the second `w` bit: the home directory was opened up for group writes, and `sshd` refuses key-based authentication in this case. Again, check the sshd log on the *server* side. There could be other issues like the ones already mentioned: * /home/user/.ssh directory is too open * /home/user/.ssh/id\_rsa file is too open * /home/user/.ssh/id\_rsa.pub file is too open * /home/user/.ssh/id\_ecdsa file is too open * /home/user/.ssh/id\_ecdsa.pub file is too open etc.
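If you have shell access to the server, a small sketch like this (the paths are the usual defaults; adjust as needed) can flag the "too open" cases listed above by checking for group/other write bits.

```python
import os
import stat

home = os.path.expanduser("~")
candidates = [
    home,                                        # home dir must not be group/other writable
    os.path.join(home, ".ssh"),
    os.path.join(home, ".ssh", "authorized_keys"),
    os.path.join(home, ".ssh", "id_rsa"),
]

for path in candidates:
    if not os.path.exists(path):
        continue
    mode = stat.S_IMODE(os.stat(path).st_mode)
    too_open = bool(mode & (stat.S_IWGRP | stat.S_IWOTH))
    print(f"{path}: {oct(mode)}{'  <-- too open for sshd' if too_open else ''}")
```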
4,135,261
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client: ``` $ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko >>> ssh = paramiko.SSHClient() >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username="root", password=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ``` When I use ssh from the command line, it works fine: ``` ssh root@123.0.0.1 BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash) Enter 'help' for a list of built-in commands. # ``` Anyone seen this before? **Edit 1** Here is the verbose output of the ssh command: ``` :~$ ssh -v root@123.0.0.1 OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22. debug1: Connection established. debug1: identity file /home/waffleman/.ssh/identity type -1 debug1: identity file /home/waffleman/.ssh/id_rsa type -1 debug1: identity file /home/waffleman/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '123.0.0.1' is known and matches the RSA host key. debug1: Found key in /home/waffleman/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentication succeeded (none). debug1: channel 0: new [client-session] debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 ``` **Edit 2** Here is the python output with debug output: ``` Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import paramiko, os >>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG) >>> ssh = paramiko.SSHClient() >>> ssh.load_system_host_keys() >>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username='root', password=None) DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1) DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK INFO:paramiko.transport:Authentication (publickey) failed. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ```
2010/11/09
[ "https://Stackoverflow.com/questions/4135261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197108/" ]
The ssh server on the remote device denied your authentication. Make sure you're using the correct key, the public key is present in `authorized_keys`, the `.ssh` directory permissions are correct, the `authorized_keys` permissions are correct, and the device doesn't have any other access restrictions. It's hard to say what's going on without logs from the server. [EDIT] I just looked back through your output: you are authenticating using `none` authentication. This is rarely permitted; it is normally used only to determine which auth methods the server allows. It's possible your server is using host-based authentication (or none at all!). Since `auth_none()` is rarely used, it's not accessible from the `SSHClient` class, so you will need to use `Transport` directly. ``` transport.auth_none('root') ```
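A minimal sketch of driving `Transport` directly with `auth_none`, assuming the device at the placeholder address from the question really allows the `none` method (as the OP's `ssh -v` output suggests):

```python
import paramiko

# Placeholder address from the question; its sshd apparently accepts "none" auth.
transport = paramiko.Transport(("123.0.0.1", 22))
transport.start_client()        # key exchange only, no authentication yet
transport.auth_none("root")     # the "none" auth method

channel = transport.open_session()
channel.exec_command("uname -a")
print(channel.recv(4096).decode())
transport.close()
```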
[paramiko's SSHClient](http://www.lag.net/paramiko/docs/paramiko.SSHClient-class.html) has a [`load_system_host_keys`](http://www.lag.net/paramiko/docs/paramiko.SSHClient-class.html#load_system_host_keys) method which you can use to load a user-specific set of host keys. As the example in the docs shows, it needs to be called before connecting to the server.
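A short sketch of that ordering, using the placeholder target from the question; it assumes the host is already present in `~/.ssh/known_hosts`, since the host keys must be loaded before `connect` for the server's key to be verified.

```python
import paramiko

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()   # must run before connect(); reads ~/.ssh/known_hosts
ssh.connect("123.0.0.1", username="root", password=None)  # placeholder target
```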
4,135,261
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client: ``` $ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko >>> ssh = paramiko.SSHClient() >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username="root", password=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ``` When I use ssh from the command line, it works fine: ``` ssh root@123.0.0.1 BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash) Enter 'help' for a list of built-in commands. # ``` Anyone seen this before? **Edit 1** Here is the verbose output of the ssh command: ``` :~$ ssh -v root@123.0.0.1 OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22. debug1: Connection established. debug1: identity file /home/waffleman/.ssh/identity type -1 debug1: identity file /home/waffleman/.ssh/id_rsa type -1 debug1: identity file /home/waffleman/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '123.0.0.1' is known and matches the RSA host key. debug1: Found key in /home/waffleman/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentication succeeded (none). debug1: channel 0: new [client-session] debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 ``` **Edit 2** Here is the python output with debug output: ``` Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import paramiko, os >>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG) >>> ssh = paramiko.SSHClient() >>> ssh.load_system_host_keys() >>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username='root', password=None) DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1) DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK INFO:paramiko.transport:Authentication (publickey) failed. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ```
2010/11/09
[ "https://Stackoverflow.com/questions/4135261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197108/" ]
Make sure that the permissions on the private key file (and possibly the public key and the containing folder) are set to be very restrictive (e.g. `chmod 600 id_rsa`). The OpenSSH client refuses to use a private key whose permissions are too open, so this is required to use the file as an ssh key. Found this out from my helpful colleague :) Also make sure that you are using the correct username for the given ssh key.
You may need to check the log on the server: try executing `tail -f /var/log/auth.log` and you may find the reason why the server refuses your connection. If the server log shows something like `userauth_pubkey: unsupported public key algorithm: rsa-sha2-512 [preauth]`, then you can add `transport.server_extensions = {'server-sig-algs': 'ssh-rsa'}` after you initialize your transport.
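A hedged sketch of where that override would go when using `Transport` directly. `server_extensions` is an internal attribute suggested by this answer, not a documented API, and I have not verified it across Paramiko versions; the host, user, and key path are placeholders.

```python
import os
import paramiko

transport = paramiko.Transport(("<hostname>", 22))   # placeholder target
transport.start_client()

# As suggested above: pretend the server only advertised ssh-rsa so Paramiko
# signs with ssh-rsa instead of rsa-sha2-512. Internal attribute, unverified.
transport.server_extensions = {'server-sig-algs': 'ssh-rsa'}

key = paramiko.RSAKey.from_private_key_file(os.path.expanduser("~/.ssh/id_rsa"))
transport.auth_publickey("<user>", key)
```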
4,135,261
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client: ``` $ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko >>> ssh = paramiko.SSHClient() >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username="root", password=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ``` When I use ssh from the command line, it works fine: ``` ssh root@123.0.0.1 BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash) Enter 'help' for a list of built-in commands. # ``` Anyone seen this before? **Edit 1** Here is the verbose output of the ssh command: ``` :~$ ssh -v root@123.0.0.1 OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22. debug1: Connection established. debug1: identity file /home/waffleman/.ssh/identity type -1 debug1: identity file /home/waffleman/.ssh/id_rsa type -1 debug1: identity file /home/waffleman/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '123.0.0.1' is known and matches the RSA host key. debug1: Found key in /home/waffleman/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentication succeeded (none). debug1: channel 0: new [client-session] debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 ``` **Edit 2** Here is the python output with debug output: ``` Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import paramiko, os >>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG) >>> ssh = paramiko.SSHClient() >>> ssh.load_system_host_keys() >>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username='root', password=None) DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1) DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK INFO:paramiko.transport:Authentication (publickey) failed. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ```
2010/11/09
[ "https://Stackoverflow.com/questions/4135261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197108/" ]
The ssh server on the remote device denied your authentication. Make sure you're using the correct key, the public key is present in `authorized_keys`, the `.ssh` directory permissions are correct, the `authorized_keys` permissions are correct, and the device doesn't have any other access restrictions. It's hard to say what's going on without logs from the server. [EDIT] I just looked back through your output: you are authenticating using `none` authentication. This is rarely permitted; it is normally used only to determine which auth methods the server allows. It's possible your server is using host-based authentication (or none at all!). Since `auth_none()` is rarely used, it's not accessible from the `SSHClient` class, so you will need to use `Transport` directly. ``` transport.auth_none('root') ```
venv installation also makes global files ----------------------------------------- Installing paramiko in a venv installs files both in the venv and in the global environment. Using paramiko in that venv only does not seem to work. In codium / vscode, be in a folder that has no access to the venv and then use paramiko in the base environment. If you uninstall it from the venv, the base environment does not run paramiko anymore. **From all of this it seems best to install paramiko *only* in the base environment so that it is available for any venv as well.** Details ------- ### installation in the venv leads to global files as well In my case, this error only popped up when I was in a virtual environment (venv) or when I was in a folder that contained a venv as well, but with Python interpreter of the base environment activated: ```sh >>> ssh.connect(host, port=port, username=user, key_filename=key_filepath) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/client.py", line 435, in connect self._auth( File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/client.py", line 766, in _auth raise saved_exception File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/client.py", line 742, in _auth self._transport.auth_publickey(username, key) File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/transport.py", line 1634, in auth_publickey return self.auth_handler.wait_for_response(my_event) File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/auth_handler.py", line 258, in wait_for_response raise e paramiko.ssh_exception.AuthenticationException: Authentication failed. ``` The script below worked only when I loaded whatever folder as the project folder in my code editor that did not have a venv with an installed Paramiko in it. ```py from os import getenv import paramiko from dotenv import load_dotenv load_dotenv(MY_FULL_PATH, override=True) ssh = paramiko.SSHClient() # ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) host = getenv("MY_HOST") port = getenv("MY_PORT") user = getenv("MY_USER") key_filepath = getenv("MY_SSH_KEY_FILEPATH") ssh.connect(host, port=port, username=user, key_filename=key_filepath) sftp = ssh.open_sftp() sftp.put(MY_FILEPATH1, MY_FILEPATH2) ``` As soon as there is a venv with installed Paramiko in the project folder, Paramiko seems to use the venv by default, and that error pops up **even if you choose the base environment as the interpreter** instead. I can only guess that this is a problem that occurs when Paramiko is installed both in the base environment and in the venv, as in my case, although I installed it *only* in the venv. #### uninstall from the base env When I tried uninstalling it from the base environment, it did not find any files: ```sh pip3 uninstall paramiko Found existing installation: paramiko 2.6.0 Not uninstalling paramiko at /usr/lib/python3/dist-packages, outside environment /usr Can't uninstall 'paramiko'. No files were found to uninstall. ``` Still, I find it at `./lib/python3/dist-packages/` when searching `grep -lR paramiko /usr`. And I have it also in two venvs. My guess is that Paramiko cannot deal with an installation in a venv since it is still successfully used when you are not in the venv. 
If you are in a folder with access to the venv that actually has it installed, it does not work unless you uninstall it again (tested). The venv that causes the errors is a completely new setup, because I had problems installing Paramiko in another existing venv. The solution was to uninstall it from the venv; then I can use the venv and get Paramiko from the global installation, probably because the global installation is overridden by the venv installation, which is in turn wrongly intertwined with the global installation. #### uninstall from the venv When I uninstalled it from the venv, paramiko was not found in the base environment anymore. I also see that using Paramiko in a venv needs some extra steps if you want to run a command in a venv; perhaps that explains why Paramiko is generally a global installation? See [Set up virtualenv with Paramiko SSH](https://stackoverflow.com/questions/38793109/set-up-virtualenv-with-paramiko-ssh). Any further ideas welcome.
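When debugging this kind of mixed base/venv setup, a quick check of which Paramiko the interpreter actually picked up can save time; a small sketch:

```python
import sys
import paramiko

# Shows whether paramiko comes from the venv or the global site-packages,
# and which interpreter is running.
print("interpreter:     ", sys.executable)
print("paramiko version:", paramiko.__version__)
print("paramiko path:   ", paramiko.__file__)
```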
4,135,261
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client: ``` $ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko >>> ssh = paramiko.SSHClient() >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username="root", password=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ``` When I use ssh from the command line, it works fine: ``` ssh root@123.0.0.1 BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash) Enter 'help' for a list of built-in commands. # ``` Anyone seen this before? **Edit 1** Here is the verbose output of the ssh command: ``` :~$ ssh -v root@123.0.0.1 OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22. debug1: Connection established. debug1: identity file /home/waffleman/.ssh/identity type -1 debug1: identity file /home/waffleman/.ssh/id_rsa type -1 debug1: identity file /home/waffleman/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '123.0.0.1' is known and matches the RSA host key. debug1: Found key in /home/waffleman/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentication succeeded (none). debug1: channel 0: new [client-session] debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 ``` **Edit 2** Here is the python output with debug output: ``` Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import paramiko, os >>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG) >>> ssh = paramiko.SSHClient() >>> ssh.load_system_host_keys() >>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username='root', password=None) DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1) DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK INFO:paramiko.transport:Authentication (publickey) failed. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ```
2010/11/09
[ "https://Stackoverflow.com/questions/4135261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197108/" ]
As a very late follow-up on this matter, I believe I was running into the same issue as waffleman, in the context of a confined network. The hint about using `auth_none` on the `Transport` object turned out quite helpful, but I found myself a little puzzled as to how to implement that. The thing is, as of today at least, I can't get the `Transport` object of an `SSHClient` object until it has connected; but it won't connect in the first place... So, in case this is useful to others, my workaround is below. I just override the `_auth` method. OK, this is fragile, as `_auth` is a private method. My other alternatives were - actually still are - to manually create the `Transport` and `Channel` objects, but for the time being I feel like I'm much better off with all this still under the hood. ``` from paramiko import SSHClient, BadAuthenticationType class SSHClient_try_noauth(SSHClient): def _auth(self, username, *args): try: self._transport.auth_none(username) except BadAuthenticationType: super()._auth(username, *args) ```
Make sure that the permissions on the private key file (and possibly the public key and the containing folder) are set to be very restrictive (e.g. `chmod 600 id_rsa`). The OpenSSH client refuses to use a private key whose permissions are too open, so this is required to use the file as an ssh key. Found this out from my helpful colleague :) Also make sure that you are using the correct username for the given ssh key.
4,135,261
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client: ``` $ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import paramiko >>> ssh = paramiko.SSHClient() >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username="root", password=None) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ``` When I use ssh from the command line, it works fine: ``` ssh root@123.0.0.1 BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash) Enter 'help' for a list of built-in commands. # ``` Anyone seen this before? **Edit 1** Here is the verbose output of the ssh command: ``` :~$ ssh -v root@123.0.0.1 OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22. debug1: Connection established. debug1: identity file /home/waffleman/.ssh/identity type -1 debug1: identity file /home/waffleman/.ssh/id_rsa type -1 debug1: identity file /home/waffleman/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '123.0.0.1' is known and matches the RSA host key. debug1: Found key in /home/waffleman/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentication succeeded (none). debug1: channel 0: new [client-session] debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.utf8 ``` **Edit 2** Here is the python output with debug output: ``` Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import paramiko, os >>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG) >>> ssh = paramiko.SSHClient() >>> ssh.load_system_host_keys() >>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) >>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) >>> ssh.connect("123.0.0.1", username='root', password=None) DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1) DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK INFO:paramiko.transport:Authentication (publickey) failed. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys) File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth raise saved_exception paramiko.AuthenticationException: Authentication failed. >>> ```
2010/11/09
[ "https://Stackoverflow.com/questions/4135261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197108/" ]
Make sure that the permissions on the private key file (and possibly the public key and the containing folder) are set to be very restrictive (e.g. `chmod 600 id_rsa`). The OpenSSH client refuses to use a private key whose permissions are too open, so this is required to use the file as an ssh key. Found this out from my helpful colleague :) Also make sure that you are using the correct username for the given ssh key.
venv installation also makes global files ----------------------------------------- Installing paramiko in a venv installs files both in the venv and in the global environment. Using paramiko in that venv only does not seem to work. In codium / vscode, be in a folder that has no access to the venv and then use paramiko in the base environment. If you uninstall it from the venv, the base environment does not run paramiko anymore. **From all of this it seems best to install paramiko *only* in the base environment so that it is available for any venv as well.** Details ------- ### installation in the venv leads to global files as well In my case, this error only popped up when I was in a virtual environment (venv) or when I was in a folder that contained a venv as well, but with Python interpreter of the base environment activated: ```sh >>> ssh.connect(host, port=port, username=user, key_filename=key_filepath) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/client.py", line 435, in connect self._auth( File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/client.py", line 766, in _auth raise saved_exception File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/client.py", line 742, in _auth self._transport.auth_publickey(username, key) File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/transport.py", line 1634, in auth_publickey return self.auth_handler.wait_for_response(my_event) File "/home/MY_USER/Documents/MY_PROJECT/MY_VENV/lib/python3.8/site-packages/paramiko/auth_handler.py", line 258, in wait_for_response raise e paramiko.ssh_exception.AuthenticationException: Authentication failed. ``` The script below worked only when I loaded whatever folder as the project folder in my code editor that did not have a venv with an installed Paramiko in it. ```py from os import getenv import paramiko from dotenv import load_dotenv load_dotenv(MY_FULL_PATH, override=True) ssh = paramiko.SSHClient() # ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts')) ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) host = getenv("MY_HOST") port = getenv("MY_PORT") user = getenv("MY_USER") key_filepath = getenv("MY_SSH_KEY_FILEPATH") ssh.connect(host, port=port, username=user, key_filename=key_filepath) sftp = ssh.open_sftp() sftp.put(MY_FILEPATH1, MY_FILEPATH2) ``` As soon as there is a venv with installed Paramiko in the project folder, Paramiko seems to use the venv by default, and that error pops up **even if you choose the base environment as the interpreter** instead. I can only guess that this is a problem that occurs when Paramiko is installed both in the base environment and in the venv, as in my case, although I installed it *only* in the venv. #### uninstall from the base env When I tried uninstalling it from the base environment, it did not find any files: ```sh pip3 uninstall paramiko Found existing installation: paramiko 2.6.0 Not uninstalling paramiko at /usr/lib/python3/dist-packages, outside environment /usr Can't uninstall 'paramiko'. No files were found to uninstall. ``` Still, I find it at `./lib/python3/dist-packages/` when searching `grep -lR paramiko /usr`. And I have it also in two venvs. My guess is that Paramiko cannot deal with an installation in a venv since it is still successfully used when you are not in the venv. 
If you are in a folder with access to the venv that actually has it installed, it does not work unless you uninstall it again (tested). The venv that causes the errors is a completely new setup, because I had problems installing Paramiko in another existing venv. The solution was to uninstall it from the venv; then I can use the venv and get Paramiko from the global installation, probably because the global installation is overridden by the venv installation, which is in turn wrongly intertwined with the global installation. #### uninstall from the venv When I uninstalled it from the venv, paramiko was not found in the base environment anymore. I also see that using Paramiko in a venv needs some extra steps if you want to run a command in a venv; perhaps that explains why Paramiko is generally a global installation? See [Set up virtualenv with Paramiko SSH](https://stackoverflow.com/questions/38793109/set-up-virtualenv-with-paramiko-ssh). Any further ideas welcome.
73,353,608
My script takes `-d`, `--delimiter` as argument: ``` parser.add_argument('-d', '--delimiter') ``` but when I pass it `--` as delimiter, it is empty ``` script.py --delimiter='--' ``` I know `--` is special in argument/parameter parsing, but I am using it in the form `--option='--'` and quoted. Why does it not work? I am using Python 3.7.3 Here is test code: ``` #!/bin/python3 import argparse parser = argparse.ArgumentParser() parser.add_argument('--delimiter') parser.add_argument('pattern') args = parser.parse_args() print(args.delimiter) ``` When I run it as `script --delimiter=-- AAA` it prints empty `args.delimiter`.
2022/08/14
[ "https://Stackoverflow.com/questions/73353608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7287412/" ]
Existing bug report ------------------- Patches have been suggested, but none has been applied. [Argparse incorrectly handles '--' as argument to option](https://github.com/python/cpython/issues/58572) Some simple examples: --------------------- ``` In [1]: import argparse In [2]: p = argparse.ArgumentParser() In [3]: a = p.add_argument('--foo') In [4]: p.parse_args(['--foo=123']) Out[4]: Namespace(foo='123') ``` The unexpected case: ``` In [5]: p.parse_args(['--foo=--']) Out[5]: Namespace(foo=[]) ``` A fully quoted value passes through - but I won't get into how you might achieve this via a shell call: ``` In [6]: p.parse_args(['--foo="--"']) Out[6]: Namespace(foo='"--"') ``` '--' as a separate string: ``` In [7]: p.parse_args(['--foo','--']) usage: ipython3 [-h] [--foo FOO] ipython3: error: argument --foo: expected one argument ... ``` Another example with the double quotes: ``` In [8]: p.parse_args(['--foo','"--"']) Out[8]: Namespace(foo='"--"') ``` In `_parse_known_args`, the input is scanned and classified as "O" or "A". The '--' is handled as ``` # all args after -- are non-options if arg_string == '--': arg_string_pattern_parts.append('-') for arg_string in arg_strings_iter: arg_string_pattern_parts.append('A') ``` I think the '--' strings are stripped out after that, but I haven't found that part of the code yet. I'm also not finding where the '--foo=...' version is handled. I vaguely recall some bugs/issues over handling of multiple occurrences of '--'. With the migration to GitHub, I'm not following `argparse` developments as much as I used to. edit ---- `_get_values` starts with: ``` def _get_values(self, action, arg_strings): # for everything but PARSER, REMAINDER args, strip out first '--' if action.nargs not in [PARSER, REMAINDER]: try: arg_strings.remove('--') except ValueError: pass ``` Why that results in an empty list will require more thought and testing. The '=' is handled in `_parse_optional`, which is used during the first scan: ``` # if the option string before the "=" is present, return the action if '=' in arg_string: option_string, explicit_arg = arg_string.split('=', 1) if option_string in self._option_string_actions: action = self._option_string_actions[option_string] return action, option_string, explicit_arg ``` old bug issues -------------- [argparse handling multiple "--" in args improperly](https://bugs.python.org/issue13922) [argparse: Allow the use of -- to break out of nargs and into subparser](https://github.com/python/cpython/issues/53780)
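The interactive examples above can be condensed into a small stand-alone reproduction script; the results in the comments are the ones this answer reports for Python versions affected by the linked issue.

```python
import argparse

p = argparse.ArgumentParser()
p.add_argument("--foo")

print(p.parse_args(["--foo=123"]))    # Namespace(foo='123')   normal case
print(p.parse_args(["--foo=--"]))     # Namespace(foo=[])      the bug: the value is lost
print(p.parse_args(['--foo="--"']))   # Namespace(foo='"--"')  quotes pass through literally
```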
It calls `parse_args` which calls `parse_known_args` which calls `_parse_known_args`. Then, on line 2078 (or something similar), it does this (inside a while loop going through the string): ```py start_index = consume_optional(start_index) ``` which calls the `consume_optional` (which makes sense, because this is an optional argument it is parsing right now) defined earlier in the method `_parse_known_args`. When given `--delimiter='--'`, it will make this `action_tuples`: ```py # if the action expect exactly one argument, we've # successfully matched the option; exit the loop elif arg_count == 1: stop = start_index + 1 args = [explicit_arg] action_tuples.append((action, args, option_string)) break ## ## The above code gives you the following: ## action_tuples=[(_StoreAction(option_strings=['-d', '--delimiter'], dest='delimiter', nargs=None, const=None, default=None, type=None, choices=None, help=None, metavar=None), ['--'], '--delimiter')] ``` That is then iterated to, and is then fed to `take_action` on line 2009: ```py assert action_tuples for action, args, option_string in action_tuples: take_action(action, args, option_string) return stop ``` The `take_action` function will then call `self._get_values(action, argument_strings)` on line 1918, which, as mentioned in the answer by @hpaulj, removes the `--`. Then, you're left with the empty list.
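Given that the stripping happens inside `_get_values`, one possible workaround is to rewrite `--opt=--` tokens before parsing and restore the value afterwards. This is only a sketch: the sentinel string is an arbitrary choice and is assumed not to occur in real input.

```python
import argparse
import sys

SENTINEL = "\x00DOUBLE_DASH\x00"  # arbitrary placeholder, assumed absent from real arguments

def parse_with_literal_double_dash(parser, argv=None):
    argv = list(sys.argv[1:] if argv is None else argv)
    # only rewrite the '--opt=--' form, so a real end-of-options '--' token is left alone
    rewritten = [a[:-2] + SENTINEL if a.startswith("--") and a.endswith("=--") else a
                 for a in argv]
    args = parser.parse_args(rewritten)
    for name, value in vars(args).items():
        if value == SENTINEL:
            setattr(args, name, "--")
    return args

parser = argparse.ArgumentParser()
parser.add_argument('-d', '--delimiter')
parser.add_argument('pattern')
print(parse_with_literal_double_dash(parser, ['--delimiter=--', 'AAA']))
# Namespace(delimiter='--', pattern='AAA')
```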
73,353,608
My script takes `-d`, `--delimiter` as argument: ``` parser.add_argument('-d', '--delimiter') ``` but when I pass it `--` as delimiter, it is empty ``` script.py --delimiter='--' ``` I know `--` is special in argument/parameter parsing, but I am using it in the form `--option='--'` and quoted. Why does it not work? I am using Python 3.7.3 Here is test code: ``` #!/bin/python3 import argparse parser = argparse.ArgumentParser() parser.add_argument('--delimiter') parser.add_argument('pattern') args = parser.parse_args() print(args.delimiter) ``` When I run it as `script --delimiter=-- AAA` it prints empty `args.delimiter`.
2022/08/14
[ "https://Stackoverflow.com/questions/73353608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7287412/" ]
This looks like a bug. You should report it. [This code](https://github.com/python/cpython/blob/3.10/Lib/argparse.py#L2422-L2426) in `argparse.py` is the start of `_get_values`, one of the primary helper functions for parsing values: ``` if action.nargs not in [PARSER, REMAINDER]: try: arg_strings.remove('--') except ValueError: pass ``` The code receives the `--` argument as the single element of a list `['--']`. It tries to remove `'--'` from the list, because when using `--` as an end-of-options marker, the `'--'` string will end up in `arg_strings` for one of the `_get_values` calls. However, when `'--'` is the actual argument value, the code still removes it anyway, so `arg_strings` ends up being an empty list instead of a single-element list. The code then goes through an else-if chain for handling different kinds of argument (branch bodies omitted to save space here): ``` # optional argument produces a default when not present if not arg_strings and action.nargs == OPTIONAL: ... # when nargs='*' on a positional, if there were no command-line # args, use the default if it is anything other than None elif (not arg_strings and action.nargs == ZERO_OR_MORE and not action.option_strings): ... # single argument or optional argument produces a single value elif len(arg_strings) == 1 and action.nargs in [None, OPTIONAL]: ... # REMAINDER arguments convert all values, checking none elif action.nargs == REMAINDER: ... # PARSER arguments convert all values, but check only the first elif action.nargs == PARSER: ... # SUPPRESS argument does not put anything in the namespace elif action.nargs == SUPPRESS: ... # all other types of nargs produce a list else: ... ``` This code should go through the 3rd branch, ``` # single argument or optional argument produces a single value elif len(arg_strings) == 1 and action.nargs in [None, OPTIONAL]: ``` but because the argument is missing from `arg_strings`, `len(arg_strings)` is 0. It instead hits the final case, which is supposed to handle a completely different kind of argument. That branch ends up returning an empty list instead of the `'--'` string that should have been returned, which is why `args.delimiter` ends up being an empty list instead of a `'--'` string. --- This bug manifests with positional arguments too. For example, ``` import argparse parser = argparse.ArgumentParser() parser.add_argument('a') parser.add_argument('b') args = parser.parse_args(["--", "--", "--"]) print(args) ``` prints ``` Namespace(a='--', b=[]) ``` because when `_get_values` handles the `b` argument, it receives `['--']` as `arg_strings` and removes the `'--'`. When handling the `a` argument, it receives `['--', '--']`, representing one end-of-options marker and one actual `--` argument value, and it successfully removes the end-of-options marker, but when handling `b`, it removes the actual argument value.
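Until a fix lands upstream, a more targeted workaround is to subclass `ArgumentParser` and special-case exactly this situation. Note that `_get_values` is a private method, so this sketch relies on implementation details and may break between Python versions:

```python
import argparse

class LiteralDoubleDashParser(argparse.ArgumentParser):
    # hypothetical workaround: keep a literal '--' that was given as --opt=--
    def _get_values(self, action, arg_strings):
        # only intervene for a plain single-value option whose sole argument is '--'
        if arg_strings == ['--'] and action.option_strings and action.nargs is None:
            return self._get_value(action, '--')
        return super()._get_values(action, arg_strings)

parser = LiteralDoubleDashParser()
parser.add_argument('-d', '--delimiter')
parser.add_argument('pattern')
print(parser.parse_args(['--delimiter=--', 'AAA']))
# Namespace(delimiter='--', pattern='AAA')
```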
Existing bug report
-------------------

Patches have been suggested, but they haven't been applied.

[Argparse incorrectly handles '--' as argument to option](https://github.com/python/cpython/issues/58572)

Some simple examples:
---------------------

```
In [1]: import argparse
In [2]: p = argparse.ArgumentParser()
In [3]: a = p.add_argument('--foo')
In [4]: p.parse_args(['--foo=123'])
Out[4]: Namespace(foo='123')
```

The unexpected case:

```
In [5]: p.parse_args(['--foo=--'])
Out[5]: Namespace(foo=[])
```

Fully quoted, it passes through - but I won't get into how you might achieve this via a shell call:

```
In [6]: p.parse_args(['--foo="--"'])
Out[6]: Namespace(foo='"--"')
```

'--' as a separate string:

```
In [7]: p.parse_args(['--foo','--'])
usage: ipython3 [-h] [--foo FOO]
ipython3: error: argument --foo: expected one argument
...
```

another example of the double quote:

```
In [8]: p.parse_args(['--foo','"--"'])
Out[8]: Namespace(foo='"--"')
```

In `_parse_known_args`, the input is scanned and classified as "O" or "A". The '--' is handled as

```
# all args after -- are non-options
if arg_string == '--':
    arg_string_pattern_parts.append('-')
    for arg_string in arg_strings_iter:
        arg_string_pattern_parts.append('A')
```

I think the '--' tokens are stripped out after that, but I haven't found that part of the code yet. I'm also not finding where the '--foo=...' version is handled. I vaguely recall some bug/issues over the handling of multiple occurrences of '--'. With the migration to github, I'm not following `argparse` developments as much as I used to.

edit
----

`get_values` starts with:

```
def _get_values(self, action, arg_strings):
    # for everything but PARSER, REMAINDER args, strip out first '--'
    if action.nargs not in [PARSER, REMAINDER]:
        try:
            arg_strings.remove('--')
        except ValueError:
            pass
```

Why that results in an empty list will require more thought and testing.

The '=' is handled in `_parse_optional`, which is used during the first scan:

```
# if the option string before the "=" is present, return the action
if '=' in arg_string:
    option_string, explicit_arg = arg_string.split('=', 1)
    if option_string in self._option_string_actions:
        action = self._option_string_actions[option_string]
        return action, option_string, explicit_arg
```

old bug issues
--------------

[argparse handling multiple "--" in args improperly](https://bugs.python.org/issue13922)

[argparse: Allow the use of -- to break out of nargs and into subparser](https://github.com/python/cpython/issues/53780)
73,353,608
My script takes `-d`, `--delimiter` as argument: ``` parser.add_argument('-d', '--delimiter') ``` but when I pass it `--` as delimiter, it is empty ``` script.py --delimiter='--' ``` I know `--` is special in argument/parameter parsing, but I am using it in the form `--option='--'` and quoted. Why does it not work? I am using Python 3.7.3 Here is test code: ``` #!/bin/python3 import argparse parser = argparse.ArgumentParser() parser.add_argument('--delimiter') parser.add_argument('pattern') args = parser.parse_args() print(args.delimiter) ``` When I run it as `script --delimiter=-- AAA` it prints empty `args.delimiter`.
2022/08/14
[ "https://Stackoverflow.com/questions/73353608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7287412/" ]
This looks like a bug. You should report it. [This code](https://github.com/python/cpython/blob/3.10/Lib/argparse.py#L2422-L2426) in `argparse.py` is the start of `_get_values`, one of the primary helper functions for parsing values: ``` if action.nargs not in [PARSER, REMAINDER]: try: arg_strings.remove('--') except ValueError: pass ``` The code receives the `--` argument as the single element of a list `['--']`. It tries to remove `'--'` from the list, because when using `--` as an end-of-options marker, the `'--'` string will end up in `arg_strings` for one of the `_get_values` calls. However, when `'--'` is the actual argument value, the code still removes it anyway, so `arg_strings` ends up being an empty list instead of a single-element list. The code then goes through an else-if chain for handling different kinds of argument (branch bodies omitted to save space here): ``` # optional argument produces a default when not present if not arg_strings and action.nargs == OPTIONAL: ... # when nargs='*' on a positional, if there were no command-line # args, use the default if it is anything other than None elif (not arg_strings and action.nargs == ZERO_OR_MORE and not action.option_strings): ... # single argument or optional argument produces a single value elif len(arg_strings) == 1 and action.nargs in [None, OPTIONAL]: ... # REMAINDER arguments convert all values, checking none elif action.nargs == REMAINDER: ... # PARSER arguments convert all values, but check only the first elif action.nargs == PARSER: ... # SUPPRESS argument does not put anything in the namespace elif action.nargs == SUPPRESS: ... # all other types of nargs produce a list else: ... ``` This code should go through the 3rd branch, ``` # single argument or optional argument produces a single value elif len(arg_strings) == 1 and action.nargs in [None, OPTIONAL]: ``` but because the argument is missing from `arg_strings`, `len(arg_strings)` is 0. It instead hits the final case, which is supposed to handle a completely different kind of argument. That branch ends up returning an empty list instead of the `'--'` string that should have been returned, which is why `args.delimiter` ends up being an empty list instead of a `'--'` string. --- This bug manifests with positional arguments too. For example, ``` import argparse parser = argparse.ArgumentParser() parser.add_argument('a') parser.add_argument('b') args = parser.parse_args(["--", "--", "--"]) print(args) ``` prints ``` Namespace(a='--', b=[]) ``` because when `_get_values` handles the `b` argument, it receives `['--']` as `arg_strings` and removes the `'--'`. When handling the `a` argument, it receives `['--', '--']`, representing one end-of-options marker and one actual `--` argument value, and it successfully removes the end-of-options marker, but when handling `b`, it removes the actual argument value.
It calls `parse_args` which calls `parse_known_args` which calls `_parse_known_args`. Then, on line 2078 (or something similar), it does this (inside a while loop going through the string): ```py start_index = consume_optional(start_index) ``` which calls the `consume_optional` (which makes sense, because this is an optional argument it is parsing right now) defined earlier in the method `_parse_known_args`. When given `--delimiter='--'`, it will make this `action_tuples`: ```py # if the action expect exactly one argument, we've # successfully matched the option; exit the loop elif arg_count == 1: stop = start_index + 1 args = [explicit_arg] action_tuples.append((action, args, option_string)) break ## ## The above code gives you the following: ## action_tuples=[(_StoreAction(option_strings=['-d', '--delimiter'], dest='delimiter', nargs=None, const=None, default=None, type=None, choices=None, help=None, metavar=None), ['--'], '--delimiter')] ``` That is then iterated to, and is then fed to `take_action` on line 2009: ```py assert action_tuples for action, args, option_string in action_tuples: take_action(action, args, option_string) return stop ``` The `take_action` function will then call `self._get_values(action, argument_strings)` on line 1918, which, as mentioned in the answer by @hpaulj, removes the `--`. Then, you're left with the empty list.
18,219,529
In python, logging to syslog is fairly trivial: ``` syslog.openlog("ident") syslog.syslog(0, "spilled beer on server") syslog.closelog() ``` Is there an equivalently simple way in Java? After quite a bit of googling, I've been unable to find an easy to understand method that doesn't require reconfiguring rsyslogd or syslogd. If there's no equivalent, what is the simplest way to log to syslog?
2013/08/13
[ "https://Stackoverflow.com/questions/18219529", "https://Stackoverflow.com", "https://Stackoverflow.com/users/643675/" ]
One way to go direct to the log without udp is with [syslog4j](http://www.syslog4j.org/). I wouldn't necessarily say it's simple, but it doesn't require reconfiguring syslog, at least.
The closest I can think of, would be using [Log4J](https://logging.apache.org/log4j/) and configuring the [SyslogAppender](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/net/SyslogAppender.html) so it writes to syslog. Sorry, that's not as easy as in Python!
18,219,529
In python, logging to syslog is fairly trivial: ``` syslog.openlog("ident") syslog.syslog(0, "spilled beer on server") syslog.closelog() ``` Is there an equivalently simple way in Java? After quite a bit of googling, I've been unable to find an easy to understand method that doesn't require reconfiguring rsyslogd or syslogd. If there's no equivalent, what is the simplest way to log to syslog?
2013/08/13
[ "https://Stackoverflow.com/questions/18219529", "https://Stackoverflow.com", "https://Stackoverflow.com/users/643675/" ]
The closest I can think of, would be using [Log4J](https://logging.apache.org/log4j/) and configuring the [SyslogAppender](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/net/SyslogAppender.html) so it writes to syslog. Sorry, that's not as easy as in Python!
This is the simplest client code I could think of: ``` import java.net.DatagramPacket; import java.net.DatagramSocket; import java.net.InetAddress; // java Syslog localhost "Hello world" public class Syslog { public static void main(String[] args) throws Exception { InetAddress address = InetAddress.getByName(args[0]); byte[] bytes = args[1].getBytes(); DatagramSocket socket = new DatagramSocket(); try { DatagramPacket data = new DatagramPacket(bytes, bytes.length, address, 514); socket.send(data); } finally { socket.close(); } } } ``` And the server (syslogd): ``` import java.net.DatagramPacket; import java.net.DatagramSocket; // java SyslogD public class SyslogD { public static void main(String[] args) throws Exception { DatagramSocket socket = new DatagramSocket(514); try { for(;;) { DatagramPacket data = new DatagramPacket(new byte[4096], 4096); socket.receive(data); System.out.println("[" + data.getAddress().toString() + "] " + new String(data.getData(),0,data.getLength())); } } finally { socket.close(); } } } ```
18,219,529
In python, logging to syslog is fairly trivial: ``` syslog.openlog("ident") syslog.syslog(0, "spilled beer on server") syslog.closelog() ``` Is there an equivalently simple way in Java? After quite a bit of googling, I've been unable to find an easy to understand method that doesn't require reconfiguring rsyslogd or syslogd. If there's no equivalent, what is the simplest way to log to syslog?
2013/08/13
[ "https://Stackoverflow.com/questions/18219529", "https://Stackoverflow.com", "https://Stackoverflow.com/users/643675/" ]
One way to go direct to the log without udp is with [syslog4j](http://www.syslog4j.org/). I wouldn't necessarily say it's simple, but it doesn't require reconfiguring syslog, at least.
This is the simplest client code I could think of: ``` import java.net.DatagramPacket; import java.net.DatagramSocket; import java.net.InetAddress; // java Syslog localhost "Hello world" public class Syslog { public static void main(String[] args) throws Exception { InetAddress address = InetAddress.getByName(args[0]); byte[] bytes = args[1].getBytes(); DatagramSocket socket = new DatagramSocket(); try { DatagramPacket data = new DatagramPacket(bytes, bytes.length, address, 514); socket.send(data); } finally { socket.close(); } } } ``` And the server (syslogd): ``` import java.net.DatagramPacket; import java.net.DatagramSocket; // java SyslogD public class SyslogD { public static void main(String[] args) throws Exception { DatagramSocket socket = new DatagramSocket(514); try { for(;;) { DatagramPacket data = new DatagramPacket(new byte[4096], 4096); socket.receive(data); System.out.println("[" + data.getAddress().toString() + "] " + new String(data.getData(),0,data.getLength())); } } finally { socket.close(); } } } ```
48,102,393
I have 1000 files each having one million lines. Each line has the following form: ``` a number,a text ``` I want to remove all of the numbers from the beginning of every line of every file. including the , Example: ``` 14671823,aboasdyflj -> aboasdyflj ``` What I'm doing is: ``` os.system("sed -i -- 's/^.*,//g' data/*") ``` and it works fine but it's taking a huge amount of time. What is the fastest way to do this? I'm coding in python.
2018/01/04
[ "https://Stackoverflow.com/questions/48102393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5120089/" ]
This is much faster: ``` cut -f2 -d ',' data.txt > tmp.txt && mv tmp.txt data.txt ``` On a file with 11 million rows it took less than one second. To use this on several files in a directory, use: ```sh TMP=/pathto/tmpfile for file in dir/*; do cut -f2 -d ',' "$file" > $TMP && mv $TMP "$file" done ``` A thing worth mentioning is that it often takes much longer time to do stuff in place rather than using a separate file. I tried your sed command but switched from in place to a temporary file. Total time went down from 26s to 9s.
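For comparison, here is the same write-to-a-temp-file-then-replace idea in plain Python (a sketch; the `data/*` glob follows the question, and it assumes every line contains at least one comma):

```python
import glob
import os

for path in glob.glob("data/*"):
    tmp = path + ".tmp"
    # keep only the text after the first comma on each line
    with open(path) as src, open(tmp, "w") as dst:
        for line in src:
            dst.write(line.split(",", 1)[1])
    # the temp file lives next to the original, so the replace is cheap
    os.replace(tmp, path)
```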
I would use GNU `awk` (to leverage the `-i inplace` editing of file) with `,` as the field separator, *no expensive Regex manipulation*: ``` awk -F, -i inplace '{print $2}' file.txt ``` For example, if the filenames have a common prefix like `file`, you can use shell globbing: ``` awk -F, -i inplace '{print $2}' file* ``` `awk` will treat each file as different argument while applying the in-place modifications. --- As a side note, you could simply run the shell command in the shell directly instead of wrapping it in `os.system()` which is insecure and deprecated BTW in favor of `subprocess`.
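Since the answer recommends `subprocess` over `os.system`, a minimal sketch of driving the same GNU awk command from Python (assuming GNU awk with `-i inplace` support and Python 3.5+ for `subprocess.run`) could be:

```python
import glob
import subprocess

# edit every file under data/ in place, without going through a shell
files = glob.glob("data/*")
subprocess.run(["awk", "-F,", "-i", "inplace", "{print $2}"] + files, check=True)
```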
48,102,393
I have 1000 files each having one million lines. Each line has the following form: ``` a number,a text ``` I want to remove all of the numbers from the beginning of every line of every file. including the , Example: ``` 14671823,aboasdyflj -> aboasdyflj ``` What I'm doing is: ``` os.system("sed -i -- 's/^.*,//g' data/*") ``` and it works fine but it's taking a huge amount of time. What is the fastest way to do this? I'm coding in python.
2018/01/04
[ "https://Stackoverflow.com/questions/48102393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5120089/" ]
This is much faster: ``` cut -f2 -d ',' data.txt > tmp.txt && mv tmp.txt data.txt ``` On a file with 11 million rows it took less than one second. To use this on several files in a directory, use: ```sh TMP=/pathto/tmpfile for file in dir/*; do cut -f2 -d ',' "$file" > $TMP && mv $TMP "$file" done ``` A thing worth mentioning is that it often takes much longer time to do stuff in place rather than using a separate file. I tried your sed command but switched from in place to a temporary file. Total time went down from 26s to 9s.
You can take advantage of your multicore system, along with the tips of other users on handling a specific file faster.

```
import multiprocessing
import os
import Queue  # Python 2; on Python 3 this would be "import queue"

FILES = ['a', 'b', 'c', 'd']
CORES = 4

# fill a shared queue with the file names to process
q = multiprocessing.Queue(len(FILES))
for f in FILES:
    q.put(f)

def handler(q, i):
    # each worker keeps pulling file names until the queue is empty
    while True:
        try:
            f = q.get(block=False)
        except Queue.Empty:
            return
        os.system("cut -f2 -d ',' {f} > tmp{i} && mv tmp{i} {f}".format(**locals()))

processes = [multiprocessing.Process(target=handler, args=(q, i)) for i in range(CORES)]
[p.start() for p in processes]
[p.join() for p in processes]

print "Done!"
```
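For reference, a Python 3 sketch of the same fan-out using `multiprocessing.Pool` and `subprocess` instead of `os.system` (the file glob and the worker count are placeholders):

```python
import glob
import os
import subprocess
from multiprocessing import Pool

def strip_first_field(path):
    # same cut-to-a-temp-file trick, one temp file per input file
    tmp = path + ".tmp"
    with open(tmp, "w") as out:
        subprocess.run(["cut", "-f2", "-d", ",", path], stdout=out, check=True)
    os.replace(tmp, path)

if __name__ == "__main__":
    with Pool(4) as pool:
        pool.map(strip_first_field, glob.glob("data/*"))
```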
48,102,393
I have 1000 files each having one million lines. Each line has the following form: ``` a number,a text ``` I want to remove all of the numbers from the beginning of every line of every file. including the , Example: ``` 14671823,aboasdyflj -> aboasdyflj ``` What I'm doing is: ``` os.system("sed -i -- 's/^.*,//g' data/*") ``` and it works fine but it's taking a huge amount of time. What is the fastest way to do this? I'm coding in python.
2018/01/04
[ "https://Stackoverflow.com/questions/48102393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5120089/" ]
This is much faster: ``` cut -f2 -d ',' data.txt > tmp.txt && mv tmp.txt data.txt ``` On a file with 11 million rows it took less than one second. To use this on several files in a directory, use: ```sh TMP=/pathto/tmpfile for file in dir/*; do cut -f2 -d ',' "$file" > $TMP && mv $TMP "$file" done ``` A thing worth mentioning is that it often takes much longer time to do stuff in place rather than using a separate file. I tried your sed command but switched from in place to a temporary file. Total time went down from 26s to 9s.
that's probably pretty fast & native python. Reduced loops and using `csv.reader` & `csv.writer` which are compiled in most implementations: ``` import csv,os,glob for f1 in glob.glob("*.txt"): f2 = f1+".new" with open(f1) as fr, open(f2,"w",newline="") as fw: csv.writer(fw).writerows(x[1] for x in csv.reader(fr)) os.remove(f1) os.rename(f2,f1) # move back the newfile into the old one ``` maybe the `writerows` part could be even faster by using `map` & `operator.itemgetter` to remove the inner loop: ``` csv.writer(fw).writerows(map(operator.itemgetter(1),csv.reader(fr))) ``` Also: * it's portable on all systems including windows without MSYS installed * it stops with exception in case of problem avoiding to destroy the input * the temporary file is created in the same filesystem on purpose so deleting+renaming is super fast (as opposed to moving temp file to input across filesystems which would require `shutil.move` & would copy the data)
48,102,393
I have 1000 files each having one million lines. Each line has the following form: ``` a number,a text ``` I want to remove all of the numbers from the beginning of every line of every file. including the , Example: ``` 14671823,aboasdyflj -> aboasdyflj ``` What I'm doing is: ``` os.system("sed -i -- 's/^.*,//g' data/*") ``` and it works fine but it's taking a huge amount of time. What is the fastest way to do this? I'm coding in python.
2018/01/04
[ "https://Stackoverflow.com/questions/48102393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5120089/" ]
I would use GNU `awk` (to leverage the `-i inplace` editing of file) with `,` as the field separator, *no expensive Regex manipulation*: ``` awk -F, -i inplace '{print $2}' file.txt ``` For example, if the filenames have a common prefix like `file`, you can use shell globbing: ``` awk -F, -i inplace '{print $2}' file* ``` `awk` will treat each file as different argument while applying the in-place modifications. --- As a side note, you could simply run the shell command in the shell directly instead of wrapping it in `os.system()` which is insecure and deprecated BTW in favor of `subprocess`.
You can take advantage of your multicore system, along with the tips of other users on handling a specific file faster. ``` FILES = ['a', 'b', 'c', 'd'] CORES = 4 q = multiprocessing.Queue(len(FILES)) for f in FILES: q.put(f) def handler(q, i): while True: try: f = q.get(block=False) except Queue.Empty: return os.system("cut -f2 -d ',' {f} > tmp{i} && mv tmp{i} {f}".format(**locals())) processes = [multiprocessing.Process(target=handler, args=(q, i)) for i in range(CORES)] [p.start() for p in processes] [p.join() for p in processes] print "Done!" ```
48,102,393
I have 1000 files each having one million lines. Each line has the following form: ``` a number,a text ``` I want to remove all of the numbers from the beginning of every line of every file. including the , Example: ``` 14671823,aboasdyflj -> aboasdyflj ``` What I'm doing is: ``` os.system("sed -i -- 's/^.*,//g' data/*") ``` and it works fine but it's taking a huge amount of time. What is the fastest way to do this? I'm coding in python.
2018/01/04
[ "https://Stackoverflow.com/questions/48102393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5120089/" ]
that's probably pretty fast & native python. Reduced loops and using `csv.reader` & `csv.writer` which are compiled in most implementations: ``` import csv,os,glob for f1 in glob.glob("*.txt"): f2 = f1+".new" with open(f1) as fr, open(f2,"w",newline="") as fw: csv.writer(fw).writerows(x[1] for x in csv.reader(fr)) os.remove(f1) os.rename(f2,f1) # move back the newfile into the old one ``` maybe the `writerows` part could be even faster by using `map` & `operator.itemgetter` to remove the inner loop: ``` csv.writer(fw).writerows(map(operator.itemgetter(1),csv.reader(fr))) ``` Also: * it's portable on all systems including windows without MSYS installed * it stops with exception in case of problem avoiding to destroy the input * the temporary file is created in the same filesystem on purpose so deleting+renaming is super fast (as opposed to moving temp file to input across filesystems which would require `shutil.move` & would copy the data)
You can take advantage of your multicore system, along with the tips of other users on handling a specific file faster. ``` FILES = ['a', 'b', 'c', 'd'] CORES = 4 q = multiprocessing.Queue(len(FILES)) for f in FILES: q.put(f) def handler(q, i): while True: try: f = q.get(block=False) except Queue.Empty: return os.system("cut -f2 -d ',' {f} > tmp{i} && mv tmp{i} {f}".format(**locals())) processes = [multiprocessing.Process(target=handler, args=(q, i)) for i in range(CORES)] [p.start() for p in processes] [p.join() for p in processes] print "Done!" ```
61,380,617
When I'm trying to open a website with the urllib library I'm getting an error. I don't understand why this error occurs. Currently I'm using Python 3.6. Is this a problem with the version?

```
url = 'https://example.com'
html = urllib.request.urlopen(url).read().decode('utf-8')
text = get_text(html)
data = text.split()
print(data)
```
2020/04/23
[ "https://Stackoverflow.com/questions/61380617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13139499/" ]
You should not have duplicate column names in the dataframe, we correct that using `make.unique`. ``` names(df) <- make.unique(names(df)) ``` We can then remove empty rows and get data in long format using `pivot_longer`. ``` library(dplyr) library(tidyr) df %>% filter(orig != '' | dest != '') %>% pivot_longer(cols = -c(orig, dest), names_to = c('.value', 'index'), names_sep = '\\.') %>% select(-index) ``` --- For the updated dataset we can use : ``` df %>% pivot_longer(cols = -c(orig, dest), names_to = 'year') %>% mutate(.copy = c('cartrip', 'walking')[.copy]) %>% pivot_wider(names_from = .copy, values_from = value) # orig dest year cartrip walking # <fct> <fct> <chr> <int> <int> # 1 Seoul Inchon 1997 543 543 # 2 Seoul Inchon 2002 524 524 # 3 Seoul Inchon 2006 364 364 # 4 Seoul Inchon 2010 452 452 # 5 Seoul Inchon 2016 845 845 # 6 Seoul Gyeongi 1997 543 543 # 7 Seoul Gyeongi 2002 524 524 # 8 Seoul Gyeongi 2006 364 364 # 9 Seoul Gyeongi 2010 452 452 #10 Seoul Gyeongi 2016 845 845 #11 Inchon Seoul 1997 543 543 #12 Inchon Seoul 2002 524 524 #13 Inchon Seoul 2006 364 364 #14 Inchon Seoul 2010 452 452 #15 Inchon Seoul 2016 845 845 ```
a `data.table` solution. You might need to play around the `year`. As `melt` now in `data.table` cannot handle the `year` in your question correctly. I guess `pivot_longer` from `tidyr` can do this in one shot. ```r library(data.table) df <- fread('orig dest cartrip cartrip cartrip cartrip cartrip walking walking walking walking walking 1997 2002 2006 2010 2016 1997 2002 2006 2010 2016 Seoul Inchon 543 524 364 452 845 543 524 364 452 845 Seoul Gyeongi 543 524 364 452 845 543 524 364 452 845 Inchon Seoul 543 524 364 452 845 543 524 364 452 845 ') result <- melt(df[orig!="",],measure.vars = patterns(walking="^walking",cartrip="^cartrip"),variable.name = "year") result[,year:=forcats::lvls_revalue(year,c("1997", "2002", "2006", "2010", "2016") )] result[order(orig,dest)][,.(year,orig,dest,cartrip,walking)] #> year orig dest cartrip walking #> 1: 1997 Inchon Seoul 543 543 #> 2: 2002 Inchon Seoul 524 524 #> 3: 2006 Inchon Seoul 364 364 #> 4: 2010 Inchon Seoul 452 452 #> 5: 2016 Inchon Seoul 845 845 #> 6: 1997 Seoul Gyeongi 543 543 #> 7: 2002 Seoul Gyeongi 524 524 #> 8: 2006 Seoul Gyeongi 364 364 #> 9: 2010 Seoul Gyeongi 452 452 #> 10: 2016 Seoul Gyeongi 845 845 #> 11: 1997 Seoul Inchon 543 543 #> 12: 2002 Seoul Inchon 524 524 #> 13: 2006 Seoul Inchon 364 364 #> 14: 2010 Seoul Inchon 452 452 #> 15: 2016 Seoul Inchon 845 845 ``` Created on 2020-04-23 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)
15,118,974
I'm trying to learn Ruby, so I'm following a Google dev exercise. I'm trying to parse some links. In the case of a successful redirection (considering that I know it is only possible to get redirected once), I get "redirection forbidden". I noticed that I go from an http protocol link to an https protocol link. Any concrete idea how I could implement this in Ruby, given that Google's exercise is for Python?

error:

```
ruby fix.rb
redirection forbidden:  http://code.google.com/edu/languages/google-python-class/images/puzzle/p-bija-baei.jpg -> https://developers.google.com/edu/python/images/puzzle/p-bija-baei.jpg?csw=1
```

code that should achieve what I'm looking for:

```
def acquireData(urls, imgs) # urls: list of valid urls (!checked), imgs: list of the imgs I'll download afterwards
  begin
    urls.each do |url|
      page = Nokogiri::HTML(open(url))
      puts page.body
    end
  rescue Exception => e
    puts e
  end
end
```
2013/02/27
[ "https://Stackoverflow.com/questions/15118974", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1388172/" ]
Ruby's [OpenURI](http://www.ruby-doc.org/stdlib-1.9.3/libdoc/open-uri/rdoc/OpenURI.html) will automatically handle redirects for you, as long as they're not "[meta-refresh](http://en.wikipedia.org/wiki/Meta_refresh)" that occur inside the HTML itself. For instance, this follows a redirect automatically: ``` irb(main):008:0> page = open('http://www.example.org') #<StringIO:0x00000002ae2de0> irb(main):009:0> page.base_uri.to_s "http://www.iana.org/domains/example" ``` In other words, the request to "www.example.org" got redirected to "www.iana.org" and OpenURI tracked it correctly. If you are trying to learn HOW to handle redirects, read the [Net::HTTP](http://ruby-doc.org/stdlib-1.9.3/libdoc/net/http/rdoc/Net/HTTP.html) documentation. Here is the example how to do it from the document: > > Following Redirection > > > Each Net::HTTPResponse object belongs to a class for its response code. > > > For example, all 2XX responses are instances of a Net::HTTPSuccess subclass, a 3XX response is an instance of a Net::HTTPRedirection subclass and a 200 response is an instance of the Net::HTTPOK class. For details of response classes, see the section “HTTP Response Classes” below. > > > Using a case statement you can handle various types of responses properly: > > > ``` def fetch(uri_str, limit = 10) # You should choose a better exception. raise ArgumentError, 'too many HTTP redirects' if limit == 0 response = Net::HTTP.get_response(URI(uri_str)) case response when Net::HTTPSuccess then response when Net::HTTPRedirection then location = response['location'] warn "redirected to #{location}" fetch(location, limit - 1) else response.value end end print fetch('http://www.ruby-lang.org') ``` If you want to handle meta-refresh statements, reflect on this: ``` require 'nokogiri' doc = Nokogiri::HTML(%[<meta http-equiv="refresh" content="5;URL='http://example.com/'">]) meta_refresh = doc.at('meta[http-equiv="refresh"]') if meta_refresh puts meta_refresh['content'][/URL=(.+)/, 1].gsub(/['"]/, '') end ``` Which outputs: ``` http://example.com/ ```
Basically the url in code.google that you're trying to open redirects to a https url. You can see that by yourself if you paste `http://code.google.com/edu/languages/google-python-class/images/puzzle/p-bija-baei.jpg` into your browser Check the following [bug report](http://bugs.ruby-lang.org/issues/859) that explains why open-uri can't redirect to https; So the solution to your problem is simply: use a different set of urls (that don't redirect to https)
29,656,173
I'm a student doing a computer science course and for part of the assessment we have to write a program that will take 10 digits from the user and used them to calculate an 11th number in order to produce an ISBN. The numbers that the user inputs HAVE to be limited to one digit, and an error message should be displayed if more than one digit is entered. This is the code that I am using: ``` print('Please enter your 10 digit number') a = int(input("FIRST NUMBER: ")) aa = (a*11) if len(a) > 1: print ("Error. Only 1 digit allowed!") b = int(input("SECOND NUMBER: ")) bb = (b*10) if len(a) > 1: print ("Error. Only 1 digit allowed!") ``` ect. I have to keep the inputs as integers so that some of the calculations in the rest of the program work, but when I run the program, an error saying "object of type 'int' has no len()". I'm assuming that it is referring to the fact that it is an integer and has no length. Is there any way that I can keep 'a' as an integer but limit the length to 1 digit? (Also I understand that there is probably a more efficient way of writing the program, but I have a fairly limited knowledge of python)
2015/04/15
[ "https://Stackoverflow.com/questions/29656173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4678142/" ]
You have to convert the int to a string because int does not have a length property. Also You were checking if the digit was longer than 1 for a twice so I switched the SECOND NUMBER check to b ``` print('Please enter your 10 digit number') a = raw_input("FIRST NUMBER: ") if len(a) > 1: print ("Error. Only 1 digit allowed!") a = int(a) aa = (a*10) b = raw_input("SECOND NUMBER: ") if len(b) > 1: print ("Error. Only 1 digit allowed!") b = int(b) bb = (b*10) ``` ### Or more simply: You could ask for the number and keep asking until the length is 10 and the input is a number ``` num = raw_input('Please enter your 10 digit number:') while len(num) != 10 or (not num.isdigit()): print 'Not a 10 digit number' num = raw_input('Please enter your 10 digit number:') num = int(num) print 'The final number is: ', num ```
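Since the question itself is written for Python 3 (`input()` and `print()` as functions), here is a sketch of the same keep-asking idea in Python 3; the prompt text is just an example:

```python
def ask_digit(prompt):
    # keep prompting until the user enters exactly one digit
    while True:
        text = input(prompt)
        if len(text) == 1 and text.isdigit():
            return int(text)
        print("Error. Only 1 digit allowed!")

digits = [ask_digit("NUMBER {}: ".format(i + 1)) for i in range(10)]
print(digits)
```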
Firstly, I'm assuming you are using 3.x. Secondly, if you are using 2.x, you can't use `len` on numbers. This is what I would suggest: ``` print('Please enter your 10 digit number') number = '' for x in range(1,11): digit = input('Please enter digit ' + str(x) + ': ') while len(digit) != 1: # digit is either empty or not a single digit so keep asking digit = input('That was not 1 digit. Please enter digit ' + str(x) + ': ') number += digit # digit is a single digit so add to number # do the rest ``` It makes more sense to keep all the numbers in a `str` as you can then split them out later as you need them e.g. `number[0]` will be the first digit, `number[1]` will be the second. If you can adapt your program to not have to explicitly use a, b, c ,d etc. and instead use slicing, it will be quite simple to construct. Obviously if you can use a whole 10 digit number than the best method would be: ``` number = input('Please enter your 10 digit number: ') while len(number) != 10: number = input('That was not a 10 digit number. Please enter your 10 digit number ') ``` As a last resort, if you absolutely have to have individual variable names per digit, you can use `exec` and `eval`: ``` var_names = [('a', 1), ('b', 2), ('c', 3), ('d', 4)] # add more as needed for var, num in var_names: exec(var + ' = input("Please enter digit " + str(num) + ": ")') while eval('len(' + var + ')') != 1: exec(var + ' = input("That was not a single digit. Please enter digit " + str(num) + ": ")') ``` That would give you vars a,b,c and d equalling the digits given.
29,656,173
I'm a student doing a computer science course and for part of the assessment we have to write a program that will take 10 digits from the user and used them to calculate an 11th number in order to produce an ISBN. The numbers that the user inputs HAVE to be limited to one digit, and an error message should be displayed if more than one digit is entered. This is the code that I am using: ``` print('Please enter your 10 digit number') a = int(input("FIRST NUMBER: ")) aa = (a*11) if len(a) > 1: print ("Error. Only 1 digit allowed!") b = int(input("SECOND NUMBER: ")) bb = (b*10) if len(a) > 1: print ("Error. Only 1 digit allowed!") ``` ect. I have to keep the inputs as integers so that some of the calculations in the rest of the program work, but when I run the program, an error saying "object of type 'int' has no len()". I'm assuming that it is referring to the fact that it is an integer and has no length. Is there any way that I can keep 'a' as an integer but limit the length to 1 digit? (Also I understand that there is probably a more efficient way of writing the program, but I have a fairly limited knowledge of python)
2015/04/15
[ "https://Stackoverflow.com/questions/29656173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4678142/" ]
You have to convert the int to a string because int does not have a length property. Also You were checking if the digit was longer than 1 for a twice so I switched the SECOND NUMBER check to b ``` print('Please enter your 10 digit number') a = raw_input("FIRST NUMBER: ") if len(a) > 1: print ("Error. Only 1 digit allowed!") a = int(a) aa = (a*10) b = raw_input("SECOND NUMBER: ") if len(b) > 1: print ("Error. Only 1 digit allowed!") b = int(b) bb = (b*10) ``` ### Or more simply: You could ask for the number and keep asking until the length is 10 and the input is a number ``` num = raw_input('Please enter your 10 digit number:') while len(num) != 10 or (not num.isdigit()): print 'Not a 10 digit number' num = raw_input('Please enter your 10 digit number:') num = int(num) print 'The final number is: ', num ```
You never stated if you're on Windows or Linux, the code listed below is for Windows (as I'm on a Windows machine right now and can't test the equivalent on Linux). ``` # For windows import msvcrt print('Please enter your 10 digit number') print('First number: ') a = int(msvcrt.getch()) print(a) ``` The `.getch()` call with `msvcrt` just gets a single character from the terminal input. You should also wrap the call to `int()` in a try/except block to stop your application from crashing when getting non-integer input.
29,656,173
I'm a student doing a computer science course and for part of the assessment we have to write a program that will take 10 digits from the user and used them to calculate an 11th number in order to produce an ISBN. The numbers that the user inputs HAVE to be limited to one digit, and an error message should be displayed if more than one digit is entered. This is the code that I am using: ``` print('Please enter your 10 digit number') a = int(input("FIRST NUMBER: ")) aa = (a*11) if len(a) > 1: print ("Error. Only 1 digit allowed!") b = int(input("SECOND NUMBER: ")) bb = (b*10) if len(a) > 1: print ("Error. Only 1 digit allowed!") ``` ect. I have to keep the inputs as integers so that some of the calculations in the rest of the program work, but when I run the program, an error saying "object of type 'int' has no len()". I'm assuming that it is referring to the fact that it is an integer and has no length. Is there any way that I can keep 'a' as an integer but limit the length to 1 digit? (Also I understand that there is probably a more efficient way of writing the program, but I have a fairly limited knowledge of python)
2015/04/15
[ "https://Stackoverflow.com/questions/29656173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4678142/" ]
You have to convert the int to a string because int does not have a length property. Also You were checking if the digit was longer than 1 for a twice so I switched the SECOND NUMBER check to b ``` print('Please enter your 10 digit number') a = raw_input("FIRST NUMBER: ") if len(a) > 1: print ("Error. Only 1 digit allowed!") a = int(a) aa = (a*10) b = raw_input("SECOND NUMBER: ") if len(b) > 1: print ("Error. Only 1 digit allowed!") b = int(b) bb = (b*10) ``` ### Or more simply: You could ask for the number and keep asking until the length is 10 and the input is a number ``` num = raw_input('Please enter your 10 digit number:') while len(num) != 10 or (not num.isdigit()): print 'Not a 10 digit number' num = raw_input('Please enter your 10 digit number:') num = int(num) print 'The final number is: ', num ```
I suggest creating a function to handle all of the prompting, then call it in your code. Here is a simplified example:

```
def single_num(prompt):
    num = ""
    while True:
        num = raw_input(prompt)
        if len(num) == 1:
            try:
                return int(num)
            except ValueError:
                print "Error, you must enter a number"
        else:
            print "Try again, now with a single number"
```

This will take a prompt, and ask it over and over again until it receives a string of length 1. To make this more user friendly, you can add in protections with `try...except` for non-numerical input and whatnot.
29,656,173
I'm a student doing a computer science course and for part of the assessment we have to write a program that will take 10 digits from the user and used them to calculate an 11th number in order to produce an ISBN. The numbers that the user inputs HAVE to be limited to one digit, and an error message should be displayed if more than one digit is entered. This is the code that I am using: ``` print('Please enter your 10 digit number') a = int(input("FIRST NUMBER: ")) aa = (a*11) if len(a) > 1: print ("Error. Only 1 digit allowed!") b = int(input("SECOND NUMBER: ")) bb = (b*10) if len(a) > 1: print ("Error. Only 1 digit allowed!") ``` ect. I have to keep the inputs as integers so that some of the calculations in the rest of the program work, but when I run the program, an error saying "object of type 'int' has no len()". I'm assuming that it is referring to the fact that it is an integer and has no length. Is there any way that I can keep 'a' as an integer but limit the length to 1 digit? (Also I understand that there is probably a more efficient way of writing the program, but I have a fairly limited knowledge of python)
2015/04/15
[ "https://Stackoverflow.com/questions/29656173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4678142/" ]
You have to convert the int to a string because int does not have a length property. Also You were checking if the digit was longer than 1 for a twice so I switched the SECOND NUMBER check to b ``` print('Please enter your 10 digit number') a = raw_input("FIRST NUMBER: ") if len(a) > 1: print ("Error. Only 1 digit allowed!") a = int(a) aa = (a*10) b = raw_input("SECOND NUMBER: ") if len(b) > 1: print ("Error. Only 1 digit allowed!") b = int(b) bb = (b*10) ``` ### Or more simply: You could ask for the number and keep asking until the length is 10 and the input is a number ``` num = raw_input('Please enter your 10 digit number:') while len(num) != 10 or (not num.isdigit()): print 'Not a 10 digit number' num = raw_input('Please enter your 10 digit number:') num = int(num) print 'The final number is: ', num ```
This is probably *cleanest* to do with a validation wrapper.

```
def validator(testfunc):
    def wrap(func):
        def wrapped(*args, **kwargs):
            result = func(*args, **kwargs)
            passed, *failfunc = testfunc(result)
            if passed:
                return result
            elif failfunc:
                failfunc[0]()
        return wrapped
    return wrap

def ten_digits(num):
    passed = False
    if len(num) != 10:
        msg = "{} is not of length 10".format(num)
    elif not num.isdigit():
        msg = "{} is not a number".format(num)
    else:
        passed = True

    def failfunc():
        raise ValueError(msg)

    response = (passed, None if passed else failfunc)
    return response

valid_input = validator(ten_digits)(input)  # or raw_input in Python 2
response = valid_input("Enter your 10 digit number: ")
```

This is probably a bit overengineered, but it's incredibly reusable (have a different set of tests you need validated? Write a new `ten_digits` analogue!) and very configurable (want different behavior out of your fail function? Write it in!) Which means you could do things like:

```
ISBN = validator(ten_digits)(input)("ISBN# = ")
title = validator(max_50_chars)(input)("Title = ")
author = validator(no_digits)(input)("Author = ")
price = decimal.Decimal(validator(float_between_1_and_50)(input)(
    "Price = ")).quantize(decimal.Decimal('1.00'))
```
66,533,544
``` --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-1-b7eb239f86a7> in <module> 1 # Initialize path to SQLite database 2 path = 'data/classic_rock.db' ----> 3 con = sq3.Connection(path) 4 5 NameError: name 'sq3' is not defined ```
2021/03/08
[ "https://Stackoverflow.com/questions/66533544", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14204983/" ]
It looks like from the traceback that you are attempting to use `sq3` and you either did not `import` the library or did not correctly alias the library in question. Cannot know for sure without your code though.
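Judging by the `sq3.Connection(path)` call in the traceback, `sq3` was most likely meant to be an alias for the standard-library `sqlite3` module; a guess at the missing import (the alias itself is an assumption, the traceback never shows it) would be:

```python
import sqlite3 as sq3  # assumed alias; sq3.connect(path) is the more common spelling

# Initialize path to SQLite database
path = 'data/classic_rock.db'
con = sq3.Connection(path)
```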
'sq3' is not defined. That's why. Somewhere in your code you're expecting a variable called `sq3`, but it doesn't exist.
36,467,658
I installed firewalld on my centos server but as I tried to start it I got this: ``` $ sudo systemctl start firewalld Job for firewalld.service failed. See 'systemctl status firewalld.service' and 'journalctl -xn' for details. ``` here is the systemctl status: ``` sudo systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled) Active: failed (Result: exit-code) since پنجشنبه 2016-04-07 05:36:17 UTC; 9s ago Process: 929 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=1/FAILURE) Main PID: 929 (code=exited, status=1/FAILURE) آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: firewalld.service: main process exited, code=exited, status=1/FAILURE آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Failed to start firewalld - dynamic firewall daemon. آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Unit firewalld.service entered failed state. ``` and firewall-cmd status: ``` sudo firewall-cmd --stat Traceback (most recent call last): File "/bin/firewall-cmd", line 24, in <module> from gi.repository import GObject File "/usr/lib64/python2.7/site-packages/gi/__init__.py", line 37, in <module> from . import _gi ImportError: /usr/lib64/python2.7/site-packages/gi/_gi.so: undefined symbol: g_type_check_instance_is_fundamentally_a ``` I cant realize relation between firewalld and some gtk python extensions!
2016/04/07
[ "https://Stackoverflow.com/questions/36467658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3383936/" ]
This worked for me: ``` systemctl stop firewalld pkill -f firewalld systemctl start firewalld ```
I know that it is an old thread, but I was facing this problem and just fixed it; figured it will help someone in the near future. I thought the problem was in my code or that I had misplaced the file. Sadly, the file `/usr/lib/python2.7/site-packages/gi/_gi.so` is corrupted (perhaps misplaced), or it has been compiled badly. What you need is to update **Glib 2**, since that will overwrite and fix it. You can do this using **yum**:

Try `yum update glib2`

I tested the above using **CentOS Linux release 7.1.1503 (Core)**. Cheers
36,467,658
I installed firewalld on my centos server but as I tried to start it I got this: ``` $ sudo systemctl start firewalld Job for firewalld.service failed. See 'systemctl status firewalld.service' and 'journalctl -xn' for details. ``` here is the systemctl status: ``` sudo systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled) Active: failed (Result: exit-code) since پنجشنبه 2016-04-07 05:36:17 UTC; 9s ago Process: 929 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=1/FAILURE) Main PID: 929 (code=exited, status=1/FAILURE) آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: firewalld.service: main process exited, code=exited, status=1/FAILURE آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Failed to start firewalld - dynamic firewall daemon. آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Unit firewalld.service entered failed state. ``` and firewall-cmd status: ``` sudo firewall-cmd --stat Traceback (most recent call last): File "/bin/firewall-cmd", line 24, in <module> from gi.repository import GObject File "/usr/lib64/python2.7/site-packages/gi/__init__.py", line 37, in <module> from . import _gi ImportError: /usr/lib64/python2.7/site-packages/gi/_gi.so: undefined symbol: g_type_check_instance_is_fundamentally_a ``` I cant realize relation between firewalld and some gtk python extensions!
2016/04/07
[ "https://Stackoverflow.com/questions/36467658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3383936/" ]
I know that it is an old thread, but I was facing this problem and just fixed it; figured it will help someone in the near future. I thought the problem was in my code or that I had misplaced the file. Sadly, the file `/usr/lib/python2.7/site-packages/gi/_gi.so` is corrupted (perhaps misplaced), or it has been compiled badly. What you need is to update **Glib 2**, since that will overwrite and fix it. You can do this using **yum**:

Try `yum update glib2`

I tested the above using **CentOS Linux release 7.1.1503 (Core)**. Cheers
The problem is your package `/usr/lib/python2.7/site-packages/gi/_gi.so`

```
Debian (python2) -> sudo apt install python-gi
Debian (python3) -> sudo apt install python3-gi
```

RedHat-based systems -> `yum install glib2`

Note: to overwrite and fix it you can use -> `yum update glib2`
36,467,658
I installed firewalld on my centos server but as I tried to start it I got this: ``` $ sudo systemctl start firewalld Job for firewalld.service failed. See 'systemctl status firewalld.service' and 'journalctl -xn' for details. ``` here is the systemctl status: ``` sudo systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled) Active: failed (Result: exit-code) since پنجشنبه 2016-04-07 05:36:17 UTC; 9s ago Process: 929 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=1/FAILURE) Main PID: 929 (code=exited, status=1/FAILURE) آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: firewalld.service: main process exited, code=exited, status=1/FAILURE آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Failed to start firewalld - dynamic firewall daemon. آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Unit firewalld.service entered failed state. ``` and firewall-cmd status: ``` sudo firewall-cmd --stat Traceback (most recent call last): File "/bin/firewall-cmd", line 24, in <module> from gi.repository import GObject File "/usr/lib64/python2.7/site-packages/gi/__init__.py", line 37, in <module> from . import _gi ImportError: /usr/lib64/python2.7/site-packages/gi/_gi.so: undefined symbol: g_type_check_instance_is_fundamentally_a ``` I cant realize relation between firewalld and some gtk python extensions!
2016/04/07
[ "https://Stackoverflow.com/questions/36467658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3383936/" ]
I know that it is an old thread, but I was facing this problem and just fixed it; figured it will help someone in the near future. I thought the problem was in my code or that I had misplaced the file. Sadly, the file `/usr/lib/python2.7/site-packages/gi/_gi.so` is corrupted (perhaps misplaced), or it has been compiled badly. What you need is to update **Glib 2**, since that will overwrite and fix it. You can do this using **yum**:

Try `yum update glib2`

I tested the above using **CentOS Linux release 7.1.1503 (Core)**. Cheers
You should try restarting the dbus service: ``` $ sudo systemctl restart dbus $ sudo systemctl restart firewalld ```
36,467,658
I installed firewalld on my centos server but as I tried to start it I got this: ``` $ sudo systemctl start firewalld Job for firewalld.service failed. See 'systemctl status firewalld.service' and 'journalctl -xn' for details. ``` here is the systemctl status: ``` sudo systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled) Active: failed (Result: exit-code) since پنجشنبه 2016-04-07 05:36:17 UTC; 9s ago Process: 929 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=1/FAILURE) Main PID: 929 (code=exited, status=1/FAILURE) آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: firewalld.service: main process exited, code=exited, status=1/FAILURE آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Failed to start firewalld - dynamic firewall daemon. آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Unit firewalld.service entered failed state. ``` and firewall-cmd status: ``` sudo firewall-cmd --stat Traceback (most recent call last): File "/bin/firewall-cmd", line 24, in <module> from gi.repository import GObject File "/usr/lib64/python2.7/site-packages/gi/__init__.py", line 37, in <module> from . import _gi ImportError: /usr/lib64/python2.7/site-packages/gi/_gi.so: undefined symbol: g_type_check_instance_is_fundamentally_a ``` I cant realize relation between firewalld and some gtk python extensions!
2016/04/07
[ "https://Stackoverflow.com/questions/36467658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3383936/" ]
This worked for me: ``` systemctl stop firewalld pkill -f firewalld systemctl start firewalld ```
The problem is your package `/usr/lib/python2.7/site-packages/gi/_gi.so`

```
Debian (python2) -> sudo apt install python-gi
Debian (python3) -> sudo apt install python3-gi
```

RedHat-based systems -> `yum install glib2`

Note: to overwrite and fix it you can use -> `yum update glib2`
36,467,658
I installed firewalld on my centos server but as I tried to start it I got this: ``` $ sudo systemctl start firewalld Job for firewalld.service failed. See 'systemctl status firewalld.service' and 'journalctl -xn' for details. ``` here is the systemctl status: ``` sudo systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled) Active: failed (Result: exit-code) since پنجشنبه 2016-04-07 05:36:17 UTC; 9s ago Process: 929 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=1/FAILURE) Main PID: 929 (code=exited, status=1/FAILURE) آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: firewalld.service: main process exited, code=exited, status=1/FAILURE آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Failed to start firewalld - dynamic firewall daemon. آوریل 07 05:36:17 server1.hamed1soleimani.ir systemd[1]: Unit firewalld.service entered failed state. ``` and firewall-cmd status: ``` sudo firewall-cmd --stat Traceback (most recent call last): File "/bin/firewall-cmd", line 24, in <module> from gi.repository import GObject File "/usr/lib64/python2.7/site-packages/gi/__init__.py", line 37, in <module> from . import _gi ImportError: /usr/lib64/python2.7/site-packages/gi/_gi.so: undefined symbol: g_type_check_instance_is_fundamentally_a ``` I cant realize relation between firewalld and some gtk python extensions!
2016/04/07
[ "https://Stackoverflow.com/questions/36467658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3383936/" ]
This worked for me: ``` systemctl stop firewalld pkill -f firewalld systemctl start firewalld ```
You should try restarting the dbus service: ``` $ sudo systemctl restart dbus $ sudo systemctl restart firewalld ```
38,282,659
I have two data points `x` and `y`: ``` x = 5 (value corresponding to 95%) y = 17 (value corresponding to 102.5%) ``` Now I would like to calculate the value for `xi` which should correspond to 100%. ``` x = 5 (value corresponding to 95%) xi = ?? (value corresponding to 100%) y = 17 (value corresponding to 102.5%) ``` How should I do this using Python?
2016/07/09
[ "https://Stackoverflow.com/questions/38282659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6306448/" ]
is that what you want? ``` In [145]: s = pd.Series([5, np.nan, 17], index=[95, 100, 102.5]) In [146]: s Out[146]: 95.0 5.0 100.0 NaN 102.5 17.0 dtype: float64 In [147]: s.interpolate(method='index') Out[147]: 95.0 5.0 100.0 13.0 102.5 17.0 dtype: float64 ```
We can easily plot this on a graph without Python: [![](https://i.stack.imgur.com/PW6fy.png)](https://i.stack.imgur.com/PW6fy.png) This shows us what the answer should be (13). But how do we calculate this? First, we find the gradient with this: [![](https://i.stack.imgur.com/zQVKFs.png)](https://i.stack.imgur.com/zQVKFs.png) The numbers substituted into the equation give this: [![](https://i.stack.imgur.com/cgsQy.png)](https://i.stack.imgur.com/cgsQy.png) So we know that for every 0.625 we increase the Y value by, we increase the X value by 1. We've been given that Y is 100. We know that 102.5 relates to 17. `100 - 102.5 = -2.5`. `-2.5 / 0.625 = -4` and then `17 + -4 = 13`. This also works with the other numbers: `100 - 95 = 5`, `5 / 0.625 = 8`, `5 + 8 = 13`. We can also go backwards using the reciprocal of the gradient (`1 / m = 1.6`). We've been given that X is 13. We know that 102.5 relates to 17. `13 - 17 = -4`. `-4 / 1.6 = -2.5` and then `102.5 + -2.5 = 100`. How do we do this in Python? ``` def findXPoint(xa,xb,ya,yb,yc): m = (xa - xb) / (ya - yb) xc = (yc - yb) * m + xb return xc ``` And to find a Y point given the X point: ``` def findYPoint(xa,xb,ya,yb,xc): m = (ya - yb) / (xa - xb) yc = (xc - xb) * m + yb return yc ``` These functions will also extrapolate from the data points.
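A quick sanity check of `findXPoint` with the numbers from the question, where 5 and 17 are the values measured at 95% and 102.5%:

```
>>> findXPoint(5, 17, 95, 102.5, 100)
13.0
```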
38,282,659
I have two data points `x` and `y`: ``` x = 5 (value corresponding to 95%) y = 17 (value corresponding to 102.5%) ``` Now I would like to calculate the value for `xi` which should correspond to 100%. ``` x = 5 (value corresponding to 95%) xi = ?? (value corresponding to 100%) y = 17 (value corresponding to 102.5%) ``` How should I do this using Python?
2016/07/09
[ "https://Stackoverflow.com/questions/38282659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6306448/" ]
is that what you want? ``` In [145]: s = pd.Series([5, np.nan, 17], index=[95, 100, 102.5]) In [146]: s Out[146]: 95.0 5.0 100.0 NaN 102.5 17.0 dtype: float64 In [147]: s.interpolate(method='index') Out[147]: 95.0 5.0 100.0 13.0 102.5 17.0 dtype: float64 ```
You can use [numpy.interp](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.interp.html) function to interpolate a value ``` import numpy as np import matplotlib.pyplot as plt x = [95, 102.5] y = [5, 17] x_new = 100 y_new = np.interp(x_new, x, y) print(y_new) # 13.0 plt.plot(x, y, "og-", x_new, y_new, "or"); ``` [![enter image description here](https://i.stack.imgur.com/3fL4F.png)](https://i.stack.imgur.com/3fL4F.png)
34,778,397
I am currently creating a music player in Python 3.3 and I have a way of opening the mp3/wav files, namely using 'os.startfile()', but this way of running the files means that if I run more than one, the second cancels the first, and the third cancels the second, and so on and so forth, so I only end up running the last file. So, basically, I would like a way of reading the mp3 file length so that I can use 'time.sleep(SongLength)' between the start of each file. Thanks in advance. **EDIT:** I forgot to mention, but I would prefer to do this using only pre-installed libraries, as I am hoping to publish this online as a part of a (much) larger program
2016/01/13
[ "https://Stackoverflow.com/questions/34778397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2966288/" ]
I've managed to do this using an external module: after ages of trying to do it without any, I gave up and used [tinytag](https://pypi.python.org/pypi/tinytag/), as it is easy to install and use.
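As a rough sketch of how tinytag is typically used for this (the file name here is just a placeholder), the duration comes back in seconds:

```
from tinytag import TinyTag

tag = TinyTag.get('something.mp3')   # placeholder path
print(tag.duration)                  # track length in seconds (a float)
```

That value can then be fed straight into `time.sleep()` between tracks, as described in the question.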
Nothing you can do without external libraries, as far as I know. Try using [pymad](http://spacepants.org/src/pymad/). Use it like this: ``` import mad SongFile = mad.MadFile("something.mp3") SongLength = SongFile.total_time() ```
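If you go this route and want the `time.sleep(SongLength)` pattern from the question, note that `total_time()` is usually described as returning milliseconds, so the value probably needs converting first (an assumption worth checking against your pymad version):

```
import time
import mad

SongFile = mad.MadFile("something.mp3")
SongLength = SongFile.total_time() / 1000.0  # assuming total_time() is in milliseconds
time.sleep(SongLength)
```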
22,490,833
I have this string: ``` Email: promo@elysianrealestate.com ``` I want to get the email address: ### I tried this ``` Email:.* ``` but I got the whole string, not just the email. Help please. ### I am using scrapy with python
2014/03/18
[ "https://Stackoverflow.com/questions/22490833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2038257/" ]
If your string always finishes with the email, you can use: ``` r'Email:\s*(.*)' ``` I got the idea from [here](http://doc.scrapy.org/en/0.7/topics/selectors.html#using-selectors-with-regular-expressions) but I can't test it as I don't have a scrapy shell available at the moment.
This should capture your emails, it ensures that you only capture correctly formed emails: ``` Email:\s+(\b[A-Za-z0-9(._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}\b) ``` Here's how I tested it: ``` >>> import re >>> txt = """ I have this string: Email: promo@elysianrealestate.com foo bar baz I want to get the email address:""" >>> re.findall(r""" Email:\s+ (\b # edge of first part [A-Za-z0-9(._%+-]+ # name, can be dotted @ # @ [A-Za-z0-9.-]+ # domain, e.g. something.something \. # . [A-Za-z]{2,4}\b) # any lettered end, 2 to 4 letters long """, txt, re.VERBOSE) ['promo@elysianrealestate.com'] ```
22,490,833
I have this string: ``` Email: promo@elysianrealestate.com ``` I want to get the email address: ### I tried this ``` Email:.* ``` but I got the whole string, not just the email. Help please. ### I am using scrapy with python
2014/03/18
[ "https://Stackoverflow.com/questions/22490833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2038257/" ]
If your string always finishes with the email, you can use: ``` r'Email:\s*(.*)' ``` I got the idea from [here](http://doc.scrapy.org/en/0.7/topics/selectors.html#using-selectors-with-regular-expressions) but I can't test it as I don't have a scrapy shell available at the moment.
You need to create a group to mark the text that you want captured. For this, try wrapping the pattern in parentheses: ```py r'Email:\s+(.+)' ```
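To illustrate what the capturing group changes, here is a small check using the string from the question; `group(1)` returns only the captured part, while `group(0)` would still be the whole match:

```
import re

text = "Email: promo@elysianrealestate.com"
match = re.search(r'Email:\s+(.+)', text)
if match:
    print(match.group(1))   # promo@elysianrealestate.com
```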
22,490,833
I have this string: ``` Email: promo@elysianrealestate.com ``` I want to get the email address: ### I tried this ``` Email:.* ``` but I got the whole string, not just the email. Help please. ### I am using scrapy with python
2014/03/18
[ "https://Stackoverflow.com/questions/22490833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2038257/" ]
If your string always finishes with the email, you can use: ``` r'Email:\s*(.*)' ``` I got the idea from [here](http://doc.scrapy.org/en/0.7/topics/selectors.html#using-selectors-with-regular-expressions) but I can't test it as I don't have a scrapy shell available at the moment.
As long as you know the ":" will always separate the "Email" from the actual email address, why not try ( for s = "Email: promo@elysianrealestate.com"): ``` emailAddr = s.split(":")[1].strip() ``` If you need to worry about text after the ".com", just try another split on a " " character and then take the first (0th) element of the list. ``` emailAddr = emailAddr.split(" ")[0] ```
57,854,020
**My Problem** I am trying to create a column in python which is the conditional smoothed moving 14 day average of another column. The condition is that I only want to include positive values from another column in the rolling average. I am currently using the following code which works exactly how I want it to, but it is really slow because of the loops. I want to try and re-do it without using loops. The dataset is simply the last closing price of a stock. **Current Working Code** ``` import numpy as np import pandas as pd csv1 = pd.read_csv('stock_price.csv', delimiter = ',') df = pd.DataFrame(csv1) df['delta'] = df.PX_LAST.pct_change() df.loc[df.index[0], 'avg_gain'] = 0 for x in range(1,len(df.index)): if df["delta"].iloc[x] > 0: df["avg_gain"].iloc[x] = ((df["avg_gain"].iloc[x - 1] * 13) + df["delta"].iloc[x]) / 14 else: df["avg_gain"].iloc[x] = ((df["avg_gain"].iloc[x - 1] * 13) + 0) / 14 df ``` **Correct Output Example** ``` Dates PX_LAST delta avg_gain 03/09/2018 43.67800 NaN 0.000000 04/09/2018 43.14825 -0.012129 0.000000 05/09/2018 42.81725 -0.007671 0.000000 06/09/2018 43.07725 0.006072 0.000434 07/09/2018 43.37525 0.006918 0.000897 10/09/2018 43.47925 0.002398 0.001004 11/09/2018 43.59750 0.002720 0.001127 12/09/2018 43.68725 0.002059 0.001193 13/09/2018 44.08925 0.009202 0.001765 14/09/2018 43.89075 -0.004502 0.001639 17/09/2018 44.04200 0.003446 0.001768 ``` **Attempted Solutions** I tried to create a new column that only comprises of the positive values and then tried to create the smoothed moving average of that new column but it doesn't give me the right answer ``` df['new_col'] = df['delta'].apply(lambda x: x if x > 0 else 0) df['avg_gain'] = df['new_col'].ewm(14,min_periods=1).mean() ``` The maths behind it as follows... Avg\_Gain = ((Avg\_Gain(t-1) \* 13) + (New\_Col \* 1)) / 14 where New\_Col only equals the positive values of Delta Does anyone know how I might be able to do it? Cheers
2019/09/09
[ "https://Stackoverflow.com/questions/57854020", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12041022/" ]
This should speed up your code: `df['avg_gain'] = df[df['delta'] > 0]['delta'].rolling(14).mean()` Does your current code converge to zero? If you can provide the data, then it would be easier for the folk to do some analysis.
I would suggest you add a column which is 0 if the value is < 0 and otherwise has the same value as the column you want to consider. Then you take the running average of this new column. ``` df['new_col'] = df.apply(lambda x: x['delta'] if x['delta'] >= 0 else 0, axis=1) df['avg_gain'] = df['new_col'].rolling(14).mean() ``` This takes the zeros into account instead of just discarding those rows.
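For what it's worth, the recursion in the question, `avg_gain = (prev * 13 + gain) / 14`, is an exponentially weighted mean with `alpha = 1/14`, so it can be vectorised without a loop. A sketch that should reproduce the looped result, assuming the first `avg_gain` is meant to be 0 (the original attempt's `ewm(14, min_periods=1)` set `com=14` with the default `adjust=True`, which is a different weighting):

```
gains = df['delta'].clip(lower=0).fillna(0)                  # keep positive moves, zero otherwise
df['avg_gain'] = gains.ewm(alpha=1/14, adjust=False).mean()  # (prev * 13 + gain) / 14
```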
44,934,948
I try to get all lattitudes and longtitudes from this json. Code: ``` import urllib.parse import requests raw_json = 'http://live.ksmobile.net/live/getreplayvideos?userid=' print() userid = 735890904669618176 #userid = input('UserID: ') url = raw_json + urllib.parse.urlencode({'userid=': userid}) + '&page_size=1000' print(url) json_data = requests.get(url).json() print() for coordinates in json_data['data']['video_info']: print(coordinates['lat'], coordinates['lnt']) print() ``` Error: ``` /usr/bin/python3.6 /media/anon/3D0B8DD536C9574F/PythonProjects/getLocation/getCoordinates http://live.ksmobile.net/live/getreplayvideos?userid=userid%3D=735890904669618176&page_size=1000 Traceback (most recent call last): File "/media/anon/3D0B8DD536C9574F/PythonProjects/getLocation/getCoordinates", line 17, in <module> for coordinates in json_data['data']['video_info']: TypeError: list indices must be integers or slices, not str Process finished with exit code 1 ``` Where do I go wrong? In advance, thanks for your help and time. I just post some of the json to show what it looks like. The json looks like this: ``` { "status": "200", "msg": "", "data": { "time": "1499275646", "video_info": [ { "vid": "14992026438883533757", "watchnumber": "38", "topicid": "0", "topic": "", "vtime": "1499202678", "title": "happy 4th of july", "userid": "735890904669618176", "online": "0", "addr": "", "isaddr": "2", "lnt": "-80.1282576", "lat": "26.2810628", "area": "A_US", "countryCode": "US", "chatSystem": "1", }, ``` Full json: <https://pastebin.com/qJywTqa1>
2017/07/05
[ "https://Stackoverflow.com/questions/44934948", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8258990/" ]
Your URL construction is incorrect. The URL you have built (as shown in the output of your script) is: ``` http://live.ksmobile.net/live/getreplayvideos?userid=userid%3D=735890904669618176&page_size=1000 ``` Where you actually want this: ``` http://live.ksmobile.net/live/getreplayvideos?userid=735890904669618176&page_size=1000 ``` So your were actually getting this JSON in your response: ``` { "status": "200", "msg": "", "data": [] } ``` Which is why you were seeing that error. Here is the corrected script: ``` import urllib.parse import requests raw_json = 'http://live.ksmobile.net/live/getreplayvideos?' print() userid = 735890904669618176 #userid = input('UserID: ') url = raw_json + urllib.parse.urlencode({'userid': userid}) + '&page_size=1000' print(url) json_data = requests.get(url).json() print() for coordinates in json_data['data']['video_info']: print(coordinates['lat'], coordinates['lnt']) print() ```
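As a side note, `requests` can build the query string from a dict via the `params` argument, which avoids this kind of URL-encoding mistake altogether; a small sketch using the values from the question:

```
import requests

url = 'http://live.ksmobile.net/live/getreplayvideos'
params = {'userid': 735890904669618176, 'page_size': 1000}
json_data = requests.get(url, params=params).json()

for video in json_data['data']['video_info']:
    print(video['lat'], video['lnt'])
```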
According to your posted JSON, you have a problem in this statement: `print(coordinates['lat'], coordinates['lnt'])` Here `coordinates` is a list having only one item, which is a dictionary. So your statement should be: `print(coordinates[0]['lat'], coordinates[0]['lnt'])`
63,301,691
my code works perfectly in Python 3.8, but when I switch to Python 3.5 in same operating system, with same code and everything else, it starts throwing out "SyntaxError: invalid syntax". Here is the error, and the part of the code that I think which relates to the error : ``` Traceback (most recent call last): File "pwb.py", line 390, in <module> if not main(): File "pwb.py", line 385, in main file_package) File "pwb.py", line 100, in run_python_file exec(compile(source, filename, 'exec', dont_inherit=True), File ".\scripts\signbot.py", line 83 namespace: int ^ SyntaxError: invalid syntax CRITICAL: Exiting due to uncaught exception <class 'SyntaxError'> ``` And here is the part of the code : ``` @dataclass class RevisionInfo: namespace: int title: str type: str bot: bool comment: str user: str oldRevision: Optional[int] newRevision: int timestamp: int ``` Sorry if the question title is not specific, but I'm having troubles getting this code working in Python 3.5. The server I'm going to run this code in only supports Python 3.5, so I need to get this working with 3.5. Thanks.
2020/08/07
[ "https://Stackoverflow.com/questions/63301691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10214891/" ]
There are at least two issues here: 1. [Variable annotations](https://docs.python.org/3/whatsnew/3.6.html#whatsnew36-pep526) were new in Python 3.6. 2. The [`dataclasses`](https://docs.python.org/3/library/dataclasses.html?highlight=dataclass#module-dataclasses) module was new in Python 3.7. Either use Python 3.7 or greater, or rewrite your code so it doesn't rely on dataclasses and variable annotations. This is one of many reasons that it's a good idea to use the same version of Python in development as you intend to use in production. You can avoid writing code that won't work on your server.
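For reference, a minimal Python 3.5-compatible rewrite of the `RevisionInfo` class from the question is just a plain class with an explicit `__init__`; the intended types are noted in comments since the annotation syntax isn't available:

```
class RevisionInfo:
    def __init__(self, namespace, title, type, bot, comment, user,
                 oldRevision, newRevision, timestamp):
        self.namespace = namespace        # int
        self.title = title                # str
        self.type = type                  # str
        self.bot = bot                    # bool
        self.comment = comment            # str
        self.user = user                  # str
        self.oldRevision = oldRevision    # Optional[int]
        self.newRevision = newRevision    # int
        self.timestamp = timestamp        # int
```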
One new and exciting feature introduced in Python 3.7 is the data class, so you're not able to use it in Python 3.5. You should use the traditional way and define a constructor: ``` class Mapping: def __init__(self, iterable): self.items_list = [] self.update(iterable) def update(self, iterable): for item in iterable: self.items_list.append(item) ```
46,366,398
I am using Pymodm as a mongoDB odm with python flask. I have looked through code and documentation (<https://github.com/mongodb/pymodm> and <http://pymodm.readthedocs.io/en/latest>) but could not find what I was looking for. I am looking for an easy way to fetch data from the database without converting it to a pymodm object but as plain JSON. Is this possible with pymodm? Currently, I am overloading the flask JSONEncoder to handle DateTime and ObjectID and use that to convert the pymodm Object to JSON.
2017/09/22
[ "https://Stackoverflow.com/questions/46366398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3531894/" ]
It is not obvious from the PyMODM documentation, but here's how to do it: ``` pymodm_obj.to_son().to_dict() ``` Actually, I just re-read your question, and I don't think anything is forcing you to use PyMODM everywhere in your project once you have made the decision to use it. So if you are just looking for the JSON structures, you could just use the base pymongo package functionality.
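If you do drop down to plain pymongo for the read path, `bson.json_util` handles the `ObjectId` and datetime serialisation that trips up the standard `json` encoder; a sketch with hypothetical database and collection names:

```
from pymongo import MongoClient
from bson import json_util

collection = MongoClient()['mydb']['mycollection']   # hypothetical names
doc = collection.find_one()
print(json_util.dumps(doc))   # serialises ObjectId and datetime for you
```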
Having: ``` from pymodm import MongoModel, fields import json class Foo(MongoModel): name = fields.CharField(required=True) a=Foo() ``` You can do: ``` jsonFooString=json.dumps(a.to_son().to_dict()) ```
46,366,398
I am using Pymodm as a mongoDB odm with python flask. I have looked through code and documentation (<https://github.com/mongodb/pymodm> and <http://pymodm.readthedocs.io/en/latest>) but could not find what I was looking for. I am looking for an easy way to fetch data from the database without converting it to a pymodm object but as plain JSON. Is this possible with pymodm? Currently, I am overloading the flask JSONEncoder to handle DateTime and ObjectID and use that to convert the pymodm Object to JSON.
2017/09/22
[ "https://Stackoverflow.com/questions/46366398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3531894/" ]
It is not obvious from the PyMODM documentation, but here's how to do it: ``` pymodm_obj.to_son().to_dict() ``` Actually, I just re-read your question, and I don't think anything is forcing you to use PyMODM everywhere in your project once you have made the decision to use it. So if you are just looking for the JSON structures, you could just use the base pymongo package functionality.
If you need to build a CRUD API you might also want to check this little package, basically DRF for pymodm. If you want to support CREATE/UPDATE/DELETE, it would look like this: ``` from api.pymodm_rest import viewsets class ServiceAreaViewSet(viewsets.ModelViewSet): queryset = ServiceArea.objects instance_class = ServiceArea lookup_field = '_id' ``` <https://github.com/lokoArt/pymodm_rest>
46,366,398
I am using Pymodm as a mongoDB odm with python flask. I have looked through code and documentation (<https://github.com/mongodb/pymodm> and <http://pymodm.readthedocs.io/en/latest>) but could not find what I was looking for. I am looking for an easy way to fetch data from the database without converting it to a pymodm object but as plain JSON. Is this possible with pymodm? Currently, I am overloading the flask JSONEncoder to handle DateTime and ObjectID and use that to convert the pymodm Object to JSON.
2017/09/22
[ "https://Stackoverflow.com/questions/46366398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3531894/" ]
Having: ``` from pymodm import MongoModel, fields import json class Foo(MongoModel): name = fields.CharField(required=True) a=Foo() ``` You can do: ``` jsonFooString=json.dumps(a.to_son().to_dict()) ```
If you need to build a CRUD API you might also want to check this little package, basically DRF for pymodm. If you want to support CREATE/UPDATE/DELETE, it would look like this: ``` from api.pymodm_rest import viewsets class ServiceAreaViewSet(viewsets.ModelViewSet): queryset = ServiceArea.objects instance_class = ServiceArea lookup_field = '_id' ``` <https://github.com/lokoArt/pymodm_rest>
15,059,082
This is my code. In the first def function, I made it return column\_choose, and I wanna use column\_choose's value in second def function(get\_data\_list). What can I do? I have tried many times. But IDLE always show:global name 'column\_choose' is not defined. How to use column\_choose's value in second function? By the way, I use python 3.2 ``` def get_column_number(): while True: column_choose = input('What column:') if column_choose == '1' or column_choose == '2' or column_choose == '3' or column_choose == '4' or column_choose == '5' or column_choose == '6': column_choose = int(column_choose) return column_choose break else: print('bad column number, try again') def get_data_list(column_number): new_column_number = column_number.split(',') date_column = new_column_number[0] choose_column = new_column_number[column_choose-1] return result def main(): #Call get_input_descriptor get_input_descriptor() #Call get_column_number get_column_number() file_obj = open("table.csv", "r") for column_number in file_obj: column_number = (column_number.strip()) result = get_data_list(column_number) print(result) file_obj.close() main() ```
2013/02/25
[ "https://Stackoverflow.com/questions/15059082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2011210/" ]
Using `float: left` on the elements will cause them to ignore the `nowrap` rule. Since you are already using `display: inline-block`, you don't need to float the elements to have them display side-by-side. Just remove `float: left`
It was because of the `float: left;`; once I removed that, it was fine. I spotted it after typing out the question, sorry.
70,647,836
Very new to python. I am trying to iterate over a list of floating points and append elements to a new list based on a condition. Everytime I populate the list I get double the output, for example a list with three floating points gives me an output of six elements in the new list. ``` tempretures = [39.3, 38.2, 38.1] new_list = [] i=0 while i <len(tempretures): for tempreture in range(len(tempretures)): if tempreture < 38.3: new_list = new_list + ['L'] elif tempreture > 39.2: new_list = new_list + ['H'] else: tempreture (38.3>=39.2) new_list = new_list + ['N'] i=i+1 print (new_list) print (i) ``` ``` ['L', 'N', 'L', 'N', 'L', 'N'] 3 ```
2022/01/10
[ "https://Stackoverflow.com/questions/70647836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17887798/" ]
The issue is with the indentation of the line `new_list = new_list + ['N']`. Because it is under-indented, it runs for every instance. If I can suggest an easier syntax: ``` temperatures = [39.3, 38.2, 38.1] new_list = [] for temperature in temperatures: if temperature < 38.3: new_list.append('L') elif temperature > 39.2: new_list.append('H') else: new_list.append('N') print(new_list) print(len(new_list)) ``` -> ``` ['H', 'L', 'L'] 3 ```
Hope it will solve your issue: ``` tempretures = [39.3, 38.2, 38.1] new_list = [] for temperature in tempretures: if temperature > 39.2: new_list.append('H') elif temperature>=38.3 and temperature<39.2: new_list.append('N') else: new_list.append('L') print (new_list) ``` sample output: `['H', 'L', 'L']`
70,647,836
Very new to python. I am trying to iterate over a list of floating points and append elements to a new list based on a condition. Everytime I populate the list I get double the output, for example a list with three floating points gives me an output of six elements in the new list. ``` tempretures = [39.3, 38.2, 38.1] new_list = [] i=0 while i <len(tempretures): for tempreture in range(len(tempretures)): if tempreture < 38.3: new_list = new_list + ['L'] elif tempreture > 39.2: new_list = new_list + ['H'] else: tempreture (38.3>=39.2) new_list = new_list + ['N'] i=i+1 print (new_list) print (i) ``` ``` ['L', 'N', 'L', 'N', 'L', 'N'] 3 ```
2022/01/10
[ "https://Stackoverflow.com/questions/70647836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17887798/" ]
The cause of this is at the `else:` statement at the bottom, you haven't indented the line `new_list = new_list + ['N']` so it is being ran no matter the result. There are also a few other improvements which I've made and added comments explaining what it's doing Change your code to this: ``` temperatures = [39.3, 38.2, 38.1] new_list = [] for temperature in temperatures: #goes through all the values in "temperatures" and assigns it to the variable "temperature" if temperature < 38.3: #if temperature is less than 38.3 new_list.append('L') #append (add) 'L' to the list elif temperature > 39.2: #if temperature is greater than 39.2 new_list.append('H') #append 'H' to the list else: #else temperature is greater than or equal to 38.3 or less than or equal to 39.2 new_list.append('N') #append 'N' to the list ```
Hope it will solve your issue: ``` tempretures = [39.3, 38.2, 38.1] new_list = [] for temperature in tempretures: if temperature > 39.2: new_list.append('H') elif temperature>=38.3 and temperature<39.2: new_list.append('N') else: new_list.append('L') print (new_list) ``` sample output: `['H', 'L', 'L']`
70,647,836
Very new to python. I am trying to iterate over a list of floating points and append elements to a new list based on a condition. Everytime I populate the list I get double the output, for example a list with three floating points gives me an output of six elements in the new list. ``` tempretures = [39.3, 38.2, 38.1] new_list = [] i=0 while i <len(tempretures): for tempreture in range(len(tempretures)): if tempreture < 38.3: new_list = new_list + ['L'] elif tempreture > 39.2: new_list = new_list + ['H'] else: tempreture (38.3>=39.2) new_list = new_list + ['N'] i=i+1 print (new_list) print (i) ``` ``` ['L', 'N', 'L', 'N', 'L', 'N'] 3 ```
2022/01/10
[ "https://Stackoverflow.com/questions/70647836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17887798/" ]
It looks like you're looking for something like the below list comprehension, which should be a go to when you have a pattern of building a new list based on the values in an existing list. ``` new_list = ['L' if t < 38.3 else 'H' if t > 39.2 else 'N' for t in temperatures] ```
Hope it will solve your issue: ``` tempretures = [39.3, 38.2, 38.1] new_list = [] for temperature in tempretures: if temperature > 39.2: new_list.append('H') elif temperature>=38.3 and temperature<39.2: new_list.append('N') else: new_list.append('L') print (new_list) ``` sample output: `['H', 'L', 'L']`
57,010,692
I am trying to extract specific data from a requested JSON file. After passing Authorization and using requests.get I got my request (I think it is called a dictionary by Python coders and JSON by JavaScript coders). It contains too much information that I don't need, and I would like to extract only one or two fields, for example {"bio" : " hello world " }. The JSON file contains more than one " bio "; for example, I am scraping 100 accounts and I would like to extract all " bio " values in one go, so I tried this: ```py from bs4 import BeautifulSoup import requests headers = {"Authorization" : "xxxx"} req = requests.get('website', headers = headers) data = req.text soup = BeautifulSoup(data,'html.parser') titles = soup.find_all('span',{'class':'bio'}) for title in titles : print(title.text) ``` and it didn't work. I tried multiple ideas with no success. If possible, please write code that I can understand, since I am trying to learn from my mistakes. Thanks
2019/07/12
[ "https://Stackoverflow.com/questions/57010692", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9952973/" ]
The `Aphid` library I created is perfect for this. From the command prompt: ```py py -m pip install Aphid ``` Then it's just as easy as loading your JSON data and searching it with Aphid. ``` import json import requests import Aphid resp = requests.get(yoururl) data = json.loads(resp.text) results = Aphid.findall(data, 'bio') ``` `results` is now a list of (key, value) tuples, one for every occurrence of the 'bio' key.
After you make your request, either: * you get plain JSON (in which case you load it into Python using the [json](https://docs.python.org/3/library/json.html) module) **or** * you get an HTML page from which you can extract the JSON (using BeautifulSoup), which in turn you parse with the json library.
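If the response is JSON, one way to pull out every "bio" regardless of how deeply it is nested is a small recursive helper; a sketch that assumes `req` is the response object from the question's `requests.get` call:

```
def find_bios(obj):
    """Recursively collect every value stored under a 'bio' key."""
    found = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == 'bio':
                found.append(value)
            found.extend(find_bios(value))
    elif isinstance(obj, list):
        for item in obj:
            found.extend(find_bios(item))
    return found

data = req.json()        # parse the response body as JSON
print(find_bios(data))   # all 'bio' values, e.g. one per scraped account
```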
16,956,523
[Using Python3] I have a csv file that has two columns (an email address and a country code; script is made to actually make it two columns if not the case in the original file - kind of) that I want to split out by the value in the second column and output in separate csv files. ``` eppetj@desrfpkwpwmhdc.com us ==> output-us.csv uheuyvhy@zyetccm.com de ==> output-de.csv avpxhbdt@reywimmujbwm.com es ==> output-es.csv gqcottyqmy@romeajpui.com it ==> output-it.csv qscar@tpcptkfuaiod.com fr ==> output-fr.csv qshxvlngi@oxnzjbdpvlwaem.com gb ==> output-gb.csv vztybzbxqq@gahvg.com us ==> output-us.csv ... ... ... ``` Currently my code kind of does this, but instead of writing each email address to the csv it overwrites the email placed before that. Can someone help me out with this? I am very new to programming and Python and I might not have written the code in the most pythonic way, so I would really appreciate any feedback on the code in general! Thanks in advance! Code: ``` import csv def tsv_to_dict(filename): """Creates a reader of a specified .tsv file.""" with open(filename, 'r') as f: reader = csv.reader(f, delimiter='\t') # '\t' implies tab email_list = [] # Checks each list in the reader list and removes empty elements for lst in reader: email_list.append([elem for elem in lst if elem != '']) # List comprehension # Stores the list of lists as a dict email_dict = dict(email_list) return email_dict def count_keys(dictionary): """Counts the number of entries in a dictionary.""" return len(dictionary.keys()) def clean_dict(dictionary): """Removes all whitespace in keys from specified dictionary.""" return { k.strip():v for k,v in dictionary.items() } # Dictionary comprehension def split_emails(dictionary): """Splits out all email addresses from dictionary into output csv files by country code.""" # Creating a list of unique country codes cc_list = [] for v in dictionary.values(): if not v in cc_list: cc_list.append(v) # Writing the email addresses to a csv based on the cc (value) in dictionary for key, value in dictionary.items(): for c in cc_list: if c == value: with open('output-' +str(c) +'.csv', 'w') as f_out: writer = csv.writer(f_out, lineterminator='\r\n') writer.writerow([key]) ```
2013/06/06
[ "https://Stackoverflow.com/questions/16956523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2445114/" ]
You can simplify this a lot by using a `defaultdict`: ``` import csv from collections import defaultdict emails = defaultdict(list) with open('email.tsv','r') as f: reader = csv.reader(f, delimiter='\t') for row in reader: if row: if '@' in row[0]: emails[row[1].strip()].append(row[0].strip()+'\n') for key,values in emails.items(): with open('output-{}.csv'.format(key), 'w') as f: f.writelines(values) ``` As your separated files are not comma separated, but single columns - you don't need the csv module and can simply write the rows. The `emails` dictionary contains a key for each country code, and a list for all the matching email addresses. To make sure the email addresses are printed correctly, we remove any whitespace and add the a line break (this is so we can use `writelines` later). Once the dictionary is populated, its simply a matter of stepping through the keys to create the files and then writing out the resulting list.
Not a Python answer, but maybe you can use this Bash solution. ``` $ while read email country do echo $email >> output-$country.csv done < in.csv ``` This reads the lines from `in.csv`, splits them into two parts `email` and `country`, and appends (`>>`) the `email` to the file called `output-$country.csv`.
16,956,523
[Using Python3] I have a csv file that has two columns (an email address and a country code; script is made to actually make it two columns if not the case in the original file - kind of) that I want to split out by the value in the second column and output in separate csv files. ``` eppetj@desrfpkwpwmhdc.com us ==> output-us.csv uheuyvhy@zyetccm.com de ==> output-de.csv avpxhbdt@reywimmujbwm.com es ==> output-es.csv gqcottyqmy@romeajpui.com it ==> output-it.csv qscar@tpcptkfuaiod.com fr ==> output-fr.csv qshxvlngi@oxnzjbdpvlwaem.com gb ==> output-gb.csv vztybzbxqq@gahvg.com us ==> output-us.csv ... ... ... ``` Currently my code kind of does this, but instead of writing each email address to the csv it overwrites the email placed before that. Can someone help me out with this? I am very new to programming and Python and I might not have written the code in the most pythonic way, so I would really appreciate any feedback on the code in general! Thanks in advance! Code: ``` import csv def tsv_to_dict(filename): """Creates a reader of a specified .tsv file.""" with open(filename, 'r') as f: reader = csv.reader(f, delimiter='\t') # '\t' implies tab email_list = [] # Checks each list in the reader list and removes empty elements for lst in reader: email_list.append([elem for elem in lst if elem != '']) # List comprehension # Stores the list of lists as a dict email_dict = dict(email_list) return email_dict def count_keys(dictionary): """Counts the number of entries in a dictionary.""" return len(dictionary.keys()) def clean_dict(dictionary): """Removes all whitespace in keys from specified dictionary.""" return { k.strip():v for k,v in dictionary.items() } # Dictionary comprehension def split_emails(dictionary): """Splits out all email addresses from dictionary into output csv files by country code.""" # Creating a list of unique country codes cc_list = [] for v in dictionary.values(): if not v in cc_list: cc_list.append(v) # Writing the email addresses to a csv based on the cc (value) in dictionary for key, value in dictionary.items(): for c in cc_list: if c == value: with open('output-' +str(c) +'.csv', 'w') as f_out: writer = csv.writer(f_out, lineterminator='\r\n') writer.writerow([key]) ```
2013/06/06
[ "https://Stackoverflow.com/questions/16956523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2445114/" ]
The problem with your code is that it keeps opening the same country output file each time it writes an entry into it, thereby overwriting whatever might have already been there. A simple way to avoid that is to open all the output files at once for writing and store them in a dictionary keyed by the country code. Likewise, you can have another that associates each country code to a`csv.writer`object for that country's output file. **Update:** While I agree that Burhan's approach is probably superior, I feel that you have the idea that my earlier answer was excessively long due to all the comments it had -- so here's another version of essentially the same logic but with minimal comments to allow you better discern its reasonably-short true length (even with the contextmanager). ``` import csv from contextlib import contextmanager @contextmanager # to manage simultaneous opening and closing of output files def open_country_csv_files(countries): csv_files = {country: open('output-'+country+'.csv', 'w') for country in countries} yield csv_files for f in csv_files.values(): f.close() with open('email.tsv', 'r') as f: email_dict = {row[0]: row[1] for row in csv.reader(f, delimiter='\t') if row} countries = set(email_dict.values()) with open_country_csv_files(countries) as csv_files: csv_writers = {country: csv.writer(csv_files[country], lineterminator='\r\n') for country in countries} for email_addr,country in email_dict.items(): csv_writers[country].writerow([email_addr]) ```
Not a Python answer, but maybe you can use this Bash solution. ``` $ while read email country do echo $email >> output-$country.csv done < in.csv ``` This reads the lines from `in.csv`, splits them into two parts `email` and `country`, and appends (`>>`) the `email` to the file called `output-$country.csv`.
16,956,523
[Using Python3] I have a csv file that has two columns (an email address and a country code; script is made to actually make it two columns if not the case in the original file - kind of) that I want to split out by the value in the second column and output in separate csv files. ``` eppetj@desrfpkwpwmhdc.com us ==> output-us.csv uheuyvhy@zyetccm.com de ==> output-de.csv avpxhbdt@reywimmujbwm.com es ==> output-es.csv gqcottyqmy@romeajpui.com it ==> output-it.csv qscar@tpcptkfuaiod.com fr ==> output-fr.csv qshxvlngi@oxnzjbdpvlwaem.com gb ==> output-gb.csv vztybzbxqq@gahvg.com us ==> output-us.csv ... ... ... ``` Currently my code kind of does this, but instead of writing each email address to the csv it overwrites the email placed before that. Can someone help me out with this? I am very new to programming and Python and I might not have written the code in the most pythonic way, so I would really appreciate any feedback on the code in general! Thanks in advance! Code: ``` import csv def tsv_to_dict(filename): """Creates a reader of a specified .tsv file.""" with open(filename, 'r') as f: reader = csv.reader(f, delimiter='\t') # '\t' implies tab email_list = [] # Checks each list in the reader list and removes empty elements for lst in reader: email_list.append([elem for elem in lst if elem != '']) # List comprehension # Stores the list of lists as a dict email_dict = dict(email_list) return email_dict def count_keys(dictionary): """Counts the number of entries in a dictionary.""" return len(dictionary.keys()) def clean_dict(dictionary): """Removes all whitespace in keys from specified dictionary.""" return { k.strip():v for k,v in dictionary.items() } # Dictionary comprehension def split_emails(dictionary): """Splits out all email addresses from dictionary into output csv files by country code.""" # Creating a list of unique country codes cc_list = [] for v in dictionary.values(): if not v in cc_list: cc_list.append(v) # Writing the email addresses to a csv based on the cc (value) in dictionary for key, value in dictionary.items(): for c in cc_list: if c == value: with open('output-' +str(c) +'.csv', 'w') as f_out: writer = csv.writer(f_out, lineterminator='\r\n') writer.writerow([key]) ```
2013/06/06
[ "https://Stackoverflow.com/questions/16956523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2445114/" ]
You can simplify this a lot by using a `defaultdict`: ``` import csv from collections import defaultdict emails = defaultdict(list) with open('email.tsv','r') as f: reader = csv.reader(f, delimiter='\t') for row in reader: if row: if '@' in row[0]: emails[row[1].strip()].append(row[0].strip()+'\n') for key,values in emails.items(): with open('output-{}.csv'.format(key), 'w') as f: f.writelines(values) ``` As your separated files are not comma separated, but single columns - you don't need the csv module and can simply write the rows. The `emails` dictionary contains a key for each country code, and a list for all the matching email addresses. To make sure the email addresses are printed correctly, we remove any whitespace and add the a line break (this is so we can use `writelines` later). Once the dictionary is populated, its simply a matter of stepping through the keys to create the files and then writing out the resulting list.
The problem with your code is that it keeps opening the same country output file each time it writes an entry into it, thereby overwriting whatever might have already been there. A simple way to avoid that is to open all the output files at once for writing and store them in a dictionary keyed by the country code. Likewise, you can have another that associates each country code to a`csv.writer`object for that country's output file. **Update:** While I agree that Burhan's approach is probably superior, I feel that you have the idea that my earlier answer was excessively long due to all the comments it had -- so here's another version of essentially the same logic but with minimal comments to allow you better discern its reasonably-short true length (even with the contextmanager). ``` import csv from contextlib import contextmanager @contextmanager # to manage simultaneous opening and closing of output files def open_country_csv_files(countries): csv_files = {country: open('output-'+country+'.csv', 'w') for country in countries} yield csv_files for f in csv_files.values(): f.close() with open('email.tsv', 'r') as f: email_dict = {row[0]: row[1] for row in csv.reader(f, delimiter='\t') if row} countries = set(email_dict.values()) with open_country_csv_files(countries) as csv_files: csv_writers = {country: csv.writer(csv_files[country], lineterminator='\r\n') for country in countries} for email_addr,country in email_dict.items(): csv_writers[country].writerow([email_addr]) ```
71,117,916
I'm looking to use the Kubernetes python client to delete a deployment, but then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function something like follows. ``` try: # try to delete if exists AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod") except Exception: # handle exception # wait for all pods associated with deployment to be deleted. for e in w.stream( v1.list_namespaced_pod, namespace="default", label_selector='mylabel=my-value", timeout_seconds=300): pod_name = e['object'].metadata.name print("pod_name", pod_name) if e['type'] == 'DELETED': w.stop() break ``` However, I see two problems with this. 1. If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events. 2. Upon seeing events in the event stream for the pod activity, how do know all the pods got deleted? Seems fragile to count them. I'm basically looking to replace the `kubectl delete --wait` functionality with a python script. Thanks for any insights into this.
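One way to sidestep both problems (no events when the pods are already gone, and having to count `DELETED` events) is to poll the pod list until it comes back empty instead of watching the event stream; a minimal sketch, assuming the official `kubernetes` Python client and the label selector from the question:

```
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def wait_for_pods_gone(namespace, label_selector, timeout=300, interval=2):
    """Poll until no pods match the selector, or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        pods = v1.list_namespaced_pod(namespace=namespace, label_selector=label_selector)
        if not pods.items:
            return True
        time.sleep(interval)
    return False

wait_for_pods_gone("default", "mylabel=my-value")
```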
2022/02/14
[ "https://Stackoverflow.com/questions/71117916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/226081/" ]
```css .card { display: flex; flex-direction: column; flex-wrap: wrap; justify-content: center; align-items: center; height: 400px; } .card img { height: 400px; max-width:50%; } ``` ```html <div class = "container"> <div class = "card"> <img src="https://www.unfe.org/wp-content/uploads/2019/04/SM-placeholder.png"> <h2>Gift Cards</h2> <p> Lorem ipsum dolor, sit amet consectetur adipisicing elit. Neque expedita tempore quasi omnis a aut et totam illo fuga accusamus dolorum vero, ut harum consectetur. Minima molestias officiis culpa non sed dicta itaque. Et aliquam illo obcaecati molestias veritatis porro. </p> <p>Already have an Orange MyTunes Music Gift Card?</p> <hr> <a href="#">>Redeem</a> </div> </div> ``` Not the best approach however, I suggest you change HTML code to be like this: ```css .card { display: flex; justify-content: center; align-items: center; } .card-img { width: 50%; display: flex; align-items: center; } .card-img img { width: 100%; } .card-body { width: 50% } ``` ```html <div class="container"> <div class="card"> <div class="card-img"> <img src="https://s6.uupload.ir/files/giftcard_scrj.png"> </div> <div class="card-body"> <h2>Gift Cards</h2> <p> Lorem ipsum dolor, sit amet consectetur adipisicing elit. Neque expedita tempore quasi omnis a aut et totam illo fuga accusamus dolorum vero, ut harum consectetur. Minima molestias officiis culpa non sed dicta itaque. Et aliquam illo obcaecati molestias veritatis porro. </p> <p>Already have an Orange MyTunes Music Gift Card?</p> <hr> <a href="#">>Redeem</a> </div> </div> </div> ```
You need to `flex`: ```css .card{ display: flex; justify-content: center; gap: 20px; margin-top: 20px; } .img{ width: 40%; } img{ width: 100%; } .text{ width: 40%; } .text p{ font-size: 12px; } ``` ```html <div class="card"> <div class="img"> <img src="https://s6.uupload.ir/files/magearray-giftcard-icon_n30.png"> </div> <div class="text"> <h2>Gift Cards</h2> <p> Lorem ipsum dolor, sit amet consectetur adipisicing elit. Neque expedita tempore quasi omnis a aut et totam illo fuga accusamus dolorum vero, ut harum consectetur. Minima molestias officiis culpa non sed dicta itaque. Et aliquam illo obcaecati molestias veritatis porro. </p> <p>Already have an Orange MyTunes Music Gift Card?</p> <hr> <a href="#">>Redeem</a> </div> </div> ```
50,259,795
I just installed the discord.py rewrite branch, but attempting to use `import discord` or `from discord.ext import commands` simply results in a TypeError. ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/site-packages/discord/__init__.py", line 20, in <module> from .client import Client, AppInfo File "/usr/local/lib/python3.6/site-packages/discord/client.py", line 30, in <module> from .guild import Guild File "/usr/local/lib/python3.6/site-packages/discord/guild.py", line 39, in <module> from .channel import * File "/usr/local/lib/python3.6/site-packages/discord/channel.py", line 31, in <module> from .webhook import Webhook File "/usr/local/lib/python3.6/site-packages/discord/webhook.py", line 27, in <module> import aiohttp File "/usr/local/lib/python3.6/site-packages/aiohttp/__init__.py", line 6, in <module> from .client import * # noqa File "/usr/local/lib/python3.6/site-packages/aiohttp/client.py", line 15, in <module> from . import connector as connector_mod File "/usr/local/lib/python3.6/site-packages/aiohttp/connector.py", line 17, in <module> from .client_proto import ResponseHandler File "/usr/local/lib/python3.6/site-packages/aiohttp/client_proto.py", line 6, in <module> from .http import HttpResponseParser, StreamWriter File "/usr/local/lib/python3.6/site-packages/aiohttp/http.py", line 8, in <module> from .http_parser import (HttpParser, HttpRequestParser, HttpResponseParser, File "/usr/local/lib/python3.6/site-packages/aiohttp/http_parser.py", line 15, in <module> from .http_writer import HttpVersion, HttpVersion10 File "/usr/local/lib/python3.6/site-packages/aiohttp/http_writer.py", line 304, in <module> class URL(yarl.URL): File "/usr/local/lib/python3.6/site-packages/yarl/__init__.py", line 232, in __init_subclass__ "is forbidden".format(cls)) TypeError: Inheritance a class <class 'aiohttp.http_writer.URL'> from URL is forbidden ``` Although the error is technically from yarl rather than from discord.py itself, the error only occurs upon trying to import the modules. I've already tried reinstalling python as well as the discord.py rewrite branch, and if it makes any difference am running on a RPi 3 B+
2018/05/09
[ "https://Stackoverflow.com/questions/50259795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9766666/" ]
Your aiohttp package might be out of date. Try ``` pip install --upgrade aiohttp ```
I tried to install discord.py on my Python 3.7 and it didn't work; I had to install Python 3.6.6 to make it work. Maybe you are using Python 3.7; if so, you should try rolling back to Python 3.6.6.
66,306,167
Can you please explain how the below python code is evaluated to be True ``` if 50 == 10 or 30: print('True') else: print('False') ``` Output: True
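For context, the condition groups as `(50 == 10) or 30` rather than `50 == (10 or 30)`, because `==` binds tighter than `or`; since `or` returns its second operand here and `30` is truthy, the `if` branch always runs. A quick interpreter check:

```
>>> 50 == 10 or 30        # parsed as (50 == 10) or 30
30
>>> bool(30)              # any non-zero number is truthy
True
>>> 50 == 10 or 50 == 30  # probably what was intended
False
```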
2021/02/21
[ "https://Stackoverflow.com/questions/66306167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121586/" ]
Replace the internals of the while loop with a simpler version: ``` while leftIndex < left.count && rightIndex < right.count { if left[leftIndex] <= right[rightIndex] { mergedArr.append(left[leftIndex]) leftIndex += 1 } else { mergedArr.append(right[rightIndex]) rightIndex += 1 } } ``` and you forgot about the rest of the arrays: ``` while leftIndex < left.count { mergedArr.append(left[leftIndex]) leftIndex += 1 } while rightIndex < right.count { mergedArr.append(right[rightIndex]) rightIndex += 1 } ``` Why? When you merge two arrays, the tail of one array may not have been processed yet. For example, when you merge `[1 3 5] and [1 2]`, after copying `[1 1 2]` to the result the second array is exhausted (`rightIndex` becomes equal to `right.count`) and the main while loop stops. But what about the `[3,5]` piece?
Consider if the data remains on the `left` or `right` only. ``` public func mergeSort<T: Comparable>(_ array: [T]) -> [T] { if array.count < 2 { return array } let mid = array.count / 2 let left = [T](array[0..<mid]) let right = [T](array[mid..<array.count]) return merge(left, right) } private func merge<T: Comparable>(_ left: [T], _ right: [T]) -> [T] { var leftIndex = 0 var rightIndex = 0 var tempList = [T]() while leftIndex < left.count && rightIndex < right.count { if left[leftIndex] < right[rightIndex] { tempList.append(left[leftIndex]) leftIndex += 1 } else if left[leftIndex] > right[rightIndex] { tempList.append(right[rightIndex]) rightIndex += 1 } else { tempList.append(left[leftIndex]) tempList.append(right[rightIndex]) leftIndex += 1 rightIndex += 1 } } //Don't miss this part. while leftIndex < left.count { tempList.append(left[leftIndex]) leftIndex += 1 } while rightIndex < right.count { tempList.append(right[rightIndex]) rightIndex += 1 } return tempList } ```
66,306,167
Can you please explain how the below python code is evaluated to be True ``` if 50 == 10 or 30: print('True') else: print('False') ``` Output: True
2021/02/21
[ "https://Stackoverflow.com/questions/66306167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121586/" ]
It's not working because you forgot to add the remaining data in the left and right sub-array(s) to your ordered array. The merge method should look like the one below. ``` func merge(_ left: [Int], _ right: [Int]) -> [Int] { var leftIndex = 0 var rightIndex = 0 var orderedArray = [Int]() while leftIndex < left.count && rightIndex < right.count { let leftElement = left[leftIndex] let rightElement = right[rightIndex] if leftElement < rightElement { orderedArray.append(leftElement) leftIndex += 1 } else if leftElement > rightElement { orderedArray.append(rightElement) rightIndex += 1 } else { orderedArray.append(leftElement) orderedArray.append(rightElement) leftIndex += 1 rightIndex += 1 } } while leftIndex < left.count { orderedArray.append(left[leftIndex]) leftIndex += 1 } while rightIndex < right.count { orderedArray.append(right[rightIndex]) rightIndex += 1 } return orderedArray } ```
Consider if the data remains on the `left` or `right` only. ``` public func mergeSort<T: Comparable>(_ array: [T]) -> [T] { if array.count < 2 { return array } let mid = array.count / 2 let left = [T](array[0..<mid]) let right = [T](array[mid..<array.count]) return merge(left, right) } private func merge<T: Comparable>(_ left: [T], _ right: [T]) -> [T] { var leftIndex = 0 var rightIndex = 0 var tempList = [T]() while leftIndex < left.count && rightIndex < right.count { if left[leftIndex] < right[rightIndex] { tempList.append(left[leftIndex]) leftIndex += 1 } else if left[leftIndex] > right[rightIndex] { tempList.append(right[rightIndex]) rightIndex += 1 } else { tempList.append(left[leftIndex]) tempList.append(right[rightIndex]) leftIndex += 1 rightIndex += 1 } } //Don't miss this part. while leftIndex < left.count { tempList.append(left[leftIndex]) leftIndex += 1 } while rightIndex < right.count { tempList.append(right[rightIndex]) rightIndex += 1 } return tempList } ```
13,907,949
I'm having an issue and I have no idea why this is happening and how to fix it. I'm working on developing a Videogame with python and pygame and I'm getting this error: ``` File "/home/matt/Smoking-Games/sg-project00/project00/GameModel.py", line 15, in Update self.imageDef=self.values[2] TypeError: 'NoneType' object has no attribute '__getitem__' ``` The code: ``` import pygame,components from pygame.locals import * class Player(components.Entity): def __init__(self,images): components.Entity.__init__(self,images) self.values=[] def Update(self,events,background): move=components.MoveFunctions() self.values=move.CompleteMove(events) self.imageDef=self.values[2] self.isMoving=self.values[3] def Animation(self,time): if(self.isMoving and time==1): self.pos+=1 if (self.pos>(len(self.anim[self.imageDef])-1)): self.pos=0 self.image=self.anim[self.imageDef][self.pos] ``` Can you explain to me what that error means and why it is happening so I can fix it?
2012/12/17
[ "https://Stackoverflow.com/questions/13907949", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1908896/" ]
BrenBarn is correct. The error means you tried to do something like `None[5]`. In the backtrace, it says `self.imageDef=self.values[2]`, which means that your `self.values` is `None`. You should go through all the functions that update `self.values` and make sure you account for all the corner cases.
The function `move.CompleteMove(events)` that you use within your class probably doesn't contain a `return` statement. So nothing is returned to `self.values` (==> None). Use `return` in `move.CompleteMove(events)` to return whatever you want to store in `self.values` and it should work. Hope this helps.
13,907,949
I'm having an issue and I have no idea why this is happening and how to fix it. I'm working on developing a Videogame with python and pygame and I'm getting this error: ``` File "/home/matt/Smoking-Games/sg-project00/project00/GameModel.py", line 15, in Update self.imageDef=self.values[2] TypeError: 'NoneType' object has no attribute '__getitem__' ``` The code: ``` import pygame,components from pygame.locals import * class Player(components.Entity): def __init__(self,images): components.Entity.__init__(self,images) self.values=[] def Update(self,events,background): move=components.MoveFunctions() self.values=move.CompleteMove(events) self.imageDef=self.values[2] self.isMoving=self.values[3] def Animation(self,time): if(self.isMoving and time==1): self.pos+=1 if (self.pos>(len(self.anim[self.imageDef])-1)): self.pos=0 self.image=self.anim[self.imageDef][self.pos] ``` Can you explain to me what that error means and why it is happening so I can fix it?
2012/12/17
[ "https://Stackoverflow.com/questions/13907949", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1908896/" ]
BrenBarn is correct. The error means you tried to do something like `None[5]`. In the backtrace, it says `self.imageDef=self.values[2]`, which means that your `self.values` is `None`. You should go through all the functions that update `self.values` and make sure you account for all the corner cases.
`move.CompleteMove()` does not return a value (perhaps it just prints something). Any method that does not return a value returns `None`, and you have assigned `None` to `self.values`. Here is an example of this: ``` >>> def hello(x): ... print x*2 ... >>> hello('world') worldworld >>> y = hello('world') worldworld >>> y >>> ``` You'll note `y` doesn't print anything, because its `None` (the only value that doesn't print anything on the interactive prompt).
13,907,949
I'm having an issue and I have no idea why this is happening and how to fix it. I'm working on developing a Videogame with python and pygame and I'm getting this error: ``` File "/home/matt/Smoking-Games/sg-project00/project00/GameModel.py", line 15, in Update self.imageDef=self.values[2] TypeError: 'NoneType' object has no attribute '__getitem__' ``` The code: ``` import pygame,components from pygame.locals import * class Player(components.Entity): def __init__(self,images): components.Entity.__init__(self,images) self.values=[] def Update(self,events,background): move=components.MoveFunctions() self.values=move.CompleteMove(events) self.imageDef=self.values[2] self.isMoving=self.values[3] def Animation(self,time): if(self.isMoving and time==1): self.pos+=1 if (self.pos>(len(self.anim[self.imageDef])-1)): self.pos=0 self.image=self.anim[self.imageDef][self.pos] ``` Can you explain to me what that error means and why it is happening so I can fix it?
2012/12/17
[ "https://Stackoverflow.com/questions/13907949", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1908896/" ]
`move.CompleteMove()` does not return a value (perhaps it just prints something). Any method that does not return a value returns `None`, and you have assigned `None` to `self.values`. Here is an example of this: ``` >>> def hello(x): ... print x*2 ... >>> hello('world') worldworld >>> y = hello('world') worldworld >>> y >>> ``` You'll note `y` doesn't print anything, because its `None` (the only value that doesn't print anything on the interactive prompt).
The function `move.CompleteMove(events)` that you use within your class probably doesn't contain a `return` statement. So nothing is returned to `self.values` (==> None). Use `return` in `move.CompleteMove(events)` to return whatever you want to store in `self.values` and it should work. Hope this helps.
58,500,923
I have an array of elements [a\_1, a\_2, ... a\_n] and an array of probabilities associated with these elements [p\_1, p\_2, ..., p\_n]. I want to choose "k" elements from [a\_1,...a\_n], k << n, according to the probabilities [p\_1,p\_2,...,p\_n]. How can I code this in Python? Thank you very much, I am not experienced at programming.
2019/10/22
[ "https://Stackoverflow.com/questions/58500923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12256362/" ]
Use `numpy.random.choice`. Example: ``` import numpy as np from numpy.random import choice sample_space = np.array([a_1, a_2, ..., a_n]) # substitute the a_i's discrete_probability_distribution = np.array([p_1, p_2, ..., p_n]) # substitute the p_i's # picking N samples N = 10 for _ in range(N): print(choice(sample_space, p=discrete_probability_distribution)) ```
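Since the question asks for `k` distinct elements, a single call with `size=k` and `replace=False` may be closer to what is wanted; a small sketch with hypothetical values:

```
import numpy as np

a = np.array(['a1', 'a2', 'a3', 'a4', 'a5'])    # hypothetical elements
p = np.array([0.10, 0.20, 0.30, 0.25, 0.15])    # probabilities, must sum to 1
k = 2

picked = np.random.choice(a, size=k, replace=False, p=p)
print(picked)
```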
Perhaps you want something similar to this? ``` import random data = ['a', 'b', 'c', 'd'] probabilities = [0.5, 0.1, 0.9, 0.2] for _ in range(10): print([d for d,p in zip(data,probabilities) if p>random.random()]) ``` The above would output something like: ``` ['c'] ['c'] ['a', 'c'] ['a', 'c'] ['a', 'c'] [] ['a', 'c'] ['c', 'd'] ['a', 'c'] ['d'] ```
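Neither snippet above guarantees exactly k picks. If exactly `k` distinct elements are wanted, `numpy.random.choice` also accepts `size` and `replace` arguments; a hedged sketch (assuming the probabilities sum to 1) would be:

```
import numpy as np

data = np.array(['a', 'b', 'c', 'd'])
probabilities = np.array([0.5, 0.1, 0.2, 0.2])  # must sum to 1
k = 2

# k distinct elements, weighted by the given probabilities
picked = np.random.choice(data, size=k, replace=False, p=probabilities)
print(picked)
```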
15,305,634
Today I progressed further into [this Python roguelike tutorial](http://roguebasin.roguelikedevelopment.org/index.php?title=Complete_Roguelike_Tutorial,_using_python%2Blibtcod), and got to the inventory. As of now, I can pick up items and use them. The only problem is, when accessing the inventory, it's only visible for a split second, even though I used the `console_wait_for_keypress(True)` function. I'm not sure as to why it disappears. Here's the code that displays a menu(in this case, the inventory): ``` def menu(header,options,width): if len(options)>26: raise ValueError('Cannot have a menu with more than 26 options.') header_height=libtcod.console_get_height_rect(con,0,0,width,SCREEN_HEIGHT,header) height=len(options)+header_height window=libtcod.console_new(width,height) libtcod.console_set_default_foreground(window,libtcod.white) libtcod.console_print_rect_ex(window,0,0,width,height,libtcod.BKGND_NONE,libtcod.LEFT,header) y=header_height letter_index=ord('a') for option_text in options: text='('+chr(letter_index)+')'+option_text libtcod.console_print_ex(window,0,y,libtcod.BKGND_NONE,libtcod.LEFT,text) y+=1 letter_index+=1 x=SCREEN_WIDTH/2-width/2 y=SCREEN_HEIGHT/2-height/2 libtcod.console_blit(window,0,0,width,height,0,x,y,1.0,0.7) libtcod.console_flush() key=libtcod.console_wait_for_keypress(True) index=key.c-ord('a') if index>=0 and index<len(options): return index return None ``` I'd appreciate anyone's help or input to this problem.
2013/03/09
[ "https://Stackoverflow.com/questions/15305634", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2138040/" ]
Your code is messy, there may be multiple issues but I think this line is your problem ``` txView.SetBackgroundResource(Resource.Color.PrimaryColor); ``` As you can see [here](http://developer.android.com/reference/android/view/View.html#setBackgroundResource%28int%29) in the documentation you should only pass a reference to a `Drawable` as the parameter not a `Color`. You need to use this method [here](http://developer.android.com/reference/android/view/View.html#setBackgroundColor%28int%29) like so: `txView.SetBackgroundColor(Resource.Color.PrimaryColor);` Also the code in the question won't compile, where does the txView variable come from? I'm assuming it's meant to be titleView? And another thing; the log entry you have posted will be displayed every time you run your project, you can ignore it. See [here](http://mono-for-android.1047100.n5.nabble.com/Runtime-version-supported-by-this-application-is-unavailable-tp5677802p5678753.html) for more info. The actual log entry you should have posted would have come much later (and only *after* you click Continue in Visual Studio)
You need to tell it which view to find the id in, so instantiate a `view` after getting the `factory`. ``` var view = factory.Inflate(Resource.Layout.DialogRegister, null); ``` Because `titleView` would otherwise reference null, it causes the crash. Then you can find the `title` using the `view` you just created. One thing to point out: `Android.Resource` refers to resources of the Android framework in Mono for Android, while `Resource` is the reference to your own layouts, ids, etc. So the code would be: ``` var titleView = view.FindViewById<TextView>(Resource.Id.title); ``` [`SetBackgroundResource`](http://developer.android.com/reference/android/view/View.html#setBackgroundResource%28int%29) can only take a drawable as an effective parameter, so a color won't work in this case. However, `SetBackgroundColor` would work because `Android.Graphics.Color.Red` is a [`Color`](http://developer.android.com/reference/android/graphics/Color.html) object. Also, you can `SetView(view)` while building the dialog.
15,305,634
Today I progressed further into [this Python roguelike tutorial](http://roguebasin.roguelikedevelopment.org/index.php?title=Complete_Roguelike_Tutorial,_using_python%2Blibtcod), and got to the inventory. As of now, I can pick up items and use them. The only problem is, when accessing the inventory, it's only visible for a split second, even though I used the `console_wait_for_keypress(True)` function. I'm not sure as to why it disappears. Here's the code that displays a menu(in this case, the inventory): ``` def menu(header,options,width): if len(options)>26: raise ValueError('Cannot have a menu with more than 26 options.') header_height=libtcod.console_get_height_rect(con,0,0,width,SCREEN_HEIGHT,header) height=len(options)+header_height window=libtcod.console_new(width,height) libtcod.console_set_default_foreground(window,libtcod.white) libtcod.console_print_rect_ex(window,0,0,width,height,libtcod.BKGND_NONE,libtcod.LEFT,header) y=header_height letter_index=ord('a') for option_text in options: text='('+chr(letter_index)+')'+option_text libtcod.console_print_ex(window,0,y,libtcod.BKGND_NONE,libtcod.LEFT,text) y+=1 letter_index+=1 x=SCREEN_WIDTH/2-width/2 y=SCREEN_HEIGHT/2-height/2 libtcod.console_blit(window,0,0,width,height,0,x,y,1.0,0.7) libtcod.console_flush() key=libtcod.console_wait_for_keypress(True) index=key.c-ord('a') if index>=0 and index<len(options): return index return None ``` I'd appreciate anyone's help or input to this problem.
2013/03/09
[ "https://Stackoverflow.com/questions/15305634", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2138040/" ]
You need to tell it which view to find the id in, so instantiate a `view` after getting the `factory`. ``` var view = factory.Inflate(Resource.Layout.DialogRegister, null); ``` Because `titleView` would otherwise reference null, it causes the crash. Then you can find the `title` using the `view` you just created. One thing to point out: `Android.Resource` refers to resources of the Android framework in Mono for Android, while `Resource` is the reference to your own layouts, ids, etc. So the code would be: ``` var titleView = view.FindViewById<TextView>(Resource.Id.title); ``` [`SetBackgroundResource`](http://developer.android.com/reference/android/view/View.html#setBackgroundResource%28int%29) can only take a drawable as an effective parameter, so a color won't work in this case. However, `SetBackgroundColor` would work because `Android.Graphics.Color.Red` is a [`Color`](http://developer.android.com/reference/android/graphics/Color.html) object. Also, you can `SetView(view)` while building the dialog.
I would just use dialogs. You override the OnCreateDialog method; there you can set a content view and a custom title if needed, and you can also customize the dialog. Here is a brief example (note the SetTitle method); more can be found at the link below the code. This code shows how to wire up a button click to show the dialog. Once the button is clicked, OnCreateDialog will be called and the system will show your dialog. ``` const int NewGame = 1; protected override Dialog OnCreateDialog(int id) { switch (id) { case NewGame: Dialog d = new Dialog(this); // Create the new dialog d.SetTitle("New Game"); // Set the title. d.SetContentView(Resource.Layout.ActNewGame); // Set the layout resource. // here you can use d.FindViewById<T>(Resource) return d; } return null; } ``` ``` // wire up a button click in the OnCreate method. btnNewGame.Click += (o, e) => { ShowDialog(NewGame); }; ``` <http://xandroid4net.blogspot.com/2014/09/xamarinandroid-ways-to-customize-dialogs.html>
15,305,634
Today I progressed further into [this Python roguelike tutorial](http://roguebasin.roguelikedevelopment.org/index.php?title=Complete_Roguelike_Tutorial,_using_python%2Blibtcod), and got to the inventory. As of now, I can pick up items and use them. The only problem is, when accessing the inventory, it's only visible for a split second, even though I used the `console_wait_for_keypress(True)` function. I'm not sure as to why it disappears. Here's the code that displays a menu(in this case, the inventory): ``` def menu(header,options,width): if len(options)>26: raise ValueError('Cannot have a menu with more than 26 options.') header_height=libtcod.console_get_height_rect(con,0,0,width,SCREEN_HEIGHT,header) height=len(options)+header_height window=libtcod.console_new(width,height) libtcod.console_set_default_foreground(window,libtcod.white) libtcod.console_print_rect_ex(window,0,0,width,height,libtcod.BKGND_NONE,libtcod.LEFT,header) y=header_height letter_index=ord('a') for option_text in options: text='('+chr(letter_index)+')'+option_text libtcod.console_print_ex(window,0,y,libtcod.BKGND_NONE,libtcod.LEFT,text) y+=1 letter_index+=1 x=SCREEN_WIDTH/2-width/2 y=SCREEN_HEIGHT/2-height/2 libtcod.console_blit(window,0,0,width,height,0,x,y,1.0,0.7) libtcod.console_flush() key=libtcod.console_wait_for_keypress(True) index=key.c-ord('a') if index>=0 and index<len(options): return index return None ``` I'd appreciate anyone's help or input to this problem.
2013/03/09
[ "https://Stackoverflow.com/questions/15305634", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2138040/" ]
Your code is messy, there may be multiple issues but I think this line is your problem ``` txView.SetBackgroundResource(Resource.Color.PrimaryColor); ``` As you can see [here](http://developer.android.com/reference/android/view/View.html#setBackgroundResource%28int%29) in the documentation you should only pass a reference to a `Drawable` as the parameter not a `Color`. You need to use this method [here](http://developer.android.com/reference/android/view/View.html#setBackgroundColor%28int%29) like so: `txView.SetBackgroundColor(Resource.Color.PrimaryColor);` Also the code in the question won't compile, where does the txView variable come from? I'm assuming it's meant to be titleView? And another thing; the log entry you have posted will be displayed every time you run your project, you can ignore it. See [here](http://mono-for-android.1047100.n5.nabble.com/Runtime-version-supported-by-this-application-is-unavailable-tp5677802p5678753.html) for more info. The actual log entry you should have posted would have come much later (and only *after* you click Continue in Visual Studio)
I would just use dialogs. You override the OnCreateDialog method; there you can set a content view and a custom title if needed, and you can also customize the dialog. Here is a brief example (note the SetTitle method); more can be found at the link below the code. This code shows how to wire up a button click to show the dialog. Once the button is clicked, OnCreateDialog will be called and the system will show your dialog. ``` const int NewGame = 1; protected override Dialog OnCreateDialog(int id) { switch (id) { case NewGame: Dialog d = new Dialog(this); // Create the new dialog d.SetTitle("New Game"); // Set the title. d.SetContentView(Resource.Layout.ActNewGame); // Set the layout resource. // here you can use d.FindViewById<T>(Resource) return d; } return null; } ``` ``` // wire up a button click in the OnCreate method. btnNewGame.Click += (o, e) => { ShowDialog(NewGame); }; ``` <http://xandroid4net.blogspot.com/2014/09/xamarinandroid-ways-to-customize-dialogs.html>
56,324,750
I want to get the Docker host machine's IP address and interface names, i.e. the `ifconfig` of the Docker host machine. Instead I'm getting the Docker container's IP address when I run `ifconfig` inside the container. It would be great if someone could tell me how to fetch the IP address of the Docker host machine from a Docker container. I have tried `ifconfig dockerhostname`, and as a result I get the error `dockerhostmachi: error fetching interface information: Device not found`. This is my Dockerfile: ``` FROM ubuntu:14.04 # Install dependencies RUN apt-get update && apt-get install -y \ python-dev \ libffi-dev \ libssl-dev \ python-enum \ apache2 \ libapache2-mod-wsgi \ python-pip \ python-qt4 RUN chmod -R 777 /var/log/apache2/error.log RUN chmod -R 777 /var/log/apache2/other_vhosts_access.log RUN chmod -R 777 /var/log/apache2/access.log RUN chmod -R 777 /var/run/apache2/ RUN a2enmod wsgi ``` I need to get the `ifconfig` result for the Docker host machine from a Docker container, or when I run the Docker image.
2019/05/27
[ "https://Stackoverflow.com/questions/56324750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11361438/" ]
This is an [XY problem](https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem). Why? Because the issue is not the amount of data not fitting the index; that's the *symptom*. Real problem ------------ The real problem is how to stop duplicate email entries. Attempted solution ------------------ The attempted solution is to create a `UNIQUE` index on the `email` column. It's a great attempt at solving the issue, except emails can be unusually long, so your index length will vary. Sometimes it might be 10 bytes, sometimes 30, sometimes 50... and sometimes 255 - that's *not good*. Back to the drawing board ------------------------- What if all emails had a fixed length? That's a much easier problem to tackle. You don't have to worry about the index size limitation; all you need to make sure is that it stays below the default limit of 767 bytes. Better solution --------------- Let's not index the `email` field. Let's create another column, `email_hash`, and store the **hash** of the email there. Then make the hash a `UNIQUE` index. Benefits: - always fixed width - always falls within the default index length of 767 bytes - no worrying about utf8 How to do it to waste as little space as possible ------------------------------------------------- * choose a hashing algorithm. `sha1` is completely fine, although you can go for `sha256`. I'll use the 256-bit version of the `SHA-2` algorithm * create a `binary(32)` field. It will hold the raw value of our hashing function. It will always be fixed width for any kind of email * I'll use triggers, `before insert` and `before update`, to maintain the value of the hash so I don't have to worry about it in my language's logic. ### Add the binary column ``` ALTER TABLE admins ADD email_hash BINARY(32) AFTER email; ``` ### Add the before insert trigger ``` DELIMITER $$ CREATE TRIGGER `admins_before_insert` BEFORE INSERT ON admins FOR EACH ROW BEGIN SET NEW.email_hash = UNHEX(SHA2(NEW.email, 256)); -- this creates a binary representation of a sha-256 hashed email column END$$ DELIMITER ; ``` ### Add the before update trigger ``` DELIMITER $$ CREATE TRIGGER `admins_before_update` BEFORE UPDATE ON admins FOR EACH ROW BEGIN SET NEW.email_hash = UNHEX(SHA2(NEW.email, 256)); -- this creates a binary representation of a sha-256 hashed email column END$$ DELIMITER ; ``` Final words ----------- I added trigger example code but I didn't test it. The idea is to be able to add and update emails and have MySQL tell you if there's a duplicate. Some people don't like using triggers. That's why the step with triggers is optional; it is just an example of how to achieve the effect if you prefer that route. You can, of course, increase the number of bytes MySQL will accept for indexing. However, that's not the optimal solution, as you can quickly fill up your memory and basically waste resources. Down the line, you might exceed the newly set limit.
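If you prefer to compute the hash in application code rather than with triggers, the Python equivalent of `UNHEX(SHA2(email, 256))` is straightforward. A small sketch (the function name and example address are mine, not from the original answer):

```
import hashlib

def email_hash(email):
    # 32-byte binary digest, matching MySQL's UNHEX(SHA2(email, 256))
    return hashlib.sha256(email.encode("utf-8")).digest()

# value to store in the BINARY(32) email_hash column
print(email_hash("user@example.com").hex())
```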
You can change `innodb_large_prefix` in your config file to ON. That will allow index key prefixes up to 3072 bytes, as the MySQL [doc](https://dev.mysql.com/doc/refman/5.6/en/innodb-restrictions.html) says. ``` [mysqld] innodb_large_prefix = 1 ```
63,320,723
I'm a beginner in Python and I'm currently working on a problem on Codeforces called Lecture Sleep. The question gives you 3 lines of input: ``` 6 3 1 3 5 2 5 4 1 1 0 1 0 0 ``` I'm trying to figure out how to link the second array of numbers `(1 3 5 2 5 4)` to the 3rd array of numbers `(1 1 0 1 0 0)`, so that `1 = 1, 3 = 1, 5 = 0, 2 = 1, 5 = 0, 4 = 0`.
2020/08/08
[ "https://Stackoverflow.com/questions/63320723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14072979/" ]
It might not be the solution for you, but I'll tell you what we do. 1. Prefix the package names and use namespaces (e.g. `company.product.tool`). 2. When we install our packages (including their in-house dependencies), we use a `requirements.txt` file that includes our PyPI URL. We run everything in containers and install all public dependencies in them when we are building the images.
Your company could redirect all requests to PyPI to a service you control first (perhaps just via your build servers' `hosts` file(s)). This would potentially allow you to * prefer/override arbitrary packages with local ones * detect such cases * cache common/large upstream packages locally * reject suspect/unknown versions/names of [upstream packages](https://en.wikipedia.org/wiki/Supply_chain_attack)
63,320,723
I'm a beginner in Python and I'm currently working on a problem on Codeforces called Lecture Sleep. The question gives you 3 lines of input: ``` 6 3 1 3 5 2 5 4 1 1 0 1 0 0 ``` I'm trying to figure out how to link the second array of numbers `(1 3 5 2 5 4)` to the 3rd array of numbers `(1 1 0 1 0 0)`, so that `1 = 1, 3 = 1, 5 = 0, 2 = 1, 5 = 0, 4 = 0`.
2020/08/08
[ "https://Stackoverflow.com/questions/63320723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14072979/" ]
It might not be the solution for you, but I'll tell you what we do. 1. Prefix the package names and use namespaces (e.g. `company.product.tool`). 2. When we install our packages (including their in-house dependencies), we use a `requirements.txt` file that includes our PyPI URL. We run everything in containers and install all public dependencies in them when we are building the images.
We use VCS for this. I see you've explicitly ruled that out, but have you considered using branches to mark your latest stable builds in VCS? If you aren't interested in the latest version of master or the dev branch, but you are running test/QA against commits, then I would configure your test/QA suite to merge into a branch named something like "stable" or "pypi-stable" and then your requirements files look like this: ``` pip install git+https://gitlab.com/yourorg/yourpackage.git@pypi-stable ``` The same configuration will work for setup.py requirements blocks (which allows for chained internal dependencies). Am I missing something?
63,320,723
I'm a beginner in Python and I'm currently working on a problem on Codeforces called Lecture Sleep. The question gives you 3 lines of input: ``` 6 3 1 3 5 2 5 4 1 1 0 1 0 0 ``` I'm trying to figure out how to link the second array of numbers `(1 3 5 2 5 4)` to the 3rd array of numbers `(1 1 0 1 0 0)`, so that `1 = 1, 3 = 1, 5 = 0, 2 = 1, 5 = 0, 4 = 0`.
2020/08/08
[ "https://Stackoverflow.com/questions/63320723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14072979/" ]
It might not be the solution for you, but I'll tell you what we do. 1. Prefix the package names and use namespaces (e.g. `company.product.tool`). 2. When we install our packages (including their in-house dependencies), we use a `requirements.txt` file that includes our PyPI URL. We run everything in containers and install all public dependencies in them when we are building the images.
You could perhaps get the behavior you are looking for from a `requirements.txt` and two `pip` calls: ``` cat requirements.txt | xargs -n 1 pip install -i <your-s3pipy> pip install -r requirements.txt ``` The first one tries to install what it can from your local repository and ignores a package if it fails. The second call tries to install everything that failed before from PyPI. This works because `--upgrade-strategy only-if-needed` is the default (as of pip 10.X I believe, don't quote me on that). If you are using an old pip you may have to specify this manually. --- A limitation of this approach is if you expect/request a local package, but it doesn't exist and a package with the same name exists on PyPI. In this case, you will get that package instead. Not sure if that is a concern.
63,320,723
I'm a beginner in Python and I'm currently working on a problem on Codeforces called Lecture Sleep. The question gives you 3 lines of input: ``` 6 3 1 3 5 2 5 4 1 1 0 1 0 0 ``` I'm trying to figure out how to link the second array of numbers `(1 3 5 2 5 4)` to the 3rd array of numbers `(1 1 0 1 0 0)`, so that `1 = 1, 3 = 1, 5 = 0, 2 = 1, 5 = 0, 4 = 0`.
2020/08/08
[ "https://Stackoverflow.com/questions/63320723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14072979/" ]
It might not be the solution for you, but I'll tell you what we do. 1. Prefix the package names and use namespaces (e.g. `company.product.tool`). 2. When we install our packages (including their in-house dependencies), we use a `requirements.txt` file that includes our PyPI URL. We run everything in containers and install all public dependencies in them when we are building the images.
The comment from @a\_guest on my first answer got me thinking, and the "problem" is that pip doesn't consider where the package originated when it sorts through candidates to satisfy requirements. So here is a possible way to change this: Monkey-patch pip and introduce a preference over indexes. ``` from __future__ import absolute_import import os import sys import pip from pip._internal.index.package_finder import CandidateEvaluator class MyCandidateEvaluator(CandidateEvaluator): def _sort_key(self, candidate): (has_allowed_hash, yank_value, binary_preference, candidate.version, build_tag, pri) = super()._sort_key(candidate) priority_index = "localhost" #use your s3pipy here if priority_index in candidate.link.comes_from: priority = 1 else: priority = 0 return (has_allowed_hash, yank_value, binary_preference, priority, candidate.version, build_tag, pri) pip._internal.index.package_finder.CandidateEvaluator = MyCandidateEvaluator # Remove '' and current working directory from the first entry # of sys.path, if present to avoid using current directory # in pip commands check, freeze, install, list and show, # when invoked as python -m pip <command> if sys.path[0] in ('', os.getcwd()): sys.path.pop(0) # If we are running from a wheel, add the wheel to sys.path # This allows the usage python pip-*.whl/pip install pip-*.whl if __package__ == '': # __file__ is pip-*.whl/pip/__main__.py # first dirname call strips of '/__main__.py', second strips off '/pip' # Resulting path is the name of the wheel itself # Add that to sys.path so we can import pip path = os.path.dirname(os.path.dirname(__file__)) sys.path.insert(0, path) from pip._internal.cli.main import main as _main # isort:skip # noqa if __name__ == '__main__': sys.exit(_main()) ``` setup a `requirements.txt` ``` numpy sampleproject ``` and call above script using the same parameters as you'd use for `pip`. ``` >python mypip.py install --no-cache --extra-index http://localhost:8000 -r requirements.txt Looking in indexes: https://pypi.org/simple, http://localhost:8000 Collecting numpy Downloading numpy-1.19.1-cp37-cp37m-win_amd64.whl (12.9 MB) |████████████████████████████████| 12.9 MB 6.8 MB/s Collecting sampleproject Downloading http://localhost:8000/sampleproject/sampleproject-0.5.0-py2.py3-none-any.whl (4.3 kB) Collecting peppercorn Downloading peppercorn-0.6-py3-none-any.whl (4.8 kB) Installing collected packages: numpy, peppercorn, sampleproject Successfully installed numpy-1.19.1 peppercorn-0.6 sampleproject-0.5.0 ``` Compare this to the default pip call ``` >pip install --no-cache --extra-index http://localhost:8000 -r requirements.txt Looking in indexes: https://pypi.org/simple, http://localhost:8000 Collecting numpy Downloading numpy-1.19.1-cp37-cp37m-win_amd64.whl (12.9 MB) |████████████████████████████████| 12.9 MB 6.4 MB/s Collecting sampleproject Downloading sampleproject-2.0.0-py3-none-any.whl (4.2 kB) Collecting peppercorn Downloading peppercorn-0.6-py3-none-any.whl (4.8 kB) Installing collected packages: numpy, peppercorn, sampleproject Successfully installed numpy-1.19.1 peppercorn-0.6 sampleproject-2.0.0 ``` And notice that `mypip` prefers a package if it can be retrieved from `localhost`; ofc you can customize this behavior further.
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Any time code can execute between when you check something and when you act on it, you will have a race condition. One way to avoid this (and the usual way in Python) is to just try and then handle the exception ``` while True: mydir = next_dir_name() try: os.makedirs(mydir) break except OSError, e: if e.errno != errno.EEXIST: raise # time.sleep might help here pass ``` If you have a lot of threads trying to make a predictable series of directories this will still raise a lot of exceptions, but you will get there in the end. Better to just have one thread creating the dirs in that case
Catch the exception and, if the errno is 17, ignore it. That's the only thing you can do if there's a race condition between the `isdir` and `makedirs` calls. However, it could also be possible that a *file* with the same name exists - in that case `os.path.exists` would return `True` but `os.path.isdir` returns false.
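Putting both caveats together (the race condition and the possibility of a plain file with the same name), a minimal sketch of the pattern described above could look like this:

```
import errno
import os

def ensure_dir(path):
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
        if not os.path.isdir(path):
            # something with this name exists, but it is not a directory
            raise
```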
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As of Python `>=3.2`, `os.makedirs()` can take a third optional argument `exist_ok`: ``` os.makedirs(mydir, exist_ok=True) ```
Catch the exception and, if the errno is 17, ignore it. That's the only thing you can do if there's a race condition between the `isdir` and `makedirs` calls. However, it could also be possible that a *file* with the same name exists - in that case `os.path.exists` would return `True` but `os.path.isdir` returns false.
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Catch the exception and, if the errno is 17, ignore it. That's the only thing you can do if there's a race condition between the `isdir` and `makedirs` calls. However, it could also be possible that a *file* with the same name exists - in that case `os.path.exists` would return `True` but `os.path.isdir` returns false.
I had a similar issue and here is what I did ``` try: if not os.path.exists(os.path.dirname(mydir)): os.makedirs(os.path.dirname(mydir)) except OSError as err: print(err) ``` **Description:** Creating the directory without first checking whether it already exists is what throws the **[Errno 17] File exists** error; using only the ***directory name*** from the **mydir** value being passed tells you nothing about whether that directory is already there. What was missing is a check that the directory already exists, which can be done with **os.path.exists()**, passing it the respective directory path.
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Catch the exception and, if the errno is 17, ignore it. That's the only thing you can do if there's a race condition between the `isdir` and `makedirs` calls. However, it could also be possible that a *file* with the same name exists - in that case `os.path.exists` would return `True` but `os.path.isdir` returns false.
To ignore the "directory or file already exists" error, you can try this: ``` try: os.makedirs(mydir) except OSError, e: if e.errno != 17: print("Error:", e) ```
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Any time code can execute between when you check something and when you act on it, you will have a race condition. One way to avoid this (and the usual way in Python) is to just try and then handle the exception ``` while True: mydir = next_dir_name() try: os.makedirs(mydir) break except OSError, e: if e.errno != errno.EEXIST: raise # time.sleep might help here pass ``` If you have a lot of threads trying to make a predictable series of directories this will still raise a lot of exceptions, but you will get there in the end. Better to just have one thread creating the dirs in that case
I had a similar issue and here is what I did ``` try: if not os.path.exists(os.path.dirname(mydir)): os.makedirs(os.path.dirname(mydir)) except OSError as err: print(err) ``` **Description:** Creating the directory without first checking whether it already exists is what throws the **[Errno 17] File exists** error; using only the ***directory name*** from the **mydir** value being passed tells you nothing about whether that directory is already there. What was missing is a check that the directory already exists, which can be done with **os.path.exists()**, passing it the respective directory path.
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Any time code can execute between when you check something and when you act on it, you will have a race condition. One way to avoid this (and the usual way in Python) is to just try and then handle the exception ``` while True: mydir = next_dir_name() try: os.makedirs(mydir) break except OSError, e: if e.errno != errno.EEXIST: raise # time.sleep might help here pass ``` If you have a lot of threads trying to make a predictable series of directories this will still raise a lot of exceptions, but you will get there in the end. Better to just have one thread creating the dirs in that case
To ignore the "directory or file already exists" error, you can try this: ``` try: os.makedirs(mydir) except OSError, e: if e.errno != 17: print("Error:", e) ```
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As of Python `>=3.2`, `os.makedirs()` can take a third optional argument `exist_ok`: ``` os.makedirs(mydir, exist_ok=True) ```
I had a similar issue and here is what I did ``` try: if not os.path.exists(os.path.dirname(mydir)): os.makedirs(os.path.dirname(mydir)) except OSError as err: print(err) ``` **Description:** Creating the directory without first checking whether it already exists is what throws the **[Errno 17] File exists** error; using only the ***directory name*** from the **mydir** value being passed tells you nothing about whether that directory is already there. What was missing is a check that the directory already exists, which can be done with **os.path.exists()**, passing it the respective directory path.
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As of Python `>=3.2`, `os.makedirs()` can take a third optional argument `exist_ok`: ``` os.makedirs(mydir, exist_ok=True) ```
To ignore the "directory or file already exists" error, you can try this: ``` try: os.makedirs(mydir) except OSError, e: if e.errno != 17: print("Error:", e) ```
12,468,022
I have several threads running in parallel from Python on a cluster system. Each python thread outputs to a directory `mydir`. Each script, before outputting checks if *mydir* exists and if not creates it: ``` if not os.path.isdir(mydir): os.makedirs(mydir) ``` but this yields the error: ``` os.makedirs(self.log_dir) File "/usr/lib/python2.6/os.py", line 157, in makedirs mkdir(name,mode) OSError: [Errno 17] File exists ``` I suspect it might be due to a race condition, where one job creates the *dir* before the other gets to it. Is this possible? If so, how can this error be avoided? I'm not sure it's a race condition so was wondering if other issues in Python can cause this odd error.
2012/09/17
[ "https://Stackoverflow.com/questions/12468022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I had a similar issue and here is what I did ``` try: if not os.path.exists(os.path.dirname(mydir)): os.makedirs(os.path.dirname(mydir)) except OSError as err: print(err) ``` **Description:** Creating the directory without first checking whether it already exists is what throws the **[Errno 17] File exists** error; using only the ***directory name*** from the **mydir** value being passed tells you nothing about whether that directory is already there. What was missing is a check that the directory already exists, which can be done with **os.path.exists()**, passing it the respective directory path.
To ignore the "directory or file already exists" error, you can try this: ``` try: os.makedirs(mydir) except OSError, e: if e.errno != 17: print("Error:", e) ```
63,783,587
My goal is to install a package to a specific directory on my machine so I can package it up to be used with AWS Lambda. Here is what I have tried: `pip install snowflake-connector-python -t .` `pip install --system --target=C:\Users\path2folder --install-option=--install-scripts=C:\Users\path2folder --upgrade snowflake-connector-python` Both of these options have returned the following error message: `ERROR: Can not combine '--user' and '--target'` In order for the AWS Lambda function to work, I need to have my dependencies installed in a specific directory to create a .zip file for deployment. I have searched through Google and StackOverflow, but have not seen a thread that has answered this issue. Update: This does not seem to be a problem on Mac. The issue described is on Windows 10.
2020/09/07
[ "https://Stackoverflow.com/questions/63783587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12655086/" ]
We encountered the same issue when running `pip install --target ./py_pkg -r requirements.txt --upgrade` with the Microsoft Store version of Python 3.9. Adding `--no-user` to the end of it seems to solve the issue. Maybe you can try that in your command and let us know if this solution works? `pip install --target ./py_pkg -r requirements.txt --upgrade --no-user`
We had the same issue just in a Python course: The error comes up if Python is installed as an app from the Microsoft app store. In our case it was resolved after re-installing Python by downloading and using the installation package directly from the Python website.
63,783,587
My goal is to install a package to a specific directory on my machine so I can package it up to be used with AWS Lambda. Here is what I have tried: `pip install snowflake-connector-python -t .` `pip install --system --target=C:\Users\path2folder --install-option=--install-scripts=C:\Users\path2folder --upgrade snowflake-connector-python` Both of these options have returned the following error message: `ERROR: Can not combine '--user' and '--target'` In order for the AWS Lambda function to work, I need to have my dependencies installed in a specific directory to create a .zip file for deployment. I have searched through Google and StackOverflow, but have not seen a thread that has answered this issue. Update: This does not seem to be a problem on Mac. The issue described is on Windows 10.
2020/09/07
[ "https://Stackoverflow.com/questions/63783587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12655086/" ]
We had the same issue just in a Python course: The error comes up if Python is installed as an app from the Microsoft app store. In our case it was resolved after re-installing Python by downloading and using the installation package directly from the Python website.
I got a similar error recently. Adding my solution so that it might help someone facing the error for the same reason. I was facing an issue where all my pip-installed packages were going to an older Python brew installation folder. As part of debugging, I was trying to install the `awscli-local` package to the user site-packages using: `pip install --user awscli-local` Then I got: `ERROR: cannot combine --user and --target` In my case, it was due to changes in the pip config I had set some time back for some other reason. I had set the 'target' config globally; removing it cleared this error as well as the actual issue I was debugging. --- If the solutions given above don't resolve your issue, check the following: try the command `pip config edit --editor <your_text_editor>` For me: `pip config edit --editor sublime` This will open the current config file, where you can check whether there is any conflicting configuration, like the 'target' set in my case.
63,783,587
My goal is to install a package to a specific directory on my machine so I can package it up to be used with AWS Lambda. Here is what I have tried: `pip install snowflake-connector-python -t .` `pip install --system --target=C:\Users\path2folder --install-option=--install-scripts=C:\Users\path2folder --upgrade snowflake-connector-python` Both of these options have returned the following error message: `ERROR: Can not combine '--user' and '--target'` In order for the AWS Lambda function to work, I need to have my dependencies installed in a specific directory to create a .zip file for deployment. I have searched through Google and StackOverflow, but have not seen a thread that has answered this issue. Update: This does not seem to be a problem on Mac. The issue described is on Windows 10.
2020/09/07
[ "https://Stackoverflow.com/questions/63783587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12655086/" ]
We encountered the same issue when running `pip install --target ./py_pkg -r requirements.txt --upgrade` with the Microsoft Store version of Python 3.9. Adding `--no-user` to the end of it seems to solve the issue. Maybe you can try that in your command and let us know if this solution works? `pip install --target ./py_pkg -r requirements.txt --upgrade --no-user`
I got a similar error recently. Adding my solution so that it might help someone facing the error for the same reason. I was facing an issue where all my pip-installed packages were going to an older Python brew installation folder. As part of debugging, I was trying to install the `awscli-local` package to the user site-packages using: `pip install --user awscli-local` Then I got: `ERROR: cannot combine --user and --target` In my case, it was due to changes in the pip config I had set some time back for some other reason. I had set the 'target' config globally; removing it cleared this error as well as the actual issue I was debugging. --- If the solutions given above don't resolve your issue, check the following: try the command `pip config edit --editor <your_text_editor>` For me: `pip config edit --editor sublime` This will open the current config file, where you can check whether there is any conflicting configuration, like the 'target' set in my case.
56,578,199
I am trying to save the output of an AWS CLI command into a Python variable (a list). The trick is that the following code returns the result I want but doesn't save it into the variable, which ends up holding something else entirely. ``` import os bashCommand = 'aws s3api list-buckets --query "Buckets[].Name"' f = [os.system(bashCommand)] print(f) ``` output: ``` [ "bucket1", "bucket2", "bucket3" ] [0] ``` desired output: ``` [ "bucket1", "bucket2", "bucket3" ] ("bucket1", "bucket2", "bucket3") ```
2019/06/13
[ "https://Stackoverflow.com/questions/56578199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10941134/" ]
If you are using Python and you wish to list buckets, then it would be better to use the AWS SDK for Python, which is `boto3`: ``` import boto3 s3 = boto3.resource('s3') buckets = [bucket.name for bucket in s3.buckets.all()] ``` See: [S3 — Boto 3 Docs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html) This is much more sensible than calling out to the AWS CLI (which itself uses boto!).
I use this command to create a Python list of all buckets: ``` import subprocess bucket_list = eval(subprocess.check_output('aws s3api list-buckets --query "Buckets[].Name"').translate(None, '\r\n ')) ```
56,578,199
I am trying to save the output of an AWS CLI command into a Python variable (a list). The trick is that the following code returns the result I want but doesn't save it into the variable, which ends up holding something else entirely. ``` import os bashCommand = 'aws s3api list-buckets --query "Buckets[].Name"' f = [os.system(bashCommand)] print(f) ``` output: ``` [ "bucket1", "bucket2", "bucket3" ] [0] ``` desired output: ``` [ "bucket1", "bucket2", "bucket3" ] ("bucket1", "bucket2", "bucket3") ```
2019/06/13
[ "https://Stackoverflow.com/questions/56578199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10941134/" ]
If you are using Python and you wish to list buckets, then it would be better to use the AWS SDK for Python, which is `boto3`: ``` import boto3 s3 = boto3.resource('s3') buckets = [bucket.name for bucket in s3.buckets.all()] ``` See: [S3 — Boto 3 Docs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html) This is much more sensible than calling out to the AWS CLI (which itself uses boto!).
You really don't need anything fancy, all you have to do is import subprocess and json and use them :) This was tested using Python 3.6 on Linux ``` import subprocess import json output = subprocess.run(["aws", "--region=us-west-2", "s3api", "list-buckets", "--query", "Buckets[].Name"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) output_utf8 = output.stdout.decode('utf-8') output_utf8_json = json.loads(output_utf8) print(output_utf8_json) for key in output_utf8_json: print(key) ```
56,578,199
I am trying to save the output of an AWS CLI command into a Python variable (a list). The trick is that the following code returns the result I want but doesn't save it into the variable, which ends up holding something else entirely. ``` import os bashCommand = 'aws s3api list-buckets --query "Buckets[].Name"' f = [os.system(bashCommand)] print(f) ``` output: ``` [ "bucket1", "bucket2", "bucket3" ] [0] ``` desired output: ``` [ "bucket1", "bucket2", "bucket3" ] ("bucket1", "bucket2", "bucket3") ```
2019/06/13
[ "https://Stackoverflow.com/questions/56578199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10941134/" ]
You really don't need anything fancy, all you have to do is import subprocess and json and use them :) This was tested using Python 3.6 on Linux ``` import subprocess import json output = subprocess.run(["aws", "--region=us-west-2", "s3api", "list-buckets", "--query", "Buckets[].Name"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) output_utf8 = output.stdout.decode('utf-8') output_utf8_json = json.loads(output_utf8) print(output_utf8_json) for key in output_utf8_json: print(key) ```
I use this command to create a Python list of all buckets: ``` import subprocess bucket_list = eval(subprocess.check_output('aws s3api list-buckets --query "Buckets[].Name"').translate(None, '\r\n ')) ```
63,993,912
python version 3.8.3 ``` import telegram # imported methods from telegram.ext import Updater, CommandHandler import requests from telegram import ReplyKeyboardMarkup, KeyboardButton from telegram.ext.messagehandler import MessageHandler import json # below is the function that defines the buttons to be returned def start(bot, update): button1 = KeyboardButton("hello") button2 = KeyboardButton("by") keyboard = [button1, button2] reply_markup = telegram.ReplyKeyboardMarkup(keyboard) chat_id = update.message.chat_id bot.send_message(chat_id=chat_id, text='please choose USD or EUR', reply_markup = reply_markup) # it works and returns text if the reply_markup parameter is disabled. def main(): updater = Updater('my token') dp = updater.dispatcher dp.add_handler(CommandHandler('start', start)) updater.start_polling() updater.idle() main() ``` The buttons are not working. Please help me check what could be the reason for this issue.
2020/09/21
[ "https://Stackoverflow.com/questions/63993912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14315167/" ]
After a great effort, I have found a solution for this. ``` class LinearProgressWithTextWidget extends StatelessWidget { final Color color; final double progress; LinearProgressWithTextWidget({Key key,@required this.color, @required this.progress}) : super(key: key); @override Widget build(BuildContext context) { double totalWidth = ((MediaQuery.of(context).size.width/2)-padding); return Container( child: Column( children: [ Transform.translate( offset: Offset((totalWidth * 2 * progress) - totalWidth, -5), child: Container( padding: EdgeInsets.only(left: 4, right: 4, top: 4, bottom: 4), decoration: BoxDecoration( borderRadius: BorderRadius.circular(2.0), color: color, ), child: Text( "${progress * 100}%", style: TextStyle( fontSize: 12.0, fontFamily: 'Kanit-Medium', color: Colors.white, height: 0.8 ), ), ), ), LinearPercentIndicator( padding: EdgeInsets.zero, lineHeight: 15, backgroundColor: HexColor("#F8F8F8"), percent: progress, progressColor: color, ), ], ) ); } } ```
I added the loading indicator inside of a stack and wrapped the whole widget with a `LayoutBuilder`, which will give you the BoxConstraints of the current widget. You can use that to calculate the position of the percent indicator and place a widget (text) above it. [![Progress Indicator with Percent](https://i.stack.imgur.com/lUP37.png)](https://i.stack.imgur.com/lUP37.png) ```dart class MyProgressIndicator extends StatelessWidget { const MyProgressIndicator({ Key key, }) : super(key: key); @override Widget build(BuildContext context) { double percent = .5; return LayoutBuilder( builder: (context, constraints) { return Container( child: Stack( fit: StackFit.expand, overflow: Overflow.visible, children: [ Positioned( top: 0, // you can adjust this through negatives to raise your child widget left: (constraints.maxWidth * percent) - (50 / 2), // child width / 2 (this is to get the center of the widget), child: Center( child: Container( width: 50, alignment: Alignment.topCenter, child: Text('${percent * 100}%'), ), ), ), Positioned( top: 0, right: 0, left: 0, bottom: 0, child: LinearPercentIndicator( padding: EdgeInsets.zero, lineHeight: 15, width: constraints.maxWidth, backgroundColor: Colors.black, percent: percent, progressColor: Colors.yellow, ), ), ], ), ); }, ); } } ```
40,840,480
I would like to install my own package in my local dir (I do not have root privileges). How can I install python3-dev locally? I am using Ubuntu 16.04.
2016/11/28
[ "https://Stackoverflow.com/questions/40840480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5651936/" ]
You have a widget for this: ``` {{ form_errors(form) }} ```
Accessing errors from **TWIG** Displays all errors in template ``` {{ form_errors(form) }} ``` Access error for specific field ``` {{ form_errors(form.username) }} ``` Read More: [How to get error message of each field from form object in symfony2?](https://stackoverflow.com/a/40712685/2689199)
48,965,221
I have a dataset which has the following ``` customer products Sales 1 a 10 1 a 10 2 b 20 3 c 30 ``` How can I reshape it like the table below in Python and pandas? I've tried with the pivot tools, but since I have duplicated CUSTOMER ID it's not working... ``` Products customerID a b c 1 10 1 10 2 20 3 30 {' update': {209: 'Originator', 211: 'Originator', 212: 'Originator', 213: 'Originator', 214: 'Originator'}, 'CUSTOMER ID': {209: 1000368, 211: 1000368, 212: 1000968, 213: 1000968, 214: 1000968}, 'NET SALES VALUE SANOFI':{209: 426881.0, 211: 332103.0, 212: 882666.0, 213: 882666.0, 214: 294222.0}, 'PRODUCT FAMILY': {209: 'APROVEL', 211: 'APROVEL', 212: 'APROVEL', 213: 'APROVEL', 214: 'APROVEL'}, 'CHANNEL DEFINITION': {209: 'PHARMACY', 211: 'PHARMACY', 212: 'PHARMACY', 213: 'PHARMACY', 214: 'PHARMACY'}, 'index': {209: 209, 211: 211, 212: 212, 213: 213, 214: 214} CUSTOMER ID 1228675 non-null int64 DISTRIBUTOR ID 1228675 non-null float64 PRODUCT FAMILY 1228675 non-null object GROSS SALES QUANTITY 1228675 non-null int64 GROSS SALES VALUE 1228675 non-null int64 NET SALES VALUE 1228675 non-null int64 DISCOUNT VALUES 1228675 non-null int64 CHANNEL DEFINITION 1228675 non-null object ``` What I also tried: `ONLY_PHARMA.pivot_table(values = "NET SALES VALUE ", index = ["CUSTOMER ID"], columns = "PRODUCT FAMILY").reset_index()` What I'm getting now is a mix of float and int... Why? ``` ID A B C 1000167 NaN 2.380122e+05 244767.466667 ``` or I'm getting: ``` ValueError: negative dimensions are not allowed ``` OR I've done this, which also returns floats and ints: ``` pvt = pd.pivot_table(ONLY_PHARMA.reset_index(), index=['CUSTOMER ID'], columns='PRODUCT FAMILY', values='NET SALES VALUE' , fill_value='') \ .reset_index() ```
2018/02/24
[ "https://Stackoverflow.com/questions/48965221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9406236/" ]
You can use [`cumcount`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html) with [`set_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html) + [`unstack`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html) for reshape: ``` g = df.groupby(['customer', 'products']).cumcount() df = ( df.set_index([g, 'customer', 'products'])['Sales'] .unstack().sort_index(level=1) .reset_index(level=0, drop=True) ) print (df) products a b c customer 1 10.0 NaN NaN 1 10.0 NaN NaN 2 NaN 20.0 NaN 3 NaN NaN 30.0 ``` Notice: If duplicated values, maybe need aggregation, check [how to pivot a dataframe](https://stackoverflow.com/questions/47152691/how-to-pivot-a-dataframe)
Your question is unclear. In case of duplicate key, we usually aggregate values. Is that what you want ? Try this: ``` df.pivot_table(index='customer', columns='products', values ='Sales', aggfunc='sum') products customer a b c 0 1 20.0 NaN NaN 1 2 NaN 20.0 NaN 2 3 NaN NaN 30.0 ```
48,965,221
I have a dataset which has the following ``` customer products Sales 1 a 10 1 a 10 2 b 20 3 c 30 ``` How can I reshape it like the table below in Python and pandas? I've tried with the pivot tools, but since I have duplicated CUSTOMER ID it's not working... ``` Products customerID a b c 1 10 1 10 2 20 3 30 {' update': {209: 'Originator', 211: 'Originator', 212: 'Originator', 213: 'Originator', 214: 'Originator'}, 'CUSTOMER ID': {209: 1000368, 211: 1000368, 212: 1000968, 213: 1000968, 214: 1000968}, 'NET SALES VALUE SANOFI':{209: 426881.0, 211: 332103.0, 212: 882666.0, 213: 882666.0, 214: 294222.0}, 'PRODUCT FAMILY': {209: 'APROVEL', 211: 'APROVEL', 212: 'APROVEL', 213: 'APROVEL', 214: 'APROVEL'}, 'CHANNEL DEFINITION': {209: 'PHARMACY', 211: 'PHARMACY', 212: 'PHARMACY', 213: 'PHARMACY', 214: 'PHARMACY'}, 'index': {209: 209, 211: 211, 212: 212, 213: 213, 214: 214} CUSTOMER ID 1228675 non-null int64 DISTRIBUTOR ID 1228675 non-null float64 PRODUCT FAMILY 1228675 non-null object GROSS SALES QUANTITY 1228675 non-null int64 GROSS SALES VALUE 1228675 non-null int64 NET SALES VALUE 1228675 non-null int64 DISCOUNT VALUES 1228675 non-null int64 CHANNEL DEFINITION 1228675 non-null object ``` What I also tried: `ONLY_PHARMA.pivot_table(values = "NET SALES VALUE ", index = ["CUSTOMER ID"], columns = "PRODUCT FAMILY").reset_index()` What I'm getting now is a mix of float and int... Why? ``` ID A B C 1000167 NaN 2.380122e+05 244767.466667 ``` or I'm getting: ``` ValueError: negative dimensions are not allowed ``` OR I've done this, which also returns floats and ints: ``` pvt = pd.pivot_table(ONLY_PHARMA.reset_index(), index=['CUSTOMER ID'], columns='PRODUCT FAMILY', values='NET SALES VALUE' , fill_value='') \ .reset_index() ```
2018/02/24
[ "https://Stackoverflow.com/questions/48965221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9406236/" ]
You can use [`cumcount`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html) with [`set_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html) + [`unstack`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html) to reshape:

```
g = df.groupby(['customer', 'products']).cumcount()
df = (
    df.set_index([g, 'customer', 'products'])['Sales']
      .unstack().sort_index(level=1)
      .reset_index(level=0, drop=True)
)
print (df)
products     a     b     c
customer
1         10.0   NaN   NaN
1         10.0   NaN   NaN
2          NaN  20.0   NaN
3          NaN   NaN  30.0
```

Notice: if there are duplicated values you may need aggregation instead; see [how to pivot a dataframe](https://stackoverflow.com/questions/47152691/how-to-pivot-a-dataframe).
Another method using [`str.get_dummies`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html). ``` pd.concat([df, df.products.str.get_dummies().multiply(df["Sales"], axis="index")], axis=1) customer products Sales a b c 0 1 a 10 10 0 0 1 1 a 10 10 0 0 2 2 b 20 0 20 0 3 3 c 30 0 0 30 ``` `df.products.str.get_dummies()` creates dummy variables as follows ``` a b c 0 1 0 0 1 1 0 0 2 0 1 0 3 0 0 1 ``` We then need to multiply this dummy variable table with `df["Sales"]`. This is achieved by `df.products.str.get_dummies().multiply(df["Sales"], axis="index")` (See reference for more information.) ``` a b c 0 10 0 0 1 10 0 0 2 0 20 0 3 0 0 30 ``` Reference [how to multiply multiple columns by a column in Pandas](https://stackoverflow.com/questions/22702760/how-to-multiply-multiple-columns-by-a-column-in-pandas) Note: to replace `0` with `np.nan`, you need to add `.replace(0, np.nan)` like `pd.concat([df, df.products.str.get_dummies().replace(0, np.nan).mul(df["Sales"], axis="index")], axis=1)`
48,965,221
I'm having a dataset which as the following ``` customer products Sales 1 a 10 1 a 10 2 b 20 3 c 30 ``` How can I reshape and to do that in python and pandas? I've tried with the pivot tools but since I have duplicated CUSTOMER ID it's not working... ``` Products customerID a b c 1 10 1 10 2 20 3 30 {' update': {209: 'Originator', 211: 'Originator', 212: 'Originator', 213: 'Originator', 214: 'Originator'}, 'CUSTOMER ID': {209: 1000368, 211: 1000368, 212: 1000968, 213: 1000968, 214: 1000968}, 'NET SALES VALUE SANOFI':{209: 426881.0, 211: 332103.0, 212: 882666.0, 213: 882666.0, 214: 294222.0}, 'PRODUCT FAMILY': {209: 'APROVEL', 211: 'APROVEL', 212: 'APROVEL', 213: 'APROVEL', 214: 'APROVEL'}, 'CHANNEL DEFINITION': {209: 'PHARMACY', 211: 'PHARMACY', 212: 'PHARMACY', 213: 'PHARMACY', 214: 'PHARMACY'}, 'index': {209: 209, 211: 211, 212: 212, 213: 213, 214: 214} CUSTOMER ID 1228675 non-null int64 DISTRIBUTOR ID 1228675 non-null float64 PRODUCT FAMILY 1228675 non-null object GROSS SALES QUANTITY 1228675 non-null int64 GROSS SALES VALUE 1228675 non-null int64 NET SALES VALUE 1228675 non-null int64 DISCOUNT VALUES 1228675 non-null int64 CHANNEL DEFINITION 1228675 non-null object ``` what i tried also : `ONLY_PHARMA.pivot_table(values = "NET SALES VALUE ", index = ["CUSTOMER ID"], columns = "PRODUCT FAMILY").reset_index()` what im getting now a mix of float and Int....?? Why? ``` ID A B C 1000167 NaN 2.380122e+05 244767.466667 or im having : ValueError: negative dimensions are not allowed ``` OR I've done which also return me floats and int: ``` pvt = pd.pivot_table(ONLY_PHARMA.reset_index(), index=['CUSTOMER ID'], columns='PRODUCT FAMILY', values='NET SALES VALUE' , fill_value='') \ .reset_index() ```
2018/02/24
[ "https://Stackoverflow.com/questions/48965221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9406236/" ]
Another method using [`str.get_dummies`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html). ``` pd.concat([df, df.products.str.get_dummies().multiply(df["Sales"], axis="index")], axis=1) customer products Sales a b c 0 1 a 10 10 0 0 1 1 a 10 10 0 0 2 2 b 20 0 20 0 3 3 c 30 0 0 30 ``` `df.products.str.get_dummies()` creates dummy variables as follows ``` a b c 0 1 0 0 1 1 0 0 2 0 1 0 3 0 0 1 ``` We then need to multiply this dummy variable table with `df["Sales"]`. This is achieved by `df.products.str.get_dummies().multiply(df["Sales"], axis="index")` (See reference for more information.) ``` a b c 0 10 0 0 1 10 0 0 2 0 20 0 3 0 0 30 ``` Reference [how to multiply multiple columns by a column in Pandas](https://stackoverflow.com/questions/22702760/how-to-multiply-multiple-columns-by-a-column-in-pandas) Note: to replace `0` with `np.nan`, you need to add `.replace(0, np.nan)` like `pd.concat([df, df.products.str.get_dummies().replace(0, np.nan).mul(df["Sales"], axis="index")], axis=1)`
Your question is unclear. In case of duplicate key, we usually aggregate values. Is that what you want ? Try this: ``` df.pivot_table(index='customer', columns='products', values ='Sales', aggfunc='sum') products customer a b c 0 1 20.0 NaN NaN 1 2 NaN 20.0 NaN 2 3 NaN NaN 30.0 ```
48,965,221
I'm having a dataset which as the following ``` customer products Sales 1 a 10 1 a 10 2 b 20 3 c 30 ``` How can I reshape and to do that in python and pandas? I've tried with the pivot tools but since I have duplicated CUSTOMER ID it's not working... ``` Products customerID a b c 1 10 1 10 2 20 3 30 {' update': {209: 'Originator', 211: 'Originator', 212: 'Originator', 213: 'Originator', 214: 'Originator'}, 'CUSTOMER ID': {209: 1000368, 211: 1000368, 212: 1000968, 213: 1000968, 214: 1000968}, 'NET SALES VALUE SANOFI':{209: 426881.0, 211: 332103.0, 212: 882666.0, 213: 882666.0, 214: 294222.0}, 'PRODUCT FAMILY': {209: 'APROVEL', 211: 'APROVEL', 212: 'APROVEL', 213: 'APROVEL', 214: 'APROVEL'}, 'CHANNEL DEFINITION': {209: 'PHARMACY', 211: 'PHARMACY', 212: 'PHARMACY', 213: 'PHARMACY', 214: 'PHARMACY'}, 'index': {209: 209, 211: 211, 212: 212, 213: 213, 214: 214} CUSTOMER ID 1228675 non-null int64 DISTRIBUTOR ID 1228675 non-null float64 PRODUCT FAMILY 1228675 non-null object GROSS SALES QUANTITY 1228675 non-null int64 GROSS SALES VALUE 1228675 non-null int64 NET SALES VALUE 1228675 non-null int64 DISCOUNT VALUES 1228675 non-null int64 CHANNEL DEFINITION 1228675 non-null object ``` what i tried also : `ONLY_PHARMA.pivot_table(values = "NET SALES VALUE ", index = ["CUSTOMER ID"], columns = "PRODUCT FAMILY").reset_index()` what im getting now a mix of float and Int....?? Why? ``` ID A B C 1000167 NaN 2.380122e+05 244767.466667 or im having : ValueError: negative dimensions are not allowed ``` OR I've done which also return me floats and int: ``` pvt = pd.pivot_table(ONLY_PHARMA.reset_index(), index=['CUSTOMER ID'], columns='PRODUCT FAMILY', values='NET SALES VALUE' , fill_value='') \ .reset_index() ```
2018/02/24
[ "https://Stackoverflow.com/questions/48965221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9406236/" ]
Here's a fairly straightforward way, assuming you have a unique index, given your input of:

```
   customer products  Sales
0         1        a     10
1         1        a     10
2         2        b     20
3         3        c     30
```

Pivot it to columnise the products and rejoin to just the customer column of the original frame, e.g.:

```
new_df = df[['customer']].join(df.pivot(columns='products', values='Sales'))
```

This'll give you:

```
   customer     a     b     c
0         1  10.0   NaN   NaN
1         1  10.0   NaN   NaN
2         2   NaN  20.0   NaN
3         3   NaN   NaN  30.0
```

Then sort out your indexing / filling of blank values (see the short sketch below).
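A possible follow-up for that last step, shown only as a sketch: filling the gaps with 0 and using the customer column as the index are assumptions about what the final table should look like, not part of the original answer.

```
import pandas as pd

df = pd.DataFrame({'customer': [1, 1, 2, 3],
                   'products': ['a', 'a', 'b', 'c'],
                   'Sales': [10, 10, 20, 30]})

cleaned = (
    df[['customer']]
    .join(df.pivot(columns='products', values='Sales'))
    .fillna(0)                 # replace the NaN gaps with 0
    .set_index('customer')     # use customer as the row index
)
print(cleaned)
```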
Your question is unclear. In case of duplicate key, we usually aggregate values. Is that what you want ? Try this: ``` df.pivot_table(index='customer', columns='products', values ='Sales', aggfunc='sum') products customer a b c 0 1 20.0 NaN NaN 1 2 NaN 20.0 NaN 2 3 NaN NaN 30.0 ```
48,965,221
I'm having a dataset which as the following ``` customer products Sales 1 a 10 1 a 10 2 b 20 3 c 30 ``` How can I reshape and to do that in python and pandas? I've tried with the pivot tools but since I have duplicated CUSTOMER ID it's not working... ``` Products customerID a b c 1 10 1 10 2 20 3 30 {' update': {209: 'Originator', 211: 'Originator', 212: 'Originator', 213: 'Originator', 214: 'Originator'}, 'CUSTOMER ID': {209: 1000368, 211: 1000368, 212: 1000968, 213: 1000968, 214: 1000968}, 'NET SALES VALUE SANOFI':{209: 426881.0, 211: 332103.0, 212: 882666.0, 213: 882666.0, 214: 294222.0}, 'PRODUCT FAMILY': {209: 'APROVEL', 211: 'APROVEL', 212: 'APROVEL', 213: 'APROVEL', 214: 'APROVEL'}, 'CHANNEL DEFINITION': {209: 'PHARMACY', 211: 'PHARMACY', 212: 'PHARMACY', 213: 'PHARMACY', 214: 'PHARMACY'}, 'index': {209: 209, 211: 211, 212: 212, 213: 213, 214: 214} CUSTOMER ID 1228675 non-null int64 DISTRIBUTOR ID 1228675 non-null float64 PRODUCT FAMILY 1228675 non-null object GROSS SALES QUANTITY 1228675 non-null int64 GROSS SALES VALUE 1228675 non-null int64 NET SALES VALUE 1228675 non-null int64 DISCOUNT VALUES 1228675 non-null int64 CHANNEL DEFINITION 1228675 non-null object ``` what i tried also : `ONLY_PHARMA.pivot_table(values = "NET SALES VALUE ", index = ["CUSTOMER ID"], columns = "PRODUCT FAMILY").reset_index()` what im getting now a mix of float and Int....?? Why? ``` ID A B C 1000167 NaN 2.380122e+05 244767.466667 or im having : ValueError: negative dimensions are not allowed ``` OR I've done which also return me floats and int: ``` pvt = pd.pivot_table(ONLY_PHARMA.reset_index(), index=['CUSTOMER ID'], columns='PRODUCT FAMILY', values='NET SALES VALUE' , fill_value='') \ .reset_index() ```
2018/02/24
[ "https://Stackoverflow.com/questions/48965221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9406236/" ]
Here's a fairly straightforward way, assuming you have a unique index, given your input of:

```
   customer products  Sales
0         1        a     10
1         1        a     10
2         2        b     20
3         3        c     30
```

Pivot it to columnise the products and rejoin to just the customer column of the original frame, e.g.:

```
new_df = df[['customer']].join(df.pivot(columns='products', values='Sales'))
```

This'll give you:

```
   customer     a     b     c
0         1  10.0   NaN   NaN
1         1  10.0   NaN   NaN
2         2   NaN  20.0   NaN
3         3   NaN   NaN  30.0
```

Then sort out your indexing / filling of blank values.
Another method using [`str.get_dummies`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html). ``` pd.concat([df, df.products.str.get_dummies().multiply(df["Sales"], axis="index")], axis=1) customer products Sales a b c 0 1 a 10 10 0 0 1 1 a 10 10 0 0 2 2 b 20 0 20 0 3 3 c 30 0 0 30 ``` `df.products.str.get_dummies()` creates dummy variables as follows ``` a b c 0 1 0 0 1 1 0 0 2 0 1 0 3 0 0 1 ``` We then need to multiply this dummy variable table with `df["Sales"]`. This is achieved by `df.products.str.get_dummies().multiply(df["Sales"], axis="index")` (See reference for more information.) ``` a b c 0 10 0 0 1 10 0 0 2 0 20 0 3 0 0 30 ``` Reference [how to multiply multiple columns by a column in Pandas](https://stackoverflow.com/questions/22702760/how-to-multiply-multiple-columns-by-a-column-in-pandas) Note: to replace `0` with `np.nan`, you need to add `.replace(0, np.nan)` like `pd.concat([df, df.products.str.get_dummies().replace(0, np.nan).mul(df["Sales"], axis="index")], axis=1)`
26,409,964
The [pickle documentation](https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled) states that "when class instances are pickled, their class’s data are not pickled along with them. Only the instance data are pickled." Can anyone provide a recipe for including class variables as well as instance variables when pickling and unpickling?
2014/10/16
[ "https://Stackoverflow.com/questions/26409964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4147103/" ]
Use `dill` instead of pickle, and code exactly how you probably have done already. ``` >>> class A(object): ... y = 1 ... x = 0 ... def __call__(self, x): ... self.x = x ... return self.x + self.y ... >>> b = A() >>> b.y = 4 >>> b(2) 6 >>> b.z = 5 >>> import dill >>> _b = dill.dumps(b) >>> b_ = dill.loads(_b) >>> >>> b_.z 5 >>> b_.x 2 >>> b_.y 4 >>> >>> A.y = 100 >>> c = A() >>> _c = dill.dumps(c) >>> c_ = dill.loads(_c) >>> c_.y 100 ```
You can do this easily with the standard library, using the `__getstate__` and `__setstate__` pickle hooks:

```
class A(object):
    y = 1
    x = 0

    def __getstate__(self):
        ret = self.__dict__.copy()
        ret['cls_x'] = A.x
        ret['cls_y'] = A.y
        return ret

    def __setstate__(self, state):
        A.x = state.pop('cls_x')
        A.y = state.pop('cls_y')
        self.__dict__.update(state)
```
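A short usage sketch of the recipe above (the concrete values are only an illustration): the class attributes travel inside the pickled state and are written back onto `A` by `__setstate__` when the instance is unpickled.

```
import pickle

a = A()
A.y = 42                     # change a class attribute after creating the instance
data = pickle.dumps(a)

A.y = 1                      # simulate a fresh process where the class default is back
b = pickle.loads(data)
print(A.y)                   # 42 -- restored from the pickled state
```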
26,409,964
The [pickle documentation](https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled) states that "when class instances are pickled, their class’s data are not pickled along with them. Only the instance data are pickled." Can anyone provide a recipe for including class variables as well as instance variables when pickling and unpickling?
2014/10/16
[ "https://Stackoverflow.com/questions/26409964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4147103/" ]
Use `dill` instead of pickle, and code exactly how you probably have done already. ``` >>> class A(object): ... y = 1 ... x = 0 ... def __call__(self, x): ... self.x = x ... return self.x + self.y ... >>> b = A() >>> b.y = 4 >>> b(2) 6 >>> b.z = 5 >>> import dill >>> _b = dill.dumps(b) >>> b_ = dill.loads(_b) >>> >>> b_.z 5 >>> b_.x 2 >>> b_.y 4 >>> >>> A.y = 100 >>> c = A() >>> _c = dill.dumps(c) >>> c_ = dill.loads(_c) >>> c_.y 100 ```
Here's a solution using only standard library modules. Simply execute the following code block, and from then on pickle behaves in the desired way. As Mike McKerns was saying, `dill` does something similar under the hood. Based on relevant discussion found [here](https://bytes.com/topic/python/answers/552476-why-cant-you-pickle-instancemethods).

```
import copy_reg
import types

def _pickle_method(method):
    func_name = method.im_func.__name__
    obj = method.im_self
    cls = method.im_class
    return _unpickle_method, (func_name, obj, cls)

def _unpickle_method(func_name, obj, cls):
    for cls in cls.mro():
        try:
            func = cls.__dict__[func_name]
        except KeyError:
            pass
        else:
            break
    return func.__get__(obj, cls)

copy_reg.pickle(types.MethodType, _pickle_method, _unpickle_method)
```
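A short usage sketch of the recipe above, assuming the `copy_reg.pickle(...)` registration has already been executed; it is written in Python 2 syntax since `copy_reg` only exists there, and the `Greeter` class is purely an illustration.

```
import pickle

class Greeter(object):
    def __init__(self, name):
        self.name = name

    def hello(self):
        return "hello, " + self.name

g = Greeter("world")
data = pickle.dumps(g.hello)   # bound methods pickle once the handlers are registered
restored = pickle.loads(data)
print(restored())              # hello, world
```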
25,384,922
I've installed some packages during the execution of my script as a user. Those packages were the first user packages, so Python hadn't added `~/.local/lib/python2.7/site-packages` to `sys.path` when the script started. I want to import those newly installed packages, but I cannot because they are not in `sys.path`. How can I refresh `sys.path`? I'm using Python 2.7.
2014/08/19
[ "https://Stackoverflow.com/questions/25384922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2108548/" ]
As explained in [What sets up sys.path with Python, and when?](https://stackoverflow.com/questions/4271494/what-sets-up-sys-path-with-python-and-when), `sys.path` is populated with the help of the built-in `site` module, so you just need to reload it. You cannot do it in one step because you don't have `site` in your namespace yet. To sum up:

```
import site
from importlib import reload  # on Python 2.7 reload is a builtin, so skip this import
reload(site)
```

That's it.
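A minimal end-to-end sketch of the idea, assuming the script has just installed its first user package; the package name `requests` and the pip invocation are only illustrations, not part of the original answer.

```
import subprocess
import sys
import site

# Install a package into the user site directory mid-run (illustrative only).
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--user', 'requests'])

try:
    from importlib import reload   # Python 3
except ImportError:
    pass                           # Python 2: reload is a builtin

reload(site)                       # re-runs site.py, re-adding the user site-packages dir

import requests                    # now importable, since its directory is on sys.path
print(site.getusersitepackages() in sys.path)
```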
It might be better to add it directly to your `sys.path` with: ``` import sys sys.path.append("/your/new/path") ``` Or, if it needs to be found first: ``` import sys sys.path.insert(1, "/your/new/path") ```
18,621,624
[I'm taking an intro to python class online](http://cscircles.cemc.uwaterloo.ca/8-remix/) and the site is designed to auto-enter input() data into the program that you write to resolve various python logic problems. [Please see this page to see how the online class's tool uses input entries](http://cscircles.cemc.uwaterloo.ca/visualize/#code=lis%20%3D%20%5B%27Text%27%2C%20%27in%27%2C%20%27the%27%2C%20%27middle!%27%2C%20%27END%27%5D%0Awidth%20%3D%2013%0Afor%20s1%20in%20lis%3A%0A%20%20%20%20L%20%3D%20len%28s1%29%0A%20%20%20%20periods_rtside%20%3D%20%28width%20-%20L%29%2F%2F2%0A%20%20%20%20periods_leftside%20%3D%20width%20-%20periods_rtside%20-%20L%0A%20%20%20%20periods_rt_str%20%3D%20%27.%27%20%2a%20periods_rtside%0A%20%20%20%20periods_left_str%20%3D%20%27.%27%20%2a%20periods_leftside%0A%20%20%20%20line1%20%3D%20periods_left_str%20%2B%20s1%20%2B%20periods_rt_str%0A%20%20%20%20if%20s1%20%3D%3D%20%27END%27%3A%0A%20%20%20%20%20%20%20%20%20break%0A%20%20%20%20print%28line1%29%0A%20%20%20%20) For example the class randomly inputs the following: ``` 30 centered text is great testing is great for python! END ``` Obviously, I would have to convert the 30 to an int. How do I convert the rest into a usable list or array? ``` width = int(input()) lis = ['centered', 'text', 'is', 'great', 'END'] ```
2013/09/04
[ "https://Stackoverflow.com/questions/18621624", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2609312/" ]
It sounds like you want to call `input` in a loop. Here's one way to do it:

```
lst = []
s = input()
while s != 'END':
    lst.append(s)
    s = input()
```

There are other options for how to set up the condition on the loop, but I think this is the most straightforward. If the calculation for when to stop looping were more complicated, an alternative design might be to make the loop unconditional (with `while True`) and then `break` if the right conditions were met; a sketch of that variant is below.
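A minimal sketch of that `while True` / `break` variant, still assuming the sentinel word is `END` as in the question:

```
lst = []
while True:
    s = input()
    if s == 'END':      # stop condition checked inside the loop body
        break
    lst.append(s)
```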
You can create a list and read each string one by one, and add it to the list: ``` width=int(input()) lis=[] tmp='' while tmp!='END': tmp=input() #receives a string, in python 3.0+ lis.append(tmp) ```
18,621,624
[I'm taking an intro to python class online](http://cscircles.cemc.uwaterloo.ca/8-remix/) and the site is designed to auto-enter input() data into the program that you write to resolve various python logic problems. [Please see this page to see how the online class's tool uses input entries](http://cscircles.cemc.uwaterloo.ca/visualize/#code=lis%20%3D%20%5B%27Text%27%2C%20%27in%27%2C%20%27the%27%2C%20%27middle!%27%2C%20%27END%27%5D%0Awidth%20%3D%2013%0Afor%20s1%20in%20lis%3A%0A%20%20%20%20L%20%3D%20len%28s1%29%0A%20%20%20%20periods_rtside%20%3D%20%28width%20-%20L%29%2F%2F2%0A%20%20%20%20periods_leftside%20%3D%20width%20-%20periods_rtside%20-%20L%0A%20%20%20%20periods_rt_str%20%3D%20%27.%27%20%2a%20periods_rtside%0A%20%20%20%20periods_left_str%20%3D%20%27.%27%20%2a%20periods_leftside%0A%20%20%20%20line1%20%3D%20periods_left_str%20%2B%20s1%20%2B%20periods_rt_str%0A%20%20%20%20if%20s1%20%3D%3D%20%27END%27%3A%0A%20%20%20%20%20%20%20%20%20break%0A%20%20%20%20print%28line1%29%0A%20%20%20%20) For example the class randomly inputs the following: ``` 30 centered text is great testing is great for python! END ``` Obviously, I would have to convert the 30 to an int. How do I convert the rest into a usable list or array? ``` width = int(input()) lis = ['centered', 'text', 'is', 'great', 'END'] ```
2013/09/04
[ "https://Stackoverflow.com/questions/18621624", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2609312/" ]
It sounds like you want to call `input` in a loop. Here's one way to do it:

```
lst = []
s = input()
while s != 'END':
    lst.append(s)
    s = input()
```

There are other options for how to set up the condition on the loop, but I think this is the most straightforward. If the calculation for when to stop looping were more complicated, an alternative design might be to make the loop unconditional (with `while True`) and then `break` if the right conditions were met.
The `input()` method they provide you will return each line of the user input as it's called. For example, the following function prints each line of the input by calling input throughout a loop ``` for line in range(6): print(input()) ```
65,643,645
I'm pretty new to python and to programming in general. I'm trying to make the game Bounce. The game runs as expected but as soon as I close the window, it shows an error. This is the code: ``` from tkinter import * import random import time # Creating the window: window = Tk() window.title("Bounce") window.geometry('600x600') window.resizable(False, False) # Creating the canvas containing the game: canvas = Canvas(window, width = 450, height = 450, bg = "black") canvas.pack(padx = 50, pady= 50) score = canvas.create_text(10, 20, fill = "white") window.update() # Creating the ball: class Ball: def __init__(self, canvas1, paddle1, color): self.canvas = canvas1 self.paddle = paddle1 self.id = canvas1.create_oval(10, 10, 25, 25, fill = color) # The starting point of the ball self.canvas.move(self.id, 190, 160) starting_direction = [-3, -2, -1, 0, 1, 2, 3] random.shuffle(starting_direction) self.x = starting_direction[0] self.y = -3 self.canvas_height = self.canvas.winfo_height() self.canvas_width = self.canvas.winfo_width() # Detecting the collision between the ball and the paddle: def hit_paddle(self, ballcoords): paddle_pos = self.canvas.coords(self.paddle.id) if ballcoords[0] <= paddle_pos[2] and ballcoords[2] >= paddle_pos[0]: if paddle_pos[3] >= ballcoords[3] >= paddle_pos[1]: return True return False # Detecting the collision between the the ball and the canvas sides: def draw(self): self.canvas.move(self.id, self.x, self.y) ballcoords = self.canvas.coords(self.id) if ballcoords[1] <= 0: self.y = 3 if ballcoords[3] >= self.canvas_height: self.y = 0 self.x = 0 self.canvas.create_text(225, 150, text = "Game Over!", font = ("Arial", 16), fill = "white") if ballcoords[0] <= 0: self.x = 3 if ballcoords[2] >= self.canvas_width: self.x = -3 if self.hit_paddle(ballcoords): self.y = -3 class Paddle: def __init__(self, canvas1, color): self.canvas1 = canvas self.id = canvas.create_rectangle(0, 0, 100, 10, fill = color) self.canvas1.move(self.id, 180, 350) self.x = 0 self.y = 0 self.canvas1_width = canvas1.winfo_width() self.canvas1.bind_all("<Left>", self.left) self.canvas1.bind_all("<Right>", self.right) def draw(self): self.canvas1.move(self.id, self.x, 0) paddlecoords = self.canvas1.coords(self.id) if paddlecoords[0] <= 0: self.x = 0 if paddlecoords[2] >= self.canvas1_width: self.x = 0 def right(self, event): self.x = 3 def left(self, event): self.x = -3 paddle = Paddle(canvas, color = "white") ball = Ball(canvas, paddle, color = "red") while True: ball.draw() paddle.draw() window.update_idletasks() window.update() time.sleep(0.001) ``` This is the error: ``` Traceback (most recent call last): File "D:\CSCI201\Arcade Games Project\Bounce\Bounce_Game.py", line 111, in <module> ball.draw() File "D:\CSCI201\Arcade Games Project\Bounce\Bounce_Game.py", line 64, in draw self.canvas.move(self.id, self.x, self.y) File "C:\Users\M.Youssry\AppData\Local\Programs\Python\Python39\lib\tkinter\__init__.py", line 2916, in move self.tk.call((self._w, 'move') + args) _tkinter.TclError: invalid command name ".!canvas" ``` I've tried inserting .mainloop() as suggested to another user having the same problem but it hasn't worked for me.
2021/01/09
[ "https://Stackoverflow.com/questions/65643645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14915849/" ]
It is caused by close button on top-right corner of window, the only way you have to stop script. After you click close button, window destried, so no widget, like canvas, exist. You can set a flag to identify if while loop should stop and exit in handler of window close button event. ```py window.protocol("WM_DELETE_WINDOW", handler) ``` Here, you can exit script any time by click close button of window. ```py from tkinter import * import random import time # Creating the window: window = Tk() window.title("Bounce") window.geometry('600x600') window.resizable(False, False) # Creating the canvas containing the game: canvas = Canvas(window, width = 450, height = 450, bg = "black") canvas.pack(padx = 50, pady= 50) score = canvas.create_text(10, 20, fill = "white") window.update() # Creating the ball: class Ball: def __init__(self, canvas1, paddle1, color): self.canvas = canvas1 self.paddle = paddle1 self.id = canvas1.create_oval(10, 10, 25, 25, fill = color) # The starting point of the ball self.canvas.move(self.id, 190, 160) starting_direction = [-3, -2, -1, 0, 1, 2, 3] random.shuffle(starting_direction) self.x = starting_direction[0] self.y = -3 self.canvas_height = self.canvas.winfo_height() self.canvas_width = self.canvas.winfo_width() # Detecting the collision between the ball and the paddle: def hit_paddle(self, ballcoords): paddle_pos = self.canvas.coords(self.paddle.id) if ballcoords[0] <= paddle_pos[2] and ballcoords[2] >= paddle_pos[0]: if paddle_pos[3] >= ballcoords[3] >= paddle_pos[1]: return True return False # Detecting the collision between the the ball and the canvas sides: def draw(self): self.canvas.move(self.id, self.x, self.y) ballcoords = self.canvas.coords(self.id) if ballcoords[1] <= 0: self.y = 3 if ballcoords[3] >= self.canvas_height: self.y = 0 self.x = 0 self.canvas.create_text(225, 150, text = "Game Over!", font = ("Arial", 16), fill = "white") if ballcoords[0] <= 0: self.x = 3 if ballcoords[2] >= self.canvas_width: self.x = -3 if self.hit_paddle(ballcoords): self.y = -3 class Paddle: def __init__(self, canvas1, color): self.canvas1 = canvas self.id = canvas.create_rectangle(0, 0, 100, 10, fill = color) self.canvas1.move(self.id, 180, 350) self.x = 0 self.y = 0 self.canvas1_width = canvas1.winfo_width() self.canvas1.bind_all("<Left>", self.left) self.canvas1.bind_all("<Right>", self.right) def draw(self): self.canvas1.move(self.id, self.x, 0) paddlecoords = self.canvas1.coords(self.id) if paddlecoords[0] <= 0: self.x = 0 if paddlecoords[2] >= self.canvas1_width: self.x = 0 def right(self, event): self.x = 3 def left(self, event): self.x = -3 paddle = Paddle(canvas, color = "white") ball = Ball(canvas, paddle, color = "red") # New code after here def handler(): global run run = False window.protocol("WM_DELETE_WINDOW", handler) run = True while run: # New code before here ball.draw() paddle.draw() window.update_idletasks() window.update() time.sleep(0.01) window.destroy() # should always destroy window before exit ```
I had this problem and solved it by restarting my IPython console (in Spyder).
5,898,555
I'm playing with the [pyflakes plugin for vim](https://github.com/kevinw/pyflakes-vim) and now when I open a python file I get the error messages in the screenshot [here](http://dl.dropbox.com/u/6114719/Screenshot.png) Any ideas how to fix this? Thanks in advance...
2011/05/05
[ "https://Stackoverflow.com/questions/5898555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/91748/" ]
Could be an issue with the version of Python you're running under vs. what the package you're using is looking for. A quick google for "Module getChildNodes python" got me to the page for [Python compiler package](http://docs.python.org/library/compiler.html) which has one of those nice little "Deprecated" messages on it. So it might be that the pyflakes plugin is out of synch with the version of Python you have installed. "Python -V" will show you what version you're running. ``` C:\projects\fun>python -V Python 2.7.1 ```
This is a bug in pyflakes and we cannot help you with this here. Try filing an issue on [their git repository](https://github.com/kevinw/pyflakes-vim/issues).
5,898,555
I'm playing with the [pyflakes plugin for vim](https://github.com/kevinw/pyflakes-vim) and now when I open a python file I get the error messages in the screenshot [here](http://dl.dropbox.com/u/6114719/Screenshot.png) Any ideas how to fix this? Thanks in advance...
2011/05/05
[ "https://Stackoverflow.com/questions/5898555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/91748/" ]
<https://github.com/kevinw/pyflakes-vim/issues/27> > > You can recommend to users that they clone the pyflakes-vim repo with git clone --recursive or you can suggest after the fact to use git submodule update --init --recursive if pyflakes-vim is saved as a git submodule itself. > > > Or go to pyflakes-vim and: ``` git submodule init && git submodule update ``` The point is that pyflakes-vim needs a (fresh) local copy of pyflakes under `ftplugin/plugin/pyflakes` if the system-wide installed version is too old.
This is a bug in pyflakes and we cannot help you with this here. Try filing an issue on [their git repository](https://github.com/kevinw/pyflakes-vim/issues).
5,898,555
I'm playing with the [pyflakes plugin for vim](https://github.com/kevinw/pyflakes-vim) and now when I open a python file I get the error messages in the screenshot [here](http://dl.dropbox.com/u/6114719/Screenshot.png) Any ideas how to fix this? Thanks in advance...
2011/05/05
[ "https://Stackoverflow.com/questions/5898555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/91748/" ]
I tried this to solve my problem under the Mac OS X 10.9.5. ``` sudo easy_install pip pip install pyflakes ``` Then I opened the python scripts again, no issues reported as this: ![Imported Error: No module named pyflakes](https://i.stack.imgur.com/C2Ovx.png) Enjoy! Robin 2015.01.30
This is a bug in pyflakes and we cannot help you with this here. Try filing an issue on [their git repository](https://github.com/kevinw/pyflakes-vim/issues).
5,898,555
I'm playing with the [pyflakes plugin for vim](https://github.com/kevinw/pyflakes-vim) and now when I open a python file I get the error messages in the screenshot [here](http://dl.dropbox.com/u/6114719/Screenshot.png) Any ideas how to fix this? Thanks in advance...
2011/05/05
[ "https://Stackoverflow.com/questions/5898555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/91748/" ]
<https://github.com/kevinw/pyflakes-vim/issues/27> > > You can recommend to users that they clone the pyflakes-vim repo with git clone --recursive or you can suggest after the fact to use git submodule update --init --recursive if pyflakes-vim is saved as a git submodule itself. > > > Or go to pyflakes-vim and: ``` git submodule init && git submodule update ``` The point is that pyflakes-vim needs a (fresh) local copy of pyflakes under `ftplugin/plugin/pyflakes` if the system-wide installed version is too old.
Could be an issue with the version of Python you're running under vs. what the package you're using is looking for. A quick google for "Module getChildNodes python" got me to the page for [Python compiler package](http://docs.python.org/library/compiler.html) which has one of those nice little "Deprecated" messages on it. So it might be that the pyflakes plugin is out of synch with the version of Python you have installed. "Python -V" will show you what version you're running. ``` C:\projects\fun>python -V Python 2.7.1 ```
5,898,555
I'm playing with the [pyflakes plugin for vim](https://github.com/kevinw/pyflakes-vim) and now when I open a python file I get the error messages in the screenshot [here](http://dl.dropbox.com/u/6114719/Screenshot.png) Any ideas how to fix this? Thanks in advance...
2011/05/05
[ "https://Stackoverflow.com/questions/5898555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/91748/" ]
Could be an issue with the version of Python you're running under vs. what the package you're using is looking for. A quick google for "Module getChildNodes python" got me to the page for [Python compiler package](http://docs.python.org/library/compiler.html) which has one of those nice little "Deprecated" messages on it. So it might be that the pyflakes plugin is out of synch with the version of Python you have installed. "Python -V" will show you what version you're running. ``` C:\projects\fun>python -V Python 2.7.1 ```
I tried this to solve my problem under the Mac OS X 10.9.5. ``` sudo easy_install pip pip install pyflakes ``` Then I opened the python scripts again, no issues reported as this: ![Imported Error: No module named pyflakes](https://i.stack.imgur.com/C2Ovx.png) Enjoy! Robin 2015.01.30
5,898,555
I'm playing with the [pyflakes plugin for vim](https://github.com/kevinw/pyflakes-vim) and now when I open a python file I get the error messages in the screenshot [here](http://dl.dropbox.com/u/6114719/Screenshot.png) Any ideas how to fix this? Thanks in advance...
2011/05/05
[ "https://Stackoverflow.com/questions/5898555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/91748/" ]
<https://github.com/kevinw/pyflakes-vim/issues/27> > > You can recommend to users that they clone the pyflakes-vim repo with git clone --recursive or you can suggest after the fact to use git submodule update --init --recursive if pyflakes-vim is saved as a git submodule itself. > > > Or go to pyflakes-vim and: ``` git submodule init && git submodule update ``` The point is that pyflakes-vim needs a (fresh) local copy of pyflakes under `ftplugin/plugin/pyflakes` if the system-wide installed version is too old.
I tried this to solve my problem under the Mac OS X 10.9.5. ``` sudo easy_install pip pip install pyflakes ``` Then I opened the python scripts again, no issues reported as this: ![Imported Error: No module named pyflakes](https://i.stack.imgur.com/C2Ovx.png) Enjoy! Robin 2015.01.30