| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
7,459,766
|
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the output below it seems the issue is `sh: mysql_config: not found`.
Could someone advise me on what to do?
```
rmicro@ubuntu:~$ pip install MySQL-python
Downloading/unpacking MySQL-python
Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded
Running setup.py egg_info for package MySQL-python
sh: mysql_config: not found
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module>
metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
File "setup_posix.py", line 24, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found
Complete output from command python setup.py egg_info:
sh: mysql_config: not found
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module>
metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
File "setup_posix.py", line 24, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found
----------------------------------------
Command python setup.py egg_info failed with error code 1
```
|
2011/09/18
|
[
"https://Stackoverflow.com/questions/7459766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/618677/"
] |
For Python or Python 3 with MySQL, you will need these. These libraries use MySQL's connector for C and Python (you need the C libraries installed as well), which overcomes some of the limitations of the mysqldb libraries.
```
sudo apt-get install libmysqlclient-dev
sudo apt-get install python-mysql.connector
sudo apt-get install python3-mysql.connector
```
|
In Python 3 with virtualenv on an Ubuntu Bionic machine, the following commands worked for me:
```
sudo apt install build-essential python-dev libmysqlclient-dev
sudo apt-get install libssl-dev
pip install mysqlclient
```
|
7,459,766
|
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the output below it seems the issue is `sh: mysql_config: not found`.
Could someone advise me on what to do?
```
rmicro@ubuntu:~$ pip install MySQL-python
Downloading/unpacking MySQL-python
Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded
Running setup.py egg_info for package MySQL-python
sh: mysql_config: not found
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module>
metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
File "setup_posix.py", line 24, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found
Complete output from command python setup.py egg_info:
sh: mysql_config: not found
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module>
metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
File "setup_posix.py", line 24, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found
----------------------------------------
Command python setup.py egg_info failed with error code 1
```
|
2011/09/18
|
[
"https://Stackoverflow.com/questions/7459766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/618677/"
] |
Reread the error message. It says:
>
> sh: mysql\_config: not found
>
>
>
If you are on Ubuntu Natty, `mysql_config` belongs to package [libmysqlclient-dev](http://packages.ubuntu.com/search?searchon=contents&keywords=mysql_config&mode=exactfilename&suite=natty&arch=any)
|
This worked for me on Python 3:
`pip install mysqlclient`
|
7,459,766
|
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the output below it seems the issue is `sh: mysql_config: not found`.
Could someone advise me on what to do?
```
rmicro@ubuntu:~$ pip install MySQL-python
Downloading/unpacking MySQL-python
Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded
Running setup.py egg_info for package MySQL-python
sh: mysql_config: not found
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module>
metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
File "setup_posix.py", line 24, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found
Complete output from command python setup.py egg_info:
sh: mysql_config: not found
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module>
metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
File "setup_posix.py", line 24, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found
----------------------------------------
Command python setup.py egg_info failed with error code 1
```
|
2011/09/18
|
[
"https://Stackoverflow.com/questions/7459766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/618677/"
] |
On Ubuntu it is advised to use the distribution's repositories, so installing python-mysqldb should be straightforward:
```
sudo apt-get install python-mysqldb
```
If you actually want to use pip to install, which, as mentioned before, is not the suggested path but is possible, please have a look at this previously asked question and answer: [pip install mysql-python fails with EnvironmentError: mysql\_config not found](https://stackoverflow.com/questions/5178292/pip-install-mysql-python-show-error)
Here is a very comprehensive guide by the developer: <http://mysql-python.blogspot.no/2012/11/is-mysqldb-hard-to-install.html>
To get all the prerequisites for MySQL-python so you can install it using pip (which you will want to do if you are using virtualenv), run this:
```
sudo apt-get install build-essential python-dev libmysqlclient-dev
```
|
1. Find the folder: `sudo find / -name "mysql_config"` (assume it's `"/opt/local/lib/mysql5/bin"`)
2. Add it to your PATH: `export PATH=/opt/local/lib/mysql5/bin:$PATH`
3. Install it again
|
21,979,134
|
If I do `print(1 >> 2)` I get 0.
If I do `print(2 << 1)` I get 4.
If I do `print(9 << 3)` I get 72.
If I do `print(3 >> 9)` I get 0.
What do `>>` and `<<` do in python?
|
2014/02/24
|
[
"https://Stackoverflow.com/questions/21979134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2840324/"
] |
They are bitwise shift operators. For example, 2 has the binary representation 00000010, so 2 << 1 is 00000010 shifted left one place, which yields 00000100, i.e. 4.
1 >> 2 is 00000001 shifted right two places, which is 00000000 (the 1 falls off the end after the first shift, so 1 >> 1 is also 0), so the result is 0.
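The shifts from the question can be checked alongside their bit patterns with Python's built-in `format` specifiers:

```python
# Show each shift from the question next to its 8-bit binary pattern.
for expr, value in [("2 << 1", 2 << 1), ("9 << 3", 9 << 3),
                    ("1 >> 2", 1 >> 2), ("3 >> 9", 3 >> 9)]:
    print(f"{expr} = {value:3d}  ({value:08b})")
# 2 << 1 =   4  (00000100)
# 9 << 3 =  72  (01001000)
# 1 >> 2 =   0  (00000000)
# 3 >> 9 =   0  (00000000)
```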
|
Bitwise shift left and bitwise shift right. They're roughly equivalent to doubling (`<<`) or halving (`>>`) just like decimal shift left is roughly equivalent to multiplying by 10 and decimal shift right is roughly equivalent to dividing by 10.
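The doubling/halving equivalence can be verified directly: for non-negative integers, `x << n` equals `x * 2**n` and `x >> n` equals `x // 2**n` (floor division):

```python
# Check the shift/arithmetic identities over a small range.
for x in range(10):
    for n in range(5):
        assert x << n == x * 2 ** n   # left shift multiplies by 2**n
        assert x >> n == x // 2 ** n  # right shift floor-divides by 2**n
print("shift identities hold")
```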
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
To support arbitrary attribute assignment, an object needs a `__dict__`: a dict associated with the object, where arbitrary attributes can be stored. Otherwise, there's nowhere to *put* new attributes.
An instance of `object` does **not** carry around a `__dict__` -- if it did, quite apart from the horrible circular dependence problem (since `dict`, like most everything else, inherits from `object`;-), this would saddle *every* object in Python with a dict, which would mean an overhead of *many* bytes per object that currently doesn't have or need a dict (essentially, all objects that don't have arbitrarily assignable attributes don't have or need a dict).
For example, using the excellent `pympler` project (you can get it via svn from [here](http://code.google.com/p/pympler/source/checkout)), we can do some measurements...:
```
>>> from pympler import asizeof
>>> asizeof.asizeof({})
144
>>> asizeof.asizeof(23)
16
```
You wouldn't want every `int` to take up 144 bytes instead of just 16, right?-)
Now, when you make a class (inheriting from whatever), things change...:
```
>>> class dint(int): pass
...
>>> asizeof.asizeof(dint(23))
184
```
...the `__dict__` *is* now added (plus, a little more overhead) -- so a `dint` instance can have arbitrary attributes, but you pay quite a space cost for that flexibility.
So what if you wanted `int`s with just *one* extra attribute `foobar`...? It's a rare need, but Python does offer a special mechanism for the purpose...
```
>>> class fint(int):
... __slots__ = 'foobar',
... def __init__(self, x): self.foobar=x+100
...
>>> asizeof.asizeof(fint(23))
80
```
...not *quite* as tiny as an `int`, mind you! (or even the two `int`s, one the `self` and one the `self.foobar` -- the second one can be reassigned), but surely much better than a `dint`.
When the class has the `__slots__` special attribute (a sequence of strings), then the `class` statement (more precisely, the default metaclass, `type`) does **not** equip every instance of that class with a `__dict__` (and therefore the ability to have arbitrary attributes), just a finite, rigid set of "slots" (basically places which can each hold one reference to some object) with the given names.
In exchange for the lost flexibility, you gain a lot of bytes per instance (probably meaningful only if you have zillions of instances gallivanting around, but, there *are* use cases for that).
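To illustrate the "finite, rigid set of slots": with `__slots__`, assigning any attribute not listed there raises `AttributeError` (a minimal sketch with a hypothetical `Point` class, not from the original answer):

```python
class Point:
    __slots__ = ('x', 'y')   # only these two attributes are permitted

p = Point()
p.x = 1                      # fine: 'x' is a declared slot
try:
    p.color = 'red'          # 'color' is not in __slots__
except AttributeError as e:
    print(e)
```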
|
It is simply due to optimization.
Dicts are relatively large.
```
>>> import sys
>>> sys.getsizeof((lambda:1).__dict__)
140
```
Most (maybe all) classes that are defined in C do not have a dict for optimization.
If you look at the [source code](http://svn.python.org/view/python/trunk/Objects/object.c?revision=74457&view=markup) you will see that there are many checks to see if the object has a dict or not.
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
To support arbitrary attribute assignment, an object needs a `__dict__`: a dict associated with the object, where arbitrary attributes can be stored. Otherwise, there's nowhere to *put* new attributes.
An instance of `object` does **not** carry around a `__dict__` -- if it did, quite apart from the horrible circular dependence problem (since `dict`, like most everything else, inherits from `object`;-), this would saddle *every* object in Python with a dict, which would mean an overhead of *many* bytes per object that currently doesn't have or need a dict (essentially, all objects that don't have arbitrarily assignable attributes don't have or need a dict).
For example, using the excellent `pympler` project (you can get it via svn from [here](http://code.google.com/p/pympler/source/checkout)), we can do some measurements...:
```
>>> from pympler import asizeof
>>> asizeof.asizeof({})
144
>>> asizeof.asizeof(23)
16
```
You wouldn't want every `int` to take up 144 bytes instead of just 16, right?-)
Now, when you make a class (inheriting from whatever), things change...:
```
>>> class dint(int): pass
...
>>> asizeof.asizeof(dint(23))
184
```
...the `__dict__` *is* now added (plus, a little more overhead) -- so a `dint` instance can have arbitrary attributes, but you pay quite a space cost for that flexibility.
So what if you wanted `int`s with just *one* extra attribute `foobar`...? It's a rare need, but Python does offer a special mechanism for the purpose...
```
>>> class fint(int):
... __slots__ = 'foobar',
... def __init__(self, x): self.foobar=x+100
...
>>> asizeof.asizeof(fint(23))
80
```
...not *quite* as tiny as an `int`, mind you! (or even the two `int`s, one the `self` and one the `self.foobar` -- the second one can be reassigned), but surely much better than a `dint`.
When the class has the `__slots__` special attribute (a sequence of strings), then the `class` statement (more precisely, the default metaclass, `type`) does **not** equip every instance of that class with a `__dict__` (and therefore the ability to have arbitrary attributes), just a finite, rigid set of "slots" (basically places which can each hold one reference to some object) with the given names.
In exchange for the lost flexibility, you gain a lot of bytes per instance (probably meaningful only if you have zillions of instances gallivanting around, but, there *are* use cases for that).
|
As other answerers have said, an `object` does not have a `__dict__`. `object` is the base class of **all** types, including `int` or `str`. Thus whatever is provided by `object` will be a burden to them as well. Even something as simple as an *optional* `__dict__` would need an extra pointer for each value; this would waste additional 4-8 bytes of memory for each object in the system, for a very limited utility.
---
Instead of doing an instance of a dummy class, in Python 3.3+, you can (and should) use [`types.SimpleNamespace`](https://docs.python.org/3/library/types.html#types.SimpleNamespace) for this.
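A short sketch of the `types.SimpleNamespace` suggestion (Python 3.3+), which accepts arbitrary attributes where a bare `object()` does not:

```python
from types import SimpleNamespace

ns = SimpleNamespace()
ns.attr = 'hello'      # works, unlike object().attr = 'hello'
print(ns.attr)         # hello
print(ns)              # namespace(attr='hello')
```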
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
As other answerers have said, an `object` does not have a `__dict__`. `object` is the base class of **all** types, including `int` or `str`. Thus whatever is provided by `object` will be a burden to them as well. Even something as simple as an *optional* `__dict__` would need an extra pointer for each value; this would waste additional 4-8 bytes of memory for each object in the system, for a very limited utility.
---
Instead of doing an instance of a dummy class, in Python 3.3+, you can (and should) use [`types.SimpleNamespace`](https://docs.python.org/3/library/types.html#types.SimpleNamespace) for this.
|
This is (IMO) one of the fundamental limitations with Python - you can't re-open classes. I believe the actual problem, though, is caused by the fact that classes implemented in C can't be modified at runtime... subclasses can, but not the base classes.
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
It is simply due to optimization.
Dicts are relatively large.
```
>>> import sys
>>> sys.getsizeof((lambda:1).__dict__)
140
```
Most (maybe all) classes that are defined in C do not have a dict for optimization.
If you look at the [source code](http://svn.python.org/view/python/trunk/Objects/object.c?revision=74457&view=markup) you will see that there are many checks to see if the object has a dict or not.
|
This is (IMO) one of the fundamental limitations with Python - you can't re-open classes. I believe the actual problem, though, is caused by the fact that classes implemented in C can't be modified at runtime... subclasses can, but not the base classes.
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
So, investigating my own question, I discovered this about the Python language: you can inherit from things like int, and you see the same behaviour:
```
>>> class MyInt(int):
pass
>>> x = MyInt()
>>> print x
0
>>> x.hello = 4
>>> print x.hello
4
>>> x = x + 1
>>> print x
1
>>> print x.hello
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
AttributeError: 'int' object has no attribute 'hello'
```
I assume the error at the end is because the add function returns an int, so I'd have to override functions like `__add__` and such in order to retain my custom attributes. But this all now makes sense to me (I think), when I think of "object" like "int".
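The `__add__` override alluded to above can be sketched like this (a hypothetical illustration in Python 3 syntax, not part of the original answer): rebuild an instance of the subclass so the custom attribute survives arithmetic.

```python
class MyInt(int):
    def __new__(cls, value=0, hello=None):
        obj = super().__new__(cls, value)
        obj.hello = hello            # custom attribute on the subclass
        return obj

    def __add__(self, other):
        # int.__add__ would return a plain int, dropping .hello;
        # return a MyInt instead, carrying the attribute forward.
        return MyInt(int(self) + int(other), hello=self.hello)

x = MyInt(0, hello=4)
x = x + 1
print(x, x.hello)   # 1 4
```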
|
<https://docs.python.org/3/library/functions.html#object> :
>
> **Note**: [`object`](https://docs.python.org/3/library/functions.html#object) does *not* have a [`__dict__`](https://docs.python.org/3/library/stdtypes.html#object.__dict__), so you can’t assign arbitrary attributes to an instance of the [`object`](https://docs.python.org/3/library/functions.html#object) class.
>
>
>
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
It is simply due to optimization.
Dicts are relatively large.
```
>>> import sys
>>> sys.getsizeof((lambda:1).__dict__)
140
```
Most (maybe all) classes that are defined in C do not have a dict for optimization.
If you look at the [source code](http://svn.python.org/view/python/trunk/Objects/object.c?revision=74457&view=markup) you will see that there are many checks to see if the object has a dict or not.
|
It's because object is a "type", not a class. In general, all classes that are defined in C extensions (like all the built in datatypes, and stuff like numpy arrays) do not allow addition of arbitrary attributes.
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
To support arbitrary attribute assignment, an object needs a `__dict__`: a dict associated with the object, where arbitrary attributes can be stored. Otherwise, there's nowhere to *put* new attributes.
An instance of `object` does **not** carry around a `__dict__` -- if it did, quite apart from the horrible circular dependence problem (since `dict`, like most everything else, inherits from `object`;-), this would saddle *every* object in Python with a dict, which would mean an overhead of *many* bytes per object that currently doesn't have or need a dict (essentially, all objects that don't have arbitrarily assignable attributes don't have or need a dict).
For example, using the excellent `pympler` project (you can get it via svn from [here](http://code.google.com/p/pympler/source/checkout)), we can do some measurements...:
```
>>> from pympler import asizeof
>>> asizeof.asizeof({})
144
>>> asizeof.asizeof(23)
16
```
You wouldn't want every `int` to take up 144 bytes instead of just 16, right?-)
Now, when you make a class (inheriting from whatever), things change...:
```
>>> class dint(int): pass
...
>>> asizeof.asizeof(dint(23))
184
```
...the `__dict__` *is* now added (plus, a little more overhead) -- so a `dint` instance can have arbitrary attributes, but you pay quite a space cost for that flexibility.
So what if you wanted `int`s with just *one* extra attribute `foobar`...? It's a rare need, but Python does offer a special mechanism for the purpose...
```
>>> class fint(int):
... __slots__ = 'foobar',
... def __init__(self, x): self.foobar=x+100
...
>>> asizeof.asizeof(fint(23))
80
```
...not *quite* as tiny as an `int`, mind you! (or even the two `int`s, one the `self` and one the `self.foobar` -- the second one can be reassigned), but surely much better than a `dint`.
When the class has the `__slots__` special attribute (a sequence of strings), then the `class` statement (more precisely, the default metaclass, `type`) does **not** equip every instance of that class with a `__dict__` (and therefore the ability to have arbitrary attributes), just a finite, rigid set of "slots" (basically places which can each hold one reference to some object) with the given names.
In exchange for the lost flexibility, you gain a lot of bytes per instance (probably meaningful only if you have zillions of instances gallivanting around, but, there *are* use cases for that).
|
<https://docs.python.org/3/library/functions.html#object> :
>
> **Note**: [`object`](https://docs.python.org/3/library/functions.html#object) does *not* have a [`__dict__`](https://docs.python.org/3/library/stdtypes.html#object.__dict__), so you can’t assign arbitrary attributes to an instance of the [`object`](https://docs.python.org/3/library/functions.html#object) class.
>
>
>
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
As other answerers have said, an `object` does not have a `__dict__`. `object` is the base class of **all** types, including `int` or `str`. Thus whatever is provided by `object` will be a burden to them as well. Even something as simple as an *optional* `__dict__` would need an extra pointer for each value; this would waste additional 4-8 bytes of memory for each object in the system, for a very limited utility.
---
Instead of doing an instance of a dummy class, in Python 3.3+, you can (and should) use [`types.SimpleNamespace`](https://docs.python.org/3/library/types.html#types.SimpleNamespace) for this.
|
So, investigating my own question, I discovered this about the Python language: you can inherit from things like int, and you see the same behaviour:
```
>>> class MyInt(int):
pass
>>> x = MyInt()
>>> print x
0
>>> x.hello = 4
>>> print x.hello
4
>>> x = x + 1
>>> print x
1
>>> print x.hello
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
AttributeError: 'int' object has no attribute 'hello'
```
I assume the error at the end is because the add function returns an int, so I'd have to override functions like `__add__` and such in order to retain my custom attributes. But this all now makes sense to me (I think), when I think of "object" like "int".
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
So, investigating my own question, I discovered this about the Python language: you can inherit from things like int, and you see the same behaviour:
```
>>> class MyInt(int):
pass
>>> x = MyInt()
>>> print x
0
>>> x.hello = 4
>>> print x.hello
4
>>> x = x + 1
>>> print x
1
>>> print x.hello
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
AttributeError: 'int' object has no attribute 'hello'
```
I assume the error at the end is because the add function returns an int, so I'd have to override functions like `__add__` and such in order to retain my custom attributes. But this all now makes sense to me (I think), when I think of "object" like "int".
|
It's because object is a "type", not a class. In general, all classes that are defined in C extensions (like all the built in datatypes, and stuff like numpy arrays) do not allow addition of arbitrary attributes.
|
1,529,002
|
So, I was playing around with Python while answering [this question](https://stackoverflow.com/questions/1528932/how-to-create-inline-objects-with-properties-in-python/), and I discovered that this is not valid:
```
o = object()
o.attr = 'hello'
```
due to an `AttributeError: 'object' object has no attribute 'attr'`. However, with any class inherited from object, it is valid:
```
class Sub(object):
pass
s = Sub()
s.attr = 'hello'
```
Printing `s.attr` displays 'hello' as expected. Why is this the case? What in the Python language specification specifies that you can't assign attributes to vanilla objects?
---
For other workarounds, see [How can I create an object and add attributes to it?](https://stackoverflow.com/questions/2827623).
|
2009/10/07
|
[
"https://Stackoverflow.com/questions/1529002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14902/"
] |
It is simply due to optimization.
Dicts are relatively large.
```
>>> import sys
>>> sys.getsizeof((lambda:1).__dict__)
140
```
Most (maybe all) classes that are defined in C do not have a dict for optimization.
If you look at the [source code](http://svn.python.org/view/python/trunk/Objects/object.c?revision=74457&view=markup) you will see that there are many checks to see if the object has a dict or not.
|
<https://docs.python.org/3/library/functions.html#object> :
>
> **Note**: [`object`](https://docs.python.org/3/library/functions.html#object) does *not* have a [`__dict__`](https://docs.python.org/3/library/stdtypes.html#object.__dict__), so you can’t assign arbitrary attributes to an instance of the [`object`](https://docs.python.org/3/library/functions.html#object) class.
>
>
>
|
68,968,534
|
In Python 2, there is a comparison function.
>
> A comparison function is any callable that accept two arguments, compares them, and returns a negative number for less-than, zero for equality, or a positive number for greater-than.
>
>
>
In Python 3, the comparison function is replaced with a key function.
>
> A key function is a callable that accepts one argument and returns another value to be used as the sort key.
>
>
>
Now, I have a list of `tuple[int, int, str]` that I want to sort, and the string can be `S` or `E`. There are some tie-breaker rules that use values from two tuples.
Given a tuple `(x: int, y: int, z: str)`, the rules are as follows:
1. If the x values are equal, and both z = S, the larger y comes first.
2. If the x values are equal, and both z = E, the smaller y comes first.
3. If the x values are equal, and one z = E and the other z = S, the S record comes first.
A Python 2 style implementation is as follows:
```
def _cmp(coord1: tuple[int, int, str], coord2: tuple[int, int, str]) -> int:
    x1, y1, type1 = coord1
    x2, y2, type2 = coord2
    if x1 != x2:
        return x1 - x2
    if type1 == type2:
        return y2 - y1 if type1 == "S" else y1 - y2
    return -1 if type1 == "S" else 1
```
Expected output:
```
[(0, 3, "S"), (0, 2, "S"), (1, 2, "E"), (2, 3, "E")],
[(3, 3, "S"), (4, 2, "S"), (5, 2, "E"), (5, 3, "E")],
[(6, 2, "S"), (7, 3, "S"), (7, 2, "E"), (8, 3, "E")]
```
I'm aware that a comparison function can be converted to a key function using [functools.cmp\_to\_key](https://docs.python.org/3/library/functools.html#functools.cmp_to_key); however, I'm wondering if there's a way to implement it directly as a key function since the docs say:
>
> This function is primarily used as a transition tool for programs being converted from Python 2 which supported the use of comparison functions.
>
>
>
|
2021/08/28
|
[
"https://Stackoverflow.com/questions/68968534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/839733/"
] |
Consider how tuples *normally* compare: element by element, going to the next element when the current values are equal (sometimes called *lexicographic order*).
Our required comparison algorithm, rewritten in steps that match that general approach, is:
* First, we want to compare the `x` values, putting them in ascending order.
* Then we want to compare the `z` values; we want tuples with an S to come first. This is the opposite of what normally happens, and we can't easily specify reverse order for only part of the key, and we can't negate a string value. However, since only S and E are possible, we can simply map S to 0 and E to 1. Or more simply, S can map to False and E to True, since those are *numerically* equal to 0 and 1 respectively.
* Finally, if the `z` values were equal, we want to compare the `y` values - in normal order if we have an E, and in reverse order (so, negate the numeric values) if we have an S.
So, we create a key that corresponds to that rule:
* The first element is `x`.
* The second element is our translation of the original `z` value.
* The third element is the `y` value, possibly negated depending on `z`.
In code:
```
def fancy_key(xyz: tuple[int, int, str]) -> tuple[int, bool, int]:
    x, y, z = xyz
    is_e = z == 'E'
    return (x, is_e, (y if is_e else -y))
```
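As a usage sketch (repeating the key function here so the snippet is self-contained), sorting a shuffled copy of the first group from the question's expected output:

```python
def fancy_key(xyz):
    x, y, z = xyz
    is_e = z == 'E'
    return (x, is_e, (y if is_e else -y))

# Shuffled copy of the first expected group from the question.
data = [(1, 2, "E"), (0, 2, "S"), (2, 3, "E"), (0, 3, "S")]
print(sorted(data, key=fancy_key))
# [(0, 3, 'S'), (0, 2, 'S'), (1, 2, 'E'), (2, 3, 'E')]
```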
|
Since the original version of this was rec'd for deletion as supposedly not actually answering the question due to it... being too long, I guess?, here's a shorter version of the exact same answer that gives less insight but uses fewer words. (Yes, it was too long. It also answered the question. "omg tl;dr" shouldn't be sufficient cause to claim that it didn't.)
The accepted answer uses the entire original tuple as the key, and simply reorders it so that the built-in sort can handle it correctly. It is, in my opinion, the best solution, as even though that's not really a "key" by any meaningful typical definition of a "key" (and it certainly doesn't give the efficiency benefits of a key-based sort) the efficiency of the sort itself is probably as close to the original compare-based sort as you can get it.
*Additionally*, there is another way to solve the problem of converting the comparison function to a key function, which is the way `cmp_to_key` solves it (as the OP alluded to). That is to use the key function to instead define a class with a single sort item (here, your tuple) as the sole member variable and define the class's `__lt__()` built-in (and ideally the other comparison built-ins as well) by having them call the comparison function with self as the first parameter.
To be exceedingly clear about this: that is a *second solution* to the problem. I don't know that it can be made any clearer that the above approach -- the approach that a Python standard library function itself uses -- does indeed solve the problem stated by the OP. Hopefully the review agrees, I suppose.
How and why does that work, and what are the pros and cons of going about it that way instead? Well, I'd explain, but it has unfortunately been deemed outside of the scope of the question to do so by the deletion review, which suggested asking a separate question to get more detail on how it works. As the OP seems more than capable and can likely figure it out themselves, I'll just leave that part out and make the reviewers happy. I've learned my lesson from how they treated the version of the question the OP posted yesterday.
(Ironically, lengthening the answer by including a code example that would basically be just a class definition and would only be useful to those that don't know how to define one already would seem to be unobjectionable, perhaps even preferable, to reviewers -- for those that need that, I'm sure there are already asked-and-answered questions elsewhere on SO regarding how to define and instantiate a class.)
Now that it is hopefully clear that an additional answer to the question was provided, I'll also note firstly that I still think the currently accepted answer is better for this specific case, and secondly that the `cmp_to_key` approach doesn't really create a traditional "key function" by any commonly accepted plain-face meaning of the term, and there likely really isn't any simple way to programmatically construct a "real" key function from a comparison function in the general case as is implied by the portion of the `cmp_to_key` docs quoted by the OP in the question itself (hopefully noting that it was specifically stated in the question is sufficient indication to reviewers that it is a relevant observation, despite it being seemingly insufficient the first time around).
|
68,968,534
|
In Python 2, there is a comparison function.
>
> A comparison function is any callable that accepts two arguments, compares them, and returns a negative number for less-than, zero for equality, or a positive number for greater-than.
>
>
>
In Python 3, the comparison function is replaced with a key function.
>
> A key function is a callable that accepts one argument and returns another value to be used as the sort key.
>
>
>
Now, I have a list of `tuple[int, int, str]` that I want to sort, and the string can be `S` or `E`. There are some tie-breaker rules that use values from two tuples.
Given a tuple x: int, y: int, z: str, the rules are as follows:
1. If x are equal, and both z = S, larger y comes first.
2. If x are equal, and both z = E, smaller y comes first.
3. If x are equal, and one z = E another z = S, S record comes first.
A Python 2 style implementation is as follows:
```
def _cmp(coord1: tuple[int, int, str], coord2: tuple[int, int, str]) -> int:
    x1, y1, type1 = coord1
    x2, y2, type2 = coord2
    if x1 != x2:
        return x1 - x2
    if type1 == type2:
        return y2 - y1 if type1 == "S" else y1 - y2
    return -1 if type1 == "S" else 1
```
Expected output:
```
[(0, 3, "S"), (0, 2, "S"), (1, 2, "E"), (2, 3, "E")],
[(3, 3, "S"), (4, 2, "S"), (5, 2, "E"), (5, 3, "E")],
[(6, 2, "S"), (7, 3, "S"), (7, 2, "E"), (8, 3, "E")]
```
I'm aware that a comparison function can be converted to a key function using [functools.cmp\_to\_key](https://docs.python.org/3/library/functools.html#functools.cmp_to_key); however, I'm wondering if there's a way to implement it directly as a key function since the docs say:
>
> This function is primarily used as a transition tool for programs being converted from Python 2 which supported the use of comparison functions.
>
>
>
|
2021/08/28
|
[
"https://Stackoverflow.com/questions/68968534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/839733/"
] |
Consider how tuples *normally* compare: element by element, going to the next element when the current values are equal (sometimes called *lexicographic order*).
Our required comparison algorithm, rewritten in steps that match that general approach, is:
* First, we want to compare the `x` values, putting them in ascending order.
* Then we want to compare the `z` values; we want tuples with an S to come first. This is the opposite of what normally happens, and we can't easily specify reverse order for only part of the key, and we can't negate a string value. However, since only S and E are possible, we can simply map S to 0 and E to 1. Or more simply, S can map to False and E to True, since those are *numerically* equal to 0 and 1 respectively.
* Finally, if the `z` values were equal, we want to compare the `y` values - in normal order if we have an E, and in reverse order (so, negate the numeric values) if we have an S.
So, we create a key that corresponds to that rule:
* The first element is `x`.
* The second element is our translation of the original `z` value.
* The third element is the `y` value, possibly negated depending on `z`.
In code:
```
def fancy_key(xyz: tuple[int, int, str]) -> tuple[int, bool, int]:
    x, y, z = xyz
    is_e = z == 'E'
    return (x, is_e, (y if is_e else -y))
```
|
Alternately, one can replicate the work that the built-in `cmp_to_key` does, but hard-wiring the comparison logic from the original `cmp` function. I don't recommend this, obviously; but it is still in some sense "direct", and it highlights a few important things about Python internals.
The idea is, we create a wrapper class that implements the *relational operators* `<`, `==` etc. via the corresponding [*magic methods*](https://stackoverflow.com/questions/2657627/why-does-python-use-magic-methods) - [`__lt__`, `__eq__` etc.](https://docs.python.org/3/reference/datamodel.html#object.__lt__) By using another tool from `functools` - the [`@total_ordering` decorator](https://docs.python.org/3/library/functools.html#functools.total_ordering) - we [only need to implement those two](https://stackoverflow.com/questions/16238322/python-total-ordering-why-lt-and-eq-instead-of-le); the rest can be inferred by combining those results.
That could look like, for example:
```
import functools  # since we're still using `total_ordering`

@functools.total_ordering
class fancy_key:
    def __init__(self, xyz):
        self.xyz = xyz

    def __lt__(self, other: 'fancy_key') -> bool:
        # The annotation is quoted because `fancy_key` is not yet defined
        # while the class body executes.
        x1, y1, type1 = self.xyz
        x2, y2, type2 = other.xyz
        if x1 != x2:
            return x1 < x2
        if type1 != type2:
            return type1 == "S"
        return y2 < y1 if type1 == "S" else y1 < y2

    def __eq__(self, other: 'fancy_key') -> bool:
        return self.xyz == other.xyz
Notice that for `__lt__` we have rewritten the comparison logic, modelled off the original `_cmp` - except that we return `True` for the cases where `_cmp` would return `-1`, and `False` otherwise. For `__eq__`, of course, we can simplify greatly; just compare the underlying data directly.
Because classes are callable, we can directly use this class as the `key` for a `sort` call. Python will use it to create instances of the wrapper for each original tuple (rather than calling a function to create transformed data for each original tuple). The instances compare in the way we want them to (because of the operator overload). Thus, we are done.
The [actual `cmp_to_key` function](https://github.com/python/cpython/blob/3.9/Lib/functools.py#L206) (thanks to @George for the reference) generalizes this process by creating a new class on the fly, and returning that class. It uses a [closure](https://en.wikipedia.org/wiki/Closure_%28computer_programming%29) to provide the original `mycmp` (as it's named in that source code) to the new class, which has a hard-coded implementation of the magic methods - these all just call `mycmp` and check whether the result was negative, zero or positive. This implementation does *not* use `total_ordering`, presumably for efficiency; and also uses [`__slots__`](https://stackoverflow.com/questions/472000/usage-of-slots) for efficiency. It also tries to replace this code with a native C implementation, if available.
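For comparison, here is a stripped-down sketch of that general factory idea (the name `simple_cmp_to_key` is mine, not from the standard library; the real `cmp_to_key` also defines `__gt__`, `__le__`, etc. and uses `__slots__`):

```python
# Minimal sketch of what cmp_to_key does internally (simplified).
def simple_cmp_to_key(mycmp):
    class K:
        def __init__(self, obj):
            self.obj = obj
        def __lt__(self, other):
            return mycmp(self.obj, other.obj) < 0
        def __eq__(self, other):
            return mycmp(self.obj, other.obj) == 0
    return K

def _cmp(a, b):
    # three-way compare on plain ints: negative, zero, or positive
    return (a > b) - (a < b)

print(sorted([3, 1, 2], key=simple_cmp_to_key(_cmp)))  # [1, 2, 3]
```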
|
68,968,534
|
In Python 2, there is a comparison function.
>
> A comparison function is any callable that accepts two arguments, compares them, and returns a negative number for less-than, zero for equality, or a positive number for greater-than.
>
>
>
In Python 3, the comparison function is replaced with a key function.
>
> A key function is a callable that accepts one argument and returns another value to be used as the sort key.
>
>
>
Now, I have a list of `tuple[int, int, str]` that I want to sort, and the string can be `S` or `E`. There are some tie-breaker rules that use values from two tuples.
Given a tuple x: int, y: int, z: str, the rules are as follows:
1. If x are equal, and both z = S, larger y comes first.
2. If x are equal, and both z = E, smaller y comes first.
3. If x are equal, and one z = E another z = S, S record comes first.
A Python 2 style implementation is as follows:
```
def _cmp(coord1: tuple[int, int, str], coord2: tuple[int, int, str]) -> int:
    x1, y1, type1 = coord1
    x2, y2, type2 = coord2
    if x1 != x2:
        return x1 - x2
    if type1 == type2:
        return y2 - y1 if type1 == "S" else y1 - y2
    return -1 if type1 == "S" else 1
```
Expected output:
```
[(0, 3, "S"), (0, 2, "S"), (1, 2, "E"), (2, 3, "E")],
[(3, 3, "S"), (4, 2, "S"), (5, 2, "E"), (5, 3, "E")],
[(6, 2, "S"), (7, 3, "S"), (7, 2, "E"), (8, 3, "E")]
```
I'm aware that a comparison function can be converted to a key function using [functools.cmp\_to\_key](https://docs.python.org/3/library/functools.html#functools.cmp_to_key); however, I'm wondering if there's a way to implement it directly as a key function since the docs say:
>
> This function is primarily used as a transition tool for programs being converted from Python 2 which supported the use of comparison functions.
>
>
>
|
2021/08/28
|
[
"https://Stackoverflow.com/questions/68968534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/839733/"
] |
Alternately, one can replicate the work that the built-in `cmp_to_key` does, but hard-wiring the comparison logic from the original `cmp` function. I don't recommend this, obviously; but it is still in some sense "direct", and it highlights a few important things about Python internals.
The idea is, we create a wrapper class that implements the *relational operators* `<`, `==` etc. via the corresponding [*magic methods*](https://stackoverflow.com/questions/2657627/why-does-python-use-magic-methods) - [`__lt__`, `__eq__` etc.](https://docs.python.org/3/reference/datamodel.html#object.__lt__) By using another tool from `functools` - the [`@total_ordering` decorator](https://docs.python.org/3/library/functools.html#functools.total_ordering) - we [only need to implement those two](https://stackoverflow.com/questions/16238322/python-total-ordering-why-lt-and-eq-instead-of-le); the rest can be inferred by combining those results.
That could look like, for example:
```
import functools  # since we're still using `total_ordering`

@functools.total_ordering
class fancy_key:
    def __init__(self, xyz):
        self.xyz = xyz

    def __lt__(self, other: 'fancy_key') -> bool:
        # The annotation is quoted because `fancy_key` is not yet defined
        # while the class body executes.
        x1, y1, type1 = self.xyz
        x2, y2, type2 = other.xyz
        if x1 != x2:
            return x1 < x2
        if type1 != type2:
            return type1 == "S"
        return y2 < y1 if type1 == "S" else y1 < y2

    def __eq__(self, other: 'fancy_key') -> bool:
        return self.xyz == other.xyz
Notice that for `__lt__` we have rewritten the comparison logic, modelled off the original `_cmp` - except that we return `True` for the cases where `_cmp` would return `-1`, and `False` otherwise. For `__eq__`, of course, we can simplify greatly; just compare the underlying data directly.
Because classes are callable, we can directly use this class as the `key` for a `sort` call. Python will use it to create instances of the wrapper for each original tuple (rather than calling a function to create transformed data for each original tuple). The instances compare in the way we want them to (because of the operator overload). Thus, we are done.
The [actual `cmp_to_key` function](https://github.com/python/cpython/blob/3.9/Lib/functools.py#L206) (thanks to @George for the reference) generalizes this process by creating a new class on the fly, and returning that class. It uses a [closure](https://en.wikipedia.org/wiki/Closure_%28computer_programming%29) to provide the original `mycmp` (as it's named in that source code) to the new class, which has a hard-coded implementation of the magic methods - these all just call `mycmp` and check whether the result was negative, zero or positive. This implementation does *not* use `total_ordering`, presumably for efficiency; and also uses [`__slots__`](https://stackoverflow.com/questions/472000/usage-of-slots) for efficiency. It also tries to replace this code with a native C implementation, if available.
|
Since the original version of this was rec'd for deletion as supposedly not actually answering the question due to it... being too long, I guess?, here's a shorter version of the exact same answer that gives less insight but uses fewer words. (Yes, it was too long. It also answered the question. "omg tl;dr" shouldn't be sufficient cause to claim that it didn't.)
The accepted answer uses the entire original tuple as the key, and simply reorders it so that the built-in sort can handle it correctly. It is, in my opinion, the best solution, as even though that's not really a "key" by any meaningful typical definition of a "key" (and it certainly doesn't give the efficiency benefits of a key-based sort) the efficiency of the sort itself is probably as close to the original compare-based sort as you can get it.
*Additionally*, there is another way to solve the problem of converting the comparison function to a key function, which is the way `cmp_to_key` solves it (as the OP alluded to). That is to use the key function to instead define a class with a single sort item (here, your tuple) as the sole member variable and define the class's `__lt__()` built-in (and ideally the other comparison built-ins as well) by having them call the comparison function with self as the first parameter.
To be exceedingly clear about this: that is a *second solution* to the problem. I don't know that it can be made any clearer that the above approach -- the approach that a Python standard library function itself uses -- does indeed solve the problem stated by the OP. Hopefully the review agrees, I suppose.
How and why does that work, and what are the pros and cons of going about it that way instead? Well, I'd explain, but it has unfortunately been deemed outside of the scope of the question to do so by the deletion review, which suggested asking a separate question to get more detail on how it works. As the OP seems more than capable and can likely figure it out themselves, I'll just leave that part out and make the reviewers happy. I've learned my lesson from how they treated the version of the question the OP posted yesterday.
(Ironically, lengthening the answer by including a code example that would basically be just a class definition and would only be useful to those that don't know how to define one already would seem to be unobjectionable, perhaps even preferable, to reviewers -- for those that need that, I'm sure there are already asked-and-answered questions elsewhere on SO regarding how to define and instantiate a class.)
Now that it is hopefully clear that an additional answer to the question was provided, I'll also note firstly that I still think the currently accepted answer is better for this specific case, and secondly that the `cmp_to_key` approach doesn't really create a traditional "key function" by any commonly accepted plain-face meaning of the term, and there likely really isn't any simple way to programmatically construct a "real" key function from a comparison function in the general case as is implied by the portion of the `cmp_to_key` docs quoted by the OP in the question itself (hopefully noting that it was specifically stated in the question is sufficient indication to reviewers that it is a relevant observation, despite it being seemingly insufficient the first time around).
|
62,633,601
|
I want to re-implement a certain API client, which is written in Python, in JavaScript. I fail to replicate the HMAC SHA256 signing function. For some keys the output is identical, but for some it is different. It appears that the output is the same when the key consists of printable characters after decoding its Base64 representation.
Python
------
```
#!/usr/bin/env python3
import base64
import hashlib
import hmac
def sign_string(key_b64, to_sign):
    key = base64.b64decode(key_b64)
    signed_hmac_sha256 = hmac.HMAC(key, to_sign.encode(), hashlib.sha256)
    digest = signed_hmac_sha256.digest()
    return base64.b64encode(digest).decode()
print(sign_string('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg==', "my message"))
print(sign_string('dGhlIHdpbmQgb2YgTXQuIEZ1amkK', "my message"))
print(sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message"))
```
JavaScript
----------
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/hmac-sha256.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/enc-base64.min.js"></script>
<script>
function sign_string(key_b64, to_sign) {
    key = atob(key_b64)
    var hash = CryptoJS.HmacSHA256(to_sign, key);
    var hashInBase64 = CryptoJS.enc.Base64.stringify(hash);
    document.write(hashInBase64 + '<br>');
}
sign_string('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg==', "my message")
sign_string('dGhlIHdpbmQgb2YgTXQuIEZ1amkK', "my message")
sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message")
</script>
```
Outputs
-------
**Python**
```
TdhfUQfym16HyWQ8wxQeNVvJKr/tp5rLKHYQSpURLpw=
pQ5NzK3KIWjqc75AXBvWgLK8X0kZvjRHXrLAdxIN+Bk=
8srAvMucCd91CWI7DeCFjxJrEYllaaH63wmVlMk0W+I=
```
**JavaScript**
```
TdhfUQfym16HyWQ8wxQeNVvJKr/tp5rLKHYQSpURLpw=
pQ5NzK3KIWjqc75AXBvWgLK8X0kZvjRHXrLAdxIN+Bk=
31QxOpifnpFUpx/sn336ZmmjkYbLlNrs8NP9om6nPeY=
```
As you can see the first two are the same, while the last is different.
**How can I change the JavaScript code to behave the same as the python code?**
|
2020/06/29
|
[
"https://Stackoverflow.com/questions/62633601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1447243/"
] |
The base64 encoded secret you are trying to give to CryptoJs does not represent a valid UTF-8 string, which CryptoJS requires. You can use [this tool](https://onlineutf8tools.com/convert-hexadecimal-to-utf8) to check for validity. `atob()` is encoding agnostic and just converts byte by byte, and does not check if it's valid UTF-8.
Here I did the decoding of the base64 secret with CryptoJS's own decoder and it throws an error saying it's invalid UTF-8:
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/hmac-sha256.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/enc-base64.min.js"></script>
<script>
function sign_string(key_b64, to_sign) {
    var key = CryptoJS.enc.Base64.parse(key_b64).toString(CryptoJS.enc.Utf8);
    var hash = CryptoJS.HmacSHA256(to_sign, key);
    var hashInBase64 = CryptoJS.enc.Base64.stringify(hash);
    document.write(hashInBase64 + '<br>');
}
sign_string('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg==', "my message")
sign_string('dGhlIHdpbmQgb2YgTXQuIEZ1amkK', "my message")
sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message")
</script>
```
I also found a way you can use raw bytes for the key. This works for the last key but not for the first two.
```
var key = CryptoJS.enc.Hex.parse(toHex(atob(key_b64)));
```
Now if you combine these two methods you can have a real solution. This final code gives identical output as python:
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/hmac-sha256.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/enc-base64.min.js"></script>
<script>
function sign_string(key_b64, to_sign) {
    try {
        var key = CryptoJS.enc.Base64.parse(key_b64).toString(CryptoJS.enc.Utf8);
    }
    catch {
        var key = CryptoJS.enc.Hex.parse(toHex(atob(key_b64)));
    }
    var hash = CryptoJS.HmacSHA256(to_sign, key);
    var hashInBase64 = CryptoJS.enc.Base64.stringify(hash);
    document.write(hashInBase64 + '<br>');
}

function toHex(str) {
    var result = '';
    for (var i = 0; i < str.length; i++) {
        if (str.charCodeAt(i).toString(16).length === 1) {
            result += '0' + str.charCodeAt(i).toString(16);
        } else {
            result += str.charCodeAt(i).toString(16);
        }
    }
    return result;
}
sign_string('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg==', "my message")
sign_string('dGhlIHdpbmQgb2YgTXQuIEZ1amkK', "my message")
sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message")
sign_string('xTsHZGfWUmnIpSu+TaVraECU88O3j9qVjlwTWGb/C8k=', "my message")
</script>
```
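To see which keys are affected, a quick Python check (a sketch mirroring the question's own decode step; the helper name `decodes_as_utf8` is mine) shows that only the third key's decoded bytes are invalid UTF-8:

```python
import base64

def decodes_as_utf8(key_b64):
    """Return True if the base64-decoded key bytes are valid UTF-8."""
    try:
        base64.b64decode(key_b64).decode('utf-8')
        return True
    except UnicodeDecodeError:
        return False

print(decodes_as_utf8('VGhpcyBpcyBhIHByaW50YWJsZSBzdHJpbmcuCg=='))  # True
print(decodes_as_utf8('dGhlIHdpbmQgb2YgTXQuIEZ1amkK'))              # True
print(decodes_as_utf8('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8'))  # False
```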
|
In the third example you are using different parameters in the python and JavaScript versions.
In Python:
`sign_string('xTsHZGfWUmnIpSu+TaVraECU88O3j9qVjlwTWGb/C8k=', "my message")`
In JavaScript:
`sign_string('pkmNNJw3alrpIBi5t5Pxuym00M211oN86IhLZVT8', "my message")`
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I just import basic Python libraries in it, like pandas, it says "no packages available". I am not sure why that is, when the same environment has all those packages, and in the same directory the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, Jupyter notebook doesn't show env as the interpreter
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
I have found a fairly simple way to do this.
First, from your Anaconda Prompt, follow the steps on the official TensorFlow site - [here](https://www.tensorflow.org/install/install_windows). Follow the steps exactly, with no deviation.
Then open Anaconda Navigator. In Anaconda Navigator, go to the "Applications on" section. In the drop-down list, after following the above steps, you should see an entry named tensorflow. Select tensorflow and let the environment load.
Then select Jupyter Notebook in this new context, install it, and let the installation finish.
After that, you can run Jupyter Notebook like a regular notebook in the tensorflow environment.
|
It is better to create a new environment with a new name ($newenv): `conda create -n $newenv tensorflow`
Then, using Anaconda Navigator, under the Environments tab you can find newenv in the middle column.
Click the play button to open a terminal and type: `activate $newenv`
Then install TensorFlow inside the new environment by typing: `pip install tensorflow`
Now you have TensorFlow inside the new environment, so install Jupyter by typing: `pip install jupyter notebook`
Then simply type `jupyter notebook` to run the Jupyter notebook.
Inside the Jupyter notebook, type: `import tensorflow as tf`
To test the TensorFlow installation you can use [THIS LINK](https://www.tensorflow.org/tutorials)
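Neither approach above mentions it, but a commonly recommended alternative (a sketch, assuming a conda setup like the question's) is to register the environment's interpreter as a named Jupyter kernel, so the notebook can select it no matter which environment launched Jupyter:

```shell
# Register the env's Python as a Jupyter kernel (run inside the env).
source activate tensorflow
pip install ipykernel
python -m ipykernel install --user --name tensorflow --display-name "Python (tensorflow)"
# Then pick "Python (tensorflow)" from Kernel -> Change kernel in the notebook.
```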
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I just import basic Python libraries in it, like pandas, it says "no packages available". I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter.
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
I ran into the same situation. This is how I sorted it out:
1. Install Anaconda
2. Create a virtual environment - `conda create -n tensorflow`
3. Go inside your virtual environment - (on macOS/Linux:) `source activate tensorflow` (on Windows: `activate tensorflow`)
4. Inside that, install TensorFlow. You can install it using `pip`
5. Finish the install
Then, the next time you launch it:
6. If you are not inside the virtual environment, type `source activate tensorflow`
7. Then, inside this environment, install your Jupyter Notebook and Pandas libraries again, because they may be missing from the virtual environment
Inside the virtual environment just type:
8. `pip install jupyter notebook`
9. `pip install pandas`
Then you can launch Jupyter Notebook by typing:
10. `jupyter notebook`
11. Select the correct kernel (Python 3 or 2)
12. Then import those modules
|
For Anaconda users on Windows 10, and for those who recently updated their Anaconda environment, TensorFlow may have trouble activating or initializing.
Here is the solution which I explored and which worked for me:
* Uninstall your current Anaconda installation and delete all existing files associated with Anaconda from C:\Users (or wherever you installed it).
* Download Anaconda (<https://www.anaconda.com/download/?lang=en-us#windows>)
* While installing, check the "Add Anaconda to my PATH environment variable"
* After installing, open the Anaconda command prompt to install TensorFlow using these steps:
* Create a conda environment named tensorflow by invoking the following command:
>
> conda create -n tensorflow python=3.5
> (Use this command even if you are using python 3.6 because TensorFlow will get upgraded in the following steps)
>
>
>
* Activate the conda environment by issuing the following command:
>
> activate tensorflow
> After this step, the command prompt will change to (tensorflow)
>
>
>
* After activating, upgrade tensorflow using this command:
>
> pip install --ignore-installed --upgrade tensorflow
> Now you have successfully installed the CPU version of TensorFlow.
>
>
>
* Close the Anaconda command prompt and open it again and activate the tensorflow environment using 'activate tensorflow' command.
* Inside the tensorflow environment, install the following libraries using the commands:
pip install jupyter
pip install keras
pip install pandas
pip install pandas-datareader
pip install matplotlib
pip install scipy
pip install scikit-learn
* Now your tensorflow environment contains all the common libraries used in deep learning.
* Congrats, these libraries will make you ready to build deep neural nets. If you need more libraries, install them with the same command: `pip install <libraryname>`
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I just import basic Python libraries in it, like pandas, it says "no packages available". I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter.
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
1. Install TensorFlow by running these commands in the Anaconda shell or in a console:
```
conda create -n tensorflow python=3.5
activate tensorflow
conda install pandas matplotlib jupyter notebook scipy scikit-learn
pip install tensorflow
```
2. Close the console, then reopen it and type these commands:
```
activate tensorflow
jupyter notebook
```
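After those commands, a quick check inside the new notebook (a hedged sketch; the package names are taken from the commands above) confirms that the kernel can actually see them:

```python
import importlib

# Try to import each package installed above and report its version; a
# failure here means the notebook kernel is not the environment's Python.
for name in ("pandas", "matplotlib", "scipy", "sklearn"):
    try:
        mod = importlib.import_module(name)
        print(name, getattr(mod, "__version__", "version unknown"))
    except ImportError:
        print(name, "is NOT visible to this kernel")
```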
|
I had to install it using conda's pip3. Just start jupyter-notebook and execute the following:
```
import sys
sys.executable
```
This will give you something like this
```
/home/<user>/anaconda3/bin/python
```
Now in a terminal execute the following (using pip3 from the above path where we found our python)
```
/home/<user>/anaconda3/bin/pip3 install tensorflow
```
This basically installs TensorFlow into the Conda environment using Conda's own pip3 installer.
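The same idea can be automated: derive the pip that belongs to the interpreter the notebook reports (a sketch that assumes the Anaconda layout, where pip sits next to the python binary):

```python
import sys
from pathlib import Path

# Build the path of the pip executable that matches sys.executable, so an
# install with it lands where this interpreter (and the notebook) can see it.
pip_path = Path(sys.executable).with_name("pip")
print(f"{pip_path} install tensorflow   # run this line in a terminal")
```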
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I just import basic Python libraries in it, like pandas, it says "no packages available". I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter.
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
I have found a fairly simple way to do this.
Initially, through your Anaconda Prompt, you can follow the steps in this official Tensorflow site - [here](https://www.tensorflow.org/install/install_windows). You have to follow the steps as is, no deviation.
Later, open Anaconda Navigator. In Anaconda Navigator, go to the "Applications on" section. After following the above steps, you should see a tensorflow entry in its drop-down list. Select tensorflow and let the environment load.
Then select Jupyter Notebook in this new context, install it, and let the installation finish.
After that you can run the Jupyter notebook like the regular notebook in tensorflow environment.
|
Although it's a long time after this question was asked: I searched a lot for the same problem and couldn't find the existing solutions helpful, so I'm writing down what fixed my trouble for anyone with the same issue:
The point is, Jupyter should be installed in your virtual environment, meaning, after activating the `tensorflow` environment, run the following in the command prompt (**in `tensorflow` virtual environment**):
```
conda install jupyter
jupyter notebook
```
and then Jupyter will pop up with the environment's packages available.
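An alternative that avoids path juggling entirely (a minimal sketch, not part of the original answer): invoke pip through the interpreter itself, so the package is guaranteed to land in the environment the kernel uses:

```python
import subprocess
import sys

def install(package: str) -> None:
    """Install a package into the exact interpreter running this code."""
    # "python -m pip" pins the install to sys.executable, whatever "pip"
    # happens to resolve to on PATH.
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

# install("pandas")  # uncomment inside the activated environment
```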
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I just import basic Python libraries in it, like pandas, it says "no packages available". I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter.
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
1. Install TensorFlow by running these commands in the Anaconda shell or in a console:
```
conda create -n tensorflow python=3.5
activate tensorflow
conda install pandas matplotlib jupyter notebook scipy scikit-learn
pip install tensorflow
```
2. Close the console, then reopen it and type these commands:
```
activate tensorflow
jupyter notebook
```
|
I have found a fairly simple way to do this.
Initially, through your Anaconda Prompt, you can follow the steps in this official Tensorflow site - [here](https://www.tensorflow.org/install/install_windows). You have to follow the steps as is, no deviation.
Later, open Anaconda Navigator. In Anaconda Navigator, go to the "Applications on" section. After following the above steps, you should see a tensorflow entry in its drop-down list. Select tensorflow and let the environment load.
Then select Jupyter Notebook in this new context, install it, and let the installation finish.
After that you can run the Jupyter notebook like the regular notebook in tensorflow environment.
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I just import basic Python libraries in it, like pandas, it says "no packages available". I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter.
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
Although it's a long time after this question was asked: I searched a lot for the same problem and couldn't find the existing solutions helpful, so I'm writing down what fixed my trouble for anyone with the same issue:
The point is, Jupyter should be installed in your virtual environment, meaning, after activating the `tensorflow` environment, run the following in the command prompt (**in `tensorflow` virtual environment**):
```
conda install jupyter
jupyter notebook
```
and then Jupyter will pop up with the environment's packages available.
|
I had to install it using conda's pip3. Just start jupyter-notebook and execute the following:
```
import sys
sys.executable
```
This will give you something like this
```
/home/<user>/anaconda3/bin/python
```
Now in a terminal execute the following (using pip3 from the above path where we found our python)
```
/home/<user>/anaconda3/bin/pip3 install tensorflow
```
This basically installs TensorFlow into the Conda environment using Conda's own pip3 installer.
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I just import basic Python libraries in it, like pandas, it says "no packages available". I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter.
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
1. Install TensorFlow by running these commands in the Anaconda shell or in a console:
```
conda create -n tensorflow python=3.5
activate tensorflow
conda install pandas matplotlib jupyter notebook scipy scikit-learn
pip install tensorflow
```
2. Close the console, then reopen it and type these commands:
```
activate tensorflow
jupyter notebook
```
|
Although it's a long time after this question was asked: I searched a lot for the same problem and couldn't find the existing solutions helpful, so I'm writing down what fixed my trouble for anyone with the same issue:
The point is, Jupyter should be installed in your virtual environment, meaning, after activating the `tensorflow` environment, run the following in the command prompt (**in `tensorflow` virtual environment**):
```
conda install jupyter
jupyter notebook
```
and then Jupyter will pop up with the environment's packages available.
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I just import basic Python libraries in it, like pandas, it says "no packages available". I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
1. Install Anaconda
2. Run Anaconda command prompt
3. write "activate tensorflow" for windows
4. pip install tensorflow
5. pip install jupyter notebook
6. jupyter notebook.
Only this solution worked for me, after trying 7-8 others.
Using Windows platform.
|
I would suggest launching Jupyter lab/notebook from your base environment and selecting the right kernel.
[How to add conda environment to jupyter lab](https://stackoverflow.com/questions/53004311/how-to-add-conda-environment-to-jupyter-lab) should contains the info needed to add the kernel to your base environment.
Disclaimer : I asked the question in the topic I linked, but I feel it answers your problem too.
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I import basic Python libraries in it, like pandas, it says the package is not available. I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
If you have Anaconda, I believe the following short video shows all the details for Mac (it is very similar for Windows users as well): just open Anaconda Navigator and everything is almost the same.
<https://www.youtube.com/watch?v=gDzAm25CORk>
Then go to the Jupyter notebook and run
```
!pip install tensorflow
```
Then
```
import tensorflow as tf
```
It worked for me! :)
|
I had to install it using conda's pip3. Just start jupyter-notebook and execute the following:
```
import sys
sys.executable
```
This will give you something like this
```
/home/<user>/anaconda3/bin/python
```
Now in a terminal execute the following (using pip3 from the above path where we found our python)
```
/home/<user>/anaconda3/bin/pip3 install tensorflow
```
This basically installs TensorFlow in the conda environment using that environment's pip3 installer.
|
43,216,256
|
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment.
Here is what I did.
In Anaconda, I created an environment called `tensorflow` as follows
```
conda create -n tensorflow
```
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
```
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 2
html5lib 0.999 py36_0
icu 54.1 0
ipykernel 4.5.2 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
jinja2 2.9.5 py36_0
jsonschema 2.5.1 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.0 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
Keras 2.0.2 <pip>
libpng 1.6.27 0
markupsafe 0.23 py36_2
matplotlib 2.0.0 np112py36_0
mistune 0.7.4 py36_0
mkl 2017.0.1 0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
notebook 4.4.1 py36_0
numpy 1.12.1 <pip>
numpy 1.12.1 py36_0
openssl 1.0.2k 1
packaging 16.8 <pip>
pandas 0.19.2 np112py36_1
pandocfilters 1.4.1 py36_0
path.py 10.1 py36_0
pexpect 4.2.1 py36_0
pickleshare 0.7.4 py36_0
pip 9.0.1 py36_1
prompt_toolkit 1.0.13 py36_0
protobuf 3.2.0 <pip>
ptyprocess 0.5.1 py36_0
pygments 2.2.0 py36_0
pyparsing 2.1.4 py36_0
pyparsing 2.2.0 <pip>
pyqt 5.6.0 py36_2
python 3.6.1 0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
PyYAML 3.12 <pip>
pyzmq 16.0.2 py36_0
qt 5.6.2 0
qtconsole 4.3.0 py36_0
readline 6.2 2
scikit-learn 0.18.1 np112py36_1
scipy 0.19.0 np112py36_0
setuptools 34.3.3 <pip>
setuptools 27.2.0 py36_0
simplegeneric 0.8.1 py36_1
sip 4.18 py36_0
six 1.10.0 <pip>
six 1.10.0 py36_0
sqlite 3.13.0 0
tensorflow 1.0.1 <pip>
terminado 0.6 py36_0
testpath 0.3 py36_0
Theano 0.9.0 <pip>
tk 8.5.18 0
tornado 4.4.2 py36_0
traitlets 4.3.2 py36_0
wcwidth 0.1.7 py36_0
wheel 0.29.0 <pip>
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
xz 5.2.2 1
zlib 1.2.8 3
(tensorflow) SFOM00618927A:dl i854319$
```
You can see that `jupyter` is also installed.
Now, when I open up the Python interpreter in this environment and I run the basic TensorFlow command, it all works fine. However, I wanted to do the same thing in the Jupyter notebook. So, I created a new directory (outside of this environment).
```
mkdir dl
```
In that, I activated `tensorflow` environment
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ conda list
```
And I can see the same list of packages in that.
Now, I open up a Jupyter notebook
```
SFOM00618927A:dl i854319$ source activate tensorflow
(tensorflow) SFOM00618927A:dl i854319$ jupyter notebook
```
It opens up a new notebook in the browser. But when I import basic Python libraries in it, like pandas, it says the package is not available. I am not sure why that is, when the same environment has all those packages and, in the same directory, the Python interpreter shows all of them.
```
import pandas
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-d6ac987968b6> in <module>()
----> 1 import pandas
ModuleNotFoundError: No module named 'pandas'
```
Why is the Jupyter notebook not picking up these modules?
So, the Jupyter notebook doesn't show the env as the interpreter
[](https://i.stack.imgur.com/whyaq.png)
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43216256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2769240/"
] |
1. Install Anaconda
2. Run Anaconda command prompt
3. write "activate tensorflow" for windows
4. pip install tensorflow
5. pip install jupyter notebook
6. jupyter notebook.
Only this solution worked for me, after trying 7-8 others.
Using Windows platform.
|
Although it's a long time after this question was asked, I searched a lot for the same problem and couldn't find the existing solutions helpful, so I'm writing what fixed my trouble for anyone with the same issue:
The point is, Jupyter should be installed in your virtual environment, meaning: after activating the `tensorflow` environment, run the following in the command prompt (**in the `tensorflow` virtual environment**):
```
conda install jupyter
jupyter notebook
```
and then Jupyter will pop up.
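A quick way to verify the fix worked is to check, from inside a notebook cell, which interpreter the kernel is actually running (a sketch; the `envs/tensorflow` path is just an illustration based on this thread, yours will differ):

```python
import sys

# If Jupyter was installed and launched inside the activated environment,
# both paths should point inside that environment's directory
# (e.g. .../envs/tensorflow/ -- the exact path is just an illustration).
print(sys.executable)
print(sys.prefix)
```

If the paths point at the base Anaconda install instead, the notebook is still using the wrong kernel.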
|
19,151,734
|
I have data in the following format:
```
user,item,rating
1,1,3
1,2,2
2,1,2
2,4,1
```
and so on
I want to convert this in matrix form
So, the out put is like this
```
Item--> 1,2,3,4....
user
1 3,2,0,0....
2 2,0,0,1
```
....and so on..
How do I do this in Python?
Thanks
|
2013/10/03
|
[
"https://Stackoverflow.com/questions/19151734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902885/"
] |
```
data = [
    (1, 1, 3),
    (1, 2, 2),
    (2, 1, 2),
    (2, 4, 1),
]
# import csv
# with open('data.csv') as f:
#     next(f)  # Skip header
#     data = [tuple(map(int, row)) for row in csv.reader(f)]

import numpy as np

n = max(max(user, item) for user, item, rating in data)  # Get size of matrix
matrix = np.zeros((n, n), dtype=int)
for user, item, rating in data:
    matrix[user-1][item-1] = rating  # Convert to 0-based index.

for row in matrix:
    print(row)
```
prints
```
[3 2 0 0]
[2 0 0 1]
[0 0 0 0]
[0 0 0 0]
```
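If pandas is available, a pivot does the same reshaping in one step. This is a sketch, not part of the original answer; the `reindex` is only needed to include item columns (like 3) that never appear in the data:

```python
import pandas as pd

data = [(1, 1, 3), (1, 2, 2), (2, 1, 2), (2, 4, 1)]
df = pd.DataFrame(data, columns=["user", "item", "rating"])

# user x item matrix; unobserved (user, item) pairs become 0
matrix = (df.pivot(index="user", columns="item", values="rating")
            .reindex(columns=range(1, df["item"].max() + 1), fill_value=0)
            .fillna(0)
            .astype(int))
print(matrix)
```

Unlike the numpy version, the result keeps the original 1-based user and item labels as the index and columns.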
|
A different approach from @falsetru's: do you read from a file and write to a file? It may work with a dictionary:
```
from collections import defaultdict

valdict = defaultdict(int)
nuser = 0
nitem = 0
for line in infile:
    user, item, rating = map(int, line.strip().split(","))
    valdict[user, item] = rating
    nuser = max(nuser, user)
    nitem = max(nitem, item)

# header row of item ids
towrite = "," + ",".join(str(j) for j in range(1, nitem + 1)) + "\n"
for i in range(1, nuser + 1):
    towrite += str(i)
    for j in range(1, nitem + 1):
        towrite += "," + str(valdict[i, j])
    towrite += "\n"
outfile.write(towrite)
```
|
74,165,151
|
Let's say I have following python code:
```
import numpy as np
import matplotlib.pyplot as plt
fig=plt.figure()
ax=plt.axes(projection='3d')
x=y=np.linspace(1,10,100)
X,Y=np.meshgrid(x,y)
Z=np.sin(X)**3+np.cos(Y)**3
ax.plot_surface(X,Y,Z)
plt.show()
```
How do I calculate the gradient from this code and plot it? I am also confused about what the numpy.gradient() function exactly returns.
I have here the graph of the function.
[](https://i.stack.imgur.com/eAzOE.png)
|
2022/10/22
|
[
"https://Stackoverflow.com/questions/74165151",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12548284/"
] |
gradient is a vector. It has 2 components (in this case, since we are dealing with function ℝ²→ℝ, X,Y↦Z(X,Y)), one which is ∂Z/∂X, (also a function of X and Y), another which is ∂Z/∂Y.
So, `np.gradient` returns both. `np.gradient(Z)`, called with a 100×100 array Z, returns a list of two 100×100 arrays: the derivative along axis 0 and the derivative along axis 1. With the default meshgrid indexing, axis 0 varies with Y and axis 1 with X, so the list is [∂Z/∂Y, ∂Z/∂X].
As for how to plot it, it is up to you. How would you like to plot it? You could use the gradient to alter colors, for example.
Or draw arrows.
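A minimal runnable sketch (not from the original answer) showing which axis each returned array corresponds to, using a linear function whose partial derivatives are known exactly:

```python
import numpy as np

x = np.linspace(0.0, 4.0, 5)       # spacing 1.0
y = np.linspace(0.0, 3.0, 4)       # spacing 1.0
X, Y = np.meshgrid(x, y)           # shape (4, 5): rows follow y, columns follow x
Z = 2 * X + 3 * Y                  # dZ/dX = 2, dZ/dY = 3 everywhere

# One spacing argument per axis: axis 0 -> y, axis 1 -> x
dZ_dy, dZ_dx = np.gradient(Z, y, x)
print(dZ_dy[0, 0], dZ_dx[0, 0])    # 3.0 2.0
```

Because the function is linear, the finite differences are exact, which makes the axis order easy to confirm.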
|
Here is the practical way to achieve it with Python:
```
import numpy as np
import matplotlib.pyplot as plt
# Some scalar function of interest:
def z(x, y):
return np.power(np.sin(x), 3) + np.power(np.cos(y), 3)
# Grid for gradient:
xmin, xmax = -7, 7
x = y = np.linspace(xmin, xmax, 100)
X, Y = np.meshgrid(x, y)
Z = z(X, Y)

# Compute gradient:
dZ = np.gradient(Z, x, y)
# Gradient magnitude (arrow colors):
M = np.hypot(*dZ)
# Grid for contour:
xh = yh = np.linspace(xmin, xmax, 400)
Xh, Yh = np.meshgrid(xh, yh)
Zh = z(Xh, Yh)
# Plotting gradient & contour:
fig, axe = plt.subplots(figsize=(12, 12))
axe.contour(Xh, Yh, Zh, 30, cmap="jet", linewidths=0.75)
axe.quiver(X, Y, dZ[1], dZ[0], M, cmap="jet", units='xy', pivot='tail', width=0.03, scale=5)
axe.set_aspect("equal") # Don't stretch the scale
axe.grid()
```
It renders:
[](https://i.stack.imgur.com/Fg42y.jpg)
There are two visualizations of interest to see the gradient:
* Quiver which renders the vector field, see [`plt.quiver`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.quiver.html) and [advanced quiver](https://matplotlib.org/stable/gallery/images_contours_and_fields/quiver_demo.html#sphx-glr-gallery-images-contours-and-fields-quiver-demo-py) to tune arrows;
* Contour which renders isopleth curves (levels), see [`plt.contour`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.contour.html) to adapt levels.
As expected [gradient is orthogonal to contour](https://math.stackexchange.com/questions/1059293/proving-gradient-of-a-function-is-always-perpendicular-to-the-contour-lines) curves.
|
60,473,135
|
I am using python 3.7 on Spyder. Here is my simple code to store string elements ['a','b'] in a list L as sympy symbols. As output, I have new list L with two Symbols [a,b] in it. But when I try to use these symbols in my calculation I get an error saying a & b are not defined. Any suggestions on how can I fix this?
Basically, what I want to do is use string elements in a list as symbols for sympy calculations. Any suggestions on other methods to do this are welcomed. Thank you.
```
import sympy as sm
L=['a','b']
print(L)
for j in range(len(L)):
L[j] = sm.symbols(L[j])
B=sm.solve(a**2 - 1, a)
print(B)
```
Here is the error:
```
runfile('C:/Users/bhise/.spyder-py3/temp.py', wdir='C:/Users/bhise/.spyder-py3')
['a', 'b']
Traceback (most recent call last):
File "<ipython-input-43-6826047bb7df>", line 1, in <module>
runfile('C:/Users/bhise/.spyder-py3/temp.py',
wdir='C:/Users/bhise/.spyder-py3')
File "C:\Users\bhise\Anaconda3\lib\site-
packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\Users\bhise\Anaconda3\lib\site-
packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/bhise/.spyder-py3/temp.py", line 10, in <module>
B=sm.solve(a**2 - 1, a)
NameError: name 'a' is not defined
```
|
2020/03/01
|
[
"https://Stackoverflow.com/questions/60473135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11039234/"
] |
Not knowing the second template argument to `std::array<>` means your `test` class should be templated as well.
```
template <std::size_t N>
class test
{
void function(const std::array<int, N> & myarr)
{
// ...
}
};
```
By the way, it's better to pass `myarr` as `const &`.
|
You could use an approach like:
```
#include<array>
using namespace std;
template <size_t N>
class test
{
void function(const array<int, N> & myarr)
{
/* My code */
}
};
```
But keep in mind that `std::array` is not a dynamic array. You have to know the sizes at compile time.
If you get to know the sizes later at runtime of your program you should consider using `std::vector` instead:
```
#include<vector>
using namespace std;
class test
{
void function(const vector<int> & myvec)
{
/* My code */
}
};
```
In that variant you don't need to pass the size at all.
|
59,410,455
|
I have an application with python, flask, and flask\_mysqldb. When I execute the first query, everything works fine, but the second query always throws an error (2006, server has gone away). Everything I found on the web says this error is a timeout issue, which doesn't seem to be my case because:
1 - I run the second query just a few seconds after running the first
2 - My timeout configuration is set to 8 hours
I don't know what else this could be, here is the code that I am running:
```
import os
from flask import Flask
from flask import render_template
from flaskext.mysql import MySQL
import endpoints.usuario as usuario
app = Flask(__name__, static_folder='/root/sftp/atom-projects/flask-example/public/')
app.config['MYSQL_HOST'] = '123'
app.config['MYSQL_USER'] = '123'
app.config['MYSQL_PASSWORD'] = '123'
app.config['MYSQL_DB'] = '123'
app.add_url_rule('/usuarios', 'usuarios', usuario.list_all, methods=['GET'])
@app.errorhandler(404)
def not_found(e):
return app.send_static_file('index.html')
```
here is the code for the usuarios file:
```
from flask_mysqldb import MySQL
from flask import Flask, make_response
from flask import current_app
from flask import request
import bcrypt
def list_all():
mysql = MySQL(current_app)
cursor = mysql.connection.cursor()
cursor.execute("select * from usuario")
records = cursor.fetchall()
usuarios = []
for row in records:
usuarios.append({"id":row[0], "nome":row[1], "email":row[2], "senha":row[3], "tipo":row[4]})
for usuario in usuarios:
tipo = None
cursor.execute("select * from tipo_usuario where id = %s", [usuario['tipo']])
records = cursor.fetchall()
for row in records:
usuario['tipo'] = {"id":row[0], "permissao":row[1]}
return make_response({"msg":'', "error":False, "data":usuarios})
```
I have this running on nginx + gunicorn, here is the log :
```
gunicorn -w 1 --reload main:app
[2019-12-19 12:53:21 +0000] [5356] [INFO] Starting gunicorn 20.0.4
[2019-12-19 12:53:21 +0000] [5356] [INFO] Listening at: http://127.0.0.1:8000 (5356)
[2019-12-19 12:53:21 +0000] [5356] [INFO] Using worker: sync
[2019-12-19 12:53:21 +0000] [5359] [INFO] Booting worker with pid: 5359
[2019-12-19 12:53:28 +0000] [5359] [ERROR] Error handling request /usuarios
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/sync.py", line 134, in handle
self.handle_request(listener, req, client, addr)
File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/sync.py", line 175, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 2463, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 2457, in wsgi_app
ctx.auto_pop(error)
File "/usr/local/lib/python3.5/dist-packages/flask/ctx.py", line 452, in auto_pop
self.pop(exc)
File "/usr/local/lib/python3.5/dist-packages/flask/ctx.py", line 438, in pop
app_ctx.pop(exc)
File "/usr/local/lib/python3.5/dist-packages/flask/ctx.py", line 238, in pop
self.app.do_teardown_appcontext(exc)
File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 2320, in do_teardown_appcontext
func(exc)
File "/usr/local/lib/python3.5/dist-packages/flask_mysqldb/__init__.py", line 100, in teardown
ctx.mysql_db.close()
MySQLdb._exceptions.OperationalError: (2006, '')
```
If I run it with more workers, I can run a few more (depending on how many workers) queries, what could be causing this?
|
2019/12/19
|
[
"https://Stackoverflow.com/questions/59410455",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5844134/"
] |
>
> I've noticed that the same data stored inside integer had reversed byte order than when stored as char
>
>
>
This implies that the file was stored with different byte endianness than what the CPU uses. In the example output, you can see that the CPU uses little-endian (least significant byte first). Given that the order was the opposite in the file, we can deduce that the file uses big-endian (most significant byte first). Big-endian is commonly used in data exchange formats.
>
> I would like to know why is that and eventualy how to change that.
>
>
>
POSIX has standard functions for converting big endian to native endianness (the `ntoh` family of functions). Standard C++ does not, but it is fairly straightforward to implement. However, there are some mistakes that are easy to make, so it is safer to use an existing library.
|
As @Mat briefly explained, you're running into something called "endianness". There's "Big Endian", where the most significant bytes come first (yes, the names are a bit counter-intuitive), and "Little Endian", where the least significant bytes come first.
>
> For example: Arabic numerals are big endian. "1234" is "one-thousand two hundred thirty four", not "four thousand three hundred twenty one". The most significant digits come first.
>
>
>
I'd be shocked to find that there aren't dozens of different open source functions handling this problem out there.
A quick google search turned up: <https://www.boost.org/doc/libs/1_61_0/libs/endian/doc/index.html>
This is caused by different CPU architectures. Some are big endian, some are little. There's almost certainly a list at Mat's linked wikipedia page. When they write out their bits to their own storage, they often write them "natively", in their own endian format. This could be a big problem when a server talks to clients using a variety of cpu types (every web-server ever, most cross-platform networked games, etc.). In those cases, the communication protocol must specify which endianness they're using and then the software must translate as needed.
### Edit the edit:
"Endianness" should be called "startianness". Counter intuitive names are bad. "Principle of Least Surprise" good.
Ah well.
When it matters just use an existing library. POSIX has a collection of not-terribly-standardized-names for functions that do the work. There's the boost library I linked above. I've used proprietary libraries on a couple projects. I'm quite sure there are others out there as well, many open sourced.
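Although the answer above is in a C++ context, the byte-order translation it describes is easy to demonstrate with Python's standard `struct` module (a sketch, not part of the original answer):

```python
import struct

value = 0x01020304

big = struct.pack(">I", value)     # '>' = big-endian (common in file/network formats)
little = struct.pack("<I", value)  # '<' = little-endian (e.g. x86 native order)

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Reading big-endian bytes from a file back into a native integer:
restored = struct.unpack(">I", big)[0]
print(restored == value)  # True
```

The reversed hex strings are exactly the byte-order flip the question observed between the file and the CPU.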
|
26,266,437
|
I just installed python 2.7 and also pip to the 2.7 site package.
When I get the version with:
```
pip -V
```
It shows:
```
pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6)
```
How do I use the 2.7 version of pip located at:
```
/usr/local/lib/python2.7/site-packages
```
|
2014/10/08
|
[
"https://Stackoverflow.com/questions/26266437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
There should be a binary called "pip2.7" installed at some location included within your $PATH variable.
You can find that out by typing
```
which pip2.7
```
This should print something like '/usr/local/bin/pip2.7' to your stdout. If it does not print anything like this, it is not installed. In that case, install it by running
```
$ wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
$ sudo python2.7 get-pip.py
```
Now, you should be all set, and
```
which pip2.7
```
should return the correct output.
|
An alternative is to call the `pip` module by using python2.7, as below:
```
python2.7 -m pip <commands>
```
For example, you could run `python2.7 -m pip install <package>` to install your favorite python modules. Here is a reference: <https://stackoverflow.com/a/50017310/4256346>.
In case the pip module has not yet been installed for this version of python, you can run the following:
```
python2.7 -m ensurepip
```
Running this command will "bootstrap the pip installer". Note that running this may require administrative privileges (i.e. `sudo`). Here is a reference: <https://docs.python.org/2.7/library/ensurepip.html> and another reference <https://stackoverflow.com/a/46631019/4256346>.
|
26,266,437
|
I just installed python 2.7 and also pip to the 2.7 site package.
When I get the version with:
```
pip -V
```
It shows:
```
pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6)
```
How do I use the 2.7 version of pip located at:
```
/usr/local/lib/python2.7/site-packages
```
|
2014/10/08
|
[
"https://Stackoverflow.com/questions/26266437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
There should be a binary called "pip2.7" installed at some location included within your $PATH variable.
You can find that out by typing
```
which pip2.7
```
This should print something like '/usr/local/bin/pip2.7' to your stdout. If it does not print anything like this, it is not installed. In that case, install it by running
```
$ wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
$ sudo python2.7 get-pip.py
```
Now, you should be all set, and
```
which pip2.7
```
should return the correct output.
|
as noted [here](https://stackoverflow.com/questions/61699983/upgrade-to-ubuntu-20-04-killed-pip/61974430#61974430), this is what worked best for me:
```
sudo apt-get install python3 python3-pip python3-setuptools
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
```
|
26,266,437
|
I just installed python 2.7 and also pip to the 2.7 site package.
When I get the version with:
```
pip -V
```
It shows:
```
pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6)
```
How do I use the 2.7 version of pip located at:
```
/usr/local/lib/python2.7/site-packages
```
|
2014/10/08
|
[
"https://Stackoverflow.com/questions/26266437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
There should be a binary called "pip2.7" installed at some location included within your $PATH variable.
You can find that out by typing
```
which pip2.7
```
This should print something like '/usr/local/bin/pip2.7' to your stdout. If it does not print anything like this, it is not installed. In that case, install it by running
```
$ wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
$ sudo python2.7 get-pip.py
```
Now, you should be all set, and
```
which pip2.7
```
should return the correct output.
|
`pip` has now dropped support for Python 2, so you can no longer use pip with Python 2.
You can't find `python2-pip` in `apt-get` anymore, and you won't get `pip` when installing Python 2 from source.
You can still install Python modules using `apt-get`. To install a Python module, prepend 'python-' to its name:
```
apt-get install python-six # install six
```
|
26,266,437
|
I just installed python 2.7 and also pip to the 2.7 site package.
When I get the version with:
```
pip -V
```
It shows:
```
pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6)
```
How do I use the 2.7 version of pip located at:
```
/usr/local/lib/python2.7/site-packages
```
|
2014/10/08
|
[
"https://Stackoverflow.com/questions/26266437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
An alternative is to call the `pip` module by using python2.7, as below:
```
python2.7 -m pip <commands>
```
For example, you could run `python2.7 -m pip install <package>` to install your favorite python modules. Here is a reference: <https://stackoverflow.com/a/50017310/4256346>.
In case the pip module has not yet been installed for this version of python, you can run the following:
```
python2.7 -m ensurepip
```
Running this command will "bootstrap the pip installer". Note that running this may require administrative privileges (i.e. `sudo`). Here is a reference: <https://docs.python.org/2.7/library/ensurepip.html> and another reference <https://stackoverflow.com/a/46631019/4256346>.
|
as noted [here](https://stackoverflow.com/questions/61699983/upgrade-to-ubuntu-20-04-killed-pip/61974430#61974430), this is what worked best for me:
```
sudo apt-get install python3 python3-pip python3-setuptools
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
```
|
26,266,437
|
I just installed python 2.7 and also pip to the 2.7 site package.
When I get the version with:
```
pip -V
```
It shows:
```
pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6)
```
How do I use the 2.7 version of pip located at:
```
/usr/local/lib/python2.7/site-packages
```
|
2014/10/08
|
[
"https://Stackoverflow.com/questions/26266437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
An alternative is to call the `pip` module by using python2.7, as below:
```
python2.7 -m pip <commands>
```
For example, you could run `python2.7 -m pip install <package>` to install your favorite python modules. Here is a reference: <https://stackoverflow.com/a/50017310/4256346>.
In case the pip module has not yet been installed for this version of python, you can run the following:
```
python2.7 -m ensurepip
```
Running this command will "bootstrap the pip installer". Note that running this may require administrative privileges (i.e. `sudo`). Here is a reference: <https://docs.python.org/2.7/library/ensurepip.html> and another reference <https://stackoverflow.com/a/46631019/4256346>.
|
`pip` has now dropped support for Python 2, so you can no longer use pip with Python 2.
You can't find `python2-pip` in `apt-get` anymore, and you won't get `pip` when installing Python 2 from source.
You can still install Python modules using `apt-get`. To install a Python module, prepend 'python-' to its name:
```
apt-get install python-six # install six
```
|
26,266,437
|
I just installed python 2.7 and also pip to the 2.7 site package.
When I get the version with:
```
pip -V
```
It shows:
```
pip 1.3.1 from /usr/lib/python2.6/site-packages (python 2.6)
```
How do I use the 2.7 version of pip located at:
```
/usr/local/lib/python2.7/site-packages
```
|
2014/10/08
|
[
"https://Stackoverflow.com/questions/26266437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
as noted [here](https://stackoverflow.com/questions/61699983/upgrade-to-ubuntu-20-04-killed-pip/61974430#61974430), this is what worked best for me:
```
sudo apt-get install python3 python3-pip python3-setuptools
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
```
|
`pip` has now dropped support for Python 2, so you can no longer use pip with Python 2.
You can't find `python2-pip` in `apt-get` anymore, and you won't get `pip` when installing Python 2 from source.
You can still install Python modules using `apt-get`. To install a Python module, prepend 'python-' to its name:
```
apt-get install python-six # install six
```
|
64,882,005
|
I have a python list shown below. I want to remove all the elements after a specific character `''`
Note1: The number of elements before `''` can vary. I am developing a generic code.
Note2: There can be multiple `''`; I want to remove everything after the first `''`
Note3: Slice is not applicable because it supports only integers
Can someone please help me with this?
Thank you very much.
```
['iter,objective,inf_pr,inf_du,lg(mu),||d||,lg(rg),alpha_du,alpha_pr,ls',
'0,8.5770822e+000,1.35e-002,1.73e+001,-1.0,0.00e+000,-,0.00e+000,0.00e+000,0',
'1,8.3762931e+000,1.29e-002,1.13e+001,-1.0,9.25e+000,-,9.86e-001,4.62e-002f,2',
'5,8.0000031e+000,8.86e-010,1.45e-008,-5.7,1.88e-004,-,1.00e+000,1.00e+000h,1',
'6,7.9999994e+000,1.28e-013,2.18e-012,-8.6,2.31e-006,-,1.00e+000,1.00e+000h,1',
'',
'Number,of,Iterations....:,6',
'',
'(scaled),(unscaled)',
'Objective...............:,7.9999994450134029e+000,7.9999994450134029e+000',
'Dual,infeasibility......:,2.1781026770818554e-012,2.1781026770818554e-012',
'Constraint,violation....:,1.0658141036401503e-013,1.2789769243681803e-013',
'Complementarity.........:,2.5067022522763431e-009,2.5067022522763431e-009',
'Overall,NLP,error.......:,2.5067022522763431e-009,2.5067022522763431e-009',
'',
'',
```
|
2020/11/17
|
[
"https://Stackoverflow.com/questions/64882005",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14606112/"
] |
```
lst = ['a', 'b', 'c', '', 'd', 'e']
lst = lst[:lst.index('')]
# lst is now ['a', 'b', 'c']
```
Explanation: `list.index('')` finds the first instance of `''` in the list. `list[:x]` gives the first `x` elements of the list. This code will throw an exception if `''` is not in the list.
|
You have a list and want to delete everything after a value that meets some sort of criterion. You can scan the list for that value and delete the remaining slice. `list.index` will tell you the index of the first element that exactly matches some object like `""`.
```
test = ["foo", "bar", "baz", "", "quux"]
try:
    del test[test.index(""):]
except ValueError:
    pass
# test is now ["foo", "bar", "baz"]
```
If you have a more complex criterion, you can do your own scan:
```
test = ["foo", "bar", " % baz", "", "quux"]
for i, val in enumerate(test):
    if "%" in val:
        del test[i:]
        break
# test is now ["foo", "bar"]
```
If this list really comes from a file, you can look for the value as you read and short-circuit:
```
test = []
with open("foo.txt") as f:
for line in f:
line = line.strip()
if line == "":
break
test.append(line)
```
|
64,882,005
|
I have a python list shown below. I want to remove all the elements after a specific character `''`
Note1: The number of elements before `''` can vary. I am developing generic code.
Note2: There can be multiple `''` I want to remove after the first `''`
Note3: Slice is not applicable because it supports only integers
Can someone please help me with this?
Thank you very much.
```
['iter,objective,inf_pr,inf_du,lg(mu),||d||,lg(rg),alpha_du,alpha_pr,ls',
'0,8.5770822e+000,1.35e-002,1.73e+001,-1.0,0.00e+000,-,0.00e+000,0.00e+000,0',
'1,8.3762931e+000,1.29e-002,1.13e+001,-1.0,9.25e+000,-,9.86e-001,4.62e-002f,2',
'5,8.0000031e+000,8.86e-010,1.45e-008,-5.7,1.88e-004,-,1.00e+000,1.00e+000h,1',
'6,7.9999994e+000,1.28e-013,2.18e-012,-8.6,2.31e-006,-,1.00e+000,1.00e+000h,1',
'',
'Number,of,Iterations....:,6',
'',
'(scaled),(unscaled)',
'Objective...............:,7.9999994450134029e+000,7.9999994450134029e+000',
'Dual,infeasibility......:,2.1781026770818554e-012,2.1781026770818554e-012',
'Constraint,violation....:,1.0658141036401503e-013,1.2789769243681803e-013',
'Complementarity.........:,2.5067022522763431e-009,2.5067022522763431e-009',
'Overall,NLP,error.......:,2.5067022522763431e-009,2.5067022522763431e-009',
'',
'',
```
|
2020/11/17
|
[
"https://Stackoverflow.com/questions/64882005",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14606112/"
] |
```
lst = ['a', 'b', 'c', '', 'd', 'e']
lst = lst[:lst.index('')]
# lst is now ['a', 'b', 'c']
```
Explanation: `list.index('')` finds the first instance of `''` in the list. `list[:x]` gives the first `x` elements of the list. This code will throw an exception if `''` is not in the list.
|
First, you should find the first time `''` shows, by using `mylist.index('')`. This will indeed find the first show of it, as in `index()`'s documentation:
>
> Return first index of value.
>
>
>
Also, note the rest of the documentation:
>
> Raises ValueError if the value is not present.
>
>
>
Make sure to catch the error (or add `''` to the end of your list beforehand).
Now you can use slice `mylist[:mylist.index('')]`, or if you don't want to use it:
```
output = []
for i in range(mylist.index('')):
output.append(mylist[i])
mylist = output
```
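Putting the pieces together, a guarded version might look like this (a sketch; `truncate_at_blank` is a hypothetical helper name, not from the question):

```python
def truncate_at_blank(items):
    """Return items up to (not including) the first '', or unchanged if absent."""
    try:
        return items[:items.index('')]
    except ValueError:
        return items

print(truncate_at_blank(['a', 'b', '', 'c']))  # ['a', 'b']
print(truncate_at_blank(['a', 'b', 'c']))      # ['a', 'b', 'c']
```

This way the `ValueError` raised when `''` is missing is handled in one place.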
|
30,744,415
|
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors
```
nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.
obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h':
No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
And:
```
pycldmodule.cc
bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: '
strings.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do?
The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?).
If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
|
2015/06/09
|
[
"https://Stackoverflow.com/questions/30744415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2069136/"
] |
I would recommend installing Ubuntu (as an Ubuntu user); you can dual-boot. However, that isn't an answer.
MySQLClient (the fork for Python 3) is available as a precompiled binary here:
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
Try to find precompiled binaries for simplicity's sake. As far as troubleshooting the install goes, I've tried the recommended VC Studio 9.0 on fresh installs and it cannot find stdint.h (which, like yours, suggests it's more than broken).
|
I grew frustrated with trying to get python and other packages to compile/play nicely on Windows as well. Switching over to Ubuntu was a breath of fresh air, for sure.
The win32com package is made specifically for Windows hosts, so it could no longer be used, but there are other ways to achieve the same thing in Ubuntu.
Are you trying to target Windows specifically? What are you using win32com for?
|
30,744,415
|
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors
```
nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.
obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h':
No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
And:
```
pycldmodule.cc
bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: '
strings.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do?
The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?).
If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
|
2015/06/09
|
[
"https://Stackoverflow.com/questions/30744415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2069136/"
] |
I would recommend installing Ubuntu (as an Ubuntu user); you can dual-boot. However, that isn't an answer.
MySQLClient (the fork for Python 3) is available as a precompiled binary here:
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
Try to find precompiled binaries for simplicity's sake. As far as troubleshooting the install goes, I've tried the recommended VC Studio 9.0 on fresh installs and it cannot find stdint.h (which, like yours, suggests it's more than broken).
|
You could try <http://www.activestate.com/activepython/downloads> for Windows. It includes compiled binaries, avoiding the need for a C compiler.
|
30,744,415
|
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors
```
nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.
obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h':
No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
And:
```
pycldmodule.cc
bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: '
strings.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do?
The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?).
If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
|
2015/06/09
|
[
"https://Stackoverflow.com/questions/30744415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2069136/"
] |
I would recommend installing Ubuntu (as an Ubuntu user); you can dual-boot. However, that isn't an answer.
MySQLClient (the fork for Python 3) is available as a precompiled binary here:
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
Try to find precompiled binaries for simplicity's sake. As far as troubleshooting the install goes, I've tried the recommended VC Studio 9.0 on fresh installs and it cannot find stdint.h (which, like yours, suggests it's more than broken).
|
Looks like you're missing MySQL dev package. Another [StackOverflow question](https://stackoverflow.com/questions/1972259/cannot-open-include-file-config-win-h-no-such-file-or-directory-while-inst) has the details. But if I were you, I'd go the route [Alexander Huszagh](https://stackoverflow.com/users/4131059/alexander-huszagh) recommended and get my precompiled binaries from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
|
30,744,415
|
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors
```
nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.
obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h':
No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
And:
```
pycldmodule.cc
bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: '
strings.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do?
The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?).
If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
|
2015/06/09
|
[
"https://Stackoverflow.com/questions/30744415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2069136/"
] |
You could try <http://www.activestate.com/activepython/downloads> for Windows. It includes compiled binaries, avoiding the need for a C compiler.
|
I grew frustrated with trying to get python and other packages to compile/play nicely on Windows as well. Switching over to Ubuntu was a breath of fresh air, for sure.
The win32com package is made specifically for Windows hosts, so it could no longer be used, but there are other ways to achieve the same thing in Ubuntu.
Are you trying to target Windows specifically? What are you using win32com for?
|
30,744,415
|
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors
```
nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.
obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h':
No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
And:
```
pycldmodule.cc
bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: '
strings.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do?
The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?).
If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
|
2015/06/09
|
[
"https://Stackoverflow.com/questions/30744415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2069136/"
] |
I have faced the exact same issues for Python 2.7 on 64-bit Windows trying to install pycld2.
I tried many methods, like installing VS Express 2008, MinGW, etc., and it just doesn't work.
What saved me is this link:
<https://github.com/aboSamoor/polyglot/issues/11>
The proposed solution is to download the binaries from <http://www.lfd.uci.edu/~gohlke/pythonlibs/> and `pip install` the .whl file.
The cpXX denotes the version of Python, so in my case I used cp27.
Hope it helps.
|
I grew frustrated with trying to get python and other packages to compile/play nicely on Windows as well. Switching over to Ubuntu was a breath of fresh air, for sure.
The win32com package is made specifically for Windows hosts, so it could no longer be used, but there are other ways to achieve the same thing in Ubuntu.
Are you trying to target Windows specifically? What are you using win32com for?
|
30,744,415
|
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors
```
nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.
obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h':
No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
And:
```
pycldmodule.cc
bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: '
strings.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do?
The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?).
If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
|
2015/06/09
|
[
"https://Stackoverflow.com/questions/30744415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2069136/"
] |
You could try <http://www.activestate.com/activepython/downloads> for Windows. It includes compiled binaries, avoiding the need for a C compiler.
|
Looks like you're missing MySQL dev package. Another [StackOverflow question](https://stackoverflow.com/questions/1972259/cannot-open-include-file-config-win-h-no-such-file-or-directory-while-inst) has the details. But if I were you, I'd go the route [Alexander Huszagh](https://stackoverflow.com/users/4131059/alexander-huszagh) recommended and get my precompiled binaries from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
|
30,744,415
|
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors
```
nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.
obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h':
No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
And:
```
pycldmodule.cc
bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: '
strings.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do?
The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?).
If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
|
2015/06/09
|
[
"https://Stackoverflow.com/questions/30744415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2069136/"
] |
I have faced the exact same issues for Python 2.7 on 64-bit Windows trying to install pycld2.
I tried many methods, like installing VS Express 2008, MinGW, etc., and it just doesn't work.
What saved me is this link:
<https://github.com/aboSamoor/polyglot/issues/11>
The proposed solution is to download the binaries from <http://www.lfd.uci.edu/~gohlke/pythonlibs/> and `pip install` the .whl file.
The cpXX denotes the version of Python, so in my case I used cp27.
Hope it helps.
|
You could try <http://www.activestate.com/activepython/downloads> for Windows. It includes compiled binaries, avoiding the need for a C compiler.
|
30,744,415
|
Like many before me I don´t succeed in installing a few Python packages (mysql, pycld2, etc.) on Windows. I have a Windows 8 machine, 64-bit, and Python 3.4. At first I got the well-known error "can´t find vcvarsall.bat - install VS C++ 10.0". This I tried to solve by installing MinGW and use that as compiler. This did not work. Then finally I found an installer for this VS C++ 10.0 here <http://microsoft-visual-cpp-express.soft32.com/free-download/>. This doesn´t work too good either. Now it seems to find the vcvarsall file but instead gives me a couple of new errors
```
nclude -IC:\Python34\include /Tc_mysql.c /Fobuild\temp.win32-3.4\Release\_mysql.
obj /Zl_mysql.c_mysql.c(42) : fatal error C1083: Cannot open include file: 'config-win.h':
No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
And:
```
pycldmodule.cc
bindings\pycldmodule.cc(16) : fatal error C1083: Cannot open include file: '
strings.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\BIN\\cl.exe' failed with exit status 2
```
So now it doesn´t find strings.h and config-win.h and I´m too new to these sorts of problems to know what to look for. Anyone knows what I should do?
The thing is that I could just not use Windows and go over to Ubuntu as, for what I´ve understood, works painlessly with python. However, I have to use the win32com package which doesn´t exist on Ubuntu (have I understood that right?).
If I can´t solve these installing hassles on Windows, would a solution be to use a Windows virtual machine for the win32com part and do the rest on a host Ubuntu (or the other way around)? Would there be anyway to communicate between the two in that case? I.e. sending strings or arrays of data.
|
2015/06/09
|
[
"https://Stackoverflow.com/questions/30744415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2069136/"
] |
I have faced the exact same issues for Python 2.7 on 64-bit Windows trying to install pycld2.
I tried many methods, like installing VS Express 2008, MinGW, etc., and it just doesn't work.
What saved me is this link:
<https://github.com/aboSamoor/polyglot/issues/11>
The proposed solution is to download the binaries from <http://www.lfd.uci.edu/~gohlke/pythonlibs/> and `pip install` the .whl file.
The cpXX denotes the version of Python, so in my case I used cp27.
Hope it helps.
|
Looks like you're missing MySQL dev package. Another [StackOverflow question](https://stackoverflow.com/questions/1972259/cannot-open-include-file-config-win-h-no-such-file-or-directory-while-inst) has the details. But if I were you, I'd go the route [Alexander Huszagh](https://stackoverflow.com/users/4131059/alexander-huszagh) recommended and get my precompiled binaries from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
|
32,330,838
|
I am new to Python. I have code in both Python 3.x and Python 2.x (actually, it is a library which has been written in 2.x). I am calling a function in Python 2.x from Python 3.x. The library returns an HTTPResponse (from Python 2.x). I am not able to parse the HTTPResponse in my code (in Python 3.x).
**My request is :**
```
jsonData = {'string': post_message_data['Message']}
url = "%s/testurl/" % (settings.STIX_API_URL)
response = requests.post(url, jsonData)
```
**Processing request in Python 2.x**
In Python 2.x I am processing this request and sending back an HTTP response, which is a plain-text reply parsed from an email.
```
htmldata = request.body
strdata = json.loads(htmldata)
html = strdata['string']
reply = quotations.extract_from(html, 'text/html')
reply = quotations.extract_from_html(html)
return HttpResponse(json.dumps({'reply':reply}), mimetype='application/json')
```
Now my question is: how do I get that JSON data, as JSON, in the function called in 3.x?
I have tried `response.read()`, `response.readall()`, and `response.content`, each time getting different errors.
|
2015/09/01
|
[
"https://Stackoverflow.com/questions/32330838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5268513/"
] |
This should work:
```
beans = {
myBean(MyBeanImpl) { bean ->
bean.scope = 'prototype'
someProperty = 42
otherProperty = "blue"
bookService = ref("bookService")
}
}
```
|
I agree with Jeff Scott Brown.
How do you know it doesn't work? We're using Grails 2.3.9.
I have this in my resources.groovy:
```
httpBuilderPool(HTTPBuilder) { bean ->
bean.scope = 'prototype' // A new service is created every time it is injected into another class
client = ref('httpClientPool')
}
...
```
and this in a Spock Integration Test:
```
import grails.test.spock.IntegrationSpec
import org.apache.http.impl.client.CloseableHttpClient
import org.apache.log4j.Logger
class JukinHttpBuilderSpec extends IntegrationSpec {
private final Logger log = Logger.getLogger(getClass())
def httpBuilderPool
def grailsApplication
void "test jukinHttpBuilder instantiates"() {
expect:
httpBuilderPool
httpBuilderPool.client instanceof CloseableHttpClient
}
void "test httpBuilderPool is prototype instance"() {
when: 'we call getBean on the application context'
def someInstanceIds = (1..5).collect { grailsApplication.mainContext.getBean('httpBuilderPool').toString() }.toSet()
log.info someInstanceIds.toString()
then: 'we should get a new instance every access'
someInstanceIds.size() == 5
}
void "test injected httpBuilderPool is prototype instance"() {
when: 'we access the injected httpBuilderPool'
def someInstanceIds = (1..5).collect { httpBuilderPool.toString() }.toSet()
log.info someInstanceIds.toString()
then: 'it uses the same instance every time'
someInstanceIds.size() == 1
}
}
```
which shows me it works in 2.3.9.
|
30,364,874
|
I'm trying to teach myself python and I'm quite new to parsing concepts. I'm trying to parse the output from my fire service pager, it seems to follow a consistent pattern as follows:
```
(UNIT1, UNIT2, UNIT3) 911-STRU (Box# 12345) aBusiness 12345 Street aTown (Xstr CrossStreet1/CrossStreet2) building fire, persons reported #F123456
```
It seems that each section is separated by the use of () brackets the fields break down as follows
```
(Responded trucks) CallSource-JobClassification (Box number if available) Building Name, Building Address (Cross streets) Description of job #JobNumber
```
Scrap that, just got a call while writing this. If no box number is provided then that section is skipped entirely meaning that it goes straight to the address section, therefore I can't count on parsing using the brackets.
So to the parsing experts out there, can I attack this with pyparsing or will I need a custom parser? Furthermore can I target specific sections with a parser so it doesn't matter what order they appear in, as is the case with the Box# being an optional field?
My goal is to take this input, tidy it up with parsing and then send it via Twitter, SMS, email or all of the above.
Many thanks in advance
EDIT:
I've got this 99% working using the following code
```
import re
sInput = ('(UNIT123, UNIT1234) AMB-MED APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO. (XStr DE ANZA BLVD/MARIANI AVE) .42YOM CARDIAC ARREST. #F9876543')
#sInput = '(UNIT123, UNIT1234) ALARM-SPRNKLR (Alarm Type MANUAL/SMOKE) (Box 12345) APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO. (XStr DE ANZA BLVD/MARIANI AVE) #F9876544'
# Matches truck names using the consistent four uppercase letters followed by three - four numbers.
pAppliances = re.findall(r'\w[A-Z]{3}\d[0-9]{2,3}', sInput)
# Matches source and job type using the - as a guide, this section is always proceeded by the trucks on the job
# therefore is always proceeded by a ) and a space. Allows between 3-9 characters either side of the - this is
# to allow such variations as 911-RESC, FAA-AIRCRAFT etc.
pJobSource = re.findall(r'\) ([A-Za-z1-9]{2,8}-[A-Za-z1-9]{2,8})', sInput)
# Gets address by starting at (but ignoring) the job source e.g. -RESC and capturing everything until the next . period
# the end of the address section always has a period. Uses ?; to ignore up to two sets of brackets that may appear in
# the string for things such as box numbers or alarm types.
pAddress = re.findall(r'-[A-Z1-9]{2,8} (.*?)\. \(', sInput)
pAddressOptionTwo = re.findall(r'-[A-Z1-9]{2,8}(?: \(.*?\))(?: \(.*?\)) (.*?)\. \(', sInput)
# Finds the specified cross streets as they are always within () brackets, each bracket has a space immediately
# before or after and the work XStr is always present.
pCrossStreet = re.findall(r' \((XStr.*?)\) ', sInput)
# The job details / description is always contained between two . periods e.g. .42YOM CARDIAC ARREST. each period
# has a space either immediately before or after.
pJobDetails = re.findall(r' \.(.*?)\. ', sInput)
# Job number is always in the format #F followed by seven digits. The # is always proceeded by a space. Allowed
# between 1 and 8 digits for future proofing.
pJobNumber = re.findall(r' (#F\d{0,7})', sInput)
print pAppliances
print pJobSource
print pAddress
print pCrossStreet
print pJobDetails
print pJobNumber
```
When run on the uncommented sInput string it returns the following
```
['UNIT123', 'UNIT1234']
['AMB-MED']
['APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO']
['XStr DE ANZA BLVD/MARIANI AVE']
['42YOM CARDIAC ARREST']
['#F9876543']
```
However when I run it on the commented sInput string I get the following
```
['UNIT123', 'UNIT1234']
['ALARM-SPRNKLR']
['(Alarm Type MANUAL/SMOKE) (Box 12345) APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO']
['XStr DE ANZA BLVD/MARIANI AVE']
[]
['#F9876544']
```
This is because two optional bracket sets have been included in this message. I managed to correct this using the pAddressOptionTwo line; however, when the first string is then applied it returns no address at all, as it didn't find the brackets.
So the new refocused question is:
How can I make an optional argument in the regex line. If there are brackets present ignore them and their contents and return the rest of the string OR if there are no brackets present continue as per normal.
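One way to do this (a sketch, using made-up input strings modelled on the examples described above) is a non-capturing group quantified with `*`, which matches zero or more bracket sets between the job source and the address, so a single pattern handles both message formats:

```python
import re

# Hypothetical messages modelled on the two formats in the question.
s_plain = "(UNIT123, UNIT1234) AMB-MED APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO. (XStr DE ANZA BLVD/MARIANI AVE) .42YOM CARDIAC ARREST. #F9876543"
s_brackets = "(UNIT123, UNIT1234) ALARM-SPRNKLR (Alarm Type MANUAL/SMOKE) (Box 12345) APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO. (XStr DE ANZA BLVD/MARIANI AVE) #F9876544"

# "(?: \(.*?\))*" skips zero or more "(...)" sets after the job source,
# so both formats fall through to the same address capture group.
pattern = r'-[A-Z1-9]{2,8}(?: \(.*?\))* (.*?)\. \('

print(re.findall(pattern, s_plain))     # ['APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO']
print(re.findall(pattern, s_brackets))  # ['APPLE HEADQUARTERS 1 INFINITE LOOP CUPERTINO']
```

The same trick extends to the other patterns: any segment that may or may not appear can be wrapped in `(?: ... )?` or `(?: ... )*` so its absence does not break the match.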
|
2015/05/21
|
[
"https://Stackoverflow.com/questions/30364874",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4889516/"
] |
I think your best/easiest option is to use [regular expressions](https://docs.python.org/2/library/re.html), defining a pattern that will match all or parts of your input string and extract the pieces that you're interested in.
[PyParsing](http://pyparsing.wikispaces.com/) will probably work fine too. I have not used it myself, but the first few examples look like some kind of higher-level wrapper around regex, although I would expect it differs in many aspects once you delve deeper into it.
Another option is to define a [lexer](http://en.wikipedia.org/wiki/Lexical_analysis) and create a parser from it using [PLY](http://www.dabeaz.com/ply/). That would probably be overkill for your use case however, as it is aimed more at parsing programming language and natural language syntax.
|
If you know pyparsing, then it might be easier to go with it. The `()` can always be treated as optional. Pyparsing will make certain things easier out of the box.
If you are not so familiar with pyparsing, and your main goal is learning Python, then hand-craft your own parser in pure Python. Nothing better for learning a new language than re-inventing some wheels :-)
|
21,331,730
|
I want to install PHP on the server. and I want to install it with Python script. Can I include PHP (some version number) in the reqirements.txt file and install it on the server?
If not, then how can I install PHP on the server using a python script?
|
2014/01/24
|
[
"https://Stackoverflow.com/questions/21331730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1162512/"
] |
You can get the counts by using
```
df.groupby([df.index.date, 'action']).count()
```
or you can plot directly using this method
```
df.groupby([df.index.date, 'action']).count().plot(kind='bar')
```
You could also just store the results to `counts` and then plot it separately. This assumes that your index is already in `DatetimeIndex` format; otherwise, follow the directions of @mkln above.
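For reference, a minimal self-contained sketch of the approach above, using made-up data (`.size()` is used instead of `.count()` so a plain count Series comes back even when the frame has no columns beyond the grouping keys):

```python
import datetime

import pandas as pd

# Hypothetical frame: one row per logged action, indexed by timestamp.
df = pd.DataFrame(
    {"action": ["action1", "action2", "action1", "action1", "action2"]},
    index=pd.to_datetime([
        "2000-12-29 00:10", "2000-12-29 00:20", "2000-12-29 00:30",
        "2000-12-31 00:10", "2000-12-31 00:20",
    ]),
)

# Group by calendar day and action; .size() gives the per-group row count.
counts = df.groupby([df.index.date, "action"]).size()
print(counts)
```

Appending `.plot(kind='bar')` to the `groupby` result then draws the grouped bar chart directly.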
|
Starting from
```
mydate col_name
0 2000-12-29 00:10:00 action1
1 2000-12-29 00:20:00 action2
2 2000-12-29 00:30:00 action2
3 2000-12-29 00:40:00 action1
4 2000-12-29 00:50:00 action1
5 2000-12-31 00:10:00 action1
6 2000-12-31 00:20:00 action2
7 2000-12-31 00:30:00 action2
```
You can do
```
df['mydate'] = pd.to_datetime(df['mydate'])
df = df.set_index('mydate')
df['day'] = df.index.date
counts = df.groupby(['day', 'col_name']).agg(len)
```
But perhaps there's an even more straightforward way; the above should work anyway.
If you want to use counts as a DataFrame, I'd then transform it back
```
counts = pd.DataFrame(counts, columns=['count'])
```
|
21,331,730
|
I want to install PHP on the server. and I want to install it with Python script. Can I include PHP (some version number) in the reqirements.txt file and install it on the server?
If not, then how can I install PHP on the server using a python script?
|
2014/01/24
|
[
"https://Stackoverflow.com/questions/21331730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1162512/"
] |
Starting from
```
mydate col_name
0 2000-12-29 00:10:00 action1
1 2000-12-29 00:20:00 action2
2 2000-12-29 00:30:00 action2
3 2000-12-29 00:40:00 action1
4 2000-12-29 00:50:00 action1
5 2000-12-31 00:10:00 action1
6 2000-12-31 00:20:00 action2
7 2000-12-31 00:30:00 action2
```
You can do
```
df['mydate'] = pd.to_datetime(df['mydate'])
df = df.set_index('mydate')
df['day'] = df.index.date
counts = df.groupby(['day', 'col_name']).agg(len)
```
But perhaps there's an even more straightforward way; the above should work anyway.
If you want to use counts as a DataFrame, I'd then transform it back
```
counts = pd.DataFrame(counts, columns=['count'])
```
|
I find the combo `.value_counts().plot.bar()` very intuitive for doing a histogram plot. It also puts categories in the right order for you and, in many cases where there are too many categories, you can simply do `.value_counts().iloc[:k].plot.bar()` to keep only the top `k`.
So, what I would do in your case is to compute a new Pandas Series of date+action, formatted for readability, and then invoke one of the snippets above. The code might look like this:
```
date_and_action = df['date'].astype(str).str.slice(0, 10) + '_' + df['action']
date_and_action.value_counts().iloc[:k].plot.bar()
```
|
21,331,730
|
I want to install PHP on the server. and I want to install it with Python script. Can I include PHP (some version number) in the reqirements.txt file and install it on the server?
If not, then how can I install PHP on the server using a python script?
|
2014/01/24
|
[
"https://Stackoverflow.com/questions/21331730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1162512/"
] |
You can get the counts by using
```
df.groupby([df.index.date, 'action']).count()
```
or you can plot directly using this method
```
df.groupby([df.index.date, 'action']).count().plot(kind='bar')
```
You could also just store the results to `count` and then plot it separately. This is assuming that your index is already in datetimeindex format, otherwise follow the directions of @mkln above.
|
I find the combo `.value_counts().plot.bar()` very intuitive for doing a histogram plot. It also puts categories in the right order for you and, in many cases where there are too many categories, you can simply do `.value_counts().iloc[:k].plot.bar()` to keep only the top `k`.
So, what I would do in your case is to compute a new Pandas Series of date+action, formatted for readability, and then invoke one of the snippets above. The code might look like this:
```
date_and_action = df['date'].astype(str).str.slice(0, 10) + '_' + df['action']
date_and_action.value_counts().iloc[:k].plot.bar()
```
|
40,821,733
|
I'm currently using vagrant and set it up to connect to my local computer's port 5000 and when I move to localhost:5000, the default ubuntu webpage appears to confirm that I'm connected.
However, it tells me to manipulate the app using the index.html in there but I already have a whole Python flask app stored somewhere on github that I want to just git install and run using flask. My flask app works on vagrant because I've tested it out already.
How do I change the pages localhost:5000 is displaying to that of my flask app?
For reference, here's my flask app python code (html templates are in their own folders and didn't include):
```
import os
from angular_flask import app
def runserver():
port = int(os.environ.get('PORT', 5000))
app.run(host='0.0.0.0', port=port)
if __name__ == '__main__':
runserver()
```
I've also added this to my vagrantfile
```
config.vm.network "forwarded_port", guest: 80, host: 5000
```
which allows me to see the pages on my localhost but I want to change the pages viewed to that of the same thing I set up here: <https://cs3319asst3.herokuapp.com/>
|
2016/11/26
|
[
"https://Stackoverflow.com/questions/40821733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4297337/"
] |
Create a static field inside the class, and increment it in the constructor.
Something like this:
```
class A {
public:
    A() : itemnumber(nextNum) { ++nextNum; }
private:
    int itemnumber;
    static int nextNum;
};
// in the .cpp file, initialize it
int A::nextNum = 1;
```
Also, don't forget to increment the static field in copy and move constructors/operators.
|
With a static variable, like:
```
#include <iostream>
using namespace std;

class rect {
public:
    static int num;
    rect() { num++; }
};
int rect::num = 0;

int main() {
    rect a;  // note: "rect a();" would declare a function, not create an object
    cout << rect::num;
}
```
|
49,469,409
|
I am relatively new to programming.
I'm trying to run the following:
```
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100,
resolution = 'l', area_thresh=1000.0)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='red')
plt.show()
```
However, I get "AttributeError: 'AxesSubplot' object has no attribute 'get\_axis\_bgcolor'"
I'm using python 3.6, matplotlib 2.2.0, basemap 1.0.7. They were downloaded using Anaconda.
OS - Mac 10.12.4
How do I get rid of this error?
|
2018/03/24
|
[
"https://Stackoverflow.com/questions/49469409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9545845/"
] |
Matplotlib deprecated `get_axis_bgcolor`. You'll need to update basemap to version 1.1.0 to fix this error. It's installable via conda-forge:
```
conda install -c conda-forge basemap
```
In case you get an error like "Unable to open boundary dataset file. Only the 'crude' and 'low' resolution datasets are installed by default.", install the additional files via:
```
conda install -c conda-forge basemap-data-hires
```
|
In addition to [@user45237841](https://stackoverflow.com/users/8861059/user45237841)'s answer, you can also change the `resolution` to `c` or `l` to resolve this error `Unable to open boundary dataset file. Only the 'crude' and 'low', resolution datasets are installed by default.`
```
my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100,
resolution = 'c', area_thresh=1000.0)
# c is for crude and l is for low
```
|
49,469,409
|
I am relatively new to programming.
I'm trying to run the following:
```
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100,
resolution = 'l', area_thresh=1000.0)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='red')
plt.show()
```
However, I get "AttributeError: 'AxesSubplot' object has no attribute 'get\_axis\_bgcolor'"
I'm using python 3.6, matplotlib 2.2.0, basemap 1.0.7. They were downloaded using Anaconda.
OS - Mac 10.12.4
How do I get rid of this error?
|
2018/03/24
|
[
"https://Stackoverflow.com/questions/49469409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9545845/"
] |
Matplotlib deprecated `get_axis_bgcolor`. You'll need to update basemap to version 1.1.0 to fix this error. It's installable via conda-forge:
```
conda install -c conda-forge basemap
```
In case you get an error like "Unable to open boundary dataset file. Only the 'crude' and 'low' resolution datasets are installed by default.", install the additional files via:
```
conda install -c conda-forge basemap-data-hires
```
|
If you are using a Jupyter notebook, make sure to pass `--yes` so the package install can proceed non-interactively: `conda install -c conda-forge basemap-data-hires --yes`
|
49,469,409
|
I am relatively new to programming.
I'm trying to run the following:
```
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100,
resolution = 'l', area_thresh=1000.0)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='red')
plt.show()
```
However, I get "AttributeError: 'AxesSubplot' object has no attribute 'get\_axis\_bgcolor'"
I'm using python 3.6, matplotlib 2.2.0, basemap 1.0.7. They were downloaded using Anaconda.
OS - Mac 10.12.4
How do I get rid of this error?
|
2018/03/24
|
[
"https://Stackoverflow.com/questions/49469409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9545845/"
] |
Matplotlib deprecated `get_axis_bgcolor`. You'll need to update basemap to version 1.1.0 to fix this error. It's installable via conda-forge:
```
conda install -c conda-forge basemap
```
In case you get an error like "Unable to open boundary dataset file. Only the 'crude' and 'low' resolution datasets are installed by default.", install the additional files via:
```
conda install -c conda-forge basemap-data-hires
```
|
If you don't want to update, just replace `get_axis_bgcolor` with `get_facecolor` in the `\site-packages\mpl_toolkits\basemap\__init__.py` file.
```
Line 1623: fill_color = ax.get_facecolor()
Line 1767: axisbgc = ax.get_facecolor()
```
|
49,469,409
|
I am relatively new to programming.
I'm trying to run the following:
```
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100,
resolution = 'l', area_thresh=1000.0)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='red')
plt.show()
```
However, I get "AttributeError: 'AxesSubplot' object has no attribute 'get\_axis\_bgcolor'"
I'm using python 3.6, matplotlib 2.2.0, basemap 1.0.7. They were downloaded using Anaconda.
OS - Mac 10.12.4
How do I get rid of this error?
|
2018/03/24
|
[
"https://Stackoverflow.com/questions/49469409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9545845/"
] |
In addition to [@user45237841](https://stackoverflow.com/users/8861059/user45237841)'s answer, you can also change the `resolution` to `c` or `l` to resolve this error `Unable to open boundary dataset file. Only the 'crude' and 'low', resolution datasets are installed by default.`
```
my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100,
resolution = 'c', area_thresh=1000.0)
# c is for crude and l is for low
```
|
If you are using a Jupyter notebook, make sure to pass `--yes` so the package install can proceed non-interactively: `conda install -c conda-forge basemap-data-hires --yes`
|
49,469,409
|
I am relatively new to programming.
I'm trying to run the following:
```
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100,
resolution = 'l', area_thresh=1000.0)
my_map.drawcoastlines()
my_map.drawcountries()
my_map.fillcontinents(color='red')
plt.show()
```
However, I get "AttributeError: 'AxesSubplot' object has no attribute 'get\_axis\_bgcolor'"
I'm using python 3.6, matplotlib 2.2.0, basemap 1.0.7. They were downloaded using Anaconda.
OS - Mac 10.12.4
How do I get rid of this error?
|
2018/03/24
|
[
"https://Stackoverflow.com/questions/49469409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9545845/"
] |
In addition to [@user45237841](https://stackoverflow.com/users/8861059/user45237841)'s answer, you can also change the `resolution` to `c` or `l` to resolve this error `Unable to open boundary dataset file. Only the 'crude' and 'low', resolution datasets are installed by default.`
```
my_map = Basemap(projection = 'ortho', lat_0=50, lon_0=-100,
resolution = 'c', area_thresh=1000.0)
# c is for crude and l is for low
```
|
If you don't want to update, just replace `get_axis_bgcolor` with `get_facecolor` in the `\site-packages\mpl_toolkits\basemap\__init__.py` file.
```
Line 1623: fill_color = ax.get_facecolor()
Line 1767: axisbgc = ax.get_facecolor()
```
|
64,641,472
|
My Python script accidentally created a table named "ext\_data\_content\_modec --replace", which we want to delete.
However, BQ doesn't seem to recognize the table with spaces and keywords (--replace).
We have tried many variants of bq rm, and also tried deleting the table from the BQ console, but it doesn't work.
For example, see below (etlt\_dsc is dataset name).
```
$ bq rm 'etlt_dsc.ext_data_content_modec --replace'
BigQuery error in rm operation: Not found: Table boeing-prod-atm-next-dsc:etlt_dsc.ext_data_content_modec --replace
```
Besides above we tried below commands but nothing worked
```
bq rm "etlt_dsc.ext_data_content_modec --replace"
bq rm [etlt_dsc.ext_data_content_modec --replace]
bq rm [etlt_dsc.ext_data_content_modec --replace']
bq rm etlt_dsc.ext_data_content_modec \--replace
```
Does anyone have input for us, please?
|
2020/11/02
|
[
"https://Stackoverflow.com/questions/64641472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13078109/"
] |
You would do something like this:
```
Map<String, B> bById = ...
Map<String, B> bByName = ...
for (A a : listA) {
B b = bById.getOrDefault(a.id, bByName.get(a.name));
if (b != null) {
        a.setPersonalDetails(b.getPersonalDetails());
}
}
```
|
You can use a comparator to achieve this. Just an example; pseudo-code is given below:
```java
Collections.sort(listA, Comparator.comparing(A::getId)
.thenComparing(A::getName)
.thenComparing(A::getAge));
```
|
50,863,799
|
I'm pretty new to Python, but have Python 3.6 installed and am running a few other programs perfectly. I'm trying to pull data using the pandas\_datareader module but keep running into this issue. Operating system: OSX. I've visited the other threads on similar errors and tried their methods, to no avail.
An additional concern: when using Sublime Text, if I run it as a Python (instead of Python3) build, it functions fine, but all my other accompanying programs are written in Python3. Is there a way of making this work on 3.6 that I'm missing?
I have already visited the 'is\_list\_like' error question, and have changed the import line in the fred.py file to use pandas.api.types.
```
Traceback (most recent call last):
File
"/Users/scottgolightly/Desktop/python_work/data_read_practice.py", line
3, in <module>
import pandas_datareader.data as web
File
"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/pandas_datareader/__init__.py", line 2, in <module>
from .data import (DataReader, Options, get_components_yahoo,
File
"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/pandas_datareader/data.py", line 14, in <module>
from pandas_datareader.fred import FredReader
File
"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/pandas_datareader/fred.py", line 1, in <module>
from pandas.core.common import is_list_like
ImportError: cannot import name 'is_list_like'
```
|
2018/06/14
|
[
"https://Stackoverflow.com/questions/50863799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7075311/"
] |
As has been noted, `is_list_like` has been moved from `pandas.core.common` to `pandas.api.types`.
There are several paths forward for you.
1. My (highly) recommended solution: download Conda and set up an environment with a version of Pandas prior to v0.23.0.
2. You can install the development version of Pandas, with a patch in place:
`pip install git+https://github.com/pydata/pandas-datareader.git`
3. Since you say that you have a version of Pandas in a different environment that works, I suspect the Python calling it is version 2.X. If so, try using [past.autotranslate](http://python-future.org/translation.html) to import the older version of Pandas.
4. If this working version of Pandas actually belongs to a Python 3.X site-packages, then you can manually import it using:
`sys.path.insert(0, '/path/to/other/pandas')`
|
A small workaround is to define it like this:
```
import pandas as pd
pd.core.common.is_list_like = pd.api.types.is_list_like
import pandas_datareader
```
|
4,645,822
|
I've been struggling with the [cutting stock problem](http://en.wikipedia.org/wiki/Cutting_stock_problem) for a while, and I need to write a function that, given an array of values, gives me an array of arrays of all the possible combinations.
I'm trying to write this function, but (as with everything in Python), I think someone must have done it better :).
I think the name of the function is combination. Does anyone know what's the best way to do this, and what's the best module and function for it?
P.s. I have read some papers on the matter, but the mathematical terms dazzle me :)
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4645822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
```
>>> from itertools import permutations
>>> x = range(3)
>>> list(permutations(x))
[(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
>>>
```
|
Do you mean [itertools.combinations](http://docs.python.org/library/itertools.html#itertools.combinations)?
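Since the cutting-stock use case usually needs subsets of every length, not just one fixed size, here is a sketch combining `combinations` with `chain` (the helper name is illustrative):

```python
from itertools import chain, combinations

def all_combinations(values):
    # Every subset of every non-zero length, in input order --
    # useful for enumerating candidate cutting patterns.
    return [list(c) for c in chain.from_iterable(
        combinations(values, r) for r in range(1, len(values) + 1)
    )]

print(all_combinations([3, 5, 7]))
# [[3], [5], [7], [3, 5], [3, 7], [5, 7], [3, 5, 7]]
```

Note this grows as 2^n in the number of items, so for large inputs you would generate lazily rather than build the full list.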
|
4,645,822
|
I've been struggling with the [cutting stock problem](http://en.wikipedia.org/wiki/Cutting_stock_problem) for a while, and I need to write a function that, given an array of values, gives me an array of arrays of all the possible combinations.
I'm trying to write this function, but (as with everything in Python), I think someone must have done it better :).
I think the name of the function is combination. Does anyone know what's the best way to do this, and what's the best module and function for it?
P.s. I have read some papers on the matter, but the mathematical terms dazzle me :)
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4645822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
```
>>> from itertools import permutations
>>> x = range(3)
>>> list(permutations(x))
[(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
>>>
```
|
```
>>> from itertools import combinations
>>> list(combinations('abcd', 2))
```
|
22,476,489
|
We can convert a `datetime` value into a decimal using the following function.
```
import time
from datetime import datetime
t = datetime.now()
t1 = t.timetuple()
print time.mktime(t1)
```
Output :
```
Out[9]: 1395136322.0
```
Similarly, is there a way to convert strings into a `decimal` using Python?
Example string.
```
"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0"
```
|
2014/03/18
|
[
"https://Stackoverflow.com/questions/22476489",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/461436/"
] |
If you want an integer to uniquely identify a string, I'd go for hashing functions, like SHA. They return the same value for the same input.
```
import hashlib
def sha256_hash_as_int(s):
return int(hashlib.sha256(s).hexdigest(), 16)
```
If you use Python 3, you first have to encode `s` to some concrete encoding, like UTF-8.
Furthermore, take a look at the `hashlib` module and decide if you really need an integer, or if the output of `hexdigest()` isn't OK for you, too.
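A Python-3-safe sketch of the same idea (the helper name is illustrative); unlike the built-in `hash()`, the result is stable across interpreter runs and machines:

```python
import hashlib

def string_to_int(s):
    # Encode first so the same code works on Python 3 str input.
    return int(hashlib.sha256(s.encode("utf-8")).hexdigest(), 16)

ua = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0"
print(string_to_int(ua) % 10**9)  # truncate if a smaller id is acceptable
```

Truncating with a modulus makes collisions possible, so keep the full digest when the integer must be unique.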
|
You can use [hash](http://docs.python.org/2.7/library/functions.html#hash) function:
```
>>> hash("Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0")
1892010093
```
|
37,737,098
|
I have a string time coming from a third party (external to my python program), and I need to compare that time to right now. How long ago was that time?
How can I do this?
I've looked at the `datetime` and `time` libraries, as well as `pytz`, and can't find an obvious way to do this. It should automatically incorporate DST because the third party doesn't explicitly state its offset, only the timezone (US/Eastern).
I've tried this, and it fails:
```
dt = datetime.datetime.strptime('June 10, 2016 12:00PM', '%B %d, %Y %I:%M%p')
dtEt = dt.replace(tzinfo=pytz.timezone('US/Eastern'))
now = datetime.datetime.now()
now - dtEt
```
TypeError: can't subtract offset-naive and offset-aware datetimes
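For what it's worth, a sketch of one way around this error using the standard-library `zoneinfo` module (Python 3.9+; with `pytz`, `eastern.localize(dt)` plays the same role, since `replace(tzinfo=...)` is unsafe there). The key is that both datetimes in the subtraction must be timezone-aware:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

eastern = ZoneInfo("America/New_York")
dt = datetime.strptime('June 10, 2016 12:00PM', '%B %d, %Y %I:%M%p')
dt_et = dt.replace(tzinfo=eastern)   # DST-aware offset for that date
now = datetime.now(timezone.utc)     # aware "now", so subtraction works
elapsed = now - dt_et
print(elapsed)
```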
|
2016/06/09
|
[
"https://Stackoverflow.com/questions/37737098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6365333/"
] |
Signing a commit will change the commit metadata, and thus change the underlying SHA1 commit ID. As you probably know, for Git this has the same consequences as trying to change the contents of your history.
If you want to just re-sign your last commit you could run:
`git commit -S --amend`
If you want to re-sign a commit in the middle of your history you could do a couple of things, all of them being a bit nasty if you ask me:
1. You could `reset --soft` to the commit you want to sign. Run `git commit -S --amend` and then commit all the staged changes. This would *merge* all your history after that commit into a single commit
2. Branch out (for safety) and `reset --hard` to the commit you want to sign. Sign it, and if you want to preserve commit history you could now `git cherry-pick NEXTCOMMIT -S` to re-build the whole signed history.
|
If you want to sign all the existing commits on the branch without making any changes to them:
```
git rebase --exec 'git commit --amend --no-edit -n -S' -i origin/HEAD
```
|
37,737,098
|
I have a string time coming from a third party (external to my python program), and I need to compare that time to right now. How long ago was that time?
How can I do this?
I've looked at the `datetime` and `time` libraries, as well as `pytz`, and can't find an obvious way to do this. It should automatically incorporate DST because the third party doesn't explicitly state its offset, only the timezone (US/Eastern).
I've tried this, and it fails:
```
dt = datetime.datetime.strptime('June 10, 2016 12:00PM', '%B %d, %Y %I:%M%p')
dtEt = dt.replace(tzinfo=pytz.timezone('US/Eastern'))
now = datetime.datetime.now()
now - dtEt
```
TypeError: can't subtract offset-naive and offset-aware datetimes
|
2016/06/09
|
[
"https://Stackoverflow.com/questions/37737098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6365333/"
] |
Signing a commit will change the commit metadata, and thus change the underlying SHA1 commit ID. As you probably know, for Git this has the same consequences as trying to change the contents of your history.
If you want to just re-sign your last commit you could run:
`git commit -S --amend`
If you want to re-sign a commit in the middle of your history you could do a couple of things, all of them being a bit nasty if you ask me:
1. You could `reset --soft` to the commit you want to sign. Run `git commit -S --amend` and then commit all the staged changes. This would *merge* all your history after that commit into a single commit
2. Branch out (for safety) and `reset --hard` to the commit you want to sign. Sign it, and if you want to preserve commit history you could now `git cherry-pick NEXTCOMMIT -S` to re-build the whole signed history.
|
If I had my branch created from master, I would use the following:
>
>
> ```
> git rebase --exec 'git commit --amend --no-edit -n -S' -i master
>
> ```
>
>
This command will detect all my commits on my branch and reapply them as signed commits on top of master.
|
37,737,098
|
I have a string time coming from a third party (external to my python program), and I need to compare that time to right now. How long ago was that time?
How can I do this?
I've looked at the `datetime` and `time` libraries, as well as `pytz`, and can't find an obvious way to do this. It should automatically incorporate DST because the third party doesn't explicitly state its offset, only the timezone (US/Eastern).
I've tried this, and it fails:
```
dt = datetime.datetime.strptime('June 10, 2016 12:00PM', '%B %d, %Y %I:%M%p')
dtEt = dt.replace(tzinfo=pytz.timezone('US/Eastern'))
now = datetime.datetime.now()
now - dtEt
```
TypeError: can't subtract offset-naive and offset-aware datetimes
|
2016/06/09
|
[
"https://Stackoverflow.com/questions/37737098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6365333/"
] |
If you want to sign all the existing commits on the branch without making any changes to them:
```
git rebase --exec 'git commit --amend --no-edit -n -S' -i origin/HEAD
```
|
If I had my branch created from master, I would use the following:
>
>
> ```
> git rebase --exec 'git commit --amend --no-edit -n -S' -i master
>
> ```
>
>
This command will detect all my commits on my branch and reapply them as signed commits on top of master.
|
22,486,519
|
I am trying to create a fabric script that will install the erlang solutions R15B02 package and am having some difficulty. I have the following code in my fabric script:
```
sudo("apt-get update")
sudo("apt-get -qy install python-software-properties")
sudo('add-apt-repository "deb http://packages.erlang-solutions.com/debian quantal contrib"')
sudo('add-apt-repository "deb http://packages.erlang-solutions.com/debian precise contrib"')
sudo('add-apt-repository "deb http://packages.erlang-solutions.com/debian oneiric contrib"')
sudo('add-apt-repository "deb http://packages.erlang-solutions.com/debian lucid contrib"')
sudo("wget http://packages.erlang-solutions.com/debian/erlang_solutions.asc")
sudo("sudo apt-key add erlang_solutions.asc")
sudo("apt-get update")
sudo("apt-get -qy install ca-certificates-java default-jre-headless fontconfig fontconfig-config hicolor-icon-theme icedtea-6-jre-cacao icedtea-6-jre-jamvm java-common libatk1.0-0 libatk1.0-data libavahi-client3 libavahi-common-data libavahi-common3 libcairo2 libcups2 libdatrie1 libfontconfig1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libgstreamer-plugins-base0.10-0 libgstreamer0.10-0 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common libice6 libjasper1 libjpeg-turbo8 libjpeg8 libllvm3.0 libnspr4 libnss3 libnss3-1d liborc-0.4-0 libpango1.0-0 libpixman-1-0 libsm6 libthai-data libthai0 libtiff4 libwxbase2.8-0 libwxgtk2.8-0 libx11-xcb1 libxcb-glx0 libxcb-render0 libxcb-shm0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxft2 libxi6 libxinerama1 libxrandr2 libxrender1 libxxf86vm1 openjdk-6-jre-headless openjdk-6-jre-lib shared-mime-info ttf-dejavu-core tzdata-java x11-common tzdata")
sudo("apt-get -qy install erlang")
```
This works wonderfully for installing 16B, but one of the applications I need to install on these servers currently has some incompatibilities with 16B. Is there a way that I can specify the R15B02 package? When I run apt-cache showpkg erlang, I only see packages for 16B and 14B.
|
2014/03/18
|
[
"https://Stackoverflow.com/questions/22486519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/232337/"
] |
You can also use one of these projects for installing and managing different versions of Erlang on the same computer:
* <https://github.com/spawngrid/kerl>
* <https://github.com/metadave/erln8>
|
If you can find the file 'esl-erlang\_15.b.2-1~ubuntu~precise\_i386.deb' or the 64 bit version, those could be installed with dpkg. If you find these, to install both at once, extract the .deb with `dpkg -x esl-erlang_15.b.2-1~ubuntu~precise_i386.deb` and move the binaries inside somewhere else. If you can't find that .deb file, you can download the source and compile it, configuring it to install somewhere different with `./configure --prefix=/path/to/old/lib/install/path`.
You could put the old version in a different directory and call it with the variable `LD_PRELOAD` set to, for example, `/usr/old/path/to/old/version/of/erlang/SharedObjectFile.so`.
So to run the program that takes the old version, do this:
`~$ LD_PRELOAD=/usr/old/path/to/old/version/of/erlang/oldErlangLib.so ProgramToRun`
I hope this is what you meant. Every time you run the program with the old dependencies this variable will have to be set. You can also set multiple preloads with a space between the libraries to override. Be sure to escape these spaces with double quotes or `\(space character goes here)`.
|
68,076,036
|
In my main directory I have two programs: `main.py` and a myfolder (which is a directory).
The `main.py` file has these 2 lines:
```
from myfolder import Adding
print(Adding.execute())
```
Inside the myfolder directory, I have 3 python files only: `__init__.py`, `abstract_math_ops.py`, and `adding.py`.
The `__init__.py` file as just one line:
```
from myfolder.adding import Adding
```
The `abstract_math_ops.py` file is just an abstract class with one method:
```
from abc import ABC, abstractmethod
class AbstractMathOps(ABC):
@staticmethod
@abstractmethod
def execute(*args):
pass
```
The `adding.py` file is:
```
from abstract_math_ops import AbstractMathOps
class Adding(AbstractMathOps):
@staticmethod
def execute(*args):
--doing some addition --
```
When I execute `main.py`, I get the error:
```
~\myfolder\adding.py in <module>
---> 1 from abstract_math_ops import AbstractMathOps
ModuleNotFoundError: No module named 'abstract_math_ops '
```
If I put the abstract class inside the adding.py file, I do not get any errors. However, I need to put it in a different file, as other python files (i.e. subtracting.py, multiplying.py) can be created following the footprint of the AbstractMathOps abstract class. Also, that way I do not have everything in a single file.
How can I solve this problem?
|
2021/06/22
|
[
"https://Stackoverflow.com/questions/68076036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7705108/"
] |
Your folder is a package, and you can't import sibling submodules of a package in the way you're trying to do in `adding.py`. You either need to use an absolute import (`from myfolder.abstract_math_ops import AbstractMathOps`, which works the same anywhere), or use an explicit relative import (`from .abstract_math_ops import AbstractMathOps`, which only works from within the same package).
But if the two modules in your package are really as short as you've shown, maybe you should reconsider making `myfolder` a package at all. You could very easily define all of your classes in a single `myfolder.py` file, and it would be easier to make sense of since the classes are so interrelated.
|
Try with:
```
from .abstract_math_ops import AbstractMathOps
```
You need to add the relative location of the file for the import to work in this case.
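For completeness, here is a runnable sketch of the fixed layout: it rebuilds the package from the question in a temporary directory and uses the explicit relative import (the body of `execute` is a stand-in, since the original addition logic was elided):

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Rebuild the package layout from the question in a temp directory.
root = Path(tempfile.mkdtemp())
pkg = root / "myfolder"
pkg.mkdir()

(pkg / "__init__.py").write_text("from myfolder.adding import Adding\n")
(pkg / "abstract_math_ops.py").write_text(textwrap.dedent("""\
    from abc import ABC, abstractmethod

    class AbstractMathOps(ABC):
        @staticmethod
        @abstractmethod
        def execute(*args):
            pass
"""))
(pkg / "adding.py").write_text(textwrap.dedent("""\
    from .abstract_math_ops import AbstractMathOps   # explicit relative import

    class Adding(AbstractMathOps):
        @staticmethod
        def execute(*args):
            return sum(args)   # stand-in for the elided addition logic
"""))

sys.path.insert(0, str(root))   # make the package importable
from myfolder import Adding

print(Adding.execute(1, 2, 3))   # 6
```

The same two files with the absolute import (`from myfolder.abstract_math_ops import ...`) in `adding.py` would work identically here.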
|
51,156,919
|
I'm currently reading a dummy.txt, the content showing as below:
```
8t1080 0.077500 0.092123 -0.079937
63mh9j 0.327872 -0.074191 -0.014623
63l2o3 0.504010 0.356935 -0.275896
64c97u 0.107409 0.021140 -0.000909
```
Now, I am reading it using python like below:
```
lines = open("dummy.txt", "r").readlines()
```
I wanted to make a structure so that I can have a list... or array (doesn't matter) of arrays. Each smaller array will have the 0th element as string, and the following decimals will be a float. In order to do that, I'm currently trying:
```
for line in lines:
line = line.split()
for x in range(1, len(line)):
line[x] = float(line[x])
```
Interestingly, this doesn't work, since
```
for line in lines:
line = line.split()
```
wouldn't actually split the line, and change the read data (lines variable, to be specific).
Meanwhile, below works, and successfully modifies the read data (lines variable).
```
for x in range(0, len(lines)):
lines[x] = lines[x].split()
for x in range(1, len(line)):
line[x] = float(line[x])
```
So what is the difference between the two for loops that gives two different results?
|
2018/07/03
|
[
"https://Stackoverflow.com/questions/51156919",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8221657/"
] |
You would greatly benefit from [`pandas`](https://pandas.pydata.org/) in this case:
```
import pandas as pd
df = pd.read_csv('dummy.txt', sep=' ', header=None)
>>> df.values
array([['8t1080', 0.0775, 0.092123, -0.079937],
['63mh9j', 0.327872, -0.074191, -0.014622999999999999],
['63l2o3', 0.5040100000000001, 0.356935, -0.27589600000000003],
['64c97u', 0.10740899999999999, 0.02114, -0.000909]], dtype=object)
```
Or all in one go (without saving it your text file as a dataframe object):
```
my_array = pd.read_csv('dummy.txt', sep=' ', header=None).values
>>> my_array
array([['8t1080', 0.0775, 0.092123, -0.079937],
['63mh9j', 0.327872, -0.074191, -0.014622999999999999],
['63l2o3', 0.5040100000000001, 0.356935, -0.27589600000000003],
['64c97u', 0.10740899999999999, 0.02114, -0.000909]], dtype=object)
```
|
In your first case it IS working; however, each time the for loop iterates, the `line` variable is rebound to the next value, so the split-and-converted list is lost when the loop moves on.
```
aux=[]
for line in lines: #here the program changes the value of line
line = line.split() # here you change the value of line
for x in range(1, len(line)):
line[x] = float(line[x])
aux.append(line)
```
Using an auxiliary variable you can "save" your values for later use.
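For what it's worth, the same save-while-converting step can also be written as a single list comprehension; a sketch with inline sample data standing in for `open("dummy.txt").readlines()`:

```python
# sample lines standing in for the file contents
lines = ["8t1080 0.077500 0.092123 -0.079937",
         "63mh9j 0.327872 -0.074191 -0.014623"]

# first field stays a string, every following field becomes a float
aux = [[parts[0], *map(float, parts[1:])]
       for parts in (line.split() for line in lines)]

print(aux[0])   # ['8t1080', 0.0775, 0.092123, -0.079937]
```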
|
51,156,919
|
I'm currently reading a dummy.txt, the content showing as below:
```
8t1080 0.077500 0.092123 -0.079937
63mh9j 0.327872 -0.074191 -0.014623
63l2o3 0.504010 0.356935 -0.275896
64c97u 0.107409 0.021140 -0.000909
```
Now, I am reading it using python like below:
```
lines = open("dummy.txt", "r").readlines()
```
I wanted to make a structure so that I can have a list... or array (doesn't matter) of arrays. Each smaller array will have the 0th element as string, and the following decimals will be a float. In order to do that, I'm currently trying:
```
for line in lines:
line = line.split()
for x in range(1, len(line)):
line[x] = float(line[x])
```
Interestingly, this doesn't work, since
```
for line in lines:
line = line.split()
```
wouldn't actually split the line, and change the read data (lines variable, to be specific).
Meanwhile, below works, and successfully modifies the read data (lines variable).
```
for x in range(0, len(lines)):
lines[x] = lines[x].split()
for x in range(1, len(line)):
line[x] = float(line[x])
```
So what is the difference between the two for loops that gives two different results?
|
2018/07/03
|
[
"https://Stackoverflow.com/questions/51156919",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8221657/"
] |
You just need a data structure to output to for the first example i.e.
```
data = []
lines = open("dummy.txt", "r").readlines()
for line in lines:
line = line.split()
for x in range(1, len(line)):
line[x] = float(line[x])
data.append(line)
```
The data list will contain what you want.
|
In your first case it IS working; however, each time the for loop iterates, the `line` variable is rebound to the next value, so the split-and-converted list is lost when the loop moves on.
```
aux=[]
for line in lines: #here the program changes the value of line
line = line.split() # here you change the value of line
for x in range(1, len(line)):
line[x] = float(line[x])
aux.append(line)
```
Using an auxiliary variable you can "save" your values for later use.
|
43,113,717
|
I have the text file like this
```
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . : example.com
IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx
IPv4 Address. . . . . . . . . . . : 10.0.6.106
Subnet Mask . . . . . . . . . . . : 255.255.0.0
Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . : example.com
IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx
Subnet Mask . . . . . . . . . . . : 255.255.0.0
Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
10.0.0.1
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . : example.com
IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx
IPv4 Address. . . . . . . . . . . : 10.0.6.107
Subnet Mask . . . . . . . . . . . : 255.255.0.0
Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
10.0.0.1
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . : example.com
IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx
IPv4 Address. . . . . . . . . . . : 10.0.6.108
Subnet Mask . . . . . . . . . . . : 255.255.0.0
Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . : example.com
IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx
Subnet Mask . . . . . . . . . . . : 255.255.0.0
Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
10.0.0.1
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . : example.com
IPv6 Address. . . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Temporary IPv6 Address. . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
Link-local IPv6 Address . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx
IPv4 Address. . . . . . . . . . . : 10.0.6.109
Subnet Mask . . . . . . . . . . . : 255.255.0.0
Default Gateway . . . . . . . . . : xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
10.0.0.1
```
I want to print all the IPv4 addresses from this file using a python script.
Currently I am able to print only the first IPv4 address (10.0.6.106) with the below python script:
```
ip = open("ip.txt").read().split("IPv4 ")[1].split(":")[1].split("\n")[0].strip()
print ip
```
Please help me to print all the IPv4 addresses.
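One way to get all of them, rather than only the first, is `re.findall`; a sketch against a trimmed sample of the file:

```python
import re

# trimmed stand-in for the contents of ip.txt
sample = """\
IPv4 Address. . . . . . . . . . . : 10.0.6.106
Subnet Mask . . . . . . . . . . . : 255.255.0.0
IPv4 Address. . . . . . . . . . . : 10.0.6.107
"""

# re.findall returns every match in the text, not just the first
ips = re.findall(r'IPv4 Address[ .]*:\s*(\d+\.\d+\.\d+\.\d+)', sample)
print(ips)   # ['10.0.6.106', '10.0.6.107']
```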
|
2017/03/30
|
[
"https://Stackoverflow.com/questions/43113717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3636467/"
] |
Yes, using the `@IdClass` annotation.
```
@Entity
@IdClass(EmployeeKey.class)
public class Employee {
    @Id
    private int id;
    @Id
    private int departmentId;
}

public class EmployeeKey implements Serializable {
    private int id;
    private int departmentId;
}

public interface EmployeeRepository extends JpaRepository<Employee, EmployeeKey> {}
```
|
Even if the underlying table does not have an explicit primary key specified, I am sure there is at least one column that is defined as unique (or has a unique index specified for it).
You can add the `@Id` annotation to the entity field relevant to that column, and that will be sufficient for the persistence provider. You probably would not want to specify any generator, of course.
If you happen to have a few unique columns which could form a sort of natural key, then you can specify a composite key using the `@EmbeddedId` strategy or the `@IdClass` strategy in your entity.
|
30,625,787
|
This might seem simple but it has flummoxed me for a day, so I'm turning to ya'll.
I have a valid Python dictionary:
```
{'numeric_int2': {'(113.7, 211.4]': 3,
'(15.023, 113.7]': 4,
'(211.4, 309.1]': 5,
'(309.1, 406.8]': 4,
'(406.8, 504.5]': 5,
'(504.5, 602.2]': 7,
'(602.2, 699.9]': 4,
'(699.9, 797.6]': 5,
'(797.6, 895.3]': 4,
'(895.3, 993]': 6}}
```
I want to convert it to valid JSON, which requires removing the single quotes. The desired end result is:
```
{"numeric_int2": {"(113.7, 211.4]": 3,
"(15.023, 113.7]": 4,
"(211.4, 309.1]": 5,
"(309.1, 406.8]": 4,
"(406.8, 504.5]": 5,
"(504.5, 602.2]": 7,
"(602.2, 699.9]": 4,
"(699.9, 797.6]": 5,
"(797.6, 895.3]": 4,
"(895.3, 993]": 6}}
```
I've tried every way I can think of from json.dumps() or anything else. How can I do this? Points if it is fast.
I should add, when I try to use json.dumps() on the dictionary, I get an error:
```
TypeError: 1 is not JSON serializable
```
This is my complete code:
```
In [17]:
import pandas as pd
import numpy as np
import itertools
import simplejson
raw_data = {
'numeric_float1': list([np.random.random() for _ in range(0, 47)]+[np.nan]),
'numeric_float2': list([np.random.random() for _ in range(0, 47)]+[np.nan]),
'numeric_float3': list([np.random.random() for _ in range(0, 47)]+[np.nan]),
}
df = pd.DataFrame(raw_data)
df_labels = [
'category1:category',
'numeric_float1:numeric',
'numeric_float2:numeric',
'numeric_float3:numeric'
]
columns = list(zip([w.split(':')[0] for w in df_labels],[w.split(':')[1] for w in df_labels]))
In [18]:
def count_by_value(df,columns):
numeric = [c[0] for c in columns if c[1] == 'numeric']
output = {}
for column in df[numeric]:
output[column] = pd.cut(df[column],10).value_counts().to_dict()
output = simplejson.dumps(output)
return output
In [19]:
# Test the function
count_by_value(df,columns)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-19-02e2e6cb949b> in <module>()
1 # Test the function
----> 2 count_by_value(df,columns)
<ipython-input-18-c2d882f5652d> in count_by_value(df, columns)
9 output[column] = pd.cut(df[column],10).value_counts().to_dict()
10
---> 11 output = simplejson.dumps(output)
12
13 return output
/Users/antonnobla/anaconda/lib/python3.4/site-packages/simplejson/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, encoding, default, use_decimal, namedtuple_as_object, tuple_as_array, bigint_as_string, sort_keys, item_sort_key, for_json, ignore_nan, int_as_string_bitcount, **kw)
368 and not kw
369 ):
--> 370 return _default_encoder.encode(obj)
371 if cls is None:
372 cls = JSONEncoder
/Users/antonnobla/anaconda/lib/python3.4/site-packages/simplejson/encoder.py in encode(self, o)
268 # exceptions aren't as detailed. The list call should be roughly
269 # equivalent to the PySequence_Fast that ''.join() would do.
--> 270 chunks = self.iterencode(o, _one_shot=True)
271 if not isinstance(chunks, (list, tuple)):
272 chunks = list(chunks)
/Users/antonnobla/anaconda/lib/python3.4/site-packages/simplejson/encoder.py in iterencode(self, o, _one_shot)
350 Decimal=decimal.Decimal)
351 try:
--> 352 return _iterencode(o, 0)
353 finally:
354 key_memo.clear()
/Users/antonnobla/anaconda/lib/python3.4/site-packages/simplejson/encoder.py in default(self, o)
245
246 """
--> 247 raise TypeError(repr(o) + " is not JSON serializable")
248
249 def encode(self, o):
TypeError: 4 is not JSON serializable
```
|
2015/06/03
|
[
"https://Stackoverflow.com/questions/30625787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2935984/"
] |
**it appears both simplejson and json work as expected for me**; however,
simplejson is faster than json (by quite a bit) and it seems to work fine with your data
```
import simplejson,json
print simplejson.dumps({'numeric_int2': {'(113.7, 211.4]': 3,
'(15.023, 113.7]': 4,
'(211.4, 309.1]': 5,
'(309.1, 406.8]': 4,
'(406.8, 504.5]': 5,
'(504.5, 602.2]': 7,
'(602.2, 699.9]': 4,
'(699.9, 797.6]': 5,
'(797.6, 895.3]': 4,
'(895.3, 993]': 6}})
print json.dumps({'numeric_int2': {'(113.7, 211.4]': 3,
'(15.023, 113.7]': 4,
'(211.4, 309.1]': 5,
'(309.1, 406.8]': 4,
'(406.8, 504.5]': 5,
'(504.5, 602.2]': 7,
'(602.2, 699.9]': 4,
'(699.9, 797.6]': 5,
'(797.6, 895.3]': 4,
'(895.3, 993]': 6}})
```
|
Found the answer. Here is the function that works:
```
# Count the frequency of each value
def count_by_value(df,columns):
# Selects appropriate columns for the action
numeric = [c[0] for c in columns if c[1] == 'numeric']
# Returns 0 if none of the appropriate columns exists
if len(numeric) == 0:
return 0
output = pd.DataFrame()
for column in df[numeric]:
output[column] = pd.cut(df[column],10).value_counts().to_dict()
output = output.to_json()
# output results
return output
```
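The underlying problem is that `pd.cut(...).value_counts()` yields NumPy integer scalars, which the stdlib encoder refuses. Another option is to hand `json.dumps` a `default` hook that coerces them; a sketch using a small stand-in class in place of `numpy.int64` (which, like the stand-in, is not an `int` subclass) so the example stays dependency-free:

```python
import json

class Np64:
    """Stand-in for a numpy.int64 scalar (not an int subclass)."""
    def __init__(self, v):
        self.v = v
    def __int__(self):
        return self.v

counts = {'numeric_int2': {'(113.7, 211.4]': Np64(3),
                           '(15.023, 113.7]': Np64(4)}}

# json.dumps calls the default hook for anything it can't serialize;
# int(...) works on numpy scalars just as it does on this stand-in
encoded = json.dumps(counts, default=int, sort_keys=True)
print(encoded)
```

With real pandas output, `json.dumps(output, default=int)` works the same way.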
|
59,146,674
|
I have a batch file which runs a python script, and in the python script I have a subprocess call.
I have tried `subprocess.check_output`, `subprocess.run`, and `subprocess.Popen`; all of them return an empty string, but only when run via the batch file.
If I run it manually or using an IDE, I get the response correctly. Below is the code for `subprocess.run`:
```
response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True)
self.writeInLog(' Command Response: \t' + str(response))
```
***Response is in stdout=b''***
**When ran in batch file and from task scheduler:**
>
> Command Response: CompletedProcess(args='tableau refreshextract
> --config-file "Z:\XXX\tableau\_config\SampleSuperStore.txt"',
> returncode=0, stdout=b'', stderr=b'')
>
>
>
**When ran manually or in IDE:**
>
> Command Response: CompletedProcess(args='tableau refreshextract
> --config-file "Z:\XXX\tableau\_config\SampleSuperStore.txt"',
> returncode=0, stdout=b'Data source refresh completed.\r\n0 rows uploaded.\r\n', stderr=b'')
>
>
>
Batch file which runs the python program. Parameters are passed to the python application:
```
SET config=SampleSuperStore.txt
CALL C:\XXX\AppData\Local\Continuum\anaconda3\Scripts\activate.bat
C:\XXX\AppData\Local\Continuum\anaconda3\python.exe Z:\XXX\pMainManual.py "%config%"
```
Why is that??
--Complete python code---
```
try:
from pWrapper import wrapper
import sys
except Exception as e:
print(str(e))
class main:
def __init__(self):
self.tableauPath = 'C:\\Program Files\\Tableau\\Tableau 2018.3\\bin\\'
self.tableauCommand = 'tableau refreshextract --config-file'
def runJob(self,argv):
self.manual_sProcess(argv[1])
def manual_sProcess(self,tableauConfigFile):
new_wrapper = wrapper()
new_wrapper.tableauSetup(self.tableauPath,self.tableauCommand)
if new_wrapper.tableauConfigExists(tableauConfigFile):
new_wrapper.tableauCommand(tableauConfigFile)
if __name__ == "__main__":
new_main = main()
new_main.runJob(sys.argv)
```
Wrapper class:
```
def tableauCommand(self,tableauConfigFile):
command = self.setTableauExeDirectory + ' ' + self.refreshConfigCommand + ' "' + tableauConfigFile + '"'
self.new_automateTableauExtract.runCommand(tableauConfigFile,command,self.refreshConfigCommand,self.tableauFilePath,self.setTableauExeDirectory)
```
Automate Class:
```
def runCommand(self,pConfig,pCommand,pRefreshConfigCommand,pFilePath,pSetTableauExeDirectory):
try:
fileCommand = pRefreshConfigCommand + ' "' + pFilePath + '"'
response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True)
self.writeInLog(' Command Response: \t' + str(response))
except Exception as e:
self.writeInLog('Exception in function runCommand: ' + str(e))
```
UPDATE: I initially thought that the bat file was causing this issue, but it works when I run the batch file manually; it only fails when it is run from Task Scheduler.
|
2019/12/02
|
[
"https://Stackoverflow.com/questions/59146674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2865368/"
] |
Updated
-------
First of all, if you need to launch an `anaconda-prompt` by calling the `activate.bat` file, you can simply do as follows:
```
import subprocess
def call_anaconda_venv():
subprocess.call('python -m venv virtual.env')
subprocess.call('cmd.exe /k /path/venv/Scripts/activate.bat')
if __name__ == "__main__":
call_anaconda_venv()
```
* The result of the above code would be a **running instance** of `anaconda-prompt` as required.
---
Now, to the Problem at Hand:
---
>
> I have a `batch file` which is **running a python script** and in the python script, I have a `subprocess` function which is being run.
>
>
>
I have implemented the same program as required; suppose we have:
* **Batch File** ---> `script.bat` \*\*\*\* includes a *command* to run python script i.e `test.py`. \*\*\*\*
* **Python Script File** ---> `test.py` \*\*\*\* includes a *method* to run commands using `subprocess`. \*\*\*\*
* **Batch File** ---> `sys_info.bat` \*\*\*\* includes a *command* which would give the system information of my computer. \*\*\*\*
---
Now, first, `script.bat` includes a command that will run the required python script, as given below;
```
python \file_path\test.py
pause
```
Here, the `pause` command is used to prevent the console from auto-closing after execution. Now we have `test.py`, a python script which uses `subprocess` to run the required commands and get their **output**.
---
```
from subprocess import check_output
class BatchCommands:
@staticmethod
def run_commands_using_subprocess(commands):
print("Running commands from File: {}".format(commands))
value = check_output(commands, shell=True).decode()
return value
@staticmethod
def run():
        commands_from_file = r"\file-path\sys_info.bat"  # raw string so the backslashes stay literal
print('##############################################################')
print("Shell Commands using >>> subprocess-module <<<")
print('##############################################################')
values = BatchCommands.run_commands_using_subprocess(commands_from_file)
print(values)
if __name__ == '__main__':
BatchCommands.run()
```
---
Now, in the end, I have a `sys_info.bat` file which includes the command whose output we want to capture. The command in the `sys_info.bat` file is as follows;
```
systeminfo
```
If you place multiple commands in the `sys_info.bat` file, you can also run them all at once, for example to renew the IP address:
```
ipconfig/all
ipconfig/release
ipconfig/reset
ipconfig/renew
ipconfig
```
Before using the files, set all the `directory paths`, and run the batch file, i.e. `script.bat`, in the `command-prompt` as follows;
* Run command-prompt or terminal as an `administrator`.
```
run \file_path\script.bat
```
Here is the result after running the batch file in the `terminal`.
[](https://i.stack.imgur.com/OOGOC.gif)
|
This is happening because your IDE runs the script in a shell environment that behaves differently from the one Task Scheduler provides, which is not what `subprocess` is expecting.
If you set `shell=False` and specify the absolute path to the batch file, it will run.
You might still need the `cwd` argument if the batch file requires it.
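A minimal sketch of that fix (using `sys.executable` as a stand-in for the absolute path to the real executable):

```python
import subprocess
import sys

# shell=False (the default) plus an absolute executable path avoids
# depending on whatever shell/PATH Task Scheduler happens to provide.
result = subprocess.run(
    [sys.executable, "-c", "print('Data source refresh completed.')"],
    capture_output=True, text=True)

print(result.returncode)       # 0
print(result.stdout.strip())   # Data source refresh completed.
```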
|
34,339,867
|
I am trying to match the following strings:
```
2 match virtual-address 172.29.210.119 tcp eq www
4 match virtual-address 172.29.210.147 tcp any
```
The expected output:
```
172.29.210.119
tcp
www
172.29.210.147
tcp
any
```
I am using pattern:
```
match virtual-address (\d+\.\d+\.\d+\.\d+)\s?(\w+)? (?>eq)?\s?(\d+|\w+)
```
I get the expected output with that pattern testing in: <https://regex101.com/>
But when I use the same pattern to match in python, I get the following error:
```
Traceback (most recent call last):
File ".\ace2f5_parser.py", line 119, in <module>
virtual_ip_proto_port = re.findall(pattern_virtual_ip_proto_port, line)
File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\re.py", line 213, in findall
return _compile(pattern, flags).findall(string)
File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\re.py", line 293, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\sre_compile.py", line 536, in compi
p = sre_parse.parse(p, flags)
File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\sre_parse.py", line 829, in parse
p = _parse_sub(source, pattern, 0)
File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\sre_parse.py", line 437, in _parse_
itemsappend(_parse(source, state))
File "C:\Users\hpokhare\AppData\Local\Programs\Python\Python35-32\lib\sre_parse.py", line 767, in _parse
len(char) + 1)
sre_constants.error: unknown extension ?> at position 53
```
What does the error mean. Doesn't it support ?>. Any ideas on how to resolve the issue.
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34339867",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4611991/"
] |
You can use this regex in Python:
```
\bmatch virtual-address (\d+\.\d+\.\d+\.\d+)\s?(\w+) (?:eq\s+)?(\w+)
```
[RegEx Demo](https://regex101.com/r/tX8mB2/1)
Python's `re` module doesn't support the *atomic group* syntax `(?>..)` that PCRE has (support was eventually added in Python 3.11).
|
If you switch the flavor on regex101 to "python", you will see that you cannot use `(?>eq)?`.
An alternative is to use `$` to assert position at the end of the line. Using `(\w+)$` will capture the last word of the string.
```
import re
text = [
'2 match virtual-address 172.29.210.119 tcp eq www',
'4 match virtual-address 172.29.210.147 tcp any'
]
regexp = re.compile(r'match virtual-address (\d+\.\d+\.\d+\.\d+)\s(\w+).*?\s(\w+)$')
for i in text:
ip, protocol, url = regexp.search(i).groups()
print(ip, protocol, url, '', sep='\n')
```
|
22,976,523
|
I'm working on a small app that pulls data out of a list stored in a list, passes it through a class init, and then displays/allows the user to work. Everything was going fine until I tried to format the original 'list' in IDLE so it was easier to read (for me): I'd change 9 to 09, 8 to 08, etc. It was a simple formatting/spacing change and it broke the entire god damn program, citing 'invalid token'. WTF is this, I thought. So then I opened the interpreter and started typing:
```
>x = [5,5] #Control
>x
[5, 5]
>>> y=[05,05] #control2
>>> y
[5, 5]
>>> z = [05, "ge"] #test. 'Integer', before string, apparantly works.
>>> z
[5, 'ge']
> a = ["ge", 09] #test2. String, before 'integer', cocks things up.
SyntaxError: invalid token
>>> b= ["ge", 9] #test3, this works fine.
>>> b
['ge', 9]
```
I guess my question is... why does this occur? Why does Python interpret these integers as invalid 'tokens' when they follow strings, but as integers when they follow integers?
|
2014/04/10
|
[
"https://Stackoverflow.com/questions/22976523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2584933/"
] |
It's nothing to do with lists or strings. When you prefix a number with `0`, it's interpreted as [octal](http://en.wikipedia.org/wiki/Octal). And 9 is not a valid octal digit!
```
Python 2.7.6
Type "help", "copyright", "credits" or "license" for more information.
>>> 09
File "<stdin>", line 1
09
^
SyntaxError: invalid token
>>> 011
9
```
Note that in Python3, this gives you the error for any 0-prefixed number, presumably to reduce confusion of the type you are experiencing. To specify octal in Python3, you must use `0o` as a prefix.
```
Python 3.3.3
Type "help", "copyright", "credits" or "license" for more information.
>>> 09
File "<stdin>", line 1
09
^
SyntaxError: invalid token
>>> 011
File "<stdin>", line 1
011
^
SyntaxError: invalid token
>>> 0o11
9
>>> 0o9
File "<stdin>", line 1
0o9
^
SyntaxError: invalid token
>>>
```
|
It's not just Python, it's most programming languages. Starting a number with a zero signifies that the number is in octal, which means only digits `0-7` are valid. Thus,
```
5 == 05
6 == 06
7 == 07
8 == 010
9 == 011
...
15 == 017
16 == 020
...
255 == 0377
```
Similarly, prefix `0x` means the number is hexadecimal (so, valid digits are `0-9` and `a-f`: `255 == 0xff`)
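If the zero-padding was only for readability, the usual approach is to keep plain base-10 integers and pad at display time instead; a quick sketch:

```python
# pad when formatting, not in the literal itself
values = [5, 8, 9, 15]
padded = [format(v, '02d') for v in values]
print(padded)          # ['05', '08', '09', '15']

# string parsing is also fine with leading zeros in base 10
print(int("09"))       # 9
print(int("011", 8))   # 9  (explicit octal)
```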
|
22,976,523
|
I'm working on a small app that pulls data out of a list stored in a list, passes it through a class init, and then displays/allows the user to work. Everything was going fine until I tried to format the original 'list' in IDLE so it was easier to read (for me): I'd change 9 to 09, 8 to 08, etc. It was a simple formatting/spacing change and it broke the entire god damn program, citing 'invalid token'. WTF is this, I thought. So then I opened the interpreter and started typing:
```
>x = [5,5] #Control
>x
[5, 5]
>>> y=[05,05] #control2
>>> y
[5, 5]
>>> z = [05, "ge"] #test. 'Integer', before string, apparantly works.
>>> z
[5, 'ge']
> a = ["ge", 09] #test2. String, before 'integer', cocks things up.
SyntaxError: invalid token
>>> b= ["ge", 9] #test3, this works fine.
>>> b
['ge', 9]
```
I guess my question is... why does this occur? Why does Python interpret these integers as invalid 'tokens' when they follow strings, but as integers when they follow integers?
|
2014/04/10
|
[
"https://Stackoverflow.com/questions/22976523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2584933/"
] |
It's nothing to do with lists or strings. When you prefix a number with `0`, it's interpreted as [octal](http://en.wikipedia.org/wiki/Octal). And 9 is not a valid octal digit!
```
Python 2.7.6
Type "help", "copyright", "credits" or "license" for more information.
>>> 09
File "<stdin>", line 1
09
^
SyntaxError: invalid token
>>> 011
9
```
Note that in Python3, this gives you the error for any 0-prefixed number, presumably to reduce confusion of the type you are experiencing. To specify octal in Python3, you must use `0o` as a prefix.
```
Python 3.3.3
Type "help", "copyright", "credits" or "license" for more information.
>>> 09
File "<stdin>", line 1
09
^
SyntaxError: invalid token
>>> 011
File "<stdin>", line 1
011
^
SyntaxError: invalid token
>>> 0o11
9
>>> 0o9
File "<stdin>", line 1
0o9
^
SyntaxError: invalid token
>>>
```
|
This is because Python interprets numbers with a leading `0` as octal, so `09` doesn't make much sense.
If you changed it, for instance, to the following:
```
a = ["ge", 07]
```
everything works fine.
|
22,976,523
|
I'm working on a small app that pulls data out of a list stored in a list, passes it through a class init, and then displays/allows the user to work. Everything was going fine until I tried to format the original 'list' in IDLE so it was easier to read (for me): I'd change 9 to 09, 8 to 08, etc. It was a simple formatting/spacing change and it broke the entire god damn program, citing 'invalid token'. WTF is this, I thought. So then I opened the interpreter and started typing:
```
>x = [5,5] #Control
>x
[5, 5]
>>> y=[05,05] #control2
>>> y
[5, 5]
>>> z = [05, "ge"] #test. 'Integer', before string, apparantly works.
>>> z
[5, 'ge']
> a = ["ge", 09] #test2. String, before 'integer', cocks things up.
SyntaxError: invalid token
>>> b= ["ge", 9] #test3, this works fine.
>>> b
['ge', 9]
```
I guess my question is... why does this occur? Why does Python interpret these integers as invalid 'tokens' when they follow strings, but as integers when they follow integers?
|
2014/04/10
|
[
"https://Stackoverflow.com/questions/22976523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2584933/"
] |
It's nothing to do with lists or strings. When you prefix a number with `0`, it's interpreted as [octal](http://en.wikipedia.org/wiki/Octal). And 9 is not a valid octal digit!
```
Python 2.7.6
Type "help", "copyright", "credits" or "license" for more information.
>>> 09
File "<stdin>", line 1
09
^
SyntaxError: invalid token
>>> 011
9
```
Note that in Python3, this gives you the error for any 0-prefixed number, presumably to reduce confusion of the type you are experiencing. To specify octal in Python3, you must use `0o` as a prefix.
```
Python 3.3.3
Type "help", "copyright", "credits" or "license" for more information.
>>> 09
File "<stdin>", line 1
09
^
SyntaxError: invalid token
>>> 011
File "<stdin>", line 1
011
^
SyntaxError: invalid token
>>> 0o11
9
>>> 0o9
File "<stdin>", line 1
0o9
^
SyntaxError: invalid token
>>>
```
|
This is because if a number starts with a `0` it is treated as octal, and octal digits only go from `0-7`.
### Example
```
>>> 015 - 02 #which is obviously not what you'd expect for base10 integers
11
```
|
23,120,865
|
Apologies if this is a basic question, but let us say I have a tab delimited file named `file.txt` formatted as follows:
```
Label-A [tab] Value-1
Label-B [tab] Value-2
Label-C [tab] Value-3
[...]
Label-i [tab] Value-n
```
I want [xlrd](https://pypi.python.org/pypi/xlrd) or [openpyxl](http://pythonhosted.org/openpyxl/) to add this data to the Excel worksheet named `Worksheet` in the file `workbook.xlsx` such that the cells contain the following values. I do not want to affect the contents of any other part of `workbook.xlsx` other than the two columns that are affected:
```
A1=Label-A
B1=Value-1
A2=Label-B
B2=Value-2
[etc.]
```
EDIT: Solution
```
import sys
import csv
import openpyxl
tab_file = sys.stdin.readlines()
reader = csv.reader(tab_file, delimiter='\t')
first_row = next(reader)
num_cols = len(first_row)
try:
workbook = sys.argv[1]
write_sheet = sys.argv[2]
except Exception:
raise sys.exit("ERROR")
try:
first_col = int(sys.argv[3])
except Exception:
first_col = 0
tab_reader = csv.reader(tab_file, delimiter='\t')
xls_book = openpyxl.load_workbook(filename=workbook)
sheet_names = xls_book.get_sheet_names()
xls_sheet = xls_book.get_sheet_by_name(write_sheet)
for row_index, row in enumerate(tab_reader):
number = 0
col_number = first_col
while number < num_cols:
cell_tmp = xls_sheet.cell(row = row_index, column = col_number)
cell_tmp.value = row[number]
number += 1
col_number += 1
xls_book.save(workbook)
```
|
2014/04/16
|
[
"https://Stackoverflow.com/questions/23120865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3543052/"
] |
Since you said you are used to working in Bash, I'm assuming you're using some kind of Unix/Linux, so here's something that will work on Linux.
Before pasting the code, I'd like to point out a few things:
Working with Excel in Unix (and Python) is not that straightforward. For instance, you can't open an Excel sheet for reading and writing at the same time (at least, not as far as I know, although I must admit that I have never worked with the `openpyxl` module). Python has two well known modules (that I am used to working with **:-D** ) when it comes to handling Excel sheets: one for reading Excel sheets ([xlrd](http://www.lexicon.net/sjmachin/xlrd.html)) and one for writing them ([xlwt](https://secure.simplistix.co.uk/svn/xlwt/trunk/xlwt/doc/xlwt.html?p=4966)). With those two modules, if you want to modify an existing sheet, as I understand you want to do, you need to read the existing sheet, copy it to a writable workbook, and edit that copy. Check the question/answers in [this other S.O. question](https://stackoverflow.com/questions/2725852/writing-to-existing-workbook-using-xlwt) which explain it in some more detail.
Reading *whatever*-separated files is much easier thanks to the [csv](https://docs.python.org/2/library/csv.html) module (its prepared for comma-separated files, but it can be easily tweaked for other separators). Check it out.
Also, I wasn't very sure from your example if the contents of the tab-separated file indicate somehow the row indexes on the Excel sheet or they're purely positional. When you say that in the tab-separated file you have `Value-2`, I wasn't sure if that `2` meant the second row on the Excel file or it was just an example of some text. I assumed the latest (which is easier to deal with), so whatever pair *Label Value* appears on the first row of your tab-separated file will be the first pair on the first row of the Excel file. It this is not the case, leave a comment a we will deal with it **;-)**
Ok, so let's assume the following scenario:
You have a tab-separated file like this:
*stack37.txt*:
```
Label-A Value-1
Label-B Value-2
Label-C Value-3
```
The excel file you want to modify is *stack37.xls*. It only has one sheet (or better said, the sheet you want to modify is the first one in the file) and it initially looks like this (in LibreOffice Calc):

Now, this is the python code (I stored it in a file called *stack37.py* and it's located in the same directory of the tab-separated file and the excel file):
```
import csv
import xlwt
import xlrd
from xlutils import copy as xl_copy
with open('stack37.txt') as tab_file:
tab_reader = csv.reader(tab_file, delimiter='\t')
xls_readable_book = xlrd.open_workbook('stack37.xls')
xls_writeable_book = xl_copy.copy(xls_readable_book)
xls_writeable_sheet = xls_writeable_book.get_sheet(0)
for row_index, row in enumerate(tab_reader):
xls_writeable_sheet.write(row_index, 0, row[0])
xls_writeable_sheet.write(row_index, 1, row[1])
xls_writeable_book.save('stack37.xls')
```
After you run this code, the file *stack37.xls* will look like this:

What I meant about not knowing exactly what you wanted to do with the values in your tab-separated file is that regardless of what you name your items in there, it will modify the first row of the Excel sheet, then the second... (even if your first `Value` is called `Value-2`, the code above will not put that value on the second row of the Excel sheet, but on the first row). It just assumes the first line in the tab-separated file corresponds with the values to set on the first row of the Excel sheet.
Let me explain with a slightly modified example:
Let's assume your original Excel file looks like the original excel file in my screenshot (the one full of `| Hello-Ax | Bye-Bx |`) but your tab-separated file now looks like this:
*stack37.txt*:
```
foo bar
baz baz2
```
After you run *stack37.py*, this is how your Excel file will look:

(see? first row of the tab-separated file goes to the first row in the Excel file)
**UPDATE 1**:
I'm trying the `openpyxl` module myself... Theoretically (according to the documentation) the following should work (note that I've changed the extensions to Excel 2007/2010 `.xlsx`):
```
import csv
import openpyxl
with open('stack37.txt') as tab_file:
tab_reader = csv.reader(tab_file, delimiter='\t')
xls_book = openpyxl.load_workbook(filename='stack37.xlsx')
sheet_names = xls_book.get_sheet_names()
xls_sheet = xls_book.get_sheet_by_name(sheet_names[0])
for row_index, row in enumerate(tab_reader):
cell_tmp1 = xls_sheet.cell(row = row_index, column = 0)
cell_tmp1.value = row[0]
cell_tmp2 = xls_sheet.cell(row = row_index, column = 1)
cell_tmp2.value = row[1]
xls_book.save('stack37_new.xlsx')
```
But if I do that, my LibreOffice refuses to open the newly generated file `stack37_new.xlsx` (maybe it's because my LibreOffice is old? I'm on Ubuntu 12.04, LibreOffice version 3.5.7.2... who knows, maybe it's just that).
|
That's a job for VBA, but if I had to do it in Python I would do something like this:
```
import Excel
xl = Excel.ExcelApp(False)
wb = xl.app.Workbooks("MyWorkBook.xlsx")
wb.Sheets("Ass'y").Cells(1, 1).Value2 = "something"
wb.Save()
```
With an helper `Excel.py` class like this:
```
import win32com.client
class ExcelApp(object):
def __init__(self, createNewInstance, visible = False):
self._createNewInstance=createNewInstance
if createNewInstance:
self.app = win32com.client.Dispatch('Excel.Application')
if visible:
self.app.Visible = True
else:
self.app = win32com.client.GetActiveObject("Excel.Application")
def __exit__(self):
if self.app and self._createNewInstance:
self.app.Quit()
def __del__(self):
if self.app and self._createNewInstance:
self.app.Quit()
def quit(self):
if self.app:
self.app.Quit()
```
|
23,120,865
|
Apologies if this is a basic question, but let us say I have a tab delimited file named `file.txt` formatted as follows:
```
Label-A [tab] Value-1
Label-B [tab] Value-2
Label-C [tab] Value-3
[...]
Label-i [tab] Value-n
```
I want [xlrd](https://pypi.python.org/pypi/xlrd) or [openpyxl](http://pythonhosted.org/openpyxl/) to add this data to the excel worksheet named `Worksheet` in the file `workbook.xlsx` such that the cells contain the following values. I do not want to affect the contents of any other part of `workbook.xlsx` other than the two columns that are affected
```
A1=Label-A
B1=Value-1
A2=Label-B
B2=Value-2
[etc.]
```
EDIT: Solution
```
import sys
import csv
import openpyxl
tab_file = sys.stdin.readlines()
reader = csv.reader(tab_file, delimiter='\t')
first_row = next(reader)
num_cols = len(first_row)
try:
workbook = sys.argv[1]
write_sheet = sys.argv[2]
except Exception:
    sys.exit("ERROR")
try:
first_col = int(sys.argv[3])
except Exception:
first_col = 0
tab_reader = csv.reader(tab_file, delimiter='\t')
xls_book = openpyxl.load_workbook(filename=workbook)
sheet_names = xls_book.get_sheet_names()
xls_sheet = xls_book.get_sheet_by_name(write_sheet)
for row_index, row in enumerate(tab_reader):
number = 0
col_number = first_col
while number < num_cols:
cell_tmp = xls_sheet.cell(row = row_index, column = col_number)
cell_tmp.value = row[number]
number += 1
col_number += 1
xls_book.save(workbook)
```
|
2014/04/16
|
[
"https://Stackoverflow.com/questions/23120865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3543052/"
] |
Since you said you are used to working in Bash, I'm assuming you're using some kind of Unix/Linux, so here's something that will work on Linux.
Before pasting the code, I'd like to point out a few things:
Working with Excel in Unix (and Python) is not that straightforward. For instance, you can't open an Excel sheet for reading and writing at the same time (at least, not as far as I know, although I must admit that I have never worked with the `openpyxl` module). Python has two well-known modules (that I am used to working with **:-D** ) when it comes to handling Excel sheets: one for reading Excel sheets ([xlrd](http://www.lexicon.net/sjmachin/xlrd.html)) and a second one for writing them ([xlwt](https://secure.simplistix.co.uk/svn/xlwt/trunk/xlwt/doc/xlwt.html?p=4966)). With those two modules, if you want to modify an existing sheet, as I understand you want to do, you need to read the existing sheet, copy it to a writable sheet, and edit that one. Check the question/answers in [this other S.O. question](https://stackoverflow.com/questions/2725852/writing-to-existing-workbook-using-xlwt) that explain it in some more detail.
Reading *whatever*-separated files is much easier thanks to the [csv](https://docs.python.org/2/library/csv.html) module (it's prepared for comma-separated files, but it can be easily tweaked for other separators). Check it out.
Also, I wasn't very sure from your example whether the contents of the tab-separated file somehow indicate the row indexes on the Excel sheet or are purely positional. When you say that in the tab-separated file you have `Value-2`, I wasn't sure if that `2` meant the second row on the Excel file or it was just an example of some text. I assumed the latter (which is easier to deal with), so whatever pair *Label Value* appears on the first row of your tab-separated file will be the first pair on the first row of the Excel file. If this is not the case, leave a comment and we will deal with it **;-)**
Ok, so let's assume the following scenario:
You have a tab-separated file like this:
*stack37.txt*:
```
Label-A Value-1
Label-B Value-2
Label-C Value-3
```
The excel file you want to modify is *stack37.xls*. It only has one sheet (or better said, the sheet you want to modify is the first one in the file) and it initially looks like this (in LibreOffice Calc):

Now, this is the python code (I stored it in a file called *stack37.py* and it's located in the same directory of the tab-separated file and the excel file):
```
import csv
import xlwt
import xlrd
from xlutils import copy as xl_copy
with open('stack37.txt') as tab_file:
tab_reader = csv.reader(tab_file, delimiter='\t')
xls_readable_book = xlrd.open_workbook('stack37.xls')
xls_writeable_book = xl_copy.copy(xls_readable_book)
xls_writeable_sheet = xls_writeable_book.get_sheet(0)
for row_index, row in enumerate(tab_reader):
xls_writeable_sheet.write(row_index, 0, row[0])
xls_writeable_sheet.write(row_index, 1, row[1])
xls_writeable_book.save('stack37.xls')
```
After you run this code, the file *stack37.xls* will look like this:

What I meant about not knowing exactly what you wanted to do with the values in your tab-separated file is that regardless of what you name your items in there, it will modify the first row of the Excel sheet, then the second... (even if your first `Value` is called `Value-2`, the code above will not put that value on the second row of the Excel sheet, but on the first row). It just assumes the first line in the tab-separated file corresponds with the values to set on the first row of the Excel sheet.
Let me explain with a slightly modified example:
Let's assume your original Excel file looks like the original excel file in my screenshot (the one full of `| Hello-Ax | Bye-Bx |`) but your tab-separated file now looks like this:
*stack37.txt*:
```
foo bar
baz baz2
```
After you run *stack37.py*, this is how your Excel file will look:

(see? first row of the tab-separated file goes to the first row in the Excel file)
**UPDATE 1**:
I'm trying the `openpyxl` module myself... Theoretically (according to the documentation) the following should work (note that I've changed the extensions to Excel 2007/2010 `.xlsx`):
```
import csv
import openpyxl
with open('stack37.txt') as tab_file:
tab_reader = csv.reader(tab_file, delimiter='\t')
xls_book = openpyxl.load_workbook(filename='stack37.xlsx')
sheet_names = xls_book.get_sheet_names()
xls_sheet = xls_book.get_sheet_by_name(sheet_names[0])
for row_index, row in enumerate(tab_reader):
cell_tmp1 = xls_sheet.cell(row = row_index, column = 0)
cell_tmp1.value = row[0]
cell_tmp2 = xls_sheet.cell(row = row_index, column = 1)
cell_tmp2.value = row[1]
xls_book.save('stack37_new.xlsx')
```
But if I do that, my LibreOffice refuses to open the newly generated file `stack37_new.xlsx` (maybe it's because my LibreOffice is old? I'm on Ubuntu 12.04, LibreOffice version 3.5.7.2... who knows, maybe it's just that).
|
You should use the CSV module in the standard library to read the file.
In openpyxl you can have something like this:
```
from openpyxl import load_workbook
wb = load_workbook('workbook.xlsx')
ws = wb[sheetname]
for idx, line in enumerate(csvfile):
    ws.cell(row=idx, column=0).value = line[0]
    ws.cell(row=idx, column=1).value = line[1]
wb.save("changed.xlsx")
```
|
2,830,953
|
I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in [this](https://stackoverflow.com/questions/2726167/parse-a-csv-file-using-python-to-make-a-decision-tree-later) question.
Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake?
```
class Dataset:
individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys
def field_set(self): #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals
def classified(self, predicted_value): #Returns True if all the individuals have the same value for predicted_value
def fields_exhausted(self, predicted_value): #Returns True if all the individuals are identical except for predicted_value
def lowest_entropy_value(self, predicted_value): #Returns the field that will reduce <a href="http://en.wikipedia.org/wiki/Entropy_%28information_theory%29">entropy</a> the most
def __init__(self, individuals=[]):
```
and
```
class Node:
ds = Dataset() #The data that is associated with this Node
links = [] #List of Nodes, the offspring Nodes of this node
level = 0 #Tree depth of this Node
split_value = '' #Field used to split out this Node from the parent node
node_value = '' #Value used to split out this Node from the parent Node
def split_dataset(self, split_value): #Splits the dataset into a series of smaller datasets, each of which has a unique value for split_value. Then creates subnodes to store these datasets.
fields = [] #List of options for split_value amongst the individuals
datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key
for field in self.ds.field_set()[split_value]: #Populates the keys of fields[]
fields.append(field)
datasets[field] = Dataset()
for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value
datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit
for field in fields: #Creates subnodes from each of the datasets.Dataset options
self.add_subnode(datasets[field],split_value,field)
def add_subnode(self, dataset, split_value='', node_value=''):
def __init__(self, level, dataset=Dataset()):
```
My initialisation code is currently:
```
if __name__ == '__main__':
filename = (sys.argv[1]) #Takes in a CSV file
predicted_value = "# class" #Identifies the field from the CSV file that should be predicted
base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists
parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries
root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset
root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes
n = root.links[0]
n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.
```
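A likely explanation (an assumption, since the elided class bodies aren't shown) is that `individuals = []` is a class attribute, so every `Dataset` instance shares the same list; appending to `datasets[...].individuals` then also grows `self.ds.individuals` while it is being iterated, which can loop forever. A minimal sketch of the pitfall and the usual fix:

```python
# Class attributes and mutable default arguments are shared across instances.
class Dataset:
    individuals = []  # one list shared by every Dataset instance

a, b = Dataset(), Dataset()
a.individuals.append("row-1")
print(b.individuals)  # ['row-1'] -- b sees a's append

# Fix: create a fresh list per instance inside __init__ (and avoid
# the shared mutable default `individuals=[]`).
class FixedDataset:
    def __init__(self, individuals=None):
        self.individuals = list(individuals) if individuals is not None else []

c, d = FixedDataset(), FixedDataset()
c.individuals.append("row-1")
print(d.individuals)  # []
```

With per-instance lists, each `datasets[field]` gets its own storage and the loop over `self.ds.individuals` terminates normally.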
|
2010/05/13
|
[
"https://Stackoverflow.com/questions/2830953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/27290/"
] |
This is fairly easy.
```
var timerID;
$("#left").hover(function() {
timerID = setInterval(slideLeft, 1000);
}, function() {
clearInterval(timerID);
});
function slideLeft() {
$("#slider").animate({left: -30});
}
```
and similar for right.
You only need to use `hover()` if there's something you need to stop when the mouse leaves the area. Otherwise you can simply use the `mouseover()` event.
**Note:** the actual effect I've put in there probably isn't right but it depends entirely on the slider plugin you're using. Adjust as required.
|
You don't have to check where the mouse is, as the `mouseout` event will be triggered when the mouse leaves the element.
To make the movement repeat while the mouse is hovering the element, start an interval that you stop when the mouse leaves the element:
```
$(function(){
var moveInterval;
$('#moveLeft').hover(function(){
moveInterval = window.setInterval(function(){
// here you move the slider
}, 100);
}, function(){
window.clearInterval(moveInterval);
});
});
```
|
69,512,596
|
I've recently started learning how to code in python. I wanted to know if there is a norm or specific rule for the position of statements while using functions.
eg:
```
def example(x):
y = 7
print("Default value is", y)
print("Value entered is", x)
a = int(input("Enter a value: "))
example(a)
```
Would it be better to move the input statement inside the function? Will this pose any problems in more complex programs?
|
2021/10/10
|
[
"https://Stackoverflow.com/questions/69512596",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17117924/"
] |
In larger programs, pretty much everything will be in functions (or methods, which are a kind of function). The only code at the top level will be a couple of lines to call the first function (often called `main`).
The question then is whether to put the `input` into the same function, or into separate functions. That's a more general question of how to organise code, how large or small to make the functions and how to separate concerns.
For `input()` in particular, it's probably better to put it in a separate function from the calculation; that way, you will be able to use the same calculation functions (a) with user-supplied input; (b) with values coming from elsewhere in the program; and, importantly (c) from tests.
Similarly, probably separate the output from the calculation; again, you'll then be able to (a) print it out; (b) further process it or write it to file, database, etc; and (c) check it in tests.
That way, you'll have three functions, each with a separate concern: one to get the input from the user and convert it to `int`; a second one to do whatever calculation is required; and a third one to format up the result neatly and print it for the user.
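A minimal sketch of that three-function split (function and variable names are illustrative, not from the original question):

```python
def read_value():
    """Concern 1: get input from the user and convert it to int."""
    return int(input("Enter a value: "))

def compute(x, default=7):
    """Concern 2: the pure calculation -- callable from other code and from tests."""
    return x + default

def report(x, result):
    """Concern 3: format and print the result for the user."""
    print("Value entered is", x)
    print("Result is", result)

def main():
    x = read_value()
    report(x, compute(x))

if __name__ == "__main__":
    main()
```

Because `compute` never touches `input()` or `print()`, a test can call it directly, e.g. `assert compute(3) == 10`.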
|
If the result of the input statement will only be used inside one function, then moving the statement into that function might be better in the future, when your code becomes more complex.
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
I would try to replicate the Django application functionality with the [PyQt toolkit](http://www.riverbankcomputing.co.uk/software/pyqt/intro).
You can in fact embed web content in PyQt applications, with the help of QtWebKit. I would post some potentially useful links, but apparently I have too low a reputation to post more than one :)
|
I have `django manage.py runserver` in a .bat file and a localhost bookmark in the browser, and voilà: a Django desktop app. Or make your own browser that opens localhost. [Creating a web-browser with Python and PyQT](https://pythonspot.com/creating-a-webbrowser-with-python-and-pyqt-tutorial/)
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
I think you should just create an application that connects to the webserver. There is a good answer to getting RESTful API calls into your django application. This means you'd basically just be creating a new front-end for your server.
[Using django-rest-interface](https://stackoverflow.com/questions/212941/using-django-rest-interface)
It doesn't make sense to rewrite the *entire* django application as a desktop application. I mean, where do you want to store the data?
|
I am thinking about a similar problem.
Would it be enough to have a minimal PyQt GUI that presents the Django website from localhost (getting rid of TCP/HTTPS on the loopback interface somehow) via QtWebKit?
All you seem to need is a minimal Python browser that surfs the built-in webserver (and I guess you could even call Django directly for the HTML payload without going over the HTTP/TCP layers).
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
I would try to replicate the Django application functionality with the [PyQt toolkit](http://www.riverbankcomputing.co.uk/software/pyqt/intro).
You can in fact embed web content in PyQt applications, with the help of QtWebKit. I would post some potentially useful links, but apparently I have too low a reputation to post more than one :)
|
I am thinking about a similar problem.
Would it be enough to have a minimal PyQt GUI that presents the Django website from localhost (getting rid of TCP/HTTPS on the loopback interface somehow) via QtWebKit?
All you seem to need is a minimal Python browser that surfs the built-in webserver (and I guess you could even call Django directly for the HTML payload without going over the HTTP/TCP layers).
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
There are two places you can go to try to decouple the view and put it into a new desktop app. First you can use the existing controller and model and adapt a new view to that. Second, you can use only the existing model and build a new view and controller.
If you haven't adhered closely enough to MVC principles to be able to detach the model from the rest of the application, you can simply rewrite the entire thing. If you are forced to go this route, bail on Django and HTTP entirely (as duffymo suggests above).
You have to also evaluate these solutions based upon performance requirements and "heaviness" of the services. If you have stringent performance requirements then relying on the HTTP layer just gets in the way, and providing a simple API into your model is the way to go.
There are clearly a lot of possible solutions, but this is the approach I would take to deciding which one is appropriate...
|
There's a project called [Camelot](http://www.python-camelot.com/) which seems to try to combine Django-like features on the desktop using PyQt. Haven't tried it though.
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
For starters, you'll have to replace the web UI with a desktop technology like Tk/Tcl.
If you do that, you may not want to use HTTP as the protocol between the client and the services.
Django is a web framework. If you're switching to a desktop, you'll have to forego Django.
|
It is possible to convert a Django application to a desktop app with pywebview in a few lines of code. First create a Python file gui.py in the directory where manage.py exists, install pywebview through pip, then write the following code in gui.py:
```
import os
import sys
import time
from threading import Thread
import webview
def start_webview():
window = webview.create_window('Hello world', 'http://localhost:8000/', confirm_close=True, width=900, height=600)
webview.start()
    os._exit(0)  # start() blocks until the window is closed; then terminate the runserver thread
def start_startdjango():
if sys.platform in ['win32', 'win64']:
os.system("python manage.py runserver {}:{}".format('127.0.0.1', '8000'))
# time.sleep(10)
else:
os.system("python3 manage.py runserver {}:{}".format('127.0.0.1', '8000'))
# time.sleep(10)
if __name__ == '__main__':
Thread(target=start_startdjango).start()
start_webview()
```
Then run gui.py with the command `python gui.py`. It will create a window like a desktop app.
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
For starters, you'll have to replace the web UI with a desktop technology like Tk/Tcl.
If you do that, you may not want to use HTTP as the protocol between the client and the services.
Django is a web framework. If you're switching to a desktop, you'll have to forego Django.
|
There's a project called [Camelot](http://www.python-camelot.com/) which seems to try to combine Django-like features on the desktop using PyQt. Haven't tried it though.
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
I would try to replicate the Django application functionality with the [PyQt toolkit](http://www.riverbankcomputing.co.uk/software/pyqt/intro).
You can in fact embed web content in PyQt applications, with the help of QtWebKit. I would post some potentially useful links, but apparently I have too low a reputation to post more than one :)
|
There are two places you can go to try to decouple the view and put it into a new desktop app. First you can use the existing controller and model and adapt a new view to that. Second, you can use only the existing model and build a new view and controller.
If you haven't adhered closely enough to MVC principles to be able to detach the model from the rest of the application, you can simply rewrite the entire thing. If you are forced to go this route, bail on Django and HTTP entirely (as duffymo suggests above).
You have to also evaluate these solutions based upon performance requirements and "heaviness" of the services. If you have stringent performance requirements then relying on the HTTP layer just gets in the way, and providing a simple API into your model is the way to go.
There are clearly a lot of possible solutions, but this is the approach I would take to deciding which one is appropriate...
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
I think you should just create an application that connects to the webserver. There is a good answer to getting RESTful API calls into your django application. This means you'd basically just be creating a new front-end for your server.
[Using django-rest-interface](https://stackoverflow.com/questions/212941/using-django-rest-interface)
It doesn't make sense to rewrite the *entire* django application as a desktop application. I mean, where do you want to store the data?
|
I have `django manage.py runserver` in a .bat file and a localhost bookmark in the browser, and voilà: a Django desktop app. Or make your own browser that opens localhost. [Creating a web-browser with Python and PyQT](https://pythonspot.com/creating-a-webbrowser-with-python-and-pyqt-tutorial/)
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
It is possible to convert a Django application to a desktop app with pywebview in a few lines of code. First create a Python file gui.py in the directory where manage.py exists, install pywebview through pip, then write the following code in gui.py:
```
import os
import sys
import time
from threading import Thread
import webview
def start_webview():
window = webview.create_window('Hello world', 'http://localhost:8000/', confirm_close=True, width=900, height=600)
webview.start()
    os._exit(0)  # start() blocks until the window is closed; then terminate the runserver thread
def start_startdjango():
if sys.platform in ['win32', 'win64']:
os.system("python manage.py runserver {}:{}".format('127.0.0.1', '8000'))
# time.sleep(10)
else:
os.system("python3 manage.py runserver {}:{}".format('127.0.0.1', '8000'))
# time.sleep(10)
if __name__ == '__main__':
Thread(target=start_startdjango).start()
start_webview()
```
Then run gui.py with the command `python gui.py`. It will create a window like a desktop app.
|
I have `django manage.py runserver` in .bat file and a localhost bookmark bar in a browser and whola a django-desktop-app. Or make your own browser that opens localhost. [Creating a web-browser with Python and PyQT](https://pythonspot.com/creating-a-webbrowser-with-python-and-pyqt-tutorial/)
|
4,646,659
|
How can I convert a web site developed in Django (Python) into a desktop application?
I am new to Python and Django; can you please help me out?
Thanks in advance.
|
2011/01/10
|
[
"https://Stackoverflow.com/questions/4646659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569806/"
] |
For starters, you'll have to replace the web UI with a desktop technology like Tk/Tcl.
If you do that, you may not want to use HTTP as the protocol between the client and the services.
Django is a web framework. If you're switching to a desktop, you'll have to forego Django.
|
I have `django manage.py runserver` in a .bat file and a localhost bookmark in the browser, and voilà: a Django desktop app. Or make your own browser that opens localhost. [Creating a web-browser with Python and PyQT](https://pythonspot.com/creating-a-webbrowser-with-python-and-pyqt-tutorial/)
|
11,170,478
|
I have a command line program developed in C. Let's say I have a parser written in C. Now I am developing a project with a GUI in Python, and I need that parser for the Python project. In C we can invoke a system call and redirect the output to stdout or a file. Is there any way to do this in Python? I have both the code and the executable file of the C program. Thanks in advance.
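One standard way to invoke an external C executable from Python and capture or redirect its output is the `subprocess` module (shown here with `echo` as a stand-in, since the actual parser binary isn't named):

```python
import subprocess

# Run the external program and capture its stdout as text
# (replace ["echo", "parsed output"] with e.g. ["./parser", "input.txt"]).
result = subprocess.run(
    ["echo", "parsed output"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # parsed output

# To redirect the output to a file instead, hand subprocess a file object:
with open("parser.out", "w") as out:
    subprocess.run(["echo", "parsed output"], stdout=out, check=True)
```

`check=True` raises `CalledProcessError` if the program exits with a non-zero status, so parser failures are not silently ignored.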
|
2012/06/23
|
[
"https://Stackoverflow.com/questions/11170478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1135245/"
] |
I would not expect MySQL to give that error message, but many other databases do. In other databases you can work around it by repeating the column definition:
```
SELECT amount1 + amount2 as totalamount
FROM Donation
WHERE amount1 + amount2 > 1000
```
Or you can use a subquery to avoid the repetition:
```
SELECT totalamount
FROM (
select amount1 + amount2 as totalamount
, *
from Donation
) as SubQueryAlias
WHERE totalamount > 1000
```
[Live example at SQL Fiddle.](http://sqlfiddle.com/#!2/ce40c/2/0)
|
No way.
**WHERE** filters columns, while **HAVING** filters on aggregates.
See [SQL Having](http://www.w3schools.com/sql/sql_having.asp)
|
11,170,478
|
I have a command line program developed in C. Let's say I have a parser written in C. Now I am developing a project with a GUI in Python, and I need that parser for the Python project. In C we can invoke a system call and redirect the output to stdout or a file. Is there any way to do this in Python? I have both the code and the executable file of the C program. Thanks in advance.
|
2012/06/23
|
[
"https://Stackoverflow.com/questions/11170478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1135245/"
] |
I would not expect MySQL to give that error message, but many other databases do. In other databases you can work around it by repeating the column definition:
```
SELECT amount1 + amount2 as totalamount
FROM Donation
WHERE amount1 + amount2 > 1000
```
Or you can use a subquery to avoid the repetition:
```
SELECT totalamount
FROM (
select amount1 + amount2 as totalamount
, *
from Donation
) as SubQueryAlias
WHERE totalamount > 1000
```
[Live example at SQL Fiddle.](http://sqlfiddle.com/#!2/ce40c/2/0)
|
Depending on the SQL dialect you can't put a derived column in the where clause.
Instead use this where clause.
```
WHERE (amount1 + amount2) > 1000
```
|
11,170,478
|
I have a command-line program developed in C. Let's say I have a parser written in C. Now I am developing a project with a GUI in Python, and I need that parser for the Python project. In C we can invoke a system call and redirect the output to stdout or a file. Is there any way to do this in Python? I have both the code and the executable file of the C program. Thanks in advance.
|
2012/06/23
|
[
"https://Stackoverflow.com/questions/11170478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1135245/"
] |
I would not expect MySQL to give that error message, but many other databases do. In other databases you can work around it by repeating the column definition:
```
SELECT amount1 + amount2 as totalamount
FROM Donation
WHERE amount1 + amount2 > 1000
```
Or you can use a subquery to avoid the repetition:
```
SELECT totalamount
FROM (
select amount1 + amount2 as totalamount
, *
from Donation
) as SubQueryAlias
WHERE totalamount > 1000
```
[Live example at SQL Fiddle.](http://sqlfiddle.com/#!2/ce40c/2/0)
|
I suggest you use one of the variations in Andomar's answer.
What MySQL allows is this (don't use it, it's not standard and almost any other DBMS does NOT allow it):
```
SELECT (amount1 + amount2) AS totalamount
FROM Donation
HAVING totalamount > 1000 ;
```
|
6,418,199
|
I was looking up the PyPy project (Python in Python), and started pondering what runs the outer layer of Python. Surely, I conjectured, it can't be, as the old saying goes, "turtles all the way down"! After all, Python is not valid x86 assembly!
Soon I remembered the concept of bootstrapping, and looked up compiler bootstrapping. "OK", I thought, "so it can be either written in a different language or hand-compiled from assembly". In the interest of performance, I'm sure C compilers are just built up from assembly.
This is all well, but the question still remains: how does the computer get that assembly file?!
Say I buy a new CPU with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS?
Can someone explain this to me?
|
2011/06/20
|
[
"https://Stackoverflow.com/questions/6418199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322900/"
] |
>
> In the interest of performance, I'm sure C compilers are just built up from assembly.
>
>
>
C compilers are, nowadays, (almost?) completely written in C (or higher-level languages - Clang is C++, for instance). Compilers gain little to nothing from including hand-written assembly code. The things that take most time are as slow as they are because they solve very hard problems, where "hard" means "big computational complexity" - rewriting in assembly brings at most a constant speedup, but those don't really matter anymore at that level.
Also, most compilers want high portability, so architecture-specific tricks in the front and middle end are out of the question (and in the backends, they're not desirable either, because they may break cross-compilation).
>
> Say I buy a new cpu with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS?
>
>
>
When you're installing an OS, there's (usually) no C compiler run. The setup CD is full of readily compiled binaries for that architecture. If there's a C compiler included (as is the case with many Linux distros), that's an already-compiled executable too. And those distros that make you build your own kernel etc. also have at least one executable included - the compiler. That is, of course, unless you have to compile your own kernel on an existing installation of anything with a C compiler.
If by "new CPU" you mean a new architecture that isn't backwards-compatible with anything yet supported, self-hosting compilers can follow the usual porting procedure: first write a backend for the new target, then compile the compiler itself for it, and suddenly you have a mature compiler with a battle-hardened (it compiled a whole compiler) native backend on the new platform.
|
If you buy a new machine with a pre-installed OS, it doesn't even need to include a compiler anywhere, because all the executable code has been compiled on some other machine, by whoever provides the OS - your machine doesn't need to compile anything itself.
How do you get to this point if you have a completely new CPU architecture? In this case, you would probably start by writing a new code generation back-end for your new CPU architecture (the "target") for an existing C compiler that runs on some other platform (the "host") - a [cross-compiler](http://en.wikipedia.org/wiki/Cross_compiler).
Once your cross-compiler (running on the host) works well enough to generate a correct compiler (and necessary libraries, etc.) that will run on the target, then you can compile the compiler with itself on the target platform, and end up with a target-native compiler, which runs on the target and generates code which runs on the target.
It's the same principle with a new language: you have to write code in an existing language that you do have a toolchain for, which will compile your new language into something that you can work with (let's call this the "bootstrap compiler"). Once you get this working well enough, you can write a compiler in your new language (the "real compiler"), and then compile the real compiler with the bootstrap compiler. At this point you're writing the compiler for your new language in the new language itself, and your language is said to be "self-hosting".
|
6,418,199
|
I was looking up the PyPy project (Python in Python), and started pondering what runs the outer layer of Python. Surely, I conjectured, it can't be, as the old saying goes, "turtles all the way down"! After all, Python is not valid x86 assembly!
Soon I remembered the concept of bootstrapping, and looked up compiler bootstrapping. "OK", I thought, "so it can be either written in a different language or hand-compiled from assembly". In the interest of performance, I'm sure C compilers are just built up from assembly.
This is all well, but the question still remains: how does the computer get that assembly file?!
Say I buy a new CPU with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS?
Can someone explain this to me?
|
2011/06/20
|
[
"https://Stackoverflow.com/questions/6418199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322900/"
] |
>
> Say I buy a new cpu with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS?
>
>
>
I understand what you're asking... what would happen if we had no C compiler and had to start from scratch?
The answer is you'd have to start from assembly or hardware. That is, you can either build a compiler in software or hardware. If there were no compilers in the whole world, these days you could probably do it faster in assembly; however, back in the day I believe compilers were in fact dedicated pieces of hardware. The [wikipedia article](https://secure.wikimedia.org/wikipedia/en/wiki/History_of_compiler_construction) is somewhat short and doesn't back me up on that, but never mind.
The next question I guess is what happens today? Well, those compiler writers have been busy writing portable C for years, so the compiler should be able to compile itself. It's worth discussing on a very high level what compilation is. Basically, you take a set of statements and produce assembly from them. That's it. Well, it's actually more complicated than that - you can do all sorts of things with lexers and parsers and I only understand a small subset of it, but essentially, you're looking to map C to assembly.
Under normal operation, the compiler produces assembly code matching your platform, but it doesn't have to. It can produce assembly code for any platform you like, provided it knows how to. So the first step in making C work on your platform is to create a target in an existing compiler, start adding instructions and get basic code working.
Once this is done, in theory, you can now *cross compile* from one platform to another. The next stages are: building a kernel, bootloader and some basic userland utilities for that platform.
Then, you can have a go at compiling the compiler for that platform (once you've got a working userland and everything you need to run the build process). If that succeeds, you've got basic utilities, a working kernel, userland and a compiler system. You're now well on your way.
Note that in the process of porting the compiler, you probably needed to write an assembler and linker for that platform too. To keep the description simple, I omitted them.
If this is of interest, [Linux from Scratch](http://www.linuxfromscratch.org/) is an interesting read. It doesn't tell you how to create a new target from scratch (which is significantly non trivial) - it assumes you're going to build for an existing known target, but it does show you how you cross compile the essentials and begin building up the system.
Python does not actually compile down to assembly. For a start, the running Python program keeps track of reference counts on objects, something that a CPU won't do for you. However, the concept of instruction-based code is at the heart of Python too. Have a play with this:
```
>>> def hello(x, y, z, q):
... print "Hello, world"
... q()
... return x+y+z
...
>>> import dis
>>> dis.dis(hello)
2 0 LOAD_CONST 1 ('Hello, world')
3 PRINT_ITEM
4 PRINT_NEWLINE
3 5 LOAD_FAST 3 (q)
8 CALL_FUNCTION 0
11 POP_TOP
4 12 LOAD_FAST 0 (x)
15 LOAD_FAST 1 (y)
18 BINARY_ADD
19 LOAD_FAST 2 (z)
22 BINARY_ADD
23 RETURN_VALUE
```
There you can see how Python thinks of the code you entered. This is python bytecode, i.e. the assembly language of python. It effectively has its own "instruction set" if you like for implementing the language. This is the concept of a virtual machine.
Java has exactly the same kind of idea. I took a class function and ran `javap -c class` to get this:
```
invalid.site.ningefingers.main:();
Code:
0: aload_0
1: invokespecial #1; //Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_0
1: istore_1
2: iconst_0
3: istore_1
4: iload_1
5: aload_0
6: arraylength
7: if_icmpge 57
10: getstatic #2;
13: new #3;
16: dup
17: invokespecial #4;
20: ldc #5;
22: invokevirtual #6;
25: iload_1
26: invokevirtual #7;
//.......
}
```
I take it you get the idea. These are the assembly languages of the Python and Java worlds, i.e. how the Python interpreter and Java compiler think, respectively.
Something else that would be worth reading up on is [JonesForth](https://rwmj.wordpress.com/2010/08/07/jonesforth-git-repository/). This is both a working forth interpreter and a tutorial and I can't recommend it enough for thinking about "how things get executed" and how you write a simple, lightweight language.
|
If you buy a new machine with a pre-installed OS, it doesn't even need to include a compiler anywhere, because all the executable code has been compiled on some other machine, by whoever provides the OS - your machine doesn't need to compile anything itself.
How do you get to this point if you have a completely new CPU architecture? In this case, you would probably start by writing a new code generation back-end for your new CPU architecture (the "target") for an existing C compiler that runs on some other platform (the "host") - a [cross-compiler](http://en.wikipedia.org/wiki/Cross_compiler).
Once your cross-compiler (running on the host) works well enough to generate a correct compiler (and necessary libraries, etc.) that will run on the target, then you can compile the compiler with itself on the target platform, and end up with a target-native compiler, which runs on the target and generates code which runs on the target.
It's the same principle with a new language: you have to write code in an existing language that you do have a toolchain for, which will compile your new language into something that you can work with (let's call this the "bootstrap compiler"). Once you get this working well enough, you can write a compiler in your new language (the "real compiler"), and then compile the real compiler with the bootstrap compiler. At this point you're writing the compiler for your new language in the new language itself, and your language is said to be "self-hosting".
|
6,418,199
|
I was looking up the PyPy project (Python in Python), and started pondering what runs the outer layer of Python. Surely, I conjectured, it can't be, as the old saying goes, "turtles all the way down"! After all, Python is not valid x86 assembly!
Soon I remembered the concept of bootstrapping, and looked up compiler bootstrapping. "OK", I thought, "so it can be either written in a different language or hand-compiled from assembly". In the interest of performance, I'm sure C compilers are just built up from assembly.
This is all well, but the question still remains: how does the computer get that assembly file?!
Say I buy a new CPU with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS?
Can someone explain this to me?
|
2011/06/20
|
[
"https://Stackoverflow.com/questions/6418199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322900/"
] |
>
> Say I buy a new cpu with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS?
>
>
>
I understand what you're asking... what would happen if we had no C compiler and had to start from scratch?
The answer is you'd have to start from assembly or hardware. That is, you can either build a compiler in software or hardware. If there were no compilers in the whole world, these days you could probably do it faster in assembly; however, back in the day I believe compilers were in fact dedicated pieces of hardware. The [wikipedia article](https://secure.wikimedia.org/wikipedia/en/wiki/History_of_compiler_construction) is somewhat short and doesn't back me up on that, but never mind.
The next question I guess is what happens today? Well, those compiler writers have been busy writing portable C for years, so the compiler should be able to compile itself. It's worth discussing on a very high level what compilation is. Basically, you take a set of statements and produce assembly from them. That's it. Well, it's actually more complicated than that - you can do all sorts of things with lexers and parsers and I only understand a small subset of it, but essentially, you're looking to map C to assembly.
Under normal operation, the compiler produces assembly code matching your platform, but it doesn't have to. It can produce assembly code for any platform you like, provided it knows how to. So the first step in making C work on your platform is to create a target in an existing compiler, start adding instructions and get basic code working.
Once this is done, in theory, you can now *cross compile* from one platform to another. The next stages are: building a kernel, bootloader and some basic userland utilities for that platform.
Then, you can have a go at compiling the compiler for that platform (once you've got a working userland and everything you need to run the build process). If that succeeds, you've got basic utilities, a working kernel, userland and a compiler system. You're now well on your way.
Note that in the process of porting the compiler, you probably needed to write an assembler and linker for that platform too. To keep the description simple, I omitted them.
If this is of interest, [Linux from Scratch](http://www.linuxfromscratch.org/) is an interesting read. It doesn't tell you how to create a new target from scratch (which is significantly non trivial) - it assumes you're going to build for an existing known target, but it does show you how you cross compile the essentials and begin building up the system.
Python does not actually compile down to assembly. For a start, the running Python program keeps track of reference counts on objects, something that a CPU won't do for you. However, the concept of instruction-based code is at the heart of Python too. Have a play with this:
```
>>> def hello(x, y, z, q):
... print "Hello, world"
... q()
... return x+y+z
...
>>> import dis
>>> dis.dis(hello)
2 0 LOAD_CONST 1 ('Hello, world')
3 PRINT_ITEM
4 PRINT_NEWLINE
3 5 LOAD_FAST 3 (q)
8 CALL_FUNCTION 0
11 POP_TOP
4 12 LOAD_FAST 0 (x)
15 LOAD_FAST 1 (y)
18 BINARY_ADD
19 LOAD_FAST 2 (z)
22 BINARY_ADD
23 RETURN_VALUE
```
There you can see how Python thinks of the code you entered. This is python bytecode, i.e. the assembly language of python. It effectively has its own "instruction set" if you like for implementing the language. This is the concept of a virtual machine.
Java has exactly the same kind of idea. I took a class function and ran `javap -c class` to get this:
```
invalid.site.ningefingers.main:();
Code:
0: aload_0
1: invokespecial #1; //Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_0
1: istore_1
2: iconst_0
3: istore_1
4: iload_1
5: aload_0
6: arraylength
7: if_icmpge 57
10: getstatic #2;
13: new #3;
16: dup
17: invokespecial #4;
20: ldc #5;
22: invokevirtual #6;
25: iload_1
26: invokevirtual #7;
//.......
}
```
I take it you get the idea. These are the assembly languages of the Python and Java worlds, i.e. how the Python interpreter and Java compiler think, respectively.
Something else that would be worth reading up on is [JonesForth](https://rwmj.wordpress.com/2010/08/07/jonesforth-git-repository/). This is both a working forth interpreter and a tutorial and I can't recommend it enough for thinking about "how things get executed" and how you write a simple, lightweight language.
|
>
> In the interest of performance, I'm sure C compilers are just built up from assembly.
>
>
>
C compilers are, nowadays, (almost?) completely written in C (or higher-level languages - Clang is C++, for instance). Compilers gain little to nothing from including hand-written assembly code. The things that take most time are as slow as they are because they solve very hard problems, where "hard" means "big computational complexity" - rewriting in assembly brings at most a constant speedup, but those don't really matter anymore at that level.
Also, most compilers want high portability, so architecture-specific tricks in the front and middle end are out of the question (and in the backends, they're not desirable either, because they may break cross-compilation).
>
> Say I buy a new cpu with nothing on it. During the first operation I wish to install an OS, which runs C. What runs the C compiler? Is there a miniature C compiler in the BIOS?
>
>
>
When you're installing an OS, there's (usually) no C compiler run. The setup CD is full of readily compiled binaries for that architecture. If there's a C compiler included (as is the case with many Linux distros), that's an already-compiled executable too. And those distros that make you build your own kernel etc. also have at least one executable included - the compiler. That is, of course, unless you have to compile your own kernel on an existing installation of anything with a C compiler.
If by "new CPU" you mean a new architecture that isn't backwards-compatible with anything yet supported, self-hosting compilers can follow the usual porting procedure: first write a backend for the new target, then compile the compiler itself for it, and suddenly you have a mature compiler with a battle-hardened (it compiled a whole compiler) native backend on the new platform.
|
37,959,217
|
I'm using PM2 to run a Python program in the background like so
`pm2 start helloworld.py`
and it works perfectly fine. However, within `helloworld.py` I have several print statements that act as logs. For example, when a network request comes in or if a database value is updated. When I run `helloworld.py` like so:
`python3 helloworld.py`
all these print statements are visible and I can debug my application. However, when running
`pm2 logs helloworld`
none of these print statements show up.
|
2016/06/22
|
[
"https://Stackoverflow.com/questions/37959217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/896112/"
] |
This question is a few months old, so maybe you figured this out a while ago, but it was one of the top google hits when I was having the same problem so I thought I'd add what I found.
Seems like it's an issue with how Python buffers sys.stdout. On some platforms, or in some circumstances, when the script is launched by say pm2 or nohup, the sys.stdout stream may not get flushed until the process exits. Passing the `-u` argument to the Python interpreter stops it from buffering sys.stdout. In the process.json for pm2 I added `"interpreter_args": "-u"` and I'm getting logs normally now.
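An alternative to `-u` that works regardless of how the process is launched is to flush explicitly inside the script itself; a small sketch:

```python
import sys

# Either flush per call (Python 3's print accepts flush=) ...
print("request received", flush=True)

# ... or write to the stream and flush it manually:
sys.stdout.write("database value updated\n")
sys.stdout.flush()
```

Setting the environment variable `PYTHONUNBUFFERED=1` for the process has the same effect as `-u` without touching the pm2 config.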
|
Check the folder `$HOME/.pm2/logs`.
See for example the folder structure section here: <http://pm2.keymetrics.io/docs/usage/quick-start/>
Also consider using a configuration file with an explicit logs folder that is relative to your scripts. (Note this folder must exist before pm2 can use it.) See <http://pm2.keymetrics.io/docs/usage/application-declaration/>
```
{
"apps": [
{
"script": "app/server.js",
"log_date_format": "YYYY-MM-DD HH:mm Z",
"error_file": "logs/server.web.error.log",
"out_file": "logs/server.web.out.log",
...
```
Nice way to follow these log files is to run tail
```
tail -f logs/*.log
```
UPDATE:
To be clear, using a configuration file works for python scripts. Just create a json configuration file that specifies your script and where you want the output to go. For example
```
{
  "apps": [
    {
      "name": "Test Python",
      "script": "test.py",
      "out_file": "test.out.log"
    }
  ]
}
```
Then run it:
```
pm2 start test.json
```
Look for the process id in the results. Use this process id to stop your process and to see where the log file is, e.g. `pm2 show 3`.
|
41,247,600
|
For the following two dataframes:
```
df1 = pd.DataFrame({'name': pd.Series(["A", "B", "C"]), 'value': pd.Series([1., 2., 3.])})
name value
0 A 1.0
1 B 2.0
2 C 3.0
df2 = pd.DataFrame({'name': pd.Series(["A", "C", "D"]), 'value': pd.Series([1., 3., 5.])})
name value
0 A 1.0
1 C 3.0
2 D 5.0
```
I would like to keep only the rows in `df2` where the value in the `name` column overlaps with a value in the `name` column of `df1`, i.e. produce the following dataframe:
```
name value
0 A 1.0
1 C 3.0
```
I have tried a number of approaches, but I am new to Python and pandas and, coming from R, I don't have a feel for the syntax. Why does this line of code not work, and what would?
```
df2[df2["name"] in df1["name"]]
```
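For what it's worth, the line fails because `in` applied to a pandas Series tests membership in the *index*, not the values, and in any case it yields a single boolean rather than the boolean mask that indexing needs. A sketch of the pitfall:

```python
import pandas as pd

df1 = pd.DataFrame({'name': ["A", "B", "C"], 'value': [1., 2., 3.]})

# `in` on a Series checks the index labels, not the values:
print("A" in df1["name"])         # False -- "A" is a value, not an index label
print(0 in df1["name"])           # True  -- 0 is an index label

# Value membership needs .values (or the .isin method shown in the answers):
print("A" in df1["name"].values)  # True
```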
|
2016/12/20
|
[
"https://Stackoverflow.com/questions/41247600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4424484/"
] |
You can use [`isin`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html):
```
print (df2[df2["name"].isin(df1["name"])])
name value
0 A 1.0
1 C 3.0
```
Another faster solution with [`numpy.intersect1d`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.intersect1d.html):
```
val = np.intersect1d(df2["name"], df1["name"])
print (val)
['A' 'C']
print (df2[df2.name.isin(val)])
name value
0 A 1.0
1 C 3.0
```
|
Slightly different method that might be useful on your actual data, you could use an "inner join" (the intersection) a la SQL. More useful if your columns aren't duplicated in both data frames (e.g. merging two different data sets with some common key)
```
df1 = pd.DataFrame({'name': pd.Series(["A", "B", "C"]), 'value': pd.Series([1., 2., 3.])})
df2 = pd.DataFrame({'name': pd.Series(["A", "C", "D"]), 'value': pd.Series([1., 3., 5.])})
# supposedly for the join you should be able to tell join on='<column_name>', 'name' here,
# but wasn't working for me.
df1.set_index('name', inplace=True)
df2.set_index('name', inplace=True)
df1.join(df2, how='inner', rsuffix='_other')
# value value_other
# name
# A 1.0 1.0
# C 3.0 3.0
```
Changing `how` to `outer` would give you the union of the two (`inner`, as above, is the intersection), `left` for just the `df1` rows, `right` for the `df2` rows.
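The index juggling can also be avoided with `DataFrame.merge`, which joins on a column directly; a sketch with the same frames:

```python
import pandas as pd

df1 = pd.DataFrame({'name': ["A", "B", "C"], 'value': [1., 2., 3.]})
df2 = pd.DataFrame({'name': ["A", "C", "D"], 'value': [1., 3., 5.]})

# Inner merge on the shared column; duplicate column names get suffixes.
out = df2.merge(df1, on='name', how='inner', suffixes=('', '_df1'))
print(out)
#   name  value  value_df1
# 0    A    1.0        1.0
# 1    C    3.0        3.0
```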
|