| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
14,241,239
|
Can I have any kind of highlighting using Python 2.7? Say, when my script is clicking on the `submit button`, feeding data into the `text field`, or selecting values from the `drop-down field`, I just want to highlight that element to assure the script runner that the script is doing what he/she wants.
***EDIT***
I am using selenium-webdriver with python to automate some web based work on a third party application.
Thanks
|
2013/01/09
|
[
"https://Stackoverflow.com/questions/14241239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2767755/"
] |
This is something you need to do with JavaScript, not Python.
|
***[NOTE: I'm leaving this answer for historical purposes but readers should note that the original question has changed from concerning itself with Python to concerning itself with Selenium]***
Assuming you're talking about a browser based application being served from a Python back-end server (and it's just a guess since there's *no information* in your post):
If you are constructing a response in your Python back-end, wrap the stuff that you want to highlight in a `<span>` tag and set a `class` on the span tag. Then, in your CSS define that class with whatever highlighting properties you want to use.
However, if you want to accomplish this highlighting in an already-loaded browser page without generating new HTML on the back end and returning it to the browser, then Python (on the server) has no knowledge of, or ability to affect, the web page in the browser. You must accomplish this using JavaScript, or a JavaScript library or framework, in the browser.
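Since the question's edit mentions selenium-webdriver, the usual trick is to inject exactly that kind of JavaScript from Python via `execute_script`. A minimal sketch (the helper name, color, and the commented element lookup are illustrative assumptions, not part of the answers above):

```python
def highlight_script(color="yellow"):
    # JavaScript for Selenium's execute_script; "arguments[0]" is the
    # WebElement passed in alongside the script string.
    return "arguments[0].style.backgroundColor = '%s';" % color

# With a live WebDriver session (sketch only, not runnable here):
# element = driver.find_element_by_id("submit")
# driver.execute_script(highlight_script("orange"), element)
```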
|
26,691,784
|
Example:
```
class Planet(Enum):
    MERCURY = (mass: 3.303e+23, radius: 2.4397e6)

    def __init__(self, mass, radius):
        self.mass = mass      # in kilograms
        self.radius = radius  # in meters
```
Ref: <https://docs.python.org/3/library/enum.html#planet>
Why do I want to do this? If there are a few primitive types (int, bool) in the constructor list, it would be nice to use named arguments.
|
2014/11/01
|
[
"https://Stackoverflow.com/questions/26691784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/257299/"
] |
While you can't use named arguments the way you describe with enums, you can get a similar effect with a [`namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple) mixin:
```
from collections import namedtuple
from enum import Enum

Body = namedtuple("Body", ["mass", "radius"])

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(mass=5.976e+24, radius=3.3972e6)
    # ... etc.
```
... which to my mind is cleaner, since you don't have to write an `__init__` method.
Example use:
```
>>> Planet.MERCURY
<Planet.MERCURY: Body(mass=3.303e+23, radius=2439700.0)>
>>> Planet.EARTH.mass
5.976e+24
>>> Planet.VENUS.radius
6051800.0
```
Note that, as per [the docs](https://docs.python.org/3/library/enum.html#others), "mix-in types must appear before `Enum` itself in the sequence of bases".
|
The accepted answer by @zero-piraeus can be slightly extended to allow default arguments as well. This is very handy when you have a large enum where most entries share the same value for a field.
```
from collections import namedtuple
from enum import Enum

class Body(namedtuple('Body', "mass radius moons")):
    def __new__(cls, mass, radius, moons=0):
        return super().__new__(cls, mass, radius, moons)

    def __getnewargs__(self):
        return (self.mass, self.radius, self.moons)

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)
```
Beware: pickling will not work without the `__getnewargs__`.
```
class Foo:
    def __init__(self):
        self.planet = Planet.EARTH  # pickle error in deepcopy

from copy import deepcopy

f1 = Foo()
f2 = deepcopy(f1)  # pickle error here
```
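As a quick sanity check that the `__getnewargs__` version does survive pickling (the classes are repeated so the snippet runs on its own; this is a sketch, not part of the original answer):

```python
import pickle
from collections import namedtuple
from copy import deepcopy
from enum import Enum

class Body(namedtuple('Body', "mass radius moons")):
    def __new__(cls, mass, radius, moons=0):
        return super().__new__(cls, mass, radius, moons)

    def __getnewargs__(self):
        return (self.mass, self.radius, self.moons)

class Planet(Body, Enum):
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)

# Pickling round-trips to the same enum member singleton
restored = pickle.loads(pickle.dumps(Planet.EARTH))
assert restored is Planet.EARTH

# ... and deepcopy of an object holding a member no longer blows up
class Foo:
    def __init__(self):
        self.planet = Planet.EARTH

assert deepcopy(Foo()).planet is Planet.EARTH
```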
|
26,691,784
|
Example:
```
class Planet(Enum):
    MERCURY = (mass: 3.303e+23, radius: 2.4397e6)

    def __init__(self, mass, radius):
        self.mass = mass      # in kilograms
        self.radius = radius  # in meters
```
Ref: <https://docs.python.org/3/library/enum.html#planet>
Why do I want to do this? If there are a few primitive types (int, bool) in the constructor list, it would be nice to use named arguments.
|
2014/11/01
|
[
"https://Stackoverflow.com/questions/26691784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/257299/"
] |
While you can't use named arguments the way you describe with enums, you can get a similar effect with a [`namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple) mixin:
```
from collections import namedtuple
from enum import Enum

Body = namedtuple("Body", ["mass", "radius"])

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(mass=5.976e+24, radius=3.3972e6)
    # ... etc.
```
... which to my mind is cleaner, since you don't have to write an `__init__` method.
Example use:
```
>>> Planet.MERCURY
<Planet.MERCURY: Body(mass=3.303e+23, radius=2439700.0)>
>>> Planet.EARTH.mass
5.976e+24
>>> Planet.VENUS.radius
6051800.0
```
Note that, as per [the docs](https://docs.python.org/3/library/enum.html#others), "mix-in types must appear before `Enum` itself in the sequence of bases".
|
If you want to go beyond the `namedtuple` mix-in, check out the [`aenum`](https://pypi.python.org/pypi/aenum) library.¹ Besides having a few extra bells and whistles for `Enum`, it also supports `NamedConstant` and a metaclass-based `NamedTuple`.
Using `aenum.Enum` the above code could look like:
```
from aenum import Enum, enum, _reduce_ex_by_name

class Planet(Enum, init='mass radius'):
    MERCURY = enum(mass=3.303e+23, radius=2.4397e6)
    VENUS = enum(mass=4.869e+24, radius=6.0518e6)
    EARTH = enum(mass=5.976e+24, radius=3.3972e6)

    # replace __reduce_ex__ so pickling works
    __reduce_ex__ = _reduce_ex_by_name
```
and in use:
```
--> for p in Planet:
... print(repr(p))
<Planet.MERCURY: enum(radius=2439700.0, mass=3.3030000000000001e+23)>
<Planet.EARTH: enum(radius=3397200.0, mass=5.9760000000000004e+24)>
<Planet.VENUS: enum(radius=6051800.0, mass=4.8690000000000001e+24)>
--> print(Planet.VENUS.mass)
4.869e+24
```
---
¹ Disclosure: I am the author of the [Python stdlib `Enum`](https://docs.python.org/3/library/enum.html), the [`enum34` backport](https://pypi.python.org/pypi/enum34), and the [Advanced Enumeration (`aenum`)](https://pypi.python.org/pypi/aenum) library.
|
26,691,784
|
Example:
```
class Planet(Enum):
    MERCURY = (mass: 3.303e+23, radius: 2.4397e6)

    def __init__(self, mass, radius):
        self.mass = mass      # in kilograms
        self.radius = radius  # in meters
```
Ref: <https://docs.python.org/3/library/enum.html#planet>
Why do I want to do this? If there are a few primitive types (int, bool) in the constructor list, it would be nice to use named arguments.
|
2014/11/01
|
[
"https://Stackoverflow.com/questions/26691784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/257299/"
] |
While you can't use named arguments the way you describe with enums, you can get a similar effect with a [`namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple) mixin:
```
from collections import namedtuple
from enum import Enum

Body = namedtuple("Body", ["mass", "radius"])

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(mass=5.976e+24, radius=3.3972e6)
    # ... etc.
```
... which to my mind is cleaner, since you don't have to write an `__init__` method.
Example use:
```
>>> Planet.MERCURY
<Planet.MERCURY: Body(mass=3.303e+23, radius=2439700.0)>
>>> Planet.EARTH.mass
5.976e+24
>>> Planet.VENUS.radius
6051800.0
```
Note that, as per [the docs](https://docs.python.org/3/library/enum.html#others), "mix-in types must appear before `Enum` itself in the sequence of bases".
|
For Python 3.6.1+, [`typing.NamedTuple`](https://docs.python.org/3.6/library/typing.html#typing.NamedTuple) can be used; it also allows setting default values, which leads to prettier code. The example by @shao.lo then looks like this:
```
from enum import Enum
from typing import NamedTuple

class Body(NamedTuple):
    mass: float
    radius: float
    moons: int = 0

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)
```
This also supports pickling. `typing.Any` can be used if you don't want to specify the type.
Credit to @monk-time, whose answer [here](https://stackoverflow.com/a/43157792/5693369) inspired this solution.
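A quick sanity check of the defaulting behavior (classes repeated from the block above so the snippet runs on its own):

```python
from enum import Enum
from typing import NamedTuple

class Body(NamedTuple):
    mass: float
    radius: float
    moons: int = 0

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)

# The default kicks in wherever moons was omitted
assert Planet.MERCURY.moons == 0
assert Planet.EARTH.moons == 1
```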
|
26,691,784
|
Example:
```
class Planet(Enum):
    MERCURY = (mass: 3.303e+23, radius: 2.4397e6)

    def __init__(self, mass, radius):
        self.mass = mass      # in kilograms
        self.radius = radius  # in meters
```
Ref: <https://docs.python.org/3/library/enum.html#planet>
Why do I want to do this? If there are a few primitive types (int, bool) in the constructor list, it would be nice to use named arguments.
|
2014/11/01
|
[
"https://Stackoverflow.com/questions/26691784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/257299/"
] |
The accepted answer by @zero-piraeus can be slightly extended to allow default arguments as well. This is very handy when you have a large enum where most entries share the same value for a field.
```
from collections import namedtuple
from enum import Enum

class Body(namedtuple('Body', "mass radius moons")):
    def __new__(cls, mass, radius, moons=0):
        return super().__new__(cls, mass, radius, moons)

    def __getnewargs__(self):
        return (self.mass, self.radius, self.moons)

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)
```
Beware: pickling will not work without the `__getnewargs__`.
```
class Foo:
    def __init__(self):
        self.planet = Planet.EARTH  # pickle error in deepcopy

from copy import deepcopy

f1 = Foo()
f2 = deepcopy(f1)  # pickle error here
```
|
If you want to go beyond the `namedtuple` mix-in, check out the [`aenum`](https://pypi.python.org/pypi/aenum) library.¹ Besides having a few extra bells and whistles for `Enum`, it also supports `NamedConstant` and a metaclass-based `NamedTuple`.
Using `aenum.Enum` the above code could look like:
```
from aenum import Enum, enum, _reduce_ex_by_name

class Planet(Enum, init='mass radius'):
    MERCURY = enum(mass=3.303e+23, radius=2.4397e6)
    VENUS = enum(mass=4.869e+24, radius=6.0518e6)
    EARTH = enum(mass=5.976e+24, radius=3.3972e6)

    # replace __reduce_ex__ so pickling works
    __reduce_ex__ = _reduce_ex_by_name
```
and in use:
```
--> for p in Planet:
... print(repr(p))
<Planet.MERCURY: enum(radius=2439700.0, mass=3.3030000000000001e+23)>
<Planet.EARTH: enum(radius=3397200.0, mass=5.9760000000000004e+24)>
<Planet.VENUS: enum(radius=6051800.0, mass=4.8690000000000001e+24)>
--> print(Planet.VENUS.mass)
4.869e+24
```
---
¹ Disclosure: I am the author of the [Python stdlib `Enum`](https://docs.python.org/3/library/enum.html), the [`enum34` backport](https://pypi.python.org/pypi/enum34), and the [Advanced Enumeration (`aenum`)](https://pypi.python.org/pypi/aenum) library.
|
26,691,784
|
Example:
```
class Planet(Enum):
    MERCURY = (mass: 3.303e+23, radius: 2.4397e6)

    def __init__(self, mass, radius):
        self.mass = mass      # in kilograms
        self.radius = radius  # in meters
```
Ref: <https://docs.python.org/3/library/enum.html#planet>
Why do I want to do this? If there are a few primitive types (int, bool) in the constructor list, it would be nice to use named arguments.
|
2014/11/01
|
[
"https://Stackoverflow.com/questions/26691784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/257299/"
] |
The accepted answer by @zero-piraeus can be slightly extended to allow default arguments as well. This is very handy when you have a large enum where most entries share the same value for a field.
```
from collections import namedtuple
from enum import Enum

class Body(namedtuple('Body', "mass radius moons")):
    def __new__(cls, mass, radius, moons=0):
        return super().__new__(cls, mass, radius, moons)

    def __getnewargs__(self):
        return (self.mass, self.radius, self.moons)

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)
```
Beware: pickling will not work without the `__getnewargs__`.
```
class Foo:
    def __init__(self):
        self.planet = Planet.EARTH  # pickle error in deepcopy

from copy import deepcopy

f1 = Foo()
f2 = deepcopy(f1)  # pickle error here
```
|
For Python 3.6.1+, [`typing.NamedTuple`](https://docs.python.org/3.6/library/typing.html#typing.NamedTuple) can be used; it also allows setting default values, which leads to prettier code. The example by @shao.lo then looks like this:
```
from enum import Enum
from typing import NamedTuple

class Body(NamedTuple):
    mass: float
    radius: float
    moons: int = 0

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)
```
This also supports pickling. `typing.Any` can be used if you don't want to specify the type.
Credit to @monk-time, whose answer [here](https://stackoverflow.com/a/43157792/5693369) inspired this solution.
|
26,691,784
|
Example:
```
class Planet(Enum):
    MERCURY = (mass: 3.303e+23, radius: 2.4397e6)

    def __init__(self, mass, radius):
        self.mass = mass      # in kilograms
        self.radius = radius  # in meters
```
Ref: <https://docs.python.org/3/library/enum.html#planet>
Why do I want to do this? If there are a few primitive types (int, bool) in the constructor list, it would be nice to use named arguments.
|
2014/11/01
|
[
"https://Stackoverflow.com/questions/26691784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/257299/"
] |
For Python 3.6.1+, [`typing.NamedTuple`](https://docs.python.org/3.6/library/typing.html#typing.NamedTuple) can be used; it also allows setting default values, which leads to prettier code. The example by @shao.lo then looks like this:
```
from enum import Enum
from typing import NamedTuple

class Body(NamedTuple):
    mass: float
    radius: float
    moons: int = 0

class Planet(Body, Enum):
    MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
    VENUS = Body(mass=4.869e+24, radius=6.0518e6)
    EARTH = Body(5.976e+24, 3.3972e6, moons=1)
```
This also supports pickling. `typing.Any` can be used if you don't want to specify the type.
Credit to @monk-time, whose answer [here](https://stackoverflow.com/a/43157792/5693369) inspired this solution.
|
If you want to go beyond the `namedtuple` mix-in, check out the [`aenum`](https://pypi.python.org/pypi/aenum) library.¹ Besides having a few extra bells and whistles for `Enum`, it also supports `NamedConstant` and a metaclass-based `NamedTuple`.
Using `aenum.Enum` the above code could look like:
```
from aenum import Enum, enum, _reduce_ex_by_name

class Planet(Enum, init='mass radius'):
    MERCURY = enum(mass=3.303e+23, radius=2.4397e6)
    VENUS = enum(mass=4.869e+24, radius=6.0518e6)
    EARTH = enum(mass=5.976e+24, radius=3.3972e6)

    # replace __reduce_ex__ so pickling works
    __reduce_ex__ = _reduce_ex_by_name
```
and in use:
```
--> for p in Planet:
... print(repr(p))
<Planet.MERCURY: enum(radius=2439700.0, mass=3.3030000000000001e+23)>
<Planet.EARTH: enum(radius=3397200.0, mass=5.9760000000000004e+24)>
<Planet.VENUS: enum(radius=6051800.0, mass=4.8690000000000001e+24)>
--> print(Planet.VENUS.mass)
4.869e+24
```
---
¹ Disclosure: I am the author of the [Python stdlib `Enum`](https://docs.python.org/3/library/enum.html), the [`enum34` backport](https://pypi.python.org/pypi/enum34), and the [Advanced Enumeration (`aenum`)](https://pypi.python.org/pypi/aenum) library.
|
74,542,597
|
I have a number of XML files with me, whose format is:
```
<objects>
<object>
<record>
<invoice_source>EMAIL</invoice_source>
<invoice_capture_date>2022-11-18</invoice_capture_date>
<document_type>INVOICE</document_type>
<data_capture_provider_code>00001</data_capture_provider_code>
<data_capture_provider_reference>1264</data_capture_provider_reference>
<document_capture_provide_code>00002</document_capture_provide_code>
<document_capture_provider_ref>1264</document_capture_provider_ref>
<rows/>
</record>
</object>
</objects>
```
There are two nested elements at the top of this XML. I want to remove the `<object>` wrapper so that the XML looks like this:
```
<objects>
<record>
<invoice_source>EMAIL</invoice_source>
<invoice_capture_date>2022-11-18</invoice_capture_date>
<document_type>INVOICE</document_type>
<data_capture_provider_code>00001</data_capture_provider_code>
<data_capture_provider_reference>1264</data_capture_provider_reference>
<document_capture_provide_code>00002</document_capture_provide_code>
<document_capture_provider_ref>1264</document_capture_provider_ref>
<rows/>
</record>
</objects>
```
I have a folder full of these files. I want to do it using Python. Is there any way?
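A minimal sketch of one way to do this with the stdlib `xml.etree.ElementTree`, assuming every file has exactly the `<objects>/<object>/<record>` shape shown (the helper name and folder path are illustrative):

```python
import xml.etree.ElementTree as ET

def unwrap_object(xml_text):
    # Move each <record> up so it hangs directly under <objects>,
    # dropping the intermediate <object> wrapper.
    root = ET.fromstring(xml_text)
    new_root = ET.Element("objects")
    for record in root.iter("record"):
        new_root.append(record)
    return ET.tostring(new_root, encoding="unicode")

# For a whole folder (sketch):
# from pathlib import Path
# for path in Path("xml_folder").glob("*.xml"):
#     path.write_text(unwrap_object(path.read_text()))
```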
|
2022/11/23
|
[
"https://Stackoverflow.com/questions/74542597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20397498/"
] |
So here is what I would do.
Instead of controlling the stamina in multiple places and having back-and-forth references (= dependencies) between all your scripts, I would rather keep this authority within the `PlayerController`.
Your `StaminaBar` component should be purely **listening** and visualizing the current value without having the authority to modify it.
Next step would be to decide for a general code structure
* Who is responsible for what?
* Who knows / controls what?
There are many possible answers to those, but for now, in this specific case:
* You can either say the `PlayerController` "knows" the `StaminaBar`, just like it also knows the `InputManager`, and can't live without both
* Or you could decouple them and let the `PlayerController` work without the visualization via the `StaminaBar`, but rather let the `StaminaBar` listen to the value and just display it ... or not, if you want to remove or change this later on
Personally, I would go with the second, so I will try to give you an example of how I would deal with this:
```
public class PlayerController : MonoBehaviour
{
    [Header("Own References")]
    [SerializeField] private CharacterController _controller;

    [Header("Scene References")]
    [SerializeField] private Transform _cameraTransform;
    [SerializeField] private InputManager _inputManager;

    // In general, always make your stuff as encapsulated as possible
    // -> nobody should be able to change these except you via the Inspector
    // (Values you are anyway not gonna change at all you could also convert to "const")
    [Header("Settings")]
    [SerializeField] private float _maxHealth = 100f;
    [SerializeField] private float _maxStamina = 100f;
    [SerializeField] private float _staminaDrainPerSecond = 2f;
    [SerializeField] private float _secondsDelayBeforeStaminaRegen = 1f;
    [SerializeField] private float _staminaRegenPerSecond = 2f;
    [SerializeField] private float _playerSpeed = 1f;
    [SerializeField] private float _playerRunSpeed = 2f;
    [SerializeField] private float _jumpHeight = 1f;
    [SerializeField] private float _gravityValue = -9.81f;

    // Your runtime values
    private float _staminaRegenDelayTimer;
    private float _currentHealth;
    private float _currentStamina;

    // You only need a single float for this
    private float _currentYVelocity;

    // EVENTS we expose so other classes can react to those
    public UnityEvent OnDeath;
    public UnityEvent<float> OnHealthChanged;
    public UnityEvent<float> OnStaminaChanged;

    // Provide public read-only access to the settings so your visuals can access those for their setup
    public float MaxHealth => _maxHealth;
    public float MaxStamina => _maxStamina;

    // And then use properties for your runtime values:
    // whenever you set the value you do additional stuff like clamping the value and invoking the according events
    public float currentHealth
    {
        get => _currentHealth;
        private set
        {
            _currentHealth = Mathf.Clamp(value, 0, _maxHealth);
            OnHealthChanged.Invoke(_currentHealth);

            if (value <= 0f)
            {
                OnDeath.Invoke();
            }
        }
    }

    public float currentStamina
    {
        get => _currentStamina;
        private set
        {
            _currentStamina = Mathf.Clamp(value, 0, _maxStamina);
            OnStaminaChanged.Invoke(_currentStamina);
        }
    }

    private void Awake()
    {
        // As a rule of thumb, to avoid issues with order I usually initialize everything I can in Awake
        if (!_controller) _controller = GetComponent<CharacterController>();

        currentHealth = MaxHealth;
        currentStamina = MaxStamina;
    }

    private void Start()
    {
        // in Start do the things where you depend on others already being initialized
        if (!_inputManager) _inputManager = InputManager.Instance;
        if (!_cameraTransform) _cameraTransform = Camera.main.transform;
    }

    private void Update()
    {
        UpdateStamina();
        UpdateHorizontalMovement();
        UpdateVerticalMovement();
    }

    private void UpdateStamina()
    {
        if (_inputManager.IsRunning)
        {
            // drain your stamina -> also informs all listeners
            currentStamina -= _staminaDrainPerSecond * Time.deltaTime;

            // reset the regen timer
            _staminaRegenDelayTimer = _secondsDelayBeforeStaminaRegen;
        }
        else
        {
            // only if not pressing run start the regen timer
            if (_staminaRegenDelayTimer > 0)
            {
                _staminaRegenDelayTimer -= Time.deltaTime;
            }
            else
            {
                // once the timer is finished start regen
                currentStamina += _staminaRegenPerSecond * Time.deltaTime;
            }
        }
    }

    private void UpdateHorizontalMovement()
    {
        var movement = _inputManager.PlayerMovement;
        var move = _cameraTransform.forward * movement.y + _cameraTransform.right * movement.x;
        move.y = 0f;
        move *= _inputManager.IsRunning && currentStamina > 0 ? _playerRunSpeed : _playerSpeed;
        _controller.Move(move * Time.deltaTime);
    }

    private void UpdateVerticalMovement()
    {
        if (_controller.isGrounded)
        {
            if (_inputManager.JumpedThisFrame)
            {
                _currentYVelocity += Mathf.Sqrt(_jumpHeight * -3.0f * _gravityValue);
            }
            else if (_currentYVelocity < 0)
            {
                _currentYVelocity = 0f;
            }
        }
        else
        {
            _currentYVelocity += _gravityValue * Time.deltaTime;
        }

        _controller.Move(Vector3.up * _currentYVelocity * Time.deltaTime);
    }
}
```
And then your `StaminaBar` shrinks down to really only being a display. The `PlayerController` doesn't care about / even know it exists and can fully work without it.
```
public class StaminaBar : MonoBehaviour
{
    [SerializeField] private Slider _staminaSlider;
    [SerializeField] private PlayerController _playerController;

    private void Awake()
    {
        // or wherever you get the reference from
        if (!_playerController) _playerController = FindObjectOfType<PlayerController>();

        // poll the setting from the player
        _staminaSlider.maxValue = _playerController.MaxStamina;

        // attach a callback to the event
        _playerController.OnStaminaChanged.AddListener(OnStaminaChanged);

        // just to be sure invoke the callback once immediately with the current value
        // so we don't have to wait for the first actual event invocation
        OnStaminaChanged(_playerController.currentStamina);
    }

    private void OnDestroy()
    {
        if (_playerController) _playerController.OnStaminaChanged.RemoveListener(OnStaminaChanged);
    }

    // This will now be called whenever the stamina has changed
    private void OnStaminaChanged(float stamina)
    {
        _staminaSlider.value = stamina;
    }
}
```
And just for completeness - I also refactored your `InputManager` a bit on the fly ^^
```
public class InputManager : MonoBehaviour
{
    [Header("Own references")]
    [SerializeField] private Transform _bulletParent;
    [SerializeField] private Transform _barrelTransform;

    [Header("Scene references")]
    [SerializeField] private Transform _cameraTransform;

    // By using the correct component right away you can later skip "GetComponent"
    [Header("Assets")]
    [SerializeField] private BulletController _bulletPrefab;

    [Header("Settings")]
    [SerializeField] private float _bulletHitMissDistance = 25f;
    [SerializeField] private float _damage = 100;
    [SerializeField] private float _impactForce = 30;
    [SerializeField] private float _fireRate = 8f;

    public static InputManager Instance { get; private set; }

    // Again I would use properties here
    // You don't want anything else to set the "IsRunning" flag
    // And the others don't need to be methods either
    public bool IsRunning { get; private set; }
    public Vector2 PlayerMovement => _playerControls.Player.Movement.ReadValue<Vector2>();
    public Vector2 MouseDelta => _playerControls.Player.Look.ReadValue<Vector2>();
    public bool JumpedThisFrame => _playerControls.Player.Jump.triggered;

    private Coroutine _fireCoroutine;
    private PlayerControls _playerControls;
    private WaitForSeconds _rapidFireWait;

    private void Awake()
    {
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject);
        }
        else
        {
            Instance = this;
        }

        _playerControls = new PlayerControls();
        //Cursor.visible = false;
        _rapidFireWait = new WaitForSeconds(1 / _fireRate);
        _cameraTransform = Camera.main.transform;

        _playerControls.Player.RunStart.performed += _ => Running();
        _playerControls.Player.RunEnd.performed += _ => RunningStop();
        _playerControls.Player.Shoot.started += _ => StartFiring();
        _playerControls.Player.Shoot.canceled += _ => StopFiring();
    }

    private void OnEnable()
    {
        _playerControls.Enable();
    }

    private void OnDisable()
    {
        _playerControls.Disable();
    }

    private void StartFiring()
    {
        _fireCoroutine = StartCoroutine(RapidFire());
    }

    private void StopFiring()
    {
        if (_fireCoroutine != null)
        {
            StopCoroutine(_fireCoroutine);
            _fireCoroutine = null;
        }
    }

    private void Shooting()
    {
        var bulletController = Instantiate(_bulletPrefab, _barrelTransform.position, Quaternion.identity, _bulletParent);

        if (Physics.Raycast(_cameraTransform.position, _cameraTransform.forward, out var hit, Mathf.Infinity))
        {
            bulletController.target = hit.point;
            bulletController.hit = true;

            if (hit.transform.TryGetComponent<Enemy>(out var enemy))
            {
                enemy.TakeDamage(_damage);
            }

            if (hit.rigidbody != null)
            {
                hit.rigidbody.AddForce(-hit.normal * _impactForce);
            }
        }
        else
        {
            bulletController.target = _cameraTransform.position + _cameraTransform.forward * _bulletHitMissDistance;
            bulletController.hit = false;
        }
    }

    private IEnumerator RapidFire()
    {
        while (true)
        {
            Shooting();
            yield return _rapidFireWait;
        }
    }

    private void Running()
    {
        IsRunning = true;
    }

    private void RunningStop()
    {
        IsRunning = false;
    }
}
```
|
You're decreasing and increasing the stamina in the same scope. I think you should let the stamina drain while sprint is pressed and start regenerating only once it is released.
|
59,573,454
|
I am trying to find a simple way to calculate soft cosine similarity between two sentences.
Here is my attempt and what I have learned so far:
```
from gensim.matutils import softcossim
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
print(softcossim(sent_1, sent_2, similarity_matrix))
```
I'm unable to understand what `similarity_matrix` is. Please help me build it, and from there compute the soft cosine similarity in Python.
|
2020/01/03
|
[
"https://Stackoverflow.com/questions/59573454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4763959/"
] |
As of the current version of Gensim, 3.8.3, some of the method calls from both the question and previous answers have been deprecated. Those deprecated functions have been removed from the 4.0.0 beta. I can't seem to provide code in a reply to @EliadL, so I'm adding a new answer.
The current method for solving this problem in Gensim 3.8.3 and 4.0.0 is as follows:
```py
import gensim.downloader as api
from gensim import corpora
from gensim.similarities import SparseTermSimilarityMatrix, WordEmbeddingSimilarityIndex

sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()

# Download the FastText model
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')

# Prepare a dictionary and a corpus.
documents = [sent_1, sent_2]
dictionary = corpora.Dictionary(documents)

# Prepare the similarity matrix
similarity_index = WordEmbeddingSimilarityIndex(fasttext_model300)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary)

# Convert the sentences into bag-of-words vectors.
sent_1 = dictionary.doc2bow(sent_1)
sent_2 = dictionary.doc2bow(sent_2)

# Compute soft cosine similarity
print(similarity_matrix.inner_product(sent_1, sent_2, normalized=True))
#> 0.68463486
```
For users of Gensim v. 3.8.3, I've also found this [Notebook](https://github.com/RaRe-Technologies/gensim/blob/release-3.8.3/docs/notebooks/soft_cosine_tutorial.ipynb) helpful in understanding soft cosine similarity and how to apply it using Gensim.
As of now, for users of the Gensim 4.0.0 beta, this [Notebook](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/soft_cosine_tutorial.ipynb) is the one to look at.
|
Going by [this tutorial](https://www.machinelearningplus.com/nlp/gensim-tutorial/#18howtocomputesimilaritymetricslikecosinesimilarityandsoftcosinesimilarity):
```
import gensim.downloader as api
from gensim import corpora
from gensim.matutils import softcossim
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
# Download the FastText model
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
# Prepare a dictionary and a corpus.
documents = [sent_1, sent_2]
dictionary = corpora.Dictionary(documents)
# Prepare the similarity matrix
similarity_matrix = fasttext_model300.similarity_matrix(dictionary)
# Convert the sentences into bag-of-words vectors.
sent_1 = dictionary.doc2bow(sent_1)
sent_2 = dictionary.doc2bow(sent_2)
# Compute soft cosine similarity
print(softcossim(sent_1, sent_2, similarity_matrix))
#> 0.7909639717134869
```
|
59,573,454
|
I am trying to find a simple way to calculate soft cosine similarity between two sentences.
Here is my attempt and what I have learned so far:
```
from gensim.matutils import softcossim
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
print(softcossim(sent_1, sent_2, similarity_matrix))
```
I'm unable to understand what `similarity_matrix` is. Please help me build it, and from there compute the soft cosine similarity in Python.
|
2020/01/03
|
[
"https://Stackoverflow.com/questions/59573454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4763959/"
] |
Going by [this tutorial](https://www.machinelearningplus.com/nlp/gensim-tutorial/#18howtocomputesimilaritymetricslikecosinesimilarityandsoftcosinesimilarity):
```
import gensim.downloader as api
from gensim import corpora
from gensim.matutils import softcossim
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
# Download the FastText model
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
# Prepare a dictionary and a corpus.
documents = [sent_1, sent_2]
dictionary = corpora.Dictionary(documents)
# Prepare the similarity matrix
similarity_matrix = fasttext_model300.similarity_matrix(dictionary)
# Convert the sentences into bag-of-words vectors.
sent_1 = dictionary.doc2bow(sent_1)
sent_2 = dictionary.doc2bow(sent_2)
# Compute soft cosine similarity
print(softcossim(sent_1, sent_2, similarity_matrix))
#> 0.7909639717134869
```
|
You can use the `SoftCosineSimilarity` class from `gensim.similarities` in Gensim 4.0.0 upwards (here `dictionary` and `similarity_matrix` are assumed to have been built as in the other answers):
```
from gensim.similarities import SoftCosineSimilarity

# Calculate Soft Cosine Similarity between the query and the documents.
def find_similarity(query, documents):
    query = dictionary.doc2bow(query)
    index = SoftCosineSimilarity(
        [dictionary.doc2bow(document) for document in documents],
        similarity_matrix)
    similarities = index[query]
    return similarities
```
|
59,573,454
|
I am trying to find a simple way to calculate soft cosine similarity between two sentences.
Here is my attempt and what I have learned so far:
```
from gensim.matutils import softcossim
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
print(softcossim(sent_1, sent_2, similarity_matrix))
```
I'm unable to understand what `similarity_matrix` is. Please help me figure that out, and from there how to compute the soft cosine similarity in Python.
|
2020/01/03
|
[
"https://Stackoverflow.com/questions/59573454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4763959/"
] |
As of the current version of Gensim, 3.8.3, some of the method calls from both the question and the previous answers have been deprecated, and the deprecated functions have been removed in the 4.0.0 beta. Since I can't seem to include code in a reply to @EliadL, I'm adding a new answer.
The current method for solving this problem in Gensim 3.8.3 and 4.0.0 is as follows:
```py
import gensim.downloader as api
from gensim import corpora
from gensim.similarities import WordEmbeddingSimilarityIndex, SparseTermSimilarityMatrix
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
# Download the FastText model
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
# Prepare a dictionary and a corpus.
documents = [sent_1, sent_2]
dictionary = corpora.Dictionary(documents)
# Prepare the similarity matrix
similarity_index = WordEmbeddingSimilarityIndex(fasttext_model300)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary)
# Convert the sentences into bag-of-words vectors.
sent_1 = dictionary.doc2bow(sent_1)
sent_2 = dictionary.doc2bow(sent_2)
# Compute soft cosine similarity
print(similarity_matrix.inner_product(sent_1, sent_2, normalized=True))
#> 0.68463486
```
For users of Gensim v. 3.8.3, I've also found this [Notebook](https://github.com/RaRe-Technologies/gensim/blob/release-3.8.3/docs/notebooks/soft_cosine_tutorial.ipynb) to be helpful in understanding Soft Cosine Similarity and how to apply Soft Cosine Similarity using Gensim.
As of now, for users of Gensim 4.0.0 beta this [Notebook](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/soft_cosine_tutorial.ipynb) is the one to look at.
|
You can use the `SoftCosineSimilarity` class from `gensim.similarities` in gensim 4.0.0 and later:
```
from gensim.similarities import SoftCosineSimilarity
#Calculate Soft Cosine Similarity between the query and the documents.
def find_similarity(query,documents):
query = dictionary.doc2bow(query)
index = SoftCosineSimilarity(
[dictionary.doc2bow(document) for document in documents],
similarity_matrix)
similarities = index[query]
return similarities
```
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
```
class A:
pass
a = A()
str(a.__class__)
```
The sample code above (when input in the interactive interpreter) will produce `'__main__.A'` as opposed to `'A'` which is produced if the `__name__` attribute is invoked. By simply passing the result of `A.__class__` to the `str` constructor the parsing is handled for you. However, you could also use the following code if you want something more explicit.
```
"{0}.{1}".format(a.__class__.__module__,a.__class__.__name__)
```
This behavior can be preferable if you have classes with the same name defined in separate modules.
**The sample code provided above was tested in Python 2.7.5.**
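A small helper along these lines (a sketch in Python 3 syntax; the function name `full_class_name` is my own) returns the module-qualified name for any instance:

```python
def full_class_name(obj):
    """Return 'module.ClassName' for obj's class."""
    cls = type(obj)
    return "{0}.{1}".format(cls.__module__, cls.__name__)

class A:
    pass

print(full_class_name(A()))  # '__main__.A' when A is defined at script top level
print(full_class_name(3.5))  # 'builtins.float' on Python 3
```

The module prefix is what disambiguates same-named classes defined in different modules.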
|
In Python 2,
```
type(instance).__name__ != instance.__class__.__name__
# if class A is defined like
class A():
...
type(instance) == instance.__class__
# if class A is defined like
class A(object):
...
```
Example:
```
>>> class aclass(object):
... pass
...
>>> a = aclass()
>>> type(a)
<class '__main__.aclass'>
>>> a.__class__
<class '__main__.aclass'>
>>>
>>> type(a).__name__
'aclass'
>>>
>>> a.__class__.__name__
'aclass'
>>>
>>> class bclass():
... pass
...
>>> b = bclass()
>>>
>>> type(b)
<type 'instance'>
>>> b.__class__
<class __main__.bclass at 0xb765047c>
>>> type(b).__name__
'instance'
>>>
>>> b.__class__.__name__
'bclass'
>>>
```
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
[`type()`](https://docs.python.org/3/library/functions.html#type) ?
```
>>> class A:
... def whoami(self):
... print(type(self).__name__)
...
>>>
>>> class B(A):
... pass
...
>>>
>>>
>>> o = B()
>>> o.whoami()
B
>>>
```
|
Good question.
Here's a simple example based on GHZ's which might help someone:
```
>>> class person(object):
        def __init__(self, name):
            self.name = name
        def info(self):
            print "My name is {0}, I am a {1}".format(self.name, self.__class__.__name__)
>>> bob = person(name='Robert')
>>> bob.info()
My name is Robert, I am a person
```
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
```
class A:
pass
a = A()
str(a.__class__)
```
The sample code above (when input in the interactive interpreter) will produce `'__main__.A'` as opposed to `'A'` which is produced if the `__name__` attribute is invoked. By simply passing the result of `A.__class__` to the `str` constructor the parsing is handled for you. However, you could also use the following code if you want something more explicit.
```
"{0}.{1}".format(a.__class__.__module__,a.__class__.__name__)
```
This behavior can be preferable if you have classes with the same name defined in separate modules.
**The sample code provided above was tested in Python 2.7.5.**
|
To get instance classname:
```py
type(instance).__name__
```
or
```py
instance.__class__.__name__
```
both are the same
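A quick self-contained check (sketch) that the two spellings do agree for a new-style class:

```python
class Spam:
    pass

s = Spam()
print(type(s).__name__)      # Spam
print(s.__class__.__name__)  # Spam
assert type(s).__name__ == s.__class__.__name__
```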
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
In Python 2,
```
type(instance).__name__ != instance.__class__.__name__
# if class A is defined like
class A():
...
type(instance) == instance.__class__
# if class A is defined like
class A(object):
...
```
Example:
```
>>> class aclass(object):
... pass
...
>>> a = aclass()
>>> type(a)
<class '__main__.aclass'>
>>> a.__class__
<class '__main__.aclass'>
>>>
>>> type(a).__name__
'aclass'
>>>
>>> a.__class__.__name__
'aclass'
>>>
>>> class bclass():
... pass
...
>>> b = bclass()
>>>
>>> type(b)
<type 'instance'>
>>> b.__class__
<class __main__.bclass at 0xb765047c>
>>> type(b).__name__
'instance'
>>>
>>> b.__class__.__name__
'bclass'
>>>
```
|
Apart from grabbing the special [`__name__`](https://docs.python.org/3/library/stdtypes.html#definition.__name__) attribute, you might find yourself in need of the [qualified name](https://www.python.org/dev/peps/pep-3155/) for a given class/function. This is done by grabbing the type's `__qualname__`.
In most cases, these will be exactly the same, but, when dealing with nested classes/methods these differ in the output you get. For example:
```
class Spam:
def meth(self):
pass
class Bar:
pass
>>> s = Spam()
>>> type(s).__name__
'Spam'
>>> type(s).__qualname__
'Spam'
>>> type(s).Bar.__name__ # type not needed here
'Bar'
>>> type(s).Bar.__qualname__ # type not needed here
'Spam.Bar'
>>> type(s).meth.__name__
'meth'
>>> type(s).meth.__qualname__
'Spam.meth'
```
Since introspection is what you're after, this is something you might want to consider.
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
Good question.
Here's a simple example based on GHZ's which might help someone:
```
>>> class person(object):
        def __init__(self, name):
            self.name = name
        def info(self):
            print "My name is {0}, I am a {1}".format(self.name, self.__class__.__name__)
>>> bob = person(name='Robert')
>>> bob.info()
My name is Robert, I am a person
```
|
To get instance classname:
```py
type(instance).__name__
```
or
```py
instance.__class__.__name__
```
both are the same
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
To get instance classname:
```py
type(instance).__name__
```
or
```py
instance.__class__.__name__
```
both are the same
|
You can first use `type` and then `str` to extract the class name from it.
```py
class foo: pass
bar: foo = foo()
print(str(type(bar))[8:-2][len(str(type(bar).__module__)) + 1:])
```
Result
------
```
foo
```
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
```
class A:
pass
a = A()
str(a.__class__)
```
The sample code above (when input in the interactive interpreter) will produce `'__main__.A'` as opposed to `'A'` which is produced if the `__name__` attribute is invoked. By simply passing the result of `A.__class__` to the `str` constructor the parsing is handled for you. However, you could also use the following code if you want something more explicit.
```
"{0}.{1}".format(a.__class__.__module__,a.__class__.__name__)
```
This behavior can be preferable if you have classes with the same name defined in separate modules.
**The sample code provided above was tested in Python 2.7.5.**
|
Alternatively you can use the `classmethod` decorator:
```python
class A:
@classmethod
def get_classname(cls):
return cls.__name__
def use_classname(self):
return self.get_classname()
```
**Usage**:
```python
>>> A.get_classname()
'A'
>>> a = A()
>>> a.get_classname()
'A'
>>> a.use_classname()
'A'
```
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
Have you tried the [`__name__` attribute](https://docs.python.org/library/stdtypes.html#definition.__name__) of the class? ie `type(x).__name__` will give you the name of the class, which I think is what you want.
```
>>> import itertools
>>> x = itertools.count(0)
>>> type(x).__name__
'count'
```
If you're still using Python 2, note that the above method works with [new-style classes](https://wiki.python.org/moin/NewClassVsClassicClass) only (in Python 3+ all classes are "new-style" classes). Your code might use some old-style classes. The following works for both:
```
x.__class__.__name__
```
|
Alternatively you can use the `classmethod` decorator:
```python
class A:
@classmethod
def get_classname(cls):
return cls.__name__
def use_classname(self):
return self.get_classname()
```
**Usage**:
```python
>>> A.get_classname()
'A'
>>> a = A()
>>> a.get_classname()
'A'
>>> a.use_classname()
'A'
```
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
Do you want the name of the class as a string?
```
instance.__class__.__name__
```
|
You can simply use `__qualname__` which stands for qualified name of a function or class
Example:
```
>>> class C:
... class D:
... def meth(self):
... pass
...
>>> C.__qualname__
'C'
>>> C.D.__qualname__
'C.D'
>>> C.D.meth.__qualname__
'C.D.meth'
```
documentation link [**qualname**](https://docs.python.org/3/glossary.html#term-qualified-name)
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
Have you tried the [`__name__` attribute](https://docs.python.org/library/stdtypes.html#definition.__name__) of the class? ie `type(x).__name__` will give you the name of the class, which I think is what you want.
```
>>> import itertools
>>> x = itertools.count(0)
>>> type(x).__name__
'count'
```
If you're still using Python 2, note that the above method works with [new-style classes](https://wiki.python.org/moin/NewClassVsClassicClass) only (in Python 3+ all classes are "new-style" classes). Your code might use some old-style classes. The following works for both:
```
x.__class__.__name__
```
|
To get instance classname:
```py
type(instance).__name__
```
or
```py
instance.__class__.__name__
```
both are the same
|
70,014,480
|
I've been hosting my static site via an google app engine standard python setup for years without a problem. Today I started seeing the error below. Note: there used to be a page on GCP explaining how to host a static page using python GAE standard, but I can't find it now. Is it maybe the case where now it's recommended to use a bucket instead?
```
gunicorn.errors.HaltServer
Traceback (most recent call last): File "/layers/google.python.pip/pip/bin/gunicorn", line 8, in sys.exit(run()) File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/app/base.py", line 228, in run super().run() File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/app/base.py", line 72, in run Arbiter(self).run() File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 229, in run self.halt(reason=inst.reason, exit_status=inst.exit_status) File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 342, in halt self.stop() File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 393, in stop time.sleep(0.1) File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 242, in handle_chld self.reap_workers() File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 525, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer:
```
Here's my app.yaml file:
```
runtime: python38
service: webapp
handlers:
# site root -> app
- url: /
static_files: dist/index.html
upload: dist/index.html
expiration: "0m"
secure: always
# urls with no dot in them -> app
- url: /([^.]+?)\/?$ # urls
static_files: dist/index.html
upload: dist/index.html
expiration: "0m"
secure: always
# everything else
- url: /(.*)
static_files: dist/\1
upload: dist/(.*)
expiration: "0m"
secure: always
```
|
2021/11/18
|
[
"https://Stackoverflow.com/questions/70014480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10458445/"
] |
This error only happened on Nov 17th and has not happened since, without any changes on my part. Perhaps it was related to something under the hood on Google App Engine's servers.
|
Note that you are using Python 3.8 as per your `app.yaml` file, and the document you have shared is for Python 2.7. As Python 2 is no longer supported, migrating from Python 2 to Python 3 runtime will help you remove the error.
The documentation [here](https://cloud.google.com/appengine/docs/standard/python/migrate-to-python3) will help you to migrate to Python 3 standard runtime.
|
50,996,060
|
I'm trying to use ruamel.yaml to modify an AWS CloudFormation template on the fly using python. I added the following code to make the safe\_load working with CloudFormation functions such as `!Ref`. However, when I dump them out, those values with !Ref (or any other functions) will be wrapped by quotes. CloudFormation is not able to identify that.
See example below:
```
import sys, json, io, boto3
import ruamel.yaml
def funcparse(loader, node):
node.value = {
ruamel.yaml.ScalarNode: loader.construct_scalar,
ruamel.yaml.SequenceNode: loader.construct_sequence,
ruamel.yaml.MappingNode: loader.construct_mapping,
}[type(node)](node)
node.tag = node.tag.replace(u'!Ref', 'Ref').replace(u'!', u'Fn::')
return dict([ (node.tag, node.value) ])
funcnames = [ 'Ref', 'Base64', 'FindInMap', 'GetAtt', 'GetAZs', 'ImportValue',
'Join', 'Select', 'Split', 'Split', 'Sub', 'And', 'Equals', 'If',
'Not', 'Or' ]
for func in funcnames:
ruamel.yaml.SafeLoader.add_constructor(u'!' + func, funcparse)
txt = open("/space/tmp/a.template","r")
base = ruamel.yaml.safe_load(txt)
base["foo"] = {
"name": "abc",
"Resources": {
"RouteTableId" : "!Ref aaa",
"VpcPeeringConnectionId" : "!Ref bbb",
"yourname": "dfw"
}
}
ruamel.yaml.safe_dump(
base,
sys.stdout,
default_flow_style=False
)
```
The input file is like this:
```
foo:
bar: !Ref barr
aa: !Ref bb
```
The output is like this:
```
foo:
Resources:
RouteTableId: '!Ref aaa'
VpcPeeringConnectionId: '!Ref bbb'
yourname: dfw
name: abc
```
Notice the '!Ref VpcRouteTable' is been wrapped by single quotes. This won't be identified by CloudFormation. Is there a way to configure dumper so that the output will be like:
```
foo:
Resources:
RouteTableId: !Ref aaa
VpcPeeringConnectionId: !Ref bbb
yourname: dfw
name: abc
```
Other things I have tried:
* the pyyaml library, which behaves the same way
* using `Ref::` instead of `!Ref`, which also behaves the same way
|
2018/06/22
|
[
"https://Stackoverflow.com/questions/50996060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3209177/"
] |
Essentially you tweak the loader, to load tagged (scalar) objects as if they were mappings, with the tag the key and the value the scalar. But you don't do anything to distinguish the `dict` loaded from such a mapping from other dicts loaded from normal mappings, nor do you have any specific code to represent such a mapping to "get the tag back".
When you try to "create" a scalar with a tag, you just make a string starting with an exclamation mark, and that needs to get dumped quoted to distinguish it from **real** tagged nodes.
What obfuscates this all is that your example overwrites the loaded data by assigning to `base["foo"]`, so the only thing you can derive from the `safe_load`, and all your code before that, is that it doesn't throw an exception. I.e. if you leave out the lines starting with `base["foo"] = {` your output will look like:
```
foo:
aa:
Ref: bb
bar:
Ref: barr
```
And in that `Ref: bb` is not distinguishable from a normal dumped dict. If you want to explore this route, then you should make a subclass `TagDict(dict)`, and have `funcparse` return that subclass, *and also add a `representer` for that subclass that re-creates the tag from the key and then dumps the value*. Once that works (round-trip equals input), you can do:
```
"RouteTableId" : TagDict('Ref', 'aaa')
```
If you do that, you should, apart from removing non-used libraries, also change your code to close the file-pointer `txt` in your code, as that can lead to problems. You can do this elegantly be using the `with` statement:
```
with open("/space/tmp/a.template","r") as txt:
base = ruamel.yaml.safe_load(txt)
```
(I also would leave out the `"r"` (or put a space before it); and replace `txt` with a more appropriate variable name that indicates this is an (input) file pointer).
You also have the entry `'Split'` twice in your `funcnames`, which is superfluous.
---
A more generic solution can be achieved by using a `multi-constructor` that matches any tag and having three basic types to cover scalars, mappings and sequences.
```
import sys
import ruamel.yaml
yaml_str = """\
foo:
scalar: !Ref barr
mapping: !Select
a: !Ref 1
b: !Base64 A413
sequence: !Split
- !Ref baz
- !Split Multi word scalar
"""
class Generic:
def __init__(self, tag, value, style=None):
self._value = value
self._tag = tag
self._style = style
class GenericScalar(Generic):
@classmethod
def to_yaml(self, representer, node):
return representer.represent_scalar(node._tag, node._value)
@staticmethod
def construct(constructor, node):
return constructor.construct_scalar(node)
class GenericMapping(Generic):
@classmethod
def to_yaml(self, representer, node):
return representer.represent_mapping(node._tag, node._value)
@staticmethod
def construct(constructor, node):
return constructor.construct_mapping(node, deep=True)
class GenericSequence(Generic):
@classmethod
def to_yaml(self, representer, node):
return representer.represent_sequence(node._tag, node._value)
@staticmethod
def construct(constructor, node):
return constructor.construct_sequence(node, deep=True)
def default_constructor(constructor, tag_suffix, node):
generic = {
ruamel.yaml.ScalarNode: GenericScalar,
ruamel.yaml.MappingNode: GenericMapping,
ruamel.yaml.SequenceNode: GenericSequence,
}.get(type(node))
if generic is None:
raise NotImplementedError('Node: ' + str(type(node)))
style = getattr(node, 'style', None)
instance = generic.__new__(generic)
yield instance
state = generic.construct(constructor, node)
instance.__init__(tag_suffix, state, style=style)
ruamel.yaml.add_multi_constructor('', default_constructor, Loader=ruamel.yaml.SafeLoader)
yaml = ruamel.yaml.YAML(typ='safe', pure=True)
yaml.default_flow_style = False
yaml.register_class(GenericScalar)
yaml.register_class(GenericMapping)
yaml.register_class(GenericSequence)
base = yaml.load(yaml_str)
base['bar'] = {
'name': 'abc',
'Resources': {
'RouteTableId' : GenericScalar('!Ref', 'aaa'),
'VpcPeeringConnectionId' : GenericScalar('!Ref', 'bbb'),
'yourname': 'dfw',
's' : GenericSequence('!Split', ['a', GenericScalar('!Not', 'b'), 'c']),
}
}
yaml.dump(base, sys.stdout)
```
which outputs:
```
bar:
Resources:
RouteTableId: !Ref aaa
VpcPeeringConnectionId: !Ref bbb
s: !Split
- a
- !Not b
- c
yourname: dfw
name: abc
foo:
mapping: !Select
a: !Ref 1
b: !Base64 A413
scalar: !Ref barr
sequence: !Split
- !Ref baz
- !Split Multi word scalar
```
Please note that sequences and mappings are handled correctly and that they can be created as well. There is however no check that:
* the tag you provide is actually valid
* the value associated with the tag is of the proper type for that tag name (scalar, mapping, sequence)
* if you want `GenericMapping` to behave more like `dict`, then you probably want it a subclass of `dict` (and not of `Generic`) and provide the appropriate `__init__` (idem for `GenericSequence`/`list`)
When the assignment is changed to something more close to yours:
```
base["foo"] = {
"name": "abc",
"Resources": {
"RouteTableId" : GenericScalar('!Ref', 'aaa'),
"VpcPeeringConnectionId" : GenericScalar('!Ref', 'bbb'),
"yourname": "dfw"
}
}
```
the output is:
```
foo:
Resources:
RouteTableId: !Ref aaa
VpcPeeringConnectionId: !Ref bbb
yourname: dfw
name: abc
```
which is exactly the output you want.
|
Apart from Anthon's detailed answer above, I found another quick and simple workaround for this specific question about CloudFormation templates.
Still using the constructor snippet to load the YAML.
```
def funcparse(loader, node):
node.value = {
ruamel.yaml.ScalarNode: loader.construct_scalar,
ruamel.yaml.SequenceNode: loader.construct_sequence,
ruamel.yaml.MappingNode: loader.construct_mapping,
}[type(node)](node)
node.tag = node.tag.replace(u'!Ref', 'Ref').replace(u'!', u'Fn::')
return dict([ (node.tag, node.value) ])
funcnames = [ 'Ref', 'Base64', 'FindInMap', 'GetAtt', 'GetAZs', 'ImportValue',
'Join', 'Select', 'Split', 'Split', 'Sub', 'And', 'Equals', 'If',
'Not', 'Or' ]
for func in funcnames:
ruamel.yaml.SafeLoader.add_constructor(u'!' + func, funcparse)
```
When we manipulate the data, instead of doing
```
base["foo"] = {
"name": "abc",
"Resources": {
"RouteTableId" : "!Ref aaa",
"VpcPeeringConnectionId" : "!Ref bbb",
"yourname": "dfw"
}
}
```
which will wrap the value `!Ref aaa` with quotes, we can simply do:
```
base["foo"] = {
"name": "abc",
"Resources": {
"RouteTableId" : {
"Ref" : "aaa"
},
"VpcPeeringConnectionId" : {
"Ref" : "bbb"
},
"yourname": "dfw"
}
}
```
Similarly, for other functions in CloudFormation, such as !GetAtt, we should use their long form `Fn::GetAtt` and use them as the key of a JSON object. Problem solved easily.
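For instance, a hypothetical resource fragment built this way (the logical IDs `MyDB` and `MySubnet` are illustrative) serializes with plain `Ref:` / `Fn::GetAtt:` keys, which CloudFormation accepts as the long form:

```python
# Long-form intrinsic functions expressed as ordinary mappings:
resource = {
    "Endpoint": {"Fn::GetAtt": ["MyDB", "Endpoint.Address"]},  # long form of !GetAtt
    "SubnetId": {"Ref": "MySubnet"},                           # long form of !Ref
}
print(resource["SubnetId"])  # {'Ref': 'MySubnet'}
```

Because these are plain dicts, `safe_dump` emits them without any quoting or tags.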
|
62,585,490
|
TF 2.3.0.dev20200620
I got this error during .fit(...) for a model with a sigmoid binary output. I used tf.data.Dataset as the input pipeline.
The strange thing is it depends on the metric:
Don't work:
```
model.compile(
optimizer=tf.keras.optimizers.Adam(lr=1e-4, decay=1e-6),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=['accuracy']
)
```
work:
```
model.compile(
optimizer=tf.keras.optimizers.Adam(lr=1e-4, decay=1e-6),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.BinaryAccuracy()]
)
```
But as I understood, 'accuracy' should be fine. In fact, instead of using my own custom `tf.data.Dataset` setup (which can be provided if needed), using `tf.keras.preprocessing.image_dataset_from_directory` gives no such error. This is the case in the tutorial <https://keras.io/examples/vision/image_classification_from_scratch>.
The trace is pasted below. Notice this differs from two other, older questions: it somehow involves the metrics.
ValueError: in user code:
```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2526 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2886 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:759 train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:388 update_state
self.build(y_pred, y_true)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:319 build
self._metrics, y_true, y_pred)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1139 map_structure_up_to
**kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1235 map_structure_with_tuple_paths_up_to
*flat_value_lists)]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1234 <listcomp>
results = [func(*args, **kwargs) for args in zip(flat_path_list,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1137 <lambda>
lambda _, *values: func(*values), # Discards the path arg.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:419 _get_metric_objects
return [self._get_metric_object(m, y_t, y_p) for m in metrics]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:419 <listcomp>
return [self._get_metric_object(m, y_t, y_p) for m in metrics]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:440 _get_metric_object
y_t_rank = len(y_t.shape.as_list())
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py:1190 as_list
raise ValueError("as_list() is not defined on an unknown TensorShape.")
ValueError: as_list() is not defined on an unknown TensorShape.
```
|
2020/06/25
|
[
"https://Stackoverflow.com/questions/62585490",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762295/"
] |
Had exactly the same problem when using 'accuracy' metric.
I followed <https://github.com/tensorflow/tensorflow/issues/32912#issuecomment-550363802> example:
```
def _fixup_shape(images, labels, weights):
images.set_shape([None, None, None, 3])
labels.set_shape([None, 19]) # I have 19 classes
weights.set_shape([None])
return images, labels, weights
dataset = dataset.map(_fixup_shape)
```
which helped me solve the problem.
But in my case, instead of using one map function to load and `set_shape` inside it, as kawingkelvin did above, **I needed to use two map functions** because of some errors in the TF code.
The final solution for me was to use the following order:
`dataset.batch.map(get_data).map(fix_shape).prefetch`
NOTE: `batch` can be done either before or after `map(get_data)`, depending on how your `get_data` function is written. `fix_shape` must be done after.
|
I am able to fix this in such a way as to keep the metrics 'accuracy' (rather than using BinaryAccuracy). However, I do not quite understand why this is needed for 'accuracy', but not needed for other closely related one (e.g. BinaryAccuracy).
2 things:
1. construct the dataset such that the batch label has a shape of `(batch_size, 1)` rather than `(batch_size,)`. Following the keras.io tutorial mentioned, it should have been fine with the latter. This change aims to get rid of the "unknown" in the `TensorShape`.
2. add this to the dataset pipeline: `label.set_shape([1])`
```
def process_path(file_path):
label = get_label(file_path)
img = tf.io.read_file(file_path)
img = tf.image.decode_jpeg(img, channels=3)
label.set_shape([1])
return img, label
ds = ds.map(process_path, num_parallel_calls=AUTO).shuffle(1024).repeat().batch(batch_size).prefetch(buffer_size=AUTO)
```
This is the state before `.batch(...)`, so a single sample should have `1` as its shape (and therefore `(batch_size, 1)` after batching).
After doing so, the error didn't happen, and I used the exact same metrics 'accuracy' as in
<https://keras.io/examples/vision/image_classification_from_scratch>
Hope this helps anyone who got hit. I have to admit I don't truly understand why it didn't work in the first place. It still seems like a TF bug to me.
|
49,989,188
|
I have function like this one:
```
def get_list_of_movies(table):
#some code here
print(a_list)
return a_list
```
The reason I want both print and return is that I'm using this function in many places. When calling it from the menu, I want it to print the list of content.
I also use the same function inside another function, just to get the list. The problem is that when I call it there, it prints the list as well.
Question: How do I prevent the function from executing the print line when it's used in another function just to get the list?
This is part of exercise so I can't define more functions / split this or soo - I'm kind of limited to this one function.
*Edit: Thank you for all the answers! I'm just a beginner, but you showed me ways in Python (and programming in general) that I never thought of! Using a second boolean parameter is very clever. I'm learning a lot here!*
|
2018/04/23
|
[
"https://Stackoverflow.com/questions/49989188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8773813/"
] |
Add a separate argument with a default value of `False` to control the printing:
```
def get_list_of_movies(table, printIt=False):
...
if printIt:
print(a_list)
return a_list
...
movies = get_list_of_movies(table, printIt=True)
```
Another approach is to pass `print` itself as the argument, where the default value is a no-op:
```
def get_list_of_movies(table, printer=lambda *args: None):
...
printer(a_list)
return a_list
...
movies = get_list_of_movies(table, printer=print)
```
This opens the door to customizing exactly how the result is printed; you are effectively adding an arbitrary callback to be applied to the return value, which admittedly can be handled with a custom pass-through function as well:
```
def print_it_first(x):
print(x)
return x
movies = print_it_first(get_list_of_movies(table))
```
This doesn't require any special treatment of `get_list_of_movies` itself, so is probably preferable from a design standpoint.
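For completeness, here is a minimal, self-contained sketch of the flag-based approach above (the `sorted` body is a stand-in for the real lookup logic, and the flag name is illustrative):

```python
import io
from contextlib import redirect_stdout

def get_list_of_movies(table, print_it=False):
    a_list = sorted(table)            # stand-in for the real lookup logic
    if print_it:
        print(a_list)
    return a_list

# Silent call, as used from other functions:
movies = get_list_of_movies(["B", "A"])

# Printing call, as used from the menu (output captured here for the demo):
buf = io.StringIO()
with redirect_stdout(buf):
    get_list_of_movies(["B", "A"], print_it=True)
```

Only the flag pattern matters here; the function body would be whatever the exercise requires.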
|
A completely different approach is to *always* print the list, but control where it gets printed *to*:
```
import os
import sys

def get_list_of_movies(table, print_to=None):
    ...
    if print_to is None:
        print_to = open(os.devnull, 'w')   # discard output by default
    print(a_list, file=print_to)
    return a_list

movies = get_list_of_movies(table, print_to=sys.stdout)
```
The `print_to` argument can be any file-like object, with the default ensuring no output is written anywhere.
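A quick self-contained check of this pattern, using an `io.StringIO` as the destination (the function body is a stand-in for the real logic):

```python
import io
import os

def get_list_of_movies(table, print_to=None):
    a_list = sorted(table)                      # stand-in for the real logic
    out = print_to if print_to is not None else open(os.devnull, 'w')
    print(a_list, file=out)                     # written to caller's choice of file
    return a_list

buf = io.StringIO()
movies2 = get_list_of_movies(["C", "A"], print_to=buf)
```

The same call with `print_to` omitted returns the list without producing any visible output.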
|
11,882,194
|
I have a django web application running on our **apache2** production server using **mod\_python**, but no static files are found (css,images ... )
All our static stuff is under `/var/my.site/example/static`
```
/var/my.site/example/static/
|-admin/
|-css/
|-img/
|-css/
|-js/
|-img/
```
Now I thought I just could alias all requests to my static stuff like so:
This is the apache2 conf:
```
<VirtualHost 123.123.123:443>
... SSL stuff ...
RewriteEngine On
ReWriteOptions Inherit
<Location "/example">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE example.settings
PythonPath "[ \
'/home/me/Envs/ex/lib/python2.6/site-packages',\
'/var/my.site',\
'/home/me/Envs/ex/lib/python2.6/site-packages/django',\
'/home/me/Envs/ex/lib/python2.6/site-packages/MySQLdb',\
'/var/my.site/example',\
'/var/my.site/example/static'] + sys.path"
PythonDebug Off
</Location>
Alias /example/static /var/my.site/example/static
<Directory /var/my.site/example/static>
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
```
This is my settings.py
```
...
STATIC_ROOT = '/var/my.site'
STATIC_URL = '/example/static/'
STATICFILES_DIRS = (
"/var/my.site/example/static",
)
...
```
There is no errors in the apache-error log. But here log from apache-secure\_access.log
```
[09/Aug/2012:12:37:55 +0200] "GET /example/admin/ HTTP/1.1" 200 6694
[09/Aug/2012:12:37:55 +0200] "GET /example/static/css/base.css HTTP/1.1" 301 468
[09/Aug/2012:12:37:55 +0200] "GET /example/static/img/logo.png HTTP/1.1" 403 766
[09/Aug/2012:12:37:55 +0200] "GET /example/static/css/base.css/ HTTP/1.1" 500 756
[09/Aug/2012:12:37:55 +0200] "GET /example/static/admin/css/dashboard.css HTTP/1.1" 301 622
```
But this doesn't work, and I'm not sure I'm even on the right track. It does work when I set `DEBUG = True`, but that's just because Django serves all the static files itself.
**What am I doing wrong?**
**Does anyone know about a good tutorial or example?**
|
2012/08/09
|
[
"https://Stackoverflow.com/questions/11882194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/481406/"
] |
After @supervacuo suggestion that I strip down everything from django, I got apache to serve the static files and realized what was wrong.
The problem was that `<Location "/example">` took priority over `Alias /example/static`. It didn't matter where I put the `Alias` (above or below the `<Location>` block).
To fix it I changed `STATIC_URL` and `STATIC_ROOT`; then I could change the `Alias` so it no longer interfered with the `<Location>` block.
**From:**
```
STATIC_ROOT = '/var/my.site'
STATIC_URL = '/example/static/'
Alias /example/static /var/my.site/example/static
```
**To:**
```
STATIC_ROOT = '/var/my.site/example'
STATIC_URL = '/static/'
Alias /static /var/my.site/example/static
```
|
Try to eliminate the problem step-by-step.
Loading static files should work completely independently of Django. Try commenting out all lines relating to Django in your `VirtualHost` config. (Remember to reload Apache after changing the configuration)
If that works, it may be that you need to take more steps to avoid Django trampling over URLs in the same namespace (perhaps using `SetHandler`?).
If not, there's a more basic problem with your static files. If you can't resolve it, perhaps [ServerFault](https://serverfault.com/) can help?
|
11,882,194
|
I have a django web application running on our **apache2** production server using **mod\_python**, but no static files are found (css,images ... )
All our static stuff is under `/var/my.site/example/static`
```
/var/my.site/example/static/
|-admin/
|-css/
|-img/
|-css/
|-js/
|-img/
```
Now I thought I just could alias all requests to my static stuff like so:
This is the apache2 conf:
```
<VirtualHost 123.123.123:443>
... SSL stuff ...
RewriteEngine On
ReWriteOptions Inherit
<Location "/example">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE example.settings
PythonPath "[ \
'/home/me/Envs/ex/lib/python2.6/site-packages',\
'/var/my.site',\
'/home/me/Envs/ex/lib/python2.6/site-packages/django',\
'/home/me/Envs/ex/lib/python2.6/site-packages/MySQLdb',\
'/var/my.site/example',\
'/var/my.site/example/static'] + sys.path"
PythonDebug Off
</Location>
Alias /example/static /var/my.site/example/static
<Directory /var/my.site/example/static>
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
```
This is my settings.py
```
...
STATIC_ROOT = '/var/my.site'
STATIC_URL = '/example/static/'
STATICFILES_DIRS = (
"/var/my.site/example/static",
)
...
```
There is no errors in the apache-error log. But here log from apache-secure\_access.log
```
[09/Aug/2012:12:37:55 +0200] "GET /example/admin/ HTTP/1.1" 200 6694
[09/Aug/2012:12:37:55 +0200] "GET /example/static/css/base.css HTTP/1.1" 301 468
[09/Aug/2012:12:37:55 +0200] "GET /example/static/img/logo.png HTTP/1.1" 403 766
[09/Aug/2012:12:37:55 +0200] "GET /example/static/css/base.css/ HTTP/1.1" 500 756
[09/Aug/2012:12:37:55 +0200] "GET /example/static/admin/css/dashboard.css HTTP/1.1" 301 622
```
But this doesn't work, and I'm not sure I'm even on the right track. It does work when I set `DEBUG = True`, but that's just because Django serves all the static files itself.
**What am I doing wrong?**
**Does anyone know about a good tutorial or example?**
|
2012/08/09
|
[
"https://Stackoverflow.com/questions/11882194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/481406/"
] |
Try to eliminate the problem step-by-step.
Loading static files should work completely independently of Django. Try commenting out all lines relating to Django in your `VirtualHost` config. (Remember to reload Apache after changing the configuration)
If that works, it may be that you need to take more steps to avoid Django trampling over URLs in the same namespace (perhaps using `SetHandler`?).
If not, there's a more basic problem with your static files. If you can't resolve it, perhaps [ServerFault](https://serverfault.com/) can help?
|
The only problem I can see is with this:
`"GET /example/static/css/base.css/ HTTP/1.1" 500 756`
Since `/base.css/` is not a valid static path, the request gets passed to Django, and since it doesn't match any URL pattern it raises a 500. You should fix the template that has the errant trailing `/`.
|
11,882,194
|
I have a django web application running on our **apache2** production server using **mod\_python**, but no static files are found (css,images ... )
All our static stuff is under `/var/my.site/example/static`
```
/var/my.site/example/static/
|-admin/
|-css/
|-img/
|-css/
|-js/
|-img/
```
Now I thought I just could alias all requests to my static stuff like so:
This is the apache2 conf:
```
<VirtualHost 123.123.123:443>
... SSL stuff ...
RewriteEngine On
ReWriteOptions Inherit
<Location "/example">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE example.settings
PythonPath "[ \
'/home/me/Envs/ex/lib/python2.6/site-packages',\
'/var/my.site',\
'/home/me/Envs/ex/lib/python2.6/site-packages/django',\
'/home/me/Envs/ex/lib/python2.6/site-packages/MySQLdb',\
'/var/my.site/example',\
'/var/my.site/example/static'] + sys.path"
PythonDebug Off
</Location>
Alias /example/static /var/my.site/example/static
<Directory /var/my.site/example/static>
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
```
This is my settings.py
```
...
STATIC_ROOT = '/var/my.site'
STATIC_URL = '/example/static/'
STATICFILES_DIRS = (
"/var/my.site/example/static",
)
...
```
There is no errors in the apache-error log. But here log from apache-secure\_access.log
```
[09/Aug/2012:12:37:55 +0200] "GET /example/admin/ HTTP/1.1" 200 6694
[09/Aug/2012:12:37:55 +0200] "GET /example/static/css/base.css HTTP/1.1" 301 468
[09/Aug/2012:12:37:55 +0200] "GET /example/static/img/logo.png HTTP/1.1" 403 766
[09/Aug/2012:12:37:55 +0200] "GET /example/static/css/base.css/ HTTP/1.1" 500 756
[09/Aug/2012:12:37:55 +0200] "GET /example/static/admin/css/dashboard.css HTTP/1.1" 301 622
```
But this doesn't work, and I'm not sure I'm even on the right track. It does work when I set `DEBUG = True`, but that's just because Django serves all the static files itself.
**What am I doing wrong?**
**Does anyone know about a good tutorial or example?**
|
2012/08/09
|
[
"https://Stackoverflow.com/questions/11882194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/481406/"
] |
After @supervacuo suggestion that I strip down everything from django, I got apache to serve the static files and realized what was wrong.
The problem was that `<Location "/example">` took priority over `Alias /example/static`. It didn't matter where I put the `Alias` (above or below the `<Location>` block).
To fix it I changed `STATIC_URL` and `STATIC_ROOT`; then I could change the `Alias` so it no longer interfered with the `<Location>` block.
**From:**
```
STATIC_ROOT = '/var/my.site'
STATIC_URL = '/example/static/'
Alias /example/static /var/my.site/example/static
```
**To:**
```
STATIC_ROOT = '/var/my.site/example'
STATIC_URL = '/static/'
Alias /static /var/my.site/example/static
```
|
The only problem I can see is with this:
`"GET /example/static/css/base.css/ HTTP/1.1" 500 756`
Since `/base.css/` is not a valid static path, the request gets passed to Django, and since it doesn't match any URL pattern it raises a 500. You should fix the template that has the errant trailing `/`.
|
61,966,894
|
So I was using flask\_login for my login system on my Mac, and I seem to have run into a problem. When I ran the code, it said I had not set my secret key even though I had done so.
My code was:
```py
from flask import Flask, render_template, request, session, redirect, url_for, jsonify
from flask_session import Session
from flask_login import LoginManager, login_user,logout_user, login_required, current_user
from models.model import *
from sqlalchemy import or_, and_
app = Flask(__name__)
app.secret_key = "<Some secret key>"
app.config["SQLALCHEMY_DATABASE_URI"] = '<Some Uri>'
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db.init_app(app)
Session(app)
login_manager = LoginManager()
login_manager.login_view = 'index'
login_manager.init_app(app)
@app.route("/")
def index():
if current_user.is_authenticated:
return render_template("home.html")
return render_template("login.html", tip="You have to log in first.")
@login_manager.user_loader
def load_user(user_id):
return User.query.get(int(user_id))
@app.route("/login", methods=['POST'])
def verif():
""" Gets name and password from form """
username = request.form.get("username")
password = request.form.get("password")
""" Checks user and password. """
userpassCheck = User.query.filter(and_(User.username == username, User.password == password)).first()
if not userpassCheck:
return render_template("index.html", tip="Incorrect Username or Password.")
login_user(userpassCheck)
return redirect(url_for('index'))
if __name__ == '__main__':
app.run(debug=True, use_reloader=True)
```
The whole error traceback was:
```
Traceback (most recent call last):
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/scythia/Desktop/Project1/application.py", line 54, in verif
login_user(userpassCheck)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask_login/utils.py", line 170, in login_user
session['_user_id'] = user_id
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/werkzeug/local.py", line 350, in __setitem__
self._get_current_object()[key] = value
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/sessions.py", line 103, in _fail
"The session is unavailable because no secret "
RuntimeError: The session is unavailable because no secret key was set. Set the secret_key on the application to something unique and secret.
```
My expectation was for the code to run without errors. I haven't included the two HTML files because I think they are not related; I can add them if you think they are.
Using:
Python 3.7.7;
Flask 1.1.2;
|
2020/05/23
|
[
"https://Stackoverflow.com/questions/61966894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13600242/"
] |
As @Harmandeep Kalsi said in the comments, I added `app.config['SESSION_TYPE']` and it worked.
|
For anyone else still getting an error after trying the other answers: if you're using the `app.config['<string>']` format, make sure you include the underscore between "SECRET" and "KEY".
That ended up being my issue, but it produces the same error message, so searching for it led me here. I thought I'd share my solution for anyone who comes across this in the future.
So what worked for me was switching:
```
app.config['SECRET KEY']
```
with:
```
app.config['SECRET_KEY']
```
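Why the typo fails silently rather than raising: `app.config` is dict-like, so assigning to a misspelled key succeeds but leaves the key Flask actually reads (`SECRET_KEY`) unset. A plain-dict illustration (no Flask needed):

```python
# app.config behaves like a dict: writing a misspelled key "works",
# it just creates an entry that is never consulted.
config = {}
config['SECRET KEY'] = 'super-secret'   # typo: space instead of underscore

looked_up = config.get('SECRET_KEY')    # the key the session machinery reads
```

Since `looked_up` is `None` here, the session layer behaves exactly as if no secret key had ever been set.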
|
11,170,414
|
I just upgraded from Snow Leopard to Lion and now cannot create virtualenvs. I understand that there are new Python installations after the upgrade and no site packages, and I have tried installing pip and virtualenv again as well as upgrading to Xcode 4, but I always get this error:
```
~ > virtualenv --distribute env
New python executable in env/bin/python
Installing distribute........
Complete output from command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra... main(sys.argv[1:])
" --always-copy -U distribute:
Traceback (most recent call last):
File "<string>", line 23, in <module>
File "/Users/jaderberg/env/lib/python2.7/distutils/__init__.py", line 16, in <module>
exec(open(os.path.join(distutils_path, '__init__.py')).read())
IOError: [Errno 2] No such file or directory: 'System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/__init__.py'
----------------------------------------
...Installing distribute...done.
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 9, in <module>
load_entry_point('virtualenv==1.7.2', 'console_scripts', 'virtualenv')()
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 942, in main
never_download=options.never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1049, in create_environment
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 603, in install_distribute
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 570, in _install_req
cwd=cwd)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1020, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra... main(sys.argv[1:])
" --always-copy -U distribute failed with error code 1
```
I am a bit of a Unix/Python novice and just cannot work out how to get this working. Any ideas? Without the --distribute flag I get this error:
```
~ > virtualenv env
New python executable in env/bin/python
Installing setuptools.............
Complete output from command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra...sys.argv[1:])
" /Library/Python/2.7/...ols-0.6c11-py2.7.egg:
Traceback (most recent call last):
File "", line 279, in
File "", line 207, in main
File "/Library/Python/2.7/site-packages/distribute-0.6.27-py2.7.egg/setuptools/__init__.py", line 2, in
from setuptools.extension import Extension, Library
File "/Library/Python/2.7/site-packages/distribute-0.6.27-py2.7.egg/setuptools/extension.py", line 2, in
import distutils.core
File "/Users/jaderberg/env/lib/python2.7/distutils/__init__.py", line 16, in
exec(open(os.path.join(distutils_path, '__init__.py')).read())
IOError: [Errno 2] No such file or directory: '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/__init__.py'
----------------------------------------
...Installing setuptools...done.
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 9, in
load_entry_point('virtualenv==1.7.2', 'console_scripts', 'virtualenv')()
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 942, in main
never_download=options.never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1052, in create_environment
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 598, in install_setuptools
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 570, in _install_req
cwd=cwd)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1020, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra...sys.argv[1:])
" /Library/Python/2.7/...ols-0.6c11-py2.7.egg failed with error code 1
```
|
2012/06/23
|
[
"https://Stackoverflow.com/questions/11170414",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/883845/"
] |
Turns out that although I upgraded Xcode to version 4, it does not automatically install the command line tools. I followed this <http://blog.cingusoft.org/mac-osx-lion-virtualenv-and-could-not-call-in>.
Basically, install Xcode, go into Preferences and then Downloads and install "Command Line Tools". It works now.
The Command Line Tools are also available directly from <https://developer.apple.com/downloads/index.action#>
|
I also had to upgrade my setuptools.
`pip install setuptools --upgrade`
|
11,170,414
|
I just upgraded from Snow Leopard to Lion and now cannot create virtualenvs. I understand that there are new Python installations after the upgrade and no site packages, and I have tried installing pip and virtualenv again as well as upgrading to Xcode 4, but I always get this error:
```
~ > virtualenv --distribute env
New python executable in env/bin/python
Installing distribute........
Complete output from command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra... main(sys.argv[1:])
" --always-copy -U distribute:
Traceback (most recent call last):
File "<string>", line 23, in <module>
File "/Users/jaderberg/env/lib/python2.7/distutils/__init__.py", line 16, in <module>
exec(open(os.path.join(distutils_path, '__init__.py')).read())
IOError: [Errno 2] No such file or directory: 'System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/__init__.py'
----------------------------------------
...Installing distribute...done.
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 9, in <module>
load_entry_point('virtualenv==1.7.2', 'console_scripts', 'virtualenv')()
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 942, in main
never_download=options.never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1049, in create_environment
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 603, in install_distribute
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 570, in _install_req
cwd=cwd)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1020, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra... main(sys.argv[1:])
" --always-copy -U distribute failed with error code 1
```
I am a bit of a Unix/Python novice and just cannot work out how to get this working. Any ideas? Without the --distribute flag I get this error:
```
~ > virtualenv env
New python executable in env/bin/python
Installing setuptools.............
Complete output from command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra...sys.argv[1:])
" /Library/Python/2.7/...ols-0.6c11-py2.7.egg:
Traceback (most recent call last):
File "", line 279, in
File "", line 207, in main
File "/Library/Python/2.7/site-packages/distribute-0.6.27-py2.7.egg/setuptools/__init__.py", line 2, in
from setuptools.extension import Extension, Library
File "/Library/Python/2.7/site-packages/distribute-0.6.27-py2.7.egg/setuptools/extension.py", line 2, in
import distutils.core
File "/Users/jaderberg/env/lib/python2.7/distutils/__init__.py", line 16, in
exec(open(os.path.join(distutils_path, '__init__.py')).read())
IOError: [Errno 2] No such file or directory: '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/__init__.py'
----------------------------------------
...Installing setuptools...done.
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 9, in
load_entry_point('virtualenv==1.7.2', 'console_scripts', 'virtualenv')()
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 942, in main
never_download=options.never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1052, in create_environment
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 598, in install_setuptools
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 570, in _install_req
cwd=cwd)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1020, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra...sys.argv[1:])
" /Library/Python/2.7/...ols-0.6c11-py2.7.egg failed with error code 1
```
|
2012/06/23
|
[
"https://Stackoverflow.com/questions/11170414",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/883845/"
] |
Turns out that although I upgraded Xcode to version 4, it does not automatically install the command line tools. I followed this <http://blog.cingusoft.org/mac-osx-lion-virtualenv-and-could-not-call-in>.
Basically, install Xcode, go into Preferences and then Downloads and install "Command Line Tools". It works now.
The Command Line Tools are also available directly from <https://developer.apple.com/downloads/index.action#>
|
```
cd /usr/lib/python2.7
sudo ln -s plat-x86_64-linux-gnu/_sysconfigdata_nd.py .
```
|
57,151,931
|
I've created a Python script with Selenium to parse specific content from a webpage. I can get the result `AARONS INC`, located under `QUOTE`, in many different ways, but the way I wish to scrape it is by using a ***`pseudo selector`***, which unfortunately Selenium doesn't support. The commented-out line within the script below shows that Selenium doesn't support the pseudo selector.
However, when I use the `pseudo selector` within `driver.execute_script()` I can parse it flawlessly. To make this work I had to use a hardcoded delay for the element to become available. Now, I wish to do the same by wrapping this `driver.execute_script()` within an `Explicit Wait` condition.
```
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 20)
driver.get("https://www.nyse.com/quote/XNYS:AAN")
time.sleep(15)
# item = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "span:contains('AARONS')")))
item = driver.execute_script('''return $('span:contains("AARONS")')[0];''')
print(item.text)
```
***How can I wrap `driver.execute_script()` within Explicit Wait condition?***
|
2019/07/22
|
[
"https://Stackoverflow.com/questions/57151931",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7180194/"
] |
This is one of the ways you can achieve that. Give it a shot.
```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
with webdriver.Chrome() as driver:
wait = WebDriverWait(driver, 10)
driver.get('https://www.nyse.com/quote/XNYS:AAN')
item = wait.until(
lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];''')
)
print(item.text)
```
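The reason this works: `WebDriverWait.until` accepts any callable taking the driver and simply polls it until it returns a truthy value or the timeout expires. A pure-Python sketch of that polling loop (illustrative names, not Selenium's internals):

```python
import time

def until(condition, timeout=5.0, poll=0.1):
    # Poll `condition()` until it returns a truthy value or we time out,
    # mirroring what WebDriverWait.until does with the driver callable.
    end = time.monotonic() + timeout
    while True:
        value = condition()
        if value:
            return value
        if time.monotonic() > end:
            raise TimeoutError("condition never became truthy")
        time.sleep(poll)

# Simulate an element that only "appears" on the third poll:
state = {"calls": 0}
def fake_query():
    state["calls"] += 1
    return "AARONS INC" if state["calls"] >= 3 else None

result = until(fake_query, timeout=2.0, poll=0.01)
```

This is why the lambda form in the answer above needs no `expected_conditions` helper: any callable returning a falsy value while the element is missing will do.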
|
Here is the simple approach.
```
url = 'https://www.nyse.com/quote/XNYS:AAN'
driver.get(url)
# wait for the element to be present
ele = WebDriverWait(driver, 30).until(lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];'''))
# print the text of the element
print (ele.text)
```
|
57,151,931
|
I've created a Python script with Selenium to parse specific content from a webpage. I can get the result `AARONS INC`, located under `QUOTE`, in many different ways, but the way I wish to scrape it is by using a ***`pseudo selector`***, which unfortunately Selenium doesn't support. The commented-out line within the script below shows that Selenium doesn't support the pseudo selector.
However, when I use the `pseudo selector` within `driver.execute_script()` I can parse it flawlessly. To make this work I had to use a hardcoded delay for the element to become available. Now, I wish to do the same by wrapping this `driver.execute_script()` within an `Explicit Wait` condition.
```
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 20)
driver.get("https://www.nyse.com/quote/XNYS:AAN")
time.sleep(15)
# item = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "span:contains('AARONS')")))
item = driver.execute_script('''return $('span:contains("AARONS")')[0];''')
print(item.text)
```
***How can I wrap `driver.execute_script()` within Explicit Wait condition?***
|
2019/07/22
|
[
"https://Stackoverflow.com/questions/57151931",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7180194/"
] |
You could do the whole thing in the browser script, which is probably safer:
```
item = driver.execute_async_script("""
var span, interval = setInterval(() => {
if(span = $('span:contains("AARONS")')[0]){
clearInterval(interval)
arguments[0](span)
}
}, 1000)
""")
```
|
Here is the simple approach.
```
url = 'https://www.nyse.com/quote/XNYS:AAN'
driver.get(url)
# wait for the element to be present
ele = WebDriverWait(driver, 30).until(lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];'''))
# print the text of the element
print (ele.text)
```
|
15,669,924
|
I'm trying to get Tumblr "liked" posts for a user from the <http://api.tumblr.com/v2/user/likes> URL. I have registered my app with Tumblr and authorized the app to access the user's Tumblr data, so I have `oauth_consumer_key`,
`oauth_consumer_secret`, `oauth_token`, and `oauth_token_secret`. However, I'm not sure what to do with these details when I make the API call. I'm trying to create a command-line script that will just output JSON for further processing, so a solution in bash (cURL), Perl, or Python would be ideal.
|
2013/03/27
|
[
"https://Stackoverflow.com/questions/15669924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1132385/"
] |
Well if you don't mind using Python I can recommend [rauth](https://github.com/litl/rauth). There isn't a Tumblr example, but there are [real world, working examples](https://github.com/litl/rauth/tree/master/examples) for both OAuth 1.0/a and OAuth 2.0. The API is intended to be simple and straight forward. I'm not sure what other requirements you might have, but maybe worth giving it a shot?
Here's a working example to go by if you're interested:
```
from rauth import OAuth1Service
import re
import webbrowser
# Get a real consumer key & secret from http://www.tumblr.com/oauth/apps
tumblr = OAuth1Service(
consumer_key='gKRR414Bc2teq0ukznfGVUmb41EN3o0Nu6jctJ3dYx16jiiCsb',
consumer_secret='DcKJMlhbCHM8iBDmHudA9uzyJWIFaSTbDFd7rOoDXjSIKgMYcE',
name='tumblr',
request_token_url='http://www.tumblr.com/oauth/request_token',
access_token_url='http://www.tumblr.com/oauth/access_token',
authorize_url='http://www.tumblr.com/oauth/authorize',
base_url='https://api.tumblr.com/v2/')
request_token, request_token_secret = tumblr.get_request_token()
authorize_url = tumblr.get_authorize_url(request_token)
print 'Visit this URL in your browser: ' + authorize_url
webbrowser.open(authorize_url)
authed_url = raw_input('Copy URL from your browser\'s address bar: ')
verifier = re.search('oauth_verifier=([^#&]*)', authed_url).group(1)
session = tumblr.get_auth_session(request_token,
request_token_secret,
method='POST',
data={'oauth_verifier': verifier})
user = session.get('user/info').json()['response']['user']
print 'Currently logged in as: {name}'.format(name=user['name'])
```
Full disclosure, I maintain rauth.
|
I sort of found an answer. I ended up using OAuth::Consumer in perl to connect to the tumblr API. It's the simplest solution I've found so far and it just works.
|
64,063,248
|
My Python version: `Python 3.8.3`
`python -m pip install IPython` gives me `Successfully installed IPython-7.18.1`
Still gives me the following error:
```
from IPython.display import Image
/usr/bin/python3 "/home/sanyifeju/Desktop/python/ML/decision_trees.py"
Traceback (most recent call last):
File "/home/sanyifeju/Desktop/python/ML/decision_trees.py", line 4, in <module>
from IPython.display import Image
ModuleNotFoundError: No module named 'IPython'
```
What am I missing?
I am on Ubuntu 20.04.1, not sure if that makes any difference.
if I run `python -m pip install ipython`
I get Requirement already satisfied.
```
Requirement already satisfied: ipython in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (7.18.1)
Requirement already satisfied: setuptools>=18.5 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (49.6.0.post20200814)
Requirement already satisfied: decorator in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (4.4.2)
Requirement already satisfied: pexpect>4.3; sys_platform != "win32" in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (4.8.0)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (3.0.7)
Requirement already satisfied: jedi>=0.10 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (0.17.1)
Requirement already satisfied: pygments in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (2.7.1)
Requirement already satisfied: backcall in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (0.2.0)
Requirement already satisfied: pickleshare in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (0.7.5)
Requirement already satisfied: traitlets>=4.2 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (4.3.3)
Requirement already satisfied: ptyprocess>=0.5 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from pexpect>4.3; sys_platform != "win32"->ipython) (0.6.0)
Requirement already satisfied: wcwidth in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython) (0.2.5)
Requirement already satisfied: parso<0.8.0,>=0.7.0 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from jedi>=0.10->ipython) (0.7.0)
Requirement already satisfied: ipython-genutils in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from traitlets>=4.2->ipython) (0.2.0)
Requirement already satisfied: six in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from traitlets>=4.2->ipython) (1.15.0)
```
|
2020/09/25
|
[
"https://Stackoverflow.com/questions/64063248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1107591/"
] |
I had the same issue, and the problem was that the `python` command was linked to the python2 version:
```
$ ls -l /usr/bin/python
lrwxrwxrwx 1 root root 7 Apr 15 2020 /usr/bin/python -> python2*
```
The following commands fixed it for me:
```
$ sudo rm /usr/bin/python
$ sudo ln -s python3 /usr/bin/python
```
|
Try installing:
```
python -m pip install ipython
```
|
13,218,362
|
I'm currently learning Python and tried to make a little game using the pygame library. I use Python 3.2.3 and pygame 1.9.2a on Windows XP. Everything works fine, except one thing: if I switch to another window while my game is running, it crashes and I get an error message in the console:
```
Fatal Python error: (pygame parachute) Segmentation Fault
```
This piece of code, which I took out of my program, seems to be causing the error; however, I can't see anything wrong with it:
```
import pygame
from pygame.locals import *
pygame.init()
fenetre = pygame.display.set_mode((800, 600))
go = 1
while go:
for event in pygame.event.get():
if event.type == QUIT:
go = 0
```
Thanks for your help!
|
2012/11/04
|
[
"https://Stackoverflow.com/questions/13218362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1717248/"
] |
I know this thread is old, but I was getting the same error, "Fatal Python error: (pygame parachute) Segmentation Fault", on Linux when I resized a pygame window continuously for several seconds. Just in case this helps anyone else: it turned out to be caused by blitting to the window surface in one thread while resizing it in another thread by calling pygame.display.set\_mode(screen\_size, 0). I fixed it by acquiring a lock before drawing to or resizing the window.
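The locking pattern described above can be sketched without pygame itself: a single `threading.Lock` serializes the draw and resize operations. The `draw`/`resize` bodies below are stand-ins for the real blit and `set_mode` calls.

```python
import threading

surface_lock = threading.Lock()
log = []

def draw():
    # In the real program this would be: screen.blit(image, position)
    with surface_lock:
        log.append("draw")

def resize():
    # In the real program this would be: pygame.display.set_mode(size, 0)
    with surface_lock:
        log.append("resize")

threads = [threading.Thread(target=draw) for _ in range(3)]
threads += [threading.Thread(target=resize) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(log))  # every operation ran; the lock kept them from overlapping
```

Because both operations take the same lock, a blit can never run while the window surface is being replaced.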
|
I don't know if you have anything after the last line that you're not putting in, but if you don't, you should replace your last line with
```
pygame.quit()
sys.exit()
```
As an alternative, you could put those two lines outside of the `while` loop and keep what you have. Don't forget to `import sys`.
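Putting that advice together, the asker's loop would look like the sketch below. The dummy video driver and the posted `QUIT` event are only there so the snippet can run and exit on its own without a real window; in an interactive game you would drop both.

```python
import os
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")  # run without a real window

import pygame
from pygame.locals import QUIT

pygame.init()
fenetre = pygame.display.set_mode((800, 600))

# Simulate the user closing the window so the loop terminates here
pygame.event.post(pygame.event.Event(QUIT))

go = True
while go:
    for event in pygame.event.get():
        if event.type == QUIT:
            go = False

pygame.quit()  # shut SDL down cleanly instead of letting it crash
# in a standalone script you could follow this with sys.exit()
print("clean exit")
```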
|
68,532,863
|
I currently have a multiple regression that generates an OLS summary based on life expectancy and the variables that impact it; however, that does not include RMSE or standard deviation. Does statsmodels have an RMSE function, and is there a way to calculate standard deviation from my code?
I have found a previous example of this problem: [regression model statsmodel python](https://stackoverflow.com/questions/52562664/regression-model-statsmodel-python) , and I read the statsmodels info page: <https://www.statsmodels.org/stable/generated/statsmodels.tools.eval_measures.rmse.html>, but after testing I am still not able to resolve this problem.
```
import pandas as pd
import openpyxl
import statsmodels.formula.api as smf

df = pd.read_excel('C:/Users/File1.xlsx', sheet_name='States')
dfME = df[df['State'] == "Maine"]
pd.set_option('display.max_columns', None)
dfME.head()

model = smf.ols('Q("Life Expectancy") ~ Race + Age + Weight + C(Pets)', data=dfME)
modelfit = model.fit()
modelfit.summary()
```
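For reference, the RMSE the question asks about is just the square root of the mean squared residual; `statsmodels.tools.eval_measures.rmse` computes it, and it can also be done by hand. A stdlib-only sketch of the arithmetic (the life-expectancy numbers are made up for illustration):

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error: sqrt(mean((actual - predicted)^2))."""
    residuals = [a - p for a, p in zip(actual, predicted)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical observed vs. model-predicted life expectancies
actual = [78.1, 80.2, 76.5, 79.0]
predicted = [77.9, 80.5, 76.0, 79.4]
print(round(rmse(actual, predicted), 4))  # 0.3674
```

In statsmodels terms, `actual` would be the response column and `predicted` would be `modelfit.fittedvalues`.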
|
2021/07/26
|
[
"https://Stackoverflow.com/questions/68532863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12374203/"
] |
The canonical dplyr-way would be to write a custom predicate function that returns `TRUE` or `FALSE` for each column depending on whether the conditions are matched and use this function inside `across(where(predicate_function), ...)`.
Below I borrow the example data from @Tob and add some variations (one column is `0`, `1` but double, one column contains `NA`s, one column is a numeric column which contains other values).
```r
library(dplyr)
test_data <- tibble(strings = c("a", "b", "c", "d", "e"),
col_2 = c(1, 0, 0, 0, NA),
col_3 = as.double(c(0, 1, 1, 0, 1)),
col_4 = c(0L, 1L, 1L, 0L, 1L),
col_5 = 1:5)
# let's have a look at the data and the column types
test_data
#> # A tibble: 5 x 5
#> strings col_2 col_3 col_4 col_5
#> <chr> <dbl> <dbl> <int> <int>
#> 1 a 1 0 0 1
#> 2 b 0 1 1 2
#> 3 c 0 1 1 3
#> 4 d 0 0 0 4
#> 5 e NA 1 1 5
# predicate function
is_01_col <- function(x) {
all(unique(x) %in% c(0, 1, NA))
}
test_data %>%
mutate(across(where(is_01_col), as.factor)) %>%
glimpse
#> Rows: 5
#> Columns: 5
#> $ strings <chr> "a", "b", "c", "d", "e"
#> $ col_2 <fct> 1, 0, 0, 0, NA
#> $ col_3 <fct> 0, 1, 1, 0, 1
#> $ col_4 <fct> 0, 1, 1, 0, 1
#> $ col_5 <int> 1, 2, 3, 4, 5
```
Created on 2021-07-26 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)
|
This is what I might do, but I don't know how fast it will be if your data is large:
```
# Create some data
test_data <- data.frame(strings = c("a", "b", "c", "d", "e"),
col_2 = c(1, 0, 0, 0, 1),
col_3 = c( 0,1, 1, 0, 1))
# Find columns that are only 0s and 1s
cols_to_convert <- names(test_data)[lapply(test_data, function(x) identical(sort(unique(x)), c(0,1))) == TRUE]
# Convert these columns to factors
new_data <- test_data %>% mutate(across(all_of(cols_to_convert), ~ as.factor(.x)))
# Check that the columns are factors
lapply(new_data, class)
```
|
68,532,863
|
I currently have a multiple regression that generates an OLS summary based on life expectancy and the variables that impact it; however, that does not include RMSE or standard deviation. Does statsmodels have an RMSE function, and is there a way to calculate standard deviation from my code?
I have found a previous example of this problem: [regression model statsmodel python](https://stackoverflow.com/questions/52562664/regression-model-statsmodel-python) , and I read the statsmodels info page: <https://www.statsmodels.org/stable/generated/statsmodels.tools.eval_measures.rmse.html>, but after testing I am still not able to resolve this problem.
```
import pandas as pd
import openpyxl
import statsmodels.formula.api as smf

df = pd.read_excel('C:/Users/File1.xlsx', sheet_name='States')
dfME = df[df['State'] == "Maine"]
pd.set_option('display.max_columns', None)
dfME.head()

model = smf.ols('Q("Life Expectancy") ~ Race + Age + Weight + C(Pets)', data=dfME)
modelfit = model.fit()
modelfit.summary()
```
|
2021/07/26
|
[
"https://Stackoverflow.com/questions/68532863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12374203/"
] |
This is what I might do, but I don't know how fast it will be if your data is large:
```
# Create some data
test_data <- data.frame(strings = c("a", "b", "c", "d", "e"),
col_2 = c(1, 0, 0, 0, 1),
col_3 = c( 0,1, 1, 0, 1))
# Find columns that are only 0s and 1s
cols_to_convert <- names(test_data)[lapply(test_data, function(x) identical(sort(unique(x)), c(0,1))) == TRUE]
# Convert these columns to factors
new_data <- test_data %>% mutate(across(all_of(cols_to_convert), ~ as.factor(.x)))
# Check that the columns are factors
lapply(new_data, class)
```
|
Another dplyr approach to reach your goal. I used the built-in dataset `mtcars` because some columns (`vs` and `am`) of type `double` are binary (0 and 1).
```
df <- mtcars %>%
mutate(across(where( ~ setequal(na.omit(.x), 0:1)), as.factor))
glimpse(df)
# Rows: 32
# Columns: 11
# $ mpg <dbl> 21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2,~
# $ cyl <dbl> 6, 6, 4, 6, 8, 6, 8, 4, 4, 6, 6, 8, 8, 8, 8, 8, 8, 4, 4, 4,~
# $ disp <dbl> 160.0, 160.0, 108.0, 258.0, 360.0, 225.0, 360.0, 146.7, 140~
# $ hp <dbl> 110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 18~
# $ drat <dbl> 3.90, 3.90, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.92,~
# $ wt <dbl> 2.620, 2.875, 2.320, 3.215, 3.440, 3.460, 3.570, 3.190, 3.1~
# $ qsec <dbl> 16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.00, 22.~
# $ vs <fct> 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1,~
# $ am <fct> 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,~
# $ gear <dbl> 4, 4, 4, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 4, 4,~
# $ carb <dbl> 4, 4, 1, 1, 2, 1, 4, 2, 2, 4, 4, 3, 3, 3, 4, 4, 4, 1, 2, 1,~
```
|
68,532,863
|
I currently have a multiple regression that generates an OLS summary based on life expectancy and the variables that impact it; however, that does not include RMSE or standard deviation. Does statsmodels have an RMSE function, and is there a way to calculate standard deviation from my code?
I have found a previous example of this problem: [regression model statsmodel python](https://stackoverflow.com/questions/52562664/regression-model-statsmodel-python) , and I read the statsmodels info page: <https://www.statsmodels.org/stable/generated/statsmodels.tools.eval_measures.rmse.html>, but after testing I am still not able to resolve this problem.
```
import pandas as pd
import openpyxl
import statsmodels.formula.api as smf

df = pd.read_excel('C:/Users/File1.xlsx', sheet_name='States')
dfME = df[df['State'] == "Maine"]
pd.set_option('display.max_columns', None)
dfME.head()

model = smf.ols('Q("Life Expectancy") ~ Race + Age + Weight + C(Pets)', data=dfME)
modelfit = model.fit()
modelfit.summary()
```
|
2021/07/26
|
[
"https://Stackoverflow.com/questions/68532863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12374203/"
] |
The canonical dplyr-way would be to write a custom predicate function that returns `TRUE` or `FALSE` for each column depending on whether the conditions are matched and use this function inside `across(where(predicate_function), ...)`.
Below I borrow the example data from @Tob and add some variations (one column is `0`, `1` but double, one column contains `NA`s, one column is a numeric column which contains other values).
```r
library(dplyr)
test_data <- tibble(strings = c("a", "b", "c", "d", "e"),
col_2 = c(1, 0, 0, 0, NA),
col_3 = as.double(c(0, 1, 1, 0, 1)),
col_4 = c(0L, 1L, 1L, 0L, 1L),
col_5 = 1:5)
# let's have a look at the data and the column types
test_data
#> # A tibble: 5 x 5
#> strings col_2 col_3 col_4 col_5
#> <chr> <dbl> <dbl> <int> <int>
#> 1 a 1 0 0 1
#> 2 b 0 1 1 2
#> 3 c 0 1 1 3
#> 4 d 0 0 0 4
#> 5 e NA 1 1 5
# predicate function
is_01_col <- function(x) {
all(unique(x) %in% c(0, 1, NA))
}
test_data %>%
mutate(across(where(is_01_col), as.factor)) %>%
glimpse
#> Rows: 5
#> Columns: 5
#> $ strings <chr> "a", "b", "c", "d", "e"
#> $ col_2 <fct> 1, 0, 0, 0, NA
#> $ col_3 <fct> 0, 1, 1, 0, 1
#> $ col_4 <fct> 0, 1, 1, 0, 1
#> $ col_5 <int> 1, 2, 3, 4, 5
```
Created on 2021-07-26 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)
|
Another dplyr approach to reach your goal. I used the built-in dataset `mtcars` because some columns (`vs` and `am`) of type `double` are binary (0 and 1).
```
df <- mtcars %>%
mutate(across(where( ~ setequal(na.omit(.x), 0:1)), as.factor))
glimpse(df)
# Rows: 32
# Columns: 11
# $ mpg <dbl> 21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2,~
# $ cyl <dbl> 6, 6, 4, 6, 8, 6, 8, 4, 4, 6, 6, 8, 8, 8, 8, 8, 8, 4, 4, 4,~
# $ disp <dbl> 160.0, 160.0, 108.0, 258.0, 360.0, 225.0, 360.0, 146.7, 140~
# $ hp <dbl> 110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 18~
# $ drat <dbl> 3.90, 3.90, 3.85, 3.08, 3.15, 2.76, 3.21, 3.69, 3.92, 3.92,~
# $ wt <dbl> 2.620, 2.875, 2.320, 3.215, 3.440, 3.460, 3.570, 3.190, 3.1~
# $ qsec <dbl> 16.46, 17.02, 18.61, 19.44, 17.02, 20.22, 15.84, 20.00, 22.~
# $ vs <fct> 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1,~
# $ am <fct> 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,~
# $ gear <dbl> 4, 4, 4, 3, 3, 3, 3, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 4, 4,~
# $ carb <dbl> 4, 4, 1, 1, 2, 1, 4, 2, 2, 4, 4, 3, 3, 3, 4, 4, 4, 1, 2, 1,~
```
|
20,338,360
|
I am looking for a production database to use with Python/Django for web development. I've installed MySQL successfully, but I believe the Python connector is not working and I don't know how to make it work. Please point me in the right direction. Thanks.
If I try importing `MySQLdb`:
```
import MySQLdb
```
I get the following exception.
```
Traceback (most recent call last):
File "/Users/vantran/tutorial/scrape_yf/mysql.py", line 3, in <module>
import MySQLdb
ImportError: No module named MySQLdb
```
I've tried using MySQL but I am struggling with getting the connector package to install or work properly. <http://dev.mysql.com/downloads/file.php?id=414340>
I've also tried to look at the other SO questions regarding installing MySQL python connectors, but they all seem to be unnecessarily complicated.
I've also tried
1. <http://www.tutorialspoint.com/python/python_database_access.htm>
2. <http://zetcode.com/db/mysqlpython/>
3. <https://github.com/PyMySQL/PyMySQL>
...but nothing seems to work.
|
2013/12/02
|
[
"https://Stackoverflow.com/questions/20338360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2424253/"
] |
If your problem is with the `MySQLdb` module, not the MySQL server itself, you might want to consider [`PyMySQL`](https://github.com/PyMySQL/PyMySQL) instead. It's much simpler to set up. Of course it's also somewhat different.
The key difference is that it's a pure Python implementation of the MySQL protocol, not a wrapper around `libmysql`. So it has minimal requirements, but in a few use cases it may not be as performant. Also, since they're completely different libraries, there are a few rare things that one supports but not the other, and various things that they support differently. (For example, `MySQLdb` handles all MySQL warnings as Python warnings; `PyMySQL` handles them as information for you to process.)
|
I would recommend postgres.app: <http://postgresapp.com>
I tried it and never left.
My preference for the driver is <http://initd.org/psycopg/>
You'll find a list of drivers at <http://wiki.postgresql.org/wiki/Python>
|
3,947,878
|
I have an App Engine webapp where I need to set the HTTP Location header to redirect to another page in the same webapp. For that I need to produce an absolute link. For portability reasons I cannot directly use the domain name I am currently hosted on.
Is it possible to obtain the domain name on which the webapp is hosted in the Python code?
|
2010/10/16
|
[
"https://Stackoverflow.com/questions/3947878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/329292/"
] |
I don't think I fully understand the need to get the domain name, but check whether the redirect API provided by Google App Engine will do the job:
<http://code.google.com/appengine/docs/python/tools/webapp/redirects.html>
|
This question is poorly answered. You need the domain name to generate pages like robots.txt, sitemap.xml, and many other things that are not relative links. I've tried using this:
```
from google.appengine.api.app_identity import get_default_version_hostname
host = get_default_version_hostname()
```
But it is no good once you upgrade to your own domain, because I still get the appspot.com name.
|
3,947,878
|
I have an App Engine webapp where I need to set the HTTP Location header to redirect to another page in the same webapp. For that I need to produce an absolute link. For portability reasons I cannot directly use the domain name I am currently hosted on.
Is it possible to obtain the domain name on which the webapp is hosted in the Python code?
|
2010/10/16
|
[
"https://Stackoverflow.com/questions/3947878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/329292/"
] |
I don't think I fully understand the need to get the domain name, but check whether the redirect API provided by Google App Engine will do the job:
<http://code.google.com/appengine/docs/python/tools/webapp/redirects.html>
|
If your application uses a [custom domain name](https://cloud.google.com/appengine/docs/python/console/using-custom-domains-and-ssl#adding_a_custom_domain_for_your_application), then *get\_default\_version\_hostname()* will return *myapp.appspot.com* or *1.default.myapp.appspot.com*, not *mydomain.com*.
To get the domain name in the request, use:
```
hostname = self.request.host
```
According to the [webob docs](http://docs.webob.org/en/latest/api/request.html?highlight=host#webob.request.BaseRequest.host):
>
> returns 'Host name provided in HTTP\_HOST, with fall-back to
> SERVER\_NAME'
>
>
>
This won't work inside a cron task or if the client calls *myapp.appspot.com* instead of *mydomain.com*.
See also issue: <https://code.google.com/p/googleappengine/issues/detail?id=11993>
get\_default\_version\_hostname() won't work inside the /\_ah/start handler on a Managed VM. The workaround is to set:
os.environ['DEFAULT\_VERSION\_HOSTNAME'] = os.environ['GAE\_APPENGINE\_HOSTNAME']
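Once the host is known, assembling the absolute `Location` value is plain string work. A stdlib sketch (Python 3 `urllib.parse`; on the old Python 2 runtime the module is `urlparse`). The host and path here are made-up examples; in a real handler `self.request.host` would supply the host:

```python
from urllib.parse import urlunsplit

def absolute_url(host, path, scheme="https"):
    """Build an absolute URL from the host reported by the request."""
    return urlunsplit((scheme, host, path, "", ""))

# In a webapp handler, host would come from self.request.host
print(absolute_url("myapp.example.com", "/landing"))  # https://myapp.example.com/landing
```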
|
3,947,878
|
I have an App Engine webapp where I need to set the HTTP Location header to redirect to another page in the same webapp. For that I need to produce an absolute link. For portability reasons I cannot directly use the domain name I am currently hosted on.
Is it possible to obtain the domain name on which the webapp is hosted in the Python code?
|
2010/10/16
|
[
"https://Stackoverflow.com/questions/3947878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/329292/"
] |
If you're using the webapp framework, the current URL is in `self.request.url`. Various components are broken out in properties of self.request, documented [here](http://pythonpaste.org/webob/#urls).
|
This question is poorly answered. You need the domain name to generate pages like robots.txt, sitemap.xml, and many other things that are not relative links. I've tried using this:
```
from google.appengine.api.app_identity import get_default_version_hostname
host = get_default_version_hostname()
```
But it is no good once you upgrade to your own domain, because I still get the appspot.com name.
|
3,947,878
|
I have an App Engine webapp where I need to set the HTTP Location header to redirect to another page in the same webapp. For that I need to produce an absolute link. For portability reasons I cannot directly use the domain name I am currently hosted on.
Is it possible to obtain the domain name on which the webapp is hosted in the Python code?
|
2010/10/16
|
[
"https://Stackoverflow.com/questions/3947878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/329292/"
] |
If you're using the webapp framework, the current URL is in `self.request.url`. Various components are broken out in properties of self.request, documented [here](http://pythonpaste.org/webob/#urls).
|
If your application uses a [custom domain name](https://cloud.google.com/appengine/docs/python/console/using-custom-domains-and-ssl#adding_a_custom_domain_for_your_application), then *get\_default\_version\_hostname()* will return *myapp.appspot.com* or *1.default.myapp.appspot.com*, not *mydomain.com*.
To get the domain name in the request, use:
```
hostname = self.request.host
```
According to the [webob docs](http://docs.webob.org/en/latest/api/request.html?highlight=host#webob.request.BaseRequest.host):
>
> returns 'Host name provided in HTTP\_HOST, with fall-back to
> SERVER\_NAME'
>
>
>
This won't work inside a cron task or if the client calls *myapp.appspot.com* instead of *mydomain.com*.
See also issue: <https://code.google.com/p/googleappengine/issues/detail?id=11993>
get\_default\_version\_hostname() won't work inside the /\_ah/start handler on a Managed VM. The workaround is to set:
os.environ['DEFAULT\_VERSION\_HOSTNAME'] = os.environ['GAE\_APPENGINE\_HOSTNAME']
|
3,947,878
|
I have an App Engine webapp where I need to set the HTTP Location header to redirect to another page in the same webapp. For that I need to produce an absolute link. For portability reasons I cannot directly use the domain name I am currently hosted on.
Is it possible to obtain the domain name on which the webapp is hosted in the Python code?
|
2010/10/16
|
[
"https://Stackoverflow.com/questions/3947878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/329292/"
] |
This question is poorly answered. You need the domain name to generate pages like robots.txt, sitemap.xml, and many other things that are not relative links. I've tried using this:
```
from google.appengine.api.app_identity import get_default_version_hostname
host = get_default_version_hostname()
```
But it is no good once you upgrade to your own domain, because I still get the appspot.com name.
|
If your application uses a [custom domain name](https://cloud.google.com/appengine/docs/python/console/using-custom-domains-and-ssl#adding_a_custom_domain_for_your_application), then *get\_default\_version\_hostname()* will return *myapp.appspot.com* or *1.default.myapp.appspot.com*, not *mydomain.com*.
To get the domain name in the request, use:
```
hostname = self.request.host
```
According to the [webob docs](http://docs.webob.org/en/latest/api/request.html?highlight=host#webob.request.BaseRequest.host):
>
> returns 'Host name provided in HTTP\_HOST, with fall-back to
> SERVER\_NAME'
>
>
>
This won't work inside a cron task or if the client calls *myapp.appspot.com* instead of *mydomain.com*.
See also issue: <https://code.google.com/p/googleappengine/issues/detail?id=11993>
get\_default\_version\_hostname() won't work inside the /\_ah/start handler on a Managed VM. The workaround is to set:
os.environ['DEFAULT\_VERSION\_HOSTNAME'] = os.environ['GAE\_APPENGINE\_HOSTNAME']
|
71,288,828
|
I am trying to extract book names from the O'Reilly Media website using Python Beautiful Soup.
However, I see that the book names are not in the page source HTML.
I am using this link to see the books:
[https://www.oreilly.com/search/?query=\*&extended\_publisher\_data=true&highlight=true&include\_assessments=false&include\_case\_studies=true&include\_courses=true&include\_playlists=true&include\_collections=true&include\_notebooks=true&include\_sandboxes=true&include\_scenarios=true&is\_academic\_institution\_account=false&source=user&formats=book&formats=article&formats=journal&sort=date\_added&facet\_json=true&json\_facets=true&page=0&include\_facets=true&include\_practice\_exams=true](https://www.oreilly.com/search/?query=*&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_playlists=true&include_collections=true&include_notebooks=true&include_sandboxes=true&include_scenarios=true&is_academic_institution_account=false&source=user&formats=book&formats=article&formats=journal&sort=date_added&facet_json=true&json_facets=true&page=0&include_facets=true&include_practice_exams=true)
Attached is a screenshot showing the webpage with the first two books, along with the Chrome developer tools, with arrows pointing to the elements I'd like to extract.
[](https://i.stack.imgur.com/3A2vS.png)
I looked at the page source but could not find the book names; maybe they are hidden behind some other links inside the main HTML.
I tried to open some of the links inside the HTML and searched for the book names, but could not find anything.
Is it possible to extract the first or second book names from the website using Beautiful Soup?
If not, is there any other Python package that can do that? Maybe Selenium?
Or, as a last resort, any other tool.
|
2022/02/27
|
[
"https://Stackoverflow.com/questions/71288828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7895331/"
] |
If you look at the network tab while the page loads, you can see that it sends a request to an API:
[](https://i.stack.imgur.com/kKfDY.png)
It returns JSON with the books.
After some investigation, you can get your titles via:
```
import json
import requests
response_json = json.loads(requests.get(
"https://www.oreilly.com/api/v2/search/?query=*&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_playlists=true&include_collections=true&include_notebooks=true&include_sandboxes=true&include_scenarios=true&is_academic_institution_account=false&source=user&formats=book&formats=article&formats=journal&sort=date_added&facet_json=true&json_facets=true&page=0&include_facets=true&include_practice_exams=true&orm-service=search-frontend").text)
for book in response_json['results']:
print(book['highlights']['title'][0])
```
|
To solve this issue you need to know that Beautiful Soup can only deal with websites that serve plain HTML. For websites that build their pages with JavaScript, Beautiful Soup can't get all the page data you are looking for, because you need a browser to load the JavaScript data on the page.
That is where Selenium comes in: it opens a browser page and loads all the data of the page, and you can use the two in combination like this:
```
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import lxml
# Run Chrome headless so Selenium works in the background without a visible window
chrome_options = Options()
chrome_options.add_argument("--headless")

# You need to install the matching ChromeDriver first
driver = webdriver.Chrome('#Dir of the driver', options=chrome_options)
driver.get('#url')
html = driver.page_source
soup = BeautifulSoup(html, 'lxml')
```
With this you can get all the data that you need. Don't forget to run this at the end to quit the background Selenium session:
```
driver.quit()
```
|
63,424,301
|
I am trying to refresh Power BI more frequently than the gateway's scheduled refresh currently allows.
I found this:
<https://github.com/dubravcik/pbixrefresher-python>
I installed it and verified that I have all the required packages to run it.
Right now it works fine until the end: after the refresh, the Save function seems to execute correctly, but the report does not save, and when the Publish function runs, a prompt appears asking whether the user would like to save, followed by a timeout.
I have tried increasing the timeout argument and adding more wait time to the routine (along with a couple of other ideas suggested in the GitHub issues thread).
Below is what cmd looks like along with the error. I also added the main routine of the pbixrefresher file in case there is a different way to save (hotkeys) or something else worth trying. I tried this both as my user and as admin in CMD, but wasn't sure whether a permissions setting could block the report from saving. Thank you for reading; any help is greatly appreciated.
```
Starting Power BI
Waiting 15 sec
Identifying Power BI window
Refreshing
Waiting for refresh end (timeout in 100000 sec)
Saving
Publish
Traceback (most recent call last):
File "c:\python36\lib\site-packages\pywinauto\application.py", line 258, in __resolve_control
criteria)
File "c:\python36\lib\site-packages\pywinauto\timings.py", line 458, in wait_until_passes
raise err
pywinauto.timings.TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\python36\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "c:\python36\lib\runpy.py", line 85, in run_code
exec(code, run_globals)
File "C:\Python36\Scripts\pbixrefresher.exe_main.py", line 9, in
File "c:\python36\lib\site-packages\pbixrefresher\pbixrefresher.py", line 77, in main
publish_dialog.child_window(title = WORKSPACE, found_index=0).click_input()
File "c:\python36\lib\site-packages\pywinauto\application.py", line 379, in getattribute
ctrls = self.__resolve_control(self.criteria)
File "c:\python36\lib\site-packages\pywinauto\application.py", line 261, in __resolve_control
raise e.original_exception
File "c:\python36\lib\site-packages\pywinauto\timings.py", line 436, in wait_until_passes
func_val = func(*args, **kwargs)
File "c:\python36\lib\site-packages\pywinauto\application.py", line 222, in __get_ctrl
ctrl = self.backend.generic_wrapper_class(findwindows.find_element(**ctrl_criteria))
File "c:\python36\lib\site-packages\pywinauto\findwindows.py", line 87, in find_element
raise ElementNotFoundError(kwargs)
pywinauto.findwindows.ElementNotFoundError: {'auto_id': 'KoPublishToGroupDialog', 'top_level_only': False, 'parent': <uia_element_info.UIAElementInfo - 'Simple - Power BI Desktop', WindowsForms10.Window.8.app.0.1bb715_r6_ad1, 8914246>, 'backend': 'uia'}
```
The main routine from pbixrefresher:
```
def main():
    # Parse arguments from cmd
    parser = argparse.ArgumentParser()
    parser.add_argument("workbook", help = "Path to .pbix file")
    parser.add_argument("--workspace", help = "name of online Power BI service work space to publish in", default = "My workspace")
    parser.add_argument("--refresh-timeout", help = "refresh timeout", default = 30000, type = int)
    parser.add_argument("--no-publish", dest='publish', help="don't publish, just save", default = True, action = 'store_false' )
    parser.add_argument("--init-wait", help = "initial wait time on startup", default = 15, type = int)
    args = parser.parse_args()
    timings.after_clickinput_wait = 1
    WORKBOOK = args.workbook
    WORKSPACE = args.workspace
    INIT_WAIT = args.init_wait
    REFRESH_TIMEOUT = args.refresh_timeout

    # Kill running PBI
    PROCNAME = "PBIDesktop.exe"
    for proc in psutil.process_iter():
        # check whether the process name matches
        if proc.name() == PROCNAME:
            proc.kill()
    time.sleep(3)

    # Start PBI and open the workbook
    print("Starting Power BI")
    os.system('start "" "' + WORKBOOK + '"')
    print("Waiting ", INIT_WAIT, "sec")
    time.sleep(INIT_WAIT)

    # Connect pywinauto
    print("Identifying Power BI window")
    app = Application(backend = 'uia').connect(path = PROCNAME)
    win = app.window(title_re = '.*Power BI Desktop')
    time.sleep(5)
    win.wait("enabled", timeout = 300)
    win.Save.wait("enabled", timeout = 300)
    win.set_focus()
    win.Home.click_input()
    win.Save.wait("enabled", timeout = 300)
    win.wait("enabled", timeout = 300)

    # Refresh
    print("Refreshing")
    win.Refresh.click_input()
    #wait_win_ready(win)
    time.sleep(5)
    print("Waiting for refresh end (timeout in ", REFRESH_TIMEOUT, "sec)")
    win.wait("enabled", timeout = REFRESH_TIMEOUT)

    # Save
    print("Saving")
    type_keys("%1", win)
    #wait_win_ready(win)
    time.sleep(5)
    win.wait("enabled", timeout = REFRESH_TIMEOUT)

    # Publish
    if args.publish:
        print("Publish")
        win.Publish.click_input()
        publish_dialog = win.child_window(auto_id = "KoPublishToGroupDialog")
        publish_dialog.child_window(title = WORKSPACE).click_input()
        publish_dialog.Select.click()
        try:
            win.Replace.wait('visible', timeout = 10)
        except Exception:
            pass
        if win.Replace.exists():
            win.Replace.click_input()
        win["Got it"].wait('visible', timeout = REFRESH_TIMEOUT)
        win["Got it"].click_input()

    # Close
    print("Exiting")
    win.close()
    # Force close
    for proc in psutil.process_iter():
        if proc.name() == PROCNAME:
            proc.kill()

if __name__ == '__main__':
    try:
        main()
    except Exception as e:
        print(e)
        sys.exit(1)
```
|
2020/08/15
|
[
"https://Stackoverflow.com/questions/63424301",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11854373/"
] |
If this is your command line:
```
g++ -std=c++17 -pthread -o http_test.out http_test.cpp -lssl -lcrypto && ./http_test.out
```
Aren't you missing "-O2"? It looks like you are building without optimizations, which will be considerably slower.
|
From what I know of the BitMEX engine, a latency of around 10 ms for order execution is the best you can get, and it will be worse during high-volatility periods. Check <https://bonfida.com/latency-monitor> to get an idea of typical latencies. In the crypto world, latencies are far higher than in traditional HFT.
|
31,687,690
|
I just got a new MacBook Pro and installed Python 3.4.
I ran the terminal and typed
```
python3.4
```
I got:
```
Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
I typed:
```
>>> print("Hello world")
Hello world
```
All good, but when I tried to do something a bit more complex I ran into trouble, I did:
```
>>> counter = 5
>>>
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
I get the error:
```
File "<stdin>", line 4
print("Hello World")
^
SyntaxError: invalid syntax
```
My guess is that the error is on the 'print("Hello World")' but I have no clue as to why, I don't need to indent it if I want it to run after the loop is finished. Any help will be appreciated.
|
2015/07/28
|
[
"https://Stackoverflow.com/questions/31687690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4996405/"
] |
Notice the "..." prompt? That's telling you that the interactive interpreter knows you are in a block. You'll have to enter a blank line to terminate the block, before doing the final print statement.
This is an artifact of running interactively -- the blank line isn't required when you type your code into a file.
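The rule above can be seen programmatically with the stdlib `codeop` module, which implements the interactive prompt's "is this statement finished yet?" check (`compile_command` returns `None` when the REPL would show `...` and keep reading):

```python
import codeop

# A complete simple statement compiles immediately.
assert codeop.compile_command("counter = 5\n") is not None

# A compound-statement header is incomplete: the REPL answers with "..."
# and compile_command returns None, meaning "keep reading lines".
assert codeop.compile_command("while counter > 0:") is None
```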
|
You have to use spaces for indentation (and `;` to separate two instructions on one line):
```
>>> counter = 5
>>> while counter > 0:
...     counter -= 1
...     print("Hello")
...
Hello
Hello
Hello
Hello
Hello
>>>
```
|
31,687,690
|
I just got a new MacBook Pro and installed Python 3.4.
I ran the terminal and typed
```
python3.4
```
I got:
```
Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
I typed:
```
>>> print("Hello world")
Hello world
```
All good, but when I tried to do something a bit more complex I ran into trouble, I did:
```
>>> counter = 5
>>>
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
I get the error:
```
File "<stdin>", line 4
print("Hello World")
^
SyntaxError: invalid syntax
```
My guess is that the error is on the 'print("Hello World")' but I have no clue as to why, I don't need to indent it if I want it to run after the loop is finished. Any help will be appreciated.
|
2015/07/28
|
[
"https://Stackoverflow.com/questions/31687690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4996405/"
] |
Notice the "..." prompt? That's telling you that the interactive interpreter knows you are in a block. You'll have to enter a blank line to terminate the block, before doing the final print statement.
This is an artifact of running interactively -- the blank line isn't required when you type your code into a file.
|
Because it is a syntax error.
```
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
This is how the Python console works - you can see that you have three dots before print("Hello World"), which indicates that Python still expects **indented** code belonging to the while block.
You need to press Enter on an empty line to get back to normal mode (signaled by >>>). Also, in future, if you encounter similar problems, try running the code from a file and not only from the console.
|
31,687,690
|
I just got a new MacBook Pro and installed Python 3.4.
I ran the terminal and typed
```
python3.4
```
I got:
```
Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
I typed:
```
>>> print("Hello world")
Hello world
```
All good, but when I tried to do something a bit more complex I ran into trouble, I did:
```
>>> counter = 5
>>>
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
I get the error:
```
File "<stdin>", line 4
print("Hello World")
^
SyntaxError: invalid syntax
```
My guess is that the error is on the 'print("Hello World")' but I have no clue as to why, I don't need to indent it if I want it to run after the loop is finished. Any help will be appreciated.
|
2015/07/28
|
[
"https://Stackoverflow.com/questions/31687690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4996405/"
] |
Notice the "..." prompt? That's telling you that the interactive interpreter knows you are in a block. You'll have to enter a blank line to terminate the block, before doing the final print statement.
This is an artifact of running interactively -- the blank line isn't required when you type your code into a file.
|
This is caused by a quirk of python's interactive mode, which treats newlines specially.
When you have a `...` prompt, it *must* be followed by a continuation of the preceding compound statement, rather than the beginning of a new statement, as it could be in non-interactive mode. Press enter again to make the `...` prompt go away.
---
Observe that this fails:
```
echo $'while False: pass\npass' | python -i
```
But this works:
```
echo $'while False: pass\npass' | python
```
---
You can read the nitty-gritty details [in the grammar reference](https://docs.python.org/3/reference/grammar.html). Interactive input uses the `single_input` start state, and non-interactive input uses the `file_input` start state.
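The two start states can also be demonstrated with the built-in `compile()`, whose `'single'` and `'exec'` modes correspond to interactive and file input respectively:

```python
src = "while False:\n    pass\npass\n"

# 'exec' (file input) happily accepts several statements in a row.
compile(src, "<stdin>", "exec")

# 'single' (interactive input) allows only one statement, so the
# same source is rejected with a SyntaxError.
try:
    compile(src, "<stdin>", "single")
    raise AssertionError("expected a SyntaxError")
except SyntaxError:
    pass
```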
|
31,687,690
|
I just got a new MacBook Pro and installed Python 3.4.
I ran the terminal and typed
```
python3.4
```
I got:
```
Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
I typed:
```
>>> print("Hello world")
Hello world
```
All good, but when I tried to do something a bit more complex I ran into trouble, I did:
```
>>> counter = 5
>>>
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
I get the error:
```
File "<stdin>", line 4
print("Hello World")
^
SyntaxError: invalid syntax
```
My guess is that the error is on the 'print("Hello World")' but I have no clue as to why, I don't need to indent it if I want it to run after the loop is finished. Any help will be appreciated.
|
2015/07/28
|
[
"https://Stackoverflow.com/questions/31687690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4996405/"
] |
Because it is a syntax error.
```
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
This is how the Python console works - you can see that you have three dots before print("Hello World"), which indicates that Python still expects **indented** code belonging to the while block.
You need to press Enter on an empty line to get back to normal mode (signaled by >>>). Also, in future, if you encounter similar problems, try running the code from a file and not only from the console.
|
You have to use spaces for indentation (and `;` to separate two instructions on one line):
```
>>> counter = 5
>>> while counter > 0:
...     counter -= 1
...     print("Hello")
...
Hello
Hello
Hello
Hello
Hello
>>>
```
|
31,687,690
|
I just got a new MacBook Pro and installed Python 3.4.
I ran the terminal and typed
```
python3.4
```
I got:
```
Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
I typed:
```
>>> print("Hello world")
Hello world
```
All good, but when I tried to do something a bit more complex I ran into trouble, I did:
```
>>> counter = 5
>>>
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
I get the error:
```
File "<stdin>", line 4
print("Hello World")
^
SyntaxError: invalid syntax
```
My guess is that the error is on the 'print("Hello World")' but I have no clue as to why, I don't need to indent it if I want it to run after the loop is finished. Any help will be appreciated.
|
2015/07/28
|
[
"https://Stackoverflow.com/questions/31687690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4996405/"
] |
This is caused by a quirk of python's interactive mode, which treats newlines specially.
When you have a `...` prompt, it *must* be followed by a continuation of the preceding compound statement, rather than the beginning of a new statement, as it could be in non-interactive mode. Press enter again to make the `...` prompt go away.
---
Observe that this fails:
```
echo $'while False: pass\npass' | python -i
```
But this works:
```
echo $'while False: pass\npass' | python
```
---
You can read the nitty-gritty details [in the grammar reference](https://docs.python.org/3/reference/grammar.html). Interactive input uses the `single_input` start state, and non-interactive input uses the `file_input` start state.
|
You have to use spaces for indentation (and `;` to separate two instructions on one line):
```
>>> counter = 5
>>> while counter > 0:
...     counter -= 1
...     print("Hello")
...
Hello
Hello
Hello
Hello
Hello
>>>
```
|
31,687,690
|
I just got a new MacBook Pro and installed Python 3.4.
I ran the terminal and typed
```
python3.4
```
I got:
```
Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
I typed:
```
>>> print("Hello world")
Hello world
```
All good, but when I tried to do something a bit more complex I ran into trouble, I did:
```
>>> counter = 5
>>>
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
I get the error:
```
File "<stdin>", line 4
print("Hello World")
^
SyntaxError: invalid syntax
```
My guess is that the error is on the 'print("Hello World")' but I have no clue as to why, I don't need to indent it if I want it to run after the loop is finished. Any help will be appreciated.
|
2015/07/28
|
[
"https://Stackoverflow.com/questions/31687690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4996405/"
] |
Because it is a syntax error.
```
>>> while counter > 0:
...     counter -= 1
...     print()
... print("Hello World")
```
This is how the Python console works - you can see that you have three dots before print("Hello World"), which indicates that Python still expects **indented** code belonging to the while block.
You need to press Enter on an empty line to get back to normal mode (signaled by >>>). Also, in future, if you encounter similar problems, try running the code from a file and not only from the console.
|
This is caused by a quirk of python's interactive mode, which treats newlines specially.
When you have a `...` prompt, it *must* be followed by a continuation of the preceding compound statement, rather than the beginning of a new statement, as it could be in non-interactive mode. Press enter again to make the `...` prompt go away.
---
Observe that this fails:
```
echo $'while False: pass\npass' | python -i
```
But this works:
```
echo $'while False: pass\npass' | python
```
---
You can read the nitty-gritty details [in the grammar reference](https://docs.python.org/3/reference/grammar.html). Interactive input uses the `single_input` start state, and non-interactive input uses the `file_input` start state.
|
28,242,066
|
my code :
```
def isModuleBlink(modulename):
    f = '/tmp/' + modulename + '.blink'
    if(os.path.isfile(f)):
        with open(f) as fii:
            res = fii.read()
            print 'res', res
            print res is '1'
            if(res is '1'):
                print 'return true'
                return True
    return False
```
and print out :
```
res 1
False
```
Why does Python return False for this condition?
When I test `print '1' is '1'` in the Python terminal it returns `True`, but in this script it returns False?
|
2015/01/30
|
[
"https://Stackoverflow.com/questions/28242066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3585139/"
] |
`res` is `'1\n'` and not `'1'`. I replaced the condition with `'1' in res` and it works.
Thanks.
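In other words, `fii.read()` returns the file's contents including the trailing newline, so a comparison against `'1'` fails. A minimal sketch of the safer pattern (strip whitespace first, then compare values with `==` rather than `is`):

```python
res = "1\n"                # what fii.read() actually returned

assert res != "1"          # the trailing newline makes the values unequal
assert res.strip() == "1"  # strip first, then compare with ==
```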
|
`is` tests if things are *identical*. You want to test if two strings are equal, not necessarily that they occupy the same memory address. So you want `==`.
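A small illustration of the difference (note that the identity result is a CPython implementation detail, not something to rely on):

```python
x = "".join(["a", "b"])  # builds the string "ab" at runtime
y = "ab"                 # a literal

assert x == y            # equal values
# `x is y` may well be False here: two distinct objects with equal
# contents. Use == for string comparison; reserve `is` for None checks.
```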
|
38,249,606
|
Say I have a vector of values from a tokenizing function, `tokenize()`. I know it will only have two values. I want to store the first value in `a` and the second in `b`. In Python, I would do:
```python
a, b = string.split(' ')
```
I could do it as such in an ugly way:
```cpp
vector<string> tokens = tokenize(string);
string a = tokens[0];
string b = tokens[1];
```
But that requires two extra lines of code, an extra variable, and less readability.
How would I do such a thing in C++ in a clean and efficient way?
**EDIT:** I must emphasize that efficiency is very important. Too many answers don't satisfy this. This includes modifying [my tokenization function](https://gist.github.com/CrazyPython/be933ac13243f4db9c2a6155c63ae9b2).
**EDIT 2**: I am using C++11 for reasons outside of my control and I also cannot use Boost.
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38249606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459669/"
] |
With structured bindings (definitely will be in C++17), you'd be able to write something like:
```
auto [a,b] = as_tuple<2>(tokenize(str));
```
where `as_tuple<N>` is some to-be-declared function that converts a `vector<string>` to a `tuple<string, string, ... N times ...>`, probably throwing if the sizes don't match. You can't destructure a `std::vector` since its size isn't known at compile time. This will necessarily do extra moves of the `string`, so you're losing some efficiency in order to gain some code clarity. Maybe that's ok.
Or maybe you write a `tokenize<N>` that returns a `tuple<string, string, ... N times ...>` directly, avoiding the extra move. In that case:
```
auto [a, b] = tokenize<2>(str);
```
is great.
---
Before C++17, what you have is what you can do. But just make your variables references:
```
std::vector<std::string> tokens = tokenize(str);
std::string& a = tokens[0];
std::string& b = tokens[1];
```
Yeah, it's a couple extra lines of code. That's not the end of the world. It's easy to understand.
|
Ideally you'd rewrite the `tokenize()` function so that it returns a pair of strings rather than a vector:
```
std::pair<std::string, std::string> tokenize(const std::string& str);
```
Or you would pass two references to empty strings to the function as parameters.
```
void tokenize(const std::string& str, std::string& result_1, std::string& result_2);
```
If you have no control over the tokenize function the best you can do is move the strings out of the vector in an optimal way.
```
std::vector<std::string> tokens = tokenize(str);
std::string a = std::move(tokens.front());
std::string b = std::move(tokens.back());
```
|
38,249,606
|
Say I have a vector of values from a tokenizing function, `tokenize()`. I know it will only have two values. I want to store the first value in `a` and the second in `b`. In Python, I would do:
```python
a, b = string.split(' ')
```
I could do it as such in an ugly way:
```cpp
vector<string> tokens = tokenize(string);
string a = tokens[0];
string b = tokens[1];
```
But that requires two extra lines of code, an extra variable, and less readability.
How would I do such a thing in C++ in a clean and efficient way?
**EDIT:** I must emphasize that efficiency is very important. Too many answers don't satisfy this. This includes modifying [my tokenization function](https://gist.github.com/CrazyPython/be933ac13243f4db9c2a6155c63ae9b2).
**EDIT 2**: I am using C++11 for reasons outside of my control and I also cannot use Boost.
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38249606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459669/"
] |
If you "know it will only have two values", you could write something like:
```
#include <cassert>
#include <iostream>
#include <string>
#include <tuple>
std::pair<std::string, std::string> tokenize(const std::string &text)
{
    const auto pos(text.find(' '));
    assert(pos != std::string::npos);
    return {text.substr(0, pos), text.substr(pos + 1)};
}
```
Your [code](https://gist.github.com/CrazyPython/7850ff18e447ffbaf336520eb1e92c30) is a great example of the power of the STL, but it's probably a bit slower.
```
int main()
{
    std::string a, b;
    std::tie(a, b) = tokenize("first second");
    std::cout << a << " " << b << '\n';
}
```
Unfortunately without [structured bindings](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0144r2.pdf) (C++17) you have to use the `std::tie` hack and the variables `a` and `b` have to exist.
|
With structured bindings (definitely will be in C++17), you'd be able to write something like:
```
auto [a,b] = as_tuple<2>(tokenize(str));
```
where `as_tuple<N>` is some to-be-declared function that converts a `vector<string>` to a `tuple<string, string, ... N times ...>`, probably throwing if the sizes don't match. You can't destructure a `std::vector` since its size isn't known at compile time. This will necessarily do extra moves of the `string`, so you're losing some efficiency in order to gain some code clarity. Maybe that's ok.
Or maybe you write a `tokenize<N>` that returns a `tuple<string, string, ... N times ...>` directly, avoiding the extra move. In that case:
```
auto [a, b] = tokenize<2>(str);
```
is great.
---
Before C++17, what you have is what you can do. But just make your variables references:
```
std::vector<std::string> tokens = tokenize(str);
std::string& a = tokens[0];
std::string& b = tokens[1];
```
Yeah, it's a couple extra lines of code. That's not the end of the world. It's easy to understand.
|
38,249,606
|
Say I have a vector of values from a tokenizing function, `tokenize()`. I know it will only have two values. I want to store the first value in `a` and the second in `b`. In Python, I would do:
```python
a, b = string.split(' ')
```
I could do it as such in an ugly way:
```cpp
vector<string> tokens = tokenize(string);
string a = tokens[0];
string b = tokens[1];
```
But that requires two extra lines of code, an extra variable, and less readability.
How would I do such a thing in C++ in a clean and efficient way?
**EDIT:** I must emphasize that efficiency is very important. Too many answers don't satisfy this. This includes modifying [my tokenization function](https://gist.github.com/CrazyPython/be933ac13243f4db9c2a6155c63ae9b2).
**EDIT 2**: I am using C++11 for reasons outside of my control and I also cannot use Boost.
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38249606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459669/"
] |
If you "know it will only have two values", you could write something like:
```
#include <cassert>
#include <iostream>
#include <string>
#include <tuple>
std::pair<std::string, std::string> tokenize(const std::string &text)
{
    const auto pos(text.find(' '));
    assert(pos != std::string::npos);
    return {text.substr(0, pos), text.substr(pos + 1)};
}
```
Your [code](https://gist.github.com/CrazyPython/7850ff18e447ffbaf336520eb1e92c30) is a great example of the power of the STL, but it's probably a bit slower.
```
int main()
{
    std::string a, b;
    std::tie(a, b) = tokenize("first second");
    std::cout << a << " " << b << '\n';
}
```
Unfortunately without [structured bindings](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0144r2.pdf) (C++17) you have to use the `std::tie` hack and the variables `a` and `b` have to exist.
|
Ideally you'd rewrite the `tokenize()` function so that it returns a pair of strings rather than a vector:
```
std::pair<std::string, std::string> tokenize(const std::string& str);
```
Or you would pass two references to empty strings to the function as parameters.
```
void tokenize(const std::string& str, std::string& result_1, std::string& result_2);
```
If you have no control over the tokenize function the best you can do is move the strings out of the vector in an optimal way.
```
std::vector<std::string> tokens = tokenize(str);
std::string a = std::move(tokens.front());
std::string b = std::move(tokens.back());
```
|
8,055,132
|
I have the script below which I'm using to send say 10 messages myself<->myself. However, I've noticed that Python really takes a while to do that. Last year I needed a system to send about 200 emails with attachments and text and I implemented it with msmtp + bash. As far as I remember it was much faster.
Moving the loop inside (around the `smtp_serv.sendmail(sender, recepient, msg)` call) yields similar results.
Am I doing something wrong? Surely it can't be slower than bash + msmtp (and I'm only sending a 'hi' message, no attachments).
```
#! /usr/bin/python3.1

def sendmail(recepient, msg):
    import smtplib
    # Parameters
    sender = 'login@gmail.com'
    password = 'password'
    smtpStr = 'smtp.gmail.com'
    smtpPort = 587
    # /Parameters
    smtp_serv = smtplib.SMTP(smtpStr, smtpPort)
    smtp_serv.ehlo_or_helo_if_needed()
    smtp_serv.starttls()
    smtp_serv.ehlo()
    recepientExists = smtp_serv.verify(recepient)
    if recepientExists[0] == 250:
        smtp_serv.login(sender, password)
        try:
            smtp_serv.sendmail(sender, recepient, msg)
        except smtplib.SMTPException:
            print(recepientExists[1])
    else:
        print('Error', recepientExists[0], ':', recepientExists[1])
    smtp_serv.quit()

for i in range(10):
    sendmail('receiver@gmail.com', 'hi')
```
|
2011/11/08
|
[
"https://Stackoverflow.com/questions/8055132",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030287/"
] |
You are opening the connection to the SMTP server and then closing it for each email. It would be more efficient to keep the connection open while sending all of the emails.
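A rough sketch of that restructuring (a hypothetical helper, not tested against a real server; the host, port, and credentials are placeholders):

```python
import smtplib

def send_batch(host, port, sender, password, messages):
    """Log in once, send every (recipient, body) pair, then quit."""
    smtp_serv = smtplib.SMTP(host, port)
    try:
        smtp_serv.ehlo_or_helo_if_needed()
        smtp_serv.starttls()
        smtp_serv.ehlo()
        smtp_serv.login(sender, password)
        for recepient, body in messages:
            smtp_serv.sendmail(sender, recepient, body)
    finally:
        smtp_serv.quit()

# Example usage (placeholder credentials):
# send_batch('smtp.gmail.com', 587, 'login@gmail.com', 'password',
#            [('receiver@gmail.com', 'hi')] * 10)
```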
|
The real answer here is "profile that code!". Time how long the different parts of the code take so you know where most of the time is spent. That way you'll have a real answer without guesswork.
Still, my guess would be that the calls to `smtp_serv.verify(recipient)` are the slow ones. Reasons might be that the server sometimes needs to ask other SMTP servers for info, or that it throttles these operations to stop spammers from using them to harvest email addresses.
Also, try pinging the SMTP server. If the ping-pong takes significant time, I would expect sending each email to take at least that long.
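One lightweight way to do that profiling is a small context manager wrapped around each stage (connect, starttls, verify, login, sendmail) — a sketch, with the label text purely illustrative:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Print how long the wrapped block took.
    start = time.perf_counter()
    try:
        yield
    finally:
        print("%s: %.3f s" % (label, time.perf_counter() - start))

# Usage:
# with timed("connect"):
#     smtp_serv = smtplib.SMTP(smtpStr, smtpPort)
# with timed("verify"):
#     smtp_serv.verify(recepient)
```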
|
8,055,132
|
I have the script below which I'm using to send say 10 messages myself<->myself. However, I've noticed that Python really takes a while to do that. Last year I needed a system to send about 200 emails with attachments and text and I implemented it with msmtp + bash. As far as I remember it was much faster.
Moving the loop inside (around the `smtp_serv.sendmail(sender, recepient, msg)` call) yields similar results.
Am I doing something wrong? Surely it can't be slower than bash + msmtp (and I'm only sending a 'hi' message, no attachments).
```
#! /usr/bin/python3.1

def sendmail(recepient, msg):
    import smtplib
    # Parameters
    sender = 'login@gmail.com'
    password = 'password'
    smtpStr = 'smtp.gmail.com'
    smtpPort = 587
    # /Parameters
    smtp_serv = smtplib.SMTP(smtpStr, smtpPort)
    smtp_serv.ehlo_or_helo_if_needed()
    smtp_serv.starttls()
    smtp_serv.ehlo()
    recepientExists = smtp_serv.verify(recepient)
    if recepientExists[0] == 250:
        smtp_serv.login(sender, password)
        try:
            smtp_serv.sendmail(sender, recepient, msg)
        except smtplib.SMTPException:
            print(recepientExists[1])
    else:
        print('Error', recepientExists[0], ':', recepientExists[1])
    smtp_serv.quit()

for i in range(10):
    sendmail('receiver@gmail.com', 'hi')
```
|
2011/11/08
|
[
"https://Stackoverflow.com/questions/8055132",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030287/"
] |
In this script it takes five times longer to set up the SMTP connection (5 seconds) than to send an e-mail (1 second), so it could make sense to set up a single connection and send several e-mails instead of creating the connection each time:
```
#!/usr/bin/env python3
import smtplib
from contextlib import contextmanager
from datetime import datetime
from email.mime.text import MIMEText
from netrc import netrc
from timeit import default_timer as timer

@contextmanager
def logined(sender, password, smtp_host='smtp.gmail.com', smtp_port=587):
    start = timer(); smtp_serv = smtplib.SMTP(smtp_host, smtp_port, timeout=10)
    try:  # make smtp server and login
        smtp_serv.ehlo_or_helo_if_needed()
        smtp_serv.starttls()
        smtp_serv.ehlo()
        print('smtp setup took (%.2f seconds passed)' % (timer()-start,))
        start = timer(); smtp_serv.login(sender, password)
        print('login took %.2f seconds' % (timer()-start,))
        start = timer(); yield smtp_serv
    finally:
        print('Operations with smtp_serv took %.2f seconds' % (timer()-start,))
        start = timer(); smtp_serv.quit()
        print('Quiting took %.2f seconds' % (timer()-start,))

smtp_host = 'smtp.gmail.com'
login, _, password = netrc().authenticators(smtp_host)
with logined(login, password, smtp_host) as smtp_serv:
    for i in range(10):
        msg = MIMEText('#%d timestamp %s' % (i, datetime.utcnow()))
        msg['Subject'] = 'test #%d' % i
        msg['From'] = login
        msg['To'] = login
        smtp_serv.send_message(msg)
```
### Output
```
smtp setup took (5.43 seconds passed)
login took 0.40 seconds
Operations with smtp_serv took 9.84 seconds
Quiting took 0.05 seconds
```
If your Python version doesn't have `.send_message()` then you could use:
```
smtp_serv.sendmail(msg['From'], [msg['To']], msg.as_string())
```
|
You are opening the connection to the SMTP server and then closing it for each email. It would be more efficient to keep the connection open while sending all of the emails.
|
8,055,132
|
I have the script below which I'm using to send say 10 messages myself<->myself. However, I've noticed that Python really takes a while to do that. Last year I needed a system to send about 200 emails with attachments and text and I implemented it with msmtp + bash. As far as I remember it was much faster.
Moving the loop inside (around the `smtp_serv.sendmail(sender, recepient, msg)` call) yields similar results.
Am I doing something wrong? Surely it can't be slower than bash + msmtp (and I'm only sending a 'hi' message, no attachments).
```
#! /usr/bin/python3.1

def sendmail(recepient, msg):
    import smtplib
    # Parameters
    sender = 'login@gmail.com'
    password = 'password'
    smtpStr = 'smtp.gmail.com'
    smtpPort = 587
    # /Parameters
    smtp_serv = smtplib.SMTP(smtpStr, smtpPort)
    smtp_serv.ehlo_or_helo_if_needed()
    smtp_serv.starttls()
    smtp_serv.ehlo()
    recepientExists = smtp_serv.verify(recepient)
    if recepientExists[0] == 250:
        smtp_serv.login(sender, password)
        try:
            smtp_serv.sendmail(sender, recepient, msg)
        except smtplib.SMTPException:
            print(recepientExists[1])
    else:
        print('Error', recepientExists[0], ':', recepientExists[1])
    smtp_serv.quit()

for i in range(10):
    sendmail('receiver@gmail.com', 'hi')
```
|
2011/11/08
|
[
"https://Stackoverflow.com/questions/8055132",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030287/"
] |
In this script it takes five times longer to set up the SMTP connection (5 seconds) than to send an e-mail (1 second), so it could make sense to set up a single connection and send several e-mails instead of creating the connection each time:
```
#!/usr/bin/env python3
import smtplib
from contextlib import contextmanager
from datetime import datetime
from email.mime.text import MIMEText
from netrc import netrc
from timeit import default_timer as timer

@contextmanager
def logined(sender, password, smtp_host='smtp.gmail.com', smtp_port=587):
    start = timer(); smtp_serv = smtplib.SMTP(smtp_host, smtp_port, timeout=10)
    try:  # make smtp server and login
        smtp_serv.ehlo_or_helo_if_needed()
        smtp_serv.starttls()
        smtp_serv.ehlo()
        print('smtp setup took (%.2f seconds passed)' % (timer()-start,))
        start = timer(); smtp_serv.login(sender, password)
        print('login took %.2f seconds' % (timer()-start,))
        start = timer(); yield smtp_serv
    finally:
        print('Operations with smtp_serv took %.2f seconds' % (timer()-start,))
        start = timer(); smtp_serv.quit()
        print('Quiting took %.2f seconds' % (timer()-start,))

smtp_host = 'smtp.gmail.com'
login, _, password = netrc().authenticators(smtp_host)
with logined(login, password, smtp_host) as smtp_serv:
    for i in range(10):
        msg = MIMEText('#%d timestamp %s' % (i, datetime.utcnow()))
        msg['Subject'] = 'test #%d' % i
        msg['From'] = login
        msg['To'] = login
        smtp_serv.send_message(msg)
```
### Output
```
smtp setup took (5.43 seconds passed)
login took 0.40 seconds
Operations with smtp_serv took 9.84 seconds
Quiting took 0.05 seconds
```
If your Python version doesn't have `.send_message()` then you could use:
```
smtp_serv.sendmail(msg['From'], [msg['To']], msg.as_string())
```
|
The real answer here is "profile that code!". Time how long the different parts of the code take so you know where most of the time is spent. That way you'll have a real answer without guesswork.
Still, my guess would be that the calls to `smtp_serv.verify(recipient)` are the slow ones. Reasons might be that the server sometimes needs to ask other SMTP servers for info, or that it throttles these operations to stop spammers from using them to harvest email addresses.
Also, try pinging the SMTP server. If the ping-pong takes significant time, I would expect sending each email to take at least that long.
|
8,055,132
|
I have the script below which I'm using to send say 10 messages myself<->myself. However, I've noticed that Python really takes a while to do that. Last year I needed a system to send about 200 emails with attachments and text and I implemented it with msmtp + bash. As far as I remember it was much faster.
Moving the loop inside (around the `smtp_serv.sendmail(sender, recepient, msg)` call) yields similar results.
Am I doing something wrong? Surely it can't be slower than bash + msmtp (and I'm only sending a 'hi' message, no attachments).
```
#! /usr/bin/python3.1
def sendmail(recepient, msg):
import smtplib
# Parameters
sender = 'login@gmail.com'
password = 'password'
smtpStr = 'smtp.gmail.com'
smtpPort = 587
# /Parameters
smtp_serv = smtplib.SMTP(smtpStr, smtpPort)
smtp_serv.ehlo_or_helo_if_needed()
smtp_serv.starttls()
smtp_serv.ehlo()
recepientExists = smtp_serv.verify(recepient)
if recepientExists[0] == 250:
smtp_serv.login(sender, password)
try:
smtp_serv.sendmail(sender, recepient, msg)
except smtplib.SMTPException:
print(recepientExists[1])
else:
print('Error', recepientExists[0], ':', recepientExists[1])
smtp_serv.quit()
for i in range(10):
sendmail('receiver@gmail.com', 'hi')
```
|
2011/11/08
|
[
"https://Stackoverflow.com/questions/8055132",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030287/"
] |
Maybe this comes very late, but I think it is relevant to the matter.
I had the same issue recently and realized, by searching around, that the call to connect to the SMTP server may be very time-consuming due to issues with domain name resolution, since the SMTP server performs a reverse lookup to verify the connecting client.
In my case this call was taking around 1 minute:
```
s = smtplib.SMTP(smtp_server)
```
Solution was to fix the domain name resolution on the Linux box. After that, connection became very quick.
Hope this may be of help.
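To check whether name resolution is the culprit before blaming smtplib itself, you can time a lookup directly; `'localhost'` here is just a stand-in for your real SMTP host:

```python
import socket
from timeit import default_timer as timer

# Time the name-resolution step on its own.
start = timer()
infos = socket.getaddrinfo('localhost', 25)
print('resolution took %.3f seconds, %d results' % (timer() - start, len(infos)))
```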
|
The real answer here is "profile that code!". Time how long different parts of the code take so you know where most of the time is spent. That way you'll have a real answer without guesswork.
Still, my guess would be that the calls to `smtp_serv.verify(recipient)` may be the slow ones. Reasons might be that the server sometimes needs to ask other SMTP servers for info, or that it throttles these operations to avoid having spammers use them massively to gather email addresses.
Also, try pinging the SMTP server. If the ping-pong takes significant time, I would expect sending each email would take at least that long.
|
8,055,132
|
I have the script below which I'm using to send say 10 messages myself<->myself. However, I've noticed that Python really takes a while to do that. Last year I needed a system to send about 200 emails with attachments and text and I implemented it with msmtp + bash. As far as I remember it was much faster.
Moving the loop inside (around the `smtp_serv.sendmail(sender, recepient, msg)` call) yields similar results.
Am I doing something wrong? Surely it can't be slower than bash + msmtp (and I'm only sending a 'hi' message, no attachments).
```
#! /usr/bin/python3.1
def sendmail(recepient, msg):
import smtplib
# Parameters
sender = 'login@gmail.com'
password = 'password'
smtpStr = 'smtp.gmail.com'
smtpPort = 587
# /Parameters
smtp_serv = smtplib.SMTP(smtpStr, smtpPort)
smtp_serv.ehlo_or_helo_if_needed()
smtp_serv.starttls()
smtp_serv.ehlo()
recepientExists = smtp_serv.verify(recepient)
if recepientExists[0] == 250:
smtp_serv.login(sender, password)
try:
smtp_serv.sendmail(sender, recepient, msg)
except smtplib.SMTPException:
print(recepientExists[1])
else:
print('Error', recepientExists[0], ':', recepientExists[1])
smtp_serv.quit()
for i in range(10):
sendmail('receiver@gmail.com', 'hi')
```
|
2011/11/08
|
[
"https://Stackoverflow.com/questions/8055132",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030287/"
] |
In this script it takes five times longer to set up the SMTP connection (5 seconds) than to send an e-mail (1 second), so it makes sense to set up a single connection and send several e-mails over it instead of creating a new connection each time:
```
#!/usr/bin/env python3
import smtplib
from contextlib import contextmanager
from datetime import datetime
from email.mime.text import MIMEText
from netrc import netrc
from timeit import default_timer as timer
@contextmanager
def logined(sender, password, smtp_host='smtp.gmail.com', smtp_port=587):
start = timer(); smtp_serv = smtplib.SMTP(smtp_host, smtp_port, timeout=10)
try: # make smtp server and login
smtp_serv.ehlo_or_helo_if_needed()
smtp_serv.starttls()
smtp_serv.ehlo()
print('smtp setup took (%.2f seconds passed)' % (timer()-start,))
start = timer(); smtp_serv.login(sender, password)
print('login took %.2f seconds' % (timer()-start,))
start = timer(); yield smtp_serv
finally:
print('Operations with smtp_serv took %.2f seconds' % (timer()-start,))
start = timer(); smtp_serv.quit()
print('Quiting took %.2f seconds' % (timer()-start,))
smtp_host = 'smtp.gmail.com'
login, _, password = netrc().authenticators(smtp_host)
with logined(login, password, smtp_host) as smtp_serv:
for i in range(10):
msg = MIMEText('#%d timestamp %s' % (i, datetime.utcnow()))
msg['Subject'] = 'test #%d' % i
msg['From'] = login
msg['To'] = login
smtp_serv.send_message(msg)
```
### Output
```
smtp setup took (5.43 seconds passed)
login took 0.40 seconds
Operations with smtp_serv took 9.84 seconds
Quiting took 0.05 seconds
```
If your Python version doesn't have `.send_message()` then you could use:
```
smtp_serv.sendmail(from_addr, to_addrs, msg.as_string())
```
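The keep-one-connection pattern itself can be sketched without any network access, using a dummy object in place of `smtplib.SMTP` (all names here are illustrative):

```python
from contextlib import contextmanager

class DummyServer:
    """Stand-in for smtplib.SMTP: counts connections, records messages."""
    connects = 0

    def __init__(self):
        DummyServer.connects += 1  # the expensive step, done only once
        self.sent = []

    def send_message(self, msg):
        self.sent.append(msg)

    def quit(self):
        pass

@contextmanager
def logined():
    serv = DummyServer()
    try:
        yield serv
    finally:
        serv.quit()

with logined() as smtp_serv:
    for i in range(10):
        smtp_serv.send_message('test #%d' % i)

print(DummyServer.connects)  # one connection served all ten messages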
|
Maybe this comes very late, but I think it is relevant for the matter.
I had the same issue recently and realized, by searching around, that the call to connect the SMTP server may be very time consuming due to issues with domain name resolution, since the SMTP server performs a reverse lookup to verify the connecting client.
In my case this call was taking around 1 minute!:
```
s = smtplib.SMTP(smtp_server)
```
Solution was to fix the domain name resolution on the Linux box. After that, connection became very quick.
Hope this may be of help.
|
34,162,320
|
I want to execute bash command
```
'/bin/echo </verbosegc> >> /tmp/jruby.log'
```
in python using Popen. The code does not raise any exception, but none change is made on the jruby.log after execution. The python code is shown below.
```
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output = process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
```
I also printed out process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has already finished.
|
2015/12/08
|
[
"https://Stackoverflow.com/questions/34162320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/910118/"
] |
The first argument to `subprocess.Popen` is the array `['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']`. When the first argument to `subprocess.Popen` is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting `>> /tmp/jruby.log` to mean "write output to jruby.log".
In order to make the `>>` redirection work in this command, you'll need to pass `command` to `subprocess.Popen()` as a single string with `shell=True`, instead of splitting it into a list. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):
```
command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
```
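A runnable version of the same fix, using a temporary file so it can be tried anywhere; `shell=True` is what makes the string form and the `>>` redirection work:

```python
import os
import subprocess
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'jruby.log')
command = '/bin/echo "</verbosegc>" >> %s' % path

# With shell=True the string is handed to /bin/sh, which performs
# the >> append redirection before running /bin/echo.
process = subprocess.Popen(command, shell=True)
process.wait()

with open(path) as f:
    print(f.read())  # </verbosegc>
```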
|
Have you tried without splitting the command and using `shell=True`? My usual format is:
```
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()
```
|
34,162,320
|
I want to execute bash command
```
'/bin/echo </verbosegc> >> /tmp/jruby.log'
```
in python using Popen. The code does not raise any exception, but none change is made on the jruby.log after execution. The python code is shown below.
```
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output= process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n
```
I also print out the process.pid and then check the pid using ps -ef | grep pid. The result shows that the process pid has been finished.
|
2015/12/08
|
[
"https://Stackoverflow.com/questions/34162320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/910118/"
] |
Consider the following:
```
command = [ 'printf "%s\n" "$1" >>"$2"', # shell script to execute
'', # $0 in shell
'</verbosegc>', # $1
'/tmp/jruby.log' ] # $2
subprocess.Popen(command, shell=True)
```
The first argument is a shell script referring to `$1` and `$2`, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).
---
Of course, don't *actually* do anything like this in Python -- the native primitives for file IO are far more appropriate.
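For comparison, the native-Python equivalent of that whole shell pipeline is a plain append, shown here with a temporary file:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'jruby.log')

# Append mode ('a') is the file-IO equivalent of the shell's >> operator.
with open(path, 'a') as f:
    f.write('</verbosegc>\n')

with open(path) as f:
    print(f.read())  # </verbosegc>
```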
|
Have you tried without splitting the command and using `shell=True`? My usual format is:
```
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()
```
|
34,162,320
|
I want to execute bash command
```
'/bin/echo </verbosegc> >> /tmp/jruby.log'
```
in python using Popen. The code does not raise any exception, but none change is made on the jruby.log after execution. The python code is shown below.
```
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output = process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
```
I also printed out process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has already finished.
|
2015/12/08
|
[
"https://Stackoverflow.com/questions/34162320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/910118/"
] |
Just pass a file object if you want to append the output to a file; you cannot redirect to a file unless you set `shell=True`:
```
command = ['/bin/echo', '</verbosegc>']
with open('/tmp/jruby.log', 'a') as f:
    subprocess.check_call(command, stdout=f, stderr=subprocess.STDOUT)
```
|
Have you tried without splitting the command and using `shell=True`? My usual format is:
```
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()
```
|
34,162,320
|
I want to execute bash command
```
'/bin/echo </verbosegc> >> /tmp/jruby.log'
```
in python using Popen. The code does not raise any exception, but none change is made on the jruby.log after execution. The python code is shown below.
```
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output = process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
```
I also printed out process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has already finished.
|
2015/12/08
|
[
"https://Stackoverflow.com/questions/34162320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/910118/"
] |
The first argument to `subprocess.Popen` is the array `['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']`. When the first argument to `subprocess.Popen` is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting `>> /tmp/jruby.log` to mean "write output to jruby.log".
In order to make the `>>` redirection work in this command, you'll need to pass `command` to `subprocess.Popen()` as a single string with `shell=True`, instead of splitting it into a list. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):
```
command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
```
|
Consider the following:
```
command = [ 'printf "%s\n" "$1" >>"$2"', # shell script to execute
'', # $0 in shell
'</verbosegc>', # $1
'/tmp/jruby.log' ] # $2
subprocess.Popen(command, shell=True)
```
The first argument is a shell script referring to `$1` and `$2`, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).
---
Of course, don't *actually* do anything like this in Python -- the native primitives for file IO are far more appropriate.
|
34,162,320
|
I want to execute bash command
```
'/bin/echo </verbosegc> >> /tmp/jruby.log'
```
in python using Popen. The code does not raise any exception, but none change is made on the jruby.log after execution. The python code is shown below.
```
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output = process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
```
I also printed out process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has already finished.
|
2015/12/08
|
[
"https://Stackoverflow.com/questions/34162320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/910118/"
] |
Just pass a file object if you want to append the output to a file; you cannot redirect to a file unless you set `shell=True`:
```
command = ['/bin/echo', '</verbosegc>']
with open('/tmp/jruby.log', 'a') as f:
    subprocess.check_call(command, stdout=f, stderr=subprocess.STDOUT)
```
|
Consider the following:
```
command = [ 'printf "%s\n" "$1" >>"$2"', # shell script to execute
'', # $0 in shell
'</verbosegc>', # $1
'/tmp/jruby.log' ] # $2
subprocess.Popen(command, shell=True)
```
The first argument is a shell script referring to `$1` and `$2`, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).
---
Of course, don't *actually* do anything like this in Python -- the native primitives for file IO are far more appropriate.
|
46,668,481
|
I'm trying to use PyQt\_Fit. I installed it with `pip install pyqt_fit`, but when I import it, it does not work and shows me this message:
```
----------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-8-36ec621967a7> in <module>()
----> 1 import pyqt_fit
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/__init__.py in <module>()
12 'functions', 'residuals', 'CurveFitting']
13
---> 14 from . import functions
15 from . import residuals
16 from .curve_fitting import CurveFitting
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/functions/__init__.py in <module>()
4
5 from ..utils import namedtuple
----> 6 from .. import loader
7 import os
8 from path import path
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/loader.py in <module>()
1 from __future__ import print_function, absolute_import
2 import inspect
----> 3 from path import path
4 import imp
5 import sys
ImportError: cannot import name path
```
I'm using Ubuntu 16.04.
How can I fix it ?
|
2017/10/10
|
[
"https://Stackoverflow.com/questions/46668481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8752769/"
] |
I faced the same problem as you. I installed the pyqt\_fit package successfully with
```
sudo pip install git+https://github.com/Multiplicom/pyqt-fit.git
```
This installs the latest path.py together with pyqt\_fit.
Then, when I imported the package, I got the following error:
```
import pyqt_fit
Traceback (most recent call last):
File "<ipython-input-253-36ec621967a7>", line 1, in <module>
import pyqt_fit
File "/Users/mengxinpan/anaconda3/lib/python3.6/site-packages/pyqt_fit/__init__.py", line 14, in <module>
from . import functions, residuals
File "/Users/mengxinpan/anaconda3/lib/python3.6/site-packages/pyqt_fit/residuals/__init__.py", line 7, in <module>
from path import path
ImportError: cannot import name 'path'
```
The error is caused by the `path.path` name having been renamed to `path.Path` in the latest version of the path.py package.
So my solution was to open every file in the pyqt\_fit folder, like `site-packages/pyqt_fit/residuals/__init__.py`, and change every
```
from path import path
```
to
```
from path import Path as path
```
After that I could import pyqt\_fit successfully.
I also tried installing the old version of path.py with
```
sudo pip install -I path.py==7.7.1
```
But it still did not work.
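Rather than editing every installed file, the usual fix for this kind of rename is a guarded import at the top of the affected modules. The sketch below demonstrates the pattern with a stdlib name so it is runnable; `old_join` is a made-up placeholder for a removed name:

```python
# The shim pyqt_fit's modules would need:
#     try:
#         from path import path          # old path.py API
#     except ImportError:
#         from path import Path as path  # new path.py API
#
# The same pattern, demonstrated with a stdlib module:
try:
    from os.path import old_join as joiner  # hypothetical removed name
except ImportError:
    from os.path import join as joiner      # current name

print(joiner('a', 'b'))
```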
|
This seems to have been happening for quite some time; check this recent issue report [on the repo](https://github.com/sergeyfarin/pyqt-fit/issues/5).
I installed the package and tested it myself, and I got the same problem. The solution provided on the possible duplicate seems to have fixed it.
You might not have pip3 installed, so try with:
```
sudo pip install -I path.py==7.7.1
```
Edit:
You can also try installing the package directly from [this forked repo](https://github.com/Multiplicom/pyqt-fit/pull/1) that seems to have fixed it:
```
sudo pip install git+https://github.com/Multiplicom/pyqt-fit.git
```
|
46,668,481
|
I'm trying to use PyQt\_Fit. I installed it with `pip install pyqt_fit`, but when I import it, it does not work and shows me this message:
```
----------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-8-36ec621967a7> in <module>()
----> 1 import pyqt_fit
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/__init__.py in <module>()
12 'functions', 'residuals', 'CurveFitting']
13
---> 14 from . import functions
15 from . import residuals
16 from .curve_fitting import CurveFitting
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/functions/__init__.py in <module>()
4
5 from ..utils import namedtuple
----> 6 from .. import loader
7 import os
8 from path import path
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/loader.py in <module>()
1 from __future__ import print_function, absolute_import
2 import inspect
----> 3 from path import path
4 import imp
5 import sys
ImportError: cannot import name path
```
I'm using Ubuntu 16.04.
How can I fix it ?
|
2017/10/10
|
[
"https://Stackoverflow.com/questions/46668481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8752769/"
] |
Although people are suggesting `path.py==7.7.1`, it worked with `path.py==7.1` for me:
```
sudo pip uninstall -y path.py
sudo pip install -I path.py==7.1
```
I'm also using Ubuntu 16.04.
|
This seems to have been happening for quite some time; check this recent issue report [on the repo](https://github.com/sergeyfarin/pyqt-fit/issues/5).
I installed the package and tested it myself, and I got the same problem. The solution provided on the possible duplicate seems to have fixed it.
You might not have pip3 installed, so try with:
```
sudo pip install -I path.py==7.7.1
```
Edit:
You can also try installing the package directly from [this forked repo](https://github.com/Multiplicom/pyqt-fit/pull/1) that seems to have fixed it:
```
sudo pip install git+https://github.com/Multiplicom/pyqt-fit.git
```
|
72,179,492
|
Recently, I updated to Ubuntu 22. I am using python 3.10.
After installing matplotlib and other required libraries for python, I am trying to plot some graphs.
Every time I run my code I get this error.
I have tried all the solutions given on Stack Overflow and Google, but no luck.
This is the error I am getting:
```
File ~/.local/lib/python3.10/site-packages/prettyplotlib/_eventplot.py:3, in <module>
1 __author__ = 'jgosmann'
----> 3 from matplotlib.cbook import iterable
5 from prettyplotlib.utils import remove_chartjunk, maybe_get_ax
6 from prettyplotlib.colors import set2
ImportError: cannot import name 'iterable' from 'matplotlib.cbook'
```
When I imported matplotlib, there is no issue.
How can I get rid of this error?
Any help or suggestion would be appreciated.
Thank you
|
2022/05/10
|
[
"https://Stackoverflow.com/questions/72179492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16780162/"
] |
```
md$Children[which(x >= 8 & y < 60)] = NA
```
|
@AbdurRohman's answer is good.
You could use
```r
md <- within(md,
Children[Children >= 8 & Age <60] <- NA
)
```
for slightly clearer code. (You should definitely tell your instructor you got help on Stack Overflow.)
|
9,794,616
|
I'm using the following code which will generate a wav file which contains a tone at 440 Hz lasting for 2 seconds.
```
from scipy.io.wavfile import write
from numpy import linspace,sin,pi,int16
def note(freq, len, amp=1, rate=44100):
t = linspace(0,len,len*rate)
data = sin(2*pi*freq*t)*amp
return data.astype(int16) # two byte integers
tone = note(440,2,amp=10000)
write('440hzAtone.wav',44100,tone) # writing the sound to a file
```
I was wondering if I could modify the code, basing it off the note method, in order to actually generate a tune with python.
I tried adding two different tones, and as expected the two tones play simultaneously, creating something which sounds a bit like a dial tone:
```
tone1 = note(440,2,amp=10000)
tone2 = note(480,2,amp=10000)
tone = tone1+tone2
write('440hzAtone.wav',44100,tone)
```
I also tried multiplying the two tones, but this just generates static.
I also tried genreating tones of different lengths and adding them, however this causes an exception to be raised, like so:
```
tone1 = note(440,2,amp=10000)
tone2 = note(480,1,amp=10000)
tone = tone1+tone2
write('440hzAtone.wav',44100,tone)
```
causes:
```
ValueError: operands could not be broadcast together with shapes (88200) (44100)
```
So, I was wondering - how can I concatenate different tones like this to make a tune?
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9794616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/840973/"
] |
You can do this using numpy.concatenate (as already posted). You also need to specify the concatenation axis; for the 1-D arrays returned by `note`, that is axis 0 (the default). Using a very low rate to illustrate:
```
from scipy.io.wavfile import write
from numpy import linspace,sin,pi,int16,concatenate
def note(freq, len, amp=1, rate=5):
    t = linspace(0,len,len*rate)
    data = sin(2*pi*freq*t)*amp
    return data.astype(int16) # two byte integers
tone1 = note(440,2,amp=10)
tone2 = note(140,2,amp=10)
print tone1
print tone2
print concatenate((tone2,tone1),axis=0)
#output:
[ 0 -9 -3 8 6 -6 -8 3 9 0]
[ 0 6 9 8 3 -3 -8 -9 -6 0]
[ 0 6 9 8 3 -3 -8 -9 -6 0 0 -9 -3 8 6 -6 -8 3 9 0]
```
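A minimal runnable check of the same concatenation, using hard-coded sample values in place of `note()`'s output; for these 1-D arrays the concatenation axis is 0 (the default):

```python
import numpy as np

tone1 = np.array([0, -9, -3, 8, 6, -6, -8, 3, 9, 0], dtype=np.int16)
tone2 = np.array([0, 6, 9, 8, 3, -3, -8, -9, -6, 0], dtype=np.int16)

# axis 0 (the default) joins the samples end to end: tone2 plays first.
tune = np.concatenate((tone2, tone1))
print(tune.shape)  # (20,)
```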
|
`numpy.linspace` creates a numpy array. To concatenate the tones, you'd want to concatenate the corresponding arrays. For this, a bit of Googling indicates that Numpy provides the helpfully named [`numpy.concatenate` function](http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html).
|
5,151,898
|
I realized there is a memory leak in one Python script. It occupied around 25 MB at first, and after 15 days it is more than 500 MB.
I have tried many different approaches, but have not been able to get to the root of the problem, as I am a Python newbie...
Finally, I got this following
```
objgraph.show_most_common_types(limit=20)
tuple 37674
function 9156
dict 3935
list 1646
wrapper_descriptor 1468
weakref 888
builtin_function_or_method 874
classobj 684
method_descriptor 551
type 533
instance 483
Kind 470
getset_descriptor 404
ImmNodeSet 362
module 342
IdentitySetMulti 333
PartRow 331
member_descriptor 264
cell 185
FontEntry 170
```
I set a break point, and after every iteration this is what is happening...
```
objgraph.show_growth()
tuple 37674 +10
```
What is the best way to proceed ?
```
(Pdb) c
(Pdb) objgraph.show_growth()
tuple 37684 +10
```
I guess printing out all the tuples and cross-checking which 10 tuples get added each time will give me some clue? Kindly let me know how to do that.
Or is there any other way to find this memory leak? I am using Python 2.4.3, and because of many other product dependencies I unfortunately cannot / should not upgrade.
|
2011/03/01
|
[
"https://Stackoverflow.com/questions/5151898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/379997/"
] |
Am I reading correctly that the same script is running for 15 days non-stop?
For such long-running processes periodic restart is a good practice and it's much easier to do than eliminating all memory leaks.
*Update*: Look at [this answer](https://stackoverflow.com/questions/1641231/python-working-around-memory-leaks/1641280#1641280), it seems to do exactly what you need -- print all newly added objects that were not garbage collected.
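On a modern Python, a rough version of objgraph's "what grew since last time" check can be built from the `gc` module alone; this is only a sketch (on the question's Python 2.4 you would use a plain dict instead of `Counter`):

```python
import gc
from collections import Counter

def type_counts():
    """Count live, GC-tracked objects by type name."""
    return Counter(type(o).__name__ for o in gc.get_objects())

before = type_counts()
leak = [([],) for _ in range(100)]  # simulate 100 leaked tuples
after = type_counts()

growth = after - before  # only the types whose counts went up
print(growth['tuple'] >= 100)  # the simulated leak shows up
```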
|
My first thought is that you are probably creating new objects in your script and accumulating them in some sort of global list. It is usually easier to go over your script and make sure that you are not generating any persistent data than to debug the garbage. I think the utility you are using, objgraph, also allows you to print a garbage object with the number of references to it. You could try that.
|
10,928,313
|
Is anyone familiar with DRAKON?
I quite like the idea of the DRAKON visual editor and have been playing with it using Python -- more info: <http://drakon-editor.sourceforge.net/python/python.html>
The only thing I've had a problem with so far is python's try: except: exceptions. The only way I've attempted it is to use branches and then define try: and except: as separate actions below the branch. The only thing with this is that DRAKON doesn't pick up the try: and automatically indent the exception code afterwards.
Is there any way to handle try: except: in a visual way in DRAKON, or perhaps you've heard of another similar visual editor project for python?
Thanks.
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10928313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/507286/"
] |
You could put the whole "try: except:" construct inside one "Action" icon like this:

Both spaces and tabs can be used for indentation inside an icon.
|
There are limitations in DRAKON since it is a code generator, but what you can do is refactor the code as much as possible and put it inside an action block:
```
try:
function_1()
function_2()
except:
function_3()
```
DRAKON works best if you follow the suggested rules (skewer, happy route, branching, etc.).
Once you construct an algorithm based on these, it can help you solve complex problems fast.
Hope that helps.
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
This could be because your CSV file has embedded single or double quotes. If your CSV file is tab-delimited try opening it as:
```
c = csv.reader(f, delimiter='\t', quoting=csv.QUOTE_NONE)
```
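A runnable sketch of that reader on an in-memory tab-delimited line containing a stray double quote:

```python
import csv
import io

data = io.StringIO('first\tval "with stray quote\tlast\n')

# QUOTE_NONE tells the reader to treat quote characters as ordinary data.
reader = csv.reader(data, delimiter='\t', quoting=csv.QUOTE_NONE)
row = next(reader)
print(row)  # ['first', 'val "with stray quote', 'last']
```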
|
You can use the `error_bad_lines` option of `pd.read_csv` to skip these lines.
```py
import pandas as pd
data_df = pd.read_csv('data.csv', error_bad_lines=False)
```
This works because the "bad lines", as defined in pandas, include lines in which one of the fields exceeds the csv limit.
Be careful: this solution is valid only when the fields in your csv file *shouldn't* be this long.
If you expect to have big field sizes, this will throw away your data.
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
This could be because your CSV file has embedded single or double quotes. If your CSV file is tab-delimited try opening it as:
```
c = csv.reader(f, delimiter='\t', quoting=csv.QUOTE_NONE)
```
|
I just had this happen to me on a 'plain' CSV file. Some people might call it an invalidly formatted file: no escape characters, no double quotes, and the delimiter was a semicolon.
A sample line from this file would look like this:
>
> First cell; Second " Cell with one double quote and leading
> space;'Partially quoted' cell;Last cell
>
>
>
the stray double quote in the second cell would throw the parser off its rails. What worked was:
```
csv.reader(inputfile, delimiter=';', doublequote=False, quotechar=None, quoting=csv.QUOTE_NONE)
```
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
First check the current limit:
```
csv.field_size_limit()
```
Out[20]: 131072
Then increase the limit by adding this to your code:
```
csv.field_size_limit(100000000)
```
Check the limit again:
```
csv.field_size_limit()
```
Out[22]: 100000000
Now you won't get the error "\_csv.Error: field larger than field limit (131072)".
|
Sometimes a row contains a field with a double quote in it. When the csv reader tries to read such a row, it cannot find the end of the field and raises this error.
The solution is below:
```
reader = csv.reader(cf, quoting=csv.QUOTE_MINIMAL)
```
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
The csv file might contain very huge fields, therefore increase the `field_size_limit`:
```
import sys
import csv
csv.field_size_limit(sys.maxsize)
```
`sys.maxsize` works for Python 2.x and 3.x. `sys.maxint` would only work with Python 2.x ([SO: what-is-sys-maxint-in-python-3](https://stackoverflow.com/questions/13795758/what-is-sys-maxint-in-python-3))
### Update
As Geoff pointed out, the code above might result in the following error: `OverflowError: Python int too large to convert to C long`.
To circumvent this, you could use the following *quick and dirty* code (which should work on every system with Python 2 and Python 3):
```
import sys
import csv
maxInt = sys.maxsize
while True:
# decrease the maxInt value by factor 10
# as long as the OverflowError occurs.
try:
csv.field_size_limit(maxInt)
break
except OverflowError:
maxInt = int(maxInt/10)
```
|
I just had this happen to me on a 'plain' CSV file. Some people might call it an invalid formatted file. No escape characters, no double quotes and delimiter was a semicolon.
A sample line from this file would look like this:
>
> First cell; Second " Cell with one double quote and leading
> space;'Partially quoted' cell;Last cell
>
>
>
The single double quote in the second cell would throw the parser off its rails. What worked was:
```
csv.reader(inputfile, delimiter=';', doublequote=False, quotechar=None, quoting=csv.QUOTE_NONE)
```
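A minimal, self-contained version of that call, run on the sample line from above (note that `doublequote` and `quotechar` are passed as `False`/`None`, not as strings, and `quotechar=None` is allowed because quoting is disabled):

```python
import csv
import io

# The 'invalid' sample line: semicolon-delimited, with a stray double quote
# and single quotes that should all be taken literally.
raw = ('First cell; Second " Cell with one double quote and leading space;'
       "'Partially quoted' cell;Last cell\n")

reader = csv.reader(io.StringIO(raw), delimiter=';',
                    doublequote=False, quotechar=None,
                    quoting=csv.QUOTE_NONE)
row = next(reader)
print(len(row))  # 4
print(row[-1])   # Last cell
```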
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
Below is how to check the current limit
```
csv.field_size_limit()
```
Out[20]: 131072
Below is how to increase the limit; add this to your code
```
csv.field_size_limit(100000000)
```
Try checking the limit again
```
csv.field_size_limit()
```
Out[22]: 100000000
Now you won't get the error "\_csv.Error: field larger than field limit (131072)"
|
You can use the `error_bad_lines` option of `pd.read_csv` to skip these lines.
```py
import pandas as pd
data_df = pd.read_csv('data.csv', error_bad_lines=False)
```
This works because the "bad lines", as defined in pandas, include lines in which one of the fields exceeds the csv field limit.
Be careful that this solution is valid only when the fields in your csv file *shouldn't* be this long.
If you expect to have big field sizes, this will throw away your data.
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
Below is how to check the current limit
```
csv.field_size_limit()
```
Out[20]: 131072
Below is how to increase the limit; add this to your code
```
csv.field_size_limit(100000000)
```
Try checking the limit again
```
csv.field_size_limit()
```
Out[22]: 100000000
Now you won't get the error "\_csv.Error: field larger than field limit (131072)"
|
Find the cqlshrc file usually placed in .cassandra directory.
In that file append,
```
[csv]
field_size_limit = 1000000000
```
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
The csv file might contain very huge fields, therefore increase the `field_size_limit`:
```
import sys
import csv
csv.field_size_limit(sys.maxsize)
```
`sys.maxsize` works for Python 2.x and 3.x. `sys.maxint` would only work with Python 2.x ([SO: what-is-sys-maxint-in-python-3](https://stackoverflow.com/questions/13795758/what-is-sys-maxint-in-python-3))
### Update
As Geoff pointed out, the code above might result in the following error: `OverflowError: Python int too large to convert to C long`.
To circumvent this, you could use the following *quick and dirty* code (which should work on every system with Python 2 and Python 3):
```
import sys
import csv
maxInt = sys.maxsize
while True:
# decrease the maxInt value by factor 10
# as long as the OverflowError occurs.
try:
csv.field_size_limit(maxInt)
break
except OverflowError:
maxInt = int(maxInt/10)
```
|
*.csv* field sizes are controlled via [[Python.Docs]: csv.field\_size\_limit([new\_limit])](https://docs.python.org/library/csv.html#csv.field_size_limit) (**emphasis** is mine):
>
> Returns the current maximum field size allowed by the parser. **If *new\_limit* is given, this becomes the new limit**.
>
>
>
It is set by default to ***131072*** or ***0x20000*** (*128k*), which should be enough for any decent *.csv*:
>
>
> ```py
> >>> import csv
> >>>
> >>>
> >>> limit0 = csv.field_size_limit()
> >>> limit0
> 131072
> >>> "0x{0:016X}".format(limit0)
> '0x0000000000020000'
>
> ```
>
>
However, when dealing with a *.csv* file (**with the correct quoting and delimiter**) having (at least) one field longer than this size, the error pops up.
To get rid of the error, the size limit should be increased (to avoid any worries, the maximum possible value is attempted).
Behind the scenes (check [[GitHub]: python/cpython - (master) cpython/Modules/\_csv.c](https://github.com/python/cpython/blob/master/Modules/_csv.c) for implementation details), the variable that holds this value is a *C **long*** ([[Wikipedia]: C data types](https://en.wikipedia.org/wiki/C_data_types)), whose size **varies depending on *CPU* architecture and *OS*** (the *ILP* data model). The classical difference: for a ***064bit*** *OS* (and *Python* build), the *long* type size (**in bits**) is:
* *Nix*: ***64***
* *Win*: ***32***
When attempting to set it, the new value is checked to be in the *long* boundaries, that's why in some cases another exception pops up (because *sys.maxsize* is typically *064bit* wide - encountered on *Win*):
>
>
> ```py
> >>> import sys, ctypes as ct
> >>>
> >>>
> >>> "v{:d}.{:d}.{:d}".format(*sys.version_info[:3]), sys.platform, sys.maxsize, ct.sizeof(ct.c_void_p) * 8, ct.sizeof(ct.c_long) * 8
> ('v3.9.9', 'win32', 9223372036854775807, 64, 32)
> >>>
> >>> csv.field_size_limit(sys.maxsize)
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> OverflowError: Python int too large to convert to C long
>
> ```
>
>
To avoid running into this problem, set the (maximum possible) limit (***LONG\_MAX***), **using an artifice** (thanks to [[Python.Docs]: ctypes - A foreign function library for Python](https://docs.python.org/library/ctypes.html#module-ctypes)). It should work on *Python 3* and *Python 2*, on any *CPU* / *OS*.
>
>
> ```py
> >>> csv.field_size_limit(int(ct.c_ulong(-1).value // 2))
> 131072
> >>> limit1 = csv.field_size_limit()
> >>> limit1
> 2147483647
> >>> "0x{0:016X}".format(limit1)
> '0x000000007FFFFFFF'
>
> ```
>
>
*064bit* *Python* on a *Nix* like *OS*:
>
>
> ```py
> >>> import sys, csv, ctypes as ct
> >>>
> >>>
> >>> "v{:d}.{:d}.{:d}".format(*sys.version_info[:3]), sys.platform, sys.maxsize, ct.sizeof(ct.c_void_p) * 8, ct.sizeof(ct.c_long) * 8
> ('v3.8.10', 'linux', 9223372036854775807, 64, 64)
> >>>
> >>> csv.field_size_limit()
> 131072
> >>>
> >>> csv.field_size_limit(int(ct.c_ulong(-1).value // 2))
> 131072
> >>> limit1 = csv.field_size_limit()
> >>> limit1
> 9223372036854775807
> >>> "0x{0:016X}".format(limit1)
> '0x7FFFFFFFFFFFFFFF'
>
> ```
>
>
For *032bit* *Python*, things should run smoothly without the artifice (as both *sys.maxsize* and *LONG\_MAX* are *032bit* wide).
If this maximum value is still not enough, then the *.csv* would need manual intervention in order to be processed from *Python*.
Check the following resources for more details on:
* Playing with *C* types boundaries from *Python*: [[SO]: Maximum and minimum value of C types integers from Python (@CristiFati's answer)](https://stackoverflow.com/a/52485502/4788546)
* *Python* *064bit* *vs* *032bit* differences: [[SO]: How do I determine if my python shell is executing in 32bit or 64bit mode on OS X? (@CristiFati's answer)](https://stackoverflow.com/a/50053286/4788546)
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
Sometimes a row contains a field with a double quote in it. When the csv reader tries to read such a row, it does not detect the end of the field and raises this error.
A possible fix is below:
```
reader = csv.reader(cf, quoting=csv.QUOTE_MINIMAL)
```
|
Find the cqlshrc file usually placed in .cassandra directory.
In that file append,
```
[csv]
field_size_limit = 1000000000
```
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
The csv file might contain very huge fields, therefore increase the `field_size_limit`:
```
import sys
import csv
csv.field_size_limit(sys.maxsize)
```
`sys.maxsize` works for Python 2.x and 3.x. `sys.maxint` would only work with Python 2.x ([SO: what-is-sys-maxint-in-python-3](https://stackoverflow.com/questions/13795758/what-is-sys-maxint-in-python-3))
### Update
As Geoff pointed out, the code above might result in the following error: `OverflowError: Python int too large to convert to C long`.
To circumvent this, you could use the following *quick and dirty* code (which should work on every system with Python 2 and Python 3):
```
import sys
import csv
maxInt = sys.maxsize
while True:
# decrease the maxInt value by factor 10
# as long as the OverflowError occurs.
try:
csv.field_size_limit(maxInt)
break
except OverflowError:
maxInt = int(maxInt/10)
```
|
Sometimes a row contains a field with a double quote in it. When the csv reader tries to read such a row, it does not detect the end of the field and raises this error.
A possible fix is below:
```
reader = csv.reader(cf, quoting=csv.QUOTE_MINIMAL)
```
|
15,063,936
|
I have a script reading in a csv file with very huge fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
*.csv* field sizes are controlled via [[Python.Docs]: csv.field\_size\_limit([new\_limit])](https://docs.python.org/library/csv.html#csv.field_size_limit) (**emphasis** is mine):
>
> Returns the current maximum field size allowed by the parser. **If *new\_limit* is given, this becomes the new limit**.
>
>
>
It is set by default to ***131072*** or ***0x20000*** (*128k*), which should be enough for any decent *.csv*:
>
>
> ```py
> >>> import csv
> >>>
> >>>
> >>> limit0 = csv.field_size_limit()
> >>> limit0
> 131072
> >>> "0x{0:016X}".format(limit0)
> '0x0000000000020000'
>
> ```
>
>
However, when dealing with a *.csv* file (**with the correct quoting and delimiter**) having (at least) one field longer than this size, the error pops up.
To get rid of the error, the size limit should be increased (to avoid any worries, the maximum possible value is attempted).
Behind the scenes (check [[GitHub]: python/cpython - (master) cpython/Modules/\_csv.c](https://github.com/python/cpython/blob/master/Modules/_csv.c) for implementation details), the variable that holds this value is a *C **long*** ([[Wikipedia]: C data types](https://en.wikipedia.org/wiki/C_data_types)), whose size **varies depending on *CPU* architecture and *OS*** (the *ILP* data model). The classical difference: for a ***064bit*** *OS* (and *Python* build), the *long* type size (**in bits**) is:
* *Nix*: ***64***
* *Win*: ***32***
When attempting to set it, the new value is checked to be in the *long* boundaries, that's why in some cases another exception pops up (because *sys.maxsize* is typically *064bit* wide - encountered on *Win*):
>
>
> ```py
> >>> import sys, ctypes as ct
> >>>
> >>>
> >>> "v{:d}.{:d}.{:d}".format(*sys.version_info[:3]), sys.platform, sys.maxsize, ct.sizeof(ct.c_void_p) * 8, ct.sizeof(ct.c_long) * 8
> ('v3.9.9', 'win32', 9223372036854775807, 64, 32)
> >>>
> >>> csv.field_size_limit(sys.maxsize)
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> OverflowError: Python int too large to convert to C long
>
> ```
>
>
To avoid running into this problem, set the (maximum possible) limit (***LONG\_MAX***), **using an artifice** (thanks to [[Python.Docs]: ctypes - A foreign function library for Python](https://docs.python.org/library/ctypes.html#module-ctypes)). It should work on *Python 3* and *Python 2*, on any *CPU* / *OS*.
>
>
> ```py
> >>> csv.field_size_limit(int(ct.c_ulong(-1).value // 2))
> 131072
> >>> limit1 = csv.field_size_limit()
> >>> limit1
> 2147483647
> >>> "0x{0:016X}".format(limit1)
> '0x000000007FFFFFFF'
>
> ```
>
>
*064bit* *Python* on a *Nix* like *OS*:
>
>
> ```py
> >>> import sys, csv, ctypes as ct
> >>>
> >>>
> >>> "v{:d}.{:d}.{:d}".format(*sys.version_info[:3]), sys.platform, sys.maxsize, ct.sizeof(ct.c_void_p) * 8, ct.sizeof(ct.c_long) * 8
> ('v3.8.10', 'linux', 9223372036854775807, 64, 64)
> >>>
> >>> csv.field_size_limit()
> 131072
> >>>
> >>> csv.field_size_limit(int(ct.c_ulong(-1).value // 2))
> 131072
> >>> limit1 = csv.field_size_limit()
> >>> limit1
> 9223372036854775807
> >>> "0x{0:016X}".format(limit1)
> '0x7FFFFFFFFFFFFFFF'
>
> ```
>
>
For *032bit* *Python*, things should run smoothly without the artifice (as both *sys.maxsize* and *LONG\_MAX* are *032bit* wide).
If this maximum value is still not enough, then the *.csv* would need manual intervention in order to be processed from *Python*.
Check the following resources for more details on:
* Playing with *C* types boundaries from *Python*: [[SO]: Maximum and minimum value of C types integers from Python (@CristiFati's answer)](https://stackoverflow.com/a/52485502/4788546)
* *Python* *064bit* *vs* *032bit* differences: [[SO]: How do I determine if my python shell is executing in 32bit or 64bit mode on OS X? (@CristiFati's answer)](https://stackoverflow.com/a/50053286/4788546)
|
You can use the `error_bad_lines` option of `pd.read_csv` to skip these lines.
```py
import pandas as pd
data_df = pd.read_csv('data.csv', error_bad_lines=False)
```
This works because the "bad lines", as defined in pandas, include lines in which one of the fields exceeds the csv field limit.
Be careful that this solution is valid only when the fields in your csv file *shouldn't* be this long.
If you expect to have big field sizes, this will throw away your data.
|
48,206,553
|
I am trying to make a view that i can use in multiple apps with different redirect urls:
Parent function:
```
def create_order(request, redirect_url):
data = dict()
if request.method == 'POST':
form = OrderForm(request.POST)
if form.is_valid():
form.save()
return redirect(redirect_url)
else:
form = OrderForm()
data['form'] = form
return render(request, 'core/order_document.html', data)
```
Child function:
```
@login_required()
def admin_order_document(request):
redirect_url = 'administrator:order_waiting_list'
return create_order(request, redirect_url)
```
When I try to call the admin\_order\_document function I get:
```
Traceback (most recent call last):
File "/home/project/venv/lib/python3.5/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/home/project/venv/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/project/venv/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
TypeError: create_order() missing 1 required positional argument: 'redirect_url'
```
If I remove redirect\_url from both functions and manually add 'administrator:order\_waiting\_list' to redirect() it works, but I need to redirect to multiple urls. So, why am I getting this error?
|
2018/01/11
|
[
"https://Stackoverflow.com/questions/48206553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7161215/"
] |
```
url(r'^orders/create/', views.create_order, name='create_order')
```
This clearly is not going to work, since `create_order` requires `redirect_url` but there is no `redirect_url` kwarg in the regex `r'^orders/create/'`.
Perhaps you want to use the `admin_order_document` view here instead:
```
url(r'^orders/create/', views.admin_order_document, name='create_order')
```
Note you should add a trailing dollar, i.e. `r'^orders/create/$'` unless you want to match `orders/create/something-else` as well as `orders/create/`.
|
If you didn't change the default url
```
urlpatterns = [
url(r'^admin/', admin_site.urls),
...
]
```
of your admin site, you need to call your function like this:
```
@login_required()
def admin_order_document(request):
redirect_url = 'admin:order_waiting_list'
return create_order(request, redirect_url)
```
That should fix your problem.
|
35,438,785
|
I have a list of numbers and I want to make rows and columns out of the list.
I can brute force it and do the following below in Python 2.7.
```
l = [1,2,3,4,5,6,7,8,9]
r1 = [l[0], l[1], l[2]]
r2 = [l[3], l[4], l[5]]
r3 = [l[6], l[7], l[8]]
c1 = [l[0], l[3], l[6]]
```
But I can't seem to create a function in python to make it work. Is my syntax wrong?
```
def make_row(r, li, arg1, arg2, arg3):
r = [li[arg1], li[arg2], li[arg3]]
make_row(r1, l, 0, 1, 2)
make_row(r2, l, 3, 4, 5)
make_row(r3, l, 6, 7, 8)
```
Can anybody tell me what I'm doing wrong? The `make_row` function does not seem to work correctly.
|
2016/02/16
|
[
"https://Stackoverflow.com/questions/35438785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5936229/"
] |
Your error is a misunderstanding in [how Python passes arguments](http://robertheaton.com/2014/02/09/pythons-pass-by-object-reference-as-explained-by-philip-k-dick/). `r` in the function `make_row` is just a name. When you assign into it, it simply points that name to something new, in the context of your function, leaving the old object, and old name outside the function, unchanged.
If you return the result of `make_row` you can see it generates the correct output, it just does not save it into the variables as you are thinking it would.
---
### However there are easier (and more Pythonic) ways to do what you are trying to do:
This will return a list of your **rows**:
```
[l[i:i+3] for i in xrange(0, len(l), 3)]
```
And this is the equivalent for **columns**:
```
[l[i::3] for i in xrange(0, 3)]
```
If you want rows/columns of a different length, just substitute that number in place of the 3's in these statements.
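For example, run against the list from the question (using `range`, the Python 3 spelling of `xrange`):

```python
l = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# Rows: consecutive slices of length 3.
rows = [l[i:i + 3] for i in range(0, len(l), 3)]
# Columns: every 3rd element, starting from offsets 0, 1, 2.
cols = [l[i::3] for i in range(0, 3)]

print(rows)  # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(cols)  # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```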
|
Your function `make_row` works as far as I can tell (I have not tested it), but you need to `return r`.
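A minimal sketch of that fix: drop the unused `r` parameter and return the newly built list instead of only assigning it to a local name:

```python
def make_row(li, arg1, arg2, arg3):
    # Build the row and hand it back to the caller.
    r = [li[arg1], li[arg2], li[arg3]]
    return r

l = [1, 2, 3, 4, 5, 6, 7, 8, 9]
r1 = make_row(l, 0, 1, 2)
print(r1)  # [1, 2, 3]
```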
|
35,438,785
|
I have a list of numbers and I want to make rows and columns out of the list.
I can brute force it and do the following below in Python 2.7.
```
l = [1,2,3,4,5,6,7,8,9]
r1 = [l[0], l[1], l[2]]
r2 = [l[3], l[4], l[5]]
r3 = [l[6], l[7], l[8]]
c1 = [l[0], l[3], l[6]]
```
But I can't seem to create a function in python to make it work. Is my syntax wrong?
```
def make_row(r, li, arg1, arg2, arg3):
r = [li[arg1], li[arg2], li[arg3]]
make_row(r1, l, 0, 1, 2)
make_row(r2, l, 3, 4, 5)
make_row(r3, l, 6, 7, 8)
```
Can anybody tell me what I'm doing wrong? The `make_row` function does not seem to work correctly.
|
2016/02/16
|
[
"https://Stackoverflow.com/questions/35438785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5936229/"
] |
Your error is a misunderstanding in [how Python passes arguments](http://robertheaton.com/2014/02/09/pythons-pass-by-object-reference-as-explained-by-philip-k-dick/). `r` in the function `make_row` is just a name. When you assign into it, it simply points that name to something new, in the context of your function, leaving the old object, and old name outside the function, unchanged.
If you return the result of `make_row` you can see it generates the correct output, it just does not save it into the variables as you are thinking it would.
---
### However there are easier (and more Pythonic) ways to do what you are trying to do:
This will return a list of your **rows**:
```
[l[i:i+3] for i in xrange(0, len(l), 3)]
```
And this is the equivalent for **columns**:
```
[l[i::3] for i in xrange(0, 3)]
```
If you want rows/columns of a different length, just substitute that number in place of the 3's in these statements.
|
OK, I think I figured out my own question. The code should look like this.
```
def make_row(li, arg1, arg2, arg3):
r = [li[arg1], li[arg2], li[arg3]]
return r
row1 = make_row(l, 0, 1, 2)
```
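For completeness, since the helper just picks three indices, the same function also builds the columns from the question (a small runnable sketch):

```python
def make_row(li, arg1, arg2, arg3):
    r = [li[arg1], li[arg2], li[arg3]]
    return r

l = [1, 2, 3, 4, 5, 6, 7, 8, 9]
row1 = make_row(l, 0, 1, 2)  # first row
c1 = make_row(l, 0, 3, 6)    # first column
print(row1)  # [1, 2, 3]
print(c1)    # [1, 4, 7]
```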
|
35,438,785
|
I have a list of numbers and I want to make rows and columns out of the list.
I can brute force it and do the following below in Python 2.7.
```
l = [1,2,3,4,5,6,7,8,9]
r1 = [l[0], l[1], l[2]]
r2 = [l[3], l[4], l[5]]
r3 = [l[6], l[7], l[8]]
c1 = [l[0], l[3], l[6]]
```
But I can't seem to create a function in python to make it work. Is my syntax wrong?
```
def make_row(r, li, arg1, arg2, arg3):
r = [li[arg1], li[arg2], li[arg3]]
make_row(r1, l, 0, 1, 2)
make_row(r2, l, 3, 4, 5)
make_row(r3, l, 6, 7, 8)
```
Can anybody tell me what I'm doing wrong? The `make_row` function does not seem to work correctly.
|
2016/02/16
|
[
"https://Stackoverflow.com/questions/35438785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5936229/"
] |
OK, I think I figured out my own question. The code should look like this.
```
def make_row(li, arg1, arg2, arg3):
r = [li[arg1], li[arg2], li[arg3]]
return r
row1 = make_row(l, 0, 1, 2)
```
|
Your function `make_row` works as far as I can tell (I have not tested it), but you need to `return r`.
|
42,230,691
|
For a beginner with Tkinter who is just average in Python, it's hard to find good material on tkinter. Here is the problem I ran into (and began to solve). I think the problem comes from the Python version.
I'm trying to build a GUI in an OOP style, and I'm having difficulty combining different classes.
Let's say I have a "small box" (for example, a menu bar), and I want to put it in a "big box". Working from this tutorial (<http://sebsauvage.net/python/gui/index.html>), I'm trying the following code
```
#!usr/bin/env python3.5
# coding: utf-8
import tkinter as tki
class SmallBox(tki.Tk):
def __init__(self,parent):
tki.Tk.__init__(self,parent)
self.parent = parent
self.grid()
self.box = tki.LabelFrame(self,text="small box")
self.box.grid()
self.graphicalStuff = tki.Entry(self.box) # something graphical
self.graphicalStuff.grid()
class BigBox(tki.Tk):
def __init__(self,parent):
tki.Tk.__init__(self,parent)
self.parent = parent
self.grid()
self.box = tki.LabelFrame(self,text='big box containing the small one')
self.graphStuff = tki.Entry(self.box) # something graphical
self.sbox = SmallBox(self)
self.graphStuff.grid()
self.box.grid()
self.sbox.grid()
```
But I got the following error.
```
File "/usr/lib/python3.5/tkinter/__init__.py", line 1871, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
TypeError: create() argument 1 must be str or None, not BigBox
```
|
2017/02/14
|
[
"https://Stackoverflow.com/questions/42230691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6721930/"
] |
The tutorial you are using has an incorrect example. The `Tk` class doesn't have a parent.
Also, you must only create a single instance of `Tk` (or subclass of `Tk`). Tkinter widgets exist in a tree-like hierarchy with a single root. This root widget is `Tk()`. You cannot have more than one root.
|
The code looks quite similar to this one: [Best way to structure a tkinter application](https://stackoverflow.com/questions/17466561/best-way-to-structure-a-tkinter-application)
But there is one slight difference: we're not working with Frame here. And the error complains about screenName, etc., which, intuitively, looks more like a Frame issue.
In fact, I would say that in Python 3 you can no longer use the version from the first tutorial; you have to use Frame and write something like this:
```
#!usr/bin/env python3.5
# coding: utf-8
import tkinter as tki
class SmallBox(tki.Frame):
def __init__(self,parent):
tki.Frame.__init__(self,parent)
self.parent = parent
self.grid()
self.box = tki.LabelFrame(self,text="small box")
self.box.grid()
self.graphicalStuff = tki.Entry(self.box) # something graphical
self.graphicalStuff.grid()
class BigBox(tki.Frame):
def __init__(self,parent):
tki.Frame.__init__(self,parent)
self.parent = parent
self.grid()
self.box = tki.LabelFrame(self,text='big box containing the small one')
self.graphStuff = tki.Entry(self.box) # something graphical
self.sbox = SmallBox(self)
self.graphStuff.grid()
self.box.grid()
self.sbox.grid()
if __name__ == '__main__':
tg = BigBox(None)
tg.mainloop()
```
There aren't many examples and docs to be found (especially for French people, or people who aren't native English speakers), and the tutorial I use is quite common, so maybe this will be useful to someone.
|
45,733,399
|
I have a Javascript file `Commodity.js` like this:
```
commodityInfo = [
["GLASS ITEM", 1.0, 1.0, ],
["HOUSEHOLD GOODS", 3.0, 2.0, ],
["FROZEN PRODUCTS", 1.0, 3.0, ],
["BEDDING", 1.0, 4.0, ],
["PERFUME", 1.0, 5.0, ],
["HARDWARE", 5.0, 6.0, ],
["CURTAIN", 1.0, 7.0, ],
["CLOTHING", 24.0, 8.0, ],
["ELECTRICAL ITEMS", 1.0, 9.0, ],
["PLUMBING MATERIAL", 1.0, 10.0, ],
["FLOWER", 7.0, 11.0, ],
["PROCESSED FOODS.", 1.0, 12.0, ],
["TILES", 1.0, 13.0, ],
["ELECTRICAL", 9.0, 14.0, ],
["PLUMBING", 1.0, 15.0, ]
];
```
I want to iterate through each of the items, like GLASS ITEM, HOUSEHOLD GOODS, FROZEN PRODUCTS, and use the numbers beside them for some calculations using Python.
Can someone tell me how to open the file and iterate through the items like that in Python?
Thank you.
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45733399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6624726/"
] |
The following code may not be the most efficient, but it works for your case.
What I'm doing here: turn the string (the content of the file) into valid JSON and then load the JSON string into a Python variable.
Note: It would be easier if the content of your JS file was already valid JSON!
```
import re
import json
# for the sake of this code, we will assume you can successfully load the content of your JS file
# into a variable called "file_content"
# E.G. with the following code:
#
# with open('Commodity.js', 'r') as f: #open the file
# file_content = f.read()
# since I do not have such a file, I will fill the variable "manually", based on your sample data
file_content = """
commodityInfo = [
["GLASS ITEM", 1.0, 1.0, ],
["HOUSEHOLD GOODS", 3.0, 2.0, ],
["FROZEN PRODUCTS", 1.0, 3.0, ],
["BEDDING", 1.0, 4.0, ],
["PERFUME", 1.0, 5.0, ],
["HARDWARE", 5.0, 6.0, ],
["CURTAIN", 1.0, 7.0, ],
["CLOTHING", 24.0, 8.0, ],
["ELECTRICAL ITEMS", 1.0, 9.0, ],
["PLUMBING MATERIAL", 1.0, 10.0, ],
["FLOWER", 7.0, 11.0, ],
["PROCESSED FOODS.", 1.0, 12.0, ],
["TILES", 1.0, 13.0, ],
["ELECTRICAL", 9.0, 14.0, ],
["PLUMBING", 1.0, 15.0, ]
];
"""
# get rid of leading/trailing line breaks
file_content = file_content.strip()
# get rid of "commodityInfo = " and the ";" and make the array valid JSON
r = re.match(".*=", file_content)
json_str = file_content.replace(r.group(), "").replace(";", "").replace(", ]", "]")
# now we can load the JSON into a Python variable
# in this case, it will be a list of lists, just as the source is an array of array
l = json.loads(json_str)
# now we can do whatever we want with the list, e.g. iterate it
for item in l:
print(item)
```
|
You can use `for` loops to achieve that.
Something like this would work:
```
for commodity in commodityInfo:
commodity[0] # the first element (e.g: GLASS ITEM)
commodity[1] # the second element (e.g: 1.0)
print(commodity[1] + commodity[2]) #calculate two values
```
You can learn more about `for` loops [here](https://www.tutorialspoint.com/python/python_for_loop.htm)
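A self-contained sketch of that loop, with a short slice of the data hand-copied in as a Python list (the full `commodityInfo` array would work the same once loaded into Python):

```python
# A short slice of the data, hand-copied into a Python list.
commodityInfo = [
    ["GLASS ITEM", 1.0, 1.0],
    ["HOUSEHOLD GOODS", 3.0, 2.0],
    ["FROZEN PRODUCTS", 1.0, 3.0],
]

totals = []
for commodity in commodityInfo:
    name = commodity[0]              # e.g. "GLASS ITEM"
    total = commodity[1] + commodity[2]  # combine the two numbers
    totals.append((name, total))

print(totals)
```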
|
20,554,040
|
I'm new to Django and I'm following a tutorial. The problem is that the tutorial uses SQLite but I want to use a MySQL server instead. I changed the parameters following the documentation but I get the following error when I try to run the server. I already found some proposed solutions but they didn't work...
For your information, I installed MySQL-Python and reinstalled Django with pip, without any success. I hope you will be able to help me.
Traceback :
```
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\adescamp>cd C:\Users\adescamp\agregmail
C:\Users\adescamp\agregmail>python manage.py runserver 8000
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
utility.execute()
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 280, in execute
translation.activate('en-us')
File "C:\Python27\lib\site-packages\django\utils\translation\__init__.py", line 130, in activate
return _trans.activate(language)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 188, in activate
_active.value = translation(language)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 177, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 159, in _fetch
app = import_module(appname)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\django\contrib\admin\__init__.py", line 6, in <module>
from django.contrib.admin.sites import AdminSite, site
File "C:\Python27\lib\site-packages\django\contrib\admin\sites.py", line 4, in <module>
from django.contrib.admin.forms import AdminAuthenticationForm
File "C:\Python27\lib\site-packages\django\contrib\admin\forms.py", line 6, in <module>
from django.contrib.auth.forms import AuthenticationForm
File "C:\Python27\lib\site-packages\django\contrib\auth\forms.py", line 17, in <module>
from django.contrib.auth.models import User
File "C:\Python27\lib\site-packages\django\contrib\auth\models.py", line 48, in <module>
class Permission(models.Model):
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 96, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 264, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Python27\lib\site-packages\django\db\models\options.py", line 124, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "C:\Python27\lib\site-packages\django\db\__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Python27\lib\site-packages\django\db\utils.py", line 198, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Python27\lib\site-packages\django\db\utils.py", line 113, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\django\db\backends\mysql\base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
```
Thank you!
|
2013/12/12
|
[
"https://Stackoverflow.com/questions/20554040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2904080/"
] |
Your problem is most likely related to buffering in your system, not anything intrinsically wrong with your line of code. I was able to create a test scenario where I could reproduce it - then make it go away. I hope it will work for you too.
Here is my test scenario. First I write a short script that writes the time to a file every 100 ms (approx) - this is my "log file" that generates enough data that `uniq -c` should give me an interesting output every second:
```
#!/bin/ksh
while :
do
echo The time is `date` >> a.txt
sleep 0.1
done
```
(Note - I had to use `ksh` which has the ability to do a sub-second `sleep`)
In another window, I type
```
tail -f a.txt | uniq -c
```
Sure enough, you get the following output appearing every second:
```
9 The time is Thu Dec 12 21:01:05 EST 2013
10 The time is Thu Dec 12 21:01:06 EST 2013
10 The time is Thu Dec 12 21:01:07 EST 2013
9 The time is Thu Dec 12 21:01:08 EST 2013
10 The time is Thu Dec 12 21:01:09 EST 2013
9 The time is Thu Dec 12 21:01:10 EST 2013
10 The time is Thu Dec 12 21:01:11 EST 2013
10 The time is Thu Dec 12 21:01:12 EST 2013
```
etc. No delays. Important to note - **I did not attempt to cut out the time**. Next, I did
```
tail -f a.txt | cut -f7 -d' ' | uniq -c
```
And your problem reproduced - it would "hang" for quite a while (until there was 4k of characters in the buffer, and then it would vomit it all out at once).
A bit of searching online ( <https://stackoverflow.com/a/16823549/1967396> ) told me of a utility called [stdbuf](http://linux.die.net/man/1/stdbuf) . At that reference, it specifically mentions almost exactly your scenario, and they provide the following workaround (paraphrasing to match my scenario above):
```
tail -f a.txt | stdbuf -oL cut -f7 -d' ' | uniq -c
```
And that would be great… except that this utility doesn't exist on my machine (Mac OS) - it is specific to GNU coreutils. This left me unable to test - although it may be a good solution for you.
Never fear - I found the following workaround, based on the `socat` command (which I honestly barely understand, but I adapted from the answer given at <https://unix.stackexchange.com/a/25377> ).
Make a small file called `tailcut.sh` (this is the "long\_running\_command" from the link above):
```
#!/bin/ksh
tail -f a.txt | cut -f7 -d' '
```
Give it execute permissions with `chmod 755 tailcut.sh` . Then issue the following command:
```
socat EXEC:./tailcut.sh,pty,ctty STDIO | uniq -c
```
And hey presto - your lumpy output is lumpy no more. The `socat` sends the output from the script straight to the next pipe, and `uniq` can do its thing.
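If `stdbuf` isn't available and `socat` feels heavyweight, another option (an untested sketch on my part) is to replace `cut` with a tiny Python filter, since Python lets you flush stdout after every line. The field index below mirrors `cut -f7 -d' '`:

```python
import sys

def seventh_field(line):
    """Mimic `cut -f7 -d' '`: return the 7th space-separated field."""
    fields = line.rstrip("\n").split(" ")
    return fields[6] if len(fields) >= 7 else ""

def run_filter(stream):
    # flush=True after every line, so the downstream pipe (uniq -c)
    # is not starved by a 4k stdio buffer.
    for line in stream:
        print(seventh_field(line), flush=True)
```

Saved as (say) `cut7.py` with a final `run_filter(sys.stdin)` call, it would be used as `tail -f a.txt | python cut7.py | uniq -c`.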
|
Consider how `uniq -c` is working.
In order to print the count, it needs to read all the identical lines; only once it reads a line that differs from the previous one can it print that line and its number of occurrences.
That's just how the algorithm fundamentally works and there is no way around it.
You can test this by running
```
touch a
tail -F a | uniq -c
```
And then one after another
```
echo 1 >> a
echo 1 >> a
echo 1 >> a
```
nothing happens. Only after you run
```
echo 2 >> a
```
`uniq` can print there were 3 "1\n" occurrences.
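The same consecutive-run counting can be sketched in Python with `itertools.groupby`, which makes the constraint obvious: a run's length is only known once the next distinct line (or end of input) arrives.

```python
from itertools import groupby

def uniq_c(lines):
    """Count runs of consecutive identical items, like `uniq -c`."""
    return [(len(list(group)), key) for key, group in groupby(lines)]

# The count for "1" is only emitted after "2" ends the run.
print(uniq_c(["1", "1", "1", "2"]))  # [(3, '1'), (1, '2')]
```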
|
20,554,040
|
I'm new to Django and I'm following a tutorial. The problem is that the tutorial uses SQLite, but I want to use a MySQL server instead. I changed the parameters following the documentation, but I get the following error when I try to run the server. I already found some suggested fixes, but they didn't work...
For your information, I installed MySQL-Python and reinstalled Django with pip, without any success. I hope you will be able to help me.
Traceback :
```
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\adescamp>cd C:\Users\adescamp\agregmail
C:\Users\adescamp\agregmail>python manage.py runserver 8000
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
utility.execute()
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 280, in execute
translation.activate('en-us')
File "C:\Python27\lib\site-packages\django\utils\translation\__init__.py", line 130, in activate
return _trans.activate(language)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 188, in activate
_active.value = translation(language)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 177, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 159, in _fetch
app = import_module(appname)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\django\contrib\admin\__init__.py", line 6, in <module>
from django.contrib.admin.sites import AdminSite, site
File "C:\Python27\lib\site-packages\django\contrib\admin\sites.py", line 4, in <module>
from django.contrib.admin.forms import AdminAuthenticationForm
File "C:\Python27\lib\site-packages\django\contrib\admin\forms.py", line 6, in <module>
from django.contrib.auth.forms import AuthenticationForm
File "C:\Python27\lib\site-packages\django\contrib\auth\forms.py", line 17, in <module>
from django.contrib.auth.models import User
File "C:\Python27\lib\site-packages\django\contrib\auth\models.py", line 48, in <module>
class Permission(models.Model):
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 96, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 264, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Python27\lib\site-packages\django\db\models\options.py", line 124, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "C:\Python27\lib\site-packages\django\db\__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Python27\lib\site-packages\django\db\utils.py", line 198, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Python27\lib\site-packages\django\db\utils.py", line 113, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\django\db\backends\mysql\base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
```
Thank you!
|
2013/12/12
|
[
"https://Stackoverflow.com/questions/20554040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2904080/"
] |
You may try `logtop` (`apt-get install logtop`):
Usage:
```
tail -F /var/logs/request.log | [cut the date-time] | logtop
```
Example:
```
$ tail -f /var/log/varnish/varnishncsa.log | awk '{print $4}' | logtop
5585 elements in 10 seconds (558.50 elements/s)
1 690 69.00/s [28/Mar/2015:23:13:48
2 676 67.60/s [28/Mar/2015:23:13:47
3 620 62.00/s [28/Mar/2015:23:13:49
4 576 57.60/s [28/Mar/2015:23:13:53
5 541 54.10/s [28/Mar/2015:23:13:54
6 540 54.00/s [28/Mar/2015:23:13:55
7 511 51.10/s [28/Mar/2015:23:13:51
8 484 48.40/s [28/Mar/2015:23:13:52
9 468 46.80/s [28/Mar/2015:23:13:50
```
Columns are, from left to right:
* Just row number
* quantity (count) seen
* hits per second
* the actual line
|
Consider how `uniq -c` is working.
In order to print the count, it needs to read all the identical lines; only once it reads a line that differs from the previous one can it print that line and its number of occurrences.
That's just how the algorithm fundamentally works and there is no way around it.
You can test this by running
```
touch a
tail -F a | uniq -c
```
And then one after another
```
echo 1 >> a
echo 1 >> a
echo 1 >> a
```
nothing happens. Only after you run
```
echo 2 >> a
```
`uniq` can print there were 3 "1\n" occurrences.
|
20,554,040
|
I'm new to Django and I'm following a tutorial. The problem is that the tutorial uses SQLite, but I want to use a MySQL server instead. I changed the parameters following the documentation, but I get the following error when I try to run the server. I already found some suggested fixes, but they didn't work...
For your information, I installed MySQL-Python and reinstalled Django with pip, without any success. I hope you will be able to help me.
Traceback :
```
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\adescamp>cd C:\Users\adescamp\agregmail
C:\Users\adescamp\agregmail>python manage.py runserver 8000
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
utility.execute()
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 280, in execute
translation.activate('en-us')
File "C:\Python27\lib\site-packages\django\utils\translation\__init__.py", line 130, in activate
return _trans.activate(language)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 188, in activate
_active.value = translation(language)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 177, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 159, in _fetch
app = import_module(appname)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\django\contrib\admin\__init__.py", line 6, in <module>
from django.contrib.admin.sites import AdminSite, site
File "C:\Python27\lib\site-packages\django\contrib\admin\sites.py", line 4, in <module>
from django.contrib.admin.forms import AdminAuthenticationForm
File "C:\Python27\lib\site-packages\django\contrib\admin\forms.py", line 6, in <module>
from django.contrib.auth.forms import AuthenticationForm
File "C:\Python27\lib\site-packages\django\contrib\auth\forms.py", line 17, in <module>
from django.contrib.auth.models import User
File "C:\Python27\lib\site-packages\django\contrib\auth\models.py", line 48, in <module>
class Permission(models.Model):
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 96, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 264, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Python27\lib\site-packages\django\db\models\options.py", line 124, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "C:\Python27\lib\site-packages\django\db\__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Python27\lib\site-packages\django\db\utils.py", line 198, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Python27\lib\site-packages\django\db\utils.py", line 113, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\django\db\backends\mysql\base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
```
Thank you!
|
2013/12/12
|
[
"https://Stackoverflow.com/questions/20554040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2904080/"
] |
Your problem is most likely related to buffering in your system, not anything intrinsically wrong with your line of code. I was able to create a test scenario where I could reproduce it - then make it go away. I hope it will work for you too.
Here is my test scenario. First I write a short script that writes the time to a file every 100 ms (approx) - this is my "log file" that generates enough data that `uniq -c` should give me an interesting output every second:
```
#!/bin/ksh
while :
do
echo The time is `date` >> a.txt
sleep 0.1
done
```
(Note - I had to use `ksh` which has the ability to do a sub-second `sleep`)
In another window, I type
```
tail -f a.txt | uniq -c
```
Sure enough, you get the following output appearing every second:
```
9 The time is Thu Dec 12 21:01:05 EST 2013
10 The time is Thu Dec 12 21:01:06 EST 2013
10 The time is Thu Dec 12 21:01:07 EST 2013
9 The time is Thu Dec 12 21:01:08 EST 2013
10 The time is Thu Dec 12 21:01:09 EST 2013
9 The time is Thu Dec 12 21:01:10 EST 2013
10 The time is Thu Dec 12 21:01:11 EST 2013
10 The time is Thu Dec 12 21:01:12 EST 2013
```
etc. No delays. Important to note - **I did not attempt to cut out the time**. Next, I did
```
tail -f a.txt | cut -f7 -d' ' | uniq -c
```
And your problem reproduced - it would "hang" for quite a while (until there was 4k of characters in the buffer, and then it would vomit it all out at once).
A bit of searching online ( <https://stackoverflow.com/a/16823549/1967396> ) told me of a utility called [stdbuf](http://linux.die.net/man/1/stdbuf) . At that reference, it specifically mentions almost exactly your scenario, and they provide the following workaround (paraphrasing to match my scenario above):
```
tail -f a.txt | stdbuf -oL cut -f7 -d' ' | uniq -c
```
And that would be great… except that this utility doesn't exist on my machine (Mac OS) - it is specific to GNU coreutils. This left me unable to test - although it may be a good solution for you.
Never fear - I found the following workaround, based on the `socat` command (which I honestly barely understand, but I adapted from the answer given at <https://unix.stackexchange.com/a/25377> ).
Make a small file called `tailcut.sh` (this is the "long\_running\_command" from the link above):
```
#!/bin/ksh
tail -f a.txt | cut -f7 -d' '
```
Give it execute permissions with `chmod 755 tailcut.sh` . Then issue the following command:
```
socat EXEC:./tailcut.sh,pty,ctty STDIO | uniq -c
```
And hey presto - your lumpy output is lumpy no more. The `socat` sends the output from the script straight to the next pipe, and `uniq` can do its thing.
|
You may try `logtop` (`apt-get install logtop`):
Usage:
```
tail -F /var/logs/request.log | [cut the date-time] | logtop
```
Example:
```
$ tail -f /var/log/varnish/varnishncsa.log | awk '{print $4}' | logtop
5585 elements in 10 seconds (558.50 elements/s)
1 690 69.00/s [28/Mar/2015:23:13:48
2 676 67.60/s [28/Mar/2015:23:13:47
3 620 62.00/s [28/Mar/2015:23:13:49
4 576 57.60/s [28/Mar/2015:23:13:53
5 541 54.10/s [28/Mar/2015:23:13:54
6 540 54.00/s [28/Mar/2015:23:13:55
7 511 51.10/s [28/Mar/2015:23:13:51
8 484 48.40/s [28/Mar/2015:23:13:52
9 468 46.80/s [28/Mar/2015:23:13:50
```
Columns are, from left to right:
* Just row number
* quantity (count) seen
* hits per second
* the actual line
|
6,929,981
|
I'm trying to build a regex that joins numbers in a string when they have spaces between them, ex:
```
$string = "I want to go home 8890 7463 and then go to 58639 6312 the cinema"
```
The regex should output:
```
"I want to go home 88907463 and then go to 586396312 the cinema"
```
The regex can be written in either Python or PHP.
Thanks!
|
2011/08/03
|
[
"https://Stackoverflow.com/questions/6929981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797495/"
] |
Use a look-ahead to see if the next block is a set of numbers and remove the trailing space. That way, it works for any number of sets (which I suspected you might want):
```
$string = "I want to go home 8890 7463 41234 and then go to 58639 6312 the cinema";
$newstring = preg_replace("/\b(\d+)\s+(?=\d+\b)/", "$1", $string);
// Note: Remove the \b on both sides if you want any words with a number combined.
// The \b tokens ensure that only blocks with only numbers are merged.
echo $newstring;
// I want to go home 8890746341234 and then go to 586396312 the cinema
```
|
Python:
```
import re
text = 'abc 123 456 789 xyz'
text = re.sub(r'(\d+)\s+(?=\d)', r'\1', text) # abc 123456789 xyz
```
This works for any number of consecutive number groups, with any amount of spacing in-between.
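For instance, applying it to the string from the question:

```python
import re

string = "I want to go home 8890 7463 and then go to 58639 6312 the cinema"
# Drop the whitespace after a digit group whenever another digit follows.
result = re.sub(r'(\d+)\s+(?=\d)', r'\1', string)
print(result)  # I want to go home 88907463 and then go to 586396312 the cinema
```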
|
71,583,214
|
In GitBook, the title only shows up while mousing over it by default.
[](https://i.stack.imgur.com/nXE8q.png)
I want the title to be shown permanently. I inspected the elements,
```html
<div class="book-header" role="navigation">
<!-- Title -->
<h1>
<i class="fa fa-circle-o-notch fa-spin"></i>
<a href=".." >Preface</a>
</h1>
</div>
```
and related CSS is,
```
.book-header h1 a, .book-header h1 a:hover {
color: inherit;
text-decoration: none;
}
```
I add the following CSS,
```css
.book-header h1 a {
display: block !important;
}
```
but it doesn't work.
---
Update:
I follow the answer from @ED Wu, and add the following code to CSS,
```css
.book-header h1 {
opacity: 1;
}
```
The title does show up. However, the left sidebar doesn't show up while I click on `三` (no action) after adding `opacity:1`. An example is [here](https://dianyao.co/python).
[](https://i.stack.imgur.com/ksrfr.png)
|
2022/03/23
|
[
"https://Stackoverflow.com/questions/71583214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] |
Because `.book-header h1` has opacity 0.
Try adding this to your CSS.
```
.book-header h1 {
opacity:1!important;
}
```
|
Try this! More about `color: inherit` [here](https://www.w3schools.com/cssref/css_inherit.asp). You can use other property like `z-index`, `opacity` and `position` if it doesn't work too. Thanks :)
```css
.book-header h1 a, .book-header h1 a:hover {
display: block !important;
color: #000 !important;
text-decoration: none;
}
```
```html
<div class="book-header" role="navigation">
<!-- Title -->
<h1>
<i class="fa fa-circle-o-notch fa-spin"></i>
<a href=".." >Preface</a>
</h1>
</div>
```
|
46,908,231
|
I'm a newbie learning to code, and I stumbled upon an incorrect output while practicing some code in Python. Please help me with this. I tried my best to find the problem in the code but I could not find it.
Code:
```
def compare(x,y):
if x>y:
return 1
elif x==y:
return 0
else:
return -1
i=raw_input("enter x\n")
j=raw_input("enter y\n")
print compare(i,j)
```
Output:
```
-> python python.py
enter x
10
enter y
5
-1
```
The output that I expected to receive is 1, but the output that I receive is -1. Please help me with the unseen error in my code.
Thank you.
|
2017/10/24
|
[
"https://Stackoverflow.com/questions/46908231",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8818971/"
] |
`raw_input` always returns a string, so you have to convert the input values into numbers.
```
i=raw_input("enter x\n")
j=raw_input("enter y\n")
print compare(i,j)
```
should be
```
i=int(raw_input("enter x\n"))
j=int(raw_input("enter y\n"))
print compare(i,j)
```
|
Your issue is that `raw_input()` returns a string, not an integer.
Therefore, what your function is actually doing is checking `"10" > "5"`, which is `False`, so it falls through your `if` block and reaches the `else` clause.
To fix this, you'll need to cast your input strings to integers by wrapping the values in `int()`.
i.e.
`i = int(raw_input("enter x\n"))`.
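To see the difference concretely, here is the original function with both the raw strings and the cast integers (a Python 3-style sketch; the comparison behaviour is the same in Python 2):

```python
def compare(x, y):
    if x > y:
        return 1
    elif x == y:
        return 0
    else:
        return -1

# Strings compare lexicographically: '1' < '5', so "10" < "5".
assert compare("10", "5") == -1
# After casting to int, the numeric comparison behaves as expected.
assert compare(int("10"), int("5")) == 1
```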
|
46,908,231
|
I'm a newbie learning to code, and I stumbled upon an incorrect output while practicing some code in Python. Please help me with this. I tried my best to find the problem in the code but I could not find it.
Code:
```
def compare(x,y):
if x>y:
return 1
elif x==y:
return 0
else:
return -1
i=raw_input("enter x\n")
j=raw_input("enter y\n")
print compare(i,j)
```
Output:
```
-> python python.py
enter x
10
enter y
5
-1
```
The output that I expected to receive is 1, but the output that I receive is -1. Please help me with the unseen error in my code.
Thank you.
|
2017/10/24
|
[
"https://Stackoverflow.com/questions/46908231",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8818971/"
] |
`raw_input` always returns a string, so you have to convert the input values into numbers.
```
i=raw_input("enter x\n")
j=raw_input("enter y\n")
print compare(i,j)
```
should be
```
i=int(raw_input("enter x\n"))
j=int(raw_input("enter y\n"))
print compare(i,j)
```
|
Use the built-in `cmp` function.
```
>>> help(cmp)
Help on built-in function cmp in module __builtin__:
cmp(...)
cmp(x, y) -> integer
Return negative if x<y, zero if x==y, positive if x>y.
```
So your function will look like this.
```
>>> def compare(x,y):
... return cmp(x,y)
...
>>>
```
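Note that `cmp` was removed in Python 3, so if this code ever moves there, a commonly suggested equivalent (sketched here) is `(x > y) - (x < y)`, which relies on `True`/`False` subtracting to 1, 0, or -1:

```python
def compare(x, y):
    # Python 3 replacement for the removed cmp() builtin.
    return (x > y) - (x < y)

assert compare(3, 2) == 1
assert compare(2, 3) == -1
assert compare(2, 2) == 0
```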
Then get two variables using `raw_input()`, which returns a string. If you type two numbers separated by a blank space, splitting on whitespace gives the two numbers as a list, and applying `map` with `int` over that list converts both values and saves them into x and y.
```
>>> x,y = map(int, raw_input().split())
3 2
```
Now compare x and y. Since x = 3 and y = 2, then as per the documentation of `cmp()`, it returns negative if x<y, zero if x==y, and positive if x>y.
```
>>> compare(x,y)
1
>>> compare(y,x)
-1
>>> compare(x-1,y)
0
>>>
```
|
66,385,439
|
I'm working on a project where I need to convert a set of data rows from a database into a `list of OrderedDict` for another purpose, and then convert this `list of OrderedDict` into a `nested JSON` format in Python. I'm starting to learn Python. I was able to convert the query response from the database, which is a `list of lists`, into a `list of OrderedDict`.
I have the `list of OrderedDict` as below:
```
{
'OUTBOUND': [
OrderedDict([('Leg', 1), ('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'ATO'),('SeatGroup', '2'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'A'),('Price', 145.0),('Num_Pax', 1),('Channel', 'Web')]),
OrderedDict([('Leg', 1),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'ATO'),('SeatGroup', '4'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'A'),('Price', 111.0),('Num_Pax', 1),('Channel', 'Web')]),
OrderedDict([('Leg', 1),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'BDM'),('SeatGroup', 'null'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'A'),('Price', 111.0),('Num_Pax', 1),('Channel', 'Web')]),
OrderedDict([('Leg', 2),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'ATO'),('SeatGroup', '1'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'U'),('Price', 180.0),('Num_Pax', 1),('Channel', 'Web'))]),
OrderedDict([('Leg', 2),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'ATO'),('SeatGroup', '4'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'U'),('Price', 97.0),('Num_Pax', 1),('Channel', 'Web')]),
OrderedDict([('Leg', 2),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'BDM'),('SeatGroup', 'null'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'U'),('Price', 97.0),('Num_Pax', 1),('Channel', 'Web')])
]
}
```
And I needed the nested format like below:
```
{
"OUTBOUND": [
{
"Leg": 1,
"SessionID": "W12231fwfegwcaa2",
"Modality": "VB",
"BookingClass": "A",
"FeeCodes":[
{
"FeeCode": "ATO",
"Prices":
[
{
"SeatGroup": "2",
"Price": 145.0,
"Currency": "MXN"
},
{
"SeatGroup": "4",
"Price": 111.0,
"Currency": "MXN"
}
]
},
{
"FeeCode": "VBABDM",
"Prices":
[
{
"SeatGroup": "null",
"Price": 111.0,
"Currency": "MXN"
}
]
}
],
"Num_Pax": 1,
"Channel": "Web"
},
{
"Leg": 2,
"SessionID": "W12231fwfegwcaa2",
"Modality": "VB",
"BookingClass": "U",
"FeeCodes":[
{
"FeeCode": "ATO",
"Prices":
[
{
"SeatGroup": "1",
"Price": 180.0,
"Currency": "MXN"
},
{
"SeatGroup": "4",
"price": 97.0,
"Currency": "MXN"
}
]
},
{
"FeeCode": "VBABDM",
"Prices":
[
{
"SeatGroup": "null",
"price": 97.0,
"Currency": "MXN"
}
]
}
],
"Num_Pax": 1,
"Channel": "Web"
}
]
}
```
If I'm not wrong, I need to group by `Leg`, `SessionID`, `Modality`, `BookingClass`, `NumPax` and `Channel` and gather the `FeeCode`, `SeatGroup`, `Price` and `Currency` into the nested format above, but I'm unable to work out how to loop and group for the nesting.
It would be great if I could get some help. Thanks
|
2021/02/26
|
[
"https://Stackoverflow.com/questions/66385439",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2699684/"
] |
I was able to write Python code to get the format I needed using simple looping, with a couple of changes to the output: the fields SessionID, Num\_Pax and Channel are taken outside, and then the OUTBOUND field and the fields within it are generated.
Instead of an OrderedDict, I used a list of lists as input, which I convert into a pandas DataFrame and work with to get the nested format.
Below is the code I used:
```
outbound_df = pd.DataFrame(response_outbound,columns=All_columns)
Common_columns = ['Leg', 'Modality', 'BookingClass']
### Taking SessionID, AirlineCode,Num_Pax and Channel outside OUTBOUND part as they are common for all the leg level data
response_data['SessionID'] = outbound_df['SessionID'].unique()[0]
response_data['Num_Pax'] = int(outbound_df['Num_Pax'].unique()[0])
response_data['Channel'] = outbound_df['Channel'].unique()[0]
temp_data = []
Legs = outbound_df['Leg'].unique()
for i in Legs:
subdata = outbound_df[outbound_df['Leg']==i]
### Initializing leg_data dict
leg_data = collections.OrderedDict()
### Populating common fields of the leg (Leg, Modality,BookingClass)
for j in Common_columns:
if(j=='Leg'):
leg_data[j] = int(subdata[j].unique()[0])
else:
leg_data[j] = subdata[j].unique()[0]
leg_data['FeeCodes'] = []
FeeCodes = subdata['FeeCode'].unique()
for fc in FeeCodes:
subdata_fees = subdata[subdata['FeeCode']==fc]
Prices = {'FeeCode':fc, "Prices":[]}
for _,rows in subdata_fees.iterrows():
data = {}
data['SeatGroup'] = rows['SeatGroup']
data['Price'] = float(rows['Price'])
data['Currency'] = rows['Currency']
Prices["Prices"].append(data)
leg_data["FeeCodes"].append(Prices)
temp_data.append(leg_data)
response_data["OUTBOUND"] = temp_data
```
I can just do `json.dumps` on `response_data` to get json format which will be sent to the next steps.
Below is the output format I get:
```
{
"SessionID":"W12231fwfegwcaa2",
"Num_Pax":1,
"Channel":"Web",
"OUTBOUND":[
{
"Leg":1,
"Modality":"VB",
"BookingClass":"A",
"FeeCodes":[
{
"FeeCode":"ATO",
"Prices":[
{
"SeatGroup":"2",
"Price":145.0,
"Currency":"MXN"
},
{
"SeatGroup":"4",
"Price":111.0,
"Currency":"MXN"
}
]
},
{
"FeeCode":"VBABDM",
"Prices":[
{
"SeatGroup":"null",
"Price":111.0,
"Currency":"MXN"
}
]
}
]
},
{
"Leg":2,
"Modality":"VB",
"BookingClass":"U",
"FeeCodes":[
{
"FeeCode":"ATO",
"Prices":[
{
"SeatGroup":"1",
"Price":180.0,
"Currency":"MXN"
},
{
"SeatGroup":"4",
"price":97.0,
"Currency":"MXN"
}
]
},
{
"FeeCode":"VBABDM",
"Prices":[
{
"SeatGroup":"null",
"price":97.0,
"Currency":"MXN"
}
]
}
]
}
]
}
```
Please let me know if we can shorten the code in terms of lengthy iterations or any other changes. Thanks.
PS: Sorry for my editing mistakes
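One way to shorten the leg/fee-code iteration, sketched here under the assumption that the DataFrame has the same column names as above, is to lean on `groupby` and `to_dict('records')`:

```python
import pandas as pd

def build_outbound(df):
    """Nest rows into the Leg -> FeeCodes -> Prices structure via groupby."""
    legs = []
    # sort=False preserves the row order from the query response.
    for (leg, modality, bc), leg_df in df.groupby(
            ["Leg", "Modality", "BookingClass"], sort=False):
        fee_codes = [
            {"FeeCode": fee,
             "Prices": fee_df[["SeatGroup", "Price", "Currency"]]
                       .to_dict("records")}
            for fee, fee_df in leg_df.groupby("FeeCode", sort=False)
        ]
        legs.append({"Leg": int(leg), "Modality": modality,
                     "BookingClass": bc, "FeeCodes": fee_codes})
    return legs
```

`json.dumps` can then be applied to the result exactly as before.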
|
Assuming that you stored the dictionary in some variable `foo`, you can do:
```py
import json
json.dumps(foo)
```
And be careful: you added an extra bracket in the 4th element of the `OUTBOUND` list.
|
67,055,004
|
My Azure devops page will look like :
[](https://i.stack.imgur.com/YBdJx.png)
I have 4 pandas dataframes.
I need to create 4 sub pages in Azure devops wiki from each dataframe.
Say, Sub1 from first dataframe, Sub2 from second dataframe and so on.
My result should be in tab. The result should look like :
[](https://i.stack.imgur.com/B66yA.png)
Is it possible to create subpages through the API?
I have referenced the following docs, but I am unable to make sense of them. Any inputs would be helpful. Thanks.
<https://github.com/microsoft/azure-devops-python-samples/blob/main/API%20Samples.ipynb>
<https://learn.microsoft.com/en-us/rest/api/azure/devops/wiki/pages/create%20or%20update?view=azure-devops-rest-6.0>
|
2021/04/12
|
[
"https://Stackoverflow.com/questions/67055004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11049287/"
] |
Use the `value()` method:
```
$user->image()->value('image');
```
From [Eloquent documentation](https://laravel.com/docs/8.x/queries#retrieving-a-single-row-column-from-a-table)
>
> If you don't need an entire row, you may extract a single value from a record using the value method. This method will return the value of the column directly:
>
>
> $email = DB::table('users')->where('name', 'John')->value('email');
>
>
>
You can set it as a user attribute.
```
public function getProfileImageAttribute()
{
return optional($this->image)->image;
//or
return $this->image->image ?? null;
//or
return $this->image->image ?? 'path/of/default/image';
}
```
now you can call it like this
```
$user->profile_image;
```
|
You can create another function inside your model and access the previous method like
```
public function image()
{
return $this->hasOne(UserImages::class, 'user_id', 'id')->latest();
}
public function avatar()
{
return $this->image->image ?: null;
//OR
return $this->image->image ?? null;
//OR
return !is_null($this->image) ? $this->image->image : null;
//OR
return optional($this->image)->image;
}
```
And it will be accessible with `$user->avatar();`
**As per the discussion, you are sending the response to an API**
```
$this->user = $this->user->where('id', "!=", $current_user->id)->with(['chats','Image:user_id,image'])->paginate(50);
```
This will help you, but it will be better to use API Resources for API responses to transform specific fields.
|
67,055,004
|
My Azure DevOps page looks like this:
[](https://i.stack.imgur.com/YBdJx.png)
I have 4 pandas dataframes.
I need to create 4 sub-pages in the Azure DevOps wiki, one from each dataframe.
Say, Sub1 from the first dataframe, Sub2 from the second, and so on.
The result should appear as tabs, like this:
[](https://i.stack.imgur.com/B66yA.png)
Is it possible to create subpages through the API?
I have referenced the following docs, but I am unable to make sense of them. Any input would be helpful. Thanks.
<https://github.com/microsoft/azure-devops-python-samples/blob/main/API%20Samples.ipynb>
<https://learn.microsoft.com/en-us/rest/api/azure/devops/wiki/pages/create%20or%20update?view=azure-devops-rest-6.0>
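For reference, the "Create or Update" endpoint in the second link addresses subpages purely by the page path: a path like `Parent/Sub1` creates `Sub1` beneath `Parent`. Below is a minimal sketch of building such a call; the org/project/wiki names are placeholders, and the authentication header plus the actual PUT request are left out:

```
def wiki_put_url(org, project, wiki, path):
    # Subpages are addressed by path alone: "Parent/Sub1" creates
    # Sub1 underneath the existing Parent page.
    return (f"https://dev.azure.com/{org}/{project}/_apis/wiki/wikis/"
            f"{wiki}/pages?path={path}&api-version=6.0")

def table_to_markdown(columns, rows):
    # Minimal markdown table builder; with pandas (plus tabulate) you
    # could use df.to_markdown() instead, feeding df.columns / df.values.
    lines = ["| " + " | ".join(columns) + " |",
             "| " + " | ".join("---" for _ in columns) + " |"]
    lines += ["| " + " | ".join(str(v) for v in r) + " |" for r in rows]
    return "\n".join(lines)

url = wiki_put_url("myorg", "myproject", "myproject.wiki", "Parent/Sub1")
body = {"content": table_to_markdown(["mass", "radius"],
                                     [[3.303e23, 2.4397e6]])}
```

The page would then be created by sending `body` as JSON in a PUT to `url`, with a personal access token in a Basic auth header.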
|
2021/04/12
|
[
"https://Stackoverflow.com/questions/67055004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11049287/"
] |
Use the `value()` method:
```
$user->image()->value('image');
```
From [Eloquent documentation](https://laravel.com/docs/8.x/queries#retrieving-a-single-row-column-from-a-table)
>
> If you don't need an entire row, you may extract a single value from a record using the value method. This method will return the value of the column directly:
>
>
> $email = DB::table('users')->where('name', 'John')->value('email');
>
>
>
You can set it as a user attribute.
```
public function getProfileImageAttribute()
{
return optional($this->image)->image;
//or
return $this->image->image ?? null;
//or
return $this->image->image ?? 'path/of/default/image';
}
```
Now you can call it like this:
```
$user->profile_image;
```
|
You could use the magic Laravel provides:
```
$user->image()->pluck('image');
```
Documentation: <https://laravel.com/docs/8.x/collections#method-pluck>
|
30,114,579
|
I am running Ubuntu 12.04 and run programs through the terminal. I have a file that runs without any issues when I am in its directory. Example below:
```
david@block-ubuntu:~/Documents/BudgetAutomation/BillList$ pwd
/home/david/Documents/BudgetAutomation/BillList
david@block-ubuntu:~/Documents/BudgetAutomation/BillList$ python3.4 bill.py
./otherlisted.txt
./monthlisted.txt
david@block-ubuntu:~/Documents/BudgetAutomation/BillList$
```
Now, when I go up one directory and try running the same piece of code, I get the error `ValueError: need more than 1 value to unpack`. Below is what happens when I run the script from one folder up, followed by the script itself.
```
david@block-ubuntu:~/Documents/BudgetAutomation$ python3.4 /home/david/Documents/BudgetAutomation/BillList/bill.py
Traceback (most recent call last):
File "/home/david/Documents/BudgetAutomation/BillList/bill.py", line 22, in <module>
bill_no, bill_name, trash = line.split('|', 2)
ValueError: need more than 1 value to unpack
```
The code for `bill.py` is below. The program reads the two text files from the folder it is located in and parses each line into variables.
```
#!/usr/bin/env python
import glob
# gather all txt files in directory
arr = glob.glob('./*.txt')
arrlen = int(len(arr))
# create array to store list of bill numbers and names
list_num = []
list_name = []
# for loop that parses lines into appropriate variables
for i in range(arrlen):
with open(arr[i]) as input:
w = 0 ## iterative variable for arrays
for line in input:
list_num.append(1) ## initialize arrays
list_name.append(1)
# split line into variables.. trash is rest of line that has no use
bill_no, bill_name, trash = line.split('|', 2)
# stores values in array
list_num[w] = bill_no
list_name[w] = bill_name
w += 1
```
What is going on here? Am I not invoking the script correctly from the terminal? Another note: I eventually call this code from another file and the for loop never runs, which I assume is because the script only works when called from its own folder/directory.
|
2015/05/08
|
[
"https://Stackoverflow.com/questions/30114579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4362951/"
] |
Your problem starts in line 5:
```
arr = glob.glob('./*.txt')
```
You are telling glob to look in the current working directory for all .txt files. Since you are running from one directory up, it does not find the files you expect.
You are getting the ValueError because the lines it does read there (an empty line, for example) split into a single value instead of three.
As it is written, you will need to run the script from that directory.
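This can be demonstrated directly (a small sketch of my own, not from the answer): the same pattern matches different files depending on the working directory.

```
import glob
import os
import tempfile

# glob('./*.txt') is resolved against the current working directory,
# not against the directory the script lives in.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "bills.txt"), "w").close()
    here = os.getcwd()
    try:
        os.chdir(d)
        found = glob.glob('./*.txt')   # finds bills.txt
    finally:
        os.chdir(here)
elsewhere = glob.glob('./*.txt')       # whatever the launch directory holds
```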
Edit:
The way I see it, you have three separate options.
1. You could simply run the script with its full path (assuming it is executable):
   `~/Documents/BudgetAutomation/BillList/bill.py`
2. You could put the full path into the file (although this is not very Pythonic):
   `arr = glob.glob('/home/[username]/Documents/BudgetAutomation/BillList/*.txt')`
3. You could use [sys.argv](https://docs.python.org/3/library/sys.html?highlight=sys.argv#sys.argv) to pass the path to the script. This would be my personal preferred way. Use `os.path.join` to get the correct slashes:
   `arr = glob.glob(os.path.join(sys.argv[1], '*.txt'))`
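Option 3 might look like this as a runnable sketch (the fallback to the current directory when no argument is given is my own addition, not in the original answer):

```
import glob
import os
import sys

# Take the directory to search from the command line; fall back to
# the current directory when no argument is supplied.
target_dir = sys.argv[1] if len(sys.argv) > 1 else '.'
arr = glob.glob(os.path.join(target_dir, '*.txt'))
```

Invoked as `python3.4 BillList/bill.py ~/Documents/BudgetAutomation/BillList`, the script then finds the same .txt files regardless of the launch directory.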
|
You don't need to create that range object to iterate over the glob result. You can just do it like this:
```
for file_path in arr:
with open(file_path) as text_file:
#...code below...
```
The reason the exception is raised, I guess, is that some text file contains lines that don't match the expected format. If a line looks like "foo|bar", splitting it yields only ["foo", "bar"], two values, so unpacking into three names fails.
If you want to avoid this exception, you could just catch it:
```
try:
bill_no, bill_name, trash = line.split('|', 2)
except ValueError:
    # You could do something more meaningful here; for now just skip the line
pass
```
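A slightly more defensive variant (my own sketch, not from the answer) checks the number of fields before unpacking, so malformed or empty lines are skipped explicitly:

```
def parse_line(line):
    """Return (bill_no, bill_name), or None when the line is malformed."""
    parts = line.split('|', 2)
    if len(parts) < 3:
        return None  # fewer than two '|' separators on this line
    bill_no, bill_name, _trash = parts
    return bill_no, bill_name
```

`parse_line('12|Electric|rest of line')` yields `('12', 'Electric')`, while `parse_line('foo|bar')` returns `None` instead of raising a ValueError.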
|