Dataset columns:

| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | list (length) | 3 | 3 |
| response_j | string (length) | 17 | 26k |
| response_k | string (length) | 26 | 26k |
55503673
Let's say I have a Python function whose single argument is a non-trivial type:

```
from typing import List, Dict

ArgType = List[Dict[str, int]]  # this could be any non-trivial type


def myfun(a: ArgType) -> None:
    ...
```

... and then I have a data structure that I have unpacked from a JSON source:

```
import json

data = json.loads(...)
```

My question is: How can I check *at runtime* that `data` has the correct type before using it as an argument to `myfun()`?

```
if not isCorrectType(data, ArgType):
    raise TypeError("data is not correct type")
else:
    myfun(data)
```
2019/04/03
[ "https://Stackoverflow.com/questions/55503673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28835/" ]
It's awkward that there's no built-in function for this, but [`typeguard`](https://pypi.org/project/typeguard/) comes with a convenient `check_type()` function:

```
>>> from typeguard import check_type
>>> from typing import List
>>> check_type("foo", [1, 2, "3"], List[int])
Traceback (most recent call last):
  ...
TypeError: type of foo[2] must be int; got str instead
```

For more, see <https://typeguard.readthedocs.io/en/latest/api.html#typeguard.check_type>
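For the question's concrete `ArgType`, the check drops straight into the guard from the question. A minimal sketch, assuming the three-argument `check_type(argname, value, expected_type)` signature demonstrated above (newer typeguard releases changed this API):

```
from typing import Dict, List

from typeguard import check_type

ArgType = List[Dict[str, int]]


def myfun(a: ArgType) -> None:
    ...


data = [{"a": 1}, {"b": 2}]
try:
    # raises TypeError if `data` does not match ArgType
    check_type("data", data, ArgType)
except TypeError as err:
    raise TypeError("data is not correct type") from err
else:
    myfun(data)
```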
You would have to check your nested type structure manually - type hints are not enforced. A check like this is best done using the **abstract base classes (ABCs)** from `collections.abc`, so users can pass their own derived classes that support the same access patterns as the default dict/list:

```
import collections.abc


def isCorrectType(data):
    if isinstance(data, collections.abc.Collection):
        for d in data:
            if isinstance(d, collections.abc.MutableMapping):
                for key in d:
                    if isinstance(key, str) and isinstance(d[key], int):
                        pass
                    else:
                        return False
            else:
                return False
    else:
        return False
    return True
```

Output:

```
print(isCorrectType([{"a": 2}]))       # True
print(isCorrectType([{2: 2}]))         # False
print(isCorrectType([{"a": "a"}]))     # False
print(isCorrectType([{"a": 2}, 1]))    # False
```

Docs:

* [collections.abc - Abstract Base Classes](https://docs.python.org/3/library/collections.abc.html)

Related:

* [What is duck typing?](https://stackoverflow.com/questions/4205130/what-is-duck-typing)

---

The other way round would be to follow the ["Ask forgiveness not permission" - explain](https://stackoverflow.com/questions/12265451/ask-forgiveness-not-permission-explain) paradigm and simply *use* your data in the form you want, wrapped in `try:/except:` in case it does not conform to what you wanted. This fits better with [duck typing](https://stackoverflow.com/questions/4205130/duck-typing) - and, similar to the ABC check, it still allows the consumer to provide you with classes derived from list/dict while everything keeps working...
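For comparison, the same ABC-based check can be collapsed into generator expressions. This is my sketch, equivalent in behavior to `isCorrectType` above:

```
import collections.abc


def is_correct_type(data):
    # A Collection of MutableMappings whose keys are all str
    # and whose values are all int.
    return (
        isinstance(data, collections.abc.Collection)
        and all(
            isinstance(d, collections.abc.MutableMapping)
            and all(isinstance(key, str) and isinstance(d[key], int) for key in d)
            for d in data
        )
    )


print(is_correct_type([{"a": 2}]))      # True
print(is_correct_type([{"a": 2}, 1]))   # False
```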
Validating a type annotation is a non-trivial task. Python does not do it automatically, and writing your own validator is difficult because the [`typing`](https://docs.python.org/3/library/typing.html) module doesn't offer much of a useful interface. (In fact the internals of the `typing` module have changed so much since its introduction in Python 3.5 that it's honestly a nightmare to work with.) Here's a type validator function taken from one of my personal projects (wall of code warning):

```
import inspect
import typing


__all__ = ['is_instance', 'is_subtype', 'python_type',
           'is_generic', 'is_base_generic', 'is_qualified_generic']


if hasattr(typing, '_GenericAlias'):
    # python 3.7
    def _is_generic(cls):
        if isinstance(cls, typing._GenericAlias):
            return True

        if isinstance(cls, typing._SpecialForm):
            return cls not in {typing.Any}

        return False

    def _is_base_generic(cls):
        if isinstance(cls, typing._GenericAlias):
            if cls.__origin__ in {typing.Generic, typing._Protocol}:
                return False

            if isinstance(cls, typing._VariadicGenericAlias):
                return True

            return len(cls.__parameters__) > 0

        if isinstance(cls, typing._SpecialForm):
            return cls._name in {'ClassVar', 'Union', 'Optional'}

        return False

    def _get_base_generic(cls):
        # subclasses of Generic will have their _name set to None, but
        # their __origin__ will point to the base generic
        if cls._name is None:
            return cls.__origin__
        else:
            return getattr(typing, cls._name)

    def _get_python_type(cls):
        """
        Like `python_type`, but only works with `typing` classes.
        """
        return cls.__origin__

    def _get_name(cls):
        return cls._name
else:
    # python <3.7
    if hasattr(typing, '_Union'):
        # python 3.6
        def _is_generic(cls):
            if isinstance(cls, (typing.GenericMeta, typing._Union,
                                typing._Optional, typing._ClassVar)):
                return True

            return False

        def _is_base_generic(cls):
            if isinstance(cls, (typing.GenericMeta, typing._Union)):
                return cls.__args__ in {None, ()}

            if isinstance(cls, typing._Optional):
                return True

            return False
    else:
        # python 3.5
        def _is_generic(cls):
            if isinstance(cls, (typing.GenericMeta, typing.UnionMeta,
                                typing.OptionalMeta, typing.CallableMeta,
                                typing.TupleMeta)):
                return True

            return False

        def _is_base_generic(cls):
            if isinstance(cls, typing.GenericMeta):
                return all(isinstance(arg, typing.TypeVar)
                           for arg in cls.__parameters__)

            if isinstance(cls, typing.UnionMeta):
                return cls.__union_params__ is None

            if isinstance(cls, typing.TupleMeta):
                return cls.__tuple_params__ is None

            if isinstance(cls, typing.CallableMeta):
                return cls.__args__ is None

            if isinstance(cls, typing.OptionalMeta):
                return True

            return False

    def _get_base_generic(cls):
        try:
            return cls.__origin__
        except AttributeError:
            pass

        name = type(cls).__name__
        if not name.endswith('Meta'):
            raise NotImplementedError("Cannot determine base of {}".format(cls))

        name = name[:-4]
        return getattr(typing, name)

    def _get_python_type(cls):
        """
        Like `python_type`, but only works with `typing` classes.
        """
        # Many classes actually reference their corresponding abstract base
        # class from the abc module instead of their builtin variant
        # (i.e. typing.List references MutableSequence instead of list).
        # We're interested in the builtin class (if any), so we'll traverse
        # the MRO and look for it there.
        for typ in cls.mro():
            if typ.__module__ == 'builtins' and typ is not object:
                return typ

        try:
            return cls.__extra__
        except AttributeError:
            pass

        if is_qualified_generic(cls):
            cls = get_base_generic(cls)

        if cls is typing.Tuple:
            return tuple

        raise NotImplementedError("Cannot determine python type of {}".format(cls))

    def _get_name(cls):
        try:
            return cls.__name__
        except AttributeError:
            return type(cls).__name__[1:]


if hasattr(typing.List, '__args__'):
    # python 3.6+
    def _get_subtypes(cls):
        subtypes = cls.__args__

        if get_base_generic(cls) is typing.Callable:
            if len(subtypes) != 2 or subtypes[0] is not ...:
                subtypes = (subtypes[:-1], subtypes[-1])

        return subtypes
else:
    # python 3.5
    def _get_subtypes(cls):
        if isinstance(cls, typing.CallableMeta):
            if cls.__args__ is None:
                return ()

            return cls.__args__, cls.__result__

        for name in ['__parameters__', '__union_params__', '__tuple_params__']:
            try:
                subtypes = getattr(cls, name)
                break
            except AttributeError:
                pass
        else:
            raise NotImplementedError("Cannot extract subtypes from {}".format(cls))

        subtypes = [typ for typ in subtypes if not isinstance(typ, typing.TypeVar)]
        return subtypes


def is_generic(cls):
    """
    Detects any kind of generic, for example `List` or `List[int]`. This
    includes "special" types like Union and Tuple - anything that's
    subscriptable, basically.
    """
    return _is_generic(cls)


def is_base_generic(cls):
    """
    Detects generic base classes, for example `List` (but not `List[int]`)
    """
    return _is_base_generic(cls)


def is_qualified_generic(cls):
    """
    Detects generics with arguments, for example `List[int]` (but not `List`)
    """
    return is_generic(cls) and not is_base_generic(cls)


def get_base_generic(cls):
    if not is_qualified_generic(cls):
        raise TypeError('{} is not a qualified Generic and thus has no base'.format(cls))

    return _get_base_generic(cls)


def get_subtypes(cls):
    return _get_subtypes(cls)


def _instancecheck_iterable(iterable, type_args):
    if len(type_args) != 1:
        raise TypeError("Generic iterables must have exactly 1 type argument; found {}".format(type_args))

    type_ = type_args[0]
    return all(is_instance(val, type_) for val in iterable)


def _instancecheck_mapping(mapping, type_args):
    return _instancecheck_itemsview(mapping.items(), type_args)


def _instancecheck_itemsview(itemsview, type_args):
    if len(type_args) != 2:
        raise TypeError("Generic mappings must have exactly 2 type arguments; found {}".format(type_args))

    key_type, value_type = type_args
    return all(is_instance(key, key_type) and is_instance(val, value_type)
               for key, val in itemsview)


def _instancecheck_tuple(tup, type_args):
    if len(tup) != len(type_args):
        return False

    return all(is_instance(val, type_) for val, type_ in zip(tup, type_args))


_ORIGIN_TYPE_CHECKERS = {}
for class_path, check_func in {
    # iterables
    'typing.Container': _instancecheck_iterable,
    'typing.Collection': _instancecheck_iterable,
    'typing.AbstractSet': _instancecheck_iterable,
    'typing.MutableSet': _instancecheck_iterable,
    'typing.Sequence': _instancecheck_iterable,
    'typing.MutableSequence': _instancecheck_iterable,
    'typing.ByteString': _instancecheck_iterable,
    'typing.Deque': _instancecheck_iterable,
    'typing.List': _instancecheck_iterable,
    'typing.Set': _instancecheck_iterable,
    'typing.FrozenSet': _instancecheck_iterable,
    'typing.KeysView': _instancecheck_iterable,
    'typing.ValuesView': _instancecheck_iterable,
    'typing.AsyncIterable': _instancecheck_iterable,

    # mappings
    'typing.Mapping': _instancecheck_mapping,
    'typing.MutableMapping': _instancecheck_mapping,
    'typing.MappingView': _instancecheck_mapping,
    'typing.ItemsView': _instancecheck_itemsview,
    'typing.Dict': _instancecheck_mapping,
    'typing.DefaultDict': _instancecheck_mapping,
    'typing.Counter': _instancecheck_mapping,
    'typing.ChainMap': _instancecheck_mapping,

    # other
    'typing.Tuple': _instancecheck_tuple,
}.items():
    try:
        cls = eval(class_path)
    except AttributeError:
        continue

    _ORIGIN_TYPE_CHECKERS[cls] = check_func


def _instancecheck_callable(value, type_):
    if not callable(value):
        return False

    if is_base_generic(type_):
        return True

    param_types, ret_type = get_subtypes(type_)
    sig = inspect.signature(value)

    missing_annotations = []

    if param_types is not ...:
        if len(param_types) != len(sig.parameters):
            return False

        # FIXME: add support for TypeVars

        # if any of the existing annotations don't match the type, we'll return False.
        # Then, if any annotations are missing, we'll throw an exception.
        for param, expected_type in zip(sig.parameters.values(), param_types):
            param_type = param.annotation
            if param_type is inspect.Parameter.empty:
                missing_annotations.append(param)
                continue

            if not is_subtype(param_type, expected_type):
                return False

    if sig.return_annotation is inspect.Signature.empty:
        missing_annotations.append('return')
    else:
        if not is_subtype(sig.return_annotation, ret_type):
            return False

    if missing_annotations:
        raise ValueError("Missing annotations: {}".format(missing_annotations))

    return True


def _instancecheck_union(value, type_):
    types = get_subtypes(type_)
    return any(is_instance(value, typ) for typ in types)


def _instancecheck_type(value, type_):
    # if it's not a class, return False
    if not isinstance(value, type):
        return False

    if is_base_generic(type_):
        return True

    type_args = get_subtypes(type_)
    if len(type_args) != 1:
        raise TypeError("Type must have exactly 1 type argument; found {}".format(type_args))

    return is_subtype(value, type_args[0])


_SPECIAL_INSTANCE_CHECKERS = {
    'Union': _instancecheck_union,
    'Callable': _instancecheck_callable,
    'Type': _instancecheck_type,
    'Any': lambda v, t: True,
}


def is_instance(obj, type_):
    if type_.__module__ == 'typing':
        if is_qualified_generic(type_):
            base_generic = get_base_generic(type_)
        else:
            base_generic = type_
        name = _get_name(base_generic)

        try:
            validator = _SPECIAL_INSTANCE_CHECKERS[name]
        except KeyError:
            pass
        else:
            return validator(obj, type_)

    if is_base_generic(type_):
        python_type = _get_python_type(type_)
        return isinstance(obj, python_type)

    if is_qualified_generic(type_):
        python_type = _get_python_type(type_)
        if not isinstance(obj, python_type):
            return False

        base = get_base_generic(type_)
        try:
            validator = _ORIGIN_TYPE_CHECKERS[base]
        except KeyError:
            raise NotImplementedError("Cannot perform isinstance check for type {}".format(type_))

        type_args = get_subtypes(type_)
        return validator(obj, type_args)

    return isinstance(obj, type_)


def is_subtype(sub_type, super_type):
    if not is_generic(sub_type):
        python_super = python_type(super_type)
        return issubclass(sub_type, python_super)

    # at this point we know `sub_type` is a generic
    python_sub = python_type(sub_type)
    python_super = python_type(super_type)
    if not issubclass(python_sub, python_super):
        return False

    # at this point we know that `sub_type`'s base type is a subtype of
    # `super_type`'s base type. If `super_type` isn't qualified, then there's
    # nothing more to do.
    if not is_generic(super_type) or is_base_generic(super_type):
        return True

    # at this point we know that `super_type` is a qualified generic... so if
    # `sub_type` isn't qualified, it can't be a subtype.
    if is_base_generic(sub_type):
        return False

    # at this point we know that both types are qualified generics, so we just
    # have to compare their sub-types.
    sub_args = get_subtypes(sub_type)
    super_args = get_subtypes(super_type)
    return all(is_subtype(sub_arg, super_arg)
               for sub_arg, super_arg in zip(sub_args, super_args))


def python_type(annotation):
    """
    Given a type annotation or a class as input, returns the corresponding python class.

    Examples:

    ::
        >>> python_type(typing.Dict)
        <class 'dict'>
        >>> python_type(typing.List[int])
        <class 'list'>
        >>> python_type(int)
        <class 'int'>
    """
    try:
        annotation.mro()
    except AttributeError:
        # if it doesn't have an mro method, it must be a weird typing object
        return _get_python_type(annotation)

    # the original project apparently dispatched on a custom `Type` class
    # here; hasattr() avoids a NameError for plain classes like `int`
    if hasattr(annotation, 'python_type'):
        return annotation.python_type
    elif annotation.__module__ == 'typing':
        return _get_python_type(annotation)
    else:
        return annotation
```

Demonstration:

```
>>> is_instance([{'x': 3}], List[Dict[str, int]])
True
>>> is_instance([{'x': 3}, {'y': 7.5}], List[Dict[str, int]])
False
```

(As far as I'm aware, this supports all Python versions, even the ones <3.5 using the [`typing` module backport](https://pypi.org/project/typing/).)
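To connect this back to the question, one possible way to use `is_instance` is a small decorator that validates every annotated argument before the call. This is my sketch, not part of the answer's project, and `enforce_types` is a hypothetical name:

```
import functools
import inspect
from typing import Dict, List


def enforce_types(func):
    # Check each annotated argument with is_instance() (defined above)
    # before running the wrapped function. Assumes plain positional or
    # keyword parameters (no *args/**kwargs).
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            annotation = sig.parameters[name].annotation
            if annotation is inspect.Parameter.empty:
                continue
            if not is_instance(value, annotation):
                raise TypeError("{} is not of type {}".format(name, annotation))
        return func(*args, **kwargs)

    return wrapper


@enforce_types
def myfun(a: List[Dict[str, int]]) -> None:
    ...


myfun([{"x": 3}])        # passes
myfun([{"x": "three"}])  # raises TypeError
```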
The common way to handle this is by making use of the fact that if whatever object you pass to `myfun` doesn't have the required functionality, a corresponding exception will be raised (usually `TypeError` or `AttributeError`). So you would do the following:

```
try:
    myfun(data)
except (TypeError, AttributeError) as err:
    # Fallback for invalid types here.
    ...
```

You indicate in your question that you would raise a `TypeError` if the passed object does not have the appropriate structure, but Python already does this for you. The critical question is how you would handle this case. You could also move the `try / except` block into `myfun`, if appropriate. When it comes to typing in Python you usually rely on [duck typing](https://en.wikipedia.org/wiki/Duck_typing): if the object has the required functionality then you don't care much about what type it is, as long as it serves the purpose.

Consider the following example. We just pass the data into the function and then get the `AttributeError` for free (which we can then except); no need for manual type checking:

```
>>> def myfun(data):
...     for x in data:
...         print(x.items())
...
>>> data = json.loads('[[["a", 1], ["b", 2]], [["c", 3], ["d", 4]]]')
>>> myfun(data)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in myfun
AttributeError: 'list' object has no attribute 'items'
```

In case you are concerned about the usefulness of the resulting error, you could still except and then re-raise a custom exception (or even change the exception's message):

```
try:
    myfun(data)
except (TypeError, AttributeError) as err:
    raise TypeError('Data has incorrect structure') from err

try:
    myfun(data)
except (TypeError, AttributeError) as err:
    err.args = ('Data has incorrect structure',)
    raise
```

When using third-party code one should always check the documentation for exceptions that will be raised. For example [`numpy.inner`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.inner.html) reports that it will raise a `ValueError` under certain circumstances. When using that function we don't need to perform any checks ourselves but can rely on the fact that it will raise the error if needed. When using third-party code for which it is not clear how it will behave in some corner cases, in my opinion it is easier and clearer to just hardcode a corresponding type checker (see below) instead of using a generic solution that works for any type. These cases should be rare anyway, and leaving a corresponding comment makes your fellow developers aware of the situation.

The `typing` library is for type hinting and as such it won't be checking the types at runtime. Sure, you could do this manually, but it is rather cumbersome:

```
def type_checker(data):
    return (
        isinstance(data, list)
        and all(isinstance(x, dict) for x in data)
        and all(isinstance(k, str) and isinstance(v, int)
                for x in data for k, v in x.items())
    )
```

This together with an appropriate comment is still an acceptable solution, and it is reusable wherever a similar data structure is expected. The intent is clear and the code is easily verifiable.
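As a small usage sketch (my addition, not the answer's): the hardcoded checker drops straight into the guard from the question, with `type_checker` and `myfun` as defined above:

```
import json

data = json.loads('[{"a": 1}, {"b": 2}]')

if not type_checker(data):
    raise TypeError("data is not correct type")
else:
    myfun(data)
```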
If all you want to do is JSON parsing, you should just use [pydantic](https://pydantic-docs.helpmanual.io/). But I encountered the same problem when I wanted to check the types of arbitrary Python objects, so I created a simpler solution than the ones in the other answers; it handles at least complex types with nested lists and dictionaries.

I created a gist with this method at <https://gist.github.com/ramraj07/f537bf9f80b4133c65dd76c958d4c461>

Some example uses of this method include:

```
from typing import List, Dict, Union, Type, Optional

check_type('a', str)
check_type({'a': 1}, Dict[str, int])
check_type([{'a': [1.0]}, 'ten'], List[Union[Dict[str, List[float]], str]])
check_type(None, Optional[str])
check_type('abc', Optional[str])
```

Here's the code for reference:

```
import typing


def check_type(obj: typing.Any, type_to_check: typing.Any, _external=True) -> None:
    try:
        if not hasattr(type_to_check, "_name"):
            # base case: a plain class, not a typing construct
            if not isinstance(obj, type_to_check):
                raise TypeError
            return

        # type_to_check is from the typing library
        type_name = type_to_check._name

        if type_to_check is typing.Any:
            pass
        elif type_name in ("List", "Tuple"):
            if (type_name == "List" and not isinstance(obj, list)) or (
                type_name == "Tuple" and not isinstance(obj, tuple)
            ):
                raise TypeError
            element_type = type_to_check.__args__[0]
            for element in obj:
                check_type(element, element_type, _external=False)
        elif type_name == "Dict":
            if not isinstance(obj, dict):
                raise TypeError
            if len(type_to_check.__args__) != 2:
                raise NotImplementedError(
                    "check_type can only accept Dict typing with separate "
                    "annotations for key and values"
                )
            key_type, value_type = type_to_check.__args__
            for key, value in obj.items():
                check_type(key, key_type, _external=False)
                check_type(value, value_type, _external=False)
        elif type_name is None and type_to_check.__origin__ is typing.Union:
            type_options = type_to_check.__args__
            no_option_matched = True
            for type_option in type_options:
                try:
                    check_type(obj, type_option, _external=False)
                    no_option_matched = False
                    break
                except TypeError:
                    pass
            if no_option_matched:
                raise TypeError
        else:
            raise NotImplementedError(
                f"check_type method currently does not support checking "
                f"typing of form '{type_name}'"
            )
    except TypeError:
        if _external:
            raise TypeError(
                f"Object {repr(obj)} is of type {_construct_type_description(obj)} "
                f"when {type_to_check} was expected"
            )
        raise TypeError()


def _construct_type_description(obj) -> str:
    def get_types_in_iterable(iterable) -> str:
        types = {_construct_type_description(element) for element in iterable}
        return types.pop() if len(types) == 1 else f"Union[{','.join(types)}]"

    if isinstance(obj, list):
        return f"List[{get_types_in_iterable(obj)}]"
    elif isinstance(obj, dict):
        key_types = get_types_in_iterable(obj.keys())
        val_types = get_types_in_iterable(obj.values())
        return f"Dict[{key_types}, {val_types}]"
    else:
        return type(obj).__name__
```
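Since the answer recommends pydantic for the JSON-parsing case without showing it, here is a minimal sketch, assuming pydantic v1's `parse_obj_as` helper (pydantic v2 replaced it with `TypeAdapter`). Note that pydantic coerces compatible values by default (e.g. the string `"1"` would pass as an `int`), which is a looser notion of validity than a strict `isinstance` check:

```
from typing import Dict, List

from pydantic import ValidationError, parse_obj_as

ArgType = List[Dict[str, int]]

try:
    data = parse_obj_as(ArgType, [{"a": 1}, {"b": "two"}])  # "two" fails
except ValidationError as err:
    raise TypeError("data is not correct type") from err
```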
55,503,673
Let's say I have a python function whose single argument is a non-trivial type: ``` from typing import List, Dict ArgType = List[Dict[str, int]] # this could be any non-trivial type def myfun(a: ArgType) -> None: ... ``` ... and then I have a data structure that I have unpacked from a JSON source: ``` import json data = json.loads(...) ``` My question is: How can I check *at runtime* that `data` has the correct type to be used as an argument to `myfun()` before using it as an argument for `myfun()`? ``` if not isCorrectType(data, ArgType): raise TypeError("data is not correct type") else: myfun(data) ```
2019/04/03
[ "https://Stackoverflow.com/questions/55503673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28835/" ]
Validating a type annotation is a non-trivial task. Python does not do it automatically, and writing your own validator is difficult because the [`typing`](https://docs.python.org/3/library/typing.html) module doesn't offer much of a useful interface. (In fact the internals of the `typing` module have changed so much since its introduction in python 3.5 that it's honestly a nightmare to work with.) Here's a type validator function taken from one of my personal projects (wall of code warning): ``` import inspect import typing __all__ = ['is_instance', 'is_subtype', 'python_type', 'is_generic', 'is_base_generic', 'is_qualified_generic'] if hasattr(typing, '_GenericAlias'): # python 3.7 def _is_generic(cls): if isinstance(cls, typing._GenericAlias): return True if isinstance(cls, typing._SpecialForm): return cls not in {typing.Any} return False def _is_base_generic(cls): if isinstance(cls, typing._GenericAlias): if cls.__origin__ in {typing.Generic, typing._Protocol}: return False if isinstance(cls, typing._VariadicGenericAlias): return True return len(cls.__parameters__) > 0 if isinstance(cls, typing._SpecialForm): return cls._name in {'ClassVar', 'Union', 'Optional'} return False def _get_base_generic(cls): # subclasses of Generic will have their _name set to None, but # their __origin__ will point to the base generic if cls._name is None: return cls.__origin__ else: return getattr(typing, cls._name) def _get_python_type(cls): """ Like `python_type`, but only works with `typing` classes. """ return cls.__origin__ def _get_name(cls): return cls._name else: # python <3.7 if hasattr(typing, '_Union'): # python 3.6 def _is_generic(cls): if isinstance(cls, (typing.GenericMeta, typing._Union, typing._Optional, typing._ClassVar)): return True return False def _is_base_generic(cls): if isinstance(cls, (typing.GenericMeta, typing._Union)): return cls.__args__ in {None, ()} if isinstance(cls, typing._Optional): return True return False else: # python 3.5 def _is_generic(cls): if isinstance(cls, (typing.GenericMeta, typing.UnionMeta, typing.OptionalMeta, typing.CallableMeta, typing.TupleMeta)): return True return False def _is_base_generic(cls): if isinstance(cls, typing.GenericMeta): return all(isinstance(arg, typing.TypeVar) for arg in cls.__parameters__) if isinstance(cls, typing.UnionMeta): return cls.__union_params__ is None if isinstance(cls, typing.TupleMeta): return cls.__tuple_params__ is None if isinstance(cls, typing.CallableMeta): return cls.__args__ is None if isinstance(cls, typing.OptionalMeta): return True return False def _get_base_generic(cls): try: return cls.__origin__ except AttributeError: pass name = type(cls).__name__ if not name.endswith('Meta'): raise NotImplementedError("Cannot determine base of {}".format(cls)) name = name[:-4] return getattr(typing, name) def _get_python_type(cls): """ Like `python_type`, but only works with `typing` classes. """ # Many classes actually reference their corresponding abstract base class from the abc module # instead of their builtin variant (i.e. typing.List references MutableSequence instead of list). # We're interested in the builtin class (if any), so we'll traverse the MRO and look for it there. 
for typ in cls.mro(): if typ.__module__ == 'builtins' and typ is not object: return typ try: return cls.__extra__ except AttributeError: pass if is_qualified_generic(cls): cls = get_base_generic(cls) if cls is typing.Tuple: return tuple raise NotImplementedError("Cannot determine python type of {}".format(cls)) def _get_name(cls): try: return cls.__name__ except AttributeError: return type(cls).__name__[1:] if hasattr(typing.List, '__args__'): # python 3.6+ def _get_subtypes(cls): subtypes = cls.__args__ if get_base_generic(cls) is typing.Callable: if len(subtypes) != 2 or subtypes[0] is not ...: subtypes = (subtypes[:-1], subtypes[-1]) return subtypes else: # python 3.5 def _get_subtypes(cls): if isinstance(cls, typing.CallableMeta): if cls.__args__ is None: return () return cls.__args__, cls.__result__ for name in ['__parameters__', '__union_params__', '__tuple_params__']: try: subtypes = getattr(cls, name) break except AttributeError: pass else: raise NotImplementedError("Cannot extract subtypes from {}".format(cls)) subtypes = [typ for typ in subtypes if not isinstance(typ, typing.TypeVar)] return subtypes def is_generic(cls): """ Detects any kind of generic, for example `List` or `List[int]`. This includes "special" types like Union and Tuple - anything that's subscriptable, basically. """ return _is_generic(cls) def is_base_generic(cls): """ Detects generic base classes, for example `List` (but not `List[int]`) """ return _is_base_generic(cls) def is_qualified_generic(cls): """ Detects generics with arguments, for example `List[int]` (but not `List`) """ return is_generic(cls) and not is_base_generic(cls) def get_base_generic(cls): if not is_qualified_generic(cls): raise TypeError('{} is not a qualified Generic and thus has no base'.format(cls)) return _get_base_generic(cls) def get_subtypes(cls): return _get_subtypes(cls) def _instancecheck_iterable(iterable, type_args): if len(type_args) != 1: raise TypeError("Generic iterables must have exactly 1 type argument; found {}".format(type_args)) type_ = type_args[0] return all(is_instance(val, type_) for val in iterable) def _instancecheck_mapping(mapping, type_args): return _instancecheck_itemsview(mapping.items(), type_args) def _instancecheck_itemsview(itemsview, type_args): if len(type_args) != 2: raise TypeError("Generic mappings must have exactly 2 type arguments; found {}".format(type_args)) key_type, value_type = type_args return all(is_instance(key, key_type) and is_instance(val, value_type) for key, val in itemsview) def _instancecheck_tuple(tup, type_args): if len(tup) != len(type_args): return False return all(is_instance(val, type_) for val, type_ in zip(tup, type_args)) _ORIGIN_TYPE_CHECKERS = {} for class_path, check_func in { # iterables 'typing.Container': _instancecheck_iterable, 'typing.Collection': _instancecheck_iterable, 'typing.AbstractSet': _instancecheck_iterable, 'typing.MutableSet': _instancecheck_iterable, 'typing.Sequence': _instancecheck_iterable, 'typing.MutableSequence': _instancecheck_iterable, 'typing.ByteString': _instancecheck_iterable, 'typing.Deque': _instancecheck_iterable, 'typing.List': _instancecheck_iterable, 'typing.Set': _instancecheck_iterable, 'typing.FrozenSet': _instancecheck_iterable, 'typing.KeysView': _instancecheck_iterable, 'typing.ValuesView': _instancecheck_iterable, 'typing.AsyncIterable': _instancecheck_iterable, # mappings 'typing.Mapping': _instancecheck_mapping, 'typing.MutableMapping': _instancecheck_mapping, 'typing.MappingView': _instancecheck_mapping, 'typing.ItemsView': 
_instancecheck_itemsview, 'typing.Dict': _instancecheck_mapping, 'typing.DefaultDict': _instancecheck_mapping, 'typing.Counter': _instancecheck_mapping, 'typing.ChainMap': _instancecheck_mapping, # other 'typing.Tuple': _instancecheck_tuple, }.items(): try: cls = eval(class_path) except AttributeError: continue _ORIGIN_TYPE_CHECKERS[cls] = check_func def _instancecheck_callable(value, type_): if not callable(value): return False if is_base_generic(type_): return True param_types, ret_type = get_subtypes(type_) sig = inspect.signature(value) missing_annotations = [] if param_types is not ...: if len(param_types) != len(sig.parameters): return False # FIXME: add support for TypeVars # if any of the existing annotations don't match the type, we'll return False. # Then, if any annotations are missing, we'll throw an exception. for param, expected_type in zip(sig.parameters.values(), param_types): param_type = param.annotation if param_type is inspect.Parameter.empty: missing_annotations.append(param) continue if not is_subtype(param_type, expected_type): return False if sig.return_annotation is inspect.Signature.empty: missing_annotations.append('return') else: if not is_subtype(sig.return_annotation, ret_type): return False if missing_annotations: raise ValueError("Missing annotations: {}".format(missing_annotations)) return True def _instancecheck_union(value, type_): types = get_subtypes(type_) return any(is_instance(value, typ) for typ in types) def _instancecheck_type(value, type_): # if it's not a class, return False if not isinstance(value, type): return False if is_base_generic(type_): return True type_args = get_subtypes(type_) if len(type_args) != 1: raise TypeError("Type must have exactly 1 type argument; found {}".format(type_args)) return is_subtype(value, type_args[0]) _SPECIAL_INSTANCE_CHECKERS = { 'Union': _instancecheck_union, 'Callable': _instancecheck_callable, 'Type': _instancecheck_type, 'Any': lambda v, t: True, } def is_instance(obj, type_): if type_.__module__ == 'typing': if is_qualified_generic(type_): base_generic = get_base_generic(type_) else: base_generic = type_ name = _get_name(base_generic) try: validator = _SPECIAL_INSTANCE_CHECKERS[name] except KeyError: pass else: return validator(obj, type_) if is_base_generic(type_): python_type = _get_python_type(type_) return isinstance(obj, python_type) if is_qualified_generic(type_): python_type = _get_python_type(type_) if not isinstance(obj, python_type): return False base = get_base_generic(type_) try: validator = _ORIGIN_TYPE_CHECKERS[base] except KeyError: raise NotImplementedError("Cannot perform isinstance check for type {}".format(type_)) type_args = get_subtypes(type_) return validator(obj, type_args) return isinstance(obj, type_) def is_subtype(sub_type, super_type): if not is_generic(sub_type): python_super = python_type(super_type) return issubclass(sub_type, python_super) # at this point we know `sub_type` is a generic python_sub = python_type(sub_type) python_super = python_type(super_type) if not issubclass(python_sub, python_super): return False # at this point we know that `sub_type`'s base type is a subtype of `super_type`'s base type. # If `super_type` isn't qualified, then there's nothing more to do. if not is_generic(super_type) or is_base_generic(super_type): return True # at this point we know that `super_type` is a qualified generic... so if `sub_type` isn't # qualified, it can't be a subtype. 
if is_base_generic(sub_type): return False # at this point we know that both types are qualified generics, so we just have to # compare their sub-types. sub_args = get_subtypes(sub_type) super_args = get_subtypes(super_type) return all(is_subtype(sub_arg, super_arg) for sub_arg, super_arg in zip(sub_args, super_args)) def python_type(annotation): """ Given a type annotation or a class as input, returns the corresponding python class. Examples: :: >>> python_type(typing.Dict) <class 'dict'> >>> python_type(typing.List[int]) <class 'list'> >>> python_type(int) <class 'int'> """ try: mro = annotation.mro() except AttributeError: # if it doesn't have an mro method, it must be a weird typing object return _get_python_type(annotation) if Type in mro: return annotation.python_type elif annotation.__module__ == 'typing': return _get_python_type(annotation) else: return annotation ``` Demonstration: ``` >>> is_instance([{'x': 3}], List[Dict[str, int]]) True >>> is_instance([{'x': 3}, {'y': 7.5}], List[Dict[str, int]]) False ``` (As far as I'm aware, this supports all python versions, even the ones <3.5 using the [`typing` module backport](https://pypi.org/project/typing/).)
It's awkward that there's no built-in function for this but [`typeguard`](https://pypi.org/project/typeguard/) comes with a convenient `check_type()` function: ``` >>> from typeguard import check_type >>> from typing import List >>> check_type("foo", [1,2,"3"], List[int]) Traceback (most recent call last): ... TypeError: type of foo[2] must be int; got str instead type of foo[2] must be int; got str instead ``` For more see: <https://typeguard.readthedocs.io/en/latest/api.html#typeguard.check_type>
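Applied to the question's setup, a minimal sketch might look like this (using the three-argument `check_type(argname, value, expected_type)` form shown above, which is the typeguard 2.x API; typeguard 3+ dropped the name argument):

```python
from typing import Dict, List

from typeguard import check_type

ArgType = List[Dict[str, int]]  # the annotation from the question

data = [{"a": 1}, {"b": 2}]  # stand-in for json.loads(...)

try:
    check_type("data", data, ArgType)  # raises TypeError on a mismatch
except TypeError as exc:
    raise TypeError("data is not correct type") from exc
# reaching this point means data conforms to ArgType
```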
55,503,673
Let's say I have a python function whose single argument is a non-trivial type: ``` from typing import List, Dict ArgType = List[Dict[str, int]] # this could be any non-trivial type def myfun(a: ArgType) -> None: ... ``` ... and then I have a data structure that I have unpacked from a JSON source: ``` import json data = json.loads(...) ``` My question is: How can I check *at runtime* that `data` has the correct type to be used as an argument to `myfun()` before using it as an argument for `myfun()`? ``` if not isCorrectType(data, ArgType): raise TypeError("data is not correct type") else: myfun(data) ```
2019/04/03
[ "https://Stackoverflow.com/questions/55503673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28835/" ]
First of all, even though I think you are aware of it, for the sake of completeness: the typing library contains types for **type hints**. These type hints are used by IDEs to check if your code is somewhat sane, and they also serve as documentation for what types a developer expects. To check whether a variable is of a given type, we have to use the [isinstance](https://docs.python.org/3/library/functions.html#isinstance) function. Conveniently, we can pass the typing library's types to it directly, e.g. ``` from typing import List value = [] isinstance(value, List) ``` However, for nested structures such as `List[Dict[str, int]]` we cannot use this directly, because, funnily enough, you get a TypeError. What you have to do is: 1. Check if the initial value is a list 2. Check if each item of the list is of type dict 3. Check if each key of each dict is in fact a string and if each value is in fact an int (a sketch of these three steps is shown below) Unfortunately, for strict checking Python is a bit cumbersome. However, do be aware that Python makes use of duck typing: if it is like a duck and behaves like a duck, then it definitely is a duck.
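As a minimal sketch of the three steps above (my own illustration, assuming the `List[Dict[str, int]]` shape from the question):

```python
def is_list_of_str_int_dicts(data):
    # 1. the outer value must be a list
    if not isinstance(data, list):
        return False
    for item in data:
        # 2. every element must be a dict
        if not isinstance(item, dict):
            return False
        # 3. every key must be a str and every value an int
        # (note: bool also passes as int here, since bool subclasses int)
        for key, value in item.items():
            if not (isinstance(key, str) and isinstance(value, int)):
                return False
    return True

print(is_list_of_str_int_dicts([{"a": 1}]))    # True
print(is_list_of_str_int_dicts([{"a": "1"}]))  # False
```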
If all you want to do is json-parsing, you should just use [pydantic](https://pydantic-docs.helpmanual.io/). But, I encountered the same problem where I wanted to check the type of python objects, so I created a simpler solution than in other answers that handles at least complex types with nested lists and dictionaries. I created a gist with this method at <https://gist.github.com/ramraj07/f537bf9f80b4133c65dd76c958d4c461> Some example uses of this method include: ``` from typing import List, Dict, Union, Type, Optional check_type('a', str) check_type({'a': 1}, Dict[str, int]) check_type([{'a': [1.0]}, 'ten'], List[Union[Dict[str, List[float]], str]]) check_type(None, Optional[str]) check_type('abc', Optional[str]) ``` Here's the code below for reference: ``` import typing def check_type(obj: typing.Any, type_to_check: typing.Any, _external=True) -> None: try: if not hasattr(type_to_check, "_name"): # base-case if not isinstance(obj, type_to_check): raise TypeError return # type_to_check is from typing library type_name = type_to_check._name if type_to_check is typing.Any: pass elif type_name in ("List", "Tuple"): if (type_name == "List" and not isinstance(obj, list)) or ( type_name == "Tuple" and not isinstance(obj, tuple) ): raise TypeError element_type = type_to_check.__args__[0] for element in obj: check_type(element, element_type, _external=False) elif type_name == "Dict": if not isinstance(obj, dict): raise TypeError if len(type_to_check.__args__) != 2: raise NotImplementedError( "check_type can only accept Dict typing with separate annotations for key and values" ) key_type, value_type = type_to_check.__args__ for key, value in obj.items(): check_type(key, key_type, _external=False) check_type(value, value_type, _external=False) elif type_name is None and type_to_check.__origin__ is typing.Union: type_options = type_to_check.__args__ no_option_matched = True for type_option in type_options: try: check_type(obj, type_option, _external=False) no_option_matched = False break except TypeError: pass if no_option_matched: raise TypeError else: raise NotImplementedError( f"check_type method currently does not support checking typing of form '{type_name}'" ) except TypeError: if _external: raise TypeError( f"Object {repr(obj)} is of type {_construct_type_description(obj)} " f"when {type_to_check} was expected" ) raise TypeError() def _construct_type_description(obj) -> str: def get_types_in_iterable(iterable) -> str: types = {_construct_type_description(element) for element in iterable} return types.pop() if len(types) == 1 else f"Union[{','.join(types)}]" if isinstance(obj, list): return f"List[{get_types_in_iterable(obj)}]" elif isinstance(obj, dict): key_types = get_types_in_iterable(obj.keys()) val_types = get_types_in_iterable(obj.values()) return f"Dict[{key_types}, {val_types}]" else: return type(obj).__name__ ```
55,503,673
Let's say I have a python function whose single argument is a non-trivial type: ``` from typing import List, Dict ArgType = List[Dict[str, int]] # this could be any non-trivial type def myfun(a: ArgType) -> None: ... ``` ... and then I have a data structure that I have unpacked from a JSON source: ``` import json data = json.loads(...) ``` My question is: How can I check *at runtime* that `data` has the correct type to be used as an argument to `myfun()` before using it as an argument for `myfun()`? ``` if not isCorrectType(data, ArgType): raise TypeError("data is not correct type") else: myfun(data) ```
2019/04/03
[ "https://Stackoverflow.com/questions/55503673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28835/" ]
Validating a type annotation is a non-trivial task. Python does not do it automatically, and writing your own validator is difficult because the [`typing`](https://docs.python.org/3/library/typing.html) module doesn't offer much of a useful interface. (In fact the internals of the `typing` module have changed so much since its introduction in python 3.5 that it's honestly a nightmare to work with.) Here's a type validator function taken from one of my personal projects (wall of code warning): ``` import inspect import typing __all__ = ['is_instance', 'is_subtype', 'python_type', 'is_generic', 'is_base_generic', 'is_qualified_generic'] if hasattr(typing, '_GenericAlias'): # python 3.7 def _is_generic(cls): if isinstance(cls, typing._GenericAlias): return True if isinstance(cls, typing._SpecialForm): return cls not in {typing.Any} return False def _is_base_generic(cls): if isinstance(cls, typing._GenericAlias): if cls.__origin__ in {typing.Generic, typing._Protocol}: return False if isinstance(cls, typing._VariadicGenericAlias): return True return len(cls.__parameters__) > 0 if isinstance(cls, typing._SpecialForm): return cls._name in {'ClassVar', 'Union', 'Optional'} return False def _get_base_generic(cls): # subclasses of Generic will have their _name set to None, but # their __origin__ will point to the base generic if cls._name is None: return cls.__origin__ else: return getattr(typing, cls._name) def _get_python_type(cls): """ Like `python_type`, but only works with `typing` classes. """ return cls.__origin__ def _get_name(cls): return cls._name else: # python <3.7 if hasattr(typing, '_Union'): # python 3.6 def _is_generic(cls): if isinstance(cls, (typing.GenericMeta, typing._Union, typing._Optional, typing._ClassVar)): return True return False def _is_base_generic(cls): if isinstance(cls, (typing.GenericMeta, typing._Union)): return cls.__args__ in {None, ()} if isinstance(cls, typing._Optional): return True return False else: # python 3.5 def _is_generic(cls): if isinstance(cls, (typing.GenericMeta, typing.UnionMeta, typing.OptionalMeta, typing.CallableMeta, typing.TupleMeta)): return True return False def _is_base_generic(cls): if isinstance(cls, typing.GenericMeta): return all(isinstance(arg, typing.TypeVar) for arg in cls.__parameters__) if isinstance(cls, typing.UnionMeta): return cls.__union_params__ is None if isinstance(cls, typing.TupleMeta): return cls.__tuple_params__ is None if isinstance(cls, typing.CallableMeta): return cls.__args__ is None if isinstance(cls, typing.OptionalMeta): return True return False def _get_base_generic(cls): try: return cls.__origin__ except AttributeError: pass name = type(cls).__name__ if not name.endswith('Meta'): raise NotImplementedError("Cannot determine base of {}".format(cls)) name = name[:-4] return getattr(typing, name) def _get_python_type(cls): """ Like `python_type`, but only works with `typing` classes. """ # Many classes actually reference their corresponding abstract base class from the abc module # instead of their builtin variant (i.e. typing.List references MutableSequence instead of list). # We're interested in the builtin class (if any), so we'll traverse the MRO and look for it there. 
for typ in cls.mro(): if typ.__module__ == 'builtins' and typ is not object: return typ try: return cls.__extra__ except AttributeError: pass if is_qualified_generic(cls): cls = get_base_generic(cls) if cls is typing.Tuple: return tuple raise NotImplementedError("Cannot determine python type of {}".format(cls)) def _get_name(cls): try: return cls.__name__ except AttributeError: return type(cls).__name__[1:] if hasattr(typing.List, '__args__'): # python 3.6+ def _get_subtypes(cls): subtypes = cls.__args__ if get_base_generic(cls) is typing.Callable: if len(subtypes) != 2 or subtypes[0] is not ...: subtypes = (subtypes[:-1], subtypes[-1]) return subtypes else: # python 3.5 def _get_subtypes(cls): if isinstance(cls, typing.CallableMeta): if cls.__args__ is None: return () return cls.__args__, cls.__result__ for name in ['__parameters__', '__union_params__', '__tuple_params__']: try: subtypes = getattr(cls, name) break except AttributeError: pass else: raise NotImplementedError("Cannot extract subtypes from {}".format(cls)) subtypes = [typ for typ in subtypes if not isinstance(typ, typing.TypeVar)] return subtypes def is_generic(cls): """ Detects any kind of generic, for example `List` or `List[int]`. This includes "special" types like Union and Tuple - anything that's subscriptable, basically. """ return _is_generic(cls) def is_base_generic(cls): """ Detects generic base classes, for example `List` (but not `List[int]`) """ return _is_base_generic(cls) def is_qualified_generic(cls): """ Detects generics with arguments, for example `List[int]` (but not `List`) """ return is_generic(cls) and not is_base_generic(cls) def get_base_generic(cls): if not is_qualified_generic(cls): raise TypeError('{} is not a qualified Generic and thus has no base'.format(cls)) return _get_base_generic(cls) def get_subtypes(cls): return _get_subtypes(cls) def _instancecheck_iterable(iterable, type_args): if len(type_args) != 1: raise TypeError("Generic iterables must have exactly 1 type argument; found {}".format(type_args)) type_ = type_args[0] return all(is_instance(val, type_) for val in iterable) def _instancecheck_mapping(mapping, type_args): return _instancecheck_itemsview(mapping.items(), type_args) def _instancecheck_itemsview(itemsview, type_args): if len(type_args) != 2: raise TypeError("Generic mappings must have exactly 2 type arguments; found {}".format(type_args)) key_type, value_type = type_args return all(is_instance(key, key_type) and is_instance(val, value_type) for key, val in itemsview) def _instancecheck_tuple(tup, type_args): if len(tup) != len(type_args): return False return all(is_instance(val, type_) for val, type_ in zip(tup, type_args)) _ORIGIN_TYPE_CHECKERS = {} for class_path, check_func in { # iterables 'typing.Container': _instancecheck_iterable, 'typing.Collection': _instancecheck_iterable, 'typing.AbstractSet': _instancecheck_iterable, 'typing.MutableSet': _instancecheck_iterable, 'typing.Sequence': _instancecheck_iterable, 'typing.MutableSequence': _instancecheck_iterable, 'typing.ByteString': _instancecheck_iterable, 'typing.Deque': _instancecheck_iterable, 'typing.List': _instancecheck_iterable, 'typing.Set': _instancecheck_iterable, 'typing.FrozenSet': _instancecheck_iterable, 'typing.KeysView': _instancecheck_iterable, 'typing.ValuesView': _instancecheck_iterable, 'typing.AsyncIterable': _instancecheck_iterable, # mappings 'typing.Mapping': _instancecheck_mapping, 'typing.MutableMapping': _instancecheck_mapping, 'typing.MappingView': _instancecheck_mapping, 'typing.ItemsView': 
_instancecheck_itemsview, 'typing.Dict': _instancecheck_mapping, 'typing.DefaultDict': _instancecheck_mapping, 'typing.Counter': _instancecheck_mapping, 'typing.ChainMap': _instancecheck_mapping, # other 'typing.Tuple': _instancecheck_tuple, }.items(): try: cls = eval(class_path) except AttributeError: continue _ORIGIN_TYPE_CHECKERS[cls] = check_func def _instancecheck_callable(value, type_): if not callable(value): return False if is_base_generic(type_): return True param_types, ret_type = get_subtypes(type_) sig = inspect.signature(value) missing_annotations = [] if param_types is not ...: if len(param_types) != len(sig.parameters): return False # FIXME: add support for TypeVars # if any of the existing annotations don't match the type, we'll return False. # Then, if any annotations are missing, we'll throw an exception. for param, expected_type in zip(sig.parameters.values(), param_types): param_type = param.annotation if param_type is inspect.Parameter.empty: missing_annotations.append(param) continue if not is_subtype(param_type, expected_type): return False if sig.return_annotation is inspect.Signature.empty: missing_annotations.append('return') else: if not is_subtype(sig.return_annotation, ret_type): return False if missing_annotations: raise ValueError("Missing annotations: {}".format(missing_annotations)) return True def _instancecheck_union(value, type_): types = get_subtypes(type_) return any(is_instance(value, typ) for typ in types) def _instancecheck_type(value, type_): # if it's not a class, return False if not isinstance(value, type): return False if is_base_generic(type_): return True type_args = get_subtypes(type_) if len(type_args) != 1: raise TypeError("Type must have exactly 1 type argument; found {}".format(type_args)) return is_subtype(value, type_args[0]) _SPECIAL_INSTANCE_CHECKERS = { 'Union': _instancecheck_union, 'Callable': _instancecheck_callable, 'Type': _instancecheck_type, 'Any': lambda v, t: True, } def is_instance(obj, type_): if type_.__module__ == 'typing': if is_qualified_generic(type_): base_generic = get_base_generic(type_) else: base_generic = type_ name = _get_name(base_generic) try: validator = _SPECIAL_INSTANCE_CHECKERS[name] except KeyError: pass else: return validator(obj, type_) if is_base_generic(type_): python_type = _get_python_type(type_) return isinstance(obj, python_type) if is_qualified_generic(type_): python_type = _get_python_type(type_) if not isinstance(obj, python_type): return False base = get_base_generic(type_) try: validator = _ORIGIN_TYPE_CHECKERS[base] except KeyError: raise NotImplementedError("Cannot perform isinstance check for type {}".format(type_)) type_args = get_subtypes(type_) return validator(obj, type_args) return isinstance(obj, type_) def is_subtype(sub_type, super_type): if not is_generic(sub_type): python_super = python_type(super_type) return issubclass(sub_type, python_super) # at this point we know `sub_type` is a generic python_sub = python_type(sub_type) python_super = python_type(super_type) if not issubclass(python_sub, python_super): return False # at this point we know that `sub_type`'s base type is a subtype of `super_type`'s base type. # If `super_type` isn't qualified, then there's nothing more to do. if not is_generic(super_type) or is_base_generic(super_type): return True # at this point we know that `super_type` is a qualified generic... so if `sub_type` isn't # qualified, it can't be a subtype. 
if is_base_generic(sub_type): return False # at this point we know that both types are qualified generics, so we just have to # compare their sub-types. sub_args = get_subtypes(sub_type) super_args = get_subtypes(super_type) return all(is_subtype(sub_arg, super_arg) for sub_arg, super_arg in zip(sub_args, super_args)) def python_type(annotation): """ Given a type annotation or a class as input, returns the corresponding python class. Examples: :: >>> python_type(typing.Dict) <class 'dict'> >>> python_type(typing.List[int]) <class 'list'> >>> python_type(int) <class 'int'> """ try: mro = annotation.mro() except AttributeError: # if it doesn't have an mro method, it must be a weird typing object return _get_python_type(annotation) if Type in mro: return annotation.python_type elif annotation.__module__ == 'typing': return _get_python_type(annotation) else: return annotation ``` Demonstration: ``` >>> is_instance([{'x': 3}], List[Dict[str, int]]) True >>> is_instance([{'x': 3}, {'y': 7.5}], List[Dict[str, int]]) False ``` (As far as I'm aware, this supports all python versions, even the ones <3.5 using the [`typing` module backport](https://pypi.org/project/typing/).)
First of all, even though I think you are aware of it, for the sake of completeness: the typing library contains types for **type hints**. These type hints are used by IDEs to check if your code is somewhat sane, and they also serve as documentation for what types a developer expects. To check whether a variable is of a given type, we have to use the [isinstance](https://docs.python.org/3/library/functions.html#isinstance) function. Conveniently, we can pass the typing library's types to it directly, e.g. ``` from typing import List value = [] isinstance(value, List) ``` However, for nested structures such as `List[Dict[str, int]]` we cannot use this directly, because, funnily enough, you get a TypeError. What you have to do is: 1. Check if the initial value is a list 2. Check if each item of the list is of type dict 3. Check if each key of each dict is in fact a string and if each value is in fact an int Unfortunately, for strict checking Python is a bit cumbersome. However, do be aware that Python makes use of duck typing: if it is like a duck and behaves like a duck, then it definitely is a duck.
55,503,673
Let's say I have a python function whose single argument is a non-trivial type: ``` from typing import List, Dict ArgType = List[Dict[str, int]] # this could be any non-trivial type def myfun(a: ArgType) -> None: ... ``` ... and then I have a data structure that I have unpacked from a JSON source: ``` import json data = json.loads(...) ``` My question is: How can I check *at runtime* that `data` has the correct type to be used as an argument to `myfun()` before using it as an argument for `myfun()`? ``` if not isCorrectType(data, ArgType): raise TypeError("data is not correct type") else: myfun(data) ```
2019/04/03
[ "https://Stackoverflow.com/questions/55503673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28835/" ]
It's awkward that there's no built-in function for this but [`typeguard`](https://pypi.org/project/typeguard/) comes with a convenient `check_type()` function: ``` >>> from typeguard import check_type >>> from typing import List >>> check_type("foo", [1,2,"3"], List[int]) Traceback (most recent call last): ... TypeError: type of foo[2] must be int; got str instead type of foo[2] must be int; got str instead ``` For more see: <https://typeguard.readthedocs.io/en/latest/api.html#typeguard.check_type>
If all you want to do is json-parsing, you should just use [pydantic](https://pydantic-docs.helpmanual.io/). But, I encountered the same problem where I wanted to check the type of python objects, so I created a simpler solution than in other answers that handles at least complex types with nested lists and dictionaries. I created a gist with this method at <https://gist.github.com/ramraj07/f537bf9f80b4133c65dd76c958d4c461> Some example uses of this method include: ``` from typing import List, Dict, Union, Type, Optional check_type('a', str) check_type({'a': 1}, Dict[str, int]) check_type([{'a': [1.0]}, 'ten'], List[Union[Dict[str, List[float]], str]]) check_type(None, Optional[str]) check_type('abc', Optional[str]) ``` Here's the code below for reference: ``` import typing def check_type(obj: typing.Any, type_to_check: typing.Any, _external=True) -> None: try: if not hasattr(type_to_check, "_name"): # base-case if not isinstance(obj, type_to_check): raise TypeError return # type_to_check is from typing library type_name = type_to_check._name if type_to_check is typing.Any: pass elif type_name in ("List", "Tuple"): if (type_name == "List" and not isinstance(obj, list)) or ( type_name == "Tuple" and not isinstance(obj, tuple) ): raise TypeError element_type = type_to_check.__args__[0] for element in obj: check_type(element, element_type, _external=False) elif type_name == "Dict": if not isinstance(obj, dict): raise TypeError if len(type_to_check.__args__) != 2: raise NotImplementedError( "check_type can only accept Dict typing with separate annotations for key and values" ) key_type, value_type = type_to_check.__args__ for key, value in obj.items(): check_type(key, key_type, _external=False) check_type(value, value_type, _external=False) elif type_name is None and type_to_check.__origin__ is typing.Union: type_options = type_to_check.__args__ no_option_matched = True for type_option in type_options: try: check_type(obj, type_option, _external=False) no_option_matched = False break except TypeError: pass if no_option_matched: raise TypeError else: raise NotImplementedError( f"check_type method currently does not support checking typing of form '{type_name}'" ) except TypeError: if _external: raise TypeError( f"Object {repr(obj)} is of type {_construct_type_description(obj)} " f"when {type_to_check} was expected" ) raise TypeError() def _construct_type_description(obj) -> str: def get_types_in_iterable(iterable) -> str: types = {_construct_type_description(element) for element in iterable} return types.pop() if len(types) == 1 else f"Union[{','.join(types)}]" if isinstance(obj, list): return f"List[{get_types_in_iterable(obj)}]" elif isinstance(obj, dict): key_types = get_types_in_iterable(obj.keys()) val_types = get_types_in_iterable(obj.values()) return f"Dict[{key_types}, {val_types}]" else: return type(obj).__name__ ```
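To illustrate the pydantic route mentioned at the top of this answer, here is a short sketch (assuming the pydantic v1 API, where `parse_obj_as` validates an object against a typing annotation; note that v1 also coerces values, e.g. `"3"` to `3`, so this is laxer than a strict isinstance check):

```python
from typing import Dict, List

from pydantic import ValidationError, parse_obj_as  # pydantic v1 API

data = [{"a": 1}, {"b": "not an int"}]

try:
    parse_obj_as(List[Dict[str, int]], data)
except ValidationError as exc:
    print(exc)  # reports that the value under "b" is not a valid integer
```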
55,503,673
Let's say I have a python function whose single argument is a non-trivial type: ``` from typing import List, Dict ArgType = List[Dict[str, int]] # this could be any non-trivial type def myfun(a: ArgType) -> None: ... ``` ... and then I have a data structure that I have unpacked from a JSON source: ``` import json data = json.loads(...) ``` My question is: How can I check *at runtime* that `data` has the correct type to be used as an argument to `myfun()` before using it as an argument for `myfun()`? ``` if not isCorrectType(data, ArgType): raise TypeError("data is not correct type") else: myfun(data) ```
2019/04/03
[ "https://Stackoverflow.com/questions/55503673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28835/" ]
You would have to check your nested type structure manually - the type hints are not enforced. Checking like this is best done using **ABCs (abstract base classes)** - so users can provide their own derived classes that support the same access patterns as the default dict/list: ``` import collections.abc def isCorrectType(data): if isinstance(data, collections.abc.Collection): for d in data: if isinstance(d,collections.abc.MutableMapping): for key in d: if isinstance(key,str) and isinstance(d[key],int): pass else: return False else: return False else: return False return True ``` Output: ``` print ( isCorrectType( [ {"a":2} ] )) # True print ( isCorrectType( [ {2:2} ] )) # False print ( isCorrectType( [ {"a":"a"} ] )) # False print ( isCorrectType( [ {"a":2},1 ] )) # False ``` Docs: * [collections.abc - abstract base classes](https://docs.python.org/3/library/collections.abc.html) Related: * [What is duck typing?](https://stackoverflow.com/questions/4205130/what-is-duck-typing) --- The other way round would be to follow the ["Ask forgiveness not permission" - explain](https://stackoverflow.com/questions/12265451/ask-forgiveness-not-permission-explain) paradigm and simply *use* your data in the form you want, with `try:/except:` around it in case it does not conform to what you wanted (a minimal sketch of this follows below). This fits better with [What is duck typing?](https://stackoverflow.com/questions/4205130/duck-typing) - and allows (similar to ABC checking) the consumer to provide you with classes derived from list/dict while everything still works...
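A minimal sketch of that "ask forgiveness" route (my own illustration, reusing the `myfun`/`data` names from the question):

```python
def myfun(a):
    # consume the data as if it were List[Dict[str, int]];
    # a wrong shape surfaces as TypeError or AttributeError here
    return sum(value for record in a for value in record.values())

data = [{"a": 2}, {"b": 3}]

try:
    print(myfun(data))  # 5
except (TypeError, AttributeError) as exc:
    raise TypeError("data is not correct type") from exc
```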
If all you want to do is json-parsing, you should just use [pydantic](https://pydantic-docs.helpmanual.io/). But, I encountered the same problem where I wanted to check the type of python objects, so I created a simpler solution than in other answers that handles at least complex types with nested lists and dictionaries. I created a gist with this method at <https://gist.github.com/ramraj07/f537bf9f80b4133c65dd76c958d4c461> Some example uses of this method include: ``` from typing import List, Dict, Union, Type, Optional check_type('a', str) check_type({'a': 1}, Dict[str, int]) check_type([{'a': [1.0]}, 'ten'], List[Union[Dict[str, List[float]], str]]) check_type(None, Optional[str]) check_type('abc', Optional[str]) ``` Here's the code below for reference: ``` import typing def check_type(obj: typing.Any, type_to_check: typing.Any, _external=True) -> None: try: if not hasattr(type_to_check, "_name"): # base-case if not isinstance(obj, type_to_check): raise TypeError return # type_to_check is from typing library type_name = type_to_check._name if type_to_check is typing.Any: pass elif type_name in ("List", "Tuple"): if (type_name == "List" and not isinstance(obj, list)) or ( type_name == "Tuple" and not isinstance(obj, tuple) ): raise TypeError element_type = type_to_check.__args__[0] for element in obj: check_type(element, element_type, _external=False) elif type_name == "Dict": if not isinstance(obj, dict): raise TypeError if len(type_to_check.__args__) != 2: raise NotImplementedError( "check_type can only accept Dict typing with separate annotations for key and values" ) key_type, value_type = type_to_check.__args__ for key, value in obj.items(): check_type(key, key_type, _external=False) check_type(value, value_type, _external=False) elif type_name is None and type_to_check.__origin__ is typing.Union: type_options = type_to_check.__args__ no_option_matched = True for type_option in type_options: try: check_type(obj, type_option, _external=False) no_option_matched = False break except TypeError: pass if no_option_matched: raise TypeError else: raise NotImplementedError( f"check_type method currently does not support checking typing of form '{type_name}'" ) except TypeError: if _external: raise TypeError( f"Object {repr(obj)} is of type {_construct_type_description(obj)} " f"when {type_to_check} was expected" ) raise TypeError() def _construct_type_description(obj) -> str: def get_types_in_iterable(iterable) -> str: types = {_construct_type_description(element) for element in iterable} return types.pop() if len(types) == 1 else f"Union[{','.join(types)}]" if isinstance(obj, list): return f"List[{get_types_in_iterable(obj)}]" elif isinstance(obj, dict): key_types = get_types_in_iterable(obj.keys()) val_types = get_types_in_iterable(obj.values()) return f"Dict[{key_types}, {val_types}]" else: return type(obj).__name__ ```
72,709,963
Consider i have 5 files in 5 different location. Example = fileA in XYZ location fileB in ZXC location fileC in XBN location so on I want to check if these files are actually saved in that location if they are not re run the code above that saves the file. Ex: ``` if: fileA, fileB so on are present in their particular location the proceed with code further else: re run the file saving code above ``` How do i do this in python i am not able to figure out.
2022/06/22
[ "https://Stackoverflow.com/questions/72709963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19189066/" ]
You can store all your files with their locations in a list and then iterate all locations for existence then you can decide further what to do. A Python example: ```py from os.path import exists # all files to check in different locations locations = [ '/some/location/xyz/fileA', '/other/location/fileB', '/yet/another/location/fileC', ] # iterate to check each file existance status = [exists(location) for location in locations] # check the status of all files # if any of the files doesn't exist, else will be called if(all(status)): print('All files are present.') else: print('Any or all files do not exist.') ```
I'm not a python dev, and I just wanted to try to contribute to the community. The first answer is way better than mine, but I'd like to share my solution for that question. You could use `sys` to pass the files' names, inside a `try` block to handle when the files are not found. If you run the script from one location while the files are at another location, you would need to provide their path. `check.py ../test1.txt ../test2.txt ../test3.txt` ``` #!/usr/bin/python3 import os.path import sys try: paths = (sys.argv[1], sys.argv[2], sys.argv[3]) for fpath in paths: if os.path.isfile(fpath): print("Exists on system") except IndexError: file1 = "test1.txt" file2 = "test2.txt" file3 = "test3.txt" sys.stdout = open(file1, "w") print("Saving content to files") sys.stdout = open(file2, "w") print("Saving content to files") sys.stdout = open(file3, "w") print("Saving content to files") ``` The `exception` part would then "save" the files, by creating new ones, writing whatever content you desire.
19,001,826
It starts with a url on the web (ex: <http://python.org>), fetches the web-page corresponding to that url, and parses all the links on that page into a repository of links. Next, it fetches the contents of any url from the repository just created, parses the links from this new content into the repository, and continues this process for all links in the repository until stopped or until a given number of links has been fetched. How can I do that using Python and Scrapy? I am able to scrape all links in a webpage, but how do I perform it recursively in depth?
2013/09/25
[ "https://Stackoverflow.com/questions/19001826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3001533/" ]
`sfence` doesn't block StoreLoad reordering. Unless there are any NT stores in flight, it's architecturally a no-op. Stores already wait for older stores to commit before they themselves commit to L1d and become globally visible, because x86 doesn't allow StoreStore reordering. (Except for NT stores / stores to WC memory) **For seq\_cst you need a full barrier to flush the store buffer / make sure all older stores are globally visible *before* any later loads.** See <https://preshing.com/20120515/memory-reordering-caught-in-the-act/> for an example where failing to use `mfence` in practice leads to non-sequentially-consistent behaviour, i.e. memory reordering. --- As you found, it is possible to map seq\_cst to x86 asm with full barriers on every seq\_cst load instead of on every seq\_cst store / RMW. In that case you wouldn't need any barrier instructions on stores (so they'd have release semantics), but you'd need `mfence` before every `atomic::load(seq_cst)`.
You don't need an `mfence`; `sfence` does indeed suffice. In fact, you never need `lfence` in x86 unless you are dealing with a device. But Intel (and I think AMD) has (or at least had) a single implementation shared by `mfence` and `sfence` (namely, flushing the store buffer), so there was no performance advantage to using the weaker `sfence`. BTW, note that you don't have to flush after every write to a shared variable; you only have to flush between a write and a subsequent read of a different shared variable.
66,105,974
I am new to regex and was wondering how the following could be implemented. For example, I have a css file with `url('Inter.ttf')` and my python program would convert this url to `url('user/Inter.ttf')`. However, I run into a problem when I try to avoid double replacement. So how can I use regex to tell python the difference between `url('Inter.ttf')` and `url('/hello/Inter.ttf')` when using re.sub to replace them. I have tried `re.sub(r"\boriginalurl.ttf\b", "/user/" + originalurl.ttf, file)`. But this seems to not work. So how would I tell python to replace the whole word `'Inter.ttf'` with `'/user/Inter.ttf'` and `'/hello/Inter.ttf'` with `'/user/hello/Inter.ttf'`.
2021/02/08
[ "https://Stackoverflow.com/questions/66105974", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15168919/" ]
You can use a `look-around` method to insert the `/user/` dynamically: ``` (?<=url\(')/?(?=(?:.*?Inter\.ttf)'\)) ``` And then use `re.sub` to replace with `/user/`: ``` import re strings = ["url('Inter.ttf')", "url('/hello/Inter.ttf')"] p = re.compile(r"(?<=url\(')/?(?=(?:.*?Inter\.ttf)'\))") for s in strings: s = re.sub(p, "/user/", s) print(s) ``` ``` url('/user/Inter.ttf') url('/user/hello/Inter.ttf') ``` Pattern Explanation ------------------- `(?<=url\(')`: Positive lookbehind; matches strings that come after a string like `url('`. `/?`: Matches **zero** or **one** forward slashes `/`. This is important for matching paths like `/hello/Inter.ttf` because it starts with the `/`. This is going to be selected and replaced with the ending forward slash in the replacement string, `/user/`. `(?=(?:.*?Inter\.ttf)'\))`: Positive lookahead; matches strings that come **before** a string that ends with `Inter.ttf')`. I suggest playing around with it on <https://regex101.com>, selecting the `Substitution` method on the left-hand-side. Edit ---- If you want to match multiple fonts, you can just remove the `Inter.ttf` part of the regex: ``` (?<=url\(')/?(?=(?:.*?)'\)) ``` Alternatively, if you wanted it to append `/user/` to paths that had a file extension, you can replace `Inter\.ttf` with `\.\w{3}`, which effectively matches 3 of any character in `[a-zA-Z0-9_]`: ``` (?<=url\(')/?(?=(?:.*?\.\w{3})'\)) ```
a simple way to do that is like this without regex: ``` fin = open("input.css", "rt") fout = open("out.css", "wt") for line in fin: if "'Inter.ttf'" in line: fout.write(line.replace("'Inter.ttf'", "'/user/Inter.ttf'")) elif "'/hello/Inter.ttf'" in line: fout.write(line.replace("'/hello/Inter.ttf'", "'/user/hello/Inter.ttf'")) else: fout.write(line) ```
25,623,841
I am using Python 2.7.5. When raising an int to the power of zero you would expect to see either -1 or 1 depending on whether the numerator was positive or negative. Typing directly into the python interpreter yields the following: ``` >>> -2418**0 -1 ``` This is the correct answer. However, when I type this into the same interpreter: ``` >>> result = -2481 >>> result**0 1 ``` the answer is 1 instead of -1. Using the `complex` builtin [as suggested here](https://stackoverflow.com/questions/17747124/valueerror-negative-number-cannot-be-raised-to-a-fractional-power) has no effect on the outcome. Why is this happening?
2014/09/02
[ "https://Stackoverflow.com/questions/25623841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1139011/" ]
Why would you expect it to be -1? 1 is (according to the definition I was taught) the correct answer. The first gives the incorrect answer due to operator precedence. ``` (-1)**0 = 1 -1**0 = -(1**0) = -(1) = -1 ``` See Wikipedia for the definition of the 0 exponent: <http://en.wikipedia.org/wiki/Exponentiation#Zero_exponent>
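The precedence difference is easy to verify directly in the interpreter (standard Python, no extra libraries):

```python
>>> -2418**0       # parsed as -(2418**0)
-1
>>> (-2418)**0     # here the base itself is negative
1
>>> result = -2418
>>> result**0      # equivalent to (-2418)**0
1
```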
`-2418**0` is interpreted (mathematically) as `-1 * (2418**0)` so the answer is `-1 * 1 = -1`. Exponentiation happens before multiplication. In your second example you bind the variable `result` to `-2481`. The next line takes the variable `result` and raises it to the power of `0` so you get `1`. In other words you're doing `(-2481)**0`. `n**0` is `1` for any real number `n`... except `0`: technically `0**0` is undefined, although Python will still return `0**0 == 1`.
25,623,841
I am using Python 2.7.5. When raising an int to the power of zero you would expect to see either -1 or 1 depending on whether the numerator was positive or negative. Typing directly into the python interpreter yields the following: ``` >>> -2418**0 -1 ``` This is the correct answer. However, when I type this into the same interpreter: ``` >>> result = -2481 >>> result**0 1 ``` the answer is 1 instead of -1. Using the `complex` builtin [as suggested here](https://stackoverflow.com/questions/17747124/valueerror-negative-number-cannot-be-raised-to-a-fractional-power) has no effect on the outcome. Why is this happening?
2014/09/02
[ "https://Stackoverflow.com/questions/25623841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1139011/" ]
`-2418**0` is interpreted (mathematically) as `-1 * (2418**0)` so the answer is `-1 * 1 = -1`. Exponentiation happens before multiplication. In your second example you bind the variable `result` to `-2481`. The next line takes the variable `result` and raises it to the power of `0` so you get `1`. In other words you're doing `(-2481)**0`. `n**0` is `1` for any real number `n`... except `0`: technically `0**0` is undefined, although Python will still return `0**0 == 1`.
Your maths is wrong. `(-2481)**0` should be 1. According to Wikipedia, `Any nonzero number raised by the exponent 0 is 1`.
25,623,841
I am using Python 2.7.5. When raising an int to the power of zero you would expect to see either -1 or 1 depending on whether the numerator was positive or negative. Typing directly into the python interpreter yields the following: ``` >>> -2418**0 -1 ``` This is the correct answer. However, when I type this into the same interpreter: ``` >>> result = -2481 >>> result**0 1 ``` the answer is 1 instead of -1. Using the `complex` builtin [as suggested here](https://stackoverflow.com/questions/17747124/valueerror-negative-number-cannot-be-raised-to-a-fractional-power) has no effect on the outcome. Why is this happening?
2014/09/02
[ "https://Stackoverflow.com/questions/25623841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1139011/" ]
Why would you expect it to be -1? 1 is (according to the definition I was taught) the correct answer. The first gives the incorrect answer due to operator precedence. ``` (-1)**0 = 1 -1**0 = -(1**0) = -(1) = -1 ``` See Wikipedia for the definition of the 0 exponent: <http://en.wikipedia.org/wiki/Exponentiation#Zero_exponent>
Your maths is wrong. `(-2481)**0` should be 1. According to Wikipedia, `Any nonzero number raised by the exponent 0 is 1`.
24,648,132
so for some reason this error([Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions), keeps occurring. when i try to use registration in Django. I am using windows 7 and pycharm IDE with django 1.65. I have already tried different ports to run server (8001 & 8008) and also adding permission in windows firewall and kasperesky firewall for python.exe and pycharm. Any suggestion. ``` Environment: Request Method: POST Request URL: http://127.0.0.1:8001/accounts/register/ Django Version: 1.6.5 Python Version: 2.7.8 Installed Applications: ('django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'profiles', 'south', 'registration', 'PIL', 'stripe') Installed Middleware: ('django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware') Traceback: File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\handlers\base.py" in get_response 112. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\views\generic\base.py" in view 69. return self.dispatch(request, *args, **kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\views.py" in dispatch 79. return super(RegistrationView, self).dispatch(request, *args, **kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\views\generic\base.py" in dispatch 87. return handler(request, *args, **kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\views.py" in post 35. return self.form_valid(request, form) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\views.py" in form_valid 82. new_user = self.register(request, **form.cleaned_data) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\backends\default\views.py" in register 80. password, site) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\db\transaction.py" in inner 431. return func(*args, **kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\models.py" in create_inactive_user 91. registration_profile.send_activation_email(site) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\models.py" in send_activation_email 270. self.user.email_user(subject, message, settings.DEFAULT_FROM_EMAIL) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\contrib\auth\models.py" in email_user 413. send_mail(subject, message, from_email, [self.email]) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\mail\__init__.py" in send_mail 50. connection=connection).send() File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\mail\message.py" in send 274. 
return self.get_connection(fail_silently).send_messages([self]) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\mail\backends\smtp.py" in send_messages 87. new_conn_created = self.open() File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\mail\backends\smtp.py" in open 48. local_hostname=DNS_NAME.get_fqdn()) File "C:\Python27\Lib\smtplib.py" in __init__ 251. (code, msg) = self.connect(host, port) File "C:\Python27\Lib\smtplib.py" in connect 311. self.sock = self._get_socket(host, port, self.timeout) File "C:\Python27\Lib\smtplib.py" in _get_socket 286. return socket.create_connection((host, port), timeout) File "C:\Python27\Lib\socket.py" in create_connection 571. raise err Exception Type: error at /accounts/register/ Exception Value: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions ```
2014/07/09
[ "https://Stackoverflow.com/questions/24648132", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3818829/" ]
The problem has to do with your email server setup. Instead of figuring out what to fix, just set your `EMAIL_BACKEND` in `settings.py` to the following: ``` if DEBUG: EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' ``` This way, any email sent by Django will be shown in the console instead of attempting delivery. You can then continue developing your application. Having emails printed on the console is good if you are developing, but it can be a headache if your application sends a lot of emails. A better solution is to install [mailcatcher](http://mailcatcher.me/). This application will create a local mail server for testing and, as a bonus, provides a web interface where you can view the emails being sent by your server: ![mailcatcher](https://i.stack.imgur.com/yRL91.png) It is a Ruby application, and as you are on Windows, I would suggest using [rubyinstaller](http://rubyinstaller.org/) to help with gem installation. The website also shows you how to configure Django: ``` if DEBUG: EMAIL_HOST = '127.0.0.1' EMAIL_HOST_USER = '' EMAIL_HOST_PASSWORD = '' EMAIL_PORT = 1025 EMAIL_USE_TLS = False ```
This has nothing to do with your webserver ports; it is about the host and port that `smtplib` is trying to connect to in order to send an email. These are controlled by `settings.EMAIL_HOST` and `settings.EMAIL_PORT`. There are other settings too; see the [documentation](https://docs.djangoproject.com/en/1.7/topics/email/#smtp-backend) for details on how to set up email properly.
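For reference, a typical SMTP setup in `settings.py` looks like the sketch below; the host, port, and credentials are placeholders, not values taken from the question:

```python
# settings.py -- SMTP email backend (placeholder values)
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.example.com'
EMAIL_PORT = 587
EMAIL_HOST_USER = 'user@example.com'
EMAIL_HOST_PASSWORD = 'app-password'
EMAIL_USE_TLS = True
```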
56,553,902
I have been trying to extract stock prices using pandas\_datareader. data, but I kept receiving an error message. I have checked other threads relating to this problem and, I have tried downloading data reader using conda install DataReader and also tried pip install DataReader. ``` import pandas as pd import datetime from pandas import Series,DataFrame import pandas_datareader.data as web pandas_datareader.__version__ '0.6.0' start=datetime.datetime(2009,1,1) end=datetime.datetime(2019,1,1) df=web.DataReader( 'AT&T Inc T',start,end) df.head() ``` My expected result should be a data frame with all the features and rows of the stock. Below is the error message I got: Please, how do I fix this problem? Thanks. ``` <ipython-input-45-d75bedd6b2dd> in <module> 1 start=datetime.datetime(2009,1,1) 2 end=datetime.datetime(2019,1,1) ----> 3 df=web.DataReader( 'AT&T Inc T',start,end) 4 df.head() ~\Anaconda3\lib\site-packages\pandas_datareader\data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key) 456 else: 457 msg = "data_source=%r is not implemented" % data_source --> 458 raise NotImplementedError(msg) 459 460 NotImplementedError: data_source=datetime.datetime(2009, 1, 1, 0, 0) is not implemented ```
2019/06/12
[ "https://Stackoverflow.com/questions/56553902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11290634/" ]
`for number in d:` will iterate through the keys of the dictionary, not values. You can use ``` for number in d.values(): ``` or ``` for name, number in d.items(): ``` if you also need the names.
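A quick illustration of the three iteration styles (dicts preserve insertion order in Python 3.7+, so the output order below matches the literal):

```python
>>> d = {'Tom': '93', 'Hannah': '83', 'Jack': '94'}
>>> list(d)           # plain iteration yields keys
['Tom', 'Hannah', 'Jack']
>>> list(d.values())  # values
['93', '83', '94']
>>> list(d.items())   # (key, value) pairs
[('Tom', '93'), ('Hannah', '83'), ('Jack', '94')]
```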
You need to iterate over the key-value pairs in the dict with `items()`: ``` def overNum(): d = {'Tom':'93', 'Hannah':'83', 'Jack':'94'} count = 0 for name, number in d.items(): if int(number) >= 90: count += 1 print(count) ``` Also, there are some issues with the `if` statement that I fixed.
56,553,902
I have been trying to extract stock prices using pandas\_datareader. data, but I kept receiving an error message. I have checked other threads relating to this problem and, I have tried downloading data reader using conda install DataReader and also tried pip install DataReader. ``` import pandas as pd import datetime from pandas import Series,DataFrame import pandas_datareader.data as web pandas_datareader.__version__ '0.6.0' start=datetime.datetime(2009,1,1) end=datetime.datetime(2019,1,1) df=web.DataReader( 'AT&T Inc T',start,end) df.head() ``` My expected result should be a data frame with all the features and rows of the stock. Below is the error message I got: Please, how do I fix this problem? Thanks. ``` <ipython-input-45-d75bedd6b2dd> in <module> 1 start=datetime.datetime(2009,1,1) 2 end=datetime.datetime(2019,1,1) ----> 3 df=web.DataReader( 'AT&T Inc T',start,end) 4 df.head() ~\Anaconda3\lib\site-packages\pandas_datareader\data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key) 456 else: 457 msg = "data_source=%r is not implemented" % data_source --> 458 raise NotImplementedError(msg) 459 460 NotImplementedError: data_source=datetime.datetime(2009, 1, 1, 0, 0) is not implemented ```
2019/06/12
[ "https://Stackoverflow.com/questions/56553902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11290634/" ]
`for number in d:` will iterate through the keys of the dictionary, not values. You can use ``` for number in d.values(): ``` or ``` for name, number in d.items(): ``` if you also need the names.
You can collect the items in a list that are greater or equal to 90 then take the [`len()`](https://docs.python.org/3/library/functions.html#len): ``` >>> d = {'Luke':'93', 'Hannah':'83', 'Jack':'94'} >>> len([v for v in d.values() if int(v) >= 90]) 2 ``` Or using [`sum()`](https://docs.python.org/3/library/functions.html#sum) to sum booleans without building a new list, as suggested by [@Primusa](https://stackoverflow.com/questions/56553892/counting-values-in-a-dictionary-that-are-greater-than-a-certain-number-in-python/56553927#comment99689037_56553927) in the comments: ``` >>> d = {'Luke':'93', 'Hannah':'83', 'Jack':'94'} >>> sum(int(i) >= 90 for i in d.values()) 2 ```
56,553,902
I have been trying to extract stock prices using pandas\_datareader. data, but I kept receiving an error message. I have checked other threads relating to this problem and, I have tried downloading data reader using conda install DataReader and also tried pip install DataReader. ``` import pandas as pd import datetime from pandas import Series,DataFrame import pandas_datareader.data as web pandas_datareader.__version__ '0.6.0' start=datetime.datetime(2009,1,1) end=datetime.datetime(2019,1,1) df=web.DataReader( 'AT&T Inc T',start,end) df.head() ``` My expected result should be a data frame with all the features and rows of the stock. Below is the error message I got: Please, how do I fix this problem? Thanks. ``` <ipython-input-45-d75bedd6b2dd> in <module> 1 start=datetime.datetime(2009,1,1) 2 end=datetime.datetime(2019,1,1) ----> 3 df=web.DataReader( 'AT&T Inc T',start,end) 4 df.head() ~\Anaconda3\lib\site-packages\pandas_datareader\data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key) 456 else: 457 msg = "data_source=%r is not implemented" % data_source --> 458 raise NotImplementedError(msg) 459 460 NotImplementedError: data_source=datetime.datetime(2009, 1, 1, 0, 0) is not implemented ```
2019/06/12
[ "https://Stackoverflow.com/questions/56553902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11290634/" ]
`for number in d:` will iterate through the keys of the dictionary, not values. You can use ``` for number in d.values(): ``` or ``` for name, number in d.items(): ``` if you also need the names.
You could use `filter`: ``` len(list(filter(lambda x: int(x[1]) > 90, d.items()))) ```
56,553,902
I have been trying to extract stock prices using pandas\_datareader. data, but I kept receiving an error message. I have checked other threads relating to this problem and, I have tried downloading data reader using conda install DataReader and also tried pip install DataReader. ``` import pandas as pd import datetime from pandas import Series,DataFrame import pandas_datareader.data as web pandas_datareader.__version__ '0.6.0' start=datetime.datetime(2009,1,1) end=datetime.datetime(2019,1,1) df=web.DataReader( 'AT&T Inc T',start,end) df.head() ``` My expected result should be a data frame with all the features and rows of the stock. Below is the error message I got: Please, how do I fix this problem? Thanks. ``` <ipython-input-45-d75bedd6b2dd> in <module> 1 start=datetime.datetime(2009,1,1) 2 end=datetime.datetime(2019,1,1) ----> 3 df=web.DataReader( 'AT&T Inc T',start,end) 4 df.head() ~\Anaconda3\lib\site-packages\pandas_datareader\data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key) 456 else: 457 msg = "data_source=%r is not implemented" % data_source --> 458 raise NotImplementedError(msg) 459 460 NotImplementedError: data_source=datetime.datetime(2009, 1, 1, 0, 0) is not implemented ```
2019/06/12
[ "https://Stackoverflow.com/questions/56553902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11290634/" ]
You can collect the items in a list that are greater or equal to 90 then take the [`len()`](https://docs.python.org/3/library/functions.html#len): ``` >>> d = {'Luke':'93', 'Hannah':'83', 'Jack':'94'} >>> len([v for v in d.values() if int(v) >= 90]) 2 ``` Or using [`sum()`](https://docs.python.org/3/library/functions.html#sum) to sum booleans without building a new list, as suggested by [@Primusa](https://stackoverflow.com/questions/56553892/counting-values-in-a-dictionary-that-are-greater-than-a-certain-number-in-python/56553927#comment99689037_56553927) in the comments: ``` >>> d = {'Luke':'93', 'Hannah':'83', 'Jack':'94'} >>> sum(int(i) >= 90 for i in d.values()) 2 ```
You need to iterate over the key-value pairs in the dict with `items()`: ``` def overNum(): d = {'Tom':'93', 'Hannah':'83', 'Jack':'94'} count = 0 for name, number in d.items(): if int(number) >= 90: count += 1 print(count) ``` Also, there are some issues with the `if` statement that I fixed.
56,553,902
I have been trying to extract stock prices using pandas\_datareader. data, but I kept receiving an error message. I have checked other threads relating to this problem and, I have tried downloading data reader using conda install DataReader and also tried pip install DataReader. ``` import pandas as pd import datetime from pandas import Series,DataFrame import pandas_datareader.data as web pandas_datareader.__version__ '0.6.0' start=datetime.datetime(2009,1,1) end=datetime.datetime(2019,1,1) df=web.DataReader( 'AT&T Inc T',start,end) df.head() ``` My expected result should be a data frame with all the features and rows of the stock. Below is the error message I got: Please, how do I fix this problem? Thanks. ``` <ipython-input-45-d75bedd6b2dd> in <module> 1 start=datetime.datetime(2009,1,1) 2 end=datetime.datetime(2019,1,1) ----> 3 df=web.DataReader( 'AT&T Inc T',start,end) 4 df.head() ~\Anaconda3\lib\site-packages\pandas_datareader\data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key) 456 else: 457 msg = "data_source=%r is not implemented" % data_source --> 458 raise NotImplementedError(msg) 459 460 NotImplementedError: data_source=datetime.datetime(2009, 1, 1, 0, 0) is not implemented ```
2019/06/12
[ "https://Stackoverflow.com/questions/56553902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11290634/" ]
You could use `filter`: ``` len(list(filter(lambda x: int(x[1]) > 90, d.items()))) ```
You need to iterate over the key-value pairs in the dict with `items()`: ``` def overNum(): d = {'Tom':'93', 'Hannah':'83', 'Jack':'94'} count = 0 for name, number in d.items(): if int(number) >= 90: count += 1 print(count) ``` Also, there are some issues with the `if` statement that I fixed.
56,553,902
I have been trying to extract stock prices using pandas\_datareader. data, but I kept receiving an error message. I have checked other threads relating to this problem and, I have tried downloading data reader using conda install DataReader and also tried pip install DataReader. ``` import pandas as pd import datetime from pandas import Series,DataFrame import pandas_datareader.data as web pandas_datareader.__version__ '0.6.0' start=datetime.datetime(2009,1,1) end=datetime.datetime(2019,1,1) df=web.DataReader( 'AT&T Inc T',start,end) df.head() ``` My expected result should be a data frame with all the features and rows of the stock. Below is the error message I got: Please, how do I fix this problem? Thanks. ``` <ipython-input-45-d75bedd6b2dd> in <module> 1 start=datetime.datetime(2009,1,1) 2 end=datetime.datetime(2019,1,1) ----> 3 df=web.DataReader( 'AT&T Inc T',start,end) 4 df.head() ~\Anaconda3\lib\site-packages\pandas_datareader\data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key) 456 else: 457 msg = "data_source=%r is not implemented" % data_source --> 458 raise NotImplementedError(msg) 459 460 NotImplementedError: data_source=datetime.datetime(2009, 1, 1, 0, 0) is not implemented ```
2019/06/12
[ "https://Stackoverflow.com/questions/56553902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11290634/" ]
You can collect the values that are greater than or equal to 90 in a list, then take the [`len()`](https://docs.python.org/3/library/functions.html#len):

```
>>> d = {'Luke':'93', 'Hannah':'83', 'Jack':'94'}
>>> len([v for v in d.values() if int(v) >= 90])
2
```

Or use [`sum()`](https://docs.python.org/3/library/functions.html#sum) to sum booleans without building a new list, as suggested by [@Primusa](https://stackoverflow.com/questions/56553892/counting-values-in-a-dictionary-that-are-greater-than-a-certain-number-in-python/56553927#comment99689037_56553927) in the comments:

```
>>> d = {'Luke':'93', 'Hannah':'83', 'Jack':'94'}
>>> sum(int(i) >= 90 for i in d.values())
2
```
You could use `filter`:

```
len(list(filter(lambda x: int(x[1]) > 90, d.items())))
```
20,023,709
I'm working on a laser tag game project that uses pygame and Raspberry Pi. In the game, I need a background timer in order to keep track of game time. Currently I'm using the following to do this, but it doesn't seem to work correctly:

```
pygame.timer.get_ticks()
```

My second problem is resetting this timer when the game is restarted. The game should restart without having to restart the program, and that is only likely to be done by resetting the timer, I guess. What I need, in brief, is to have a background timer variable and be able to reset it at any time in a while loop. I'm a real beginner at python and pygame, but the solution to this problem will give a great boost to my knowledge and the progress of the project. Any help will be greatly appreciated.
2013/11/16
[ "https://Stackoverflow.com/questions/20023709", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1478315/" ]
You could use the `Timer` class in Android and set a repeating timer with an initial delay.

```
Timer timer = new Timer();
timer.schedule(TimerTask task, long delay, long period)
```

A `TimerTask` is very much like a `Runnable`. See: <http://developer.android.com/reference/java/util/Timer.html>
I've used 2 timers:

```
handler.postDelayed(runnable, 1500); // Creating a timer for 1.5 seconds
```

This created a 1.5 second timer, while inside the timer loop:

```
private Runnable runnable = new Runnable() {
    @Override
    public void run() {
        Foo();
        handler.postDelayed(this, 1500);
    }
};
```

I called `handler.postDelayed(this, 1500);` again, which created 2 timers, causing the timing bug.
72,393
A tutorial I have on Regex in python explains how to use the re module. I wanted to grab the URL out of an A tag, so knowing Regex, I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into python it failed:

```
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None
```

After much head scratching I found out the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:

```
regex = ".*(a_regex_of_pure_awesomeness)"
```

into

```
regex = "a_regex_of_pure_awesomeness"
```

Okay, it's a standard URL regex, but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
2008/09/16
[ "https://Stackoverflow.com/questions/72393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1384652/" ]
In Python, there's a distinction between "match" and "search"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string. [Python regex docs](http://docs.python.org/lib/module-re.html) [Matching vs searching](http://docs.python.org/lib/matching-searching.html)
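For illustration, a quick sketch of that difference using the question's own string:

```
import re

# match() anchors at the start of the string; search() scans anywhere in it
print(re.match("awesomeness", "a string containing the awesomeness"))   # None
print(re.search("awesomeness", "a string containing the awesomeness"))  # a match object
```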
Are you using the `re.match()` or `re.search()` method? My understanding is that `re.match()` assumes a "`^`" at the beginning of your expression and will only search at the beginning of the text, while `re.search()` acts more like the Perl regular expressions and will only match the beginning of the text if you include a "`^`" at the beginning of your expression. Hope that helps.
72,393
A tutorial I have on Regex in python explains how to use the re module. I wanted to grab the URL out of an A tag, so knowing Regex, I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into python it failed:

```
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None
```

After much head scratching I found out the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:

```
regex = ".*(a_regex_of_pure_awesomeness)"
```

into

```
regex = "a_regex_of_pure_awesomeness"
```

Okay, it's a standard URL regex, but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
2008/09/16
[ "https://Stackoverflow.com/questions/72393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1384652/" ]
```
>>> import re
>>> pattern = re.compile("url")
>>> string = " url"
>>> pattern.match(string)
>>> pattern.search(string)
<_sre.SRE_Match object at 0xb7f7a6e8>
```
Are you using the `re.match()` or `re.search()` method? My understanding is that `re.match()` assumes a "`^`" at the beginning of your expression and will only search at the beginning of the text, while `re.search()` acts more like the Perl regular expressions and will only match the beginning of the text if you include a "`^`" at the beginning of your expression. Hope that helps.
72,393
A tutorial I have on Regex in python explains how to use the re module. I wanted to grab the URL out of an A tag, so knowing Regex, I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into python it failed:

```
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None
```

After much head scratching I found out the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:

```
regex = ".*(a_regex_of_pure_awesomeness)"
```

into

```
regex = "a_regex_of_pure_awesomeness"
```

Okay, it's a standard URL regex, but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
2008/09/16
[ "https://Stackoverflow.com/questions/72393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1384652/" ]
```
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(your_html)
for a in soup.findAll('a', href=True):
    # do something with `a` w/ href attribute
    print a['href']
```
Are you using the `re.match()` or `re.search()` method? My understanding is that `re.match()` assumes a "`^`" at the beginning of your expression and will only search at the beginning of the text, while `re.search()` acts more like the Perl regular expressions and will only match the beginning of the text if you include a "`^`" at the beginning of your expression. Hope that helps.
72,393
A tutorial I have on Regex in python explains how to use the re module. I wanted to grab the URL out of an A tag, so knowing Regex, I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into python it failed:

```
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None
```

After much head scratching I found out the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:

```
regex = ".*(a_regex_of_pure_awesomeness)"
```

into

```
regex = "a_regex_of_pure_awesomeness"
```

Okay, it's a standard URL regex, but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
2008/09/16
[ "https://Stackoverflow.com/questions/72393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1384652/" ]
In Python, there's a distinction between "match" and "search"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string. [Python regex docs](http://docs.python.org/lib/module-re.html) [Matching vs searching](http://docs.python.org/lib/matching-searching.html)
```
>>> import re
>>> pattern = re.compile("url")
>>> string = " url"
>>> pattern.match(string)
>>> pattern.search(string)
<_sre.SRE_Match object at 0xb7f7a6e8>
```
72,393
A tutorial I have on Regex in python explains how to use the re module. I wanted to grab the URL out of an A tag, so knowing Regex, I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into python it failed:

```
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None
```

After much head scratching I found out the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:

```
regex = ".*(a_regex_of_pure_awesomeness)"
```

into

```
regex = "a_regex_of_pure_awesomeness"
```

Okay, it's a standard URL regex, but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
2008/09/16
[ "https://Stackoverflow.com/questions/72393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1384652/" ]
In Python, there's a distinction between "match" and "search"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string. [Python regex docs](http://docs.python.org/lib/module-re.html) [Matching vs searching](http://docs.python.org/lib/matching-searching.html)
```
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(your_html)
for a in soup.findAll('a', href=True):
    # do something with `a` w/ href attribute
    print a['href']
```
65,393,659
I built an application to suggest email address fixes, and I need to detect email addresses that are basically not real existing email addresses, like the following:

14370afcdc17429f9e418d5ffbd0334a@magic.com
ce06e817-2149-6cfd-dd24-51b31e93ea1a@stackoverflow.org.il
87c0d782-e09f-056f-f544-c6ec9d17943c@microsoft.org.il
root@ns3160176.ip-151-106-35.eu
ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h@outlook.com
h-rt-dfg4-sv6-fg32-dsv5-vfd5-ds312@gmail.com
test@454-fs-ns-dff4-xhh-43d-frfs.com

I could do multiple regex checks, but I don't think I would hit a good detection rate for the suspected 'not-real' email addresses, as each regex only targets a specific pattern. I looked in:

[Javascript script to find gibberish words in form inputs](https://stackoverflow.com/questions/10211028/javascript-script-to-find-gibberish-words-in-form-inputs)

[Translate this JavaScript Gibberish please?](https://stackoverflow.com/questions/10621334/translate-this-javascript-gibberish-please)

[Detect keyboard mashed email addresses](https://stackoverflow.com/questions/43468609/detect-keyboard-mashed-email-addresses)

Finally I looked over this:

[Unable to detect gibberish names using Python](https://stackoverflow.com/questions/50659889/unable-to-detect-gibberish-names-using-python)

And it seems to fit my needs, I think: a script that will give me a score for the likelihood that each part of the email address is gibberish (or not real). So what I want is the output to be:

```
const strings = ["14370afcdc17429f9e418d5ffbd0334a", "gmail", "ce06e817-2149-6cfd-dd24-51b31e93ea1a", "87c0d782-e09f-056f-f544-c6ec9d17943c", "space-max", "ns3160176.ip-151-106-35", "ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h", "outlook", "h-rt-dfg4-sv6-fg32-dsv5-vfd5- ds312", "system-analytics", "454-fs-ns-dff4-xhh-43d-frfs"];

for (let i = 0; i < strings.length; i++) {
    validateGibberish(strings[i]);
}
```

And this `validateGibberish` function logic will be similar to this python code:

```
from nltk.corpus import brown
from collections import Counter
import numpy as np

text = '\n'.join([' '.join([w for w in s]) for s in brown.sents()])

unigrams = Counter(text)
bigrams = Counter(text[i:(i+2)] for i in range(len(text)-2))
trigrams = Counter(text[i:(i+3)] for i in range(len(text)-3))
weights = [0.001, 0.01, 0.989]

def strangeness(text):
    r = 0
    text = ' ' + text + '\n'
    for i in range(2, len(text)):
        char = text[i]
        context1 = text[(i-1):i]
        context2 = text[(i-2):i]
        num = unigrams[char] * weights[0] + bigrams[context1+char] * weights[1] + trigrams[context2+char] * weights[2]
        den = sum(unigrams.values()) * weights[0] + unigrams[context1] * weights[1] + bigrams[context2] * weights[2]
        r -= np.log(num / den)
    return r / (len(text) - 2)
```

So in the end I will loop over all the strings and get something like this:

```
"14370afcdc17429f9e418d5ffbd0334a" -> 8.9073
"gmail" -> 1.0044
"ce06e817-2149-6cfd-dd24-51b31e93ea1a" -> 7.4261
"87c0d782-e09f-056f-f544-c6ec9d17943c" -> 8.3916
"space-max" -> 1.3553
"ns3160176.ip-151-106-35" -> 6.2584
"ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h" -> 7.1796
"outlook" -> 1.6694
"h-rt-dfg4-sv6-fg32-dsv5-vfd5-ds312" -> 8.5734
"system-analytics" -> 1.9489
"454-fs-ns-dff4-xhh-43d-frfs" -> 7.7058
```

Does anybody have a hint on how to do this and can help?
Thanks a lot :)

**UPDATE (12-22-2020)**

I managed to write some code based on @Konstantin Pribluda's answer, the Shannon entropy calculation:

```
const getFrequencies = str => {
    let dict = new Set(str);
    return [...dict].map(chr => {
        return str.match(new RegExp(chr, 'g')).length;
    });
};

// Measure the entropy of a string in bits per symbol.
const entropy = str => getFrequencies(str)
    .reduce((sum, frequency) => {
        let p = frequency / str.length;
        return sum - (p * Math.log(p) / Math.log(2));
    }, 0);

const strings = ['14370afcdc17429f9e418d5ffbd0334a', 'or', 'sdf', 'test', 'dave coperfield', 'gmail', 'ce06e817-2149-6cfd-dd24-51b31e93ea1a', '87c0d782-e09f-056f-f544-c6ec9d17943c', 'space-max', 'ns3160176.ip-151-106-35', 'ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h', 'outlook', 'h-rt-dfg4-sv6-fg32-dsv5-vfd5-ds312', 'system-analytics', '454-fs-ns-dff4-xhh-43d-frfs'];

for (let i = 0; i < strings.length; i++) {
    const str = strings[i];
    let result = 0;
    try {
        result = entropy(str);
    } catch (error) {
        result = 0;
    }
    console.log(`Entropy of '${str}' in bits per symbol:`, result);
}
```

The output is:

```
Entropy of '14370afcdc17429f9e418d5ffbd0334a' in bits per symbol: 3.7417292966721747
Entropy of 'or' in bits per symbol: 1
Entropy of 'sdf' in bits per symbol: 1.584962500721156
Entropy of 'test' in bits per symbol: 1.5
Entropy of 'dave coperfield' in bits per symbol: 3.4565647621309536
Entropy of 'gmail' in bits per symbol: 2.3219280948873626
Entropy of 'ce06e817-2149-6cfd-dd24-51b31e93ea1a' in bits per symbol: 3.882021446536749
Entropy of '87c0d782-e09f-056f-f544-c6ec9d17943c' in bits per symbol: 3.787301737252941
Entropy of 'space-max' in bits per symbol: 2.94770277922009
Entropy of 'ns3160176.ip-151-106-35' in bits per symbol: 3.1477803284561103
Entropy of 'ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h' in bits per symbol: 3.3502926596166693
Entropy of 'outlook' in bits per symbol: 2.1280852788913944
Entropy of 'h-rt-dfg4-sv6-fg32-dsv5-vfd5-ds312' in bits per symbol: 3.619340871812292
Entropy of 'system-analytics' in bits per symbol: 3.327819531114783
Entropy of '454-fs-ns-dff4-xhh-43d-frfs' in bits per symbol: 3.1299133176846836
```

It's still not working as expected, as 'dave coperfield' gets about the same score as the gibberish results. Does anyone have better logic or ideas on how to do this?
2020/12/21
[ "https://Stackoverflow.com/questions/65393659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4442606/" ]
This is what I come up with: ```js // gibberish detector js (function (h) { function e(c, b, a) { return c < b ? (a = b - c, Math.log(b) / Math.log(a) * 100) : c > a ? (b = c - a, Math.log(100 - a) / Math.log(b) * 100) : 0 } function k(c) { for (var b = {}, a = "", d = 0; d < c.length; ++d)c[d] in b || (b[c[d]] = 1, a += c[d]); return a } h.detect = function (c) { if (0 === c.length || !c.trim()) return 0; for (var b = c, a = []; a.length < b.length / 35;)a.push(b.substring(0, 35)), b = b.substring(36); 1 <= a.length && 10 > a[a.length - 1].length && (a[a.length - 2] += a[a.length - 1], a.pop()); for (var b = [], d = 0; d < a.length; d++)b.push(k(a[d]).length); a = 100 * b; for (d = b = 0; d < a.length; d++)b += parseFloat(a[d], 10); a = b / a.length; for (var f = d = b = 0; f < c.length; f++) { var g = c.charAt(f); g.match(/^[a-zA-Z]+$/) && (g.match(/^(a|e|i|o|u)$/i) && b++, d++) } b = 0 !== d ? b / d * 100 : 0; c = c.split(/[\W_]/).length / c.length * 100; a = Math.max(1, e(a, 45, 50)); b = Math.max(1, e(b, 35, 45)); c = Math.max(1, e(c, 15, 20)); return Math.max(1, (Math.log10(a) + Math.log10(b) + Math.log10(c)) / 6 * 100) } })("undefined" === typeof exports ? this.gibberish = {} : exports) // email syntax validator function validateSyntax(email) { return /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/.test(email.toLowerCase()); } // shannon entropy function entropy(str) { return Object.values(Array.from(str).reduce((freq, c) => (freq[c] = (freq[c] || 0) + 1) && freq, {})).reduce((sum, f) => sum - f / str.length * Math.log2(f / str.length), 0) } // vowel counter function countVowels(word) { var m = word.match(/[aeiou]/gi); return m === null ? 0 : m.length; } // dummy function function isTrue(value){ return value } // validate string by multiple tests function detectGibberish(str){ var strWithoutPunct = str.replace(/[.,\/#!$%\^&\*;:{}=\-_`~()]/g,""); var entropyValue = entropy(str) < 3.5; var gibberishValue = gibberish.detect(str) < 50; var vovelValue = 30 < 100 / strWithoutPunct.length * countVowels(strWithoutPunct) && 100 / strWithoutPunct.length * countVowels(str) < 35; return [entropyValue, gibberishValue, vovelValue].filter(isTrue).length > 1 } // main function function validateEmail(email) { return validateSyntax(email) ? detectGibberish(email.split("@")[0]) : false } // tests document.write(validateEmail("dsfghjdhjs@gmail.com") + "<br/>") document.write(validateEmail("jhon.smith@gmail.com")) ``` I have combined multiple tests: gibberish-detector.js, Shannon entropy and counting vowels (between 30% and 35%). You can adjust some values for more accurate result.
One thing you may consider doing is checking how random each string is, then sorting the results by their score and, given a threshold, excluding the ones with high randomness. It is inevitable that you will miss some. There are some implementations for checking the randomness of strings, for example:

* <https://en.wikipedia.org/wiki/Diehard_tests>
* <http://www.cacert.at/random/>

You may have to create a hash (to map chars and symbols to sequences of integers) before you apply some of these, because some work only with integers, since they test properties of random number generators. Also, a stack exchange link that can be of help is this:

* <https://stats.stackexchange.com/questions/371150/check-if-a-character-string-is-not-random>

PS. I am having a similar problem in a service, since robots create accounts with these types of fake emails. After years of dealing with this issue (basically deleting the fake emails manually from the DB), I am now considering introducing a visual check (captcha) on the signup page to avoid the frustration.
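For illustration, a minimal sketch of one crude frequency-based check in Python, assuming scipy is available (the threshold is illustrative, and short strings give the test little power):

```
from collections import Counter
from scipy.stats import chisquare

def looks_uniformly_random(s, alpha=0.05):
    # Test observed character counts against a uniform model: natural words
    # have skewed letter frequencies, so counts close to uniform (a high
    # p-value) are weak evidence that the string is random noise.
    counts = list(Counter(s).values())
    if len(counts) < 2:
        return False
    return chisquare(counts).pvalue > alpha
```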
6,116,527
I'm trying to get the pymysql module working with python3 on a Macintosh. Note that I am a beginning python user who decided to switch from ruby and am trying to build a simple (sigh) database project to drive my learning of python. In a simple (I thought) test program, I am getting a syntax error in configparser.py (which is used by the pymysql module):

```
def __init__(self, defaults=None, dict_type=_default_dict,
             allow_no_value=False, *, delimiters=('=', ':'),
             comment_prefixes=('#', ';'), inline_comment_prefixes=None,
             strict=True, empty_lines_in_values=True,
             default_section=DEFAULTSECT, interpolation=_UNSET):
```

According to Komodo, the error is on the second line. I assume it is related to the asterisk, but regardless, I don't know why there would be a problem like this with a standard Python module. Anyone seen this before?
2011/05/24
[ "https://Stackoverflow.com/questions/6116527", "https://Stackoverflow.com", "https://Stackoverflow.com/users/57246/" ]
You're most certainly running the code with a 2.x interpreter. I wonder why it even tries to import 3.x libraries, perhaps the answer lies in your installation process - but that's a different question. Anyway, this (before any other `import`s)

```
import sys
print(sys.version)
```

should show which Python version is actually run, as Komodo Edit may be choosing the wrong executable for whatever reason. Alternatively, leave out the parens and it simply fails if run with Python 3.
In Python 3.2 the configparser module does indeed look that way. Importing it works fine from Python 3.2, but *not* from Python 2. Am I right in guessing you get the error when you try to run your module with Komodo? Then you just have configured the wrong Python executable.
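For illustration, a minimal sketch of the Python-3-only syntax involved (the function and its names are made up): the bare `*` in a parameter list marks everything after it as keyword-only, and a 2.x parser rejects it as a SyntaxError, which is exactly what happens when Python 2 tries to import the 3.x configparser.

```
# Python 3 only: 'timeout' and 'retries' are keyword-only parameters
def connect(host, port, *, timeout=10, retries=3):
    return (host, port, timeout, retries)

connect("localhost", 3306, timeout=5)   # fine
# connect("localhost", 3306, 5)         # TypeError: too many positional arguments
```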
67,052,300
I've been trying to find the best way to convert a given GIF image to a sequence of BMP files using python. I've found some libraries like Wand and ImageMagick, but still haven't found a good example to accomplish this.
2021/04/12
[ "https://Stackoverflow.com/questions/67052300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2163839/" ]
Reading an animated GIF file using Python Image Processing Library - Pillow
---------------------------------------------------------------------------

```
from PIL import Image
from PIL import GifImagePlugin

imageObject = Image.open("./xmas.gif")
print(imageObject.is_animated)
print(imageObject.n_frames)
```

Display individual frames from the loaded animated GIF file
-----------------------------------------------------------

```
for frame in range(0,imageObject.n_frames):
    imageObject.seek(frame)
    imageObject.show()
```
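To actually write each frame out as a BMP file (which is what the question asks for), a small sketch extending the loop above, assuming Pillow's standard `convert`/`save` API; the file name pattern is just an example:

```
from PIL import Image

imageObject = Image.open("./xmas.gif")
for frame in range(imageObject.n_frames):
    imageObject.seek(frame)
    # convert() drops the GIF palette so the frame saves cleanly as BMP
    imageObject.convert("RGB").save("frame%02d.bmp" % frame)
```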
In Imagemagick, which comes with Linux and can be installed for Windows or Mac OSX:

```
convert image.gif -coalesce image.bmp
```

The results will be image-0.bmp, image-1.bmp ... Use `convert` for Imagemagick 6 or replace `convert` with `magick` for Imagemagick 7.
67,052,300
I've been trying to find the best way to convert a given GIF image to a sequence of BMP files using python. I've found some libraries like Wand and ImageMagick, but still haven't found a good example to accomplish this.
2021/04/12
[ "https://Stackoverflow.com/questions/67052300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2163839/" ]
```py
from wand.image import Image

with Image(filename="animation.gif") as img:
    img.coalesce()
    img.save(filename="frame%02d.bmp")
```

Use `Image.coalesce()` to rebuild optimized frames, and ImageMagick's "Percent Escapes" format (`%02d`) to write each image frame as a separate BMP file.
In Imagemagick, which comes with Linux and can be installed for Windows or Mac OSX:

```
convert image.gif -coalesce image.bmp
```

The results will be image-0.bmp, image-1.bmp ... Use `convert` for Imagemagick 6 or replace `convert` with `magick` for Imagemagick 7.
44,963,360
I am very new to python. Long time user of stackoverflow but first time posting a question. I am trying to extract data from a website using beautifulsoup. [Sample code where I want to extract is (listed in and tagged in data)](https://i.stack.imgur.com/AASRq.jpg) I was able to extract into a list, but I am unable to extract the actual data. The goal here is to extract:

**Listed in:** Nail Polish Subscription Boxes, Subscription Boxes for Beauty Products, Subscription Boxes for Women

**Tagged in:** Makeup, Beauty, Nail polish

Can you please tell me how to achieve it.

```
import requests
from bs4 import BeautifulSoup

l1=[]
url='http://boxes.mysubscriptionaddiction.com/box/julep-maven'
source_code=requests.get(url)
plain_text=source_code.text
soup= BeautifulSoup(plain_text,"lxml")

for item in soup.find_all('p'):
    l1.append(item.contents)

search='\nListed in:\n'
for a in l1:
    if a[0] in ('\nTagged in:\n','\nListed in:\n'):
        print(a)
```
2017/07/07
[ "https://Stackoverflow.com/questions/44963360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8268739/" ]
I don't know if there's a slicker way, but a straightforward for loop will do the trick:

```
$frequency = [];
for ($i = 0; $i < sizeof($date); $i++) {
    $frequency[] = array($date[$i], $time_start[$i], $time_end[$i]);
}

print_r($frequency);

// Output:
// Array
// (
//     [0] => Array
//         (
//             [0] => 2017-06-10
//             [1] => 02:00 PM
//             [2] => 05:00 PM
//         )
//
//     [1] => Array
//         (
//             [0] => 2017-06-11
//             [1] => 03:00 PM
//             [2] => 06:00 PM
//         )
//
//     [2] => Array
//         (
//             [0] => 2017-06-12
//             [1] => 04:00 PM
//             [2] => 07:00 PM
//         )
//
// )
```
You can also map them:

```
$result = array_map(function ($value1,$value2,$value3) {
    return [$value1,$value2,$value3];
}, $date,$time_start,$time_end);
```
44,963,360
I am very new to python. Long time user of stackoverflow but first time posting a question. I am trying to extract data from a website using beautifulsoup. [Sample code where I want to extract is (listed in and tagged in data)](https://i.stack.imgur.com/AASRq.jpg) I was able to extract into a list, but I am unable to extract the actual data. The goal here is to extract:

**Listed in:** Nail Polish Subscription Boxes, Subscription Boxes for Beauty Products, Subscription Boxes for Women

**Tagged in:** Makeup, Beauty, Nail polish

Can you please tell me how to achieve it.

```
import requests
from bs4 import BeautifulSoup

l1=[]
url='http://boxes.mysubscriptionaddiction.com/box/julep-maven'
source_code=requests.get(url)
plain_text=source_code.text
soup= BeautifulSoup(plain_text,"lxml")

for item in soup.find_all('p'):
    l1.append(item.contents)

search='\nListed in:\n'
for a in l1:
    if a[0] in ('\nTagged in:\n','\nListed in:\n'):
        print(a)
```
2017/07/07
[ "https://Stackoverflow.com/questions/44963360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8268739/" ]
I don't know if there's a slicker way, but a straightforward for loop will do the trick:

```
$frequency = [];
for ($i = 0; $i < sizeof($date); $i++) {
    $frequency[] = array($date[$i], $time_start[$i], $time_end[$i]);
}

print_r($frequency);

// Output:
// Array
// (
//     [0] => Array
//         (
//             [0] => 2017-06-10
//             [1] => 02:00 PM
//             [2] => 05:00 PM
//         )
//
//     [1] => Array
//         (
//             [0] => 2017-06-11
//             [1] => 03:00 PM
//             [2] => 06:00 PM
//         )
//
//     [2] => Array
//         (
//             [0] => 2017-06-12
//             [1] => 04:00 PM
//             [2] => 07:00 PM
//         )
//
// )
```
Or you could do:

```
$frequency=array_map(null,$date, $time_start, $time_end);
```

see here: <http://php.net/manual/de/function.array-map.php> and for a short demo, here: <http://rextester.com/VJRCA63297>
44,963,360
I am very new to python. Long time user of stackoverflow but first time posting a question. I am trying to extract data from a website using beautifulsoup. [Sample code where I want to extract is (listed in and tagged in data)](https://i.stack.imgur.com/AASRq.jpg) I was able to extract into a list, but I am unable to extract the actual data. The goal here is to extract:

**Listed in:** Nail Polish Subscription Boxes, Subscription Boxes for Beauty Products, Subscription Boxes for Women

**Tagged in:** Makeup, Beauty, Nail polish

Can you please tell me how to achieve it.

```
import requests
from bs4 import BeautifulSoup

l1=[]
url='http://boxes.mysubscriptionaddiction.com/box/julep-maven'
source_code=requests.get(url)
plain_text=source_code.text
soup= BeautifulSoup(plain_text,"lxml")

for item in soup.find_all('p'):
    l1.append(item.contents)

search='\nListed in:\n'
for a in l1:
    if a[0] in ('\nTagged in:\n','\nListed in:\n'):
        print(a)
```
2017/07/07
[ "https://Stackoverflow.com/questions/44963360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8268739/" ]
I don't know if there's a slicker way, but a straightforward for loop will do the trick:

```
$frequency = [];
for ($i = 0; $i < sizeof($date); $i++) {
    $frequency[] = array($date[$i], $time_start[$i], $time_end[$i]);
}

print_r($frequency);

// Output:
// Array
// (
//     [0] => Array
//         (
//             [0] => 2017-06-10
//             [1] => 02:00 PM
//             [2] => 05:00 PM
//         )
//
//     [1] => Array
//         (
//             [0] => 2017-06-11
//             [1] => 03:00 PM
//             [2] => 06:00 PM
//         )
//
//     [2] => Array
//         (
//             [0] => 2017-06-12
//             [1] => 04:00 PM
//             [2] => 07:00 PM
//         )
//
// )
```
You can do like below:

```
<?php
$complete_array = array();
$date = array('2017-06-10', '2017-06-11', '2017-06-12');
$time_start = array('02:00 PM', '03:00 PM', '04:00 PM');
$time_end = array('05:00 PM', '06:00 PM', '07:00 PM');
array_push($complete_array, $date, $time_start, $time_end);
echo "<pre>";
print_r($complete_array);
?>
```

**Output:**

```
Array
(
    [0] => Array
        (
            [0] => 2017-06-10
            [1] => 2017-06-11
            [2] => 2017-06-12
        )

    [1] => Array
        (
            [0] => 02:00 PM
            [1] => 03:00 PM
            [2] => 04:00 PM
        )

    [2] => Array
        (
            [0] => 05:00 PM
            [1] => 06:00 PM
            [2] => 07:00 PM
        )

)
```
44,963,360
I am very new to python. Long time user of stackoverflow but first time posting a question. I am trying to extract data from a website using beautifulsoup. [Sample code where I want to extract is (listed in and tagged in data)](https://i.stack.imgur.com/AASRq.jpg) I was able to extract into a list, but I am unable to extract the actual data. The goal here is to extract:

**Listed in:** Nail Polish Subscription Boxes, Subscription Boxes for Beauty Products, Subscription Boxes for Women

**Tagged in:** Makeup, Beauty, Nail polish

Can you please tell me how to achieve it.

```
import requests
from bs4 import BeautifulSoup

l1=[]
url='http://boxes.mysubscriptionaddiction.com/box/julep-maven'
source_code=requests.get(url)
plain_text=source_code.text
soup= BeautifulSoup(plain_text,"lxml")

for item in soup.find_all('p'):
    l1.append(item.contents)

search='\nListed in:\n'
for a in l1:
    if a[0] in ('\nTagged in:\n','\nListed in:\n'):
        print(a)
```
2017/07/07
[ "https://Stackoverflow.com/questions/44963360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8268739/" ]
Or you could do:

```
$frequency=array_map(null,$date, $time_start, $time_end);
```

see here: <http://php.net/manual/de/function.array-map.php> and for a short demo, here: <http://rextester.com/VJRCA63297>
You can also map them:

```
$result = array_map(function ($value1,$value2,$value3) {
    return [$value1,$value2,$value3];
}, $date,$time_start,$time_end);
```
44,963,360
I am very new to python. Long time user of stackoverflow but first time posting a question. I am trying to extract data from a website using beautifulsoup. [Sample code where I want to extract is (listed in and tagged in data)](https://i.stack.imgur.com/AASRq.jpg) I was able to extract into a list, but I am unable to extract the actual data. The goal here is to extract:

**Listed in:** Nail Polish Subscription Boxes, Subscription Boxes for Beauty Products, Subscription Boxes for Women

**Tagged in:** Makeup, Beauty, Nail polish

Can you please tell me how to achieve it.

```
import requests
from bs4 import BeautifulSoup

l1=[]
url='http://boxes.mysubscriptionaddiction.com/box/julep-maven'
source_code=requests.get(url)
plain_text=source_code.text
soup= BeautifulSoup(plain_text,"lxml")

for item in soup.find_all('p'):
    l1.append(item.contents)

search='\nListed in:\n'
for a in l1:
    if a[0] in ('\nTagged in:\n','\nListed in:\n'):
        print(a)
```
2017/07/07
[ "https://Stackoverflow.com/questions/44963360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8268739/" ]
Or you could do:

```
$frequency=array_map(null,$date, $time_start, $time_end);
```

see here: <http://php.net/manual/de/function.array-map.php> and for a short demo, here: <http://rextester.com/VJRCA63297>
You can do like below:

```
<?php
$complete_array = array();
$date = array('2017-06-10', '2017-06-11', '2017-06-12');
$time_start = array('02:00 PM', '03:00 PM', '04:00 PM');
$time_end = array('05:00 PM', '06:00 PM', '07:00 PM');
array_push($complete_array, $date, $time_start, $time_end);
echo "<pre>";
print_r($complete_array);
?>
```

**Output:**

```
Array
(
    [0] => Array
        (
            [0] => 2017-06-10
            [1] => 2017-06-11
            [2] => 2017-06-12
        )

    [1] => Array
        (
            [0] => 02:00 PM
            [1] => 03:00 PM
            [2] => 04:00 PM
        )

    [2] => Array
        (
            [0] => 05:00 PM
            [1] => 06:00 PM
            [2] => 07:00 PM
        )

)
```
16,326,285
I have an old python version, so I can't use subprocess. I have two python scripts: one primary.py and another secondary.py. While running primary.py I need to run secondary.py. The format to run secondary.py is 'python secondary.py Argument'.

`os.system('python secondary.py Argument')` is giving an error saying it can't open file 'Argument': [Errno 2] No such file or directory
2013/05/01
[ "https://Stackoverflow.com/questions/16326285", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2340760/" ]
Given the code you described, this error can come up for three reasons:

* `python` isn't on your `PATH`, or
* `secondary.py` isn't in your current working directory, or
* `Argument` isn't in your current working directory.

From your edited question, it sounds like it's the last of the three, meaning the problem likely has nothing to do with `system` at all… but let's see how to solve all three anyway. First, you want a path to the same `python` that's running `primary.py`, which is what [`sys.executable`](http://docs.python.org/2/library/sys.html#sys.executable) is for. And then you want a path to `secondary.py`. Unfortunately, for this one, there is no way (in Python 2.3) that's guaranteed to work… but on many POSIX systems, in many situations, [`sys.argv\[0\]`](http://docs.python.org/2/library/sys.html#sys.argv) will be an absolute path to `primary.py`, so you can just use `dirname` and `join` out of [`os.path`](http://docs.python.org/2/library/os.path.html) to convert that into an absolute path to `secondary.py`. And then, assuming `Argument` is in the script directory, do the same thing for that:

```
my_dir = os.path.dirname(sys.argv[0])
os.system('%s %s %s' % (sys.executable,
                        os.path.join(my_dir, 'secondary.py'),
                        os.path.join(my_dir, 'Argument')))
```
Which python version do you have? Could you show the contents of your secondary.py? For newer versions it seems to work correctly:

```
ddzialak@ubuntu:$ cat f.py
import os

os.system("python s.py Arg")

ddzialak@ubuntu:$ cat s.py
print "OK!!!"

ddzialak@ubuntu:$ python f.py
OK!!!
ddzialak@ubuntu:$
```
63,671,929
boto3 provides default **waiters** for some services like EC2, S3, etc., but they are not provided for all services. Now, I have a case where an EFS volume is created and a lifecycle policy is then added to the file system. EFS creation takes some time, and the lifecycle policy can't be added until the file system is in the required state, i.e., created. How can I wait for the EFS to be created in python boto3 code, so that the policies can be added?
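For reference, a minimal polling sketch for this situation, assuming the standard boto3 EFS client (since there is no built-in EFS waiter, this loops on `describe_file_systems` until the `LifeCycleState` reaches `'available'`; the delay and attempt counts are illustrative):

```
import time
import boto3

efs = boto3.client("efs")

def wait_for_efs_available(file_system_id, delay=5, max_attempts=60):
    """Poll until the file system's LifeCycleState is 'available'."""
    for _ in range(max_attempts):
        response = efs.describe_file_systems(FileSystemId=file_system_id)
        if response["FileSystems"][0]["LifeCycleState"] == "available":
            return
        time.sleep(delay)
    raise RuntimeError("EFS %s never became available" % file_system_id)
```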
2020/08/31
[ "https://Stackoverflow.com/questions/63671929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11192798/" ]
Comment out `django.middleware.csrf.CsrfViewMiddleware` in the `MIDDLEWARE` entry in `settings.py` of your django project. I tried `curl -X POST localhost:8000/` after adding a trivial post to a class-based view. It returned the famous 403 CSRF verification failed. After commenting out the above middleware the post method was invoked.
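For illustration, a sketch of what that looks like in a default project's `settings.py` (the surrounding entries are the Django defaults; note that disabling CSRF protection like this is only sensible for local testing):

```
# settings.py (sketch)
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    # 'django.middleware.csrf.CsrfViewMiddleware',  # commented out for testing
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
```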
Had a similar problem; the easiest fix is to disable the firewall to get the GET and POST working.
5,483,404
I want to build a heavy ajax web2.0 app, and I don't know javascript, django or ruby on rails. I have some experience with python. I am not sure which one to choose. I have a backend database and have to run a few queries for each page, no big deal. So, I am looking for a choice which is quite easy to learn and maintain in the future. Thank you
2011/03/30
[ "https://Stackoverflow.com/questions/5483404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/207335/" ]
I'm not sure if this meets the guidelines for a valid question on here. If you know any Python go with Django, if you know any Ruby go with Rails. From my understanding Rails is a bit more opinionated when it comes to JavaScript. In other words it comes bundled with a bunch of helpers to make it simpler to Ajaxify your code. Django on the other hand leaves it up to you to choose your own framework. (Note: I'm no expert on Django, but have been informed as much) Rails comes bundled with [Prototype](http://www.prototypejs.org/), works equally well with [jQuery](http://jquery.com/) and in the master codebase they have already switched jQuery to be the default in preparation for the next release.
ROR has much better community activity. It's easier to learn without learning ruby (I do not recommend that approach, but yes - you can write in ROR while barely understanding ruby). About performance: ruby 1.8 was much slower than python, but maybe ruby 1.9 is faster. If you want to build a smart ajax application and you understand javascript, it does not matter which framework you will use. If not, or you are lazy - ROR has some aids for ajax requests. Also take a note of django's /admin/ :)
5,483,404
I want to build a heavy ajax web2.0 app, and I don't know javascript, django or ruby on rails. I have some experience with python. I am not sure which one to choose. I have a backend database and have to run a few queries for each page, no big deal. So, I am looking for a choice which is quite easy to learn and maintain in the future. Thank you
2011/03/30
[ "https://Stackoverflow.com/questions/5483404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/207335/" ]
I'm not sure if this meets the guidelines for a valid question on here. If you know any Python go with Django, if you know any Ruby go with Rails. From my understanding Rails is a bit more opinionated when it comes to JavaScript. In other words it comes bundled with a bunch of helpers to make it simpler to Ajaxify your code. Django on the other hand leaves it up to you to choose your own framework. (Note: I'm no expert on Django, but have been informed as much) Rails comes bundled with [Prototype](http://www.prototypejs.org/), works equally well with [jQuery](http://jquery.com/) and in the master codebase they have already switched jQuery to be the default in preparation for the next release.
We've had a blast developing with Django/Jquery, and development, in our opinion, is easier, faster. That being said, we tend to go with Django because of Python's raw power and reliability. Not to say ROR doesn't have similar strengths, but we get "stuck" more often than not when using ROR than when using Django. If it's a small-ish web app and you're not worried about production deployment on a significant level then go with ROR. If you're looking for something better equipped, more reliable and more conducive to development then go with Django. Keep in mind, though, that this all boils down to what you know, what you're most comfortable with. If you know a bit of Python go with Django, but know that Ruby is just as easy to pick up if you'd rather go that direction. They're both win/win really.
5,483,404
I want to build a heavy ajax web2.0 app, and I don't know javascript, django or ruby on rails. I have some experience with python. I am not sure which one to choose. I have a backend database and have to run a few queries for each page, no big deal. So, I am looking for a choice which is quite easy to learn and maintain in the future. Thank you
2011/03/30
[ "https://Stackoverflow.com/questions/5483404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/207335/" ]
I'm not sure if this meets the guidelines for a valid question on here. If you know any Python go with Django, if you know any Ruby go with Rails. From my understanding Rails is a bit more opinionated when it comes to JavaScript. In other words it comes bundled with a bunch of helpers to make it simpler to Ajaxify your code. Django on the other hand leaves it up to you to choose your own framework. (Note: I'm no expert on Django, but have been informed as much) Rails comes bundled with [Prototype](http://www.prototypejs.org/), works equally well with [jQuery](http://jquery.com/) and in the master codebase they have already switched jQuery to be the default in preparation for the next release.
If you like Rails but want to stick with Python, you might also consider [web2py](http://www.web2py.com), which is probably the Python framework that is the most like Rails (it was inspired by Rails, as well as Django). Ajax is particularly easy in web2py -- it comes bundled with jQuery, the scaffolding application includes a lot of built-in [Ajax functionality](http://www.web2py.com/book/default/chapter/10), and you can build pages from [components](http://www.web2py.com/book/default/chapter/13#Components) that operate via Ajax. I think you'll find that web2py is even easier to learn and use than Rails and Django. It's very quick to get started -- just [download](http://www.web2py.com/examples/default/download), unzip, and run it. It requires no installation or configuration and has no dependencies. There's a very helpful [mailing list](https://groups.google.com/forum/?fromgroups#!forum/web2py) if you have any questions.
63,658,572
I am writing a program to produce an image of the Mandelbrot set. The set requires iterating through the formula z_n = z_{n-1}^2 + C, where z_{n-1} is the previous value of z in the loop. In my program I have written:

```
z_new = (self.z)**2.0 + c_number
self.z = z_new
```

within a loop. Is there a better way in python to update a value using its current value? I'm not sure the `+=` operator would work here, since the formula requires squaring the current value before adding the complex number, C.
2020/08/30
[ "https://Stackoverflow.com/questions/63658572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14120667/" ]
I think you may have misinterpreted @Lev\_Levitsky's comment. If you wanted it on one line, then what they suggested:

```py
self.z = self.z**2 + c_number
```

is equivalent to what you've got written. You don't really need the temporary variable `z_new`, since in the "one-liner" the previous value of `self.z` is used when setting the next value.
the simplified version should be:

```py
self.z = (self.z)**2.0 + c_number
```
4,900,003
I'm using the @login\_required decorator in my project since day one and it's working fine, but for some reason, I'm starting to get "AttributeError: 'unicode' object has no attribute 'user'" on some specific urls (and those worked in the past). Example: I am on the website, logged in, and then I click on a link and I'm getting this error, which is usually linked to the fact that there is no SessionMiddleware installed. But in my case, there is one, since I am logged in on the site and the page I am on also has a @login\_required. Any idea? The url is defined as: `(r'^accept/(?P<token>[a-zA-Z0-9_-]+)?$', 'accept'),` and the method as:

```
@login_required
def accept(request,token):
    ...
```

The Traceback:

```
Traceback (most recent call last):
  File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/servers/basehttp.py", line 674, in __call__
    return self.application(environ, start_response)
  File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
    response = self.get_response(request)
  File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/handlers/base.py", line 141, in get_response
    return self.handle_uncaught_exception(request, resolver, sys.exc_info())
  File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/handlers/base.py", line 165, in handle_uncaught_exception
    return debug.technical_500_response(request, *exc_info)
  File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/handlers/base.py", line 100, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/contrib/auth/decorators.py", line 25, in _wrapped_view
    return view_func(request, *args, **kwargs)
  File "/Users/macbook/dev/pycharm-projects/proj/match/views.py", line 33, in accept
    return __process(token,callback)
  File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/contrib/auth/decorators.py", line 24, in _wrapped_view
    if test_func(request.user):
AttributeError: 'unicode' object has no attribute 'user'
```
2011/02/04
[ "https://Stackoverflow.com/questions/4900003", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23051/" ]
The decorator was on a private method that doesn't have the request as a parameter. I removed that decorator (it was left there because of a refactoring and a lack of tests [bad me]). Problem solved.
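For illustration, a minimal sketch of that failure mode (the helper's body is hypothetical, but the names mirror the traceback): `login_required` assumes the first positional argument of whatever it wraps is the `HttpRequest`, so decorating a helper whose first argument is a token makes the decorator evaluate `token.user`.

```
from django.contrib.auth.decorators import login_required

@login_required  # wrong: the first argument here is a token string, not a request
def __process(token, callback):
    ...

# login_required's test runs request.user, but "request" is actually the token,
# hence: AttributeError: 'unicode' object has no attribute 'user'
```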
This can also happen if you call a decorated method from another method without providing a request parameter.
46,000,595
I have created a pie chart in `matplotlib`. I want to achieve [**this**](http://jsfiddle.net/ztJkb/4/) result in python, **i.e. whenever the mouse hovers over a slice, its color changes**. I have searched a lot and came across the use of the `bind` method, but that was not effective and I was not able to come up with a positive result. I will have no problem if this can be done through any other library (say `tkinter`, `plotly`, etc.), but I need to come up with the solution in `matplotlib`, so I would appreciate that more. Please have a look through my question; any suggestion is warmly welcomed... Here is my code:

```
import matplotlib.pyplot as plt

labels = 'A', 'B', 'C', 'D'
sizes = [10, 35, 50, 5]
explode = (0, 0, 0.1, 0)  # only "explode" the 3rd slice (i.e. 'C')

fig1, ax1 = plt.subplots()
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
        shadow=True, startangle=90)
ax1.axis('equal')  # Equal aspect ratio ensures that pie is drawn as a circle.

plt.show()
```

Regards...
2017/09/01
[ "https://Stackoverflow.com/questions/46000595", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You would need a [matplotlib event handler](https://matplotlib.org/users/event_handling.html) for a `motion_notify_event`. This can be connected to a function which checks if the mouse is inside one of the pie chart's wedges. This is done via [`contains_point`](https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Patch.html#matplotlib.patches.Patch.contains_point). In that case colorize the wedge differently, else set its color to its original color.

```
import matplotlib.pyplot as plt

labels = 'A', 'B', 'C', 'D'
sizes = [10, 35, 50, 5]
explode = (0, 0, 0.1, 0)  # only "explode" the 3rd slice (i.e. 'C')

fig1, ax1 = plt.subplots()
wedges, _, __ = ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
                        shadow=True, startangle=90)
ax1.axis('equal')  # Equal aspect ratio ensures that pie is drawn as a circle.

ocols = [w.get_facecolor() for w in wedges]
ncols = ["gold", "indigo", "purple", "salmon"]

def update(event):
    if event.inaxes == ax1:
        for i, w in enumerate(wedges):
            if w.contains_point([event.x, event.y]):
                w.set_facecolor(ncols[i])
            else:
                w.set_facecolor(ocols[i])
        fig1.canvas.draw_idle()

fig1.canvas.mpl_connect("motion_notify_event", update)
plt.show()
```
First off, what you are looking for is [the documentation on Event handling in matplotlib](https://matplotlib.org/users/event_handling.html). In particular, the `motion_notify_event` will be fired every time the mouse moves. However, I can't think of an easy way to identify which wedge the mouse is over right now. If clicking is acceptable, then the problem is much easier:

```
import matplotlib.pyplot as plt

labels = 'A', 'B', 'C', 'D'
sizes = [10, 35, 50, 5]
explode = (0, 0, 0.1, 0)  # only "explode" the 3rd slice (i.e. 'C')
click_color = [0.2, 0.2, 0.2]

fig1, ax1 = plt.subplots()
patches, texts, autotexts = ax1.pie(sizes, explode=explode, labels=labels,
                                    autopct='%1.1f%%', shadow=True, startangle=90)

# store original color inside patch object
# THIS IS VERY HACKY.
# We use the Artist's 'gid' which seems to be unused as far as I can tell
# to be able to recall the original color
for p in patches:
    p.set_gid(p.get_facecolor())
    # enable picking
    p.set_picker(True)

ax1.axis('equal')  # Equal aspect ratio ensures that pie is drawn as a circle.

def on_pick(event):
    # restore all facecolors to erase previous changes
    for p in patches:
        p.set_facecolor(p.get_gid())
    a = event.artist
    # print('on pick:', a, a.get_gid())
    a.set_facecolor(click_color)
    plt.draw()

fig1.canvas.mpl_connect('pick_event', on_pick)
plt.show()
```
4,690,366
This is my first post and I'm still a Python and Scipy newcomer, so go easy on me! I'm trying to convert an Nx1 matrix into a python list. Say I have some 3x1 matrix `x = scipy.matrix([1,2,3]).transpose()` My aim is to create a list, y, from x so that `y = [1, 2, 3]` I've tried using the `tolist()` method, but it returns `[[1], [2], [3]]`, which isn't the result that I'm after. The best I can do is this:

```
y = [xi for xi in x.flat]
```

but it's a bit cumbersome, and I'm not sure if there's an easier way to achieve the same result. Like I said, I'm still coming to grips with Python and Scipy... Thanks
2011/01/14
[ "https://Stackoverflow.com/questions/4690366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/568864/" ]
A question for your question
----------------------------

While Sven and Navi have answered your question on how to convert

```
x = scipy.matrix([1,2,3]).transpose()
```

into a list, I'll ask a question before answering:

* Why are you using an Nx1 matrix instead of an array?

Using array instead of matrix
-----------------------------

If you look at the Numpy for Matlab Users wiki/documentation page, section 3 discusses [*'array' or 'matrix'? Which should I use?*](http://www.scipy.org/NumPy_for_Matlab_Users#head-e9a492daa18afcd86e84e07cd2824a9b1b651935). The short answer is that you should use array. One of the advantages of using an array is that:

> You can treat rank-1 arrays as either row or column vectors. dot(A,v) treats v as a column vector, while dot(v,A) treats v as a row vector. This can save you having to type a lot of transposes.

Also, as stated in the [Numpy Reference Documentation](http://docs.scipy.org/doc/numpy/reference/arrays.classes.html#matrix-objects), "Matrix objects are always two-dimensional." This is why `x.tolist()` returned a nested list of `[[1], [2], [3]]` for you. Since you want an Nx1 object, I'd recommend using array as follows:

```
>>> import scipy
>>> x = scipy.array([1,2,3])
>>> x
array([1, 2, 3])
>>> y = x.tolist()  # That's it. A clean, succinct conversion to a list.
>>> y
[1, 2, 3]
```

If you really want to use matrix
--------------------------------

If for some reason you truly need/want to use a matrix instead of an array, here's what I would do:

```
>>> import scipy
>>> x = scipy.matrix([1,2,3]).transpose()
>>> x
matrix([[1],
        [2],
        [3]])
>>> y = x.T.tolist()[0]
>>> y
[1, 2, 3]
```

In words, the `x.T.tolist()[0]` will:

1. Transpose the x matrix using the `.T` attribute
2. Convert the transposed matrix into a nested list using `.tolist()`
3. Grab the first element of the nested list using `[0]`
How about

```
x.ravel().tolist()[0]
```

or

```
scipy.array(x).ravel().tolist()
```
4,690,366
This is my first post and I'm still a Python and Scipy newcomer, so go easy on me! I'm trying to convert an Nx1 matrix into a python list. Say I have some 3x1 matrix `x = scipy.matrix([1,2,3]).transpose()` My aim is to create a list, y, from x so that `y = [1, 2, 3]` I've tried using the `tolist()` method, but it returns `[[1], [2], [3]]`, which isn't the result that I'm after. The best I can do is this:

```
y = [xi for xi in x.flat]
```

but it's a bit cumbersome, and I'm not sure if there's an easier way to achieve the same result. Like I said, I'm still coming to grips with Python and Scipy... Thanks
2011/01/14
[ "https://Stackoverflow.com/questions/4690366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/568864/" ]
A question for your question
----------------------------

While Sven and Navi have answered your question on how to convert

```
x = scipy.matrix([1,2,3]).transpose()
```

into a list, I'll ask a question before answering:

* Why are you using an Nx1 matrix instead of an array?

Using array instead of matrix
-----------------------------

If you look at the Numpy for Matlab Users wiki/documentation page, section 3 discusses [*'array' or 'matrix'? Which should I use?*](http://www.scipy.org/NumPy_for_Matlab_Users#head-e9a492daa18afcd86e84e07cd2824a9b1b651935). The short answer is that you should use array. One of the advantages of using an array is that:

> You can treat rank-1 arrays as either row or column vectors. dot(A,v) treats v as a column vector, while dot(v,A) treats v as a row vector. This can save you having to type a lot of transposes.

Also, as stated in the [Numpy Reference Documentation](http://docs.scipy.org/doc/numpy/reference/arrays.classes.html#matrix-objects), "Matrix objects are always two-dimensional." This is why `x.tolist()` returned a nested list of `[[1], [2], [3]]` for you. Since you want an Nx1 object, I'd recommend using array as follows:

```
>>> import scipy
>>> x = scipy.array([1,2,3])
>>> x
array([1, 2, 3])
>>> y = x.tolist()  # That's it. A clean, succinct conversion to a list.
>>> y
[1, 2, 3]
```

If you really want to use matrix
--------------------------------

If for some reason you truly need/want to use a matrix instead of an array, here's what I would do:

```
>>> import scipy
>>> x = scipy.matrix([1,2,3]).transpose()
>>> x
matrix([[1],
        [2],
        [3]])
>>> y = x.T.tolist()[0]
>>> y
[1, 2, 3]
```

In words, the `x.T.tolist()[0]` will:

1. Transpose the x matrix using the `.T` attribute
2. Convert the transposed matrix into a nested list using `.tolist()`
3. Grab the first element of the nested list using `[0]`
I think you are almost there; use the flatten function: <http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flatten.html>
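For completeness, a sketch of the flatten approach applied to the question's matrix; note that `flatten()` on a matrix still returns a 1xN matrix, so `tolist()` yields a nested list and the final `[0]` takes its only row:

```
import scipy

x = scipy.matrix([1, 2, 3]).transpose()
y = x.flatten().tolist()[0]
print(y)  # [1, 2, 3]
```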
6,876,553
I thought to try using [D](http://en.wikipedia.org/wiki/D_%28programming_language%29) for some system administration scripts which require high performance (to compare performance with python/perl etc). I can't find an example in the tutorials I have looked through so far (dsource.org etc.) on how to make a system call (i.e. calling another program) and receive its output from stdout. If I missed it, could someone point me to the right docs/tutorial, or provide the answer right away?
2011/07/29
[ "https://Stackoverflow.com/questions/6876553", "https://Stackoverflow.com", "https://Stackoverflow.com/users/340811/" ]
Well, then I of course found it: <http://www.digitalmars.com/d/2.0/phobos/std_process.html#shell> (Version using the Tango library here: <http://www.dsource.org/projects/tango/wiki/TutExec>). The former version is the one that works with D 2.0 (thereby the current dmd compiler that comes with ubuntu). I got this tiny example to work now, compiled with dmd:

```
import std.stdio;
import std.process;

void main()
{
    string output = shell("ls -l");
    write(output);
}
```
std.process has been updated since... the new function is spawnShell:

```
import std.stdio;
import std.process;

void main(){
    auto pid = spawnShell("ls -l");
    write(pid);
}
```
62,998,373
I have a graph structure like this: [![enter image description here](https://i.stack.imgur.com/SMnZU.png)](https://i.stack.imgur.com/SMnZU.png)

I need to select all `ContentItem` nodes that have any connection with the other nodes. I am also passing in a list of ids for each of the node types for filtering purposes, i.e. I pass in a list of the neo4j ids for the items I wish to INCLUDE in the search. Any `ContentItem` that is related to any of the other nodes whose id was passed in should be returned. I've tried with a UNION, as this felt like the simplest way, but I'm not sure that it's correct.

```
MATCH (n:ContentItem) WHERE id(n) IN $neoIds
WITH n
OPTIONAL MATCH (n:ContentItem)-[:IN]->(pt:PulseTopic) WHERE id(pt) IN $pulseTopics
RETURN n
UNION
OPTIONAL MATCH (n:ContentItem)-[:IN]->(pst:SubPulseTopic) WHERE id(pst) IN $subPulseTopics
RETURN n
UNION
OPTIONAL MATCH (n:ContentItem)-[:FROM]->(s:Supplier) WHERE id(s) IN $suppliers
RETURN n
UNION
OPTIONAL MATCH (n:ContentItem)-[:USED_FOR]->(ua:UseArea) WHERE id(ua) IN $useAreas
RETURN n
UNION
OPTIONAL MATCH (n:ContentItem)-[:IN]->(blt:BLTopic) WHERE id(blt) IN $blTopics
RETURN n
```

Firstly, when I reference the record in python I get an error:

```
for r in tx.run(cypherStep2, paramsStep2):
    d = r['n']['id']
```

...gives: `TypeError: 'NoneType' object is not subscriptable`

I'm not sure why that would be. If I just do `MATCH (n:ContentItem) WHERE id(n) IN $neoIds RETURN n` I don't get this error, so I'm thinking this has something to do with the `UNION`.

And secondly, I am wondering if this will actually filter `ContentItem` on the `$neoIds` passed in, or whether `OPTIONAL MATCH (n:ContentItem)` means ANY `ContentItem` in the `UNION`.

What is the best way to do a query like this, please?
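As a side note on the `TypeError`: each `OPTIONAL MATCH` branch can return rows where `n` is null, the driver maps that to `None`, and subscripting `None` raises exactly the reported error. A defensive sketch around the question's own loop:

```
for r in tx.run(cypherStep2, paramsStep2):
    node = r['n']
    if node is None:  # OPTIONAL MATCH (and UNIONs of them) can yield null rows
        continue
    d = node['id']
```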
2020/07/20
[ "https://Stackoverflow.com/questions/62998373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7412939/" ]
You can pass the \_incrementCounter method down to the other widget. File 1:

```
class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headline4,
            ),
          ],
        ),
      ),
      floatingActionButton: IncrementCounterButton(
        incrementCounter: _incrementCounter,
      ),
    );
  }
}
```

File 2:

```
class IncrementCounterButton extends StatelessWidget {
  final void Function() incrementCounter;

  IncrementCounterButton({Key key, this.incrementCounter}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return FloatingActionButton(
      onPressed: incrementCounter,
      tooltip: 'Increment',
      child: Icon(Icons.add),
    );
  }
}
```
You have to pass the function instead of calling it. The code:

```
onPressed: () {
  _incrementCounter();
},
```

or like this if you like shortcuts:

```
onPressed: () => _incrementCounter(),
```

Hope it helps! Happy coding :)
13,421,709
I use `vim` (installed on `cygwin`) to write `c++` programs but it does not highlight some `c++` keywords such as `new`, `delete`, `public`, `friend`, `try`, but highlight others such as `namespace`, `int`, `const`, `operator`, `true`, `class`, `include`. It also not change color of operators. I never changed its syntax file. What's wrong with it? Thanks a lot. I use a customized color scheme; when I change it to `desert` color scheme, highlighting has no problem, but I need to use that color scheme and can't change it to something else. I want it show program as the following picture(I used this color scheme with notepad++ in the picture): ![example of correctly colored one](https://i.stack.imgur.com/qGIb9.png) but now it's as the following picture: ![not correctly colored one](https://i.stack.imgur.com/anaCh.png) the `colorscheme` is here: ``` "Tomorrow Night Bright - Full Colour and 256 Colour " http://chriskempson.com " " Hex colour conversion functions borrowed from the theme "Desert256"" " Default GUI Colours let s:foreground = "eaeaea" let s:background = "000000" let s:selection = "424242" let s:line = "2a2a2a" let s:comment = "969896" let s:red = "d54e53" let s:orange = "e78c45" let s:yellow = "e7c547" let s:green = "b9ca4a" let s:aqua = "70c0b1" let s:blue = "7aa6da" let s:purple = "c397d8" let s:window = "4d5057" set background=dark hi clear syntax reset let g:colors_name = "Tomorrow-Night-Bright" if has("gui_running") || &t_Co == 88 || &t_Co == 256 " Returns an approximate grey index for the given grey level fun <SID>grey_number(x) if &t_Co == 88 if a:x < 23 return 0 elseif a:x < 69 return 1 elseif a:x < 103 return 2 elseif a:x < 127 return 3 elseif a:x < 150 return 4 elseif a:x < 173 return 5 elseif a:x < 196 return 6 elseif a:x < 219 return 7 elseif a:x < 243 return 8 else return 9 endif else if a:x < 14 return 0 else let l:n = (a:x - 8) / 10 let l:m = (a:x - 8) % 10 if l:m < 5 return l:n else return l:n + 1 endif endif endif endfun " Returns the actual grey level represented by the grey index fun <SID>grey_level(n) if &t_Co == 88 if a:n == 0 return 0 elseif a:n == 1 return 46 elseif a:n == 2 return 92 elseif a:n == 3 return 115 elseif a:n == 4 return 139 elseif a:n == 5 return 162 elseif a:n == 6 return 185 elseif a:n == 7 return 208 elseif a:n == 8 return 231 else return 255 endif else if a:n == 0 return 0 else return 8 + (a:n * 10) endif endif endfun " Returns the palette index for the given grey index fun <SID>grey_colour(n) if &t_Co == 88 if a:n == 0 return 16 elseif a:n == 9 return 79 else return 79 + a:n endif else if a:n == 0 return 16 elseif a:n == 25 return 231 else return 231 + a:n endif endif endfun " Returns an approximate colour index for the given colour level fun <SID>rgb_number(x) if &t_Co == 88 if a:x < 69 return 0 elseif a:x < 172 return 1 elseif a:x < 230 return 2 else return 3 endif else if a:x < 75 return 0 else let l:n = (a:x - 55) / 40 let l:m = (a:x - 55) % 40 if l:m < 20 return l:n else return l:n + 1 endif endif endif endfun " Returns the actual colour level for the given colour index fun <SID>rgb_level(n) if &t_Co == 88 if a:n == 0 return 0 elseif a:n == 1 return 139 elseif a:n == 2 return 205 else return 255 endif else if a:n == 0 return 0 else return 55 + (a:n * 40) endif endif endfun " Returns the palette index for the given R/G/B colour indices fun <SID>rgb_colour(x, y, z) if &t_Co == 88 return 16 + (a:x * 16) + (a:y * 4) + a:z else return 16 + (a:x * 36) + (a:y * 6) + a:z endif endfun " Returns the palette index to approximate the given R/G/B 
colour levels fun <SID>colour(r, g, b) " Get the closest grey let l:gx = <SID>grey_number(a:r) let l:gy = <SID>grey_number(a:g) let l:gz = <SID>grey_number(a:b) " Get the closest colour let l:x = <SID>rgb_number(a:r) let l:y = <SID>rgb_number(a:g) let l:z = <SID>rgb_number(a:b) if l:gx == l:gy && l:gy == l:gz " There are two possibilities let l:dgr = <SID>grey_level(l:gx) - a:r let l:dgg = <SID>grey_level(l:gy) - a:g let l:dgb = <SID>grey_level(l:gz) - a:b let l:dgrey = (l:dgr * l:dgr) + (l:dgg * l:dgg) + (l:dgb * l:dgb) let l:dr = <SID>rgb_level(l:gx) - a:r let l:dg = <SID>rgb_level(l:gy) - a:g let l:db = <SID>rgb_level(l:gz) - a:b let l:drgb = (l:dr * l:dr) + (l:dg * l:dg) + (l:db * l:db) if l:dgrey < l:drgb " Use the grey return <SID>grey_colour(l:gx) else " Use the colour return <SID>rgb_colour(l:x, l:y, l:z) endif else " Only one possibility return <SID>rgb_colour(l:x, l:y, l:z) endif endfun " Returns the palette index to approximate the 'rrggbb' hex string fun <SID>rgb(rgb) let l:r = ("0x" . strpart(a:rgb, 0, 2)) + 0 let l:g = ("0x" . strpart(a:rgb, 2, 2)) + 0 let l:b = ("0x" . strpart(a:rgb, 4, 2)) + 0 return <SID>colour(l:r, l:g, l:b) endfun " Sets the highlighting for the given group fun <SID>X(group, fg, bg, attr) if a:fg != "" exec "hi " . a:group . " guifg=#" . a:fg . " ctermfg=" . <SID>rgb(a:fg) endif if a:bg != "" exec "hi " . a:group . " guibg=#" . a:bg . " ctermbg=" . <SID>rgb(a:bg) endif if a:attr != "" exec "hi " . a:group . " gui=" . a:attr . " cterm=" . a:attr endif endfun " Vim Highlighting call <SID>X("Normal", s:foreground, s:background, "") call <SID>X("LineNr", s:selection, "", "") call <SID>X("NonText", s:selection, "", "") call <SID>X("SpecialKey", s:selection, "", "") call <SID>X("Search", s:background, s:yellow, "") call <SID>X("TabLine", s:foreground, s:background, "reverse") call <SID>X("StatusLine", s:window, s:yellow, "reverse") call <SID>X("StatusLineNC", s:window, s:foreground, "reverse") call <SID>X("VertSplit", s:window, s:window, "none") call <SID>X("Visual", "", s:selection, "") call <SID>X("Directory", s:blue, "", "") call <SID>X("ModeMsg", s:green, "", "") call <SID>X("MoreMsg", s:green, "", "") call <SID>X("Question", s:green, "", "") call <SID>X("WarningMsg", s:red, "", "") call <SID>X("MatchParen", "", s:selection, "") call <SID>X("Folded", s:comment, s:background, "") call <SID>X("FoldColumn", "", s:background, "") if version >= 700 call <SID>X("CursorLine", "", s:line, "none") call <SID>X("CursorColumn", "", s:line, "none") call <SID>X("PMenu", s:foreground, s:selection, "none") call <SID>X("PMenuSel", s:foreground, s:selection, "reverse") end if version >= 703 call <SID>X("ColorColumn", "", s:line, "none") end " Standard Highlighting call <SID>X("Comment", s:comment, "", "") call <SID>X("Todo", s:comment, s:background, "") call <SID>X("Title", s:comment, "", "") call <SID>X("Identifier", s:red, "", "none") call <SID>X("Statement", s:foreground, "", "") call <SID>X("Conditional", s:foreground, "", "") call <SID>X("Repeat", s:foreground, "", "") call <SID>X("Structure", s:purple, "", "") call <SID>X("Function", s:blue, "", "") call <SID>X("Constant", s:orange, "", "") call <SID>X("String", s:green, "", "") call <SID>X("Special", s:foreground, "", "") call <SID>X("PreProc", s:purple, "", "") call <SID>X("Operator", s:aqua, "", "none") call <SID>X("Type", s:blue, "", "none") call <SID>X("Define", s:purple, "", "none") call <SID>X("Include", s:blue, "", "") "call <SID>X("Ignore", "666666", "", "") " Vim Highlighting call <SID>X("vimCommand", s:red, 
"", "none") " C Highlighting call <SID>X("cType", s:yellow, "", "") call <SID>X("cStorageClass", s:purple, "", "") call <SID>X("cConditional", s:purple, "", "") call <SID>X("cRepeat", s:purple, "", "") " PHP Highlighting call <SID>X("phpVarSelector", s:red, "", "") call <SID>X("phpKeyword", s:purple, "", "") call <SID>X("phpRepeat", s:purple, "", "") call <SID>X("phpConditional", s:purple, "", "") call <SID>X("phpStatement", s:purple, "", "") call <SID>X("phpMemberSelector", s:foreground, "", "") " Ruby Highlighting call <SID>X("rubySymbol", s:green, "", "") call <SID>X("rubyConstant", s:yellow, "", "") call <SID>X("rubyAttribute", s:blue, "", "") call <SID>X("rubyInclude", s:blue, "", "") call <SID>X("rubyLocalVariableOrMethod", s:orange, "", "") call <SID>X("rubyCurlyBlock", s:orange, "", "") call <SID>X("rubyStringDelimiter", s:green, "", "") call <SID>X("rubyInterpolationDelimiter", s:orange, "", "") call <SID>X("rubyConditional", s:purple, "", "") call <SID>X("rubyRepeat", s:purple, "", "") " Python Highlighting call <SID>X("pythonInclude", s:purple, "", "") call <SID>X("pythonStatement", s:purple, "", "") call <SID>X("pythonConditional", s:purple, "", "") call <SID>X("pythonFunction", s:blue, "", "") " JavaScript Highlighting call <SID>X("javaScriptBraces", s:foreground, "", "") call <SID>X("javaScriptFunction", s:purple, "", "") call <SID>X("javaScriptConditional", s:purple, "", "") call <SID>X("javaScriptRepeat", s:purple, "", "") call <SID>X("javaScriptNumber", s:orange, "", "") call <SID>X("javaScriptMember", s:orange, "", "") " HTML Highlighting call <SID>X("htmlTag", s:red, "", "") call <SID>X("htmlTagName", s:red, "", "") call <SID>X("htmlArg", s:red, "", "") call <SID>X("htmlScriptTag", s:red, "", "") " Diff Highlighting call <SID>X("diffAdded", s:green, "", "") call <SID>X("diffRemoved", s:red, "", "") " Delete Functions delf <SID>X delf <SID>rgb delf <SID>colour delf <SID>rgb_colour delf <SID>rgb_level delf <SID>rgb_number delf <SID>grey_colour delf <SID>grey_level delf <SID>grey_number endif ```
2012/11/16
[ "https://Stackoverflow.com/questions/13421709", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1363855/" ]
Add this to the C Highlighting paragraph: ``` call <SID>X("Statement", s:purple, "", "") ```
All the keywords you mention eventually link to the standard `Statement` syntax group. Maybe that one got cleared. Try ``` :verbose highlight Statement ``` If it shows `xxx cleared`, you're one step further and now need to investigate why your colorscheme does not define a coloring.
26,256,055
In my Python program, I have this data:

```
test = {"Controller_node1_external_port": {"properties": {"fixed_ips": [{"ip_address": "12.0.0.1"}],"network_id": {"get_param": ["ex_net_map_param",{"get_param": "ex_net_param"}]}},"type": "OS::Neutron::Port"}}
```

`yaml.dump(test)` is giving me the output:

```
Controller_node1_external_port:
  properties:
    fixed_ips:
    - {ip_address: 12.0.0.1}
    network_id:
      get_param:
      - ex_net_map_param
      - {get_param: ex_net_param}
  type: OS::Neutron::Port
```

But I want the ip_address line as `- ip_address: 12.0.0.1` (i.e. without the surrounding curly braces). Desired output:

```
Controller_node1_external_port:
  properties:
    fixed_ips:
    - ip_address: 12.0.0.1
    network_id:
      get_param:
      - ex_net_map_param
      - {get_param: ex_net_param}
  type: OS::Neutron::Port
```
2014/10/08
[ "https://Stackoverflow.com/questions/26256055", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3197309/" ]
I will provide a simple way to store and retrieve your data inside that multidimensional array:

```
<?php
//Global definition of the array
$categories = array(
    "house" => array(),
    "indie" => array(),
    "trap" => array(),
    "trance" => array(),
    "partybangers" => array(),
);

function push_to_category($category,$value) {
    global $categories;
    array_push($categories[$category], $value);
}

push_to_category("house","2797");

//To retrieve data you call $categories[$category][index]
//ex) $categories["house"][0] returns 2797
?>
```
I like [FoxNos's answer](https://stackoverflow.com/questions/26256032/store-value-in-multidimensional-array-if-value-is-equal-to-key#answer-26256906), but not the global, since the global might not be a global in another context (`$categories` might be defined in another function or class). So this is what I would've done:

```php
$categories = array(.........);

function push_to_category(&$categories, $category, $value) {
    array_push($categories[$category], $value);
}

push_to_category($categories, "house", "2797");
```

Since the 1st arg is by-reference, you don't need a global, so you can use it anywhere. A minor improvement. If you're inside another function or class, and want to `push_to_category` a lot, you could even do this:

```php
class UserController
{
    function categoriesAction()
    {
        $categories = array(.........);

        $push_to_category = function($category, $value) use (&$categories) {
            array_push($categories[$category], $value);
        }; // note the closing semicolon: this is an assignment

        $push_to_category("house", "2797");
    }
}
```

which makes a local function ([closure](http://nl1.php.net/manual/en/functions.anonymous.php)) that uses/manipulates a local variable (`$categories`).
12,326,443
I am writing a Python program which validates device events. I am continuously reading some data from the serial port of a device. When I write something to the device's serial port, the device writes a string back on the serial port which I have to read. The continuous reading from the serial port happens in a separate worker thread, and I read the data line by line and pass it to a thread. The device writes some data continuously, and it also writes the event description on the serial port. To be more specific: when I write something to the device, it generates an event on the device, and the description of the event is written back to the serial port. I have to read this back and validate whether the event has occurred. Now, because I am reading the device output line by line in a thread, by the time I write something and start watching for that event description, its output has already gone by and other output lines are being read. How do I synchronize this? Can anyone help me design this part?
2012/09/07
[ "https://Stackoverflow.com/questions/12326443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/348686/" ]
If you are just using threads for asynchronous IO, you may be better off not using threads and using select.select, or perhaps even asyncore if you want to make it even easier on yourself. <http://docs.python.org/library/asyncore.html>
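A minimal sketch of the select-based approach (POSIX only, since `select()` needs a real file descriptor; the device path, baud rate, and the `handle_line` helper are assumptions, not part of the original code):

```python
import select
import serial  # pyserial, assumed to be installed

def handle_line(line):
    print(line)  # placeholder for the real validation logic

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=0.1)  # hypothetical device

while True:
    # Block until the serial port is readable, or give up after 1 second.
    # pyserial Serial objects expose fileno() on POSIX, so select() accepts them.
    readable, _, _ = select.select([ser], [], [], 1.0)
    if readable:
        line = ser.readline()
        if line:
            handle_line(line)
```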
Code snippet is as follows:

```
class SerialCom:
    def __init__(self, comport):
        self.comport = comport
        self.readSerialPortThread = ReadSearialPortThread(self.comport)

    def writeStringToSerialPort(self, someString):
        self.comport.write(someString)

    def waitfordata(self, someString):
        # I have to continuously read data from the serial port till we see someString.
```

In ReadSearialPortThread, data from the serial port is continuously read (the device info values). When I write data using writeStringToSerialPort(), the device outputs some data to the serial port. I have to read this data from the function waitfordata() to validate the response from the device. Now what is happening is that when I write some values and call waitfordata(), the required values have already been read by readSerialPortThread, which has continued reading other values like device info. So I am losing the values there. I want to know how to synchronize that.
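One conventional way to fix the lost-response problem is to have the single reader thread push every line into a queue and let `waitfordata` consume from that queue, so nothing is dropped between writing and waiting. A minimal sketch, assuming `comport` is an already-opened pyserial object; the method names mirror the snippet above, but this is not the original code (Python 3 module names; on Python 2 the module is `Queue`):

```python
import queue
import threading

class SerialCom:
    def __init__(self, comport):
        self.comport = comport
        self.lines = queue.Queue()  # every line read from the port goes in here
        t = threading.Thread(target=self._read_loop, daemon=True)
        t.start()

    def _read_loop(self):
        # The only place that reads from the port, so no line can be stolen.
        while True:
            line = self.comport.readline()
            if line:
                self.lines.put(line)

    def writeStringToSerialPort(self, someString):
        self.comport.write(someString)

    def waitfordata(self, someString, timeout=5.0):
        # Drain queued lines until the expected string shows up.
        while True:
            line = self.lines.get(timeout=timeout)  # raises queue.Empty on timeout
            if someString in line:
                return line
```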
47,367,681
I am splitting a text based on ",". I need to ignore the commas in text between quotes (single or double). Example of text:

```
Capacitors,3,"C2,C7-C8",100nF,,
Capacitors,3,'C2,C7-C8',100nF,,
```

This has to return

```
['Capacitors','3','C2,C7-C8','100nF','','']
```

How do I say this (ignore commas between quotes) in a Python regular expression? For now, I am using

```
pattern = re.compile('\s*,\s*')
pattern.split(myText)
```
2017/11/18
[ "https://Stackoverflow.com/questions/47367681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7803545/" ]
Don't use regex for this. With a little tweaking, you can use `csv` module to parse the line perfectly (`csv` is designed to handle quoted commas). Just normalize the quotes to double quotes: ``` import csv s = """Capacitors,3,"C2,C7-C8",100nF,, Capacitors,3,'C2,C7-C8',100nF,,""" print(next(csv.reader([s.replace("'",'"')]))) ``` result: ``` ['Capacitors', '3', 'C2,C7-C8', '100nF', '', ' Capacitors', '3', 'C2,C7-C8', '100nF', '', ''] ```
I guess you changed your question. That looks like a csv-formatted file:

```
import csv, io

s = """\
Capacitors,3,"C2,C7-C8",100nF,,
Capacitors,3,'C2,C7-C8',100nF,,"""

[i for i in csv.reader(io.StringIO(s), delimiter=',', quotechar='"')]
```

Returns:

```
[['Capacitors', '3', 'C2,C7-C8', '100nF', '', ''],
 ['Capacitors', '3', "'C2", "C7-C8'", '100nF', '', '']]
```
47,367,681
I am splitting a text based on ",". I need to ignore the commas in text between quotes (single or double). Example of text:

```
Capacitors,3,"C2,C7-C8",100nF,,
Capacitors,3,'C2,C7-C8',100nF,,
```

This has to return

```
['Capacitors','3','C2,C7-C8','100nF','','']
```

How do I say this (ignore commas between quotes) in a Python regular expression? For now, I am using

```
pattern = re.compile('\s*,\s*')
pattern.split(myText)
```
2017/11/18
[ "https://Stackoverflow.com/questions/47367681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7803545/" ]
Don't use regex for this. With a little tweaking, you can use `csv` module to parse the line perfectly (`csv` is designed to handle quoted commas). Just normalize the quotes to double quotes: ``` import csv s = """Capacitors,3,"C2,C7-C8",100nF,, Capacitors,3,'C2,C7-C8',100nF,,""" print(next(csv.reader([s.replace("'",'"')]))) ``` result: ``` ['Capacitors', '3', 'C2,C7-C8', '100nF', '', ' Capacitors', '3', 'C2,C7-C8', '100nF', '', ''] ```
Since the question is tagged regex, here is a regex version:

```
s="""Capacitors,3,"C2,C7-C8",100nF,,
Capacitors,3,'C2,C7-C8',100nF,,"""

import re
pattern=r"(([\"'])(?:(?!\2).)*|[^,\n]+)"
word_list=[]
match=re.finditer(pattern,s)
for find in match:
    word_list.append(find.group())
print(word_list)
```
14,427,281
AIM: I need to find out how to parse the data from the API search below into a CSV file. The search returns results in the following format:

```
[(u'Bertille Maciag', 10), (u'Peter Prior', 5), (u'Chris OverPar Duguid', 4), (u'Selby Dhliwayo', 4), (u'FakeBitch!', 4), (u'Django Unchianed UK', 4), (u'Padraig Lynch ', 4), (u'Jessica Gunn', 4), (u'harvey.', 4), (u'Wowphotography', 3)]
```

I'm a newbie to Python and any help would be greatly appreciated.

```
import twitter, json, operator

#Construct Twitter API object
searchApi = twitter.Twitter(domain="search.twitter.com")

#Get trends
query = "#snow"
tweeters = dict()

for i in range(1,16):
    response = searchApi.search(q=query, rpp=100, page=i)
    tweets = response['results']
    for item in tweets:
        tweet = json.loads(json.dumps(item))
        user = tweet['from_user_name']
        #print user
        if user in tweeters:
            # print "list already contains", user
            tweeters[user] += 1
        else:
            tweeters[user] = 1

sorted_tweeters = sorted(tweeters.iteritems(), key=operator.itemgetter(1), reverse=True)

print len(tweeters)
print tweeters
print sorted_tweeters[0:10]
print 'Done!'
```
2013/01/20
[ "https://Stackoverflow.com/questions/14427281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1973375/" ]
It looks like you have all the hard bits working, and are just missing the 'save to csv' part.

```
import collections
import twitter, json, operator

#Construct Twitter API object
searchApi = twitter.Twitter(domain="search.twitter.com")

#Get trends
query = "#snow"
tweeters = collections.defaultdict(lambda: 0)

for i in range(1,16):
    response = searchApi.search(q=query, rpp=100, page=i)
    tweets = response['results']
    for item in tweets:
        user = item['from_user_name']
        #print user
        tweeters[user] += 1

sorted_tweeters = sorted(tweeters.iteritems(), key=operator.itemgetter(1), reverse=True)

str_fmt = u'%s\u200E, %d \n'
with open('test_so.csv','w') as f_out:
    for twiters in sorted_tweeters:
        f_out.write((str_fmt % twiters).encode('utf8'))
```

You need the 'u' on the format string and `encode` because you have non-ascii characters in the user names. `u'\u200E'` is the LTR marker, so that the csv file will look right with RTL-language user names. I also cleaned up the iteration code a bit: by using a `defaultdict` you don't need to check whether a key exists; if it does not exist, the factory function is called and its value is returned (in this case 0). `item` is already a `dict`, so there is no need to convert it to a json string and then back to a `dict`.
Have you looked at the [python CSV](http://docs.python.org/2/library/csv.html) module? using your output: ``` import csv, os x = [(u'Bertille Maciag', 10), (u'Peter Prior', 5), (u'Chris OverPar Duguid', 4), (u'Selby Dhliwayo', 4), (u'FakeBitch!', 4), (u'Django Unchianed UK', 4), (u'Padraig Lynch ', 4), (u'Jessica Gunn', 4), (u'harvey.', 4), (u'Wowphotography', 3)] f = open("/tmp/yourfile.csv", "w") writer = csv.writer(f, quoting=csv.QUOTE_ALL) for i in x: writer.writerow(i) f.close() ```
73,988,902
I have one tensor slice dataset with all the images and one with their masking images. How do I combine/join/add them to make a single `tf.data.Dataset`?

```
# turning them into tensor data
val_img_data = tf.data.Dataset.from_tensor_slices(np.array(all_val_img))
val_mask_data = tf.data.Dataset.from_tensor_slices(np.array(all_val_mask))
```

Then I mapped a function over the paths to turn them into images:

```
val_img_tensor = val_img_data.map(get_image)
val_mask_tensor = val_mask_data.map(get_image)
```

So now I have two datasets: one of images and the other of masks. How do I join them into one combined dataset? I tried zipping them; it didn't work.

```
val_data = tf.data.Dataset.from_tensor_slices(zip(val_img_tensor, val_mask_tensor))
```

Error

```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/structure.py in normalize_element(element, element_signature)
    101 if spec is None:
--> 102 spec = type_spec_from_value(t, use_fallback=False)
    103 except TypeError:

11 frames

TypeError: Could not build a `TypeSpec` for <zip object at 0x7f08f3862050> with type zip

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
    100 dtype = dtypes.as_dtype(dtype).as_datatype_enum
    101 ctx.ensure_initialized()
--> 102 return ops.EagerTensor(value, ctx.device_name, dtype)
    103
    104

ValueError: Attempt to convert a value (<zip object at 0x7f08f3862050>) with an unsupported type (<class 'zip'>) to a Tensor.
```
2022/10/07
[ "https://Stackoverflow.com/questions/73988902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14076425/" ]
The comment of Djinn is mostly what you need to follow. Here is the end-to-end answer. Here is how you can build a data pipeline for segmentation model training, which generally needs training pairs of both `images, masks`. First, get the sample paths.

```
images = ['1.jpg', '2.jpg', '3.jpg', ...]
masks = ['1.png', '2.png', '3.png', ...]
```

Second, define the hyper-params, i.e. image size, batch size etc., and build the `tf.data` API input pipelines.

```
IMAGE_SIZE = 128
BATCH_SIZE = 86

def read_image(image_path, mask=False):
    image = tf.io.read_file(image_path)
    if mask:
        image = tf.image.decode_png(image, channels=1)
        image.set_shape([None, None, 1])
        image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
        image = tf.cast(image, tf.int32)
    else:
        image = tf.image.decode_png(image, channels=3)
        image.set_shape([None, None, 3])
        image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
        image = image / 255.
    return image

def load_data(image_list, mask_list):
    image = read_image(image_list)
    mask = read_image(mask_list, mask=True)
    return image, mask

def data_generator(image_list, mask_list, split='train'):
    dataset = tf.data.Dataset.from_tensor_slices((image_list, mask_list))
    dataset = dataset.shuffle(8*BATCH_SIZE) if split == 'train' else dataset
    dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
    dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
    dataset = dataset.prefetch(tf.data.AUTOTUNE)
    return dataset
```

Lastly, pass the lists of image paths (image + mask) to build the data generator.

```
train_dataset = data_generator(images, masks)
image, mask = next(iter(train_dataset.take(1)))
print(image.shape, mask.shape)

(86, 128, 128, 3) (86, 128, 128, 1)
```

Here you can see that `tf.data.Dataset.from_tensor_slices` successfully loads the training pairs and returns them as tuples (no need for zipping). Hope it resolves your problem. I've also answered your other query regarding augmentation pipelines, [HERE](https://stackoverflow.com/a/73997583/9215780). To add more, check out the following resources, where I've shared plenty of semantic segmentation modeling approaches. They may help.

* [Carvana Image Semantic Segmentation : Starter](https://www.kaggle.com/code/ipythonx/carvana-image-semantic-segmentation-starter)
* [Stanford Background Scene Understanding : Starter](https://www.kaggle.com/code/ipythonx/stanford-background-scene-understanding-starter)
* [Retinal Vessel Segmentation : Starter](https://www.kaggle.com/code/ipythonx/retinal-vessel-segmentation-starter)
Maybe try `tf.data.Dataset.zip`: ``` val_data = tf.data.Dataset.zip((val_img_tensor, val_mask_tensor)) ```
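If the zipping is done on the path datasets before mapping, the same `get_image` function from the question can then be applied to both elements in one map call; a minimal sketch (variable and function names are taken from the question):

```python
import tensorflow as tf

# Zip the two path datasets into (image_path, mask_path) pairs,
# then decode both paths together.
val_data = tf.data.Dataset.zip((val_img_data, val_mask_data))
val_data = val_data.map(lambda img, msk: (get_image(img), get_image(msk)),
                        num_parallel_calls=tf.data.AUTOTUNE)
```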
65,846,292
My list does not appear when I run my program. There are no errors; it just pops up with a blank screen. This is my code. Please help, I'm new to Python.

```
devices = ['iphone', 'ps5', 'pc']
devicesaccessories = ['mouse', 'keyboard', 'airpods']
joinedlist = devices + devicesaccessories
```
2021/01/22
[ "https://Stackoverflow.com/questions/65846292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15059825/" ]
You probably don't have the dependency binaries, like opencv\_\*.dll, in the same folder as your binary. The InferenceEngine binaries also need to be present in the folder from which your binary is expected to run. Please use Dependency Walker to identify the load dependencies and copy the needed binaries.
copy "\inference\_engine\lib\intel64\Release\plugins.xml" to project dir, then replace Core core to Core core(plugins.xml)
9,331,010
This post is the same as my question in [MySQL in Python: UnicodeEncodeError: 'ascii'](https://stackoverflow.com/questions/9330046/mysql-in-python-unicodeencodeerror-ascii); this is just to clear things up. I am trying to save a string to a MySQL database but I get an error:

> File ".smart.py", line 51, in
> (number, text, 'smart', 'u')
>
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position
> 25: ordinal not in range(128)

and the string is saved in ***m['Text']***

> Lala\*=#&%@<>\_?!:;-'"/()¥¡¿

Here is a snippet of the code

```
risk = m['Text']
msg = risk.encode('utf8')
text = db.escape_string(msg)
sql = "INSERT INTO posts(nmbr, \
       msg, tel, sts) \
       VALUES ('%s', '%s', '%s', '%s')" % \
       (number, text, 'smart', 'u')
```

If I comment out the SQL query and put ***print text***, it prints out Lala\*=#&%@<>\_?!:;-'"/()¥¡¿ The error is only encountered when the SQL is being processed. MySQL encoding is set to utf8\_unicode\_ci (or should I change this?). Thanks.
2012/02/17
[ "https://Stackoverflow.com/questions/9331010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1067791/" ]
Add these parameters: `MySQLdb.connect(..., use_unicode=1, charset="utf8")`. Create a cursor

```
cur = db.cursor()
```

and then execute like so:

```
risk = m['Text']
sql = """INSERT INTO posts(nmbr, msg, tel, sts) \
       VALUES (%s, %s, %s, %s)"""
values = (number, risk, 'smart', 'u')

cur.execute(sql, values)  # pass sql and values separately so values are escaped/sanitized
db.commit()               # commit on the connection; cursors have no commit()
```

Now you don't need these two lines:

```
msg = risk.encode('utf8')
text = db.escape_string(msg)
```
It is not clear whether your `m['Text']` value is of type `StringType` or `UnicodeType`. My bet is that it is a byte-string (`StringType`). If that's true, then adding a line `m['Text'] = m['Text'].decode('UTF-8')` before your insert may work.
36,307,767
I have set the output of an Azure Stream Analytics job to a Service Bus queue, which sends the data in JSON-serialized format. When I receive the queue message in a Python script, along with the data in curly braces I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it, as the received message is not recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
The issue is similar to the SO thread [Interoperability Azure Service Bus Message Queue Messages](https://stackoverflow.com/questions/33542509/interoperability-azure-service-bus-message-queue-messages). In my experience, the data from Azure Stream Analytics to Service Bus is sent via the AMQP protocol, but the protocol for receiving the data in Python is HTTP. The excess content is generated by AMQP during transmission. Assuming you receive the message via the code below (please see <https://azure.microsoft.com/en-us/documentation/articles/service-bus-python-how-to-use-queues/#receive-messages-from-a-queue>), the function `receive_queue_message` with the `False` value for the argument `peek_lock` wraps the REST API [Receive and Delete Message (Destructive Read)](https://msdn.microsoft.com/en-us/library/azure/hh780770.aspx).

```
msg = bus_service.receive_queue_message('taskqueue', peek_lock=False)
print(msg.body)
```

Looking at the source code of the Azure Service Bus SDK for Python, which includes the functions [`receive_queue_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/servicebusservice.py#L937), [`read_delete_queue_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/servicebusservice.py#L884) and [`_create_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/_serialization.py#L59), I think you can directly remove the excess content from `msg.body` using the common string functions [`lstrip`](https://docs.python.org/2/library/string.html#string.lstrip) or [`strip`](https://docs.python.org/2/library/string.html#string.strip).
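A minimal sketch of that cleanup, trimming to the braces rather than relying on the exact prefix bytes (the payload shape is an assumption taken from the question, not part of the SDK):

```python
body = msg.body  # bytes like b'@strin3...{"foo": 1}'
# Keep only the JSON object between the first '{' and the last '}'.
start = body.find(b'{')
end = body.rfind(b'}') + 1
payload = body[start:end].decode('utf-8')
```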
I ran into this issue as well. The previous answers are only workarounds and do not fix the root cause of this issue. The problem you are encountering is likely due to your Stream Analytics compatibility level. Compatibility level 1.0 uses an XML serializer producing the XML tag you are seeing. Compatibility level 1.1 "fixes" this issue. See my previous answer here: <https://stackoverflow.com/a/49307178/263139>.
36,307,767
I have set the output of an Azure Stream Analytics job to a Service Bus queue, which sends the data in JSON-serialized format. When I receive the queue message in a Python script, along with the data in curly braces I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it, as the received message is not recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
The issue is similar to the SO thread [Interoperability Azure Service Bus Message Queue Messages](https://stackoverflow.com/questions/33542509/interoperability-azure-service-bus-message-queue-messages). In my experience, the data from Azure Stream Analytics to Service Bus is sent via the AMQP protocol, but the protocol for receiving the data in Python is HTTP. The excess content is generated by AMQP during transmission. Assuming you receive the message via the code below (please see <https://azure.microsoft.com/en-us/documentation/articles/service-bus-python-how-to-use-queues/#receive-messages-from-a-queue>), the function `receive_queue_message` with the `False` value for the argument `peek_lock` wraps the REST API [Receive and Delete Message (Destructive Read)](https://msdn.microsoft.com/en-us/library/azure/hh780770.aspx).

```
msg = bus_service.receive_queue_message('taskqueue', peek_lock=False)
print(msg.body)
```

Looking at the source code of the Azure Service Bus SDK for Python, which includes the functions [`receive_queue_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/servicebusservice.py#L937), [`read_delete_queue_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/servicebusservice.py#L884) and [`_create_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/_serialization.py#L59), I think you can directly remove the excess content from `msg.body` using the common string functions [`lstrip`](https://docs.python.org/2/library/string.html#string.lstrip) or [`strip`](https://docs.python.org/2/library/string.html#string.strip).
I had the same issue, but in a .NET solution. I was writing a service which sends data to a queue, and on the other hand I was writing a service which gets that data from the queue. I tried to send JSON like this:

```
var documentMessage = new DocumentMessage();
var json = JsonConvert.SerializeObject(documentMessage);
BrokeredMessage message = new BrokeredMessage(json);
await _client.SendAsync(message);
```

In the second service I was getting the JSON, but with this prefix:

> @strin3http//schemas.microsoft.com/2003/10/Serialization/�

I solved this problem by adding a **DataContractJsonSerializer**, like this:

```
var documentMessage = new DocumentMessage();
var serializer = new DataContractJsonSerializer(typeof(DocumentMessage));
BrokeredMessage message = new BrokeredMessage(documentMessage, serializer);
await _client.SendAsync(message);
```

If you want to solve the problem that way, you will have to add data attributes from **System.Runtime.Serialization** to the model:

```
[DataContract]
public class DocumentMessage
{
    [DataMember]
    public string Property1 { get; private set; }

    [DataMember]
    public string Property2 { get; private set; }
}
```
36,307,767
I have set the output of an Azure Stream Analytics job to a Service Bus queue, which sends the data in JSON-serialized format. When I receive the queue message in a Python script, along with the data in curly braces I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it, as the received message is not recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
The issue is similar to the SO thread [Interoperability Azure Service Bus Message Queue Messages](https://stackoverflow.com/questions/33542509/interoperability-azure-service-bus-message-queue-messages). In my experience, the data from Azure Stream Analytics to Service Bus is sent via the AMQP protocol, but the protocol for receiving the data in Python is HTTP. The excess content is generated by AMQP during transmission. Assuming you receive the message via the code below (please see <https://azure.microsoft.com/en-us/documentation/articles/service-bus-python-how-to-use-queues/#receive-messages-from-a-queue>), the function `receive_queue_message` with the `False` value for the argument `peek_lock` wraps the REST API [Receive and Delete Message (Destructive Read)](https://msdn.microsoft.com/en-us/library/azure/hh780770.aspx).

```
msg = bus_service.receive_queue_message('taskqueue', peek_lock=False)
print(msg.body)
```

Looking at the source code of the Azure Service Bus SDK for Python, which includes the functions [`receive_queue_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/servicebusservice.py#L937), [`read_delete_queue_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/servicebusservice.py#L884) and [`_create_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/_serialization.py#L59), I think you can directly remove the excess content from `msg.body` using the common string functions [`lstrip`](https://docs.python.org/2/library/string.html#string.lstrip) or [`strip`](https://docs.python.org/2/library/string.html#string.strip).
When using Microsoft.ServiceBus nuget package, replace ``` message.GetBody<Stream>(); ``` with ``` message.GetBody<string>(); ```
36,307,767
I have set the output of an Azure Stream Analytics job to a Service Bus queue, which sends the data in JSON-serialized format. When I receive the queue message in a Python script, along with the data in curly braces I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it, as the received message is not recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
[This TechNet article](https://social.technet.microsoft.com/wiki/contents/articles/34750.integrating-service-bus-stack-with-logic-apps-and-azure-functions.aspx) suggests the following code: ``` // Get indices of actual message var start = jsonString.IndexOf("{"); var end = jsonString.LastIndexOf("}") + 1; var length = end - start; // Get actual message string cleandJsonString = jsonString.Substring(start, length); ``` Pretty primitive but whatever works I suppose...
I ran into this issue as well. The previous answers are only workarounds and do not fix the root cause of this issue. The problem you are encountering is likely due to your Stream Analytics compatibility level. Compatibility level 1.0 uses an XML serializer producing the XML tag you are seeing. Compatibility level 1.1 "fixes" this issue. See my previous answer here: <https://stackoverflow.com/a/49307178/263139>.
36,307,767
I have set the output of an Azure Stream Analytics job to a Service Bus queue, which sends the data in JSON-serialized format. When I receive the queue message in a Python script, along with the data in curly braces I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it, as the received message is not recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
[This TechNet article](https://social.technet.microsoft.com/wiki/contents/articles/34750.integrating-service-bus-stack-with-logic-apps-and-azure-functions.aspx) suggests the following code: ``` // Get indices of actual message var start = jsonString.IndexOf("{"); var end = jsonString.LastIndexOf("}") + 1; var length = end - start; // Get actual message string cleandJsonString = jsonString.Substring(start, length); ``` Pretty primitive but whatever works I suppose...
I had the same issue, but in a .NET solution. I was writing a service which sends data to a queue, and on the other hand I was writing a service which gets that data from the queue. I tried to send JSON like this:

```
var documentMessage = new DocumentMessage();
var json = JsonConvert.SerializeObject(documentMessage);
BrokeredMessage message = new BrokeredMessage(json);
await _client.SendAsync(message);
```

In the second service I was getting the JSON, but with this prefix:

> @strin3http//schemas.microsoft.com/2003/10/Serialization/�

I solved this problem by adding a **DataContractJsonSerializer**, like this:

```
var documentMessage = new DocumentMessage();
var serializer = new DataContractJsonSerializer(typeof(DocumentMessage));
BrokeredMessage message = new BrokeredMessage(documentMessage, serializer);
await _client.SendAsync(message);
```

If you want to solve the problem that way, you will have to add data attributes from **System.Runtime.Serialization** to the model:

```
[DataContract]
public class DocumentMessage
{
    [DataMember]
    public string Property1 { get; private set; }

    [DataMember]
    public string Property2 { get; private set; }
}
```
36,307,767
I have set the output of an Azure Stream Analytics job to a Service Bus queue, which sends the data in JSON-serialized format. When I receive the queue message in a Python script, along with the data in curly braces I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it, as the received message is not recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
[This TechNet article](https://social.technet.microsoft.com/wiki/contents/articles/34750.integrating-service-bus-stack-with-logic-apps-and-azure-functions.aspx) suggests the following code: ``` // Get indices of actual message var start = jsonString.IndexOf("{"); var end = jsonString.LastIndexOf("}") + 1; var length = end - start; // Get actual message string cleandJsonString = jsonString.Substring(start, length); ``` Pretty primitive but whatever works I suppose...
When using Microsoft.ServiceBus nuget package, replace ``` message.GetBody<Stream>(); ``` with ``` message.GetBody<string>(); ```
36,307,767
I have set the output of an Azure Stream Analytics job to a Service Bus queue, which sends the data in JSON-serialized format. When I receive the queue message in a Python script, along with the data in curly braces I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it, as the received message is not recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
I ran into this issue as well. The previous answers are only workarounds and do not fix the root cause of this issue. The problem you are encountering is likely due to your Stream Analytics compatibility level. Compatibility level 1.0 uses an XML serializer producing the XML tag you are seeing. Compatibility level 1.1 "fixes" this issue. See my previous answer here: <https://stackoverflow.com/a/49307178/263139>.
I had the same issue, but in a .NET solution. I was writing a service which sends data to a queue, and on the other hand I was writing a service which gets that data from the queue. I tried to send JSON like this:

```
var documentMessage = new DocumentMessage();
var json = JsonConvert.SerializeObject(documentMessage);
BrokeredMessage message = new BrokeredMessage(json);
await _client.SendAsync(message);
```

In the second service I was getting the JSON, but with this prefix:

> @strin3http//schemas.microsoft.com/2003/10/Serialization/�

I solved this problem by adding a **DataContractJsonSerializer**, like this:

```
var documentMessage = new DocumentMessage();
var serializer = new DataContractJsonSerializer(typeof(DocumentMessage));
BrokeredMessage message = new BrokeredMessage(documentMessage, serializer);
await _client.SendAsync(message);
```

If you want to solve the problem that way, you will have to add data attributes from **System.Runtime.Serialization** to the model:

```
[DataContract]
public class DocumentMessage
{
    [DataMember]
    public string Property1 { get; private set; }

    [DataMember]
    public string Property2 { get; private set; }
}
```
36,307,767
I have set the output of an Azure Stream Analytics job to a Service Bus queue, which sends the data in JSON-serialized format. When I receive the queue message in a Python script, along with the data in curly braces I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it, as the received message is not recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
I ran into this issue as well. The previous answers are only workarounds and do not fix the root cause of this issue. The problem you are encountering is likely due to your Stream Analytics compatibility level. Compatibility level 1.0 uses an XML serializer producing the XML tag you are seeing. Compatibility level 1.1 "fixes" this issue. See my previous answer here: <https://stackoverflow.com/a/49307178/263139>.
When using Microsoft.ServiceBus nuget package, replace ``` message.GetBody<Stream>(); ``` with ``` message.GetBody<string>(); ```
37,190,989
I am using the gensim word2vec package in Python. I know how to get the vocabulary from the trained model. But how do I get the word count for each word in the vocabulary?
2016/05/12
[ "https://Stackoverflow.com/questions/37190989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5969670/" ]
Each word in the vocabulary has an associated vocabulary object, which contains an index and a count. ``` vocab_obj = w2v.vocab["word"] vocab_obj.count ``` Output for google news w2v model: 2998437 So to get the count for each word, you would iterate over all words and vocab objects in the vocabulary. ``` for word, vocab_obj in w2v.vocab.items(): #Do something with vocab_obj.count ```
When you want to create a dictionary of word to count for easy retrieval later, you can do so as follows: ``` w2c = dict() for item in model.wv.vocab: w2c[item]=model.wv.vocab[item].count ``` If you want to sort it to see the most frequent words in the model, you can also do that so: ``` w2cSorted=dict(sorted(w2c.items(), key=lambda x: x[1],reverse=True)) ```
37,190,989
I am using the gensim word2vec package in Python. I know how to get the vocabulary from the trained model. But how do I get the word count for each word in the vocabulary?
2016/05/12
[ "https://Stackoverflow.com/questions/37190989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5969670/" ]
Each word in the vocabulary has an associated vocabulary object, which contains an index and a count. ``` vocab_obj = w2v.vocab["word"] vocab_obj.count ``` Output for google news w2v model: 2998437 So to get the count for each word, you would iterate over all words and vocab objects in the vocabulary. ``` for word, vocab_obj in w2v.vocab.items(): #Do something with vocab_obj.count ```
The vocab attribute was removed from KeyedVector in [Gensim 4.0.0](https://github.com/RaRe-Technologies/gensim/releases/tag/4.0.0beta). Instead: ``` word2vec_model.wv.get_vecattr("my-word", "count") # returns count of "my-word" len(word2vec_model.wv) # returns size of the vocabulary ``` Check out notes on [migrating from Gensim 3.x to 4](https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4#4-vocab-dict-became-key_to_index-for-looking-up-a-keys-integer-index-or-get_vecattr-and-set_vecattr-for-other-per-key-attributes)
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
[According](http://bugs.python.org/issue14243#msg164504) to Richard Oudkerk > > (...) the only reason that trying to reopen a `NamedTemporaryFile` fails on > Windows is because when we reopen we need to use `O_TEMPORARY`. > > > and he gives an example of how to do this in Python 3.3+ ``` import os, tempfile DATA = b"hello bob" def temp_opener(name, flag, mode=0o777): return os.open(name, flag | os.O_TEMPORARY, mode) with tempfile.NamedTemporaryFile() as f: f.write(DATA) f.flush() with open(f.name, "rb", opener=temp_opener) as f: assert f.read() == DATA assert not os.path.exists(f.name) ``` Because there's no `opener` parameter in the built-in `open()` in Python 2.x, we have to combine lower level `os.open()` and `os.fdopen()` functions to achieve the same effect: ``` import subprocess import tempfile DATA = b"hello bob" with tempfile.NamedTemporaryFile() as f: f.write(DATA) f.flush() subprocess_code = \ """import os f = os.fdopen(os.open(r'{FILENAME}', os.O_RDWR | os.O_BINARY | os.O_TEMPORARY), 'rb') assert f.read() == b'{DATA}' """.replace('\n', ';').format(FILENAME=f.name, DATA=DATA) subprocess.check_output(['python', '-c', subprocess_code]) == DATA ```
I know this is a really old post, but I think it's relevant today given that the API is changing and functions like mktemp and mkstemp are being replaced by functions like TemporaryFile() and TemporaryDirectory(). I just wanted to demonstrate in the following sample how to make sure that a temp directory is still available downstream: Instead of coding:

```
tmpdirname = tempfile.TemporaryDirectory()
```

and using tmpdirname throughout your code, you should try to put your code in a with-statement block to ensure that it is available for your code calls... like this:

```
with tempfile.TemporaryDirectory() as tmpdirname:
    [do dependent code nested so it's part of the with statement]
```

If you reference it outside of the with, then it's likely that it won't be visible anymore.
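Applied to the original use case, that pattern might look like this (the C++ program name is a placeholder):

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmpdirname:
    path = os.path.join(tmpdirname, "input.dat")
    with open(path, "wb") as f:
        f.write(b"some stuff")
    # The subprocess can open the file by name on any platform,
    # because we closed it before spawning.
    subprocess.check_call(["my_cpp_program", path])  # hypothetical binary
# The directory and everything in it are removed here.
```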
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
You can always go low-level, though am not sure if it's clean enough for you: ``` fd, filename = tempfile.mkstemp() try: os.write(fd, someStuff) os.close(fd) # ...run the subprocess and wait for it to complete... finally: os.remove(filename) ```
I know this is a really old post, but I think it's relevant today given that the API is changing and functions like mktemp and mkstemp are being replaced by functions like TemporaryFile() and TemporaryDirectory(). I just wanted to demonstrate in the following sample how to make sure that a temp directory is still available downstream: Instead of coding:

```
tmpdirname = tempfile.TemporaryDirectory()
```

and using tmpdirname throughout your code, you should try to put your code in a with-statement block to ensure that it is available for your code calls... like this:

```
with tempfile.TemporaryDirectory() as tmpdirname:
    [do dependent code nested so it's part of the with statement]
```

If you reference it outside of the with, then it's likely that it won't be visible anymore.
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
At least if you open a temporary file using the existing Python libraries, accessing it from multiple processes is not possible on Windows. According to [MSDN](http://msdn.microsoft.com/en-us/library/aa363858) you can specify a 3rd parameter (`dwShareMode`) shared mode flag `FILE_SHARE_READ` to the `CreateFile()` function which:

> Enables subsequent open operations on a file or device to request read access. Otherwise, other processes cannot open the file or device if they request read access. If this flag is not specified, but the file or device has been opened for read access, the function fails.

So, you can write a Windows-specific C routine to create a custom temporary file opener function, call it from Python, and then make your sub-process access the file without any error. But I think you should stick with your existing approach, as it is the most portable version that will work on any system and is thus the most elegant implementation.

* Discussion on Linux and Windows file locking can be found [here](https://stackoverflow.com/questions/546504/how-do-i-make-windows-file-locking-more-like-unix-file-locking).

EDIT: Turns out it is possible to open & read the temporary file from multiple processes in Windows too. See Piotr Dobrogost's [answer](https://stackoverflow.com/a/15235559/951640).
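For what it's worth, that C routine could also be approximated from Python itself with ctypes; a Windows-only sketch, with constants copied from the Win32 headers and a hypothetical file name (error handling kept minimal):

```python
import ctypes

GENERIC_WRITE = 0x40000000
FILE_SHARE_READ = 0x00000001   # lets other processes open the file for reading
CREATE_ALWAYS = 2
FILE_ATTRIBUTE_NORMAL = 0x80

kernel32 = ctypes.windll.kernel32
kernel32.CreateFileW.restype = ctypes.c_void_p  # HANDLE

handle = kernel32.CreateFileW(
    "temp_for_subprocess.dat",  # hypothetical file name
    GENERIC_WRITE, FILE_SHARE_READ,
    None, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, None)

if handle == ctypes.c_void_p(-1).value:  # INVALID_HANDLE_VALUE
    raise ctypes.WinError()

# ...write through the handle with WriteFile, spawn the subprocess, then...
kernel32.CloseHandle(handle)
```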
Using [`mkstemp()`](https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp) instead with [`os.fdopen()`](https://docs.python.org/3/library/os.html#os.fdopen) in a `with` statement avoids having to call `close()`: ``` fd, path = tempfile.mkstemp() try: with os.fdopen(fd, 'wb') as fileTemp: fileTemp.write(someStuff) # ...run the subprocess and wait for it to complete... finally: os.remove(path) ```
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
Since nobody else appears to be interested in leaving this information out in the open...

`tempfile` does expose a function, `mkdtemp()`, which can trivialize this problem:

```
import os
from tempfile import mkdtemp

temp_dir = mkdtemp()
try:
    temp_file = make_a_file_in_a_dir(temp_dir)
    do_your_subprocess_stuff(temp_file)
    remove_your_temp_file(temp_file)
finally:
    os.rmdir(temp_dir)
```

(Creating the directory before the `try` ensures the `finally` clause always has a `temp_dir` to remove.) I leave the implementation of the intermediate functions up to the reader, as one might wish to do things like use `mkstemp()` to tighten up the security of the temporary file itself, or overwrite the file in-place before removing it. I don't particularly know what security restrictions one might have that are not easily planned for by perusing the source of `tempfile`.

Anyway, yes, using `NamedTemporaryFile` on Windows might be inelegant, and my solution here might also be inelegant, but you've already decided that Windows support is more important than elegant code, so you might as well go ahead and do something readable.
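For concreteness, a minimal version of that pattern with the helper steps inlined might look like the sketch below; `my_cpp_program` is a hypothetical stand-in for whatever subprocess is actually run:

```
import os
import subprocess
from tempfile import mkdtemp

temp_dir = mkdtemp()
temp_file = os.path.join(temp_dir, "data.tmp")
try:
    with open(temp_file, "wb") as f:
        f.write(b"some stuff")
    subprocess.check_call(["my_cpp_program", temp_file])  # hypothetical program
finally:
    if os.path.exists(temp_file):
        os.remove(temp_file)
    os.rmdir(temp_dir)
```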
I know this is a really old post, but I think it's relevant today given that the API is changing and functions like mktemp and mkstemp are being replaced by functions like TemporaryFile() and TemporaryDirectory(). I just wanted to demonstrate in the following sample how to make sure that a temp directory is still available downstream:

Instead of coding:

```
tmpdirname = tempfile.TemporaryDirectory() 
```

and using tmpdirname throughout your code, you should try to wrap your code in a with statement block to ensure that it is available for your code calls... like this:

```
with tempfile.TemporaryDirectory() as tmpdirname: 
 [do dependent code nested so it's part of the with statement] 
```

If you reference it outside of the with block, the directory (and everything in it) will already have been deleted.
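As a concrete sketch of that `with` pattern applied to the temp-file-plus-subprocess problem from the question, where `my_cpp_program` is again a hypothetical stand-in:

```
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmpdirname:
    path = os.path.join(tmpdirname, "data.tmp")
    with open(path, "wb") as f:
        f.write(b"some stuff")
    subprocess.check_call(["my_cpp_program", path])  # hypothetical program
# Here the directory and the file in it no longer exist.
```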
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
[According](http://bugs.python.org/issue14243#msg164504) to Richard Oudkerk > > (...) the only reason that trying to reopen a `NamedTemporaryFile` fails on > Windows is because when we reopen we need to use `O_TEMPORARY`. > > > and he gives an example of how to do this in Python 3.3+ ``` import os, tempfile DATA = b"hello bob" def temp_opener(name, flag, mode=0o777): return os.open(name, flag | os.O_TEMPORARY, mode) with tempfile.NamedTemporaryFile() as f: f.write(DATA) f.flush() with open(f.name, "rb", opener=temp_opener) as f: assert f.read() == DATA assert not os.path.exists(f.name) ``` Because there's no `opener` parameter in the built-in `open()` in Python 2.x, we have to combine lower level `os.open()` and `os.fdopen()` functions to achieve the same effect: ``` import subprocess import tempfile DATA = b"hello bob" with tempfile.NamedTemporaryFile() as f: f.write(DATA) f.flush() subprocess_code = \ """import os f = os.fdopen(os.open(r'{FILENAME}', os.O_RDWR | os.O_BINARY | os.O_TEMPORARY), 'rb') assert f.read() == b'{DATA}' """.replace('\n', ';').format(FILENAME=f.name, DATA=DATA) subprocess.check_output(['python', '-c', subprocess_code]) == DATA ```
You can always go low-level, though I'm not sure if it's clean enough for you:

```
fd, filename = tempfile.mkstemp()
try:
    os.write(fd, someStuff)
    os.close(fd)
    # ...run the subprocess and wait for it to complete...
finally:
    os.remove(filename)
```
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
At least if you open a temporary file using the existing Python libraries, accessing it from multiple processes is not possible on Windows. According to [MSDN](http://msdn.microsoft.com/en-us/library/aa363858) you can pass the shared-mode flag `FILE_SHARE_READ` as the 3rd parameter (`dwShareMode`) of the `CreateFile()` function, which:

> 
> Enables subsequent open operations on a file or device to request read
> access. Otherwise, other processes cannot open the file or device if
> they request read access. If this flag is not specified, but the file
> or device has been opened for read access, the function fails.
> 
> 

So, you can write a Windows-specific C routine to create a custom temporary file opener function, call it from Python, and then your sub-process can access the file without any error. But I think you should stick with your existing approach, as it is the most portable version, will work on any system, and is thus the most elegant implementation.

* A discussion of Linux and Windows file locking can be found [here](https://stackoverflow.com/questions/546504/how-do-i-make-windows-file-locking-more-like-unix-file-locking).

EDIT: It turns out it is possible to open & read the temporary file from multiple processes on Windows too. See Piotr Dobrogost's [answer](https://stackoverflow.com/a/15235559/951640).
You can always go low-level, though I'm not sure if it's clean enough for you:

```
fd, filename = tempfile.mkstemp()
try:
    os.write(fd, someStuff)
    os.close(fd)
    # ...run the subprocess and wait for it to complete...
finally:
    os.remove(filename)
```
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
Since nobody else appears to be interested in leaving this information out in the open...

`tempfile` does expose a function, `mkdtemp()`, which can trivialize this problem:

```
import os
from tempfile import mkdtemp

temp_dir = mkdtemp()
try:
    temp_file = make_a_file_in_a_dir(temp_dir)
    do_your_subprocess_stuff(temp_file)
    remove_your_temp_file(temp_file)
finally:
    os.rmdir(temp_dir)
```

(Creating the directory before the `try` ensures the `finally` clause always has a `temp_dir` to remove.) I leave the implementation of the intermediate functions up to the reader, as one might wish to do things like use `mkstemp()` to tighten up the security of the temporary file itself, or overwrite the file in-place before removing it. I don't particularly know what security restrictions one might have that are not easily planned for by perusing the source of `tempfile`.

Anyway, yes, using `NamedTemporaryFile` on Windows might be inelegant, and my solution here might also be inelegant, but you've already decided that Windows support is more important than elegant code, so you might as well go ahead and do something readable.
Using [`mkstemp()`](https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp) instead with [`os.fdopen()`](https://docs.python.org/3/library/os.html#os.fdopen) in a `with` statement avoids having to call `close()`: ``` fd, path = tempfile.mkstemp() try: with os.fdopen(fd, 'wb') as fileTemp: fileTemp.write(someStuff) # ...run the subprocess and wait for it to complete... finally: os.remove(path) ```
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
You can always go low-level, though I'm not sure if it's clean enough for you:

```
fd, filename = tempfile.mkstemp()
try:
    os.write(fd, someStuff)
    os.close(fd)
    # ...run the subprocess and wait for it to complete...
finally:
    os.remove(filename)
```
Using [`mkstemp()`](https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp) instead with [`os.fdopen()`](https://docs.python.org/3/library/os.html#os.fdopen) in a `with` statement avoids having to call `close()`: ``` fd, path = tempfile.mkstemp() try: with os.fdopen(fd, 'wb') as fileTemp: fileTemp.write(someStuff) # ...run the subprocess and wait for it to complete... finally: os.remove(path) ```
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
At least if you open a temporary file using the existing Python libraries, accessing it from multiple processes is not possible on Windows. According to [MSDN](http://msdn.microsoft.com/en-us/library/aa363858) you can pass the shared-mode flag `FILE_SHARE_READ` as the 3rd parameter (`dwShareMode`) of the `CreateFile()` function, which:

> 
> Enables subsequent open operations on a file or device to request read
> access. Otherwise, other processes cannot open the file or device if
> they request read access. If this flag is not specified, but the file
> or device has been opened for read access, the function fails.
> 
> 

So, you can write a Windows-specific C routine to create a custom temporary file opener function, call it from Python, and then your sub-process can access the file without any error. But I think you should stick with your existing approach, as it is the most portable version, will work on any system, and is thus the most elegant implementation.

* A discussion of Linux and Windows file locking can be found [here](https://stackoverflow.com/questions/546504/how-do-i-make-windows-file-locking-more-like-unix-file-locking).

EDIT: It turns out it is possible to open & read the temporary file from multiple processes on Windows too. See Piotr Dobrogost's [answer](https://stackoverflow.com/a/15235559/951640).
I know this is a really old post, but I think it's relevant today given that the API is changing and functions like mktemp and mkstemp are being replaced by functions like TemporaryFile() and TemporaryDirectory(). I just wanted to demonstrate in the following sample how to make sure that a temp directory is still available downstream:

Instead of coding:

```
tmpdirname = tempfile.TemporaryDirectory() 
```

and using tmpdirname throughout your code, you should try to wrap your code in a with statement block to ensure that it is available for your code calls... like this:

```
with tempfile.TemporaryDirectory() as tmpdirname: 
 [do dependent code nested so it's part of the with statement] 
```

If you reference it outside of the with block, the directory (and everything in it) will already have been deleted.
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs, > > Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). > > > And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading. I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done: ``` fileTemp = tempfile.NamedTemporaryFile(delete = False) try: fileTemp.write(someStuff) fileTemp.close() # ...run the subprocess and wait for it to complete... finally: os.remove(fileTemp.name) ``` This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
Since nobody else appears to be interested in leaving this information out in the open...

`tempfile` does expose a function, `mkdtemp()`, which can trivialize this problem:

```
import os
from tempfile import mkdtemp

temp_dir = mkdtemp()
try:
    temp_file = make_a_file_in_a_dir(temp_dir)
    do_your_subprocess_stuff(temp_file)
    remove_your_temp_file(temp_file)
finally:
    os.rmdir(temp_dir)
```

(Creating the directory before the `try` ensures the `finally` clause always has a `temp_dir` to remove.) I leave the implementation of the intermediate functions up to the reader, as one might wish to do things like use `mkstemp()` to tighten up the security of the temporary file itself, or overwrite the file in-place before removing it. I don't particularly know what security restrictions one might have that are not easily planned for by perusing the source of `tempfile`.

Anyway, yes, using `NamedTemporaryFile` on Windows might be inelegant, and my solution here might also be inelegant, but you've already decided that Windows support is more important than elegant code, so you might as well go ahead and do something readable.
You can always go low-level, though I'm not sure if it's clean enough for you:

```
fd, filename = tempfile.mkstemp()
try:
    os.write(fd, someStuff)
    os.close(fd)
    # ...run the subprocess and wait for it to complete...
finally:
    os.remove(filename)
```
61,782,776
I tried this

```
x = np.array([
 [0,0],
 [1,0],
 [2.61,-1.28],
 [-0.59,2.1]
 ])
for i in X:
 X = np.append(X[i], X[i][0]**2, axis = 1)

print(X)
```

But I am getting this

```
IndexError                                Traceback (most recent call last)
<ipython-input-12-9bfd33261d84> in <module>()
      6     ])
      7 for i in X:
----> 8     X = np.append(X[i], X[i][0]**2, axis = 1)
      9 
     10 print(X)

IndexError: arrays used as indices must be of integer (or boolean) type
```

Someone please help!
2020/05/13
[ "https://Stackoverflow.com/questions/61782776", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12894182/" ]
How about concatenate: ``` np.concatenate((x,x**2)) ``` Output: ``` array([[ 0. , 0. ], [ 1. , 0. ], [ 2.61 , -1.28 ], [-0.59 , 2.1 ], [ 0. , 0. ], [ 1. , 0. ], [ 6.8121, 1.6384], [ 0.3481, 4.41 ]]) ```
``` In [210]: x = np.array([ ...: [0,0], ...: [1,0], ...: [2.61,-1.28], ...: [-0.59,2.1] ...: ]) ...: In [211]: x # (4,2) array Out[211]: array([[ 0. , 0. ], [ 1. , 0. ], [ 2.61, -1.28], [-0.59, 2.1 ]]) In [212]: for i in x: # iterate on rows ...: print(i) # i is a row, not an index x[i] would be wrong ...: [0. 0.] [1. 0.] [ 2.61 -1.28] [-0.59 2.1 ] ``` Look at one row: ``` In [214]: x[2] Out[214]: array([ 2.61, -1.28]) ``` You can join that row with its square with: ``` In [216]: np.concatenate((x[2], x[2]**2)) Out[216]: array([ 2.61 , -1.28 , 6.8121, 1.6384]) ``` And doing the same for the whole array. Where possible in `numpy` work with the whole array, not rows and elements. It's simpler, and faster. ``` In [217]: np.concatenate((x, x**2), axis=1) Out[217]: array([[ 0. , 0. , 0. , 0. ], [ 1. , 0. , 1. , 0. ], [ 2.61 , -1.28 , 6.8121, 1.6384], [-0.59 , 2.1 , 0.3481, 4.41 ]]) ```
37,690,440
Right now I'm writing a function that reads data from a file, with the goal being to add that data to a numpy array and return said array. I would like to return the array as a 2D array, however I'm not sure what the complete shape of the array will be (I know the number of columns, but not rows). What I have right now is:

```
columns = _____
for line in currentFile: 
    currentLine = line.split()
    data = np.zeros(shape=(columns),dtype=float)
    tempData = []
    for i in range(columns):
        tempData.append(currentLine[i])
    data = np.concatenate((data,tempdata),axis=0)
```

However, this makes a 1D array. Essentially what I'm asking is: Is there any way to add a python list as a row to a numpy array with a variable number of rows?
2016/06/07
[ "https://Stackoverflow.com/questions/37690440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5379671/" ]
If your file `data.txt` is ``` 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 ``` All you need to do is ``` >>> import numpy as n >>> data_array = n.loadtxt("data.txt") >>> data_array array([[1., 2., 3., 4.], [1., 2., 3., 4.], [1., 2., 3., 4.], [1., 2., 3., 4.], [1., 2., 3., 4.], [1., 2., 3., 4.]]) ```
If you modify @Abstracted's solution as:

```
data_array = np.loadtxt("data.txt", dtype=int)
```

you will get the array in integer form if you want it that way.
37,690,440
Right now I'm writing a function that reads data from a file, with the goal being to add that data to a numpy array and return said array. I would like to return the array as a 2D array, however I'm not sure what the complete shape of the array will be (I know the number of columns, but not rows). What I have right now is:

```
columns = _____
for line in currentFile: 
    currentLine = line.split()
    data = np.zeros(shape=(columns),dtype=float)
    tempData = []
    for i in range(columns):
        tempData.append(currentLine[i])
    data = np.concatenate((data,tempdata),axis=0)
```

However, this makes a 1D array. Essentially what I'm asking is: Is there any way to add a python list as a row to a numpy array with a variable number of rows?
2016/06/07
[ "https://Stackoverflow.com/questions/37690440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5379671/" ]
Look at the common way of constructing an array:

```
np.array([[1,2,3],[4,5,6],[7,8,9]])

np.array([[1,2,3],
          [4,5,6],
          [7,8,9]])
```

The input is a list of lists. Your reader could imitate that. Roughly:

```
data = []
for line in f:   # iterate over the file line by line
    values = line.strip().split(',')
    values = [int(v) for v in values]
    data.append(values)
data = np.array(data)
```

`np.loadtxt` and `np.genfromtxt` do essentially that, just with more bells and whistles.
If you modify @Abstracted's solution as:

```
data_array = np.loadtxt("data.txt", dtype=int)
```

you will get the array in integer form if you want it that way.
32,404,818
I am using iPython in command prompt, Windows 7. I thought this would be easy to find; I searched and found directions on how to use the inspect package, but it seems like the inspect package is meant to be used for functions that are created by the programmer rather than functions that are part of a package. My main goal is to be able to use the help files from within the command prompt of iPython: to be able to look up a function such as csv.reader() and figure out all the possible arguments for it AND all possible values for these arguments. In R programming this would simply be args(csv.reader()) I have tried googling this but they all point me to the inspect package, perhaps I'm misunderstanding its use? For example, if I wanted to see a list of all possible arguments and the corresponding possible values for these arguments for the csv.reader() function (from the import csv package), how would I go about doing that? I've tried doing help(csv.reader) but this doesn't provide me with a list of all possible arguments and their potential values. 'Dialect' shows up but it doesn't tell me the possible values of the dialect argument of the csv.reader function. I can easily go to the site: <https://docs.python.org/3/library/csv.html#csv-fmt-params> and see that the dialect options are: delimiter, doublequote, escapechar, etc., etc., but is there a way to see this in the Python console? I've also tried dir(csv.reader) but this isn't what I was looking for either. Going bald trying to figure this out....
2015/09/04
[ "https://Stackoverflow.com/questions/32404818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4959665/" ]
There is no way to do this generically. `help(<function>)` will at a minimum show you the function signature (including the argument names), but Python is dynamically typed, so you don't get any types, and argument names by themselves don't tell you what the valid values are. This is where a good docstring comes in.

However, the `csv` module does have a specific function for listing the dialects:

```
>>> csv.list_dialects()
['excel', 'excel-tab', 'unix']
>>> help(csv.excel)

Help on class excel in module csv:

class excel(Dialect)
 |  Describe the usual properties of Excel-generated CSV files.
...
```
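To illustrate the "at a minimum a signature" point: `inspect.signature()` (Python 3) shows argument names and defaults for Python-implemented callables, but C-implemented callables such as `csv.reader` often expose no introspectable signature at all, so the call may raise an error, which is exactly why there is no generic answer. A small sketch:

```
import csv
import inspect
import json

# json.dumps is implemented in Python, so its full signature is available:
print(inspect.signature(json.dumps))

# csv.reader is implemented in C; asking for its signature may fail:
try:
    print(inspect.signature(csv.reader))
except (ValueError, TypeError) as e:
    print("no introspectable signature:", e)
```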
The inspect module is extremely powerful. To get a list of classes, for example in the csv module, you could go:

```
import inspect, csv
from pprint import pprint

module = csv
mod_string = 'csv'

module_classes = inspect.getmembers(module, inspect.isclass)
for i in range(len(module_classes)):
    myclass = module_classes[i][0]
    myclass = mod_string+'.'+myclass
    myclass = eval(myclass)
    # could construct whatever query you want about this class here...
    # you'll need to play with this line to get what you want; it will fail as-is
    #line = inspect.formatargspec(*inspect.getfullargspec(myclass))
    pprint(myclass)
```

Hope this helps get you started!
74,074,355
My code uses matplotlib, which requires numpy. I'm using pipenv as my environment. When I run the code through my terminal and pipenv shell, it executes without a problem. I've just installed Pycharm for Apple silicon (I have an M1) and set up my interpreter to use the same pipenv environment that I configured earlier. However, when I try to run it through Pycharm (even the terminal in pycharm), it throws me the following error: `Original error was: dlopen(/Users/s/.local/share/virtualenvs/CS_156-UWxYg3KY/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so, 0x0002): tried: '/Users/s/.local/share/virtualenvs/CS_156-UWxYg3KY/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))` What's confusing me is the fact that my code executes when using this same environment through my terminal... But it fails when running in Pycharm? Any insights appreciated!
2022/10/14
[ "https://Stackoverflow.com/questions/74074355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19456156/" ]
Since you are on Windows, I am pretty sure the different results are because the UCRT detects during runtime whether FMA3 (fused-multiply-add) instructions are available for the CPU and, if so, uses them in transcendental functions such as cosine. This gives [slightly different results](https://stackoverflow.com/a/29086451/3740047). The solution is to place the call `_set_FMA3_enable(0);` at the very start of your `main()` or `WinMain()` function, as described [here](https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/get-fma3-enable-set-fma3-enable?view=msvc-170).

If you also want reproducibility between different operating systems, things become harder or even impossible. See e.g. [this](https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/) blog post.

In response also to the comments stating that you should just use some tolerance, I do not agree with this as a general statement. Certainly, there are many applications where this is the way to go. But I do think that it **can** be a sensible requirement to get exactly the same floating point results **for some applications**, at least when staying on the same OS (Windows, in this case). In fact, we had the very same issue with `_set_FMA3_enable` a while ago. I am a software developer for a traffic simulation, and minor differences such as 10^-16 often build up and eventually lead to entirely different simulation results. Naturally, one is supposed to run many simulations with different seeds and average over all of them, making the different behavior irrelevant for the final result. But: Sometimes customers have a problem at a specific simulation second for a specific seed (e.g. an application crash or incorrect behavior of an entity), and not being able to reproduce it on our developer machines due to a different CPU makes it much harder to diagnose and fix the issue. Moreover, if the test system consists of a mixture of older and newer CPUs and test cases are not bound to specific resources, this means that sometimes tests can deviate seemingly without reason (flaky tests). This is certainly not desired. Requiring exact reproducibility also makes writing the tests much easier because you do not require heuristic thresholds (e.g. a tolerance or some guessed value for the number of samples). Moreover, our customers expect the results to remain stable for a specific version of the program since they calibrated (more or less...) their traffic networks to real data. This is somewhat questionable, since (again) one should actually look at averages, but the naive expectation in reality usually wins.
IEEE-754 double precision binary floating point provides no more than 15 decimal significant digits of precision. You are looking at the "noise" of different library implementations and possibly different FPU implementations.

> 
> How to make calculations fully reproducible?
> 
> 

That is an X-Y problem. The answer is you can't. But it is the wrong question. You would do better to ask how you can implement valid and robust tests that are sympathetic to this well-known and unavoidable technical issue with floating-point representation.

Without providing the test code you are trying to use, it is not possible to answer that directly. Generally you should avoid comparing floating point values for exact equality, and rather subtract the result from the desired value and test for some acceptable discrepancy within the supported precision of the FP type used. For example:

```
#define EXPECTED_RESULT 40965.8966304650
#define RESULT_PRECISION 00000.0000000001

double actual_result = test() ;
bool error = fabs( actual_result- EXPECTED_RESULT ) > 
                   RESULT_PRECISION ;
```
74,074,355
My code uses matplotlib, which requires numpy. I'm using pipenv as my environment. When I run the code through my terminal and pipenv shell, it executes without a problem. I've just installed Pycharm for Apple silicon (I have an M1) and set up my interpreter to use the same pipenv environment that I configured earlier. However, when I try to run it through Pycharm (even the terminal in pycharm), it throws me the following error: `Original error was: dlopen(/Users/s/.local/share/virtualenvs/CS_156-UWxYg3KY/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so, 0x0002): tried: '/Users/s/.local/share/virtualenvs/CS_156-UWxYg3KY/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))` What's confusing me is the fact that my code executes when using this same environment through my terminal... But it fails when running in Pycharm? Any insights appreciated!
2022/10/14
[ "https://Stackoverflow.com/questions/74074355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19456156/" ]
Since you are on Windows, I am pretty sure the different results are because the UCRT detects during runtime whether FMA3 (fused-multiply-add) instructions are available for the CPU and, if so, uses them in transcendental functions such as cosine. This gives [slightly different results](https://stackoverflow.com/a/29086451/3740047). The solution is to place the call `_set_FMA3_enable(0);` at the very start of your `main()` or `WinMain()` function, as described [here](https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/get-fma3-enable-set-fma3-enable?view=msvc-170).

If you also want reproducibility between different operating systems, things become harder or even impossible. See e.g. [this](https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/) blog post.

In response also to the comments stating that you should just use some tolerance, I do not agree with this as a general statement. Certainly, there are many applications where this is the way to go. But I do think that it **can** be a sensible requirement to get exactly the same floating point results **for some applications**, at least when staying on the same OS (Windows, in this case). In fact, we had the very same issue with `_set_FMA3_enable` a while ago. I am a software developer for a traffic simulation, and minor differences such as 10^-16 often build up and eventually lead to entirely different simulation results. Naturally, one is supposed to run many simulations with different seeds and average over all of them, making the different behavior irrelevant for the final result. But: Sometimes customers have a problem at a specific simulation second for a specific seed (e.g. an application crash or incorrect behavior of an entity), and not being able to reproduce it on our developer machines due to a different CPU makes it much harder to diagnose and fix the issue. Moreover, if the test system consists of a mixture of older and newer CPUs and test cases are not bound to specific resources, this means that sometimes tests can deviate seemingly without reason (flaky tests). This is certainly not desired. Requiring exact reproducibility also makes writing the tests much easier because you do not require heuristic thresholds (e.g. a tolerance or some guessed value for the number of samples). Moreover, our customers expect the results to remain stable for a specific version of the program since they calibrated (more or less...) their traffic networks to real data. This is somewhat questionable, since (again) one should actually look at averages, but the naive expectation in reality usually wins.
* First of all, `40965.8966304650828827e-01` cannot be a result of the `cos()` function, as cos(x) is a function that, for real-valued arguments, always returns a value in the interval `[-1.0, 1.0]`, so the result shown cannot be its output.
* Second, you have probably read somewhere that `double` values have a precision of roughly 17 digits in the significand, while you are trying to show 21 digits. You cannot get correct data past the `...508`, as you are trying to force the result farther past the 17-digit limit.

The reason you get different results on different computers is that what is shown after the precise digits is undefined, so it's normal that you get different values (you could even get different values on different runs on the same machine with the same program)
72,479,835
I'm totally new to command line and am trying to follow the instructions [here](http://rleca.pbworks.com/w/file/fetch/124098201/tuto_obitools_install_W10OS.html) to get OBITools installed. I've gotten part way through, but I am getting an error I don't understand when trying to download the OBITools install file. The code presented in the tutorial is: `wget http://metabarcoding.org//obitools/doc/_downloads/get-obitools.py python get-obitools.py` I am getting the following error: `>>> wget http://metabarcoding.org//obitools/doc/_downloads/get-obitools.py File "<stdin>", line 1 wget http://metabarcoding.org//obitools/doc/_downloads/get-obitools.py ^ SyntaxError: invalid syntax` I'm not sure what I'm doing wrong? Any help is much appreciated!
2022/06/02
[ "https://Stackoverflow.com/questions/72479835", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15333572/" ]
You're not supposed to type those lines at the python shell (the one with >>>), you're supposed to type those into your regular bash shell (the one with $).
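As an aside, if you would rather stay inside the Python prompt, the standard library can download the file too; a minimal sketch (Python 3):

```
import urllib.request

url = "http://metabarcoding.org//obitools/doc/_downloads/get-obitools.py"
urllib.request.urlretrieve(url, "get-obitools.py")  # saves it in the current directory
```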
Type `exit()` to quit from the python shell and try again
57,077,879
I am running the function below on a very long CSV file. The function calculates the Z-score of the column MFE for every 50 lines. Some of these 50 lines contain just zeros, and therefore when calculating the Z-score the program stops because it can't divide by zero. How can I solve this problem and, instead of stopping the program, print a 0 for the z-score of these lines?

```
def doZscore(csv_file, n_random):
    df = pd.read_csv(csv_file)
    row_start = 0
    row_end = n_random + 1
    step = n_random + 1
    zscore = []
    while row_end <= len(df):
        selected_rows = df['MFE'].iloc[row_start:row_end]
        arr = []
        for x in selected_rows:
            arr.append(float(x))
        scores = stats.zscore(arr)
        for i in scores:
            zscore.append(round(i, 3))
        arr.clear()
        row_start += step
        row_end += step
    df['Zscore'] = zscore
    with open(csv_file, 'w') as f:
        df.to_csv(f, index=False)
    f.close()
    return
```

The error I am getting is: /s/software/anaconda/python3/lib/python3.7/site-packages/scipy/stats/stats.py:2253: RuntimeWarning: invalid value encountered in true\_divide return (a - mns) / sstd
2019/07/17
[ "https://Stackoverflow.com/questions/57077879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10647510/" ]
You can do either of the two following options:

```
if sum(arr) == 0:
    scores = [0]
else:
    scores = stats.zscore(arr)
```

The refactored way is:

```
scores = [0] if sum(arr) == 0 else stats.zscore(arr)
```

Both would work fine.
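Note that `sum(arr) == 0` is not quite the same as "all zeros" (for example, `[-1, 1]` sums to zero but has a perfectly valid z-score), and the guard should return one placeholder per element so the `zscore` list stays aligned with the rows. A sketch keyed to the actual failure condition, a zero standard deviation, might look like this:

```
import numpy as np
from scipy import stats

arr = [0.0, 0.0, 0.0]  # example window of values
# zscore divides by the standard deviation, so guard against std == 0
scores = [0.0] * len(arr) if np.std(arr) == 0 else stats.zscore(arr)
print(scores)  # [0.0, 0.0, 0.0]
```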
As long as that is what you want to do, you'd just check, before `scores = stats.zscore(arr)`, whether your array is all 0s, and make `scores = arr` instead.
57,077,879
I am running the function below on a very long CSV file. The function calculates the Z-score of the column MFE for every 50 lines. Some of these 50 lines contain just zeros, and therefore when calculating the Z-score the program stops because it can't divide by zero. How can I solve this problem and, instead of stopping the program, print a 0 for the z-score of these lines?

```
def doZscore(csv_file, n_random):
    df = pd.read_csv(csv_file)
    row_start = 0
    row_end = n_random + 1
    step = n_random + 1
    zscore = []
    while row_end <= len(df):
        selected_rows = df['MFE'].iloc[row_start:row_end]
        arr = []
        for x in selected_rows:
            arr.append(float(x))
        scores = stats.zscore(arr)
        for i in scores:
            zscore.append(round(i, 3))
        arr.clear()
        row_start += step
        row_end += step
    df['Zscore'] = zscore
    with open(csv_file, 'w') as f:
        df.to_csv(f, index=False)
    f.close()
    return
```

The error I am getting is: /s/software/anaconda/python3/lib/python3.7/site-packages/scipy/stats/stats.py:2253: RuntimeWarning: invalid value encountered in true\_divide return (a - mns) / sstd
2019/07/17
[ "https://Stackoverflow.com/questions/57077879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10647510/" ]
You can do either of the two following options:

```
if sum(arr) == 0:
    scores = [0]
else:
    scores = stats.zscore(arr)
```

The refactored way is:

```
scores = [0] if sum(arr) == 0 else stats.zscore(arr)
```

Both would work fine.
I'm guessing `scores = stats.zscore(arr)` is where the division occurs? You could add a check to see if `arr` only contains zeroes, for example by using

```
if arr.count(0) == len(arr):
    scores = arr
else:
    scores = stats.zscore(arr)
```
25,193,352
How can I create a list (or a numpy array if possible) in python that takes `datetime` objects in the first column and other data types in other columns? For example, the list would be something like this: ``` list = [[<datetime object>, 0, 0.] [<datetime object>, 0, 0.] [<datetime object>, 0, 0.]] ``` What is the best way to create and initialize a list like this? So far, I have tried using `np.empty`, `np.zeros`, and a list comprehension, similar to this: ``` list = [[None for x in xrange(3)] for x in xrange(3)] ``` But if I do this, I would need a `for loop` to populate the first column and there doesn't seem to be a way to assign it in a simpler way like the following: ``` list[0][:] = another_list_same_length ```
2014/08/07
[ "https://Stackoverflow.com/questions/25193352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2613064/" ]
You mean something like: ``` In [17]: import numpy as np In [18]: np.array([[[datetime.now(),np.zeros(2)] for x in range(10)]]) Out[18]: array([[[datetime.datetime(2014, 8, 7, 23, 45, 12, 151489), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151560), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151595), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151619), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151634), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151648), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151662), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151677), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151691), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151706), array([ 0., 0.])]]], dtype=object) ```
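Alternatively, if a homogeneous `object` array feels too loose, a structured array can give each column its own dtype, including `datetime64`; a sketch:

```
import numpy as np

# Three rows of [<datetime>, int, float] as a structured array.
data = np.zeros(3, dtype=[("time", "datetime64[us]"), ("flag", "i4"), ("value", "f8")])
data["time"] = np.datetime64("2014-08-07T23:45:12")  # fill the whole column at once
print(data)
```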
Use `zip` ``` >>> column1 = [1, 1, 1] >>> column2 = [2, 2, 2] >>> column3 = [3, 3, 3] >>> zip(column1, column2, column3) [(1, 2, 3), (1, 2, 3), (1, 2, 3)] >>> # Or, if you'd like a list of lists: ... >>> [list(tup) for tup in zip(column1, column2, column3)] [[1, 2, 3], [1, 2, 3], [1, 2, 3]] >>> ``` This would allow you to build-up the columns separately and then combine them. `column1` could be dates (or anything else.) Hope this helps.
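Applied to the layout in the question, the same `zip` idea could look like this sketch:

```
from datetime import datetime

dates = [datetime.now(), datetime.now(), datetime.now()]
col2 = [0, 0, 0]
col3 = [0., 0., 0.]

rows = [list(tup) for tup in zip(dates, col2, col3)]
# each row is [<datetime object>, 0, 0.0]
```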
56,801,645
I tried to add python scripts to my package, referencing these two tutorials. [Handling of setup.py](http://docs.ros.org/api/catkin/html/user_guide/setup_dot_py.html) [Installing Python scripts and modules](http://docs.ros.org/api/catkin/html/howto/format2/installing_python.html) So I added setup.py in root `test\src\test_pkg`, changed CMakeLists.txt in path `test\src`. (My package root path is `test\`, and my package path is `test\src\test_pkg`, my python scripts path is `test\src\test_pkg\scripts`) This is setup.py.

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from distutils.core import setup
from catkin_pkg.python_setup import generate_distutils_setup

setup_args = generate_distutils_setup(
    packages=['test_pkg'],
    scripts=['/scripts'],
    package_dir={'': 'src'}
)

setup(**setup_args)
```

This is CMakeLists.txt

```
cmake_minimum_required(VERSION 2.8.3)
project(test_pkg)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  sensor_msgs
  message_generation
)

catkin_python_setup()

catkin_install_python(PROGRAMS scripts/talker
                      DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})

add_message_files(
  FILES
  Num.msg
)

add_service_files(
  FILES
  AddTwoInts.srv
)

generate_messages(
  DEPENDENCIES
  std_msgs
  sensor_msgs
)

catkin_package(
  CATKIN_DEPENDS roscpp rospy std_msgs message_runtime sensor_msgs

include_directories(
# include
  ${catkin_INCLUDE_DIRS}
)

install(PROGRAMS
        DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
```

Then I run `catkin_make` in path `test`. (I have run `test\devel\setup.bat`) And got this CMake error:

```
Base path: E:\workspace\ros\test
Source space: E:\workspace\ros\test\src
Build space: E:\workspace\ros\test\build
Devel space: E:\workspace\ros\test\devel
Install space: E:\workspace\ros\test\install
####
#### Running command: "nmake cmake_check_build_system" in "E:\workspace\ros\test\build"
####

Microsoft (R) ?????????ù??? 14.20.27508.1 ??
??????? (C) Microsoft Corporation?? ?????????????

-- Using CATKIN_DEVEL_PREFIX: E:/workspace/ros/test/devel
-- Using CMAKE_PREFIX_PATH: E:/workspace/ros/test/devel;C:/opt/ros/melodic/x64;C:/opt/rosdeps/x64
-- This workspace overlays: E:/workspace/ros/test/devel;C:/opt/ros/melodic/x64
-- Using PYTHON_EXECUTABLE: C:/opt/python27amd64/python.exe
-- Using default Python package layout
-- Using empy: C:/opt/python27amd64/lib/site-packages/em.pyc
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: E:/workspace/ros/test/build/test_results
-- Found gtest: gtests will be built
-- Using Python nosetests: C:/opt/python27amd64/Scripts/nosetests-2.7.exe
-- catkin 0.7.14
-- BUILD_SHARED_LIBS is on
-- BUILD_SHARED_LIBS is on
-- Using CATKIN_WHITELIST_PACKAGES: test_pkg
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~  traversing 1 packages in topological order:
-- ~~  - test_pkg
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'test_pkg'
-- ==> add_subdirectory(test_pkg)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
CMake Error at C:/opt/ros/melodic/x64/share/catkin/cmake/catkin_install_python.cmake:20 (message):
  catkin_install_python() called without required DESTINATION argument.
Call Stack (most recent call first):
  test_pkg/CMakeLists.txt:27 (catkin_install_python)


-- Configuring incomplete, errors occurred!
See also "E:/workspace/ros/test/build/CMakeFiles/CMakeOutput.log".
See also "E:/workspace/ros/test/build/CMakeFiles/CMakeError.log".
NMAKE : fatal error U1077: ??C:\opt\rosdeps\x64\bin\cmake.exe??: ???????0x1??
Stop.
Invoking "nmake cmake_check_build_system" failed ``` How to fix this error? Thanks for any reply. System: Windows10 ROS: ROS1 * /rosdistro: melodic * /rosversion: 1.14.3
2019/06/28
[ "https://Stackoverflow.com/questions/56801645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9589731/" ]
First, while `catkin` is a generic enough tool, it's typically used for the robotics framework ROS. So by asking on ROS' question community [answers.ros.org](https://answers.ros.org/questions/) you might get more responses.

> 
> 
> ```
> CMake Error at C:/opt/ros/melodic/x64/share/catkin/cmake/catkin_install_python.cmake:20 (message):
>   catkin_install_python() called without required DESTINATION argument.
> 
> ```
> 
> 

I think you referred to the right online resources. I've also looked at them and none of them clarified this, but `catkin_package()` needs to be called prior to `catkin_install_python`.
I got the same error. I think you copied the code into the wrong position. If you place that code in a relatively early position, you will likely get this error. Try placing the code here.

```
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination)
#catkin_package()
#catkin_python_setup()
catkin_install_python(PROGRAMS
  scripts/talker.py
  scripts/listener.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
```
56,801,645
I tried to add python scripts to my package, referencing these two tutorials. [Handling of setup.py](http://docs.ros.org/api/catkin/html/user_guide/setup_dot_py.html) [Installing Python scripts and modules](http://docs.ros.org/api/catkin/html/howto/format2/installing_python.html) So I added setup.py in root `test\src\test_pkg`, changed CMakeLists.txt in path `test\src`. (My package root path is `test\`, and my package path is `test\src\test_pkg`, my python scripts path is `test\src\test_pkg\scripts`) This is setup.py.

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from distutils.core import setup
from catkin_pkg.python_setup import generate_distutils_setup

setup_args = generate_distutils_setup(
    packages=['test_pkg'],
    scripts=['/scripts'],
    package_dir={'': 'src'}
)

setup(**setup_args)
```

This is CMakeLists.txt

```
cmake_minimum_required(VERSION 2.8.3)
project(test_pkg)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  sensor_msgs
  message_generation
)

catkin_python_setup()

catkin_install_python(PROGRAMS scripts/talker
                      DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})

add_message_files(
  FILES
  Num.msg
)

add_service_files(
  FILES
  AddTwoInts.srv
)

generate_messages(
  DEPENDENCIES
  std_msgs
  sensor_msgs
)

catkin_package(
  CATKIN_DEPENDS roscpp rospy std_msgs message_runtime sensor_msgs

include_directories(
# include
  ${catkin_INCLUDE_DIRS}
)

install(PROGRAMS
        DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
```

Then I run `catkin_make` in path `test`. (I have run `test\devel\setup.bat`) And got this CMake error:

```
Base path: E:\workspace\ros\test
Source space: E:\workspace\ros\test\src
Build space: E:\workspace\ros\test\build
Devel space: E:\workspace\ros\test\devel
Install space: E:\workspace\ros\test\install
####
#### Running command: "nmake cmake_check_build_system" in "E:\workspace\ros\test\build"
####

Microsoft (R) ?????????ù??? 14.20.27508.1 ??
??????? (C) Microsoft Corporation?? ?????????????

-- Using CATKIN_DEVEL_PREFIX: E:/workspace/ros/test/devel
-- Using CMAKE_PREFIX_PATH: E:/workspace/ros/test/devel;C:/opt/ros/melodic/x64;C:/opt/rosdeps/x64
-- This workspace overlays: E:/workspace/ros/test/devel;C:/opt/ros/melodic/x64
-- Using PYTHON_EXECUTABLE: C:/opt/python27amd64/python.exe
-- Using default Python package layout
-- Using empy: C:/opt/python27amd64/lib/site-packages/em.pyc
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: E:/workspace/ros/test/build/test_results
-- Found gtest: gtests will be built
-- Using Python nosetests: C:/opt/python27amd64/Scripts/nosetests-2.7.exe
-- catkin 0.7.14
-- BUILD_SHARED_LIBS is on
-- BUILD_SHARED_LIBS is on
-- Using CATKIN_WHITELIST_PACKAGES: test_pkg
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~  traversing 1 packages in topological order:
-- ~~  - test_pkg
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'test_pkg'
-- ==> add_subdirectory(test_pkg)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
CMake Error at C:/opt/ros/melodic/x64/share/catkin/cmake/catkin_install_python.cmake:20 (message):
  catkin_install_python() called without required DESTINATION argument.
Call Stack (most recent call first):
  test_pkg/CMakeLists.txt:27 (catkin_install_python)


-- Configuring incomplete, errors occurred!
See also "E:/workspace/ros/test/build/CMakeFiles/CMakeOutput.log".
See also "E:/workspace/ros/test/build/CMakeFiles/CMakeError.log".
NMAKE : fatal error U1077: ??C:\opt\rosdeps\x64\bin\cmake.exe??: ???????0x1??
Stop.
Invoking "nmake cmake_check_build_system" failed ``` How to fix this error? Thanks for any reply. System: Windows10 ROS: ROS1 * /rosdistro: melodic * /rosversion: 1.14.3
2019/06/28
[ "https://Stackoverflow.com/questions/56801645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9589731/" ]
First, while `catkin` is a generic enough tool, it's typically used for the robotics framework ROS. So by asking on ROS' question community [answers.ros.org](https://answers.ros.org/questions/) you might get more responses.

> 
> 
> ```
> CMake Error at C:/opt/ros/melodic/x64/share/catkin/cmake/catkin_install_python.cmake:20 (message):
>   catkin_install_python() called without required DESTINATION argument.
> 
> ```
> 
> 

I think you referred to the right online resources. I've also looked at them and none of them clarified this, but `catkin_package()` needs to be called prior to `catkin_install_python`.
The correct order is:

```
cmake_minimum_required(VERSION 2.8.3)
project(test_pkg)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  sensor_msgs
  message_generation
)

add_message_files(
  FILES
  Num.msg
)

add_service_files(
  FILES
  AddTwoInts.srv
)

generate_messages(
  DEPENDENCIES
  std_msgs
  sensor_msgs
)

catkin_package(
  CATKIN_DEPENDS roscpp rospy std_msgs message_runtime sensor_msgs

include_directories(
# include
  ${catkin_INCLUDE_DIRS}
)

#catkin_python_setup()

catkin_install_python(PROGRAMS scripts/talker
                      DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
)
```

But since you deleted all the unused code, I am not sure whether I got it correct above. If it is in the original file, it should be here:

```
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination)
#catkin_package()
#catkin_python_setup()
catkin_install_python(PROGRAMS
  scripts/talker.py
  scripts/listener.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
```

Almost at the end of the file.
56,801,645
I tried to add python scripts to my packages reference this two tutorials. [Handling of setup.py](http://docs.ros.org/api/catkin/html/user_guide/setup_dot_py.html) [Installing Python scripts and modules](http://docs.ros.org/api/catkin/html/howto/format2/installing_python.html) So I added setup.py in root `test\src\test_pkg`, changed CMakeLists.txt in path `test\src`. (My package root path is `test\`, and my package path is `test\src\test_pkg`, my python scripts path is `test\src\test_pkg\scripts`) This is setup.py. ``` #!/usr/bin/env python # -*- coding: utf-8 -*- from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup setup_args = generate_distutils_setup( packages=['test_pkg'], scripts=['/scripts'], package_dir={'': 'src'} ) setup(**setup_args) ``` This is CMakeLists.txt ``` cmake_minimum_required(VERSION 2.8.3) project(test_pkg) find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs sensor_msgs message_generation ) catkin_python_setup() catkin_install_python(PROGRAMS scripts/talker DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}) add_message_files( FILES Num.msg ) add_service_files( FILES AddTwoInts.srv ) generate_messages( DEPENDENCIES std_msgs sensor_msgs ) catkin_package( CATKIN_DEPENDS roscpp rospy std_msgs message_runtime sensor_msgs include_directories( # include ${catkin_INCLUDE_DIRS} ) install(PROGRAMS DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) ``` Then I run `catkin_make` in path `test`.(I have run `test\devel\setup.bat`) And got this CMake error: ``` Base path: E:\workspace\ros\test Source space: E:\workspace\ros\test\src Build space: E:\workspace\ros\test\build Devel space: E:\workspace\ros\test\devel Install space: E:\workspace\ros\test\install #### #### Running command: "nmake cmake_check_build_system" in "E:\workspace\ros\test\build" #### Microsoft (R) ?????????ù??? 14.20.27508.1 ?? ??????? (C) Microsoft Corporation?? ????????????? -- Using CATKIN_DEVEL_PREFIX: E:/workspace/ros/test/devel -- Using CMAKE_PREFIX_PATH: E:/workspace/ros/test/devel;C:/opt/ros/melodic/x64;C:/opt/rosdeps/x64 -- This workspace overlays: E:/workspace/ros/test/devel;C:/opt/ros/melodic/x64 -- Using PYTHON_EXECUTABLE: C:/opt/python27amd64/python.exe -- Using default Python package layout -- Using empy: C:/opt/python27amd64/lib/site-packages/em.pyc -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: E:/workspace/ros/test/build/test_results -- Found gtest: gtests will be built -- Using Python nosetests: C:/opt/python27amd64/Scripts/nosetests-2.7.exe -- catkin 0.7.14 -- BUILD_SHARED_LIBS is on -- BUILD_SHARED_LIBS is on -- Using CATKIN_WHITELIST_PACKAGES: test_pkg -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- ~~ traversing 1 packages in topological order: -- ~~ - test_pkg -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- +++ processing catkin package: 'test_pkg' -- ==> add_subdirectory(test_pkg) -- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy CMake Error at C:/opt/ros/melodic/x64/share/catkin/cmake/catkin_install_python.cmake:20 (message): catkin_install_python() called without required DESTINATION argument. Call Stack (most recent call first): test_pkg/CMakeLists.txt:27 (catkin_install_python) -- Configuring incomplete, errors occurred! See also "E:/workspace/ros/test/build/CMakeFiles/CMakeOutput.log". See also "E:/workspace/ros/test/build/CMakeFiles/CMakeError.log". NMAKE : fatal error U1077: ??C:\opt\rosdeps\x64\bin\cmake.exe??: ???????0x1?? Stop. 
Invoking "nmake cmake_check_build_system" failed ``` How to fix this error? Thanks for any reply. System: Windows10 ROS: ROS1 * /rosdistro: melodic * /rosversion: 1.14.3
2019/06/28
[ "https://Stackoverflow.com/questions/56801645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9589731/" ]
The correct order is:

```
cmake_minimum_required(VERSION 2.8.3)
project(test_pkg)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  sensor_msgs
  message_generation
)

add_message_files(
  FILES
  Num.msg
)

add_service_files(
  FILES
  AddTwoInts.srv
)

generate_messages(
  DEPENDENCIES
  std_msgs
  sensor_msgs
)

catkin_package(
  CATKIN_DEPENDS roscpp rospy std_msgs message_runtime sensor_msgs

include_directories(
# include
  ${catkin_INCLUDE_DIRS}
)

#catkin_python_setup()

catkin_install_python(PROGRAMS scripts/talker
                      DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
)
```

But since you deleted all the unused code, I am not sure whether I got it correct above. If it is in the original file, it should be here:

```
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination)
#catkin_package()
#catkin_python_setup()
catkin_install_python(PROGRAMS
  scripts/talker.py
  scripts/listener.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
```

Almost at the end of the file.
I got the same error. I think you copied the code to the wrong position. If you place that code relatively early in the file, it can fail. Try placing the code here:

```
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
#catkin_package()
#catkin_python_setup()
catkin_install_python(PROGRAMS
  scripts/talker.py
  scripts/listener.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
```
17,261,801
I have a Java project where I must build an object from a JSON input, which comes in the following format:

```
{
  "Shell": 13401,
  "JavaScript": 2693931,
  "Ruby": 2264,
  "C": 111534,
  "C++": 940606,
  "Python": 39021,
  "R": 2216,
  "D": 35036,
  "Objective-C": 4913
}
```

Then in my code I have:

```
public void fetchProjectLanguages(Project project) throws IOException {
    List<Language> languages = null;
    String searchUrl = String.format("%s/repos/%s/%s/languages", REPO_API,
            project.getUser().getLogin(), project.getName());
    String jsonString = requests.get(searchUrl);
    Language lang = gson.fromJson(jsonString, Language.class);
    languages.add(lang);
}
```

My `Language` object is composed of two attributes: `name` and `loc`. The JSON input itself does not represent a single language but a **set** of languages, each key/value pair of the object being a language itself. In my example: Shell, JavaScript, Ruby, C, C++, Python, R, D and Objective-C. How can I do that? I appreciate any help!
2013/06/23
[ "https://Stackoverflow.com/questions/17261801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/413570/" ]
You can use an **[adapter](http://google-gson.googlecode.com/svn/trunk/gson/docs/javadocs/com/google/gson/TypeAdapter.html)**. Say you have:

```
import java.util.ArrayList;
import java.util.List;

class Language {
    public String name;
    public Integer loc;
}

class Languages {
    public List<Language> list = new ArrayList<Language>();
}
```

The adapter:

```
import java.lang.reflect.Type;
import java.util.Map.Entry;

import com.google.gson.*;

class LanguagesTypeAdapter implements JsonSerializer<Languages>, JsonDeserializer<Languages> {

    public JsonElement serialize(Languages languages, Type typeOfT, JsonSerializationContext context) {
        JsonObject json = new JsonObject();
        for (Language language : languages.list) {
            json.addProperty(language.name, language.loc);
        }
        return json;
    }

    public Languages deserialize(JsonElement element, Type typeOfT, JsonDeserializationContext context) throws JsonParseException {
        JsonObject json = element.getAsJsonObject();
        Languages languages = new Languages();
        for (Entry<String, JsonElement> entry : json.entrySet()) {
            String name = entry.getKey();
            Integer loc = entry.getValue().getAsInt();

            Language language = new Language();
            language.name = name;
            language.loc = loc;

            languages.list.add(language);
        }
        return languages;
    }
}
```

And a sample:

```
GsonBuilder builder = new GsonBuilder();
builder.registerTypeAdapter(Languages.class, new LanguagesTypeAdapter());
Gson gson = builder.create();

Languages languages = gson.fromJson("{"+
        "\"Shell\": 13401,"+
        "\"JavaScript\": 2693931,"+
        "\"Ruby\": 2264,"+
        "\"C\": 111534,"+
        "\"C++\": 940606,"+
        "\"Python\": 39021,"+
        "\"R\": 2216,"+
        "\"D\": 35036,"+
        "\"Objective-C\": 4913"+
    "}", Languages.class);

String json = gson.toJson(languages);
```

Results:

```
{"Shell":13401,"JavaScript":2693931,"Ruby":2264,"C":111534,"C++":940606,"Python":39021,"R":2216,"D":35036,"Objective-C":4913}
```

Hope this helps...
You can try this out — `Map<String, Integer> map = gson.fromJson(json, new TypeToken<Map<String, Integer>>() {}.getType());` — to get a map of language to line count.
30,215,470
I found this code example here on Stack Overflow, and I would like to make the first window close when a new one is opened: when the new window opens, the main one should close automatically.

```
#!/usr/bin/env python

import Tkinter as tk
from Tkinter import *

class windowclass():

    def __init__(self, master):
        self.master = master
        self.frame = tk.Frame(master)
        self.lbl = Label(master, text="Label")
        self.lbl.pack()
        self.btn = Button(master, text="Button", command=self.command)
        self.btn.pack()
        self.frame.pack()

    def command(self):
        print 'Button is pressed!'
        self.newWindow = tk.Toplevel(self.master)
        self.app = windowclass1(self.newWindow)

class windowclass1():

    def __init__(self, master):
        self.master = master
        self.frame = tk.Frame(master)
        master.title("a")
        self.quitButton = tk.Button(self.frame, text='Quit', width=25, command=self.close_window)
        self.quitButton.pack()
        self.frame.pack()

    def close_window(self):
        self.master.destroy()

root = Tk()
root.title("window")
root.geometry("350x50")
cls = windowclass(root)
root.mainloop()
```
2015/05/13
[ "https://Stackoverflow.com/questions/30215470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3036295/" ]
You could withdraw the main window, but then you would have no way to close the program after the button click in the Toplevel, since the main window would still be open but not shown.

Also, pick one or the other of these (but don't use both):

```
import Tkinter as tk
from Tkinter import *
```

This opens a second Toplevel which allows you to exit the program:

```
import Tkinter as tk

class windowclass():
    def __init__(self, master):
        self.master = master
        ##self.frame = tk.Frame(master)  not used
        self.lbl = tk.Label(master, text="Label")
        self.lbl.pack()
        self.btn = tk.Button(master, text="Button", command=self.command)
        self.btn.pack()
        ##self.frame.pack()  not used

    def command(self):
        print 'Button is pressed!'
        self.master.withdraw()

        toplevel = tk.Toplevel(self.master)
        tk.Button(toplevel, text="Exit the program", command=self.master.quit).pack()

        self.newWindow = tk.Toplevel(self.master)
        self.app = windowclass1(self.newWindow)

class windowclass1():
    def __init__(self, master):
        """ note that "master" here refers to the Toplevel """
        self.master = master
        self.frame = tk.Frame(master)
        master.title("a")
        self.quitButton = tk.Button(self.frame, text='Quit this Toplevel', width=25, command=self.close_window)
        self.quitButton.pack()
        self.frame.pack()

    def close_window(self):
        self.master.destroy()  ## closes this Toplevel only

root = tk.Tk()
root.title("window")
root.geometry("350x50")
cls = windowclass(root)
root.mainloop()
```
In your code:

```
self.newWindow = tk.Toplevel(self.master)
```

You are not creating a new window completely independent from your root (or `master`) but rather a child of it. This new child Toplevel acts independently of the `master` only until the `master` gets destroyed, at which point the child Toplevel is destroyed as well. To make it completely separate, create a new instance of the Tk object and have it close the `windowclass` window (destroy its object):

```
self.newWindow = Tk()
```

You have two options here:

1 - Specify in `windowclass1.close_window()` that you want to destroy the `cls` object when you create the `windowclass1()` object, this way:

```
def close_window(self):
    cls.master.destroy()
```

2 - The preferred option, for generality: destroy `cls` after you create the `windowclass1` object in the `windowclass.command()` method, like this:

```
def command(self):
    print 'Button is pressed!'
    self.newWindow = Tk()
    self.app = windowclass1(self.newWindow)
    self.master.destroy()
```

and make the quitButton in the `__init__()` of windowclass1 like this:

```
self.quitButton = tk.Button(self.frame, text='Quit', width=25, command=self.master.quit)
```

to quit your program completely.
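For reference, here is a minimal sketch of the hide-the-root pattern both answers touch on: `withdraw()` hides the root without destroying it, so `root.destroy()` called from the Toplevel still ends the whole program. Same Python 2 Tkinter as the question:

```
import Tkinter as tk

def open_second():
    root.withdraw()  # hide the main window instead of destroying it
    top = tk.Toplevel(root)
    # destroying the (hidden) root ends the program cleanly
    tk.Button(top, text="Quit", command=root.destroy).pack()

root = tk.Tk()
tk.Button(root, text="Open", command=open_second).pack()
root.mainloop()
```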
55,036,033
I am calling into C++ from Python, using ctypes to connect the two, and I get a core dump at runtime. I have a library called "libfst.so". This is my code.

NGramFST.h

```
#include <iostream>

class NGramFST{
  private:
    static NGramFST* m_Instace;
  public:
    NGramFST(){
    }

    static NGramFST* getInstance() {
      if (m_Instace == NULL){
        m_Instace = new NGramFST();
      }
      return m_Instace;
    }

    double getProbabilityOfWord(std::string word, std::string context) {
      std::cout << "reloading..." << std::endl;
      return 1;
    }
};
```

NGramFST.cpp

```
#include "NGramFST.h"

NGramFST* NGramFST::m_Instace = NULL;

extern "C" {
  double FST_getProbability(std::string word, std::string context){
    return NGramFST::getInstance()->getProbabilityOfWord(word, context);
  }
}
```

And this is my Python code.

```
from ctypes import cdll, c_wchar_p

lib = cdll.LoadLibrary('./libfst.so')

#-------------------------main code------------------------
class FST(object):
    def __init__(self):
        print 'Initializing'

    def getProbabilityOfWord(self, word, context):
        return lib.FST_getProbability(word, context)

fst = FST()
print fst.getProbabilityOfWord(c_wchar_p('jack london'), c_wchar_p('my name is'))
```

This is the error:

```
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)
```

I have reviewed it again but cannot find where the problem is.
2019/03/07
[ "https://Stackoverflow.com/questions/55036033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5798231/" ]
`ctypes` does not understand C++ types (it's not called `c++types`). It cannot handle `std::string`. It wouldn't know that your function expects `std::string` arguments anyway. In order to work with `ctypes`, your library needs a C-compatible interface. `extern "C"` is necessary but not sufficient. The functions need to be actually callable from C. Better yet, use a modern C++/Python binding library such as [pybind11](https://pybind11.readthedocs.io/en/stable/).
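As a concrete illustration, if the exported function were changed to take C strings — a sketch, assuming a C++ signature of `extern "C" double FST_getProbability(const char* word, const char* context)` — the Python side could declare the signature explicitly:

```
from ctypes import cdll, c_char_p, c_double

lib = cdll.LoadLibrary('./libfst.so')

# Declare the C signature so ctypes converts the arguments and the
# double return value correctly (without restype, ctypes assumes int).
lib.FST_getProbability.argtypes = [c_char_p, c_char_p]
lib.FST_getProbability.restype = c_double

print lib.FST_getProbability(b'jack london', b'my name is')
```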
It works when I change the Python code as below:

```
string1 = "my string 1"
string2 = "my string 2"

# create byte objects from the strings
b_string1 = string1.encode('utf-8')
b_string2 = string2.encode('utf-8')

print fst.getProbabilityOfWord(b_string1, b_string2)
```

and change the parameter types in the C++ code as below:

```
double FST_getProbability(const char* word, const char* context)
```
54,143,731
I am running a long computation in a Jupyter notebook, and one of the threads spawned by Python (a `pickle.dump` call, I suspect) took all the available RAM, making the system clunky. Now I would like to terminate that single thread. Interrupting the notebook does not work, and I would like to avoid restarting the notebook so as not to lose all the calculations made so far. If I open the Activity Monitor I can clearly see one Python process which contains multiple threads. I know I can terminate the whole process, but is there a way to terminate a single thread?
2019/01/11
[ "https://Stackoverflow.com/questions/54143731", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3190076/" ]
I do not think you can kill a thread of a process from outside the process itself. As reported in [this answer](https://unix.stackexchange.com/a/1071/210584):

> Threads are an integral part of the process and cannot be killed outside it. There is the `pthread_kill` function but it only applies in the context of the `thread itself`. From the docs at the link
Of course the answer is yes; here is some demo code FYI (not robust):

```
from threading import Thread
import time

class MyThread(Thread):
    def __init__(self, stop):
        Thread.__init__(self)
        self.stop = stop

    def run(self):
        stop = False
        while not stop:
            print("I'm running")
            time.sleep(1)
            # if the signal is stop, break the while loop so the thread ends
            stop = self.stop

m = MyThread(stop=False)
m.start()

while 1:
    i = input("input S to stop\n")
    if i == "S":
        m.stop = True
        break
    else:
        continue
```
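A slightly more idiomatic variant of the same cooperative-cancellation idea (a sketch) uses `threading.Event` instead of a plain attribute, which also removes the up-to-one-second lag before the thread notices the flag:

```
import threading

stop_event = threading.Event()

def worker():
    # Event.wait() doubles as an interruptible sleep: it returns True
    # as soon as the event is set, so the loop exits without waiting
    # out the full timeout.
    while not stop_event.wait(timeout=1):
        print("I'm running")

t = threading.Thread(target=worker)
t.start()

input("press Enter to stop\n")
stop_event.set()
t.join()
```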
10,755,833
I am attempting to deploy a [Flask](http://flask.pocoo.org/) app to [Heroku](http://www.heroku.com/). I'm using [Peewee](http://peewee.readthedocs.org/en/latest/) as an ORM for a Postgres database. When I follow the [standard Heroku steps for deploying Flask](https://devcenter.heroku.com/articles/python), the web process crashes after I enter `heroku ps:scale web=1`. Here's what the logs say:

```
Starting process with command `python app.py`
/app/.heroku/venv/lib/python2.7/site-packages/peewee.py:2434: UserWarning: Table for <class 'flask_peewee.auth.User'> ("user") is reserved, please override using Meta.db_table
  cls, _meta.db_table,
Traceback (most recent call last):
  File "app.py", line 167, in <module>
    auth.User.create_table(fail_silently=True)
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2518, in create_table
    if fail_silently and cls.table_exists():
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2514, in table_exists
    return cls._meta.db_table in cls._meta.database.get_tables()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 507, in get_tables
    ORDER BY c.relname""")
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 313, in execute
    cursor = self.get_cursor()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 310, in get_cursor
    return self.get_conn().cursor()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 306, in get_conn
    self.connect()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 296, in connect
    self.__local.conn = self.adapter.connect(self.database, **self.connect_kwargs)
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 199, in connect
    return psycopg2.connect(database=database, **kwargs)
  File "/app/.heroku/venv/lib/python2.7/site-packages/psycopg2/__init__.py", line 179, in connect
    connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

Process exited with status 1
State changed from starting to crashed
```

I've tried a bunch of different things to get Heroku to allow my app to talk to a Postgres db, but haven't had any luck. Is there an easy way to do this? What do I need to do to configure Flask/Peewee so that I can use a db on Heroku?
2012/05/25
[ "https://Stackoverflow.com/questions/10755833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/135156/" ]
According to the [Peewee docs](http://docs.peewee-orm.com/en/latest/peewee/database.html?highlight=postgres#dynamically-defining-a-database), you don't want to use `Proxy()` unless your local database driver is different than your remote one (i.e. locally, you're using SQLite and remotely you're using Postgres). If, however, you are using Postgres both locally and remotely it's a much simpler change. In this case, you'll want to only change the connection values (database name, username, password, host, port, etc.) at runtime and do not need to use `Proxy()`. Peewee has a [built-in URL parser for database connections](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#database-url). Here's how to use it: ``` import os from peewee import * from playhouse.db_url import connect db = connect(os.environ.get('DATABASE_URL')) class BaseModel(Model): class Meta: database = db ``` In this example, peewee's `db_url` module reads the environment variable `DATABASE_URL` and parses it to extract the relevant connection variables. It then creates a `PostgresqlDatabase` [object](http://docs.peewee-orm.com/en/latest/peewee/api.html#PostgresqlDatabase) with those values. Locally, you'll want to set `DATABASE_URL` as an environment variable. You can do this according to the instructions of whatever shell you're using. Or, if you want to use the Heroku toolchain (launch your local server using `heroku local`) you can [add it to a file called `.env` at the top level of your project](https://devcenter.heroku.com/articles/heroku-local#set-up-your-local-environment-variables). For the remote setup, you'll want to [add your database URL as a remote Heroku environment variable](https://devcenter.heroku.com/articles/config-vars#setting-up-config-vars-for-a-deployed-application). You can do this with the following command: ``` heroku config:set DATABASE_URL=postgresql://myurl ``` You can find that URL by going into Heroku, navigating to your database, and clicking on "database credentials". It's listed under `URI`.
Are you parsing the `DATABASE_URL` environment variable? It will look something like this:

```
postgres://username:password@host:port/database_name
```

So you will want to pull that in and parse it before you open a connection to your database. Depending on how you've declared your database (in your config or next to your WSGI app) it might look like this:

```
import os
import urlparse

urlparse.uses_netloc.append('postgres')
url = urlparse.urlparse(os.environ['DATABASE_URL'])

# for your config
DATABASE = {
    'engine': 'peewee.PostgresqlDatabase',
    'name': url.path[1:],
    'user': url.username,
    'password': url.password,
    'host': url.hostname,
    'port': url.port,
}
```

See the notes here: <https://devcenter.heroku.com/articles/django>
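Equivalently, the parsed pieces can be fed straight into a peewee database object rather than a config dict — a sketch reusing the same `url` variable as above:

```
from peewee import PostgresqlDatabase

db = PostgresqlDatabase(
    url.path[1:],        # database name
    user=url.username,
    password=url.password,
    host=url.hostname,
    port=url.port,
)
```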
10,755,833
I am attempting to deploy a [Flask](http://flask.pocoo.org/) app to [Heroku](http://www.heroku.com/). I'm using [Peewee](http://peewee.readthedocs.org/en/latest/) as an ORM for a Postgres database. When I follow the [standard Heroku steps for deploying Flask](https://devcenter.heroku.com/articles/python), the web process crashes after I enter `heroku ps:scale web=1`. Here's what the logs say:

```
Starting process with command `python app.py`
/app/.heroku/venv/lib/python2.7/site-packages/peewee.py:2434: UserWarning: Table for <class 'flask_peewee.auth.User'> ("user") is reserved, please override using Meta.db_table
  cls, _meta.db_table,
Traceback (most recent call last):
  File "app.py", line 167, in <module>
    auth.User.create_table(fail_silently=True)
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2518, in create_table
    if fail_silently and cls.table_exists():
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2514, in table_exists
    return cls._meta.db_table in cls._meta.database.get_tables()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 507, in get_tables
    ORDER BY c.relname""")
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 313, in execute
    cursor = self.get_cursor()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 310, in get_cursor
    return self.get_conn().cursor()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 306, in get_conn
    self.connect()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 296, in connect
    self.__local.conn = self.adapter.connect(self.database, **self.connect_kwargs)
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 199, in connect
    return psycopg2.connect(database=database, **kwargs)
  File "/app/.heroku/venv/lib/python2.7/site-packages/psycopg2/__init__.py", line 179, in connect
    connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

Process exited with status 1
State changed from starting to crashed
```

I've tried a bunch of different things to get Heroku to allow my app to talk to a Postgres db, but haven't had any luck. Is there an easy way to do this? What do I need to do to configure Flask/Peewee so that I can use a db on Heroku?
2012/05/25
[ "https://Stackoverflow.com/questions/10755833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/135156/" ]
According to the [Peewee docs](http://docs.peewee-orm.com/en/latest/peewee/database.html?highlight=postgres#dynamically-defining-a-database), you don't want to use `Proxy()` unless your local database driver is different than your remote one (i.e. locally, you're using SQLite and remotely you're using Postgres). If, however, you are using Postgres both locally and remotely it's a much simpler change. In this case, you'll want to only change the connection values (database name, username, password, host, port, etc.) at runtime and do not need to use `Proxy()`. Peewee has a [built-in URL parser for database connections](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#database-url). Here's how to use it: ``` import os from peewee import * from playhouse.db_url import connect db = connect(os.environ.get('DATABASE_URL')) class BaseModel(Model): class Meta: database = db ``` In this example, peewee's `db_url` module reads the environment variable `DATABASE_URL` and parses it to extract the relevant connection variables. It then creates a `PostgresqlDatabase` [object](http://docs.peewee-orm.com/en/latest/peewee/api.html#PostgresqlDatabase) with those values. Locally, you'll want to set `DATABASE_URL` as an environment variable. You can do this according to the instructions of whatever shell you're using. Or, if you want to use the Heroku toolchain (launch your local server using `heroku local`) you can [add it to a file called `.env` at the top level of your project](https://devcenter.heroku.com/articles/heroku-local#set-up-your-local-environment-variables). For the remote setup, you'll want to [add your database URL as a remote Heroku environment variable](https://devcenter.heroku.com/articles/config-vars#setting-up-config-vars-for-a-deployed-application). You can do this with the following command: ``` heroku config:set DATABASE_URL=postgresql://myurl ``` You can find that URL by going into Heroku, navigating to your database, and clicking on "database credentials". It's listed under `URI`.
Run `heroku config:set HEROKU=1`, then:

```
import os
import urlparse
import psycopg2
from flask import Flask
from flask_peewee.db import Database

if 'HEROKU' in os.environ:
    DEBUG = False
    urlparse.uses_netloc.append('postgres')
    url = urlparse.urlparse(os.environ['DATABASE_URL'])
    DATABASE = {
        'engine': 'peewee.PostgresqlDatabase',
        'name': url.path[1:],
        'user': url.username,
        'password': url.password,
        'host': url.hostname,
        'port': url.port,
    }
else:
    DEBUG = True
    DATABASE = {
        'engine': 'peewee.PostgresqlDatabase',
        'name': 'framingappdb',
        'user': 'postgres',
        'password': 'postgres',
        'host': 'localhost',
        'port': 5432,
        'threadlocals': True
    }

app = Flask(__name__)
app.config.from_object(__name__)
db = Database(app)
```

Modified coleifer's answer to address hasenj's comment. Please mark one of these as the accepted answer.
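Models would then inherit from the flask-peewee base class that `Database` exposes — a sketch (the `Note` model is hypothetical):

```
import datetime
from peewee import CharField, DateTimeField

class Note(db.Model):
    message = CharField()
    created = DateTimeField(default=datetime.datetime.now)
```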
10,755,833
I am attempting to deploy a [Flask](http://flask.pocoo.org/) app to [Heroku](http://www.heroku.com/). I'm using [Peewee](http://peewee.readthedocs.org/en/latest/) as an ORM for a Postgres database. When I follow the [standard Heroku steps for deploying Flask](https://devcenter.heroku.com/articles/python), the web process crashes after I enter `heroku ps:scale web=1`. Here's what the logs say:

```
Starting process with command `python app.py`
/app/.heroku/venv/lib/python2.7/site-packages/peewee.py:2434: UserWarning: Table for <class 'flask_peewee.auth.User'> ("user") is reserved, please override using Meta.db_table
  cls, _meta.db_table,
Traceback (most recent call last):
  File "app.py", line 167, in <module>
    auth.User.create_table(fail_silently=True)
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2518, in create_table
    if fail_silently and cls.table_exists():
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2514, in table_exists
    return cls._meta.db_table in cls._meta.database.get_tables()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 507, in get_tables
    ORDER BY c.relname""")
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 313, in execute
    cursor = self.get_cursor()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 310, in get_cursor
    return self.get_conn().cursor()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 306, in get_conn
    self.connect()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 296, in connect
    self.__local.conn = self.adapter.connect(self.database, **self.connect_kwargs)
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 199, in connect
    return psycopg2.connect(database=database, **kwargs)
  File "/app/.heroku/venv/lib/python2.7/site-packages/psycopg2/__init__.py", line 179, in connect
    connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

Process exited with status 1
State changed from starting to crashed
```

I've tried a bunch of different things to get Heroku to allow my app to talk to a Postgres db, but haven't had any luck. Is there an easy way to do this? What do I need to do to configure Flask/Peewee so that I can use a db on Heroku?
2012/05/25
[ "https://Stackoverflow.com/questions/10755833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/135156/" ]
According to the [Peewee docs](http://docs.peewee-orm.com/en/latest/peewee/database.html?highlight=postgres#dynamically-defining-a-database), you don't want to use `Proxy()` unless your local database driver is different than your remote one (i.e. locally, you're using SQLite and remotely you're using Postgres). If, however, you are using Postgres both locally and remotely it's a much simpler change. In this case, you'll want to only change the connection values (database name, username, password, host, port, etc.) at runtime and do not need to use `Proxy()`. Peewee has a [built-in URL parser for database connections](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#database-url). Here's how to use it: ``` import os from peewee import * from playhouse.db_url import connect db = connect(os.environ.get('DATABASE_URL')) class BaseModel(Model): class Meta: database = db ``` In this example, peewee's `db_url` module reads the environment variable `DATABASE_URL` and parses it to extract the relevant connection variables. It then creates a `PostgresqlDatabase` [object](http://docs.peewee-orm.com/en/latest/peewee/api.html#PostgresqlDatabase) with those values. Locally, you'll want to set `DATABASE_URL` as an environment variable. You can do this according to the instructions of whatever shell you're using. Or, if you want to use the Heroku toolchain (launch your local server using `heroku local`) you can [add it to a file called `.env` at the top level of your project](https://devcenter.heroku.com/articles/heroku-local#set-up-your-local-environment-variables). For the remote setup, you'll want to [add your database URL as a remote Heroku environment variable](https://devcenter.heroku.com/articles/config-vars#setting-up-config-vars-for-a-deployed-application). You can do this with the following command: ``` heroku config:set DATABASE_URL=postgresql://myurl ``` You can find that URL by going into Heroku, navigating to your database, and clicking on "database credentials". It's listed under `URI`.
I have managed to get my Flask app, which uses Peewee, working on Heroku using the code below:

```
# persons.py
import os
from peewee import *

db_proxy = Proxy()

# Define your models here
class Person(Model):
    name = CharField(max_length=20, unique=True)
    age = IntegerField()

    class Meta:
        database = db_proxy

# Import modules based on the environment.
# The HEROKU value first needs to be set on Heroku
# either through the web front-end or through the command
# line (if you have Heroku Toolbelt installed, type the following:
# heroku config:set HEROKU=1).
if 'HEROKU' in os.environ:
    import urlparse, psycopg2
    urlparse.uses_netloc.append('postgres')
    url = urlparse.urlparse(os.environ["DATABASE_URL"])
    db = PostgresqlDatabase(database=url.path[1:],
                            user=url.username,
                            password=url.password,
                            host=url.hostname,
                            port=url.port)
    db_proxy.initialize(db)
else:
    db = SqliteDatabase('persons.db')
    db_proxy.initialize(db)

if __name__ == '__main__':
    db_proxy.connect()
    db_proxy.create_tables([Person], safe=True)
```

You should already have a Postgres database add-on attached to your app. You can do this via the command line or through the web front-end. Assuming that the database is already attached to your app and you have already deployed with the above changes, log in to Heroku and create the table(s):

```
$ heroku login
$ heroku run bash
$ python persons.py
```

Check that the table was created:

```
$ heroku pg:psql
your_app_name::DATABASE=> \dt
```

You then import this file (persons.py in this example) in another Python script, e.g. a request handler. You need to manage the database connection explicitly:

```
# server.py
from flask import g
from persons import db_proxy

@app.before_request
def before_request():
    g.db = db_proxy
    g.db.connect()

@app.after_request
def after_request(response):
    g.db.close()
    return response

…
```

References:

* <https://devcenter.heroku.com/articles/heroku-postgresql>
* <http://peewee.readthedocs.org/en/latest/peewee/database.html#dynamically-defining-a-database>
* <http://peewee.readthedocs.org/en/latest/peewee/example.html#establishing-a-database-connection>
* <https://stackoverflow.com/a/20131277/3104465>
* <https://gist.github.com/fprieur/9561148>
46,125,105
I have done the following to get JSON file data into Redis using this Python script:

```
import json
import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=1)

with open('products.json') as data_file:
    test_data = json.load(data_file)

r.set('test_json', test_data)
```

When I use the **get** command from redis-cli (`get test_json`) I get **nil** back. Am I using the wrong command? Please help me understand this.
2017/09/08
[ "https://Stackoverflow.com/questions/46125105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5526217/" ]
Ways of circumventing your company's policy, from best to worst:

* Load mod\_cgi anyway.
* Load mod\_fcgi, and convert your CGI script into a FastCGI daemon. It's a lot of work, but you can get faster code out of it!
* Load your own module that does exactly the same thing as mod\_cgi. mod\_cgi is open source, so it should be easy to just rename it.
* Load mod\_fcgi, and write a FastCGI daemon that executes your script.
* Install a second Apache web server with mod\_cgi enabled. Link to it directly or use mod\_proxy on the original server.
* Write your own web server. Link to it directly or use mod\_proxy on the original server.
[Last time you asked this question](https://stackoverflow.com/questions/45864198/cgi-scripts-to-mod-perl), you talked about using `mod_perl` instead. The standard way to run a CGI program unchanged (for some value of "unchanged") under `mod_perl` is by using [ModPerl::Registry](https://metacpan.org/pod/ModPerl::Registry). Did you try that? How did it go?

Another alternative would be to convert your programs to use [PSGI](http://plackperl.org/). You could try using [Plack::App::WrapCGI](https://metacpan.org/pod/Plack::App::WrapCGI) or [CGI::Emulate::PSGI](https://metacpan.org/pod/CGI::Emulate::PSGI). Using Plack would free you from any deployment restrictions. You could run the code under `mod_perl` or even as a separate service behind a proxy server.

But I can't help marvelling at how ridiculous this whole situation is. Your company has CGI programs that it (presumably) relies on to run part of its business, and they've just decided to turn off support for them. You need to find out why this decision has been made and try to buy some time in order to convert to an alternative technology.
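As a side note on the Redis/JSON question at the top of this record, two things are worth checking there: Redis values must be strings/bytes, so the dict needs `json.dumps` before `SET`, and redis-cli connects to db 0 by default while the script writes to db 1 (run `SELECT 1` in the CLI first). A minimal sketch:

```
import json
import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=1)

with open('products.json') as data_file:
    test_data = json.load(data_file)

# Serialize before storing; redis-py does not accept a raw dict.
r.set('test_json', json.dumps(test_data))

# In redis-cli: SELECT 1, then GET test_json.
print(json.loads(r.get('test_json')))
```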
73,416,533
I am new to Python and wanted to know if there are best-practice approaches for solving this problem. I have a string template which I want to compare with a list of strings; wherever a string differs from the template, I want to capture the differing parts into a dictionary.

```
template = "Hi {name}, how are you? Are you living in {location} currently? Can you confirm if following data is correct - {list_of_data}"

list_of_strings = [
    "Hi John, how are you? Are you living in California currently? Can you confirm if following data is correct - 123, 456, 345",
    "Hi Steve, how are you? Are you living in New York currently? Can you confirm if following data is correct - 6542"
]
```

```
expected = [
    {"name": "John", "location": "California", "list_of_data": [123, 456, 345]},
    {"name": "Steve", "location": "New York", "list_of_data": [6542]},
]
```

I tried many different approaches but kept getting stuck in ad-hoc logic, and the solutions did not look generic enough to support any string matching the template. Any help is highly appreciated.
2022/08/19
[ "https://Stackoverflow.com/questions/73416533", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12722706/" ]
You can use a regular expression:

```
template = "Hi {name}, how are you? Are you living in {location} currently? Can you confirm if following data is correct - {list_of_data}"

list_of_strings = [
    "Hi John, how are you? Are you living in California currently? Can you confirm if following data is correct - 123, 456, 345",
    "Hi Steve, how are you? Are you living in New York currently? Can you confirm if following data is correct - 6542"
]

import re

expected = []
for s in list_of_strings:
    r_ = re.search(r"Hi (.+?), how are you\? Are you living in (.+?) currently\? Can you confirm if following data is correct - (.+)", s)
    res = {}
    res["name"] = r_.group(1)
    res["location"] = r_.group(2)
    res["list_of_data"] = list(map(int, r_.group(3).split(",")))
    expected.append(res)

print(expected)
```

It will produce the following output:

```
[{'name': 'John', 'location': 'California', 'list_of_data': [123, 456, 345]}, {'name': 'Steve', 'location': 'New York', 'list_of_data': [6542]}]
```

It should produce the expected output; please check for minor bugs.
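A more generic variant (a sketch, standard library only): derive the pattern mechanically from the template itself, so any `{placeholder}` works without hand-writing the regex:

```
import re

def template_to_regex(template):
    # Escape regex metacharacters, then turn each escaped {name}
    # placeholder back into a named capture group.
    escaped = re.escape(template)
    return re.compile(re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>.+?)", escaped) + r"$")

pattern = template_to_regex(template)
print(pattern.match(list_of_strings[0]).groupdict())
# {'name': 'John', 'location': 'California', 'list_of_data': '123, 456, 345'}
```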
I think using named regular expression groups is a more elegant way to solve this problem, for example:

```
import re

list_of_strings = [
    "Hi John, how are you? Are you living in California currently? Can you confirm if following data is correct - 123, 456, 345",
    "Hi Steve, how are you? Are you living in New York currently? Can you confirm if following data is correct - 6542"
]

pattern = re.compile(
    r"Hi (?P<name>.+?), how are you\? "
    r"Are you living in (?P<location>.+?) currently\? "
    r"Can you confirm if following data is correct - (?P<list_of_data>.+)"
)

result = []
for string in list_of_strings:
    if match := pattern.match(string):
        obj = match.groupdict()
        obj['list_of_data'] = list(map(int, obj['list_of_data'].split(',')))
        result.append(obj)

print(result)
```

**Output:**

```
[
    {'name': 'John', 'location': 'California', 'list_of_data': [123, 456, 345]},
    {'name': 'Steve', 'location': 'New York', 'list_of_data': [6542]}
]
```
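Alternatively, the third-party `parse` package (`pip install parse`) inverts `str.format`-style templates directly — a sketch, assuming the package is installed:

```
from parse import parse

results = []
for s in list_of_strings:
    # parse() returns a Result with a .named dict, or None on no match
    if (r := parse(template, s)) is not None:
        d = r.named
        d["list_of_data"] = [int(x) for x in d["list_of_data"].split(",")]
        results.append(d)

print(results)
```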