| column | type | min | max |
| --- | --- | --- | --- |
| `qid` | int64 | 46k | 74.7M |
| `question` | string (lengths) | 54 | 37.8k |
| `date` | string (lengths) | 10 | 10 |
| `metadata` | list (lengths) | 3 | 3 |
| `response_j` | string (lengths) | 29 | 22k |
| `response_k` | string (lengths) | 26 | 13.4k |
| `__index_level_0__` | int64 | 0 | 17.8k |
37,464,116
I have found some ways to using python to build android app.But all of them need to install sl4a and pythonforandroid. It is so complicated,so is there any way to package sl4a to my android app project,and once I install the apk,I needn't install sl4a any more.
2016/05/26
[ "https://Stackoverflow.com/questions/37464116", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5867427/" ]
You can use Kivy, via the following link: [Python for Android](https://github.com/kivy/python-for-android). It can help you. Also have a look at the following topic: [How can I integrate a Python code in the Android Java app?](https://www.quora.com/How-can-I-integrate-a-Python-code-in-the-Android-Java-app). Hope this helps you.
You *can* do this. What it comes down to is packaging the Python interpreter (compiled by the NDK) in your app, and starting it via the standard NDK mechanisms. This is the same thing that e.g. Kivy does, but you'd be adding the code to your own app rather than (like Kivy) using a Java bootstrap and then letting the Python manage everything else. One option, which has seen a little discussion/development recently, is to use [python-for-android](https://github.com/kivy/python-for-android) to build all the Python components, then copy them into your Java project (and add the code to handle it). This is possible, but not currently as easy as it could be; you'd need to look into how it works internally to get the outputs you need. Another option, which is probably easier right now if you don't need compiled code beyond Python itself, is to directly use the precompiled Python binaries of the [CrystaX NDK](https://www.crystax.net/), in which case including the Python binaries comes down to only adding them to your Android.mk. You'd still need some C and NDK code to interact with the interpreter, but the process is quite straightforward. (SL4A has some Android build tools of its own, which you could also use for this, but I don't know what you'd need to do to integrate it, as I think SL4A does extra things on top of just having the Python interpreter present.)
11,754
21,717,995
How can I set the Python on my Mac back to the default reference location? When I try to do ``` sudo easy_install virtualenv ``` I get the following results ``` dyld: Library not loaded: @rpath/Python Referenced from: /Users/a1ctesta/Library/Enthought/Canopy_64bit/User/bin/python Reason: image not found ``` I do not have Canopy installed any longer, so I wanted to restore this back to the original reference that came on the computer.
2014/02/12
[ "https://Stackoverflow.com/questions/21717995", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3221720/" ]
You probably need to modify your `$PATH` environment variable so that `/usr/bin` is before your custom path. To check if this is the issue, run the following command and see if `/usr/bin` is before or after your custom path: ``` $ echo $PATH ``` The PATH environment variable is often set in `~/.bash_profile`; for example, on my system I have ``` export PATH=/opt/local/bin:/opt/local/sbin:/Developer/usr/bin:$PATH ``` meaning that the `python` executable in `/opt/local/bin` takes precedence over the one in the default `PATH`.
I had the same issue. It was easiest to reinstall Canopy. From the Canopy preferences menu, Unset Canopy as your default Python. Then restart your computer. <https://support.enthought.com/hc/en-us/articles/204469700-Uninstalling-and-resetting-Canopy>
11,755
73,409,909
No matter what I install, whether it's opencv-python, opencv-contrib-python, or both at the same time, I keep getting the "No module named 'cv2'" error. I couldn't find an answer here. Some say that only opencv-python works, others say opencv-contrib-python works. I tried everything but nothing seems to work. I'm trying to use the aruco function and I know it belongs to the contrib module. Any tip? Thanks
2022/08/18
[ "https://Stackoverflow.com/questions/73409909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11672955/" ]
You need to update your EAS to 0.60.0: `npm install -g eas-cli`. I had the same problem. This worked for me, friend.
We fixed this issue with the following steps. 1: Upgrade your eas-cli by running: `npm install -g eas-cli` 2: If you're publishing your app for the first time, you need to request access in App Store Connect, via this link: <https://appstoreconnect.apple.com/access/api> ``` Select Users and Access, and then select the API Keys tab. ``` Then click the blue "Request access" button, and your generated API keys will appear there. Then download any key, open it with a text editor, and paste the path of the file into the eas terminal, by selecting "[Enter an App Specific Password]". Important: it is not necessary to create a new key if you have already downloaded your key. We hope this works for you. And that's it.
11,759
66,937,068
**Please help me fix my regex :).** Summary: How can I make a repeating group (tags) match greedily even if it means a preceding optional group (a label) is empty. For some reason, my regex is not acting as desired: ``` code: re.match("^foo(-.*?)?((?:-(?:a|b))*)$", "foo-a-b").groups() output: ('-a', '-b') expected: ('', '-a-b') # since "-a" and "-b" are both tags that should be greedily matched by the last pattern ``` Examples of expected behavior: | Input | Expected | Current Output | Notes | | --- | --- | --- | --- | | foo-a-b | ('', '-a-b') | ('-a', '-b') | everything after "foo" is a tag, so label should be empty | | foo-b-a | ('', '-b-a') | ('-b', '-a') | everything after "foo" is a tag, so label should be empty | | foo-c-a-b | ('-c', '-a-b') | as expected | has both a label and a tag | | foo-a-b-c | ('-a-b-c', '') | as expected | everything is a label because tags can only be at the end | My real-life problem has a much more complicated definition of tags, labels, and "foo", but the issue is reproducible even with this smaller contrived example. Ideally, I'd also prefer to avoid having to repeat the "a|b" part of the regex, since in my real system it is actually a really long and complex pattern. If repeating that part of the regex is unavoidable, that's okay though! Repeating it within a negative look-behind also throws an error because of variable-length strings: ``` r_tags_nonames = re.sub("\(\?P<.+?>", "(?:", r_tags) re.match(r_tags, example).groups() ``` ``` Traceback (most recent call last): File "foo.py", line 210, in <module> _main() File "foo.py", line 119, in _main format, match = firstMatch(fileRoot) File "foo.py", line 90, in firstMatch match = re.fullmatch(regex, path, flags=re.IGNORECASE) ... File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/sre_compile.py", line 182, in _compile raise error("look-behind requires fixed-width pattern") re.error: look-behind requires fixed-width pattern ``` An example of a more complex version of this where repeating is problematic (because of renaming) – imagine dozens of such tags where the tags themselves also have subtags with names and their own regexes: ``` "^foo(-.*?)?((?:-(?:(?P<tag1>hello)|(?P<tag2>world)))*)$" ``` Thanks in advance!
2021/04/04
[ "https://Stackoverflow.com/questions/66937068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4089332/" ]
This boils down to stylistic preferences. I personally don't have qualms with having some of the attributes defined outside of `__init__(...)`, and so would opt for something like: ``` class TriangleData(): def __init__(self, contour_data: np.ndarray): self.get_triangle_data(contour_data) def get_triangle_data(self, contour_data): # calculations self.vertices = vertices_final self.triangle_area = triangle_area self.contours = contours ``` and disable any linter warnings that didn't like it. You could also opt for something like: ``` class TriangleData(): def __init__(self, contour_data: np.ndarray): self.vertices, self.triangle_area, self.contours = self.get_triangle_data(contour_data) def get_triangle_data(self, contour_data): # calculations return vertices_final, triangle_area, contours ``` which I think should satisfy the linter as-is.
There is a wide spectrum of approaches and libraries to eliminate the redundancy of a plain `__init__()`. Please take a look at [dataclasses](https://docs.python.org/3/library/dataclasses.html), [attrs](https://www.attrs.org/en/stable/), [NamedTuple](https://docs.python.org/3/library/typing.html#typing.NamedTuple) and [Pydantic](https://pydantic-docs.helpmanual.io/). --- UPD: I agree with Randy: there's nothing wrong with setting attributes outside of `__init__`. Besides that, you could update the instance dict directly. This is better when the final set of fields has to be computed dynamically. ```py class TriangleData(): def __init__(self, contour_data: np.ndarray): self.contour_data = contour_data data = self.get_triangle_data() # It's strange that you return that field from the `get` method # under a different name than the one you need to bind. Could we avoid that? # data['poly_dp_contours'] = data.pop('contorous') self.set_triangle_data(**data) def set_triangle_data(self, vertices, triangle, contorous): self.vertices = vertices self.triangle = triangle self.contorous = contorous # or even like below def set_triangle_data(self, **data): valid_keys = ['vertices', 'triangle', 'contorous', <...>] vars(self).update({k: data[k] for k in valid_keys if k in data}) def get_triangle_data(self): # Run calculations from the original contour data, etc. return { self.vertices_key: vertices_final, self.triangle_area_key: triangle_area, self.contours_key: contours, } ```
11,762
1,038,907
> > **Possible Duplicate:** > > [Suggestions for a Cron like scheduler in Python?](https://stackoverflow.com/questions/373335/suggestions-for-a-cron-like-scheduler-in-python) > > > What would be the most pythonic way to schedule a function to run periodically as a background task? There are some ideas [here](http://code.activestate.com/recipes/65222/), but they all seem rather ugly to me. And incomplete. The java [Timer](http://java.sun.com/javase/6/docs/api/java/util/Timer.html) class has a very complete solution. Anyone know of a similar python class?
2009/06/24
[ "https://Stackoverflow.com/questions/1038907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7581/" ]
There is a handy event scheduler that might do what you need. Here's a link to the documentation: <http://docs.python.org/library/sched.html>
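For illustration, here is a minimal sketch (names are my own, not from the linked docs) of how `sched` can be made to run a task periodically by having the action re-schedule itself:

```python
import sched
import time

s = sched.scheduler(time.time, time.sleep)

def periodic(interval, action):
    # Run the task, then re-enter it so it repeats every `interval` seconds.
    action()
    s.enter(interval, 1, periodic, (interval, action))

def task():
    print("task ran at %s" % time.ctime())

s.enter(5, 1, periodic, (5, task))
s.run()  # blocks; run this in a background thread if the main flow must continue
```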
Many programmers try to avoid multi-threaded code, since it is highly bug-prone in imperative programming. If you want to run a scheduled task in a single-threaded environment, then you probably need some kind of "[Reactor](http://en.wikipedia.org/wiki/Reactor_pattern)". You may want to use a ready-made one like [Twisted](http://twistedmatrix.com/trac/)'s. Then it would be a basic function provided by your reactor, for example (with pygame): > > pygame.time.set\_timer - repeatedly create an event on the event queue > > >
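As a hedged sketch of the Twisted route mentioned above (assuming Twisted is installed; `LoopingCall` is its standard helper for repeating calls inside the reactor):

```python
from twisted.internet import reactor
from twisted.internet.task import LoopingCall

def tick():
    print("periodic task ran")

loop = LoopingCall(tick)
loop.start(5.0)  # call tick() every 5 seconds, single-threaded, inside the reactor
reactor.run()    # the reactor event loop blocks here
```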
11,765
48,075,739
Django version: 1.11, Python version: 3.6.3. I found this Stack Overflow question: [Unit Testing a Django Form with a FileField](https://stackoverflow.com/questions/2473392/unit-testing-a-django-form-with-a-filefield) and I like how there isn't an actual image/external file used for the unittest; however I tried these approaches: ``` from django.test import TestCase from io import BytesIO from PIL import Image from my_app.forms import MyForm from django.core.files.uploadedfile import InMemoryUploadedFile class MyModelTest(TestCase): def test_valid_form_data(self): im_io = BytesIO() # BytesIO has to be used, StringIO isn't working im = Image.new(mode='RGB', size=(200, 200)) im.save(im_io, 'JPEG') form_data = { 'some_field': 'some_data' } image_data = { InMemoryUploadedFile(im_io, None, 'random.jpg', 'image/jpeg', len(im_io.getvalue()), None) } form = MyForm(data=form_data, files=image_data) self.assertTrue(form.is_valid()) ``` however, this always results in the following error message: ``` Traceback (most recent call last): File "/home/my_user/projects/my_app/products/tests/test_forms.py", line 44, in test_valid_form_data self.assertTrue(form.is_valid()) File "/home/my_user/.virtualenvs/forum/lib/python3.6/site-packages/django/forms/forms.py", line 183, in is_valid return self.is_bound and not self.errors File "/home/my_user/.virtualenvs/forum/lib/python3.6/site-packages/django/forms/forms.py", line 175, in errors self.full_clean() File "/home/my_user/.virtualenvs/forum/lib/python3.6/site-packages/django/forms/forms.py", line 384, in full_clean self._clean_fields() File "/home/my_user/.virtualenvs/forum/lib/python3.6/site-packages/django/forms/forms.py", line 396, in _clean_fields value = field.widget.value_from_datadict(self.data, self.files, self.add_prefix(name)) File "/home/my_user/.virtualenvs/forum/lib/python3.6/site-packages/django/forms/widgets.py", line 423, in value_from_datadict upload = super(ClearableFileInput, self).value_from_datadict(data, files, name) File "/home/my_user/.virtualenvs/forum/lib/python3.6/site-packages/django/forms/widgets.py", line 367, in value_from_datadict return files.get(name) AttributeError: 'set' object has no attribute 'get' ``` Why? I understand that `.get()` is a dictionary method, but I fail to see where it created a set.
2018/01/03
[ "https://Stackoverflow.com/questions/48075739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3329127/" ]
`image_data` should be a dict; a braces literal without a key, `{value}`, creates a set object. You need to define it like this: `{key: value}`. Fix it to this: ``` image_data = { 'image_field': InMemoryUploadedFile(im_io, None, 'random.jpg', 'image/jpeg', len(im_io.getvalue()), None) } ```
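For reference, a tiny demonstration of why the braces literal without a key produced a set (this is standard Python behavior):

```python
image = object()  # stand-in for the InMemoryUploadedFile

as_set = {image}            # braces with bare values build a set
as_dict = {'image': image}  # braces with key: value pairs build a dict

print(type(as_set))   # set
print(type(as_dict))  # dict
```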
Code without an external file: ``` from django.core.files.uploadedfile import SimpleUploadedFile testfile = ( b'\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x00\x00\x00\x21\xf9\x04' b'\x01\x0a\x00\x01\x00\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02' b'\x02\x4c\x01\x00\x3b') avatar = SimpleUploadedFile('small.gif', testfile, content_type='image/gif') ```
11,772
3,319,788
I am trying to retrieve data from a Sybase database from Python, and I was wondering which would be the best way to do it. I found this module, but maybe you have some other suggestions: <http://python-sybase.sourceforge.net/> Thanks
2010/07/23
[ "https://Stackoverflow.com/questions/3319788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/400413/" ]
The sybase module you linked is by far the easiest way. You can get data like so: ``` import Sybase db = Sybase.connect('server','name','pass','database') c = db.cursor() c.execute("sql statement") list1 = c.fetchall() print list1 ``` You will have to use something like freetds to set up the interfaces for sybase, however.
You can also connect through [ODBC](http://tinyurl.com/255plm2).
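A minimal sketch of the ODBC route, assuming the `pyodbc` package and an already-configured ODBC driver; the DSN name, credentials, and table below are hypothetical:

```python
import pyodbc

# 'sybase' is a hypothetical DSN configured in your ODBC setup.
conn = pyodbc.connect('DSN=sybase;UID=name;PWD=pass')
cursor = conn.cursor()
cursor.execute("select * from some_table")  # hypothetical table
for row in cursor.fetchall():
    print(row)
conn.close()
```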
11,773
28,724,785
I am currently working on a project that makes use of Python 3.2.3 and thought to do some tests of my code in IPython, but it seems that IPython does not support the version of Python I am using. I get the following ImportError when trying to run IPython on my Ubuntu machine. ``` ImportError: IPython requires Python version 2.7 or 3.3 or above. ```
2015/02/25
[ "https://Stackoverflow.com/questions/28724785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3166105/" ]
You can find some discussion about dropping 2.6 and 3.2 support [here](https://github.com/ipython/ipython/pull/4002). One of the main reasons was that 3.2 does not support 2.x-style unicode strings - `u"I love IPython!"`, while 3.3 and above do. This change made it possible to support 2.7 and 3.3+ in a single codebase.
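A quick illustration of the point (this is standard Python syntax history, not from the linked discussion): the following line is valid on Python 2.x and on 3.3+, but raises a SyntaxError on Python 3.2, where the `u` prefix was not allowed.

```python
s = u"I love IPython!"  # SyntaxError on Python 3.2; fine on 2.x and 3.3+
print(s)
```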
[Quickstart](http://ipython.org/ipython-doc/2/install/install.html): IPython requires Python 2.7 or ≥ 3.3. If you need to use Python 2.6 or 3.2, you can find IPython 1.0 [here](http://archive.ipython.org/release/). PS: The deadsnakes PPA has packages for old and new python versions: ``` sudo apt-get install python-software-properties sudo add-apt-repository ppa:fkrull/deadsnakes sudo apt-get update sudo apt-get install python3.3 ```
11,775
68,081,171
This is the missing-number problem in Python. ``` class Missing: n = int(input()) arr = list(map(int,input().split(" "))) def __init__(self,arr,n): self.arr = arr self.n = n def MissingNumber(self): self.res = self.n*(self.n+1)/2 self.sum_array = sum(self.arr) return "Missing no. is ",self.res-self.sum_array Obj = Missing() Obj.MissingNumber() ``` I am getting this error. Can anybody solve it? ``` Obj = Missing() TypeError: __init__() missing 2 required positional arguments: 'arr' and 'n' ```
2021/06/22
[ "https://Stackoverflow.com/questions/68081171", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14643490/" ]
You need to put the input outside the class, and assign it when you create the instance with `Obj = Missing(arr,n)`. Code: ``` class Missing: def __init__(self,arr,n): self.arr = arr self.n = n def MissingNumber(self): self.res = self.n*(self.n+1)/2 self.sum_array = sum(self.arr) return "Missing no. is ",self.res-self.sum_array n = int(input()) arr = list(map(int,input().split(" "))) Obj = Missing(arr,n) print(Obj.MissingNumber()) ``` Result: ``` 5 1 2 3 4 5 ('Missing no. is ', 0.0) ```
Your constructor needs two parameters. You need to assign them before it runs, i.e. assign n and arr outside the class: ``` class Missing: def __init__(self,arr,n): self.arr = arr self.n = n def MissingNumber(self): self.res = self.n*(self.n+1)/2 self.sum_array = sum(self.arr) return "Missing no. is ",self.res-self.sum_array if __name__ == '__main__': n = int(input()) arr = list(map(int,input().split(" "))) Obj = Missing(arr,n) # pass the arguments in the order the constructor declares them Obj.MissingNumber() ```
11,776
6,715,486
So I am fairly new to Python, but I would like your help solving this minor issue I am having removing similar duplicates from a list. I have a list of URLs: `myList = ['http://www.mywebsite.com/shoes', 'http://wwww.yourwebsite.com/', 'http://www.mywebsite.com/shoes/']` I want to remove similar URLs; as you can see, <http://www.mywebsite.com/shoes> and <http://www.mywebsite.com/shoes/> are pretty much the same. I would like to remove one of them (I don't care which one) but keep the other, essentially removing the duplicate from the list. I would give an example, but I don't even know where to begin. Any insight would be a great help.
2011/07/16
[ "https://Stackoverflow.com/questions/6715486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/841342/" ]
If similarity for you is a difference in a trailing '/', then you can use sets [(read the tutorial here)](http://docs.python.org/tutorial/datastructures.html#sets) [and here](http://docs.python.org/library/stdtypes.html?highlight=frozenset#set-types-set-frozenset) to remove duplicates from your list, since: > > A set object is an unordered collection of distinct hashable objects. > Common uses include membership testing, removing duplicates from a > sequence, and computing mathematical operations such as intersection, > union, difference, and symmetric difference. > > > ``` myList = ['http://www.mywebsite.com/shoes', 'http://wwww.yourwebsite.com/', 'http://www.mywebsite.com/shoes/'] set(x.rstrip('/') for x in myList) # will return a set of unique urls # In case you need a list myList = list(set(x.rstrip('/') for x in myList)) ```
The issue may be that you haven't figured out exactly what it means for two URLs to be similar. We can't help you with that because only you know what your requirements are. Once you do figure that out, though, the rest is simple enough. There are two ways to do it: * If your similarity relation is transitive - that is, if `similar(a,b) and similar(b,c)` implies that `similar(a,c)` for all URLs `a`, `b`, `c` - then it will be possible to convert each URL to a canonical form. Two URLs will be similar if and only if their canonical forms are equal. So the easiest thing to do in that case is to convert each URL to canonical form, then create a set out of the canonical URLs obtained in this way: ``` set(canonical(u) for u in myList) ``` * If your similarity relation is not transitive, then things get really tricky because you can have cases like A being similar to B and B being similar to C, but A is not similar to C. So then the question becomes, in this example, what would you like to include in your list with duplicates stripped? Would you include A and C because they are not similar to each other, or would you include only B because you'd consider both A and C similar duplicates of it? In this case, depending on how you want to handle "fuzzy" cases like this, there are various algorithms you can use - but again, we'd need to know your exact requirements to recommend anything.
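To make the first approach concrete, here is a minimal sketch of one *possible* `canonical()` (an assumption for illustration: it treats URLs as similar when they differ only in a trailing slash or in scheme/host casing; adjust to your actual requirements). It uses Python 3's `urllib.parse` (`urlparse` on Python 2):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical(url):
    # Lowercase the scheme and host, and drop any trailing slash from the path.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path.rstrip('/'), parts.query, parts.fragment))

myList = ['http://www.mywebsite.com/shoes',
          'http://wwww.yourwebsite.com/',
          'http://www.mywebsite.com/shoes/']
print(set(canonical(u) for u in myList))
```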
11,777
15,848,156
I use the package named python-snappy. This package requires the [snappy](https://code.google.com/p/snappy/) library, so I downloaded and installed snappy successfully with the following commands: ``` ./configure make sudo make install ``` When I import snappy, I receive this error: ``` from _snappy import CompressError, CompressedLengthError, \ ImportError: libsnappy.so.1 cannot open shared object file: No such file or directory ``` I'm using Python 2.7, snappy, python-snappy and Ubuntu 12.04. How can I fix this problem? Thanks
2013/04/06
[ "https://Stackoverflow.com/questions/15848156", "https://Stackoverflow.com", "https://Stackoverflow.com/users/875781/" ]
Traditionally you might have to run the `ldconfig` utility to update your */etc/ld.so.cache* (or equivalent as appropriate to your OS). Sometimes it might be necessary to add new entries (paths) to your */etc/ld.so.conf*. Basically the shared object (so) loaders on many versions of Unix (and probably other Unix-like operating systems) use a cache to help resolve their base filenames into actual files to be loaded (usually *mmap()'d*). This is roughly similar to the intermittent need to run *hash -r* or *rehash* in your shell after adding things to directories in your PATH. Usually you can just run `ldconfig` with no arguments (possibly after adding your new library's path to your */etc/ld.so.conf* text file). Good *Makefiles* will do this for you during `make install`. Here's a little bit more info: <http://linux.101hacks.com/unix/ldconfig/>
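As a quick sanity check from the Python side (a sketch, not part of the fix itself): `ctypes.util.find_library` consults the same loader machinery (ldconfig's cache on Linux), so it can confirm whether the library has become discoverable after running `ldconfig`.

```python
from ctypes.util import find_library

# Prints something like 'libsnappy.so.1' once the loader can find it, else None.
print(find_library('snappy'))
```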
You can install the [python-snappy](http://packages.ubuntu.com/precise/python-snappy) and [libsnappy1](http://packages.ubuntu.com/precise/libsnappy1) from the ubuntu repos: ``` $ sudo apt-get install libsnappy1 python-snappy ``` You should not have to download anything.
11,779
59,412,006
I am new to Python and having a problem: my task was `"Given a sentence, return a sentence with the words reversed"`, e.g. `Tea is hot --------> hot is Tea`. My code was: ``` def funct1 (x): a,b,c = x.split() return c + " "+ b + " "+ a funct1 ("I am home") ``` It did solve the problem, but I have 2 questions: 1. How can I add the spaces without concatenating a space? 2. Is there any other way to reverse than splitting? Thank you.
2019/12/19
[ "https://Stackoverflow.com/questions/59412006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12565719/" ]
Your code is heavily dependent on the assumption that the string will always contain **exactly** 2 spaces. The task description you provided does not say that this will always be the case. This assumption can be eliminated by using `str.join` and `[::-1]` to reverse the list: ``` def funct1(x): return ' '.join(x.split()[::-1]) print(funct1('short example')) print(funct1('example with more than 2 spaces')) ``` Outputs ``` example short spaces 2 than more with example ``` A micro-optimization can be the use of `reversed`, so an extra list does not need to be created: ``` return ' '.join(reversed(x.split())) ``` > > 2) is there any other way to reverse than spliting? > > > Since the requirement is to reverse the order of the words while maintaining the order of letters inside each word, the answer to this question is "not really". A regex can be used, but will that be much different than splitting on a whitespace? probably not. Using `split` is probably faster anyway. ``` import re def funct1(x): return ' '.join(reversed(re.findall(r'\b(\w+)\b', x))) ```
A pythonic way to do this would be: ``` def reverse_words(sentence): return " ".join(reversed(sentence.split())) ``` To answer the subquestions: 1. Yes, use [`str.join`](https://docs.python.org/3/library/stdtypes.html#str.join). 2. Yes, you can use a [slice](https://stackoverflow.com/questions/509211/understanding-slice-notation) with a negative step `"abc"[::-1] == "cba"`. You can also use the builtin function [`reversed`](https://docs.python.org/3/library/functions.html#reversed) like I've done above (to get a reversed copy of a sequence), which is marginally more readable.
11,785
32,780,946
I have this database layout ``` class Clinic(models.Model): class Menu(models.Model): ... menu = models.OneToOneField(Clinic, related_name='menu') class Item(models.Model): ... menu = models.ForeignKey('Menu') ``` and I would like to access my Menu model and the Items linked to that menu from my Clinic model. I have tried this ``` In [5]: clinic = Clinic.objects.get(pk=1) In [12]: clinic.menu Out[12]: <Menu: Menu object> In [13]: clinic.menu.objects.all() --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-13-2aff8882b6ad> in <module>() ----> 1 clinic.menu.objects.all() /Users/gabriel/.virtualenvs/meed_waps/lib/python3.4/site-packages/django/db/models/manager.py in __get__(self, instance, type) 253 def __get__(self, instance, type=None): 254 if instance is not None: --> 255 raise AttributeError("Manager isn't accessible via %s instances" % type.__name__) 256 return self.manager 257 AttributeError: Manager isn't accessible via Menu instances ``` but it tells me that the manager can't access it. Conceptually it seems like I should be able to access the items and get a list by tracing the relationship down through the clinic model like this Clinic > Menu > Items. Is there another way I should be doing this?
2015/09/25
[ "https://Stackoverflow.com/questions/32780946", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282276/" ]
`clinic.menu.objects` is the `ModelManager` for the `Menu` model - note that it's an attribute of the `Menu` *class*, not of your `clinic.menu` instance. `ModelManager` classes are used to query the underlying table, and are not supposed to be called directly from instances (which wouldn't make much sense anyway), hence the error message. > > Conceptually it seems like I should be able to access the items and get a list by tracing the relationship down through the clinic model like this Clinic > Menu > Items > > > That's indeed possible (hopefully), but where are you mentioning `items` in `clinic.menu.objects`? You want `clinic.menu.item_set.all()`.
Try this: ``` Clinic.objects.get(pk=1).menu.item_set.all() ```
11,786
8,264,569
I've got 32 SQLite (3.7.9) databases with 3 tables each that I'm trying to merge together using the idiom that I've found elsewhere (each db has the same schema): ``` attach db1.sqlite3 as toMerge; insert into tbl1 select * from toMerge.tbl1; insert into tbl2 select * from toMerge.tbl2; insert into tbl3 select * from toMerge.tbl3; detach toMerge; ``` and rinse-repeating for the entire set of databases. I do this using python and the sqlite3 module: ``` for fn in filelist: completedb = sqlite3.connect("complete.sqlite3") c = completedb.cursor() c.execute("pragma synchronous = off;") c.execute("pragma journal_mode=off;") print("Attempting to merge " + fn + ".") query = "attach '" + fn + "' as toMerge;" c.execute(query) try: c.execute("insert into tbl1 select * from toMerge.tbl1;") c.execute("insert into tbl2 select * from toMerge.tbl2;") c.execute("insert into tbl3 select * from toMerge.tbl3;") c.execute("detach toMerge;") completedb.commit() except sqlite3.Error as err: print "Error! ", type(err), " Error msg: ", err raise ``` 2 of the tables are fairly small, only 50K rows per db, while the third (tbl3) is larger, about 850 - 900K rows. Now, what happens is that the inserts progressively slow down until I get to about the fourth database when they grind to a near halt (on the order of a a megabyte or two in file size added every 1-3 minutes to the combined database). In case it was python, I've even tried dumping out the tables as INSERTs (.insert; .out foo; sqlite3 complete.db < foo is the skeleton, found [here](https://stackoverflow.com/questions/75675/how-do-i-dump-the-data-of-some-sqlite3-tables)) and combining them in a bash script using the sqlite3 CLI to do the work directly, but I get exactly the same problem. The table setup of tbl3 isn't too demanding - a text field containing a UUID, two integers, and four real values. My worry is that it's the number of rows, because I ran into exactly the same trouble at exactly the same spot (about four databases in) when the individual databases were an order of magnitude larger in terms of file size with the same number of rows (I trimmed the contents of tbl3 significantly by storing summary stats instead of raw data). Or maybe it's the way I'm performing the operation? Can anyone shed some light on this problem that I'm having before I throw something out the window?
2011/11/25
[ "https://Stackoverflow.com/questions/8264569", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126321/" ]
You can do it somewhat with HTML5's new input types. See: <http://www.456bereastreet.com/archive/201004/html5_input_types/>
Using HTML5, kind of yes. Link: <http://thereforei.am/2011/07/01/css-selectors-for-html5-input-validation/>
11,787
49,461,537
Using PyCharm Community, Python 3.6.2, Django 2.0.3. views.py ``` from django.http import HttpResponse def hello_world(request): return HttpResponse('Hello World') ``` urls.py ``` from django.conf.urls import url from django.contrib import admin from . import views urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^$', views, hello_world), ] ``` I tried to figure it out but I'm missing something. Error while running in PyCharm: > > urls.py", line 8, in > url(r'^$', views, hello\_world), > > > NameError: name 'hello\_world' is not defined > > >
2018/03/24
[ "https://Stackoverflow.com/questions/49461537", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9511012/" ]
The error is telling you that there is no variable such as `hello_world` defined. You need to change it to: ``` url(r'^$', views.hello_world) ``` Where `views` is the views module that you have imported at the top.
This line of code is wrong: ``` url(r'^$', views, hello_world) ``` You just imported `views`, which is the file views.py. Now you need to call the view function, which is going to look like: ``` url(r'^$', views.hello_world) ``` And you might find it useful to give that URL a name so you can use it as a reference in your templates in the future: ``` url(r'^$', views.hello_world, name='hello-world') ``` Also, you can import from your views.py as follows: ``` from .views import hello_world ``` The next is possible as well, as suggested in the comments by Niayesh Isky, but not encouraged: ``` from .views import * ```
11,790
36,937,305
I am running a shell script in Cygwin where I am using the Python Robot Framework, but in that environment the 'pybot' command is not found. ``` $ pybot --version ``` Result: ``` -bash: pybot: command not found ``` However, in the cmd prompt the above command works fine. PS: I have already set the Python interpreter path in the environment variables, as the same command works fine in the cmd prompt. Is there any possible way to use pybot in the Cygwin shell?
2016/04/29
[ "https://Stackoverflow.com/questions/36937305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4637378/" ]
I have explored and got to know that in Cygwin we need to specify the file extension as well. So, if we say ``` pybot.bat --version ``` this should work.
This worked for me in Cygwin: `robot.bat --version` gives "Robot Framework 3.0.4 (Python 2.7.13 on win32)", and `pybot.bat --version` gives "Robot Framework 3.0.4 (Python 2.7.13 on win32)" as well.
11,791
62,023,199
I have a little problem making an e-counter for fast food; the problem is with continuing an order. I don't know how to use True and False in Python. I chose three foods as a beginner, and it ended up failing. So I tried to use True by copying other people's code. At first the code was a success, but when I changed a little thing in the code it went wrong, and I don't know how to fix it. So far this is my code: ``` hcount = 0 scount = 0 fcount = 0 y = True n = False cstm= input("Please enter your name: ") print(" ") print(" ") print("Hi MR/MS " + cstm) print("WELCOME TO MACDUNNO ECOUNTER") print("CHOOSE YOUR FOOD") while True: print(" ") print('MENU MAC DUNNO') print("1. HAMBURGER = $1.50") print("2. SODA = $1.15") print("3. FRIES = $1.25") choice = int(input('ENTER NUMBER 1-3: ')) if choice == 1: amount = int(input("ENTER THE AMOUNT: ")) hcount += amount elif choice == 2: amount = int(input("ENTER THE AMOUNT: ")) scount += amount elif choice == 3: amount = int(input("ENTER THE AMOUNT: ")) fcount += amount countinue = input("ARE YOU STILL WANT TO ORDER? (y/n)") if countinue == n: sub = (hcount * 1.50) + (scount * 1.15) + (fcount * 1.25) tax = sub * 0.09 total = sub + tax print(" ") print(" ") print("************") print("PAYMENT") print("************") print('TOTAL HAMBURGER: {0}'.format(hcount)) print('TOTAL SODA: {0}'.format(scount)) print('TOTAL FRIES: {0}'.format(fcount)) print(" ") print('SUBTOTAL: {:0.2f}'.format(sub)) print('TAX : {:0.2f}'.format(tax)) print(" ") print("__________________________________") print('TOTAL: {:0.2f}'.format(total)) print(" ") pay =int(input("INSERT PAYMENT:")) if pay > total : exchange = pay - total print(' ') print(' ') print('EXCHANGE : {:0.2f}'.format(exchange)) print("THANK YOU, PLEASE COME AGAIN") import time time.sleep(5) else: print(' ') print(' ') print("INSUFFICIENT AMOUNT") print("PLEASE INSERT THE RIGHT AMOUNT") scpay = int(input("INSERT PAYMENT : ")) if scpay > total : exchange = scpay - total print(' ') print(' ') print('EXCHANGE : {:0.2f}'.format(exchange)) print("THANK YOU, PLEASE COME AGAIN") import time time.sleep(5) else: print(" ") print("ORDER TERMINATED") import time time.sleep(5) ```
2020/05/26
[ "https://Stackoverflow.com/questions/62023199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13620217/" ]
The query is fairly self-explanatory. I have added a should clause with a match on field c and a must\_not exists check on field c. Minimum\_should\_match is set to 1. A document which matches either clause is returned, i.e. field c should either not exist or match the input string. ``` { "query": { "bool": { "must": [ { "match": { "a": "norm" } }, { "match_phrase": { "b": "views" } } ], "should": [ { "match": { "c": "claims" } }, { "bool": { "must_not": [ { "exists": { "field": "c" } } ] } } ], "minimum_should_match": 1 } } } ``` **Data:** ``` "hits" : [ { "_source" : { "a" : "norm", "b" : "views", "c" : "claims" } }, { "_source" : { "a" : "norm", "b" : "views" } }, { "_source" : { "a" : "norm", "b" : "views", "c" : "ddd" } } ] ``` **Result:** ``` "hits" : [ { "_source" : { "a" : "norm", "b" : "views", "c" : "claims" } }, { "_source" : { "a" : "norm", "b" : "views" } } ] ``` Documents where c="claims" or where c does not exist are returned.
Use filter and exists. [exists](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-exists-query.html) [filter](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-exists-query.html) ``` { "query": { "filtered": { "filter": { "bool": { "must_not": [ { "missing": { "field": "your_field" } } ] } } } } } ```
11,792
48,886,423
I have an address list: ``` addr = ['100 NORTH MAIN ROAD', '100 BROAD ROAD APT.', 'SAROJINI DEVI ROAD', 'BROAD AVENUE ROAD'] ``` I need to do my replacement work in the following function: ``` def subst(pattern, replace_str, string): ``` by defining a pattern outside of this function and passing it as an argument to subst. I need output like: ``` addr = ['100 NORTH MAIN RD', '100 BROAD RD APT.', 'SAROJINI DEVI RD ', 'BROAD AVENUE RD'] ``` where all 'ROAD' strings are replaced with 'RD' --- ``` def subst(pattern, replace_str, string): #substitute pattern and return it new=[] for x in string: new.insert((re.sub(r'^(ROAD)','RD',x)),x) return new def main(): addr = ['100 NORTH MAIN ROAD', '100 BROAD ROAD APT.', 'SAROJINI DEVI ROAD', 'BROAD AVENUE ROAD'] #Create pattern Implementation here pattern=r'^(ROAD)' print (pattern) #Use subst function to replace 'ROAD' to 'RD.',Store as new_address new_address=subst(pattern,'RD',addr) return new_address ``` --- I have done this and am getting the error below: Traceback (most recent call last): File "python", line 23, in File "python", line 20, in main File "python", line 7, in subst TypeError: 'str' object cannot be interpreted as an integer
2018/02/20
[ "https://Stackoverflow.com/questions/48886423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9385748/" ]
No need for regex, just use `replace`: ``` [x.replace('ROAD', 'RD') for x in addr] ``` If you only want to replace `ROAD` as a whole word, not in the middle, use: ``` [re.sub(r'\bROAD\b', 'RD', x) for x in addr] ```
Using `re` ``` import re addr = ['100 NORTH MAIN ROAD', '100 BROAD ROAD APT.', 'SAROJINI DEVI ROAD', 'BROAD AVENUE ROAD'] for i, v in enumerate(addr): addr[i] = re.sub('ROAD', 'RD', v) #v.replace("ROAD", "RD") print addr ``` **Output**: ``` ['100 NORTH MAIN RD', '100 BRD RD APT.', 'SAROJINI DEVI RD', 'BRD AVENUE RD'] ```
11,793
70,332,822
I am receiving an image as bytes that I would like to store using specifically the `with open` command, due to library restrictions; therefore I am unable to use libs like opencv2 and PIL. ``` import numpy as np import base64 import cv2 import io from os.path import dirname, join from com.chaquo.python import Python def main(data): decoded_data = base64.b64decode(data) files_dir = str(Python.getPlatform().getApplication().getFilesDir()) filename = join(dirname(files_dir), 'image.PNG') with open(filename, 'rb') as file: img = file.write(img) return img ``` What I would simply like to do is save the image file to the current directory that I am in. Currently, when I try my code, it gives me the following error: ``` com.chaquo.python.PyException: TypeError: write() argument must be str, not bytes ``` which requires me to have a string. I'd like to know what I need to do to save the image as a .PNG file to the directory.
2021/12/13
[ "https://Stackoverflow.com/questions/70332822", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As you wrote, most Databricks clusters use log4j 1.2.17, so it is a different version, and the version affected by the vulnerability is not used by Databricks. The only problem is if you install a different version yourself on the cluster. Even if you have installed an affected version, you can mitigate the problem by setting the Spark config in the cluster's advanced config as below: ``` spark.driver.extraJavaOptions "-Dlog4j2.formatMsgNoLookups=true" spark.executor.extraJavaOptions "-Dlog4j2.formatMsgNoLookups=true" ```
You can get a complete end-to-end update on this here: <https://databricks.com/blog/2021/12/13/log4j2-vulnerability-cve-2021-44228-research-and-assessment.html>
11,796
13,788,114
Given the following Python (from <http://norvig.com/sudoku.html>) ``` def cross(A, B): "Cross product of elements in A and elements in B." return [a+b for a in A for b in B] cols = '123456789' rows = 'ABCDEFGHI' squares = cross(rows, cols) ``` This produces: ``` ['A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8', 'A9', 'B1', 'B2', 'B3', ...] ``` As an exercise, I want to do the same in C++. Currently I have: ``` #include <iostream> #include <map> #include <vector> using std::string; using std::vector; static vector<string> cross_string(const string &A, const string &B) { vector<string> result; for (string::const_iterator itA = A.begin(); itA != A.end(); ++itA) { for (string::const_iterator itB = B.begin(); itB != B.end(); ++itB) { char s[] = {*itA, *itB, 0}; result.push_back(string(s)); } } return result; } int main(int argc, char** argv) { const char digits[] = "123456789"; const char rows[] = "ABCDEFGHI"; vector<string> res = cross_string(rows, digits); for (vector<string>::const_iterator it = res.begin(); it != res.end(); ++it) { std::cout << *it << std::endl; } } ``` This works, but I was hoping there would be a better way. This also does only strings, whereas the python does any list... --- Edit: Thanks for all of the replies. I accepted the one that I understood best, but Alf's answer was a close second. I note that all used C++11 and wonder whether as a novice at C++ I should adopt that directly instead of the learning the older standard. But that is perhaps best for another question.
2012/12/09
[ "https://Stackoverflow.com/questions/13788114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1163625/" ]
Well it's shorter to just present the code than to explain: ``` #include <iostream> // std::wcout, std::endl #include <string> // std::string #include <utility> // std::begin, std::end #include <vector> using namespace std; string sum( char const a, char const b ) { return string() + a + b; } template< class Container > auto cross( Container const& a, Container const& b ) -> vector< decltype( sum( *begin( a ), *begin( b ) ) ) > { typedef decltype( sum( *begin( a ), *begin( b ) ) ) ResultItem; vector< ResultItem > result; for( auto&& itemA : a ) for( auto&& itemB : b ) { result.push_back( sum( itemA, itemB ) ); } return result; } wostream& operator<<( wostream& stream, string const& s ) { return (stream << s.c_str()); } template< class Item > wostream& operator<<( wostream& stream, vector<Item> const& v ) { stream << "["; bool isFirstItem = true; for( auto&& item : v ) { if( !isFirstItem ) { stream << ", "; } stream << item; isFirstItem = false; } stream << "]"; return stream; } int main() { string const cols = "123456789"; string const rows = "ABCDEFGHI"; auto const squares = cross( cols, rows ); wcout << squares << endl; } ```
Templatize for classes A and B, then make pairs `std::pair<A, B>`.
11,797
30,884,008
I have files with code that is formatted for Windows. When I try to run them on a Linux machine I have problems with file encodings. Can anybody suggest a solution for this? When I run it I get: ``` This was return from redis Exception in thread Thread-3: Traceback (most recent call last): File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner self.run() File "/usr/lib/python2.7/threading.py", line 763, in run self.__target(*self.__args, **self.__kwargs) File "/home/bsingh/python_files/lib/Site.py", line 85, in monitor self.update1() File "/home/bsingh/python_files/lib/Site.py", line 78, in update1 for entry in new_pastes[::-1]: TypeError: 'NoneType' object has no attribute '__getitem__' ```
2015/06/17
[ "https://Stackoverflow.com/questions/30884008", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4943862/" ]
Your selection list in the first condition of your `if` statement is the problem. If `opt` happens to be `es`, for example, then ``` if opt == 'es': opt = "ESP:" ``` will change that to `ESP:`. ``` if opt in ["es","la","en","ar","fr"] and extent == "begin": ``` can then never be `True` (when you're using `and` instead of `or`). If you change that line to something like ``` if opt in ["ESP:","LAT:","ENG:","ar","fr"] and extent == "begin": ``` it might work (if the code you've shown is all that's relevant to the problem).
**AND** For a condition to become True with the **AND operator**, a **True** result is required from **all conditions**. **OR** For a condition to become True with the **OR operator**, a **True** result is required from any **one condition**. **E.g.** ``` In [1]: True and True Out[1]: True In [2]: True and False Out[2]: False In [3]: True or False Out[3]: True ``` --- In your code, print the following statements: ``` print "Debug 1: opt value", opt print "Debug 2: extent value", extent ``` **Why use the same variable name again??** If the value of `opt` is `es`, then the if condition `if opt == 'es':` is `True` and the `opt` variable is reassigned to the value `ESP:`. And in your final if statement you check `opt in ["es","la","en","ar","fr"]`, which is therefore always `False`. ``` opt = child.get('desc') # ^^ extent = child.get('extent') if opt == 'es': opt = "ESP:" # ^^ elif opt == "la": opt = "LAT:" elif opt == "en": ```
11,803
17,559,257
I am using the following code to send email from Unix. Code: ``` #!/usr/bin/python import os def sendMail(): sendmail_location = "/usr/sbin/sendmail" # sendmail location p = os.popen("%s -t" % sendmail_location, "w") p.write("From: %s\n" % "myname@company.com") p.write("To: %s\n" % "yourname@company.com") p.write("Subject: My Subject \n") p.write("\n") # blank line separating headers from body p.write("body of the mail") status = p.close() if status != 0: print "Mail Sent Successfully", status sendMail() ``` I am not sure how to add an attachment to this email (the attachment being in a different directory, /my/new/dir/).
2013/07/09
[ "https://Stackoverflow.com/questions/17559257", "https://Stackoverflow.com", "https://Stackoverflow.com/users/392233/" ]
Sendmail is an extremely simplistic program. It knows how to send a blob of text over smtp. If you want to have attachments, you're going to have to do the work of converting them into a blob of text and using (in your example) p.write() to add them into the message. That's hard - but you can use the `email` module (part of python core) to do a lot of the work for you. Even better, you can use `smtplib` (also part of core) to handle sending the mail. Check out <http://docs.python.org/2/library/email-examples.html#email-examples> for a worked example showing how to send a mail with attachments using `email` and `smtplib`
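A hedged sketch of what that looks like with `email` and `smtplib` (the attachment filename and the localhost SMTP server are assumptions for illustration; the linked examples cover more variations):

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg['From'] = 'myname@company.com'
msg['To'] = 'yourname@company.com'
msg['Subject'] = 'My Subject'
msg.attach(MIMEText('body of the mail'))

# Attach a file from another directory (hypothetical filename).
with open('/my/new/dir/report.pdf', 'rb') as f:
    part = MIMEApplication(f.read(), Name='report.pdf')
part['Content-Disposition'] = 'attachment; filename="report.pdf"'
msg.attach(part)

s = smtplib.SMTP('localhost')  # assumes a local SMTP daemon, e.g. sendmail's
s.sendmail(msg['From'], [msg['To']], msg.as_string())
s.quit()
```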
I normally use the following to send a file "file\_name.dat" as attachment: ``` uuencode file_name.dat file_name.dat | mail -s "Subject line" arnab.bhagabati@gmail.com ```
11,812
58,880,450
How might I remove duplicate lines from a file, and also the unique line related to the duplicate? **Example:** Input file: ``` line 1 : Messi , 1 line 2 : Messi , 2 line 3 : CR7 , 2 ``` I want the output file to be: ``` line 1 : CR7 , 2 ``` (Just " CR7 , 2 ": I want to delete the duplicate lines and also the unique line related to the duplicate.) The deletion depends on the first field: if there is a match in the first field, I want to delete the line. How can I do this in Python, and what should I edit in this code: ``` lines_seen = set() # holds lines already seen outfile = open(outfilename, "w") for line in open(infilename, "r"): if line not in lines_seen: # not a duplicate outfile.write(line) lines_seen.add(line) outfile.close() ``` What is the best way to do this job?
2019/11/15
[ "https://Stackoverflow.com/questions/58880450", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12379359/" ]
Have you tried `Counter`? This works, for example: ``` import collections a = [1, 1, 2] out = [k for k, v in collections.Counter(a).items() if v == 1] print(out) ``` Output: `[2]` Or with a longer example: ``` import collections a = [1, 1, 1, 2, 4, 4, 4, 5, 3] out = [k for k, v in collections.Counter(a).items() if v == 1] print(out) ``` Output: `[2, 5, 3]` EDIT: ----- Since you don't have a list at the beginning, there are two ways, depending on the file size: you should use the first for small enough files (otherwise you might run into memory problems) or the second one for large files. Read file as list and use previous answer: ========================================== ``` import collections lines = [line for line in open(infilename)] out = [k for k, v in collections.Counter(lines).items() if v == 1] with open(outfilename, 'w') as outfile: for o in out: outfile.write(o) ``` The first line reads your file completely as a list. This means that really large files would be loaded into your memory. If you have files that are too large, you can go ahead and use a sort of "blacklist": Using blacklist: ================ ``` lines_seen = set() # holds lines already seen blacklist = set() outfile = open(outfilename, "w") for line in open(infilename, "r"): if line not in lines_seen and line not in blacklist: # not a duplicate lines_seen.add(line) else: lines_seen.discard(line) blacklist.add(line) for l in lines_seen: outfile.write(l) outfile.close() ``` Here you add all lines to the set and only write the set to the file at the end. The blacklist remembers all multiple occurrences, and therefore you do not write such lines even once. You can't read and write in one go, since you do not know whether the same line will come a second time. If you have further information (like multiple lines always coming continuously) you could maybe do it differently. EDIT 2 ------ If you want to do it depending on the first part: ``` firsts_seen = set() lines_seen = set() # holds lines already seen blacklist = set() outfile = open(outfilename, "w") for line in open(infilename, "r"): first = line.split(',')[0] if first not in firsts_seen and first not in blacklist: # not a duplicate lines_seen.add(line) firsts_seen.add(first) else: lines_seen.discard(line) firsts_seen.discard(first) blacklist.add(first) print(len(lines_seen)) for l in lines_seen: outfile.write(l) outfile.close() ``` P.S.: Up to now I have just been adding code; there might be a better way. For example, with a dict: ``` lines_dict = {} for line in open(infilename, 'r'): if line.split(',')[0] not in lines_dict: lines_dict[line.split(',')[0]] = [line] else: lines_dict[line.split(',')[0]].append(line) with open(outfilename, 'w') as outfile: for key, value in lines_dict.items(): if len(value) == 1: outfile.write(value[0]) ```
Given your input you can do something like this: ``` seen = {} # key maps to index double_seen = set() with open('input.txt') as f: for line in f: _, key = line.split(':') key = key.strip() if key not in seen: # Have not seen this yet? seen[key] = line # Then add it to the dictionary else: double_seen.add(key) # Else we have seen this more thane once # Now we can just write back to a different file with open('output.txt', 'w') as f2: for key in set(seen.keys()) - double_seen: f2.write(seen[key]) ``` Input I used: ``` line 1 : Messi line 2 : Messi line 3 : CR7 ``` Output: ``` line 3 : CR7 ``` Note this solution assumes Python3.7+ since it assumes dictionaries are in insertion order.
11,814
67,194,174
I want to write Python code where the output will be something like this: [[1],[1,2],[1,2,3], … [1,2,3, … ]] I have this code, which has similar output, but the only difference is that it prints all the cases. For example, if I say that the range is 3, it prints [[1], [1, 2], [1, 3], [1, 2, 3]] and not [[1], [1, 2], [1, 2, 3]]. This is my code: ``` def sub_lists(l): base = [1] lists = [base] for i in range(2, l+1): orig = lists[:] new = i for j in range(len(lists)): lists[j] = lists[j] + [new] lists = orig + lists return lists num=int(input("Please give me a number: ")); print(sub_lists(num)) ```
2021/04/21
[ "https://Stackoverflow.com/questions/67194174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13300391/" ]
How about using `list comprehension`? For example: ``` num = 3 print([[*range(1, i + 1)] for i in range(1, num + 1)]) ``` Output: ``` [[1], [1, 2], [1, 2, 3]] ```
You should not use nested loops if you only intend to add a single number at the end. ``` def copyBase(lst): return lst[:] def sub_lists(l): base = [1] lists = [] for i in range(1, l+1): lists += [copyBase(base),] base += [i+1] return lists num=int(input("Please give me a number: ")); print(sub_lists(num)) ```
11,815
58,105,181
I'm using databricks-connect on mac using pycharm but after I finished the configuration and tried to run `databricks-connect test`, I got the following error and have no idea what the problem is. I followed this documentation: <https://docs.databricks.com/user-guide/dev-tools/db-connect.html> The error message is as below: ``` scala> spa Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/bin/databricks-connect", line 11, in load_entry_point('databricks-connect==5.3.1', 'console_scripts', 'databricks-connect')() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyspark/databricks_connect.py", line 244, in main test() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyspark/databricks_connect.py", line 213, in test raise ValueError("Scala command failed to produce correct result") ValueError: Scala command failed to produce correct result ```
2019/09/25
[ "https://Stackoverflow.com/questions/58105181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6279751/" ]
Maybe your Java/Python version does not match. Check your cluster: which Python version does it use (in my case it was 3.5)? And what's most important: check which JDK version you have on your computer. In my case, I had the latest one, which was not supported by `databricks-connect`. It needs to run on JDK 8.
To ignore the runtime version mismatch, export this environment variable: `export DEBUG_IGNORE_VERSION_MISMATCH=1`
11,819
25,331,758
I am writing a script to start and background a process inside a vagrant machine. It seems like every time the script ends and the ssh session ends, the background process also ends. Here's the command I am running: `vagrant ssh -c "cd /vagrant/src; nohup python hello.py > hello.out > 2>&1 &"` `hello.py` is actually just a flask development server. If I were to log in to ssh interactively and run the `nohup` command manually, after I close the session the server will continue to run. However, if I were to run it via `vagrant ssh -c`, it's almost as if the command never ran at all (i.e. no hello.out file created). What is the difference between running it manually and through vagrant ssh -c, and how can I fix it so that it works?
2014/08/15
[ "https://Stackoverflow.com/questions/25331758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3667561/" ]
I faced the same problem when trying to run a Django application as a daemon. I don't know why, but adding a "sleep 1" at the end works for me. ``` vagrant ssh -c "nohup python manage.py runserver & sleep 1" ```
Running nohup inside the ssh command did not work for me when running wireshark. This did: ``` nohup vagrant ssh -c "wireshark" & ```
11,821
46,901,975
I am new to this whole Python and data mining thing. Let's say I have a list of strings called data ``` data[0] = ['I want to make everything lowercase'] data[1] = ['How Do I Do It'] data[2] = ['With A Large DataSet'] ``` and so on. My len(data) gives 50000. I have tried ``` {k.lower(): v for k, v in data.items()} ``` and it gives me an error saying that 'list' object has no attribute 'items'. I have also tried using .lower() and it is giving me the same AttributeError. How do I recursively call the lower() function on all of data[:50000] to make all of the strings in data lowercase? EDIT: For more details: I have a JSON file with data such as: ``` {'review/a': 1.0, 'review/b':2.0, 'review/c':This IS the PART where I want to make all loWerCASE} ``` Then I call a function to get the specific reviews that I want to make lowercase: ``` def lowerCase(datum): feat = [datum['review/c']] return feat lowercase = [lowercase(d) for d in data] ``` Now that I have all the 'review/c' information in my lowercase list, I want to make all of those strings lowercase.
2017/10/24
[ "https://Stackoverflow.com/questions/46901975", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4160544/" ]
If your list *data* is like this: ``` data = ['I want to make everything lowercase', '', ''] data = [k.lower() for k in data] ``` If your list *data* is a list of string lists: ``` data = [['I want to make everything lowercase'], ['']] data = [[k.lower()] for l in data for k in l] ``` The fact is that a list doesn't have the attribute 'items'.
You need a list comprehension, not a dict comprehension: ``` lowercase_data = [v.lower() for v in data] ```
11,822
65,332,466
I get a [circular dependency error](https://stackoverflow.com/questions/40705237/django-db-migrations-exceptions-circulardependencyerror) that makes me have to comment out a field. Here is how my models are set up: ``` class Intake(models.Model): # This field causes a bug on makemigrations. It must be commented when first # running makemigrations and migrate. Then, uncommented and run makemigrations and # migrate again. start_widget = models.ForeignKey( "widgets.Widget", on_delete=models.CASCADE, related_name="start_widget", null=True, blank=True, ) class Widget(PolymorphicModel): intake = models.ForeignKey(Intake, on_delete=models.CASCADE) ``` By the way, Widget's PolymorphicModel superclass is from [here](https://django-polymorphic.readthedocs.io/en/stable/index.html). Why is this happening and how can I solve it **without having to comment out over and over again**? Thanks! EDIT: THE FULL ERR: ``` Traceback (most recent call last): File "/Users/nicksmith/Desktop/proj/backend/manage.py", line 22, in <module> main() File "/Users/nicksmith/Desktop/proj/backend/manage.py", line 18, in main execute_from_command_line(sys.argv) File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line utility.execute() File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 395, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/core/management/base.py", line 330, in run_from_argv self.execute(*args, **cmd_options) File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/core/management/base.py", line 371, in execute output = self.handle(*args, **options) File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/core/management/base.py", line 85, in wrapped res = handle_func(*args, **kwargs) File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 92, in handle executor = MigrationExecutor(connection, self.migration_progress_callback) File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/db/migrations/executor.py", line 18, in __init__ self.loader = MigrationLoader(self.connection) File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/db/migrations/loader.py", line 53, in __init__ self.build_graph() File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/db/migrations/loader.py", line 282, in build_graph self.graph.ensure_not_cyclic() File "/Users/nicksmith/Desktop/proj/backend/venv/lib/python3.9/site-packages/django/db/migrations/graph.py", line 274, in ensure_not_cyclic raise CircularDependencyError(", ".join("%s.%s" % n for n in cycle)) django.db.migrations.exceptions.CircularDependencyError: intakes.0001_initial, widgets.0001_initial ```
2020/12/16
[ "https://Stackoverflow.com/questions/65332466", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
OK OP, let me do this again: you want to find all the rows which are "A" (the base condition) and all the rows which follow an "A" row at some point, right? Then
```
is_A = df["ID"] == "A"
not_A_follows_from_A = (df["ID"] != "A") & (df["ID"].shift() == "A")
candidates = df["ID"].loc[is_A | not_A_follows_from_A].unique()
df.loc[df["ID"].isin(candidates)]
```
should work as intended. Edit: example
```
df = pd.DataFrame({
    'Time': [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1],
    'ID': ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'A', 'E', 'E', 'E', 'A', 'F'],
    'Val': [7, 2, 7, 5, 1, 6, 7, 3, 2, 4, 7, 8, 2]})

is_A = df["ID"] == "A"
not_A_follows_from_A = (df["ID"] != "A") & (df["ID"].shift() == "A")
candidates = df["ID"].loc[is_A | not_A_follows_from_A].unique()
df.loc[df["ID"].isin(candidates)]
```
outputs this:
```
    Time ID  Val
0      1  A    7
1      1  A    2
2      1  B    7
3      0  B    5
7      1  A    3
8      0  E    2
9      0  E    4
10     1  E    7
11     1  A    8
12     1  F    2
```
Let us try `drop_duplicates`, then `groupby` with `head` to select the number of unique IDs we would like to keep, and finally `merge`:
```
out = df.merge(df[['Time','ID']].drop_duplicates().groupby('Time').head(2))
   Time ID  Val
0     1  A  2.0
1     1  A  5.0
2     1  B  2.5
3     1  B  2.0
4     2  A  1.0
5     2  A  6.0
6     2  B  4.0
7     2  B  2.0
```
11,823
18,451,694
E.g. I defined a function which needs several input arguments; if some keyword arguments are not assigned, there is typically a TypeError message, but I want to change that so it outputs NaN as the result instead. Can it be done?
```
def myfunc( S0, K ,r....):
    if S0 = NaN or .....:
```
How do I do it? Much appreciated. Edited:
```
def myfunc(a):
    return a / 2.5 + 5

print myfunc('whatever')

>python -u "bisectnewton.py"
Traceback (most recent call last):
  File "bisectnewton.py", line 6, in <module>
    print myfunc('whatever')
  File "bisectnewton.py", line 4, in myfunc
    return a / 2.5 + 5
TypeError: unsupported operand type(s) for /: 'str' and 'float'
>Exit code: 1
```
What I want is for myfunc(a) to only accept a number as the input; if some other data type, like the string 'whatever', is passed in, I don't want to just output the default error message, I want it to output something like 'NaN' to tell others that the input should be a number. Now I changed it to this, but it is still not working. Btw, is None the same as NaN? I think they're different.
```
def myfunc(S0):
    if math.isnan(S0):
        return 'NaN'
    return a / 2.5 + 5

print myfunc('whatever')

>python -u "bisectnewton.py"
Traceback (most recent call last):
  File "bisectnewton.py", line 8, in <module>
    print myfunc('whatever')
  File "bisectnewton.py", line 4, in myfunc
    if math.isnan(S0):
TypeError: a float is required
>Exit code: 1
```
Thanks!
2013/08/26
[ "https://Stackoverflow.com/questions/18451694", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2525479/" ]
You can capture the TypeError and do whatever you want with it: ``` def myfunc(a): try: return a / 2.5 + 5 except TypeError: return float('nan') print myfunc('whatever') ``` [The Python Tutorial](http://docs.python.org/2/tutorial/errors.html) has an excellent chapter on this subject.
```
def myfunc(S0=None, K=None, r=None):  # add the remaining keyword arguments as needed
    if S0 is None or K is None or r is None:
        return float('nan')
```
(A bare `NaN` name is not defined in Python, so `float('nan')` is returned instead.)
11,824
68,030,577
I have obtained 3 lists/arrays in my python script and I am wondering why I am not able to multiply them with these values successfully.
```
Array 1: [-0.01896408 -0.0191784 -0.01939271 ... 0.97766441 0.97745009 0.97723578]
Array 2: [ 1.21527999 1.21302709 1.21077419 ... -0.69821075 -0.70046365 -0.70271655]
Array 3: [-0.19631591 -0.1938487 -0.19138148 ... 0.72054634 0.72301355 0.72548077]
```
These are the points I am multiplying them with: `pointsArray = np.array([[103.890991,1.369125], [103.8892,1.368017], [103.8903,1.367166],[103.890221,1.367944] ])` What I want to do is this:
```
transformPoint = array1[i]*pointsArray[0] + array2[i]*pointsArray[1] + array3[i]*pointsArray[2]
df["new_X"] = transformPoint[0]
df["new Y"] = transformPoint[1]
```
where the pointsArray value is constant for every iteration of the loop and the arrays should iterate through all the values in them. This is the error I am getting with the transform point calculation line: `transformPoint = array1[i]*pointsArray[0] + array2[i]*pointsArray[1] + array3[i]*pointsArray[2]`
```
ValueError: operands could not be broadcast together with shapes (28686,) (2,)
```
How do I fix this, or how should I go about doing this calculation? Thank you!
2021/06/18
[ "https://Stackoverflow.com/questions/68030577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15233108/" ]
I can't comment so I have to post my comment like this: As I understand it, you are trying to multiply elements of the "pointsArray" with a scalar. Have you tried printing ``` array1[i] ``` to see what it looks like? From the error I would guess that this is somehow not just a number, but the entire array. You could, for example, try to print ``` array1[0] ``` to see if the output is the first number in array1.
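For reference, a minimal vectorized sketch of the calculation described in the question, assuming `array1`, `array2` and `array3` are equal-length 1-D NumPy arrays (the names mirror the question; the sample values here are made up):
```python
import numpy as np

# hypothetical stand-ins for the three coefficient arrays from the question
array1 = np.array([-0.0190, 0.9777])
array2 = np.array([1.2153, -0.7027])
array3 = np.array([-0.1963, 0.7255])

pointsArray = np.array([[103.890991, 1.369125],
                        [103.889200, 1.368017],
                        [103.890300, 1.367166]])

# a trailing axis makes each (N,) array broadcast against a (2,) point,
# giving an (N, 2) result: one transformed (x, y) pair per coefficient row
transformPoint = (array1[:, None] * pointsArray[0]
                  + array2[:, None] * pointsArray[1]
                  + array3[:, None] * pointsArray[2])

new_x, new_y = transformPoint[:, 0], transformPoint[:, 1]
print(new_x, new_y)
```
This removes the loop entirely; the `(28686,) (2,)` error in the question comes from multiplying a whole array by a 2-vector instead of one scalar at a time.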
`dot` is used for matrix multiplication. Try this:
```
matrix1.dot(matrix2)
```
For a bit more clarity: if you want to use \*, convert both to numpy.matrix and then try.
11,827
57,487,274
I have the below API response. This is a very small subset which I am pasting here for reference; there can be 80+ columns in this.
```
[["name","age","children","city", "info"],
 ["Richard Walter", "35", ["Simon", "Grace"], {"mobile":"yes","house_owner":"no"}],
 ["Mary", "43", ["Phil", "Marshall", "Emily"], {"mobile":"yes","house_owner":"yes", "own_stocks": "yes"}],
 ["Drew", "21", [], {"mobile":"yes","house_owner":"no", "investor":"yes"}]]
```
Initially I thought pandas could help here and searched accordingly, but as a newbie to python/coding I was not able to get much out of it. Any help or guidance is appreciated. I am expecting output in a JSON key-value pair format such as below.
```
{"name":"Mary", "age":"43", "children":["Phil", "Marshall", "Emily"], "info_mobile":"yes","info_house_owner":"yes", "info_own_stocks": "yes"},
{"name":"Drew", "age":"21", "children":[], "info_mobile":"yes","info_house_owner":"no", "info_investor":"yes"}]
```
2019/08/14
[ "https://Stackoverflow.com/questions/57487274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11924956/" ]
There are times when using plain Python seems to be very effective; this might be one of those times.
```
df['scores'].apply(lambda x: sum(float(i) if len(x) > 0 else np.nan for i in x.split(',')))

0         NaN
1         NaN
2    0.359982
3    0.359982
4   -0.000646
5    0.003793
6    0.856057
```
You can just do `str.split` ``` df.scores.str.split(',',expand=True).astype(float).sum(1).mask(df.scores.isnull()) 0 NaN 1 NaN 2 0.359982 3 0.359982 4 -0.000646 5 0.003793 6 0.856057 dtype: float64 ```
11,830
48,176,802
I'm very new to working with `xlsxwriter` in python. I have created a scraper in python and it is working flawlessly. However, when I try to write this data to an excel file using `xlsxwriter` I get stuck. What I have written so far can create an excel file but only writes the last set of values populated by the for loop. How can I rectify my script to write all the data rather than just the last one? It would be better if I knew how to append the newly populated values on the fly. The bottom line is, I'm getting two issues: 1. My script writes only the last populated values 2. The two fields are being written in one column, as in `row("A1"), row("A2")`, but I wish to have them like `row("A1"), row("B1")` and so on. The script I've tried:
```
import requests
from bs4 import BeautifulSoup
import xlsxwriter

row = 0
col = 0

with xlsxwriter.Workbook('torrent.xlsx') as workbook:
    worksheet = workbook.add_worksheet()

    with requests.Session() as s:
        s.headers = {"User-Agent":"Mozilla/5.0"}
        res = s.get("https://www.yify-torrent.org/search/1080p/")
        soup = BeautifulSoup(res.text, 'lxml')
        for item in soup.select(".mv"):
            name = item.select("a")[0].text
            link = item.select("a")[0]['href']
            data = name , link

    for elem in data:
        worksheet.write(row, col, elem)
        row += 1
```
The result I'm getting looks like this (each value on its own row in one column):
```
title
link
```
Whereas I wish to have them like this (title and link side by side on each row):
```
title    link
title1   link1
title2   link2
```
and so on.
2018/01/09
[ "https://Stackoverflow.com/questions/48176802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9189799/" ]
1. Each time through the first `for` loop, you overwrite `data`, so only the last thing assigned survives. This could be addressed by moving your second `for` loop to be inside the first, so it gets called for each value of `data`. 2. If you want things to be in different columns, you need to use different values for `col` when you call `worksheet.write`. You use `row += 1` to advance to subsequent rows; `col += 1` would do the same for columns.
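A minimal sketch of both fixes applied to the question's loop (same variable names as the question; the scraping setup is unchanged and elided here):
```python
row = 0
for item in soup.select(".mv"):
    name = item.select("a")[0].text
    link = item.select("a")[0]['href']
    # write each field of this result into its own column of the same row
    for col, elem in enumerate((name, link)):
        worksheet.write(row, col, elem)
    row += 1  # advance to the next row once per scraped item
```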
As Scott Hunter noted, you're overwriting your data, which is stored just fine as a tuple in your data variable. However, it seems like your issue is in your for loop, where you're only adding to row within each block, which explains why your code is traveling only vertically in your spreadsheet. Perhaps rearranging things and adding an iterator could help?
```
for idx, elem in enumerate(data):
    worksheet.write(row, idx, elem)
row += 1
```
The enumerate function will consistently add 1 to the idx variable for every iteration, therefore this single block of code can extend as far out as your tuple of data is long. Hope this helps!
11,833
61,715,377
So I have a docker project, which is some kind of Python pytest that runs subprocess on an executable file as a blackbox test. I would like to build the container, and then run it each time by copying the executable file to a dedicated folder inside the container WORKDIR (e.g. exec/). But I am not sure how to do it. Currently, I have to first include the executable in the folder and then build the container. The structure is currently like this:
```
my_test_repo
|
|-- exec
|   |
|   |-- my_executable
|
|-- tests
|   |
|   |-- test_the_executable.py
|
|-- Dockerfile
```
I skipped over some other files such as setup. In the Dockerfile, I do the following:
```
FROM python:3.7.7-buster
WORKDIR /app
COPY . /app
RUN pip install --trusted-host pypi.python.org .
RUN pip install pytest
ENV NAME "Docker"
RUN pytest ./tests/ --executable=./exec/my_executable
```
For the last line, I set up a pytest fixture to accept the path of the executable. I can run the test by building it:
```
docker build --tag testproject:1.0 .
```
How can I edit it so that the container only contains the test files, and interacts with the user so that I can `cp` my executable from my local dir into the container and then run the tests? Many thanks.
2020/05/10
[ "https://Stackoverflow.com/questions/61715377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4930109/" ]
`WEEK` in MySQL takes 2 arguments: a date and a week mode. By default the mode is 0, which means weeks start on Sunday. Try this code:
```
SELECT IFNULL(SUM(rendeles_dbszam),0) as eladott_pizzak_szama
FROM rendeles
WHERE WEEK(rendeles_idopont) = WEEK(CURRENT_DATE(),1)
```
You can use this little formula to get the Monday starting the week of any given DATE, DATETIME, or TIMESTAMP object.
```
FROM_DAYS(TO_DAYS(datestamp) -MOD(TO_DAYS(datestamp) -2, 7))
```
I like to use it in a stored function named `TRUNC_MONDAY(datestamp)` defined like this.
```
DELIMITER $$
DROP FUNCTION IF EXISTS TRUNC_MONDAY$$
CREATE
  FUNCTION TRUNC_MONDAY(datestamp DATETIME)
  RETURNS DATE DETERMINISTIC NO SQL
  COMMENT 'preceding Monday'
  RETURN FROM_DAYS(TO_DAYS(datestamp) -MOD(TO_DAYS(datestamp) -2, 7))$$
DELIMITER ;
```
Then you can do stuff like this
```
SELECT IFNULL(SUM(rendeles_dbszam),0) as eladott_pizzak_szama
FROM rendeles
WHERE TRUNC_MONDAY(rendeles_idopont) = TRUNC_MONDAY(CURRENT_DATE())
```
or even this to get a report covering eight previous weeks and the current week.
```
SELECT SUM(rendeles_dbszam) as eladott_pizzak_szama,
       TRUNC_MONDAY(rendeles_idopont) as week_beginning
FROM rendeles
WHERE rendeles_idopont >= TRUNC_MONDAY(CURDATE()) - INTERVAL 8 WEEK
  AND rendeles_idopont < TRUNC_MONDAY(CURDATE()) + INTERVAL 1 WEEK
GROUP BY TRUNC_MONDAY(rendeles_idopont)
```
I particularly like this `TRUNC_MONDAY()` approach because it works unambiguously even for calendar weeks that contain New Years' Days. (If you want `TRUNC_SUNDAY()` change the `-2` in the formula to `-1`.)
11,834
40,920,564
I am using two different `HTTP POST` utilities ([poster](https://addons.mozilla.org/en-US/firefox/addon/poster/) out of Firefox as well as `Python` [requests](http://docs.python-requests.org/en/master/) API) to post a simple `SPARQL` insert to `Virtuoso`. My URL is: `http://localhost:8890/sparql` My request parameters are: ``` default-graph-uri: <MY_GRAPH> should-sponge: soft debug: on timeout: format: application/xml save: display fname: ``` I put the actual SPARQL (`INSERT DATA { GRAPH...`) in the content of the message. I tried different content types, none of which worked. I do get 200 but the response is in HTML even though the above parameter set specifies `application/xml`, however, no data is inserted. When I try content type of `text/turtle`, I get 409 Invalid Path, which is also referenced in [this post](https://www.mail-archive.com/virtuoso-users@lists.sourceforge.net/msg07015.html). I can successfully do `HTTP GET`, however, that has a payload length limitation which I would like to exceed for performance reasons. The only difference with the GET is that the SPARQL goes in the URL under `query` parameter and the POST **should** enable a much larger payload in the message content, by including multiple triples in the same request, not just one (I have 100s of 1000s of inserts). I was trying to follow [this documentation page](http://docs.openlinksw.com/virtuoso/rdfsparqlprotocolendpoint/#rdfsupportedprotocolendpointurisparqlauthex).
2016/12/01
[ "https://Stackoverflow.com/questions/40920564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1312080/" ]
I stopped by this question days ago trying to achieve the same with `curl`. Since it is a powerful (and far more convenient) alternative to browser extensions, here is the formulation that eventually proved successful: ``` curl -X POST \ -H "Content-Type:application/sparql-update" \ -H "Accept:text/html" \ --data "select distinct ?Concept where {[] a ?Concept} LIMIT 100" http://localhost:8890/sparql ``` More details on the headers in [this thread](https://sourceforge.net/p/virtuoso/mailman/virtuoso-users/thread/KfF_dXPTTyjakF6oWNlzWHOAXbCxkZ4h1erB4PSxdbCe8seqxaDqGq0vyg0VyIg3j-QoooTagDhE0R660W_udXI9wEsYVGsRE463O8k3jhk%3D%40protonmail.ch/#msg37272403).
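For reference, a rough Python equivalent using `requests` (a sketch following the SPARQL 1.1 protocol, where the query is sent as a form-encoded `query` field; the endpoint URL is the one from the thread):
```python
import requests

resp = requests.post(
    "http://localhost:8890/sparql",
    data={"query": "SELECT DISTINCT ?Concept WHERE { [] a ?Concept } LIMIT 100"},
    headers={"Accept": "application/sparql-results+json"},
)
print(resp.status_code)
print(resp.json())
```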
There are many aspects to this "question" making it difficult to provide a simple answer suitable to this site. This is one of the reasons I suggested the mailing list, which is better suited to conversational and/or multi-faceted assistance.
* Have you tried using `curl` as most of our examples do?
* Looking at the [Poster](https://addons.mozilla.org/en-US/firefox/addon/poster/) page on Mozilla Add-Ons, I see that you may need to manually add a `?` to the end of your target URI -- so `http://localhost:8890/sparql?` rather than `http://localhost:8890/sparql` -- and it's not clear whether you've done that in your testing. On the [project page](https://code.google.com/archive/p/poster-extension/), I also note its [last commit](https://code.google.com/archive/p/poster-extension/source/default/commits) was in 2012, and there are [a great many open issues](https://code.google.com/archive/p/poster-extension/issues).
* I'm not at all familiar with Python, so I've not dug in there.
* Have you tried setting an `Accept:` header? This can have a significant impact on the content returned by the server.
* If I understand your described efforts correctly, your `format:` query parameter should be `output-format:`, and its value should not be `application/xml` but one of the supported formats [listed in the documentation](http://docs.openlinksw.com/virtuoso/rdfsparqlprotocolendpoint/#rdfsupportedmimesofprotocolserver).
* Neither the [`virtuoso-users` post you referenced](https://www.mail-archive.com/virtuoso-users@lists.sourceforge.net/msg07015.html) nor this question has enough detail to analyze the cause of the `409 Invalid Path` error. Explicit details that allow us to reproduce this result would be helpful, optimally in a distinct thread.
11,835
20,688,034
Facing an HTTPSHandler error while installing python packages using pip; the following is the stack trace:
```
--------desktop:~$ pip install Django==1.3
Traceback (most recent call last):
 File "/home/env/.genv/bin/pip", line 9, in <module>
 load_entry_point('pip==1.4.1', 'console_scripts', 'pip')()
 File "/home/env/.genv/lib/python2.7/site-packages/pkg_resources.py", line 378, in load_entry_point
 return get_distribution(dist).load_entry_point(group, name)
 File "/home/env/.genv/lib/python2.7/site-packages/pkg_resources.py", line 2566, in load_entry_point
 return ep.load()
 File "/home/env/.genv/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
 entry = __import__(self.module_name, globals(),globals(), ['__name__'])
 File "/home/env/.genv/lib/python2.7/site-packages/pip/__init__.py", line 10, in <module>
 from pip.util import get_installed_distributions, get_prog
 File "/home/env/.genv/lib/python2.7/site-packages/pip/util.py", line 17, in <module>
 from pip.vendor.distlib import version
 File "/home/env/.genv/lib/python2.7/site-packages/pip/vendor/distlib/version.py", line 13, in <module>
 from .compat import string_types
 File "/home/env/.genv/lib/python2.7/site-packages/pip/vendor/distlib/compat.py", line 31, in <module>
 from urllib2 import (Request, urlopen, URLError, HTTPError,
ImportError: cannot import name HTTPSHandler
```
In the past I have edited the Modules/setup.dist file, uncommented the SSL code lines, and rebuilt Python, with reference to the following thread: <http://forums.opensuse.org/english/get-technical-help-here/applications/488962-opensuse-python-openssl-2.html>
2013/12/19
[ "https://Stackoverflow.com/questions/20688034", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3016020/" ]
Another symptom of this problem for me was that if I went into the python console of my virtualenv and did `import ssl`, it would error out. It turns out my virtualenv wasn't using the `brew` version of python, just the default install on my machine. No clue why the default install suddenly stopped working, but here's how I fixed the problem:
* `rmvirtualenv myvirtualenv`
* `brew update`
* `brew reinstall python`
* `mkvirtualenv -p /usr/local/Cellar/python/whatever_version_number/bin/python myvirtualenv`
In many cases this is caused by an out of date virtualenv, here's a script to regenerate your virtualenv(s): <https://gist.github.com/WoLpH/fb98f7dc6ba6f05da2b8> Simply copy it to a file (downloadable link above) and execute it like this: `zsh -e recreate_virtualenvs.sh <project_name>` ``` #!/bin/zsh -e if [ ! -d "$PROJECT_HOME" ]; then echo 'Your $PROJECT_HOME needs to be defined' echo 'http://virtualenvwrapper.readthedocs.org/en/latest/install.html#location-of-project-directories' exit 1 fi if [ "" = "$1" ]; then echo "Usage: $0 <project_name>" exit 1 fi env="$1" project_dir="$PROJECT_HOME/$1" env_dir="$HOME/envs/$1" function command_exists(){ type $1 2>/dev/null | grep -vq ' not found' } if command_exists workon; then echo 'Getting virtualenvwrapper from environment' # Workon exists, nothing to do :) elif [ -x ~/bin/mount_workon ]; then echo 'Using mount workon' # Optional support for packaged project directories and virtualenvs using # https://github.com/WoLpH/dotfiles/blob/master/bin/mount_workon . ~/bin/mount_workon mount_file "$project_dir" mount_file "$env_dir" elif command_exists virtualenvwrapper.sh; then echo 'Using virtualenvwrapper' . $(which virtualenvwrapper.sh) fi if ! command_exists workon; then echo 'Virtualenvwrapper not found, please install it' exit 1 fi rmvirtualenv $env || true echo "Recreating $env" mkvirtualenv $env || true workon "$env" || true pip install virtualenv{,wrapper} cd $project_dir setvirtualenvproject if [ -f setup.py ]; then echo "Installing local package" pip install -e . fi function install_requirements(){ # Installing requirements from given file, if it exists if [ -f "$1" ]; then echo "Installing requirements from $1" pip install -r "$1" fi } install_requirements requirements_test.txt install_requirements requirements-test.txt install_requirements requirements.txt install_requirements test_requirements.txt install_requirements test-requirements.txt if [ -d docs ]; then echo "Found docs, installing sphinx" pip install sphinx{,-pypi-upload} py fi echo "Installing ipython" pip install ipython if [ -f tox.ini ]; then deps=$(python -c " parser=__import__('ConfigParser').ConfigParser(); parser.read('tox.ini'); print parser.get('testenv', 'deps').strip().replace('{toxinidir}/', '')") echo "Found deps from tox.ini: $deps" echo $deps | parallel -v --no-notice pip install {} fi if [ -f .travis.yml ]; then echo "Found deps from travis:" installs=$(grep 'pip install' .travis.yml | grep -v '\$' | sed -e 's/.*pip install/pip install/' | grep -v 'pip install . --use-mirrors' | sed -e 's/$/;/') echo $installs eval $installs fi deactivate ```
11,840
1,045,906
I have worked with pyCurl in the past and have it working with my system default python install. However, I have a project that requires python to be more portable and I am using ActivePython-2.6. I have had no problems installing any other modules so far, but am getting errors installing pyCurl. The error: ``` Searching for pycurl Reading http://pypi.python.org/simple/pycurl/ Reading http://pycurl.sourceforge.net/ Reading http://pycurl.sourceforge.net/download/ Best match: pycurl 7.19.0 Downloading http://pycurl.sourceforge.net/download/pycurl-7.19.0.tar.gz Processing pycurl-7.19.0.tar.gz Running pycurl-7.19.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-tfVLW6/pycurl-7.19.0/egg-dist-tmp-p1WjAy sh: curl-config: not found Traceback (most recent call last): File "/opt/ActivePython-2.6/bin/easy_install", line 8, in <module> load_entry_point('setuptools==0.6c9', 'console_scripts', 'easy_install')() File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 1671, in main File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 1659, in with_ei_usage File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 1675, in <lambda> File "/opt/ActivePython-2.6/lib/python2.6/distutils/core.py", line 152, in setup dist.run_commands() File "/opt/ActivePython-2.6/lib/python2.6/distutils/dist.py", line 975, in run_commands self.run_command(cmd) File "/opt/ActivePython-2.6/lib/python2.6/distutils/dist.py", line 995, in run_command cmd_obj.run() File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 211, in run File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 446, in easy_install File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 476, in install_item File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 655, in install_eggs File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 930, in build_and_install File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/command/easy_install.py", line 919, in run_setup File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/sandbox.py", line 27, in run_setup File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/sandbox.py", line 63, in run File "/opt/ActivePython-2.6/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/setuptools/sandbox.py", line 29, in <lambda> File "setup.py", line 90, in <module> Exception: `curl-config' not found -- please install the libcurl development files ``` My system does have libcurl installed, but ActivePython doesn't seem to find it. Any ideas will help!
2009/06/25
[ "https://Stackoverflow.com/questions/1045906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/124485/" ]
I couldn't find a curl-config to add to the path, which makes sense as it's not a module that can be called (as far as I can tell). The answer ended up being a bit of a hack, but it works. As I had pyCurl working in my native python2.6 install, I simply copied the curl and pycurl items from the native install into the ActivePython install.
It looks like `curl-config` isn't in your path. Try running it from the command line and adjust the `PATH` environment variable as needed so that Python can find it.
11,850
11,696,691
I have a Bash script (Bash 3.2, Mac OS X 10.8) that invokes multiple Python scripts in parallel in order to better utilize multiple cores. Each Python script takes a really long time to complete. The problem is that if I hit Ctrl+C in the middle of the Bash script, the Python scripts do not actually get killed. How can I write the Bash script so that killing it will also kill all its background children? Here's my original "reduced test case". Unfortunately I seem to have reduced it so much that it no longer demonstrates the problem; my mistake. ``` set -e cat >work.py <<EOF import sys, time for i in range(10): time.sleep(1) print "Tick from", sys.argv[1] EOF function process { python ./work.py $1 & } process one process two wait ``` Here's a complete test case, still highly reduced, but hopefully this one will demonstrate the problem. It reproduces on my machine... but then, two days ago I thought the *old* test case reproduced on my machine, and today it definitely doesn't. ``` #!/bin/bash -e set -x cat >work.sh <<EOF for i in 0 1 2 3 4 5 6 7 8 9; do sleep 1; echo "still going" done EOF chmod +x work.sh function kill_all_jobs { jobs -p | xargs kill; } trap kill_all_jobs SIGINT function process { ./work.sh $1 } process one & wait $! echo "All done!" ``` This code continues to print `still going` even after Ctrl+C. But if I move the `&` from outside `process` to inside (i.e.: `./work.sh $1 &`), then Ctrl+C works as expected. I don't understand this at all! In my real script, `process` contains more than one command, and the commands are long-running and must run in sequence; so I don't know how to "move the `&` inside `process`" in that case. I'm sure it's possible, but it must be non-trivial. ``` $ bash --version GNU bash, version 3.2.48(1)-release (x86_64-apple-darwin12) Copyright (C) 2007 Free Software Foundation, Inc. ``` EDIT: Many thanks to @AlanCurry for teaching me some Bash stuff. Unfortunately I still don't understand exactly what's going on in my examples, but it's practically a moot point, as Alan *also* helpfully pointed out that for my real-world parallelization problem, Bash is the wrong tool and I ought to be using a simple makefile with `make -j3`! `make` runs things in parallel where possible, and also understands Ctrl+C perfectly; problem solved (even though question unanswered).
2012/07/27
[ "https://Stackoverflow.com/questions/11696691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1424877/" ]
You can't use this structure: ``` do { window.setTimeout(fadeIn,1000); } while(done == false); ``` Because the code in the `setTimeout()` runs sometime LATER, your value of done will NEVER be changed and this loop will run forever. And, as long as it runs, the `setTimeout()` never gets to fire either (because javascript is single threaded). Instead, what you should do is launch the next `setTimeout(fadeIn, 1000)` from the `fadeIn()` function if you aren't done. ``` function fadeOut() { if(parseFloat(current.style.opacity)-0.1>.0000001) { current.style.opacity = parseFloat(current.style.opacity) -0.1; setTimeout(fadeOut, 1000); } else { current.style.opacity = 0.0; } } ```
Remember that javascript is single-threaded, so your setTimeout'ed functions will not be called until it has finished running the current script. Which will never happen, since you're in a loop that's never going to end (until you're out of memory from all those setTimeout's). Just call setTimeout once and let the function return. And forget the idea of waiting for it to have happened.
11,855
19,489,132
What is the proper way to pass the values of string variables to the Popen function in Python? I tried the below piece of code
```
var1 = 'hello'
var2 = 'universe'
p=Popen('/usr/bin/python /apps/sample.py ' + '"' + str(eval('var1')) + ' ' + '"' + str(eval('var2')), shell=True)
```
and in sample.py, below is the code
```
print sys.argv[1]
print '\n'
print sys.argv[2]
```
but it prints the below output
```
hello universe


none
```
so it's treating both var1 and var2 as one argument. Is this the right approach for passing values of string variables as arguments?
2013/10/21
[ "https://Stackoverflow.com/questions/19489132", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2663585/" ]
This should work: ``` p = Popen('/usr/bin/python /apps/sample.py {} {}'.format(var1, var2), shell=True) ``` Learn about [string formatting](http://docs.python.org/2/library/string.html#format-examples) . Second, passing arguments to scripts has its quirks: <http://docs.python.org/2/library/subprocess.html#subprocess.Popen> This works for me: test.py: ``` #!/usr/bin/env python import sys print sys.argv[1] print '\n' print sys.argv[2] ``` Then: ``` chmod +x test.py ``` Then: ``` Python 2.7.4 (default, Jul 5 2013, 08:21:57) >>> from subprocess import Popen >>> p = Popen('./test.py {} {}'.format('hello', 'universe'), shell=True) >>> hello universe ```
```
var1 = 'hello'
var2 = 'universe'
p=Popen("/usr/bin/python /apps/sample.py %s %s"%(str(eval('var1')), str(eval('var2'))), shell=True)
```
Passing a format specifier is a nice way to do it.
11,856
48,234,112
I want to print all elements in this list in reversed order and every element in this list must be on a new line. For example if the list is ['i', 'am', 'programming', 'with', 'python'] it should print out: python with programming am i What is the best way to do this? ``` def list(): words = [] while True: output = input("Type a word: ") if output == "stop": break else: words.append(output) for elements in words: print(elements) list() ```
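For reference, a minimal sketch of the reversed, one-per-line printing (using a `words` list like the one collected above):
```python
words = ['i', 'am', 'programming', 'with', 'python']

for word in reversed(words):
    print(word)
```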
2018/01/12
[ "https://Stackoverflow.com/questions/48234112", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9160869/" ]
You don't need to convert to a data table for every step; it would be easier to query if you moved away from that.
```
var query = from row in dt.AsEnumerable()
            group new
            {
                premA = row.Field<decimal>("PREM_A"),
                premB = row.Field<decimal>("PREM_B"),
            } by row.Field<string>("ID").Trim() into g
            let premA = g.Sum(x => x.premA)
            let premB = g.Sum(x => x.premB)
            where premA != 0M || premB != 0M
            select new
            {
                Id = g.Key,
                PremA = premA,
                PremB = premB,
            };
```
Also: ``` var resultsDt = dt.AsEnumerable() .GroupBy(row => row.Field<string>("ID")) .Select(grp =>new {Id= grp.Key, PREM_A= grp.Sum(r => r.Field<decimal>("PREM_A")), PREM_B=grp.Sum(r => r.Field<decimal>("PREM_B")) }) .Where(e=>e.PREM_A!=0 || e.PREM_B!=0); ```
11,857
64,802,573
I am trying to get the most basic OpenAPI server to work as expected. It works as expected with the auto-generated python-flask server but not with aspnet, where exceptions are raised on queries. What extra steps are required to get the aspnet server to respond correctly to queries? The YAML is as below:
```
openapi: 3.0.0
info:
  title: Test API
  version: 0.0.0
servers:
  - url: http://localhost:{port}
    description: Local server
    variables:
      port:
        default: "8092"
paths:
  /things:
    get:
      summary: Return a list of Things
      responses:
        '200':
          description: A JSON array of Things
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Thing"
components:
  schemas:
    Thing:
      properties:
        id:
          type: integer
        name:
          type: string
```
In order to get the server to run, the auto-generated launchSettings.json has to be modified from:
```
...
    "web": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
...
```
to:
```
...
    "web": {
      "commandName": "Project",
      "launchBrowser": false,
      "launchUrl": "swagger",
      "applicationUrl": "http://localhost:8092",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
...
```
The console suggests that the server is running ok
```
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
      User profile is available. Using 'C:\Users\jonathan.noble\AppData\Local\ASP.NET\DataProtection-Keys' as key repository and Windows DPAPI to encrypt keys at rest.
Hosting environment: Development
Content root path: D:\aspnetcore-server-generated\src\IO.Swagger
Now listening on: http://localhost:8092
Application started. Press Ctrl+C to shut down.
```
However, when a query is made via `http://localhost:8092/things` (this is what the swagger editor suggests to use) the server throws an exception with
```
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      Request starting HTTP/1.1 GET http://localhost:8092/things
fail: Microsoft.AspNetCore.Server.Kestrel[13]
      Connection id "0HM46UOD2UM1E", Request id "0HM46UOD2UM1E:00000001": An unhandled exception was thrown by the application.
System.UriFormatException: Invalid URI: The URI is empty.
   at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
   at System.Uri..ctor(String uriString)
   at IO.Swagger.Startup.<ConfigureServices>b__5_2(SwaggerGenOptions c) in D:\aspnetcore-server-generated\src\IO.Swagger\Startup.cs:line 73
   at Microsoft.Extensions.Options.ConfigureNamedOptions`1.Configure(String name, TOptions options)
...
```
2020/11/12
[ "https://Stackoverflow.com/questions/64802573", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133232/" ]
You can set a **cookie** for detecting users; for example, the first time I visit your store, you set a cookie with a **GUID** value for me; now when I order something, you use my GUID as an identifier to track my order.
Add the AllowAnonymous attribute to actions or controllers. Then the user doesn't need to log in.
```
[AllowAnonymous]
public class Orders: Controller
{
    [AllowAnonymous]
    public ActionResult PlaceOrder()
    {
    }

    public ActionResult Get()
    {
    }
}
```
11,858
325,463
I have a python script that has to launch a shell command for every file in a dir:
```
import os

files = os.listdir(".")
for f in files:
    os.execlp("myscript", "myscript", f)
```
This works fine for the first file, but after the "myscript" command has ended, the execution stops and does not come back to the python script. What can I do? Do I have to `fork()` before calling `os.execlp()`?
2008/11/28
[ "https://Stackoverflow.com/questions/325463", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28582/" ]
> > subprocess: The `subprocess` module > allows you to spawn new processes, > connect to their input/output/error > pipes, and obtain their return codes. > > > <http://docs.python.org/library/subprocess.html> Usage: ``` import subprocess process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE) process.wait() print process.returncode ```
use spawn ``` import os os.spawnlp(os.P_WAIT, 'cp', 'cp', 'index.html', '/dev/null') ```
11,859
46,138,803
I'm trying the tutorial [Using Cloud Datastore with Python](https://cloud.google.com/python/getting-started/using-cloud-datastore), but when I run:
```
virtualenv -p python3 env
```
I get an error:
```
The path python3 (from --python=python3) does not exist
```
I checked the python version by running:
```
python -V
```
It gives me:
```
Python 3.5.2 :: Anaconda 4.1.1 (64-bit)
```
I ran `set python` to see the environment variables, which gives me:
```
Environment variable python not defined
```
An annoying thing is that this is a lab machine on which I don't have admin rights; I need to email the IT admin to change the environment variables. I also tried: `virtualenv -p python env` It gives me:
```
The path python (from --python=python) does not exist
```
It seems there is no way around this until the environment variable is fixed.
2017/09/10
[ "https://Stackoverflow.com/questions/46138803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/646732/" ]
After reading this [tutorial](https://cloud.google.com/python/setup), I found the workaround for my case: ``` virtualenv --python "C:\\Anaconda3\\python.exe" env ```
If `python -V` is showing a version greater than 3, then why not try:
```
virtualenv -p python env
```
instead? The value of the `p` flag simply refers to the version of python you want to create the virtual environment with. In this case, `python` is greater than version 3.
11,869
53,495,570
I need to install `graph-tool` from source, so I added this to my Dockerfile:
```
FROM ubuntu:18.04
RUN git clone https://git.skewed.de/count0/graph-tool.git
RUN cd graph-tool && ./configure && make && make install
```
as described [here](https://git.skewed.de/count0/graph-tool/wikis/Installation-instructions#manual-compilation). When I try to build my docker-compose I catch an error:
```
/bin/sh: 1: ./configure: not found
```
What am I doing wrong? Thanks! ADDED Full Dockerfile:
```
FROM ubuntu:16.04

ENV LANG C.UTF-8
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true

# Install dependencies
RUN apt-get update \
    && apt-get install -y git \
    && apt-get install -y python3-pip python3-dev \
    && apt-get install -y binutils libproj-dev gdal-bin \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip

RUN git clone https://git.skewed.de/count0/graph-tool.git

RUN apt-get update && apt-get install -y gcc
RUN apt-get update && apt-get install -y libboost-all-dev
RUN apt update && apt install -y --no-install-recommends \
    make \
    build-essential \
    g++

RUN cd graph-tool && ./configure && make && make install

# Project specific setups
RUN mkdir /code
WORKDIR /code
ADD . /code
RUN pip3 install -r requirements.txt
```
2018/11/27
[ "https://Stackoverflow.com/questions/53495570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9098575/" ]
You need to run **autogen.sh** first; it will generate the **configure** file. P.S. Make sure you install libtool:
```
apt-get install libtool
```
You have to install the prerequisites first. ``` RUN apt update && apt install -y --no-install-recommends \ make \ build-essential \ g++ \ .... ``` Don't forget to clean up and remove temp/unnecessary files!
11,875
19,768,456
I stumbled upon a behaviour in python that I have a hard time understanding. This is the proof-of-concept code:
```
from functools import partial

if __name__ == '__main__':
    sequence = ['foo', 'bar', 'spam']
    loop_one = lambda seq: [lambda: el for el in seq]
    no_op = lambda x: x
    loop_two = lambda seq: [partial(no_op, el) for el in seq]
    for func in (loop_one, loop_two):
        print [f() for f in func(sequence)]
```
The output of the above is:
```
['spam', 'spam', 'spam']
['foo', 'bar', 'spam']
```
The behaviour of `loop_one` is surprising to me as I would expect it to behave like `loop_two`: `el` is an immutable value (a string) that changes at each loop, but **`lambda` seems to store a pointer to the "looping variable"**, as if the loop recycled the same memory address for each element of the sequence. The above behaviour is the same with full-blown functions with a for loop in them (so it is not a list-comprehension syntax issue). **But wait: there is more... and more puzzling!** The following script works like `loop_one`:
```
b = []
for foo in ("foo", "bar"):
    b.append(lambda: foo)
print [a() for a in b]
```
(output: `['bar', 'bar']`) But watch what happens when one substitutes the variable name `foo` with `a`:
```
b = []
for a in ("foo", "bar"):
    b.append(lambda: a)
print [a() for a in b]
```
(output: `[<function <lambda> at 0x25cce60>, <function <lambda> at 0x25cced8>]`) Any idea of what is happening here? I suspect there must be some gotcha related to the underlying C implementation of my interpreter, but I haven't anything else (Jython, PyPy or similar) to test if this behaviour is consistent across different implementations.
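For reference, the usual workaround for this late-binding behaviour is to freeze the loop variable with a default argument (a sketch based on the question's first example):
```python
loop_fixed = lambda seq: [lambda el=el: el for el in seq]

print([f() for f in loop_fixed(['foo', 'bar', 'spam'])])
# ['foo', 'bar', 'spam']
```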
2013/11/04
[ "https://Stackoverflow.com/questions/19768456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/146792/" ]
Why don't you use the JNA API `http://www.java2s.com/Code/Jar/j/Downloadjna351jar.htm` to load the native library? Once you have put it into your project classpath, you add this code `NativeLibrary.addSearchPath("libtesseract302", "your native lib path");` Make sure you have this libtesseract302.dll file; normally it is located in the `windows32` folder. For example, if your libtesseract302.dll file is somewhere like `c:/abcv/aaa/libtesseract302.dll` then you just set the path like this `NativeLibrary.addSearchPath("libtesseract302", "c:/abcv/aaa");` I don't know what a windows path should look like, either `c:/abcv/aaa` or `c:\\abcv\\aaa\\` If you want an easier way, just put all your necessary dll files into your windows32 folder; the JVM will take care of it. Another issue might be that you were not installing the application correctly, or that the application version doesn't match your jar version. Try installing the latest application and downloading the latest jar to try again. Hope it helps :)
A few days ago I ran into the same error message when trying to load a C++ DLL with JNA. It turned out that the cause was a missing DLL that my DLL depended on. In my case it was the MS Visual Studio 2012 redistributable, which I then downloaded and installed on the machine and the problem was gone. Try using **Dependency Walker** to find any missing libraries and install them.
11,876
49,203,174
I am trying to install Home Assistant (<https://home-assistant.io>) on my Synology. I've installed python via the Synology packaging system and done the basic setup (<https://home-assistant.io/docs/installation/synology/>), but when I try to run the daemon I see this in the console: *homeassistant requires Python '>=3.5.3' but the running Python is 3.5.1* Is there any way to update python to the required version on Synology? Can you help me please?
2018/03/09
[ "https://Stackoverflow.com/questions/49203174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9469908/" ]
I faced exactly the same issue where I needed to keep previous user answers for a conversation. Take a look at the [Handler](https://python-telegram-bot.readthedocs.io/en/stable/telegram.ext.handler.html) documentation, which is the base class for all handlers. It has a parameter called pass_user_data. When set to True, it passes a user_data dictionary to your handler, related to the user the update was sent from. You can utilize it to achieve what you are looking for. Let's say I have a conversation with an entry point and two states:
```
def build_conversation_handler():
    conversation_handler = ConversationHandler(
        entry_points=[CommandHandler('command', callback=show_options)],
        states={
            PROCESS_SELECT: [CallbackQueryHandler(process_select, pass_user_data=True)],
            SOME_OTHER: [MessageHandler(filters=Filters.text, callback=some_other, pass_user_data=True)],
        },
    )
```
Here are the handlers for the conversation:
```
def show_options(bot, update):
    button_list = [
        [InlineKeyboardButton("Option 1", callback_data="Option 1"),
         InlineKeyboardButton("Option 2", callback_data="Option 2")]]
    update.message.reply_text("Here are your options:", reply_markup=InlineKeyboardMarkup(button_list))
    return PROCESS_SELECT

def process_select(bot, update, user_data):
    query = update.callback_query
    selection = query.data
    # save selection into user data
    user_data['selection'] = selection
    return SOME_OTHER

def some_other(bot, update, user_data):
    # here I get my old selection
    old_selection = user_data['selection']
```
In the first handler I show the user a keyboard to choose an option; in the next handler I grab the selection from the callback query and store it in user data. The last handler is a message handler so it has no callback data, but since I added user_data to it I can access the dictionary with the data that I added previously. With this approach you can store and access anything between the handlers that is related to a user.
I think the accepted solution is deprecated - <https://python-telegram-bot.readthedocs.io/en/stable/telegram.ext.handler.html>
> pass_user_data and pass_chat_data determine whether a dict you can use to keep any data in will be sent to the callback function. Related to either the user or the chat that the update was sent in. For each update from the same user or in the same chat, it will be the same dict.
>
> Note that this is DEPRECATED, and you should use context based callbacks. See <https://git.io/fxJuV> for more info.
You can store state in `context.user_data['var'] = val`
11,886
48,971,320
This may be a very easy question, but I'm new to python, and although I've searched the net I couldn't solve the problem. I have a csv file in which I need to search for a specific word in the columns of its first row. How can I do that?
2018/02/25
[ "https://Stackoverflow.com/questions/48971320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9201115/" ]
I used the default csv module of python to read a csv file line by line. Since you specified that we have to search only in the first row, I used break to stop execution after searching the first row of the csv. You can remove the break to search throughout the csv. Hope this works.
```
import csv

a='abc' #String that you want to search

with open("testing.csv") as f_obj:
    reader = csv.reader(f_obj, delimiter=',')
    for line in reader:      #Iterates through the rows of your csv
        print(line)          #line here refers to a row in the csv
        if a in line:        #If the string you want to search is in the row
            print("String found in first row of csv")
        break
```
``` import csv a='abc' #String that you want to search with open("testing.csv") as f_obj: reader = csv.reader(f_obj, delimiter=',') for line in reader: #Iterates through the rows of your csv print(line) #line here refers to a row in the csv if a in str(line): #If the string you want to search is in the row print("String found in first row of csv") break ``` You'll have to add "str(line)" to convert the line to string and then compare it.
11,887
819,396
I have a collection of files encoded in ANSI or UTF-16LE. I would like python to open the files using the correct encoding. The problem is that the ANSI files do not raise any sort of exception when decoded as UTF-16LE, and vice versa. Is there a straightforward way to open the files using the correct encoding?
2009/05/04
[ "https://Stackoverflow.com/questions/819396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/100758/" ]
Use the [chardet](http://chardet.feedparser.org/) library to detect the encoding.
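For reference, a minimal sketch of that approach (`chardet.detect` returns a dict with the guessed encoding and a confidence score; the file name is a placeholder):
```python
import chardet

with open('some_file.txt', 'rb') as f:
    raw = f.read()

guess = chardet.detect(raw)           # e.g. {'encoding': 'UTF-16', 'confidence': 1.0}
text = raw.decode(guess['encoding'])
```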
You can check for the [BOM](http://en.wikipedia.org/wiki/Byte-order_mark "BOM") at the beginning of the file to check whether it's UTF-16. Then [unicode.decode](http://docs.python.org/library/stdtypes.html#str.decode) accordingly (using one of the [standard encodings](http://docs.python.org/library/codecs.html#standard-encodings)). **EDIT** Or, maybe, try calling s.decode('ascii') on your string (given s is the variable name). If it throws UnicodeDecodeError, then decode it as 'utf_16_le'.
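A minimal sketch of the BOM check for the ANSI-vs-UTF-16LE case in the question (assuming 'ANSI' here means a single-byte codepage such as cp1252):
```python
import codecs

def read_text(path):
    with open(path, 'rb') as f:
        raw = f.read()
    if raw.startswith(codecs.BOM_UTF16_LE):
        return raw[len(codecs.BOM_UTF16_LE):].decode('utf_16_le')
    return raw.decode('cp1252')
```
Note that a UTF-16LE file written without a BOM would still be misread, which is where a statistical detector like chardet helps.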
11,888
63,141,133
I'm new to python and webscraping. I'm trying to extract the text from a list of elements that start with `"a href"`. The whole list is in a variable named `team`. If I write `team[0].a.text` I get the first text. But when I do `team[0:14].a.text` I get this response:
```
AttributeError: 'list' object has no attribute 'a'
```
I guess that means that the a.text function doesn't work on a list. How can I get a list of text from this? Here is a sample of the code as requested:
```
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

my_url = 'http://ligueelite.hockey-richelieu.qc.ca/fr/stats/classement.html?season=2295&subSeason=2296&category=2134'

#opening connection and grabbing the page
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

# html parser
page_soup = soup(page_html, "html.parser")

# grab each team
team = page_soup.findAll("td",{"class":"team"})
```
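For reference, a minimal sketch of pulling the text out of every matched cell (assuming each matched `td` contains an `<a>` tag, as in the question):
```python
team_names = [td.a.text for td in team if td.a is not None]
print(team_names)
```
Slicing (`team[0:14]`) returns a plain list, which has no `.a` attribute; the comprehension applies `.a.text` to each element instead.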
2020/07/28
[ "https://Stackoverflow.com/questions/63141133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14011583/" ]
I hope this simple example helps you understand your error better. **If I say `qjogos = (int(qtjog_entry.get()))` at the main block, like:**
```
from tkinter import *
root = Tk()

def click():
    pass

qtjog_entry = Entry(root)
qtjog_entry.pack()

qjogos = (int(qtjog_entry.get()))

b = Button(root,text='Click me',command=click)
b.pack()

root.mainloop()
```
I get your same error:
```
Traceback (most recent call last):
  File "c:/PyProjects/Patient Data Entry/test2.py", line 11, in <module>
    qjogos = (int(qtjog_entry.get()))
ValueError: invalid literal for int() with base 10: ''
```
**But now if I say it inside of a function, like:**
```
from tkinter import *
root = Tk()

def click():
    qjogos = (int(qtjog_entry.get()))
    print(qjogos)

qtjog_entry = Entry(root)
qtjog_entry.pack()

b = Button(root,text='Click me',command=click)
b.pack()

root.mainloop()
```
I don't get any error and the number is printed in the terminal. What happens in the first code is that initially, when the program runs, the value of what's inside the entry box is `''` (an empty string), which is not an int and hence cannot be converted to an integer using `int()`. So you have to enter the value, then click on a button which calls the function, which then gets the value that is inside the entry box at the time you clicked the button. I hope this is what you meant in your Q and that it clears your doubt; let me know if you have any more doubts. I explained it more because you mentioned you're new to programming. Cheers :D
It looks like `self.qtjog_entry.get()` returns an empty string; you can't use `int()` on an empty string. Without having more context, it's hard to tell exactly what's wrong here.
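A minimal sketch of guarding that conversion (the names follow the snippet above and are placeholders):
```python
raw = self.qtjog_entry.get()
try:
    qjogos = int(raw)
except ValueError:   # '' or other non-numeric input
    qjogos = None    # or show a validation message instead
```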
11,890
11,254,769
Here is the story. I have a bunch of stored procedures and all have their own argument types. What I am looking to do is to create a bit of a type safety layer in python so I can make sure all values are of the correct type before hitting the database. Of course I don't want to write up the whole schema again in python, so I thought I could auto generate this info on startup by fetching the argument names and types from the database. So I proceeded to hack up this query just for testing
```
SELECT proname, proargnames, proargtypes
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p
ON pronamespace = n.oid
WHERE nspname = 'public';
```
Then I run it from python and for 'proargtypes' I get a string like this for each result
```
'1043 23 1043'
```
My keen eye tells me these are the oids for the postgresql types, separated by space, and this particular string means the function accepts varchar,integer,varchar. So in python speak, this should be
```
(unicode, int, unicode)
```
Now how can I get the python types from these numbers? The ideal end result would be something like this
```
In [129]: get_python_type(23)
Out[129]: int
```
I've looked all through psycopg2 and the closest I've found is 'extensions.string_types', but that just maps oids to sql type names.
2012/06/29
[ "https://Stackoverflow.com/questions/11254769", "https://Stackoverflow.com", "https://Stackoverflow.com/users/537925/" ]
If you want the Python type classes, like you might get from a `SQLALchemy` column object, you'll need to build and maintain your own mapping. `psycopg2` doesn't have one, even internally. But if what you want is a way to get from an `oid` to a function that will convert raw values into Python instances, `psycopg2.extensions.string_types` is actually already what you need. It might look like it's just a mapping from `oid` to a name, but that's not quite true: its values aren't strings, they're instances of `psycopg2._psycopg.type`. Time to delve into a little code. `psycopg2` exposes an API for registering new type converters which we can use to trace back into the C code involved with typecasts; this centers around the `typecastObject` in [typecast.c](https://github.com/psycopg/psycopg2/blob/6becf0ef550e8eb0c241c3835b1466eef37b1784/psycopg/typecast.c), which, unsurprisingly, maps to the `psycopg2._psycopg.type` we find in our old friend `string_types`. This object contains pointers to two functions, `pcast` (for Python casting function) and `ccast` (for C casting function), which would seem like what we want— just pick whichever one exists and call it, problem solved. Except they're not among the attributes exposed (`name`, which is just a label, and `values`, which is a list of oids). What the type does expose to Python is `__call__`, which, it turns out, just chooses between `pcast` and `ccast` for us. The documentation for this method is singularly unhelpful, but looking at the C code further [shows](https://github.com/psycopg/psycopg2/blob/6becf0ef550e8eb0c241c3835b1466eef37b1784/psycopg/typecast.c#L471) that it takes two arguments: a string containing the raw value, and a cursor object. ``` >>> import psycopg2.extensions >>> cur = something_that_gets_a_cursor_object() >>> psycopg2.extensions.string_types[23] <psycopg2._psycopg.type 'INTEGER' at 0xDEADBEEF> >>> psycopg2.extensions.string_types[23]('100', cur) 100 >>> psycopg2.extensions.string_types[23]('10.0', cur) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: '10.0' >>> string_types[1114] <psycopg2._psycopg.type 'DATETIME' at 0xDEADBEEF> >>> string_types[1114]('2018-11-15 21:35:21', cur) datetime.datetime(2018, 11, 15, 21, 35, 21) ``` The need for a cursor is unfortunate, and in fact, a cursor isn't always required: ``` >>> string_types[23]('100', None) 100 ``` But anything having to with converting actual string (`varchar`, for example) types is dependent on the PostgreSQL server's locale, at least in a Python 3 compilation, and passing `None` to those casters doesn't just fail— it segfaults. The method you mention, `cursor.cast(oid, raw)`, is essentially a wrapper around the casters in `psycopg2.extensions.string_types` and may be more convenient in some instances. The only workaround for needing a cursor and connection that I can think of would be to build essentially a mock connection object. If it exposed all of the relevant environment information without connecting to an actual database, it could be attached to a cursor object and used with `string_types[oid](raw, cur)` or with `cur.cast(oid, raw)`, but the mock would have be built in C and is left as an exercise to the reader.
The mapping of postgres types and python types is given [here](http://packages.python.org/psycopg2/usage.html). Does that help? Edit: When you read a record from a table, the postgres (or any database) driver will automatically map the record column types to Python types.
```
cur = con.cursor()
cur.execute("SELECT * FROM Writers")
row = cur.fetchone()
for index, val in enumerate(row):
    print "column {0} value {1} of type {2}".format(index, val, type(val))
```
Now, you just have to map Python types to MySQL types while writing your MySQL interface code. But, frankly, this is a roundabout way of mapping types from PostgreSQL types to MySQL types. I would just refer to one of the numerous type mappings between these two databases, like [this](http://en.wikibooks.org/wiki/Converting_MySQL_to_PostgreSQL#Data_Types).
11,891
34,538,890
I have a question which is not really clear in the python documentation (<https://docs.python.org/2/library/stdtypes.html#set.intersection>). When using set.intersection, does the resulting set contain the objects from the current set or from the other, in the case where both objects have the same value but are different objects in memory? I am using this to compare a previous extraction taken from a file with a new one coming from the internet. Both have some objects that are similar, but I want to update the old ones. Or maybe there is a simpler alternative to achieve this? It would be much easier if sets implemented `__getitem__`.
```
oldApsExtract = set()
if (os.path.isfile("Apartments.json")):
    with open('Apartments.json', mode='r') as f:
        oldApsExtract = set(jsonpickle.decode(f.read()))
newApsExtract = set(getNewExtract())
updatedAps = oldApsExtract.intersection(newApsExtract)
deletedAps = oldApsExtract.difference(newApsExtract)
newAps = newApsExtract.difference(oldApsExtract)
for ap in deletedAps:
    ap.mark_deleted()
for ap in updatedAps:
    ap.update()
saveAps = list(oldApsExtract) + list(newAps)
with open('Apartments.json', mode='w') as f:
    f.write(jsonpickle.encode(saveAps))
```
2015/12/30
[ "https://Stackoverflow.com/questions/34538890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1936538/" ]
Which objects are used varies: if the sets are the same size, the intersecting elements come from b; if b has more elements, then the objects from a are returned: ``` i = "$foobar" * 100 j = "$foob" * 100 l = "$foobar" * 100 k = "$foob" * 100 print(id(i), id(j)) print(id(l), id(k)) a = {i, j} b = {k, l, 3} inter = a.intersection(b) for ele in inter: print(id(ele)) ``` Output: ``` 35510304 35432016 35459968 35454928 35510304 35432016 ``` Now when they are the same size: ``` i = "$foobar" * 100 j = "$foob" * 100 l = "$foobar" * 100 k = "$foob" * 100 print(id(i), id(j)) print(id(l), id(k)) a = {i, j} b = {k, l} inter = a.intersection(b) for ele in inter: print(id(ele)) ``` Output: ``` 35910288 35859984 35918160 35704816 35704816 35918160 ``` Here is the relevant part of the [source](https://github.com/python/cpython/blob/master/Objects/setobject.c#L1294). The line `if (PySet_GET_SIZE(other) > PySet_GET_SIZE(so))` shows that the result of the size comparison decides which set is iterated over and therefore which objects get used. ``` if (PySet_GET_SIZE(other) > PySet_GET_SIZE(so)) { tmp = (PyObject *)so; so = (PySetObject *)other; other = tmp; } while (set_next((PySetObject *)other, &pos, &entry)) { key = entry->key; hash = entry->hash; rv = set_contains_entry(so, key, hash); if (rv < 0) { Py_DECREF(result); return NULL; } if (rv) { if (set_add_entry(result, key, hash)) { Py_DECREF(result); return NULL; } ``` If you pass an object that is not a set, then the same is not true and the length is irrelevant, as the objects from the iterable are used: ``` it = PyObject_GetIter(other); if (it == NULL) { Py_DECREF(result); return NULL; } while ((key = PyIter_Next(it)) != NULL) { hash = PyObject_Hash(key); if (hash == -1) goto error; rv = set_contains_entry(so, key, hash); if (rv < 0) goto error; if (rv) { if (set_add_entry(result, key, hash)) goto error; } Py_DECREF(key); ``` When you pass an iterable, it could be an iterator, so its size cannot be checked without consuming it, and if you passed a list the lookup would be O(n), so it makes sense to just iterate over the iterable passed in. In contrast, if you have a set of `1000000` elements and one with `10`, it makes sense to check whether the `10` are in the set of `1000000`, as opposed to checking whether any of the `1000000` are in your set of `10`: the lookup is O(1) on average, so it means a linear pass over 10 elements vs a linear pass over 1000000. If you look at [wiki.python.org/moin/TimeComplexity](https://wiki.python.org/moin/TimeComplexity), this is backed up: > > Average case -> Intersection s&t O(min(len(s), len(t))) > > > Worst case -> O(len(s) \* len(t)) > > > *replace "min" with "max" if t is not a set* > > > So when we pass an iterable we should always get the objects from b: ``` i = "$foobar" * 100 j = "$foob" * 100 l = "$foobar" * 100 k = "$foob" * 100 print(id(i), id(j)) print(id(l), id(k)) a = {i, j} b = [k, l, 1,2,3] inter = a.intersection(b) for ele in inter: print(id(ele)) ``` You get the objects from b: ``` 20854128 20882896 20941072 20728768 20941072 20728768 ``` If you really want to decide which objects you keep, then do the iterating and lookups yourself, keeping whichever you want.
One thing you could do is instead use Python dictionaries. Access is still O(1), elements are easy to access, and a simple loop such as the following gets you the intersection feature: ``` res = [] for item in dict1.keys(): if item in dict2: res.append(item) ``` The advantage here is you have full control of what is going on, and can tweak it as you need to. For example, you can also do things like: ``` if item in dict1: dict1[item] = updatedValue ```
11,892
64,839,088
I'm trying to use the OpenCV Stitcher class for putting two images together. I ran the simple example provided in the answer to this [question](https://stackoverflow.com/questions/34362922/how-to-use-opencv-stitcher-class-with-python) with the same koala images, but it returns `(1, None)` every time. I've tried this on opencv-python version 3.4, 4.2, and 4.4, and all have the same result. I've tried replacing the stitcher initializer with something else, (`cv2.Stitcher.create`, `cv2.Stitcher_create`, `cv2.createStitcher`), but nothing seems to work. If it helps, I'm on Mac Catalina, using Python 3.7. Thanks!
2020/11/14
[ "https://Stackoverflow.com/questions/64839088", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7129536/" ]
Try changing the default pano confidence threshold using `setPanoConfidenceThresh()`. By default it's 1.0, and apparently it results in the stitcher thinking that it has failed. Here is the full example that works for me. I used that pair of koala images as well, and I am on opencv 4.2.0: ``` stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA) stitcher.setPanoConfidenceThresh(0.0) # might be too aggressive for real examples foo = cv2.imread("/path/to/image1.jpg") bar = cv2.imread("/path/to/image2.jpg") status, result = stitcher.stitch((foo,bar)) assert status == 0 # Verify returned status is 'success' cv2.imshow("result", result) cv2.waitKey(0) cv2.destroyAllWindows() ``` I think in this particular case `cv2.Stitcher_SCANS` is a better mode (transformation between images is just a translation), but either `SCANS` or `PANORAMA` works.
Try rescaling both images by a factor of 0.6 using resize. For some reason I got a result only at those values.
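For reference, a hedged sketch of that rescaling with `cv2.resize` (the variable names follow the other answer's example; 0.6 scales both axes):

```
import cv2

# dsize=None with fx/fy rescales relative to the original size
foo = cv2.resize(foo, None, fx=0.6, fy=0.6)
bar = cv2.resize(bar, None, fx=0.6, fy=0.6)
```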
11,893
64,938,027
Can I indicate a specific dictionary shape/form for an argument to a function in python? Like in typescript I'd indicate that the `info` argument should be an object with a string `name` and a number `age`: ```js function parseInfo(info: {name: string, age: number}) { /* ... */ } ``` Is there a way to do this with a python function that's otherwise: ```py def parseInfo(info: dict): # function body ``` Or is that perhaps not Pythonic and I should use named keywords or something like that?
2020/11/20
[ "https://Stackoverflow.com/questions/64938027", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6826164/" ]
In Python 3.8+ you could use the [alternative syntax](https://www.python.org/dev/peps/pep-0589/#alternative-syntax) to create a [TypedDict](https://docs.python.org/3/library/typing.html#typing.TypedDict): ``` from typing import TypedDict Info = TypedDict('Info', {'name': str, 'age': int}) def parse_info(info: Info): pass ``` From the documentation on TypedDict: > > TypedDict declares a dictionary type that expects all of its instances > to have a certain set of keys, where each key is associated with a > value of a consistent type. This expectation is not checked at runtime > but is only enforced by type checkers. > > >
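The equivalent class-based syntax, also from PEP 589, is often more readable:

```
from typing import TypedDict

class Info(TypedDict):
    name: str
    age: int

def parse_info(info: Info):
    pass
```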
Perhaps you could do the following: ``` def assertTypes(obj, type_obj): for t in type_obj: if not(t in obj and type(obj[t]) == type_obj[t]): return False return True def parseInfo(info): if not assertTypes(info, {"name": str, "age": int}): print("INVALID OBJECT FORMAT") return #continue ``` --- ``` >>> parseInfo({"name": "AJ", "age": 8}) >>> parseInfo({"name": "AJ", "age": 'hi'}) INVALID OBJECT FORMAT >>> parseInfo({"name": "AJ"}) INVALID OBJECT FORMAT >>> parseInfo({"name": 1, "age": 100}) INVALID OBJECT FORMAT >>> parseInfo({"name": "Donald", "age": 100}) >>> ```
11,894
22,328,160
I am converting many obscure date formats from an old system. The dates are unpacked/processed as strings and converted into ISO 8601 format. This particular function attempts to convert YYMMDD0F to YYYYMMDD -- function name says it all. Dates from the year 2000 make this messy and clearly this is not the most pythonic way of handling them. How can I make this better using the `dateutil.parser`? ``` def YYMMDD0FtoYYYYMMDD(date): YY = date[0:2] MM = date[2:4] DD = date[4:6] if int(YY) >= 78: YYYY = '19%s' % YY elif 0 <= int(YY) <= 77 and MM!='' and MM!='00': YYYY = '20%s' % YY else: YYYY = '00%s' % YY return "%s-%s-%s" % (YYYY, MM, DD) ```
2014/03/11
[ "https://Stackoverflow.com/questions/22328160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1645914/" ]
How about this one: ``` SELECT ID, Team, DPV, DPT, DPV-DPT AS Difference FROM e2teams ```
Something like this? ``` SELECT ID, Team, DPV, DPT, (DPV - DPT) as Difference FROM e2teams ``` You can find more information **[here](https://dev.mysql.com/doc/refman/5.0/en/arithmetic-functions.html)**
11,895
12,444,942
I'm doing an evolution experiment using Python and Pygame; however, that is unimportant. It is one function that is not working that I'd like you to have a look at. The error message I'm getting is "float object is not callable". It says the problem is in line 205, which is calling the function from line 51. I will post all my code, most of which is irrelevant for fixing this problem, but I think it's useful for you guys to have an idea of the code as a whole. And please don't hate me for the lack of comments :P It will get there!! Thanks Link to code: <http://pastebin.com/BBm7Ehax>
2012/09/16
[ "https://Stackoverflow.com/questions/12444942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1339482/" ]
Line 51: ``` def distance(self,listx,listy): ``` Line 55: ``` self.distance=(((self.x-self.tcentrex)**2) + ((self.y-self.tcentrey)**2))**0.5 ``` You can't have `self.distance` be both a method and a variable and expect things to work properly. When line 55 is executed (during the first time the `distance()` method is called) it overwrites the method (which was at `self.distance`, because it's the method `distance` being called on `self`) with a `float` value.
In line 55, within the `distance` method, you assign a float value to `self.distance`. So after you call `distance` once, the attribute `distance` on that object refers to a float, which is not callable.
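A minimal reproduction of the same mistake, stripped of the game code (the class name here is made up):

```
class Critter(object):
    def distance(self):
        # the first call rebinds the attribute to a float...
        self.distance = 2.5
        return self.distance

c = Critter()
c.distance()  # works the first time
c.distance()  # TypeError: 'float' object is not callable
```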
11,898
55,174,991
Attempting to convert this EGCD equation into python. ``` egcd(a, b) = (1, 0), if b = 0 = (t, s - q * t), otherwise, where q = a / b (note: integer division) r = a mod b (s, t) = egcd(b, r) ``` The test I used was egcd(5, 37) which should return (15,-2) but is returning (19.5, -5.135135135135135) My code is: ``` def egcd(a, b): if b == 0: return (1, 0) else: q = a / b # Calculate q r = a % b # Calculate r (s,t) = egcd(b,r) # Calculate (s, t) by calling egcd(b, r) return (t,s-q*t) # Return (t, s-q*t) ```
2019/03/15
[ "https://Stackoverflow.com/questions/55174991", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10687320/" ]
`a / b` in Python 3 is "true division" the result is non-truncating floating point division even when both operands are `int`s. To fix, either use `//` instead (which is floor division): ``` q = a // b ``` or [use `divmod`](https://docs.python.org/3/library/functions.html#divmod) to perform both division and remainder as a single computation, replacing both of these lines: ``` q = a / b r = a % b ``` with just: ``` q, r = divmod(a, b) ```
Change `q = a / b` for `q = a // b`
11,899
8,117,249
I think I might be repeating the question but I didn't find any of the answers suited to my requirement. Pardon my ignorance. I have a program running which continuously spits out some binary data from a server. It never stops until it's killed. I want to wrap it in a python script to read the output and process it as and when it arrives. I tried out a few of the subprocess ideas on Stack Overflow, but to no avail. Please suggest. ``` p = subprocess.Popen(args, stderr=PIPE, stdin=PIPE, stdout=PIPE, shell=False) # p.communicate() --> blocks forever, as expected # p.stdout.read/readlines/readline --> blocks # select (on p.stdout.fileno()) --> blocks ``` What is the best method?
2011/11/14
[ "https://Stackoverflow.com/questions/8117249", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1044911/" ]
Read with a length limit: ``` proc = subprocess.Popen(args, stdin=None, stdout=subprocess.PIPE, stderr=None) while True: chunk = proc.stdout.read(1024) # chunk is <= 1024 bytes ``` --- This is the code from your comment, slightly modified. It works for me: ``` import subprocess class container(object): pass self = container() args = ['yes', 'test ' * 10] self.p = subprocess.Popen(args, stdin=None, stderr=None, stdout=subprocess.PIPE, shell=False) while True: chunk = self.p.stdout.read(1024) print 'printing chunk' print chunk ```
You could run the other program with its output directed to a file and then use Python's *f.readline()* to tail the file.
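A rough sketch of that approach (the program name and log file are placeholders):

```
import subprocess
import time

# redirect the program's output into a file
log = open('output.log', 'w')
proc = subprocess.Popen(['./the_program'], stdout=log)

# tail the file while the program is still running
f = open('output.log')
while proc.poll() is None:
    line = f.readline()
    if not line:
        time.sleep(0.5)  # nothing new yet; wait and try again
        continue
    # process the line here
```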
11,900
54,469,599
Here's the code, it's from <https://plot.ly/python/line-and-scatter/> ===================================================================== ``` import plotly.plotly as py import plotly.graph_objs as go # Create random data with numpy import numpy as np N = 100 random_x = np.linspace(0, 1, N) random_y0 = np.random.randn(N)+5 random_y1 = np.random.randn(N) random_y2 = np.random.randn(N)-5 # Create traces trace0 = go.Scatter( x = random_x, y = random_y0, mode = 'markers', name = 'markers' ) trace1 = go.Scatter( x = random_x, y = random_y1, mode = 'lines+markers', name = 'lines+markers' ) trace2 = go.Scatter( x = random_x, y = random_y2, mode = 'lines', name = 'lines' ) data = [trace0, trace1, trace2] py.iplot(data, filename='scatter-mode') ``` When I copy and paste it into jupyter notebook I get the following error: PlotlyError: Because you didn't supply a 'file\_id' in the call, we're assuming you're trying to snag a figure from a url. You supplied the url, '', we expected it to start with '<https://plot.ly>'. Run help on this function for more information. whole error: ``` Aw, snap! We didn't get a username with your request. Don't have an account? https://plot.ly/api_signup Questions? accounts@plot.ly --------------------------------------------------------------------------- PlotlyError Traceback (most recent call last) <ipython-input-7-70bd62361f83> in <module>() 27 28 data = [trace0, trace1, trace2] ---> 29 py.iplot(data, filename='scatter-mode') c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\plotly\plotly.py in iplot(figure_or_data, **plot_options) 162 embed_options['height'] = str(embed_options['height']) + 'px' 163 --> 164 return tools.embed(url, **embed_options) 165 166 c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in embed(file_owner_or_url, file_id, width, height) 394 else: 395 url = file_owner_or_url --> 396 return PlotlyDisplay(url, width, height) 397 else: 398 if (get_config_defaults()['plotly_domain'] c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in __init__(self, url, width, height) 1438 def __init__(self, url, width, height): 1439 self.resource = url -> 1440 self.embed_code = get_embed(url, width=width, height=height) 1441 super(PlotlyDisplay, self).__init__(data=self.embed_code) 1442 c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in get_embed(file_owner_or_url, file_id, width, height) 299 "'{1}'." 300 "\nRun help on this function for more information." --> 301 "".format(url, plotly_rest_url)) 302 urlsplit = six.moves.urllib.parse.urlparse(url) 303 file_owner = urlsplit.path.split('/')[1].split('~')[1] PlotlyError: Because you didn't supply a 'file_id' in the call, we're assuming you're trying to snag a figure from a url. You supplied the url, '', we expected it to start with 'https://plot.ly'. Run help on this function for more information. ```
2019/01/31
[ "https://Stackoverflow.com/questions/54469599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9493894/" ]
You can try this code: ``` Sub Sample() ' Define object variables Dim listRange As Range Dim cellValue As Range ' Define other variables Dim itemsQuantity As Integer Dim stringResult As String Dim separator As String Dim counter As Integer ' Define the range where the options are located Set listRange = Range("A1:A4") itemsQuantity = listRange.Cells.Count counter = 1 For Each cellValue In listRange ' Select the case for inner items, penultimate and last item ' (the penultimate case must come first, or it would never match) Select Case counter Case Is = itemsQuantity - 1 separator = " And " Case Is < itemsQuantity separator = ", " Case Else separator = vbNullString End Select stringResult = stringResult & cellValue.Value & separator counter = counter + 1 Next cellValue ' Assemble the last sentence stringResult = "You have entered " & stringResult & "." MsgBox stringResult End Sub ``` Customize the `' Define the range where the options are located` portion. Cheers!
Column to Sentence ================== Features -------- * At least two cells of data in Range, or else "" is returned. * Only first column of Range is processed (`Resize`). Usage in Excel -------------- [![enter image description here](https://i.stack.imgur.com/FnDPU.jpg)](https://i.stack.imgur.com/FnDPU.jpg) The Code -------- ``` Function CCE(Range As Range) As String Application.Volatile Const strFirst = "You have entered " ' First String Const strDEL = ", " ' Delimiter Const strDELLast = " and " ' Last Delimiter Const strLast = "." ' Last String Dim vnt1 As Variant ' Source Array Dim vnt0 As Variant ' Zero Array Dim i As Long ' Arrays Row Counter ' Copy Source Range's first column to 2D 1-based 1-column Source Array. vnt1 = Range.Resize(, 1) ' Note: Join can be used only on a 0-based 1D array. ' Resize Zero Array to hold all data from Source Array. ReDim vnt0(UBound(vnt1) - 1) ' Copy data from Source Array to Zero Array. For i = 1 To UBound(vnt1) If vnt1(i, 1) = "" Then Exit For vnt0(i - 1) = vnt1(i, 1) Next ' If no "" was found, "i" has to be greater than 3 ensuring that ' Source Range contains at least 2 cells. If i < 3 Then Exit Function ReDim Preserve vnt0(i - 2) ' Join data from Zero Array to CCE. CCE = Join(vnt0, strDEL) ' Replace last occurence of strDEL with strDELLast. CCE = WorksheetFunction.Replace( _ CCE, InStrRev(CCE, strDEL), Len(strDEL), strDELLast) ' Add First and Last Strings. CCE = strFirst & CCE & strLast End Function ```
11,902
6,978,204
I made a set of XMLRPC client-server programs in python and set up a little method for authenticating my clients. However, after coding pretty much the whole thing, I realized that once a client was authenticated, the flag I had set for it was global in my class i.e. as long as one client is authenticated, all clients are authenticated. I don't know why, but I was under the impression that whenever SimpleXMLRPCServer was connected to by a client, it would create a new set of variables in my program. Basically the way it's set up now is ``` class someclass: authenticate(self, username, pass): #do something here if(check_for_authentication(username, pass)) self.authenticated=True other_action(self, vars): if authenticated: #do whatever else: return "Not authorized." server=SimpleXMLRPCServer.SimpleXMLRPCServer("0.0.0.0", 8000) server.register_instance(someclass()) server.serve_forever() ``` I need either a way to hack this into what I am looking for (i.e. the authenticated flag needs to be set for each client that connects), or another protocol that can do this more easily. After some searching I have been looking at twisted, but since this is already written, I'd rather modify it than have to rewrite it. I know for now I could just always get the username and password from the client, but in the intrest of resources (having to authenticate on every request) and saving bandwidth (which some of my clients have in very limited quantities), I'd rather not do that. Also, this is my first time trying to secure something like this(and I am not trained in internet security), so if I am overlooking some glaring error in my logic, please tell me. Basically, I can't have someone sending me fake variables in "other\_actions"
2011/08/08
[ "https://Stackoverflow.com/questions/6978204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/400612/" ]
Something like this would work: ``` class SomeClass(object): authenticated = {} def authenticate(self, username, password): # check_for_authentication is your existing credential check if check_for_authentication(username, password): # the unique token can probably be just a hash # of the millisecond time and the username token = make_unique_token(username) self.authenticated[token] = True # hand the token back so the client can send it with later calls return token def other_action(self, vars): # This will return True if the user is authenticated # and None otherwise, which evaluates to False if self.authenticated.get(vars.get('authentication-token')): # do whatever pass else: return "Not authorized." server = SimpleXMLRPCServer.SimpleXMLRPCServer(("0.0.0.0", 8000)) server.register_instance(SomeClass()) server.serve_forever() ``` You just need to pass them an authentication token once they've logged in. I assume you know you can't actually use `pass` as a variable name. Please remember to accept answers to your questions (I noticed you haven't for your last several).
You have to decide. If you really want to use one instance for all clients, you have to store the "authenticated" state somewhere else. I am not familiar with SimpleXMLRPCServer(), but if you could get the connection object somewhere, or at least its source address, you could establish a set() where all authenticated clients/connections/whatever are registered.
11,904
70,102,585
I'm [using Nikola](https://getnikola.com/), a static website generator, to build a website. I am automating its building through [Github Actions](https://github.com/getnikola/nikola-action). I also wanted to use [Pandoc](https://pandoc.org) to help convert my markdown to html, but I noted that Pandoc was not included in the original action. Therefore, I had to try to figure out myself how to include it. However, I've been thwarted time and again by `FileNotFound` errors. First, I tried to edit the action so that it installed Pandoc on the Ubuntu environment. Below is my edited version of the action. I only added the `Install Pandoc on Ubuntu` step. ```yaml on: [push] jobs: nikola-build: runs-on: ubuntu-latest steps: - name: Install Pandoc on Ubuntu run: sudo apt-get install -y pandoc - name: Check out uses: actions/checkout@v2 - name: Build and Deploy Nikola uses: getnikola/nikola-action@v3 with: dry_run: false ``` When this failed again informing me that Pandoc could not be found, I added a `requirements.txt` file to my repository: ``` Pandoc ``` I tried running the action again. Both installations—the action step I wrote and `pip install pandoc`—ran without any issues and were successful. And yet when it came to the step where Nikola starts to build the website, it seems that no matter what is done, it fails in rendering, because Pandoc cannot not be found: ``` Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/nikola/plugins/compile/pandoc.py", line 76, in compile subprocess.check_call(['pandoc', '-o', dest, source] + self._get_pandoc_options(source)) File "/usr/local/lib/python3.8/subprocess.py", line 359, in check_call retcode = call(*popenargs, **kwargs) File "/usr/local/lib/python3.8/subprocess.py", line 340, in call with Popen(*popenargs, **kwargs) as p: File "/usr/local/lib/python3.8/subprocess.py", line 858, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/usr/local/lib/python3.8/subprocess.py", line 1704, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'pandoc' ``` I've looked absolutely everywhere for solutions to similar problems, but they are few and outdated. I would greatly appreciate any insight into this problem, what is at fault, what I can do to fix it, etc.
2021/11/24
[ "https://Stackoverflow.com/questions/70102585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17501797/" ]
`pandoc` is not a Python package. It is a separate and very powerful command line tool. `nikola` invokes the command line tool to do its work. You need to install it using the `sudo apt install pandoc` command line that they suggest.
I discovered that the system was unable to find Pandoc as the entirety of the project was run in a Docker container; I had previously installed Pandoc on the system itself and failed. I was able to solve the problem by modifying the [shell script](https://github.com/getnikola/nikola-action/blob/master/entrypoint.sh) to install Pandoc in the container.
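For reference, on a Debian-based container image that typically amounts to adding something like `apt-get update && apt-get install -y pandoc` to the entrypoint script before Nikola runs; adjust the command for whatever base image the action actually uses.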
11,905
25,851,090
The results from the code below in Python 2.7 struck me as a contradiction. The `is` operator is supposed to work with object identity and so is `id`. But their results diverge when I'm looking at a user-defined method. Why is that? ``` py-mach >>class Hello(object): ... def hello(): ... pass ... py-mach >>Hello.hello is Hello.hello False py-mach >>id(Hello.hello) - id(Hello.hello) 0 ``` I found the following excerpt from the description of the [Python data model](https://docs.python.org/2/reference/datamodel.html) somewhat useful. But it didn't really make everything clear. Why does the `id` function return the same integer if the user-defined method objects are constructed anew each time? > > User-defined method objects may be created when getting an attribute of a class (perhaps via an instance of that class), if that attribute is a user-defined function object, an unbound user-defined method object, or a class method object. When the attribute is a user-defined method object, a new method object is only created if the class from which it is being retrieved is the same as, or a derived class of, the class stored in the original method object; otherwise, the original method object is used as it is. > > >
2014/09/15
[ "https://Stackoverflow.com/questions/25851090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1988435/" ]
The Python documentation for the [id function](https://docs.python.org/2/library/functions.html#id) states: > > Return the "identity" of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. **Two objects with non-overlapping lifetimes may have the same id() value.** > > > (emphasis mine) When you do `id(Hello.hello) == id(Hello.hello)`, the method object is created only briefly and is considered "dead" after the first call to 'id'. Because of the call to `id`, you only need `Hello.hello` to be alive for a short period of time -- enough to obtain the id. Once you get that id, the object is dead and the second `Hello.hello` can reuse that address, which makes it appear as if the two objects have the same id. This is in contrast to doing `Hello.hello is Hello.hello` -- both instances have to live long enough to be compared to each other, so you end up having two live instances. If you instead tried: ``` >>> a = Hello.hello >>> b = Hello.hello >>> id(a) == id(b) False ``` ...you'd get the expected value of `False`.
This is a "simple" consequence of how the memory allocator works. It is very similar to the case: ``` >>> id([]) == id([]) True ``` Basically python doesn't guarantee that ID's don't get reused -- it only guarantees that the id is unique *as long as the object is alive*. In this case, the first object being passed to `id` is dead after the call to `id` and (C)python re-uses that `id` when creating the second object. Never rely on this behavior as it is *allowed* by the language reference, but certainly not *required*.
11,906
58,040,654
I have this JSON dataset. From this dataset I only want the "column\_names" key and its values and the "data" key and its values. Each value of column\_names corresponds to a value in data. How do I combine only these two keys in Python for analysis? ``` {"dataset":{"id":42635350,"dataset_code":"MSFT","column_names": ["Date","Open","High","Low","Close","Volume","Dividend","Split", "Adj_Open","Adj_High","Adj_Low","Adj_Close","Adj_Volume"], "frequency":"daily","type":"Time Series", "data":[["2017-12-28",85.9,85.93,85.55,85.72,10594344.0,0.0,1.0,83.1976157998082, 83.22667201021558,82.85862667838872,83.0232785373639,10594344.0], ["2017-12-27",85.65,85.98,85.215,85.71,14678025.0,0.0,1.0,82.95548071308001, 83.27509902756123,82.53416566217294,83.01359313389476,14678025.0]]}} for cnames in data['dataset']['column_names']: print(cnames) for cdata in data['dataset']['data']: print(cdata) ``` The for loops give me the column names and data values I want, but I am not sure how to combine them into a pandas DataFrame for analysis. Ref: The above piece of code is from the Quandl website.
2019/09/21
[ "https://Stackoverflow.com/questions/58040654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10331731/" ]
``` data = { "dataset": { "id":42635350,"dataset_code":"MSFT", "column_names": ["Date","Open","High","Low","Close","Volume","Dividend","Split","Adj_Open","Adj_High","Adj_Low","Adj_Close","Adj_Volume"], "frequency":"daily", "type":"Time Series", "data":[ ["2017-12-28",85.9,85.93,85.55,85.72,10594344.0,0.0,1.0,83.1976157998082, 83.22667201021558,82.85862667838872,83.0232785373639,10594344.0], ["2017-12-27",85.65,85.98,85.215,85.71,14678025.0,0.0,1.0,82.95548071308001,83.27509902756123,82.53416566217294,83.01359313389476,14678025.0] ] } } ``` Does the following code do what you want? ``` import pandas as pd df = pd.DataFrame(columns=data['dataset']['column_names']) for i, data_row in enumerate(data['dataset']['data']): df.loc[i] = data_row ```
The following snippet should work for you ```py import pandas as pd df = pd.DataFrame(data['dataset']['data'],columns=data['dataset']['column_names']) ``` Check the following link to learn more <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html>
11,907
21,585,730
I want to launch a program from python, in this case Abaqus (a finite element analysis program), using: ``` os.system('abaqus job=' + JobName + ' user=' + UELname + ' interactive') ``` After say 5 minutes of running the program, I want to execute a python script that monitors some output files generated by abaqus. If a certain condition is met, then the python script will terminate the abaqus job. There's a catch here. To read the output files I need to run the python script from abaqus: ``` os.system('abaqus cae noGUI=results2.py') ``` My question is this: Can I do this simply by: ``` os.system('abaqus job=' + JobName + ' user=' + UELname + ' interactive') time.sleep(300) os.system('abaqus cae noGUI=results2.py') ``` I know that using the `interactive` key makes the system wait for the abaqus job to finish before doing other stuff. Therefore, I assume this is not as simple as I'd like it to be. Any ideas?
2014/02/05
[ "https://Stackoverflow.com/questions/21585730", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1262767/" ]
Did you try the [subprocess](http://docs.python.org/2/library/subprocess.html) module?
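For example, a rough sketch of the non-blocking launch, reusing the names from the question (the 5-minute sleep is just illustrative):

```
import subprocess
import time

# launch the solver without waiting for it to finish
job = subprocess.Popen('abaqus job=' + JobName + ' user=' + UELname + ' interactive',
                       shell=True)

time.sleep(300)  # let it run for about 5 minutes

# run the monitoring script; this call blocks until it returns
subprocess.call('abaqus cae noGUI=results2.py', shell=True)

# if the monitor decides the job must be stopped:
# job.terminate()
```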
The logic seems fine; I would suggest using [subprocess](http://docs.python.org/2/library/subprocess.html) instead of os.system. Since you are calling shell commands, you can run all of them at once like this: ``` cmdToRun = 'abaqus job=' + JobName + ' user=' + UELname + ' interactive; sleep 300; abaqus cae noGUI=results2.py' ```
11,909
749,680
I tried using the Process class as always but that didn't work. All I am doing is trying to run a Python file like someone double clicked it. Is it possible? EDIT: Sample code: ``` string pythonScript = @"C:\callme.py"; string workDir = System.IO.Path.GetDirectoryName ( pythonScript ); Process proc = new Process ( ); proc.StartInfo.WorkingDirectory = workDir; proc.StartInfo.UseShellExecute = true; proc.StartInfo.FileName = pythonScript; proc.StartInfo.Arguments = "1, 2, 3"; ``` I don't get any error, but the script isn't run. When I run the script manually, I see the result.
2009/04/14
[ "https://Stackoverflow.com/questions/749680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51816/" ]
Here's my code for executing a Python script from C#, with redirected standard input and output (I pass info in via the standard input); it was copied from an example on the web somewhere. The Python location is hard-coded, as you can see, but you can refactor that. ``` private static string CallPython(string script, string pyArgs, string workingDirectory, string[] standardInput) { ProcessStartInfo startInfo; Process process; string ret = ""; try { startInfo = new ProcessStartInfo(@"c:\python25\python.exe"); startInfo.WorkingDirectory = workingDirectory; if (pyArgs.Length != 0) startInfo.Arguments = script + " " + pyArgs; else startInfo.Arguments = script; startInfo.UseShellExecute = false; startInfo.CreateNoWindow = true; startInfo.RedirectStandardOutput = true; startInfo.RedirectStandardError = true; startInfo.RedirectStandardInput = true; process = new Process(); process.StartInfo = startInfo; process.Start(); // write to standard input foreach (string si in standardInput) { process.StandardInput.WriteLine(si); } string s; while ((s = process.StandardError.ReadLine()) != null) { ret += s; throw new System.Exception(ret); } while ((s = process.StandardOutput.ReadLine()) != null) { ret += s; } return ret; } catch (System.Exception ex) { string problem = ex.Message; return problem; } } ```
[Process.Start](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.start.aspx) should work. If it doesn't, would you post your code and the error you are getting?
11,910
20,503,671
Ok so I'm trying to run a C program from a python script. Currently I'm using a test C program: ``` #include <stdio.h> int main() { while (1) { printf("2000\n"); sleep(1); } return 0; } ``` To simulate the program that I will be using, which takes readings from a sensor constantly. Then I'm trying to read the output (in this case `"2000"`) from the C program with subprocess in python: ``` #!usr/bin/python import subprocess process = subprocess.Popen("./main", stdout=subprocess.PIPE) while True: for line in iter(process.stdout.readline, ''): print line, ``` but this is not working. From using print statements, it runs the `.Popen` line then waits at `for line in iter(process.stdout.readline, ''):`, until I press Ctrl-C. Why is this? This is exactly what most examples that I've seen have as their code, and yet it does not read the file. Is there a way of making it run only when there is something to be read?
2013/12/10
[ "https://Stackoverflow.com/questions/20503671", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2836175/" ]
Your program isn't hung, it just runs very slowly. Your program is using buffered output; the `"2000\n"` data is not being written to stdout immediately, but will eventually make it. In your case, it might take `BUFSIZ/strlen("2000\n")` seconds (probably 1638 seconds) to complete. After this line: ``` printf("2000\n"); ``` add ``` fflush(stdout); ```
See the [readline docs](http://docs.python.org/2/library/io.html#io.TextIOBase.readline). Your code: ``` process.stdout.readline ``` is waiting for EOF or a newline. I cannot tell what you are ultimately trying to do, but adding a newline to your printf, e.g., `printf("2000\n");`, should at least get you started.
11,913
55,352,756
From a user-given input of a job description, I need to extract the keywords or phrases, using Python and its libraries. I am open to suggestions and guidance from the community on which libraries work best, and in case it's simple, please guide me through it. Example of user input: `user_input = "i want a full stack developer. Specialization in python is a must".` Expected output: `keywords = ['full stack developer', 'python']`
2019/03/26
[ "https://Stackoverflow.com/questions/55352756", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10486777/" ]
Well, a good keyword set is a good method, but the key is how to build it. There are many ways to do it. The simplest one is searching for open keyword sets on the web; that depends on your luck and your knowledge. Your keywords (like "python", "java", "machine learning") are common tags on Stack Overflow and recruitment websites. Don't break the law! The second approach is IE (Information Extraction), which is more complex than the first. There are many algorithms, like TextRank, entropy-based methods, Apriori, HMM, TF-IDF, Conditional Random Fields, and so on. Good luck. For matching keywords/phrases, a trie is much faster.
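As a minimal sketch of the keyword-set approach (the keyword list is purely illustrative; a real list would be much larger):

```
# naive substring matching against a hand-built keyword set;
# longer phrases come first so "full stack developer" wins over "developer"
keys = ["full stack developer", "machine learning", "python", "java"]

def extract_keywords(text, keys):
    text = text.lower()
    return [k for k in keys if k in text]

user_input = "i want a full stack developer. Specialization in python is a must"
print(extract_keywords(user_input, keys))
# ['full stack developer', 'python']
```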
Well, I answered my own question. Thanks anyway to those who replied. ``` keys = ['python', 'full stack developer','java','machine learning'] keywords = [] for i in range(len(keys)): word = keys[i] if word in keys: keywords.append(word) else: continue print(keywords) ``` Output was as expected!
11,916
55,373,000
I wanted to import `train_test_split` to split my dataset into a test dataset and a training dataset but an import error has occurred. I tried all of these but none of them worked: ``` conda upgrade scikit-learn pip uninstall scipy pip3 install scipy pip uninstall sklearn pip uninstall scikit-learn pip install sklearn ``` Here is the code which yields the error: ``` from sklearn.preprocessing import train_test_split X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=0) ``` And here is the error: ``` from sklearn.preprocessing import train_test_split Traceback (most recent call last): File "<ipython-input-3-e25c97b1e6d9>", line 1, in <module> from sklearn.preprocessing import train_test_split ImportError: cannot import name 'train_test_split' from 'sklearn.preprocessing' (C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\__init__.py) ```
2019/03/27
[ "https://Stackoverflow.com/questions/55373000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11264930/" ]
`train_test_split` isn't in `preprocessing`; it is in `model_selection` (and in the deprecated `cross_validation` module of older scikit-learn versions), so you meant: ``` from sklearn.model_selection import train_test_split ``` Or, on older versions: ``` from sklearn.cross_validation import train_test_split ```
`train_test_split` is not present in preprocessing. It is present in the model\_selection module, so try: ``` from sklearn.model_selection import train_test_split ``` It will work.
11,917
46,210,757
Assume I have a python dictionary with 2 keys. ``` dic = {0:'Hi!', 1:'Hello!'} ``` What I want to do is to extend this dictionary by duplicating itself, but changing the key values. For example, if I have this code ``` dic = {0:'Hi!', 1:'Hello'} multiplier = 3 def DictionaryExtend(number_of_multiplier, dictionary): "Function code" ``` then the result should look like ``` >>> DictionaryExtend(multiplier, dic) >>> dic >>> dic = {0:'Hi!', 1:'Hello', 2:'Hi!', 3:'Hello', 4:'Hi!', 5:'Hello'} ``` In this case, I changed the key values by adding the multiplier at each duplication step. What's an efficient way of doing this? Plus, I'm also planning to do the same job for a list variable. I mean, extend a list by duplicating itself and changing some values as in the example above. Any suggestion for this would be helpful, too!
2017/09/14
[ "https://Stackoverflow.com/questions/46210757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3794041/" ]
It's not immediately clear why you might want to do this. If the keys are always consecutive integers then you probably just want a list. Anyway, here's a snippet: ``` def dictExtender(multiplier, d): return dict(zip(range(multiplier * len(d)), list(d.values()) * multiplier)) ```
I don't think you need to use inheritance to achieve that. It's also unclear what the keys should be in the resulting dictionary. If the keys are always consecutive integers, then why not use a list? ``` origin = ['Hi', 'Hello'] extended = origin * 3 extended >> ['Hi', 'Hello', 'Hi', 'Hello', 'Hi', 'Hello'] extended[4] >> 'Hi' ``` If you want to perform a different operation with the keys, then simply: ``` mult_key = lambda key: [key,key+2,key+4] # just an example, this can be any custom implementation but beware of duplicate keys dic = {0:'Hi', 1:'Hello'} extended = { mkey:dic[key] for key in dic for mkey in mult_key(key) } extended >> {0:'Hi', 1:'Hello', 2:'Hi', 3:'Hello', 4:'Hi', 5:'Hello'} ```
11,918
40,914,325
I'm a beginner to python and I would like to start with automation. Below is the task I'm trying to do. ``` ssh -p 2024 root@10.54.3.32 root@10.54.3.32's password: ``` I try to ssh to a particular machine and it's prompting for a password. But I have no clue how to give the input to this console. I have tried this ``` import sys import subprocess con = subprocess.Popen("ssh -p 2024 root@10.54.3.32", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) print con.stdout.readlines() ``` If I execute this, the output will be like ``` python auto.py root@10.54.3.32's password: ``` But I have no clue how to give the input to this. If someone could help me out with this, I would be much grateful. Also, could you please help me with how to execute commands on the remote machine via ssh after logging in? I would proceed with my automation if this is done. I tried with `con.communicate()` since stdin is in `PIPE` mode, but no luck. If this can't be accomplished by subprocess, could you please suggest an alternate way (some other module) to execute commands on a remote console, useful for automation? Most of my automation depends on executing commands on a remote console. Thanks
2016/12/01
[ "https://Stackoverflow.com/questions/40914325", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3275349/" ]
I have implemented this through pexpect. You may need to `pip install pexpect` before you run the code: ``` import pexpect from pexpect import pxssh accessDenied = None unreachable = None auth = None # holds the result of the final prompt match username = 'someuser' ipaddress = 'mymachine' password = 'somepassword' command = 'ls -al' try: ssh = pexpect.spawn('ssh %s@%s' % (username, ipaddress)) ret = ssh.expect([pexpect.TIMEOUT, '.*sure.*connect.*\(yes/no\)\?', '[P|p]assword:']) if ret == 0: unreachable = True elif ret == 1: #Case asking for storing key ssh.sendline('yes') ret = ssh.expect([pexpect.TIMEOUT, '[P|p]assword:']) if ret == 0: accessDenied = True elif ret == 1: ssh.sendline(password) auth = ssh.expect(['[P|p]assword:', '#']) #Match for the prompt elif ret == 2: #Case asking for password ssh.sendline(password) auth = ssh.expect(['[P|p]assword:', '#']) #Match for the prompt if auth != 1: accessDenied = True else: (command_output, exitstatus) = pexpect.run("ssh %s@%s '%s'" % (username, ipaddress, command), events={'(?i)password':'%s\n' % password}, withexitstatus=1, timeout=1000) print(command_output) except pxssh.ExceptionPxssh as e: print(e) accessDenied = 'Access denied' if accessDenied: print('Could not connect to the machine') elif unreachable: print('System unreachable') ``` This works only on POSIX systems such as Linux, as pexpect's `spawn` is not available on Windows. You may use plink.exe if you need to run on Windows. `paramiko` is another module you may try, with which I had a few issues before.
I have implemented this through paramiko. You may need to `pip install paramiko` before you run the code: ``` import time import paramiko username = 'root' password = 'calvin' host = '192.168.0.1' ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect(host, username=str(username), password=str(password)) chan = ssh.invoke_shell() time.sleep(1) print("Connection successful") ``` If you want to pass a command and grab the output, simply perform the following steps: ``` chan.send('Your Command') if chan is not None and chan.recv_ready(): resp = chan.recv(2048) while (chan.recv_ready()): resp += chan.recv(2048) output = str(resp, 'utf-8') print(output) ```
11,923
42,207,798
When I use the lxml library in Python to get data from an HTML page (a YouTube video title), it does not return the text correctly. It returns text like this: "à·à·à¶½à¶±à·à¶§à¶ºà¶±à". Here is my code, ``` page = requests.get("https://www.youtube.com/watch?v=MZMapfEg5g8") source = html.fromstring(page.content) links = source.xpath('//link[@type="text/xml+oembed"]') for href in links: return href.attrib['title'] ``` The language I need is Sinhala, and it's Unicode.
2017/02/13
[ "https://Stackoverflow.com/questions/42207798", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6440193/" ]
Because your `on` clause is `1 = 0` nothing matches, so all rows are inserted. Changing your `on` clause to `a = b` will yield your expected results of `2,4,5,1,3`. rextester for `on a = b`: <http://rextester.com/OPLL86727> It might be helpful to be more explicit with aliasing your source and target: ``` declare @t1 table (a int) declare @t2 table (b int) insert into @t1 (a) values ( 1 ),(3),(5) insert into @t2 (b) values ( 2 ),(4),(5) ;with source as ( select * from @t1 ) merge into @t2 as target using source on source.a = target.b when not matched then insert (b) values (a); select * from @t2; ```
You are matching on 1=0, which will always fire the insert. You should use `ON Source.a = @t2.b`.
11,924
55,877,915
I am trying to gather weather data from an API and then store that data in a database for use later. I have been able to access the data and print it out using a for loop, but I would like to assign each iteration of that for loop to a variable to be stored in a different location in a database. How would I be able to do so? My current code is below: ``` #!/usr/bin/python3 from urllib.request import urlopen import json apikey="redacted" # Latitude & longitude lati="-26.20227" longi="28.04363" # Add units=si to get it in sensible ISO units url="https://api.forecast.io/forecast/"+apikey+"/"+lati+","+longi+"?units=si" meteo=urlopen(url).read() meteo = meteo.decode('utf-8') weather = json.loads(meteo) cTemp = (weather['currently']['temperature']) cSum = (weather['currently']['summary']) cRain1 = (weather['currently']['precipProbability']) cRain2 = cRain1*100 daily = (weather['daily']['summary']) print (cTemp) print (cSum) print (cRain2) print (daily) #Everything above this line works as expected; I am focusing on the code below dailyTHigh = (weather['daily']['data']) for i in dailyTHigh: print (i['temperatureHigh']) ``` Gives me an output of the following: ``` 12.76 Clear 0 No precipitation throughout the week, with high temperatures rising to 24°C on Friday. 22.71 22.01 22.82 23.13 23.87 23.71 23.95 22.94 ``` How would I go about assigning each of the 8 high temperatures to a different variable? ie, var1 = 22.71 var2 = 22.01 etc Thanks in advance,
2019/04/27
[ "https://Stackoverflow.com/questions/55877915", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6908101/" ]
IMO you need some sort of dynamic-length data structure to which you can append the data inside a `for` loop and then access by index. Therefore, you can create a `list` and then append all the values from the for loop into it, as shown below: ``` highs = [] for i in dailyTHigh: highs.append(i['temperatureHigh']) ``` Now you will be able to access the values of `highs` as shown below: ``` for i in range(0, len(highs)): print(highs[i]) ``` The above approach is good because you don't need to know in advance how many items there will be, which you would need to know if you assigned each value to its own variable. And you can easily access the values as well.
IMO you can use a stack data structure and store the data in FILO (First In, Last Out) form. That way, you can also manage the data in a more efficient way, even if it gets bigger in size (in the future).
11,925
38,326,357
**following is my code in python for the scraping and output efforts** ``` html = urlopen("http://www.imdb.com/news/top") imdbNews = BeautifulSoup(html) lines = [] for headLine in imdbNews.findAll("h2"): #headLine.encode('ascii', 'ignore') imdb_news = headLine.get_text() lines.append(imdb_news) #f = open("output.txt", "a") #f.write(imdb_news) #f.close() ``` **The #s have been my attempts at getting rid of the Unicode errors, but they just result in more errors that I can't seem to wrap my head around. The current code results in the following output:** ``` [u'Warner Bros. Brings \u2018Wonder Woman,\u2019 \u2018Suicide Squad,\u2019 \u2018Fantastic Beasts\u2019 to Comic-Con', u"\u2018Ghostbusters': Is There a Post-Credit Scene?", u'Javier Bardem Eyed for Frankenstein Role in Universal\u2019s Monster Universe (Exclusive)', u'\u2018Battlefield\u2019 Video Game Being Developed for TV Series by Paramount Television & Anonymous Content', u'\u2018Ghostbusters\u2019 Review Roundup: Critics Generally Positive On Female-Led Blockbuster', u'\u2018Assassin\u2019s Creed\u2019 Movie Won\u2019t Make Money, Ubisoft Chief Says', u"Fargo Taps The Leftovers' Carrie Coon as Female Lead in Season 3", u'Ridley Scott Long-Time Collaborator Julie Payne Dies at 64', u'Ridley Scott Longtime Collaborator Julie Payne Dies at 64', u'15 Highest Paid Music Stars of 2016, From The Weeknd to Taylor Swift (Photos)', u'South Africa\u2019s Pubcaster Draws Ire From Demonstrators, the Government', u'Jerry Greer, Son of Country Music Singer Craig Morgan, Dies at 19', u'Queen Latifah Says Racism Is "Still Alive and Kicking" at VH1 Hip Hop Honors', u'Jerry Greer, Son of Country Singer Craig Morgan, Found Dead After Boating Accident', u'[Watch] Emmy Awards movie/mini slugfest: \u2018The People v. O.J. Simpson\u2019 and \u2018Fargo\u2019 battle for the win', u'Amanda Evans Wraps Videovision\u2019s Thriller \u2018Serpent\u2019', u'\u2018Oslo\u2019 Theater Review: The Handshake That Shook the World', u'\u2018The Bachelorette\u2019 Recap: JoJo Tames Some Wild Horses', u'Disney Accelerator Names 9 Startups to Participate in 2016 Mentorship Program', u'Karlovy Vary Film Review: \u2018The Teacher\u2019', u'Top News', u'Movie News', u'TV News', u'Celebrity News'] ``` **How do I get rid of the u' and \u2018, \u2019, etc., and get my results into a txt file?**
2016/07/12
[ "https://Stackoverflow.com/questions/38326357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6578757/" ]
To avoid reflection, you can use a generic method: ``` public void DoSomething(MyClass a) => MakeSomeStaff(a, () => { /* Do method body */ }); private void MakeSomeStaff<T>(T item, Action action) where T: class { if (item == null) throw new Exception(); action(); } ```
**EDIT: Had an idea that abuses operator overloading, original answer at the bottom:** Use operator overloading to throw on null ``` public struct Some<T> where T : class { public T Value { get; } public Some(T value) { if (ReferenceEquals(value, null)) throw new Exception(); Value = value; } public override string ToString() => Value.ToString(); public static implicit operator T(Some<T> some) => some.Value; public static implicit operator Some<T>(T value) => new Some<T>(value); } private void DoThingsInternal(string foo) => Console.Out.WriteLine($"string len:{foo.Length}"); public void DoStuff(Some<string> foo) { DoThingsInternal(foo); string fooStr = foo; string fooStrBis = foo.Value; // do stuff } ``` **Original answer** You can use an extension method to throw for you ``` public static class NotNullExt{ public static T Ensure<T>(this T value,string message=null) where T:class { if(ReferenceEquals(null,value) throw new Exception(message??"Null value"); return value; } } public void DoSomething(MyClass a) { a=a.Ensure("foo"); // go ... } ```
11,928
25,824,417
Before Django 1.7 I used to define a per-project `fixtures` directory in the settings: ``` FIXTURE_DIRS = ('myproject/fixtures',) ``` and use that to place my `initial_data.json` fixture storing the default **groups** essential for the whole project. This has been working well for me as I could keep the design clean by separating per-project data from app-specific data. Now with Django 1.7, `initial_data` fixtures have been deprecated, [suggesting](https://docs.djangoproject.com/en/1.7/howto/initial-data/#automatically-loading-initial-data-fixtures) to include [data migrations](https://docs.djangoproject.com/en/1.7/topics/migrations/#data-migrations) together with app's schema migrations; leaving no obvious choice for global per-project initial data. Moreover the new [migrations framework](https://docs.djangoproject.com/en/1.7/topics/migrations/) installs all legacy initial data fixtures **before** executing migrations for the compliant apps (including the `django.contrib.auth` app). This behavior causes my fixture containing default groups to **fail installation**, since the `auth_group` table is not present in the DB yet. Any suggestions on how to (elegantly) make fixtures run **after** all the migrations, or at least after the auth app migrations? Or any other ideas to solve this problem? I find fixtures a great way for providing initial data and would like to have a simple and clean way of declaring them for automatic installation. The new [RunPython](https://docs.djangoproject.com/en/1.7/ref/migration-operations/#runpython) is just too cumbersome and I consider it an overkill for most purposes; and it seems to be only available for per-app migrations.
2014/09/13
[ "https://Stackoverflow.com/questions/25824417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2263517/" ]
If you absolutely want to use fixtures, just use `RunPython` and `call_command` in your data migrations. ``` from django.db import migrations from django.core.management import call_command def add_data(apps, schema_editor): call_command('loaddata', 'thefixture.json') def remove_data(apps, schema_editor): call_command('flush') class Migration(migrations.Migration): dependencies = [ ('roundtable', '0001_initial'), ] operations = [ migrations.RunPython( add_data, reverse_code=remove_data), ] ``` However, it is recommended to load data using Python code and the Django ORM, as you won't have to face integrity issues. [Source](http://andrewsforge.com/article/upgrading-django-to-17/part-2-migrations-in-django-16-and-17/#data-migrations-in-django-17).
I recommend using factories instead of fixtures; fixtures are a mess and difficult to maintain. It is better to use factory_boy with Django.
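A minimal sketch of what that looks like with factory_boy (the `Group` model matches the default-groups use case above; the name pattern is illustrative):

```
import factory
from django.contrib.auth.models import Group


class GroupFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Group

    # generates group-0, group-1, ... unless a name is passed in
    name = factory.Sequence(lambda n: 'group-%d' % n)


# in a test or a setup script:
admins = GroupFactory(name='admins')
```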
11,929
28,250,578
how to set proper document root for vagrant. Now it takes docroot from the wrong place. I'm trying to run laravel project, so it has to be not /var/www/project but /vae/www/project/public... My YAML file: ``` --- vagrantfile-local: vm: box: puphpet/debian75-x64 box_url: puphpet/debian75-x64 hostname: '' memory: '512' cpus: '1' chosen_provider: virtualbox network: private_network: 192.168.56.101 forwarded_port: 1ztIcBOBAG3R: host: '7958' guest: '22' post_up_message: '' provider: virtualbox: modifyvm: natdnshostresolver1: on vmware: numvcpus: 1 parallels: cpus: 1 provision: puppet: manifests_path: puphpet/puppet manifest_file: site.pp module_path: puphpet/puppet/modules options: - '--verbose' - '--hiera_config /vagrant/puphpet/puppet/hiera.yaml' - '--parser future' synced_folder: 2jsmp5Xo8wAe: owner: www-data group: www-data source: 'C:\\Users\\Vygandas\\Documents\\git\\project.x\\web' target: /var/www/projectx sync_type: default rsync: args: - '--verbose' - '--archive' - '-z' exclude: - .vagrant/ auto: 'false' usable_port_range: start: 10200 stop: 10500 ssh: host: null port: null private_key_path: null username: vagrant guest_port: null keep_alive: true forward_agent: false forward_x11: false shell: 'bash -l' vagrant: host: detect server: install: '1' packages: { } users_groups: install: '1' groups: { } users: { } cron: install: '1' jobs: { } firewall: install: '1' rules: null apache: install: '1' settings: user: www-data group: www-data default_vhost: true manage_user: false manage_group: false sendfile: 0 modules: - rewrite vhosts: 495wa1uc3p0z: servername: projectx.dev serveraliases: - www.projectx.dev docroot: /var/www/projectx/public port: '80' setenv: - 'APP_ENV dev' directories: 8yngfatheg7u: provider: directory path: /var/www/projectx/public options: - Indexes - FollowSymlinks - MultiViews allow_override: - All require: - all - granted custom_fragment: '' engine: php custom_fragment: '' ssl_cert: '' ssl_key: '' ssl_chain: '' ssl_certs_dir: '' mod_pagespeed: 0 nginx: install: '0' settings: default_vhost: 1 proxy_buffer_size: 128k proxy_buffers: '4 256k' upstreams: { } vhosts: ksovqgz8jsgn: proxy: '' server_name: awesome.dev server_aliases: - www.awesome.dev www_root: /var/www/awesome listen_port: '80' location: \.php$ index_files: - index.html - index.htm - index.php envvars: - 'APP_ENV dev' engine: php client_max_body_size: 1m ssl_cert: '' ssl_key: '' php: install: '1' version: '56' composer: '1' composer_home: '' modules: php: - cli - intl - mcrypt - gd - imagick - mysql pear: { } pecl: - pecl_http ini: display_errors: On error_reporting: '-1' session.save_path: /var/lib/php/session timezone: America/Chicago mod_php: 0 hhvm: install: '0' nightly: 0 composer: '1' composer_home: '' settings: host: 127.0.0.1 port: '9000' ini: display_errors: On error_reporting: '-1' timezone: null xdebug: install: '0' settings: xdebug.default_enable: '1' xdebug.remote_autostart: '0' xdebug.remote_connect_back: '1' xdebug.remote_enable: '1' xdebug.remote_handler: dbgp xdebug.remote_port: '9000' xhprof: install: '0' wpcli: install: '0' version: v0.17.1 drush: install: '0' version: 6.3.0 ruby: install: '1' versions: gA1kSNQgqjbS: version: '' nodejs: install: '0' npm_packages: { } python: install: '1' packages: { } versions: S0v3NX4H3glU: version: '' mysql: install: '1' override_options: { } root_password: '123' adminer: 0 databases: 3kES6Zw0Brtz: grant: - ALL name: projectx host: localhost user: projectxuser password: '123' sql_file: '' postgresql: install: '0' settings: root_password: '123' user_group: 
postgres encoding: UTF8 version: '9.3' databases: { } adminer: 0 mariadb: install: '0' override_options: { } root_password: '123' adminer: 0 databases: { } version: '10.0' sqlite: install: '0' adminer: 0 databases: { } mongodb: install: '0' settings: auth: 1 port: '27017' databases: { } redis: install: '0' settings: conf_port: '6379' mailcatcher: install: '1' settings: smtp_ip: 0.0.0.0 smtp_port: 1025 http_ip: 0.0.0.0 http_port: '1080' mailcatcher_path: /usr/local/rvm/wrappers/default from_email_method: inline beanstalkd: install: '0' settings: listenaddress: 0.0.0.0 listenport: '13000' maxjobsize: '65535' maxconnections: '1024' binlogdir: /var/lib/beanstalkd/binlog binlogfsync: null binlogsize: '10485760' beanstalk_console: 0 binlogdir: /var/lib/beanstalkd/binlog rabbitmq: install: '0' settings: port: '5672' elastic_search: install: '1' settings: version: 1.4.1 java_install: true solr: install: '0' settings: version: 4.10.2 port: '8984' ``` I can access it only via <http://projectx.dev/public> ... Please help me!
2015/01/31
[ "https://Stackoverflow.com/questions/28250578", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2470912/" ]
I can see you have: `vhosts: 495wa1uc3p0z: servername: projectx.dev serveraliases: - www.projectx.dev docroot: /var/www/projectx/public port: '80' setenv: - 'APP_ENV dev' directories: 8yngfatheg7u: provider: directory path: /var/www/projectx/public options: - Indexes - FollowSymlinks - MultiViews allow_override: - All require: - all - granted custom_fragment: '' engine: php custom_fragment: '' ssl_cert: '' ssl_key: '' ssl_chain: '' ssl_certs_dir: ''` Which tells me you probably started with `/var/www/projectx`, ran `$ vagrant up`, changed `/var/www/projectx` to `/var/www/projectx/public` and didn't do a `$ vagrant provision` to apply the changes.
It was right that I needed to run "vagrant provision" after modifications, but another point is that the config should look like this: ``` vhosts: 495wa1uc3p0z: servername: projectx.dev docroot: /var/www/projectx/public port: '80' setenv: - 'APP_ENV dev' directories: 495wa1uc3p0z: provider: directory path: /var/www/projectx/public options: - Indexes - FollowSymlinks - MultiViews allow_override: - All allow: - All custom_fragment: '' ```
11,930
57,417,939
I am currently running a Python script from a batch file. In the script, I have some print functions to monitor the running code. The printed information is then shown in the command window. In the meantime, I also want to save all this print-out text to a log file, so I can track it in the long run. Currently, to do this, I need both a print call and a text.write call. This causes some trouble in maintenance, because every time I change some printed text, I also need to change the text in the write call. I also feel it is not the most efficient way to do it. For example: ``` start_time = datetime.now() print("This code is run at " + str(start_time) + "\n") log_file.write("This code is run at " + str(start_time) + "\n") ``` I would like to use only the print function, so I can see the output in the command window and have all the printed information saved to a log file at the same time.
2019/08/08
[ "https://Stackoverflow.com/questions/57417939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11730027/" ]
For a better solution in the long run, consider the built-in [logging module](https://docs.python.org/3/library/logging.html). You can configure multiple destinations, such as stdout and files, plus log rotation, formatting, and importance levels. Example: ```py import logging logging.basicConfig(filename='log_file', filemode='w', level=logging.DEBUG) logging.info("This code is run at %s", start_time) ```
Just make a function ``` def print_and_log(text): print(text) with open("logfile.txt", "a") as logfile: logfile.write(text+"\n") ``` Then wherever you need to print, use this function and it will also log.
11,931
47,979,852
I just do this: ``` t = Variable(torch.randn(5)) t =t.cuda() print(t) ``` but it takes 5 to 10 minutes, every time. I used the CUDA samples to test bandwidth, and it's fine. Then I used pdb to find which part takes the most time, and found this in `/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__`: ``` def _lazy_new(cls, *args, **kwargs): _lazy_init() # We need this method only for lazy init, so we can remove it del _CudaBase.__new__ return super(_CudaBase, cls).__new__(cls, *args, **kwargs) ``` It takes about 5 minutes at the `return`. I don't know how to solve my problem with this information. My environment is: Ubuntu 16.04 + CUDA 9.1
2017/12/26
[ "https://Stackoverflow.com/questions/47979852", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9141892/" ]
There’s a CUDA version mismatch between the CUDA my PyTorch was compiled with and the CUDA I'm running. I divided the official installation command > > conda install pytorch torchvision cuda90 -c pytorch > > > into two sections: > > conda install -c soumith magma-cuda90 > > > conda install pytorch torchvision -c soumith > > > The second command installed pytorch-0.2.0 by default, which matches CUDA 8.0. After I updated my PyTorch to 0.3.0, this command only takes one second.
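A quick way to confirm this kind of mismatch before reinstalling is to compare what PyTorch was compiled against with what the runtime sees; a minimal sketch that only reads version info:

```python
import torch

print("compiled with CUDA:", torch.version.cuda)   # e.g. '9.0' for the cuda90 build
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If the compiled CUDA version does not match the toolkit/driver on the machine, installing the matching PyTorch build (as above) removes the long lazy-init delay.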
Try doing it this way: ``` torch.cuda.synchronize() t = Variable(torch.randn(5)) t =t.cuda() print(t) ``` Then, it should be *blazing fast* depending on your GPU memory, at least on every *re-run* it should be.
11,932
45,468,073
I have successfully installed Pandas through Anaconda in PyCharm. Unfortunately when I run Import Pandas this is what I get as the output: ``` /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 "/Users/PycharmProjects/Security upload/Security upload.py" Traceback (most recent call last): File "/Users/PycharmProjects/Security upload/Security upload.py", line 3, in <module> import pandas File "/Users/Library/Python/2.7/lib/python/site- packages/pandas/__init__.py", line 23, in <module> from pandas.compat.numpy import * File "/Users/Library/Python/2.7/lib/python/site- packages/pandas/compat/__init__.py", line 361, in <module> from dateutil import parser as _date_parser File "/Users/Library/Python/2.7/lib/python/site- packages/dateutil/parser.py", line 43, in <module> from . import tz File "/Users/Library/Python/2.7/lib/python/site- packages/dateutil/tz/__init__.py", line 1, in <module> from .tz import * File "/Users/Library/Python/2.7/lib/python/site- packages/dateutil/tz/tz.py", line 23, in <module> from ._common import tzname_in_python2, _tzinfo, _total_seconds File "/Users/Library/Python/2.7/lib/python/site- packages/dateutil/tz/_common.py", line 2, in <module> from six.moves import _thread ImportError: cannot import name _thread ``` Could someone provide some insight on how to approach a solution?
2017/08/02
[ "https://Stackoverflow.com/questions/45468073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7678127/" ]
According to [here](https://github.com/opencobra/cobrapy/issues/490) and [here](https://github.com/awslabs/aws-shell/issues/161), you need to fix your dateutil package. ``` pip uninstall python-dateutil pip install python-dateutil --upgrade ``` Maybe this: ``` sudo pip uninstall python-dateutil sudo pip install python-dateutil==2.2 ```
I was facing the same issue: installing Jupyter gave a few errors, and reinstalling IPython worked for me ``` sudo -H pip install --ignore-installed -U ipython ``` I also needed to reinstall pyzmq ``` sudo -H pip install --ignore-installed -U pyzmq ``` After this I re-ran `import pandas` in IPython and it worked
11,933
66,996,203
I am trying to flip the lat long in an exported CSV but am having a hard time getting Python to recognize the rows to reorder them. I need the below data to read W#### N#####, W#### N#### so that QGIS's WKT layer import will work correctly later, after I finish the formatting for WKT using Linestring(). ``` Example Data: name,start_y,start_x,end_y,end_x name2: 10,N 42.50105, W 122.87444, N 42.50079, W 122.74144 name3: 11,N 42.49398, W 123.47816, N 42.49453, W 123.29451 name4: 12,N 42.48980, W 123.47812, N 42.49036, W 123.29027 name5: 13,N 42.49403, W 123.20165, N 42.49411, W 123.12354 ``` The code I'm trying to use is: ``` with open('mycsv.csv', 'r') as infile, open('mycsv.csv', 'a') as outfile: # output dict needs a list for new column ordering writer = csv.DictWriter(outfile, fieldnames= ['name', 'start_x', 'start_y', 'end_x', 'end_y'], extrasaction='ignore', delimiter = ',') # reorder the header first writer.writeheader() for row in csv.DictReader(infile): # writes the reordered rows to the new file writer.writerow(row) ``` When I use this code the CSV stays the same. So I ran: ``` import sys f = open(sys.argv[0],'r') reader = csv.reader(f,delimiter=",") num_cols = len(next(reader)) # Read first line and count columns print(num_cols) ``` and it tells me that it's only counting 1 column, so it makes sense that the first snippet isn't working, because it's not reading the CSV as separate columns but as one single line. What am I missing? Python 3.9 is what I'm using. Thanks in advance! PS, this is my first Python program and I have no formal coding education, so please excuse any rookie mistakes
2021/04/08
[ "https://Stackoverflow.com/questions/66996203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15578731/" ]
- You can find the `Access Key ID` in `IAM` user management. - The `Secret Access Key` works like the private SSH key you have created (or can create) to log into your account: you would register the public key on IAM and keep the secret key to use the AWS CLI. - You can find the `Region` in the information for the bucket that you want to use, or at the top right of the page. - I imagine the path is simply where in the bucket you want to store information, so it is for you to decide. - The `Bucket` is simply the name of the bucket that you want to use.
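If it helps to see where each of those values ends up, here is a minimal `boto3` sketch; the bucket name, region, path and key strings are placeholders, not real values:

```python
import boto3

# every string below is a placeholder you would replace with your own
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",          # IAM -> users -> security credentials
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",  # shown once, when the key is created
    region_name="us-east-2",                         # the region the bucket lives in
)
s3.upload_file("local_file.txt", "your-bucket-name", "some/path/local_file.txt")
```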
* For the region: just check the upper right of your console; you can choose any one. The default region when you access a resource from the AWS Management Console is US East (Ohio) (us-east-2). * For the Bucket: `s3 -> navigation pane -> buckets` and search for your bucket; if it doesn't exist, create one in a specific region. If you want the ARN of the bucket (AWS identifies resources by ARN), click on the bucket and go to properties. * For the Access key ID and Secret Access Key: you will find these under `IAM -> users -> select your name -> credentials` (if you don't find one, you need to create one; by default it is not created). * For the ACL: choose your bucket and look under permissions. * For the Path: it can either be `/` for bucket-level permissions or `/*` for object-level permissions.
11,934
3,211,031
``` def file_open(filename): fo=open(filename,'r') #fo.seek(5) fo.read(3) fo.close() file_open("file_ro.py") ``` I expect the above program to print the first 3 bytes of the file, but it prints nothing. When I run these lines in the interactive Python prompt, I get the expected output!
2010/07/09
[ "https://Stackoverflow.com/questions/3211031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/246365/" ]
`fo.read()` *returns* the data that was read and you never assign it to anything. You are talking about 'output', but your code isn't supposed to output anything. Are you trying to print those three bytes? In that case you are looking for something like ``` f = open('file_ro.py', 'r') print f.read(3) ``` You are getting the 'expected output' in the interactive prompt because it prints the result of the evaluation if it is not assigned anywhere (and if it is not `None`), just like in the `fo.read(3)` line. Or something along those lines; maybe someone can explain it better.
``` import sys def file_open(filename): fo=open(filename,'r') #fo.seek(5) read_data=fo.read(3) fo.close() print read_data file_open("file.py") ```
11,935
60,827,864
I am trying to have a master dag which will create further dags based on my need. I have the following python file inside the *dags\_folder* in *airflow.cfg*. This code creates the master dag in database. This master dag should read a text file and should create dags for each line in the text file. But the dags created inside the master dag are not added to the database. What is the correct way to create it? Version details: Python version: 3.7 Apache-airflow version: 1.10.8 ``` import datetime as dt from airflow import DAG from airflow.operators.bash_operator import BashOperator from airflow.operators.python_operator import PythonOperator root_dir = "/home/user/TestSpace/airflow_check/res" print("\n\n ===> \n Dag generator") default_args = { 'owner': 'airflow', 'start_date': dt.datetime(2020, 3, 22, 00, 00, 00), 'concurrency': 1, 'retries': 0 } def greet(_name): message = "Greetings {} at UTC: {} Local: {}\n".format(_name, dt.datetime.utcnow(), dt.datetime.now()) f = open("{}/greetings.txt".format(root_dir), "a+") print("\n\n =====> {}\n\n".format(message)) f.write(message) f.close() def create_dag(dag_name): with DAG(dag_name, default_args=default_args, schedule_interval='*/2 * * * *', catchup=False ) as i_dag: i_opr_greet = PythonOperator(task_id='greet', python_callable=greet, op_args=["{}_{}".format("greet", dag_name)]) i_echo_op = BashOperator(task_id='echo', bash_command='echo `date`') i_opr_greet >> i_echo_op return i_dag def create_all_dags(): all_lines = [] f = open("{}/../dag_names.txt".format(root_dir), "r") for x in f: all_lines.append(str(x)) f.close() for line in all_lines: print("Dag creation for {}".format(line)) globals()[line] = create_dag(line) with DAG('master_dag', default_args=default_args, schedule_interval='*/1 * * * *', catchup=False ) as dag: echo_op = BashOperator(task_id='echo', bash_command='echo `date`') create_op = PythonOperator(task_id='create_dag', python_callable=create_all_dags) echo_op >> create_op ```
2020/03/24
[ "https://Stackoverflow.com/questions/60827864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3211801/" ]
You have 2 options: 1. **Use SubDagOperator**: [Example DAG](https://github.com/apache/airflow/blob/1.10.9/airflow/operators/subdag_operator.py). Use it if your schedule intervals can be the same. 2. **Write a Python DAG file**: from your master DAG, create Python files in your AIRFLOW_HOME containing DAGs (see the sketch below). You can use the Jinja2 templating engine for this.
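A minimal sketch of option 2; the template string and file layout here are made up for illustration, and a real setup would likely use Jinja2 with named variables:

```python
import os

# hypothetical template; only {dag_id} is substituted
DAG_TEMPLATE = '''
from datetime import datetime
from airflow import DAG
from airflow.operators.bash_operator import BashOperator

with DAG("{dag_id}", start_date=datetime(2020, 3, 22),
         schedule_interval="*/2 * * * *", catchup=False) as dag:
    BashOperator(task_id="echo", bash_command="echo `date`")
'''

def write_dag_file(dag_id, dags_folder):
    # the scheduler picks up any new .py file in the dags folder on its next parse
    with open(os.path.join(dags_folder, dag_id + ".py"), "w") as f:
        f.write(DAG_TEMPLATE.format(dag_id=dag_id))
```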
Have a look at the TriggerDagRunOperator: <https://airflow.apache.org/docs/stable/_api/airflow/operators/dagrun_operator/index.html> Example usage: <https://github.com/apache/airflow/blob/master/airflow/example_dags/example_trigger_controller_dag.py>
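For reference, a minimal sketch of that operator inside the controller ("master") DAG; the dag ids are placeholders:

```python
from airflow.operators.dagrun_operator import TriggerDagRunOperator

# inside the controller DAG definition (Airflow 1.10.x import path)
trigger = TriggerDagRunOperator(
    task_id="trigger_child_dag",
    trigger_dag_id="child_dag_id",  # the DAG to kick off; must already be registered
    dag=dag,
)
```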
11,938
52,345,911
I'm working on a python(3.6) project in which I need to clone a GitHub repo which will have the directory structure as: ``` |parent_DIR |--sub_DIR |file1.... |file2.... |--sub_DIR2 |file1... ``` Now I need to get the following info: ``` 1. Parent directory name 2. How many subdirectories there are 3. Names of the subdirectories ``` Here's how I'm cloning the GitHub repo (a sketch of the inspection I'm after follows the snippet): **from views.py:** ``` # clone the github repo tempdir = tempfile.mkdtemp() saved_unmask = os.umask(0o077) out_dir = os.path.join(tempdir) Repo.clone_from(data['repo_url'], out_dir) ```
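For reference, this is roughly the inspection I'm after once the clone finishes; a rough sketch where `out_dir` is the clone target from the snippet above:

```python
import os

parent_dir = os.path.basename(os.path.normpath(out_dir))  # parent directory name
sub_dirs = [d for d in os.listdir(out_dir)
            if os.path.isdir(os.path.join(out_dir, d))]
print(parent_dir, len(sub_dirs), sub_dirs)  # name, count, subdirectory names
```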
2018/09/15
[ "https://Stackoverflow.com/questions/52345911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7644562/" ]
You can try setting your pop-over viewController's modalPresentationStyle to either `.overCurrentContext` or `.overFullScreen`. > > case overCurrentContext: > > > > > > > A presentation style where the content is displayed over another view controller’s content. > > > > > > > > > This means it will present the next viewController over the presenting viewController's content. **So in the case of container view controllers:** if you have a `tabBarController`, the `tabBar` will still allow the user to interact with it. > > case overFullScreen > > > > > > > A view presentation style in which the presented view covers the screen. > > > > > > > > > This means it will present the next viewController over the full screen, so the `tabBar` will not be interactive till the presentation finishes. ``` func presentNextController() { // In case your viewController is in storyboard or any other initialisation guard let nextVC = storyboard.instantiateViewController(with: "nextVC") as? NextViewController else { return } nextVC.modalPresentationStyle = .overFullScreen // set your custom transitioning delegate self.present(nextVC, animated: true, completion: nil) } ```
You need to set your new view controller's ``` modalPresentationStyle = .overCurrentContext ``` Do this when you initialise your view controller, or in the storyboard.
11,939
37,888,565
I'm having an issue with ctypes. I think my type conversion is correct and the error isn't making sense to me. Error on the line `arg = ct.c_char_p(logfilepath)`: TypeError: bytes or integer address expected instead of str instance I tried this in both Python 3.5 and 3.4. The function I'm calling: ``` stream_initialize('stream_log.txt') ``` Stream_initialize code: ``` def stream_initialize(logfilepath): f = shim.stream_initialize arg = ct.c_char_p(logfilepath) result = f(arg) if result: print(find_shim_error(result)) ```
2016/06/17
[ "https://Stackoverflow.com/questions/37888565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5344673/" ]
`c_char_p` takes `bytes` object so you have to convert your `string` to `bytes` first: ``` ct.c_char_p(logfilepath.encode('utf-8')) ``` Another solution is using the `c_wchar_p` type which takes a `string`.
*For completeness' sake*: It is also possible to call it as `stream_initialize(b'stream_log.txt')`. Note the `b` in front of the string, which causes it to be interpreted as a `bytes` object.
11,940
42,441,687
I use `pyspark.sql.functions.udf` to define a UDF that uses a class imported from a .py module written by me. ``` from czech_simple_stemmer import CzechSimpleStemmer #this is my class in my module from pyspark.sql.functions import udf from pyspark.sql.types import StringType ...some code here... def clean_one_raw_doc(my_raw_doc): ... calls something from CzechSimpleStemmer ... udf_clean_one_raw_doc = udf(clean_one_raw_doc, StringType()) ``` When I call ``` df = spark.sql("SELECT * FROM mytable").withColumn("output_text", udf_clean_one_raw_doc("input_text")) ``` I get a typical huge error message where probably this is the relevant part: ``` File "/data2/hadoop/yarn/local/usercache/ja063930/appcache/application_1472572954011_132777/container_e23_1472572954011_132777_01_000003/pyspark.zip/pyspark/serializers.py", line 431, in loads return pickle.loads(obj, encoding=encoding) ImportError: No module named 'czech_simple_stemmer' ``` Do I understand it correctly that pyspark distributes `udf_clean_one_raw_doc` to all the worker nodes but `czech_simple_stemmer.py` is missing there in the nodes' python installations (being present only on the edge node where I run the spark driver)? And if yes, is there any way how I could tell pyspark to distribute this module too? I guess I could probably copy manually `czech_simple_stemmer.py` to all the nodes' pythons but 1) I don't have the admin access to the nodes, and 2) even if I beg the admin to put it there and he does it, then in case I need to do some tuning to the module itself, he'd probably kill me.
2017/02/24
[ "https://Stackoverflow.com/questions/42441687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4985473/" ]
SparkContext.addPyFile("my\_module.py") will do it.
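A minimal sketch of how that fits together; the path is illustrative, and this assumes an existing SparkSession named `spark`:

```python
# ship the module to every executor before the UDF is first used
sc = spark.sparkContext
sc.addPyFile("/path/on/edge/node/czech_simple_stemmer.py")

def clean_one_raw_doc(my_raw_doc):
    # import inside the function so it resolves on the workers as well
    from czech_simple_stemmer import CzechSimpleStemmer
    ...  # call the stemmer as in the original code
```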
from the spark-submit [documentation](http://spark.apache.org/docs/2.0.1/submitting-applications.html) > > For Python, you can use the --py-files argument of spark-submit to add > .py, .zip or .egg files to be distributed with your application. If > you depend on multiple Python files we recommend packaging them into a > .zip or .egg. > > >
11,941
47,944,185
It's my first post, I hope it will be well done. I'm trying to run the following ZipLine Algo with local AAPL data : ``` import pandas as pd from collections import OrderedDict import pytz from zipline.api import order, symbol, record, order_target from zipline.algorithm import TradingAlgorithm data = OrderedDict() data['AAPL'] = pd.read_csv('AAPL.csv', index_col=0, parse_dates=['Date']) panel = pd.Panel(data) panel.minor_axis = ['Open', 'High', 'Low', 'Close', 'Volume', 'Price'] panel.major_axis = panel.major_axis.tz_localize(pytz.utc) print panel["AAPL"] def initialize(context): context.security = symbol('AAPL') def handle_data(context, data): MA1 = data[context.security].mavg(50) MA2 = data[context.security].mavg(100) date = str(data[context.security].datetime)[:10] current_price = data[context.security].price current_positions = context.portfolio.positions[symbol('AAPL')].amount cash = context.portfolio.cash value = context.portfolio.portfolio_value current_pnl = context.portfolio.pnl # code (this will come under handle_data function only) if (MA1 > MA2) and current_positions == 0: number_of_shares = int(cash / current_price) order(context.security, number_of_shares) record(date=date, MA1=MA1, MA2=MA2, Price= current_price, status="buy", shares=number_of_shares, PnL=current_pnl, cash=cash, value=value) elif (MA1 < MA2) and current_positions != 0: order_target(context.security, 0) record(date=date, MA1=MA1, MA2=MA2, Price=current_price, status="sell", shares="--", PnL=current_pnl, cash=cash, value=value) else: record(date=date, MA1=MA1, MA2=MA2, Price=current_price, status="--", shares="--", PnL=current_pnl, cash=cash, value=value) #initializing trading enviroment algo_obj = TradingAlgorithm(initialize=initialize, handle_data=handle_data) #run algo perf_manual = algo_obj.run(panel) #code #calculation print "total pnl : " + str(float(perf_manual[["PnL"]].iloc[-1])) buy_trade = perf_manual[["status"]].loc[perf_manual["status"] == "buy"].count() sell_trade = perf_manual[["status"]].loc[perf_manual["status"] == "sell"].count() total_trade = buy_trade + sell_trade print "buy trade : " + str(int(buy_trade)) + " sell trade : " + str(int(sell_trade)) + " total trade : " + str(int(total_trade)) ``` I was inspired by <https://www.quantinsti.com/blog/introduction-zipline-python/> and <https://www.quantinsti.com/blog/importing-csv-data-zipline-backtesting/>. 
I get this error: ``` Traceback (most recent call last): File "C:/Users/main/Desktop/docs/ALGO_TRADING/_DATAS/_zipline_data_bundle /temp.py", line 51, in <module> algo_obj = TradingAlgorithm(initialize=initialize, handle_data=handle_data) File "C:\Python27-32\lib\site-packages\zipline\algorithm.py", line 273, in __init__ self.trading_environment = TradingEnvironment() File "C:\Python27-32\lib\site-packages\zipline\finance\trading.py", line 99, in __init__ self.bm_symbol, File "C:\Python27-32\lib\site-packages\zipline\data\loader.py", line 166, in load_market_data environ, File "C:\Python27-32\lib\site-packages\zipline\data\loader.py", line 230, in ensure_benchmark_data last_date, File "C:\Python27-32\lib\site-packages\zipline\data\benchmarks.py", line 50, in get_benchmark_returns last_date File "C:\Python27-32\lib\site-packages\pandas_datareader\data.py", line 137, in DataReader session=session).read() File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 181, in read params=self._get_params(self.symbols)) File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 79, in _read_one_data out = self._read_url_as_StringIO(url, params=params) File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 90, in _read_url_as_StringIO response = self._get_response(url, params=params) File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 139, in _get_response raise RemoteDataError('Unable to read URL: {0}'.format(url)) pandas_datareader._utils.RemoteDataError: Unable to read URL: http://www.google.com/finance/historical?q=SPY&startdate=Dec+29%2C+1989&enddate=Dec+20%2C+2017&output=csv ``` I don't understand: "<http://www.google.com/finance/historical?q=SPY&startdate=Dec+29%2C+1989&enddate=Dec+20%2C+2017&output=csv>". I didn't ask for an online data request... and it's not the 'SPY' stock but 'AAPL'... What does this error mean to you? Thanks a lot for your help! C.
2017/12/22
[ "https://Stackoverflow.com/questions/47944185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9131416/" ]
The only reference and workaround I found regarding this issue is [here](https://github.com/pydata/pandas-datareader/issues/394): ```py from pandas_datareader.google.daily import GoogleDailyReader @property def url(self): return 'http://finance.google.com/finance/historical' GoogleDailyReader.url = url ```
Do: ``` pip install fix_yahoo_finance ``` then modify the file zipline/lib/pythonx.x/site-packages/zipline/data/benchmarks.py and add the following two statements to it: ``` import fix_yahoo_finance as yf yf.pdr_override() ``` then change the following instruction: ``` data = pd_reader.DataReader(symbol, 'Google', first_date, last_date) ``` to: ``` data = pd_reader.get_data_yahoo(symbol, first_date, last_date) ```
11,942
56,057,132
I define some func here, it will change all user defined attribtutes into upper case ``` def up(name, parent, attr): user_defined_attr = ((k, v) for k, v in attr.items() if not k.startswith('_')) up_attr = {k.upper(): v for k,v in user_defined_attr} return type(name, parent, up_attr) ``` For example: ``` my_class = up('my_class', (object,), {'some_attr': 'some_value'}) hasattr(my_class, 'SOME_ATTR') True ``` Here is some words from python doc about **metaclass** [https://docs.python.org/2/reference/datamodel.html?highlight=**metaclass**#**metaclass**](https://docs.python.org/2/reference/datamodel.html?highlight=__metaclass__#__metaclass__) ``` The appropriate metaclass is determined by the following precedence rules: If dict['__metaclass__'] exists, it is used. Otherwise, if there is at least one base class, its metaclass is used (this looks for a __class__ attribute first and if not found, uses its type). Otherwise, if a global variable named __metaclass__ exists, it is used. Otherwise, the old-style, classic metaclass (types.ClassType) is used. ``` So I did some test ``` >>> def up(name, parent, attr): ... user_defined_attr = ((k, v) for k, v in attr.items() if not k.startswith('_')) ... up_attr = {k.upper(): v for k,v in user_defined_attr} ... return type(name, parent, up_attr) ... >>> >>> >>> __metaclass__ = up >>> >>> class C1(object): ... attr1 = 1 ... >>> hasattr(C1, 'ATTR1') False ``` Not working for the global var case, why?
2019/05/09
[ "https://Stackoverflow.com/questions/56057132", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5245972/" ]
Okay, it seems this behaviour cannot be avoided, so you should parse the dates manually. Fortunately the parsing is pretty simple. For a date in ISO 8601 format, the mask of the date string looks like this: ``` <yyyy>-<mm>-<dd>T<hh>:<mm>:<ss>(.<ms>)?(Z|(+|-)<hh>:<mm>)? ``` 1. Getting date and time separately ----------------------------------- The `T` in the string separates the date from the time, so we can just split the ISO string by `T` ``` var isoString = `2019-05-09T13:26:10.979Z` var [dateString, timeString] = isoString.split("T") ``` 2. Extracting date parameters from the date string ---------------------------------------------- So, we have `dateString == "2019-05-09"`. It is now pretty simple to get these parameters separately ``` var [year, month, date] = dateString.split("-").map(Number) ``` 3. Handling the time string ----------------------- The time string needs more complex handling because of its variability. We have `timeString == "13:26:10Z"`. It is also possible that `timeString == "13:26:10"` or `timeString == "13:26:10+01:00"` ``` var clearTimeString = timeString.split(/[Z+-]/)[0] var [hours, minutes, seconds] = clearTimeString.split(":").map(Number) var offset = 0 // we store the offset in minutes, negated relative to native JS Date getTimezoneOffset if (timeString.includes("Z")) { // then clearTimeString references the UTC time offset = new Date().getTimezoneOffset() * -1 } else { var clearOffset = timeString.split(/[+-]/)[1] if (clearOffset) { // then we have an offset tail var negation = timeString.includes("+") ? 1 : -1 // detect whether the offset is positive or negative var [offsetHours, offsetMinutes] = clearOffset.split(":").map(Number) offset = (offsetMinutes + offsetHours * 60) * negation } // otherwise we do nothing because there is no offset marker } ``` At this point we have our data representation in numeric format: `year`, `month`, `date`, `hours`, `minutes`, `seconds` and `offset` in minutes. 4. Using the native JS Date constructor -------------------------------------- Yes, we cannot avoid it, because it is too useful. JS `Date` automatically adjusts the date for all negative and too-big values, so we can just pass all parameters in raw form and the JS `Date` constructor will create the right date for us automatically! ``` new Date(year, month - 1, date, hours, minutes + offset, seconds) ``` Voila! Here is a fully working example. ```js function convertHistoricalDate(isoString) { var [dateString, timeString] = isoString.split("T") var [year, month, date] = dateString.split("-").map(Number) var clearTimeString = timeString.split(/[Z+-]/)[0] var [hours, minutes, seconds] = clearTimeString.split(":").map(Number) var offset = 0 // we store the offset in minutes, negated relative to native JS Date getTimezoneOffset if (timeString.includes("Z")) { // then clearTimeString references the UTC time offset = new Date().getTimezoneOffset() * -1 } else { var clearOffset = timeString.split(/[+-]/)[1] if (clearOffset) { // then we have an offset tail var negation = timeString.includes("+") ? 1 : -1 // detect whether the offset is positive or negative var [offsetHours, offsetMinutes] = clearOffset.split(":").map(Number) offset = (offsetMinutes + offsetHours * 60) * negation } // otherwise we do nothing because there is no offset marker } return new Date(year, month - 1, date, hours, minutes + offset, seconds) } var testDate1 = convertHistoricalDate("1894-01-01T00:00:00+01:00") var testDate2 = convertHistoricalDate("1893-01-01T00:00:00+01:00") var testDate3 = convertHistoricalDate("1894-01-01T00:00:00-01:00") var testDate4 = convertHistoricalDate("1893-01-01T00:00:00-01:00") console.log(testDate1.toLocaleDateString(), testDate1.toLocaleTimeString()) console.log(testDate2.toLocaleDateString(), testDate2.toLocaleTimeString()) console.log(testDate3.toLocaleDateString(), testDate3.toLocaleTimeString()) console.log(testDate4.toLocaleDateString(), testDate4.toLocaleTimeString()) ``` Note ---- In this case we get a `Date` instance with all of its own values (like `.getHours()`) **normalized**, including the timezone offset. `testDate1.toISOString()` will still return a weird result, but if you are working with this date it will probably fit your needs. Hope that helped :)
This might be the case when all browsers follow their own standards for encoding date formats (but I am not sure on this part). Anyways a simple fix for this is to apply the `toISOString` method. ```js const today = new Date(); console.log(today.toISOString()); ```
11,943
19,202,921
Here is my input: ``` >>> from decimal import Decimal >>> Decimal('114.3') Decimal('114.3') >>> Decimal(114.3) Decimal('114.2999999999999971578290569595992565155029296875') ``` I thought those two instances of Decimal have to be equal but Decimal with float arg seems to loose precision. Why is this? My python version is 2.7.3. Thanks in advance!
2013/10/05
[ "https://Stackoverflow.com/questions/19202921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/699931/" ]
The second line is giving you the exact decimal value of the binary floating-point number which approximates 114.3. This is almost all about binary floating point, not much about Decimal. See [the docs](http://docs.python.org/2/tutorial/floatingpoint.html) for details. Later: if using Python 3, see [these docs](http://docs.python.org/3/tutorial/floatingpoint.html) instead. Same basic thing, but Python 3 has more tools to help you explore cases "like this".
When you do `Decimal(114.3)`, you are creating a regular float object and then passing it to Decimal. The accuracy is lost due to binary floating-point imprecision when the float 114.3 is created, before Decimal ever gets to see it. There's no way to get that accuracy back. That's why Decimal accepts string representations as input, so it can see what you actually typed and use the right level of precision.
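To make that concrete, a tiny sketch of the three ways the value can be constructed:

```python
from decimal import Decimal

print(Decimal('114.3'))     # exact: Decimal('114.3')
print(Decimal(114.3))       # exact value of the nearest binary float to 114.3
print(Decimal(str(114.3)))  # str() gives the short repr, so this is Decimal('114.3') again
```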
11,944
16,739,894
I've found [this Library](https://github.com/pythonforfacebook/facebook-sdk/); it seems to be the official one. Then I [found this](https://stackoverflow.com/questions/10488913/how-to-obtain-a-user-access-token-in-python), but every time I find an answer, half of it is links to the [Facebook API Documentation](https://developers.facebook.com/docs/reference/php/facebook-getAccessToken/), which talks about Javascript or PHP and how to extract it from links! How do I do it in a simple Python script? NB: what I really don't understand is why we need a library and can't extract the `token` ourselves, if we can use `urllib` and `regex` to extract information.
2013/05/24
[ "https://Stackoverflow.com/questions/16739894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/861487/" ]
Javascript and PHP can be used as web development languages. You need a web front end for the user to grant permission so that you can obtain the access token. Rephrased: **You cannot obtain the access token programmatically; there must be manual user interaction.** In Python it will involve setting up a web server, for example a script to update the feed using facepy ``` import web from facepy import GraphAPI from urlparse import parse_qs url = ('/', 'index') app_id = "YOUR_APP_ID" app_secret = "APP_SECRET" post_login_url = "http://0.0.0.0:8080/" user_data = web.input(code=None) if not user_data.code: dialog_url = ( "http://www.facebook.com/dialog/oauth?" + "client_id=" + app_id + "&redirect_uri=" + post_login_url + "&scope=publish_stream" ) return "<script>top.location.href='" + dialog_url + "'</script>" else: graph = GraphAPI() response = graph.get( path='oauth/access_token', client_id=app_id, client_secret=app_secret, redirect_uri=post_login_url, code=user_data.code ) data = parse_qs(response) graph = GraphAPI(data['access_token'][0]) graph.post(path = 'me/feed', message = 'Your message here') ```
Here is a Gist I tried to make using `Tornado`, since the answer uses `web.py`: <https://gist.github.com/abdelouahabb/5647185>
11,945
54,958,169
I have the three following dataframes: ``` df_A = pd.DataFrame( {'id_A': [1, 1, 1, 1, 2, 2, 3, 3], 'Animal_A': ['cat','dog','fish','bird','cat','fish','bird','cat' ]}) df_B = pd.DataFrame( {'id_B': [1, 2, 2, 3, 4, 4, 5], 'Animal_B': ['dog','cat','fish','dog','fish','cat','cat' ]}) df_P = pd.DataFrame( {'id_A': [1, 1, 2, 3], 'id_B': [2, 3, 4, 5]}) df_A id_A Animal_A 0 1 cat 1 1 dog 2 1 fish 3 1 bird 4 2 cat 5 2 fish 6 3 bird 7 3 cat df_B id_B Animal_B 0 1 dog 1 2 cat 2 2 fish 3 3 dog 4 4 fish 5 4 cat 6 5 cat df_P id_A id_B 0 1 2 1 1 3 2 2 4 3 3 5 ``` And I would like to get an additional column to df\_P that tells the number of Animals shared between id\_A and id\_B. What I'm doing is: ``` df_P["n_common"] = np.nan for i in df_P.index.tolist(): id_A = df_P["id_A"][i] id_B = df_P["id_B"][i] df_P.iloc[i,df_P.columns.get_loc('n_common')] = len(set(df_A['Animal_A'][df_A['id_A']==id_A]).intersection(df_B['Animal_B'][df_B['id_B']==id_B])) ``` The result being: ``` df_P id_A id_B n_common 0 1 2 2.0 1 1 3 1.0 2 2 4 2.0 3 3 5 1.0 ``` Is there a faster, more pythonic, way to do this? Is there a way to avoid the for loop?
2019/03/02
[ "https://Stackoverflow.com/questions/54958169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11139882/" ]
Not sure if it is faster or more pythonic, but it avoids the for loop :) ``` import pandas as pd df_A = pd.DataFrame( {'id_A': [1, 1, 1, 1, 2, 2, 3, 3], 'Animal_A': ['cat','dog','fish','bird','cat','fish','bird','cat' ]}) df_B = pd.DataFrame( {'id_B': [1, 2, 2, 3, 4, 4, 5], 'Animal_B': ['dog','cat','fish','dog','fish','cat','cat' ]}) df_P = pd.DataFrame( {'id_A': [1, 1, 2, 3], 'id_B': [2, 3, 4, 5]}) df = pd.merge(df_A, df_P, on='id_A') df = pd.merge(df_B, df, on='id_B') df = df[df['Animal_A'] == df['Animal_B']].groupby(['id_A', 'id_B'])['Animal_A'].count().reset_index() df.rename({'Animal_A': 'n_common'},inplace=True,axis=1) ```
You can try the below: ``` df_A.merge(df_B, left_on = ['Animal_A'], right_on = ['Animal_B'] ).groupby(['id_A' ,'id_B']).count().reset_index().merge(df_P).drop('Animal_B', axis = 1).rename(columns = {'Animal_A': 'count'}) ```
11,948
42,282,577
I have this HTML ```html <div class="callout callout-accordion" style="background-image: url(&quot;/images/expand.png&quot;);"> <span class="edit" data-pk="bandwidth_bar">Bandwidth Settings</span> <span class="telnet-arrow"></span> </div> ``` I'm trying to select **span** with text = `Bandwidth Settings`, and click on the **div** with class name = `callout`. ```python if driver.find_element_by_tag_name("span") == ("Bandwidth Settings"): print "Found" time.sleep(100) driver.find_element_by_tag_name("div").find_element_by_class_name("callout").click() print "Not found" time.sleep(100) ``` I kept getting ```none Testing started at 1:59 PM ... Not found Process finished with exit code 0 ``` --- What did I miss? --- ### Select the parent *div* ```python if driver.find_element_by_xpath("//span[text()='Bandwidth Settings']") is None: print "Not Found" else: print "Found" span = driver.find_element_by_xpath("//span[text()='Bandwidth Settings']") div = span.find_element_by_xpath('..') div.click() ``` I got > > WebDriverException: Message: unknown error: Element > > >
2017/02/16
[ "https://Stackoverflow.com/questions/42282577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4480164/" ]
One way would be to use `find_element_by_xpath(xpath)` like this: ``` if driver.find_element_by_xpath("//span[contains(.,'Bandwidth Settings')]") is None: print "Not found" else: print "Found" ... ``` For an exact match (as you asked for in your comment), use `"//span[text()='Bandwidth Settings']"` On your *edited* question, try one of these: Locate directly (if there is no other matching element): ``` driver.find_element_by_css_selector("div[style*='/images/telenet/expand.png']") ``` Locate via *span* (provided there isn't any other *div* on that level): ``` driver.find_element_by_xpath("//span[contains(.,'Bandwidth Settings')]/../div") ```
The code that you need to use: ``` from selenium.common.exceptions import NoSuchElementException try: span = driver.find_element_by_xpath('//span[text()="Bandwidth Settings"]') print "Found" except NoSuchElementException: print "Not found" ``` If you need to select the parent `div` element: ``` div = span.find_element_by_xpath('./parent::div') ```
11,949
24,857,779
I used the os.rename() method to rename a directory in my Python script. This script is called automatically by the scheduler every day. Sometimes the os.rename() function returns the error ``` [Error 5] Access is denied ``` but all other times it works fine. Code: ``` try: if(os.path.exists(Downloaded_Path)): os.rename(Downloaded_Path, Downloaded_Path + "_ByClientTool") except Exception,e: print "Error !!", str(e) return 1 ```
2014/07/21
[ "https://Stackoverflow.com/questions/24857779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1553605/" ]
The error means that the user account that the scheduler uses to run the program does not have permissions to rename that directory. One common reason for the fact that it sometimes works and sometimes does not is that the program creates some of the directories it needs to rename but not others. * The directories created directly by the program have modify permissions for the user running the program, so it can rename those. * But, directories that were previously created by something else may restrict the access for the user running the program by default. Read about Windows File and Folder permissions: <http://technet.microsoft.com/en-us/library/bb727008.aspx>
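If it helps to see what the scheduled account can actually do, here is a small diagnostic sketch (not specific to any scheduler) that logs the parent directory's writability when the rename fails; note that `os.access` is only a rough check on Windows, but it narrows things down:

```python
import os

def rename_with_diagnostics(src, dst):
    try:
        os.rename(src, dst)
    except OSError as e:
        # a rename needs write/modify access on the directory and its parent
        parent = os.path.dirname(src) or "."
        print("rename failed: %s; parent %s writable: %s"
              % (e, parent, os.access(parent, os.W_OK)))
        raise
```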
This will also fail if the host names are not "network qualified" the same way. ``` >>> os.renames(r'\\host.domain.com\joan\rocks', r'\\host\joan\jett\rocks') WindowsError: [Error 5] Access is denied >>> os.renames(r'\\host\joan\rocks', r'\\host\joan\jett\rocks') >>> >>> os.renames(r'\\host.domain.com\joan\rocks', r'\\host.domain.com\joan\jett\rocks') >>> ```
11,951
36,428,178
Even with the most basic of code, my .txt file is coming out empty, and I can't understand why. I'm running this subroutine in `python 3` to gather information from the user. When I open the .txt file in both notepad and N++, I get an empty file. Here's my code : ``` def Setup(): fw = open('AutoLoader.txt', 'a') x = True while x == True: print("Enter new location to enter") new_entry = str(input('Start with \'web\' if it\'s a web page\n')) fw.write(new_entry) y = input('New Data? Y/N\n') if y == 'N' or y == 'n': fw.close break fw.close Start() ```
2016/04/05
[ "https://Stackoverflow.com/questions/36428178", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try replacing `fw.close` with `fw.close()`. Without the parentheses you only reference the method object; it is never actually called, so the file is never closed and the buffered writes never reach disk.
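A sketch of the more robust pattern, using a context manager so the file is closed (and flushed) automatically even if you forget the call:

```python
def setup():
    with open('AutoLoader.txt', 'a') as fw:
        while True:
            print("Enter new location to enter")
            fw.write(input("Start with 'web' if it's a web page\n"))
            if input('New Data? Y/N\n').lower() == 'n':
                break  # leaving the with-block closes (and flushes) the file
```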
It's working on python 3.4 ``` def Setup(): fw = open('AutoLoader3.4.txt', 'a+') x = True while x == True: print("Enter new location to enter") new_entry = str(input('Start with \'web\' if it\'s a web page\n')) fw.write(new_entry) y = input('New Data? Y/N\n') if y == 'N' or y == 'n': fw.close() break fw.close() Setup() ```
11,957
26,506,642
If d is a dict, the Python docs provide us with this method: ``` dict.get = get(...) D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None. ``` Another famous pattern is: ``` try: D[k] except: d ``` But in my benchmarks (done with timeit), I have found that the following approach is the best: ``` if k in D: D[k] else: d ``` The try/except pattern IS the fastest if no exceptions are raised, but if no exceptions are raised, then we don't need this pattern. I now need to refactor all my code, which is written with a lot of .get()'s, because I need to squeeze some performance out of a limited system. Am I correct or wrong? Why is the try/except pattern recommended when it looks like it's the slowest?
2014/10/22
[ "https://Stackoverflow.com/questions/26506642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3599101/" ]
Looks like the `D[k] if k in D else d` pattern is about twice faster than .get, at least for some usages. .get ``` $ python -m timeit -s 'D={}; k=xrange(0,100000)' 'D.get(k)' 10000000 loops, best of 3: 0.0934 usec per loop ``` if/else ``` $ python -m timeit -s 'D={}; k=xrange(0,100000)' 'D[k] if k in D else None' 10000000 loops, best of 3: 0.0487 usec per loop ```
Maybe you should give `pypy` a try and compile that on your embedded system. `if-then-else` and `get` have similar results. Some benchmarks: PyPy: ``` $ pypy3 -m timeit -s 'd={}; k=0' 'd[k] if k in d else None' 1000000000 loops, best of 3: 0.0008 usec per loop $ pypy3 -m timeit -s 'd={}; k=0' 'd.get(k)' 1000000000 loops, best of 3: 0.000803 usec per loop ``` Python 3: ``` $ python -m timeit -s 'd={}; k=0' 'd.get(k)' 1 ↵ 10000000 loops, best of 3: 0.101 usec per loop $ python -m timeit -s 'd={}; k=0' 'd[k] if k in d else 0' 10000000 loops, best of 3: 0.0372 usec per loop ```
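For completeness, all three patterns can be compared from one script with the `timeit` module; a small sketch (absolute numbers will vary by machine and interpreter):

```python
import timeit

setup = "D = {}; k = 0"  # missing key: exercises the default/exception path
print("get     ", timeit.timeit("D.get(k)", setup=setup, number=10**6))
print("in/else ", timeit.timeit("D[k] if k in D else None", setup=setup, number=10**6))
print("try     ", timeit.timeit("try: D[k]\nexcept KeyError: pass", setup=setup, number=10**6))
```

Rerunning it with the key present (`setup = "D = {0: 1}; k = 0"`) shows the other side of the trade-off, where the try/except path no longer pays the exception cost.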
11,960
37,033,709
I know that it's often [best practice](https://softwareengineering.stackexchange.com/questions/213935/why-use-classes-when-programming-a-tkinter-gui-in-python) to write Tkinter GUI code using object-oriented programming (OOP), but I'm trying to keep things simple because I'm new to Python. I have written the following code to create a simple GUI: ``` #!/usr/bin/python3 from tkinter import * from tkinter import ttk def ChangeLabelText(): MyLabel.config(text = 'You pressed the button!') def main(): Root = Tk() MyLabel = ttk.Label(Root, text = 'The button has not been pressed.') MyLabel.pack() MyButton = ttk.Button(Root, text = 'Press Me', command = ChangeLabelText) MyButton.pack() Root.mainloop() if __name__ == "__main__": main() ``` [The GUI looks like this.](https://i.stack.imgur.com/ppWic.png) I thought the text in the GUI (MyLabel) would change to "You pressed the button!" when the button is clicked, but I get the following error when I click the button: ``` Exception in Tkinter callback Traceback (most recent call last): File "C:\Users\elsey\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__ return self.func(*args) File "C:/Users/elsey/Documents/question code.py", line 6, in ChangeLabelText MyLabel.config(text = 'You pressed the button!') NameError: name 'MyLabel' is not defined ``` What am I doing wrong? Any guidance would be appreciated.
2016/05/04
[ "https://Stackoverflow.com/questions/37033709", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6030297/" ]
`MyLabel` is local to `main()` so the way you can not access it that way from `ChangeLabelText()`. If you do not want to change the design of your program, then you will need to change the definition of `ChangeLabelText()` like what follows: ``` def ChangeLabelText(m): m.config(text = 'You pressed the button!') ``` And withing main() you will need to pass `MyLabel` as an argument to `ChangeLabelText()`. But again, you will have a problem if you code this `command = ChangeLabelText(MyLabel)` when you declare and define `MyButton` because the program will execute directly the body of `ChangeLabelText()` at the start and you will not have the desired result. To resolve this later problem, you will have to use (and may be read about) [`lambda`](http://www.secnetix.de/olli/Python/lambda_functions.hawk) Full program ============ So your program becomes: ``` #!/usr/bin/python3 from tkinter import * from tkinter import ttk def ChangeLabelText(m): m.config(text = 'You pressed the button!') def main(): Root = Tk() MyLabel = ttk.Label(Root, text = 'The button has not been pressed.') MyLabel.pack() MyButton = ttk.Button(Root, text = 'Press Me', command = lambda: ChangeLabelText(MyLabel)) MyButton.pack() Root.mainloop() if __name__ == "__main__": main() ``` Demo ==== Before clicking: [![enter image description here](https://i.stack.imgur.com/MqwvI.png)](https://i.stack.imgur.com/MqwvI.png) After clicking: [![enter image description here](https://i.stack.imgur.com/9RyzN.png)](https://i.stack.imgur.com/9RyzN.png)
*but I'm trying to keep things simple because I'm new to Python* Hopefully this helps in understanding that classes are the simple way, otherwise you have to jump through hoops and manually keep track of many variables. Also, the Python Style Guide suggests that CamelCase is used for class names and lower\_case\_with\_underlines for variables and functions. <https://www.python.org/dev/peps/pep-0008/> ``` from tkinter import * from tkinter import ttk class ChangeLabel(): def __init__(self): root = Tk() self.my_label = ttk.Label(root, text = 'The button has not been pressed.') self.my_label.pack() ## not necessary to keep a reference to this button ## because it is not referenced anywhere else ttk.Button(root, text = 'Press Me', command = self.change_label_text).pack() root.mainloop() def change_label_text(self): self.my_label.config(text = 'You pressed the button!') if __name__ == "__main__": CL=ChangeLabel() ```
11,961
39,026,120
I have a Python string that I need to remove parentheses from. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed. However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have the `And good luck)` part left. I know I can scan through the entire string, keep a counter of the number of `(` and `)`, and when the numbers are balanced, index the locations of `(` and `)` and remove the content in the middle, but is there a better/cleaner way to do this? It doesn't need to be regex; whatever works is great, thanks. Someone asked for the expected result, so here's what I am expecting: `Hi this is a test ( a b ( c d) e) sentence` Post replace, I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
2016/08/18
[ "https://Stackoverflow.com/questions/39026120", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1294529/" ]
With the re module (replace the innermost parentheses until there are no more replacements to do): ``` import re s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton' nb_rep = 1 while (nb_rep): (s, nb_rep) = re.subn(r'\([^()]*\)', '', s) print(s) ``` With the [regex module](https://pypi.python.org/pypi/regex) that allows recursion: ``` import regex s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton' print(regex.sub(r'\([^()]*+(?:(?R)[^()]*)*+\)', '', s)) ``` Where `(?R)` refers to the whole pattern itself.
First I split the line into tokens that do not contain the parentheses, then join them back into a new line: ``` import re line = "(Data with in (Boo) And good luck)" new_line = "".join(re.split(r'(?:[()])',line)) print ( new_line ) # 'Data with in Boo And good luck' ```
11,964
3,433,806
I was reading about the web2py framework for a hobby project of mine. I learned how to program in Python when I was younger, so I do have a grasp on it. Right now I am more of a PHP dev but kinda loathe it. I just have this doubt that pops in: Is there a way to use "Vanilla" Python on the backend? I mean vanilla like PHP, without a framework. How does templating work in that way? I mean, with indentation and everything, it kinda misses the point. Anyway, I am trying web2py and really liking it.
2010/08/08
[ "https://Stackoverflow.com/questions/3433806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/399621/" ]
There is no reason to do that :) but if you insist, you can write on top of [WSGI](http://wsgi.org/wsgi/). If you'd like it vanilla style, I suggest you try a micro-framework such as web.py.
Without a framework, you use WSGI. To do this, you write a function `application` like so: ``` def application(environment, start_response): start_response("200 OK", [('Content-Type', 'text/plain')]) return ["hello world"] # WSGI expects an iterable of strings ``` `environment` contains CGI variables and other stuff. Normally what happens is that `application` will call other functions with the same call signature, and you get a chain of functions, each of which handles a particular aspect of processing the request. You are of course responsible for handling your own templates. Nothing about it is built into the language.
11,973
56,663,388
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications. When I run the code ``` model.fit(x=x_train, y=y_train, epochs=5, validation_data=(x_test, y_test), callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]) ``` I get a ProfilerNotRunningError with the message "summary_ops_v2.py:1161] Trace already enabled". Why is the trace already enabled? How can I solve the problem? I tried to solve it with new log directories (I thought that would make the trace be renewed), but it happened again. ```py import tensorflow as tf import datetime mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 def create_model(): return tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model = create_model() model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") train_summary_writer = tf.summary.create_file_writer(log_dir) tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)] model.fit(x=x_train, y=y_train, epochs=5, validation_data=(x_test, y_test), callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]) ``` * error --> [enter image description here](https://i.stack.imgur.com/vE6hb.jpg) ``` Epoch 1/5 W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled 32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625 --------------------------------------------------------------------------- ProfilerNotRunningError Traceback (most recent call last) <ipython-input-23-0c608b0df5ad> in <module> 3 epochs=5, 4 validation_data=(x_test, y_test), ----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode) --> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs) 395 progbar.on_batch_end(batch_index, batch_logs) 396 C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs) 230 for callback in self.callbacks: 231 batch_hook = getattr(callback, hook_name) --> 232 batch_hook(batch, logs) 233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks) 234 C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs) 513 """ 514 # For backwards compatibility. --> 515 self.on_batch_end(batch, logs=logs) 516 517 def on_test_batch_begin(self, batch, logs=None): C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs) 1600 self._total_batches_seen += 1 1601 if self._is_tracing: -> 1602 self._log_trace() 1603 elif (not self._is_tracing and 1604 self._total_batches_seen == self._profile_batch - 1): C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self) 1634 name='batch_%d' % self._total_batches_seen, 1635 step=self._total_batches_seen, -> 1636 profiler_outdir=os.path.join(self.log_dir, 'train')) 1637 self._is_tracing = False 1638 C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir) 1216 1217 if profiler: -> 1218 _profiler.save(profiler_outdir, _profiler.stop()) 1219 1220 trace_off() C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop() 101 if _profiler is None: 102 raise ProfilerNotRunningError( --> 103 'Cannot stop profiling. No profiler is running.') 104 with c_api_util.tf_buffer() as buffer_: 105 pywrap_tensorflow.TFE_ProfilerSerializeToString( ProfilerNotRunningError: Cannot stop profiling. No profiler is running. ```
2019/06/19
[ "https://Stackoverflow.com/questions/56663388", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11668817/" ]
I was facing the same issue and even customizing the log\_dir option using datetime didn't work. Check this page: <https://github.com/tensorflow/tensorboard/issues/2819> which helped me. I just added the 'profile\_batch = 100000000' in this callback as: TensorBoard(log\_dir=log\_dir, .., profile\_batch = 100000000)
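Concretely, the callback from that workaround looks like this (the log path is illustrative):

```python
import tensorflow as tf

tensorboard_cb = tf.keras.callbacks.TensorBoard(
    log_dir="logs/fit/run1",    # illustrative path
    histogram_freq=1,
    profile_batch=100000000,    # a batch index that is never reached, so profiling never starts
)
```

Then pass `tensorboard_cb` in the `callbacks` list of `model.fit` as in the question.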
I faced this issue because of mixed slashes in the tensorboard logdir (on Windows). This happened when I used: ``` logdir_path = os.path.join(r'D:something\models', 'model1') ``` Solved using: ``` logdir_path = os.path.normpath(os.path.join(r'D:something\models', 'model1')) callbacks = [tf.keras.callbacks.TensorBoard(log_dir=logdir_path)] ```
11,975
58,320,969
I’m working on the below script on my MacbookAir and I’m unclear where this syntax error comes from and tried searching on google why it breaks at the = sign in the print function. I understood there are different functions to print and tried many of them. But am unclear if I’m using the correct Python version (both 2 and 3 are installed). Can you please help out? I get an error in line `61`: ``` print("The interest rate is too high to trade {}".format(total_profit) , end="\n", file=output_file) ``` **Script:** ``` ## Initial values from the Python script principal=1000 coupon=0.06 frequency=2 r=0.03 transaction_fee = 0.003*principal ## Amendments to the variables as per question 7 probabilitypayingout=0.85 probabolilitynotpayingout=0.15 notpayingoutfullamount=200 maturity=7 market_price=1070 avoidtradingaboveinterestrate=0.02 #!/usr/bin/env python3.7 import numpy as np # Open a file to store output output_file = open("outputfile.txt", "w") # print variables of this case print("The variables used for this calculation are: \n - Probability of paying out the full principal {}.".format(probabilitypayingout), "\n - Probability of paying out partial principal {}.".format(probabolilitynotpayingout), "\n - Amount in case of paying out partial principal {}.".format(notpayingoutfullamount), "\n - Market price bond {}.".format(market_price), "\n - Bond maturity in years {}.".format(maturity), "\n - Coupon rate bond {}.".format(coupon), "\n - Principal bond {}.".format(principal), "\n - Frequency coupon bond {}.".format(frequency) , "\n - Risk free rate {}.".format(r) , "\n - Avoid trading aboe interest rate {}.".format(avoidtradingaboveinterestrate), "\n \n" ) # calculate true value and decide whether to trade true_price=0 principalpayout=(probabilitypayingout*principal)+(probabolilitynotpayingout*notpayingoutfullamount) for t in range(1,maturity*frequency+1): if t<(maturity*frequency): true_price = true_price + (coupon/frequency)*principal/(1+(r/frequency))**t # Present value of coupon payments else: true_price = true_price + (coupon/frequency)*principal/(1+(r/frequency))**t + principalpayout/(1+(r/frequency))**t # Present value of coupons and principal print("The price of the bond according to the pricing model is {}, while the current market price is {}.".format(true_price, market_price)) if true_price-transaction_fee>market_price: profit = true_price-transaction_fee-market_price print("The trade is executed and if the pricing model is correct, the profit will be {}".format(profit), "after deduction of trading fees.") else: print("The trade was not executed, because the expected profit after transaction fees is negative.") # Fifth, mimic changes in market conditions by adjusting the interest rate and market price. The indented code below the "for" line is repeated 1,000 times. 
total_profit=0 for n in range(0,1000): # Adds some random noise to the interest rate and market price, so each changes slightly (each time code is executed, values will differ because they are random) change_r=np.random.normal(0,0.015) change_market_price=np.random.normal(0,40) r_new = r + change_r market_price_new = market_price + change_market_price # Sixth, execute trading algorithm using new values true_price_new=0 if r_new>avoidtradingaboveinterestrate: print("The interest rate is too high to trade {}".format(total_profit) , end="\n", file=output_file) output_file.close() else: for t in range(1,maturity*frequency+1): if t<(maturity*frequency): true_price_new = true_price_new + (coupon/frequency)*principal/(1+(r_new/frequency))**t else: true_price_new = true_price_new + (coupon/frequency)*principal/(1+(r_new/frequency))**t + principalpayout/(1+(r_new/frequency))**t if true_price_new-transaction_fee>market_price_new: trading_profit = true_price_new-transaction_fee-market_price_new total_profit = total_profit + trading_profit print("The trade was executed and is expected to yield a profit of {}. The total profit from trading is {}.".format(trading_profit,total_profit), end="\n", file=output_file ) print("The total profit from trading is {}".format(total_profit), end="\n", file=output_file) output_file.close() ```
2019/10/10
[ "https://Stackoverflow.com/questions/58320969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12194431/" ]
There are a few issues to fix here.

Firstly, don't use `id` attributes in content which can be dynamically appended multiple times. You'll end up with duplicate ids, which is invalid and will cause issues in your JS. Use common `class` attributes instead.

Secondly, you can attach an event handler to all of the elements with that common class to remove the row. You can use DOM traversal methods to find the single `tr` related to the clicked button instead of targeting it by `id`.

Finally, you will need to use a delegated event handler, as the button is dynamically created and is not present in the DOM when the page loads.

With all that said, try this:

```js
$(document).ready(function() {
  $("#btnAdd").click(function() {
    $("table").append('<tr><td class="col-md-8"><input type="list" class="form-control"></td><td><button type="button" class="btn btn-info btn-rounded btn-sm my-0">Omhoog</button></td><td><button type="button" class="btn btn-info btn-rounded btn-sm my-0">Omlaag</button></td><td><button type="button" class="btn btn-danger btn-rounded btn-sm my-0 delete">Verwijder</button></td></tr>');
  });

  $('table').on('click', '.delete', function() {
    $(this).closest('tr').remove();
  });
});
```

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<button id="btnAdd">Add</button>
<table></table>
```

Note that I added the `delete` class to the 'Verwijder' button.
You need event delegation for this. Also, it's not ideal to use an id with a fixed number for each of your operations. If your table has 1000 rows, you don't want to copy-paste your function 1000 times. I created an example based on classes:

```js
$(document).ready(function() {
  $(".btn-add").click(function() {
    $("table#myTable").append(
      "<tr><td class='col-md-8'><input type='list' class='form-control'></td><td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td><td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td><td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td></tr>"
    );
  });

  $("#myTable").on('click', '.btn-remove', function() {
    $(this).closest('tr').remove();
  });
});
```

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<button class="btn-add">Add a row</button>

<table id="myTable">
  <tr>
    <td class='col-md-8'><input type='list' class='form-control'></td>
    <td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td>
    <td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td>
    <td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td>
  </tr>
</table>

This table doesn't have the id myTable, so nothing happens here:

<table>
  <tr>
    <td class='col-md-8'><input type='list' class='form-control'></td>
    <td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td>
    <td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td>
    <td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td>
  </tr>
</table>
```
11,985
14,109,915
I am currently playing around with an example from the book Violent Python. You can see my implementation [here](https://github.com/igniteflow/violent-python/blob/master/pwd-crackers/unix-pwd-crack.py)

I am now trying to implement the same script in Go to compare performance; note that I am completely new to Go. Opening the file and iterating over the lines is fine; however, I cannot figure out how to use the "crypto" library to hash the string in the same way as Python's crypt.crypt(str_to_hash, salt). I thought it might be something like

```
import "crypto/des"

des.NewCipher([]byte("abcdefgh"))
```

However, no cigar. Any help would be much appreciated, as it'd be really interesting to compare Go's parallel performance to Python's multithreaded performance.

Edit: [Python docs for crypt.crypt](http://docs.python.org/2/library/crypt.html)
2013/01/01
[ "https://Stackoverflow.com/questions/14109915", "https://Stackoverflow.com", "https://Stackoverflow.com/users/343223/" ]
`crypt` is very easy to wrap with cgo, eg ``` package main import ( "fmt" "unsafe" ) // #cgo LDFLAGS: -lcrypt // #define _GNU_SOURCE // #include <crypt.h> // #include <stdlib.h> import "C" // crypt wraps C library crypt_r func crypt(key, salt string) string { data := C.struct_crypt_data{} ckey := C.CString(key) csalt := C.CString(salt) out := C.GoString(C.crypt_r(ckey, csalt, &data)) C.free(unsafe.Pointer(ckey)) C.free(unsafe.Pointer(csalt)) return out } func main() { fmt.Println(crypt("abcdefg", "aa")) } ``` Which produces this when run ``` aaTcvO819w3js ``` Which is identical to python `crypt.crypt` ``` >>> from crypt import crypt >>> crypt("abcdefg","aa") 'aaTcvO819w3js' >>> ``` (Updated to free the CStrings - thanks @james-henstridge)
E.g. ``` package main import ( "crypto/des" "fmt" "log" ) func main() { b, err := des.NewCipher([]byte("abcdefgh")) if err != nil { log.Fatal(err) } msg := []byte("Hello!?!") fmt.Printf("% 02x: %q\n", msg, msg) b.Encrypt(msg, msg) fmt.Printf("% 02x: %q\n", msg, msg) b.Decrypt(msg, msg) fmt.Printf("% 02x: %q\n", msg, msg) } ``` (Also: <http://play.golang.org/p/czYDRjtWNR>) --- Output: ``` 48 65 6c 6c 6f 21 3f 21: "Hello!?!" 3e 41 67 99 2d 9a 72 b9: ">Ag\x99-\x9ar\xb9" 48 65 6c 6c 6f 21 3f 21: "Hello!?!" ```
11,987
21,892,080
I'm using Python's [Watchdog](http://pythonhosted.org/watchdog/) to monitor a given directory for new files being created. When a file is created, some code runs that spawns a subprocess shell command to run different code to process this file. This should run for every new file that is created. I've tested this out when one file is created, and things work great, but I am having trouble getting it working when multiple files are created, either at the same time or one after another.

My current problem is this... the processing code run in the shell takes a while to run and will not finish before a new file is created in the directory. There's nothing I can do about that. While this code is running, watchdog will not recognize that a new file has been created, and will not proceed with the code.

So I think I need to spawn a new process for each new file, or do something to get things to run concurrently, and not wait until one file is done before processing the next one.

So my questions are:

1.) In reality I will have 4 files, in different series, created at the same time, in one directory. What's the best way to get watchdog to run the code on file creation for all 4 files at once?

2.) When the code is running for one file, how do I get watchdog to begin processing the next file in the same series without waiting until processing for the previous file has completed? This is necessary because the files are particular and I need to pause the processing of one file until another file is finished, but the order in which they are created may vary.

Do I need to combine my watchdog with multiprocessing or threading somehow? Or do I need to implement multiple observers? I'm kind of at a loss. Thanks for any help.

```
class MonitorFiles(FileSystemEventHandler):
    '''Sub-class of watchdog event handler'''

    def __init__(self, config=None, log=None):
        self.log = log
        self.config = config

    def on_created(self, event):
        file = os.path.basename(event.src_path)
        self.log.info('Created file {0}'.format(event.src_path))
        dosWatch.go(event.src_path, self.config, self.log)

    def on_modified(self, event):
        file = os.path.basename(event.src_path)
        ext = os.path.splitext(file)[1]
        if ext == '.fits':
            self.log.warning('Modifying a FITS file is not allowed')
            return

    def on_deleted(self, event):
        self.log.critical('Nothing should ever be deleted from here!')
        return
```

### Main Monitoring

```
def monitor(config, log):
    '''Uses the Watchdog package to monitor the data directory for new files.
    See the MonitorFiles class in dosClasses for actual monitoring code'''

    event_handler = dosclass.MonitorFiles(config, log)

    # add logging to the event handler
    log_handler = LoggingEventHandler()

    # set up observer
    observer = Observer()
    observer.schedule(event_handler, path=config.fitsDir, recursive=False)
    observer.schedule(log_handler, config.fitsDir, recursive=False)
    observer.start()

    log.info('Begin MaNGA DOS!')
    log.info('Start watching directory {0} for new files ...'.format(config.fitsDir))

    # monitor
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.unschedule_all()
        observer.stop()
        log.info('Stop watching directory ...')
        log.info('End MaNGA DOS!')
        log.info('--------------------------')
        log.info('')
    observer.join()
```

In the above, my monitor method sets up watchdog to monitor the main directory. The MonitorFiles class defines what happens when a file is created. It basically calls the dosWatch.go method, which eventually calls subprocess.Popen to run a shell command.
2014/02/19
[ "https://Stackoverflow.com/questions/21892080", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3329870/" ]
Here's what I ended up doing, which solved my problem. I used multiprocessing to start a separate watchdog monitoring process to watch for each file separately. Watchdog already queues up new files for me, which is fine.

As for point 2 above, I needed, e.g., file2 to be processed before file1, even though file1 was created first. So during file1's processing I check for the output of the file2 processing. If it's found, processing of file1 goes ahead; if not, it exits. On file2's processing, I check to see if file1 was created already, and if so, I then process file1. (Code for this is not shown here; a rough sketch follows after the code below.)

### Main Monitoring of Cameras

```
def monitorCam(camera, config, mainlog):
    '''Uses the Watchdog package to monitor the data directory for new files.
    See the MonitorFiles class in dosClasses for actual monitoring code. Monitors each camera.'''

    mainlog.info('Process Name, PID: {0},{1}'.format(mp.current_process().name,mp.current_process().pid))

    # init cam log
    camlog = initLogger(config, filename='manga_dos_{0}'.format(camera))
    camlog.info('Camera {0}, PID {1} '.format(camera,mp.current_process().pid))
    config.camera=camera

    event_handler = dosclass.MonitorFiles(config, camlog, mainlog)

    # add logging to the event handler
    log_handler = LoggingEventHandler()

    # set up observer
    observer = Observer()
    observer.schedule(event_handler, path=config.fitsDir, recursive=False)
    observer.schedule(log_handler, config.fitsDir, recursive=False)
    observer.daemon=True
    observer.start()

    camlog.info('Begin MaNGA DOS!')
    camlog.info('Start watching directory {0} for new files ...'.format(config.fitsDir))
    camlog.info('Watching directory {0} for new files from camera {1}'.format(config.fitsDir,camera))

    # monitor
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.unschedule_all()
        observer.stop()
        camlog.info('Stop watching directory ...')
        camlog.info('End MaNGA DOS!')
        camlog.info('--------------------------')
        camlog.info('')
    #observer.join()

    if observer.is_alive():
        camlog.info('still alive')
    else:
        camlog.info('thread ending')
```

### Start of Multiple Camera Processes

```
def startProcess(camera,config,log):
    ''' Uses multiprocessing module to start 4 different camera monitoring processes'''
    jobs=[]
    #pdb.set_trace()
    #log.info(mp.log_to_stderr(logging.DEBUG))
    for i in range(len(camera)):
        log.info('Starting to monitor camera {0}'.format(camera[i]))
        print 'Starting to monitor camera {0}'.format(camera[i])
        try:
            p = mp.Process(target=monitorCam, args=(camera[i],config, log), name=camera[i])
            p.daemon=True
            jobs.append(p)
            p.start()
        except KeyboardInterrupt:
            log.info('Ending process: {0} for camera {1}'.format(mp.current_process().pid, camera[i]))
            p.terminate()
            log.info('Terminated: {0}, {1}'.format(p,p.is_alive()))

    for i in range(len(jobs)):
        jobs[i].join()

    return
```
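A rough sketch of that cross-file ordering check (the partner output path, timeout, and polling interval are hypothetical placeholders, not from the real pipeline):

```python
import os
import time

def wait_for_partner(partner_output_path, timeout=300, poll=1.0):
    """Block until the partner file's processing output exists.

    Returns True once the (hypothetical) output file appears, or
    False after `timeout` seconds, in which case the caller exits
    and lets the partner's own process handle the ordering later.
    """
    waited = 0.0
    while not os.path.exists(partner_output_path):
        time.sleep(poll)
        waited += poll
        if waited >= timeout:
            return False
    return True
```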
I'm not sure it would make much sense to do a thread per file. The [GIL](https://wiki.python.org/moin/GlobalInterpreterLock) will probably eliminate any advantage you'd see from doing that, might even impact performance pretty badly, and could lead to some unexpected behavior.

I haven't personally found `watchdog` to be very reliable. You might consider implementing your own file watcher, which can be done fairly easily, as in the Django framework (see [here](https://github.com/django/django/blob/master/django/utils/autoreload.py)), by creating a dict with the modified timestamp for each file.
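A bare-bones sketch of that idea (the directory, callback, and polling interval are placeholders; note that files already present at startup fire the callback once on the first pass):

```python
import os
import time

def watch(directory, on_created, poll=1.0):
    """Poll `directory` and call on_created(path) for each new file.

    A minimal version of the mtime-dict approach: no watchdog
    dependency, just os.listdir and a dict of files seen so far.
    """
    seen = {}
    while True:
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue  # file vanished between listdir and stat
            if path not in seen:
                on_created(path)
            seen[path] = mtime
        time.sleep(poll)
```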
11,990
25,663,543
I'm trying to run a celery worker on OS X (Mavericks). I activated a virtual environment (Python 3.4) and tried to start Celery with this argument:

```
celery worker --app=scheduling -linfo
```

Where `scheduling` is my celery app. But I ended up with this error: `dbm.error: db type is dbm.gnu, but the module is not available`

Complete stacktrace:

```
Traceback (most recent call last):
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/kombu/utils/__init__.py", line 320, in __get__
    return obj.__dict__[self.__name__]
KeyError: 'db'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/other/PhoenixEnv/bin/celery", line 9, in <module>
    load_entry_point('celery==3.1.9', 'console_scripts', 'celery')()
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/__main__.py", line 30, in main
    main()
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/celery.py", line 80, in main
    cmd.execute_from_commandline(argv)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/celery.py", line 768, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/base.py", line 308, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/celery.py", line 760, in handle_argv
    return self.execute(command, argv)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/celery.py", line 692, in execute
    ).run_from_argv(self.prog_name, argv[1:], command=argv[0])
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/worker.py", line 175, in run_from_argv
    return self(*args, **options)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/base.py", line 271, in __call__
    ret = self.run(*args, **kwargs)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/worker.py", line 209, in run
    ).start()
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/__init__.py", line 100, in __init__
    self.setup_instance(**self.prepare_args(**kwargs))
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/__init__.py", line 141, in setup_instance
    self.blueprint.apply(self, **kwargs)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bootsteps.py", line 221, in apply
    step.include(parent)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bootsteps.py", line 347, in include
    return self._should_include(parent)[0]
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bootsteps.py", line 343, in _should_include
    return True, self.create(parent)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/components.py", line 220, in create
    w._persistence = w.state.Persistent(w.state, w.state_db, w.app.clock)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/state.py", line 161, in __init__
    self.merge()
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/state.py", line 169, in merge
    self._merge_with(self.db)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/kombu/utils/__init__.py", line 322, in __get__
    value = obj.__dict__[self.__name__] = self.__get(obj)
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/state.py", line 238, in db
    return self.open()
  File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/state.py", line 165, in open
self.filename, protocol=self.protocol, writeback=True, File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/shelve.py", line 239, in open return DbfilenameShelf(filename, flag, protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/shelve.py", line 223, in __init__ Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback) File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/dbm/__init__.py", line 91, in open "available".format(result)) dbm.error: db type is dbm.gnu, but the module is not available ``` Please help.
2014/09/04
[ "https://Stackoverflow.com/questions/25663543", "https://Stackoverflow.com", "https://Stackoverflow.com/users/493329/" ]
Try this:

```
var value = $(this).attr("href");
```

[Demo](http://jsfiddle.net/rmbvo9oh/2/)
In JavaScript, when you bind an event to an element, the `this` variable is available inside the handler (what it refers to depends on the element and event). So, when you click an element that has a bound function, within that function's scope the clicked element is referenced as `this`. By writing `$(this)` you make a jQuery object out of it:

```
$(document).ready(function(){
    $("a").click(function(){
        var value = $(this).attr("href");
        alert("hello------> " + value);
    });
});
```
11,991
11,301,863
I have a Django FileField, which I use to store wav files on the Amazon S3 server. I have set up a Celery task to read that file, convert it to mp3, and store it in another FileField. The problem I am facing is that I am unable to pass the input file to ffmpeg, as the file is not a physical file on the hard disk drive. To circumvent that, I used stdin to feed the input stream of the file with Django's FileField. Here is the example:

```
output_file = NamedTemporaryFile(suffix='.mp3')
subprocess.call(['ffmpeg', '-y', '-i', '-', output_file.name], stdin=recording_wav)
```

where recording_wav is the Django file object, which is actually stored on the Amazon S3 server. The error for the above subprocess call is:

```
AttributeError: 'cStringIO.StringO' object has no attribute 'fileno'
```

How can I do this? Thanks in advance for the help.

**Edit:** Full traceback:

```
[2012-07-03 04:09:50,336: ERROR/MainProcess] Task api.tasks.convert_audio[b7ab4192-2bff-4ea4-9421-b664c8d6ae2e] raised exception: AttributeError("'cStringIO.StringO' object has no attribute 'fileno'",)
Traceback (most recent call last):
  File "/home/tejinder/envs/tmai/local/lib/python2.7/site-packages/celery/execute/trace.py", line 181, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/tejinder/projects/tmai/../tmai/apps/api/tasks.py", line 56, in convert_audio
    subprocess.Popen(['ffmpeg', '-y', '-i', '-', output_file.name], stdin=recording_wav)
  File "/usr/lib/python2.7/subprocess.py", line 672, in __init__
    errread, errwrite) = self._get_handles(stdin, stdout, stderr)
  File "/usr/lib/python2.7/subprocess.py", line 1043, in _get_handles
    p2cread = stdin.fileno()
  File "/home/tejinder/envs/tmai/local/lib/python2.7/site-packages/django/core/files/utils.py", line 12, in <lambda>
    fileno = property(lambda self: self.file.fileno)
  File "/home/tejinder/envs/tmai/local/lib/python2.7/site-packages/django/core/files/utils.py", line 12, in <lambda>
    fileno = property(lambda self: self.file.fileno)
AttributeError: 'cStringIO.StringO' object has no attribute 'fileno'
```
2012/07/02
[ "https://Stackoverflow.com/questions/11301863", "https://Stackoverflow.com", "https://Stackoverflow.com/users/867365/" ]
Use [`subprocess.Popen.communicate`](http://docs.python.org/library/subprocess.html?highlight=popen.communicate#subprocess.Popen.communicate) to pass the input to your subprocess:

```
command = ['ffmpeg', '-y', '-i', '-', output_file.name]
process = subprocess.Popen(command, stdin=subprocess.PIPE)
process.communicate(recording_wav)
```

For extra fun, you could use ffmpeg's output to avoid your NamedTemporaryFile:

```
command = ['ffmpeg', '-y', '-i', '-', '-f', 'mp3', '-']
process = subprocess.Popen(command, stdin=subprocess.PIPE)
recording_mp3, errordata = process.communicate(recording_wav)
```
You need to create a pipe, pass the read end of the pipe to the subprocess, and dump the data into the write end.
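A rough sketch of that approach (assuming recording_wav is the file-like object from the question; 'output.mp3' is a placeholder path):

```python
import os
import subprocess

read_fd, write_fd = os.pipe()

# Hand the read end of the pipe to ffmpeg as its stdin.
process = subprocess.Popen(
    ['ffmpeg', '-y', '-i', '-', 'output.mp3'],  # 'output.mp3' is a placeholder
    stdin=read_fd,
)
os.close(read_fd)  # the child process keeps its own copy of the read end

# Dump the wav data into the write end, then close it so ffmpeg sees EOF.
with os.fdopen(write_fd, 'wb') as pipe_in:
    pipe_in.write(recording_wav.read())

process.wait()
```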
11,992