3,987,732
I have the following Python code: ``` def scrapeSite(urlToCheck): html = urllib2.urlopen(urlToCheck).read() from BeautifulSoup import BeautifulSoup soup = BeautifulSoup(html) tdtags = soup.findAll('td', { "class" : "c" }) for t in tdtags: print t.encode('latin1') ``` This returns the following HTML code: ``` <td class="c"> <a href="more.asp">FOO</a> </td> <td class="c"> <a href="alotmore.asp">BAR</a> </td> ``` I'd like to get the text inside the a-node (e.g. FOO or BAR), which I thought would be t.contents.contents. Unfortunately it isn't that easy :) Does anyone have an idea how to solve that? Thanks a lot, any help is appreciated! Cheers, Joseph
2010/10/21
[ "https://Stackoverflow.com/questions/3987732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/123172/" ]
In this case, you can use `t.contents[1].contents[0]` to get FOO and BAR. The thing is that `contents` returns a list with all elements (Tags and NavigableStrings); if you print `contents`, you can see it's something like `[u'\n', <a href="more.asp">FOO</a>, u'\n']`. So, to get to the actual tag you need to access `contents[1]` (if you have the exact same contents; this can vary depending on the source HTML), and after you've found the proper index you can use `contents[0]` to get the string inside the `a` tag. Now, since this depends on the exact contents of the HTML source, it's very fragile. A more generic and robust solution is to use `find()` again to locate the `a` tag, via `t.find('a')`, and then use the contents list to get the values in it: `t.find('a').contents[0]`, or just `t.find('a').contents` to get the whole list.
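A runnable sketch of the `find()` approach, written against the modern `bs4` package (the question used the older `BeautifulSoup` 3 import; the idea is the same):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = '''
<td class="c"><a href="more.asp">FOO</a></td>
<td class="c"><a href="alotmore.asp">BAR</a></td>
'''

soup = BeautifulSoup(html, 'html.parser')
# find() skips the whitespace NavigableStrings that break contents[0]
for t in soup.find_all('td', {'class': 'c'}):
    print(t.find('a').contents[0])  # FOO, then BAR
```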
For your specific example, pyparsing's makeHTMLTags can be useful, since the expressions it creates are tolerant of many variations in HTML tags, and they provide a handy structure for the results: ``` html = """ <td class="c"> <a href="more.asp">FOO</a> </td> <td class="c"> <a href="alotmore.asp">BAR</a> </td> <td class="d"> <a href="alotmore.asp">BAZZ</a> </td> """ from pyparsing import * td,tdEnd = makeHTMLTags("td") a,aEnd = makeHTMLTags("a") td.setParseAction(withAttribute(**{"class":"c"})) pattern = td + a("anchor") + SkipTo(aEnd)("aBody") + aEnd + tdEnd for t,_,_ in pattern.scanString(html): print t.aBody, '->', t.anchor.href ``` prints: ``` FOO -> more.asp BAR -> alotmore.asp ```
68,269,165
I have a problem. I've created a model called "Flower"; everything works fine, I can create new "Flowers", I can get data from them, etc. The problem is that when I want to use the column "owner\_id" in an SQL query I get an error that this column doesn't exist, even though I can use it to get data from objects (for example flower1.owner\_id). I've deleted my sqlite database several times, made new migrations and ran migrate, but that still didn't work. I also changed the name of the column and re-created it, but that didn't help either. My models.py: ``` from django.db import models from django.contrib.auth.models import User class Flower(models.Model): name = models.CharField(max_length=80) water_time = models.IntegerField() owner_id = models.ForeignKey(User, on_delete=models.CASCADE, default=1) def __str__(self): return self.name ``` My views.py (the view where I want to use it): ``` class workspaceView(generic.DetailView): template_name = 'floris/workspace.html' def get_object(self): with connection.cursor() as cursor: cursor.execute(f'SELECT id,name FROM floris_Flower WHERE owner_id = {self.request.user.id}') row = cursor.fetchall() object = row print(object) if self.request.user.id == object.id: return object else: print('Error') ``` My urls.py: ``` from django.urls import path from . 
import views urlpatterns = [ path('', views.index2, name='index2'), path('login/', views.loginView, name='login'), path('register/', views.register, name='register'), path('addflower/', views.AddPlantView, name='addflower'), path('index.html', views.logout_view, name='logout'), path('workspace/', views.workspaceView.as_view(), name='workspace'), path('profile/<str:username>/', views.ProfileView.as_view(), name='profile'), ] ``` And my error code: ``` OperationalError at /floris/workspace/ no such column: owner_id Request Method: GET Request URL: http://127.0.0.1:8000/floris/workspace/ Django Version: 3.2.4 Exception Type: OperationalError Exception Value: no such column: owner_id Exception Location: /home/stazysta-kamil/.local/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py, line 421, in execute Python Executable: /usr/bin/python3 Python Version: 3.8.10 Python Path: ['/home/stazysta-kamil/Desktop/floris/mysite', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/stazysta-kamil/.local/lib/python3.8/site-packages', '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages'] Server time: Tue, 06 Jul 2021 10:43:32 +0000 ```
2021/07/06
[ "https://Stackoverflow.com/questions/68269165", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16390472/" ]
Since `owner_id` is declared as a `ForeignKey`, it will be available in the actual SQL database as `owner_id_id`. The additional suffix `_id` is automatically appended by Django for relational fields. When using the Django ORM you just access it via `owner_id` and Django handles things for you in the background, but if you are using a raw SQL command, you have to use the actual table column name, which is `owner_id_id`. If you don't want such behavior, set the `db_column` of the model field to the exact name you want, e.g. `owner_id = models.ForeignKey(User, on_delete=models.CASCADE, default=1, db_column="owner_id")`. As stated in the Django documentation: > Behind the scenes, Django appends "\_id" to the field name to create its database column name. In the above example, the database table for the Car model will have a manufacturer\_id column. Related references: * <https://docs.djangoproject.com/en/3.2/ref/models/fields/#database-representation> * <https://docs.djangoproject.com/en/3.1/topics/db/models/#field-name-hiding-is-not-permitted> * <https://docs.djangoproject.com/en/3.1/topics/db/optimization/#use-foreign-key-values-directly>
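A sketch of how the raw query from the question could look with the real column name and parameter binding (not runnable outside a Django project; `floris_flower` is the table name Django would generate for the app and model in the question):

```python
# Sketch only -- requires a configured Django project.
from django.db import connection

def flowers_for(user_id):
    with connection.cursor() as cursor:
        # The field named `owner_id` is stored as column `owner_id_id`;
        # %s parameter binding avoids the f-string SQL-injection risk.
        cursor.execute(
            "SELECT id, name FROM floris_flower WHERE owner_id_id = %s",
            [user_id],
        )
        return cursor.fetchall()
```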
The problem is in the field "owner\_id" of the Flower model. When you define a ForeignKey in a model, it is not necessary to add "\_id" at the end, since Django adds it automatically. In this case it should be enough to replace ``` owner_id = models.ForeignKey(User, on_delete=models.CASCADE, default=1) ``` with ``` owner = models.ForeignKey(User, on_delete=models.CASCADE, default=1) ```
54,085,972
I am trying to run a playbook locally, but I want all the vars in the role's tasks/main.yml file to refer to a group\_var in a specific inventory file. Unfortunately the playbook is unable to access the group\_vars directory and fails to recognize the vars specified in the role. The command run is the following: `/usr/local/bin/ansible-playbook --connection=local /opt/ansible/playbooks/create.yml -i ./inventory-file` but it fails to find the group\_vars in the /group\_vars directory at the same directory level as the inventory file: ``` fatal: [127.0.0.1]: FAILED! => { "msg": "The task includes an option with an undefined variable. The error was: 'admin_user_name' is undefined\n\nThe error appears to have been in '/opt/roles/create/tasks/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: create org\n ^ here\n" } ``` This is my configuration: ``` ansible-playbook 2.7.5 config file = /etc/ansible/ansible.cfg configured module search path = ['/opt/ansible-modules'] ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible executable location = /usr/local/bin/ansible-playbook python version = 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0] ``` Any help is appreciated. Thanks, dom
2019/01/08
[ "https://Stackoverflow.com/questions/54085972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3130919/" ]
So, theoretically adding localhost to the inventory would have been a good solution, but in my specific case (and in general for large deployments) it was not an option. I also tried `--extra-vars "myvar.json"`, but that did not work either. It turns out (evil detail...) that the right way to add a var file via the command line is: `--extra-vars "@myvar.json"` Posting it here in the hope that nobody else struggles for days to find this solution. Cheers, dom
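For reference, the difference is only the `@` prefix: without it, the string is parsed as literal key=value extra vars rather than a file path. A sketch (the file name and playbook path are just the ones from the question):

```shell
# Variables file (hypothetical content)
cat > myvar.json <<'EOF'
{ "admin_user_name": "admin" }
EOF

# Wrong: the string "myvar.json" itself is treated as the extra vars
# ansible-playbook create.yml -i ./inventory-file --extra-vars "myvar.json"

# Right: '@' tells ansible-playbook to load the vars from the file
ansible-playbook --connection=local /opt/ansible/playbooks/create.yml \
    -i ./inventory-file --extra-vars "@myvar.json"
```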
As per the error, Ansible is not able to read the group\_vars. Please make sure that your group\_vars directory has a folder matching the host group. Example playbook whose host is localhost: ``` - hosts: localhost become: true roles: - { role: common, tags: [ 'common' ] } - { role: docker, tags: [ 'docker' ] } ``` So in group\_vars there should be a folder called **localhost**, and in that folder a file **main.yml**. Or you can create a folder called **all** in **group\_vars** and create a file called **all.yml**. This should solve the issue.
61,976,842
So I'm looking into text recognition of license plates. I'm using the Google Cloud service for this. It returns a list of possible matches, but text on the image that is not part of the license plate also gets recognized. So I thought I could just tell Python to take from the list the one text that matches the pattern of a license plate. For Germany it is like this: 1 to 3 letters, 1 whitespace, 1 or 2 letters, whitespace, up to 4 numbers. So I have basically 3 parts. In the smallest case it could be something like H A 4, in the biggest case something like HHH AB 1234. Hope that's clear. Thanks for any help.
2020/05/23
[ "https://Stackoverflow.com/questions/61976842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10685847/" ]
You could use a regex for this: ``` ^[A-Z]{1,3}\s[A-Z]{1,2}\s\d{1,4}$ ``` An explanation: ``` ---------------------------------------------------------------------- ^ the beginning of the string ---------------------------------------------------------------------- [A-Z]{1,3} any character of: 'A' to 'Z' (between 1 and 3 times (matching the most amount possible)) ---------------------------------------------------------------------- \s whitespace (\n, \r, \t, \f, and " ") ---------------------------------------------------------------------- [A-Z]{1,2} any character of: 'A' to 'Z' (between 1 and 2 times (matching the most amount possible)) ---------------------------------------------------------------------- \s whitespace (\n, \r, \t, \f, and " ") ---------------------------------------------------------------------- \d{1,4} digits (0-9) (between 1 and 4 times (matching the most amount possible)) ---------------------------------------------------------------------- $ before an optional \n, and the end of the string ```
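To tie this back to the OCR use case, the anchored pattern can filter the candidate list directly (a sketch; the candidate strings are made up):

```python
import re

# Anchored German-plate pattern: 1-3 letters, space, 1-2 letters, space, 1-4 digits
plate = re.compile(r'^[A-Z]{1,3}\s[A-Z]{1,2}\s\d{1,4}$')

# Hypothetical OCR output: only entries matching the pattern survive
candidates = ['H A 4', 'HHH AB 1234', 'Autohaus Meyer', 'B 12345']
plates = [c for c in candidates if plate.match(c)]
print(plates)  # ['H A 4', 'HHH AB 1234']
```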
Here's a way (note the raw string, which keeps `\s` and `\d` from being treated as string escapes): ``` import re string = 'frg3453453HHH AB 1234e456 2sf 3245 yKDEH A 4 554YFDN' print(re.findall(r'[A-Z]{1,3}\s[A-Z]{1,2}\s\d{1,4}', string)) ``` Output: ``` ['HHH AB 1234', 'DEH A 4'] ```
34,677,230
Given the list below: ``` snplist = [[1786, 0.0126525], [2463, 0.0126525], [2907, 0.0126525], [3068, 0.0126525], [3086, 0.0126525], [3398, 0.0126525], [5468,0.012654], [5531,0.0127005], [5564,0.0127005], [5580,0.0127005]] ``` I want to do a pairwise comparison of the second element in each sublist of the list, i.e. compare to see whether `0.0126525` from `[1786, 0.0126525]` is equal to `0.0126525` from `[2463, 0.0126525]` and so forth, and if so, print the output as indicated in the code. Using a for loop, I achieve the result: ``` for index, item in enumerate(snplist, 0): if index < len(snplist)-1: if snplist[index][1] == snplist[index+1][1]: print snplist[index][0], snplist[index+1][0], snplist[index][1] ``` When doing pairwise comparisons of the elements of a list using list indices, I always run into the `'index out of range'` error because of the last element. I solve this problem by adding a condition ``` if index < len(snplist)-1: ``` I don't think this is the best way of doing this. I was wondering if there are more elegant ways of doing pairwise comparisons of list elements in Python? EDIT: I had not thought about the level of tolerance when comparing floats. I would consider two floats with a `0.001` difference as being equal.
2016/01/08
[ "https://Stackoverflow.com/questions/34677230", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1945881/" ]
You can `zip` the `snplist` with the same list excluding the first element, and do the comparison, like this ``` for l1, l2 in zip(snplist, snplist[1:]): if l1[1] == l2[1]: print l1[0], l2[0], l1[1] ``` Since you are comparing floating point numbers, I would recommend using the [`math.isclose`](https://docs.python.org/3/library/math.html#math.isclose) function available since Python 3.5, or on older versions an equivalent definition like this ``` def isclose(a, b, rel_tol=1e-09, abs_tol=0.0): return abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol) ``` As you want a 0.001 tolerance, you can do the comparison like this ``` if isclose(l1[1], l2[1], abs_tol=0.001): ```
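Putting the two pieces together with (a subset of) the data from the question, as a Python 3 sketch; with a 0.001 absolute tolerance every adjacent pair in this sample happens to match:

```python
import math

snplist = [[1786, 0.0126525], [2463, 0.0126525], [2907, 0.0126525],
           [5468, 0.012654], [5531, 0.0127005], [5564, 0.0127005]]

pairs = []
for l1, l2 in zip(snplist, snplist[1:]):
    # abs_tol=0.001 implements the "within 0.001 counts as equal" rule
    if math.isclose(l1[1], l2[1], abs_tol=0.001):
        pairs.append((l1[0], l2[0], l1[1]))
print(pairs)
```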
I suggest that you use `izip` for this to create a generator of item-neighbor pairs. Leaving the problem of comparing floating points aside, the code would look like this: ``` >>> from itertools import izip >>> lst = [[1,2], [3,4], [5,4], [7,8], [9,10], [11, 10]] >>> for item, next in izip(lst, lst[1:]): ... if item[1] == next[1]: ... print item[0], next[0], item[1] ... 3 5 4 9 11 10 ``` Remember to specify a tolerance when comparing floats, do *not* compare them with == ! You could define an `almost_equal` function for this, for example: ``` def almost_equal(x, y, tolerance): return abs(x-y) < tolerance ``` Then in the code above, use `almost_equal(item[1], next[1], tolerance)` instead of the comparison with ==.
73,675,665
I know there are three thread mapping models in operating systems: 1. One to One 2. Many to One 3. Many to Many In this question I assume we use the **One to One model**. Let's say I have just restarted my computer, and there are **10** kernel-level threads already running. After a while, I decide to run a Python program which will launch one process with four threads. Three of the threads have to run a function that does a system call. **Here is the question: what is the correct scenario when I run the Python program?** a) When the Python program starts, the kernel launches another 4 threads in kernel space immediately (so there are 14 threads in kernel space now). When those 3 user-level threads initiate a system call, the kernel maps those user-level threads to 3 of the 4 kernel-level threads created when the Python program started, which also means we waste 1 kernel-level thread. b) When the Python program starts, the kernel **will not** launch another 4 threads in kernel space immediately. Instead, the kernel creates new kernel-level threads whenever those 3 user-level threads initiate a system call and are ready to talk to the kernel. In this case the kernel creates exactly 3 threads, which also means no kernel-level threads are wasted. c) Very similar to the second scenario, but in this case, when those 3 user-level threads are ready to run a system call and talk to the kernel, the kernel makes 3 of the already-created kernel-level threads stop their current job and asks them to do the job the Python program requested. That is, the scheduler picks 3 kernel-level threads, stops what they're doing, stores that task information somewhere, and then asks those 3 kernel-level threads to finish the Python program's job first. In this case we always have only 10 threads at kernel level. Any reply and suggested material to study is appreciated!
2022/09/10
[ "https://Stackoverflow.com/questions/73675665", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16030398/" ]
Kernel threads are like specialized tasks responsible for doing a specific operation (not meant to last long). They are not threads waiting for incoming requests from user-land threads. Moreover, a system call does not systematically create a kernel thread (see [this post](https://stackoverflow.com/questions/17683067/user-threads-v-s-kernel-threads) for more information and [this one](https://stackoverflow.com/questions/15917544/linux-system-call-flow-sequence) for some context about system calls): a kernel thread is started when a background task is required, for example for dealing with IO requests ([this post](https://stackoverflow.com/questions/72908947/writing-a-small-file-blocks-for-20-ms/73013307#73013307) shows a good practical case, though the description is a bit deep). Basic system calls just run in the same user thread, but with higher privileges. Note that kernel functions use a dedicated kernel stack: each user-level thread has 2 stacks on Linux, one for user-land functions and one for kernel-land functions (for the sake of security). As a result, in practice, I think all the answers are wrong in the usual case (i.e. assuming the target operation does not require creating a kernel thread). If the target system calls actually do require creating kernel threads, then b) is the correct answer. Indeed, kernel threads are like one-shot specialized tasks, as previously stated. Creating/destroying kernel threads is not very expensive, since a kernel thread is basically just a relatively lightweight `task_struct` data structure internally.
To answer this question directly: you have mixed up kernel threads and threading. They are not completely different concepts, but they are a little different at the OS level. Also, kernel threads may last indefinitely in many cases. There are at least three types of data: 1. `thread_info` - the specific schedulable entity; always exists. 2. `task_struct` - open files and other specifics (not present for a kernel thread). 3. A memory management context (usually, but not always). For kernel threads, there is no memory management nor user-space allocation. Kernel threads do have a separate stack in kernel space. Only kernel code may run in the context of the kernel stack, for example on behalf of user space via a syscall. It can also be borrowed during an interrupt, which may cause a context switch. The initial 10 kernel threads are just a number to add to the total. > b) When a python program start, the kernel will not launch another 4 threads in kernel space immediately. Instead, kernel will create new kernel-level threads whenever those 3 user-level thread initiate a system call and ready to talk with kernel. In this case kernel will just create 3 threads exactly, which also means we will not waste any kernel-level threads. > This is correct. Your `pthread_create()` will use the same `task_struct`, which allows the threads to share file handles and static memory. Only the user stack and kernel stack are different. The threads share the same memory management structure as well; this is the only difference from a completely separate process. The context switch is light for a thread, as there is no 'mm switch', which may incur all sorts of flushing. The `thread_info` structure allows threads to live on different cores in SMP setups. `thread_info` is the only real structure/memory of a kernel thread. It is contained in an 8K region (2 pages) which also contains the kernel stack, and it is the only 'schedulable' entity known to the kernel. 
The stack itself contains information on how to return to user space, if it is not a kernel thread. For user space, we have additional structures which the `thread_info` has pointers to. So, it is one-to-one at least as far as `thread_info` goes. It is many-to-one for the other structures/data sets; i.e., the threads share them. For processes, they are **typically** one-to-one (not for `fork()`, but for `execv()`-type calls).
65,175,268
The formula below is a special case of the Wasserstein distance/optimal transport when the source and target distributions, `x` and `y` (also called marginal distributions), are 1D, that is, are vectors. [![enter image description here](https://i.stack.imgur.com/aKURS.jpg)](https://i.stack.imgur.com/aKURS.jpg) where **F^{-1}** are the inverse (quantile) functions of the cumulative distributions of the marginals `u` and `v`, derived from real data called `x` and `y`, both generated from the normal distribution: ``` import numpy as np from numpy.random import randn import scipy.stats as ss n = 100 x = randn(n) y = randn(n) ``` How can the integral in the formula be coded in Python and SciPy? I'm guessing x and y have to be converted to ranked marginals, which are non-negative and sum to 1, while SciPy's `ppf` could be used to calculate the inverse **F^{-1}**'s?
2020/12/07
[ "https://Stackoverflow.com/questions/65175268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11637005/" ]
Note that when *n* gets large, a sorted set of *n* samples approaches the inverse CDF sampled at 1/n, 2/n, ..., n/n. E.g.: ```py import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm plt.plot(norm.ppf(np.linspace(0, 1, 1000)), label="invcdf") plt.plot(np.sort(np.random.normal(size=1000)), label="sortsample") plt.legend() plt.show() ``` [![plot](https://i.stack.imgur.com/PmryH.png)](https://i.stack.imgur.com/PmryH.png) Also note that your integral from 0 to 1 can be approximated as a sum over 1/n, 2/n, ..., n/n. Thus we can simply answer your question: ``` def W(p, u, v): assert len(u) == len(v) return np.mean(np.abs(np.sort(u) - np.sort(v))**p)**(1/p) ``` Note that if `len(u) != len(v)` you can still apply the method with linear interpolation (`np.interp` evaluates the shorter sample's empirical quantile function on the longer sample's grid): ```py def W(p, u, v): u = np.sort(u) v = np.sort(v) if len(u) != len(v): if len(u) > len(v): u, v = v, u us = np.linspace(0, 1, len(u)) vs = np.linspace(0, 1, len(v)) u = np.interp(vs, us, u) return np.mean(np.abs(u - v)**p)**(1/p) ``` --- An alternative method, if you have prior information about the sort of distribution of your data but not its parameters, is to find the best-fitting distribution for both `u` and `v` (e.g. with `scipy.stats.norm.fit`) and then do the integral with the desired precision. E.g.: ``` from scipy.stats import norm as gauss def W_gauss(p, u, v, num_steps): ud = gauss(*gauss.fit(u)) vd = gauss(*gauss.fit(v)) z = np.linspace(0, 1, num_steps, endpoint=False) + 1/(2*num_steps) return np.mean(np.abs(ud.ppf(z) - vd.ppf(z))**p)**(1/p) ```
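As a sanity check (assuming SciPy is installed): for p = 1 and equal-length samples, the sorted-samples formula above agrees with `scipy.stats.wasserstein_distance`, which computes the 1D W1 distance directly:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
u = rng.normal(size=100)
v = rng.normal(size=100)

# p = 1 case of the sorted-samples formula
w1_sorted = np.mean(np.abs(np.sort(u) - np.sort(v)))
w1_scipy = wasserstein_distance(u, v)
print(np.isclose(w1_sorted, w1_scipy))  # True
```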
I guess I am a bit late but, but this is what I would do for an exact solution (using only numpy): ``` import numpy as np from numpy.random import randn n = 100 m = 80 p = 2 x = np.sort(randn(n)) y = np.sort(randn(m)) a = np.ones(n)/n b = np.ones(m)/m # cdfs ca = np.cumsum(a) cb = np.cumsum(b) # points on which we need to evaluate the quantile functions cba = np.sort(np.hstack([ca, cb])) # weights for integral h = np.diff(np.hstack([0, cba])) # construction of first quantile function bins = ca + 1e-10 # small tolerance to avoid rounding errors and enforce right continuity index_qx = np.digitize(cba, bins, right=True) # right=True becouse quantile function is # right continuous qx = x[index_qx] # quantile funciton F^{-1} # construction of second quantile function bins = cb + 1e-10 index_qy = np.digitize(cba, bins, right=True) # right=True becouse quantile function is # right continuous qy = y[index_qy] # quantile funciton G^{-1} ot_cost = np.sum((qx - qy)**p * h) print(ot_cost) ``` In case you are interested, here you can find a more detailed numpy based implementation of the ot problem on the real line with dual and primal solutions as well: <https://github.com/gnies/1d-optimal-transport>. (I am still working on it though).
31,357,459
I'm trying to understand non-greedy regexes in Python, but I don't understand why the following examples give these results: ``` print(re.search('a??b','aaab').group()) ab print(re.search('a*?b','aaab').group()) aaab ``` I thought it would be 'b' for the first and 'ab' for the second. Can anyone explain that?
2015/07/11
[ "https://Stackoverflow.com/questions/31357459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5105884/" ]
This happens because the matches you expect would start further to the right, and `search` returns the leftmost match. If you follow how the matching for `a??b` proceeds from left to right you'll see something like this: * Try 0 `a`s plus `b` vs `aaab`: no match (`b != a`) * Try 1 `a` plus `b` vs `aaab`: no match (`ab != aa`) * Try 0 `a`s plus `b` vs `aab`: no match (`b != a`) (match position moved to the right by one) * Try 1 `a` plus `b` vs `aab`: no match (`ab != aa`) * Try 0 `a`s plus `b` vs `ab`: no match (`b != a`) (match position moved to the right by one) * Try 1 `a` plus `b` vs `ab`: **match** (`ab == ab`) Similarly for `*?`. The point is that the `search` function returns the *leftmost* match. Using `??` and `*?` only changes the behaviour to prefer the *shortest leftmost* match; it will *not* return a shorter match that starts to the right of an already found match. Also note that the `re` module doesn't return overlapping matches, so even using `findall` or `finditer` you will not be able to find the two matches you are looking for.
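The spans make the leftmost rule visible directly (a small demo on the question's input):

```python
import re

s = 'aaab'
# a??b: no match can start at index 0 or 1, so the leftmost match
# begins at index 2 and is the short 'ab'
print(re.search('a??b', s).span())   # (2, 4)
# a*?b: a match *can* start at index 0 if the lazy quantifier expands
# over all three a's, and search prefers that leftmost match
print(re.search('a*?b', s).span())   # (0, 4)
```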
That's because `??` is [*lazy*](http://www.rexegg.com/regex-quantifiers.html#lazy_solution) while `?` is greedy. A lazy quantifier matches its left token zero or one times — zero if that still allows the overall pattern to match. For example, all of the following return an empty string: ``` >>> print(re.search('a??','a').group()) >>> print(re.search('a??','aa').group()) >>> print(re.search('a??','aaaa').group()) ``` And the regex `a??b` will match `ab` or `b`: ``` >>> print(re.search('a??b','aaab').group()) ab >>> print(re.search('a??b','aacb').group()) b ``` And if the overall pattern cannot match because there is no `b` at all, `search` returns None: ``` >>> print(re.search('a??b','aac').group()) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'NoneType' object has no attribute 'group' ``` As for the second part, you have a non-greedy quantifier, but `search` still returns the leftmost match: starting from index 0, `a*?b` has to expand over all the `a`s before it reaches `b`: ``` print(re.search('a*?b','aaab').group()) aaab ```
31,357,459
I'm trying to understand non-greedy regexes in Python, but I don't understand why the following examples give these results: ``` print(re.search('a??b','aaab').group()) ab print(re.search('a*?b','aaab').group()) aaab ``` I thought it would be 'b' for the first and 'ab' for the second. Can anyone explain that?
2015/07/11
[ "https://Stackoverflow.com/questions/31357459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5105884/" ]
This happens because the matches you expect would start further to the right, and `search` returns the leftmost match. If you follow how the matching for `a??b` proceeds from left to right you'll see something like this: * Try 0 `a`s plus `b` vs `aaab`: no match (`b != a`) * Try 1 `a` plus `b` vs `aaab`: no match (`ab != aa`) * Try 0 `a`s plus `b` vs `aab`: no match (`b != a`) (match position moved to the right by one) * Try 1 `a` plus `b` vs `aab`: no match (`ab != aa`) * Try 0 `a`s plus `b` vs `ab`: no match (`b != a`) (match position moved to the right by one) * Try 1 `a` plus `b` vs `ab`: **match** (`ab == ab`) Similarly for `*?`. The point is that the `search` function returns the *leftmost* match. Using `??` and `*?` only changes the behaviour to prefer the *shortest leftmost* match; it will *not* return a shorter match that starts to the right of an already found match. Also note that the `re` module doesn't return overlapping matches, so even using `findall` or `finditer` you will not be able to find the two matches you are looking for.
Explanation for the pattern `/a??b/`: `a??` matches the character `a` literally (case sensitive); the quantifier `??` means between zero and one time, as few times as possible, expanding as needed (lazy); then the character `b` must match literally (case sensitive). So it will match the last `'ab'` in the given string `'aaab'`. For the pattern `/a*?b/`: `a*?` matches the character `'a'` literally (case sensitive); the quantifier `*?` means between zero and unlimited times, as few times as possible, expanding as needed (lazy); then the character `b` must match literally (case sensitive). So it will match `'aaab'` as a whole in `'aaab'`.
40,762,671
I want to run a process on a remote machine and I want it to get terminated when my host program exits. I have a small test script which looks like this: ``` import time while True: print('hello') time.sleep(1) ``` and I start this process on a remote machine via a script like this one: ``` import paramiko ssh = paramiko.SSHClient() ssh.load_system_host_keys() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect('my_machine', username='root', key_filename='some_key') _in, _out, _err = ssh.exec_command('python /home/me/loop.py') time.sleep(5) # could do something with in/out/err ssh.close() ``` But my problem is that the started process keeps running even after I've closed the Python process which started the SSH connection. Is there a way to force closing the remote session and the remotely started process when the host process gets terminated? **Edit**: [This question](https://stackoverflow.com/questions/7734679/paramiko-and-exec-command-killing-remote-process) sounds similar but there is no satisfying answer. I tried to set the `keepalive` but with no effect: ``` ssh.get_transport().set_keepalive(1) ```
2016/11/23
[ "https://Stackoverflow.com/questions/40762671", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1668622/" ]
Closing the SSH connection will not by itself kill the running command on the remote host. The easiest solution is: ``` ssh.exec_command('python /home/me/loop.py', get_pty=True) # ... do something ... ssh.close() ``` Then when the SSH connection is closed, the pty (on the remote host) will also be closed, and the kernel (on the remote host) will send the `SIGHUP` signal to the remote command. By default `SIGHUP` terminates the process, so the remote command will be killed. --- According to the [APUE](http://www.apuebook.com/) book: > *SIGHUP* is sent to *the controlling process (session leader)* associated with a *controlling terminal* if a disconnect is detected by the terminal interface.
Try [`closer`](https://haarcuba.github.io/closer/) - a library I've written specifically for this sort of thing. Doesn't use Paramiko, but perhaps it will work for you anyway.
44,354,394
How do I get a canvas to actually have a size? ``` root = Tk() canv = Canvas(root, width=600, height=600) canv.pack(fill = BOTH, expand = True) root.after(1, draw) mainloop() ``` just creates a window with a 1px canvas in the top left corner. Edit: I omitted draw because it didn't throw anything and thus didn't seem relevant. But since it runs perfectly if I step through it (`n`) via pdb, here's the full code: ``` from tkinter import * from pdb import set_trace class Field: #abstract def __init__(self, size): super().__init__() self.tiles = [[(None, 0) for i in range(size[1])] for j in range(size[0])] #list of columns def get_tile(self, x,y): return tiles[x][y] def inc_tile(self, x,y,player): t = self.tiles[x][y] tiles[x][y] = (player, t[1]) class RectField(Field): def __init__(self, size): super().__init__(size) def select_tile(self, x, y): lx = len(self.tiles) rx = floor(x*lx) ly = len(self.tiles[rx]) ry = floor(x*ly) return (rx, ry) def draw(self, canvas): canvas.delete(ALL) w = canvas.winfo_width() h = canvas.winfo_height() canvas.create_rectangle(0, 0, w, h, fill='#f0f') sx = w/len(self.tiles) for i in range(len(self.tiles)): sy = h/len(self.tiles[i]) for j in range(len(self.tiles[i])): pl = self.tiles[i][j][0] cl = '#888' if not pl else ('#f20' if pl == 1 else '#02f') canvas.create_rectangle(i*sx, j*sy, (i+1)*sx, (j+1)*sy, fill=cl, outline='#000') ############################################################################## ##################### # MAIN ############################################################################## ##################### root = Tk() canv = Canvas(root, width=600, height=600) canv.pack(fill = BOTH, expand = True) #set_trace() field = RectField((4,4)) def init(): canv.create_rectangle(0, 0, 600, 600, fill='#f0f') set_trace() field.draw(canv) root.update() root.after(1, init) mainloop() ``` OS is Ubuntu 16.04, Python version is the preinstalled python3, run via terminal.
2017/06/04
[ "https://Stackoverflow.com/questions/44354394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4690599/" ]
Your canvas is not staying minimized. If you were to give the canvas a distinct background color you would see that it immediately fills the whole window, and stays that way. You are getting `1` for the window width and height because you aren't giving tkinter enough time to draw it before asking for the size. `winfo_width` and `winfo_height` return the actual height, and the actual height can't be computed until the window is actually visible on the screen. The simplest solution is to force an update before calling your `init` function. You can do this by calling `update` and then calling `init` rather than using `after`. Instead of this: ``` root.after(1, init) ``` Do this: ``` root.update() init() ```
I was able to get your canvas to show up and work fine. It looks like your `init` function was the problem. You don't need to define a time to wait when calling your init() function just call it directly and the program will do the rest. Also I have looked over the tkinter documentation for canvas and I do not see anything like `.draw()` I do not think a function like that exist for tkinter. I could be wrong but if it does exist the documentation is not obvious. use this instead: ``` def init(): canv.create_rectangle(0, 0, 600, 600, fill='blue') # I only changed the color to make sure it was working #set_trace() # not needed. #field.draw(canv) # does nothing? #root.update() # not needed in this case. init() ```
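Beyond forcing an `update()`, another common pattern (a sketch of mine, not from either answer) is to bind the canvas's `<Configure>` event and redraw from the event's `width`/`height`, so the drawing code never has to guess when geometry is ready. `tile_rects`, `redraw`, and the `RUN_GUI` guard are illustrative names invented for this example, not part of the question's code:

```python
import os

def tile_rects(w, h, rows, cols):
    """Pure helper: rectangle coordinates for a rows x cols grid filling w x h."""
    sx, sy = w / cols, h / rows
    return [(c * sx, r * sy, (c + 1) * sx, (r + 1) * sy)
            for r in range(rows) for c in range(cols)]

def redraw(canvas, event):
    # event.width/event.height are the canvas's actual laid-out size,
    # so there is no need to wait for winfo_width() to become valid.
    canvas.delete("all")
    for x0, y0, x1, y1 in tile_rects(event.width, event.height, 4, 4):
        canvas.create_rectangle(x0, y0, x1, y1, fill="#888", outline="#000")

# Guarded demo: set the (invented) RUN_GUI env var to open the window.
if __name__ == "__main__" and os.environ.get("RUN_GUI"):
    import tkinter as tk
    root = tk.Tk()
    canv = tk.Canvas(root, width=600, height=600)
    canv.pack(fill=tk.BOTH, expand=True)
    # <Configure> fires once the canvas is first laid out and on every resize.
    canv.bind("<Configure>", lambda e: redraw(canv, e))
    root.mainloop()
```

A side benefit: binding `<Configure>` also re-scales the grid automatically when the user resizes the window, which a one-shot `root.after(1, ...)` call does not.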
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
Iterating the list in reverse, [pop](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) the element if it's 6, then [append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) it. ``` xs = [6,4,6,2,3,6,9,6,1,6,5] for i in range(len(xs)-1, -1, -1): # 10 to 0 if xs[i] == 6: xs.append(xs.pop(i)) ```
Why not try something like this? Basically, the approach is to first count the number of values. If 0, then returns (since Python produces a ValueError if the list.index method is called for an element not in the list). We can then set the first acceptable index for the value to be the length of the list minus the number of occurrences it exists in the list. We can then combine list.pop/list.append to then traverse the list until all the values desired occur at the end of the list. ``` def shift_value(lst, value): counts = lst.count(value) # 5 if not counts: return lst value_index = len(lst) - counts index = lst.index(value) while index != value_index: lst.append(lst.pop(index)) index = lst.index(value) return lst lst = [6,4,6,2,3,6,9,6,1,6,5] print(shift_value(lst, 6)) ``` EDIT: This is horribly inefficient, better answer suggested above. This requires O(n^2) time, rather than O(n) time.
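Several of the answers in this thread move matching elements with `pop`/`append`. Another in-place option, sketched here with an invented name (`shift_to_end`), is the classic stable-partition write-index pass: one forward scan compacts the keepers, then the tail is filled with the target value. It also handles the `[0, 0, 5]` case mentioned in the question's edit:

```python
def shift_to_end(nums, value):
    """Move every occurrence of `value` to the end of `nums`, in place,
    preserving the relative order of the other elements."""
    write = 0
    for read in range(len(nums)):
        if nums[read] != value:
            nums[write] = nums[read]  # compact non-matching items toward the front
            write += 1
    # Every slot from `write` onward now belongs to the moved value.
    for i in range(write, len(nums)):
        nums[i] = value

a = [6, 4, 6, 2, 3, 6, 9, 6, 1, 6, 5]
shift_to_end(a, 6)
print(a)  # -> [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6]
```

Note that this overwrites slots rather than moving objects, so it assumes elements equal to `value` are interchangeable — true for plain numbers like `6`.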
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
Iterating the list in reverse, [pop](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) the element if it's 6, then [append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) it. ``` xs = [6,4,6,2,3,6,9,6,1,6,5] for i in range(len(xs)-1, -1, -1): # 10 to 0 if xs[i] == 6: xs.append(xs.pop(i)) ```
The key term here is "In Line". The way you do that is move `num[i] = num[i+1]` for each `i` to the end of the list. ``` def shift_sixes(num): for i, val in enumerate(num): if val == 6: # shift remaining items down for j in range(i,len(num)-1): num[j] = num[j+1] # add 6 at the end num[-1] = 6 return num print(shift_sixes([1,9,4,6,2,7,8,6,2,2,6])) print(shift_sixes([1,2,3])) print(shift_sixes([6])) print(shift_sixes([3])) ```
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
Iterating the list in reverse, [pop](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) the element if it's 6, then [append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) it. ``` xs = [6,4,6,2,3,6,9,6,1,6,5] for i in range(len(xs)-1, -1, -1): # 10 to 0 if xs[i] == 6: xs.append(xs.pop(i)) ```
Use two runners. First from front to end checking for 6s, second from end to front pointing to last item that's not a 6. Keep swapping (`a[i+1], a[i] = a[i], a[i+1]`) until they meet. **Catch:** this is not stable like in a stable sort. But I don't see that as a requirement. *Will try to write working code when in front of a python interpreter with a keyboard.*
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
Iterating the list in reverse, [pop](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) the element if it's 6, then [append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) it. ``` xs = [6,4,6,2,3,6,9,6,1,6,5] for i in range(len(xs)-1, -1, -1): # 10 to 0 if xs[i] == 6: xs.append(xs.pop(i)) ```
In case you need a stable sort (i.e. order of elements that are not 6 should remain the same), then the solution is: ``` def move_to_end(data, value): current = 0 # Instead of iterating with for, we iterate with index processed = 0 # How many elements we already found and moved to end of list length = len(data) # How many elements we must process while current + processed < length: # While there's still data to process if data[current] == value: # If current element matches condition data.append(data.pop(current)) # We remove it from list and append to end processed += 1 # Our index remains the same since list shifted, but we increase number of processed elements else: # If current element is not a match current += 1 # We increase our index and proceed to next element if __name__ == '__main__': print print 'Some testing:' print for test_case in ( [1, 9, 4, 6, 2, 7, 8, 6, 2, 2, 6], # Generic case [6, 6, 6, 6], # All items are 6 [1, 7], # No items are 6 [], # No items at all ): print 'Raw:', test_case move_to_end(test_case, 6) print 'Becomes:', test_case print ``` Note that this solution retains the order of not only non-matching elements, but of matching elements as well. So for example, if you change the check condition from "equal to 6" to "is an even number", all elements matching the condition will be moved to the end of list while retaining their order among themselves.
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
Iterating the list in reverse, [pop](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) the element if it's 6, then [append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) it. ``` xs = [6,4,6,2,3,6,9,6,1,6,5] for i in range(len(xs)-1, -1, -1): # 10 to 0 if xs[i] == 6: xs.append(xs.pop(i)) ```
Why not keep it simple? ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in range(0,len(nums)): if nums[i] == 6: nums.append(nums.pop(i)) >>> shift_sixes(a) >>> a [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6] ``` Beware, though: incrementing `i` right after a pop skips the element that just shifted into position `i`, so inputs with certain adjacent 6s (e.g. `[6, 6, 5]`, which comes out as `[6, 5, 6]`) are handled incorrectly; iterating in reverse avoids this.
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
It is a bad idea to modify a list while traversing it. You can either make a copy to traverse, or generate a new list during the traversal. In fact, the question can be done in many ways, such as: ``` >>> a.sort(key = lambda i: i == 6) >>> a [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6] ```
Why not try something like this? Basically, the approach is to first count the number of values. If 0, then returns (since Python produces a ValueError if the list.index method is called for an element not in the list). We can then set the first acceptable index for the value to be the length of the list minus the number of occurrences it exists in the list. We can then combine list.pop/list.append to then traverse the list until all the values desired occur at the end of the list. ``` def shift_value(lst, value): counts = lst.count(value) # 5 if not counts: return lst value_index = len(lst) - counts index = lst.index(value) while index != value_index: lst.append(lst.pop(index)) index = lst.index(value) return lst lst = [6,4,6,2,3,6,9,6,1,6,5] print(shift_value(lst, 6)) ``` EDIT: This is horribly inefficient, better answer suggested above. This requires O(n^2) time, rather than O(n) time.
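The `key = lambda i: i == 6` trick in the first answer works because booleans compare as integers (`False == 0 < True == 1`), so matching elements sort after non-matching ones, and `list.sort` is both stable and in-place. A small sketch of this, where the even-number variant is my own generalization rather than something from the thread:

```python
data = [6, 4, 6, 2, 3, 6, 9, 6, 1, 6, 5]
# Stable sort: items inside each group keep their original relative order.
data.sort(key=lambda x: x == 6)
assert data == [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6]

# The same idea works with any predicate, e.g. pushing even numbers back:
nums = [1, 2, 3, 4, 5, 6]
nums.sort(key=lambda x: x % 2 == 0)
print(nums)  # -> [1, 3, 5, 2, 4, 6]
```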
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
It is a bad idea to modify a list while traversing it. You can either make a copy to traverse, or generate a new list during the traversal. In fact, the question can be done in many ways, such as: ``` >>> a.sort(key = lambda i: i == 6) >>> a [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6] ```
The key term here is "In Line". The way you do that is move `num[i] = num[i+1]` for each `i` to the end of the list. ``` def shift_sixes(num): for i, val in enumerate(num): if val == 6: # shift remaining items down for j in range(i,len(num)-1): num[j] = num[j+1] # add 6 at the end num[-1] = 6 return num print(shift_sixes([1,9,4,6,2,7,8,6,2,2,6])) print(shift_sixes([1,2,3])) print(shift_sixes([6])) print(shift_sixes([3])) ```
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
It is a bad idea to modify a list while traversing it. You can either make a copy to traverse, or generate a new list during the traversal. In fact, the question can be done in many ways, such as: ``` >>> a.sort(key = lambda i: i == 6) >>> a [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6] ```
Use two runners. First from front to end checking for 6s, second from end to front pointing to last item that's not a 6. Keep swapping (`a[i+1], a[i] = a[i], a[i+1]`) until they meet. **Catch:** this is not stable like in a stable sort. But I don't see that as a requirement. *Will try to write working code when in front of a python interpreter with a keyboard.*
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
It is a bad idea to modify a list while traversing it. You can either make a copy to traverse, or generate a new list during the traversal. In fact, the question can be done in many ways, such as: ``` >>> a.sort(key = lambda i: i == 6) >>> a [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6] ```
In case you need a stable sort (i.e. order of elements that are not 6 should remain the same), then the solution is: ``` def move_to_end(data, value): current = 0 # Instead of iterating with for, we iterate with index processed = 0 # How many elements we already found and moved to end of list length = len(data) # How many elements we must process while current + processed < length: # While there's still data to process if data[current] == value: # If current element matches condition data.append(data.pop(current)) # We remove it from list and append to end processed += 1 # Our index remains the same since list shifted, but we increase number of processed elements else: # If current element is not a match current += 1 # We increase our index and proceed to next element if __name__ == '__main__': print print 'Some testing:' print for test_case in ( [1, 9, 4, 6, 2, 7, 8, 6, 2, 2, 6], # Generic case [6, 6, 6, 6], # All items are 6 [1, 7], # No items are 6 [], # No items at all ): print 'Raw:', test_case move_to_end(test_case, 6) print 'Becomes:', test_case print ``` Note that this solution retains the order of not only non-matching elements, but of matching elements as well. So for example, if you change the check condition from "equal to 6" to "is an even number", all elements matching the condition will be moved to the end of list while retaining their order among themselves.
34,035,270
My task is to remove all instances of one particular element ('6' in this example) and move those to the end of the list. The requirement is to traverse a list making in-line changes (creating no supplemental lists). Input example: [6,4,6,2,3,6,9,6,1,6,5] Output example: [4,2,3,9,1,5,6,6,6,6,6] So far, I have been able to do this only by making supplemental lists (breaking the task's requirements), so this working code is not allowed: ``` def shift_sixes(nums): b = [] c = 0 d = [] for i in nums: if i == 6: b.insert(len(nums),i) elif i != 6: c = c +1 d.insert(c,i) ans = d + b return ans ``` I've also tried `list.remove()` and `list.insert()` but have gotten into trouble with the indexing (which moves when I `insert()` then move the element to the end): For example - ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in nums: if i == 6: nums.remove(i) nums.insert(nums[len(nums)-1], 0) elif i != 0: i shift_sixes(a) ``` Additionally, I have tried to use the enumerate() function as follows, but run into problems on the right hand side of the b[idx] assigment line: ``` for idx, b in enumerate(a): a[idx] = ??? ``` Have read other stackoverflow entries [here](https://stackoverflow.com/questions/1540049/replace-values-in-list-using-python), [here](https://stackoverflow.com/questions/24201926/how-to-replace-all-occurrences-of-an-element-in-a-list-in-python-in-place) and [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python), but they do not tackle the movment of the element to one end. Would appreciate any help on this list traversal / inplace switching issue. Many thanks. --- EDIT @eph - thank you. this is indeed an elegant response. I am sure it will pass my 'no new list' requirement? I surely intend to learn more about lambda and its uses @falsetru - thank you for the reminder of the append/pop combination (which I tried to do in my original query via list.remove() and list.insert() @tdelaney - thank you as well. 
Somehow your response is closest to what I was attempting, but it seems not to pass the test for [0, 0, 5].
2015/12/02
[ "https://Stackoverflow.com/questions/34035270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5476661/" ]
It is a bad idea to modify a list while traversing it. You can either make a copy to traverse, or generate a new list during the traversal. In fact, the question can be done in many ways, such as: ``` >>> a.sort(key = lambda i: i == 6) >>> a [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6] ```
Why not keep it simple? ``` a = [6,4,6,2,3,6,9,6,1,6,5] def shift_sixes(nums): for i in range(0,len(nums)): if nums[i] == 6: nums.append(nums.pop(i)) >>> shift_sixes(a) >>> a [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6] ```
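A caveat on the pop-while-indexing approach above: because each `pop(i)` shifts later elements left while `i` keeps advancing, it can misplace elements for some inputs (e.g. `[6, 6, 5]` ends up as `[6, 5, 6]`). A hedged sketch of a stable, truly in-place alternative is the classic write-pointer sweep; the `target` parameter is my own addition for generality:

```python
def shift_sixes(nums, target=6):
    """Stably move every `target` value to the end of nums, in place."""
    write = 0
    for value in nums:          # read position always runs at or ahead of `write`
        if value != target:
            nums[write] = value
            write += 1
    for i in range(write, len(nums)):   # overwrite the tail with the target value
        nums[i] = target

a = [6, 4, 6, 2, 3, 6, 9, 6, 1, 6, 5]
shift_sixes(a)
print(a)  # [4, 2, 3, 9, 1, 5, 6, 6, 6, 6, 6]
```

This makes a single pass, allocates no supplemental list, and keeps the relative order of the non-target elements.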
2,769,516
I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me) and I have been reading around a fair bit, with lots of success at rendering basic models, some basic lighting and 'some' texturing success too. But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult. 1. How do people get 3D models from their favorite 3D modeling tool into an iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use; is this what people do every time? Or do the "big" tooling suites (3DS, Maya, etc...) have exporting features? 2. Say I have my model in a nice .h file, all the vertices, texture points, etc. are lined up; how do I make my model (say of a basic person) walk? Or to be more general, how do you animate "part" of a model (legs only, turn head, etc...)? Do they need to be a massive mash-up of many different tiny models, or can you pre-bake animations these days "into" models (somehow)? 3. Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage that designer/developer workflow? Surely not all the animations, textures, etc... are done programmatically. I hope these are not stupid questions; in actual fact, the app I'm trying to investigate how to make is really quite simple, just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on? Thanks for any help you can give; I appreciate all types of response, big or small :) Cheers, Mark
2010/05/04
[ "https://Stackoverflow.com/questions/2769516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26310/" ]
1. Something many people are surprised by when starting OpenGL development is that there's no such thing as an "OpenGL file format" for models, let alone animated ones. (DirectX, for example, comes with a .x file format supported right away.) This is because OpenGL acts somewhat at a lower level. Of course, as tm1rbrt mentioned, there are plenty of libraries available. You can easily create your own file format though if you only need geometry. Things get more complex when you want to take animation and shading into account as well. Take a look at [Collada](http://collada.org) for that sort of thing. 2. Again, animation can be done in several ways. Characters are often animated with skeletal animation. Have a look at the [cal3d](http://gna.org/projects/cal3d) library as a starting point for this. 3. You definitely want to spend some time creating a good pipeline for your content creation. Artists must have a set of tools to create their models and animations and to test them in the game engine. Artists must also be instructed about the limits of the engine, both in terms of polygons and of shading. Sometimes complex custom editors are coded to create levels, worlds, etc. in a way compatible with your specific needs.
1. Write or use a model-loading library. Or use an existing graphics library; this will have routines to load models/textures already. 2. Animating models is done with bones in the 3D model editor. The graphics library will take care of moving the vertices etc. for you. 3. No, artists create art and programmers create engines. [This is a link to my favourite graphics engine.](http://irrlicht.sourceforge.net/) Hope that helps
2,769,516
I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me) and I have been reading around a fair bit, with lots of success at rendering basic models, some basic lighting and 'some' texturing success too. But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult. 1. How do people get 3D models from their favorite 3D modeling tool into an iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use; is this what people do every time? Or do the "big" tooling suites (3DS, Maya, etc...) have exporting features? 2. Say I have my model in a nice .h file, all the vertices, texture points, etc. are lined up; how do I make my model (say of a basic person) walk? Or to be more general, how do you animate "part" of a model (legs only, turn head, etc...)? Do they need to be a massive mash-up of many different tiny models, or can you pre-bake animations these days "into" models (somehow)? 3. Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage that designer/developer workflow? Surely not all the animations, textures, etc... are done programmatically. I hope these are not stupid questions; in actual fact, the app I'm trying to investigate how to make is really quite simple, just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on? Thanks for any help you can give; I appreciate all types of response, big or small :) Cheers, Mark
2010/05/04
[ "https://Stackoverflow.com/questions/2769516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26310/" ]
Trying to explain why the answer to this question will always be vague: OpenGL ES is very low level. It's all about pushing triangles to the screen and filling pixels, and nothing else, basically. What you need to create a game is, as you've realised, a lot of code for managing assets, loading objects and worlds, managing animations, textures, sound, maybe network, physics, etc. These parts are the "game engine". Development firms have their own preferences. Some buy their game engine, others like to develop their own. Most use some combination of bought tech, open source and in-house built tech and tools. There are many engines on the market, and everyone has their own opinion on which is best... Workflow and tools used vary a lot, from large firms with strict roles and big budgets to small indie teams of a couple of guys and gals that do whatever is needed to get the game done :-) For the hobbyist and indie dev, there are several cheap and open-source engines you can use, of differing maturity and amounts of documentation/support. Same there, you have to look around until you find one you like. On top of the game engine, you write your game code that uses the game engine (and any other libraries you might need) to create whatever game it is you want to make.
1. Write or use a model-loading library. Or use an existing graphics library; this will have routines to load models/textures already. 2. Animating models is done with bones in the 3D model editor. The graphics library will take care of moving the vertices etc. for you. 3. No, artists create art and programmers create engines. [This is a link to my favourite graphics engine.](http://irrlicht.sourceforge.net/) Hope that helps
2,769,516
I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me) and I have been reading around a fair bit, with lots of success at rendering basic models, some basic lighting and 'some' texturing success too. But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult. 1. How do people get 3D models from their favorite 3D modeling tool into an iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use; is this what people do every time? Or do the "big" tooling suites (3DS, Maya, etc...) have exporting features? 2. Say I have my model in a nice .h file, all the vertices, texture points, etc. are lined up; how do I make my model (say of a basic person) walk? Or to be more general, how do you animate "part" of a model (legs only, turn head, etc...)? Do they need to be a massive mash-up of many different tiny models, or can you pre-bake animations these days "into" models (somehow)? 3. Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage that designer/developer workflow? Surely not all the animations, textures, etc... are done programmatically. I hope these are not stupid questions; in actual fact, the app I'm trying to investigate how to make is really quite simple, just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on? Thanks for any help you can give; I appreciate all types of response, big or small :) Cheers, Mark
2010/05/04
[ "https://Stackoverflow.com/questions/2769516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26310/" ]
Trying to explain why the answer to this question will always be vague: OpenGL ES is very low level. It's all about pushing triangles to the screen and filling pixels, and nothing else, basically. What you need to create a game is, as you've realised, a lot of code for managing assets, loading objects and worlds, managing animations, textures, sound, maybe network, physics, etc. These parts are the "game engine". Development firms have their own preferences. Some buy their game engine, others like to develop their own. Most use some combination of bought tech, open source and in-house built tech and tools. There are many engines on the market, and everyone has their own opinion on which is best... Workflow and tools used vary a lot, from large firms with strict roles and big budgets to small indie teams of a couple of guys and gals that do whatever is needed to get the game done :-) For the hobbyist and indie dev, there are several cheap and open-source engines you can use, of differing maturity and amounts of documentation/support. Same there, you have to look around until you find one you like. On top of the game engine, you write your game code that uses the game engine (and any other libraries you might need) to create whatever game it is you want to make.
1. Something many people are surprised by when starting OpenGL development is that there's no such thing as an "OpenGL file format" for models, let alone animated ones. (DirectX, for example, comes with a .x file format supported right away.) This is because OpenGL acts somewhat at a lower level. Of course, as tm1rbrt mentioned, there are plenty of libraries available. You can easily create your own file format though if you only need geometry. Things get more complex when you want to take animation and shading into account as well. Take a look at [Collada](http://collada.org) for that sort of thing. 2. Again, animation can be done in several ways. Characters are often animated with skeletal animation. Have a look at the [cal3d](http://gna.org/projects/cal3d) library as a starting point for this. 3. You definitely want to spend some time creating a good pipeline for your content creation. Artists must have a set of tools to create their models and animations and to test them in the game engine. Artists must also be instructed about the limits of the engine, both in terms of polygons and of shading. Sometimes complex custom editors are coded to create levels, worlds, etc. in a way compatible with your specific needs.
71,868,469
I'm trying to save an object using CBVs; I'm new to using them, and when saving via CreateView I'm getting this error: "NOT NULL constraint failed: forum_question.user_id". I would appreciate a beginner-friendly explanation of how to fix this, and maybe tips as well, thank you! models.py: ``` class Question(VoteModel, models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) title = models.CharField(max_length=30) detail = models.TextField() tags = models.TextField(default='') add_time = models.DateTimeField(auto_now_add=True) def __str__(self): return self.title ``` forms.py: ``` class QuestionForm(ModelForm): class Meta: model = Question fields = ['title', 'detail', 'tags'] ``` views.py: ``` class AskForm(CreateView): def post(self): user = self.request.user model = Question form_class = QuestionForm template_name = 'forum/ask-question.html' if form_class.is_valid(): form_class.save() ``` exceptions?: [![enter image description here](https://i.stack.imgur.com/MUXva.png)](https://i.stack.imgur.com/MUXva.png) edit 3: [![enter image description here](https://i.stack.imgur.com/SxtFu.png)](https://i.stack.imgur.com/SxtFu.png) extra info: Traceback (most recent call last): File "/home/titanium/.local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/home/titanium/.local/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/base.py", line 69, in view return self.dispatch(request, *args, **kwargs) File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/base.py", line 101, in dispatch return handler(request, *args, **kwargs) File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 174, in post return 
super().post(request, *args, **kwargs) File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 144, in post return self.form_valid(form) File "/home/titanium/.local/lib/python3.8/site-packages/django/views/generic/edit.py", line 127, in form_valid self.object = form.save() File "/home/titanium/.local/lib/python3.8/site-packages/django/forms/models.py", line 466, in save self.instance.save() File "/home/titanium/.local/lib/python3.8/site-packages/vote/models.py", line 67, in save super(VoteModel, self).save(*args, **kwargs) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 743, in save self.save_base(using=using, force_insert=force_insert, File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 780, in save_base updated = self._save_table( File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 885, in _save_table results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/base.py", line 923, in _do_insert return manager._insert( File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/query.py", line 1301, in _insert return query.get_compiler(using=using).execute_sql(returning_fields) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1441, in execute_sql cursor.execute(sql, params) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 99, in execute return super().execute(sql, params) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute return self._execute_with_wrappers(sql, 
params, many=False, executor=self._execute) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers return executor(sql, params, many, context) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute return self.cursor.execute(sql, params) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/utils.py", line 85, in _execute return self.cursor.execute(sql, params) File "/home/titanium/.local/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py", line 416, in execute return Database.Cursor.execute(self, query, params) django.db.utils.IntegrityError: NOT NULL constraint failed: forum_question.user_id [14/Apr/2022 09:58:02] "POST /ask/ HTTP/1.1" 500 175023
2022/04/14
[ "https://Stackoverflow.com/questions/71868469", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17130619/" ]
A forum Question instance must have a non-null user field, but you are not specifying the user related to the object you're creating. If you don't want to attach a user, update your model's user field to be: ``` user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, blank=True, null=True) ``` Otherwise, override form_valid() in your view to attach the user, sort of like this (note I have not tested this directly; follow the [documentation here](https://docs.djangoproject.com/en/4.0/topics/class-based-views/generic-editing/#models-and-request-user)) — note the field on your model is `user`, so assign to form.instance.user rather than some other name, and let CreateView handle post() via the class attributes: ``` class AskForm(CreateView): model = Question form_class = QuestionForm template_name = 'forum/ask-question.html' def form_valid(self, form): form.instance.user = self.request.user return super().form_valid(form) ```
I'm not sure if this is still useful; however, I ran into the same error. You can fix it by deleting your migration files and the database. The error is due to sending NULL data (no data) to an already existing field in the database, usually after that field has been modified or deleted.
58,711,540
What is the equivalent of the C++ STL set<> in Python 3? If there is no implementation, what should I use in Python to: 1) Store a list of numbers 2) Find a not-less-than element in that list, like lower_bound() of the STL's set?
2019/11/05
[ "https://Stackoverflow.com/questions/58711540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6138473/" ]
The content scripts run in an "isolated world" which is a different context. By default devtools works in the page context so you need to [switch the context selector](https://developers.google.com/web/tools/chrome-devtools/console/reference#context) in devtools console toolbar to your extension: ![enter image description here](https://i.stack.imgur.com/v44eO.png) An alternative solution is to expose the functions in the page context by putting them into a `<script>` element in the web page, but that won't be your content script anymore, it'd be just a normal page script function ([more info](https://stackoverflow.com/a/9517879)).
You can access your extension's console by right click on the extension popup and then selecting "Inspect".
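Both replies above appear to address a different (browser-extension) question. For the question as asked — an ordered container with a lower_bound-style lookup — a common Python sketch pairs a sorted list with the stdlib bisect module, where bisect_left behaves like std::set::lower_bound; the helper name below is my own:

```python
import bisect

def lower_bound(sorted_list, x):
    """Index of the first element not less than x (len(sorted_list) if none),
    analogous to std::set::lower_bound on a sorted list."""
    return bisect.bisect_left(sorted_list, x)

nums = sorted([5, 1, 9, 3, 7])   # Python has no built-in ordered set; keep a sorted list
i = lower_bound(nums, 4)
print(nums[i] if i < len(nums) else None)  # 5  (first element not less than 4)
```

For richer ordered-set semantics, the third-party sortedcontainers package (SortedList/SortedSet with their own bisect_left methods) is often suggested; for plain membership tests, the built-in set suffices.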
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
Using 2to3 utility. ``` $ cat try.py import SimpleHTTPServer $ 2to3 try.py RefactoringTool: Skipping implicit fixer: buffer RefactoringTool: Skipping implicit fixer: idioms RefactoringTool: Skipping implicit fixer: set_literal RefactoringTool: Skipping implicit fixer: ws_comma RefactoringTool: Refactored try.py --- try.py (original) +++ try.py (refactored) @@ -1 +1 @@ -import SimpleHTTPServer +import http.server RefactoringTool: Files that need to be modified: RefactoringTool: try.py ``` Like many \*nix utils, `2to3` accepts `stdin` if the argument passed is `-`. Therefore, you can test without creating any files like so: ``` $ 2to3 - <<< "import SimpleHTTPServer" ```
In one of my projects I run tests against Python 2 and 3. For that I wrote a small script which starts a local server independently: ``` $ python -m $(python -c 'import sys; print("http.server" if sys.version_info[:2] > (2,7) else "SimpleHTTPServer")') Serving HTTP on 0.0.0.0 port 8000 ... ``` As an alias: ``` $ alias serve="python -m $(python -c 'import sys; print("http.server" if sys.version_info[:2] > (2,7) else "SimpleHTTPServer")')" $ serve Serving HTTP on 0.0.0.0 port 8000 ... ``` Please note that I control my Python version via [conda environments](https://conda.io/docs/user-guide/tasks/manage-environments.html), because of that I can use `python` instead of `python3` for using Python 3.
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
From [the docs](https://docs.python.org/2/library/simplehttpserver.html): > > The `SimpleHTTPServer` module has been merged into `http.server` in Python 3.0. The 2to3 tool will automatically adapt imports when converting your sources to 3.0. > > > So, your command is `python -m http.server`, or depending on your installation, it can be: ``` python3 -m http.server ```
The equivalent is: ``` python3 -m http.server ```
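For completeness, the same server can also be started from Python code with the stdlib http.server module. This is only a sketch: the daemon thread and the self-request are there to show the server answering and then exit cleanly; a real script would simply call serve_forever() on a fixed port such as 8000 (here port 0 lets the OS pick a free one).

```python
import threading
import urllib.request
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# Programmatic equivalent of `python3 -m http.server`: serve the current
# directory over HTTP (ThreadingHTTPServer requires Python 3.7+).
httpd = ThreadingHTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % httpd.server_address[1]
status = urllib.request.urlopen(url).status   # directory listing -> HTTP 200
print(url, status)
httpd.shutdown()
```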
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
Using 2to3 utility. ``` $ cat try.py import SimpleHTTPServer $ 2to3 try.py RefactoringTool: Skipping implicit fixer: buffer RefactoringTool: Skipping implicit fixer: idioms RefactoringTool: Skipping implicit fixer: set_literal RefactoringTool: Skipping implicit fixer: ws_comma RefactoringTool: Refactored try.py --- try.py (original) +++ try.py (refactored) @@ -1 +1 @@ -import SimpleHTTPServer +import http.server RefactoringTool: Files that need to be modified: RefactoringTool: try.py ``` Like many \*nix utils, `2to3` accepts `stdin` if the argument passed is `-`. Therefore, you can test without creating any files like so: ``` $ 2to3 - <<< "import SimpleHTTPServer" ```
As everyone has mentioned [http.server](https://docs.python.org/3/library/http.server.html#module-http.server) module is equivalent to `python -m SimpleHTTPServer`. But as a warning from <https://docs.python.org/3/library/http.server.html#module-http.server> > > **Warning**: `http.server` is not recommended for production. It only implements basic security checks. > > > Usage ===== http.server can also be invoked directly using the `-m` switch of the interpreter. ``` python -m http.server ``` The above command will run a server by default on port number `8000`. You can also give the port number explicitly while running the server ``` python -m http.server 9000 ``` The above command will run an HTTP server on port 9000 instead of 8000. > > By default, server binds itself to all interfaces. The option > -b/--bind specifies a specific address to which it should bind. Both IPv4 and IPv6 addresses are supported. For example, the following > command causes the server to bind to localhost only: > > > ``` python -m http.server 8000 --bind 127.0.0.1 ``` or ``` python -m http.server 8000 -b 127.0.0.1 ``` Python 3.8 version also supports IPv6 in the bind argument. Directory Binding ----------------- By default, server uses the current directory. The option `-d/--directory` specifies a directory to which it should serve the files. For example, the following command uses a specific directory: ``` python -m http.server --directory /tmp/ ``` Directory binding is introduced in python 3.7
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
The equivalent is: ``` python3 -m http.server ```
Using 2to3 utility. ``` $ cat try.py import SimpleHTTPServer $ 2to3 try.py RefactoringTool: Skipping implicit fixer: buffer RefactoringTool: Skipping implicit fixer: idioms RefactoringTool: Skipping implicit fixer: set_literal RefactoringTool: Skipping implicit fixer: ws_comma RefactoringTool: Refactored try.py --- try.py (original) +++ try.py (refactored) @@ -1 +1 @@ -import SimpleHTTPServer +import http.server RefactoringTool: Files that need to be modified: RefactoringTool: try.py ``` Like many \*nix utils, `2to3` accepts `stdin` if the argument passed is `-`. Therefore, you can test without creating any files like so: ``` $ 2to3 - <<< "import SimpleHTTPServer" ```
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
From [the docs](https://docs.python.org/2/library/simplehttpserver.html): > > The `SimpleHTTPServer` module has been merged into `http.server` in Python 3.0. The 2to3 tool will automatically adapt imports when converting your sources to 3.0. > > > So, your command is `python -m http.server`, or depending on your installation, it can be: ``` python3 -m http.server ```
In addition to Petr's answer, if you want to bind to a specific interface instead of all the interfaces you can use `-b` or `--bind` flag. ``` python -m http.server 8000 --bind 127.0.0.1 ``` The above snippet should do the trick. 8000 is the port number. 80 is used as the standard port for HTTP communications.
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
In addition to Petr's answer, if you want to bind to a specific interface instead of all the interfaces you can use `-b` or `--bind` flag. ``` python -m http.server 8000 --bind 127.0.0.1 ``` The above snippet should do the trick. 8000 is the port number. 80 is used as the standard port for HTTP communications.
Just wanted to add what worked for me: `python3 -m http.server 8000` (you can use any port number here except the ones which are currently in use)
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
Using 2to3 utility. ``` $ cat try.py import SimpleHTTPServer $ 2to3 try.py RefactoringTool: Skipping implicit fixer: buffer RefactoringTool: Skipping implicit fixer: idioms RefactoringTool: Skipping implicit fixer: set_literal RefactoringTool: Skipping implicit fixer: ws_comma RefactoringTool: Refactored try.py --- try.py (original) +++ try.py (refactored) @@ -1 +1 @@ -import SimpleHTTPServer +import http.server RefactoringTool: Files that need to be modified: RefactoringTool: try.py ``` Like many \*nix utils, `2to3` accepts `stdin` if the argument passed is `-`. Therefore, you can test without creating any files like so: ``` $ 2to3 - <<< "import SimpleHTTPServer" ```
Just wanted to add what worked for me: `python3 -m http.server 8000` (you can use any port number here except the ones which are currently in use)
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
In addition to Petr's answer, if you want to bind to a specific interface instead of all the interfaces you can use `-b` or `--bind` flag. ``` python -m http.server 8000 --bind 127.0.0.1 ``` The above snippet should do the trick. 8000 is the port number. 80 is used as the standard port for HTTP communications.
As everyone has mentioned [http.server](https://docs.python.org/3/library/http.server.html#module-http.server) module is equivalent to `python -m SimpleHTTPServer`. But as a warning from <https://docs.python.org/3/library/http.server.html#module-http.server> > > **Warning**: `http.server` is not recommended for production. It only implements basic security checks. > > > Usage ===== http.server can also be invoked directly using the `-m` switch of the interpreter. ``` python -m http.server ``` The above command will run a server by default on port number `8000`. You can also give the port number explicitly while running the server ``` python -m http.server 9000 ``` The above command will run an HTTP server on port 9000 instead of 8000. > > By default, server binds itself to all interfaces. The option > -b/--bind specifies a specific address to which it should bind. Both IPv4 and IPv6 addresses are supported. For example, the following > command causes the server to bind to localhost only: > > > ``` python -m http.server 8000 --bind 127.0.0.1 ``` or ``` python -m http.server 8000 -b 127.0.0.1 ``` Python 3.8 version also supports IPv6 in the bind argument. Directory Binding ----------------- By default, server uses the current directory. The option `-d/--directory` specifies a directory to which it should serve the files. For example, the following command uses a specific directory: ``` python -m http.server --directory /tmp/ ``` Directory binding is introduced in python 3.7
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
As everyone has mentioned [http.server](https://docs.python.org/3/library/http.server.html#module-http.server) module is equivalent to `python -m SimpleHTTPServer`. But as a warning from <https://docs.python.org/3/library/http.server.html#module-http.server> > > **Warning**: `http.server` is not recommended for production. It only implements basic security checks. > > > Usage ===== http.server can also be invoked directly using the `-m` switch of the interpreter. ``` python -m http.server ``` The above command will run a server by default on port number `8000`. You can also give the port number explicitly while running the server ``` python -m http.server 9000 ``` The above command will run an HTTP server on port 9000 instead of 8000. > > By default, server binds itself to all interfaces. The option > -b/--bind specifies a specific address to which it should bind. Both IPv4 and IPv6 addresses are supported. For example, the following > command causes the server to bind to localhost only: > > > ``` python -m http.server 8000 --bind 127.0.0.1 ``` or ``` python -m http.server 8000 -b 127.0.0.1 ``` Python 3.8 version also supports IPv6 in the bind argument. Directory Binding ----------------- By default, server uses the current directory. The option `-d/--directory` specifies a directory to which it should serve the files. For example, the following command uses a specific directory: ``` python -m http.server --directory /tmp/ ``` Directory binding is introduced in python 3.7
In one of my projects I run tests against Python 2 and 3. For that I wrote a small script which starts a local server independently: ``` $ python -m $(python -c 'import sys; print("http.server" if sys.version_info[:2] > (2,7) else "SimpleHTTPServer")') Serving HTTP on 0.0.0.0 port 8000 ... ``` As an alias: ``` $ alias serve="python -m $(python -c 'import sys; print("http.server" if sys.version_info[:2] > (2,7) else "SimpleHTTPServer")')" $ serve Serving HTTP on 0.0.0.0 port 8000 ... ``` Please note that I control my Python version via [conda environments](https://conda.io/docs/user-guide/tasks/manage-environments.html), because of that I can use `python` instead of `python3` for using Python 3.
7,943,751
What is the Python 3 equivalent of `python -m SimpleHTTPServer`?
2011/10/30
[ "https://Stackoverflow.com/questions/7943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845948/" ]
The equivalent is: ``` python3 -m http.server ```
As everyone has mentioned [http.server](https://docs.python.org/3/library/http.server.html#module-http.server) module is equivalent to `python -m SimpleHTTPServer`. But as a warning from <https://docs.python.org/3/library/http.server.html#module-http.server> > > **Warning**: `http.server` is not recommended for production. It only implements basic security checks. > > > Usage ===== http.server can also be invoked directly using the `-m` switch of the interpreter. ``` python -m http.server ``` The above command will run a server by default on port number `8000`. You can also give the port number explicitly while running the server ``` python -m http.server 9000 ``` The above command will run an HTTP server on port 9000 instead of 8000. > > By default, server binds itself to all interfaces. The option > -b/--bind specifies a specific address to which it should bind. Both IPv4 and IPv6 addresses are supported. For example, the following > command causes the server to bind to localhost only: > > > ``` python -m http.server 8000 --bind 127.0.0.1 ``` or ``` python -m http.server 8000 -b 127.0.0.1 ``` Python 3.8 version also supports IPv6 in the bind argument. Directory Binding ----------------- By default, server uses the current directory. The option `-d/--directory` specifies a directory to which it should serve the files. For example, the following command uses a specific directory: ``` python -m http.server --directory /tmp/ ``` Directory binding is introduced in python 3.7
7,976,733
I am relaying the output of my script to a local port in my system viz - $python script.py | nc 127.0.0.1 8033 Let's assume that my computer has ip 10.0.0.3 Now, Is it possible that some other computer (say IP 10.0.0.4) can listen to this port via nc or anything else. Please suggest.
2011/11/02
[ "https://Stackoverflow.com/questions/7976733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/270216/" ]
Not directly. The program listening on the port must be on the local machine (meaning 10.0.0.3 in your example). You could arrange for a program on the local machine to listen and send the information to another machine, but the socket connection can only be established on the host.
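As a sketch of that arrangement — a tiny Python relay standing in for the program on the local machine (10.0.0.3) that accepts the `nc` connection and forwards the bytes on to the other machine. All addresses and ports below are loopback stand-ins so the example is self-contained; in practice the "remote" listener would live on 10.0.0.4:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes, then close dst."""
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(chunk)
    dst.close()

# Stand-in for the listener on the other machine (10.0.0.4 in the question).
remote_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
remote_srv.bind(("127.0.0.1", 0))
remote_srv.listen(1)
remote_port = remote_srv.getsockname()[1]

received = bytearray()

def collect():
    conn, _ = remote_srv.accept()
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        received.extend(chunk)
    conn.close()

collector = threading.Thread(target=collect)
collector.start()

# The relay on the local machine (10.0.0.3): accepts the nc connection
# and forwards everything it receives to the remote listener.
relay_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
relay_srv.bind(("127.0.0.1", 0))
relay_srv.listen(1)
relay_port = relay_srv.getsockname()[1]

def serve_once():
    conn, _ = relay_srv.accept()
    remote = socket.create_connection(("127.0.0.1", remote_port))
    pipe(conn, remote)
    conn.close()

relay = threading.Thread(target=serve_once)
relay.start()

# Stand-in for `python script.py | nc 127.0.0.1 <port>`.
client = socket.create_connection(("127.0.0.1", relay_port))
client.sendall(b"hello from script.py\n")
client.close()

relay.join()
collector.join()
print(bytes(received))
```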
I use Perl to do exactly this - you could use python, of course. In Perl, I use the `IO::Socket::INET` library. I instantiate a new instance of `INET` with the `IP`, `port` and `Protocol`, and a time out for the `comms`. I then use the `recv` method to read data from that socket. It's not as simple as nc; I wish NC did this - it would be a lot easier :) Here's an outline of the actual Perl --- ``` my $data; my $socket; $socket=IO::Socket::INET->new( PeerAddr => "10.0.0.3", PeerPort => 8033, Proto => "tcp", Timeout => 1 ) or die "Unable to open port"; $socket->recv($data,bytes_to_read); # Put your chosen read size instead of bytes_to_read print $data; ``` ---
21,535,061
Is it possible to create a Python program that can interact with Google's Translate? I'm thinking of a way that firstly opens a .txt file, then reads the first line, then interacts with Google Translate and translates the word from a specific language to a specific language, then logs it into a different txt file. Main question: Is it possible to make Python 3.3 interact with Google Translate? Please tell me if I didn't explain myself enough. Thank you, Tharix
2014/02/03
[ "https://Stackoverflow.com/questions/21535061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2802035/" ]
Oh, the mind-bending horror of weak memory ordering... The first snippet is your basic atomic read-modify-write - if someone else touches whatever address `x1` points to, the store-exclusive will fail and it will try again until it succeeds. So far so good. However, this only applies to the address (or more rightly region) covered by the exclusive monitor, so whilst it's good for *atomicity*, it's ineffective for *synchronisation* of anything other than that value. Consider a case where CPU1 is waiting for CPU0 to write some data to a buffer. CPU1 sits there waiting on some kind of synchronisation object (let's say a semaphore), waiting for CPU0 to update it to signal that new data is ready. 1. CPU0 writes to the data address. 2. CPU0 increments the semaphore (atomically, as you do) which happens to be elsewhere in memory. 3. ??? 4. CPU1 sees the new semaphore value. 5. CPU1 reads *some* data, which may or may not be the old data, the new data, or some mix of the two. Now, what happened at step 3? Maybe it all occurred in order. Quite possibly, the hardware decided that since there was no address dependency it would let the store to the semaphore go ahead of the store to the data address. Maybe the semaphore store hit in the cache whereas the data didn't. Maybe it just did so because of complicated reasons only those hardware guys understand. Either way it's perfectly possible for CPU1 to see the semaphore update *before* the new data has hit memory, thus read back invalid data. To fix this, CPU0 must have a barrier between steps 1 and 2, to ensure the data has definitely been written *before* the semaphore is written. Having the atomic write *be* a barrier is a nice simple way to do this. However since barriers are pretty performance-degrading you want the lightweight no-barrier version as well for situations where you don't need this kind of full synchronisation. Now, the *even less* intuitive part is that CPU1 could also reorder its loads. Again since there is no address dependency, it would be free to speculate the data load before the semaphore load irrespective of CPU0's barrier. Thus CPU1 also needs its own barrier between steps 4 and 5. For the more authoritative, but pretty heavy going, version have a read of ARM's [Barrier Litmus Tests and Cookbook](http://infocenter.arm.com/help/topic/com.arm.doc.genc007826/index.html). Be warned, this stuff can be *confusing* ;) As an aside, in this case the architectural semantics of acquire/release complicate things further. Since they are only one-way barriers, whilst `OSAtomicAdd32Barrier` adds up to a full barrier relative to code before and after it, it doesn't actually guarantee any ordering relative to the atomic operation itself - see [this discussion from Linux](http://lists.infradead.org/pipermail/linux-arm-kernel/2014-February/229588.html) for more explanation. Of course, that's from the theoretical point of view of the architecture; in reality it's not inconceivable that the A7 hardware has taken the 'simple' option of wiring up `LDAXR` to just do `DMB+LDXR`, and so on, meaning they can get away with this since they're at liberty to code to their own implementation, rather than the specification.
I would guess that this is simply a way of reproducing existing architecture-independent semantics for this operation. With the `ldaxr`/`stlxr` pair, the above sequence will assure correct ordering if the AtomicAdd32 is used as a synchronization mechanism (mutex/semaphore) - regardless of whether the resulting higher-level operation is an acquire or release. So - this is not about enforcing consistency of the atomic add, but about enforcing ordering between acquiring/releasing a mutex and any operations performed on the resource protected by that mutex. It is less efficient than the `ldxar`/`stxr` or `ldxr`/`stlxr` you would use in a normal native synchronization mechanism, but if you have existing platform-independent code expecting an atomic add with those semantics, this is probably the best way to implement it.
21,535,061
Is it possible to create a Python program that can interact with Google's Translate? I'm thinking of a way that firstly opens a .txt file, then reads the first line, then interacts with Google Translate and translates the word from a specific language to a specific language, then logs it into a different txt file. Main question: Is it possible to make Python 3.3 interact with Google Translate? Please tell me if I didn't explain myself enough. Thank you, Tharix
2014/02/03
[ "https://Stackoverflow.com/questions/21535061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2802035/" ]
`OSAtomicAdd32Barrier()` exists for people that are using `OSAtomicAdd()` for something beyond just atomic increment. Specifically, they are implementing their own multi-processing synchronization primitives based on `OSAtomicAdd()`. For example, creating their own mutex library. `OSAtomicAdd32Barrier()` uses heavy barrier instructions to enforce memory ordering on both side of the atomic operation. This is not desirable in normal usage. To summarize: 1) If you just want to increment an integer in a thread-safe way, use `OSAtomicAdd32()` 2) If you are stuck with a bunch of old code that foolishly assumes `OSAtomicAdd32()` can be used as an interprocessor memory ordering and speculation barrier, replace it with `OSAtomicAdd32Barrier()`
I would guess that this is simply a way of reproducing existing architecture-independent semantics for this operation. With the `ldaxr`/`stlxr` pair, the above sequence will assure correct ordering if the AtomicAdd32 is used as a synchronization mechanism (mutex/semaphore) - regardless of whether the resulting higher-level operation is an acquire or release. So - this is not about enforcing consistency of the atomic add, but about enforcing ordering between acquiring/releasing a mutex and any operations performed on the resource protected by that mutex. It is less efficient than the `ldxar`/`stxr` or `ldxr`/`stlxr` you would use in a normal native synchronization mechanism, but if you have existing platform-independent code expecting an atomic add with those semantics, this is probably the best way to implement it.
66,357,772
django+gunicorn+nginx gives 404 while serving static files I am trying to deploy a Django project using nginx + gunicorn + postgresql. All the configuration is done; my admin panel static files are served, but other static files return a 404 error. (I ran python manage.py collectstatic.) ``` my error.log nginx :: "/blogpy/static/home/test.css" failed (2: No such file or directory)" Structure: blogpy -blogpy -config -nginx -nginx.conf -docker-compose.yml -Dockerfile -home -static -home -test.css(not working) - requirements -static -templates -.env -docker-compose.yml -Dockerfile setting.py: DEBUG = False ALLOWED_HOSTS = ['*'] STATIC_URL = '/static/' STATIC_ROOT = BASE_DIR / 'static' nginx configuration: ---- nginx.conf: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; upstream blogpy{ server blogpy:8000; } server { listen 80; server_name localhost; charset utf-8; location / { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $server_name; proxy_pass http://blogpy; } location /static/ { alias /blogpy/static/; } } } ```
2021/02/24
[ "https://Stackoverflow.com/questions/66357772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14751614/" ]
Try this in nginx.conf: ``` location /static/ { autoindex off; alias /home/ubuntu/blogpy/static/; # add the full path of the static file directory } ```
To get the files in /blogpy/home/static/ copied into /blogpy/static/ by the collectstatic command, you need to specify the STATICFILES\_DIRS setting <https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-STATICFILES_DIRS> ``` STATICFILES_DIRS = [ BASE_DIR / 'home' / 'static', ] ```
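As a quick sketch of the `pathlib` joining that `BASE_DIR / 'home' / 'static'` relies on — the paths here are illustrative, and `PurePosixPath` is used only so the example prints the same on every OS (a real Django settings.py uses `Path`):

```python
from pathlib import PurePosixPath

# In a modern Django settings.py, BASE_DIR is a pathlib path and the `/`
# operator joins path segments.
BASE_DIR = PurePosixPath("/blogpy")

STATICFILES_DIRS = [BASE_DIR / "home" / "static"]
joined = str(STATICFILES_DIRS[0])
print(joined)  # /blogpy/home/static
```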
66,357,772
django+gunicorn+nginx gives 404 while serving static files I am trying to deploy a Django project using nginx + gunicorn + postgresql. All the configuration is done; my admin panel static files are served, but other static files return a 404 error. (I ran python manage.py collectstatic.) ``` my error.log nginx :: "/blogpy/static/home/test.css" failed (2: No such file or directory)" Structure: blogpy -blogpy -config -nginx -nginx.conf -docker-compose.yml -Dockerfile -home -static -home -test.css(not working) - requirements -static -templates -.env -docker-compose.yml -Dockerfile setting.py: DEBUG = False ALLOWED_HOSTS = ['*'] STATIC_URL = '/static/' STATIC_ROOT = BASE_DIR / 'static' nginx configuration: ---- nginx.conf: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; upstream blogpy{ server blogpy:8000; } server { listen 80; server_name localhost; charset utf-8; location / { proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $server_name; proxy_pass http://blogpy; } location /static/ { alias /blogpy/static/; } } } ```
2021/02/24
[ "https://Stackoverflow.com/questions/66357772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14751614/" ]
Try this in nginx.conf: ``` location /static/ { autoindex off; alias /home/ubuntu/blogpy/static/; # add the full path of the static file directory } ```
It is recommended to serve static files directly from `nginx`. Add the following to your nginx site config. ``` location /static/ { alias /path/to/static/directory/; } ```
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
Tensorflow 2.0 ============== ``` DESIRED_ACCURACY = 0.979 class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epochs, logs={}) : if(logs.get('acc') is not None and logs.get('acc') >= DESIRED_ACCURACY) : print('\nReached 99.9% accuracy so cancelling training!') self.model.stop_training = True callbacks = myCallback() ```
I had the same problem and instead of using 'acc', I changed it to 'accuracy' everywhere. So it seems that maybe it is better to try changing 'acc' to 'accuracy'.
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
We are in the same boat, from the Coursera course. So, this is my code: ``` # GRADED FUNCTION: train_mnist def train_mnist(): # Please write your code only where you are indicated. # please do not remove # model fitting inline comments. # YOUR CODE SHOULD START HERE class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if (logs.get('acc') > 0.99): print("\n Reached 99% Accuracy!") self.model.stop_training = True # YOUR CODE SHOULD END HERE mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path) # YOUR CODE SHOULD START HERE x_train = x_train/255.0 x_test = x_test/255.0 callbacks = myCallback() # YOUR CODE SHOULD END HERE model = tf.keras.models.Sequential([ # YOUR CODE SHOULD START HERE tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, 'relu'), tf.keras.layers.Dense(10, 'softmax') # YOUR CODE SHOULD END HERE ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # model fitting history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks]# YOUR CODE SHOULD START HERE # YOUR CODE SHOULD END HERE ) # model fitting return history.epoch, history.history['acc'][-1] ``` Hope it helps you! Keep learning.
I had the same problem and instead of using 'acc', I changed it to 'accuracy' everywhere. So it seems that maybe it is better to try changing 'acc' to 'accuracy'.
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
Inside your callback, try this: ``` class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): print("---",logs,"---") ''' if(logs.get('acc')>=0.99): print("Reached 99% accuracy so cancelling training!") ''' ``` It gave me this: **`--- {'loss': 0.18487292938232422, 'acc': 0.94411665} ---`** I had `acc`, so I used that; if I had `accuracy`, I would have used `accuracy`. So log it, see what you have, then use that. TF goes through major changes all the time, so it's ok to play safe, very safe.
We are in the same boat, from the Coursera course. So, this is my code: ``` # GRADED FUNCTION: train_mnist def train_mnist(): # Please write your code only where you are indicated. # please do not remove # model fitting inline comments. # YOUR CODE SHOULD START HERE class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if (logs.get('acc') > 0.99): print("\n Reached 99% Accuracy!") self.model.stop_training = True # YOUR CODE SHOULD END HERE mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path) # YOUR CODE SHOULD START HERE x_train = x_train/255.0 x_test = x_test/255.0 callbacks = myCallback() # YOUR CODE SHOULD END HERE model = tf.keras.models.Sequential([ # YOUR CODE SHOULD START HERE tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, 'relu'), tf.keras.layers.Dense(10, 'softmax') # YOUR CODE SHOULD END HERE ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # model fitting history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks]# YOUR CODE SHOULD START HERE # YOUR CODE SHOULD END HERE ) # model fitting return history.epoch, history.history['acc'][-1] ``` Hope it helps you! Keep learning.
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
Do not use `accuracy`; use `acc` instead in this particular case, since `logs.get()` does not work with `accuracy` here, though normally both work for common callback blocks in `TF`. If someone is looking for this issue while doing the Coursera course, this is the simplest way to get around this problem.
I had the same problem and instead of using 'acc', I changed it to 'accuracy' everywhere. So it seems that maybe it is better to try changing 'acc' to 'accuracy'.
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
Try using try-except ``` class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs = {}): try: if(logs.get('acc')>0.95): print("\nReached") self.model.stop_training = True except: if(logs.get('accuracy')>0.95): print("Reached!!!") self.model.stop_training = True ```
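A key-agnostic sketch of the same fallback, without nested try/except — `get_accuracy` is a hypothetical helper, and the key names are the two that TF has used across versions:

```python
def get_accuracy(logs):
    """Return the accuracy from a Keras-style logs dict, trying the key
    names TF has used across versions; None if neither is present."""
    for key in ("acc", "accuracy"):
        value = logs.get(key)
        if value is not None:
            return value
    return None

print(get_accuracy({"acc": 0.96}))       # 0.96
print(get_accuracy({"accuracy": 0.91}))  # 0.91
print(get_accuracy({"loss": 0.1}))       # None
```

Inside the callback you would then write `acc = get_accuracy(logs)` and guard with `if acc is not None and acc > 0.95:`.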
I had the same problem and instead of using 'acc', I changed it to 'accuracy' everywhere. So it seems that maybe it is better to try changing 'acc' to 'accuracy'.
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
it seems that your error is similar to [Exception with Callback in Keras - Tensorflow 2.0 - Python](https://stackoverflow.com/questions/56423505/exception-with-callback-in-keras-tensorflow-2-0-python) try replacing `logs.get('acc')` with `logs.get('accuracy')`
We are in the same boat, from the Coursera course. So, this is my code: ``` # GRADED FUNCTION: train_mnist def train_mnist(): # Please write your code only where you are indicated. # please do not remove # model fitting inline comments. # YOUR CODE SHOULD START HERE class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if (logs.get('acc') > 0.99): print("\n Reached 99% Accuracy!") self.model.stop_training = True # YOUR CODE SHOULD END HERE mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path) # YOUR CODE SHOULD START HERE x_train = x_train/255.0 x_test = x_test/255.0 callbacks = myCallback() # YOUR CODE SHOULD END HERE model = tf.keras.models.Sequential([ # YOUR CODE SHOULD START HERE tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, 'relu'), tf.keras.layers.Dense(10, 'softmax') # YOUR CODE SHOULD END HERE ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # model fitting history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks]# YOUR CODE SHOULD START HERE # YOUR CODE SHOULD END HERE ) # model fitting return history.epoch, history.history['acc'][-1] ``` Hope it helps you! Keep learning.
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
Tensorflow 2.0 ============== ``` DESIRED_ACCURACY = 0.979 class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epochs, logs={}) : if(logs.get('acc') is not None and logs.get('acc') >= DESIRED_ACCURACY) : print('\nReached 99.9% accuracy so cancelling training!') self.model.stop_training = True callbacks = myCallback() ```
Use 'acc' instead of 'accuracy' and you don't need to change anything else.
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
It works in Python2 because in Python2 you can compare `None` with `float` but this is not possible in Python3. This line ``` logs.get('acc') ``` returns `None` and there is your problem. Quick solution would be to replace the condition with ``` if logs.get('acc') is not None and logs.get('acc') > 0.95: ``` If `logs.get('acc')` is `None` then the above condition will be short-circuited and the second part, `logs.get('acc') > 0.95`, will not be evaluated, therefore it will not cause the mentioned error.
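A minimal, TensorFlow-free sketch of that difference — the `logs` dict here just simulates a Keras logs dict:

```python
logs = {}  # simulates a Keras logs dict with no 'acc' key yet

# In Python 3, None cannot be ordered against a float: this raises TypeError.
try:
    _ = logs.get('acc') > 0.95
    raised = False
except TypeError:
    raised = True

# The guarded version short-circuits before the comparison is attempted.
safe_before = logs.get('acc') is not None and logs.get('acc') > 0.95

logs['acc'] = 0.97  # once the key exists, the same guard evaluates normally
safe_after = logs.get('acc') is not None and logs.get('acc') > 0.95

print(raised, safe_before, safe_after)  # True False True
```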
I had the same problem and instead of using 'acc', I changed it to 'accuracy' everywhere. So it seems that maybe it is better to try changing 'acc' to 'accuracy'.
59,802,608
I have this code and it raises an error in Python 3; such a comparison works in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
We are in the same boat, from the Coursera course. So, this is my code: ``` # GRADED FUNCTION: train_mnist def train_mnist(): # Please write your code only where you are indicated. # please do not remove # model fitting inline comments. # YOUR CODE SHOULD START HERE class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if (logs.get('acc') > 0.99): print("\n Reached 99% Accuracy!") self.model.stop_training = True # YOUR CODE SHOULD END HERE mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path) # YOUR CODE SHOULD START HERE x_train = x_train/255.0 x_test = x_test/255.0 callbacks = myCallback() # YOUR CODE SHOULD END HERE model = tf.keras.models.Sequential([ # YOUR CODE SHOULD START HERE tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, 'relu'), tf.keras.layers.Dense(10, 'softmax') # YOUR CODE SHOULD END HERE ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # model fitting history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks]# YOUR CODE SHOULD START HERE # YOUR CODE SHOULD END HERE ) # model fitting return history.epoch, history.history['acc'][-1] ``` Hope it helps you! Keep learning.
Do not use `accuracy`; use `acc` instead in this particular case, since `logs.get()` does not work with `accuracy` here, though normally both work for common callback blocks in `TF`. If someone is looking for this issue while doing the Coursera course, this is the simplest way to get around this problem.
59,802,608
I have this code and it raises an error in Python 3, while such a comparison can work in Python 2. How can I change it? ``` import tensorflow as tf def train_set(): class MyCallBacks(tf.keras.callbacks.Callback): def on_epoch_end(self,epoch,logs={}): if(logs.get('acc')>0.95): print('the training will stop !') self.model.stop_training=True callbacks=MyCallBacks() mnist_dataset=tf.keras.datasets.mnist (x_train,y_train),(x_test,y_test)=mnist_dataset.load_data() x_train=x_train/255.0 x_test=x_test/255.0 classifier=tf.keras.Sequential([ tf.keras.layers.Flatten(input_shape=(28,28)), tf.keras.layers.Dense(512,activation=tf.nn.relu), tf.keras.layers.Dense(10,activation=tf.nn.softmax) ]) classifier.compile( optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'] ) history=classifier.fit(x_train,y_train,epochs=20,callbacks=[callbacks]) return history.epoch,history.history['acc'][-1] train_set() ```
2020/01/18
[ "https://Stackoverflow.com/questions/59802608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11214617/" ]
It works in Python2 because in Python2 you can compare `None` with `float` but this is not possible in Python3. This line ``` logs.get('acc') ``` returns `None` and there is your problem. Quick solution would be to replace the condition with ``` if logs.get('acc') is not None and logs.get('acc') > 0.95: ``` If `logs.get('acc')` is `None` then the above condition will be short-circuited and the second part, `logs.get('acc') > 0.95`, will not be evaluated, therefore it will not cause the mentioned error.
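The failure mode and the fix can be reproduced in isolation; a minimal sketch without TensorFlow (the `should_stop` helper is just an illustration, not part of Keras):

```python
# In Python 3, an ordering comparison between None and a number raises TypeError,
# so the None case must be ruled out before comparing.
def should_stop(acc, threshold=0.95):
    """Return True only when acc is present and above the threshold."""
    # The first clause short-circuits, so the > is never evaluated for None.
    return acc is not None and acc > threshold

print(should_stop(None))   # False (no TypeError, thanks to short-circuiting)
print(should_stop(0.97))   # True
print(should_stop(0.90))   # False
```

The same guard drops straight into the callback: `if logs.get('acc') is not None and logs.get('acc') > 0.95:`.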
Do not use `accuracy`; use `acc` instead in this particular case, since `logs.get()` does not work with `accuracy` here, although normally both work for common callback blocks in `TF`. If someone runs into this issue while doing the Coursera course, this is the simplest way to get around the problem.
32,991,119
I am writing C extensions for python. I am just experimenting for the time being and I have written a hello world extension that looks like this : ``` #include <Python2.7/Python.h> static PyObject* helloworld(PyObject* self) { return Py_BuildValue("s", "Hello, Python extensions!!"); } static char helloworld_docs[] = "helloworld( ): Any message you want to put here!!\n"; static PyMethodDef helloworld_funcs[] = { {"helloworld", (PyCFunction)helloworld, METH_NOARGS, helloworld_docs}, {NULL,NULL,0,NULL} }; void inithelloworld(void) { Py_InitModule3("helloworld", helloworld_funcs,"Extension module example!"); } ``` the code works perfectly fine, after installing it from a setup.py file I wrote, and installing it from command line ``` python setup.py install ``` What I want is the following : I want to use the C file as a python extension module, without installing it, that is I want to use it as just another python file in my project, and not a file that I need to install before my python modules get to use its functionality. Is there some way of doing this ?
2015/10/07
[ "https://Stackoverflow.com/questions/32991119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5414031/" ]
You can simply compile the extension without installing (usually something like `python setup.py build`). Then you have to make sure the interpreter can find the compiled module (for example by copying it next to a script that imports it, or setting `PYTHONPATH`).
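A sketch of that approach (the `build/lib.*` directory pattern is an assumption; its exact name varies by platform and Python version, and the commented-out import assumes the `helloworld` module from the question):

```python
import glob
import os
import sys

# After "python setup.py build", the compiled module lands in a directory like
# build/lib.linux-x86_64-2.7 (the exact suffix is platform- and version-specific).
build_dirs = glob.glob(os.path.join("build", "lib.*"))
sys.path.extend(build_dirs)

# With the build directory on sys.path, a normal import finds the extension:
# import helloworld
# print(helloworld.helloworld())
```

Setting the `PYTHONPATH` environment variable to the build directory achieves the same thing without touching the script.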
You can create your "own interpreter" by not extending python, but embedding it into your application. In that way, your objects will be always available for the users who are running your program. This is a pretty common thing to do in certain cases, for example look at the Blender project where all the `bpy`, `bmesh` and `bge` modules are already included. The downside is, your users can't use the `python` command directly, they have to use your `hello_world_python` instead. (But of course you can provide your extension as a module as well.) And that also means, you have to compile and distribute your application for all platforms you want to support -- in case you want to distribute it as a binary, to make your users lives a bit easier. For further information on embedding python into your program, read the propriate sections of the documentation: [Embedding Python in Another Application](https://docs.python.org/2/extending/embedding.html) > > ***Personal suggestion:*** Use Python 3.5 whenever you can, and stop supporting the old 2.x versions. For more information, read this article: [Should I use Python 2 or Python 3 for my development activity?](https://wiki.python.org/moin/Python2orPython3) > > >
31,962,569
I am working on MQTT and using the Python paho-mqtt library <https://pypi.python.org/pypi/paho-mqtt>. I am unable to understand how I can publish a message to a specific client or a list of clients. I'd appreciate your help.
2015/08/12
[ "https://Stackoverflow.com/questions/31962569", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1073780/" ]
This isn't directly possible with strict MQTT, although some brokers may offer that functionality, or you can construct your application so that the topic design works to do what you need.
Although I do agree that in some cases it would be useful to send a message to a particular client (or list of clients) that's simply not how the publish/subscribe messaging paradigm works. [Read more on the publish-subscribe pattern on Wikipedia.](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) If all your system needs to do is send messages to unique clients, then I would perhaps suggest thinking of a different architecture for the system you are designing. That being said, you can leverage off pub/sub to achieve what you want using a clever topic design architecture. For example, let's assume all clients are part of a group (list), you could think of the following topic design: Unique per client: *P2P/< client-name >* List/Group subscription: *LIST/< list-name >* For example, *P2P/user12345* and *LIST/QA* where only user12345 subscribes to *P2P/user12345* but all users of the QA group subscribe to *LIST/QA*. It would be the client's responsibility to ensure that it is subscribed to its own topic(s) (or if your broker allows it, you could also add the topics administratively to non-clean clients). With this design, a publisher would be able to send a message to a specific user or all members of a defined group (list).
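A minimal sketch of that topic scheme (the `P2P/` and `LIST/` prefixes are just the convention proposed above, nothing defined by MQTT itself; with paho-mqtt you would pass the resulting strings to `client.subscribe()` and `client.publish()`):

```python
def p2p_topic(client_name):
    """Topic that only one specific client subscribes to."""
    return "P2P/{}".format(client_name)

def list_topic(list_name):
    """Topic that every member of a group/list subscribes to."""
    return "LIST/{}".format(list_name)

def subscriptions_for(client_name, groups):
    """Each client subscribes to its own P2P topic plus its groups' topics."""
    return [p2p_topic(client_name)] + [list_topic(g) for g in groups]

print(subscriptions_for("user12345", ["QA"]))  # ['P2P/user12345', 'LIST/QA']
```

A publisher then targets one user with `client.publish(p2p_topic("user12345"), payload)` or a whole group with `client.publish(list_topic("QA"), payload)`.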
61,581,612
I am in the process of converting some Cython code to Python, and it went well until I came to the bitwise operations. Here is a snippet of the code: ``` in_buf_word = b'\xff\xff\xff\xff\x00' bits = 8 in_buf_word >>= bits ``` If I run this, it spits out this error: ``` TypeError: unsupported operand type(s) for >>=: 'str' and 'int' ``` How would I fix this?
2020/05/03
[ "https://Stackoverflow.com/questions/61581612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13462790/" ]
``` import bitstring in_buf_word = b'\xff\xff\xff\xff\x00' bits = 8 in_buf_word = bitstring.BitArray(in_buf_word) >> bits ``` If you don't have it, install it from your terminal: ``` pip3 install bitstring --> python 3 pip install bitstring --> python 2 ``` To convert it back into bytes, use the `tobytes()` method: ``` print(in_buf_word.tobytes()) ```
Shifting to the right by 8 bits just means cutting off the rightmost byte. Since you already have a `bytes` object, this can be done more easily: ``` in_buf_word = in_buf_word[:-1] ```
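A quick check of that equivalence (Python 3, since `int.from_bytes`/`to_bytes` are used):

```python
in_buf_word = b'\xff\xff\xff\xff\x00'

# Interpret the bytes as a big-endian integer and shift right by one byte.
shifted = int.from_bytes(in_buf_word, byteorder='big') >> 8
via_int = shifted.to_bytes(len(in_buf_word) - 1, byteorder='big')

# Dropping the last byte gives the same result without any arithmetic.
via_slice = in_buf_word[:-1]

print(via_int == via_slice)  # True
print(via_slice)             # b'\xff\xff\xff\xff'
```

This only holds when the shift is a whole number of bytes (`bits % 8 == 0`); for other shift amounts the integer round trip is needed.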
61,581,612
I am in the process of converting some Cython code to Python, and it went well until I came to the bitwise operations. Here is a snippet of the code: ``` in_buf_word = b'\xff\xff\xff\xff\x00' bits = 8 in_buf_word >>= bits ``` If I run this, it spits out this error: ``` TypeError: unsupported operand type(s) for >>=: 'str' and 'int' ``` How would I fix this?
2020/05/03
[ "https://Stackoverflow.com/questions/61581612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13462790/" ]
``` import bitstring in_buf_word = b'\xff\xff\xff\xff\x00' bits = 8 in_buf_word = bitstring.BitArray(in_buf_word) >> bits ``` If you don't have it, install it from your terminal: ``` pip3 install bitstring --> python 3 pip install bitstring --> python 2 ``` To convert it back into bytes, use the `tobytes()` method: ``` print(in_buf_word.tobytes()) ```
You can do it by converting the bytes into an integer, shifting that, and then converting the result back into a byte string. ``` in_buf_word = b'\xff\xff\xff\xff\x00' bits = 8 print(in_buf_word) # -> b'\xff\xff\xff\xff\x00' temp = int.from_bytes(in_buf_word, byteorder='big') >> bits in_buf_word = temp.to_bytes(len(in_buf_word), byteorder='big') print(in_buf_word) # -> b'\x00\xff\xff\xff\xff' ```
10,331,413
I am working on Excel parsing using Python. Till now I have worked with English text, but when I encounter regional languages I get an error. Example: ``` IR05 měsíční (monthly) ``` It gives me the error: ``` UnicodeEncodeError: 'ascii' codec can't encode character u'\u011b' in position 6: ordinal not in range(128) ``` How can I parse it and write it back in the same language in the output files? My code: ``` for j in val: print 'j is - ', j str(j).replace("'", "") ``` I am getting the error at the replace statement.
2012/04/26
[ "https://Stackoverflow.com/questions/10331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778942/" ]
``` >>> "IR05 měsíční (monthly)".decode('utf8') u'IR05 m\u011bs\xed\u010dn\xed (monthly)' ``` which is a unicode version of your original string (which was encoded in utf8). Now you can compare it to your other string (from the file), which you decode (from utf8 or latin2 or a different format) and you can compare them. ``` >>> 'IR05 m\xecs\xed\xe8n\xed (monthly)'.decode('latin2') u'IR05 m\u011bs\xed\u010dn\xed (monthly)' ``` now you can compare the two unicode strings: ``` >>> s_utf8 = "IR05 měsíční (monthly)" >>> s_latin2 = 'IR05 m\xecs\xed\xe8n\xed (monthly)' >>> s_utf8.decode('utf8') == s_latin2.decode('latin2') True ``` To write the string into a file, `encode` it again: ``` >>> s = s_utf8.decode('utf8') >>> filehandle.write(s.encode('utf8')) ```
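For anyone on Python 3: `str` is already unicode there, so the same round trip only needs explicit codecs at the byte boundary; a sketch assuming the data is UTF-8:

```python
s = "IR05 měsíční (monthly)"

# Encoding to bytes and decoding back is lossless as long as the codec matches.
encoded = s.encode('utf8')
decoded = encoded.decode('utf8')
print(decoded == s)  # True

# String methods like replace work directly on the unicode string; no str() call
# (which triggered the ascii-codec error in Python 2) is needed.
print("IR05 'měsíční'".replace("'", ""))  # IR05 měsíční
```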
The error may be caused by `str(j)`; try this: ``` for j in val: print 'j is - ', j j = j.replace("'", "") ```
16,269,396
I know I've seen clean examples on the proper way to do this, and could even swear it was in one of the standard Python libraries. I can't seem to find it now. Could you please point me in the right direction? Iterator for a list of lists that only returns arbitrary values from the sub-list. The idea is to have this in a list-comprehension. ``` alist = [ [1,2,3,4,5], [2,4,6,8,10], [3,6,9,12,15] ] only_some_values = [list([x[2]]) for x in alist] [[3],[6],[9]] ``` But I am quite sure there is a function that does this same thing, but in iterator fashion, thus getting rid of my direct index access on the left side and chaining an additional iterator on the right side of the `for`. Perhaps it was a form of embedded list-comprehension, but I really thought there was a cleaner way (using imap maybe?).
2013/04/29
[ "https://Stackoverflow.com/questions/16269396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2097818/" ]
Perhaps you were thinking of `itemgetter`? ``` >>> from operator import itemgetter >>> map(itemgetter(2), alist) [3, 6, 9] ``` But that doesn't leave the elements in sublists ``` only_some_values = [[x[2]] for x in alist] ``` Gives your desired output
Here is what I had in mind: ``` from operator import itemgetter alist = [ [1,2,3,4,5], [2,4,6,8,10], [3,6,9,12,15] ] [list(x) for x in zip(map(itemgetter(2),alist), map(itemgetter(0),alist)) ] [[3,1], [6,2], [9,3]] ``` The idea is that you keep the left side of the comprehension clean, probably replacing `list(x)` with another important call, and so it is easy to read where the first-level list elements are coming from. I'm still not sure where I got this from, but it's definitely not one of my original ideas.
48,252,967
I'm currently building a system that scrapes data from Foursquare. Right now I have scraped the reviews from the website using Python and Beautiful Soup, and I have a JSON file like the one below: ``` {"review": "From sunset too variety food u cant liked it.."}{"review": "Byk motor laju2"}{"review": "Good place to chill"}{"review": "If you wan to play bubble and take photo, here is the best place"}{"review": "Scenic view for weekend getaway... coconut shake not taste as original klebang coconut shake..."}{"review": "Getting dirtier"}{"review": "Weekend getaway!"}{"review": "Good for casual walk & watching sunset with loved ones since my last visit macam2 ade kat sini very packed during public holidays"}{"review": "Afternoon time quite dry..beach is normal. Maybe evening/night might be better. The coconut shake they add vanilla ice cream,hmmm"}{"review": "Pantai awesome beb"}{"review": "Nice place for picnic"}{"review": "Cannot mandi here. Good place for recreation.. Calm place for weekdays! Haha"}{"review": "Very bz place. Need to go there early if you want to lepak. If not, no parking for you"}{"review": "So many good attraction here, worth a visit"}{"review": "Beautiful place for sunset"}{"review": "New beach! Like all beaches, awesome view & windy. Some stretch got many small crabs."}{"review": "There is bustel \"hotel in a bus\" can get coconut shake or fried seafood in the evening at 5pm. Bustel rate is from RM80. Bus cafe, bus toilet... Total bus transformation"}{"review": "Too crowded la"}{"review": "Muzium kapal selam closed since 1/3 until further notice..\ud83d\ude29"}{"review": "If you are looking for public toilets, look for a red bus. An old bus was modified and transformed to operate as toilets. Cool."}{"review": "Most of the shops closed after 12 midnight..helloo,this place should be the place for the late nighters..late night supposed to be the peak hour for business..live band bar maybe?? :-P"} ``` My question is: how do I insert the data into a database right away?
Can MySQL be used, or should I go with PyMongo instead?
2018/01/14
[ "https://Stackoverflow.com/questions/48252967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9216577/" ]
It depends on your usages. Basically, MongoDB is suitable for JSON document so, you will be able to insert your Python object "directly". If you want/need to use MySQL, you will probably need to perform some transformations before inserting. Check this post for more information: [Inserting JSON into MySQL using Python](https://stackoverflow.com/questions/4251124/inserting-json-into-mysql-using-python)
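To illustrate the kind of transformation involved, here is a sketch using the standard-library `sqlite3` module as a stand-in for MySQL (the DB-API is very similar; with MySQL you would use a driver such as `mysqlclient` and `%s` placeholders instead of `?`):

```python
import sqlite3

# Scraped records as parsed from the JSON output.
reviews = [
    {"review": "Good place to chill"},
    {"review": "Nice place for picnic"},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reviews (id INTEGER PRIMARY KEY, review TEXT)")

# Flatten the dicts into tuples before inserting.
conn.executemany(
    "INSERT INTO reviews (review) VALUES (?)",
    [(r["review"],) for r in reviews],
)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM reviews").fetchone()[0])  # 2
```

With MongoDB the flattening step disappears, since the dicts can be passed to `insert_many` as-is.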
You can convert your json into a string (json.dumps()) and store in a character field. Or, Django has support for JSONField when using Postgres ([docs](https://docs.djangoproject.com/en/2.0/ref/contrib/postgres/fields/#jsonfield)), this has some additional features like querying inside the json
44,060,080
The subject of the study was taken from [Text processing and detection from a specific dictionary in python](https://stackoverflow.com/questions/43988958/text-processing-and-detection-from-a-specific-dictionary-in-python/43989724#43989724) topic. Perhaps i misunderstood the OP's problem but i have tried to improve the codes. So, perhaps my question can be a bit different. Before explaining what i wanted to do, let me share the codes with you: ``` dict_1={"Liquid Biopsy":"Blood for analysis","cfDNA":"Blood for analysis"} list_1=[u'Liquid', u'biopsy',u'based', u'on', u'circulating', u'cell-free', u'DNA', u'(cfDNA)', u'analysis', u'are', u'described', u'as', u'surrogate', u'samples', u'for', u'molecular', u'analysis.'] for i in dict_1: if i.lower() in " ".join(list_1).lower(): print("Key: {}\nValue: {}\n".format(i,dict_1[i])) ``` These codes can catch the dictionary keys from a plain text which was written in `list_1`. However, when i was studying with this codes, i wondered what if some dictionary keys repeat in `list_1`. Then i wrote same keys two times in this `list_1`. And the above codes didn't recognize the repeated ones, the program gave the same result as below. ``` Key: cfDNA Value: Blood for analysis Key: Liquid Biopsy Value: Blood for analysis Process finished with exit code 0 ``` Then i tried to change my method and wrote a different code which is given below: ``` dict_1={"Liquid Biopsy":"Blood for analysis","cfDNA":"Blood for analysis"} list_1=[u'Liquid', u'biopsy',u'based', u'on', u'circulating', u'cell-free', "cfdna",u'DNA', u'(cfDNA)', u'analysis', u'are', u'described', u'as', u'surrogate', u'samples', u'for', u'molecular', u'analysis.'] for i in list_1: for j in dict_1: for k in j.split(): count=0 if k.lower() in i.lower(): count+=1 print("Key: {}\nValue: {}\nCount: {}\nDescription: Came from '{}'\n".format(j, dict_1[j],str(count),i)) ``` But it was obvious, the last codes would give undesirable result. 
As can be seen below, the program catches both the `liquid` and `biopsy` words from `list_1`. `cfDNA` was written twice in `list_1`, so the program catches it twice. But is it possible to write the result once and sum up the number of catches? ``` Key: Liquid Biopsy Value: Blood for analysis Count: 1 Description: Came from 'Liquid' Key: Liquid Biopsy Value: Blood for analysis Count: 1 Description: Came from 'biopsy' Key: cfDNA Value: Blood for analysis Count: 1 Description: Came from 'cfdna' Key: cfDNA Value: Blood for analysis Count: 1 Description: Came from '(cfDNA)' Process finished with exit code 0 ``` I hope you understand what I wanted to do. I want to catch all of the keys that are written in a text, and I also want to count how many times these keys repeat in the text.
2017/05/19
[ "https://Stackoverflow.com/questions/44060080", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8016168/" ]
If I understood correctly, you want to find the number of times a "keyword" appears in a text. You can use the "re" module for this. ``` import re dict_1={"Liquid Biopsy":"Blood for analysis","cfDNA":"Blood for analysis", "asfdafaf":"dunno"} list_1=[u'Liquid', u'biopsy',u'based', u'on', u'circulating', u'cell-free', "cfdna",u'DNA', u'(cfDNA)', u'analysis', u'are', u'described', u'as', u'surrogate', u'samples', u'for', u'molecular', u'analysis.'] text = ' '.join(list_1).lower() for key in dict_1: n = len(re.findall(key.lower(), text)) if n > 0: print('Key:', key) print('Value:', dict_1[key]) print('n:', n) print() ```
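One caveat with building the pattern straight from the key: characters such as `(` or `+` in a key would be interpreted as regex syntax. Escaping the key first makes the count more robust; a sketch (the `count_key` helper is an illustration, not from the original answer):

```python
import re

text = "liquid biopsy based on cfdna (cfdna) analysis"

def count_key(key, text):
    """Count literal, case-insensitive occurrences of key in text."""
    # re.escape neutralises regex metacharacters in the key itself.
    return len(re.findall(re.escape(key.lower()), text.lower()))

print(count_key("cfDNA", text))         # 2
print(count_key("(cfDNA)", text))       # 1, parentheses matched literally
print(count_key("Liquid Biopsy", text)) # 1
```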
Recently I learned a new method for counting how many times a dictionary key repeats in a plain text without importing the "re" module. Perhaps it's suitable to add another method to this topic. ``` dict_1={"Liquid Biopsy":"Blood for analysis","cfDNA":"Blood for analysis"} list_1=[u'Liquid', u'biopsy', u'liquid', u'biopsy',u'based',u'cfdna' ,u'on', u'circulating', u'cell-free', u'DNA', u'(cfDNA)', u'analysis', u'are', u'described', u'as', u'surrogate', u'samples', u'for', u'molecular', u'analysis.'] string_1=" ".join(list_1).lower() for i in dict_1: if i.lower() in string_1: print("Key: {}\nValue: {}\nCount: {}\n".format(i,dict_1[i],string_1.count(i.lower()))) ``` The above code gives almost the same result as the method that imports the re module. The difference is that it doesn't write the keys twice. So it's a bit similar to the first code structure that was written in the first post. ``` Key: Liquid Biopsy Value: Blood for analysis Count: 2 Key: cfDNA Value: Blood for analysis Count: 2 Process finished with exit code 0 ```
54,830,602
Preface ======= I understand that `dict`s/`set`s should be created/updated with hashable objects only due to their implementation, so when this kind of code fails ``` >>> {{}} # empty dict of empty dict Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: unhashable type: 'dict' ``` it's ok and I've seen tons of this kind of messages. But if I want to check if some unhashable object is in `set`/`dict` ``` >>> {} in {} # empty dict not in empty dict ``` I get error as well ``` Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: unhashable type: 'dict' ``` Problem ======= What is the rationale behind this behavior? I understand that lookup and updating may be logically connected (like in [`dict.setdefault` method](https://docs.python.org/3/library/stdtypes.html#dict.setdefault)), but shouldn't it fail on modification step instead of lookup? Maybe I have some hashable "special" values that I handle in some way, but others (possibly unhashable) -- in another: ``` SPECIAL_CASES = frozenset(range(10)) | frozenset(range(100, 200)) ... def process_json(obj): if obj in SPECIAL_CASES: ... # handle special cases else: ... # do something else ``` so with given lookup behavior I'm forced to use one of the options * [LBYL](https://docs.python.org/3/glossary.html#term-lbyl) way: check if `obj` is hashable and only after that check if it is one of `SPECIAL_CASES` (which is not great since it is based on `SPECIAL_CASES` structure and lookup mechanism restrictions, but can be encapsulated in separate predicate), * [EAFP](https://docs.python.org/3/glossary.html#term-eafp) way: use some sort of utility for "safe lookup" like ``` def safe_contains(dict_or_set, obj): try: return obj in dict_or_set except TypeError: return False ``` * use `list`/`tuple` for `SPECIAL_CASES` (which is not `O(1)` on lookups). Or am I missing something trivial?
2019/02/22
[ "https://Stackoverflow.com/questions/54830602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5997596/" ]
I have found a solution and want to share it here so it helps someone else looking to do the same thing. The user running the docker command (without sudo) needs to have the docker group. So I tried adding the service account as a user and gave it the docker group and that's it. `docker login` to gcr worked and so did `docker run`. So the problem is solved but this raises a couple of additional questions. First, is this the correct way to do it? If it is not, then what is? If this is indeed the correct way, then perhaps a service account selected while creating a VM must be added as a user when it (the VM) is created. I can understand this leads to some complications such as what happens when the service account is changed. Does the old service account user gets deleted or should it be retained? But I think at least an option can be given to add the service account user to the VM - something like a checkbox in the console - so the end user can take a call. Hope someone from GCP reads this.
As stated in this [article](https://docs.docker.com/install/linux/linux-postinstall/), the steps you have taken are the correct way to do it. Adding users to the "docker" group will allow the users to run docker commands as non root. If you create a new service account and would like to have that service account run docker commands within a VM instance, then you will have to add that service account to the docker group as well. If you change the service account on a VM instance, then the old service account should still be able to run docker commands as long as the older service account is not removed from the docker group and has not been deleted from Cloud IAM; however, you will still need to add the new service account to the docker group to allow it to run docker commands as non root. Update: automating the creation of a service account at VM instance creation would be tedious. Within your startup script, you would have to first create the Service Account using the [gcloud commands](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account) and then add the appropriate [IAM roles](https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource). Once that is done, you would still have to add the service account to the docker group. It would be much easier to create the service account from the [Console](https://cloud.google.com/iam/docs/creating-managing-service-accounts#iam-service-accounts-create-console) when the VM instance is being created. Once the VM instance is created, you can add the service account to the docker group. If you would like to request a new feature within GCE, you can submit a Public Issue Tracker by visiting this [site](https://issuetracker.google.com).
32,834,419
I received this message: ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-23-60bbe78150c2> in <module>() 17 men_only_stats=data[0::4]!="male" 18 ---> 19 women_onboard = data[women_only_stats,1].astype(np.float) 20 men_onboard = data[men_only_stats,1].astype(np.float) 21 proportion_women_survive= sum(women_onboard)/size(women_onboard) IndexError: too many indices for array ``` when I enter my code here: ``` import csv as csv import numpy as np csv_file_object = csv.reader(open(r"C:\Users\IT'S OVER 9000\Downloads\train.csv", 'rb')) header = csv_file_object.next() data=[] for row in csv_file_object: data.append(row) data=np.array(data) number_passengers= np.size(data[0::4,1].astype(np.float)) passengers_survived=np.sum(data[0::4,1].astype(np.float)) proportion_survived=passengers_survived/number_passengers women_only_stats= data[0::4]=="female" men_only_stats=data[0::4]!="male" women_onboard = data[women_only_stats,1].astype(np.float) men_onboard = data[men_only_stats,1].astype(np.float) proportion_women_survive= sum(women_onboard)/size(women_onboard) proportion_men_survive= sum(men_onboard)/size(men_onboard) print proportion_women_survive print proportion_men_survive ``` Here are two lines of data from my csv file: ``` PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked,,,, 1,0,3,"Braund, Mr. Owen Harris",male,22,1,0,A/5,21171,7.25,,S,,, 2,1,1,"Cumings, Mrs. John Bradley (Florence Briggs Thayer)",female,38,1,0,PC,17599,71.2833,C85,C,,, ``` What did I do wrong, what caused it, and how do I fix it?
2015/09/29
[ "https://Stackoverflow.com/questions/32834419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5386822/" ]
You need to use `getChildFragmentManager()` instead of `getFragmentManager()` for placing and managing Fragments inside of a Fragment. So ``` public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { super.onCreate(savedInstanceState); view = inflater.inflate(R.layout.fragment_layout, container, false); FragmentTransaction fragmentTransaction = getChildFragmentManager().beginTransaction(); fragmentTransaction.replace(R.id.fragmentContainer, subFragment1); fragmentTransaction.commit(); return view; } public void showSubFragment2() { FragmentTransaction fragmentTransaction = getChildFragmentManager().beginTransaction(); fragmentTransaction.setCustomAnimations(R.animator.slide_in_left, R.animator.slide_out_right); fragmentTransaction.replace(R.id.fragmentContainer, subFragment2); fragmentTransaction.commit(); } public void showSubFragment1() { FragmentTransaction fragmentTransaction = getChildFragmentManager().beginTransaction(); fragmentTransaction.setCustomAnimations(R.animator.slide_in_right, R.animator.slide_out_left); fragmentTransaction.replace(R.id.fragmentContainer, subFragment1); fragmentTransaction.commit(); } ```
Try calling `getChildFragmentManager()` instead of `getFragmentManager()` and see if that helps.
72,590,538
I have 2 models: 1) patientprofile and 2) medInfo. In the first model, patientprofile, I am getting patient information (name and other personal details), and in the 2nd model I want to add the patient's medical information. When I check whether there is existing medical information for the patient, I want to show and update it; otherwise I need to create and then update it. The medinfo model uses a foreign key to patientprofile (id). It works perfectly in the admin panel, but when I try to do it in the UI I get the error below. Below is the code (views.py): ``` @login_required def medinfoupdate(request, patid): # to get patient name and id in medinfo page accessing patientprofile data patprofileedit = patientprofile.objects.get(id=patid) try: med = medInfo.objects.get(pat_ID=patid) if request.method == 'GET': form = medInfo_form(instance=med) return render(request, 'pmp/medinfo.html', {'med': med, 'form':form, 'patprofileedit' : patprofileedit} ) except: if request.method == 'GET': return render(request, 'pmp/medinfo.html', {'patprofileedit' : patprofileedit} ) if request.method == 'POST': try: form = medInfo_form(request.POST, instance=med) form.save() return redirect(patientlist) except ValueError: return render(request, 'pmp/medinfo.html', {'form': form, 'error': 'Data entered is wrong!'}) ``` Below is the error: ``` UnboundLocalError at /medinfo/pat-11 local variable 'med' referenced before assignment Request Method: POST Request URL: http://localhost:8000/medinfo/pat-11 Django Version: 4.0.4 Exception Type: UnboundLocalError Exception Value: local variable 'med' referenced before assignment Exception Location: E:\py\patient_management_project\pmp\views.py, line 143, in medinfoupdate Python Executable: C:\Users\Lenovo\AppData\Local\Programs\Python\Python310\python.exe Python Version: 3.10.4 Python Path: ['E:\\py\\patient_management_project', 'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310\\python310.zip',
'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310\\DLLs', 'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310', 'C:\\Users\\Lenovo\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages'] Server time: Sun, 12 Jun 2022 08:05:07 +0000 ```
2022/06/12
[ "https://Stackoverflow.com/questions/72590538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15837741/" ]
You have to make sure to understand the difference between string literals [1] and references to exported attribute(s) from the resources [2]. The way you are currently trying to get the output means it will output `aws_subnet.main.availability_zone[*]` as a string literal. To make sure you get the values you just need to remove the double quotes from the start and the end of the string literal: ``` output "list_of_az" { value = aws_subnet.main[*].availability_zone } ``` --- [1] <https://www.terraform.io/language/expressions/strings> [2] <https://www.terraform.io/language/expressions/references>
If your goal is to display all the Availability Zones in a region, you don't necessarily need to iterate over the subnets you have created. You can simply display the names from `data.aws_availability_zones`: ```hcl data "aws_availability_zones" "available" { state = "available" } output "list_of_az" { value = data.aws_availability_zones.available[*].names } ``` This will output something like: ``` list_of_az = [ tolist([ "us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1e", "us-east-1f", ]), ] ``` Obviously, the output will depend on your current region.
32,833,575
I am new to Python and am stuck on this one exercise. I am supposed to enter a sentence and find the longest word. If there are two or more words that share the longest length, then it should return the first word. This is what I have so far: ``` def find_longest_word(word_list): longest_word = '' for word in word_list: print(word, len(word)) words = input('Please enter a few words') word_list = words.split() find_longest_word(word_list) ``` But I do not know how to compare the lists and return the first/longest word.
2015/09/28
[ "https://Stackoverflow.com/questions/32833575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5386649/" ]
Use `max` python built-in function, using as `key` parameter the `len` function. It would iterate over `word_list` applying `len` function and then returning the longest one. ``` def find_longest_word(word_list): longest_word = max(word_list, key=len) return longest_word ```
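Because `max` scans left to right and only replaces the current best on a strictly greater key, ties are resolved in favor of the first word, which matches the exercise's requirement:

```python
def find_longest_word(word_list):
    # key=len means words are compared by length, not alphabetically.
    return max(word_list, key=len)

# On a tie, max keeps the earliest element, so the first longest word wins.
print(find_longest_word(["hi", "first", "other", "no"]))  # first
print(find_longest_word("one two three four".split()))    # three
```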
You shouldn't print out the length of each word. Instead, compare the length of the current `word` with the length of `longest_word`. If `word` is longer, you update `longest_word` to `word`. When you have been through all words, the longest word will be stored in `longest_word`. Then you can print or return it. ``` def find_longest_word(word_list): longest_word = '' for word in word_list: if len(word) > len(longest_word): longest_word = word print(longest_word) ``` Edit: levi's answer is much more elegant; this is a solution with a simple for loop, and is somewhat close to the one you tried to make yourself.
32,833,575
I am new to Python and am stuck on this one exercise. I am supposed to enter a sentence and find the longest word. If there are two or more words that have the same longest length, then it is to return the first word. This is what I have so far: ``` def find_longest_word(word_list): longest_word = '' for word in word_list: print(word, len(word)) words = input('Please enter a few words') word_list = words.split() find_longest_word(word_list) ``` But I do not know how to compare the lists and return the first/longest word.
2015/09/28
[ "https://Stackoverflow.com/questions/32833575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5386649/" ]
Use Python's built-in `max` function with the `len` function as the `key` parameter. It iterates over `word_list`, applying `len` to each word, and returns the longest one. ``` def find_longest_word(word_list): longest_word = max(word_list, key=len) return longest_word ```
Compare each word to the longest one yet, starting with a length of 0. If the word is longer than the longest yet, update the word and the longest\_size. It should look similar to this: ``` def find_longest_word(word_list): longest_word = '' longest_size = 0 for word in word_list: if len(word) > longest_size: longest_word = word longest_size = len(word) return longest_word words = input('Please enter a few words') word_list = words.split() find_longest_word(word_list) ```
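As a quick sanity check, here is the same idea as a self-contained, runnable snippet; the strict `>` comparison is what keeps the first of several equally long words:

```python
def find_longest_word(word_list):
    longest_word = ''
    longest_size = 0
    for word in word_list:
        if len(word) > longest_size:  # strict '>' keeps the first word on ties
            longest_word = word
            longest_size = len(word)
    return longest_word

# 'quick' and 'brown' are both 5 letters; the first one wins
print(find_longest_word("the quick brown fox".split()))  # prints "quick"
```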
32,833,575
I am new to Python and am stuck on this one exercise. I am supposed to enter a sentence and find the longest word. If there are two or more words that have the same longest length, then it is to return the first word. This is what I have so far: ``` def find_longest_word(word_list): longest_word = '' for word in word_list: print(word, len(word)) words = input('Please enter a few words') word_list = words.split() find_longest_word(word_list) ``` But I do not know how to compare the lists and return the first/longest word.
2015/09/28
[ "https://Stackoverflow.com/questions/32833575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5386649/" ]
You shouldn't print out the length of each word. Instead, compare the length of the current `word` and the length of `longest_word`. If `word` is longer, you update `longest_word` to `word`. When you have been through all the words, the longest word will be stored in `longest_word`. Then you can print or return it. ``` def find_longest_word(word_list): longest_word = '' for word in word_list: if len(word) > len(longest_word): longest_word = word print(longest_word) ``` edit: levi's answer is much more elegant; this is a solution with a simple for loop, and is somewhat close to the one you tried to make yourself.
Compare each word to the longest one yet, starting with a length of 0. If the word is longer than the longest yet, update the word and the longest\_size. It should look similar to this: ``` def find_longest_word(word_list): longest_word = '' longest_size = 0 for word in word_list: if len(word) > longest_size: longest_word = word longest_size = len(word) return longest_word words = input('Please enter a few words') word_list = words.split() find_longest_word(word_list) ```
14,369,739
I'm used to using dicts to represent graphs in python, but I'm running into some serious performance issues with large graphs and complex calculations, so I think I should cross over to using adjacency matrixes to bypass the overhead of hash tables. My question is, if I have a graph of the form g: {node: {vertex: weight . . . } . . . }, what would be the most efficient way to convert that into a list-based adjacency matrix?
2013/01/16
[ "https://Stackoverflow.com/questions/14369739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1427661/" ]
Probably not the most efficient, but a simple way to convert your format to an adjacency matrix on a list-basis could look like this: ``` g = {1:{2:.5, 3:.2}, 2:{4:.7}, 4:{5:.6, 3:.3}} hubs = g.items() # list of nodes and outgoing vertices size=max(map(lambda hub: max(hub[0], max(hub[1].keys())), hubs))+1 # matrix dimension is highest known node index + 1 matrix=[[None]*size for row in range(size)] # set up a matrix of the appropriate size for node, vertices in hubs: # loop through every node in dictionary for vertice, weight in vertices.items(): # loop through vertices matrix[vertice][node] = weight # define adjacency of both nodes by assigning the edge's weight ``` This works for directed graphs assuming that the nodes are represented simply by their indexes starting with zero. Here is a visualization and the resulting matrix for the graph processed in this sample: ![](https://i.stack.imgur.com/oBNMt.png) ``` 0 1 2 3 4 5 ------------------------------ 0 | 1 | 2 | 0.5 3 | 0.2 0.3 4 | 0.7 5 | 0.6 ``` If your graph is in fact undirected, I could think of some chances to optimize. In case the dictionary contains every node as a key with all its vertices listed, like `{1:{2:.2, 3:.3}, 2:{1:.2}, 3:{1:.3}}`, you could sort the corresponding list before traversing and limit the inner loop: ``` hubs = sorted(g.items()) for node, vertices in hubs: for vertice, weight in reversed(sorted(vertices.items())): if vertice >= node: matrix[vertice][node] = weight matrix[node][vertice] = weight else: # only care about vertices that haven't been saved before, break # continue with next node when the current one won't introduce any more vertices ``` You can probably make this more efficient by using [binary search](http://docs.python.org/2/library/bisect.html). Since the resulting matrix will obviously be a mirror-symmetric one, you could also go further and only store one half of it.
The easiest way to do this is perhaps to flip it on the vertical axis: ``` # unlike the one before, this sample doesn't rely on the dictionary containing every vertice twice matrix=[[None]*size for row in range(size)] for node, vertices in hubs: for vertice, weight in vertices.items(): matrix[vertice][size-node-1] = weight ``` Because one half of the matrix is cut off, not every lookup for the vertice between nodes `(u,v)` will work, so you have to make sure that the column index is greater than the row index for the cell you look up: ``` u,v = sorted((u,v)) weight = matrix[v][u] ``` Good luck!
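For reference, the directed conversion above can also be written as a compact nested comprehension. This is a sketch assuming the same sample dict `g` and the same `matrix[target][source]` layout used in the loop version:

```python
g = {1: {2: .5, 3: .2}, 2: {4: .7}, 4: {5: .6, 3: .3}}

# highest node index seen as either a source or a target, plus one
size = max(max(g), max(t for nbrs in g.values() for t in nbrs)) + 1

# matrix[target][source] = weight, matching the layout used above;
# missing edges come out as None via the chained dict.get calls
matrix = [[g.get(src, {}).get(tgt) for src in range(size)] for tgt in range(size)]

print(matrix[2][1], matrix[3][4])  # prints "0.5 0.3"
```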
Well, to implement this as an adjacency list, you can create two classes: one for storing the information about the vertices. ``` # Vertex, which will represent each vertex in the graph. Each Vertex uses a dictionary # to keep track of the vertices to which it is connected, and the weight of each edge. class Vertex: # Initialize an object of this class # we use double underscore def __init__(self, key): # we identify the vertex with its key self.id = key # this stores the info about the various connections any object # (vertex) of this class has using a dictionary which is called connectedTo. # initially its not connected to any other node so, self.connectedTo={} # Add the information about connection between vertexes into the dictionary connectedTo def addNeighbor(self,neighbor,weight=0): # neighbor is another vertex we update the connectedTo dictionary ( Vertex:weight ) # with the information of this new Edge, the key is the vertex and # the edge's weight is its value. This is the new element in the dictionary self.connectedTo[neighbor] = weight # Return a string containing a nicely printable representation of an object. def __str__(self): return str(self.id) + ' connectedTo: ' + str([x.id for x in self.connectedTo]) # Return the vertices self is connected to in a list def getConnections(self): return self.connectedTo.keys() # Return the id with which we identify the vertex, its name you could say def getId(self): return self.id # Return the value (weight) of the edge (or arc) between self and nbr (two vertices) def getWeight(self,nbr): return self.connectedTo[nbr] ``` As you can see from the comments, each vertex stores a 'key' which is used to identify it, and it has a dictionary 'connectedTo' which holds the key-weight pairs of all connections from this vertex: the key of the connected vertex and the weight of the edge.
Now we can store a collection of such vertices using the Graph class, which can be implemented like this: ``` # The Graph class contains a dictionary that maps vertex keys to vertex objects (vertList) and a count of the number of vertices in the graph class Graph: def __init__(self): self.vertList = {} self.numVertices = 0 # Returns a vertex which was added to the graph with given key def addVertex(self,key): self.numVertices = self.numVertices + 1 # create a vertex object newVertex = Vertex(key) # set its key self.vertList[key] = newVertex return newVertex # Return the vertex object corresponding to the key - n def getVertex(self,n): if n in self.vertList: return self.vertList[n] else: return None # Returns boolean - checks if graph contains a vertex with key n def __contains__(self,n): return n in self.vertList # Adds an edge to the graph using addNeighbor method of Vertex def addEdge(self,f,t,cost=0): # check if the 2 vertices involved in this edge exist inside # the graph if not they are added to the graph # nv is the Vertex object which is part of the graph # and has key of 'f' and 't' respectively, cost is the edge weight if f not in self.vertList: nv = self.addVertex(f) if t not in self.vertList: nv = self.addVertex(t) # self.vertList[f] gets the vertex with f as key, we call this Vertex # object's addNeighbor with both the weight and self.vertList[t] (the vertex with t as key) self.vertList[f].addNeighbor(self.vertList[t], cost) # Return the list of all keys corresponding to the vertices in the graph def getVertices(self): return self.vertList.keys() # Returns an iterator object, which contains all the Vertex objects def __iter__(self): return iter(self.vertList.values()) ``` Here we have the Graph class, which holds the number of vertices in 'numVertices' and has the dictionary 'vertList', which holds the keys and the Vertex (the class we just made) objects present in the graph.
We can create a graph and set it up by calling ``` # Now lets make the graph the_graph=Graph() print "enter the number of nodes in the graph" no_nodes=int(raw_input()) # setup the nodes for i in range(no_nodes): print "enter the "+str(i+1)+" Node's key" the_graph.addVertex(raw_input()) print "enter the number of edges in the graph" no_edges=int(raw_input()) print "enter the maximum weight possible for any of edges in the graph" max_weight=int(raw_input()) # setup the edges for i in range(no_edges): print "For the "+str(i+1)+" Edge, " print "of the 2 nodes involved in this edge \nenter the first Node's key" node1_key=raw_input() print "\nenter the second Node's key" node2_key=raw_input() print "\nenter the cost (or weight) of this edge (or arc) - an integer" cost=int(raw_input()) # add the edge with this info the_graph.addEdge(node1_key,node2_key,cost) ``` If you want undirected graphs, then also add the line `the_graph.addEdge(node2_key,node1_key,cost)`. That way the connection will be stored not just as `a` connected to `b`, but as both `a` connected to `b` and `b` connected to `a`. Also mind the indentation for both class implementations above; it might be incorrect.
48,266,643
I have the 2d list mainlist ``` mainlist = [['John','Doe',True],['Mary','Jane',False],['James','Smith',False]] slist1 = ['John', 'Doe'] slist2 = ['John', 'Smith'] slist3 = ['Doe', 'John'] slist4 = ['John', True] ``` How can I determine whether such a sublist exists in the list, so that slist1 tested against mainlist returns True while slist2 returns False? I am thinking of something like this (code from [here](https://stackoverflow.com/questions/22673770/simplest-way-to-check-if-multiple-items-are-or-are-not-in-a-list "here")) ``` for sublist in mainlist: if all(i in sublist for i in slist1): return True break ``` is there a more "pythonic" way to do this? thanks edit: 1. slist1 tested against mainlist would return True 2. slist2 would return False 3. slist3 would return False 4. slist4 would return False so basically, I am just testing if slist is in the first 2 indexes of mainlist[x]
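To make the intent concrete, here is a minimal runnable sketch of the check being asked for; the helper name `has_prefix_row` is made up for illustration:

```python
mainlist = [['John', 'Doe', True], ['Mary', 'Jane', False], ['James', 'Smith', False]]

def has_prefix_row(main, sub):
    # True if some row of `main` starts with exactly the items of `sub`, in order
    return any(row[:len(sub)] == sub for row in main)

print(has_prefix_row(mainlist, ['John', 'Doe']))    # prints "True"
print(has_prefix_row(mainlist, ['John', 'Smith']))  # prints "False"
print(has_prefix_row(mainlist, ['Doe', 'John']))    # prints "False"
```

Slicing with `row[:len(sub)]` keeps the comparison order-sensitive and limited to the leading indexes, which is why the reversed pair and the mixed-type pair both come out False.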
2018/01/15
[ "https://Stackoverflow.com/questions/48266643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8573372/" ]
You could take an array with references to the wanted arrays, and use the remainder of the current index divided by the length of the temporary array as the index for pushing. ```js var array = ['fruit', 'vegetables', 'sugars', 'bread', 'fruit', 'vegetables', 'sugars', 'bread'], fruits = [], // final arrays vegetables = [], // sugars = [], // breads = [], // temp = [fruits, vegetables, sugars, breads], len = temp.length; array.forEach((v, i) => temp[i % len].push(v)); console.log(fruits); console.log(vegetables); console.log(sugars); console.log(breads); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ```
Not super elegant, but it will do the job: ```js var a = ['bread_1','fruit_1','vegetable_1','sugars_1', 'bread_2','fruit_2','vegetable_2','sugars_2', 'bread_3','fruit_3','vegetable_3','sugars_3']; var i=0; a = a.reduce(function(ac, va, id, ar){ if(i==ac.length) i=0; ac[i].push(va); i++; return ac; }, [[],[],[],[]]); console.log(a); ```
48,266,643
I have the 2d list mainlist ``` mainlist = [['John','Doe',True],['Mary','Jane',False],['James','Smith',False]] slist1 = ['John', 'Doe'] slist2 = ['John', 'Smith'] slist3 = ['Doe', 'John'] slist4 = ['John', True] ``` How can I determine whether such a sublist exists in the list, so that slist1 tested against mainlist returns True while slist2 returns False? I am thinking of something like this (code from [here](https://stackoverflow.com/questions/22673770/simplest-way-to-check-if-multiple-items-are-or-are-not-in-a-list "here")) ``` for sublist in mainlist: if all(i in sublist for i in slist1): return True break ``` is there a more "pythonic" way to do this? thanks edit: 1. slist1 tested against mainlist would return True 2. slist2 would return False 3. slist3 would return False 4. slist4 would return False so basically, I am just testing if slist is in the first 2 indexes of mainlist[x]
2018/01/15
[ "https://Stackoverflow.com/questions/48266643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8573372/" ]
You could take an array with references to the wanted arrays, and use the remainder of the current index divided by the length of the temporary array as the index for pushing. ```js var array = ['fruit', 'vegetables', 'sugars', 'bread', 'fruit', 'vegetables', 'sugars', 'bread'], fruits = [], // final arrays vegetables = [], // sugars = [], // breads = [], // temp = [fruits, vegetables, sugars, breads], len = temp.length; array.forEach((v, i) => temp[i % len].push(v)); console.log(fruits); console.log(vegetables); console.log(sugars); console.log(breads); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ```
I suggest (since the pattern may vary) creating an array with categories and what elements those categories include, thus creating an object with keys identifying your categories. The variables are not 'declared' outside the object, but you can access the object's keys the same way you'd access different variables: ``` // Simple old-style catalog as reference for your elements array. var Categories = { 'fruits': ['orange', 'apple', 'all other fruits…'], 'vegetables': ['ginger', 'broccoli', 'all other vegetables…'], 'bread': ['pizza', 'panini', 'all other breads…'], 'sugars': ['soda', 'sugar1', '90_percent_of_products_are_sugar', 'all other sugars…'] }; // Your actual elements array. var ElementsArray = [ 'orange', 'broccoli', 'pizza', 'sugar1', 'apple', 'ginger', 'panini', 'soda' ]; // Your organized-by-category variable, declared as an object so you can easily access it as Array or Object (the keys are the variable arrays). var OrderedElementsArray = {}; for (element in ElementsArray) { for (category in Categories) { // Check if the key is not an array and initialize it for later use of push(). if (typeof OrderedElementsArray[category] != 'object') { OrderedElementsArray[category] = []; } // indexOf() returns -1 if no element matches an index, thus the expression `>= 0`. if (Categories[category].indexOf(ElementsArray[element]) >= 0) { OrderedElementsArray[category].push(ElementsArray[element]); } } } // Here you can access your object variables with dot notation. All your categories will be accessible either way. console.log(OrderedElementsArray.fruits); console.log(OrderedElementsArray.vegetables); console.log(OrderedElementsArray.bread); console.log(OrderedElementsArray.sugars); // Here you can access your object variables with key notation. All your categories will be accessible either way. console.log(OrderedElementsArray['fruits']); console.log(OrderedElementsArray['vegetables']); console.log(OrderedElementsArray['bread']); console.log(OrderedElementsArray['sugars']); ```
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
With pandas, you can have a one-liner like this: ``` import pandas as pd date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = [i.strftime("%b-%y") for i in pd.period_range(start=date1, end=date2, freq='M')] ``` `pd.period_range` is used here rather than `pd.date_range(..., freq='MS')`, because the latter only emits month starts and would skip the partial first month (Oct-14 in this example).
Here is my solution with a simple list comprehension which uses `xrange` (Python 2) to know where the months must start and end ``` from datetime import datetime as dt sd = dt.strptime('2014-10-10', "%Y-%m-%d") ed = dt.strptime('2016-01-07', "%Y-%m-%d") lst = [dt.strptime('%2.2d-%2.2d' % (y, m), '%Y-%m').strftime('%b-%y') \ for y in xrange(sd.year, ed.year+1) \ for m in xrange(sd.month if y==sd.year else 1, ed.month+1 if y == ed.year else 13)] print lst ``` produces ``` ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] ```
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
If you want a dictionary that contains pair of month's starting date and ending date between your desired years, then here's how to get that. ``` start_year = int(input("Enter Starting Year: ")) end_year = int(input("Enter Ending Year: ")) import calendar month_dict = {} for x in range(start_year, end_year): for y in range(1, 13): if y < 10: start_date = '01'+'-'+'0'+str(y)+'-'+str(x) end_date = str(calendar.monthrange(x, y)[1])+'-'+'0'+str(y)+'-'+str(x) month_dict[start_date] = end_date else: start_date = '01'+'-'+str(y)+'-'+str(x) end_date = str(calendar.monthrange(x, y)[1])+'-'+str(y)+'-'+str(x) month_dict[start_date] = end_date ``` This gives the following output: ``` {'01-01-2000': '31-01-2000', '01-02-2000': '29-02-2000', '01-03-2000': '31-03-2000', '01-04-2000': '30-04-2000', '01-05-2000': '31-05-2000', '01-06-2000': '30-06-2000', '01-07-2000': '31-07-2000', '01-08-2000': '31-08-2000', '01-09-2000': '30-09-2000', '01-10-2000': '31-10-2000', '01-11-2000': '30-11-2000', '01-12-2000': '31-12-2000', '01-01-2001': '31-01-2001', '01-02-2001': '28-02-2001', '01-03-2001': '31-03-2001', '01-04-2001': '30-04-2001', '01-05-2001': '31-05-2001', '01-06-2001': '30-06-2001', '01-07-2001': '31-07-2001', '01-08-2001': '31-08-2001', '01-09-2001': '30-09-2001', '01-10-2001': '31-10-2001', '01-11-2001': '30-11-2001', '01-12-2001': '31-12-2001', '01-01-2002': '31-01-2002', '01-02-2002': '28-02-2002', '01-03-2002': '31-03-2002', '01-04-2002': '30-04-2002', '01-05-2002': '31-05-2002', '01-06-2002': '30-06-2002', '01-07-2002': '31-07-2002', '01-08-2002': '31-08-2002', '01-09-2002': '30-09-2002', '01-10-2002': '31-10-2002', '01-11-2002': '30-11-2002', '01-12-2002': '31-12-2002', '01-01-2003': '31-01-2003', '01-02-2003': '28-02-2003', '01-03-2003': '31-03-2003', '01-04-2003': '30-04-2003', '01-05-2003': '31-05-2003', '01-06-2003': '30-06-2003', '01-07-2003': '31-07-2003', '01-08-2003': '31-08-2003', '01-09-2003': '30-09-2003', '01-10-2003': '31-10-2003', '01-11-2003': '30-11-2003', 
'01-12-2003': '31-12-2003', '01-01-2004': '31-01-2004', '01-02-2004': '29-02-2004', '01-03-2004': '31-03-2004', '01-04-2004': '30-04-2004', '01-05-2004': '31-05-2004', '01-06-2004': '30-06-2004', '01-07-2004': '31-07-2004', '01-08-2004': '31-08-2004', '01-09-2004': '30-09-2004', '01-10-2004': '31-10-2004', '01-11-2004': '30-11-2004', '01-12-2004': '31-12-2004', '01-01-2005': '31-01-2005', '01-02-2005': '28-02-2005', '01-03-2005': '31-03-2005', '01-04-2005': '30-04-2005', '01-05-2005': '31-05-2005', '01-06-2005': '30-06-2005', '01-07-2005': '31-07-2005', '01-08-2005': '31-08-2005', '01-09-2005': '30-09-2005', '01-10-2005': '31-10-2005', '01-11-2005': '30-11-2005', '01-12-2005': '31-12-2005', '01-01-2006': '31-01-2006', '01-02-2006': '28-02-2006', '01-03-2006': '31-03-2006', '01-04-2006': '30-04-2006', '01-05-2006': '31-05-2006', '01-06-2006': '30-06-2006', '01-07-2006': '31-07-2006', '01-08-2006': '31-08-2006', '01-09-2006': '30-09-2006', '01-10-2006': '31-10-2006', '01-11-2006': '30-11-2006', '01-12-2006': '31-12-2006', '01-01-2007': '31-01-2007', '01-02-2007': '28-02-2007', '01-03-2007': '31-03-2007', '01-04-2007': '30-04-2007', '01-05-2007': '31-05-2007', '01-06-2007': '30-06-2007', '01-07-2007': '31-07-2007', '01-08-2007': '31-08-2007', '01-09-2007': '30-09-2007', '01-10-2007': '31-10-2007', '01-11-2007': '30-11-2007', '01-12-2007': '31-12-2007', '01-01-2008': '31-01-2008', '01-02-2008': '29-02-2008', '01-03-2008': '31-03-2008', '01-04-2008': '30-04-2008', '01-05-2008': '31-05-2008', '01-06-2008': '30-06-2008', '01-07-2008': '31-07-2008', '01-08-2008': '31-08-2008', '01-09-2008': '30-09-2008', '01-10-2008': '31-10-2008', '01-11-2008': '30-11-2008', '01-12-2008': '31-12-2008', '01-01-2009': '31-01-2009', '01-02-2009': '28-02-2009', '01-03-2009': '31-03-2009', '01-04-2009': '30-04-2009', '01-05-2009': '31-05-2009', '01-06-2009': '30-06-2009', '01-07-2009': '31-07-2009', '01-08-2009': '31-08-2009', '01-09-2009': '30-09-2009', '01-10-2009': '31-10-2009', 
'01-11-2009': '30-11-2009', '01-12-2009': '31-12-2009', '01-01-2010': '31-01-2010', '01-02-2010': '28-02-2010', '01-03-2010': '31-03-2010', '01-04-2010': '30-04-2010', '01-05-2010': '31-05-2010', '01-06-2010': '30-06-2010', '01-07-2010': '31-07-2010', '01-08-2010': '31-08-2010', '01-09-2010': '30-09-2010', '01-10-2010': '31-10-2010', '01-11-2010': '30-11-2010', '01-12-2010': '31-12-2010', '01-01-2011': '31-01-2011', '01-02-2011': '28-02-2011', '01-03-2011': '31-03-2011', '01-04-2011': '30-04-2011', '01-05-2011': '31-05-2011', '01-06-2011': '30-06-2011', '01-07-2011': '31-07-2011', '01-08-2011': '31-08-2011', '01-09-2011': '30-09-2011', '01-10-2011': '31-10-2011', '01-11-2011': '30-11-2011', '01-12-2011': '31-12-2011', '01-01-2012': '31-01-2012', '01-02-2012': '29-02-2012', '01-03-2012': '31-03-2012', '01-04-2012': '30-04-2012', '01-05-2012': '31-05-2012', '01-06-2012': '30-06-2012', '01-07-2012': '31-07-2012', '01-08-2012': '31-08-2012', '01-09-2012': '30-09-2012', '01-10-2012': '31-10-2012', '01-11-2012': '30-11-2012', '01-12-2012': '31-12-2012', '01-01-2013': '31-01-2013', '01-02-2013': '28-02-2013', '01-03-2013': '31-03-2013', '01-04-2013': '30-04-2013', '01-05-2013': '31-05-2013', '01-06-2013': '30-06-2013', '01-07-2013': '31-07-2013', '01-08-2013': '31-08-2013', '01-09-2013': '30-09-2013', '01-10-2013': '31-10-2013', '01-11-2013': '30-11-2013', '01-12-2013': '31-12-2013', '01-01-2014': '31-01-2014', '01-02-2014': '28-02-2014', '01-03-2014': '31-03-2014', '01-04-2014': '30-04-2014', '01-05-2014': '31-05-2014', '01-06-2014': '30-06-2014', '01-07-2014': '31-07-2014', '01-08-2014': '31-08-2014', '01-09-2014': '30-09-2014', '01-10-2014': '31-10-2014', '01-11-2014': '30-11-2014', '01-12-2014': '31-12-2014', '01-01-2015': '31-01-2015', '01-02-2015': '28-02-2015', '01-03-2015': '31-03-2015', '01-04-2015': '30-04-2015', '01-05-2015': '31-05-2015', '01-06-2015': '30-06-2015', '01-07-2015': '31-07-2015', '01-08-2015': '31-08-2015', '01-09-2015': '30-09-2015', 
'01-10-2015': '31-10-2015', '01-11-2015': '30-11-2015', '01-12-2015': '31-12-2015', '01-01-2016': '31-01-2016', '01-02-2016': '29-02-2016', '01-03-2016': '31-03-2016', '01-04-2016': '30-04-2016', '01-05-2016': '31-05-2016', '01-06-2016': '30-06-2016', '01-07-2016': '31-07-2016', '01-08-2016': '31-08-2016', '01-09-2016': '30-09-2016', '01-10-2016': '31-10-2016', '01-11-2016': '30-11-2016', '01-12-2016': '31-12-2016', '01-01-2017': '31-01-2017', '01-02-2017': '28-02-2017', '01-03-2017': '31-03-2017', '01-04-2017': '30-04-2017', '01-05-2017': '31-05-2017', '01-06-2017': '30-06-2017', '01-07-2017': '31-07-2017', '01-08-2017': '31-08-2017', '01-09-2017': '30-09-2017', '01-10-2017': '31-10-2017', '01-11-2017': '30-11-2017', '01-12-2017': '31-12-2017', '01-01-2018': '31-01-2018', '01-02-2018': '28-02-2018', '01-03-2018': '31-03-2018', '01-04-2018': '30-04-2018', '01-05-2018': '31-05-2018', '01-06-2018': '30-06-2018', '01-07-2018': '31-07-2018', '01-08-2018': '31-08-2018', '01-09-2018': '30-09-2018', '01-10-2018': '31-10-2018', '01-11-2018': '30-11-2018', '01-12-2018': '31-12-2018', '01-01-2019': '31-01-2019', '01-02-2019': '28-02-2019', '01-03-2019': '31-03-2019', '01-04-2019': '30-04-2019', '01-05-2019': '31-05-2019', '01-06-2019': '30-06-2019', '01-07-2019': '31-07-2019', '01-08-2019': '31-08-2019', '01-09-2019': '30-09-2019', '01-10-2019': '31-10-2019', '01-11-2019': '30-11-2019', '01-12-2019': '31-12-2019', '01-01-2020': '31-01-2020', '01-02-2020': '29-02-2020', '01-03-2020': '31-03-2020', '01-04-2020': '30-04-2020', '01-05-2020': '31-05-2020', '01-06-2020': '30-06-2020', '01-07-2020': '31-07-2020', '01-08-2020': '31-08-2020', '01-09-2020': '30-09-2020', '01-10-2020': '31-10-2020', '01-11-2020': '30-11-2020', '01-12-2020': '31-12-2020', '01-01-2021': '31-01-2021', '01-02-2021': '28-02-2021', '01-03-2021': '31-03-2021', '01-04-2021': '30-04-2021', '01-05-2021': '31-05-2021', '01-06-2021': '30-06-2021', '01-07-2021': '31-07-2021', '01-08-2021': '31-08-2021', 
'01-09-2021': '30-09-2021', '01-10-2021': '31-10-2021', '01-11-2021': '30-11-2021', '01-12-2021': '31-12-2021'} ```
Here is a similar version of what Pynchia suggested; the implementation below is for Python 3.8, while the one he implemented is for Python 2.x. ``` import datetime st="2020-06-24" ed="2020-11-24" start_date = datetime.datetime.strptime(st.strip(), '%Y-%m-%d') end_date = datetime.datetime.strptime(ed.strip(), '%Y-%m-%d') months = [datetime.datetime.strptime('%2.2d-%2.2d' % (y, m), '%Y-%m').strftime('%b-%y') for y in range(start_date.year, end_date.year + 1) for m in range(start_date.month if y == start_date.year else 1, end_date.month + 1 if y == end_date.year else 13)] print(months) ```
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
Here is my solution with a simple list comprehension which uses `xrange` (Python 2) to know where the months must start and end ``` from datetime import datetime as dt sd = dt.strptime('2014-10-10', "%Y-%m-%d") ed = dt.strptime('2016-01-07', "%Y-%m-%d") lst = [dt.strptime('%2.2d-%2.2d' % (y, m), '%Y-%m').strftime('%b-%y') \ for y in xrange(sd.year, ed.year+1) \ for m in xrange(sd.month if y==sd.year else 1, ed.month+1 if y == ed.year else 13)] print lst ``` produces ``` ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] ```
Having done similar stuff previously, I took a stab at solving this. Using distinct components for doing this is more flexible and enables you to mix and match them for different use-cases. They also can be tested more easily this way, as you can see by the doctests in `iterate_months`. Also I suggest using `datetime.date` objects for your input, as you can just do more with those. To do that you'll have to first parse your input string, but this is very easily done. Parsing the date-strings ``` def datify(date): if isinstance(date, datetime.datetime): # check datetime first: it is a subclass of date return date.date() elif isinstance(date, datetime.date): return date else: # adapted from simleo's answer return datetime.datetime.strptime(date, "%Y-%m-%d").date() ``` First, we iterate through the months ------------------------------------ ``` import datetime def iterate_months(start_date, end_date): """Iterate monthly between two given dates. Emitted will be the first day of each month. >>> list(iterate_months(datetime.date(1999, 11, 1), ... datetime.date(2000, 2, 1))) [datetime.date(1999, 11, 1), datetime.date(1999, 12, 1),\ datetime.date(2000, 1, 1), datetime.date(2000, 2, 1)] """ assert isinstance(start_date, datetime.date) assert isinstance(end_date, datetime.date) assert start_date < end_date year = start_date.year month = start_date.month while True: current = datetime.date(year, month, 1) yield current if current.month == end_date.month and current.year == end_date.year: break else: month = ((month + 1) % 12) or 12 if month == 1: year += 1 if __name__ == '__main__': import doctest doctest.testmod() ``` To format your dates, use something like this --------------------------------------------- ``` def format_month(date): return date.strftime(r"%b-%y") ``` Putting it all together ----------------------- ``` start = datify("2014-10-10") end = datify("2016-01-07") for entry in iterate_months(start, end): print format_month(entry) ``` Or save it as a list: ``` result = list(iterate_months(start, end)) ```
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
``` >>> from datetime import datetime, timedelta >>> from collections import OrderedDict >>> dates = ["2014-10-10", "2016-01-07"] >>> start, end = [datetime.strptime(_, "%Y-%m-%d") for _ in dates] >>> OrderedDict(((start + timedelta(_)).strftime(r"%b-%y"), None) for _ in xrange((end - start).days)).keys() ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] ``` **Update:** a bit of explanation, as requested in one comment. There are three problems here: parsing the dates into appropriate data structures (`strptime`); getting the date range given the two extremes and the step (one month); formatting the output dates (`strftime`). The `datetime` type overloads the subtraction operator, so that `end - start` makes sense. The result is a `timedelta` object that represents the difference between the two dates, and the `.days` attribute gets this difference expressed in days. There is no `.months` attribute, so we iterate one day at a time and convert the dates to the desired output format. This yields a lot of duplicates, which the `OrderedDict` removes while keeping the items in the right order. Now this is simple and concise because it lets the datetime module do all the work, but it's also horribly inefficient. We're calling a lot of methods for each day while we only need to output months. If performance is not an issue, the above code will be just fine. Otherwise, we'll have to work a bit more. Let's compare the above implementation with a more efficient one: ``` from datetime import datetime, timedelta from collections import OrderedDict dates = ["2014-10-10", "2016-01-07"] def monthlist_short(dates): start, end = [datetime.strptime(_, "%Y-%m-%d") for _ in dates] return OrderedDict(((start + timedelta(_)).strftime(r"%b-%y"), None) for _ in xrange((end - start).days)).keys() def monthlist_fast(dates): start, end = [datetime.strptime(_, "%Y-%m-%d") for _ in dates] total_months = lambda dt: dt.month + 12 * dt.year mlist = [] for tot_m in xrange(total_months(start)-1, total_months(end)): y, m = divmod(tot_m, 12) mlist.append(datetime(y, m+1, 1).strftime("%b-%y")) return mlist assert monthlist_fast(dates) == monthlist_short(dates) if __name__ == "__main__": from timeit import Timer for func in "monthlist_short", "monthlist_fast": print func, Timer("%s(dates)" % func, "from __main__ import dates, %s" % func).timeit(1000) ``` On my laptop, I get the following output: ``` monthlist_short 2.3209939003 monthlist_fast 0.0774540901184 ``` The concise implementation is about 30 times slower, so I would not recommend it in time-critical applications :)
Try this out; it will list all the months from start to end as a continuous chain starting from the start date. ``` import datetime a='2021-12-1' b='2022-1-2' d1=datetime.datetime.strptime(a,'%Y-%m-%d') d2=datetime.datetime.strptime(b,'%Y-%m-%d') months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] totalmonts=(d2.year-d1.year)*12+d2.month-11+12-d1.month print(totalmonts) for i in range(totalmonts): print(months[(d1.month+i-1)%12]) ```
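A self-contained Python 3 sketch of the total-months arithmetic used by `monthlist_fast` above; the function name and the inclusive-range choice here are my own:

```python
from datetime import date, datetime

def month_list(start_str, end_str, fmt="%b-%y"):
    """List every month between two ISO dates, inclusive, as formatted strings."""
    start = datetime.strptime(start_str, "%Y-%m-%d")
    end = datetime.strptime(end_str, "%Y-%m-%d")
    # Number each month from year 0 so consecutive months differ by exactly 1.
    first = start.year * 12 + start.month - 1
    last = end.year * 12 + end.month - 1
    months = []
    for total in range(first, last + 1):
        year, month = divmod(total, 12)
        months.append(date(year, month + 1, 1).strftime(fmt))
    return months

print(month_list("2014-10-10", "2016-01-07"))
```

Because the loop counts months directly instead of days, the cost depends only on the number of months in the range.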
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
Here is my solution with a simple list comprehension which uses `range` to know where months must start and end ``` from datetime import datetime as dt sd = dt.strptime('2014-10-10', "%Y-%m-%d") ed = dt.strptime('2016-01-07', "%Y-%m-%d") lst = [dt.strptime('%2.2d-%2.2d' % (y, m), '%Y-%m').strftime('%b-%y') \ for y in xrange(sd.year, ed.year+1) \ for m in xrange(sd.month if y==sd.year else 1, ed.month+1 if y == ed.year else 13)] print lst ``` produces ``` ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] ```
I came to a solution that uses `python-dateutil` and works with Python 3.8+: <https://gist.github.com/anatoly-scherbakov/593770d446a06f109438a134863ba969> ``` import datetime import itertools import operator from typing import Iterator from dateutil.relativedelta import relativedelta def month_range( start: datetime.date, end: datetime.date, ) -> Iterator[datetime.date]: """Yields the 1st day of each month in the given date range.""" yield from itertools.takewhile( lambda date: date < end, itertools.accumulate( itertools.repeat(relativedelta(months=1)), operator.add, initial=start, ) ) ```
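For comparison, here is a stdlib-only sketch of the same generator idea that needs neither `python-dateutil` nor Python 3.8 features; unlike the gist it yields the first of each month rather than offsets of the exact start date, and the names are mine:

```python
import datetime
from typing import Iterator

def month_starts(start: datetime.date, end: datetime.date) -> Iterator[datetime.date]:
    """Yield the 1st of each month from start's month up to, but not including, end."""
    current = datetime.date(start.year, start.month, 1)
    while current < end:
        yield current
        # Advance one month, rolling the year over after December.
        if current.month == 12:
            current = datetime.date(current.year + 1, 1, 1)
        else:
            current = datetime.date(current.year, current.month + 1, 1)

print(list(month_starts(datetime.date(2014, 10, 10), datetime.date(2015, 1, 1))))
```

Formatting each yielded date with `strftime("%b-%y")` then produces the list shape the question asks for.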
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
You have to use [Calendar](https://docs.python.org/2/library/calendar.html#module-calendar) and [Datetime](https://docs.python.org/2/library/datetime.html) ``` import calendar from datetime import * date1 = datetime.strptime("2014-10-10", "%Y-%m-%d") date2 = datetime.strptime("2016-01-07", "%Y-%m-%d") date1 = date1.replace(day = 1) date2 = date2.replace(day = 1) months_str = calendar.month_name months = [] while date1 <= date2: # inclusive, so the end month is not dropped month = date1.month year = date1.year month_str = months_str[month][0:3] months.append("{0}-{1}".format(month_str,str(year)[-2:])) next_month = month+1 if month != 12 else 1 next_year = year + 1 if next_month == 1 else year date1 = date1.replace( month = next_month, year= next_year) print months ``` This code returns ``` ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] ```
Here is a similar version of what Pynchia suggested; the below implementation is for Python 3.8, while the one he implemented is for Python 2.x ``` import datetime st="2020-06-24" ed="2020-11-24" start_date = datetime.datetime.strptime(st.strip(), '%Y-%m-%d') end_date = datetime.datetime.strptime(ed.strip(), '%Y-%m-%d') months = [datetime.datetime.strptime('%2.2d-%2.2d' % (y, m), '%Y-%m').strftime('%b-%y') for y in range(start_date.year, end_date.year + 1) for m in range(start_date.month if y == start_date.year else 1, end_date.month + 1 if y == end_date.year else 13)] print(months) ```
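The calendar-based answer slices `calendar.month_name[...]` down to three letters; for what it's worth, the stdlib also provides `calendar.month_abbr` with the abbreviations ready-made. A small sketch (in the default C locale these are the English names):

```python
import calendar

# calendar.month_abbr is a locale-aware sequence indexed 1..12; index 0 is "".
abbrs = [calendar.month_abbr[m] for m in range(1, 13)]
print(abbrs)
```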
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
With pandas, you can have a one-liner like this: ``` import pandas as pd date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = [i.strftime("%b-%y") for i in pd.date_range(start=date1, end=date2, freq='MS')] ```
Having done similar stuff previously, I took a stab at solving this. Using distinct components for doing this is more flexible and enables you to mix and match them for different use-cases. They also can be tested more easily this way, as you can see by the doctests in `iterate_months`. Also I suggest to use `datetime.date` objects for your input as you can just do more with those. To do that you'll have to first parse your input string, but this is very easily done. Parsing the date-strings ``` def datify(date): if isinstance(date, datetime.date): return date elif isinstance(date, datetime.datetime): return date.date() else: # taken from simleo's answer return datetime.strptime(date, "%Y-%m-%d") ``` First, we iterate through the months ------------------------------------ ``` import datetime def iterate_months(start_date, end_date): """Iterate monthly between two given dates. Emitted will be the first day of each month. >>> list(iterate_months(datetime.date(1999, 11, 1), ... datetime.date(2000, 2, 1))) [datetime.date(1999, 11, 1), datetime.date(1999, 12, 1),\ datetime.date(2000, 1, 1), datetime.date(2000, 2, 1)] """ assert isinstance(start_date, datetime.date) assert isinstance(end_date, datetime.date) assert start_date < end_date year = start_date.year month = start_date.month while True: current = datetime.date(year, month, 1) yield current if current.month == end_date.month and current.year == end_date.year: break else: month = ((month + 1) % 12) or 12 if month == 1: year += 1 if __name__ == '__main__': import doctest doctest.testmod() ``` To format your dates, use something like this --------------------------------------------- ``` def format_month(date): return date.strftime(r"%b-%y") ``` Putting it all together ----------------------- ``` start = datify("2014-10-10") end = datify("2016-01-07") for entry in iterate_months(start, end): print format_month(entry) ``` Or save it as a list: ``` result = list(iterate_months(start, end)) ```
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
Having done similar stuff previously, I took a stab at solving this. Using distinct components for doing this is more flexible and enables you to mix and match them for different use-cases. They also can be tested more easily this way, as you can see by the doctests in `iterate_months`. Also I suggest to use `datetime.date` objects for your input as you can just do more with those. To do that you'll have to first parse your input string, but this is very easily done. Parsing the date-strings ``` def datify(date): if isinstance(date, datetime.date): return date elif isinstance(date, datetime.datetime): return date.date() else: # taken from simleo's answer return datetime.strptime(date, "%Y-%m-%d") ``` First, we iterate through the months ------------------------------------ ``` import datetime def iterate_months(start_date, end_date): """Iterate monthly between two given dates. Emitted will be the first day of each month. >>> list(iterate_months(datetime.date(1999, 11, 1), ... datetime.date(2000, 2, 1))) [datetime.date(1999, 11, 1), datetime.date(1999, 12, 1),\ datetime.date(2000, 1, 1), datetime.date(2000, 2, 1)] """ assert isinstance(start_date, datetime.date) assert isinstance(end_date, datetime.date) assert start_date < end_date year = start_date.year month = start_date.month while True: current = datetime.date(year, month, 1) yield current if current.month == end_date.month and current.year == end_date.year: break else: month = ((month + 1) % 12) or 12 if month == 1: year += 1 if __name__ == '__main__': import doctest doctest.testmod() ``` To format your dates, use something like this --------------------------------------------- ``` def format_month(date): return date.strftime(r"%b-%y") ``` Putting it all together ----------------------- ``` start = datify("2014-10-10") end = datify("2016-01-07") for entry in iterate_months(start, end): print format_month(entry) ``` Or save it as a list: ``` result = list(iterate_months(start, end)) ```
Here is a similar version of what Pynchia suggested; the below implementation is for Python 3.8, while the one he implemented is for Python 2.x ``` import datetime st="2020-06-24" ed="2020-11-24" start_date = datetime.datetime.strptime(st.strip(), '%Y-%m-%d') end_date = datetime.datetime.strptime(ed.strip(), '%Y-%m-%d') months = [datetime.datetime.strptime('%2.2d-%2.2d' % (y, m), '%Y-%m').strftime('%b-%y') for y in range(start_date.year, end_date.year + 1) for m in range(start_date.month if y == start_date.year else 1, end_date.month + 1 if y == end_date.year else 13)] print(months) ```
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
I found a very succinct way to do this with Pandas, sharing in case it helps anybody: --- **UPDATE:** I've got it down to a one-liner with the help of [this post](https://stackoverflow.com/questions/37890391/how-to-include-end-date-in-pandas-date-range-method) :) ``` pd.date_range('2014-10-10','2016-01-07', freq='MS').strftime("%Y-%b").tolist() ``` --- **OLD ANSWER:** ``` daterange = pd.date_range('2014-10-10','2016-01-07' , freq='1M') daterange = daterange.union([daterange[-1] + 1]) daterange = [d.strftime('%y-%b') for d in daterange] ``` The second line prevents the last date from getting clipped off the list.
I came to a solution that uses `python-dateutil` and works with Python 3.8+: <https://gist.github.com/anatoly-scherbakov/593770d446a06f109438a134863ba969> ``` import datetime import itertools import operator from typing import Iterator from dateutil.relativedelta import relativedelta def month_range( start: datetime.date, end: datetime.date, ) -> Iterator[datetime.date]: """Yields the 1st day of each month in the given date range.""" yield from itertools.takewhile( lambda date: date < end, itertools.accumulate( itertools.repeat(relativedelta(months=1)), operator.add, initial=start, ) ) ```
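One caveat worth noting with `freq='MS'` (month start) in the pandas answer above: if the start date is not the 1st, the first generated timestamp is the *next* month start, so the opening month can be clipped. A stdlib sketch of snapping the endpoints first; the helper name is mine:

```python
from datetime import date

def snap_to_month_start(d: date) -> date:
    """Replace the day component with 1 so a month-start range keeps the first month."""
    return d.replace(day=1)

print(snap_to_month_start(date(2014, 10, 10)))  # -> 2014-10-01
```

Snapping both endpoints before building the range keeps October 2014 in the output instead of starting at November.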
34,898,525
I want to generate a python list containing all months occurring between two dates, with the input and output formatted as follows: ``` date1 = "2014-10-10" # input start date date2 = "2016-01-07" # input end date month_list = ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] # output ```
2016/01/20
[ "https://Stackoverflow.com/questions/34898525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3480116/" ]
Here is my solution with a simple list comprehension which uses `range` to know where months must start and end ``` from datetime import datetime as dt sd = dt.strptime('2014-10-10', "%Y-%m-%d") ed = dt.strptime('2016-01-07', "%Y-%m-%d") lst = [dt.strptime('%2.2d-%2.2d' % (y, m), '%Y-%m').strftime('%b-%y') \ for y in xrange(sd.year, ed.year+1) \ for m in xrange(sd.month if y==sd.year else 1, ed.month+1 if y == ed.year else 13)] print lst ``` produces ``` ['Oct-14', 'Nov-14', 'Dec-14', 'Jan-15', 'Feb-15', 'Mar-15', 'Apr-15', 'May-15', 'Jun-15', 'Jul-15', 'Aug-15', 'Sep-15', 'Oct-15', 'Nov-15', 'Dec-15', 'Jan-16'] ```
Find below my approach to this problem using **split** and simple **modulo-based** iterations without importing any special module. ``` date1 = "2014-10-10" date2 = "2016-01-07" y0 = int( date1.split('-')[0] ) # 2014 y1 = int( date2.split('-')[0] ) # 2016 m0 = int( date1.split('-')[1] ) - 1 # 10-1 --> 9 because will be used for indexing m1 = int( date2.split('-')[1] ) - 1 # 01-1 --> 0 because will be used for indexing months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] result = [] start = m0 for y in range(y0, y1+1): for m in range(start,12): result.append( str( months[m % 12])+'-'+str(y) ) if y == y1 and (m % 12) == m1: break start = 0 print result ``` **$ python dates.py** ``` ['Oct-2014', 'Nov-2014', 'Dec-2014', 'Jan-2015', 'Feb-2015', 'Mar-2015', 'Apr-2015', 'May-2015', 'Jun-2015', 'Jul-2015', 'Aug-2015', 'Sep-2015', 'Oct-2015', 'Nov-2015', 'Dec-2015', 'Jan-2016'] ```
58,872,437
I launched Jupyter Notebook, created a new notebook in Python, imported the necessary libraries, and tried to access a `.xlsx` file on the desktop with this code: `haber = pd.read_csv('filename.xlsx')`, but an error keeps popping up. I want a reliable way of accessing this file on my desktop without getting an error.
2019/11/15
[ "https://Stackoverflow.com/questions/58872437", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11003573/" ]
If you open the developer console, you'll see there's an error in one of the templates (DishDetailComponent.html@75:9): [![Error in template DishDetailComponent.html@75:9](https://i.stack.imgur.com/iH8LI.png)](https://i.stack.imgur.com/iH8LI.png) As you can see, it complains about there's no `dividerColor` property in the `mat-form-field` component. Perhaps it's a deprecated property because I don't see it in its API: <https://material.angular.io/components/form-field/api>
I found a changelog here: <https://www.reddit.com/r/Angular2/comments/86ta8k/angular_material_600beta5_changelog/> and replaced all `dividerColor`s with `color` in my project and it worked! Thanks to @Fel for the help.
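Neither answer above addresses the question as asked; the likely root cause is that `pd.read_csv` is pointed at an `.xlsx` file, for which pandas provides `read_excel` instead (it needs an Excel engine such as `openpyxl` installed). A hedged sketch of dispatching on the extension; the mapping and helper are my own:

```python
from pathlib import Path

# Hypothetical mapping from file extension to the pandas reader to use.
READERS = {".csv": "read_csv", ".xlsx": "read_excel", ".xls": "read_excel"}

def reader_name(path: str) -> str:
    """Return the name of the pandas reader function suited to the file extension."""
    ext = Path(path).suffix.lower()
    try:
        return READERS[ext]
    except KeyError:
        raise ValueError("no known pandas reader for %r" % ext)

print(reader_name("filename.xlsx"))  # -> read_excel
```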
49,488,989
I'm looking into the Twitter Search API, and apparently, it has a count parameter that determines "The number of tweets to return per page, up to a maximum of 100." What does "per page" mean, if I'm for example running a python script like this: ``` import twitter #python-twitter package api = twitter.Api(consumer_key="mykey", consumer_secret="mysecret", access_token_key="myaccess", access_token_secret="myaccesssecret") results = api.GetSearch(raw_query="q=%23myHashtag&geocode=59.347937,18.072433,5km") print(len(results)) ``` This will only give me 15 tweets in results. I want more, preferably all tweets, if possible. So what should I do? Is there a "next page" option? Can't I just specify the search query in a way that gives me all tweets at once? Or if the number of tweets is too large, some maximum number of tweets?
2018/03/26
[ "https://Stackoverflow.com/questions/49488989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3128156/" ]
Tweepy has a `Cursor` object that works like this: ``` for tweet in tweepy.Cursor(api.search, q="#myHashtag&geocode=59.347937,18.072433,5km", lang='en', tweet_mode='extended').items(): # handle tweets here ``` You can find more info in the [Tweepy Cursor docs](http://tweepy.readthedocs.io/en/v3.5.0/cursor_tutorial.html#introduction).
With [TwitterAPI](https://github.com/geduldig/TwitterAPI) you would access pages this way: ``` pager = TwitterPager(api, 'search/tweets', {'q':'#myHashtag', 'geocode':'59.347937,18.072433,5km'}) for item in pager.get_iterator(): print(item['text'] if 'text' in item else item) ``` A complete example is here: <https://github.com/geduldig/TwitterAPI/blob/master/examples/page_tweets.py>
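Both answers lean on a library iterator; underneath, cursored pagination is just a fetch loop. A library-free sketch with a stubbed fetch function in place of the real API call (all names here are mine):

```python
def paginate(fetch_page, cursor=None):
    """Yield items across pages; fetch_page(cursor) must return (items, next_cursor)."""
    while True:
        items, cursor = fetch_page(cursor)
        for item in items:
            yield item
        if cursor is None:  # the API signals the last page with no next cursor
            break

# Stub standing in for the Twitter API: three "pages" keyed by cursor value.
PAGES = {None: ([1, 2], "a"), "a": ([3, 4], "b"), "b": ([5], None)}

def fake_fetch(cursor):
    return PAGES[cursor]

print(list(paginate(fake_fetch)))  # -> [1, 2, 3, 4, 5]
```

With a real API, `fetch_page` would issue one request per call and extract the next-page token from the response.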
50,195,029
Today I encountered a strange problem where the Python IDE would not scale the font correctly on my 1920\*1080 screen. So I fixed it. Kinda. I knew that there was an option in Windows where one could toggle "Override high DPI scaling behavior". The problem is that this tab is only available for applications, e.g. ".exe" files. Windows is a strange beast. By default the Python IDE has font size 9-10. That font on a high-res display just gets scaled by Win 10. My solution is to manually enable DPI awareness and then set the correct font size in the class called run.py. As this is not a real question I will post the code and mark it as answered. It may not be compatible with displays that have a higher resolution than 1920\*1080, but hey, it works :D
2018/05/05
[ "https://Stackoverflow.com/questions/50195029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4102180/" ]
This is what I was looking for: `sort -t ';' -k 2,2 < some-csv.log` Big thanks to @dmadic
If your input.txt is something like: ``` Any ANA Bill BOB Ana ``` and you want your output to be: ``` Ana Any Bill ANA BOB ``` then maybe you could try something like: ``` grep -E "[a-z]+" input.txt | sort > lower.txt grep -wE "[A-Z]+" input.txt | sort > upper.txt cat lower.txt upper.txt ```
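The `sort -t ';' -k 2,2` fix above keys on the second field; GNU sort's `-f` flag additionally folds case. The same idea expressed in Python, since most of this thread is Python (the sample lines are mine):

```python
lines = ["b;Zed", "a;alpha", "c;Beta"]

# Sort by the second ';'-separated field, folding case like `sort -t ';' -f -k 2,2`.
ordered = sorted(lines, key=lambda line: line.split(";")[1].casefold())
print(ordered)  # -> ['a;alpha', 'c;Beta', 'b;Zed']
```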
50,195,029
Today I encountered a strange problem where the Python IDE would not scale the font correctly on my 1920\*1080 screen. So I fixed it. Kinda. I knew that there was an option in Windows where one could toggle "Override high DPI scaling behavior". The problem is that this tab is only available for applications, e.g. ".exe" files. Windows is a strange beast. By default the Python IDE has font size 9-10. That font on a high-res display just gets scaled by Win 10. My solution is to manually enable DPI awareness and then set the correct font size in the class called run.py. As this is not a real question I will post the code and mark it as answered. It may not be compatible with displays that have a higher resolution than 1920\*1080, but hey, it works :D
2018/05/05
[ "https://Stackoverflow.com/questions/50195029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4102180/" ]
This is what I was looking for: `sort -t ';' -k 2,2 < some-csv.log` Big thanks to @dmadic
With Perl you could say: ``` perl -e ' print sort { ($ka = (split(/;/, $a))[1]) =~ tr/a-zA-Z/A-Za-z/; ($kb = (split(/;/, $b))[1]) =~ tr/a-zA-Z/A-Za-z/; $ka cmp $kb; } <>' input.txt ```
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
The following worked for me on Ubuntu 14.04.1 64 bit: ``` sudo apt-get install libfreetype6-dev ``` Then, in the virtualenv: ``` pip uninstall pillow pip install --no-cache-dir pillow ```
Worked for Ubuntu 12.10: ``` sudo pip uninstall PIL sudo apt-get install libfreetype6-dev sudo apt-get install python-imaging ```
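Before reinstalling anything, it can help to confirm whether the installed Pillow was actually built with FreeType. Modern Pillow exposes a `features` module for this; a defensive sketch that degrades gracefully when Pillow (or that module) is absent:

```python
def freetype_status():
    """Return a short string describing Pillow's FreeType support, if any."""
    try:
        # The features module is a relatively recent Pillow addition.
        from PIL import features
    except ImportError:
        return "pillow-or-features-missing"
    return "ok" if features.check("freetype2") else "freetype-missing"

print(freetype_status())
```

If this reports missing FreeType support, the rebuild-after-installing-`libfreetype6-dev` recipes above are the fix.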
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
For OS X (I'm running 10.6 but should work for others) I was able to get around this error using the advice from [this post](https://stackoverflow.com/questions/9070074/how-to-install-pil-on-mac-os-x-10-7-2-lion/11368029#11368029). Basically you need to install a couple of the dependencies then reinstall PIL.
Instead of running `pip install Pillow`, run `pip install Image` (macOS/Darwin Big Sur, pyenv).
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
Worked for Ubuntu 12.10: ``` sudo pip uninstall PIL sudo apt-get install libfreetype6-dev sudo apt-get install python-imaging ```
On Windows 11, this can be solved by upgrading pip and then Pillow: `pip install --upgrade pip` followed by `pip install --upgrade Pillow`.
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
Your installed PIL was compiled without libfreetype. You can get precompiled installer of PIL (compiled with libfreetype) here (and many other precompiled Python C Modules): <http://www.lfd.uci.edu/~gohlke/pythonlibs/>
Instead of running `pip install Pillow`, run `pip install Image` (macOS/Darwin Big Sur, pyenv).
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
For me none of the solutions posted here so far has worked. I found another solution here: <http://codeinthehole.com/writing/how-to-install-pil-on-64-bit-ubuntu-1204/> First install the dev packages: ``` $ sudo apt-get install python-dev libjpeg-dev libfreetype6-dev zlib1g-dev ``` Then create some symlinks: ``` $ sudo ln -s /usr/lib/`uname -i`-linux-gnu/libfreetype.so /usr/lib/ $ sudo ln -s /usr/lib/`uname -i`-linux-gnu/libjpeg.so /usr/lib/ $ sudo ln -s /usr/lib/`uname -i`-linux-gnu/libz.so /usr/lib/ ``` Afterwards PIL should compile just fine: ``` $ pip install PIL --upgrade ```
On my Mac, the following steps in the terminal work: ``` $ brew install freetype $ sudo pip uninstall pil $ sudo pip install pillow ``` Hope it works for you. Good luck!
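On the system side, Python's stdlib can at least tell whether a FreeType shared library is findable at all, which hints at whether the dev-package or symlink step above is missing:

```python
from ctypes.util import find_library

# find_library returns a name/path if the system linker can locate the library,
# or None if it cannot; interpretation of the result is platform-dependent.
freetype = find_library("freetype")
print(freetype)
```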
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
The following worked for me on Ubuntu 14.04.1 64 bit: ``` sudo apt-get install libfreetype6-dev ``` Then, in the virtualenv: ``` pip uninstall pillow pip install --no-cache-dir pillow ```
For me none of the solutions posted here so far has worked. I found another solution here: <http://codeinthehole.com/writing/how-to-install-pil-on-64-bit-ubuntu-1204/> First install the dev packages: ``` $ sudo apt-get install python-dev libjpeg-dev libfreetype6-dev zlib1g-dev ``` Then create some symlinks: ``` $ sudo ln -s /usr/lib/`uname -i`-linux-gnu/libfreetype.so /usr/lib/ $ sudo ln -s /usr/lib/`uname -i`-linux-gnu/libjpeg.so /usr/lib/ $ sudo ln -s /usr/lib/`uname -i`-linux-gnu/libz.so /usr/lib/ ``` Afterwards PIL should compile just fine: ``` $ pip install PIL --upgrade ```
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
The following worked for me on Ubuntu 14.04.1 64 bit: ``` sudo apt-get install libfreetype6-dev ``` Then, in the virtualenv: ``` pip uninstall pillow pip install --no-cache-dir pillow ```
The following works on Ubuntu 12.04: ``` pip uninstall PIL apt-get install libjpeg-dev apt-get install libfreetype6-dev apt-get install zlib1g-dev apt-get install libpng12-dev pip install PIL --upgrade ``` When you see "-- JPEG support available", that means it works. But if it still doesn't work when you edit your JPEG image, check the Python path!! My Python path missed `'/usr/local/lib/python2.7/dist-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/'`, so I edited `~/.bashrc` and added the following line to this file: ``` export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/dist-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/ ``` Then, finally, it works!!
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
The following worked for me on Ubuntu 14.04.1 64 bit: ``` sudo apt-get install libfreetype6-dev ``` Then, in the virtualenv: ``` pip uninstall pillow pip install --no-cache-dir pillow ```
solution for CentOS 6 (and probably other rpm based): ``` yum install freetype-devel libjpeg-devel libpng-devel pip uninstall pil Pillow pip install pil Pillow ```
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find the \_imaging module but still can't use a truetype font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
Basically, you need to install freetype before installing PIL. If you're using [Homebrew](https://brew.sh/) on OS X it's just a matter of: ``` brew remove pil brew install freetype brew install pil ```
I used Homebrew to install freetype and I have the following in /usr/local/lib: libfreetype.6.dylib libfreetype.a libfreetype.dylib But the usual `pip install pil` does not work for me, so I used: `pip install http://effbot.org/downloads/Imaging-1.1.6.tar.gz`
4,011,705
I've tried lots of solutions posted on the net; they don't work. ``` >>> import _imaging >>> _imaging.__file__ 'C:\\python26\\lib\\site-packages\\PIL\\_imaging.pyd' >>> ``` So the system can find \_imaging, but it still can't use a TrueType font ``` from PIL import Image, ImageDraw, ImageFilter, ImageFont im = Image.new('RGB', (300,300), 'white') draw = ImageDraw.Draw(im) font = ImageFont.truetype('arial.ttf', 14) draw.text((100,100), 'test text', font = font) ``` Raises this error: ``` ImportError: The _imagingft C module is not installed File "D:\Python26\Lib\site-packages\PIL\ImageFont.py", line 34, in __getattr__ raise ImportError("The _imagingft C module is not installed") ```
2010/10/25
[ "https://Stackoverflow.com/questions/4011705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/483144/" ]
Solution for CentOS 6 (and probably other RPM-based distros): ``` yum install freetype-devel libjpeg-devel libpng-devel pip uninstall pil Pillow pip install pil Pillow ```
Instead of running `pip install Pillow`, run: `pip install Image` (Darwin, Big Sur, pyenv)
10,059,497
Code is much more precise than English; here's what I'd like to do: ``` import sys fileName = sys.argv[1] className = sys.argv[2] # open py file here and import the class # ??? # Instantiate a new object of type "className" a = eval(className + "()") # I don't know if this is the way to do that. # I "know" that className will have this method: a.writeByte(0x0) ``` *EDIT:* Per the request of the answers, here's what I'm trying to do: I'm writing a virtual processor adhering to the SIC/XE instruction set. It's an educational theoretical processor used to teach the fundamentals of assembly language and systems software to computer science students. There is a notion of a "device" that I'm trying to abstract from the programming of the "processor." Essentially, I want the user of my program to be able to write their own device plugin (limited to "read\_byte" and "write\_byte" functionality) and then I want them to be able to "hook up" their devices to the processor at command-line time, so that they can write something like: `python3 sicsim -d1 dev1module Dev1Class -d2 ...` They would also supply the memory image, which would know how to interact with their device. I basically want both of us to be able to write our code without it interfering with each other.
2012/04/08
[ "https://Stackoverflow.com/questions/10059497", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569302/" ]
Use [`importlib.import_module`](http://localhost/pythondocs/library/importlib.html#importlib.import_module) and the built-in function [`getattr`](http://localhost/pythondocs/library/functions.html#getattr). No need for `eval`. ``` import sys import importlib module_name = sys.argv[1] class_name = sys.argv[2] module = importlib.import_module(module_name) cls = getattr(module, class_name) obj = cls() obj.writeByte(0x0) ``` This will require that the file lives somewhere on your Python path. Most of the time, the current directory is on said path. If this is not sufficient, you'll have to parse the directory out of it and `append` it to `sys.path`. I'll be glad to help with that. Just give me a sample input for the first command-line argument. Valid input for this version would be something like: ``` python3 myscript.py mypackage.mymodule MyClass ```
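For the `sys.path` case mentioned above (a module file living anywhere on disk), one possible sketch; the file and class names in the usage are only illustrative:

```python
import importlib
import os
import sys


def load_class_from_file(path_to_py, class_name):
    """Import class_name from an arbitrary .py file by appending the
    file's directory to sys.path, then importing it by module name."""
    directory, filename = os.path.split(os.path.abspath(path_to_py))
    module_name = os.path.splitext(filename)[0]
    if directory not in sys.path:
        sys.path.append(directory)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

With this, `python3 sicsim /anywhere/dev1module.py Dev1Class` would map to `load_class_from_file(sys.argv[1], sys.argv[2])()`.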
As aaronasterling mentions, you can take advantage of the import machinery if the file in question happens to be on the python path (somewhere under the directories listed in `sys.path`), but if that's not the case, use the built in [`exec()`](http://docs.python.org/dev/library/functions.html#exec) function: ``` fileVars = {} exec(file(fileName).read(), fileVars) ``` Then, to get an instance of the class, you can skip the `eval()`: ``` a = fileVars[className]() ```
6,095,818
Just curious to know: is there any documentation utility available in PHP that can do something like docutils in Python? A library that is very user-friendly in terms of converting reStructuredText into HTML?
2011/05/23
[ "https://Stackoverflow.com/questions/6095818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/239670/" ]
phpDocumentor is quite outdated. Have a look at [DocBlox (Github Repository)](https://github.com/mvriel/Docblox) or [DocBlox-project.org](http://www.docblox-project.org/) Edit: DocBlox merged with phpDocumentor and they now maintain phpDocumentor 2. Links that take you directly to the project: [phpdoc.org](http://www.phpdoc.org/) [github repo](https://github.com/phpDocumentor/phpDocumentor2)
Try [phpDocumentor](http://www.phpdoc.org/).
9,966,250
I am trying to understand eval(), but am not having much luck. I am writing my own math library and am trying to include integration into it. I need help getting Python to recognize the function as a series of variables, constants, and operators. I was told that eval would do the trick, but how would I go about it? ``` fofx = input ("Write your function of x here >") def integrate (fofx): #integration algorithm here #input fofx and recognize it as f(x) to be integrated. ``` I have tried the documentation, but it is limited and I have no clue how I could apply it to my function to be evaluated.
2012/04/01
[ "https://Stackoverflow.com/questions/9966250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1044726/" ]
The documentation for [`eval()`](http://docs.python.org/library/functions.html#eval) is pretty clear in my view and gives a reasonable example of what you need. Basically you want to hold an expression to be evaluated in a string: ``` >>> f = 'x**2 + 2*x' ``` Then you can define a value for `x`: ``` >>> x = 3 ``` And finally call evaluate: ``` >>> eval(f) 15 ``` Or if you want to make the call to eval a little more controlled, as opposed to creating a local variable named `x`, then you can pass in the evaluation environment in the parameters to `eval()`: ``` >>> f = 'x**2 + 2*x' >>> eval(f, {}, {'x': 5}) 35 ``` The reason you want to control the evaluation environment is to avoid any variables defined in your program inadvertently being used in the evaluation.
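One step further on controlling the environment: passing an empty `__builtins__` entry in the globals dict keeps the expression from reaching built-in functions. This is a common hardening sketch (the helper name is mine), though `eval()` should still never be trusted with hostile input:

```python
def evaluate(expr, x):
    """Evaluate a math expression in x with no access to built-ins."""
    # Empty __builtins__ blocks names like __import__ or open; only the
    # supplied variable x is visible to the expression.
    return eval(expr, {"__builtins__": {}}, {"x": x})
```

For instance, `evaluate("x**2 + 2*x", 3)` gives `15`, while `evaluate("open('f')", 3)` fails because `open` is not defined in the restricted environment.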
Perhaps you might be thinking of the 'eval' mode of the abstract syntax tree module, which allows you to construct a syntax tree for a single expression. For example, the code below will take an expression in a string and modify it such that 'x\*\*2+3\*x\*\*4+2' changes to 'x\*\*3+3\*x\*\*5+2'. (Note that this is not the integral of the expression; that code would be much longer!) ``` import ast class IncreasePower(ast.NodeTransformer): def visit_BinOp(self,node): node=self.generic_visit(node) if isinstance( node.op , ast.Pow) and isinstance(node.right, ast.Num): node.right.n+=1 return node x=4 s='x**2+3*x**4+2' print eval(s) A = ast.parse(s,'source','eval') B = IncreasePower().visit(A) E = compile(A,'increased','eval') print eval(E) ``` You may also find it helpful to look at the symbolic maths library sympy, which uses a different approach to building up expressions. In sympy you start with x=sympy.Symbol("x") before constructing your expressions. The "sympy.integrate" function does symbolic integration.
46,207,299
On Windows when I execute: c:\python35\scripts\tensorboard --logdir=C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs and I web browse to <http://localhost:6006> the first time, I am redirected to <http://localhost:6006/[[_traceDataUrl]]> and I get the command prompt messages: ``` W0913 14:32:25.401402 Reloader tf_logging.py:86] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event. W0913 14:32:25.417002 Reloader tf_logging.py:86] Found more than one metagraph event per run. Overwriting the metagraph with the newest event. W0913 14:32:36.446222 Thread-2 application.py:241] path /[[_traceDataUrl]] not found, sending 404 ``` When I try <http://localhost:6006> again, TensorBoard takes a long time, presents the 404 message again, but this time displays a blank web page. Logs directory: ``` checkpoint events.out.tfevents.1504911606.LTIIP82 events.out.tfevents.1504912739.LTIIP82 model.ckpt-194000.data-00000-of-00001 model.ckpt-194000.index model.ckpt-194000.meta ``` Why am I getting redirected and 404ed?
2017/09/13
[ "https://Stackoverflow.com/questions/46207299", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1637126/" ]
I'm having the exact same error. Maybe it is because of [this](https://github.com/tensorflow/tensorflow/issues/7856) issue. So try changing the --logdir argument to --logdir=foo:C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs. Hope it helps.
Could it be that you are trying to access the web page with IE? Apparently IE is not supported by TensorBoard yet (<https://github.com/tensorflow/tensorflow/issues/9372>). Maybe use another browser.
46,207,299
On Windows when I execute: c:\python35\scripts\tensorboard --logdir=C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs and I web browse to <http://localhost:6006> the first time, I am redirected to <http://localhost:6006/[[_traceDataUrl]]> and I get the command prompt messages: ``` W0913 14:32:25.401402 Reloader tf_logging.py:86] Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event. W0913 14:32:25.417002 Reloader tf_logging.py:86] Found more than one metagraph event per run. Overwriting the metagraph with the newest event. W0913 14:32:36.446222 Thread-2 application.py:241] path /[[_traceDataUrl]] not found, sending 404 ``` When I try <http://localhost:6006> again, TensorBoard takes a long time, presents the 404 message again, but this time displays a blank web page. Logs directory: ``` checkpoint events.out.tfevents.1504911606.LTIIP82 events.out.tfevents.1504912739.LTIIP82 model.ckpt-194000.data-00000-of-00001 model.ckpt-194000.index model.ckpt-194000.meta ``` Why am I getting redirected and 404ed?
2017/09/13
[ "https://Stackoverflow.com/questions/46207299", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1637126/" ]
I'm having the exact same error. Maybe it is because of [this](https://github.com/tensorflow/tensorflow/issues/7856) issue. So try changing the --logdir argument to --logdir=foo:C:\Users\Kevin\Documents\dev\Deadpool\Tensorflow-SegNet\logs. Hope it helps.
I encountered the same error before, and found that it was due to an Internet settings problem. In Internet Explorer, go to **Tools** -> **Internet Options** -> **Connections**, click **LAN settings**, and then check **Automatically detect settings**.
7,093,121
Recently, reading Python ["Functional Programming HOWTO"](http://docs.python.org/howto/functional.html), I came across the `test_generators.py` standard module mentioned there, where I found the following generator: ``` # conjoin is a simple backtracking generator, named in honor of Icon's # "conjunction" control structure. Pass a list of no-argument functions # that return iterable objects. Easiest to explain by example: assume the # function list [x, y, z] is passed. Then conjoin acts like: # # def g(): # values = [None] * 3 # for values[0] in x(): # for values[1] in y(): # for values[2] in z(): # yield values # # So some 3-lists of values *may* be generated, each time we successfully # get into the innermost loop. If an iterator fails (is exhausted) before # then, it "backtracks" to get the next value from the nearest enclosing # iterator (the one "to the left"), and starts all over again at the next # slot (pumps a fresh iterator). Of course this is most useful when the # iterators have side-effects, so that which values *can* be generated at # each slot depend on the values iterated at previous slots. def simple_conjoin(gs): values = [None] * len(gs) def gen(i): if i >= len(gs): yield values else: for values[i] in gs[i](): for x in gen(i+1): yield x for x in gen(0): yield x ``` It took me a while to understand how it works. It uses a mutable list `values` to store the yielded results of the iterators, and the (N+1)-th generator yields `values`, which is then passed up through the whole chain of iterators. As I stumbled into this code while reading about functional programming, I started wondering whether it was possible to rewrite this conjoin generator in a functional style (using functions from the [`itertools` module](http://docs.python.org/library/itertools.html)). There are a lot of routines written in functional style (just glance at the end of [this](http://docs.python.org/library/itertools.html#recipes) article in the Recipes section). 
But, unfortunately, I haven't found any solution. So, is it possible to write this conjoin generator in a functional style, using just the [`itertools` module](http://docs.python.org/library/itertools.html)? Thanks
2011/08/17
[ "https://Stackoverflow.com/questions/7093121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/862380/" ]
This seems to work, and it's still lazy: ``` def conjoin(gs): return [()] if not gs else ( (val,) + suffix for val in gs[0]() for suffix in conjoin(gs[1:]) ) def range3(): return range(3) print list(conjoin([range3, range3])) ``` Output: ``` [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)] ``` Example usage to show mutable state: ``` x = "" def mutablerange(): global x x += "x" return [x + str(i) for i in range(3)] print list(conjoin([range3, mutablerange])) ``` Output: (watch the increasing number of 'x's) ``` [(0, 'x0'), (0, 'x1'), (0, 'x2'), (1, 'xx0'), (1, 'xx1'), (1, 'xx2'), (2, 'xxx0'), (2, 'xxx1'), (2, 'xxx2')] ``` And if we use `itertools.product`: ``` x = "" print list(itertools.product(range3(), mutablerange())) ``` the result is the following: ``` [(0, 'x0'), (0, 'x1'), (0, 'x2'), (1, 'x0'), (1, 'x1'), (1, 'x2'), (2, 'x0'), (2, 'x1'), (2, 'x2')] ``` So one can clearly see that `itertools.product` caches the values returned by the iterator.
`simple_conjoin` uses the same basic building blocks -- loops, conditions, and `yield` -- as the building blocks of the `itertools` recipes. It also treats functions as data, a hallmark of functional programming. > > Of course this is most useful when the > iterators have side-effects, so that which values *can* be generated at > each slot depend on the values iterated at previous slots. > > > This, however, is contrary to the way functional programming works. In functional programming, each function takes input and produces output, and interacts with the rest of the program in no other way. In `simple_conjoin`, the functions take no input, and have side effects. This is central to its use. So while you can certainly *write* it in functional style, it won't be useful in simple translation. You'd need to figure out a way to write it so it operated without side effects before you could produce a truly "functional" implementation. Note: @recursive's answer is good, but if `range3` had side effects it wouldn't be truly functional.
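To make that contrast concrete: when the generator functions *are* side-effect free, conjoin degenerates into a plain Cartesian product, so `itertools.product` gives identical results. A sketch (the `pure_conjoin` name is mine):

```python
import itertools


def pure_conjoin(gs):
    # Equivalent to simple_conjoin ONLY when each g() is side-effect
    # free, because product() consumes each input iterable just once
    # and caches it, instead of calling g() again per backtrack.
    return itertools.product(*(g() for g in gs))


def range3():
    return range(3)
```

Here `list(pure_conjoin([range3, range3]))` matches the first output in @recursive's answer, but with a stateful generator like `mutablerange` the two diverge, exactly as the side-by-side outputs above show.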
11,915,432
Why is UnicodeDecodeError raised? I'm trying to deploy my Django app using Apache. To collect the static files, I typed ``` $python manage.py collectstatic ``` and I got the error message below. ``` You have requested to collect static files at the destination location as specified in your settings. This will overwrite existing files! Are you sure you want to do this? Type 'yes' to continue, or 'no' to cancel: yes Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 443, in execute_from_command_line utility.execute() File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute output = self.handle(*args, **options) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 371, in handle return self.handle_noargs(**options) File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 163, in handle_noargs collected = self.collect() File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 104, in collect for path, storage in finder.list(self.ignore_patterns): File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/finders.py", line 137, in list for path in utils.get_files(storage, ignore_patterns): File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/utils.py", line 37, in get_files for fn in get_files(storage, ignore_patterns, dir): File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/utils.py", line 37, in get_files 
for fn in get_files(storage, ignore_patterns, dir): File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/utils.py", line 25, in get_files directories, files = storage.listdir(location) File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py", line 236, in listdir if os.path.isdir(os.path.join(path, entry)): File "/usr/lib/python2.7/posixpath.py", line 71, in join path += '/' + b UnicodeDecodeError: 'ascii' codec can't decode byte 0xba in position 1: ordinal not in range(128) ``` What's wrong with my static files? my settings.py ``` import os PROJECT_ROOT = os.path.dirname(__file__) STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static/') # URL prefix for static files. # Example: "http://media.lawrence.com/static/" STATIC_URL = '/static/' STATICFILES_FINDERS = ( 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', # 'django.contrib.staticfiles.finders.DefaultStorageFinder', ) ``` and apache host conf ``` ServerName www.abcd.org DocumentRoot /srv/www/yyy <Directory /srv/www/yyy> Order allow,deny Allow from all </Directory> WSGIDaemonProcess yyy.djangoserver processes=2 threads=15 display-name=%{GROUP} WSGIProcessGroup iii.djangoserver WSGIScriptAlias / /srv/www/yyy/apache/django.wsgi ```
2012/08/11
[ "https://Stackoverflow.com/questions/11915432", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1559347/" ]
Looks like one or more of the paths to your static files that are going to be copied contain non-ASCII characters. It has nothing to do with the path to the destination directory. **One way to find out would be** to put ``` try: print path except: pass try: print entry except: pass ``` just before line 236 in /usr/local/lib/python2.7/dist-packages/django/core/files/storage.py for a moment and then run manage.py again. Then you should see where the problem occurs (you won't see the very culprit, but the file just before it and probably the directory of the problematic file). **Or, alternatively, you can use pdb**: ``` python -m pdb manage.py collectstatic ``` and check which file is causing the problem in the debugger.
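If editing Django internals is inconvenient, another option is to hunt for the offending name directly. A sketch (Python 3 syntax; on Python 2 the names are byte strings and raise `UnicodeDecodeError` instead, which the except clause also covers):

```python
import os


def find_non_ascii_paths(root):
    """Walk root and return every file or directory whose name contains
    non-ASCII characters -- the names os.path.join chokes on here."""
    offenders = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            try:
                name.encode('ascii')
            except (UnicodeEncodeError, UnicodeDecodeError):
                offenders.append(os.path.join(dirpath, name))
    return offenders
```

Run it over each of your static file directories and rename whatever it reports.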
I had the same error when I used **django-pipeline** inside a Docker container. It turned out that for some reason the system used the POSIX locale. I used the solution proposed here and exported the locale settings in the system shell: ``` export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8 ``` You can check that afterwards your locale looks like: ``` vagrant@vagrant-ubuntu-trusty-64:/project$ locale LANG=en_US.UTF-8 LANGUAGE= LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL=en_US.UTF-8 ``` It worked well. Also, note that I did this both inside Docker and on the host machine.
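To verify from inside Python (or the Django process itself) that the exports took effect, you can inspect the encodings the process inherited; under the POSIX/C locale the preferred encoding typically shows up as ANSI_X3.4-1968 rather than UTF-8. A small sketch (the helper name is mine):

```python
import locale
import sys


def encoding_report():
    """Return the encodings the current process will use for file
    contents and file system names."""
    return {
        "preferred": locale.getpreferredencoding(False),
        "filesystem": sys.getfilesystemencoding(),
    }
```

If `encoding_report()["preferred"]` is not a UTF-8 variant after setting the variables, the exports did not reach this process (e.g. the Docker container has its own environment).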
46,564,730
I am trying to read a table from a Google spanner database, and write it to a text file to do a backup, using google dataflow with the python sdk. I have written the following script: ``` from __future__ import absolute_import import argparse import itertools import logging import re import time import datetime as dt import logging import apache_beam as beam from apache_beam.io import iobase from apache_beam.io import WriteToText from apache_beam.io.range_trackers import OffsetRangeTracker, UnsplittableRangeTracker from apache_beam.metrics import Metrics from apache_beam.options.pipeline_options import PipelineOptions from apache_beam.options.pipeline_options import StandardOptions, SetupOptions from apache_beam.options.pipeline_options import GoogleCloudOptions from google.cloud.spanner.client import Client from google.cloud.spanner.keyset import KeySet BUCKET_URL = 'gs://my_bucket' OUTPUT = '%s/output/' % BUCKET_URL PROJECT_ID = 'my_project' INSTANCE_ID = 'my_instance' DATABASE_ID = 'my_db' JOB_NAME = 'spanner-backup' TABLE = 'my_table' class SpannerSource(iobase.BoundedSource): def __init__(self): logging.info('Enter __init__') self.spannerOptions = { "id": PROJECT_ID, "instance": INSTANCE_ID, "database": DATABASE_ID } self.SpannerClient = Client def estimate_size(self): logging.info('Enter estimate_size') return 1 def get_range_tracker(self, start_position=None, stop_position=None): logging.info('Enter get_range_tracker') if start_position is None: start_position = 0 if stop_position is None: stop_position = OffsetRangeTracker.OFFSET_INFINITY range_tracker = OffsetRangeTracker(start_position, stop_position) return UnsplittableRangeTracker(range_tracker) def read(self, range_tracker): # This is not called when using the dataflowRunner ! 
logging.info('Enter read') # instantiate spanner client spanner_client = self.SpannerClient(self.spannerOptions["id"]) instance = spanner_client.instance(self.spannerOptions["instance"]) database = instance.database(self.spannerOptions["database"]) # read from table table_fields = database.execute_sql("SELECT t.column_name FROM information_schema.columns AS t WHERE t.table_name = '%s'" % TABLE) table_fields.consume_all() self.columns = [x[0] for x in table_fields] keyset = KeySet(all_=True) results = database.read(table=TABLE, columns=self.columns, keyset=keyset) # iterator over rows results.consume_all() for row in results: JSON_row = { self.columns[i]: row[i] for i in range(len(self.columns)) } yield JSON_row def split(self, start_position=None, stop_position=None): # this should not be called since the source is unsplittable logging.info('Enter split') if start_position is None: start_position = 0 if stop_position is None: stop_position = 1 # Because the source is unsplittable (for now), only a single source is returned yield iobase.SourceBundle( weight=1, source=self, start_position=start_position, stop_position=stop_position) def run(argv=None): """Main entry point""" pipeline_options = PipelineOptions() google_cloud_options = pipeline_options.view_as(GoogleCloudOptions) google_cloud_options.project = PROJECT_ID google_cloud_options.job_name = JOB_NAME google_cloud_options.staging_location = '%s/staging' % BUCKET_URL google_cloud_options.temp_location = '%s/tmp' % BUCKET_URL #pipeline_options.view_as(StandardOptions).runner = 'DirectRunner' pipeline_options.view_as(StandardOptions).runner = 'DataflowRunner' p = beam.Pipeline(options=pipeline_options) output = p | 'Get Rows from Spanner' >> beam.io.Read(SpannerSource()) iso_datetime = dt.datetime.now().replace(microsecond=0).isoformat() output | 'Store in GCS' >> WriteToText(file_path_prefix=OUTPUT + iso_datetime + '-' + TABLE, file_name_suffix='') # if this line is commented, the job completes but does not do anything result = p.run() result.wait_until_finish() if __name__ == '__main__': logging.getLogger().setLevel(logging.INFO) run() ``` However, this script runs correctly only on the DirectRunner: when I let it run on the DataflowRunner, it runs for a while without any output, before exiting with an error: > "Executing failure step failure14 [...] Workflow failed. Causes: [...] The worker lost contact with the service." Sometimes, it just goes on forever, without creating any output. Moreover, if I comment out the line 'output = ...', the job completes, but without actually reading the data. It also appears that the DataflowRunner calls the function 'estimate\_size' of the source, but not the functions 'read' or 'get\_range\_tracker'. Does anyone have any ideas about what may cause this? I know there is a (more complete) Java SDK with an experimental Spanner source/sink available, but if possible I'd rather stick with Python. Thanks
2017/10/04
[ "https://Stackoverflow.com/questions/46564730", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6837292/" ]
Google currently added support of Backup Spanner with Dataflow, you can choose related template when creating DataFlow job. For more: <https://cloud.google.com/blog/products/gcp/cloud-spanner-adds-import-export-functionality-to-ease-data-movement>
I have reworked my code following the suggestion to simply use a ParDo, instead of using the BoundedSource class. As a reference, here is my solution; I am sure there are many ways to improve on it, and I would be happy to hear opinions. In particular I am surprised that I have to create a dummy PColl when starting the pipeline (if I don't, I get an error `AttributeError: 'PBegin' object has no attribute 'windowing'` that I could not work around). The dummy PColl feels a bit like a hack. ``` from __future__ import absolute_import import datetime as dt import logging import apache_beam as beam from apache_beam.io import WriteToText from apache_beam.options.pipeline_options import PipelineOptions from apache_beam.options.pipeline_options import StandardOptions, SetupOptions from apache_beam.options.pipeline_options import GoogleCloudOptions from google.cloud.spanner.client import Client from google.cloud.spanner.keyset import KeySet BUCKET_URL = 'gs://my_bucket' OUTPUT = '%s/some_folder/' % BUCKET_URL PROJECT_ID = 'my_project' INSTANCE_ID = 'my_instance' DATABASE_ID = 'my_database' JOB_NAME = 'my_jobname' class ReadTables(beam.DoFn): def __init__(self, project, instance, database): super(ReadTables, self).__init__() self._project = project self._instance = instance self._database = database def process(self, element): # get list of tables in the database table_names_row = Client(self._project).instance(self._instance).database(self._database).execute_sql('SELECT t.table_name FROM information_schema.tables AS t') for row in table_names_row: if row[0] in [u'COLUMNS', u'INDEXES', u'INDEX_COLUMNS', u'SCHEMATA', u'TABLES']: # skip these continue yield row[0] class ReadSpannerTable(beam.DoFn): def __init__(self, project, instance, database): super(ReadSpannerTable, self).__init__() self._project = project self._instance = instance self._database = database def process(self, element): # first read the columns present in the table table_fields =
Client(self._project).instance(self._instance).database(self._database).execute_sql("SELECT t.column_name FROM information_schema.columns AS t WHERE t.table_name = '%s'" % element) columns = [x[0] for x in table_fields] # next, read the actual data in the table keyset = KeySet(all_=True) results_streamed_set = Client(self._project).instance(self._instance).database(self._database).read(table=element, columns=columns, keyset=keyset) for row in results_streamed_set: JSON_row = { columns[i]: row[i] for i in xrange(len(columns)) } yield (element, JSON_row) # output pairs of (table_name, data) def run(argv=None): """Main entry point""" pipeline_options = PipelineOptions() pipeline_options.view_as(SetupOptions).save_main_session = True pipeline_options.view_as(SetupOptions).requirements_file = "requirements.txt" google_cloud_options = pipeline_options.view_as(GoogleCloudOptions) google_cloud_options.project = PROJECT google_cloud_options.job_name = JOB_NAME google_cloud_options.staging_location = '%s/staging' % BUCKET_URL google_cloud_options.temp_location = '%s/tmp' % BUCKET_URL pipeline_options.view_as(StandardOptions).runner = 'DataflowRunner' p = beam.Pipeline(options=pipeline_options) init = p | 'Begin pipeline' >> beam.Create(["test"]) # have to create a dummy transform to initialize the pipeline, surely there is a better way ? 
tables = init | 'Get tables from Spanner' >> beam.ParDo(ReadTables(PROJECT, INSTANCE_ID, DATABASE_ID)) # read the tables in the db rows = (tables | 'Get rows from Spanner table' >> beam.ParDo(ReadSpannerTable(PROJECT, INSTANCE_ID, DATABASE_ID)) # for each table, read the entries | 'Group by table' >> beam.GroupByKey() | 'Formatting' >> beam.Map(lambda (table_name, rows): (table_name, list(rows)))) # have to force to list here (dataflowRunner produces _Unwindowedvalues) iso_datetime = dt.datetime.now().replace(microsecond=0).isoformat() rows | 'Store in GCS' >> WriteToText(file_path_prefix=OUTPUT + iso_datetime, file_name_suffix='') result = p.run() result.wait_until_finish() if __name__ == '__main__': logging.getLogger().setLevel(logging.INFO) run() ```