211,046
What's a good way to generate an icon in-memory in python? Right now I'm forced to use pygame to draw the icon, then I save it to disk as an .ico file, and then I load it from disk as an ICO resource... Something like this: ``` if os.path.isfile(self.icon): icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE hicon = win32gui.LoadImage(hinst, self.icon, win32con.IMAGE_ICON, 0, 0, icon_flags) ``` ...where self.icon is the filename of the icon I created. Is there any way to do this in memory? EDIT: All I want to do is create an icon with a 2-digit number displayed on it (weather-taskbar style).
2008/10/17
[ "https://Stackoverflow.com/questions/211046", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15055/" ]
You can use [wxPython](http://wxpython.org/) for this. ``` from wx import EmptyIcon icon = EmptyIcon() icon.CopyFromBitmap(your_wxBitmap) ``` The [wxBitmap](http://docs.wxwidgets.org/stable/wx_wxbitmap.html#wxbitmap) can be generated in memory using [wxMemoryDC](http://docs.wxwidgets.org/stable/wx_wxmemorydc.html#wxmemorydc), look [here](http://docs.wxwidgets.org/stable/wx_wxdc.html) for operations you can do on a DC. This icon can then be applied to a wxFrame (a window) or a wxTaskBarIcon using: ``` frame.SetIcon(icon) ```
You can probably create an object that mimics the python file-object interface. <http://docs.python.org/library/stdtypes.html#bltin-file-objects>
11,993
34,195,014
I have a dataframe that has aggregated people by location like so ``` location_id | score | number_of_males | number_of_females 1 | 20 | 2 | 1 2 | 45 | 1 | 2 ``` I want to create a new dataframe that unaggregates this one so I get something like ``` location_id | score | number_of_males | number_of_females 1 | 20 | 1 | 0 1 | 20 | 1 | 0 1 | 20 | 0 | 1 2 | 45 | 1 | 0 2 | 45 | 0 | 1 2 | 45 | 0 | 1 ``` Or even better ``` location_id | score | sex 1 | 20 | male 1 | 20 | male 1 | 20 | female 2 | 45 | male 2 | 45 | female 2 | 45 | female ``` I want to do something like ``` import pandas as pd aggregated_df = pd.DataFrame.from_csv(SOME_PATH) unaggregated_df = df = pd.DataFrame(columns=['location_id', 'score', 'sex']) for row in aggregated_df: for column in ['number_of_males', 'number_of_females']: for number_of_people in range(0, row[column]): if column == 'number_of_males': sex = 'male' else: sex = 'female' unaggregated_df.append([{'location_id': row['location_id'], 'score': row['score'], 'sex': sex}], ignore_index=True) ``` I am having trouble getting the dict to append even though this seems to be supported in [pandas](http://pandas.pydata.org/pandas-docs/stable/merging.html#appending-rows-to-a-dataframe) Is there a more pandthonic (panda's version of pythonic) way to accomplish this?
2015/12/10
[ "https://Stackoverflow.com/questions/34195014", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1096662/" ]
Here is a way to get your result using `groupby`: ``` ids = ['location_id','score'] def foo(d): return pd.Series(int(d['number_of_males'].sum())*['male'] + int(d['number_of_females'].sum())*['female']) pd.melt(df.groupby(ids).apply(foo).reset_index(), id_vars=ids).drop('variable', 1) #Out[13]: # location_id score value #0 1 20 male #1 2 45 male #2 1 20 male #3 2 45 female #4 1 20 female #5 2 45 female ```
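Another sketch of the same unaggregation, using `melt` plus `Index.repeat` instead of a `groupby`/`apply` (this assumes pandas is installed; column names are taken from the question, and the trailing `s` is stripped to turn `number_of_males`/`number_of_females` into a `sex` label):

```python
import pandas as pd

df = pd.DataFrame({
    'location_id': [1, 2],
    'score': [20, 45],
    'number_of_males': [2, 1],
    'number_of_females': [1, 2],
})

# long format: one row per (location, sex) with its count
long = df.melt(id_vars=['location_id', 'score'],
               var_name='sex', value_name='n')
long['sex'] = (long['sex']
               .str.replace('number_of_', '', regex=False)
               .str.rstrip('s'))

# repeat each row n times, then drop the count column
result = (long.loc[long.index.repeat(long['n'])]
              .drop(columns='n')
              .reset_index(drop=True))
print(result)
```

This avoids row-by-row appends entirely, which matters on larger frames because each `DataFrame.append` copies the whole frame.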
Up to this point I could do it with pandas functions ``` print df location_id score number_of_males number_of_females 1 20 2 1 2 45 1 2 ``` Converting the two columns into one: ``` df.set_index(['location_id','score']).stack().reset_index() Out[102]: location_id score level_2 0 0 1 20 number_of_males 2 1 1 20 number_of_females 1 2 2 45 number_of_males 1 3 2 45 number_of_females 2 ``` But then I still have to iterate with a Python loop to expand the rows :(
11,995
61,265,226
I use python boto3 to upload files to s3. When I upload a file, an AWS Lambda moves it to another bucket. I can get the object url from the lambda event, like `https://xxx.s3.amazonaws.com/xxx/xxx/xxxx/xxxx/diamond+white.side.jpg` The object key is `xxx/xxx/xxxx/xxxx/diamond+white.side.jpg` This is a simple example where I can just replace "+" to get the object key, but there are other, more complicated situations. I need to get the object key from the object url. How can I do it? thanks!!
2020/04/17
[ "https://Stackoverflow.com/questions/61265226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11303583/" ]
You should use `urllib.parse.unquote` and then replace `+` with a space. To my knowledge, `+` is the only exception to standard URL decoding, so you should be safe if you do that by hand.
I think this is what you want: ``` url_data = "https://xxx.s3.amazonaws.com/xxx/xxx/xxxx/xxxx/diamond+white.side.jpg".split("/")[3:] object_key = "/".join(url_data) ```
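Combining the two answers above into one sketch: take the path component of the URL, then undo S3's form-style encoding. `urllib.parse.unquote_plus` handles both `%xx` escapes and `+`-for-space in a single call:

```python
from urllib.parse import urlparse, unquote_plus

url = "https://xxx.s3.amazonaws.com/xxx/xxx/xxxx/xxxx/diamond+white.side.jpg"

# the path without its leading '/' is the object key, still URL-encoded
key = unquote_plus(urlparse(url).path.lstrip('/'))
print(key)  # xxx/xxx/xxxx/xxxx/diamond white.side.jpg
```

This assumes the virtual-hosted URL style shown in the question; for path-style URLs (`https://s3.amazonaws.com/bucket/key`) the bucket name would need to be stripped from the path first.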
11,996
71,738,691
I am writing a simple piece of code in python which should give me all the data available in my oracle table. The connection part is fine. ``` select column1,column2,column3 from table1. ``` The columns have the following values [![enter image description here](https://i.stack.imgur.com/dWjEf.png)](https://i.stack.imgur.com/dWjEf.png) This is a huge table with 24 million rows. The issue is that it is giving me null values in multiple columns, even though they have values. I thought the issue was that the initial rows of these columns have smaller values (2 digits only) and that's why anything having more than 2 digits gets ignored by python. How can I write a select statement that takes everything from the oracle table, even if the initial few hundred rows of a column are null? But as suggested here this is not the reason, and I am not sure why this is happening. Any help will be appreciated. I am using python 3.10.
2022/04/04
[ "https://Stackoverflow.com/questions/71738691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15238129/" ]
I think this is because Ubuntu 16.04 includes a fairly old version of git that does not support the `--progress` flag to the `git submodule update` command. I've [opened an issue](https://github.com/alire-project/alire/issues/966) against Alire to see if we might be able to remove this flag. In the meantime, I'd recommend upgrading git to the latest version. You may also want to consider a more recent Ubuntu version as Alire hasn't been tested extensively on older releases. Alire's integration tests are currently run on Ubuntu 20.04.
I *think* you need to say ``` alr index --update-all ``` `--update-all` is a bit misleading, but given that the error message mentions "index" it was the only likely thing in `alr index --help` (you find the possible commands, e.g. "index" here, by just `alr --help`).
11,997
65,390,129
I create a virtual environment; let's say test\_venv, and I activate it. All successful. HOWEVER, the path of the Python Interpreter does not change. I have illustrated the situation below. For clarification, the python path SHOULD BE `~/Desktop/test_venv/bin/python`. ``` >>> python3 -m venv Desktop/test_venv >>> source Desktop/test_venv/bin/activate (test_venv) >>> which python /usr/bin/python ```
2020/12/21
[ "https://Stackoverflow.com/questions/65390129", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14392583/" ]
#### *Please make sure to read Note #2.* --- **This is what you should do if you don't want to create a new virtual environment**: In the `venv/bin` folder there are 3 files that store your venv path explicitly, and if the path is wrong they fall back to the normal python path, so you should change the path there to your new path. change: `set -gx VIRTUAL_ENV "what/ever/path/you/need"` in `activate.fish` change: `VIRTUAL_ENV="what/ever/path/you/need"` in `activate` change: `setenv VIRTUAL_ENV "what/ever/path/you/need"` in `activate.csh` **Note #1:** the path is to `/venv` and not to `/venv/bin` **Note #2:** If you reached this page it means that you are probably **not following Python's best practice for a project structure**. If you were, creating a new virtual environment would just be a matter of one command line. Please consider using one of the following methods: * add a [`requirements.txt`](https://pip.pypa.io/en/stable/user_guide/#requirements-files) to your project - *for very small projects.* * [implement a `setup.py` script](https://docs.python.org/3/distutils/setupscript.html) - *for real projects.* * use a tool like [Poetry](https://python-poetry.org/) - *just like the latter though somewhat user-friendlier for some tasks.* Thank you Khalaimov Dmitrii, I didn't think it was because I moved the folder.
It is not an answer specifically to your question, but it corresponds to the title of the question. I faced a similar problem and couldn't find a solution on the Internet. Maybe someone can use my experience. I created a virtual environment for my python project. Some time later my python interpreter also stopped changing after virtual environment activation, similar to what you described. **My problem was that I had moved the project folder to a different directory some time ago.** And if I return the folder to its original directory, then everything starts working again. The resolution is as follows: you save all package requirements (for example, using 'pip freeze' or 'poetry') and remove the 'venv'-folder (or in your case the 'test\_venv'-folder). After that you create the virtual environment again, activate it and install all requirements. This approach resolved my problem.
11,998
55,378,150
I have a r Script with the code: ``` args = commandArgs(trailingOnly=TRUE) myData <- read.csv(file=args[0]) ``` I want to run this using a GUI and deliver a choosen csv file with this python code ``` from tkinter import filedialog from tkinter import * import subprocess window = Tk() window.geometry('500x200') window.title("Wordcloud Creator") lbl = Label(window, text="1. Please prepare a CSV (-Trennzeichen) file with the columns untgscod, berpos, SpezX3") lbl.grid(column=0, row=0) def runScript(): filename = filedialog.askopenfilename(initialdir = "/",title = "Select file",filetypes = (("csv files","*.csv"),("all files","*.*"))) subprocess.call(['Rscript', 'C:/Users/Name/Desktop/R-GUI/test.r', filename]) btn = Button(window, text="Select a file and start Cloud creation", command=runScript()) btn.grid(column=0, row=1) window.mainloop() ``` But unfortunately this is not working. I get this error but do not know what is wrong. ``` File "c:\Users\name\.vscode\extensions\ms-python.python-2019.2.5558\pythonFiles\lib\python\ptvsd\_vendored\pydevd\_pydev_bundle\pydev_monkey.py", line 444, in new_CreateProcess return getattr(_subprocess, original_name)(app_name, patch_arg_str_win(cmd_line), *args) FileNotFoundError: [WinError 2] The system cannot find the file specified ``` I do not see why the file cannot be found.
2019/03/27
[ "https://Stackoverflow.com/questions/55378150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2377949/" ]
With Java 8 streams: ``` List<Account> accountsWithMinimum = accounts.values().stream() .filter(account -> account.getBalance() > threshold) .collect(Collectors.toList()); ``` With `foreach`: ``` List<Account> accountsWithMinimum = new ArrayList<>(); for (Account account : accounts.values() ) { if (account.balance > threshold) { accountsWithMinimum.add(account); } } ``` The `values` method of the `Map` interface returns a `Collection` of the values stored in the map. You can also use `entrySet` to get the collection of key-value pairs, or `keySet` to get only the keys.
From the information you provided, the best solution seems to be changing the data from HashMap to [LinkedHashMap](https://docs.oracle.com/javase/8/docs/api/java/util/LinkedHashMap.html), which keeps ordering. If you take a close look at the javadoc, you can find the following part useful: > > This implementation spares its clients from the unspecified, generally > chaotic ordering provided by HashMap (and Hashtable), without > incurring the increased cost associated with TreeMap. It can be used > to produce a copy of a map that has the same order as the original, > regardless of the original map's implementation: > > > > ``` > void foo(Map m) { > Map copy = new LinkedHashMap(m); > ... > } > > ``` > > This technique is particularly useful if a module takes a map on > input, copies it, and later returns results whose order is determined > by that of the copy. (Clients generally appreciate having things > returned in the same order they were presented.) > > > So the idea is to take the map as input, create a LinkedHashMap over it, and iterate using forEach.
Here is an example with comments that you can start to elaborate on ``` import java.util.ArrayList; import java.util.HashMap; import java.util.LinkedHashMap; import java.util.List; import java.util.Map.Entry; public class Main { public static void main(String[] args) { Main main = new Main(); main.test(); } private void test() { // your input HashMap<Integer, Account> accounts = new HashMap<>(); accounts.put(0, new Account("first", 1)); accounts.put(1, new Account("second", 10)); accounts.put(2, new Account("third", 5)); // call your method List<Account> result = getAccountsWithMinimum(accounts); System.out.println(result.toString()); } private List<Account> getAccountsWithMinimum(HashMap<Integer, Account> accounts) { List<Account> result = new ArrayList<>(); // create linkedhashmap to get order LinkedHashMap<Integer, Account> data = new LinkedHashMap<>(accounts); // get entries for (Entry<Integer, Account> entry : data.entrySet()) { // print entry System.out.println("key=" + entry.getKey() + ", value=" + entry.getValue()); // check account balance against a value, for example 6 (use it as a parameter!) if (entry.getValue().balance < 6) { // add to the result list result.add(entry.getValue()); } } return result; } // your account class private class Account { public String name; public int balance; public Account(String name, int balance) { this.name = name; this.balance = balance; } @Override public String toString() { return "[name=" + name + ", balance=" + balance + "]"; } } } ``` Your question is a little bit difficult to answer because it is not clear what the HashMap contains exactly; in my example I had to guess.
12,001
761,824
I need to convert markdown text to plain text format to display summary in my website. I want the code in python.
2009/04/17
[ "https://Stackoverflow.com/questions/761824", "https://Stackoverflow.com", "https://Stackoverflow.com/users/43056/" ]
The [Markdown](https://pypi.org/project/Markdown/) and [BeautifulSoup](https://pypi.org/project/beautifulsoup4/) (now called *beautifulsoup4*) modules will help do what you describe. Once you have converted the markdown to HTML, you can use an HTML parser to strip out the plain text. Your code might look something like this: ``` from bs4 import BeautifulSoup from markdown import markdown html = markdown(some_markdown_text) text = ''.join(BeautifulSoup(html, 'html.parser').findAll(text=True)) ```
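If pulling in BeautifulSoup is not an option, the tag-stripping half can be sketched with only the standard library. The markdown-to-HTML conversion still needs the Markdown package, so this sketch starts from HTML:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def html_to_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return ''.join(parser.parts)

print(html_to_text('<p><strong>bold</strong> and plain</p>'))  # bold and plain
```

Note this does no whitespace normalization: block-level tags are simply dropped, so adjacent paragraphs may run together unless you add separators in `handle_data` or on tag boundaries.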
Commented and removed it because I finally think I see the rub here: It may be easier to convert your markdown text to HTML and remove HTML from the text. I'm not aware of anything to remove markdown from text effectively but there are many HTML to plain text solutions.
12,003
53,264,593
I have been trying this code to get a random seed, but it always fails. ``` import string import random import time import sys from random import seed possibleCharacters = string.ascii_lowercase + string.digits + string.ascii_uppercase + ' .,!?;:' target = input("Enter text: ") attemptThis = ''.join(random.choice(possibleCharacters) for i in range(len(target))) attemptNext = '' completed = False generation = 0 while completed == False: print(attemptThis) attemptNext = '' completed = True for i in range(len(target)): if attemptThis[i] != target[i]: completed = False attemptNext += random.choice(possibleCharacters) else: attemptNext += target[i] generation += 1 attemptThis = attemptNext time.sleep(0) time.sleep(seed(1)) print("Operation completed in " + str(generation) + " different generations!") ``` The error is always as this one: ``` Traceback (most recent call last): File "N:\ict python\python programs\demo\genration text.py", line 30, in <module> time.sleep(seed(1)) TypeError: an integer is required (got type NoneType) at the end of it. ``` I have tried other functions - randint, pi and generating a random number and deciding by a random number. Do I have to hardcode that number, or is there a way to make it generate a random delay?
2018/11/12
[ "https://Stackoverflow.com/questions/53264593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9199123/" ]
The `random.seed()` function always returns `None`, but `time.sleep()` needs a number (the number of seconds to sleep). Try `random.randint()` instead to generate a random integer: ``` import random import time time.sleep(random.randint(1, 50)) ```
You can also use the `random.randint(a, b)` function to generate a random integer. The `seed` function just initializes the random number generator.
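Putting both answers together: `seed()` returns `None`, so it can never be passed to `time.sleep()`. Seed once, then draw a number — `random.uniform` if a fractional delay is wanted:

```python
import random
import time

random.seed(1)                    # initializes the generator; returns None
delay = random.uniform(0.0, 0.2)  # random float in [0.0, 0.2]
time.sleep(delay)                 # sleep accepts floats, not None
```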
12,013
50,073,779
I am aware that the [re.search(pattern,text)][1] method in python takes a regular expression pattern and a string and searches for that pattern within the string. If the search is successful, search() returns a match object, or None otherwise. My problem however is that I am trying to implement this using OOP (a class), and I want to return a readable string representation of the results of the matches, whether a match or None, not this **<\_\_main\_\_.Expression instance at 0x7f30d0a81440>**. Below are two example classes: Student and Expression. The one using **\_\_str\_\_(self)** works fine, but I cannot figure out how to get the representation function for **re.search()**. Please someone help me out. ``` import re class Expression: def __init__(self,patterns,text): self.patterns = patterns self.text = text def __bool__(self): # i want to get a readable representation from here for pattern in self.patterns: result = re.search(pattern,self.text) return result patterns = ['term1','term2','23','ghghg'] text = 'This is a string with term1 23 not ghghg the other' reg = Expression(patterns,text) print(reg) class Student: def __init__(self, name): self.name = name def __str__(self): # string representation here works fine result = self.name return result # Usage: s1 = Student('john') print(s1) [1]: https://developers.google.com/edu/python/regular-expressions ```
2018/04/28
[ "https://Stackoverflow.com/questions/50073779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4209906/" ]
The output of `re.search` returns a match object. It tells you whether the regex matches the string. You should identify the group to retrieve string from the match like so: ``` if result: return result.group(0) ``` Replace `return result` in your code with above code snippet. If you are not sure how [`group`](https://docs.python.org/2/library/re.html#re.MatchObject.group) works, here is an example from docs: ``` >>> m = re.match(r"(\w+) (\w+)", "Isaac Newton, physicist") >>> m.group(0) # The entire match 'Isaac Newton' >>> m.group(1) # The first parenthesized subgroup. 'Isaac' >>> m.group(2) # The second parenthesized subgroup. 'Newton' >>> m.group(1, 2) # Multiple arguments give us a tuple. ('Isaac', 'Newton') ```
First, there is a subtle *bug* in your code: ``` def __bool__(self): for pattern in self.patterns: result = re.search(pattern,self.text) return result ``` As you return the result of the first searched pattern inside the loop, the other patterns are simply ignored. You probably want something like this: ``` def __bool__(self): result = False for pattern in self.patterns: result = result or bool(re.search(pattern,self.text)) return result ``` --- About the representation, you may use `.group(0)`. This will return the matched string, rather than the obscure `re.Match` representation. ``` import re s = re.search(r"ab", "okokabuyuihiab") print(s.group(0)) # "ab" ``` And as you use a list of patterns, maybe use instead: ``` results = [re.search(pattern, self.text) for pattern in self.patterns] representation = [r.group(0) if r else None for r in results] ```
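Tying the two answers back to the original question: the readable output the asker wants belongs in `__str__`, not `__bool__`. A minimal sketch of the `Expression` class from the question:

```python
import re

class Expression:
    def __init__(self, patterns, text):
        self.patterns = patterns
        self.text = text

    def __str__(self):
        # one entry per pattern: the matched text, or 'no match'
        parts = []
        for pattern in self.patterns:
            m = re.search(pattern, self.text)
            parts.append('%s -> %s' % (pattern, m.group(0) if m else 'no match'))
        return '; '.join(parts)

print(Expression(['term1', 'missing'], 'a string with term1'))
```

With `__str__` defined, `print(reg)` from the question produces the readable summary instead of the default instance representation.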
12,015
44,262,833
I am facing some error while using classes in python 2.7 My class definition is: ``` class Timer(object): def __init__(self): self.msg = "" def start(self,msg): self.msg = msg self.start = time.time() def stop(self): t = time.time() - self.start return self.msg, " => ", t, 'seconds' ``` On executing the following code. ``` timer = Timer() timer.start("Function 1") Some code timer.stop() timer.start('Function 2') some code timer.stop() ``` I am getting following error: ``` Function 1 => 0.01 seconds Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'float' object is not callable ``` For the first call it worked as desired but for the second call, it gave an error. I am unable to figure out the cause of the error.
2017/05/30
[ "https://Stackoverflow.com/questions/44262833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2450212/" ]
When you write `self.start = time.time()`, you replace the function `start()` with a variable named `start`, which has a float value. The next time you write `timer.start()`, start is a float, and you are trying to call it as a function. Just replace the name `self.start` with something else.
I think the problem is that you are using the same name for a method and for an attribute. I would refactor it like this: ``` class Timer(object): def __init__(self): self.msg = "" self.start_time = None #I prefer to declare it but you can avoid this def start(self,msg): self.msg = msg self.start_time = time.time() def stop(self): t = time.time() - self.start_time return self.msg, " => ", t, 'seconds' ```
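A quick way to confirm that renaming the attribute fixes the second call — the same class with the rename applied, exercised twice:

```python
import time

class Timer(object):
    def __init__(self):
        self.msg = ""
        self.start_time = None

    def start(self, msg):
        self.msg = msg
        self.start_time = time.time()

    def stop(self):
        return self.msg, " => ", time.time() - self.start_time, 'seconds'

timer = Timer()
timer.start("Function 1")
first = timer.stop()
timer.start("Function 2")  # no TypeError: start is still a bound method
second = timer.stop()
```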
12,016
52,743,872
I have been creating a game where an image moves according to player input with Keydown and Keyup methods. I want to add boundaries so that the user cannot move the image/character out of the display (I dont want a game over kind of thing if boundary is hit, just that the image/character wont be able to move past that boundary) ``` import pygame pygame.init()#initiate pygame black = (0,0,0) white = (255,255,255) red = (255,0,0) display_width = 1200 display_height = 800 display = pygame.display.set_mode((display_width,display_height)) characterimg_left = pygame.image.load(r'/Users/ye57324/Desktop/Make/coding/python/characterimg_left.png') characterimg_right = pygame.image.load(r'/Users/ye57324/Desktop/Make/coding/python/characterimg_right.png') characterimg = characterimg_left def soldier(x,y): display.blit(characterimg, (x,y)) x = (display_width * 0.30) y = (display_height * 0.2) pygame.display.set_caption('No U') clock = pygame.time.Clock()#game clock flip_right = False x_change = 0 y_change = 0 bg_x = 0 start = True bg = pygame.image.load(r'/Users/ye57324/Desktop/Make/coding/python/bg.png').convert() class player: def __init__(self, x, y): self.jumping = False p = player(x, y) while start: for event in pygame.event.get(): if event.type == pygame.QUIT or (event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE): start = False if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: x_change += -4 if flip_right == True: characterimg = characterimg_left flip_right = False x += -150 elif event.key == pygame.K_RIGHT: x_change += 4 if flip_right == False: characterimg = characterimg_right flip_right = True x += 150 elif event.key == pygame.K_UP: y_change += -4 elif event.key == pygame.K_DOWN: y_change += 4 if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT: x_change += 4 elif event.key == pygame.K_RIGHT: x_change += -4 elif event.key == pygame.K_UP: y_change += 4 elif event.key == pygame.K_DOWN: y_change += -4 x += x_change y += y_change 
display.fill(white) soldier(x,y) pygame.display.update() clock.tick(120)#fps pygame.quit() ``` I have tried several times including switching to the key pressed method but they all failed. Help please, thank you.
2018/10/10
[ "https://Stackoverflow.com/questions/52743872", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10366920/" ]
**Root cause:** Your result is an array but your test is verifying an object, so Postman throws the exception because it cannot compare the two. **Solution:** Compare the exact value of each item in the list, using an if/else to pick the expectation. ``` var arr = pm.response.json(); console.log(arr.length) for (var i = 0; i < arr.length; i++) { if(arr[i].Verified === true){ pm.test("Verified should be true", function () { pm.expect(arr[i].Verified).to.be.true; }); } if(arr[i].Verified === false){ pm.test("Verified should be false", function () { pm.expect(arr[i].Verified).to.be.false; }); } } ``` Hope it helps.
You could also just do this: ``` pm.test('Check the response body properties', () => { _.each(pm.response.json(), (item) => { pm.expect(item.Verified).to.be.true pm.expect(item.VerifiedDate).to.be.a('string').and.match(/^\d{4}-\d{2}-\d{2}$/) }) }) ``` The check will do a few things for you, it will iterate over the whole array and check that the `Verified` property is `true` and also check that the `VerifiedDate` is a string and matches the `YYYY-MM-DD` format, like in the example given in your question.
12,019
61,924,960
I am querying (via sqlalchemy) *my\_table* with a conditional on a column and then retrieve distinct values in another column. Quite simply ``` selection_1 = session.query(func.distinct(my_table.col2)).\ filter(my_table.col1 == value1) ``` I need to do this repeatedly to get distinct values from different columns from *my\_table*. ``` selection_2 = session.query(func.distinct(my_table.col3)).\ filter(my_table.col1 == value1).\ filter(my_table.col2 == value2) selection_3 = session.query(func.distinct(my_table.col4)).\ filter(my_table.col1 == value1).\ filter(my_table.col2 == value2).\ filter(my_table.col3 == value3) ``` The above code works, but as I need to have 6 successive calls it's getting a bit out of hand. I have created a class to handle the method chaining: ``` class QueryProcessor: def add_filter(self, my_table_col, value): filter(my_table_col == value) return self def set_distinct_col(self, my_other_table_col): self.my_other_table_col = my_other_table_col return session.query(func.distinct(self.my_other_table_col)) ``` Ideally I'd be able to use the class like ``` selection_1 = QueryProcessor().set_distinct_col(my_table.col2).add_filter(my_table.col1, value1) selection_2 = selection_1.set_distinct_col(my_table.col3).add_filter(my_table.col2, value2) selection_3 = selection_2.set_distinct_col(my_table.col4).add_filter(my_table.col3, value3) ``` but when I run ``` selection_1 = QueryProcessor().set_distinct_col(my_table.col2).add_filter(my_table.col1, value1) ``` I get the following error: ``` Traceback (most recent call last): File " ... " exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-20-789b26eccbc5>", line 10, in <module> selection_1 = QueryProcessor().set_distinct_col(my_table.col2).add_filter(my_table.col1, value1) AttributeError: 'Query' object has no attribute 'add_filter' ``` Any help will be much welcomed.
2020/05/21
[ "https://Stackoverflow.com/questions/61924960", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13583344/" ]
You don't really need a special class for this. Your existing code ``` selection_2 = session.query(func.distinct(my_table.col3)).\ filter(my_table.col1 == value1).\ filter(my_table.col2 == value2) ``` works because `filter` is returning a *new* query based on the original query, but with an additional filter added to it. You can just iterate over the columns and their corresponding values, replacing each old query with its successor. ``` selection2 = session.query(func.distinct(my_table.col3)) for col, val in zip([my_table.col1, my_table.col2], [value1, value2]): selection2 = selection2.filter(col == val) selection_3 = session.query(func.distinct(my_table.col4)) for col, val in zip([mytable.col1, mytable.col2, mytable.col3], [value1, value2, value3]): selection_3 = selection_3.filter(col == val) ``` That said, the problem with your code is that `add_filter` doesn't actually call the query's `filter` method, or update the wrapped query. ``` class QueryProcessor: def set_distinct_col(self, my_other_table_col): self.query = session.query(func.distinct(self.my_other_table_col)) return self def add_filter(self, my_table_col, value): self.query = self.query.filter(my_table_col == value) return self ``` This poses a problem, though: `set_distinct_col` creates a new query, so it doesn't really make sense in the following ``` selection_1 = QueryProcessor().set_distinct_col(my_table.col2).add_filter(my_table.col1, value1) selection_2 = selection_1.set_distinct_col(my_table.col3).add_filter(my_table.col2, value2) selection_3 = selection_2.set_distinct_col(my_table.col4).add_filter(my_table.col3, value3) ``` to call `set_distinct_col` on an existing instance. It can return either a new query or the existing one, but not both (at least, not if you want to do chaining). Also, note that `selection_1` itself is not the query, but `selection_1.query`.
For your `add_filter()` function to work as intended, you need your `set_distinct_col()` function to return a reference to itself (*an instance of `QueryProcessor`*). [`session.query()`](https://docs.sqlalchemy.org/en/13/orm/session_api.html#sqlalchemy.orm.session.Session.query) returns a `Query` object which doesn't have an `add_filter()` method. Query could have an `add_filter` method if you did something like `Query.add_filter = add_filter`, but that's a bad practice because it modifies the Query class, so I don't recommend doing it. What you're doing is a better option. In order to have access to the query you create with the `set_distinct_col()` method, you need to store it as an instance variable. Below, I have done this by storing the query in the instance variable `query` with `self.query = session.query(func.distinct(self.my_other_table_col))` Then, I changed the `add_filter()` method to return itself to allow for chaining more add\_filter() methods. ``` class QueryProcessor: def add_filter(self, my_table_col, value): self.query = self.query.filter(my_table_col == value) return self def set_distinct_col(self, my_other_table_col): self.my_other_table_col = my_other_table_col self.query = session.query(func.distinct(self.my_other_table_col)) return self ``` You should also know that you can use multiple filter conditions at a time, so you don't actually need to chain multiple filters together. ``` session.query(db.users).filter(or_(db.users.name=='Ryan', db.users.country=='England')) ``` or ``` session.query(db.users).filter((db.users.name=='Ryan') | (db.users.country=='England')) ``` [Difference between filter and filter\_by in SQLAlchemy](https://stackoverflow.com/questions/2128505/difference-between-filter-and-filter-by-in-sqlalchemy) P.S. This code has not been tested
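The chaining mechanics both answers rely on can be shown without SQLAlchemy at all: the only requirement is that every chainable method return `self`. The class and method names below are illustrative, not part of SQLAlchemy:

```python
class QueryBuilder:
    """Minimal illustration of method chaining via 'return self'."""
    def __init__(self):
        self.filters = []

    def add_filter(self, col, value):
        # mutate internal state, then return self so calls can chain
        self.filters.append((col, value))
        return self

    def build(self):
        return ' AND '.join('%s = %r' % (col, val) for col, val in self.filters)

q = QueryBuilder().add_filter('col1', 1).add_filter('col2', 2)
print(q.build())  # col1 = 1 AND col2 = 2
```

Note the design difference from SQLAlchemy's `Query.filter`: this builder mutates itself and returns the same object, whereas `filter` returns a *new* `Query`, which is why the original code had to rebind the result of each call.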
12,020
30,909,627
I am trying to take a slice in a django template and get an attribute. How can I do in a django template something like this python code: ``` somelist[1].name ``` please help
2015/06/18
[ "https://Stackoverflow.com/questions/30909627", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3520178/" ]
If you are accessing a single element of a list, you don't need the `slice` filter, just use the dot notation. ``` {{ somelist.1.name }} ``` See [the docs](https://docs.djangoproject.com/en/1.8/ref/templates/language/#variables) for more info.
You can use the built-in tag [`slice`](https://docs.djangoproject.com/en/1.8/ref/templates/builtins/#slice) with a combination of [`with`](https://docs.djangoproject.com/en/1.8/ref/templates/builtins/#with): ``` {% with somelist|slice:"1" as item %} {{ item.name }} {% endwith %} ```
12,021
49,913,084
All, I am trying to build a Docker image to run my python 2 App within an embedded system which has OS yocto. Since the embedded system has limited flash, I want to have a small Docker image. However, after I install all the python and other packages, I get an image of 730M, which is too big for me. I don't know how to compress the image. Please share your wisdom. Thanks!! My Dockerfile is like below: ``` FROM *****/base-rootfs:yocto-2.1.1 RUN opkg update RUN opkg install *****toolchain RUN opkg install python-pip RUN opkg install python-dev RUN opkg install python RUN pip install numpy RUN pip install pandas RUN pip install scipy RUN pip install sklearn COPY appa /opt/app/ RUN chmod 777 /opt/app/main.py CMD ["python", "./opt/app/main.py"] ```
2018/04/19
[ "https://Stackoverflow.com/questions/49913084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1227199/" ]
**Q:** How to reduce the size of my docker image?

**A:** There are a few ways you can do this. The most prominent thing I can immediately spot in your Dockerfile is that you are installing packages with individual `RUN` commands. While this can make your code look a bit cleaner, each `RUN` statement adds size overhead because docker images are built from `layers`. To docker, each `RUN` statement means building a new `layer` from the previous one and wrapping it around the previous one. You should see a drop in size if you reduce the number of `layers` by bundling the packages into a single installation command. Maybe try grouping the `pip` installs together and the `opkg` installs into another.
You can use the docker history command to see which layer is adding more disk space:

```
Example: docker history {image-name}:{image-tag}

docker@default:~$ docker history mysql
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
5195076672a7        5 weeks ago         /bin/sh -c #(nop)  CMD ["mysqld"]               0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  EXPOSE 3306/tcp              0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B
<missing>           5 weeks ago         /bin/sh -c ln -s usr/local/bin/docker-entryp…   34B
<missing>           5 weeks ago         /bin/sh -c #(nop) COPY file:05922d368ede3042…   5.92kB
<missing>           5 weeks ago         /bin/sh -c #(nop)  VOLUME [/var/lib/mysql]      0B
<missing>           5 weeks ago         /bin/sh -c {   echo mysql-community-server m…   256MB
<missing>           5 weeks ago         /bin/sh -c echo "deb http://repo.mysql.com/a…   56B
<missing>           5 weeks ago         /bin/sh -c #(nop)  ENV MYSQL_VERSION=5.7.21-…   0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  ENV MYSQL_MAJOR=5.7          0B
<missing>           5 weeks ago         /bin/sh -c set -ex;  key='A4A9406876FCBD3C45…   23kB
<missing>           5 weeks ago         /bin/sh -c apt-get update && apt-get install…   44.7MB
<missing>           5 weeks ago         /bin/sh -c mkdir /docker-entrypoint-initdb.d    0B
<missing>           5 weeks ago         /bin/sh -c set -x  && apt-get update && apt-…   4.44MB
<missing>           5 weeks ago         /bin/sh -c #(nop)  ENV GOSU_VERSION=1.7         0B
<missing>           5 weeks ago         /bin/sh -c apt-get update && apt-get install…   10.2MB
<missing>           5 weeks ago         /bin/sh -c groupadd -r mysql && useradd -r -…   329kB
<missing>           5 weeks ago         /bin/sh -c #(nop)  CMD ["bash"]                 0B
<missing>           5 weeks ago         /bin/sh -c #(nop) ADD file:e3250bb9848f956bd…   55.3MB
docker@default:~$
```

Based on the output you can compare the reported layer sizes with the real size of those packages and eventually drill down to what is causing such a huge docker image.
12,022
54,419,118
I am trying to extract a table from a PPT using `python-pptx`; however, I am not sure how to do that using `shape.table`.

```
from pptx import Presentation

prs = Presentation(path_to_presentation)

# text_runs will be populated with a list of strings,
# one for each text run in presentation
text_runs = []

for slide in prs.slides:
    for shape in slide.shapes:
        if shape.has_table:
            tbl = shape.table
            rows = tbl.rows.count
            cols = tbl.columns.count
```

I found a post [here](https://stackoverflow.com/questions/27843018/read-from-powerpoint-table-in-python) but the accepted solution does not work, giving an error that the `count` attribute is not available. How do I modify the above code so I can get the table into a dataframe?

**EDIT**

Please see the image of the slide below

[![enter image description here](https://i.stack.imgur.com/ZQ3Us.png)](https://i.stack.imgur.com/ZQ3Us.png)
2019/01/29
[ "https://Stackoverflow.com/questions/54419118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1479974/" ]
This appears to work for me.

```
prs = Presentation(path_to_presentation)

# text_runs will be populated with a list of strings,
# one for each text run in presentation
text_runs = []

for slide in prs.slides:
    for shape in slide.shapes:
        if not shape.has_table:
            continue
        tbl = shape.table
        row_count = len(tbl.rows)
        col_count = len(tbl.columns)
        for r in range(0, row_count):
            for c in range(0, col_count):
                cell = tbl.cell(r, c)
                paragraphs = cell.text_frame.paragraphs
                for paragraph in paragraphs:
                    for run in paragraph.runs:
                        text_runs.append(run.text)

print(text_runs)
```
To read the values present inside the ppt, this code worked for me (note that it starts reading at the third column):

```
slide = Deck.slides[1]
table = slide.shapes[1].table

for r in range(0, len(table.rows)):
    for c in range(2, len(table.columns)):
        cell_value = (table.cell(r, c)).text_frame.text
        print(cell_value)
```
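To get from extracted cell texts to a dataframe, it can help to collect the texts row by row and convert the grid into records. A small dependency-free sketch (the header-row assumption and the final `pandas.DataFrame(records)` step are mine, not from the answers above):

```python
def table_to_records(rows):
    """Turn a list of rows (first row assumed to be the header) into a list
    of dicts, which pandas.DataFrame(records) accepts directly."""
    header = rows[0]
    return [dict(zip(header, row)) for row in rows[1:]]

# Example with plain lists standing in for the cell texts read from the table:
rows = [["name", "qty"], ["bolt", "4"], ["nut", "8"]]
print(table_to_records(rows))
```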
12,023
68,173,157
I have a list `a`:

```
a = [["v1_0001.jpg","v1_00015.jpg","v1_0002.jpg"],["v2_0001.jpg","v2_0002.jpg","v2_00015.jpg"]]
```

I want to concatenate this list of lists into one list and sort it alphanumerically. When I try to sort the concatenated list

```
['v1_0001.jpg', 'v1_00015.jpg', 'v1_0002.jpg', 'v2_0001.jpg', 'v2_00015.jpg', 'v2_0002.jpg']
```

I get output like this

```
['v1_0001.jpg', 'v1_0002.jpg', 'v1_00015.jpg', 'v2_0001.jpg', 'v2_00015.jpg', 'v2_0002.jpg']
```

How do I resolve this in python?
2021/06/29
[ "https://Stackoverflow.com/questions/68173157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Use `chain` from `itertools`, like so:

```
from itertools import chain

a = [[0, 15, 30, 45, 75, 105],[0, 15, 30, 45, 75, 105]]
b = list(chain(*a))

print(b)
# [0, 15, 30, 45, 75, 105, 0, 15, 30, 45, 75, 105]
```
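Applied to the filename lists from the question, flattening and then sorting with the default string order reproduces the first ordering shown there (if a numeric ordering of the counters were wanted instead, a natural-sort key would be needed):

```python
from itertools import chain

a = [["v1_0001.jpg", "v1_00015.jpg", "v1_0002.jpg"],
     ["v2_0001.jpg", "v2_0002.jpg", "v2_00015.jpg"]]

# Plain string sort is lexicographic: '.' sorts before digits, so
# 'v1_0001.jpg' < 'v1_00015.jpg' < 'v1_0002.jpg'.
flat_sorted = sorted(chain.from_iterable(a))
print(flat_sorted)
```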
Here is one way, using `itertools`:

```
import itertools

a = [[0, 15, 30, 45, 75, 105],[0, 15, 30, 45, 75, 105]]

print(list(itertools.chain.from_iterable(a)))
```
12,024
37,090,675
I am working with canbus in python (Pcan basic api) and would like to make it easier to use. Via the bus a lot of devices/modules are connected. They are all allowed to send data; if a collision happens, the lowest ID wins. The data is organized in frames with ID, SubID, hexvalues. To illustrate the problem I am trying to address, imagine the amplitude of a signal. To read the value a frame is sent to

* QuestionID QuestionSUBID QuestionData

If there is no message with higher priority (= lower ID), the answer is written to the bus:

* AnswerID AnswerSubID AnswerData

Since any module/device is allowed to write to the bus, you don't know in advance which answer you will get next. Setting a value works the same way, just with different IDs. So for the above example the amplitude would have:

1. 4 IDs and SubIDs associated with read/write question/answer
2. Additionally the length of the data (0-8) has to be specified/stored.
3. Since the data is all hex values, a parser has to be specified to obtain the human readable value (e.g. voltage in decimal representation)

To store this information I use nested dicts:

```
parameters = {'Parameter_1': {'Read': {'question_ID': ID,
                                       'question_SUBID': SubID,
                                       'question_Data': hex_value_list,
                                       'answer_ID': ...,
                                       'answer_subID': ...,
                                       'answer_parser': function},
                              'Write': {'ID': ...,
                                        'SubID': ...,
                                        'parser': ...,
                                        'answer_ID': ...,
                                        'answer_subID': ...}},
              'Parameter_2': ...
             }
```

There are a lot of tools to show which value was set when, but for hardware control, the order in which parameters are read is not relevant as long as they are up to date.
Thus one part of a possible solution would be storing the whole traffic in a dict of dicts:

```
busdata = {'firstID': {'first_subID': {'data': data, 'timestamp': timestamp},
                       'second_subID': {'data': data, 'timestamp': timestamp},
                      },
           'secondID': ...}
```

Due to the nature of the bus, I get a lot of answers to questions other devices asked - the bus is quite full - these should not be dismissed, since they might be the values I need next and there is no need to create additional traffic - I might use the timestamp with an expiry date, but I haven't thought a lot about that so far. This works, but is horrible to work with. In general I guess I will have about 300 parameters. The final goal is to control the devices via a (pyqt) Gui, read some values like serial numbers, as well as run measurement tasks. So the big question is how to define a better datastructure that is easily accessible and understandable? I am looking forward to any suggestion on a clean design. The main goal would be something like getting rid of the whole message based approach.

**EDIT:** My goal is to get rid of the whole CAN specific message based approach. I assume I will need one thread for the communication; it should:

1. Read the buffer and update my variables
2. Send requests (messages) to obtain other values/variables
3. Send some values periodically

So from the gui I would like to be able to:

1. get parameter by name --> send a string with the parameter name
2. set parameter signal --> str(name), value (as displayed in the gui)
3. get values periodically --> name, interval, duration (10s or infinite)

The thread would have to:

1. Log all data on the bus for internal storage
2. Process requests by generating messages from name, value and read until the result is obtained
3. Send periodical signals

I would like to have this design independent of the actual hardware:

* The solution I thought of is the above *parameters_dict*; for internal storage I thought about the *bus_data_dict*

**Still I am not sure how to**:

1.
Pass data from the bus thread to the gui (all values vs. new/requested value) 2. How to implement it with signals and slots in pyqt 3. Store data internally (dict of dicts or some new better idea) 4. If this design is a good choice
2016/05/07
[ "https://Stackoverflow.com/questions/37090675", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Using the [python-can](https://bitbucket.org/hardbyte/python-can/) library will get you the networking thread - giving you a buffered queue of incoming messages. The library supports the PCAN interface among others. Then you would create a middle-ware layer that converts and routes these `can.Message` types into pyqt signals. Think of this as a one to many source of events/signals. I'd use another controller to be in charge of sending messages to the bus. It could have tasks like requesting periodic measurements from the bus, as well as on demand requests driven by the gui. Regarding storing the data internally, it really depends on your programming style and the complexity. I have seen projects where each CAN message would have its own class. Finally, queues are your friend!
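One concrete shape for the internal storage is a latest-value cache keyed by (ID, sub-ID) with optional expiry, which the networking thread updates and the GUI queries. A minimal sketch (class and method names are mine, independent of any CAN library):

```python
import time

class BusCache:
    """Latest-value store for CAN frames, keyed by (frame_id, sub_id)."""

    def __init__(self, max_age=None):
        self.max_age = max_age      # seconds; None disables expiry
        self._store = {}            # (frame_id, sub_id) -> (data, timestamp)

    def update(self, frame_id, sub_id, data, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self._store[(frame_id, sub_id)] = (data, ts)

    def get(self, frame_id, sub_id):
        entry = self._store.get((frame_id, sub_id))
        if entry is None:
            return None
        data, ts = entry
        if self.max_age is not None and time.time() - ts > self.max_age:
            return None             # stale: caller should re-request on the bus
        return data

cache = BusCache(max_age=10.0)
cache.update(0x123, 0x01, [0xDE, 0xAD])
print(cache.get(0x123, 0x01))
```

The GUI side then only needs a name-to-(ID, sub-ID) mapping (the question's `parameters` dict) to look values up by parameter name.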
```
class MySimpleCanBus:

    def parse_message(self, raw_message):
        return MyMessageClass._create(*struct.unpack(FRAME_FORMAT, raw_message))

    def receive_message(self, filter_data):
        # code to receive and parse a message (filtered by id)
        raw = canbus.recv(FRAME_SIZE)
        return self.parse_message(raw)

    def send_message(self, msg_data):
        # code to make sure the message can be sent and send the message
        return self.receive_message()


class MySpecificCanBus(MySimpleCanBus):

    def get_measurement_reading(self):
        msg_data = {}  # code to request a measurement
        return self.send_message(msg_data)

    def get_device_id(self):
        msg_data = {}  # code to get device_id
        return self.send_message(msg_data)
```

I probably don't understand your question properly ... maybe you could update it with additional details
12,026
41,033,115
I downloaded and installed datashader using the steps below:

```
git clone https://github.com/bokeh/datashader.git
cd datashader
conda install -c bokeh --file requirements.txt
python setup.py install
```

After that, I ran the code from the terminal with `python data.py`, but no graph is displayed; nothing is being displayed. I am not sure if I've followed the right steps here; can somebody help me display the graphs? Here is my code:

```
import pandas as pd
import numpy as np
import xarray as xr
import datashader as ds
import datashader.glyphs
import datashader.transfer_functions as tf
from collections import OrderedDict

np.random.seed(1)
num = 10000

dists = {cat: pd.DataFrame(dict(x=np.random.normal(x,s,num),
                                y=np.random.normal(y,s,num),
                                val=val, cat=cat))
         for x,y,s,val,cat in
         [(2,2,0.01,10,"d1"), (2,-2,0.1,20,"d2"), (-2,-2,0.5,30,"d3"),
          (-2,2,1.0,40,"d4"), (0,0,3,50,"d5")]}

df = pd.concat(dists, ignore_index=True)
df["cat"] = df["cat"].astype("category")
df.tail()

tf.shade(ds.Canvas().points(df,'x','y'))

glyph = ds.glyphs.Point('x', 'y')
canvas = ds.Canvas(plot_width=200, plot_height=200,
                   x_range=(-8,8), y_range=(-8,8))

from datashader import reductions
reduction = reductions.count()

from datashader.core import bypixel
agg = bypixel(df, canvas, glyph, reduction)
agg

canvas.points(df, 'x', 'y', agg=reductions.count())

tf.shade(canvas.points(df,'x','y',agg=reductions.count()))

tf.shade(canvas.points(df,'x','y',agg=reductions.any()))

tf.shade(canvas.points(df,'x','y',agg=reductions.mean('y')))

tf.shade(50-canvas.points(df,'x','y',agg=reductions.mean('val')))

agg = canvas.points(df, 'x', 'y')
tf.shade(agg.where(agg>=np.percentile(agg,99)))

tf.shade(np.sin(agg))

aggc = canvas.points(df, 'x', 'y', ds.count_cat('cat'))
aggc

tf.shade(aggc.sel(cat='d3'))

agg_d3_d5 = aggc.sel(cat=['d3', 'd5']).sum(dim='cat')
tf.shade(agg_d3_d5)
```
2016/12/08
[ "https://Stackoverflow.com/questions/41033115", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7113481/" ]
I haven't tried your code, but there is nothing in there that would actually display the image. Each shade() call creates an image in memory, but then nothing is done with it here. If you were in a Jupyter notebook environment and the shade() call were the last item in the cell, it would display automatically, but the regular Python prompt doesn't have such "rich display" support. So you can either save it to an image file on disk (using e.g. [utils/export\_image](https://github.com/bokeh/datashader/blob/0.4.0/datashader/utils.py#L124)), or you can assign the result of shade() to a variable and then pass that to a Bokeh or Matplotlib or other plot, as you prefer. But you have to do something with the image if you want to see it.
I was able to produce the plot for one of the `tf.shade` calls in your code this way.

```
from datashader.utils import export_image

img = tf.shade(canvas.points(df,'x','y',agg=reductions.count()))
export_image(img=img, filename='test1', fmt=".png", export_path=".")
```

This is the plot in test1.png

[![Points_plot](https://i.stack.imgur.com/mW0wA.png)](https://i.stack.imgur.com/mW0wA.png)
12,029
31,662,355
I am creating a simple GUI production calculator in python. I am using Tkinter and have a main frame with 10 tabs in it. I have created all the entries and functions to do the calculations we need, and it all works. My problem is that I want these entries and labels on each tab, line10 - line19. I could manually recreate the code for each tab, but that does not seem very pythonic. What I am hoping for is to be able to put this code in a loop that will rename the variable names for each line and place the objects into the different tab frames by changing the argument in the grid methods. I hope I am being clear enough; I am very new and hoping to get a good grasp of this language. I am hoping to be able to just reiterate this code with a different number at the end of all the variable names; would concatenation work? Here is my code.

```
from Tkinter import *
from ttk import *
import time

class supervisorcalc:
    def __init__(self, master):

        self.notebook = Notebook(master)
        self.line10 = Frame(self.notebook); self.notebook.add(self.line10, text="Line 10")
        self.line11 = Frame(self.notebook); self.notebook.add(self.line11, text="Line 11")
        self.line12 = Frame(self.notebook); self.notebook.add(self.line12, text="Line 12")
        self.line13 = Frame(self.notebook); self.notebook.add(self.line13, text="Line 13")
        self.line14 = Frame(self.notebook); self.notebook.add(self.line14, text="Line 14")
        self.line15 = Frame(self.notebook); self.notebook.add(self.line15, text="Line 15")
        self.line16 = Frame(self.notebook); self.notebook.add(self.line16, text="Line 16")
        self.line17 = Frame(self.notebook); self.notebook.add(self.line17, text="Line 17")
        self.line18 = Frame(self.notebook); self.notebook.add(self.line18, text="Line 18")
        self.line19 = Frame(self.notebook); self.notebook.add(self.line19, text="Line 19")
        self.notebook.grid(row=0,column=0)

        ###functions###
        def cyclecnt(*args):
            cyclecount = int(self.cyccnt.get())
            molds = int(self.vj.get())
            cyccount = cyclecount * molds
            self.cyc.set(cyccount)
return def currentproduction(*args): item = int(self.item.get()) case = int(self.case.get()) currprod = item * case self.production.set(currprod) return def lostunits(*args): cycle = int(self.cyc.get()) prod = int(self.production.get()) self.loss.set(cycle - prod) return def efficiency(*args): lost = float(self.loss.get()) prod = float(self.production.get()) self.efficiency.set((lost/prod)*100) return def getSec(x): l = x.split(':') return int(l[0]) * 3600 + int(l[1]) * 60 + int(l[2]) def future_time_seconds(*args): hrs = self.hour.get() mins = self.minute.get() return (int(hrs) * 3600) + (int(mins) * 60) def time_difference_seconds(*args): fseconds = future_time_seconds() s = time.strftime('%I:%M:%S') cursecs = getSec(s) return fseconds - cursecs def proj(*args): ctime = float(self.cycletime.get()) prod = int(self.production.get()) loss = int(self.loss.get()) case = float(self.case.get()) molds = int(self.vj.get()) item = int(self.item.get()) seconds = time_difference_seconds() pcycle = ((molds / ctime) * seconds) projeff = float(self.peff.get()) / float(100) pproduction = pcycle - (pcycle * projeff) self.projectedprod.set(prod + pproduction) projloss = loss + pcycle * projeff self.ploss.set(projloss) fcase = case + (pproduction / float(item)) self.fcase.set(fcase) ###line 19 self.ctlabelj = Label(self.line19, text = "Cycle Time:") self.ctlabelj.grid(row=2, column=0) self.cycletime = StringVar() self.cycletime.trace('w', proj) self.cycletimeentj = Entry(self.line19, textvariable=self.cycletime) self.cycletimeentj.grid(row=2,column=1) moldoptionsj = [1, 1, 2, 3, 4] self.vj = IntVar() self.vj.set(moldoptionsj[0]) self.headslabelj = Label(self.line19, text = "# of Molds:") self.headslabelj.grid(row=3, column=0) self.headcomboj = OptionMenu(self.line19, self.vj, *moldoptionsj) self.headcomboj.grid(row=3,column=1) self.vj.trace("w", cyclecnt) self.cclabelj = Label(self.line19, text = "Cycle Count:") self.cclabelj.grid(row=4, column=0) self.cyccnt = StringVar() 
self.cyclecountentj = Entry(self.line19, textvariable=self.cyccnt) self.cyclecountentj.grid(row=4,column=1) self.cyccnt.trace("w", cyclecnt) self.ipcj = Label(self.line19, text = "Items/Case:") self.ipcj.grid(row=5, column=0) self.item = StringVar() self.ipcentj = Entry(self.line19, textvariable=self.item) self.ipcentj.grid(row=5,column=1) self.item.trace("w", currentproduction) self.currj = Label(self.line19, text = "Current Case #:") self.currj.grid(row=6, column=0) self.case = StringVar() self.currentj = Entry(self.line19, textvariable=self.case) self.currentj.grid(row=6,column=1) self.case.trace("w", currentproduction) self.ctimej = Label(self.line19, text = "Current Time:") self.ctimej.grid(row=7, column=0, sticky='W') self.clockj = Label(self.line19) self.clockj.grid(row=7,column=1, sticky='w') ####futureztime### self.futureframe = Frame(self.line19) self.futureframe.grid(row=8, column=1) self.futurej = Label(self.line19, text = "Future Projections time:") self.futurej.grid(row=8, column=0, sticky='w') self.hour = StringVar() self.hour.trace('w', time_difference_seconds) self.hour.trace('w', proj) self.futureenthourj = Entry(self.futureframe, width=2, textvariable=self.hour) self.futureenthourj.grid(row=0, column=0) self.futurecolonj = Label(self.futureframe, text = ":") self.futurecolonj.grid(row=0, column=1) self.minute = StringVar() self.minute.trace('w', time_difference_seconds) self.minute.trace('w', proj) self.futureentminj = Entry(self.futureframe, width=2, textvariable=self.minute) self.futureentminj.grid(row=0, column=2) #### self.cycleslabel = Label(self.line19, text = 'Cycle Total:') self.cycleslabel.grid(row=2, column=2) self.cyc = StringVar() self.cyc.set("00000") self.cyc.trace('w', lostunits) self.cycles = Label(self.line19, foreground = 'green', background = 'black', text = "00000", textvariable = self.cyc) self.cycles.grid(row=2, column=3) self.currprodkeylabel = Label(self.line19, text = 'Current Production:') 
self.currprodkeylabel.grid(row=3, column=2) self.production = StringVar() self.production.set('00000') self.production.trace('w', lostunits) self.production.trace('w', efficiency) self.currentprod = Label(self.line19, foreground = 'green', background = 'black', textvariable=self.production) self.currentprod.grid(row=3, column=3) self.prodprojkeylabel = Label(self.line19, text = 'Projected Production:') self.prodprojkeylabel.grid(row=4, column=2) self.projectedprod = StringVar() self.projectedprod.set('00000') self.prodproj = Label(self.line19, foreground = 'green', background = 'black', textvariable=self.projectedprod ) self.prodproj.grid(row=4, column=3) self.losskeylabel = Label(self.line19, text = 'Lost units:') self.losskeylabel.grid(row=5, column=2) self.loss = StringVar() self.loss.set("0000") self.loss.trace('w', efficiency) self.lossprod = Label(self.line19, foreground = 'green', background = 'black', textvariable=self.loss) self.lossprod.grid(row=5, column=3) self.plosskeylabel = Label(self.line19, text = 'Projected Lost units:') self.plosskeylabel.grid(row=6, column=2) self.ploss = StringVar() self.ploss.set("0000") self.plossprod = Label(self.line19, foreground = 'green', background = 'black', textvariable=self.ploss) self.plossprod.grid(row=6, column=3) self.currefficiencykeylabel = Label(self.line19, text = 'Current efficiency %:') self.currefficiencykeylabel.grid(row=7, column=2) self.efficiency = StringVar() self.efficiency.set("00.00") self.currentefficiency = Label(self.line19, foreground = 'green', background = 'black', textvariable=self.efficiency) self.currentefficiency.grid(row=7, column=3) self.futurecaselabel = Label(self.line19, text = 'Future case # projection:') self.futurecaselabel.grid(row=8, column=2) self.fcase = StringVar() self.fcase.set("000.0") self.futurecase = Label(self.line19, foreground = 'green', background = 'black', textvariable=self.fcase) self.futurecase.grid(row=8, column=3) self.projefficiencylabel = Label(self.line19, 
text = "Efficiency Projection:") self.projefficiencylabel.grid(row=9, column=2) self.peff = StringVar() self.peff.set(0.00) self.peff.trace('w', proj) self.projefficiency = Entry(self.line19, textvariable=self.peff) self.projefficiency.grid(row=9,column=3) def tick(): s = time.strftime('%I:%M:%S') if s != self.clockj: self.clockj.configure(text=s) self.notebook.after(200, tick) tick() root = Tk() root.wm_title("Hillside Plastics Production Calculator") calc = supervisorcalc(root) mainloop() ```
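On the actual question of avoiding ten near-identical blocks: instead of numbered attribute names (`self.line10` … `self.line19`), keep per-tab state in a dictionary keyed by line number and build it in a loop. A Tkinter-free sketch of the idea (in the real GUI, each `LineTab` would also create its `Frame`, `Entry` and `StringVar` objects and add the frame to the notebook):

```python
class LineTab(object):
    """Container for everything belonging to one production line's tab."""

    def __init__(self, number):
        self.number = number
        self.title = "Line %d" % number
        self.widgets = {}  # e.g. 'cycletime' -> Entry, filled in by the GUI code

# One entry per tab, no numbered variable names and no repeated code blocks.
tabs = {n: LineTab(n) for n in range(10, 20)}
print(sorted(tabs))
print(tabs[19].title)
```

The callbacks can then take the line number (or the `LineTab` instance) as an argument instead of hard-coding `self.cyccnt`, `self.vj` and so on per tab.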
2015/07/27
[ "https://Stackoverflow.com/questions/31662355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5111347/" ]
This can't be intentional, because it can't work. As you've seen, you can't create a table twice. You should delete one of the migrations, possibly merging from the one you delete into the other one. The only differences are the taggings\_count field and the indexes. There isn't enough to go on here to say whether you need taggings\_count or which is the better index. If I had to guess, I'd say the index on the first was trying to create a covering index, for what that's worth.
Get used to it. This error will happen often in the future. Usually, it happens when your migration failed in the middle: for example, your table was created, but the later creation of indexes failed. Then, when you try to rerun the migration, the table already exists and the migration fails. There are several ways to deal with such a situation:

1) you can drop the table or anything else that was partially created, and rerun.

2) you can edit that particular migration and comment out the table creation part while rerunning (then you can uncomment it).

I personally prefer #2. I have to say that this situation happens only for some databases. You will see it with MySQL, but will not see it with PostgreSQL. It happens because PostgreSQL fully rolls back changes for a failed migration (including successfully created tables and such), while MySQL decides that changes in a partially successful migration should not be rolled back.
12,030
51,591,025
I have used Django 1.11.14, python 2.7 and win 7 to build a website which lets registered users log in. However, when I log in, some pages display the login link again when they should display the logout link.

main page

```html
<!DOCTYPE html>
<html lang="en">
<head>
  {% block title %}<title>xxxxx</title>{% endblock %}
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
  <!-- Add additional CSS in static file -->
  {% load static %}
  <link rel="stylesheet" href="{% static 'css/styles.css' %}">
</head>
<body>
<div class="container-fluid">
  <div class="row">
    <div class="col-sm-2">
      {% block sidebar %}
      <ul class="sidebar-nav">
        <br>
        <li><a href="{% url 'index' %}"><mark>Home</mark></a></li>
        <br>
        <li><a href="{% url 'FWs' %}"><mark>FW request</mark></a></li>
        <br>
      </ul>
      <ul class="sidebar-nav">
        {% if user.is_authenticated %}
        <li>User: {{ user.get_username }}</li>
        <li><a href="{% url 'my-applied' %}"><mark>My applied</mark></a></li>
        <br>
        <li><a href="{% url 'logout'%}?next={{request.path}}"><mark>Logout</mark></a></li>
        {% else %}
        <li><a href="{% url 'login'%}?next={{request.path}}"><mark>Login</mark></a></li>
        {% endif %}
      </ul>
      {% if user.is_staff %}
      <hr />
      <ul class="sidebar-nav">
        <li>Staff</li>
        {% if perms.catalog.can_mark_returned %}
        <li><a href="{% url 'all-applied' %}"><mark>All applied</mark></a></li>
        {% endif %}
      </ul>
      {% endif %}
      {% endblock %}
    </div>
    <div class="col-sm-10 ">
      {% block content %}{% endblock %}
      {% block pagination %}
      {% if is_paginated %}
      <div class="pagination">
        <span class="page-links">
          {% if page_obj.has_previous %}
          <a href="{{ request.path }}?page={{ page_obj.previous_page_number }}">previous</a>
          {% endif %}
          <span class="page-current">
            Page {{
page_obj.number }} of {{ page_obj.paginator.num_pages }}.
          </span>
          {% if page_obj.has_next %}
          <a href="{{ request.path }}?page={{ page_obj.next_page_number }}">next</a>
          {% endif %}
        </span>
      </div>
      {% endif %}
      {% endblock %}
    </div>
  </div>
</div>
</body>
</html>
```

FW_request

```html
{% extends "base_generic.html" %}

{% block content %}
<p style = "color:#FF0000";> XXx</p>
{% if FW_list %}
<ul>
  {% for FWinst in FW_list %}
  {% if FWinst.is_approved %}
  <p style = "color:#008000";>xxxx</p>
  <p></p>
  <br>
  {% endif %}
  {% endfor %}
</ul>
{% else %}
<p>Nothing.</p>
{% endif %}
{% endblock %}
```

view.py

```
class LoanedFWsByUserListView(LoginRequiredMixin, generic.ListView, FormView):
    model = XX
    paginate_by = 10
    template_name = 'FW_request.html'

    def get(self, request):
        form = xxForm()
        FW_list = FW.objects.all().order_by('approve_time').reverse()
        return render_to_response(self.template_name, locals())
```

When I log in on the main page everything is correct, but when I click another tab like My applied, although I am still logged in, the page shows the login link where it should show logout.
2018/07/30
[ "https://Stackoverflow.com/questions/51591025", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9669628/" ]
Instead of `render_to_response` you should use the [`render`](https://docs.djangoproject.com/en/2.0/topics/http/shortcuts/#render) function. This makes the request available to the template engine, so context processors such as `auth` (which supplies the `user` variable your template checks) can run.

```
def get(self, request):
    form = xxForm()
    FW_list = FW.objects.all().order_by('approve_time').reverse()
    return render(self.request, self.template_name, locals())
```
You have missed the point of using class based views here. You should rarely, if ever, be defining `get` (or `post`). In this case you should only define `get_context_data` to add your FW_list, and let the view handle creating the form and rendering the template.

```
def get_context_data(self, **kwargs):
    kwargs['FW_list'] = FW.objects.all().order_by('approve_time').reverse()
    return super(LoanedFWsByUserListView, self).get_context_data(**kwargs)
```

(The explicit arguments to `super(...)` are needed on Python 2.7, which the question uses; on Python 3 a bare `super()` would do.)
12,032
9,072,175
fellow earthians. I, relatively sane of body and mind, hereby give up understanding CSS positioning by myself. The online resources about CSS go to great lengths to explain that the "color" attribute lets you set the "color" of stuff. Unmöglish. Then, that if you want to put something to the left of something else (crazy idea, right?), *all* you have to do is to set it to float to the left, provided you set the "relative" flag on its parent block, which has to have a grand-father node with the "absolute" flag set to true so that it's positioned relatively to another container that may-or-may-not contain anything, have a position, a size, or not, depending on the browser, the size of other stuff, and possibly the phases of the moon. (CSS experts are advised not to take the previous paragraph seriously. I'm pretty sure someone will point out that my rant is not valid, or w3c-compliant - and that it only applies to the Swedish beta version of IE6) Joking apart, I'm looking for any resource that explains the root causes of all the craziness behind layout in CSS. In essence, something that would be to CSS what Crockford's articles are to Javascript. In this spirit, let me point out that I'm not looking for CSS libraries or grid frameworks like blueprint, or for CSS extension languages like lesscss. I've been using those to ease my sufferings, but I'm afraid it would be like telling someone to "just use jQuery" when they say they can't wrap their mind around prototype inheritance in JS. If all you can point me to is <http://shop.oreilly.com/product/9781565926226.do>, I guess I'll consider myself doomed. Thanks in advance. EDIT: I probably should not have talked about "positioning" (thanks to all who've explained again that 'position:relative' does not mean 'relative to your container' and that 'position:absolute' means relative to something. I've never been so close to making a Monty Python script out of an SO question).
I think I meant layout in general (positioning + floats + baselines + all the nonsense required to put stuff on a straight line). Also please excuse the ranting tone, I'm trying to pour some humour into frustration. I would use zen techniques to calm down if I could, but this only reminds me of [this](http://www.csszengarden.com/).
2012/01/30
[ "https://Stackoverflow.com/questions/9072175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/77804/" ]
It seems most others have not quite understood the gist of your post. I'll break it down for you: CSS positioning is complex because it was designed by many different groups of people over a long period of time, with various versions and legacy compatibility issues.

The first attempts were to keep things simple. Just provide basic styling: colors, fonts, sizes, margins, etc. They added floats to provide the basic "cutout" functionality where text wraps around an image. Float was not intended to be used as a major layout feature the way it currently is.

But people were not happy with that. They wanted columns, and grids, boxes, and shadows, and rounded corners, and all kinds of other stuff, which was added in various stages, all while trying to maintain compatibility with previous bad implementations.

HTML has suffered from two opposing factions warring it out. One side wanted simple (compared to existing SGML anyways) solutions, another side wanted rich applications. So CSS has this sort of schizophrenic nature to it sometimes. What's more, features were extended to do things they weren't initially intended to do. This made the existing implementations all very buggy.

So what does that mean for you, a mere human? It means you are stuck dealing with everyone else's dirty laundry. It means you have to deal with decade-old implementation bugs. It means you either have to write different CSS for different browsers, or you have to limit yourself to a common "well supported" featureset, which means you can't take full advantage of what the latest CSS can do (nor can you use the features that were designed to add some sanity to the standard).
In my opinion, there is no better book for a "mere human" to understand CSS than this: [http://www.amazon.com/Eric-Meyer-CSS-Mastering-Language/dp/073571245X](https://rads.stackoverflow.com/amzn/click/com/073571245X) It's simple, concise, and gives you real-world examples in a glossy "easy on the eyes" format, lacking most of the nasty technical jargon. It is 10 years old and doesn't cover any of the new stuff, but it should allow you to "grok" the way things work.
Have you checked out this great book? <http://shop.oreilly.com/product/9781565926226.do> Just kidding. I don't think you need an entire resource devoted to this one question. It's rather simple once it clicks.

Think of CSS positioning as a way to position items either relative to themselves (wherever they fall on the page) or absolutely from an X/Y coordinate. You can position something relative and it will move either up or to the right with a positive number, or down and to the left with a negative number. If you position an element absolutely, it will remove itself from the layout altogether (the other elements will not recognize it as being on the screen) and then do one of two things:

1 - position itself from the top left of the page and go up/down right/left as I mentioned before, depending on whether the numbers are +/-.

2 - if the PARENT element is positioned either `absolute` or `relative`, it will position itself from the top left "corner" of the parent element, NOT the browser window.

Think of `z-index` as layers in Photoshop, with 0 being the bottom (newer browsers recognize a negative z-index for even more fun) and 100 as the top layer (newer browsers recognize an unlimited range of numbers). The `z-index` only works with position `relative` and `absolute`. So if I position something absolute and it happens to fall underneath another element, I can give it `z-index: 100` and it will position itself on top. Keep in mind that the element itself is a rectangle, and the height/width of the element may inhibit you from clicking on the element beneath. You can't do angles, circles, etc. with pure CSS.

Does that help?
12,033
25,237,039
I have some pickle files of deep learning models built on a GPU. I'm trying to use them in production, but when I try to unpickle them on the server, I get the following error:

```
Traceback (most recent call last):
  File "score.py", line 30, in
    model = (cPickle.load(file))
  File "/usr/local/python2.7/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/sandbox/cuda/type.py", line 485, in CudaNdarray_unpickler
    return cuda.CudaNdarray(npa)
AttributeError: ("'NoneType' object has no attribute 'CudaNdarray'", , (array([[ 0.011515  ,  0.01171047,  0.10408644, ..., -0.0343636 ,
         0.04944979, -0.06583775],
       [-0.03771918,  0.080524  , -0.10609912, ...,  0.11019105,
        -0.0570752 ,  0.02100536],
       [-0.03628891, -0.07109226, -0.00932018, ...,  0.04316209,
         0.02817888,  0.05785328],
       ...,
       [ 0.0703947 , -0.00172865, -0.05942701, ..., -0.00999349,
         0.01624184,  0.09832744],
       [-0.09029484, -0.11509365, -0.07193922, ...,  0.10658887,
         0.17730837,  0.01104965],
       [ 0.06659461, -0.02492988,  0.02271739, ..., -0.0646857 ,
         0.03879852,  0.08779807]], dtype=float32),))
```

I checked for that CudaNdarray package on my local machine and it is not installed, but I am still able to unpickle the files there. On the server, however, I am unable to. How do I make them run on a server which doesn't have a GPU?
2014/08/11
[ "https://Stackoverflow.com/questions/25237039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1505986/" ]
There is a script in pylearn2 which may do what you need: `pylearn2/scripts/gpu_pkl_to_cpu_pkl.py`
This works for me. Note: this doesn't work unless the following environment variable is set: `export THEANO_FLAGS='device=cpu'`

```
import os
import sys

from pylearn2.utils import serial
import pylearn2.config.yaml_parse as yaml_parse

if __name__ == "__main__":
    _, in_path, out_path = sys.argv
    os.environ['THEANO_FLAGS'] = "device=cpu"

    model = serial.load(in_path)

    model2 = yaml_parse.load(model.yaml_src)
    model2.set_param_values(model.get_param_values())

    serial.save(out_path, model2)
```
12,043
33,157,597
I am trying to install rasterio into my python environment and am getting the following errors. I can do

```
conda install rasterio
```

No error comes up on the install, but I get the following error when I try to import:

```
from rasterio._base import eval_window, window_shape, window_index
ImportError: DLL load failed: The specified module could not be found.
```

If I try

```
pip install rasterio
```

it errors when installing with this:

```
rasterio/_base.c(263) : fatal error C1083: Cannot open include file:'cpl_conv.h': No such file or directory
error: command 'C:\\Users\\Rdebbout\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\cl.exe' failed with exit status 2
----------------------------------------
Failed building wheel for rasterio
```

I have the same problems trying to import the fiona module. How and/or where do DLLs get loaded? I'm in the dark on this one and would appreciate any help or direction as to how to troubleshoot this problem. I am using the 64-bit version of spyder on windows 7.
2015/10/15
[ "https://Stackoverflow.com/questions/33157597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4812453/" ]
I would suggest trying the ioos anaconda recipe (<https://anaconda.org/ioos/rasterio>). `conda install -c https://conda.anaconda.org/ioos rasterio`. I have run into the same DLL issue you are seeing when trying to install more recent versions of rasterio using the standard anaconda version.
I had the same issue. A reinstall solved it.

```
conda install -f rasterio
```
12,048
1,575,971
I'm making a stupid little game that saves your score in a highscores.txt file. My problem is sorting the lines. Here's what I have so far. Maybe an alphanumeric sorter for python would help? Thanks.

```
import os.path
import string

def main():
    #Check if the file exists
    file_exists = os.path.exists("highscores.txt")

    score = 500
    name = "Nicholas"

    #If the file doesn't exist, create one with the high scores format.
    if file_exists == False:
        f = open("highscores.txt", "w")
        f.write('Guppies High Scores\n1000..........Name\n750..........Name\n600..........Name\n450..........Name\n300..........Name')

    new_score = str(score) + ".........." + name

    f = open("highscores.txt", "r+")
    words = f.readlines()
    print words

main()
```
2009/10/16
[ "https://Stackoverflow.com/questions/1575971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/190983/" ]
after `words = f.readlines()`, try something like:

```
headers = words.pop(0)

def myway(aline):
    i = 0
    while aline[i].isdigit():
        i += 1
    score = int(aline[:i])
    return score

words.sort(key=myway, reverse=True)
words.insert(0, headers)
```

The key (;-) idea is to make a function that returns the "sorting key" from each item (here, a line). I'm trying to write it in the simplest possible way: see how many leading digits there are, then turn them all into an int, and return that.
Doing a simple string sort on your

```
new_score = str(score) + ".........." + name
```

items isn't going to work since, for example, str(1000) < str(500). In other words, 1000 will come before 500 in an alphanumeric sort. Alex's answer is good in that it demonstrates the use of a sort key function, but here is another solution which is a bit simpler and has the added advantage of visually aligning the high score displays.

What you need to do is right-align your numbers in a fixed field of the maximum size of the scores, thus (assuming 5 digits max and ver < 3.0):

```
new_score = "%5d........%s" % (score, name)
```

or for Python ver 3.x:

```
new_score = "{0:5d}........{1}".format(score, name)
```

For each new_score, append it to the words list (you could use a better name here) and sort it reversed before printing. Or you could use the bisect.insort library function rather than doing a list.append.

Also, a more Pythonic form than

```
if file_exists == False:
```

is:

```
if not file_exists:
```
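To illustrate the `bisect.insort` suggestion, here is a small sketch (the helper names are made up, not from the question) that keeps `(score, name)` pairs sorted as new results arrive, then renders them highest-first with the right-aligned format shown above:

```python
import bisect

def add_score(scores, score, name):
    """Insert (score, name), keeping `scores` sorted ascending by score."""
    bisect.insort(scores, (score, name))

def format_table(scores):
    """Render highest score first, right-aligned in a 5-digit field."""
    return ["%5d........%s" % (s, n) for s, n in reversed(scores)]

scores = []
for s, n in [(1000, "Ann"), (300, "Bob"), (500, "Nicholas")]:
    add_score(scores, s, n)
```

Because the list is always kept sorted, no separate sort pass is needed before writing the file back out.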
12,051
26,645,502
The code is self-explained...

```
$ python3
Python 3.4.0 (default, Apr 11 2014, 13:05:18)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib.request as req
>>> url = 'http://bangladeshbrands.com/342560550782-44083.html'
>>> res = req.urlopen(url)
>>> html = res.read()
>>> type(html)
<class 'bytes'>
>>> html = html.decode('utf-8') # bytes -> str
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x92 in position 66081: invalid start byte
```
2014/10/30
[ "https://Stackoverflow.com/questions/26645502", "https://Stackoverflow.com", "https://Stackoverflow.com/users/907044/" ]
You need to give the dataset name and table name. Here is code which works correctly on my machine:

```
Dim ds As New DataSet
Dim dt As New DataTable
Dim RpDs1 As New Microsoft.Reporting.WinForms.ReportDataSource

Dim SQL As String = "select * from mfcount"
Dim da As New OleDbDataAdapter(SQL, My.Settings.trialConnectionString)
da.Fill(ds, "mfcount")
dt = ds.Tables(0)

ReportViewer1.Reset()
ReportViewer1.LocalReport.DataSources.Clear()
RpDs1.Name = "trialDataSet4_MFCount"
RpDs1.Value = dt
ReportViewer1.ProcessingMode = WinForms.ProcessingMode.Local
ReportViewer1.LocalReport.DataSources.Add(RpDs1)

Dim path = New DirectoryInfo(Application.StartupPath).Parent.Parent.Parent.FullName
ReportViewer1.LocalReport.ReportEmbeddedResource = Application.StartupPath & "\Report\" & "ADDRESSReport.rdlc"
ReportViewer1.LocalReport.ReportPath = Application.StartupPath & "\Report\" & "ADDRESSReport.rdlc"
ReportViewer1.ZoomMode = Microsoft.Reporting.WinForms.ZoomMode.PageWidth
ReportViewer1.RefreshReport()
```

This code will help you.
Thanks basuraj kumbhar for your help solving this problem. Here is the working code for a MySQL connection report in VB.NET:

```
Dim ds As New DataSet
Dim dt As New DataTable
Dim RpDs1 As New Microsoft.Reporting.WinForms.ReportDataSource

Dim SQL As String = "select * from tb_course"
Dim da As New MySqlDataAdapter(SQL, con)
da.Fill(ds, "tb_course")
dt = ds.Tables(0)

ReportViewer1.Reset()
ReportViewer1.LocalReport.DataSources.Clear()
RpDs1.Name = "DataSet1"
RpDs1.Value = dt
ReportViewer1.ProcessingMode = Microsoft.Reporting.WinForms.ProcessingMode.Local
ReportViewer1.LocalReport.ReportPath = System.Environment.CurrentDirectory + "\Report1.rdlc"
ReportViewer1.LocalReport.DataSources.Clear()
ReportViewer1.LocalReport.DataSources.Add(New Microsoft.Reporting.WinForms.ReportDataSource("DataSet1", ds.Tables(0)))
ReportViewer1.ZoomMode = Microsoft.Reporting.WinForms.ZoomMode.PageWidth
ReportViewer1.RefreshReport()
```
12,058
38,703,423
I have been scouring the internet looking for the answer to this. Please note my python coding skills are not all that great. I am trying to create a command line script that will take the input from the command line like this:

```
$python GetHostID.py serverName.com
```

The last part is what I am wanting to pass on as a variable to the socket.gethostbyaddr("") module. This is the code that I have so far. Can someone help me figure out how to put that variable into the (" ")? I think the "" is creating problems when using a simple variable name, as it is trying to treat it as a string of text as opposed to a variable name. Here is the code I have in my script:

```
#!/bin/python
#
import sys, os
import optparse
import socket

remoteServer = input("Enter a remote host to scan: ")
remoteServerIP = socket.gethostbyaddr(remoteServer)
socket.gethostbyaddr('remoteServer')[0]
os.getenv('remoteServer')

print (remoteServerIP)
```

Any help would be welcome. I have been racking my brain over this... thanks
2016/08/01
[ "https://Stackoverflow.com/questions/38703423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6664141/" ]
The command line arguments are available as the list `sys.argv`, whose first element is the path to the program. There are a number of libraries you can use (argparse, optparse, etc.) to analyse the command line, but for your simple application you could do something like this:

```
import sys
import socket

remoteServer = sys.argv[1]
remoteServerIP = socket.gethostbyaddr(remoteServer)

print(remoteServerIP)
```

Running this program with the command line

```
$ python GetHostID.py holdenweb.com
```

gives the output

```
('web105.webfaction.com', [], ['108.59.9.144'])
```
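Since the question imports `optparse` (now deprecated), here is an optional sketch using `argparse` instead of indexing `sys.argv` directly; the argument name is illustrative, not from the original script:

```python
import argparse
import socket
import sys

def parse_args(argv):
    # Build a one-argument parser; argparse generates --help for free.
    parser = argparse.ArgumentParser(description="Look up a remote host.")
    parser.add_argument("host", help="hostname to resolve, e.g. serverName.com")
    return parser.parse_args(argv)

def resolve(host):
    # gethostbyaddr returns a (hostname, aliaslist, ipaddrlist) triple.
    return socket.gethostbyaddr(host)

if __name__ == "__main__":
    args = parse_args(sys.argv[1:])
    print(resolve(args.host))
```

Run as `python GetHostID.py serverName.com`; a missing argument now produces a usage message instead of an IndexError.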
os.getenv('remoteserver') does not use the variable remoteserver as an argument. Instead it uses the string 'remoteserver'.

Also, are you trying to take input as a command line argument, or as user input? Your problem description and implementation differ here. The easiest way would be to run your script using

```
python GetHostID.py
```

and then in your code include

```
remoteServer = raw_input().strip()
```

to get the input you want for remoteServer.
12,059
38,897,842
I have an app running on AWS and I need to save every "event" in a file. An "event" happens, for instance, when a user logs in to the app. When that happens I need to save this information on a file (presumably I would need to save a time stamp and the session id) I expect to have a lot of events (of the order of a million per month) and I was wondering what would be the best way to do this. I thought of writing on S3, but I think I can't append to existing files. Another option would be to redirect the "event" to the standard output, but would not be the smartest solution. Any ideas? Also, this needs to be done in python.
2016/08/11
[ "https://Stackoverflow.com/questions/38897842", "https://Stackoverflow.com", "https://Stackoverflow.com/users/863713/" ]
The `.*` in the lookaheads checks for the letter presence not only in the adjacent word, but later in the string. Use `[a-zA-Z]*`:

```
echo "hello 123 worLD" | grep -oP "\b(?=[A-Za-z]*[a-z])(?=[A-Za-z]*[A-Z])[a-zA-Z]+"
```

See the [demo online](https://ideone.com/aGaXTA)

I also added a word boundary `\b` at the start so that the lookahead check is only performed after a word boundary.
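The same pattern can be sanity-checked from Python's `re` module (a side sketch, not part of the grep answer):

```python
import re

# Words containing at least one lowercase AND one uppercase letter.
# [A-Za-z]* inside the lookaheads keeps the check within the current
# word, mirroring the corrected grep -P pattern above.
pattern = re.compile(r"\b(?=[A-Za-z]*[a-z])(?=[A-Za-z]*[A-Z])[a-zA-Z]+")

matches = pattern.findall("hello 123 worLD")
# matches == ['worLD']
```

Words that are all lowercase or all uppercase fail one of the two lookaheads and are skipped.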
**Answer:**

```
echo "hello 123 worLD" | grep -oP "\b(?=[A-Z]+[a-z]|[a-z]+[A-Z])[a-zA-Z]*"
```

Demo: <https://ideone.com/HjLH5o>

**Explanation:** First check if the word starts with one or more uppercase letters followed by a lowercase letter, or vice versa, followed by any number of lowercase and uppercase letters in any order.

**Performance:** [This solution](https://regex101.com/r/bC8vT6/5) takes 31 steps to reach the match on the provided test string, while the [accepted solution](https://regex101.com/r/bC8vT6/1) takes 47 steps.
12,062
3,124,229
I'm trying to take a screenshot of the entire screen with C and GTK. I don't want to make a call to an external application for speed reasons. I've found Python code for this ([Take a screenshot via a python script. [Linux]](https://stackoverflow.com/questions/69645/take-a-screenshot-via-a-python-script-linux/782768#782768)); I just need to figure out how to do that in C.
2010/06/26
[ "https://Stackoverflow.com/questions/3124229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/287750/" ]
After looking at the GNOME-Screenshot code and a Python example, I came up with this:

```
GdkPixbuf *
get_screenshot(){
  GdkPixbuf *screenshot;
  GdkWindow *root_window;
  gint x_orig, y_orig;
  gint width, height;

  root_window = gdk_get_default_root_window ();
  gdk_drawable_get_size (root_window, &width, &height);
  gdk_window_get_origin (root_window, &x_orig, &y_orig);

  screenshot = gdk_pixbuf_get_from_drawable (NULL, root_window, NULL,
                                             x_orig, y_orig, 0, 0,
                                             width, height);
  return screenshot;
}
```

Which seems to work perfectly. Thanks!
9 years passed and as mentioned above the API is removed. As far as I understand, currently the bare minimum to do this on Linux is:

```
GdkWindow * root;
GdkPixbuf * screenshot;
gint x, y, width, height;

root = gdk_get_default_root_window ();
gdk_window_get_geometry (root, &x, &y, &width, &height);
screenshot = gdk_pixbuf_get_from_window (root, x, y, width, height);
// gdk_pixbuf_save...
```

This is very slightly tested and may fail. Further reading is in the gnome-screenshooter [repo](https://gitlab.gnome.org/GNOME/gnome-screenshot)
12,063
6,457,102
I'm trying to load ~2GB of text files (approx 35K files) in my python script. I'm getting a memory error around a third of the way through on page.read().

```
for f in files:
    page = open(f)
    pageContent = page.read().replace('\n', '')
    page.close()

    cFile_list.append(pageContent)
```

I've never dealt with objects or processes of this size in python. I checked some other Python MemoryError related threads but I couldn't get anything to fix my scenario. Hopefully there is something out there that can help me out.
2011/06/23
[ "https://Stackoverflow.com/questions/6457102", "https://Stackoverflow.com", "https://Stackoverflow.com/users/812575/" ]
You are trying to load too much into memory at once. This can be because of the process size limit (especially on a 32 bit OS), or because you don't have enough RAM. A 64 bit OS (and 64 bit Python) would be able to do this ok given enough RAM, but maybe you can simply change the way your program is working so not every page is in RAM at once. What is cFile\_list used for? Do you really need all the pages in memory at the same time?
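A minimal sketch of that idea, under the assumption that each page can be processed independently; `process_page` here is a hypothetical stand-in for whatever `cFile_list` is later used for:

```python
def clean(text):
    """Strip newlines the same way the original loop does."""
    return text.replace('\n', '')

def iter_pages(paths):
    """Yield one cleaned page at a time, so only one page lives in RAM."""
    for path in paths:
        with open(path) as fh:
            yield clean(fh.read())

def process_page(page):
    # hypothetical placeholder for the real per-page work
    return len(page)

def process_all(paths):
    # pages are consumed and discarded one by one
    return sum(process_page(page) for page in iter_pages(paths))
```

This keeps peak memory at roughly the size of the largest single file, instead of the sum of all 35K files.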
Consider using generators, if possible in your case:

```
file_list = []
for file_ in files:
    file_list.append(line.replace('\n', '') for line in open(file_))
```

file_list now is a list of iterators, which is more memory-efficient than reading the whole contents of each file into a string. As soon as you need the whole string of a particular file, you can do

```
string_ = ''.join(file_list[i])
```

Note, however, that iterating over file_list is only possible once due to the nature of iterators in Python. See <http://www.python.org/dev/peps/pep-0289/> for more details on generators.
12,064
70,623,704
The following code:

```
from typing import Union

def process(actions: Union[list[str], list[int]]) -> None:
    for pos, action in enumerate(actions):
        act(action)

def act(action: Union[str, int]) -> None:
    print(action)
```

generates a mypy error: `Argument 1 to "act" has incompatible type "object"; expected "Union[str, int]"`

However, when removing the enumerate function the typing is fine:

```
from typing import Union

def process(actions: Union[list[str], list[int]]) -> None:
    for action in actions:
        act(action)

def act(action: Union[str, int]) -> None:
    print(action)
```

Does anyone know what the enumerate function is doing to affect the types? This is python 3.9 and mypy 0.921.
2022/01/07
[ "https://Stackoverflow.com/questions/70623704", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3139441/" ]
`enumerate.__next__` needs more context than is available to have a return type more specific than `Tuple[int, Any]`, so I believe `mypy` itself would need to be modified to make the inference that `enumerate(actions)` produces `Tuple[int, Union[str, int]]` values. Until that happens, you can explicitly cast the value of `action` before passing it to `act`.

```
from typing import Union, cast

StrOrInt = Union[str, int]

def process(actions: Union[list[str], list[int]]) -> None:
    for pos, action in enumerate(actions):
        act(cast(StrOrInt, action))

def act(action: Union[str, int]) -> None:
    print(action)
```

You can also make `process` generic (which, now that I've thought of it, is probably a better idea, as it avoids the overhead of calling `cast` at runtime).

```
from typing import Union, cast, Iterable, TypeVar

T = TypeVar("T", str, int)

def process(actions: Iterable[T]) -> None:
    for pos, action in enumerate(actions):
        act(action)

def act(action: T) -> None:
    print(action)
```

Here, `T` is not a union of types, but a single concrete type whose identity is fixed by the call to `process`. `Iterable[T]` is either `Iterable[str]` or `Iterable[int]`, depending on which type you pass to `process`. That fixes `T` for the rest of the call to `process`, so every call to `act` must take the same type of argument. An `Iterable[str]` or an `Iterable[int]` is a valid argument, binding `T` to `str` or `int` in the process. Now `enumerate.__next__` apparently *can* have a specific return type `Tuple[int, T]`.
I don't know how it's affecting the types. I do know that using len() can work the same way. It is slower but if it solves the problem it might be worth it. Sorry that it's not much help
12,067
53,715,925
Greetings. For the past week (or more) I've been struggling with a problem.

**Scenario:** I am developing an app which will allow an expert to create a recipe using a provided image of something to be used as a base. The recipe consists of areas of interest. The program's purpose is to allow non-experts to use it, providing images similar to that original, and the software cross-checks these different areas of interest between the recipe image and the provided image.

One use-case scenario could be banknotes. The expert would select an area on a good picture of a genuine banknote, and then the user would provide the software with images of banknotes that need to be checked. So illumination, as well as the capturing device, could be different. I don't want you guys to delve into the nature of comparing banknotes; that's another monster to tackle and I've got it covered for the most part.

**My Problem:** Initially I [shrink](https://stackoverflow.com/questions/44650888/resize-an-image-without-distortion-opencv) one of the two pictures to the size of the smaller one, so now we are dealing with pictures having the same size. (I actually perform the shrinking on the areas of interest and not the whole picture, but that shouldn't matter.) I have tried and used different methodologies to compare these parts, but each one had its limitations due to the nature of the images. Illumination might be different, the provided image might have some sort of contamination, etc.

**What have I tried:**

*Simple image similarity comparison using [RGB](https://rosettacode.org/wiki/Percentage_difference_between_images) difference.* Problem is the provided image could be totally different but the colours could be similar, so I would get high percentages on "totally" different banknotes.

*SSIM on RGB images.* Would give a really low percentage of similarity on all channels.

*SSIM after using a sobel filter.* Again a low percentage of similarity.
I used SSIM from both [Scikit](http://scikit-image.org/docs/dev/auto_examples/transform/plot_ssim.html) in python and [SSIM from OpenCV](https://docs.opencv.org/2.4/doc/tutorials/gpu/gpu-basics-similarity/gpu-basics-similarity.html).

*Feature matching with [Flann](https://docs.opencv.org/2.4/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html).* Couldn't find a good way to use detected matches to extract a similarity.

Basically I am guessing that I need to use various methods and algorithms to achieve the best result. My gut tells me that I will need to combine the RGB comparison results with a methodology that will:

* Perform some form of edge detection like sobel.
* Compare the results based on shape matching or something similar.

I am an image analysis newbie, and I also tried to find a way to compare the sobel products of the provided images, using [mean and std calculations](https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#void%20meanStdDev(InputArray%20src,%20OutputArray%20mean,%20OutputArray%20stddev,%20InputArray%20mask)) from openCV; however, I either did it wrong, or the results I got were useless anyway. I calculated the [euclidean distance](https://stackoverflow.com/questions/23105777/opencv-euclidean-distance-between-two-vectors) between the vectors that resulted from the mean and std calculation; however, I could not use the results, mainly because I couldn't see how they related between images.

I am not providing the code I used, firstly because I scrapped some of it, and secondly because I am not looking for a code solution but a methodology or some direction to study material (I've read a shitload of papers already). Finally, I am not trying to detect similar images, but given two images, to extract the similarity between them, trying to bypass small differences created by illumination or paper distortion, etc.
Finally I would like to say that I tested all the methods by providing the same image twice and I would get 100% similarity, so I didn't totally fuck it up. Is what I am trying even possible without some sort of training sets to teach the software what are the acceptable variants of the image? (Again I have no idea if that even makes sense :D )
2018/12/11
[ "https://Stackoverflow.com/questions/53715925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4789509/" ]
Your `DirectView` class must inherit from a `View` class in Django in order to use [`as_view`](https://docs.djangoproject.com/en/2.1/ref/class-based-views/base/#django.views.generic.base.View.as_view). ``` from django.views.generic import View class DirectView(mixins.CreateModelMixin, View): ``` If you're using the rest framework, maybe the inheritance you need here is [`CreateAPIView`](https://www.django-rest-framework.org/api-guide/generic-views/#createapiview) or [`GenericAPIView`](https://www.django-rest-framework.org/api-guide/generic-views/#genericapiview) (with `CreateModelMixin`) which is the API equivalent of the `View` class mentioned above.
If we are looking into the [source code of **`mixins.CreateModelMixin`**](https://github.com/encode/django-rest-framework/blob/master/rest_framework/mixins.py#L14), we can see it's inherited from [**`object`**](https://stackoverflow.com/questions/4015417/python-class-inherits-object) (a ***builtin type***) and hence it's independent of any kind of inheritance other than the builtin type. Apart from that, **Mixin** classes are a special kind of multiple inheritance. You can read more about Mixins [here](https://stackoverflow.com/questions/533631/what-is-a-mixin-and-why-are-they-useful). In short, Mixins provide additional functionality to the class (a kind of *helper class*).

---

**So, what's the solution to this problem?**

**Solution - 1 : Use `CreateAPIView`**

Since you are trying to extend the functionality of `CreateModelMixin`, it's highly recommended to use this DRF builtin view, as

```
from rest_framework import generics


class DirectView(generics.CreateAPIView):
    serializer_class = DirectSerializer

    def perform_create(self, serializer):
        serializer.save(user=self.request.user)

    def post(self, request, *args, **kwargs):
        return self.create(request, *args, **kwargs)
```

---

**Reference**

1. [What is a mixin, and why are they useful?](https://stackoverflow.com/questions/533631/what-is-a-mixin-and-why-are-they-useful)
2. [Python class inherits object](https://stackoverflow.com/questions/4015417/python-class-inherits-object)
12,069
69,108,130
I am trying to build a voice assistant. I am facing a problem with the playsound library. Please view my code snippet.

```
import playsound as play

def respond(output):
    """ function to respond to user questions """
    num=0
    print(output)
    num += 1
    response=gTTS(text=output, lang='en')
    file = str(num)+".mp3"
    response.save(file)
    play(file, True) #playsound

if __name__=='__main__':
    respond("Hi! I am Zoya, your personal assistant")
```

My audio file is getting generated; however, at the line play(file, True) it throws the following error:

```
---------------------------------------------------------------------------
CalledProcessError                        Traceback (most recent call last)
<ipython-input-52-be0c0a53e7e6> in <module>()
      1 if __name__=='__main__':
----> 2     respond("Hi! I am Zoya, your personal assistant")
      3
      4 while(1):
      5     respond("How can I help you?")

6 frames
/usr/lib/python3.7/subprocess.py in check_call(*popenargs, **kwargs)
    361     if cmd is None:
    362         cmd = popenargs[0]
--> 363         raise CalledProcessError(retcode, cmd)
    364     return 0
    365

CalledProcessError: Command '['/usr/bin/python3', '/usr/local/lib/python3.7/dist-packages/playsound.py', '1.mp3']' returned non-zero exit status 1.
```

How do I resolve the issue? I would also like to mention that I am working on google colab.
2021/09/08
[ "https://Stackoverflow.com/questions/69108130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16863358/" ]
Query:

* lookup with itself, joining only with the type that the document with id=3 has.
* empty join results => different type, so they are filtered out.

[Test code here](https://mongoplayground.net/p/KnKpOBZYJy7)

```js
db.collection.aggregate([
  {
    "$lookup": {
      "from": "collection",
      "let": { "type": "$type" },
      "pipeline": [
        {
          "$match": {
            "$expr": {
              "$and": [
                { "$eq": [ "$_id", "3" ] },
                { "$eq": [ "$$type", "$type" ] }
              ]
            }
          }
        }
      ],
      "as": "joined"
    }
  },
  { "$match": { "$expr": { "$ne": [ "$joined", [] ] } } },
  { "$unset": [ "joined" ] }
])
```
You're basically mixing two separate queries: 1. Get an item by ID - returns a **single** item 2. Get a list of items, that have the same type as the type of the first item - returns a **list of items** Because of the difference of the queries, there's no super straightforward way to do so. Surely you can use [$aggregate](https://docs.mongodb.com/manual/aggregation/) to do the trick, but logic wise you'd still query quite a bit from the database, and you'd have to dig deeper to optimize it properly. As long as you're not querying tens of millions of records, I'd suggest you do the two queries one after another, for the sake of simplicity.
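To illustrate the two-query approach in plain Python (dicts stand in for MongoDB documents here; in a real application these would be two separate queries against the collection):

```python
collection = [
    {"_id": "1", "type": "a"},
    {"_id": "2", "type": "b"},
    {"_id": "3", "type": "a"},
    {"_id": "4", "type": "a"},
]

def items_with_same_type(coll, item_id):
    # Query 1: get the single item by id.
    item = next((doc for doc in coll if doc["_id"] == item_id), None)
    if item is None:
        return []
    # Query 2: get every item whose type matches the first result.
    return [doc for doc in coll if doc["type"] == item["type"]]

matching = items_with_same_type(collection, "3")
# matching contains the documents with ids "1", "3" and "4"
```

The first lookup returns one document; its `type` then drives the second, list-returning lookup, which is exactly the mismatch the answer describes.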
12,070
18,600,200
In python, here is my multiprocessing setup. I subclassed the Process method and gave it a queue and some other fields for pickling/data purposes.

This strategy works about 95% of the time; the other 5%, for an unknown reason, the queue just hangs and never finishes (it's common that 3 of the 4 cores finish their jobs and the last one takes forever, so I have to just kill the job). I am aware that queues have a fixed size in python, or they will hang. My queue only stores one-character strings... the id of the processor, so it can't be that.

Here is the exact line where my code halts:

```
res = self._recv()
```

Does anyone have ideas? The formal code is below. Thank you.

```
from multiprocessing import Process, Queue
from multiprocessing import cpu_count as num_cores
import codecs, cPickle

class Processor(Process):
    def __init__(self, queue, elements, process_num):
        super(Processor, self).__init__()
        self.queue = queue
        self.elements = elements
        self.id = process_num

    def job(self):
        ddd = []
        for l in self.elements:
            obj = ... heavy computation ...
            dd = {}
            dd['data'] = obj.data
            dd['meta'] = obj.meta
            ddd.append(dd)
        cPickle.dump(ddd, codecs.open(urljoin(TOPDIR, self.id+'.txt'), 'w'))
        return self.id

    def run(self):
        self.queue.put(self.job())

if __name__=='__main__':
    processes = []
    for i in range(0, num_cores()):
        q = Queue()
        p = Processor(q, divided_work(), process_num=str(i))
        processes.append((p, q))
        p.start()

    for val in processes:
        val[0].join()
        key = val[1].get()

        storage = urljoin(TOPDIR, key+'.txt')
        ddd = cPickle.load(codecs.open(storage, 'r'))
        .. unpack ddd process data ...
```
2013/09/03
[ "https://Stackoverflow.com/questions/18600200", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1660802/" ]
Do a `time.sleep(0.001)` at the beginning of your `run()` method.
From my experience

```
time.sleep(0.001)
```

is by far not long enough. I had a similar problem. It seems to happen if you call `get()` or `put()` on a queue "too early". I guess it somehow fails to initialize quickly enough. Not entirely sure, but I'm speculating that it might have something to do with the ways a queue might use the underlying operating system to pass messages. It started happening to me after I started using BeautifulSoup and lxml, and it affected totally unrelated code.

My solution is a little bit ugly, but it's simple and it works:

```
import time

def run(self):
    error = True
    while error:
        try:
            self.queue.put(self.job())
            error = False
        except EOFError:
            print "EOFError. retrying..."
            time.sleep(1)
```

On my machine it usually retries twice during application start-up and afterwards never again. You need to do that inside of the sender AND receiver, since this error will occur on both sides.
12,071
46,373,433
I am working in python, trying to be able to put in a data set (eg. (1, 6, 8) that returns a string (eg. 'NO+ F- NO+'). I think that maybe array is not the correct object. I want to be able to plug in large data sets (eg. (1, 1, 6, 1, ..., 8, 8, 6, 1) to return a string. ``` def protein(array): ligand = '' for i in range(array): if i == 1: ligand = ligand + 'NO+' if i == 6: ligand = ligand + 'F-' if i == 8: ligand = ligand + 'NO+' return ligand ``` The following is the input and error code: ``` protein(1, 6, 8) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-44-a33f3d5c265e> in <module>() ----> 1 protein(1, 6, 8) TypeError: protein() takes 1 positional argument but 3 were given ``` For single inputs, I get the wrong output: ``` protein(1) Out[45]: '' protein(6) Out[46]: 'NO+' ``` If any further clarification is needed, let me know, thanks
2017/09/22
[ "https://Stackoverflow.com/questions/46373433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8658260/" ]
You probably want `def protein(*array):`. This allows you to pass in any number of arguments. You also must use `for i in array:` instead of `for i in range(array):`.
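Putting both fixes together, a minimal corrected version of the question's function could look like this:

```python
def protein(*array):
    ligand = ''
    for i in array:        # iterate over the values themselves, not range(...)
        if i == 1:
            ligand += 'NO+'
        elif i == 6:
            ligand += 'F-'
        elif i == 8:
            ligand += 'NO+'
    return ligand

print(protein(1, 6, 8))  # NO+F-NO+
```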
If you call it like `protein(1, 6, 8)` you are **not** passing it a tuple: you pass it **three parameters**. Since you defined `protein` with *one* parameter, `array`, that errors.

You can accept arbitrary parameters by using `*args`. But nevertheless this function is still not very elegant, nor is it efficient: it will take *O(n²)* to calculate the string. A more declarative and efficient approach is probably to use a dictionary and perform lookups that are then `''.join`ed together:

```
translate = {1: 'NO+', 6: 'F-', 8: 'NO+'}

def protein(*array):
    return ''.join(translate[x] for x in array)
```

In case you want to *ignore* values you pass that are not in the dictionary (for instance, to ignore `7` in `protein(1,7,6,8)`), you can replace `[x]` with `.get(x, '')`:

```
translate = {1: 'NO+', 6: 'F-', 8: 'NO+'}

def protein(*array):
    return ''.join(translate.get(x, '') for x in array)
```
12,072
55,253,980
How to debug the code written in python in container using azure dev spaces for kubernetes ?
2019/03/20
[ "https://Stackoverflow.com/questions/55253980", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8267052/" ]
Debugging should be similar to what we have in .NET Core. In .NET, we used to debug something like this:

**Setting and using breakpoints for debugging**

If Visual Studio 2017 is still connected to your dev space, click the stop button. Open Controllers/HomeController.cs and click somewhere on line 20 to put your cursor there. To set a breakpoint, hit F9 or click Debug then Toggle Breakpoint. To start your service in debugging mode in your dev space, hit F5 or click Debug then Start Debugging.

Open your service in a browser and notice no message is displayed. Return to Visual Studio 2017 and observe that line 20 is highlighted. The breakpoint you set has paused the service at line 20. To resume the service, hit F5 or click Debug then Continue. Return to your browser and notice the message is now displayed.

While running your service in Kubernetes with a debugger attached, you have full access to debug information such as the call stack, local variables, and exception information. Remove the breakpoint by putting your cursor on line 20 in Controllers/HomeController.cs and hitting F9.

Try something like this and see if it works. Here is an article which explains debugging Python code in Visual Studio 2017:

<https://learn.microsoft.com/en-us/visualstudio/python/tutorial-working-with-python-in-visual-studio-step-04-debugging?view=vs-2017>

Hope it helps.
Currently, debugging in Azure Dev Spaces officially supports only Node.js, .NET Core, and Java. The documentation for how to debug these 3 types of environments was written pretty recently (Quickstarts published on 7/7/2019). I am assuming that a guide for Python should be on the way shortly, but I have been unable to find any published timeline for this.
12,078
73,565,617
I'm making an age calculator, and when calculating the months I need numbers, not strings such as "January" etc. How do I make it so that when the user is selecting their birth month they see month strings ("Jan", "Feb"), while on the backend I get the month's number? I thought this could be done with if statements, but that's simply too long and I wonder if there's a better way. (I'm programming in Python, btw.)
2022/09/01
[ "https://Stackoverflow.com/questions/73565617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19517516/" ]
You can use list comprehension for that : ``` myList = ['BUD', 'CDG', 'DEL', 'DOH', 'DSM,ORD', 'EWR,HND', 'EWR,HND,ICN', 'EWR,HND,JFK', 'EWR,HND,JFK,LGA', 'EWR,HND,JFK,LGA', 'EWY,LHR', 'EWY,LHR,SFO,DSM', 'EWY,LHR,SFO,DSM,ORD,BGI', 'EWY,LHR,SFO,DSM,ORD,BGI,LGA'] result = [el.split(',') for el in myList] print(result) ``` output: ``` [['BUD'], ['CDG'], ['DEL'], ['DOH'], ['DSM', 'ORD'], ['EWR', 'HND'], ['EWR', 'HND', 'ICN'], ['EWR', 'HND', 'JFK'], ['EWR', 'HND', 'JFK', 'LGA'], ['EWR', 'HND', 'JFK', 'LGA'], ['EWY', 'LHR'], ['EWY', 'LHR', 'SFO', 'DSM'], ['EWY', 'LHR', 'SFO', 'DSM', 'ORD', 'BGI'], ['EWY', 'LHR', 'SFO', 'DSM', 'ORD', 'BGI', 'LGA']] ```
``` myList = ['BUD', 'CDG', 'DEL', 'DOH', 'DSM,ORD', 'EWR,HND', 'EWR,HND,ICN', 'EWR,HND,JFK', 'EWR,HND,JFK,LGA', 'EWR,HND,JFK,LGA', 'EWY,LHR', 'EWY,LHR,SFO,DSM', 'EWY,LHR,SFO,DSM,ORD,BGI', 'EWY,LHR,SFO,DSM,ORD,BGI,LGA'] nested_list = [[element] for element in myList] ``` output : ``` [['BUD'], ['CDG'], ['DEL'], ['DOH'], ['DSM,ORD'], ['EWR,HND'], ['EWR,HND,ICN'], ['EWR,HND,JFK'], ['EWR,HND,JFK,LGA'], ['EWR,HND,JFK,LGA'], ['EWY,LHR'], ['EWY,LHR,SFO,DSM'], ['EWY,LHR,SFO,DSM,ORD,BGI'], ['EWY,LHR,SFO,DSM,ORD,BGI,LGA']] ```
12,079
17,256,602
Does anyone know why I can't overwrite an existing endpoint function if I have two url rules like this:

```
app.add_url_rule('/', view_func=Main.as_view('main'), methods=["GET"])
app.add_url_rule('/<page>/', view_func=Main.as_view('main'), methods=["GET"])
```

Traceback:

```
Traceback (most recent call last):
  File "demo.py", line 20, in <module>
    methods=["GET"])
  File ".../python2.6/site-packages/flask/app.py", line 62, in wrapper_func
    return f(self, *args, **kwargs)
  File ".../python2.6/site-packages/flask/app.py", line 984, in add_url_rule
    'existing endpoint function: %s' % endpoint)
AssertionError: View function mapping is overwriting an existing endpoint function: main
```
2013/06/23
[ "https://Stackoverflow.com/questions/17256602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2016434/" ]
This same issue happened to me when I had more than one API function in the module and tried to wrap each function with 2 decorators: 1. @app.route() 2. My custom @exception\_handler decorator I got this same exception because I tried to wrap more than one function with those two decorators: ``` @app.route("/path1") @exception_handler def func1(): pass @app.route("/path2") @exception_handler def func2(): pass ``` Specifically, it is caused by trying to register a few functions with the name **wrapper**: ``` def exception_handler(func): def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except Exception as e: error_code = getattr(e, "code", 500) logger.exception("Service exception: %s", e) r = dict_to_json({"message": e.message, "matches": e.message, "error_code": error_code}) return Response(r, status=error_code, mimetype='application/json') return wrapper ``` Changing the name of the function solved it for me (**wrapper.\_\_name\_\_ = func.\_\_name\_\_**): ``` def exception_handler(func): def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except Exception as e: error_code = getattr(e, "code", 500) logger.exception("Service exception: %s", e) r = dict_to_json({"message": e.message, "matches": e.message, "error_code": error_code}) return Response(r, status=error_code, mimetype='application/json') # Renaming the function name: wrapper.__name__ = func.__name__ return wrapper ``` Then, decorating more than one endpoint worked.
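As a side note (not part of the answer above, but a common idiom), `functools.wraps` does the renaming automatically, copying the wrapped function's name and docstring onto the wrapper. A simplified sketch with illustrative error handling:

```python
import functools

def exception_handler(func):
    @functools.wraps(func)  # copies func.__name__ onto wrapper
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            # simplified error handling for the sketch
            return str(e), getattr(e, "code", 500)
    return wrapper

@exception_handler
def func1():
    return "ok"
```

Because each wrapper now carries its own function's name, Flask sees distinct endpoint names and the `AssertionError` goes away.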
In case you are using Flask in a Python notebook, you need to restart the kernel every time you make changes to the code.
12,084
28,418,677
i just began playing with python 3 and got a little stuck. I have this code: ``` person = ['George', 'Andrew', 'Ryan', 'Jack', 'Daniel'] for item in person: opinion = input("What do you think about "+person[0]+"? ") print(person[0]+" is a "+opinion+"") ``` And i do not know how to make it ask about each of the persons in the list. I know person[0] is not good, but i do not know what to put there. EDIT: I tried fixing it with a loop: ``` person = ['George', 'Andrew', 'Ryan', 'Jack', 'Daniel'] i = 0 while i<5: opinion = input("What do you think about "+person[i]+"? ") print(person[i]+" is a "+opinion+"") i += 1 ``` And it works, but after it runs out of people on the list and i keep replying, i get an error. There has to be a better way
2015/02/09
[ "https://Stackoverflow.com/questions/28418677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3025008/" ]
The loop assigns each value in `person` to `item`; use that variable.

```
for item in person:
    opinion = input("What do you think about "+item+"? ")
    print(item+" is a "+opinion+"")
```

When you called `person[0]`, you were using the first value in `person` **every time**, which is not what you want.
What about:

```
for item in person:
    opinion = input("What do you think about "+item+"? ")
    print(item+" is a "+opinion+"")
```

And even better, if you want to make it more verbose:

```
# List with "s" to make it plural
persons = ['George', 'Andrew', 'Ryan', 'Jack', 'Daniel']

# List item without the "s"
for person in persons:
    opinion = input("What do you think about " + person + "? ")
    print(person + " is a " + opinion + "")
```
12,094
45,654,850
I would like to do stuff like this in C++:

```
for (int i = 0; i < 3; ++i)
{
    const auto& author = {"pierre", "paul", "jean"}[i];
    const auto& age = {12, 45, 43}[i];
    const auto& object = {o1, o2, o3}[i];
    print({"even", "without", "identifier"}[i]);
    ...
}
```

Does anyone know how to do this kind of trick? I do it a lot in Python. It helps me to factorize code nicely.
2017/08/12
[ "https://Stackoverflow.com/questions/45654850", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2443456/" ]
Looks like you should have used a vector of your custom class with `author`, `age`, `object` and `whatever` attributes, put it in a vector and do range for-loop over it - that would be idiomatic in C++: ``` struct foo { std::string author; int age; object_t object; whatever_t whatever; }; std::vector<foo> foos = { /* contents */ }; for(auto const& foo : foos) { // do stuff } ``` If you really want to, you can do: ``` const auto author = std::vector<std::string>{"pierre", "paul", "jean"}[i]; // ^ not a reference ``` but I'm not sure how well this will be optimised. You could also declare those vectors before the loop and keep the references.
Creating an object like `{"pierre", "paul", "jean"}` results in an initializer list, and an initializer list does not have any `[]` operator ([Why doesn't `std::initializer_list` provide a subscript operator?](https://stackoverflow.com/questions/17787394/why-doesnt-stdinitializer-list-provide-a-subscript-operator)). So you should convert it, e.g. `const auto& author = (std::vector<std::string>{"pierre", "paul", "jean"})[i];`.

Also, the reference symbol should not be there: you are creating a temporary object, so you would be storing a reference into a temporary.
12,097
22,703,333
I'm working in Python, trying to write code that builds the Fibonacci sequence and returns the results as a list. How would I go about doing so? I was able to write code that prints the values, just not as a list, and I'm unsure how to change it to return a list. (Here's the code I have that returns just the values, not a list.)

```
def fibo1(par):
    var1 = 0
    var2 = 1
    while var2 < par:
        print var2
        var3 = var1 + var2
        var1 = var2
        var2 = var3

def main():
    number = int(raw_input("What is the number? "))
    return (fibo1(number))

main()
```
2014/03/28
[ "https://Stackoverflow.com/questions/22703333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3471046/" ]
The `string` class is in the namespace `std`. You can remove the need for the `std::` scoping with a using-directive. You'd do best to include it inside the main function, so names don't collide if you use a library that defines the name `string` or `cout` or such.

```
#include <iostream>
#include <string>

int main()
{
    using namespace std;
    string a;
    cin >> a;
    cout << a << endl;
    return 0;
}
```
`string` is in `std` namespace, it's only valid to use `std::string` rather than `string` (the same for `std::cin`, `std::vector` etc). However, in practice, some compilers may let a program using `string` etc without `std::` prefix compile, which makes some programmers think it's OK to omit `std::`, but it's not in standard C++. So it's best to use: ``` #include <iostream> #include <string> int main() { std::string a; std::cin >> a; std::cout << a << std::endl; return 0; } ``` Note that it's NOT a good idea to use `using namespace std;` (though it's legal), especially NOT put it in a header. If you are tired of typing all the `std::`, declare all the names with namespace you **use** is one option: ``` #include <iostream> #include <string> using std::string; using std::cin; using std::cout; using std::endl; int main() { string a; cin >> a; cout << a << endl; return 0; } ```
12,098
841,096
I've recently hit a wall in a project I'm working on which uses PyQt. I have a QTreeView hooked up to a QAbstractItemModel which typically has thousands of nodes in it. So far, it works alright, but I realized today that selecting a lot of nodes is very slow. After some digging, it turns out that QAbstractItemModel.parent() is called way too often. I created minimal code to reproduce the problem: ``` #!/usr/bin/env python import sys import cProfile import pstats from PyQt4.QtCore import Qt, QAbstractItemModel, QVariant, QModelIndex from PyQt4.QtGui import QApplication, QTreeView # 200 root nodes with 10 subnodes each class TreeNode(object): def __init__(self, parent, row, text): self.parent = parent self.row = row self.text = text if parent is None: # root node, create subnodes self.children = [TreeNode(self, i, unicode(i)) for i in range(10)] else: self.children = [] class TreeModel(QAbstractItemModel): def __init__(self): QAbstractItemModel.__init__(self) self.nodes = [TreeNode(None, i, unicode(i)) for i in range(200)] def index(self, row, column, parent): if not self.nodes: return QModelIndex() if not parent.isValid(): return self.createIndex(row, column, self.nodes[row]) node = parent.internalPointer() return self.createIndex(row, column, node.children[row]) def parent(self, index): if not index.isValid(): return QModelIndex() node = index.internalPointer() if node.parent is None: return QModelIndex() else: return self.createIndex(node.parent.row, 0, node.parent) def columnCount(self, parent): return 1 def rowCount(self, parent): if not parent.isValid(): return len(self.nodes) node = parent.internalPointer() return len(node.children) def data(self, index, role): if not index.isValid(): return QVariant() node = index.internalPointer() if role == Qt.DisplayRole: return QVariant(node.text) return QVariant() app = QApplication(sys.argv) treemodel = TreeModel() treeview = QTreeView() treeview.setSelectionMode(QTreeView.ExtendedSelection) 
treeview.setSelectionBehavior(QTreeView.SelectRows) treeview.setModel(treemodel) treeview.expandAll() treeview.show() cProfile.run('app.exec_()', 'profdata') p = pstats.Stats('profdata') p.sort_stats('time').print_stats() ``` To reproduce the problem, just run the code (which does profiling) and select all nodes in the tree widget (either through shift selection or Cmd-A). When you quit the app, the profiling stats will show something like: ``` Fri May 8 20:04:26 2009 profdata 628377 function calls in 6.210 CPU seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 1 4.788 4.788 6.210 6.210 {built-in method exec_} 136585 0.861 0.000 1.182 0.000 /Users/hsoft/Desktop/slow_selection.py:34(parent) 142123 0.217 0.000 0.217 0.000 {built-in method createIndex} 17519 0.148 0.000 0.164 0.000 /Users/hsoft/Desktop/slow_selection.py:52(data) 162198 0.094 0.000 0.094 0.000 {built-in method isValid} 8000 0.055 0.000 0.076 0.000 /Users/hsoft/Desktop/slow_selection.py:26(index) 161357 0.047 0.000 0.047 0.000 {built-in method internalPointer} 94 0.000 0.000 0.000 0.000 /Users/hsoft/Desktop/slow_selection.py:46(rowCount) 404 0.000 0.000 0.000 0.000 /Users/hsoft/Desktop/slow_selection.py:43(columnCount) 94 0.000 0.000 0.000 0.000 {len} 1 0.000 0.000 6.210 6.210 <string>:1(<module>) 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} ``` The weird part in this data is how often parent() is called: 136k times for 2k nodes! Anyone has a clue why?
2009/05/08
[ "https://Stackoverflow.com/questions/841096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/103667/" ]
Try calling `setUniformRowHeights(true)` for your tree view: <https://doc.qt.io/qt-4.8/qtreeview.html#uniformRowHeights-prop> Also, there's a C++ tool called modeltest from qt labs. I'm not sure if there is something for python though: <https://wiki.qt.io/Model_Test>
I converted your very nice example code to PyQt5 and ran under Qt5.2 and can confirm that the numbers are still similar, i.e. inexplicably huge numbers of calls. Here for example is the top part of the report for start, cmd-A to select all, scroll one page, quit: ``` ncalls tottime percall cumtime percall filename:lineno(function) 1 14.880 14.880 15.669 15.669 {built-in method exec_} 196712 0.542 0.000 0.703 0.000 /Users/dcortes1/Desktop/scratch/treeview.py:36(parent) 185296 0.104 0.000 0.104 0.000 {built-in method createIndex} 20910 0.050 0.000 0.056 0.000 /Users/dcortes1/Desktop/scratch/treeview.py:54(data) 225252 0.036 0.000 0.036 0.000 {built-in method isValid} 224110 0.034 0.000 0.034 0.000 {built-in method internalPointer} 7110 0.020 0.000 0.027 0.000 /Users/dcortes1/Desktop/scratch/treeview.py:28(index) ``` And while the counts are really excessive (and I have no explanation), notice the cumtime values aren't so big. Also those functions could be recoded to run faster; for example in index(), is "if not self.nodes" ever true? Similarly, notice the counts for parent() and createIndex() are almost the same, hence index.isValid() is true much more often than not (reasonable, as end-nodes are much more numerous than parent nodes). Recoding to handle that case first would cut the parent() cumtime further. Edit: on second thought, such optimizations are "rearranging the deck chairs on the titanic".
12,099
71,398,447
In the code below, 21 is the hour, 53 is the minute, and 10 is the wait time. I want to send messages frequently in a loop, but I failed. I also tried a for loop but it is not working. Does anybody know how to send 100 messages on WhatsApp using Python? Please help me.

```
import pywhatkit
from flask import Flask

while 1:
    pywhatkit.sendwhatmsg("+9198xxxxxxxx", "Hi", 21, 53, 10)
```
2022/03/08
[ "https://Stackoverflow.com/questions/71398447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18126137/" ]
Use ``` final response = await http.post(Uri.parse(url), body: { "fecha_inicio": _fechaInicioBBDD, "fecha_fin": _fechaFinalBBDD, "latitud": _controllerLatitud.text, "longitud": _controllerLongitud.text, "calle": _controllerDireccion.text, "descripcion": _controllerDescripcion.text, "tipo_aviso": tipoAviso, "activar_x_antes": _horas.toString() }); ``` `_horas.toString()`
The documentation for [`http.post`](https://pub.dev/documentation/http/latest/http/post.html) states: > > `body` sets the body of the request. It can be a `String`, a `List<int>` or a `Map<String, String>`. > > > Since you are passing a `Map`, all keys and values in your `Map` are required to be `String`s. You should not be converting a `String` to a `double` as one of the `Map` values.
12,100
53,913,303
I'm trying to implement a menu in my tool, but I couldn't implement a switch case in Python. I know that Python only has dictionary mapping. How do I call parameterised methods in such a switch case? For example, I have this program:

```
def Choice(i):
    switcher = {
        1: subdomain(host),
        2: reverseLookup(host),
        3: lambda: 'two'
    }
    func = switcher.get(i, lambda: 'Invalid')
    print(func())
```

Here, I couldn't perform the parameterised call `subdomain(host)`. Please help.
2018/12/24
[ "https://Stackoverflow.com/questions/53913303", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10829002/" ]
Switch cases can be implemented using dictionary mapping in Python like so:

```
def Choice(i):
    switcher = {1: subdomain, 2: reverseLookup}
    func = switcher.get(i, 'Invalid')
    if func != 'Invalid':
        print(func(host))
```

The dictionary `switcher` maps to the right function based on the input to the function `Choice`. Note that the dictionary stores the functions themselves, without calling them; the parameterised call `func(host)` happens only after the lookup. The default case is handled using `switcher.get(i, 'Invalid')`, so if this returns `'Invalid'` you can give an error message to the user or ignore it. The call goes like this:

```
Choice(2)  # For example
```

Remember to set the value of `host` prior to calling `Choice`.
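A self-contained sketch of the same pattern, with stub handlers standing in for the question's `subdomain` and `reverseLookup` (the stub bodies are made up for illustration):

```python
def subdomain(host):
    return "subdomains of " + host

def reverse_lookup(host):
    return "reverse lookup of " + host

def choice(i, host):
    # the dict maps choices to functions; nothing is called until after the lookup
    switcher = {1: subdomain, 2: reverse_lookup}
    func = switcher.get(i)
    if func is None:
        return "Invalid"
    return func(host)

print(choice(1, "example.com"))  # subdomains of example.com
print(choice(9, "example.com"))  # Invalid
```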
**Try this** (the handlers are wrapped in lambdas so they are not called while building the dictionary):

```
def Choice(i):
    switcher = {
        1: lambda: subdomain(host),
        2: lambda: reverseLookup(host),
        3: lambda: 'two'
    }
    func = switcher.get(i, lambda: 'Invalid')
    print(func())

if __name__ == "__main__":
    argument = 0
    Choice(argument)
```
12,103
40,527,051
I'm trying to make my program allow the user to input vectors of the form (x,y,z) using Python's built-in input() function. If entered normally in Python, without using the input() function, it indexes each vector separately. For example,

```
>>> z = (1,2,3), (4,5,6), (7,8,9)
>>> z[1]
(4, 5, 6)
```

But when I try to use the input function I run into the following problem.

```
>>> z = input('What are the vectors? ')
What are the vectors? (1,2,3), (4,5,6), (7,8,9)
>>> z[1]
'1'
```

Why does using the input function turn it into a string, and is there a way around this? Thanks
2016/11/10
[ "https://Stackoverflow.com/questions/40527051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7141092/" ]
In Python 3, `input` always returns a string. You need to convert the string. For this type of input I recommend using `literal_eval` from the module `ast`:

```
import ast
vectors = ast.literal_eval('(1,2,3), (4,5,6), (7,8,9)')
vectors[1]  # (4, 5, 6)
```
In your example, `z` is just a string as input by the user. This string is: `"(1,2,3), (4,5,6), (7,8,9)"` so the second element, `z[1]` is just giving you `"1"`. If you want an actual vector object you would have to write code to parse the string input by the user. For example, you could delimit based on parentheses and convert the numbers individually. Hope this helps!
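Combining the two answers: parse the raw text before indexing it. Here a hard-coded string stands in for the user's `input()`:

```python
import ast

def parse_vectors(text):
    # literal_eval safely evaluates Python literals (tuples, numbers, ...)
    return ast.literal_eval(text)

z = parse_vectors('(1,2,3), (4,5,6), (7,8,9)')
print(z[1])  # (4, 5, 6)
```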
12,108
44,155,564
When trying to plot a graph with pyplot I am running the following code: ``` from matplotlib import pyplot as plt x = [6, 5, 4] y = [3, 4, 5] plt.plot(x, y) plt.show() ``` This is returning the following error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-3-59955f73b463> in <module>() 4 y = [3, 4, 5] 5 ----> 6 plt.plot(x, y) 7 plt.show() /usr/local/lib/python2.7/site-packages/matplotlib/pyplot.pyc in plot(*args, **kwargs) 3304 @_autogen_docstring(Axes.plot) 3305 def plot(*args, **kwargs): -> 3306 ax = gca() 3307 # Deprecated: allow callers to override the hold state 3308 # by passing hold=True|False /usr/local/lib/python2.7/site-packages/matplotlib/pyplot.pyc in gca(**kwargs) 948 matplotlib.figure.Figure.gca : The figure's gca method. 949 """ --> 950 return gcf().gca(**kwargs) 951 952 # More ways of creating axes: /usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in gca(self, **kwargs) 1367 1368 # no axes found, so create one which spans the figure -> 1369 return self.add_subplot(1, 1, 1, **kwargs) 1370 1371 def sca(self, a): /usr/local/lib/python2.7/site-packages/matplotlib/figure.pyc in add_subplot(self, *args, **kwargs) 1019 self._axstack.remove(ax) 1020 -> 1021 a = subplot_class_factory(projection_class)(self, *args, **kwargs) 1022 1023 self._axstack.add(key, a) /usr/local/lib/python2.7/site-packages/matplotlib/axes/_subplots.pyc in __init__(self, fig, *args, **kwargs) 71 72 # _axes_class is set in the subplot_class_factory ---> 73 self._axes_class.__init__(self, fig, self.figbox, **kwargs) 74 75 def __reduce__(self): /usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.pyc in __init__(self, fig, rect, facecolor, frameon, sharex, sharey, label, xscale, yscale, axisbg, **kwargs) 527 528 # this call may differ for non-sep axes, e.g., polar --> 529 self._init_axis() 530 if axisbg is not None and facecolor is not None: 531 raise TypeError('Both 
axisbg and facecolor are not None. ' /usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.pyc in _init_axis(self) 620 def _init_axis(self): 621 "move this out of __init__ because non-separable axes don't use it" --> 622 self.xaxis = maxis.XAxis(self) 623 self.spines['bottom'].register_axis(self.xaxis) 624 self.spines['top'].register_axis(self.xaxis) /usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in __init__(self, axes, pickradius) 674 self._minor_tick_kw = dict() 675 --> 676 self.cla() 677 self._set_scale('linear') 678 /usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in cla(self) 758 self._set_artist_props(self.label) 759 --> 760 self.reset_ticks() 761 762 self.converter = None /usr/local/lib/python2.7/site-packages/matplotlib/axis.pyc in reset_ticks(self) 769 # define 1 so properties set on ticks will be copied as they 770 # grow --> 771 cbook.popall(self.majorTicks) 772 cbook.popall(self.minorTicks) 773 AttributeError: 'module' object has no attribute 'popall' ``` My matplotlib has always worked fine, but this error popped up after I reinstalled it using homebrew and pip yesterday. I am running the following: ``` OS: Mac OS Sierra 10.12.5 Python: 2.7.13 Matplotlib: 2.0.2 ``` I have tried a complete reinstall of matplotlib and python again since but still getting the same error. I have also tried multiple editors (Jupiter, Sublime, Terminal). Any help would be very appreciated!
2017/05/24
[ "https://Stackoverflow.com/questions/44155564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8058705/" ]
I had this exact error and in my case it turned out to be that both `pip` and `conda` had installed copies of `matplotlib`. In a 'mixed' environment with `pip` used to fill gaps in Anaconda, `pip` can automatically install upgrades to (already-installed) dependencies of the package you asked to install, creating duplication. To test for this: ``` $ conda list matplotlib # packages in environment at /home/ec2-user/anaconda3: # matplotlib 2.0.2 np113py35_0 matplotlib 2.1.1 <pip> ``` Problem! Fix: ``` $ pip uninstall matplotlib ``` Probably a good idea to force `matplotlib` upgrade to the version `pip` wanted: ``` $ conda install matplotlib=2.1.1 ```
I have solved my problem, although I am not entirely sure why this has solved it. I used `pip uninstall matplotlib` to remove the Python install, and also updated my `~/.zshrc` and `~/.bash_profile` paths to contain:

HomeBrew: `export PATH=/usr/local/bin:$PATH`

Python: `export PATH=/usr/local/share/python:$PATH`

This has solved the issue. I am guessing the issue was caused by having two installs of matplotlib and having the path in `~/.bash_profile` but not in `~/.zshrc`.
12,111
18,153,913
I used `python -mtimeit` to test and found that it takes more time to `from Module import Sth` compared to `import Module`. E.g.

```
$ python -mtimeit "import math; math.sqrt(4)"
1000000 loops, best of 3: 0.618 usec per loop
$ python -mtimeit "from math import sqrt; sqrt(4)"
1000000 loops, best of 3: 1.11 usec per loop
```

The same holds in other cases. Could someone please explain the rationale behind this? Thank you!
2013/08/09
[ "https://Stackoverflow.com/questions/18153913", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2325350/" ]
There are two issues here. The first step is to figure out which part is faster: the import statement, or the call. So, let's do that: ``` $ python -mtimeit 'import math' 1000000 loops, best of 3: 0.555 usec per loop $ python -mtimeit 'from math import sqrt' 1000000 loops, best of 3: 1.22 usec per loop $ python -mtimeit -s 'from math import sqrt' 'sqrt(10)' 10000000 loops, best of 3: 0.0879 usec per loop $ python -mtimeit -s 'import math' 'math.sqrt(10)' 10000000 loops, best of 3: 0.122 usec per loop ``` (That's with Apple CPython 2.7.2 64-bit on OS X 10.6.4 on my laptop. But python.org 3.4 dev on the same laptop and 3.3.1 on a linux box give roughly similar results. With PyPy, the smarter caching makes it impossible to test, since everything finishes in 1ns… Anyway, I think these results are probably about as portable as microbenchmarks ever can be.) So it turns out that the `import` statement is more than twice as fast; after that, calling the function is a little slower, but not nearly enough to make up for the cheaper `import`. (Keep in mind that your test was doing an `import` for each call. In real-life code, of course, you tend to call things a lot more than once per `import`. So, we're really looking at an edge case that will rarely affect real code. But as long as you keep that in mind, we proceed.) --- Conceptually, you can understand why the `from … import` statement takes longer: it has more work to do. The first version has to find the module, compile it if necessary, and execute it. The second version has to do all of that, and then *also* extract `sqrt` and insert it into your current module's globals. So, it has to be at least a little slower. If you look at the bytecode (e.g., by using the [`dis`](http://docs.python.org/2/library/dis.html) module and calling `dis.dis('import math')`), this is exactly the difference. 
Compare: ``` 0 LOAD_CONST 0 (0) 3 LOAD_CONST 1 (None) 6 IMPORT_NAME 0 (math) 9 STORE_NAME 0 (math) 12 LOAD_CONST 1 (None) 15 RETURN_VALUE ``` … to: ``` 0 LOAD_CONST 0 (0) 3 LOAD_CONST 1 (('sqrt',)) 6 IMPORT_NAME 0 (math) 9 IMPORT_FROM 1 (sqrt) 12 STORE_NAME 1 (sqrt) 15 POP_TOP 16 LOAD_CONST 2 (None) 19 RETURN_VALUE ``` The extra stack manipulation (the `LOAD_CONST` and `POP_TOP`) probably doesn't make much difference, and using a different argument to `STORE_NAME` is unlikely to matter at all… but the `IMPORT_FROM` is a significant extra step. --- Surprisingly, a quick&dirty attempt to profile the `IMPORT_FROM` code shows that the majority of the cost is actually looking up the appropriate globals to import into. I'm not sure why, but… that implies that importing a whole slew of names should be not much slower than importing just one. And, as you pointed out in a comment, that's exactly what you see. (But don't read too much into that. There are many reasons that `IMPORT_FROM` might have a large constant factor and only a small linear one, and we're not exactly throwing a huge number of names at it.) --- One last thing: If this ever really does matter in real code, and you want to get the best of both worlds, `import math; sqrt = math.sqrt` is faster than `from math import sqrt`, but gives you the same small speedup to lookup/call time. (But again, I can't imagine any real code where this would matter. The only time you'll ever care how long `sqrt` takes is when you're calling it a billion times, at which point you won't care how long the import takes. Plus, if you really do need to optimize that, create a local scope and bind `sqrt` there to avoid the global lookup entirely.)
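In Python 3 you can reproduce this comparison yourself by passing source strings straight to `dis.dis`, which compiles and disassembles them:

```python
import dis

dis.dis('import math')            # no IMPORT_FROM step
dis.dis('from math import sqrt')  # includes the extra IMPORT_FROM
```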
This is not an answer, but some information. It needed formatting so I didn't include it as a comment. Here is the bytecode for 'from math import sqrt': ``` >>> from math import sqrt >>> import dis >>> def f(n): return sqrt(n) ... >>> dis.dis(f) 1 0 LOAD_GLOBAL 0 (sqrt) 3 LOAD_FAST 0 (n) 6 CALL_FUNCTION 1 9 RETURN_VALUE ``` And for 'import math' ``` >>> import math >>> import dis >>> dis.dis(math.sqrt) >>> def f(n): return math.sqrt(n) ... >>> dis.dis(f) 1 0 LOAD_GLOBAL 0 (math) 3 LOAD_ATTR 1 (sqrt) 6 LOAD_FAST 0 (n) 9 CALL_FUNCTION 1 12 RETURN_VALUE ``` Interestingly, the faster method has one more instruction.
12,114
51,481,021
My friend told me about the [Josephus problem](https://en.wikipedia.org/wiki/Josephus_problem), where you have `41` people sitting in a circle. Person number `1` has a sword, kills the person on the right and passes the sword to the next person. This goes on until there is only one person left alive. I came up with this solution in python:

```
print('''There are n people in the circle. You give the knife to one of them,
he stabs person on the right and gives the knife to the next person.
What will be the number of whoever will be left alive?''')

pplList = []
numOfPeople = int(input('How many people are there in the circle?'))
for i in range(1, (numOfPeople + 1)):
    pplList.append(i)
print(pplList)

while len(pplList) > 1:
    for i in pplList:
        if i % 2 == 0:
            del pplList[::i]
    print(f'The number of person which survived is {pplList[0]+1}')
    break
```

But it only works up to `42` people. What should I do, or how should I change the code so it would work for, for example, `100, 1000` and more people in the circle? I've looked up the Josephus problem and seen different solutions, but I'm curious if my answer could be correct after some minor adjustment or should I start from scratch.
2018/07/23
[ "https://Stackoverflow.com/questions/51481021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9274224/" ]
I see two serious bugs.

1. I guarantee that `del pplList[::i]` does nothing resembling what you hope it does.
2. When you wrap around the circle, it is important to know if you killed the last person in the list (first in list kills again) or didn't (first person in list dies).

And contrary to your assertion that it works up to 42, it does not work for many smaller numbers. The first that it doesn't work for is 2. (It gives 3 as an answer instead of 1.)
The problem is you are not considering the guy at the end if he is not killed. For example, if there are 9 people, after killing 8, person 9 has the sword, but you are starting with 1 instead of 9 in the next loop. As someone mentioned already, it is not working for smaller numbers either. Actually, if you look closely, you're killing odd numbers in the very first loop instead of even numbers, which is very wrong. You can correct your code as follows:

```py
while len(pplList) > 1:
    if len(pplList) % 2 == 0:
        pplList = pplList[::2]   # omitting every second number
    elif len(pplList) % 2 == 1:
        last = pplList[-1]       # last one won't be killed
        pplList = pplList[:-2:2]
        pplList.insert(0, last)  # adding the last to the start
```

There are very effective methods to solve the problem other than this one. Check [this link](https://medium.com/@sudheernaidu53/the-josephus-problem-36cbf94f3a64) to know more.
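For what it's worth, the k = 2 variant of the puzzle also has a well-known closed form: writing n = 2^m + l (with 0 ≤ l < 2^m), the survivor is person 2l + 1. A sketch cross-checking that formula against a direct simulation, both independent of the code above:

```python
def josephus_closed_form(n):
    # Write n = 2**m + l with 0 <= l < 2**m; the survivor is 2*l + 1.
    m = n.bit_length() - 1
    l = n - (1 << m)
    return 2 * l + 1

def josephus_sim(n):
    # Direct simulation: people[idx] holds the sword and kills the next
    # person clockwise; the sword then passes to the one after the victim.
    people = list(range(1, n + 1))
    idx = 0
    while len(people) > 1:
        victim = (idx + 1) % len(people)
        del people[victim]
        idx = victim % len(people)
    return people[0]

assert josephus_closed_form(41) == josephus_sim(41) == 19
assert all(josephus_closed_form(n) == josephus_sim(n) for n in range(1, 200))
```

For the 41 people in the question this gives survivor 19, matching the classic answer.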
12,115
61,975,308
I am trying to read the url code using URLLIB. Here is my code:

```
import urllib
url = "https://www.facebook.com/fads0000fass"
r = urllib.request.urlopen(url)
p = r.code
if(p == "HTTP Error 404: Not Found" ):
    print("hello")
else:
    print("null")
```

The url I am using will show error code 404 but I am not able to read it. I also tried `if(p == 404)` but I get the same issue. I can read other codes, i.e. 200, 201, etc. Can you please help me fix it?

traceback:

```
Traceback (most recent call last):
  File "gd.py", line 7, in <module>
    r = urllib.request.urlopen(url)
  File "/usr/lib64/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/usr/lib64/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib64/python3.7/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/usr/lib64/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/usr/lib64/python3.7/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
```
2020/05/23
[ "https://Stackoverflow.com/questions/61975308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13288913/" ]
I'm not sure that's what you're asking.

```
import urllib.request

url = "https://www.facebook.com/fads0000fass"

try:
    r = urllib.request.urlopen(url)
    p = r.code
except urllib.error.HTTPError:
    print("hello")
```
In order to reach your if statement, your code needs exception handling. An exception is being raised when you call `urlopen` on line 7. See the first step of your traceback:

```
File "gd.py", line 7, in <module>
  r = urllib.request.urlopen(url)
```

The exception happens here, which causes your code to exit, so further statements aren't evaluated. To get past this, you must [handle the exception](https://docs.python.org/3/tutorial/errors.html#handling-exceptions).

```py
import urllib.request

url = "https://www.facebook.com/fads0000fass"

try:
    r = urllib.request.urlopen(url)
except urllib.error.HTTPError as e:
    # More useful
    # print(f"{e.code}: {e.reason}\n\n{e.headers}")
    if e.code in [404]:
        print("hello")
    else:
        print("null")
```

---

Going beyond this, if you want something more like your original logic, I'd recommend using the [requests](https://pypi.org/project/requests/) library. I'd actually recommend using requests for all of your HTTP needs whenever possible; it's exceptionally good.

```py
import requests

r = requests.get(url)
p = r.status_code
if r.status_code == 404:
    print("hello")
else:
    print("null")
```
12,116
71,902,562
I have started YOLOv5 training with custom data. The command I have used:

```
!python train.py --img 640 --batch-size 32 --epochs 5 --data /content/drive/MyDrive/yolov5_dataset/dataset_Trafic/data.yaml --cfg /content/drive/MyDrive/yolov5/models/yolov5s.yaml --name Model
```

Training started as below & completed:

[![enter image description here](https://i.stack.imgur.com/Iddtf.png)](https://i.stack.imgur.com/Iddtf.png)

For resuming/continuing for more epochs I have the command below:

```
!python train.py --img 640 --batch-size 32 --epochs 6 --data /content/drive/MyDrive/yolov5_dataset/dataset_Trafic/data.yaml --weights /content/drive/MyDrive/yolov5/runs/train/Model/weights/best.pt --cache --exist-ok
```

[![enter image description here](https://i.stack.imgur.com/lax4X.png)](https://i.stack.imgur.com/lax4X.png)

But the training still starts from scratch. How can I continue from the previous epoch? I also tried the resume command

```
!python train.py --epochs 10 --resume
```

but I am getting the error message below:

[![enter image description here](https://i.stack.imgur.com/4Fb2A.png)](https://i.stack.imgur.com/4Fb2A.png)
2022/04/17
[ "https://Stackoverflow.com/questions/71902562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2707200/" ]
#### Case 1

Here we consider the statement:

```
Animal a1 = func1();
```

The call expression `func1()` is an **rvalue** of type `Animal`. And from C++17 onwards, due to [mandatory copy elision](https://en.cppreference.com/w/cpp/language/copy_elision#Mandatory_elision_of_copy/move_operations):

> Under the following circumstances, the compilers are required to omit the copy and move construction of class objects, even if the copy/move constructor and the destructor have observable side-effects. The objects are constructed directly into the storage where they would otherwise be copied/moved to. The **copy/move constructors need not be present or accessible**:
>
> * In a return statement, when the operand is a prvalue of the same class type (ignoring cv-qualification) as the function return type.

That is, the object is constructed directly into the storage where it would otherwise be copied/moved to, so in this case (for C++17) there is no need for a copy/move constructor to be available. And so this statement works.

#### Case 2

Here we consider the statement:

```
Animal a2 = func2();
```

Here, from [non-mandatory copy elision](https://en.cppreference.com/w/cpp/language/copy_elision#Mandatory_elision_of_copy/move_operations):

> Under the following circumstances, the compilers are permitted, but not required to omit the copy and move (since C++11) construction of class objects even if the copy/move (since C++11) constructor and the destructor have observable side-effects. The objects are constructed directly into the storage where they would otherwise be copied/moved to. This is an optimization: even when it takes place and the copy/move (since C++11) constructor is not called, it still **must be present and accessible** (as if no optimization happened at all), otherwise the program is ill-formed:
>
> * In a return statement, when the operand is the name of a non-volatile object with automatic storage duration, which isn't a function parameter or a catch clause parameter, and which is of the same class type (ignoring cv-qualification) as the function return type.

That is, the copy/move constructors are required to exist (these ctors must be present and accessible), but since you've explicitly marked them as **deleted**, this statement fails with the error:

```
error: use of deleted function ‘Animal::Animal(Animal&&)’
```

The error can also be seen [here](https://onlinegdb.com/p_Ws9ITJV).
In C++17 there is a new set of rules about temporary materialization. In a simple explanation, an expression that evaluates to a prvalue (a temporary) doesn't immediately create an object, but is instead a recipe for creating an object. So in your example `Animal()` doesn't create an object straight away, which is why you can return it even if the copy and move constructors are deleted. In your main, assigning the prvalue to `a` triggers the temporary materialization, so the object is only now created, directly in the scope of `main`. Throughout all of this there is a single object, so there is no copy or move operation.
12,117
4,175,697
I am writing a daemon server using Python; sometimes there are Python runtime errors, for example a variable type is not correct. Such an error should not cause the process to exit. Is it possible for me to redirect these runtime errors to a log file?
2010/11/14
[ "https://Stackoverflow.com/questions/4175697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/197036/" ]
It looks like you are asking two questions. To prevent your process from exiting on errors, you need to catch all [`exception`](http://www.python.org/doc//current/library/exceptions.html)s that are raised, using [`try...except...finally`](http://www.python.org/doc//current/reference/compound_stmts.html#try). You also wish to redirect all output to a log. Happily, Python provides a comprehensive [`logging`](http://www.python.org/doc//current/library/logging.html) module for your convenience.

An example, for your delight and delectation:

```
#!/usr/bin/env python
import logging
logging.basicConfig(filename='warning.log', level=logging.WARNING)

try:
    1/0
except ZeroDivisionError, e:
    logging.warning('The following error occurred, yet I shall carry on regardless: %s', e)
```

This graciously emits:

```
% cat warning.log
WARNING:root:The following error occurred, yet I shall carry on regardless: integer division or modulo by zero
```
Look at the `traceback` module. If you catch a `RuntimeError`, you can write it to the log (look at the `logging` module for that).
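The two suggestions combine naturally: `Logger.exception` records the same traceback the `traceback` module would format by hand, while the process carries on. A minimal sketch that logs to an in-memory stream instead of a file so the result can be inspected directly:

```python
import io
import logging

# Route this logger's output into a StringIO we can inspect.
stream = io.StringIO()
logger = logging.getLogger("daemon")
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.ERROR)

def run_task(task):
    try:
        task()
    except Exception:
        # .exception() logs at ERROR level and appends the full traceback,
        # so the daemon survives the error but keeps a complete record.
        logger.exception("task failed, carrying on")

run_task(lambda: 1 / 0)  # a runtime error that must not kill the process

assert "task failed" in stream.getvalue()
assert "ZeroDivisionError" in stream.getvalue()
```

With `logging.basicConfig(filename=...)` instead of the stream handler, the same record lands in a log file, as in the answer above.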
12,118
17,860,717
Spent almost more than 30 mins of my time trying all the different possibilities. Finally now I'm exhausted. Can someone please help me with this quote problem?

```
def remote_shell_func_execute():
    with settings(host_string='user@XXX.yyy.com', warn_only=True):
        process = run("subprocess.Popen(\["/root/test/shell_script_for_test.sh func2"\],shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)")
        process.wait()
        for line in process.stdout.readlines():
            print(line)
```

When I run the fab, I get:

```
fab remote_shell_func_execute
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/Fabric-1.6.1-py2.7.egg/fabric/main.py", line 654, in main
    docstring, callables, default = load_fabfile(fabfile)
  File "/usr/local/lib/python2.7/site-packages/Fabric-1.6.1-py2.7.egg/fabric/main.py", line 165, in load_fabfile
    imported = importer(os.path.splitext(fabfile)[0])
  File "/home/fabfile.py", line 18
    process = run("subprocess.Popen(\["/root/test/shell_script_for_test.sh func2"\],shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)")
                                      ^
SyntaxError: invalid syntax
```
2013/07/25
[ "https://Stackoverflow.com/questions/17860717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2436055/" ]
Just use a single quoted string.

```
run('subprocess.Popen(\["/root/test/shell_script_for_test.sh func2"\],shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)')
```

Or escape the inner `"`.

```
run("subprocess.Popen(\[\"/root/test/shell_script_for_test.sh func2\"\],shell=True,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)")
```
When you escape quotes, the escape backslash must go directly before the quote character:

```
"[\"/..."
```

Alternatively, use single quotes for the string; this avoids the need for escaping at all:

```
'["/...'
```
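Both quoting styles (and a triple-quoted one) build the very same string, which is easy to verify. Note that this sketch drops the stray `\[` escapes from the question, since a backslash before `[` is not a valid Python escape and isn't needed:

```python
# Three ways to spell the identical command string.
single = 'subprocess.Popen(["/root/test/shell_script_for_test.sh func2"], shell=True)'
double = "subprocess.Popen([\"/root/test/shell_script_for_test.sh func2\"], shell=True)"
triple = """subprocess.Popen(["/root/test/shell_script_for_test.sh func2"], shell=True)"""

assert single == double == triple
assert '"' in single  # the inner double quotes survive intact
```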
12,119
7,314,925
I'm saving data to a PostgreSQL backend through Django. Many of the fields in my models are DecimalFields set to arbitrarily high max_digits and decimal_places, corresponding to numeric columns in the database backend. The data in each column have a precision (or number of decimal places) that is not known *a priori*, and each datum in a given column need not have the same precision. For example, arguments to a model may look like:

```
{'dist': Decimal("94.3"), 'dist_e': Decimal("1.2")}
{'dist': Decimal("117"), 'dist_e': Decimal("4")}
```

where the keys are database column names. Upon output, I need to preserve and redisplay those data with the precision with which they were read in. In other words, after the database is queried, the displayed data need to look exactly like the data that were read in, with no additional or missing trailing 0's in the decimals.

When queried, however, either in a Django shell or in the admin interface, all of the DecimalField data come back with many trailing 0's. I have seen similar questions answered for money values, where the precision (2 decimal places) is both known and the same for all data in a given column. However, how might one best preserve the exact precision represented by Decimal values in Django and numeric values in PostgreSQL when the precision is not the same and not known beforehand?

EDIT: Possibly an additional useful piece of information: When viewing the table to which the data are saved in a Django dbshell, the many trailing 0's are also present. The Python Decimal value is apparently converted to the maximum precision value specified in the models.py file upon being saved to the PostgreSQL backend.
2011/09/06
[ "https://Stackoverflow.com/questions/7314925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/929783/" ]
If you need perfect parity forwards and backwards, you'll need to use a CharField. Any number-based database field is going to interact with your data, munging it in some way or another. Now, I know you mentioned not being able to know the digit length of the data points, and a CharField requires some length. You can either set it arbitrarily high (1000, 2000, etc.) or I suppose you could use a TextField instead. However, with either approach, you're going to be wasting a lot of database resources in most scenarios. I would suggest modifying your approach such that extra zeros at the end don't matter (for display purposes you could always chop them off), or such that the precision is no longer arbitrary.
Since I asked this question awhile ago and the answer remains the same, I'll share what I found should it be helpful to anyone in a similar position. Django doesn't have the ability to take advantage of the PostgreSQL Numerical column type with arbitrary precision. In order to preserve the display precision of data I upload to my database, and in order to be able to perform mathematical calculations on values obtained from database queries without first recasting strings into python Decimal types, I opted to add an extra precision column for every numerical column in the database. The precision value is an integer indicating how many digits after the decimal point are required. The datum `4.350` is assigned a value of `3` in its corresponding precision column. Normally displayed integers (e.g. `2531`) have a precision entry of `0`. However, large integers reported in scientific notation are assigned a negative integer to preserve their display precision. The value `4.320E+33`, for example, gets the precision entry `-3`. The database recognizes that all objects with negative precision values should be re-displayed in scientific notation. This solution adds some complexity to the structure and code surrounding the database, but it has proven effective. It also allows me to accurately preserve precision through calculations like converting to/from log and linear values.
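As a side note on the precision-column approach: `decimal.Decimal` itself remembers how many digits were entered, so the integer stored in the precision column can be derived from the value rather than tracked by hand. A sketch using the figures from the question and answer:

```python
from decimal import Decimal

# str() round-trips the entered digits exactly, trailing zeros included.
assert str(Decimal("94.3")) == "94.3"
assert str(Decimal("4.350")) == "4.350"
assert str(Decimal("4.320E+33")) == "4.320E+33"  # scientific form is kept too

# The exponent in the internal tuple encodes the entered precision:
# three digits after the point -> exponent -3, a plain integer -> exponent 0.
assert Decimal("4.350").as_tuple().exponent == -3
assert Decimal("2531").as_tuple().exponent == 0
```

So the value "4.350" with its precision entry of 3 corresponds directly to the Decimal's exponent of -3, as long as the value is stored or transmitted as a string rather than coerced to a fixed-scale numeric.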
12,120
46,258,924
I am trying to recreate some code from MATLAB using numpy, and I cannot find out how to store a variable number of matrices. In MATLAB I used the following code:

```
for i = 1:rows
    K{i} = zeros(5,4);   %create 5x4 matrix
    K{i}(1,1)= ET(i,1);  %put knoop i in table
    K{i}(1,3)= ET(i,2);  %put knoop j in table
    ... *do some stuff with it*
end
```

What I assumed I needed to do was to create a list of matrices, but I've only been able to store single arrays in a list, not matrices. Something like this, but then working:

```
for i in range(ET.shape[0]):
    K[[i]] = np.zeros((5, 4))
    K[[i]][1, 2] = ET[i, 2]
```

I've tried looking at <https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html> but it didn't help me. Looking through somewhat similar questions, a dirty method seems to be using globals and then changing the variable name, like this:

```
for x in range(0, 9):
    globals()['string%s' % x] = 'Hello'

print(string3)
```

Is this the best way for me to achieve my goal, or is there a proper way of storing multiple matrices in a variable? Or am I wanting something that I shouldn't want because Python has a different way of handling it?
2017/09/16
[ "https://Stackoverflow.com/questions/46258924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8620430/" ]
In the MATLAB code you are using a cell array. Cells are generic containers. The equivalent in Python is a regular [list](https://docs.python.org/2/tutorial/introduction.html#lists), not a numpy structure. You can create your numpy arrays and then store them in a list like so:

```
import numpy as np

array1 = np.array([1, 2, 3, 4])       # Numpy array (1D)
array2 = np.matrix([[4, 5], [6, 7]])  # Numpy matrix
array3 = np.zeros((3, 4))             # 2D numpy array

array_list = [array1, array2, array3]  # List containing the numpy objects
```

So your code would need to be modified to look more like this:

```
K = []
for i in range(rows):
    K.append(np.zeros((5, 4)))  # create 5x4 matrix
    K[i][1, 1] = ET[i, 1]       # put knoop i in table
    K[i][1, 3] = ET[i, 2]       # put knoop j in table
    # ... *do some stuff with it*
```

If you are just getting started with scientific computing in Python, this [article](http://engineeringterminal.com/electrical-engineering/tutorials/intro-to-scipy-for-matlab-users.html) is helpful.
How about something like this:

```
import numpy as np

myList = []
for i in range(100):
    mtrx = np.zeros((5, 4))
    mtrx[1, 2] = 7
    mtrx[3, 0] = -5
    myList.append(mtrx)
```
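Since all the matrices here share one shape, a single 3-D array is an alternative worth knowing about: one block of shape `(rows, 5, 4)` instead of a list of `(5, 4)` matrices. A sketch where the variable names follow the question but the `ET` numbers are invented:

```python
import numpy as np

rows = 3
ET = np.array([[10, 20], [30, 40], [50, 60]])  # hypothetical input data

# One (rows, 5, 4) block; K[i] plays the role of MATLAB's K{i}.
K = np.zeros((rows, 5, 4))
K[:, 0, 0] = ET[:, 0]  # NumPy indexing is 0-based, unlike MATLAB's (1,1)
K[:, 0, 2] = ET[:, 1]

assert K.shape == (3, 5, 4)
assert K[1, 0, 0] == 30 and K[1, 0, 2] == 40
```

The list-of-arrays approach stays the right choice when the matrices have differing shapes, exactly as a MATLAB cell array would.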
12,121
1,242,904
I use CMake to build my application. How can I find where the python site-packages directory is located? I need the path in order to compile an extension to python. CMake has to be able to find the path on all three major OSes, as I plan to deploy my application on Linux, Mac and Windows.

I tried using

```
include(FindPythonLibs)
find_path( PYTHON_SITE_PACKAGES site-packages ${PYTHON_INCLUDE_PATH}/.. )
```

however that does not work. I can also obtain the path by running

```
python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
```

in the shell, but how would I invoke that from CMake?

SOLUTION: Thanks, Alex. So the command that gives me the site-packages dir is:

```
execute_process(
  COMMAND python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
  OUTPUT_VARIABLE PYTHON_SITE_PACKAGES
  OUTPUT_STRIP_TRAILING_WHITESPACE)
```

The OUTPUT_STRIP_TRAILING_WHITESPACE option is needed to remove the trailing newline.
2009/08/07
[ "https://Stackoverflow.com/questions/1242904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/134397/" ]
You can execute external processes in cmake with [execute\_process](http://www.cmake.org/cmake/help/cmake2.6docs.html#command:execute_process) (and get the output into a variable if needed, as it would be here).
I suggest using `get_python_lib(True)` if you are making this extension as a dynamic library. The first parameter should be true if you need the platform-specific location (on 64-bit Linux machines, this could be `/usr/lib64` instead of `/usr/lib`).
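On the Python side of that one-liner, note that `distutils` was removed from the standard library in Python 3.12; the standard-library `sysconfig` module reports the same directory. A sketch of the modern equivalent:

```python
import sysconfig

# Modern replacement for distutils.sysconfig.get_python_lib():
site_packages = sysconfig.get_path("purelib")   # pure-Python packages
plat_packages = sysconfig.get_path("platlib")   # platform-specific ones

assert isinstance(site_packages, str) and site_packages
assert site_packages == sysconfig.get_paths()["purelib"]
```

The `platlib` path corresponds to `get_python_lib(True)` from the other answer: on some 64-bit Linux systems it differs from `purelib`.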
12,122
55,598,548
I'm trying to update my Spyder to fix some error in my Spyder 3.2.3. But when I called `conda update spyder` mentioned in (<https://github.com/spyder-ide/spyder/issues/9019#event-2225858161>), the Anaconda prompt showed as follow: ![enter image description here](https://i.stack.imgur.com/IFWTL.png) and the Spyder wasn't updated to the latest version (3.3.3). I guessed the reason I couldn't update Spyder is because my Conda isn't the latest version, so I ran `conda update -n base -c defaults conda` However after that (update conda to latest version 4.6.11) I found that all my Spyder and my Anaconda Navigator could not be opened. It seems that the commands not only update the Conda, but also update some other packages to py3.7. When I called `conda update spyder` again, the prompt showed as follow: ``` WARNING: The conda.compat module is deprecated and will be removed in a future release. Collecting package metadata: done Solving environment: | The environment is inconsistent, please check the package plan carefully The following packages are causing the inconsistency: - defaults/win-64::anaconda==5.3.1=py37_0 - https://mirrors.ustc.edu.cn/anaconda/pkgs/free/win-64::anaconda-navigator==1.6.4=py36_0 - defaults/win-64::astropy==3.0.4=py37hfa6e2cd_0 - defaults/win-64::blaze==0.11.3=py37_0 - defaults/win-64::bottleneck==1.2.1=py37h452e1ab_1 - defaults/win-64::dask==0.19.1=py37_0 - defaults/win-64::datashape==0.5.4=py37_1 - defaults/win-64::h5py==2.8.0=py37h3bdd7fb_2 - defaults/win-64::imageio==2.4.1=py37_0 - defaults/win-64::matplotlib==2.2.3=py37hd159220_0 - defaults/win-64::mkl-service==1.1.2=py37hb217b18_5 - defaults/win-64::mkl_fft==1.0.4=py37h1e22a9b_1 - defaults/win-64::mkl_random==1.0.1=py37h77b88f5_1 - defaults/win-64::numba==0.39.0=py37h830ac7b_0 - defaults/win-64::numexpr==2.6.8=py37h9ef55f4_0 - defaults/win-64::numpy-base==1.15.1=py37h8128ebf_0 - defaults/win-64::odo==0.5.1=py37_0 - defaults/win-64::pandas==0.23.4=py37h830ac7b_0 - 
defaults/win-64::patsy==0.5.0=py37_0 - defaults/win-64::pytables==3.4.4=py37he6f6034_0 - defaults/win-64::pytest-arraydiff==0.2=py37h39e3cac_0 - defaults/win-64::pytest-astropy==0.4.0=py37_0 - defaults/win-64::pytest-doctestplus==0.1.3=py37_0 - defaults/win-64::pywavelets==1.0.0=py37h452e1ab_0 - defaults/win-64::scikit-image==0.14.0=py37h6538335_1 - defaults/win-64::scikit-learn==0.19.2=py37heebcf9a_0 - defaults/win-64::scipy==1.1.0=py37h4f6bf74_1 - defaults/win-64::seaborn==0.9.0=py37_0 - defaults/win-64::statsmodels==0.9.0=py37h452e1ab_0 done # All requested packages already installed. ``` I guess maybe the python version conflict (my python version is 3.6.2) causes the exception of the Spyder and Navigator. So I try to restore these packages to py3.6 version by called `conda install python = 3.6`, but it doesn't works. This is the result of `conda list -version`(the last 2 rev) ``` 2019-04-09 22:59:08 (rev 3) certifi {2016.2.28 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 2019.3.9} conda {4.5.13 -> 4.6.11} cryptography {1.8.1 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 2.6.1} curl {7.52.1 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 7.64.0} libcurl {7.61.0 -> 7.64.0} libpng {1.6.34 -> 1.6.36} libprotobuf {3.2.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 3.6.1} libssh2 {1.8.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 1.8.0} menuinst {1.4.7 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 1.4.16} openssl {1.0.2l (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 1.1.1b} protobuf {3.2.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 3.6.1} pycurl {7.43.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 7.43.0.2} pyqt {5.6.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 5.9.2} python {3.6.2 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 3.6.8} qt {5.6.2 -> 5.9.7} requests {2.14.2 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 2.21.0} sip {4.18 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 
4.19.8} sqlite {3.24.0 -> 3.27.2} vc {14 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free) -> 14.1} +krb5-1.16.1 2019-04-09 23:02:48 (rev 4) cryptography {2.6.1 -> 1.8.1 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} curl {7.64.0 -> 7.52.1 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} krb5 {1.16.1 -> 1.13.2 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} libcurl {7.64.0 -> 7.61.1} libpng {1.6.36 -> 1.6.34} libprotobuf {3.6.1 -> 3.2.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} libssh2 {1.8.0 -> 1.8.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} menuinst {1.4.16 -> 1.4.14} openssl {1.1.1b -> 1.0.2l (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} protobuf {3.6.1 -> 3.2.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} pycurl {7.43.0.2 -> 7.43.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} pyqt {5.9.2 -> 5.6.0 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} python {3.6.8 -> 3.6.2 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} qt {5.9.7 -> 5.6.2} sqlite {3.27.2 -> 3.25.2} vc {14.1 -> 14 (https://mirrors.ustc.edu.cn/anaconda/pkgs/free)} ``` This is the result of `conda info` ``` active environment : base active env location : C:\Users\lenovo\Anaconda3 shell level : 1 user config file : C:\Users\lenovo\.condarc populated config files : C:\Users\lenovo\.condarc conda version : 4.6.11 conda-build version : 3.0.19 python version : 3.6.2.final.0 base environment : C:\Users\lenovo\Anaconda3 (writable) channel URLs : https://mirrors.ustc.edu.cn/anaconda/pkgs/free/win-64 https://mirrors.ustc.edu.cn/anaconda/pkgs/free/noarch https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/free/win-64 https://repo.anaconda.com/pkgs/free/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch package cache : C:\Users\lenovo\Anaconda3\pkgs C:\Users\lenovo\.conda\pkgs 
C:\Users\lenovo\AppData\Local\conda\conda\pkgs envs directories : C:\Users\lenovo\Anaconda3\envs C:\Users\lenovo\.conda\envs C:\Users\lenovo\AppData\Local\conda\conda\envs platform : win-64 user-agent : conda/4.6.11 requests/2.21.0 CPython/3.6.2 Windows/10 Windows/10.0.17134 administrator : False netrc file : None offline mode : False ``` What is the best way to fix the issue? How can I get my Spyder to work again?
2019/04/09
[ "https://Stackoverflow.com/questions/55598548", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11335756/" ]
Fortunately I have fixed my Spyder by using the command `conda install --revision 2`, and updated Spyder to version 3.3.4 in the Anaconda Navigator.

`conda list --revisions` shows each earlier rev, so I used `conda install --revision 2` to restore the environment to what it was before I updated conda. After that my Spyder and Anaconda Navigator could be used normally. Then I updated Spyder in the Anaconda Navigator to version 3.3.4.

Here is the documentation link for [`conda install`](https://docs.conda.io/projects/conda/en/latest/commands/install.html?highlight=revision).
I reinstalled the package that was causing the inconsistency and then the problem was gone.

My inconsistency error:

[![My Inconsistency Error](https://i.stack.imgur.com/ACgYe.png)](https://i.stack.imgur.com/ACgYe.png)

What I did:

```
conda install -c conda-forge mkl-service
```
12,127
64,832,243
I'm trying to install apache airflow with pip, so I enter "pip install apache-airflow". but somehow i got an error that i don't understand. Could you please help me with this? for a little bit context, I'm using macOS catalina and python 3.8.2. I have tried to upgrade my pip, but the error still there. These are the error that appear ``` ERROR: Command errored out with exit status 1: command: /Users/muhammadsyamsularifin/airflow/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-ruyjyg1t/setproctitle/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-ruyjyg1t/setproctitle/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-8qj0qv0h/install-record.txt --single-version-externally-managed --compile --install-headers /Users/muhammadsyamsularifin/airflow/venv/include/site/python3.8/setproctitle cwd: /private/tmp/pip-install-ruyjyg1t/setproctitle/ Complete output (119 lines): running install running build running build_ext building 'setproctitle' extension creating build creating build/temp.macosx-10.14.6-x86_64-3.8 creating build/temp.macosx-10.14.6-x86_64-3.8/src xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -DSPT_VERSION=1.1.10 -D__darwin__=1 -I/Users/muhammadsyamsularifin/airflow/venv/include -I/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c src/setproctitle.c -o build/temp.macosx-10.14.6-x86_64-3.8/src/setproctitle.o In file included from src/setproctitle.c:14: In file included from src/spt.h:15: In file 
included from src/spt_python.h:14: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/limits.h:63: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/cdefs.h:807:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from src/setproctitle.c:14: In file included from src/spt.h:15: In file included from src/spt_python.h:14: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Library/Developer/CommandLineTools/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/limits.h:64: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from src/setproctitle.c:14: In file included from src/spt.h:15: In file included from src/spt_python.h:14: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:27: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:33: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from 
src/setproctitle.c:14: In file included from src/spt.h:15: In file included from src/spt_python.h:14: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:27: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:55:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:56:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:57:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_dev_t; /* dev_t */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:60:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:61:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:62:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] 
Used for 64 bit inodes */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:68:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:70:9: error: unknown type name '__uint16_t'; did you mean '__uint128_t'? typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:71:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:72:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:73:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:74:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ note: '__int128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:75:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_uid_t; /* [???] user IDs */ ^ note: '__uint128_t' declared here /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types.h:76:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_useconds_t; /* [???] 
microseconds */ ^ note: '__uint128_t' declared here In file included from src/setproctitle.c:14: In file included from src/spt.h:15: In file included from src/spt_python.h:14: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:71: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_types.h:43:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_wctype_t; ^ note: '__uint128_t' declared here In file included from src/setproctitle.c:14: In file included from src/spt.h:15: In file included from src/spt_python.h:14: In file included from /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/stdio.h:64: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/_stdio.h:75: In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/sys/_types/_va_list.h:31: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/machine/types.h:37:2: error: architecture not supported #error architecture not supported ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. 
error: command 'xcrun' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /Users/muhammadsyamsularifin/airflow/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-ruyjyg1t/setproctitle/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-ruyjyg1t/setproctitle/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-8qj0qv0h/install-record.txt --single-version-externally-managed --compile --install-headers /Users/muhammadsyamsularifin/airflow/venv/include/site/python3.8/setproctitle Check the logs for full command output. ```
2020/11/14
[ "https://Stackoverflow.com/questions/64832243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14607085/" ]
I looked into similar errors and here are a few possible fixes: 1. If you installed Python3.8 via `Brew`, try to uninstall it and install a new version that you build from source. 2. Try `sudo python3.8 -m pip install apache-airflow`. 3. Upgrade to Python 3.8.5 as per [this post](https://stackoverflow.com/questions/64111015/pip-install-psutil-is-throwing-error-unsupported-architecture-any-workarou). 4. Export the environment variable `export ARCHFLAGS="-arch x86_64"` as per [this post](https://github.com/giampaolo/psutil/issues/1832#issuecomment-704596756).
I faced the exact same issue; here's how I solved it. I explicitly used Python 3.8.5 as pointed out in [Meghdeep Ray's answer](https://stackoverflow.com/a/64867996/2508468). I also exported the `export ARCHFLAGS="-arch x86_64"` environment variable, again inspired by [Meghdeep Ray's answer](https://stackoverflow.com/a/64867996/2508468). But the important part for me was to install `apache-airflow` from `pip` using the following command: ``` pip install apache-airflow[]==1.10.12 \ --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.12/constraints-3.7.txt" ``` The constraint is important, as pointed out in this [github issue](https://github.com/apache/airflow/issues/12031)
12,130
1,417,473
I'm trying to call a function in a Python script from my main C++ program. The python function takes a string as the argument and returns nothing (ok.. 'None'). It works perfectly well (never thought it would be that easy..) as long as the previous call is finished before the function is called again, otherwise there is an access violation at `pModule = PyImport_Import(pName)`. There are a lot of tutorials how to embed python in C and vice versa but I found nothing about that problem. ``` int callPython(TCHAR* title){ PyObject *pName, *pModule, *pFunc; PyObject *pArgs, *pValue; Py_Initialize(); pName = PyUnicode_FromString("Main"); /* Name of Pythonfile */ pModule = PyImport_Import(pName); Py_DECREF(pName); if (pModule != NULL) { pFunc = PyObject_GetAttrString(pModule, "writeLyricToFile"); /* function name. pFunc is a new reference */ if (pFunc && PyCallable_Check(pFunc)) { pArgs = PyTuple_New(1); pValue = PyUnicode_FromWideChar(title, -1); if (!pValue) { Py_DECREF(pArgs); Py_DECREF(pModule); showErrorBox(_T("pValue is false")); return 1; } PyTuple_SetItem(pArgs, 0, pValue); pValue = PyObject_CallObject(pFunc, pArgs); Py_DECREF(pArgs); if (pValue != NULL) { //worked as it should! Py_DECREF(pValue); } else { Py_DECREF(pFunc); Py_DECREF(pModule); PyErr_Print(); showErrorBox(_T("pValue is null")); return 1; } } else { if (PyErr_Occurred()) PyErr_Print(); showErrorBox(_T("pFunc null or not callable")); return 1; } Py_XDECREF(pFunc); Py_DECREF(pModule); } else { PyErr_Print(); showErrorBox(_T("pModule is null")); return 1; } Py_Finalize(); return 0; } ```
2009/09/13
[ "https://Stackoverflow.com/questions/1417473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/144746/" ]
When you say "as long as the previous call is finished before the function is called again", I can only assume that you have multiple threads calling from C++ into Python. The Python interpreter is not thread safe, so this is going to fail! Read up on the Global Interpreter Lock (GIL) in the Python manual. Perhaps the following links will help: * <http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock> * <http://docs.python.org/c-api/init.html#PyEval_InitThreads> * <http://docs.python.org/c-api/init.html#PyEval_AcquireLock> * <http://docs.python.org/c-api/init.html#PyEval_ReleaseLock> The GIL is mentioned on Wikipedia: * <http://en.wikipedia.org/wiki/Global_Interpreter_Lock>
Thank you for your help! Yes you're right, there are several C threads. Never thought I'd need mutex for the interpreter itself - the GIL is a completly new concept for me (and isn't even once mentioned in the whole tutorial). After reading the reference (for sure not the easiest part of it, although the PyGILState\_\* functions simplify the whole thing a lot), I added an ``` void initPython(){ PyEval_InitThreads(); Py_Initialize(); PyEval_ReleaseLock(); } ``` function to initialise the interpreter correctly. Every thread creates its data structure, acquires the lock and releases it afterwards as shown in the reference. Works as it should, but when calling Py\_Finalize() before terminating the process I get a segfault.. any problems with just leaving it?
12,131
23,900,878
Is it possible to mock a module in Python using `unittest.mock`? I have a module named `config`; while running tests I want to mock it with another module, `test_config`. How can I do that? Thanks. config.py: ``` CONF_VAR1 = "VAR1" CONF_VAR2 = "VAR2" ``` test\_config.py: ``` CONF_VAR1 = "test_VAR1" CONF_VAR2 = "test_VAR2" ``` All other modules read config variables from the `config` module. While running tests I want them to read config variables from the `test_config` module instead.
2014/05/28
[ "https://Stackoverflow.com/questions/23900878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2085665/" ]
If you're always accessing the variables in config.py like this: ``` import config ... config.VAR1 ``` You can replace the `config` module imported by whatever module you're actually trying to test. So, if you're testing a module called `foo`, and it imports and uses `config`, you can say: ``` from mock import patch import foo import config_test .... with patch('foo.config', new=config_test): foo.whatever() ``` But this isn't actually replacing the module globally, it's only replacing it within the `foo` module's namespace. So you would need to patch it everywhere it's imported. It also wouldn't work if `foo` does this instead of `import config`: ``` from config import VAR1 ``` You can also mess with `sys.modules` to do this: ``` import config_test import sys sys.modules["config"] = config_test # import modules that uses "import config" here, and they'll actually get config_test ``` But generally it's not a good idea to mess with `sys.modules`, and I don't think this case is any different. I would favor all of the other suggestions made over it.
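As a runnable sketch of the `sys.modules` idea above, `unittest.mock.patch.dict` can swap the entry in and out safely, so the replacement is undone even if something fails inside the block (the in-memory module below is a stand-in for a real `test_config.py`, with contents mirroring the question):

```python
import sys
import types
from unittest import mock

# Build a stand-in module in memory (a stand-in for the real
# test_config.py from the question).
fake_config = types.ModuleType("config")
fake_config.CONF_VAR1 = "test_VAR1"
fake_config.CONF_VAR2 = "test_VAR2"

# patch.dict swaps the sys.modules entry only inside the with-block,
# so any `import config` executed there finds the fake module first.
with mock.patch.dict(sys.modules, {"config": fake_config}):
    import config
    print(config.CONF_VAR1)  # -> test_VAR1

# Outside the block, sys.modules is restored to its previous state.
print("config" in sys.modules)  # -> False
```

Because imports consult `sys.modules` before searching the filesystem, this works even when no real `config.py` exists on disk.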
Consider the following setup in configuration.py: ``` import os class Config(object): CONF_VAR1 = "VAR1" CONF_VAR2 = "VAR2" class TestConfig(object): CONF_VAR1 = "test_VAR1" CONF_VAR2 = "test_VAR2" if os.getenv("TEST"): config = TestConfig else: config = Config ``` Now everywhere else in your code you can use: ``` from configuration import config print config.CONF_VAR1, config.CONF_VAR2 ``` And when you want to mock your configuration file, just set the environment variable "TEST". Extra credit: If you have lots of configuration variables that are shared between your testing and non-testing code, then you can derive TestConfig from Config and simply overwrite the variables that need changing: ``` class Config(object): CONF_VAR1 = "VAR1" CONF_VAR2 = "VAR2" CONF_VAR3 = "VAR3" class TestConfig(Config): CONF_VAR2 = "test_VAR2" # CONF_VAR1, CONF_VAR3 remain unchanged ```
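A condensed, runnable version of the inheritance variant from this answer (the `TEST` environment variable is set inline here purely to demonstrate the switch):

```python
import os

class Config:
    CONF_VAR1 = "VAR1"
    CONF_VAR2 = "VAR2"

class TestConfig(Config):
    CONF_VAR2 = "test_VAR2"  # CONF_VAR1 is inherited unchanged

os.environ["TEST"] = "1"  # simulate running under test
config = TestConfig if os.getenv("TEST") else Config

print(config.CONF_VAR1, config.CONF_VAR2)  # -> VAR1 test_VAR2
```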
12,132
50,601,935
I'm debugging my python application using VSCode. I have a main python file from where I start the debugger. I'm able to put breakpoints in this file, but if I want to put breakpoints in other files which are called by the main file, I get them as 'Unverified breakpoint' and the debugger ignores them. How can I change my `launch.json` so that I'm able to put breakpoints on all the files in my project? Here's my current `launch.json`: ```js { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}" }, { "name": "Python: Attach", "type": "python", "request": "attach", "localRoot": "${workspaceFolder}", "remoteRoot": "${workspaceFolder}", "port": 3000, "secret": "my_secret", "host": "localhost" }, { "name": "Python: Terminal (integrated)", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal" }, { "name": "Python: Terminal (external)", "type": "python", "request": "launch", "program": "${file}", "console": "externalTerminal" }, { "name": "Python: Django", "type": "python", "request": "launch", "program": "${workspaceFolder}/manage.py", "args": [ "runserver", "--noreload", "--nothreading" ], "debugOptions": [ "RedirectOutput", "Django" ] }, { "name": "Python: Flask (0.11.x or later)", "type": "python", "request": "launch", "module": "flask", "env": { "FLASK_APP": "${workspaceFolder}/app.py" }, "args": [ "run", "--no-debugger", "--no-reload" ] }, { "name": "Python: Module", "type": "python", "request": "launch", "module": "nf.session.session" }, { "name": "Python: Pyramid", "type": "python", "request": "launch", "args": [ "${workspaceFolder}/development.ini" ], "debugOptions": [ "RedirectOutput", "Pyramid" ] }, { "name": "Python: Watson", "type": "python", "request": 
"launch", "program": "${workspaceFolder}/console.py", "args": [ "dev", "runserver", "--noreload=True" ] }, { "name": "Python: All debug Options", "type": "python", "request": "launch", "pythonPath": "${config:python.pythonPath}", "program": "${file}", "module": "module.name", "env": { "VAR1": "1", "VAR2": "2" }, "envFile": "${workspaceFolder}/.env", "args": [ "arg1", "arg2" ], "debugOptions": [ "RedirectOutput" ] } ] } ``` Thanks
2018/05/30
[ "https://Stackoverflow.com/questions/50601935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1616955/" ]
This may be a result of the "[justMyCode](https://code.visualstudio.com/docs/python/debugging#_justmycode)" configuration option, as it defaults to true. While the description from the provider is "...restricts debugging to user-written code only. Set to False to also enable debugging of standard library functions.", I think what they mean is that anything in the site-packages for the current python environment will not be debugged. I wanted to debug the Django stack to determine where an exception originating in my code was being eaten-up and not surfaced, so I did this to my VSCode's debugger's launch configuration: ``` { // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 "version": "0.2.0", "configurations": [ { "name": "Control Center", "type": "python", "request": "launch", "program": "${workspaceFolder}/manage.py", "args": [ "runserver", "--noreload" ], "justMyCode": false, // I want to debug through Django framework code sometimes "django": true } ] } ``` As soon as I did that, the debugger removed the "Unverified breakpoint" message and allowed me to debug files in the site-packages for the virtual environment I was working on, which contained Django. I should note that the Breakpoint section of the debugger gave me a hint as to why the breakpoint was unverified and what tweaks to make to the launch config: [![Breakpoints section of VSCode Debugger with useful tip](https://i.stack.imgur.com/H6KYn.png)](https://i.stack.imgur.com/H6KYn.png) One additional note: that "--noreload" argument is also important. In debugging a different application using Flask, the debugger wouldn't stop at any breakpoint until I added it to the launch config.
The imports for the other modules were inside strings and called using the `execute` function. That's why VSCode couldn't verify the breakpoints in the other files: it didn't know that these other files are used by the main file.
12,141
16,068,532
So this happened to me: ``` thing = ModelClass() thing.foo = bar() thing.do_Stuff() thing.save() #works fine thing.decimal_field = decimal_value thing.save() #error here ``` Traceback follows: ``` TypeError at /journey/collaborators/2/ unsupported operand type(s) for ** or pow(): 'Decimal' and 'str' 274. oH.save() File "/usr/lib/python2.7/dist-packages/django/db/models/base.py" in save 460. self.save_base(using=using, force_insert=force_insert, force_update=force_update) File "/usr/lib/python2.7/dist-packages/django/db/models/base.py" in save_base 543. for f in meta.local_fields if not isinstance(f, AutoField)] File "/usr/lib/python2.7/dist-packages/django/db/models/fields/subclassing.py" in inner 28. return func(*args, **kwargs) File "/usr/lib/python2.7/dist-packages/django/db/models/fields/__init__.py" in get_db_prep_save 787. self.max_digits, self.decimal_places) File "/usr/lib/python2.7/dist-packages/django/db/backends/__init__.py" in value_to_db_decimal 705. return util.format_number(value, max_digits, decimal_places) File "/usr/lib/python2.7/dist-packages/django/db/backends/util.py" in format_number 145. return u'%s' % str(value.quantize(decimal.Decimal(".1") ** decimal_places, context=context)) ``` I've tried setting `decimal_value` to a `decimal.Decimal` instance, a float, an int and a string. It seems I can't save my model instance unless I leave that field blank. Any ideas how to fix this?
2013/04/17
[ "https://Stackoverflow.com/questions/16068532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/742082/" ]
Linking turned on will throw away any methods, attributes, properties... that are not used at the time of compilation. This is a problem, for example, with the reflection approach. Your problem - a very large package - can be solved by: 1. reducing library code - removing unused code manually and turning linking off - you probably don't want to do that 2. using DataContractJsonSerializer - an application using this class is significantly smaller 3. living with a 17 MB app; after all, it is still bearable :) And linking "Sdk assemblies only" could also help a little bit
Use the Json.Net provided by the [Xamarin Component Store](http://components.xamarin.com/view/json.net/). I have used this component for multiple projects and my Release builds with linking enabled come in between 4-8 MB.
12,142
4,413,912
Why is `print` a keyword in python and not a function?
2010/12/10
[ "https://Stackoverflow.com/questions/4413912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/538442/" ]
The `print` statement in Python 2.x has some special syntax which would not be available for an ordinary function. For example you can use a trailing `,` to suppress the output of a final newline or you can use `>>` to redirect the output to a file. But all this wasn't convincing enough even to Guido van Rossum himself to keep it a statement -- he turned `print` into a function in Python 3.x.
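For reference, the Python 3 function replaces that special syntax with keyword arguments; a small sketch, writing to an in-memory buffer instead of a real file:

```python
import io

buf = io.StringIO()

# Python 2: print >>f, "to a file"   ->  Python 3: the file= keyword
print("to a file", file=buf)

# Python 2: print "no newline",      ->  Python 3: the end= keyword
print("no", "newline", end="", file=buf)

print(repr(buf.getvalue()))  # -> 'to a file\nno newline'
```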
I will throw in my thoughts on this: In Python 2.x `print` is not a statement by mistake, or because printing to `stdout` is such a basic thing to do. Everything else is so thought-through or has at least understandable reasons that a mistake of that order would seem odd. If communicating with `stdout` had been considered so basic, communicating with `stdin` would have to be just as important, yet `input()` is a function. If you look at the [list of reserved keywords](https://docs.python.org/2/reference/lexical_analysis.html#keywords) and the [list of statements](https://docs.python.org/2/reference/simple_stmts.html) which are not expressions, `print` clearly stands out, which is another hint that there must be very specific reasons. I think `print` *had* to be a statement and not an expression, to avoid a security breach in `input()`. Remember that `input()` in Python 2 evaluates whatever the user types into `stdin`. If the user typed `print a` and `a` held a list of all passwords, that would be quite catastrophic. Apparently, the ability of `input()` to evaluate expressions was considered more important than `print` being a normal built-in function.
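To make that argument concrete: Python 2's `input()` behaved roughly like `eval(raw_input())`, so any expression the user typed was evaluated. A small simulation (the helper below is hypothetical and takes a canned string in place of real user input):

```python
secrets = ["hunter2", "letmein"]

def py2_style_input(typed):
    # Rough model of Python 2's input(): evaluate whatever was typed.
    return eval(typed)

# A user typing the *name* of a variable gets its value back:
leaked = py2_style_input("secrets")
print(leaked)  # -> ['hunter2', 'letmein']
```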
12,144
71,214,931
I want to access items from a new dictionary called `conversations` by implementing a for loop. ``` {" conversations": [ {"tag": "greeting", "user": ["Hi", " What's your name?", " How are you?", "Hello", "Good day"], "response": ["Hello, my name is Rosie. Nice to see you", "Good to see you again", " How can I help you?"], "context_set": "" }, {"tag": "Good-bye", "user": ["Bye", "See you", "Goodbye"], "response": ["Thanks for visiting our company", "Have a nice day", "Good-bye."] }, {"tag": "thanks", "user": ["Thanks", "Thank you", "That's helpful", "Appreciated your service" ], "response": ["Glad to help!", "My pleasure", "You’re welcome."] } ] } ``` The code I use to load the dictionary in a notebook is ``` file_name = 'dialogue.txt' with open(file_name, encoding='utf8') as f: for line in f: print(line.strip()) dialogue_text = f.read() ``` This line of code does not return any results when trying to access the dictionary. ``` for k in dialogue_text: print(k) ``` My intention is to write this code by implementing tokenization and stemming, but it returned an error ``` words = [] labels = [] docs_x = [] docs_y = [] for conversation in dialogue_text["conversations"]: for user in dialogue_text["user"]: words = nltk.word_tokenize(user) words.extend(words) docs_x.append(words) docs_y.append(intent["tag"])if intent["tag"] not in labels: labels.append(intent["tag"])words = [stemmer.stemWord(w.lower()) for w in words if w != "?"] words = sorted(list(set(words)))labels = sorted(labels) ``` Error Message: ``` TypeError Traceback (most recent call last) <ipython-input-12-d42234f8e809> in <module>() 10 docs_y = [] 11 ---> 12 for conversation in dialogue_text["conversations"]: 13 for user in dialogue_text["user"]: 14 words = nltk.word_tokenize(user) TypeError: string indices must be integers ``` What code should I write to resolve this issue?
2022/02/22
[ "https://Stackoverflow.com/questions/71214931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18273393/" ]
You should use the `json` module to load in JSON data as opposed to reading in the file line-by-line. Whatever procedure you build yourself is likely to be fragile and less efficient. Here is the looping structure that you're looking for: ```py import json with open('input.json') as input_file: data = json.load(input_file) # Spaces were originally in the key name. for conversation in data[' conversations']: for user_words in conversation['user']: # Do stuff with user_words ... ```
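A self-contained variant of the same loop, parsing the JSON from a string instead of a file (note the leading space in the key, kept as in the question's data):

```python
import json

raw = '''
{" conversations": [
  {"tag": "greeting", "user": ["Hi", "Hello"], "response": ["Hello there"]},
  {"tag": "thanks", "user": ["Thanks"], "response": ["Glad to help!"]}
]}
'''

data = json.loads(raw)

phrases = []
for conversation in data[" conversations"]:  # space in the key name
    for user_words in conversation["user"]:
        phrases.append(user_words)

print(phrases)  # -> ['Hi', 'Hello', 'Thanks']
```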
Try this out: ``` import json dialogue_text = json.load(open("dialogue.txt", encoding='utf8')) for conversation in dialogue_text[" conversations"]: for user in conversation['user']: print(user) ``` Output: ```none Hi What's your name? How are you? Hello Good day Bye See you Goodbye Thanks Thank you That's helpful Appreciated your service ```
12,154
72,393,418
I am working with Python and numpy! I have a txt file with integers, space separated, and each row of the file must become a row in an array or dataframe. The problem is that not every row has the same size! I know the size that I want them to have, and I want to set the missing values to zero! As it is not comma separated, I can't find a way to do that! I was wondering if there is a way to find the length of each row of my array and add the appropriate number of zeros! Is that possible? Any other ideas? I am new to the numpy library, as you can see.
2022/05/26
[ "https://Stackoverflow.com/questions/72393418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19207300/" ]
A 403 Forbidden is often correlated with an SSL/TLS certificate verification failure. Please try using `requests.get` with `verify=False`, as follows. Fixing the SSL certificate issue: ``` requests.get("https://www.example.com/insert.php?network=testnet&id=1245300&c=2803824&lat=7555457", verify=False) ``` Fixing the TLS certificate issue: Check out my [answer](https://stackoverflow.com/questions/72347165/python-requests-403-forbidden-error-while-downlaoding-pdf-file-from-www-resear/72349915#72349915) related to the TLS certificate verification fix.
Somehow I overcomplicated it; when I tried the absolute minimum, it works. ``` import requests headers = { 'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36' } response = requests.get("http://www.example.com/insert.php?network=testnet&id=1245200&c=2803824&lat=7555457", headers=headers) print(response.text) ```
12,155
62,801,244
I am working on a project where i have to use Mouse as a paintbrush. I have used `cv2.setMouseCallback()` function but it returned the following error. here is the part of my code ``` import cv2 import numpy as np # mouse callback function def draw_circle(event,x,y,flags,param): if event == cv2.EVENT_LBUTTONDBLCLK: cv2.circle(img,(x,y),100,(255,0,0),-1) # Create a black image, a window and bind the function to window img = np.zeros((512,512,3), np.uint8) cv2.namedWindow('image') cv2.setMouseCallback('image',draw_circle) while(1): cv2.imshow('image',img) if cv2.waitKey(20) & 0xFF == 27: break cv2.destroyAllWindows() ``` when I run this It returned me the following error: error Traceback ``` (most recent call last) <ipython-input-1-640e54baca5f> in <module> 10 img = np.zeros((512,512,3), np.uint8) 11 cv2.namedWindow('image') ---> 12 cv2.setMouseCallback('image',draw_circle) 13 14 while(1): error: OpenCV(4.3.0) /io/opencv/modules/highgui/src/window_QT.cpp:717: error: (-27:Null pointer) NULL window handler in function 'cvSetMouseCallback' ``` My python version - 3.8 Operating System - ubuntu 20
2020/07/08
[ "https://Stackoverflow.com/questions/62801244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7976921/" ]
The error is resolved. I removed the previously installed OpenCV (which was installed using pip: `pip install opencv-python`) and reinstalled it using `sudo apt install libopencv-dev python3-opencv`.
For me, it also worked with the pip3 installations. I just made a fresh virtual env and installed opencv-python and opencv-contrib-python.
12,156
30,013,383
I want to be able to get from `[2, 3]` and `3` to `[2, 3, 2, 3, 2, 3]` (like `3 * a` in Python, where `a` is a list). Is there a quick and efficient way to do this in JavaScript? I do this with a `for` loop, but it lacks visibility and, I guess, efficiency. I would like it to work with every type of element. For instance, I used the code: ``` function dup (n, obj) { var ret = []; for (var i = 0; i<n; i++) { ret[i] = obj; } return (ret); } ``` The problem is that it doesn't work with arrays or objects, only with primitive values. Do I have to make conditions, or is there a clean way to duplicate a variable?
2015/05/03
[ "https://Stackoverflow.com/questions/30013383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4859055/" ]
You can use this (very readable :P) function: ``` function repeat(arr, n){ var a = []; for (var i=0;i<n;[i++].push.apply(a,arr)); return a; } ``` `repeat([2,3], 3)` returns an array `[2, 3, 2, 3, 2, 3]`. Basically, it's this: ``` function repeat(array, times){ var newArray = []; for (var i=0; i < times; i++){ Array.prototype.push.apply(newArray, array); } return newArray; } ``` we push `array`'s values onto `newArray` `times` times. To be able to push an array as its values (so, `push(2, 3)` instead of `push([2, 3])`) I used apply, which takes an array an passes it to push as a list of arguments. Or, extend the prototype: ``` Array.prototype.repeat = function(n){ var a = []; for (var i=0;i<n;[i++].push.apply(a,this)); return a; } ``` `[2, 3].repeat(3)` returns an array `[2, 3, 2, 3, 2, 3]`. If you want something reasonably readable, you can use [concat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat) within a loop: ``` function repeat(array, n){ var newArray = []; for (var i = 0; i < n; i++){ newArray = newArray.concat(array); } return newArray; } ```
There is not. This is a very Pythonic idea. You could devise a function to do it, but I doubt there is any computational benefit because you would just be using a loop or some weird misuse of string functions.
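One caveat on the Python side of this comparison: `3 * a` repeats *references* rather than copying the elements, which is the same sharing problem the question observed with its `dup` function and objects. A quick demonstration:

```python
row = [0]
grid = 3 * [row]  # three references to the SAME inner list
grid[0].append(1)
print(grid)  # -> [[0, 1], [0, 1], [0, 1]]

# Independent elements need an explicit copy per repetition:
grid = [list(row) for _ in range(3)]
grid[0].append(2)
print(grid)  # -> [[0, 1, 2], [0, 1], [0, 1]]
```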
12,159
28,314,014
I have written this code in Python: ``` import os files = os.listdir(".") x = "" for file in files: x += ("\"" + file + "\" ") f = open("files.txt", "w") f.write(x) f.close() ``` This works and I get a single string with all the files in a directory, as `"foo.txt" "bar.txt" "baz.txt"`, but I don't like the for loop. Can't I write the code more succinctly, like those Python pros? I tried `"\"".join(files)`, but how do I get the `"` at the end of the file name as well?
2015/02/04
[ "https://Stackoverflow.com/questions/28314014", "https://Stackoverflow.com", "https://Stackoverflow.com/users/337134/" ]
1. You can write string literals using both `'single'` and `"double-quotes"`; you don't have to escape one inside the other. 2. You can use the `format` function to apply quotes before you `join`. 3. You should use the `with` statement when opening files to save you from having to `close` it explicitly. Thus: ``` import os with open("files.txt", "w") as f: f.write(' '.join('"{}"'.format(file) for file in os.listdir('.'))) ```
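If the quoted output is meant to be shell-safe, the standard library's `shlex.quote` handles names containing spaces or quotes as well; a sketch on a fixed list (the file names are made up, so no directory access is needed):

```python
import shlex

files = ["foo.txt", "my file.txt"]

# shlex.quote adds quoting only when a name actually needs it.
line = " ".join(shlex.quote(f) for f in files)
print(line)  # -> foo.txt 'my file.txt'
```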
``` import os files = os.listdir(".") x = " ".join('"%s"'%f for f in files) with open("files.txt", "w") as f: f.write(x) ```
12,160
58,223,422
I'm currently trying to find an effective way of running a machine learning task over a set amount of cores using `tensorflow`. From the information I found there were two main approaches to doing this. The first of which was using the two tensorflow variables intra\_op\_parallelism\_threads and inter\_op\_parallelism\_threads and then creating a session using this configuration. The second of which is using `OpenMP`. Setting the environment variable `OMP_NUM_THREADS` allows for manipulation of the amount of threads spawned for the process. My problem arose when I discovered that installing tensorflow through conda and through pip gave two different environments. In the `conda install` modifying the `OpenMP` environment variables seemed to change the way the process was parallelised, whilst in the 'pip environment' the only thing which appeared to change it was the inter/intra config variables which I mentioned earlier. This led to some difficulty in trying to compare the two installs for benchmarking reasons. If I set `OMP_NUM_THREADS` equal to 1 and inter/intra to 16 on a 48 core processor on the `conda install` I only get about 200% CPU usage as most of the threads are idle at any given time. ```py omp_threads = 1 mkl_threads = 1 os.environ["OMP_NUM_THREADS"] = str(omp_threads) os.environ["MKL_NUM_THREADS"] = str(mkl_threads) config = tf.ConfigProto() config.intra_op_parallelism_threads = 16 config.inter_op_parallelism_threads = 16 session = tf.Session(config=config) K.set_session(session) ``` I would expect this code to spawn 32 threads where most of which are being utilized at any given time, when in fact it spawns 32 threads and only 4-5 are being used at once. Has anyone ran into anything similar before when using tensorflow? Why is it that installing through conda and through pip seems to give two different environments? Is there any way of having comparable performance on the two installs by using some combination of the two methods discussed earlier? 
Finally, is there perhaps an even better way to limit Python to a specific number of cores?
2019/10/03
[ "https://Stackoverflow.com/questions/58223422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8776042/" ]
**first check the index labels and columns**

```
fact.index
fact.columns
```

If you need to convert the index to columns, use:

```
fact.reset_index()
```

**Then you can use:**

```
fact.groupby(['store_id', 'month'])['quantity'].mean()
```

Output:

```
store_id  month
174       8        1
354       7        1
          8        1
          9        1
Name: quantity, dtype: int64
```

**or better:**

```
fact['mean']=fact.groupby(['store_id', 'month'])['quantity'].transform('mean')
print(fact)

   store_id  sku_id        date  quantity    city  city.1 category  month  \
0       354   31253  2017-08-08         1   Paris   Paris    Shirt      8
1       354   31253  2017-08-19         1   Paris   Paris    Shirt      8
2       354   31258  2017-07-30         1   Paris   Paris    Shirt      7
3       354  277171  2017-09-28         1   Paris   Paris    Shirt      9
4       174  295953  2017-08-16         1  London  London    Shirt      8

   mean
0     1
1     1
2     1
3     1
4     1
```
need to add "**as\_index=True**" ex: "count\_in = df.groupby(['time\_in','id'], **as\_index=True**)['time\_in'].count()"
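For readers comparing the two answers, here is a minimal, self-contained sketch of both patterns — the reduced `mean()` and the row-aligned `transform('mean')` — on a small hypothetical frame mirroring the columns above:

```python
import pandas as pd

# Hypothetical data with the columns the answers refer to.
fact = pd.DataFrame({
    "store_id": [354, 354, 354, 354, 174],
    "month":    [8, 8, 7, 9, 8],
    "quantity": [1, 1, 1, 1, 1],
})

# One row per (store_id, month) group:
means = fact.groupby(["store_id", "month"])["quantity"].mean()

# Same statistic broadcast back to every original row:
fact["mean"] = fact.groupby(["store_id", "month"])["quantity"].transform("mean")

print(means)
print(fact)
```

`transform` keeps the result aligned with the original index, which is why it can be assigned straight back as a column.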
12,163
58,791,530
I am making a poem generator in python, and I am currently working on how poems are outputted to the user. I would like to make it so every line that is outputted will have a comma follow after. I thought I could achieve this easily by using the .join function, but it seems to attach to letters rather than the end of the string stored in the list. ``` line1[-1]=', '.join(line1[-1]) print(*line1) print(*line2) ``` Will output something like: ``` Moonlight is w, i, n, t, e, r The waters scent fallen snow ``` In hindsight, I should have known join was the wrong function to use, but I'm still lost. I tried .append, but as the items in my list are strings, I get the error message "'str' object has no attribute 'append'." I realize another fix to all this might be something like: ``` print(*line1, ",") ``` But I'm working on a function that will decide whether the correct punctuation needs to be "!", ",", "?", or "--", so I am really hoping to find something that can be attached to the string in the list itself, instead of tacked on during the output printing process.
2019/11/10
[ "https://Stackoverflow.com/questions/58791530", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12334091/" ]
Just use the `+` or `+=` operator for strings, for example: ``` trailing_punct = ',' # can be '!', '?', etc. line1 += trailing_punct # or line1 = line1 + trailing_punct ``` `+=` can be used to modify the string "in place" (note that under the covers, it does create a new object and assign to it, so `id(line1)` will have changed after this operation).
It seems your `line1` and `line2` are lists of strings, so I'll start by assuming that:

```
line1 = ["Moonlight", "is", "winter"]
line2 = ["The", "waters", "scent", "fallen", "snow"]
```

You are using the default behaviour of the `print` function when given several string arguments to add the space between words: `print(*line1)` is equivalent to calling `print(line1[0], line1[1], ...)` (see [\*args and \*\*kwargs](https://stackoverflow.com/questions/3394835/use-of-args-and-kwargs)). That makes adding the line separator to the list of words of the line insufficient, as it will have a space before it:

```
print("\n--//--\nUsing print default space between given arguments:")
line_separator = ","
line1.append(line_separator)
print(*line1)
print(*line2)
```

Results in:

```
--//--
Using print default space between given arguments:
Moonlight is winter ,
The waters scent fallen snow
```

What you want to do can be done by joining the list of words into a single string, and then joining the list of lines with the separator you want:

```
print("\n--//--\nPrinting a single string:")
line1_str = ' '.join(line1)
line2_str = ' '.join(line2)
line_separator = ",\n"  # notice the new line \n
lines = line_separator.join([line1_str, line2_str])
print(lines)
```

Results in:

```
--//--
Printing a single string:
Moonlight is winter,
The waters scent fallen snow
```

Consider using a list of lines for easier expansion, and maybe a list of separators to be used in order for each line.
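Tying the two answers together, a small sketch of a helper that joins a line's words and appends whichever mark a (hypothetical) punctuation-deciding function returns:

```python
def punctuate(words, mark=","):
    # `mark` stands in for whatever your real decision function
    # would return: "!", ",", "?" or "--".
    return " ".join(words) + mark

line1 = ["Moonlight", "is", "winter"]
line2 = ["The", "waters", "scent", "fallen", "snow"]

# Join each line's words, then join the punctuated lines with newlines.
poem = "\n".join([punctuate(line1, ","), punctuate(line2, "!")])
print(poem)
```

Because the punctuation is attached to the string itself, printing needs no trailing-argument tricks.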
12,164
18,788,493
I would like to extract the filename from a URL in R. For now I do it as follows, but maybe it can be done more concisely, as in Python. Assume path is just a string.

```
path="http://www.exanple.com/foo/bar/fooXbar.xls"
```

in R:

```
tail(strsplit(path,"[/]")[[1]],1)
```

in Python:

```
path.split("/")[-1:]
```

Maybe some sub, gsub solution?
2013/09/13
[ "https://Stackoverflow.com/questions/18788493", "https://Stackoverflow.com", "https://Stackoverflow.com/users/953553/" ]
There's a function for that... ``` basename(path) [1] "fooXbar.xls" ```
@SimonO101 has the most robust answer IMO, but some other options: Since regular expressions are greedy, you can use that to your advantage ``` sub('.*/', '', path) # [1] "fooXbar.xls" ``` Also, you shouldn't need the `[]` around the `/` in your `strsplit`. ``` > tail(strsplit(path,"/")[[1]],1) [1] "fooXbar.xls" ```
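Since the question also shows the Python idiom, a quick aside: Python has direct equivalents too, and for real URLs it is safer to split off any query string or fragment first. A sketch using the question's example URL:

```python
import os
from urllib.parse import urlsplit

path = "http://www.exanple.com/foo/bar/fooXbar.xls"

# Quick equivalent of the question's path.split("/")[-1:] (but a string, not a list):
print(path.rsplit("/", 1)[-1])

# More robust: drop any ?query or #fragment before taking the basename.
filename = os.path.basename(urlsplit(path).path)
print(filename)
```

`os.path.basename` plays the same role here as R's `basename()` in the accepted answer.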
12,165
68,608,899
I want Python to print a bunch of numbers, like 1 to 10000. Then I want to decide when to stop the numbers from printing, and the last number printed will be my number. Something like:

```
for item in range(10000):
    print (item)
    number = input("")
```

But the problem is that it waits for me to place the input and then continues the loop. I want it to be looping until I say so. I'd appreciate your help. Thanks.
2021/08/01
[ "https://Stackoverflow.com/questions/68608899", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13472873/" ]
Catch the `KeyboardInterrupt` exception raised when you interrupt the code with `Ctrl`+`C`:

```
import time

try:
    for item in range(10000):
        print(item)
        time.sleep(1)
except KeyboardInterrupt:
    print(f'Last item: {item}')
```
You can use the keyboard module: ``` import keyboard # using module keyboard i=0 while i<10000: # making a loop try: # used try so that if user pressed other than the given key error will not be shown print(i) if keyboard.is_pressed('q'): # if key 'q' is pressed print('Last number: ',i) break # finishing the loop else: i+=1 except Exception: break ```
12,166
22,081,361
I'm wondering if there's a way to fill under a pyplot curve with a vertical gradient, like in this quick mockup: ![image](https://i.imgur.com/ZoWCwRb.png) I found this hack on StackOverflow, and I don't mind the polygons if I could figure out how to make the color map vertical: [How to fill rainbow color under a curve in Python matplotlib](https://stackoverflow.com/questions/18215276/how-to-fill-rainbow-color-under-a-curve-in-python-matplotlib)
2014/02/27
[ "https://Stackoverflow.com/questions/22081361", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2774479/" ]
There may be a better way, but here goes: ``` from matplotlib import pyplot as plt x = range(10) y = range(10) z = [[z] * 10 for z in range(10)] num_bars = 100 # more bars = smoother gradient plt.contourf(x, y, z, num_bars) background_color = 'w' plt.fill_between(x, y, y2=max(y), color=background_color) plt.show() ``` Shows: ![enter image description here](https://i.stack.imgur.com/hO8Nk.png)
There is an alternative solution closer to the sketch in the question. It's given on Henry Barthes' blog <http://pradhanphy.blogspot.com/2014/06/filling-between-curves-with-color.html>. This applies an imshow to each of the patches, I've copied the code in case the link changes, ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.path import Path from matplotlib.patches import PathPatch xx=np.arange(0,10,0.01) yy=xx*np.exp(-xx) path = Path(np.array([xx,yy]).transpose()) patch = PathPatch(path, facecolor='none') plt.gca().add_patch(patch) im = plt.imshow(xx.reshape(yy.size,1), cmap=plt.cm.Reds, interpolation="bicubic", origin='lower', extent=[0,10,-0.0,0.40], aspect="auto", clip_path=patch, clip_on=True) plt.show() ```
12,167
53,494,637
I'm trying to build a Flask app that has Kafka as an interface. I used a Python connector, [kafka-python](https://kafka-python.readthedocs.io/en/master/index.html), and a Docker image for Kafka, [spotify/kafkaproxy](https://hub.docker.com/r/spotify/kafkaproxy/). Below is the docker-compose file.

```
version: '3.3'

services:
  kafka:
    image: spotify/kafkaproxy
    container_name: kafka_dev
    ports:
      - '9092:9092'
      - '2181:2181'
    environment:
      - ADVERTISED_HOST=0.0.0.0
      - ADVERTISED_PORT=9092
      - CONSUMER_THREADS=1
      - TOPICS=PROFILE_CREATED,IMG_RATED
      - ZK_CONNECT=kafka7zookeeper:2181/root/path
  flaskapp:
    build: ./flask-app
    container_name: flask_dev
    ports:
      - '9000:5000'
    volumes:
      - ./flask-app:/app
    depends_on:
      - kafka
```

Below is the Python snippet I used to connect to Kafka. Here, I used the Kafka container's alias `kafka` to connect, as Docker would take care of mapping the alias to its IP address.

```
from kafka import KafkaConsumer, KafkaProducer

TOPICS = ['PROFILE_CREATED', 'IMG_RATED']
BOOTSTRAP_SERVERS = ['kafka:9092']

consumer = KafkaConsumer(TOPICS, bootstrap_servers=BOOTSTRAP_SERVERS)
```

I got a `NoBrokersAvailable` error. From this, I could understand that the Flask app could not find the Kafka server.

```
Traceback (most recent call last):
  File "./app.py", line 11, in <module>
    consumer = KafkaConsumer("PROFILE_CREATED", bootstrap_servers=BOOTSTRAP_SERVERS)
  File "/usr/local/lib/python3.6/site-packages/kafka/consumer/group.py", line 340, in __init__
    self._client = KafkaClient(metrics=self._metrics, **self.config)
  File "/usr/local/lib/python3.6/site-packages/kafka/client_async.py", line 219, in __init__
    self.config['api_version'] = self.check_version(timeout=check_timeout)
  File "/usr/local/lib/python3.6/site-packages/kafka/client_async.py", line 819, in check_version
    raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
```

**Other Observations:**

1. 
I was able to run `ping kafka` from the Flask container and get packets from the Kafka container. 2. When I run the Flask app locally, trying to connect to the Kafka container by setting `BOOTSTRAP_SERVERS = ['localhost:9092']`, it works fine.
2018/11/27
[ "https://Stackoverflow.com/questions/53494637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4306852/" ]
**UPDATE** As mentioned by cricket\_007, given that you are using the docker-compose provided below, you should use `kafka:29092` to connect to Kafka from another container. So your code would look like this: ``` from kafka import KafkaConsumer, KafkaProducer TOPICS = ['PROFILE_CREATED', 'IMG_RATED'] BOOTSTRAP_SERVERS = ['kafka:29092'] consumer = KafkaConsumer(TOPICS, bootstrap_servers=BOOTSTRAP_SERVERS) ``` **END UPDATE** I would recommend you use the Kafka images from [Confluent Inc](https://www.confluent.io/about/#about_confluent), they have all sorts of example setups using docker-compose that are ready to use and they are always updating them. Try this out: ``` --- version: '2' services: zookeeper: image: confluentinc/cp-zookeeper:latest environment: ZOOKEEPER_CLIENT_PORT: 2181 ZOOKEEPER_TICK_TIME: 2000 kafka: image: confluentinc/cp-kafka:latest depends_on: - zookeeper ports: - 9092:9092 environment: KAFKA_BROKER_ID: 1 KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 flaskapp: build: ./flask-app container_name: flask_dev ports: - '9000:5000' volumes: - ./flask-app:/app ``` I used this [docker-compose.yml](https://github.com/confluentinc/cp-docker-images/blob/5.0.1-post/examples/kafka-single-node/docker-compose.yml) and added your service on top Please note that: > > The config used here exposes port 9092 for *external* connections to the broker i.e. those from *outside* the docker network. This could be from the host machine running docker, or maybe further afield if you've got a more complicated setup. 
If the latter is true, you will need to change the value 'localhost' in KAFKA\_ADVERTISED\_LISTENERS to one that is resolvable to the docker host from those remote clients > > > Make sure you check out the other examples, may be useful for you especially when moving to production environments: <https://github.com/confluentinc/cp-docker-images/tree/5.0.1-post/examples> Also worth checking: It seems that you need to specify the api\_version to avoid this error. For more details check [here](https://github.com/dpkp/kafka-python/issues/1308#issuecomment-355430042). > > Version 1.3.5 of this library (which is latest on pypy) only lists certain API versions 0.8.0 to 0.10.1. So unless you explicitly specify api\_version to be (0, 10, 1) the client library's attempt to discover the version will cause a NoBrokersAvailable error. > > > ``` producer = KafkaProducer( bootstrap_servers=URL, client_id=CLIENT_ID, value_serializer=JsonSerializer.serialize, api_version=(0, 10, 1) ) ``` This should work, interestingly enough setting the api\_version is accidentally fixing the issue according to this: > > When you set api\_version the client will not attempt to probe brokers for version information. So it is the probe operation that is failing. One large difference between the version probe connections and the general connections is that the former only attempts to connect on a single interface per connection (per broker), where as the latter -- general operation -- will cycle through all interfaces continually until a connection succeeds. #1411 fixes this by switching the version probe logic to attempt a connection on all found interfaces. > > > The actual issue is described [here](https://github.com/dpkp/kafka-python/issues/1308#issuecomment-371532689)
I managed to get this up-and-running using a [network](https://docs.docker.com/compose/networking/) named `stream_net` between all services. ``` # for local development version: "3.7" services: zookeeper: image: confluentinc/cp-zookeeper:latest environment: ZOOKEEPER_CLIENT_PORT: 2181 ZOOKEEPER_TICK_TIME: 2000 networks: - stream_net kafka: image: confluentinc/cp-kafka:latest depends_on: - zookeeper ports: - 9092:9092 environment: KAFKA_BROKER_ID: 1 KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 networks: - stream_net flaskapp: build: ./flask-app container_name: flask_dev ports: - "9000:5000" volumes: - ./flask-app:/app networks: - stream_net depends_on: - kafka networks: stream_net: ``` * connection from outside the containers on `localhost:9092` * connection within the network on `kafka:29092` of course it is strange to put all containers that are already running within a network within a network. But in this way the containers can be named by their actual name. Maybe someone can explain exactly how this works, or it helps someone else to understand the core of the problem and to solve it properly.
12,168
65,734,652
I open a binary file with Python 3 and want to print it byte by byte in hex. However, all the online resources I found only mention printing a "byte array" in hex. Please tell me how to print only 1 single byte, thanks.

```py
#!/usr/bin/env python3

if __name__ == "__main__":
    with open("./datasets/data.bin", 'rb') as file:
        byte = file.read(1)
        while byte:
            print(byte)  # how to print hex instead of ascii?
            byte = file.read(1)
```
2021/01/15
[ "https://Stackoverflow.com/questions/65734652", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1417929/" ]
Try this one: ```py print(hex(byte[0])) ```
Using an `f-string` like this just prints two hex digits for each byte: ```py print(f'{byte[0]:02x}') ```
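A small stdlib-only sketch combining the loop from the question with the two answers, using an in-memory buffer in place of the real file so it is self-contained; `bytes.hex()` is also worth knowing as a one-shot alternative:

```python
import io

# Stand-in for open("./datasets/data.bin", "rb") so the sketch runs anywhere.
fake_file = io.BytesIO(bytes([0x00, 0x0F, 0xAB]))

hex_bytes = []
while True:
    byte = fake_file.read(1)
    if not byte:          # b"" signals end of file
        break
    hex_bytes.append(f"{byte[0]:02x}")   # same formatting as the f-string answer

print(" ".join(hex_bytes))

# One-shot alternative: bytes objects have a built-in .hex() method.
print(bytes([0x00, 0x0F, 0xAB]).hex())
```

Note that indexing a 1-byte `bytes` object (`byte[0]`) yields an `int` in Python 3, which is why both answers index before formatting.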
12,169
26,897,208
``` from pythonds.basic.stack import Stack rStack = Stack() def toStr(n,base): convertString = "0123456789ABCDEF" while n > 0: if n < base: rStack.push(convertString[n]) else: rStack.push(convertString[n % base]) n = n // base res = "" while not rStack.isEmpty(): res = res + str(rStack.pop()) return res print(toStr(1345,2)) ``` I'm referring to [this tutorial](http://interactivepython.org/runestone/static/pythonds/Recursion/StackFramesImplementingRecursion.html) and also pasted the code above. The tutorial says the function is recursive but I don't see a recursive call anywhere, just a while loop. What am I missing?
2014/11/12
[ "https://Stackoverflow.com/questions/26897208", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3448561/" ]
You are right that this particular function is **not** recursive. However, the context is, that on the previous slide there was a recursive function, and in this one they want to show a glimpse of how it *behaves* internally. They later say: > > The previous example [i.e. the one in question - B.] gives us some insight into how Python implements a recursive function call. > > > So, yes, the title is misleading, it should be rather *Expanding a recursive function* or *Imitating recursive function behavior with a stack* or something like this. One may say that this function employs a recursive approach/strategy in some sense, to the problem being solved, but is not recursive itself.
Because you're using a stack structure. If you consider how function calling is implemented, recursion is essentially an easy way to get the compiler to manage a stack of invocations for you. This function does all the stack handling manually, but it is still conceptually a recursive function, just one where the stack management is done manually instead of letting the compiler do it.
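For contrast, a sketch of what the genuinely recursive version of `toStr` looks like — the interpreter's call stack takes over the role of the explicit `rStack` (this is a reconstruction, not the tutorial's exact code):

```python
def to_str(n, base):
    convert_string = "0123456789ABCDEF"
    if n < base:
        return convert_string[n]
    # The recursive call plays the part of pushing onto the stack;
    # returning from it unwinds the "stack" in the right order.
    return to_str(n // base, base) + convert_string[n % base]

print(to_str(1345, 2))   # 10101000001
```

Comparing the two side by side makes the point of the tutorial slide: the manual stack version does explicitly what recursion does implicitly.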
12,170
15,196,321
My project is to identify sentiment, either positive or negative (sentiment analysis), in Arabic. To do this task I used NLTK and Python. When I enter tweets in Arabic, an error occurs:

```
>>> pos_tweets = [(' أساند كل عون أمن شريف', 'positive'), ('ما أحلى الثورة التونسية', 'positive'), ('أجمل طفل في العالم', 'positive'), ('الشعب يحرس', 'positive'), ('ثورة شعبنا هي ثورة الكـــرامة وثـــورة الأحــــرار', 'positive')]

Unsupported characters in input
```

How can I solve this problem?
2013/03/04
[ "https://Stackoverflow.com/questions/15196321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2048995/" ]
You're on the right track with your `delete_model` method. When the django admin performs an action on multiple objects at once it uses the [update function](https://docs.djangoproject.com/en/dev/topics/db/queries/#updating-multiple-objects-at-once). However, as you see in the docs these actions are performed at the database level only using SQL. You need to add your `delete_model` method in as a [custom action](https://docs.djangoproject.com/en/dev/ref/contrib/admin/actions/) in the django admin. ``` def delete_model(modeladmin, request, queryset): for obj in queryset: filename=obj.profile_name+".xml" os.remove(os.path.join(obj.type,filename)) obj.delete() ``` Then add your function to your modeladmin - ``` class profilesAdmin(admin.ModelAdmin): list_display = ["type","username","domain_name"] actions = [delete_model] ```
Your method should be

```
class profilesAdmin(admin.ModelAdmin):
    #...
    def _profile_delete(self, sender, instance, **kwargs):
        # do something

    def delete_model(self, request, object):
        # do something
```

You should add a reference to the current object as the first argument in every method signature (usually called `self`). Also, `delete_model` should be implemented as a method.
12,173
51,861,677
I am trying to solve the problem called [R2](https://open.kattis.com/problems/r2) on Kattis, but for some reason, while the program (written in Python) runs in IDLE, I am met with a runtime error on Kattis, with the judgement being a ValueError. Here's my code:

```
R1 = int(input('input R1 '))
S = int(input('input S '))
R2 = (S*2)-R1
print(R2)
```
2018/08/15
[ "https://Stackoverflow.com/questions/51861677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10216731/" ]
``` nums = input().split(' ') r2 = 2*int(nums[1]) - int(nums[0]) print(r2) ``` The problem states that the two numbers will be input on a single line. You are attempting to capture two numbers input on two separate lines by calling `input` twice.
Darrahts pointed one problem out. The second problem is: `input([prompt])` writes the prompt to standard output, but you should only write your solution to standard output.
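Combining both answers — read the two numbers from one line, print only the result — a sketch (the helper name is mine; the sample values simply follow the R2 = 2·S − R1 formula from the question):

```python
def solve(line):
    # The judge supplies R1 and S on a single line, separated by a space.
    r1, s = map(int, line.split())
    # S is the mean of the two ring numbers, so R2 = 2*S - R1.
    return 2 * s - r1

print(solve("11 15"))   # 19
```

In the actual submission this would be driven by `print(solve(input()))` with no prompt string, since Kattis compares standard output exactly.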
12,181
30,205,473
I have the following JSON structure. I am attempting to extract the following information from the "brow\_eventdetails" section:

* ATime
* SBTime
* CTime

My question is: is there any easy way to do this without using a regular expression? In other words, is this a nested JSON format that I can extract by some means using Python?

```
{
  "AppName": "undefined",
  "Event": "browser information event",
  "Message": "brow_eventdetails:{\"Message\":\"for https://mysite.myspace.com/display/CORE/mydetails took too long (821 ms : ATime: 5 ms, SBTime: 391 ms, CTime: 425 ms), and exceeded threshold of 5 ms\",\"Title\":\"mydetails My Work Details\",\"Host\":\"nzmyserver.ad.mydomain.com\",\"Page URL\":\"https://nzmyserver.mydomain.com/display/CORE/mydetails\",\"PL\":821,\"ATime\":5,\"SBTime\":391,\"CTime\":425}",
  "Severity": "warn",
  "UserInfo": "General Info"
}
```

The program that I use is given below.

```
with open(fname, 'r+') as f:
    json_data = json.load(f)
    message = json_data['Message']
    nt = message.split('ATime')[1].strip().split(':')[1].split(',')[0]
    bt = message.split('SBTime')[1].strip().split(':')[1].split('\s')[0])
    st = message.split('CTime')[1].strip().split(':')[1].split('\s')[0])
    json_data["ATime"] = bt
    json_data["SBTime"] = st
    json_data["CTime"] = nt
    f.seek(0)
    json.dump(json_data,f,ensure_ascii=True)
```

There are some issues with this program. The first one is extracting ATime, SBTime and CTime. These values are repeated. I want to extract just the numeric values, 5, 391 and 425; I don't want the `ms` that follows them. How can I achieve this?

If I were to update the program to use `json.loads()` as below:

```
with open(fname, 'r+') as f:
    json_data = json.load(f)
    message = json_data['Message']
    message_data = json.loads(message)
    f.seek(0)
    json.dump(json_data,f,ensure_ascii=True)
```

I get

```
ValueError: No JSON object could be decoded
```
2015/05/13
[ "https://Stackoverflow.com/questions/30205473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/316082/" ]
You need to parse the JSON string stored in `json_data['Message']` again, then just access the desired values. One way to do it:

```
# the value of `Message` isn't valid JSON by itself:
# discard the "brow_eventdetails:" prefix, then parse the rest with json again
brow_eventdetails = json.loads(json_data['Message'].replace('brow_eventdetails:', ''))

brow_eventdetails['ATime']
Out[6]: 5

brow_eventdetails['SBTime']
Out[7]: 391

brow_eventdetails['CTime']
Out[8]: 425
...
```
Parse this string value using json.loads as you would with every other string that contains JSON.
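A runnable version of the approach from both answers, with the sample `Message` shortened but keeping the shape from the question:

```python
import json

# Shortened sample keeping the structure from the question.
record = {
    "Event": "browser information event",
    "Message": 'brow_eventdetails:{"Message":"for /display/CORE/mydetails took too long","ATime":5,"SBTime":391,"CTime":425}',
}

# Strip the non-JSON prefix, then parse the remainder.
details = json.loads(record["Message"].replace("brow_eventdetails:", "", 1))

# Copy the numeric values back onto the top-level record,
# as the question's split-based code tries to do.
for key in ("ATime", "SBTime", "CTime"):
    record[key] = details[key]

print(record["ATime"], record["SBTime"], record["CTime"])
```

Because the values are real JSON numbers, no string surgery around `ms` is needed at all.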
12,182
34,818,960
I need to highlight a specific word in a text within a tkinter frame. In order to find the word, I put a marker around it, as in HTML. So in a text like "hello i'm in the |house|" I want to highlight the word "house". My frame is defined like this: `class FrameCodage(Frame): self.t2Codage = Text(self, height=20, width=50)` and I insert my text with this code: `fenetre.fCodage.t2Codage.insert(END, res)`, res being a variable containing my text. I saw this code in another post:

```
class CustomText(tk.Text):
    '''A text widget with a new method, highlight_pattern()

    example:

    text = CustomText()
    text.tag_configure("red", foreground="#ff0000")
    text.highlight_pattern("this should be red", "red")

    The highlight_pattern method is a simplified python
    version of the tcl code at http://wiki.tcl.tk/3246
    '''
    def __init__(self, *args, **kwargs):
        tk.Text.__init__(self, *args, **kwargs)

    def highlight_pattern(self, pattern, tag, start="1.0",
                          end="end", regexp=False):
        '''Apply the given tag to all text that matches the given pattern

        If 'regexp' is set to True, pattern will be treated as a regular
        expression.
        '''
        start = self.index(start)
        end = self.index(end)
        self.mark_set("matchStart", start)
        self.mark_set("matchEnd", start)
        self.mark_set("searchLimit", end)

        count = tk.IntVar()
        while True:
            index = self.search(pattern, "matchEnd", "searchLimit",
                                count=count, regexp=regexp)
            if index == "":
                break
            self.mark_set("matchStart", index)
            self.mark_set("matchEnd", "%s+%sc" % (index, count.get()))
            self.tag_add(tag, "matchStart", "matchEnd")
```

But there are a few things that I don't understand: how can I apply this function to my case? When do I call this function? What are the pattern and the tag in my case? I'm a beginner with Tkinter, so don't hesitate to explain this code to me, or suggest another approach.
2016/01/15
[ "https://Stackoverflow.com/questions/34818960", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5464538/" ]
Instead of this: ``` class FrameCodage(Frame): self.t2Codage = Text(self, height=20, width=50) ``` ... do this: ``` class FrameCodage(Frame): self.t2Codage = CustomText(self, height=20, width=50) ``` Next, create a "highlight" tag, and configure it however you want: ``` self.t2Codage.tag_configure("highlight", foreground="red") ``` Finally, you can call the `highlight_pattern` method as if it were a standard method, passing the tag name you configured: ``` self.t2Codage.highlight_pattern(r"\|.*?\|", "highlight", regexp=True) ```
Below is widget I created to deal with this, hope it helps. ``` try: import tkinter as tk except ImportError: import Tkinter as tk class code_editor(tk.Text): def __init__(self, parent, case_insensetive = True, current_line_colour = '', word_end_at = r""" .,{}[]()=+-*/\|<>%""", tags = {}, *args, **kwargs): tk.Text.__init__(self, *args, **kwargs) self.bind("<KeyRelease>", lambda e: self.highlight()) self.case_insensetive = case_insensetive self.highlight_current_line = current_line_colour != '' self.word_end = word_end_at self.tags = tags if self.case_insensetive: for tag in self.tags: self.tags[tag]['words'] = [word.lower() for word in self.tags[tag]['words']] #loops through the syntax dictionary to creat tags for each type for tag in self.tags: self.tag_config(tag, **self.tags[tag]['style']) if self.highlight_current_line: self.tag_configure("current_line", background = current_line_colour) self.tag_add("current_line", "insert linestart", "insert lineend+1c") self.tag_raise("sel") #find what is the last word thats being typed. def last_word(self): line, last = self.index(tk.INSERT).split('.') last=int(last) #this limit issues when user is a fast typer last_char = self.get(f'{line}.{int(last)-1}', f'{line}.{last}') while last_char in self.word_end and last > 0: last-=1 last_char = self.get(f'{line}.{int(last)-1}', f'{line}.{last}') first = int(last) while True: first-=1 if first<0: break if self.get(f"{line}.{first}", f"{line}.{first+1}") in self.word_end: break return {'word': self.get(f"{line}.{first+1}", f"{line}.{last}"), 'first': f"{line}.{first+1}", 'last': f"{line}.{last}"} #highlight the last word if its a syntax, See: syntax dictionary on the top. #this runs on every key release which is why it fails when the user is too fast. 
#it also highlights the current line def highlight(self): if self.highlight_current_line: self.tag_remove("current_line", 1.0, "end") self.tag_add("current_line", "insert linestart", "insert lineend+1c") lastword = self.last_word() wrd = lastword['word'].lower() if self.case_insensetive else lastword['word'] for tag in self.tags: if wrd in self.tags[tag]['words']: self.tag_add(tag, lastword['first'], lastword['last']) else: self.tag_remove(tag, lastword['first'], lastword['last']) self.tag_raise("sel") #### example #### if __name__ == '__main__': # from pyFilename import code_editor ms = tk.Tk() example_text = code_editor( parent = ms, case_insensetive = True, #True by default. current_line_colour = 'grey10', #'' by default which will not highlight the current line. word_end_at = r""" .,{}[]()=+-*/\|<>%""", #<< by default, this will till the class where is word ending. tags = {#'SomeTagName': {'style': {'someStyle': 'someValue', ... etc}, 'words': ['word1', 'word2' ... etc]}} this tells it to apply this style to these words "failSynonyms": {'style': {'foreground': 'red', 'font': 'helvetica 8'}, 'words': ['fail', 'bad']}, "passSynonyms":{'style': {'foreground': 'green', 'font': 'helvetica 12'}, 'words': ['Pass', 'ok']}, "sqlSyntax":{'style': {'foreground': 'blue', 'font': 'italic'}, 'words': ['select', 'from']}, }, font='helvetica 10 bold', #Sandard tkinter text arguments background = 'black', #Sandard tkinter text arguments foreground = 'white' #Sandard tkinter text arguments ) example_text.pack() ms.mainloop() ```
12,183
14,670,768
Hi, I'm new to Django and Python. I want to extend the Django `User` model and add a `create_user` method to a model, then call this `create_user` method from a view. However, I got an error message. My model:

```
from django.db import models
from django.contrib.auth.models import User

class Basic(models.Model):
    user = models.OneToOneField(User)
    gender = models.CharField(max_length = 10)

    def create_ck_user(acc_type, fb_id,fb_token):
        # "Create a user and insert into auth_user table"
        user = User.objects.create_user(acc_type,fb_id,fb_token)
        user.save()

class External(models.Model):
    user = models.OneToOneField(Basic)
    external_id = models.IntegerField()
    locale = models.CharField(max_length = 40)
    token = models.CharField(max_length = 250)
```

and in my view, I did something like this:

```
Basic.create_ck_user('acc_type' = 'fb', 'fb_id' = fb_id, 'fb_token' = fb_token)
```

The error shows that:

```
keyword can't be an expression (views.py, Basic.objects.create_ck_user('acc_type' = 'fb', 'fb_id' = fb_id, 'fb_token' = fb_token))
```

Edit: After adding @classmethod and changing view.py to:

```
...
if (request.method == 'POST'):
    acc_type = request.POST.get('acc_type')
    fb_id = request.POST.get('fb_id')
    fb_token = request.POST.get('fb_token')
    Basic.create_ck_user(acc_type,fb_id,fb_token)
...
```

An error message shows:

> create\_ck\_user() takes exactly 3 arguments (4 given)

I checked the error details; they include a "request" variable even though I just pass 3 variables: `acc_type`, `fb_id` and `fb_token`. Any ideas?
2013/02/03
[ "https://Stackoverflow.com/questions/14670768", "https://Stackoverflow.com", "https://Stackoverflow.com/users/204127/" ]
Try ``` Basic.create_ck_user('fb', fb_id, fb_token) ``` You don't assign to strings when you call a function/method. You assign to variables. But since you are using positional arguments in your function definition then you don't even need them. Assigning to a string will never work anyway... strings are immutable objects. Also, you want this method to be a class method. Otherwise you would need to create an instance of User before calling it. ``` # inside class definition @classmethod def create_ck_user(cls, acc_type, fb_id,fb_token): # "Create a user and insert into auth_user table" user = cls.objects.create_user(acc_type,fb_id,fb_token) user.save() ```
Just look here: <https://docs.djangoproject.com/en/dev/topics/auth/default/#creating-users> You can find a lot of answers just looking at the documentation.
12,184
30,183,795
I know the normal way to install APScheduler is "python setup.py install". But I want to embed it into my program directly, so the user doesn't need to install it when using my program.

```
class BaseScheduler(six.with_metaclass(ABCMeta)):
    _trigger_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.triggers'))
    # print(_trigger_plugins, 'ddd')
    _trigger_classes = {}
    _executor_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.executors'))
    _executor_classes = {}
    _jobstore_plugins = dict((ep.name, ep) for ep in iter_entry_points('apscheduler.jobstores'))
    _jobstore_classes = {}
    _stopped = True
```

Thanks.
2015/05/12
[ "https://Stackoverflow.com/questions/30183795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/620853/" ]
You can instantiate the triggers directly, without going through their aliases. That eliminates the need to install APScheduler or setuptools. Does this answer your question?
I found a way to work around this problem: 1. Use 'pip install apscheduler' to install it locally. 2. Go to the installed directory and cp that directory into your lib directory. 3. Use 'pip uninstall apscheduler' to remove it. 4. Make your code import apscheduler from your lib directory. 5. Done.
12,185
11,174,997
(Python 2.7) I need to print the BFS of a binary tree with a given preorder and inorder and a max length of the preorder and inorder strings. I know how it works, for example: preorder:ABCDE inorder:CBDAE max length:5 ``` A / \ B E / \ C D ``` BFS:ABECD So far I have this figured out: ``` class BinaryTree: def __init__ (self, value, parent=None): self.parent = parent self.left_child = None self.right_child = None self.value=value def setLeftChild(self, child=None): self.left_child = child if child: child.parent = self def setRightChild(self, child=None): self.right_child = child if child: child.parent = self preorder={} inorder={} print "max string length?" i=int(raw_input()) count=0 while i>count: print"insert the preorder" preorder[raw_input()]=count count=count+1 print "preorder is",sorted(preorder, key=preorder.get) count2=0 while i>count2: print"insert the inorder" inorder[raw_input()]=count2 count2=count2+1 print "inorder is",sorted(inorder, key=inorder.get) root= ``` I've figured out how to create a binary tree in Python, but the thing is I don't know how to add the values of the next children. As you can see, I already have the root and figured out how to insert the first children (left and right), but I don't know how to add the next ones.
2012/06/24
[ "https://Stackoverflow.com/questions/11174997", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1419828/" ]
I guess essentially the question is how to get all the parent-leftChild pairs and parent-rightChild pairs of the tree from the given preorder and inorder. To get the parent-leftChild pairs, you need to check: 1) if node1 is right after node2 in preorder; 2) if node2 is in front of node1 in inorder For your example preorder:ABCDE inorder:CBDAE * B is right after A in preorder and B is in front of A in inorder, thus B is the left child of A. * D is right after C in preorder, but D is also after C in inorder, thus D is not the left child of C You can use a similar trick to get all the parent-rightChild pairs
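The checks above can be folded into a full reconstruction plus a breadth-first print. This is a hedged sketch that uses plain `(value, left, right)` tuples instead of the asker's `BinaryTree` class, and it assumes node values are unique, as in the ABCDE/CBDAE example:

```python
from collections import deque

def build_tree(preorder, inorder):
    """Rebuild (value, left, right) tuples from preorder + inorder."""
    if not preorder:
        return None
    root = preorder[0]
    i = inorder.index(root)  # splits inorder into left / right subtrees
    left = build_tree(preorder[1:i + 1], inorder[:i])
    right = build_tree(preorder[i + 1:], inorder[i + 1:])
    return (root, left, right)

def bfs(tree):
    """Level-order traversal, returned as a string of node values."""
    order, queue = [], deque([tree])
    while queue:
        node = queue.popleft()
        if node is None:
            continue
        value, left, right = node
        order.append(value)
        queue.extend([left, right])
    return ''.join(order)

tree = build_tree('ABCDE', 'CBDAE')
assert bfs(tree) == 'ABECD'
```

Running it on the example reproduces the expected output `ABECD`.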
To add children to any node, just get the node that you want to add children to and call setLeftChild or setRightChild on it.
12,186
61,590,927
I have a rather simple program using `dask`: ``` import dask.array as darray import numpy as np X = np.array([[1.,2.,3.], [4.,5.,6.], [7.,8.,9.]]) arr = darray.from_array(X) arr = arr[:,0] a = darray.min(arr) b = darray.max(arr) quantiles = darray.linspace(a, b, 4) print(np.array(quantiles)) ``` Running this program results in an error like this: ``` Traceback (most recent call last): File "discretization.py", line 12, in <module> print(np.array(quantiles)) File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__ x = np.array(x) File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__ x = np.array(x) File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1341, in __array__ x = np.array(x) [Previous line repeated 325 more times] File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/core.py", line 1337, in __array__ x = self.compute() File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 166, in compute (result,) = compute(self, traverse=False, **kwargs) File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 434, in compute dsk = collections_to_dsk(collections, optimize_graph, **kwargs) File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 220, in collections_to_dsk [opt(dsk, keys, **kwargs) for opt, (dsk, keys) in groups.items()], File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/base.py", line 220, in <listcomp> [opt(dsk, keys, **kwargs) for opt, (dsk, keys) in groups.items()], File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/array/optimization.py", line 42, in optimize dsk = optimize_blockwise(dsk, keys=keys) File 
"/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/blockwise.py", line 547, in optimize_blockwise out = _optimize_blockwise(graph, keys=keys) File "/Users/zhujun/job/adf/local_training/venv/lib/python3.7/site-packages/dask/blockwise.py", line 572, in _optimize_blockwise if isinstance(layers[layer], Blockwise): File "/anaconda3/lib/python3.7/abc.py", line 139, in __instancecheck__ return _abc_instancecheck(cls, instance) RecursionError: maximum recursion depth exceeded in comparison ``` Python is version 3.7.1 and `dask` is version 2.15.0. What is wrong with this program? Thanks in advance.
2020/05/04
[ "https://Stackoverflow.com/questions/61590927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2595776/" ]
Simply use `dnf` ```sh dnf -y install gcc-toolset-9-gcc gcc-toolset-9-gcc-c++ source /opt/rh/gcc-toolset-9/enable ``` ref: <https://centos.pkgs.org/8/centos-appstream-x86_64/gcc-toolset-9-gcc-9.1.1-2.4.el8.x86_64.rpm.html> Note: `source` won't work inside a Dockerfile so prefer to use: ``` ENV PATH=/opt/rh/gcc-toolset-9/root/usr/bin:$PATH ``` or better ``` RUN dnf -y install gcc-toolset-9-gcc gcc-toolset-9-gcc-c++ RUN echo "source /opt/rh/gcc-toolset-9/enable" >> /etc/bashrc SHELL ["/bin/bash", "--login", "-c"] RUN gcc --version ```
This command works for me: ``` dnf install gcc --best --allowerasing ```
12,189
55,422,150
This is my first time using python and matplotlib and I'd like to plot data from a CSV file. The CSV file is in the form of: ``` 10/03/2018 00:00,454.95,594.86 ``` with about 4000 rows. I'd like to plot the data from the second column vs the datetime for each row and the data from the third column vs the datetime for each row, both on the same plot. This is my code so far but it's not working: ``` import matplotlib.pyplot as plt import csv import datetime import re T = [] X = [] Y = [] with open('Book2.csv','r') as csvfile: plots = csv.reader(csvfile, delimiter=',') for row in plots: datetime_format = '%d/%m/%Y %H:%M' date_time_data = datetime.datetime.strptime(row[0],datetime_format) T.append(date_time_data) X.append(float(row[1])) Y.append(float(row[2])) plt.plot(T,X, label='second column data vs datetime') plt.plot(T,Y, label='third column data vs datetime') plt.xlabel('DateTime') plt.ylabel('Data') plt.title('Interesting Graph\nCheck it out') plt.legend() plt.show() ``` Any help or guidance would be great. Many thanks! :)
2019/03/29
[ "https://Stackoverflow.com/questions/55422150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11245101/" ]
Just use promises (or callbacks) ================================ I know that everyone hates JavaScript and so these anti-idiomatic transpilers and new language "features" exist to make JavaScript look like C# and whatnot but, honestly, it's just easier to use the language the way that it was originally designed (otherwise use Go or some other language that actually behaves the way that you want - and is more performant anyway). If you must expose async/await in your application, put it at the interface rather than littering it throughout. My 2¢. I'm just going to write some pseudocode to show you how easy this can be: ``` function doItAll(jobs) { var results = []; function next() { var job = jobs.shift(); if (!job) { return Promise.resolve(results); } return makeRequest(job.url).then(function (stuff) { return updateDb().then(function (dbStuff) { results.push(dbStuff); }).then(next); }); } return next(); } function makeRequest() { return new Promise(function (resolve, reject) { var resp = http.get('...', {...}); resp.on('end', function () { // ... do whatever resolve(); }); }); } ``` Simple. Easy to read. 1:1 correspondence between what the code looks like and what's actually happening. No trying to "force" JavaScript to behave counter to the way it was designed. The longer you fight learning to understand async code, the longer it will take to understand it. Just dive in and learn to write JavaScript "the JavaScript way"! :D
Here's my updated function that works properly and synchronously, getting the data one by one and adding it to the database before moving to the next one. I have made it by customizing @coolAJ86 answer and I've marked that as the correct one but thought it would be helpful for people stumbling across this thread to see my final, working & tested version. ``` var geoApiUrl = 'https://maps.googleapis.com/maps/api/geocode/json?key=<<MY API KEY>>&address='; doItAll(allJobs) function doItAll(jobs) { var results = []; var errors = []; function nextJob() { var job = jobs.shift(); if (!job) { return Promise.resolve(results); } var friendlyAddress = geoApiUrl + encodeURIComponent(job.addressLine1 + ' ' + job.postcode); return makeRequest(friendlyAddress).then(function(result) { if((result.results[0] === undefined) || (result.results[0].geometry === undefined)){ nextJob(); } else { return knex('LOCATIONS') .returning('*') .insert({ UPRN: job.UPRN, lat: result.results[0].geometry.location.lat, lng: result.results[0].geometry.location.lng, title: job.title, postcode: job.postcode, addressLine1: job.addressLine1, theo_id: job.clientId }) .then(function(data) { // console.log('KNEX CALLBACK COMING') // console.log(data[0]) console.log(data[0]); results.push(data[0]); nextJob(); }) .catch(function(err) { console.log(err); errors.push(job); }); } }); } return nextJob(); } function makeRequest(url) { return new Promise(function(resolve, reject) { https .get(url, resp => { let data = ''; resp.on('data', chunk => { data += chunk; }); // The whole response has been received. Print out the result. resp.on('end', () => { let result = JSON.parse(data); resolve(result); }); }) .on('error', err => { console.log('Error: ' + err.message); reject(err); }); }); } ```
12,190
48,685,715
The problem is very simple: I want to call a script from a rule and I would like that rule to both: * Perform stdout and stderr redirection * Access the snakemake variables from the script(variable can be both lists and literals) If I use the `shell:` then, I can perform the I/O redirection but I cannot use the `snakemake` variable inside the script. Note: Of course it is possible to pass the variables to the script as arguments from the shell. However by doing so, the script cannot distinguish a literal and a list variable. If I instead use `script:` then, I can access my snakemake variables but I cannot perform I/O redirection and many other shell facilities. --- An example to illustrate the question: 1) Using the `shell:` ``` rule create_hdf5: input: genes_file = OUTPUT_PATH+'/{sample}/outs/genes.tsv' params: # frequencies is a list!!! frequencies = config['X_var']['freqs'] output: HDF5_OUTPUT+'/{sample}.h5' log: out = LOG_FILES+'/create_hdf5/sample_{sample}.out', err = LOG_FILES+'/create_hdf5/sample_{sample}.err' shell: 'python scripts/create_hdf5.py {input.genes_file} {params.frequencies} {output} {threads} 2> {log.err} 1> {log.out} ' ``` Problem with 1): Naturally, the python script thinks that each element in the frequencies list is a new argument. Yet, the script cannot access the `snakemake` variable. 2) Using the `script:` ``` rule create_hdf5: input: genes_file = OUTPUT_PATH+'/{sample}/outs/genes.tsv' params: # frequencies is a list!!! frequencies = config['X_var']['freqs'] output: HDF5_OUTPUT+'/{sample}.h5' log: out = LOG_FILES+'/create_hdf5/sample_{sample}.out', err = LOG_FILES+'/create_hdf5/sample_{sample}.err' script: 'scripts/create_hdf5.py' ``` Problem with 2): I can access the snakemake variable inside the script. But now I cannot use the bash facilities such as I/O redirection. I wonder if there is a way of achieving both (perhaps I am missing something from the snakemake documentation)? Thanks in advance!
2018/02/08
[ "https://Stackoverflow.com/questions/48685715", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1935611/" ]
If possible, I suggest you use the [argparse](https://docs.python.org/3/library/argparse.html) module to parse the input of your script, so that it can parse a list of arguments as such, using the `nargs="*"` option: ```python def main(): """Main function of the program.""" parser = argparse.ArgumentParser( description=__doc__, formatter_class=argparse.ArgumentDefaultsHelpFormatter) parser.add_argument( "-g", "--genes_file", required=True, help="Path to a file containing the genes.") parser.add_argument( "-o", "--output_file", required=True, help="Path to the output file.") parser.add_argument( "-f", "--frequencies", nargs="*", help="Space-separated list of frequencies.") parser.add_argument( "-t", "--threads", type=int, default=1, help="Number of threads to use.") args = parser.parse_args() # then use args.gene_file as a file name and args.frequencies as a list, etc. ``` And you would call this as follows: ```python shell: """ python scripts/create_hdf5.py \\ -g {input.genes_file} -f {params.frequencies} \\ -o {output} -t {threads} 2> {log.err} 1> {log.out} """ ```
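To see concretely that `nargs="*"` turns the space-separated expansion of `{params.frequencies}` into a real Python list, here is a small self-contained check (the file name and the values are made up for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-g', '--genes_file', required=True)
parser.add_argument('-f', '--frequencies', nargs='*', type=float)

# Simulate the shell expansion of "-f 0.1 0.25 0.5" from the Snakefile:
args = parser.parse_args(['-g', 'genes.tsv', '-f', '0.1', '0.25', '0.5'])

assert args.genes_file == 'genes.tsv'
assert args.frequencies == [0.1, 0.25, 0.5]  # parsed as a real list
```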
You can access the log filenames within the Python script with the snakemake.log variable, which is a list containing both filenames: ``` snakemake.log = [ LOG_FILES+'/create_hdf5/sample_1.out', LOG_FILES+'/create_hdf5/sample_1.err' ] ``` You can thus use this within your script to create log files for logging, e.g. ``` import logging mylogger = logging.getLogger('My logger') # create file handler fh = logging.FileHandler(snakemake.log[0]) mylogger.addHandler(fh) mylogger.error("Some error") ```
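As a stand-alone illustration of the same `FileHandler` pattern (outside Snakemake, so the log path below is a temporary file rather than `snakemake.log[0]`):

```python
import logging
import os
import tempfile

# In a real Snakemake ``script:`` block this path would be snakemake.log[0].
log_path = os.path.join(tempfile.mkdtemp(), 'sample_1.err')

mylogger = logging.getLogger('demo logger')
fh = logging.FileHandler(log_path)
mylogger.addHandler(fh)

mylogger.error('Some error')
fh.close()  # flush the record to disk before reading it back

with open(log_path) as handle:
    contents = handle.read()
assert contents.strip() == 'Some error'
```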
12,191
65,095,357
When I am adding new functions to a file, I can't import them, whether I just run the script in a terminal or launch `ipython` and try importing a function there. I have no `.pyc` files. It looks as if there is some kind of caching going on. I never actually faced such an issue even though I have been working with various projects for a while. Why could this happen? What I see is the following: I launch `ipython` and the functions that were written a long time ago by other programmers can be imported fine. If I comment them out and save the file, they still can be imported without any issues. If I write new functions, they can't be imported. The directory is a `git` directory; I cloned the repo, then a new branch was created and I switched to it. The Python version is `3.7.5`, and I am working with a virtual environment that I created some time ago, which I activated with `source activate py37`. I don't know whether it's important, but I have an empty `__init__.py` in the folder where the script is located. The code (I don't think it's relevant, but still): ``` import hail as hl import os class SeqrDataValidationError(Exception): pass # Get public properties of a class def public_class_props(cls): return {k: v for k, v in cls.__dict__.items() if k[:1] != '_'} def hello(): print('hello') ``` `public_class_props` is an old function and can be imported, but `hello` can't.
2020/12/01
[ "https://Stackoverflow.com/questions/65095357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3815432/" ]
We can use `pd.to_datetime` here with `errors='coerce'` to ignore the faulty dates. Then use the `dt.year` to calculate the difference: ``` df['date_until'] = pd.to_datetime(df['date_until'], format='%d.%m.%y', errors='coerce') df['diff_year'] = df['date_until'].dt.year - df['year'] ``` ``` year date_until diff_year 0 2010 NaT NaN 1 2011 2013-06-30 2.0 2 2011 NaT NaN 3 2015 2018-06-30 3.0 4 2020 NaT NaN ```
For everybody who is trying to replace values just like I wanted to in the first place, here is how you could solve it: ``` for i in range(len(df)): if pd.isna(df['date_until'].iloc[i]): df['date_until'].iloc[i] = f'30.06.{df["year"].iloc[i] +1}' if df['date_until'].iloc[i] == '-': df['date_until'].iloc[i] = f'30.06.{df["year"].iloc[i] +1}' ``` But @Erfan's approach is much cleaner
12,192
47,726,913
I am trying to run the following code here to save information to the database. I have seen other messages - but - it appears that the solutions are for older versions of `Python/DJango` (as they do not seem to be working on the versions I am using now: `Python 3.6.3` and `DJango 1.11.7` ``` if form.is_valid(): try: item = form.save(commit=False) item.tenantid = tenantid item.save() message = 'saving data was successful' except DatabaseError as e: message = 'Database Error: ' + str(e.message) ``` When doing so, I get an error message the error message listed below. How can I fix this so I can get the message found at the DB level printed out? ``` 'DatabaseError' object has no attribute 'message' Request Method: POST Request URL: http://127.0.0.1:8000/storeowner/edit/ Django Version: 1.11.7 Exception Type: AttributeError Exception Value: 'DatabaseError' object has no attribute 'message' Exception Location: C:\WORK\AppPython\ContractorsClubSubModuleDEVELOP\libmstr\storeowner\views.py in edit_basic_info, line 40 Python Executable: C:\WORK\Software\Python64bitv3.6\python.exe Python Version: 3.6.3 Python Path: ['C:\\WORK\\AppPython\\ContractorsClubSubModuleDEVELOP', 'C:\\WORK\\Software\\OracleInstantClient64Bit\\instantclient_12_2', 'C:\\WORK\\Software\\Python64bitv3.6\\python36.zip', 'C:\\WORK\\Software\\Python64bitv3.6\\DLLs', 'C:\\WORK\\Software\\Python64bitv3.6\\lib', 'C:\\WORK\\Software\\Python64bitv3.6', 'C:\\Users\\dgmufasa\\AppData\\Roaming\\Python\\Python36\\site-packages', 'C:\\WORK\\AppPython\\ContractorsClubSubModuleDEVELOP\\libintgr', 'C:\\WORK\\AppPython\\ContractorsClubSubModuleDEVELOP\\libmstr', 'C:\\WORK\\AppPython\\ContractorsClubSubModuleDEVELOP\\libtrans', 'C:\\WORK\\AppPython\\ContractorsClubBackofficeCode\\libintgr', 'C:\\WORK\\AppPython\\ContractorsClubBackofficeCode\\libmstr', 'C:\\WORK\\TRASH\\tempforcustomer\\tempforcustomer\\libtempmstr', 'C:\\WORK\\AppPython\\ContractorsClubBackofficeCode\\libtrans', 
'C:\\WORK\\Software\\Python64bitv3.6\\lib\\site-packages', 'C:\\WORK\\Software\\Python64bitv3.6\\lib\\site-packages\\django-1.11.7-py3.6.egg', 'C:\\WORK\\Software\\Python64bitv3.6\\lib\\site-packages\\pytz-2017.3-py3.6.egg'] Server time: Sat, 9 Dec 2017 08:42:49 +0000 ```
2017/12/09
[ "https://Stackoverflow.com/questions/47726913", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2707727/" ]
Just change ``` str(e.message) ``` to ``` str(e) ```
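Since Python 3 removed the `.message` attribute from exceptions, `str(e)` is the portable way to get the text. A quick stand-alone check (the `DatabaseError` here is a local stand-in, not the real `django.db.DatabaseError`, and the error text is invented):

```python
class DatabaseError(Exception):
    """Local stand-in for django.db.DatabaseError."""

try:
    raise DatabaseError('ORA-00001: unique constraint violated')
except DatabaseError as e:
    message = 'Database Error: ' + str(e)   # works on Python 3
    has_message_attr = hasattr(e, 'message')

assert message == 'Database Error: ORA-00001: unique constraint violated'
assert has_message_attr is False  # .message is gone in Python 3
```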
**Some change**: replace `str(e.message)` with `HttpResponse(e.message)`
12,193
60,543,957
I have a project folder with different cloud function folders, e.g. ``` Project_Folder -Cloud-Function-Folder1 -main.py -requirements.txt -cloudbuild.yaml -Cloud-Function-Folder2 -main.py -requirements.txt -cloudbuild.yaml -Cloud-Function-Folder3 -main.py -requirements.txt -cloudbuild.yaml --------- and so on! ``` What I have right now: I push the code one by one from each Cloud Function folder to its Source Repository (separate repos for each function folder). Each repo has a trigger enabled which triggers the Cloud Build and then deploys the function. The cloudbuild.yaml file I have looks like this: ``` steps: - name: 'python:3.7' entrypoint: 'bash' args: - '-c' - | pip3 install -r requirements.txt pytest - name: 'gcr.io/cloud-builders/gcloud' args: - functions - deploy - Function - --runtime=python37 - --source=. - --entry-point=function_main - --trigger-topic=Function - --region=europe-west3 ``` Now, what I would like to do is make a single source repo, and whenever I change the code of one cloud function and push it, only that function gets deployed while the rest remain as before. --- Update ------ I have also tried something like this below, but it also deploys all the functions at the same time even though I am working on a single function.
``` Project_Folder -Cloud-Function-Folder1 -main.py -requirements.txt -Cloud-Function-Folder2 -main.py -requirements.txt -Cloud-Function-Folder3 -main.py -requirements.txt -cloudbuild.yaml -requirements.txt ``` cloudbuild.yaml file looks like this below ``` steps: - name: 'python:3.7' entrypoint: 'bash' args: - '-c' - | pip3 install -r requirements.txt pytest - name: 'gcr.io/cloud-builders/gcloud' args: - functions - deploy - Function1 - --runtime=python37 - --source=./Cloud-Function-Folder1 - --entry-point=function1_main - --trigger-topic=Function1 - --region=europe-west3 - name: 'gcr.io/cloud-builders/gcloud' args: - functions - deploy - Function2 - --runtime=python37 - --source=./Cloud-Function-Folder2 - --entry-point=function2_main - --trigger-topic=Function2 - --region=europe-west3 ```
2020/03/05
[ "https://Stackoverflow.com/questions/60543957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6508416/" ]
It's more complex and you have to play with the limits and constraints of Cloud Build. I do this: * get the directories updated since the previous commit * loop over these directories and do what I want --- **Hypothesis 1**: all the subfolders are deployed by using the same commands. So, for this I put a `cloudbuild.yaml` at the root of my directory, and not in the subfolders ``` steps: - name: 'gcr.io/cloud-builders/git' entrypoint: /bin/bash args: - -c - | # Cloud Build doesn't recover the .git file. Thus checkout the repo for this git clone --branch $BRANCH_NAME https://github.com/guillaumeblaquiere/cloudbuildtest.git /tmp/repo ; # Copy only the .git file mv /tmp/repo/.git . # Make a diff between this version and the previous one and store the result into a file git diff --name-only --diff-filter=AMDR @~..@ | grep "/" | cut -d"/" -f1 | uniq > /workspace/diff # Do what you want, by performing a loop over the directories - name: 'python:3.7' entrypoint: /bin/bash args: - -c - | for i in $$(cat /workspace/diff); do cd $$i # No strong isolation between each function, take care of conflicts!! pip3 install -r requirements.txt pytest cd .. done - name: 'gcr.io/cloud-builders/gcloud' entrypoint: /bin/bash args: - -c - | for i in $$(cat /workspace/diff); do cd $$i gcloud functions deploy ......... cd .. done ``` --- **Hypothesis 2**: the deployment is specific to each subfolder. So, for this I put a `cloudbuild.yaml` at the root of my directory, and another one in the subfolders ``` steps: - name: 'gcr.io/cloud-builders/git' entrypoint: /bin/bash args: - -c - | # Cloud Build doesn't recover the .git file. Thus checkout the repo for this git clone --branch $BRANCH_NAME https://github.com/guillaumeblaquiere/cloudbuildtest.git /tmp/repo ; # Copy only the .git file mv /tmp/repo/.git .
# Make a diff between this version and the previous one and store the result into a file git diff --name-only --diff-filter=AMDR @~..@ | grep "/" | cut -d"/" -f1 | uniq > /workspace/diff # Do what you want, by performing a loop over the directories. Here launch a Cloud Build - name: 'gcr.io/cloud-builders/gcloud' entrypoint: /bin/bash args: - -c - | for i in $$(cat /workspace/diff); do cd $$i gcloud builds submit cd .. done ``` Be careful with the [timeout](https://cloud.google.com/cloud-build/docs/build-config#timeout_2) here, because you can trigger a lot of Cloud Builds and that takes time. --- If you want to run your build manually, don't forget to add the $BRANCH_NAME as a substitution variable: ``` gcloud builds submit --substitutions=BRANCH_NAME=master ```
If you create a single source repo and deploy your code as one cloud function, you have to create a single ['cloudbuild.yaml' configuration file](https://cloud.google.com/cloud-build/docs/build-config). You need to connect this single repo to Cloud Build, then create a [build trigger](https://cloud.google.com/cloud-build/docs/running-builds/create-manage-triggers#build_trigger) and select this repo as the source. You also need to [configure deployment](https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-functions#continuous_deployment); any time you push new code to your repository, you will automatically trigger a build and deploy to Cloud Functions.
12,194
30,857,579
To be more specific: Is there a way for a python program to continue running even after it's closed (like automatic open at a certain time)? Or like a gmail notification? This is for an alarm project, and I want it to ring/open itself even if the user closes the window. Is there a way for this to happen/get scripted? If so, how? Any help would be appreciated!
2015/06/16
[ "https://Stackoverflow.com/questions/30857579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4343751/" ]
You can implement a global `operator>>` for your `Numbers` class, eg: ``` std::istream& operator>>(std::istream &strm, Number &n) { int value; strm >> value; // or however you need to read the value... n = Number(value); return strm; } ``` Or: ``` class Number { //... friend std::istream& operator>>(std::istream &strm, Number &n); }; std::istream& operator>>(std::istream &strm, Number &n) { strm >> n.value; // or however you need to read the value... return strm; } ``` Usually when you override the global streaming operators, you should implement member methods to handle the actual streaming, and then call those methods in the global operators. This allows the class to decide how best to stream itself: ``` class Number { //... void readFrom(std::istream &strm); }; void Number::readFrom(std::istream &strm) { strm >> value; // or however you need to read the value... } std::istream& operator>>(std::istream &strm, Number &n) { n.readFrom(strm); return strm; } ``` If you are not allowed to define a custom `operator>>`, you can still use the `readFrom()` approach, at least: ``` for (int i = 0; i < length; i++) { std::cout << "Enter the number value "; numbers[i].readFrom(std::cin); } ```
You will probably have to return `number` at the end of the function `getNumbersFromUser` to avoid memory leakage. Secondly, the line `cin >> number[i]` means that you are taking input into a variable of type `Number`, which is not allowed. It is only allowed for primitive data types (int, char, double, etc.) or some built-in objects like strings. To take input into your own data type, you will have to overload the stream extraction operator `>>`, or you can write a member function that reads input into the class data member(s) and call that function. For example, if your function is like ``` void Number::takeInput () { cin >> val; } ``` then go into the calling code and write `number[i].takeInput()` instead of `cin >> number[i]`.
12,199
53,201,387
I am trying to write python code that organizes n-dimensional data into bins. To do this, I'm initializing a list of empty lists using the following function, which takes an array with the number of bins for each dimension as an argument: ``` def empties(b): invB = np.flip(b, axis=0) empty = [] for b in invB: build = deepcopy(empty) empty = [] for i in range(0,b): empty.append(build) return np.array(empty).tolist() # workaround to clear list references ``` For example, for two dimensional data with 3 bins along each dimension, the following should be expected: Input: ``` empties([3,3]) ``` Output: ``` [ [[],[],[]], [[],[],[]], [[],[],[]] ] ``` I'd like to append objects to this list of lists. This is easy if the dimensions are known. If I wanted to append an object to the above list at position (1,2), I could use: ``` bins = empties([3,3]) obj = Object() bins[1][2].append(obj) ``` However, I want this to work for any unknown number of dimensions and number of bins. Therefore, I cannot use "[ ][ ][ ]..." notation to define the list index. Lists do not take lists or tuples for the index, so this is not an option. Additionally, I cannot use a numpy array because all lists can be different lengths. Is there any solution for how to set an element of a list based on a dynamic number of indices? Ideally, if lists could take a list as the index, I would do this: ``` idx = some_function_that_gets_bin_numbers() bins[idx].append(obj) ```
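For reference, the dynamic indexing wished for at the end can be emulated with a small helper. This is an illustrative sketch only (the helper name and the hard-coded bins are made up): it reduces `operator.getitem` over the index sequence, which works for any depth of nesting:

```python
from functools import reduce
import operator

def get_nested(nested_list, idx):
    """Follow a sequence of indices down into nested lists."""
    return reduce(operator.getitem, idx, nested_list)

bins = [[[], [], []], [[], [], []], [[], [], []]]  # like empties([3, 3])
idx = (1, 2)  # e.g. from some_function_that_gets_bin_numbers()
get_nested(bins, idx).append('obj')

assert bins[1][2] == ['obj']
assert bins[0][0] == []  # the other bins are untouched
```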
2018/11/08
[ "https://Stackoverflow.com/questions/53201387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4976543/" ]
You can `reduce` into an array, iterating over each subarray, and then over each split number from the subarray items: ```js const a = [ ["1.31069258855609,103.848649478524", "1.31138534529796,103.848923050526"], ["1.31213221536436,103.848328363879", "1.31288473199114,103.849575392632"] ]; const [d, e] = a.reduce((a, arr) => { arr.forEach((item) => { item.split(',').map(Number).forEach((num, i) => { if (!a[i]) a[i] = []; a[i].push(num); }); }); return a; }, []); console.log(d); console.log(e); ```
One approach to this problem would be to take advantage of the ordering of number values in your string arrays. First flatten the two arrays into a single array, and then reduce the result - per iteration of the reduce operation, split a string by `,` into it's two parts, and then put the number value for each part into the output array based on that values split index: ```js var a = [ [ "1.31069258855609,103.848649478524", "1.31138534529796,103.848923050526" ], [ "1.31213221536436,103.848328363879", "1.31288473199114,103.849575392632" ] ]; const result = a.flat().reduce((output, string) => { string.split(',') // Split any string of array item .map(Number) // Convert each string to number .forEach((item, index) => { output[index].push(item) // Map each number to corresponding output subarray }) return output }, [[],[]]) const [ d, e ] = result console.log( 'd = ', d ) console.log( 'e = ', e ) ```
12,200
39,290,932
How do I convert Python code to an **.exe** file using Microsoft Visual Studio 2015 without installing any package? Under the "Build" button, there is no option to convert to an **.exe** file.
2016/09/02
[ "https://Stackoverflow.com/questions/39290932", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6737263/" ]
Here is the complete fix for the issue: ``` private async Task<IEnumerable<byte[]>> GetAttachmentsAsByteArrayAsync(Activity activity) { var attachments = activity?.Attachments? .Where(attachment => attachment.ContentUrl != null) .Select(c => Tuple.Create(c.ContentType, c.ContentUrl)); if (attachments != null && attachments.Any()) { var contentBytes = new List<byte[]>(); using (var connectorClient = new ConnectorClient(new Uri(activity.ServiceUrl))) { var token = await (connectorClient.Credentials as MicrosoftAppCredentials).GetTokenAsync(); foreach (var content in attachments) { var uri = new Uri(content.Item2); using (var httpClient = new HttpClient()) { if (uri.Host.EndsWith("skype.com") && uri.Scheme == "https") { httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token); httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/octet-stream")); } else { httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(content.Item1)); } contentBytes.Add(await httpClient.GetByteArrayAsync(uri)); } } } return contentBytes; } return null; } ```
<https://github.com/Microsoft/BotBuilder/issues/662#issuecomment-232223965> you mean this fix? Did this work out for you?
12,208
16,002,862
I'm rather new at using Python and especially NumPy and matplotlib. Running the code below (which works fine without the `\frac{}{}` part) yields the error: ``` Normalized Distance in Chamber ($ rac{x}{L}$) ^ Expected end of text (at char 32), (line:1, col:33) ``` The math mode seems to work fine for everything else I've tried (symbols mostly, e.g. `$\mu$` works fine and displays µ) so I'm not sure what is happening here. I've looked up other people's code for examples and they just seem to use `\frac{}{}` with nothing special and it works fine. I don't know what I'm doing differently. Here is the code. Thanks for the help! ``` import numpy as np import math import matplotlib.pylab as plt [ ... bunch of calculations ... ] plt.plot(xspace[:]/L,vals[:,60]) plt.axis([0,1,0,1]) plt.xlabel('Normalized Distance in Chamber ($\frac{x}{L}$)') plt.savefig('test.eps') ``` Also, I did look up \f and it seems it's an "escape character", but I don't know what that means or why it would be active within TeX mode.
2013/04/14
[ "https://Stackoverflow.com/questions/16002862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1411736/" ]
In many languages, backslash-letter is a way to enter otherwise hard-to-type characters. In this case it's a "form feed". Examples: ``` \n — newline \r — carriage return \t — tab character \b — backspace ``` To disable that, you either need to escape the backslash itself (backslash-backslash is a backslash) ``` 'Normalized Distance in Chamber ($\\frac{x}{L}$)' ``` Or use "raw" strings where escape sequences are disabled: ``` r'Normalized Distance in Chamber ($\frac{x}{L}$)' ``` This is relevant to Python, not TeX. [Documentation on Python string literals](http://docs.python.org/2.7/reference/lexical_analysis.html#string-literals)
`"\f"` is a form-feed character in Python. TeX never sees the backslash because Python interprets the `\f` in your Python source, before the string is sent to TeX. You can either double the backslash, or make your string a raw string by using `r'Normalized Distance ... etc.'`.
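To see the escape processing in isolation (a standalone Python sketch, independent of matplotlib), compare the broken literal with its escaped and raw fixes:

```python
# "\f" collapses to a single form-feed character before TeX ever sees it
broken = "($\frac{x}{L}$)"
assert len(broken) == 14 and broken[2] == "\x0c"  # form feed, ASCII 0x0C

# Doubling the backslash or using a raw string preserves it
fixed_escaped = "($\\frac{x}{L}$)"
fixed_raw = r"($\frac{x}{L}$)"
assert fixed_escaped == fixed_raw
assert len(fixed_raw) == 15 and fixed_raw[2] == "\\"
```

Either fixed form hands matplotlib's mathtext the literal backslash it needs.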
12,211
31,438,147
It is my first post on Stack Overflow, so please go easy on me! :) I am also relatively new to Python, so bear with me :) With all that said, here is my issue: I am writing a bit of code for fun which calls an API and grabs the latest Bitcoin nonce data. I have managed to do this fine; however, now I want to be able to save the first nonce value found as a string such as Nonce1, and then re-call the API every few seconds until I get another nonce value and name it Nonce2, for example. Is this possible? My code is below.

```
from __future__ import print_function
import blocktrail

client = blocktrail.APIClient(api_key="x", api_secret="x", network="BTC", testnet=False)
address = client.address('x')
latest_block = client.block_latest()
nonce = latest_block['nonce']
print(nonce)
noncestr = str(nonce)
```

Thanks, and again, please go easy on me, I am very new to Python :)
2015/07/15
[ "https://Stackoverflow.com/questions/31438147", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5120590/" ]
Not entirely. It's right that the rules are only based on classes, and it does not matter if it is the same instance or another instance and that was basically your question. However, you made a mistake about *protected* in general. From the documentation: > > Members declared protected can be accessed only within the class itself and by inherited **and** parent classes > > > (highlight added) So, the following statements are **wrong**: 1. > > Accessing protected members from a superclass is not OK. > > > 2. > > Accessing protected members from another instance of a superclass is not OK. > > >
Yes, it's right. The visibility rules are based only on classes, instances have no impact. So if a class has access to a particular member in the same instance, it also has access to that member in other instances of that class. It's not a quirk, it's a deliberate design choice, similar to many other OO languages (I think the rules are essentially the same in C++, for instance). Once you grant that a class method is allowed to know about the implementation details of that class or some related class, it doesn't matter whether the instance it's dealing with is the one it was called on or some other instance of the class.
12,214
52,033,549
I have a CSV file as shown in the screenshot below... [![enter image description here](https://i.stack.imgur.com/WAVzb.png)](https://i.stack.imgur.com/WAVzb.png) and I want to convert the whole file into the format below in Python. [![enter image description here](https://i.stack.imgur.com/ladmh.png)](https://i.stack.imgur.com/ladmh.png)
2018/08/27
[ "https://Stackoverflow.com/questions/52033549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8966278/" ]
You can try this after reading your CSV file from the correct file path:

```
import pandas as pd

df = pd.read_csv("path/to/file", names=["Sentence", "Value"])
result = [(row["Sentence"], row["Value"]) for index, row in df.iterrows()]
print(result)
```
It's a single line using the `apply()` method of the DataFrame: `df.apply(lambda x: x.tolist(), axis=1)`. Alternatively, `df.values.tolist()` will also work.
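As a self-contained check of both suggestions (the two-column frame here is a stand-in for the file from the question):

```python
import pandas as pd

df = pd.DataFrame({"Sentence": ["foo bar", "baz qux"], "Value": [1, 2]})

# values.tolist() gives lists, not tuples
as_lists = df.values.tolist()
assert as_lists == [["foo bar", 1], ["baz qux", 2]]

# itertuples(name=None) yields plain tuples directly, and skips the
# per-row Series construction that makes iterrows comparatively slow
as_tuples = list(df.itertuples(index=False, name=None))
assert as_tuples == [("foo bar", 1), ("baz qux", 2)]
```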
12,215
2,137,619
Is it possible to implement in Scala something equivalent to the Python `yield` statement where it remembers the local state of the function where it is used and "yields" the next value each time it is called? I wanted to have something like this to convert a recursive function into an iterator. Sort of like this: ``` # this is python def foo(i): yield i if i > 0: for j in foo(i - 1): yield j for i in foo(5): print i ``` Except, `foo` may be more complex and recurs through some acyclic object graph. **Additional Edit:** Let me add a more complex example (but still simple): I can write a simple recursive function printing things as it goes along: ``` // this is Scala def printClass(clazz:Class[_], indent:String=""): Unit = { clazz match { case null => case _ => println(indent + clazz) printClass(clazz.getSuperclass, indent + " ") for (c <- clazz.getInterfaces) { printClass(c, indent + " ") } } } ``` Ideally I would like to have a library that allows me to easily change a few statements and have it work as an Iterator: ``` // this is not Scala def yieldClass(clazz:Class[_]): Iterator[Class[_]] = { clazz match { case null => case _ => sudoYield clazz for (c <- yieldClass(clazz.getSuperclass)) sudoYield c for (c <- clazz.getInterfaces; d <- yieldClasss(c)) sudoYield d } } ``` It does seem continuations allow to do that, but I just don't understand the `shift/reset` concept. Will continuation eventually make it into the main compiler and would it be possible to extract out the complexity in a library? **Edit 2:** check [Rich's answer](https://stackoverflow.com/questions/2201882/implementing-yield-yield-return-using-scala-continuations/2215182#2215182) in that other thread.
2010/01/26
[ "https://Stackoverflow.com/questions/2137619", "https://Stackoverflow.com", "https://Stackoverflow.com/users/257449/" ]
While Python generators are cool, trying to duplicate them really isn't the best way to go about in Scala. For instance, the following code does the equivalent job to what you want: ```scala def classStream(clazz: Class[_]): Stream[Class[_]] = clazz match { case null => Stream.empty case _ => ( clazz #:: classStream(clazz.getSuperclass) #::: clazz.getInterfaces.toStream.flatMap(classStream) #::: Stream.empty ) } ``` In it the stream is generated lazily, so it won't process any of the elements until asked for, which you can verify by running this: ```scala def classStream(clazz: Class[_]): Stream[Class[_]] = clazz match { case null => Stream.empty case _ => ( clazz #:: { println(clazz.toString+": super"); classStream(clazz.getSuperclass) } #::: { println(clazz.toString+": interfaces"); clazz.getInterfaces.toStream.flatMap(classStream) } #::: Stream.empty ) } ``` The result can be converted into an `Iterator` simply by calling `.iterator` on the resulting `Stream`: ``` def classIterator(clazz: Class[_]): Iterator[Class[_]] = classStream(clazz).iterator ``` The `foo` definition, using `Stream`, would be rendered thus: ```scala scala> def foo(i: Int): Stream[Int] = i #:: (if (i > 0) foo(i - 1) else Stream.empty) foo: (i: Int)Stream[Int] scala> foo(5) foreach println 5 4 3 2 1 0 ``` Another alternative would be concatenating the various iterators, taking care to not pre-compute them. Here's an example, also with debugging messages to help trace the execution: ``` def yieldClass(clazz: Class[_]): Iterator[Class[_]] = clazz match { case null => println("empty"); Iterator.empty case _ => def thisIterator = { println("self of "+clazz); Iterator(clazz) } def superIterator = { println("super of "+clazz); yieldClass(clazz.getSuperclass) } def interfacesIterator = { println("interfaces of "+clazz); clazz.getInterfaces.iterator flatMap yieldClass } thisIterator ++ superIterator ++ interfacesIterator } ``` This is pretty close to your code. 
Instead of `sudoYield`, I have definitions, and then I just concatenate them as I wish. So, while this is a non-answer, I just think you are barking up the wrong tree here. Trying to write Python in Scala is bound to be unproductive. Work harder at the Scala idioms that accomplish the same goals.
To do this in a general way, I think you need the [continuations plugin](http://blog.richdougherty.com/search/label/continuations). A naive implementation (freehand, not compiled/checked):

```
def iterator = new {
  private[this] var done = false
  // Define your yielding state here
  // This generator yields: 3, 13, 0, 1, 3, 6, 26, 27
  private[this] var state: Unit=>Int = reset {
    var x = 3
    giveItUp(x)
    x += 10
    giveItUp(x)
    x = 0
    giveItUp(x)
    List(1,2,3).foreach { i => x += i; giveItUp(x) }
    x += 20
    giveItUp(x)
    x += 1
    done = true
    x
  }

  // Well, "yield" is a keyword, so how about giveItUp?
  private[this] def giveItUp(i: Int) = shift { k: (Unit=>Int) =>
    state = k
    i
  }

  def hasNext = !done
  def next = state()
}
```

What is happening is that any call to `shift` captures the control flow from where it is called to the end of the `reset` block that it is called in. This is passed as the `k` argument into the shift function. So, in the example above, each `giveItUp(x)` returns the value of `x` (up to that point) and saves the rest of the computation in the `state` variable. It is driven from outside by the `hasNext` and `next` methods. Go gentle, this is obviously a terrible way to implement this, but it's the best I could do late at night without a compiler handy.
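For reference, the behaviour both answers are emulating is built into Python's generators; the question's `foo` and its class-walking pseudocode translate almost one-to-one (using `__bases__` as a rough stand-in for `getSuperclass`/`getInterfaces`):

```python
def foo(i):
    yield i
    if i > 0:
        # Python 3.3+: "yield from" delegates to the recursive call
        yield from foo(i - 1)

def yield_class(cls):
    """Depth-first walk over a class and all of its ancestors."""
    if cls is None:
        return
    yield cls
    for base in cls.__bases__:
        yield from yield_class(base)

assert list(foo(5)) == [5, 4, 3, 2, 1, 0]
assert list(yield_class(bool)) == [bool, int, object]  # bool -> int -> object
```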
12,216
69,978,383
everyone, beforehand - I'm a bloody but motivated developer beginner. I am currently trying to react to simple events (click on a button) in the HTML code in my Django project. Unfortunately without success ... HTML: ``` <form> {% csrf_token %} <button id="CSVDownload" type="button">CSV Download!</button> </form> ``` JavaScript: ``` <script> document.addEventListener("DOMContentLoaded", () => { const CSVDownload = document.querySelector("#CSVDownload") CSVDownload.addEventListener("click", () => { console.dir("Test") }) }) </script> ``` Do I need JavaScript for this? Or is there a way to react directly to such events in python (Django)? I am really grateful for all the support. Since I'm not really "good" yet - a simple solution would be great :) Python (Django) ``` if request.method == "POST": projectName = request.POST["projectName"] rootDomain = request.POST["rootDomain"] startURL = request.POST["startURL"] .... ``` With this, for example, I managed to react to a kind of event, i.e. when the user sends the form. The problem here, however, is - if I have several forms on one page, then I cannot distinguish which function should be carried out: / I am at a loss
2021/11/15
[ "https://Stackoverflow.com/questions/69978383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17420473/" ]
You should be able to use

```
delete a,b from `t
```

to delete in place (the backtick before `t` implies in place). Alternatively, for more flexibility, you could use the functional form:

```
![`t;();0b;`a`b]
```
The simplest way to achieve column deletion in place is using qSQL: `t:([]a:1 2 3;b:4 5 6;c:`d`e`f)` `delete a,b from `t` -- here, the backtick before `t` makes the change in place.

```
q)t
c
-
d
e
f
```
12,225
45,096,654
I recently signed up and started playing with GAE for python. I was able to get their standard/flask/hello\_world project. But, when I tried to upload a simple cron job following the instructions at <https://cloud.google.com/appengine/docs/standard/python/config/cron>, I get an "Internal Server Error". My cron.yaml ``` cron: - description: test cron url: / schedule: every 24 hours ``` The error I see ``` Do you want to continue (Y/n)? y Updating config [cron]...failed. ERROR: (gcloud.app.deploy) Server responded with code [500]: Internal Server Error. Server Error (500) A server error has occurred. ``` Have I done something wrong here or is it possible that I am not eligible to add cron jobs as a free user?
2017/07/14
[ "https://Stackoverflow.com/questions/45096654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/593644/" ]
I was having this problem as well, at about the same time. Without any changes, the deploy worked this morning, so my guess is that this was a transient server problem on Google's part.
I was just struggling with this same issue. In my case, I am using the PHP standard environment and kept receiving the '500 Internal Server Error' when I tried to publish our cron.yaml file from the Google Cloud SDK with the command: ``` gcloud app deploy cron.yaml --project {PROJECT_NAME} ``` To fix it, I did the following: 1. I removed all credentials from my gcloud client `gcloud auth revoke --all` 2. Reauthenticated within the client `gcloud auth login` 3. Published the cron.yaml `gcloud app deploy cron.yaml --project {PROJECT NAME}` From what I can tell, the permissions in my gcloud client got out of sync which is what caused the internal server error. Hopefully that's the same case for you!
12,228
74,041,729
I have a dataset like this ``` ID Year Day 1 2001 150 2 2001 140 3 2001 120 3 2002 160 3 2002 160 3 2017 75 3 2017 75 4 2017 80 ``` I would like to drop the duplicates within each year, but keep those where the year differs. The end result would be this: ``` 1 2001 150 2 2001 140 3 2001 120 3 2002 160 3 2017 75 4 2017 80 ``` I tried to do something like this in Python with my pandas DataFrame: ``` data = pd.read_csv('data.csv') data = data.drop_duplicates(subset = ['ID'], keep = 'first') ``` But this will delete duplicates between years, while I would like to keep those. How do I keep the duplicates between years?
2022/10/12
[ "https://Stackoverflow.com/questions/74041729", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12546311/" ]
Add `'Year'` to your subset: ``` data.drop_duplicates(subset = ['ID','Year'], keep = 'first') ``` ``` ID Year Day 0 1 2001 150 1 2 2001 140 2 3 2001 120 3 3 2002 160 5 3 2017 75 7 4 2017 80 ```
Another possible solution: ``` df.groupby('Year').apply(lambda g: g.drop_duplicates()).reset_index(drop=True) ``` Output: ``` ID Year Day 0 1 2001 150 1 2 2001 140 2 3 2001 120 3 3 2002 160 4 3 2017 75 5 4 2017 80 ```
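The `subset=['ID', 'Year']` fix can be checked end-to-end on the question's sample data (built inline here rather than read from a CSV):

```python
import pandas as pd

df = pd.DataFrame({
    "ID":   [1, 2, 3, 3, 3, 3, 3, 4],
    "Year": [2001, 2001, 2001, 2002, 2002, 2017, 2017, 2017],
    "Day":  [150, 140, 120, 160, 160, 75, 75, 80],
})

# A row is dropped only when both ID and Year repeat
deduped = df.drop_duplicates(subset=["ID", "Year"], keep="first")
assert len(deduped) == 6
assert deduped["Day"].tolist() == [150, 140, 120, 160, 75, 80]
```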
12,229
57,015,932
I'm developing a GUI for a multi-robot system using ROS, but I'm stuck on the last thing I want in my interface: embedding the RVIZ, GMAPPING or another screen in my application. I already put a terminal in the interface, but I can't figure out how to add an external application window to my app. I know that PyQt5 has createWindowContainer, which uses the window ID to dock an external application, but I didn't find any example to help me with that. If possible, I would like to drag and drop an external window inside a tabbed frame in my application. But, if this is not possible or is too hard, I'm fine with only opening the window inside a tabbed frame after the click of a button. I already tried to open the window similar to the terminal approach (see the code below), but the RVIZ window opens outside of my app. I already tried to translate the [attaching/detaching code](https://stackoverflow.com/questions/54388685/issues-when-attaching-and-detaching-external-app-from-qdockwidget) to Linux using the wmctrl command, but it didn't work either. See [my code here](https://pastebin.com/kWtKb4UN).
I also already tried the [rviz Python Tutorial](http://docs.ros.org/indigo/api/rviz_python_tutorial/html/) but I'm receiving this error:

```
Traceback (most recent call last):
  File "rvizTutorial.py", line 23, in <module>
    import rviz
  File "/opt/ros/indigo/lib/python2.7/dist-packages/rviz/__init__.py", line 19, in <module>
    import librviz_shiboken
ImportError: No module named librviz_shiboken
```

```
# Frame where I want to open the external window embedded
self.Simulation = QtWidgets.QTabWidget(self.Base)
self.Simulation.setGeometry(QtCore.QRect(121, 95, 940, 367))
self.Simulation.setTabPosition(QtWidgets.QTabWidget.North)
self.Simulation.setObjectName("Simulation")
self.SimulationFrame = QtWidgets.QWidget()
self.SimulationFrame.setObjectName("SimulationFrame")
self.Simulation.addTab(rviz(), "rViz")

# Simulation approach like the terminal
class rviz(QtWidgets.QWidget):
    def __init__(self, parent=None):
        super(rviz, self).__init__(parent)
        self.process = QtCore.QProcess(self)
        self.rvizProcess = QtWidgets.QWidget(self)
        layout = QtWidgets.QVBoxLayout(self)
        layout.addWidget(self.rvizProcess)
        # Works also with urxvt:
        self.process.start('rViz', [str(int(self.winId()))])
        self.setGeometry(121, 95, 940, 367)
```
2019/07/13
[ "https://Stackoverflow.com/questions/57015932", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11595480/" ]
Solved it! Just pass the style to the ScrollableTabView: `style={{ width: '100%' }}`
Solved it! Replace DefaultTabBar with ScrollableTabBar (don't forget to import it)
12,230
27,780,868
I need to use a Backpropagation Neural Network for multiclass classification purposes in my application. I have found [this code](http://danielfrg.com/blog/2013/07/03/basic-neural-network-python/#disqus_thread) and tried to adapt it to my needs. It is based on the lectures of Machine Learning on Coursera by Andrew Ng. I have tested it on the IRIS dataset and achieved good results (classification accuracy around 0.96), whereas on my real data I get terrible results. I assume there is some implementation error, because the data is very simple. But I cannot figure out what exactly the problem is. What are the parameters that it makes sense to adjust? I tried with:

* number of units in the hidden layer
* regularization parameter (lambda)
* number of iterations for the minimization function

The built-in minimization function used in this code pretty much confuses me. It is used just once, as @goncalopp has mentioned in a comment. Shouldn't it iteratively update the weights? How can that be implemented? Here is my training data (target class is in the last column):

---

```
65535, 3670, 65535, 3885, -0.73, 1
65535, 3962, 65535, 3556, -0.72, 1
65535, 3573, 65535, 3529, -0.61, 1
3758, 3123, 4117, 3173, -0.21, 0
3906, 3119, 4288, 3135, -0.28, 0
3750, 3073, 4080, 3212, -0.26, 0
65535, 3458, 65535, 3330, -0.85, 2
65535, 3315, 65535, 3306, -0.87, 2
65535, 3950, 65535, 3613, -0.84, 2
65535, 32576, 65535, 19613, -0.35, 3
65535, 16657, 65535, 16618, -0.37, 3
65535, 16657, 65535, 16618, -0.32, 3
```

The dependencies are so obvious, I think it should be so easy to classify it... But the results are terrible. I get an accuracy of 0.6 to 0.8. This is absolutely inappropriate for my application. Can someone please point out possible improvements I could make in order to achieve better results?
Here is the code: ``` import numpy as np from scipy import optimize from sklearn import cross_validation from sklearn.metrics import accuracy_score import math class NN_1HL(object): def __init__(self, reg_lambda=0, epsilon_init=0.12, hidden_layer_size=25, opti_method='TNC', maxiter=500): self.reg_lambda = reg_lambda self.epsilon_init = epsilon_init self.hidden_layer_size = hidden_layer_size self.activation_func = self.sigmoid self.activation_func_prime = self.sigmoid_prime self.method = opti_method self.maxiter = maxiter def sigmoid(self, z): return 1 / (1 + np.exp(-z)) def sigmoid_prime(self, z): sig = self.sigmoid(z) return sig * (1 - sig) def sumsqr(self, a): return np.sum(a ** 2) def rand_init(self, l_in, l_out): self.epsilon_init = (math.sqrt(6))/(math.sqrt(l_in + l_out)) return np.random.rand(l_out, l_in + 1) * 2 * self.epsilon_init - self.epsilon_init def pack_thetas(self, t1, t2): return np.concatenate((t1.reshape(-1), t2.reshape(-1))) def unpack_thetas(self, thetas, input_layer_size, hidden_layer_size, num_labels): t1_start = 0 t1_end = hidden_layer_size * (input_layer_size + 1) t1 = thetas[t1_start:t1_end].reshape((hidden_layer_size, input_layer_size + 1)) t2 = thetas[t1_end:].reshape((num_labels, hidden_layer_size + 1)) return t1, t2 def _forward(self, X, t1, t2): m = X.shape[0] ones = None if len(X.shape) == 1: ones = np.array(1).reshape(1,) else: ones = np.ones(m).reshape(m,1) # Input layer a1 = np.hstack((ones, X)) # Hidden Layer z2 = np.dot(t1, a1.T) a2 = self.activation_func(z2) a2 = np.hstack((ones, a2.T)) # Output layer z3 = np.dot(t2, a2.T) a3 = self.activation_func(z3) return a1, z2, a2, z3, a3 def function(self, thetas, input_layer_size, hidden_layer_size, num_labels, X, y, reg_lambda): t1, t2 = self.unpack_thetas(thetas, input_layer_size, hidden_layer_size, num_labels) m = X.shape[0] Y = np.eye(num_labels)[y] _, _, _, _, h = self._forward(X, t1, t2) costPositive = -Y * np.log(h).T costNegative = (1 - Y) * np.log(1 - h).T cost = costPositive - 
costNegative J = np.sum(cost) / m if reg_lambda != 0: t1f = t1[:, 1:] t2f = t2[:, 1:] reg = (self.reg_lambda / (2 * m)) * (self.sumsqr(t1f) + self.sumsqr(t2f)) J = J + reg return J def function_prime(self, thetas, input_layer_size, hidden_layer_size, num_labels, X, y, reg_lambda): t1, t2 = self.unpack_thetas(thetas, input_layer_size, hidden_layer_size, num_labels) m = X.shape[0] t1f = t1[:, 1:] t2f = t2[:, 1:] Y = np.eye(num_labels)[y] Delta1, Delta2 = 0, 0 for i, row in enumerate(X): a1, z2, a2, z3, a3 = self._forward(row, t1, t2) # Backprop d3 = a3 - Y[i, :].T d2 = np.dot(t2f.T, d3) * self.activation_func_prime(z2) Delta2 += np.dot(d3[np.newaxis].T, a2[np.newaxis]) Delta1 += np.dot(d2[np.newaxis].T, a1[np.newaxis]) Theta1_grad = (1 / m) * Delta1 Theta2_grad = (1 / m) * Delta2 if reg_lambda != 0: Theta1_grad[:, 1:] = Theta1_grad[:, 1:] + (reg_lambda / m) * t1f Theta2_grad[:, 1:] = Theta2_grad[:, 1:] + (reg_lambda / m) * t2f return self.pack_thetas(Theta1_grad, Theta2_grad) def fit(self, X, y): num_features = X.shape[0] input_layer_size = X.shape[1] num_labels = len(set(y)) theta1_0 = self.rand_init(input_layer_size, self.hidden_layer_size) theta2_0 = self.rand_init(self.hidden_layer_size, num_labels) thetas0 = self.pack_thetas(theta1_0, theta2_0) options = {'maxiter': self.maxiter} _res = optimize.minimize(self.function, thetas0, jac=self.function_prime, method=self.method, args=(input_layer_size, self.hidden_layer_size, num_labels, X, y, 0), options=options) self.t1, self.t2 = self.unpack_thetas(_res.x, input_layer_size, self.hidden_layer_size, num_labels) np.savetxt("weights_t1.txt", self.t1, newline="\n") np.savetxt("weights_t2.txt", self.t2, newline="\n") def predict(self, X): return self.predict_proba(X).argmax(0) def predict_proba(self, X): _, _, _, _, h = self._forward(X, self.t1, self.t2) return h ################## # IR data # ################## values = np.loadtxt('infrared_data.txt', delimiter=', ', usecols=[0,1,2,3,4]) targets = 
np.loadtxt('infrared_data.txt', delimiter=', ', dtype=(int), usecols=[5]) X_train, X_test, y_train, y_test = cross_validation.train_test_split(values, targets, test_size=0.4) nn = NN_1HL() nn.fit(values, targets) print("Accuracy of classification: "+str(accuracy_score(y_test, nn.predict(X_test)))) ```
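As an aside, two of the building blocks in the code above can be sanity-checked in isolation: the one-hot encoding trick `np.eye(num_labels)[y]` used in `function`, and the sigmoid (names below mirror the question's code):

```python
import numpy as np

# One-hot targets: one row per sample, with a 1 in that sample's class column
y = np.array([1, 0, 2, 3])
Y = np.eye(4)[y]
assert Y.shape == (4, 4)
assert Y[0].tolist() == [0.0, 1.0, 0.0, 0.0]

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

assert sigmoid(0) == 0.5
```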
2015/01/05
[ "https://Stackoverflow.com/questions/27780868", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4202221/" ]
Try `$eval` on `attr.target`, like:

```
var data = $scope.$eval($attrs.target)
```

Or, if your data is dynamic, you can `$watch` the attr:

```
var data = [];
$scope.$watch($attrs.target, function(newValue, oldValue){
    data = newValue;
})
```

Also correct your controller injection as below; otherwise you will get errors if you minify your source code.

```
controller: ['$scope','$element','$attrs', function($scope, $element, $attrs) {
    var data = $scope.$eval($attrs.target)
    this.foo = function(){
        console.log('click');
    };
}]
```
What I went with was removing the `target` attribute altogether, and instead broadcasting on the `$rootScope`. Directive: ``` this.foo = function(){ $rootScope.$broadcast('sortButtons', { predicate: 'foo', reverse: false }); }; ``` Controller: ``` $rootScope.$on('sortButtons', function(event, data){ $scope.filters.sort = data; }); ```
12,231
55,050,699
### Task

I have a text file with alphanumeric filenames:

```
\abc1.txt.
\abc2.txt
\abc3.txt
\abcde3.txt
\Zxcv1.txt
\mnbd2.txt
\dhtdv.txt
```

I need to extract all the `.txt` filenames from the file in Python; they may appear on the same line or on different lines of the file.

### Desired Output:

```
abc1.txt
abc2.txt
abc3.txt
abcde3.txt
Zxcv1.txt
mnbd2.txt
dhtdv.txt
```

I appreciate your help.
2019/03/07
[ "https://Stackoverflow.com/questions/55050699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11167162/" ]
Another way to do this is to wrap the `CupertinoDatePicker` in a `CupertinoTheme`:

```
CupertinoTheme(
  data: CupertinoThemeData(
    textTheme: CupertinoTextThemeData(
      dateTimePickerTextStyle: TextStyle(
        fontSize: 16,
      ),
    ),
  ),
  child: CupertinoDatePicker(
  ...
```
Finally got it, works as expected. ``` DefaultTextStyle.merge( style: TextStyle(fontSize: 20), child: CupertinoDatePicker(....) ) ```
12,232