| column | type | min | max |
|---|---|---|---|
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | listlengths | 3 | 3 |
| response_j | stringlengths | 29 | 22k |
| response_k | stringlengths | 26 | 13.4k |
| `__index_level_0__` | int64 | 0 | 17.8k |
28,506,726
I am new to the `subprocess` module in python. The documentation provided this example: ``` >>> subprocess.check_output(["echo", "Hello World!"]) b'Hello World!\n' ``` What I tried is: ``` >>> import subprocess >>> subprocess.check_output(["cd", "../tests", "ls"]) /usr/bin/cd: line 4: cd: ../tests: No such file or directory Traceback (most recent call last): File "<input>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output raise CalledProcessError(retcode, process.args, output=output) subprocess.CalledProcessError: Command '['cd', '../tests', 'ls']' returned non-zero exit status 1 ``` I am confused because this is my file structure: ``` /proj /cron test_scheduler.py /tests printy.py test1.py test2.py ... ``` These are my other attempts as well: ``` >>> subprocess.check_output(["cd", "../tests", "python", "printy.py"]) /usr/bin/cd: line 4: cd: ../tests: No such file or directory Traceback (most recent call last): File "<input>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output raise CalledProcessError(retcode, process.args, output=output) subprocess.CalledProcessError: Command '['cd', '../tests', 'python', 'printy.py']' returned non-zero exit status 1 >>> subprocess.check_output(["cd", "../tests;", "ls"]) /usr/bin/cd: line 4: cd: ../tests;: No such file or directory Traceback (most recent call last): File "<input>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/subprocess.py", line 620, in check_output raise CalledProcessError(retcode, process.args, output=output) subprocess.CalledProcessError: Command '['cd', '../tests;', 'ls']' returned non-zero exit status 1 ```
2015/02/13
[ "https://Stackoverflow.com/questions/28506726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
The relative path to the `tests` directory depends on where the script is being run from. I would suggest calling `subprocess.check_output(["pwd"])` to check where you are. Also you can't combine two commands in the same call like in your attempt with `["cd", "../tests", "python", "printy.py"]`. You'll need to make two separate calls with `["cd", "../tests"]` and `["python", "printy.py"]` respectively.
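If the goal is simply to run a command from a different directory, note that `check_output` also accepts a `cwd` argument, so the directory change does not need to be a separate command. A minimal sketch, assuming the `../tests` layout from the question:

```python
import subprocess

# Run "ls" with the working directory set to ../tests
print(subprocess.check_output(["ls"], cwd="../tests").decode())

# Likewise, run the asker's script from inside ../tests
print(subprocess.check_output(["python", "printy.py"], cwd="../tests").decode())
```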
You are missing an argument here, I think. Here is a snippet from the only Python script I ever wrote: ``` #!/usr/local/bin/python from subprocess import call ... call( "rm " + backupFolder + "*.bz2", shell=True ) ``` Please note the `shell=True` at the end of that call.
7,466
63,030,306
I have the below Python snippet ```py @click.argument('file',type=click.Path(exists=True)) ``` The above command reads from a file in the below format ```sh python3 code.py file.txt ``` The same file is processed using a function ```py def get_domains(domain_names_file): with open(domain_names_file) as f: domains = f.readlines() return domains domains = get_domains(file) ``` I don't want to read it from a file; I want to provide a domain as an argument while executing in the terminal, so the command will be ```sh python3 code.py example.com ``` How should I rewrite the code? Python version: 3.8.2
2020/07/22
[ "https://Stackoverflow.com/questions/63030306", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8474328/" ]
`click.argument` by default creates arguments that are read from the command line: ``` @click.argument('file') ``` This should create an argument that is read from the command line and made available in the `file` argument. See the docs & examples [here](https://pocoo-click.readthedocs.io/en/latest/arguments/)
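A minimal sketch of what the rewritten script might look like (the function name `main` and the echo of the domain are illustrative, not from the original code):

```python
import click

@click.command()
@click.argument('domain')
def main(domain):
    # "domain" is read straight from the command line, e.g. "example.com"
    domains = [domain]
    click.echo(domains)

if __name__ == '__main__':
    main()
```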
You may use `argparse`: ``` import argparse # set up the different arguments parser = argparse.ArgumentParser(description='Some nasty description here.') parser.add_argument("--domain", help="Domain: www.some-domain.com", required=True) args = parser.parse_args() print(args.domain) ``` And you invoke it via ``` python your-python-file.py --domain www.google.com ```
7,471
60,527,883
I have a dataset of 284 features I am trying to impute using scikit-learn, however I get an error where the number of features changes to 283: ``` imputer = SimpleImputer(missing_values = np.nan, strategy = "mean") imputer = imputer.fit(data.iloc[:,0:284]) df[:,0:284] = imputer.transform(df[:,0:284]) X = MinMaxScaler().fit_transform(df) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-150-849be5be8fcb> in <module> 1 imputer = SimpleImputer(missing_values = np.nan, strategy = "mean") 2 imputer = imputer.fit(data.iloc[:,0:284]) ----> 3 df[:,0:284] = imputer.transform(df[:,0:284]) 4 X = MinMaxScaler().fit_transform(df) ~\Anaconda3\envs\environment\lib\site-packages\sklearn\impute\_base.py in transform(self, X) 411 if X.shape[1] != statistics.shape[0]: 412 raise ValueError("X has %d features per sample, expected %d" --> 413 % (X.shape[1], self.statistics_.shape[0])) 414 415 # Delete the invalid columns if strategy is not constant ValueError: X has 283 features per sample, expected 284 ``` I don't understand how this is reaching 283 features, I assume on fitting it's finding features that have all 0s or something and deciding to drop that, but I can't find documentation which tells me how to make sure those features are still kept. I am not a programmer so not sure if I am missing something else that's obvious or if I'm better looking into another method?
2020/03/04
[ "https://Stackoverflow.com/questions/60527883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8831033/" ]
This can happen if you have a feature without any observed values; from <https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html>: 'Columns which only contained missing values at fit are discarded upon transform if strategy is not “constant”'. You can tell if this is indeed the problem by using a high 'verbose' value when constructing the imputer: `sklearn.impute.SimpleImputer(..., verbose=100, ...)` It will emit something like: `UserWarning: Deleting features without observed values: [ ... ]`
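A small sketch of how one might spot such columns up front with pandas (the toy DataFrame is purely illustrative):

```python
import numpy as np
import pandas as pd

# Toy frame: column "b" has no observed values, so SimpleImputer would drop it on transform
data = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, np.nan, np.nan]})

all_nan_cols = data.columns[data.isna().all()]
print(list(all_nan_cols))  # ['b']
```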
I was dealing with the same situation and I got my solution by adding this transformation before the SimpleImputer mean strategy ``` imputer = SimpleImputer(strategy = 'constant', fill_value = 0) df_prepared_to_mean_or_anything_else = imputer.fit_transform(previous_df) ``` What does it do? It fills every missing value with the value specified in the `fill_value` parameter.
7,473
69,011,571
Which function was used for the following plot in R? At least it looks like a predefined function to me. Edit: Okay, it seems to be Stata according to Claudio. New question: Is there anything comparable in Python/R to get this output? How is Coef. calculated? What kind of coefficient is this? [![enter image description here](https://i.stack.imgur.com/cZ1Tg.png)](https://i.stack.imgur.com/cZ1Tg.png)
2021/09/01
[ "https://Stackoverflow.com/questions/69011571", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15870842/" ]
You can simply switch over `status` inside the body of your view and assign the correct `String` and `foregroundColor` to your `Text` inside each `case. ``` struct StatusView: View { let status: Status var body: some View { switch status { case .accepted: Text("accepted") .foregroundColor(.green) case .standby: Text("standby") .foregroundColor(.yellow) case .notAllowed: Text("not allowed") .foregroundColor(.red) } } } ``` Or if you can modify `Status`, you can simply assign a `String` `rawValue` to it, then displaying the appropriate text based on its value is even easier. ``` enum Status: String { case accepted case standby case notAllowed } struct StatusView: View { let status: Status var body: some View { Text(status.rawValue) .foregroundColor(statusColor(status: status)) } private func statusColor(status: Status) -> Color { switch status { case .accepted: return .green case .standby: return .yellow case .notAllowed: return .red } } } ```
Here is an updated and refactored answer based on **David**'s answer. This way you do not need that `statusColor` function anymore, and you can access **colorValue** everywhere in your project, unlike the previous answer where it was accessible only inside `StatusView`. ``` struct StatusView: View { let status: Status var body: some View { Text(status.rawValue) .foregroundColor(status.colorValue) } } enum Status: String { case accepted case standby case notAllowed var colorValue: Color { switch self { case .accepted: return .green case .standby: return .yellow case .notAllowed: return .red } } } ```
7,474
5,373,474
I'm trying to use argparse to parse the command line arguments for a program I'm working on. Essentially, I need to support multiple positional arguments spread within the optional arguments, but cannot get argparse to work in this situation. In the actual program, I'm using a custom action (I need to store a snapshot of the namespace each time a positional argument is found), but the problem I'm having can be replicated with the `append` action: ``` >>> import argparse >>> parser = argparse.ArgumentParser() >>> parser.add_argument('-a', action='store_true') >>> parser.add_argument('-b', action='store_true') >>> parser.add_argument('input', action='append') >>> parser.parse_args(['fileone', '-a', 'filetwo', '-b', 'filethree']) usage: ipython [-h] [-a] [-b] input ipython: error: unrecognized arguments: filetwo filethree ``` I'd like this to result in the namespace `(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])`, but cannot see how to do this - if indeed it can. I can't see anything in the docs or Google which says one way or the other if this is possible, although its quite possible (likely?) I've overlooked something. Does anyone have any suggestions?
2011/03/21
[ "https://Stackoverflow.com/questions/5373474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/668807/" ]
You can't interleave the switches (i.e. `-a` and `-b`) with the positional arguments (i.e. fileone, filetwo and filethree) in this way. The switches must appear before or after the positional arguments, not in-between. Also, in order to have multiple positional arguments, you need to specify the `nargs` parameter to `add_argument`. For example: ``` parser.add_argument('input', nargs='+') ``` This tells `argparse` to consume one or more positional arguments and append them to a list. See the [argparse documentation](http://docs.python.org/library/argparse.html) for more information. With this line, the code: ``` parser.parse_args(['-a', '-b', 'fileone', 'filetwo', 'filethree']) ``` results in: ``` Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree']) ```
The 'append' action makes more sense with an optional: ``` parser.add_argument('-i', '--input',action='append') parser.parse_args(['-i','fileone', '-a', '-i','filetwo', '-b', '-i','filethree']) ``` You can interleave optionals with separate positionals ('input1 -a input2 -b input3'), but you cannot interleave optionals within one multi-item positional. But you can accomplish this with a two-step parse. ``` import argparse parser1 = argparse.ArgumentParser() parser1.add_argument('-a', action='store_true') parser1.add_argument('-b', action='store_true') parser2 = argparse.ArgumentParser() parser2.add_argument('input', nargs='*') ns, rest = parser1.parse_known_args(['fileone', '-a', 'filetwo', '-b', 'filethree']) # Namespace(a=True, b=True), ['fileone', 'filetwo', 'filethree'] ns = parser2.parse_args(rest, ns) # Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree']) ``` <http://bugs.python.org/issue14191> is a proposed patch that will do this with a single call to: ``` parser.parse_intermixed_args(['fileone', '-a', 'filetwo', '-b', 'filethree']) ```
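For reference, that patch has since landed: Python 3.7+ ships `parse_intermixed_args`, so on a current interpreter the two-step parse can be replaced by a single call. A short sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', action='store_true')
parser.add_argument('input', nargs='*')

ns = parser.parse_intermixed_args(['fileone', '-a', 'filetwo', '-b', 'filethree'])
print(ns)  # Namespace(a=True, b=True, input=['fileone', 'filetwo', 'filethree'])
```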
7,475
2,604,917
I'm trying to make a program so that I can run it through the command line with the following format: ``` ./myProgram ``` I made it executable and put `#!/usr/bin/env python` in the header, but it's giving me the following error. ``` env: python\r: No such file or directory ``` However, when I run "python myProgram", it runs fine. Can someone tell me what I'm doing wrong?
2010/04/09
[ "https://Stackoverflow.com/questions/2604917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/305105/" ]
Your line endings are wrong. Use `dos2unix` to fix them.
`dos2unix filename.py` or inside vim issue the command `:set fileformat=unix` and save.
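If `dos2unix` isn't available, a short Python snippet can do the same conversion; this is a sketch that assumes the script file is named `myProgram`, as in the question:

```python
# Rewrite the file with Unix line endings (strip the carriage returns)
with open("myProgram", "rb") as f:
    data = f.read()
with open("myProgram", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))
```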
7,480
16,903,416
I've used: ``` data = DataReader("yhoo", "yahoo", datetime.datetime(2000, 1, 1), datetime.datetime.today()) ``` in pandas (Python) to get historical data from Yahoo, but it cannot show today's price (the market has not yet closed). How can I resolve this problem? Thanks in advance.
2013/06/03
[ "https://Stackoverflow.com/questions/16903416", "https://Stackoverflow.com", "https://Stackoverflow.com/users/857130/" ]
``` import pandas import pandas.io.data import datetime import urllib2 import csv YAHOO_TODAY="http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=sd1ohgl1vl1" def get_quote_today(symbol): response = urllib2.urlopen(YAHOO_TODAY % symbol) reader = csv.reader(response, delimiter=",", quotechar='"') for row in reader: if row[0] == symbol: return row ## main ## symbol = "TSLA" history = pandas.io.data.DataReader(symbol, "yahoo", start="2014/1/1") print history.tail(2) today = datetime.date.today() df = pandas.DataFrame(index=pandas.DatetimeIndex(start=today, end=today, freq="D"), columns=["Open", "High", "Low", "Close", "Volume", "Adj Close"], dtype=float) row = get_quote_today(symbol) df.ix[0] = map(float, row[2:]) history = history.append(df) print "today is %s" % today print history.tail(2) ``` just to complete perigee's answer, it cost me quite some time to find a way to append the data. ``` Open High Low Close Volume Adj Close Date 2014-02-04 180.7 181.60 176.20 178.73 4686300 178.73 2014-02-05 178.3 180.59 169.36 174.42 7268000 174.42 today is 2014-02-06 Open High Low Close Volume Adj Close 2014-02-05 178.30 180.59 169.36 174.420 7268000 174.420 2014-02-06 176.36 180.11 176.00 178.793 5199297 178.793 ```
The simplest way to extract Indian stock price data into Python is to use the nsepy library. In case you do not have the nsepy library do the following: ``` pip install nsepy ``` The following code allows you to extract HDFC stock price for 10 years. ``` from nsepy import get_history from datetime import date dfc=get_history(symbol="HDFCBANK",start=date(2015,5,12),end=date(2020,5,18)) ``` This is so far the easiest code I have found.
7,483
2,040,616
When I run my python script I get the following warning ``` DeprecationWarning: the sets module is deprecated ``` How do I fix this?
2010/01/11
[ "https://Stackoverflow.com/questions/2040616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247873/" ]
Stop using the `sets` module, or switch to an older version of python where it's not deprecated. According to [pep-004](http://www.python.org/dev/peps/pep-0004/), `sets` is deprecated as of v2.6, replaced by the built-in [`set` and `frozenset` types](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset).
You don't need to import the `sets` module to use them, they're in the builtin namespace.
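A minimal before/after sketch of the change both answers describe:

```python
# Deprecated (triggers the DeprecationWarning):
# from sets import Set
# s = Set([1, 2, 3])

# Use the built-in types instead -- no import needed:
s = {1, 2, 3}
fs = frozenset([4, 5, 6])
print(s & {2, 3, 4}, fs)
```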
7,493
5,118,608
I'm a novice in Python and have a problem with which I would appreciate some help. The problem in short: 1. ask for a string 2. check if all letters are in a predefined list 3. if any letter is not in the list then ask for a new string, otherwise go to the next step 4. ask for a second string 5. check again whether the second string's letters are in the list 6. if any letter is not in the list then start over by asking for a new **FIRST** string Basically my main question is how to go back to a previous part of my program, and it would also help if someone would write me the base of this code. It starts like this: ``` list1=[a,b,c,d] string1=raw_input("first:") for i in string1: if i not in list1: ``` Thanks
2011/02/25
[ "https://Stackoverflow.com/questions/5118608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/634333/" ]
I suggest you start here: <http://docs.python.org/tutorial/introduction.html#first-steps-towards-programming> And continue to next chapter: <http://docs.python.org/tutorial/controlflow.html>
You have a couple of options, you could use iteration, or recursion. For this kind of problem I would go with iteration. If you don't know what iteration and recursion are, and how they work in Python then you should use the links Kugel suggested.
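A minimal sketch of the control flow the question asks about, written in the question's Python 2 style (the allowed letters and prompts are just placeholders):

```python
allowed = ['a', 'b', 'c', 'd']

def all_allowed(s):
    return all(ch in allowed for ch in s)

while True:
    first = raw_input("first:")
    if not all_allowed(first):
        continue              # bad letter: ask for the first string again
    second = raw_input("second:")
    if all_allowed(second):
        break                 # both strings are valid, we are done
    # otherwise fall through and start over with the FIRST string
```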
7,502
52,345,375
I'm new to Python and want to do the following: 1. search inside text to check if a token exists 2. the token cannot be a substring inside the text - it must match "as is" (string11111 is not string1) ``` file = "string11111 aaaaa string1 bbbbb" token = "string1" if token in file: print "NOT yay!" ``` 3. the token needs to be searched from the end position to the beginning (in reverse)
2018/09/15
[ "https://Stackoverflow.com/questions/52345375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1596023/" ]
First tokenize your `file` variable ``` tokens = file.split() ``` Then look for your token ``` if token in tokens: # do your thing ```
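A small sketch putting both requirements together -- exact-token matching plus scanning from the end of the text (variable names follow the question):

```python
text = "string11111 aaaaa string1 bbbbb"
token = "string1"

tokens = text.split()
for i in range(len(tokens) - 1, -1, -1):   # walk the tokens in reverse
    if tokens[i] == token:                 # exact match, so "string11111" is skipped
        print("found token at word position", i)
        break
else:
    print("NOT yay!")
```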
Hoping the below solution meets your need - ``` file = "string11111 aaaaa string1 bbbbb" token = "string1" token_matched = [file_token for file_token in file.split()[::-1] if token in file_token and len(token) == len(file_token)] print('Matched tokens (reverse order) - ', token_matched) if len(token_matched) > 1: # Occurs more than one time, which means the token could be a sub-string print("NOT yay!") elif len(token_matched) == 1: # Matches only one time, so it definitely could not be a sub-string print("OHH yay!") else: print("Token does not exist in file.") ```
7,505
58,603,894
I am trying to convert the below-mentioned JSON string to a Python dictionary. I am using Python 3's json package for this. Here is the code that I am using: ``` a = "[{'id': 35, 'name': 'Comedy'}, {'id': 18, 'name': 'Drama'}, {'id': 10751, 'name': 'Family'}, {'id': 10749, 'name': 'Romance'}]" b = json.loads(json.dumps(a)) print(type(b)) ``` And the output that I am getting from the above code is: > > <class 'str'> > > > I saw similar questions asked on Stack Overflow, but the solutions presented for those questions do not apply to my case.
2019/10/29
[ "https://Stackoverflow.com/questions/58603894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4720757/" ]
The JSON string that you are trying to convert is not properly formatted. Also, you only need to call `json.loads` to convert the string into a `dict` or `list`. The updated code would look like: ``` import json a = '[{"id": 35, "name": "Comedy"}, {"id": 18, "name": "Drama"}, {"id": 10751, "name": "Family"}, {"id": 10749, "name": "Romance"}]' b = json.loads(a) print(type(b)) ``` Hope this explains why you are not getting the expected results.
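If the single-quoted string cannot be changed at its source, the standard library's `ast.literal_eval` will parse it as a Python literal instead of JSON -- a short sketch:

```python
import ast

a = "[{'id': 35, 'name': 'Comedy'}, {'id': 18, 'name': 'Drama'}]"
b = ast.literal_eval(a)       # parses the Python-style (single-quoted) literal directly
print(type(b), b[0]['name'])  # <class 'list'> Comedy
```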
**JSON Array** is enclosed in `[ ]` while **JSON object** is enclosed in `{ }` > > The string in `a` is a *json array* so you can change that into a *list* only. > > > > Your *key and value should be enclosed with double quotes*, that's the requirement to use json library of python. > > `b = json.loads(a)` will give a list of dictionary objects. > > > To get further dictionary of dictionary you need to associate a key with each individual dictionary. ``` d = dict() ind = 0 for data in b: d[ind] = data ind+=1 ``` Now the output that you get will be `{0: {'id': 35, 'name': 'Comedy'}, 1: {'id': 18, 'name': 'Drama'}, 2: {'id': 10751, 'name': 'Family'}, 3: {'id': 10749, 'name': 'Romance'}}` which is a dictionary of dictionary. Thank you
7,507
44,780,952
So I'm writing a Python program that reads lines of serial data, and compares them to a dictionary of line codes to figure out which specific lines are being transmitted. I am attempting to use a Regular Expression in order to filter out the extra garbage line serial read string has on it, but I'm having a bit of an issue. Every single code in my dictionary looks like this: `T12F8B0A22**F8`. The asterisks are the two alpha numeric pieces that differentiate each string code. This is what I have so far as my regex: `'/^T12F8B0A22[A-Z0-9]{2}F8$/'` I am getting a few errors with this however. My first error, is that there are some characters are the end of the string I still need to get rid of, which is odd because I thought `$/` denoted the end of the line in regex. However when I run my code through the debugger I notice that after running through the following code: ``` #regexString contains the serial read line data regexString = re.sub('/^T12F8B0A22[A-Z0-9]{2}F8$/', '', regexString) ``` My string looks something like this: `'T12F8B0A2200F8\\r'` I need to get rid of the `\\r`. If for some reason I can't get rid of this with regex, how in python do you send specific string character through an argument? In this case I suppose it would be length - 3?
2017/06/27
[ "https://Stackoverflow.com/questions/44780952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3711832/" ]
Your problem is threefold: 1) your string contains extra `\r` (Carriage Return character) before `\n` (New Line character); this is common in Windows and in network communication protocols; it is probably best to remove any trailing whitespace from your string: ``` regexString = regexString.rstrip() ``` 2) as mentioned by Wiktor Stribiżew, your regexp is unnecessarily surrounded with `/` characters - some languages, like Perl, define regexp as a string delimited by `/` characters, but Python is not one of them; 3) your instruction using `re.sub` is actually replacing the matching part of `regexString` with an empty string - I believe this is the exact opposite of what you want (you want to **keep** the match and remove everything else, right?); that's why fixing the regexp makes things "even worse". To summarize, I think you should use this instead of your current code: ``` m = re.match('T12F8B0A22[A-Z0-9]{2}F8', regexString) regexString = m.group(0) ```
There are several ways to get rid of the "\r", but first a little analysis of your code: 1. the special character for the end is just '$', not '$/', in Python. 2. re.sub will substitute the matched pattern with a string ('' in your case), which would replace the string you actually want with an empty string, leaving you with the \\r. Possible solutions: 1. use a simple replace: ``` regexString.replace('\\r','') ``` 2. if you want to stick to regex the approach is the same ``` pattern = '\\\\r' match = re.sub(pattern, '',regexString) ``` 2.2 if you want to access the different groups use re.search ``` match = re.search('(^T12F8B0A22[A-Z0-9]{2}F8)(.*)',regexString) match.group(1) # will give you the T12... match.group(2) # gives you the \\r ```
7,508
25,518,623
I wonder why the Python magic method (`__str__`) always requires a return statement rather than a print call? ``` class test: def __init__(self): print("constructor called") def __call__(self): print("callable") def __str__(self): return "string method" obj=test() ## print constructor called obj() ### print callable print(obj) ## print string method ``` My question is why I can't use something like this inside the `__str__` method ``` def __str__(self): print("string method") ```
2014/08/27
[ "https://Stackoverflow.com/questions/25518623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2199012/" ]
This is more to enable the conversion of an object into a `str` - your users don't necessarily want all that stuff to be printed to the terminal whenever they want to do something like ``` text = str(obj_instance) ``` They want `text` to contain the result, not to have it printed to the terminal. Doing it your way, the code would effectively be this ``` text = print(obj_instance) ``` which is kind of nonsensical, because the result of print isn't typically useful and `text` won't contain the text that was passed to the `str` type. As you already commented (but have since deleted), not providing the correct type for the return value will cause an exception to be raised, for example: ``` >>> class C(object): ... def __str__(self): ... return None ... >>> str(C()) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: __str__ returned non-string (type NoneType) >>> ```
Because `__str__()` is used when you `print` the object: the user is already calling `print`, which needs the string that represents the object returned to it as a value. In the example you provided above, if `__str__` printed instead of returning, ``` print(obj) ``` would be translated into: ``` print(print("string method")) ```
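A tiny sketch of why returning (rather than printing) the text is what callers expect, reusing the question's own class:

```python
class test:
    def __str__(self):
        return "string method"

obj = test()
text = str(obj)   # works because __str__ returns a string
print(text)       # string method
print(obj)        # print() calls str() internally and prints the result
```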
7,510
56,063,686
I've just recently switched to PyTorch after getting frustrated in debugging tf and understand that it is equivalent to coding in numpy almost completely. My question is what are the permitted python aspects we can use in a PyTorch model (to be put completely on GPU) eg. if-else has to be implemented as follows in tensorflow ``` a = tf.Variable([1,2,3,4,5], dtype=tf.float32) b = tf.Variable([6,7,8,9,10], dtype=tf.float32) p = tf.placeholder(dtype=tf.float32) ps = tf.placeholder(dtype=tf.bool) li = [None]*5 li_switch = [True, False, False, True, True] for i in range(5): li[i] = tf.Variable(tf.random.normal([5])) sess = tf.Session() sess.run(tf.global_variables_initializer()) def func_0(): return tf.add(a, p) def func_1(): return tf.subtract(b, p) with tf.device('GPU:0'): my_op = tf.cond(ps, func_1, func_0) for i in range(5): print(sess.run(my_op, feed_dict={p:li[i], ps:li_switch[i]})) ``` How would the structure change in pytorch for the above code? How to place the variables and ops above on GPU and parallelize the list inputs to our graph in pytorch?
2019/05/09
[ "https://Stackoverflow.com/questions/56063686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7779411/" ]
The problem is that if it can only contain one of those words, then where it doesn't contain one of the keywords the SEARCH function will return an error. Capture that using IFERROR to set errors (values not found) to 0, and then get the MAX to find the position of the word that was found (if any). If no values are found, then the result will just be 0: ``` =MAX(INDEX(IFERROR(SEARCH({"Success","Unknown","Failed"},Q_DTL_GetAll__3[@Message]),0),)) ```
You can use the normally entered function: ``` =AGGREGATE(14,6,SEARCH({"Success","Unknown","Failed"},Q_DTL_GetAll__3[@MESSAGE]),1) ``` Your `SEARCH` is returning an array of values. In the given case: `{#VALUE!,42,#VALUE!}` So you need some way of only returning the non-error value. `AGGREGATE` can do that. This formula will return a `#NUM!` error if none of the words are present. You can handle that as you wish.
7,511
28,532,672
I have N 10-dimensional vectors where each element can have a value of 0, 1 or 2. For example, `vector v=(0,1,1,2,0,1,2,0,1,1)` is one of the vectors. Is there an algorithm (preferably in Python) that compresses these vectors into a minimum number of Cartesian products? If a perfect solution isn't possible, is there an algorithm that at least gives good compression? Example: the two "Cartesian vectors" `([1,2], 0, 1, 0, 0, 0, 1, 1, [0,1], 0])` (gives 4 vectors) and `(0, 1, 0, 2, 0, 0, [0,2], 2, 0, 1)` (gives 2 vectors) give the optimal solution for the N=6 vectors: ``` 1,0,1,0,0,0,1,1,0,0 2,0,1,0,0,0,1,1,0,0 1,0,1,0,0,0,1,1,1,0 2,0,1,0,0,0,1,1,1,0 0,1,0,2,0,0,0,2,0,1 0,1,0,2,0,0,2,2,0,1 ```
2015/02/15
[ "https://Stackoverflow.com/questions/28532672", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4570066/" ]
Another alternative way is using [.one()](http://api.jquery.com/one/) (the handler is executed at most once per element per event type), something like this, ``` $(".done p").one('click', function() { $(this).parent().attr("class", "item not-done"); $(this).parent().hide().prependTo('.list').fadeIn('.5s'); }); ```
You just need to unbind the click handler, so the following should work: ``` $(".done p").click(function() { $(this).parent().attr("class", "item not-done"); $(this).parent().hide().prependTo('.list').fadeIn('.5s'); $(this).unbind('click'); }); ```
7,512
39,191,252
I'm running Spark 1.5.1 in standalone (client) mode using Pyspark. I'm trying to start a job that seems to be memory heavy (in python that is, so that should not be part of the executor-memory setting). I'm testing on a machine with 96 cores and 128 GB of RAM. I have a master and worker running, started using the start-all.sh script in /sbin. These are the config files I use in /conf. spark-defaults.conf: ``` spark.eventLog.enabled true spark.eventLog.dir /home/kv/Spark/spark-1.5.1-bin-hadoop2.6/logs spark.serializer org.apache.spark.serializer.KryoSerializer spark.dynamicAllocation.enabled false spark.deploy. defaultCores 40 ``` spark-env.sh: ``` PARK_MASTER_IP='5.153.14.30' # Will become deprecated SPARK_MASTER_HOST='5.153.14.30' SPARK_MASTER_PORT=7079 SPARK_MASTER_WEBUI_PORT=8080 SPARK_WORKER_WEBUI_PORT=8081 ``` I'm starting my script using the following command: ``` export SPARK_MASTER=spark://5.153.14.30:7079 #"local[*]" spark-submit \ --master ${SPARK_MASTER} \ --num-executors 1 \ --driver-memory 20g \ --executor-memory 30g \ --executor-cores 40 \ --py-files code.zip \ <script> ``` Now, I'm noticing behaviour that I don't understand: * When I start my application with the settings above, I expect there to be 1 executor. However, 2 executors are started, each having 30g of memory and 40 cores. Why does spark do this? I'm trying to limit the number of cores to have more memory per core, how can I enforce this? Now my application gets killed because it uses too much memory. * When I increase `executor-cores` to over 40, my job does not get started because of not enough resources. I expect that this is because of the `defaultCores 40` setting in my spark-defaults. But is't this just as a backup for when my application does not provide a maximum number of cores? I should be able to overwrite that right? Extract from the error messages I get: ``` Lost task 1532.0 in stage 2.0 (TID 5252, 5.153.14.30): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed) at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:203) at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207) at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297) at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69) at org.apache.spark.rdd.RDD.iterator(RDD.scala:262) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297) at org.apache.spark.rdd.RDD.iterator(RDD.scala:264) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:392) at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139) ... 15 more [...] py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 111 in stage 2.0 failed 4 times, most recent failure: Lost task 111.3 in stage 2.0 (TID 5673, 5.153.14.30): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed) ```
2016/08/28
[ "https://Stackoverflow.com/questions/39191252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/696992/" ]
Check or set the value for spark.executor.instances. The default is 2, which may explain why you get 2 executors. Since your server has 96 cores, and you set defaultcores to 40, you only have room for 2 executors since 2\*40 = 80. The remaining 16 cores are insufficient for another executor and the driver also requires CPU cores.
> > I expect there to be 1 executor. However, 2 executors are started > > > I think one of the executors you see is actually the driver. So one master, one slave (2 nodes in total). You can add these configuration flags to your script: ``` --conf spark.executor.cores=8 <-- will set it 8, you probably want less --conf spark.driver.cores=8 <-- same, but for driver only ``` --- > > my job does not get started because of not enough resources. > > > I believe the container gets killed. You see, you ask for too many resources, so every container/task/core tries to take as much memory as possible, and your system simply can't give more. The container might exceed its memory limits (you should be able to see more in the logs to be certain, though).
7,515
27,798,829
I have installed PySide in my Ubuntu 12.04. When I try to use import PySide in the python console I am getting the following error. ``` import PySide Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named PySide ``` My Python Path is : ``` print sys.path ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/ubuntuone-client', '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel', '/usr/lib/python2.7/dist-packages/ubuntuone-couch', '/usr/lib/python2.7/dist-packages/ubuntuone-installer', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol'] ``` how to fix this problem ?
2015/01/06
[ "https://Stackoverflow.com/questions/27798829", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2871542/" ]
To use python 3, just follow the instructions here: <https://wiki.qt.io/PySide_Binaries_Linux> which in ubuntu 12.04 means just typing one line in the console: ``` sudo apt-get install python3-pyside ```
The latest build and install instructions for PySide are here: <http://pyside.readthedocs.org/en/latest/building/linux.html>
7,516
70,922,066
I am building a docker image to run a flask app, which is named dp-offsets for context. This flask app uses matplotlib. I have been unable to fully install matlplotlib despite including all of the necessary dependencies (i think). The code seems to be erroring on timestamp **791.9**s due to bdist\_wheel. I'm not sure why bdist\_wheel is erroring, because I install wheel before I install matplotlib. Seen below is the terminal error, my requirements.txt file, and my Dockerfile. Any help would be appreciated! **Docker File** ``` FROM python:3.7.4-alpine #Dependancies for matplotlib, pandas, and numpy RUN apk add --no-cache --update \ python3 python3-dev gcc \ gfortran musl-dev g++ \ libffi-dev openssl-dev \ libxml2 libxml2-dev \ libxslt libxslt-dev \ jpeg-dev libjpeg make \ libjpeg-turbo-dev zlib-dev RUN pip install --upgrade cython RUN pip install --upgrade pip RUN pip install --upgrade setuptools WORKDIR /dp-offsets ADD . /dp-offsets RUN pip install -r requirements.txt CMD ["python", "app_main.py"] ``` **Requirements.txt. File** ``` wheel==0.37.0 flask==2.0.1 flask_bootstrap form numpy==1.21.2 matplotlib==3.4.3 pandas==1.3.2 flask_wtf==0.15.1 wtforms==2.3.3 ``` **Error Received** ``` > [8/8] RUN pip install -r requirements.txt: #13 1.125 Collecting wheel==0.37.0 #13 1.713 Downloading wheel-0.37.0-py2.py3-none-any.whl (35 kB) #13 1.874 Collecting flask==2.0.1 #13 1.975 Downloading Flask-2.0.1-py3-none-any.whl (94 kB) #13 2.171 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.8/94.8 KB 444.0 kB/s eta 0:00:00 #13 2.348 Collecting flask_bootstrap #13 2.458 Downloading Flask-Bootstrap-3.3.7.1.tar.gz (456 kB) #13 3.130 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 456.4/456.4 KB 684.5 kB/s eta 0:00:00 #13 3.164 Preparing metadata (setup.py): started #13 3.417 Preparing metadata (setup.py): finished with status 'done' #13 3.585 Collecting form #13 3.684 Downloading form-0.0.1.tar.gz (1.4 kB) #13 3.699 Preparing metadata (setup.py): started #13 3.929 Preparing metadata (setup.py): finished with status 'done' #13 4.556 Collecting numpy==1.21.2 #13 4.641 Downloading numpy-1.21.2.zip (10.3 MB) #13 15.18 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.3/10.3 MB 974.4 kB/s eta 0:00:00 #13 15.79 Installing build dependencies: started #13 22.28 Installing build dependencies: finished with status 'done' #13 22.28 Getting requirements to build wheel: started #13 22.69 Getting requirements to build wheel: finished with status 'done' #13 22.69 Preparing metadata (pyproject.toml): started #13 23.05 Preparing metadata (pyproject.toml): finished with status 'done' #13 23.34 Collecting matplotlib==3.4.3 #13 23.43 Downloading matplotlib-3.4.3.tar.gz (37.9 MB) #13 53.17 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 37.9/37.9 MB 1.3 MB/s eta 0:00:00 #13 55.07 Preparing metadata (setup.py): started #13 298.3 Preparing metadata (setup.py): still running... #13 298.8 Preparing metadata (setup.py): finished with status 'done' #13 299.1 Collecting pandas==1.3.2 #13 299.2 Downloading pandas-1.3.2.tar.gz (4.7 MB) #13 302.7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.7/4.7 MB 1.4 MB/s eta 0:00:00 #13 303.5 Installing build dependencies: started #13 383.9 Installing build dependencies: still running... #13 446.6 Installing build dependencies: still running... #13 461.3 Installing build dependencies: finished with status 'done' #13 461.4 Getting requirements to build wheel: started #13 524.1 Getting requirements to build wheel: still running... 
#13 524.5 Getting requirements to build wheel: finished with status 'done' #13 524.5 Preparing metadata (pyproject.toml): started #13 525.2 Preparing metadata (pyproject.toml): finished with status 'done' #13 525.3 Collecting flask_wtf==0.15.1 #13 525.4 Downloading Flask_WTF-0.15.1-py2.py3-none-any.whl (13 kB) #13 525.5 Collecting wtforms==2.3.3 #13 525.6 Downloading WTForms-2.3.3-py2.py3-none-any.whl (169 kB) #13 525.7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 169.1/169.1 KB 2.0 MB/s eta 0:00:00 #13 525.9 Collecting Werkzeug>=2.0 #13 526.1 Downloading Werkzeug-2.0.2-py3-none-any.whl (288 kB) #13 526.3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 288.9/288.9 KB 1.1 MB/s eta 0:00:00 #13 526.5 Collecting Jinja2>=3.0 #13 526.6 Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB) #13 526.7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.6/133.6 KB 1.5 MB/s eta 0:00:00 #13 526.9 Collecting itsdangerous>=2.0 #13 527.0 Downloading itsdangerous-2.0.1-py3-none-any.whl (18 kB) #13 527.2 Collecting click>=7.1.2 #13 527.3 Downloading click-8.0.3-py3-none-any.whl (97 kB) #13 527.3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.5/97.5 KB 1.8 MB/s eta 0:00:00 #13 527.5 Collecting cycler>=0.10 #13 527.6 Downloading cycler-0.11.0-py3-none-any.whl (6.4 kB) #13 527.7 Collecting kiwisolver>=1.0.1 #13 527.9 Downloading kiwisolver-1.3.2.tar.gz (54 kB) #13 527.9 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.6/54.6 KB 3.0 MB/s eta 0:00:00 #13 527.9 Preparing metadata (setup.py): started #13 530.1 Preparing metadata (setup.py): finished with status 'done' #13 530.7 Collecting pillow>=6.2.0 #13 530.8 Downloading Pillow-9.0.0.tar.gz (49.5 MB) #13 569.3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 49.5/49.5 MB 1.2 MB/s eta 0:00:00 #13 570.4 Preparing metadata (setup.py): started #13 570.7 Preparing metadata (setup.py): finished with status 'done' #13 570.8 Collecting pyparsing>=2.2.1 #13 571.0 Downloading pyparsing-3.0.7-py3-none-any.whl (98 kB) #13 571.1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.0/98.0 KB 825.7 kB/s eta 0:00:00 #13 571.2 Collecting python-dateutil>=2.7 #13 571.3 Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) #13 571.6 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 247.7/247.7 KB 887.6 kB/s eta 0:00:00 #13 571.8 Collecting pytz>=2017.3 #13 572.0 Downloading pytz-2021.3-py2.py3-none-any.whl (503 kB) #13 572.5 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 503.5/503.5 KB 944.0 kB/s eta 0:00:00 #13 572.7 Collecting MarkupSafe #13 572.8 Downloading MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl (30 kB) #13 573.1 Collecting dominate #13 573.2 Downloading dominate-2.6.0-py2.py3-none-any.whl (29 kB) #13 573.5 Collecting visitor #13 573.6 Downloading visitor-0.1.3.tar.gz (3.3 kB) #13 573.6 Preparing metadata (setup.py): started #13 573.8 Preparing metadata (setup.py): finished with status 'done' #13 574.0 Collecting importlib-metadata #13 574.1 Downloading importlib_metadata-4.10.1-py3-none-any.whl (17 kB) #13 574.2 Collecting six>=1.5 #13 574.3 Downloading six-1.16.0-py2.py3-none-any.whl (11 kB) #13 574.5 Collecting typing-extensions>=3.6.4 #13 574.8 Downloading typing_extensions-4.0.1-py3-none-any.whl (22 kB) #13 575.1 Collecting zipp>=0.5 #13 575.6 Downloading zipp-3.7.0-py3-none-any.whl (5.3 kB) #13 575.6 Building wheels for collected packages: numpy, matplotlib, pandas, flask_bootstrap, form, kiwisolver, pillow, visitor #13 575.6 Building wheel for numpy (pyproject.toml): started #13 657.8 Building wheel for numpy (pyproject.toml): still running... #13 720.6 Building wheel for numpy (pyproject.toml): still running... 
#13 777.1 Building wheel for numpy (pyproject.toml): finished with status 'done' #13 777.1 Created wheel for numpy: filename=numpy-1.21.2-cp37-cp37m-linux_x86_64.whl size=21275305 sha256=82ac227d9585fb707983648e7ab6b8ff47b953a1d5d687409339ad505a8467b4 #13 777.1 Stored in directory: /root/.cache/pip/wheels/6b/8c/55/e7f441ea696acba3eba6931857214e3b33dcfe1e971b663032 #13 777.1 Building wheel for matplotlib (setup.py): started #13 791.9 Building wheel for matplotlib (setup.py): finished with status 'error' #13 791.9 error: subprocess-exited-with-error #13 791.9 #13 791.9 × python setup.py bdist_wheel did not run successfully. #13 791.9 │ exit code: 1 #13 791.9 ╰─> [861 lines of output] #13 791.9 #13 791.9 Edit setup.cfg to change the build options; suppress output with --quiet. #13 791.9 #13 791.9 BUILDING MATPLOTLIB #13 791.9 matplotlib: yes [3.4.3] #13 791.9 python: yes [3.7.4 (default, Aug 21 2019, 00:19:59) [GCC 8.3.0]] #13 791.9 platform: yes [linux] #13 791.9 tests: no [skipping due to configuration] #13 791.9 macosx: no [Mac OS-X only] ``` **The Error continues for a bit longer. Below is the final output** ``` #13 1427.6 UPDATING build/lib.linux-x86_64-3.7/matplotlib/_version.py #13 1427.6 set build/lib.linux-x86_64-3.7/matplotlib/_version.py to '3.4.3' #13 1427.6 running build_ext #13 1427.6 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/usr/local/include/python3.7m -c /tmp/tmpzzp8tz7k.cpp -o tmp/t mpzzp8tz7k.o -fvisibility=hidden #13 1427.6 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/usr/local/include/python3.7m -c /tmp/tmpqr5gbp_k.cpp -o tmp/t mpqr5gbp_k.o -fvisibility-inlines-hidden #13 1427.6 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/usr/local/include/python3.7m -c /tmp/tmptx14kry1.cpp -o tmp/t mptx14kry1.o -flto #13 1427.6 error: Failed to download any of the following: ['http://www.qhull.org/download/qhull-2020-src-8.0.2.tgz']. Please download one of these urls and extract it into 'bui ld/' at the top-level of the source repository. #13 1427.6 [end of output] #13 1427.6 #13 1427.6 note: This error originates from a subprocess, and is likely not a problem with pip. #13 1427.7 error: legacy-install-failure #13 1427.7 #13 1427.7 × Encountered error while trying to install package. #13 1427.7 ╰─> matplotlib #13 1427.7 #13 1427.7 note: This is an issue with the package mentioned above, not pip. #13 1427.7 hint: See above for output from the failure. ------ executor failed running [/bin/sh -c pip install -r requirements.txt]: exit code: 1 ```
2022/01/31
[ "https://Stackoverflow.com/questions/70922066", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18075955/" ]
You have to rearrange the whole thing into your required format. To do that you have to access the data at its specific position. You did something awkward there in your code at the second array base. You should specify your section somewhere else, maybe on the next line. To access it you have to write: ``` obj.name //For the school name ```
Objects are defined like this: {key1: value1, key2: value2} Keys are the identifiers of your values. When you are assigning section: 'A', section: 'B', section: 'C', you are using the same key, so the previous values are overwritten and only the last is stored. That's why it logs 'C'. You can try changing the keys or maybe write it like this: section:['A', 'B', 'C']
7,518
70,186,395
I have uploaded my Databricks notebooks to a repo and replaced %run statements with import statements using the newly generally available Databricks features (Repo integration and Python import): <https://databricks.com/blog/2021/10/07/databricks-repos-is-now-generally-available.html> But it seems it's not working. I already activated the repo integration option in the Admin panel, but I get this error > > ModuleNotFoundError: No module named 'petitions' > > > For simplicity I moved all Python files to the same directory. I get the error in the procesado notebook [![Repo structure1](https://i.stack.imgur.com/Y6lv6.png)
2021/12/01
[ "https://Stackoverflow.com/questions/70186395", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1075163/" ]
You can use a hierarchical query and `CONNECT_BY_ROOT`. Either starting at the root of the hierarchy and working down: ```sql SELECT id, CONNECT_BY_ROOT(id) AS root_id FROM entry WHERE id IN (6, 3) START WITH parent_id IS NULL CONNECT BY PRIOR id = parent_id; ``` Or, from the entry back up to the root: ```sql SELECT CONNECT_BY_ROOT(id) AS id, id AS root_id FROM entry WHERE parent_id IS NULL START WITH id IN (6, 3) CONNECT BY PRIOR parent_id = id; ``` Which, for the sample data: ```sql CREATE TABLE entry( id, parent_id ) AS SELECT 1, NULL FROM DUAL UNION ALL SELECT 2, 1 FROM DUAL UNION ALL SELECT 3, 2 FROM DUAL UNION ALL SELECT 4, NULL FROM DUAL UNION ALL SELECT 5, 4 FROM DUAL UNION ALL SELECT 6, 5 FROM DUAL UNION ALL SELECT 7, 6 FROM DUAL ``` Both output: > > > > > | ID | ROOT\_ID | > | --- | --- | > | 3 | 1 | > | 6 | 4 | > > > *db<>fiddle [here](https://dbfiddle.uk/?rdbms=oracle_18&fiddle=4a4ee39e30707104bb256336bd77f50f)*
You can use recursive CTE to walk the graph and find the initial parent. For example: ``` with n (starting_id, current_id, parent_id, v) as ( select id, id, parent_id, 0 from entry where id in (6, 3) union all select n.starting_id, e.id, e.parent_id, n.v - 1 from n join entry e on e.id = n.parent_id ) select starting_id, current_id as initial_id from ( select n.*, row_number() over(partition by starting_id order by v) as rn from n ) x where rn = 1 ``` Result: ``` STARTING_ID INITIAL_ID ------------ ---------- 3 1 6 4 ``` See running example at [db<>fiddle](https://dbfiddle.uk/?rdbms=oracle_18&fiddle=81d2ae02d0006ecdbfb87de91b36d857).
7,519
18,541,648
I have a table in a database which contains query statements in the columns. I need to update this. Is there any way I can update it? It seems to be giving me an error: ``` UPDATE Items SET Query = 'SELECT isnull((sum(OrigDocAmt) ),0) amount from AP where Acct in (1234) and Status='O' and Doc in ('CK') {SLLocCode}' WHERE ID='111' ``` It gives me an error because it considers 'O' as a separate string. Is there a way I can do this like in Python: "What's up"? Not sure why it was set up this way, but it was done so by my predecessor. Please help.
2013/08/30
[ "https://Stackoverflow.com/questions/18541648", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2684009/" ]
You need to use 2 single quotes ('') around each literal value ``` 'SELECT isnull((sum(OrigDocAmt) ),0) amount from AP where Acct in (1234) and Status=''O'' and Doc in (''CK'') {SLLocCode}' ```
You need to escape the single quotes in the query string. In SQL Server, you just double the single quotes: ``` UPDATE Items SET Query = 'SELECT isnull((sum(OrigDocAmt) ),0) amount from AP where Acct in (1234) and Status=''O'' and Doc in (''CK'') {SLLocCode}' WHERE ID = '111'; ```
7,520
49,781,303
I'm trying to write in a microsoft azure jupyter python notebook and I am receiving an error when I try to import the Tweepy module. Please take a look at the simple code below and let me know your thoughts. Thank you. I'm working on a chromebook if that helps, but I'm not sure it's relevant. ``` import tweepy as tw tw.__version__ ``` Here's what comes up: ``` ModuleNotFoundError Traceback (most recent call last) <ipython-input-1-453b41c5a7f9> in <module>() ----> 1 import tweepy as tw 2 tw.__version__ ModuleNotFoundError: No module named 'tweepy' ```
2018/04/11
[ "https://Stackoverflow.com/questions/49781303", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9631959/" ]
From the [Azure Notebooks docs](https://notebooks.azure.com/help/jupyter-notebooks/package-installation/python): > > The simplest way to install packages is to do it from within a Jupyter Python notebook. Inside of the notebook your path will be setup to have both pip and conda on it pointing to the proper version of Python. So inside of a notebook you can simply do: > > > `!pip install <pkg name>` > > > or > > > `!conda install <pkg name> -y` > > > So just first execute a cell that contains: ``` !pip install tweepy ``` and you should be good to go.
I had the same error when I was trying to use tweepy. You can try using these commands instead: `from tweepy import OAuthHandler from tweepy import API from tweepy import Cursor`
7,522
46,509,906
This code is supposed to find the biggest number and then print out how many of them there are, but for some reason the if statement below the comment doesn't work. ``` #!/bin/python3 import sys def birthdayCakeCandles(n, ar): j=1 b=0 f=0 maxn=0 for f in range(0,n-1,1): b=ar[f] # if maxn==b: j=j+1 elif b>maxn: maxn=b print(j) n = 4 ar = 3, 1, 2, 3 print(birthdayCakeCandles(n, ar)) ``` and when I run this code the output is: ``` 1 1 1 None ``` So the final answer is supposed to be 2 instead of None.
2017/10/01
[ "https://Stackoverflow.com/questions/46509906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6659439/" ]
I found the **almost** perfect working answer in [Levi Fuller's blog](https://medium.com/@levifuller/how-to-deploy-an-angular-cli-application-built-on-asp-net-1fa03c0ca365). You can get it working with a minor change: unlike what Levi states, you really need **only a single** npm task 1. set up the **npm task** * by clicking the three dots button -> set the **Working folder with package.json** to your folder that contains the package.json file. 2. set up the **Azure App Service Deploy** * by clicking the three dots button -> set the **Package or folder** to your folder that contains the .csproj file. [![build definition](https://i.stack.imgur.com/ButsI.png)](https://i.stack.imgur.com/ButsI.png)
There isn’t the build template that you use directly, the template is convenient to use, you need to modify it per to detail requirement. Refer to these steps: 1. Go to build page of team project (e.g. `https://XXX.visualstudio.com/[teamproject]/_build`) 2. Click +New button to create a build definition with ASP.NET Core template [![enter image description here](https://i.stack.imgur.com/aLvJp.jpg)](https://i.stack.imgur.com/aLvJp.jpg) [![enter image description here](https://i.stack.imgur.com/aKifI.jpg)](https://i.stack.imgur.com/aKifI.jpg) 3. Add npm install task before .NET Core Restore task (Command: `install`; Working folder with package.json:[package.json folder path]) [![enter image description here](https://i.stack.imgur.com/oPLtx.jpg)](https://i.stack.imgur.com/oPLtx.jpg) 4. (optional) Delete/disable .NET Core Test task if you don’t need 5. Add Azure App Service Deploy task at the end (Package or folder: `$(build.artifactstagingdirectory)/**/*.zip`; Check `Publish using Web Deploy` option) [![enter image description here](https://i.stack.imgur.com/QAX94.jpg)](https://i.stack.imgur.com/QAX94.jpg) Note: you can move step 4 to release and link this build to release (change package or folder to `$(System.DefaultWorkingDirectory)/**/*.zip`).
7,523
31,234,170
So I am a bit new to Java and Eclipse; I am more used to Python. Using IDLE in Python I am able to run my program from its file and then continue to use the variables. For example, if I have all the code written out defining a function, in IDLE I can just write it there. ``` x = foo() print x ``` However, in Java it seems like I need to put that in the main method. ``` public static void main(String[] args) ``` This is fine if I already know everything I want to do with a function, but what if I am running code that took a day to run, and I forgot to write the output to a file? In Python, I can just wait for it to finish running and then write it to a file in IDLE. In Java I need to tell it to write to a file in the main method and then re-run it. Is it possible to set up Eclipse to work like IDLE, where you don't need to rerun a program if you want to do new things with the variables already calculated? I have never used NetBeans, but would this type of thing be easier to do in NetBeans?
2015/07/05
[ "https://Stackoverflow.com/questions/31234170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4967646/" ]
Java is a compiled language; Python is a scripting language. You could use Scala or Jython (or another scripting language) to get the behavior you want. It's also possible to use a [*Scrapbook page*](http://help.eclipse.org/luna/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Ftasks%2Ftask-create_scrapbook_page.htm) in Eclipse, but that isn't a true [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop) (which is what you seem to be asking for).
Your environment sounds like a Python-based environment. In that case you are storing the variables in your IDE's runtime variable pool; that's why you can later go back and act on a variable you set up. In Eclipse, when you run your program you are launching a new instance of Java that is disconnected (from the point of view of variables in the memory heap) from the instance of Java that is running Eclipse. Once that copy of Java running your program exits, all its memory (including that holding variables) is returned to the system you are running on.
7,524
399,991
When I pass the options in the program (a computational biology experiment) I usually pass them through a .py file. So I have this .py file that reads like: ``` starting_length=9 starting_cell_size=1000 LengthofExperiments=5000000 ``` Then I execute the file and get the data. Since the program is all on my machine and no one else has access to it, it is secure in a trivial way. I can also write a similar file very easily: ``` def writeoptions(directory): options="" options+="starting_length=%s%s"%(starting_length,os.linesep) options+="starting_cell_size=%s%s"%(starting_cell_size,os.linesep) options+="LengthofExperiments=%s%s"%(LengthofExperiments,os.linesep) ... open("%s%soptions.py"%(directory,os.sep),'w').write(options) ``` I want to pass a function as one of the parameters: ``` starting_length=9 starting_cell_size=1000 LengthofExperiments=5000000 def pippo(a,b): return a+b functionoperator=pippo ``` And of course in the real experiment the function pippo will be much more complex. And different from experiment to experiment. But what I am unable to do is to write the function automatically. In short I don't know how to generalise the writeoptions function to keep on writing the options, if one of the options is a function. I could of course copy the original file, but this is inelegant, inefficient (because it contains a lot of extra options that are not being used), and generally does not solve the question. How do you get python to write down the code of a function, as it writes down the value of a variable?
2008/12/30
[ "https://Stackoverflow.com/questions/399991", "https://Stackoverflow.com", "https://Stackoverflow.com/users/46634/" ]
``` vinko@mithril$ more a.py def foo(a): print a vinko@mithril$ more b.py import a import inspect a.foo(89) print inspect.getsource(a.foo) vinko@mithril$ python b.py 89 def foo(a): print a ```
Are you asking about this? ``` def writeoptions(directory): options="" options+="starting_length=%s%s"%(starting_length,os.linesep) options+="starting_cell_size=%s%s"%(starting_cell_size,os.linesep) options+="LengthofExperiments=%s%s"%(LengthofExperiments,os.linesep) options+="def pippo(a,b):%s" % ( os.linesep, ) options+=" '''Some version of pippo'''%s" % ( os.linesep, ) options+=" return 2*a+b%s" % ( os.linesep, ) open("%s%soptions.py"%(directory,os.sep),'w').write(options) ``` Or something else?
7,525
51,007,893
I have a homework assignment to draw a spiral (from the inside to the outside) in Python with turtle, but I can't think of a way to do it beyond what I tried; it needs to look like this: [![enter image description here](https://i.stack.imgur.com/UQCW8.gif)](https://i.stack.imgur.com/UQCW8.gif) I tried the following, but it's not working properly. ``` import turtle turtle.shape('turtle') d = 20 # Distance a = 1 # StartingAngle x = 200 # Num of loops for i in range(x): turtle.left(a) turtle.forward(d) a = a + 5 ```
2018/06/24
[ "https://Stackoverflow.com/questions/51007893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7528938/" ]
As described in this link : [Create Deep Links](https://developer.android.com/training/app-links/deep-linking) You should add this in your manifest : ``` <activity android:name="com.example.android.GizmosActivity" android:label="@string/title_gizmos" > <intent-filter android:label="@string/filter_view_http_gizmos"> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <!-- Accepts URIs that begin with "http://www.example.com/gizmos” --> <data android:scheme="http" android:host="www.example.com" android:pathPrefix="/gizmos" /> <!-- note that the leading "/" is required for pathPrefix--> </intent-filter> </activity> ``` and then share a link that starts with "<http://www.example.com/gizmos>" (fake url) with another person.
Yes, it is possible by adding state params to your URL and then redirecting to the Play Store from your server-side code. E.g. your app generates a URL which points to some user profile - <https://www.yourSocialNewtwork.com/profile/sandeshDahake> Step 1 - Create a deep link in your app with an intent filter. This handles the case where your app is already installed and passes the params to it - ``` <data android:scheme="https" android:host="www.yourSocialNewtwork.com" android:pathPrefix="/profile" /> ``` Step 2 - If the app is not installed you can redirect to the Play Store from your server side <https://developer.android.com/google/play/installreferrer/library#java> <https://play.google.com/store/apps/details?id=com.profile.yourHierarchy>&referrer=sandeshDahake I have heard there are some open-source projects which handle this scenario, but I have not explored them yet. Basically the trick here is a proper URL structure and deep linking. <https://developer.android.com/training/app-links/deep-linking>
7,532
70,431,040
I think the title gives the general idea of what I am looking for, but to be more specific I will give an example with code. So let's say I have a Python class with a few required positional variables that also takes an arbitrary number of keyword arguments. The class has many data members, and some of them will be defined by the required positional variables, but most are keyword-argument variables where the program has default values for these variables, but if the user uses a keyword argument this will override the defaults. I am looking for the most "pythonic" way to initialize a class of this type. I have two ideas for how to do this, but each of them feels unsatisfying, as if there is a more Pythonic way I am missing. ``` #First Option class SampleOne: def __init__(self, pos1, pos2, **kwargs): def do_defaults(): self.kwarg1 = default_kwarg1 self.kwarg2 = default_kwarg2 self.kwarg3 = default_kwarg3 def do_given(): for variable, value in kwargs.items(): setattr(self, variable, value) self.pos1 = pos1 self.pos2 = pos2 do_defaults() do_given() ``` Or ``` #Second Option class SampleTwo: def __init__(self, pos1, pos2, **kwargs): self.pos1 = pos1 self.pos2 = pos2 self.kwarg1 = kwargs['kwarg1'] if 'kwarg1' in kwargs else default_kwarg1 self.kwarg2 = kwargs['kwarg2'] if 'kwarg2' in kwargs else default_kwarg2 self.kwarg3 = kwargs['kwarg3'] if 'kwarg3' in kwargs else default_kwarg3 ``` I don't love the first option because it seems wasteful to set a bunch of default data members if a bunch are going to be changed, especially if there are many data members. I don't love the second option because it looks unnecessarily busy and less readable in my opinion - I like the separation of the default values from the user-defined values and think it will make my code easier to read and change. Also, I am using `**kwargs` instead of keyword arguments with default values because I am still in the early phase of the development of this codebase so the member variables needed are subject to change, but also because there are going to be a lot of member variables and it will make the function signature very ugly to have all of those parameters. Apologies if my question is a bit long-winded; this is one of my first times asking questions on StackOverflow and I wanted to make sure I gave enough detail. Also, if it makes a difference, my code needs to work in Python 3.8 and later.
2021/12/21
[ "https://Stackoverflow.com/questions/70431040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16043632/" ]
It would be easier to find the answer if you explained the purpose for the requirement. From my experience one common task is to distinguish `authorization_code` and `client_credentials` flow use for the same client, but that's easy: the second one does not contain user information (`sub` and `sid` claims). Also don't forget about restricted auth flow combinations in Identityserver (for instance you can't allow both `implicit` and `authorization_code` flow for the same client), so one client is usually bound to the only user interactive flow. Finally, the auth flow is generally not about API. It's only about interaction among IdP and Client. API usually use scopes as general information about clients, so... when you have two clients -- one with `implicit` grant and the other with `authorization_code`, you can distinguish which one is in use by setting different scopes. Isn't that enough? A [check for a particular grant type](https://github.com/dfrunet/IdentityServer4.Quickstart.UI/blob/9bb00961aa22c2d105b2259541acc065415b7d9b/Extensions/ExtendedClaimsService.cs#L159) could be performed in Identityserver the following way: ```cs public class ExtendedClaimsService : DefaultClaimsService{ public override async Task<IEnumerable<Claim>> GetAccessTokenClaimsAsync( ClaimsPrincipal subject, ResourceValidationResult resourceResult, ValidatedRequest request) { var outputClaims = (await base.GetAccessTokenClaimsAsync(subject, resourceResult, request)).ToList(); //if (request.Secret.Type == "NoSecret") //this is more or less the same if ((request as ValidatedTokenRequest)?.GrantType != "client_credentials") { //filter out server-side-only scopes here //or add any custom claim you like } return outputClaims; } } ``` Registration: `services.AddTransient<IClaimsService, ExtendedClaimsService>();` after `services.AddIdentityServer()` in your `Startup`
One option is to have different client definitions for the different flows and then use a [ClientClaim](https://docs.duendesoftware.com/identityserver/v5/fundamentals/claims/#client-claims) to indicate what type of client it is: To set the client claims in the client definition just set it like: ``` ClientClaimsPrefix="", AlwaysSendClientClaims=true, Claims = new List<ClientClaim>() { new ClientClaim("role","admin"), new ClientClaim("name","joe"), new ClientClaim("admin","yes"), new ClientClaim("employmentid","employee"), new ClientClaim("employeetype","yes"), new ClientClaim("creditlimit","100000") } ```
7,533
10,062,967
Is there an idiomatic way of encoding and decoding a string in Clojure as hexadecimal? Example from Python: ```python 'Clojure'.encode('hex') # ⇒ '436c6f6a757265' '436c6f6a757265'.decode('hex') # ⇒ 'Clojure' ``` To show some effort on my part: ```clj (defn hexify [s] (apply str (map #(format "%02x" (int %)) s))) (defn unhexify [hex] (apply str (map (fn [[x y]] (char (Integer/parseInt (str x y) 16))) (partition 2 hex)))) (hexify "Clojure") ;; ⇒ "436c6f6a757265" (unhexify "436c6f6a757265") ;; ⇒ "Clojure" ```
2012/04/08
[ "https://Stackoverflow.com/questions/10062967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165806/" ]
Your implementation(s) don't work for non-ascii characters, ```clj (defn hexify [s] (apply str (map #(format "%02x" (int %)) s))) (defn unhexify [hex] (apply str (map (fn [[x y]] (char (Integer/parseInt (str x y) 16))) (partition 2 hex)))) (= "\u2195" (unhexify(hexify "\u2195"))) false ; should be true ``` To overcome this you need to serialize the bytes of the string using the required character encoding, which can be multi-byte per character. There are a few 'issues' with this. * Remember that all numeric types are signed in the JVM. * There is no unsigned-byte. In idiomatic java you would use the low byte of an integer and mask it like this wherever you used it. ```java int intValue = 0x80; byte byteValue = (byte)(intValue & 0xff); -- use only low byte System.out.println("int:\t" + intValue); System.out.println("byte:\t" + byteValue); -- output: -- int: 128 -- byte: -128 ``` clojure has `(unchecked-byte)` to effectively do the same. For example, using UTF-8 you can do this: ```clj (defn hexify [s] (apply str (map #(format "%02x" %) (.getBytes s "UTF-8")))) (defn unhexify [s] (let [bytes (into-array Byte/TYPE (map (fn [[x y]] (unchecked-byte (Integer/parseInt (str x y) 16))) (partition 2 s)))] (String. bytes "UTF-8"))) ; with the above implementation: ;=> (hexify "\u2195") "e28695" ;=> (unhexify "e28695") "↕" ;=> (= "\u2195" (unhexify (hexify "\u2195"))) true ```
I believe your `unhexify` function is as idiomatic as it can be. However, `hexify` can be written in a simpler way: ``` (defn hexify [s] (format "%x" (new java.math.BigInteger (.getBytes s)))) ```
7,534
57,318,921
I'm trying to use xtensor-python example found [here](https://xtensor-python.readthedocs.io/en/latest/). I have xtensor-python, pybind11, and xtensor installed and also created a CMakeLists.txt. from /build I ran. $ cmake .. $ make and it builds without errors. My CMakeLists.txt looks like this. ``` cmake_minimum_required(VERSION 3.15) project(P3) find_package(xtensor-python REQUIRED) find_package(pybind11 REQUIRED) find_package(xtensor REQUIRED) ``` My example.cpp file. ``` #include <numeric> // Standard library import for std::accumulate #include "pybind11/pybind11.h" // Pybind11 import to define Python bindings #include "xtensor/xmath.hpp" // xtensor import for the C++ universal functions #define FORCE_IMPORT_ARRAY // numpy C api loading #include "xtensor-python/pyarray.hpp" // Numpy bindings double sum_of_sines(xt::pyarray<double>& m) { auto sines = xt::sin(m); // sines does not actually hold values. return std::accumulate(sines.cbegin(), sines.cend(), 0.0); } PYBIND11_MODULE(ex3, m) { xt::import_numpy(); m.doc() = "Test module for xtensor python bindings"; m.def("sum_of_sines", sum_of_sines, "Sum the sines of the input values"); } ``` My python file. ``` import numpy as np import example as ext a = np.arange(15).reshape(3, 5) s = ext.sum_of_sines(v) s ``` But my python file can't import my example.cpp file. ``` File "examplepyth.py", line 2, in <module> import example as ext ImportError: No module named 'example' ``` I am new to cmake. I would like to know how to set this project up properly with CMakeLists.txt
2019/08/02
[ "https://Stackoverflow.com/questions/57318921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11395552/" ]
This is actually really simple and done in a few lines ```c# public GameObject prefab; public float radius; public float amount; // Start is called before the first frame update private void Start() { var angle = 0f; for (var i = 0; i <= amount; i++) { var y = Mathf.Sin(Mathf.Deg2Rad * angle) * radius; var z = - Mathf.Cos(Mathf.Deg2Rad * angle) * radius; var obj = Instantiate(prefab, transform); obj.transform.localPosition = new Vector3(0, y, z); obj.transform.localRotation = Quaternion.Euler(angle, 0, 0); angle += (360f / amount); } } // just for demo private void Update() { transform.localRotation = Quaternion.Euler(Time.time * 45, 0, 0); } ``` --- [![enter image description here](https://i.stack.imgur.com/S2P0Y.png)](https://i.stack.imgur.com/S2P0Y.png) [![enter image description here](https://i.stack.imgur.com/0jTCe.gif)](https://i.stack.imgur.com/0jTCe.gif)
Ok, so this is more of a math problem then anything else really. Now assuming that you are not a total beginner with Unity I will not write you code for your solution, but just generaly describe it. First thing you need to be inputed is radius, this will determine how far away from the center of the circle should your items be. You can just take the scale of the circle object and multiply it by some variable value. Then you also need the number of ticks that you wish to place around the circle as a variable. In your case this can be 27. Then divide 360 by that variable and you should get a segment of the circle for each item. Last thing you need to do is put a for loop for each tick on the circle and in there spawn an item on point where you get the point position by taking a vector from the center of the circle to the top point and multiply it by an Euler that has as many degrees as the segment size that we got earlier. For the rotation of the object, you just need to subtract it with the same segment size and thats basically it. Hope this helps. If you need some code clarification I can provide it later today.
7,538
48,825,312
I am new to python, I have written test cases for my class , I am using `python -m pytest --cov=azuread_api` to get code coverage. I am getting coverage on the console as [![enter image description here](https://i.stack.imgur.com/iKRNP.png)](https://i.stack.imgur.com/iKRNP.png) How do I get which lines are missed by test for e.g in aadadapter.py file Thanks,
2018/02/16
[ "https://Stackoverflow.com/questions/48825312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3556028/" ]
If you check the [documentation for reporting](https://pytest-cov.readthedocs.io/en/latest/reporting.html) in pytest-cov, you can see how to manipulate the report and generate extra versions. For example, by adding the option `--cov-report term-missing` you'll get the missing lines printed in the terminal. A more user-friendly option would be to generate an HTML report by using the `--cov-report html` option. Then you can navigate to the generated folder (`htmlcov` by default), open `index.html` in your browser, and browse your source code with the missing lines highlighted.
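To make that concrete, a plausible invocation for the package named in the question (`azuread_api` is taken from the question; adjust the name and report formats to taste) would be:

```
python -m pytest --cov=azuread_api --cov-report term-missing --cov-report html
```

This prints the missing line numbers in the terminal and also writes the browsable `htmlcov/index.html` report.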
In addition to the [answer from Ignacio](https://stackoverflow.com/a/48825483/149900), one can also set [`show_missing = true`](https://coverage.readthedocs.io/en/latest/config.html#config-report-show-missing) in `.coveragerc`, as pytest-cov reads that config file as well.
7,539
30,987,825
I have a list of lists in python. I want to group similar lists together. That is, if first three elements of each list are the same then those three lists should go in one group. For eg ``` [["a", "b", "c", 1, 2], ["d", "f", "g", 8, 9], ["a", "b", "c", 3, 4], ["d","f", "g", 3, 4], ["a", "b", "c", 5, 6]] ``` I want this to look like ``` [[["a", "b", "c", 1, 2], ["a", "b", "c", 5, 6], ["a", "b", "c", 3, 4]], [["d","f", "g", 3, 4], ["d", "f", "g", 8, 9]]] ``` I could do this by running an iterator and manually comparing each element of two consecutive lists and then based on the no of elements within those lists that were same I can decide to group them together. But i was just wondering if there is any other way or a pythonic way to do this.
2015/06/22
[ "https://Stackoverflow.com/questions/30987825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4378672/" ]
You can use [`itertools.groupby`](https://docs.python.org/3.4/library/itertools.html#itertools.groupby) : ``` >>> A=[["a", "b", "c", 1, 2], ... ["d", "f", "g", 8, 9], ... ["a", "b", "c", 3, 4], ... ["d","f", "g", 3, 4], ... ["a", "b", "c", 5, 6]] >>> from itertools import groupby >>> from operator import itemgetter >>> [list(g) for _,g in groupby(sorted(A),itemgetter(0,1,2))] [[['a', 'b', 'c', 1, 2], ['a', 'b', 'c', 3, 4], ['a', 'b', 'c', 5, 6]], [['d', 'f', 'g', 3, 4], ['d', 'f', 'g', 8, 9]]] ```
You don't need to sort, you can group in a dict using a tuple of the first three elements from each list as the key: ``` from collections import OrderedDict l=[ ["a", "b", "c", 1, 2], ["d", "f", "g", 8, 9], ["a", "b", "c", 3, 4], ["d","f", "g", 3, 4], ["a", "b", "c", 5, 6] ] od = OrderedDict() for sub in l: k = tuple(sub[:3]) od.setdefault(k,[]).append(sub) from pprint import pprint as pp pp(od.values()) [[['a', 'b', 'c', 1, 2], ['a', 'b', 'c', 3, 4], ['a', 'b', 'c', 5, 6]], [['d', 'f', 'g', 8, 9], ['d', 'f', 'g', 3, 4]]] ``` Which is `O(n)` as opposed to `O(n log n)`. If you don't care about order use a defaultdict: ``` from collections import defaultdict od = defaultdict(list) for sub in l: a, b, c, *_ = sub # python3 k = a,b,c od[k].append(sub) from pprint import pprint as pp pp(list(od.values())) [[['a', 'b', 'c', 1, 2], ['a', 'b', 'c', 3, 4], ['a', 'b', 'c', 5, 6]], [['d', 'f', 'g', 8, 9], ['d', 'f', 'g', 3, 4]]] ```
7,540
24,272,228
I am using ArgParse for giving commandline parameters in Python. ``` import argparse parser = argparse.ArgumentParser() parser.add_argument("--quality", type=int,help="enter some quality limit") args = parser.parse_args() qual=args.quality if args.quality: qual=0 $ python a.py --quality a.py: error: argument --quality: expected one argument ``` In case of no value provided,I want to use it as 0,I also have tried to put it as "default=0" in parser.add\_argument,and also with an if statement.But,I get the error above. Basically,I want to use it as a flag and give a default value in case no value is provided.
2014/06/17
[ "https://Stackoverflow.com/questions/24272228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2596951/" ]
Use `nargs='?'` to allow `--quality` to be used with 0 or 1 value supplied. Use `const=0` to handle `script.py --quality` without a value supplied. Use `default=0` to handle bare calls to `script.py` (without `--quality` supplied). ``` import argparse parser = argparse.ArgumentParser() parser.add_argument("--quality", type=int, help="enter some quality limit", nargs='?', default=0, const=0) args = parser.parse_args() print(args) ``` behaves like this: ``` % script.py Namespace(quality=0) % script.py --quality Namespace(quality=0) % script.py --quality 1 Namespace(quality=1) ```
Have a loot at <https://docs.python.org/2/howto/argparse.html#id1>. Simply add the argument `default` to your add\_argument call. `parser.add_argument("--quality", type=int, default=0, nargs='?', help="enter some quality limit")` If you want to use `--quality` as a flag you should use `action="store_true"`. This will set `args.quality` to `True` if `--quality` is used.
7,541
4,858,733
Python has this magic [`__call__`](http://docs.python.org/reference/datamodel.html#object.__call__) method that gets called when the object is called like a function. Does C# support something similar? --- Specifically, I was hoping for a way to use delegates and objects interchangeably. Trying to design an API where a user can pass in a list of functions, but sometimes those functions need some initial params, in which case they'd use one of those callable objects instead.
2011/02/01
[ "https://Stackoverflow.com/questions/4858733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/65387/" ]
Sure, if you inherit from [DynamicObject](http://msdn.microsoft.com/en-us/library/system.dynamic.dynamicobject.aspx). I think you're after [TryInvoke](http://msdn.microsoft.com/en-us/library/system.dynamic.dynamicobject.tryinvoke.aspx) which executes on `obj(...)`, but there are several other method you can override to handle casting, index access (`obj[idx]`), method invocations, property invocations, etc. ``` using System; using System.Diagnostics; using System.Dynamic; using System.Linq; using System.Text; namespace ConsoleApplication { public static class ConsoleApp { public static void Main() { dynamic x = new MyDynamicObject(); var result = x("awe", "some"); Debug.Assert(result == "awesome"); } } public class MyDynamicObject : DynamicObject { public override Boolean TryInvoke(InvokeBinder binder, Object[] args, out Object result) { result = args.Aggregate(new StringBuilder(), (builder, item) => builder.Append(item), builder => builder.ToString()); return true; } } } ```
I bow to Simon Svensson - who shows a way to do it if you inherit from DynamicObject - but from a more straightforward, non-dynamic point of view: Sorry, but no - however, there are types of objects that can be called - delegates, for instance. ``` Func<int, int> myDelegate = x=>x*2; int four = myDelegate(2); ``` There is a default property though - it has to have at least one parameter and its access looks like an array access: ``` class Test1 { public int this[int i, int j] { get { return i * j; } } } ``` Calling ``` Test1 test1 = new Test1(); int six = test1[2, 3]; ``` Then you can do some really silly stuff with delegates like this: ``` class Test2 // I am not saying that this is a good idea. { private int MyFunc(int z, int i) { return z * i; } public Func<int, int> this[int i] { get { return x => MyFunc(x, i); } } } ``` Then calling it looks weird like this: ``` Test2 test = new Test2(); test[2](2); // this is quite silly - don't use this..... ```
7,544
21,377,656
Why is the self.year twice? I am having trouble to find out the logic of the line. Can some one help me with this? ``` return (self.year and self.year == date.year or True) ``` I am going through <http://www.openp2p.com/pub/a/python/2004/12/02/tdd_pyunit.html> and encountered the line ... And of course I have no problem understanding and, or, nor, xor, xnor, or any boolean expression. But I am confused by the way it has been used here.. :-)
2014/01/27
[ "https://Stackoverflow.com/questions/21377656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2770850/" ]
Assuming these are parallel arrays (the first entry in `eenhedennamen` uses the first entry in `value`), you can loop through with jQuery's [`$.each`](http://api.jquery.com/jQuery.each), which gives you the index and the entry for each entry, and build the object from the loop. ``` var obj = {}; $.each(eenhedennamen, function(index, entry) { obj[entry] = value[index]; }); ``` This works because in JavaScript, you can access properties using either dot notation and a property name literal (`obj.foo = "bar"`), or *bracketed* notation with a *string* property name (`obj["foo"] = "bar"`). In the latter case, the string can be the result of any expression. So in the above, we're using `entry` as the property name, which will be each name in `eenhedennamen`. Then of course, we get the corresponding value from `value` using the `index`.
``` var eenhedennamen = [ 'unit1', 'unit2', 'unit3' ]; var value = [ 1, 2, 3 ]; var z = new Array(); for ( var i = 0; i < eenhedennamen.length; i++) { z[eenhedennamen[i]]=value[i]; } ``` The previous answer is better.
7,546
33,106,871
I have a batch script that runs a Python script continuously in a loop. ``` :start python log_capture.py > log.txt goto start ``` I want to print the output of each iteration to a .txt file. I am using the following command to get the output from log_capture.py into a log.txt file. ``` python log_capture.py >log.txt ``` But in the next loop, the logs from the previous iteration are overwritten. How can I prevent log.txt from being overwritten, or let's say save the output from each iteration in a different log.txt file?
2015/10/13
[ "https://Stackoverflow.com/questions/33106871", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5138377/" ]
I have used this service in the past and I noticed today that text messages sent were not received. Looking into this a bit further it seems something happened due to folks using this service to spam... and it's not working at present. Not sure what the future holds... See: [issue listed on textbelt](https://github.com/typpo/textbelt/issues/76) Looking for a solution still.... Joel Parke
2-563-567-890 doesn't look like a valid US phone number, so I would double-check that. There is also an international endpoint, `/intl`, but it tends to be less reliable.
7,548
20,307,590
I am trying to make an HTTP POST request using javascript and connecting it to an onclick event. For example, if someone clicks on a button then make a HTTP POST request to `http://www.example.com/?test=test1&test2=test2`. It just needs to hit the url and can close the connection. I've messed around in python and got this to work. ``` import urllib2 def hitURL(): urllib2.urlopen("http://www.example.com/?test=test1&test2=test2").close hitURL() ``` I have read about some ways to make HTTP requests using JavaScript in this thread [JavaScript post request like a form submit](https://stackoverflow.com/questions/133925/javascript-post-request-like-a-form-submit), but think it's overkill for what I need to do. Is it possible to just say something like this: ``` <button onclick=POST(http://www.example.com/?test=test1&test2=test2)>hello</button> Or build it in to an event listener. ``` I know that is not anything real but I am just looking for a simple solution that non-technical people can use, follow directions, and implement. I honestly doubt there is something that simple out there but still any recommendations would be appreciated.
2013/12/01
[ "https://Stackoverflow.com/questions/20307590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2159019/" ]
You need to use `XMLHttpRequest` (see [MDN](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest)). ``` var xhr = new XMLHttpRequest(); xhr.open("POST", url, false); xhr.onload = function() { /* handle the response here */ }; document.getElementById("your_button's_ID").addEventListener("click", function() {xhr.send(data)}, false ); ```
If you can include the JQuery library, then I'd suggest you look in to the jQuery .ajax() method (<http://api.jquery.com/jQuery.ajax/>): ``` $.ajax("http://www.example.com/", { type: 'POST', data: { test: 'test1', test2: 'test2' } }) ```
7,549
17,793,742
I want to profile python code on Widnows 7. I would like to use something a little more user friendly than the raw dump of cProfile. In that search I found the GUI RunSnakeRun, but I cannot find a way to download RunSnakeRun on Windows. Is it possible to use RunSnakeRun on windows or what other tools could I use? **Edit:** I have installed RunSnakeRun now. That's progress, thanks guys. How do you run it without a linux command line? **Edit 2:** I am using this tutorial <http://sullivanmatas.wordpress.com/2013/02/03/profiling-python-scripts-with-runsnakerun/> but I hang up at the last line with "python: can't open file 'runsnake.py': [Errno 2] No such file or directory "
2013/07/22
[ "https://Stackoverflow.com/questions/17793742", "https://Stackoverflow.com", "https://Stackoverflow.com/users/417902/" ]
The standard solution is to use cProfile (which is in the standard library) and then open the profiles in RunSnakeRun: <http://www.vrplumber.com/programming/runsnakerun/> cProfile, however, only profiles at the per-function level. If you want line-by-line profiling, try line_profiler: <https://github.com/rkern/line_profiler>
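For example, a minimal way to produce a stats file that RunSnakeRun can open (assuming your script's entry point is a function called `main`, which is just a placeholder here) is:

```python
import cProfile

def main():
    # ... the code you want to profile ...
    pass

# run main() under the profiler and dump the statistics to a file
cProfile.run("main()", "profile.pfl")
```

You can then point the RunSnakeRun GUI at `profile.pfl`.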
I installed runsnake following these [installation instructions](http://www.vrplumber.com/programming/runsnakerun/). The step `python runsnake.py profile.pfl` failed because the installation step (`easy_install SquareMap RunSnakeRun`) did not create a file `runsnake.py`. For me (on Ubuntu), the installation step created an executable at `/usr/local/bin/runsnake`. I figured this out by reading the console output from the installation step. It may be in a different place on Windows, but it should be printed in the output of `easy_install`. To read a profile file, I can execute `/usr/local/bin/runsnake profile.pfl`.
7,550
72,452,208
I'm trying to make a publisher for a Ublox GPS sensor, but I'm getting this ROS error: ubuntu@fieldrover:~/field-rover-gps/gps/gps_pkg$ cd ~/field-rover-gps/gps/gps_pkg/ && colcon build && . install/setup.bash && ros2 run gps_pkg gps Starting >>> gps_pkg Finished <<< gps_pkg [2.98s] Summary: 1 package finished [3.49s] Traceback (most recent call last): File "/opt/ros/galactic/lib/python3.8/site-packages/rosidl_generator_py/import_type_support_impl.py", line 46, in import_type_support return importlib.import_module(module_name, package=pkg_name) File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1014, in _gcd_import File "", line 991, in _find_and_load File "", line 975, in _find_and_load_unlocked File "", line 657, in _load_unlocked File "", line 556, in module_from_spec File "", line 1166, in create_module File "", line 219, in _call_with_frames_removed ImportError: /opt/ros/galactic/lib/libgeometry_msgs__rosidl_generator_c.so: undefined symbol: std_msgs__msg__Header__copy During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/ubuntu/field-rover-gps/gps/gps_pkg/install/gps_pkg/lib/gps_pkg/gps", line 33, in sys.exit(load_entry_point('gps-pkg==0.0.0', 'console_scripts', 'gps')()) File "/home/ubuntu/field-rover-gps/gps/gps_pkg/install/gps_pkg/lib/python3.8/site-packages/gps_pkg/gps.py", line 49, in main gps_node = GpsNode() File "/home/ubuntu/field-rover-gps/gps/gps_pkg/install/gps_pkg/lib/python3.8/site-packages/gps_pkg/gps.py", line 17, in __init__ self.publisher_ = self.create_publisher(NavSatFix, 'gps/fix', 10) File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/node.py", line 1282, in create_publisher check_is_valid_msg_type(msg_type) File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/type_support.py", line 35, in check_is_valid_msg_type check_for_type_support(msg_type) File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/type_support.py", line 29, in check_for_type_support msg_or_srv_type.__class__.__import_type_support__() File "/opt/ros/galactic/lib/python3.8/site-packages/sensor_msgs/msg/_nav_sat_fix.py", line 34, in __import_type_support__ module = import_type_support('sensor_msgs') File "/opt/ros/galactic/lib/python3.8/site-packages/rosidl_generator_py/import_type_support_impl.py", line 48, in import_type_support raise UnsupportedTypeSupport(pkg_name) rosidl_generator_py.import_type_support_impl.UnsupportedTypeSupport: Could not import 'rosidl_typesupport_c' for package 'sensor_msgs' It seems to have an issue with NavSatFix. I've tested other sensor_msgs types like Image in the same package, and that works fine. Here's the code I tried running.
``` import rclpy import os from rclpy.node import Node from sensor_msgs.msg import NavSatFix from sensor_msgs.msg import NavSatStatus from std_msgs.msg import Header import serial from ublox_gps import UbloxGps port = serial.Serial('/dev/ttyACM0', baudrate=38400, timeout=1) gps = UbloxGps(port) class GpsNode(Node): def __init__(self): super().__init__('gps_node') self.publisher_ = self.create_publisher(NavSatFix, 'gps/fix', 10) timer_period = 0.5 # seconds self.timer = self.create_timer(timer_period, self.timer_callback) def timer_callback(self): msg = NavSatFix() msg.header = Header() msg.header.stamp = self.get_clock().now().to_msg() msg.header.frame_id = "gps" msg.status.status = NavSatStatus.STATUS_FIX msg.status.service = NavSatStatus.SERVICE_GPS geo = gps.geo_coords() # Position in degrees. msg.latitude = geo.lat msg.longitude = geo.lon # Altitude in metres. #msg.altitude = 1.15 msg.position_covariance[0] = 0 msg.position_covariance[4] = 0 msg.position_covariance[8] = 0 msg.position_covariance_type = NavSatFix.COVARIANCE_TYPE_DIAGONAL_KNOWN self.publisher_.publish(msg) self.best_pos_a = None def main(args=None): rclpy.init(args=args) gps_node = GpsNode() rclpy.spin(gps_node) # Destroy the node explicitly # (optional - otherwise it will be done automatically # when the garbage collector destroys the node object) gps_node.destroy_node() rclpy.shutdown() if __name__ == '__main__': main() ```
2022/05/31
[ "https://Stackoverflow.com/questions/72452208", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12497264/" ]
As has already been pointed out both in the comments and the answer by *@AbhinavMathur*, in order to improve performance you need to implement a [*Doubly linked list*](https://en.wikipedia.org/wiki/Doubly_linked_list) data structure. Note that it's mandatory to create your *own implementation* that will maintain a reference to the *current node*. Attempting to use an implementation built into the JDK in place of the `items` array will not buy you anything, because the advantage of fast deletion will be nullified by the cost of iteration (in order to reach the element at position `n`, `LinkedList` needs to crawl through the `n` elements starting from the *head*, and this operation has linear time complexity). Methods `left()`, `right()` and `position()` will have the following outcome: * `left()` - when the *previous* node (denoted as `prev` in the code) associated with `current` is not `null`, and in turn its *previous node* exists, the *current node* will be dereferenced (i.e. the next and previous nodes associated with the `current` node will be linked with each other), and the variable `current` will be assigned the `prev` of the *previous node*, i.e. `current.prev.prev`. Time complexity **O(1)**. * `right()` - when the *next* node (denoted as `next` in the code) associated with `current` is not `null`, and in turn its *next node* exists, the *current node* will be dereferenced in the way described above, and the variable `current` will be assigned the `next` of the *next node*, i.e. `current.next.next`. Time complexity **O(1)**. * `position()` - will return the value of the `current` node. Time complexity **O(1)**. That's how it might look: ``` public class MyClass { private Node current; // a replacement for both position and items fields public MyClass(int n, int position) { Node current = new Node(0, null, null); // initializing the head node if (position == 0) { this.current = current; } for (int i = 1; i < n; i++) { // initializing the rest of the linked list Node nextNode = new Node(i, current, null); current.setNext(nextNode); current = nextNode; if (position == i) { this.current = current; } } } public void left() { // removes the current node and sets the current to the node 2 positions to the left (`prev` of the `prev` node) if (current.prev == null || current.prev.prev == null) { return; } Node prev = current.prev; Node next = current.next; prev.setNext(next); next.setPrev(prev); this.current = prev.prev; } public void right() { // removes the current node and sets the current to the node 2 positions to the right (`next` of the `next` node) if (current.next == null || current.next.next == null) { return; } Node prev = current.prev; Node next = current.next; prev.setNext(next); next.setPrev(prev); this.current = next.next; } public int position() { return current.getValue(); } public static class Node { private int value; private Node prev; private Node next; public Node(int value, Node prev, Node next) { this.value = value; this.prev = prev; this.next = next; } // getters and setters } } ``` [*A link to Online Demo*](https://www.jdoodle.com/ia/rFs)
Using an array, you're setting the "removed" elements as `-1`; repeatedly skipping them in each traversal causes the performance penalty. Instead of an array, use a [doubly linked list](https://www.geeksforgeeks.org/doubly-linked-list/). Each removal can be easily done in `O(1)` time, and each left/right operation would only require shifting the current pointer by 2 nodes.
7,552
52,787,147
I want to use CTR mode in DES algorithm in python by using PyCryptodome package. My code presented at the end of this post. However I got this error: "TypeError: Impossible to create a safe nonce for short block sizes". It is worth to mention that, this code work well for AES algorithm but it does not work for DES, DES3 , Blowfish and etc (with 64 block size). To my knowledge CTR mode can be applied in the 64 block cipher algorithms. ``` from Crypto.Cipher import DES from Crypto.Random import get_random_bytes data = b'My plain text' key = get_random_bytes(8) cipher = DES.new(key, DES.MODE_CTR) ct_bytes = cipher.encrypt(data) nonce = cipher.nonce cipher = DES.new(key, DES.MODE_CTR, nonce=nonce) pt = cipher.decrypt(ct_bytes) print("The message was: ", pt) ``` Thanks alot.
2018/10/12
[ "https://Stackoverflow.com/questions/52787147", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9982452/" ]
The library [defines the nonce](https://www.pycryptodome.org/en/latest/src/cipher/classic.html#ctr-mode) as that part of the counter block that is not incremented. Since the block is only 64 bits long, it is hard to securely define how long that nonce should be, given the danger of wraparound (if you encrypt a lot of blocks) or nonce reuse (if you generate the nonce randomly). You can instead decide that the nonce is not present and that the counter takes the full 64 bits, with a random initial value. ``` iv = get_random_bytes(8) cipher = DES.new(key, DES.MODE_CTR, nonce=b'', initial_value=iv) ``` Finally, I guess that this is only an exercise. DES is a very weak cipher, with a key length of only 56 bits and a block size of only 64 bits. Use AES instead.
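A minimal round-trip sketch of that suggestion (variable names mirror the question; this assumes a PyCryptodome version that accepts bytes for `initial_value`, and both sides must use the same key and initial counter value):

```python
from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes

key = get_random_bytes(8)
iv = get_random_bytes(8)   # random start for the full 64-bit counter block

cipher = DES.new(key, DES.MODE_CTR, nonce=b'', initial_value=iv)
ct_bytes = cipher.encrypt(b'My plain text')

decipher = DES.new(key, DES.MODE_CTR, nonce=b'', initial_value=iv)
print(decipher.decrypt(ct_bytes))   # b'My plain text'
```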
``` from struct import pack from Crypto.Cipher import DES from Crypto.Util import Counter from Crypto.Random import get_random_bytes plaintext = b'My plain text' # example input bs = DES.block_size plen = bs - len(plaintext) % bs padding = [plen] * plen padding = pack('b' * plen, *padding) key = get_random_bytes(8) nonce = get_random_bytes(4) ctr = Counter.new(32, prefix=nonce) cipher = DES.new(key, DES.MODE_CTR, counter=ctr) ciphertext = cipher.encrypt(plaintext + padding) ```
7,553
52,416,852
I am trying to use your project named dask-spark proposed by Matthew Rocklin. When adding the dask-spark into my project, I have a problem: Waiting for workers as shown in the following figure. Here, I run two worker nodes (dask) as dask-worker tcp://ubuntu8:8786 and tcp://ubuntu9:8786 and run two worker nodes (spark) over a standalone model, as worker-20180918112328-ubuntu8-45764 and worker-20180918112413-ubuntu9-41972 [Waiting for workers](https://i.stack.imgur.com/xaPOb.jpg) My python code is as: ``` from tpot import TPOTClassifier from sklearn.datasets import load_digits from sklearn.model_selection import train_test_split from sklearn.externals import joblib from dask.distributed import Client import distributed.joblib from sklearn.externals.joblib import parallel_backend from dask_spark import spark_to_dask from pyspark import SparkConf, SparkContext from dask_spark import dask_to_spark if __name__ == '__main__': sc = SparkContext() #connect to the cluster client = spark_to_dask(sc) digits = load_digits() X_train, X_test, y_train, y_test = train_test_split( digits.data, digits.target, train_size=0.75, test_size=0.25, ) tpot = TPOTClassifier( generations=2, population_size=10, cv=2, n_jobs=-1, random_state=0, verbosity=0 ) with joblib.parallel_backend('dask.distributed', scheduler_host=' ubuntu8:8786'): tpot.fit(X_train, y_train) print(tpot.score(X_test, y_test)) ``` I will highly appreciate it if you can help me to solve this question.
2018/09/20
[ "https://Stackoverflow.com/questions/52416852", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I have revised the program in core.py, as: ``` def spark_to_dask(sc, loop=None): """ Launch a Dask cluster from a Spark Context """ cluster = LocalCluster(n_workers=None, loop=loop, threads_per_worker=None) rdd = sc.parallelize(range(1000)) address = cluster.scheduler.address ``` Following which, running my test case over Spark with Standalone or Mesos was successful.
As noted in the README of the project, dask-spark is not mature. It was a weekend project and I do not recommend its use. Instead, I recommend launching Dask directly using one of the mechanisms described here: <http://dask.pydata.org/en/latest/setup.html> If you have to use Mesos then I'm not sure I'll be of much help, but there is a package [daskathon](https://github.com/daskos/daskathon) that runs on top of Marathon that may interest you.
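As a concrete illustration of launching Dask directly, a minimal local setup (not Spark- or Mesos-specific; the worker and thread counts are arbitrary) looks roughly like this:

```python
from dask.distributed import Client, LocalCluster

# start a local scheduler plus workers, then connect a client to it
cluster = LocalCluster(n_workers=4, threads_per_worker=2)
client = Client(cluster)
print(client)   # shows the scheduler address and the connected workers
```

With a client like this in place, the `joblib.parallel_backend('dask.distributed', ...)` block from the question can target that scheduler instead of one derived from Spark.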
7,554
59,697,566
My input is a list, say `l` It can either contain 4 or 5 elements. I want to assign it to 5 variables , say `a`, `b`, `c`, `d` and `e`. If the list has only 4 elements then the third variable (`c`) should be `None`. If python had an increment (++) operator I could do something like this. ``` l = [4 or 5 string inputs] i = -1 a = l[i++] b = l[i++] c = None if len(l) > 4: c = l[i++] d = l[i++] e = l[i++] ``` I can't seem to find an elegant way to do this apart from writing `i+=1` before each assignment. Is there a simpler pythonic way to do this?
2020/01/11
[ "https://Stackoverflow.com/questions/59697566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11316253/" ]
You're trying to use a C solution because you're unfamiliar with Python's tools. Using unpacking is much cleaner than trying to emulate `++`: ``` a, b, *c, d, e = l c = c[0] if c else None ``` The `*c` target receives a list of all elements of `l` that weren't unpacked into the other targets. If this list is nonempty, then `c` is considered true when coerced to boolean, so `c[0] if c else None` takes `c[0]` if there is a `c[0]` and `None` otherwise.
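For instance, a quick check with both input sizes (the values are made up) behaves as described:

```python
a, b, *c, d, e = ['p1', 'p2', 'p3', 'p4', 'p5']
print(c)    # ['p3']

a, b, *c, d, e = ['p1', 'p2', 'p4', 'p5']
print(c)    # [] -> so `c[0] if c else None` yields None
```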
I can't see that you really need to be incrementing at all since you have fixed positions for each variable subject to your c condition. ``` l = [4 or 5 string inputs] a = l[0] b = l[1] if len(l) > 4: c = l[2] d = l[3] e = l[4] else: c = None d = l[2] e = l[3] ```
7,555
66,593,382
To run `pytest` within GitHub Actions, I have to pass some `secrets` for Python running environ. e.g., ``` - name: Test env vars for python run: python -c 'import os;print(os.environ)' env: TEST_ENV: 'hello world' TEST_SECRET: ${{ secrets.MY_TOKEN }} ``` However, the output is as follows, ``` environ({ 'TEST_ENV': 'hello world', 'TEST_SECRET':'', ...}) ``` It seems not working due to [GitHub's redaction](https://docs.github.com/en/actions/learn-github-actions/security-hardening-for-github-actions#using-secrets). Based on @raspiduino 's answer, I did more explore on both options to import env vars. ``` name: python on: push jobs: test_env: runs-on: ubuntu-latest steps: - name: Set up Python uses: actions/setup-python@v2 with: python-version: 3.8 - name: Test env vars for python run: python -c 'import os;print(os.environ)' env: ENV_SECRET: ${{ secrets.ENV_SECRET }} REPO_SECRET: ${{ secrets.REPO_SECRET }} - name: Test inline env vars for python run: ENV_SECRET=${{ secrets.ENV_SECRET }} REPO_SECRET=${{ secrets.REPO_SECRET }} python -c 'import os;print(os.environ)' ``` Basically, both steps are in same outputs. The `REPO_SECRET` can be passed thru but not the `ENV_SECRET`. [![enter image description here](https://i.stack.imgur.com/Q9CbS.png)](https://i.stack.imgur.com/Q9CbS.png) Outputs [![enter image description here](https://i.stack.imgur.com/gjLYJ.png)](https://i.stack.imgur.com/gjLYJ.png)
2021/03/12
[ "https://Stackoverflow.com/questions/66593382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482899/" ]
There are three types of secrets within GitHub Actions. 1. Organization secrets 2. Repository secrets 3. Environment secrets To access Environment secrets, you have to [referencing an environment](https://docs.github.com/en/actions/reference/environments#referencing-an-environment) in your job. (Thanks to @riQQ) [![Actions secrets](https://i.stack.imgur.com/Q9CbS.png)](https://i.stack.imgur.com/Q9CbS.png) ``` name: python on: push jobs: test_env: environment: TEST_SECRET runs-on: ubuntu-latest steps: - name: Set up Python uses: actions/setup-python@v2 with: python-version: 3.8 - name: Test env vars for python run: python -c 'import os;print(os.environ)' env: ENV_SECRET: ${{ secrets.ENV_SECRET }} REPO_SECRET: ${{ secrets.REPO_SECRET }} ```
You can try the approach below: ``` - name: Test env vars for python run: TEST_SECRET=${{ secrets.MY_TOKEN }} python -c "import os; print(os.environ['TEST_SECRET'])" ``` This will pass `${{ secrets.MY_TOKEN }}` directly as an environment variable to the python process and not share it with other processes. Then you can use `os.environ['TEST_SECRET']` to get it. I have done this [here](https://github.com/raspiduino/raspiduino/blob/main/.github/workflows/minesweeper.yml) and [here](https://github.com/raspiduino/raspiduino/blob/main/minesweeper.py)
7,560
12,763,015
Sorry if my title is not correct. Below is an explanation of what I'm looking for. I've coded a small GUI game (let's say a snake game) in Python, and I want it to run on a Linux machine. I can run this program by just running the command "python snake.py" in the terminal. However, I want to combine all my .py files into one file, and when I click on this file, it just runs my game. I don't want to go to the shell and type "python snake.py". I mean something like a manifest .jar in Java. Could anyone help me please? If my explanation is not good enough, please let me know and I'll give some more explanation.
2012/10/06
[ "https://Stackoverflow.com/questions/12763015", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1058861/" ]
You can use [Freeze](http://wiki.python.org/moin/Freeze) for Unix, or [py2exe](http://wiki.python.org/moin/Py2Exe) for Windows. [cx\_freeze](http://cx-freeze.sourceforge.net/), [PyInstaller](http://www.pyinstaller.org/), [bbfreeze](http://www.jroller.com/alessiopace/entry/python_standalone_executables_with_bbfreeze) and [py2app](http://svn.pythonmac.org/py2app/py2app/trunk/doc/index.html) - which I have never tried - are also available for various platforms, so there are many options.
If you only want it to run on a Linux machine, using Python eggs is the simplest way. `python snake.egg` will try to execute the `__main__.py` inside the egg. Python eggs are meant to be packages, and an egg is basically a zip file with metadata files included.
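A rough sketch of the same zip-with-`__main__.py` idea using only the standard library (the `zipapp` module arrived in later Python 3 releases, so this is a more modern variant than the egg approach described above; the paths and the `snake:main` entry point are placeholders):

```python
import zipapp

# bundle the game's source directory into one runnable archive;
# "snake:main" means: call main() defined in snake.py inside the archive
zipapp.create_archive("snake_src/", target="snake.pyz", main="snake:main")
```

Afterwards `python snake.pyz` (or a double-click, if .pyz files are associated with Python) runs the game from the single file.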
7,561
57,035,263
I'm trying RSA encrypt text with JSEncrypt(javascript) and decrypt with python crypto (python3.7). Most of the time, it works. But sometimes, python cannot decrypt. ```js const encrypt = new JSEncrypt() encrypt.setPublicKey(publicKey) encrypt.encrypt(data) ``` ```py from base64 import b64decode from Crypto.Cipher import PKCS1_v1_5 as Cipher_PKCS1_v1_5 from Crypto.PublicKey import RSA crypt_text = "J9I/IdsSGZqrQ5XBTlDrze5+U3otrGEGn7J7f330/tbIpdPNwu9k5gCh35HJHuRF6tXhbOD9XbHS6dGXwRdj0KNSWa43tDQMyGp/ZSewCd4wWkqIx83YzDKnYTVc9zWYbg2iYrmR03AqtWMysl8vZDUSmQn7gNdYEJGxSUzVng==" private_key = "MIICXQIBAAKBgQClFImg7N+5ziGtjrMDwN7frootgwrLUmbE9YFBtecnjchCRjAn1wqq69XiWynEv0q3/U91N5g0nJxeMuolSM8cwdQbT3KZFwQF6vreSzDNhfEYOsFVZknILLPiJpUYm5w3Gi34UeM60iHGH9EUnmQeVwKSG0WF2nK2SCU6EyfoJwIDAQABAoGAHHk2Y/N3g2zykiUS64rQ5nQMkV0Q95D2+PH/oX3mqQPjjsrcc4K77E9RTQG8aps0IBgpJGa6chixP+44RMYSMvRIK0wqgX7s6AFIkFIIM+v+bP9pd3kKaVKTcNIjfnKJZokgAnU0QVdf0zeSNElZC+2qe1FbblsSQ6sqaFmHaMECQQC4oZO+w0q2smQh7VZbM0fSIbdZEimX/4y9KN4VYzPQZkDzQcEQX1Al2YAP8eqlzB4r7QcpRJgvUQDODhzMUtP9AkEA5ORFhPVK5slpqYP7pj2F+D2xAoL9XkgBKmhVppD/Sje/vg4yEKCTQ7fRlIzSvtwAvbDJi3ytYqXQWVdaD/Eb8wJAdYC3k8ecTCu6WHFA7Wf0hIJausA5YngMLPLObFQnTLFXErm9UlsmmgATZZJz4LLIXPJMBXKXXD20Qm9u2oa4TQJBAKxBopP6KiFfSNabDkLAoFb+znzuaZGPrNjmZjcRfh6zr+hvNHxQ7CMVbnNWO7AJT8FyD2ubK71GvnLOC2hd8sMCQQCT70B5EpFqULt7RBvCa7wwJsmwaMZLhBcfNmbry/J9SZG3FVrfYf15r0SBRug7mT2gRmH+tvt/mFafjG50VCnw" decode_data = b64decode(crypt_text) other_private_key = RSA.importKey(b64decode(private_key)) cipher = Cipher_PKCS1_v1_5.new(other_private_key) decrypt_text = cipher.decrypt(decode_data, None).decode() print(decrypt_text) ``` this is a example text that python can't decrypt, but js can decrypt it well. python throws the error: ``` File "/usr/local/lib/python3.7/site-packages/Crypto/Cipher/PKCS1_v1_5.py", line 165, in decrypt raise ValueError("Ciphertext with incorrect length.") ValueError: Ciphertext with incorrect length. ```
2019/07/15
[ "https://Stackoverflow.com/questions/57035263", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11784926/" ]
If the ciphertext is Base64-decoded, the reason becomes clearer: The ciphertext doesn't have the length of the modulus (128 byte), but only 127 byte, i.e. it isn't padded to the length of the modulus with leading `0x00` values. This ciphertext is invalid (see [RFC8017](https://www.rfc-editor.org/rfc/rfc8017#section-7.2.2), step 1) and the decryption in the Python code fails with the error message *Ciphertext with incorrect length*. In contrast, the decryption in the JavaScript code works, i.e. `JSEncrypt#decrypt` obviously adjusts the ciphertext to the length of the modulus by stealthily padding with `0x00` values. If the ciphertext was created with `JSEncrypt#encrypt`, this method doesn't seem to work properly. In detail: The modulus can be determined with: ``` openssl rsa -modulus -noout -in <path to private key> ``` and is (as hex-string): ``` A51489A0ECDFB9CE21AD8EB303C0DEDFAE8A2D830ACB5266C4F58141B5E7278DC842463027D70AAAEBD5E25B29C4BF4AB7FD4F753798349C9C5E32EA2548CF1CC1D41B4F7299170405EAFADE4B30CD85F1183AC1556649C82CB3E22695189B9C371A2DF851E33AD221C61FD1149E641E5702921B4585DA72B648253A1327E827 ``` The length is 128 byte. The Base64-decoded ciphertext is (as hex-string): ``` 27d23f21db12199aab4395c14e50ebcdee7e537a2dac61069fb27b7f7df4fed6c8a5d3cdc2ef64e600a1df91c91ee445ead5e16ce0fd5db1d2e9d197c11763d0a35259ae37b4340cc86a7f6527b009de305a4a88c7cdd8cc32a761355cf735986e0da262b991d3702ab56332b25f2f6435129909fb80d7581091b1494cd59e ``` The length is 127 byte. If the ciphertext is padded manually to the length of the modulus with `0x00`-values, it can also be decrypted in the Python code: ``` 0027d23f21db12199aab4395c14e50ebcdee7e537a2dac61069fb27b7f7df4fed6c8a5d3cdc2ef64e600a1df91c91ee445ead5e16ce0fd5db1d2e9d197c11763d0a35259ae37b4340cc86a7f6527b009de305a4a88c7cdd8cc32a761355cf735986e0da262b991d3702ab56332b25f2f6435129909fb80d7581091b1494cd59e ``` The decrypted data are: ``` Mzg4MDE1NDU4MTI1ODI0OA==NDQyODYwNjI1MjU4NTM2MA== ``` which are two valid Base64-encoded strings.
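In code, the manual padding described above can be done on the Python side before decrypting; a small sketch (128 is the modulus length in bytes for this particular 1024-bit key):

```python
decode_data = b64decode(crypt_text)
if len(decode_data) < 128:
    # left-pad the ciphertext with zero bytes up to the modulus length
    decode_data = decode_data.rjust(128, b'\x00')
```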
Thanks to Topaco, it solved. ``` from base64 import b64decode, b16decode from Crypto.Cipher import PKCS1_v1_5 as Cipher_PKCS1_v1_5 from Crypto.PublicKey import RSA crypt_text = \ "R247QGAFEeSW1wwXQuNf/cm/K/tnW5xwXLb5MuHW6/Fr8SRklM0n6Rmj07TgFwApeN72j/avXAvpoR70U92ehOJsDnnZguYN4u2bMXHDyTNmAXuJw9xPm59bSGcvgRm1X+V0Zq1FLzGEsPG6tOYEIX+wnIuH3P7QMd02XJfj0w0=" private_key = "MIICXQIBAAKBgQClFImg7N+5ziGtjrMDwN7frootgwrLUmbE9YFBtecnjchCRjAn1wqq69XiWynEv0q3/U91N5g0nJxeMuolSM8cwdQbT3KZFwQF6vreSzDNhfEYOsFVZknILLPiJpUYm5w3Gi34UeM60iHGH9EUnmQeVwKSG0WF2nK2SCU6EyfoJwIDAQABAoGAHHk2Y/N3g2zykiUS64rQ5nQMkV0Q95D2+PH/oX3mqQPjjsrcc4K77E9RTQG8aps0IBgpJGa6chixP+44RMYSMvRIK0wqgX7s6AFIkFIIM+v+bP9pd3kKaVKTcNIjfnKJZokgAnU0QVdf0zeSNElZC+2qe1FbblsSQ6sqaFmHaMECQQC4oZO+w0q2smQh7VZbM0fSIbdZEimX/4y9KN4VYzPQZkDzQcEQX1Al2YAP8eqlzB4r7QcpRJgvUQDODhzMUtP9AkEA5ORFhPVK5slpqYP7pj2F+D2xAoL9XkgBKmhVppD/Sje/vg4yEKCTQ7fRlIzSvtwAvbDJi3ytYqXQWVdaD/Eb8wJAdYC3k8ecTCu6WHFA7Wf0hIJausA5YngMLPLObFQnTLFXErm9UlsmmgATZZJz4LLIXPJMBXKXXD20Qm9u2oa4TQJBAKxBopP6KiFfSNabDkLAoFb+znzuaZGPrNjmZjcRfh6zr+hvNHxQ7CMVbnNWO7AJT8FyD2ubK71GvnLOC2hd8sMCQQCT70B5EpFqULt7RBvCa7wwJsmwaMZLhBcfNmbry/J9SZG3FVrfYf15r0SBRug7mT2gRmH+tvt/mFafjG50VCnw" decode_data = b64decode(crypt_text) if len(decode_data) == 127: hex_fixed = '00' + decode_data.hex() decode_data = b16decode(hex_fixed.upper()) other_private_key = RSA.importKey(b64decode(private_key)) cipher = Cipher_PKCS1_v1_5.new(other_private_key) decrypt_text = cipher.decrypt(decode_data, None).decode() print(decrypt_text) ```
7,562
54,813,438
I am looking to extract content from [a page](https://app.updateimpact.com/treeof/org.json4s/json4s-native_2.11/3.5.2) that requires a list node to be selected. I have retrieved the page HTML using Python and Selenium. Passing the page source to BS4, I can parse out the content I am looking for using ``` open_li = soup.select('div#tree ul.jstree-container-ul li') ``` Each list item returned has ``` aria-expanded = "false" and class="jstree-node jstree-closed" ``` Looking at inspect element, the content is loaded when these attributes are set to ``` aria-expanded = "true" and class="jstree-node jstree-open" ``` I have tried using the .click method on the content ``` driver.find_element_by_id('tree').click() ``` But that only changes other content on the page. I think the list nodes themselves have to be expanded when making the request. Does someone know how to change aria-expanded elements on a page before returning the content? Thanks
2019/02/21
[ "https://Stackoverflow.com/questions/54813438", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2033214/" ]
``` =Unique(A:B) ``` should be enough to return non-duplicate rows <https://support.google.com/docs/answer/3093198?hl=en> [![enter image description here](https://i.stack.imgur.com/w77Gm.png)](https://i.stack.imgur.com/w77Gm.png) You can also use Sortn: ``` =sortn(A:B,9E+99,2,1,true,2,true) ```
``` =QUERY(QUERY(A1:B, "select A, B, count(A) group by A, B", 1), "select Col1, Col2 where Col1 is not null", 1) ``` [![](https://i.stack.imgur.com/oQyB2.png)](https://i.stack.imgur.com/oQyB2.png)
7,563
31,695,910
I'm parsing the US Patent XML files (downloaded from [Google patent dumps](https://www.google.com/googlebooks/uspto-patents-redbook.html)) using Python and Beautifulsoup; parsed data is exported to MYSQL database. Each year's data contains close to 200-300K patents - which means parsing 200-300K xml files. The server on which I'm running the python script is pretty powerful - 16 cores, 160 gigs of RAM, etc. but still it is taking close to 3 days to parse one year's worth of data. [![enter image description here](https://i.stack.imgur.com/Rve8M.png)](https://i.stack.imgur.com/Rve8M.png) [![enter image description here](https://i.stack.imgur.com/dEmQB.png)](https://i.stack.imgur.com/dEmQB.png) I've been learning and using python since 2 years - so I can get stuff done but do not know how to get it done in the most efficient manner. I'm reading on it. How can I optimize the below script to make it efficient? Any guidance would be greatly appreciated. Below is the code: ``` from bs4 import BeautifulSoup import pandas as pd from pandas.core.frame import DataFrame import MySQLdb as db import os cnxn = db.connect('xx.xx.xx.xx','xxxxx','xxxxx','xxxx',charset='utf8',use_unicode=True) def separated_xml(infile): file = open(infile, "r") buffer = [file.readline()] for line in file: if line.startswith("<?xml "): yield "".join(buffer) buffer = [] buffer.append(line) yield "".join(buffer) file.close() def get_data(soup): df = pd.DataFrame(columns = ['doc_id','patcit_num','patcit_document_id_country', 'patcit_document_id_doc_number','patcit_document_id_kind','patcit_document_id_name','patcit_document_id_date','category']) if soup.findAll('us-citation'): cit = soup.findAll('us-citation') else: cit = soup.findAll('citation') doc_id = soup.findAll('publication-reference')[0].find('doc-number').text for x in cit: try: patcit_num = x.find('patcit')['num'] except: patcit_num = None try: patcit_document_id_country = x.find('country').text except: patcit_document_id_country = None try: patcit_document_id_doc_number = x.find('doc-number').text except: patcit_document_id_doc_number = None try: patcit_document_id_kind = x.find('kind').text except: patcit_document_id_kind = None try: patcit_document_id_name = x.find('name').text except: patcit_document_id_name = None try: patcit_document_id_date = x.find('date').text except: patcit_document_id_date = None try: category = x.find('category').text except: category = None print doc_id val = {'doc_id':doc_id,'patcit_num':patcit_num, 'patcit_document_id_country':patcit_document_id_country,'patcit_document_id_doc_number':patcit_document_id_doc_number, 'patcit_document_id_kind':patcit_document_id_kind,'patcit_document_id_name':patcit_document_id_name,'patcit_document_id_date':patcit_document_id_date,'category':category} df = df.append(val, ignore_index=True) df.to_sql(name = 'table_name', con = cnxn, flavor='mysql', if_exists='append') print '1 doc exported' i=0 l = os.listdir('/path/') for item in l: f = '/path/'+item print 'Currently parsing - ',item for xml_string in separated_xml(f): soup = BeautifulSoup(xml_string,'xml') if soup.find('us-patent-grant'): print item, i, xml_string[177:204] get_data(soup) else: print item, i, xml_string[177:204],'***********************************soup not found********************************************' i+=1 print 'DONE!!!' ```
2015/07/29
[ "https://Stackoverflow.com/questions/31695910", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2404998/" ]
Here is a [tutorial on multi-threading](http://www.tutorialspoint.com/python/python_multithreading.htm), because currently that code will run on 1 thread, on 1 core. Remove all the bare try/except statements and handle the errors properly. Exceptions are expensive. Run a [profiler](http://docs.python.org/2/library/profile.html) to find the chokepoints, then multi-thread those or find a way to do them fewer times.
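For illustration, a minimal sketch of the parallel approach (the `parse_one` function is a hypothetical stand-in for the real parsing and insert logic; this is not the original script):

```python
import os
from multiprocessing import Pool  # processes sidestep the GIL for CPU-bound parsing

def parse_one(path):
    # placeholder: open one XML file, parse it, return the rows to insert
    with open(path) as f:
        return len(f.read())

if __name__ == '__main__':
    folder = '/path/'
    files = [os.path.join(folder, name) for name in os.listdir(folder)]
    pool = Pool(processes=8)            # tune to the machine's cores
    results = pool.map(parse_one, files)
    pool.close()
    pool.join()
```

Profiling first (for example with `cProfile`) will tell you whether the parsing or the per-row database writes dominate, and therefore which part is worth parallelising or batching.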
So, you're doing two things wrong. First, you're using BeautifulSoup, which is slow, and second, you're using a "find" call, which is also slow. As a first cut, look at `lxml`'s [ability to pre-compile xpath queries](https://lxml.de/xpathxslt.html) (Look at the heading "The Xpath class). That will give you a **huge** speed boost. Alternatively, I've been working on a library to do this kind of parsing declaratively, using best practices for `lxml` speed, including precompiled xpath called `yankee`. [Yankee on PyPI](https://pypi.org/project/yankee/) | [Yankee on GitHub](https://github.com/parkerhancock/yankee) You could do the same thing with `yankee` like this: ```py from yankee.xml import Schema, fields as f # Create a schema for citations class Citation(Schema): num = f.Str(".//patcit") country = f.Str(".//country") # ... and so forth for the rest of your fields # Then create a "wrapper" to get all the citations class Patent(Schema): citations = f.List(".//us-citation|.//citation") # Then just feed the Schema your lxml.etrees for each patent: import lxml.etree as ET schema = Patent() for _, doc in ET.iterparse(xml_string, "xml"): result = schema.load(doc) ``` The result will look like this: ```py { "citations": [ { "num": "<some value>", "country": "<some value>", }, { "num": "<some value>", "country": "<some value>", }, ] } ``` I would also check out [Dask](https://www.dask.org/) to help you multithread it more efficiently. Pretty much all my projects use it.
7,566
11,707,151
allow me to preface this by saying that i am learning python on my own as part of my own curiosity, and i was recommended a free online computer science course that is publicly available, so i apologize if i am using terms incorrectly. i have seen questions regarding this particular problem on here before - but i have a separate question from them and did not want to hijack those threads. the question: "a substring is any consecutive sequence of characters inside another string. The same substring may occur several times inside the same string: for example "assesses" has the substring "sses" 2 times, and "trans-Panamanian banana" has the substring "an" 6 times. Write a program that takes two lines of input, we call the first needle and the second haystack. Print the number of times that needle occurs as a substring of haystack." my solution (which works) is: ``` first = str(input()) second = str(input()) count = 0 location = 0 while location < len(second): if location == 0: location = str.find(second,first,0) if location < 0: break count = count + 1 location = str.find(second,first,location +1) if location < 0: break count = count + 1 print(count) ``` if you notice, i have on two separate occasions made the if statement that if location is less than 0, to break. is there some way to make this a 'global' condition so i do not have repetitive code? i imagine efficiency becomes paramount with increasing program sophistication so i am trying to develop good practice now. how would python gurus optimize this code or am i just being too nitpicky?
2012/07/29
[ "https://Stackoverflow.com/questions/11707151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1560582/" ]
Check out regular expressions, python's `re` module (http://docs.python.org/library/re.html). For example, ``` import re first = str(input()) second = str(input()) regex = first[:-1] + '(?=' + first[-1] + ')' print(len(re.findall(regex, second))) ```
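As a quick sanity check against one of the question's own examples, a sketch of how this behaves (variable names chosen to match the question's wording):

```python
import re

needle = "sses"
haystack = "assesses"
# match all but the last character, with the last character as a lookahead,
# so overlapping occurrences are still counted
regex = needle[:-1] + '(?=' + needle[-1] + ')'
print(len(re.findall(regex, haystack)))  # prints 2
```

Note that if the needle may contain regex metacharacters, it would need `re.escape()` applied before building the pattern.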
**Answer** ``` needle=input() haystack=input() counter=0 for i in range(0,len(haystack)): if(haystack[i:len(needle)+i]!=needle): continue counter=counter+1 print(counter) ```
7,567
53,844,589
I wrote the below Python script in Sublime Text 3. On executing it (Ctrl + B) it does not give any result.

Step 1: Code:

```
class Avengers(object):
  def __init__(self):
    print('hello')
    avenger1 = Avengers()
    avenger1.__init__(self)
```

Step 2:

```
ctrl + B
```

Step 3: Result:

***Repl Closed***
2018/12/19
[ "https://Stackoverflow.com/questions/53844589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9983752/" ]
That's because you're only declaring a class, not instantiating it. Your variable avenger1 exists within the **init** function, therefore it isn't being called. Indentation matters in python. Try this: ``` class Avengers(object): def __init__(self): print('hello') if __name__ == "__main__": avenger1 = Avengers() ```
You are not instantiating the class. Try something like:

```
class Avengers(object):
  def __init__(self):
    print('hello')

avengers = Avengers()  # Instantiates the class
```

When you instantiate a class like this, it will execute the `__init__` function for that class.
7,577
21,369,607
I am trying to convert the following python extract to C ``` tvip = "192.168.0.3" myip = "192.168.0.7" mymac = "00-0c-29-3e-b1-4f" appstring = "iphone..iapp.samsung" tvappstring = "iphone.UE55C8000.iapp.samsung" remotename = "Python Samsung Remote" ipencoded = base64.b64encode(myip) macencoded = base64.b64encode(mymac) messagepart1 = chr(0x64) + chr(0x00) + chr(len(ipencoded)) \ + chr(0x00) + ipencoded + chr(len(macencoded)) + chr(0x00) \ + macencoded + chr(len(base64.b64encode(remotename))) + chr(0x00) \ + base64.b64encode(remotename) part1 = chr(0x00) + chr(len(appstring)) + chr(0x00) + appstring \ + chr(len(messagepart1)) + chr(0x00) + messagepart1 ``` I don't actually know what `messagepart1` actually represents in terms of datastructure. Here is my attempt: ``` var myip_e = myip.ToBase64(); var mymac_e = mymac.ToBase64(); var m1 = (char)0x64 + (char)0x00 + (char)myip_e.Length + (char)0x00 + myip_e + (char)mymac_e.Length + (char)0x00 + mymac_e + (char)remotename.ToBase64().Length + (char)0x00 +remotename.ToBase64(); var p1 = (char)0x00 + (char)appstring.Length + (char)0x00 + appstring + (char)m1.Length + (char)0x00 + m1; //var b1 = p1.GetBytes(); // this is to write to the socket. public static string ToBase64(this string source) { return Convert.ToBase64String(source.GetBytes()); } public static byte[] GetBytes(this string source) { byte[] bytes = new byte[source.Length * sizeof(char)]; System.Buffer.BlockCopy(source.ToCharArray(), 0, bytes, 0, bytes.Length); return bytes; } ``` The way I am comparing is that I am printing both to console, expecting them to be the same if correct - obviously I am doing something wrong.
2014/01/26
[ "https://Stackoverflow.com/questions/21369607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126280/" ]
`fragment` lexer rules can only be used by other lexer rules: these will never become a token on their own. Therefore, you cannot use `fragment` rules in parser rules.
The `fragment` is not the root cause. First, try to reproduce your errors: ------------------------------------ When compiling your Test.g4, it will appear warnings below: ``` warning(156): Test.g4:11:21: invalid escape sequence \" warning(156): Test.g4:123:59: invalid escape sequence \" warning(146): Test.g4:11:0: non-fragment lexer rule QUOTE can match the empty string warning(125): Test.g4:3:8: implicit definition of token NonZeroDigit in parser warning(125): Test.g4:3:25: implicit definition of token Digit in parser ``` After removing unused rules: ```g4 grammar Test; start : NonZeroDigit '.' Digit Digit? EOF ; fragment NonZeroDigit : [1-9] ; fragment Digit : '0' | NonZeroDigit ; ``` Then compile it again and test it: ``` warning(125): Test.g4:3:8: implicit definition of token NonZeroDigit in parser warning(125): Test.g4:3:25: implicit definition of token Digit in parser line 1:0 token recognition error at: '1' line 1:2 token recognition error at: '1' line 1:3 token recognition error at: '1' line 1:1 missing NonZeroDigit at '.' line 1:4 missing Digit at '<EOF>' (start <missing NonZeroDigit> . <missing Digit> <EOF>) ``` (try to reproduce your errors) When applying **'fragment'** ---------------------------- When applying **'fragment'** on `NonZeroDigit` and `Digit`, the g4 will be equivalent to : replace `NonZeroDigit` with `[1-9]` ``` grammar Test; start : [1-9] '.' Digit Digit? EOF ; fragment Digit : '0' | [1-9] ; ``` replace `Digit` with `('0' | [1-9])` ``` grammar Test; start : [1-9] '.' ('0' | [1-9]) ('0' | [1-9])? EOF ; ``` but the parser rule `start`(the identifier starts with a lowercase alphabet) cannot be all letters. Refer to `The Definitive ANTLR 4 Reference` Page73 > > lexer rule names with uppercase letters and parser rule names with > lowercase letters. For example, ID is a lexical rule name, and expr is > a parser rule name. > > > After removing 'fragment' ------------------------- After removing 'fragment' from g4, there is still an unexpected error. ``` line 1:3 extraneous input '3' expecting {<EOF>, Digit} (start 1 . 0 3 <EOF>) ``` **Error study:** for `NonZeroDigit`: if naming as nonZeroDigit, we will get: ``` syntax error: '1-9' came as a complete surprise to me while matching alternative ``` Because `[1-9]` is a letter (constant token). We need to name it with an uppercase prefix. (=lexer rule) `for Digit`: it containing an identifier `NonZeroDigit`, so we need to name it with a lowercase prefix. (=parser rule) The correct Test.g4 should be: ------------------------------ ```g4 grammar Test; start : NonZeroDigit '.' digit digit? EOF ; NonZeroDigit : [1-9] ; digit : '0' | NonZeroDigit ; ``` If you want to use `fragment`, you should create a lexer rule `Number` because the rule ONLY consists of letters (constant tokens). And the identifier should start with an uppercase prefix, `start` is not ```g4 grammar Test; start : Number EOF ; Number : NonZeroDigit '.' Digit Digit? ; fragment NonZeroDigit : [1-9] ; fragment Digit : '0' | NonZeroDigit ; ```
7,582
59,415,503
I am trying to run the object detection API in tensorflow following this tutorial / accompanying code: <https://gilberttanner.com/blog/creating-your-own-objectdetector> When I type `python2 generate_tfrecord.py --csv_input=images_train.csv --image_dir=images\train --output_path=train.record` into the terminal, I see a file train.record is created in this directory, but I also get the following error message: ``` Traceback (most recent call last): File "generate_tfrecord.py", line 107, in <module> tf.app.run() File "/usr/local/lib/python2.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File "/usr/local/lib/python2.7/site-packages/absl/app.py", line 299, in run _run_main(main, args) File "/usr/local/lib/python2.7/site-packages/absl/app.py", line 250, in _run_main sys.exit(main(argv)) File "generate_tfrecord.py", line 96, in main grouped = split(examples, 'filename') File "generate_tfrecord.py", line 47, in split gb = df.groupby(group) File "/Users/sofiatomov/Library/Python/2.7/lib/python/site-packages/pandas/core/generic.py", line 6665, in groupby observed=observed, **kwargs) File "/Users/sofiatomov/Library/Python/2.7/lib/python/site-packages/pandas/core/groupby/groupby.py", line 2152, in groupby return klass(obj, by, **kwds) File "/Users/sofiatomov/Library/Python/2.7/lib/python/site-packages/pandas/core/groupby/groupby.py", line 599, in __init__ mutated=self.mutated) File "/Users/sofiatomov/Library/Python/2.7/lib/python/site-packages/pandas/core/groupby/groupby.py", line 3291, in _get_grouper raise KeyError(gpr) KeyError: 'filename' ``` How do I fix this? Thanks.
2019/12/19
[ "https://Stackoverflow.com/questions/59415503", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11756066/" ]
What you have here is an async function that performs an async operation, where that operation does *not* use promises. This means that you need to set up a function that manages and returns a promise explicitly.

You don't need the `async` keyword here, since you want to explicitly `return` a `Promise` that you create, and not a promise created for you by the async keyword (which you cannot directly manage).

```
function httpRequest() {
  // Return a promise (this is what an async function does for you)
  return new Promise(resolve => {
    const oReq = new XMLHttpRequest();
    oReq.addEventListener("load", function() {
      // Resolve the promise when the async operation is complete.
      resolve(this.responseText);
    });
    oReq.open("GET", "http://some.url.here");
    oReq.send();
  });
}
```

Now that the function explicitly returns a promise, it can be `await`ed just like an `async` function. Or used with `then()` like any promise.

```
async function someFunction() {
    // await
    const awaitedResponse = await httpRequest();
    console.log(awaitedResponse);

    // or promise chain
    httpRequest().then(responseText => console.log(responseText));
};
```
Basically, you are trying to write an `async` function without having anything in that function to await. You use async/await when there is something asynchronous in the code, while in yours, there isn't. This is an example that might be useful:

```
const getItemsAsync = async () => {
  const res = await DoSomethingAsync();
  return res.items;
}

const items = await getItemsAsync();
```

As you can see, `DoSomethingAsync` is an asynchronous function I await the result for, which is `res`. At that point, the code will pause. Once the promise is resolved, the code will resume and therefore will return `res.items`, which means that `items` will actually contain the result of the async function.

Also, async/await is just a special syntax that makes the code more readable, by giving it a more synchronous form (not substance). If you really want to make it asynchronous, you can promisify your synchronous code in order to make it return a promise to await, which will then be either resolved or rejected.
7,583
63,664,484
I have to create a function called read\_data that takes a filename as its only parameter. This function must then open the file with the given name and return a dictionary where the keys are the location names in the file and the values are a list of the readings. The result of the first function works and displays: ``` {'Monday': [67 , 43], 'Tuesday': [14, 26], 'Wednesday': [68, 44], ‘Thursday’:[15, 35],’Friday’:[70, 31],’Saturday’;[34, 39],’Sunday’:[22, 18]} ``` The second function named get\_average\_dictionary that takes a dictionary structured like the return value of read\_data as its only parameter and returns a dictionary with the same keys as the parameter, but with the average value of the readings rather than the list of individual readings. This has to return: ``` {'Monday': [55.00], 'Tuesday': [20.00], 'Wednesday': [56.00], ‘Thursday’:[25.00],’Friday’:[50.50],’Saturday’;[36.50],’Sunday’:[20.00]} ``` But I can not get it to work. I get the following errors: ``` line 25, in <module> averages = get_average_dictionary(readings) line 15, in get_average_dictionary average = {key: sum(val)/len(val) for key, val in readings.items()} AttributeError: 'NoneType' object has no attribute 'items' ``` Here is the code I have at the moment. Any help would be appreciated. ``` def read_data(filename): readings = {} with open("c:\\users\\jstew\\documents\\readings.txt") as f: for line in f: (key, val) = line.split(',') if not key in readings.keys(): readings[key] = [] readings[key].append(int(val)) print(readings) def get_average_dictionary(readings): average = {key: sum(val)/len(val) for key, val in readings.items()} print(average) FILENAME = "readings.txt" if __name__ == "__main__": try: readings = read_data(FILENAME) averages = get_average_dictionary(readings) # Loops through the keys in averages, sorted from that with the largest associated value in averages to the lowest - see https://docs.python.org/3.5/library/functions.html#sorted for details for days in sorted(averages, key = averages.get, reverse = True): print(days, averages[days]) ```
2020/08/31
[ "https://Stackoverflow.com/questions/63664484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14001036/" ]
Given: ``` di={'Monday': [67 , 43], 'Tuesday': [14, 26], 'Wednesday': [68, 44], 'Thursday':[15, 35],'Friday':[70, 31],'Saturday':[34, 39],'Sunday':[22, 18]} ``` You can do: ``` >>> {k:sum(v)/len(v) for k,v in di.items()} {'Monday': 55.0, 'Tuesday': 20.0, 'Wednesday': 56.0, 'Thursday': 25.0, 'Friday': 50.5, 'Saturday': 36.5, 'Sunday': 20.0} ``` The error you have seems to be that you are returning nothing from your function. Just do: ``` def a_func(di): return {k:sum(v)/len(v) for k,v in di.items()} ``` And you should be good to go...
You were close but had at least one problem. One was this: `Friday’:[50.50],’Saturday’;[36.50],’Sunday’: [22, 18]` Notice ’Saturday’ is followed by a semicolon, not a colon. That's in both examples. Also, notice your text changes color from red to blue. That usually (this case included) means that you switched from single quotes to something like smartquotes or a character that looks like a normal quote but isn't recognized as such. ``` {'Monday': [67 , 43], 'Tuesday': [14, 26], 'Wednesday': [68, 44], ‘Thursday’:[15, 35],’Friday’:[70, 31],’Saturday’;[34, 39],’Sunday’:[22, 18]} ``` Once those are cleared up you just have to deal with the last part, returning vs printing the result. ``` def get_average_dictionary(readings): return {k:(sum(v)/len(v)) for (k,v) in vals.items()} ```
7,584
46,078,088
I want to upload an image on Google Cloud Storage from a python script. This is my code: ``` from oauth2client.service_account import ServiceAccountCredentials from googleapiclient import discovery scopes = ['https://www.googleapis.com/auth/devstorage.full_control'] credentials = ServiceAccountCredentials.from_json_keyfile_name('serviceAccount.json', scop es) service = discovery.build('storage','v1',credentials = credentials) body = {'name':'my_image.jpg'} req = service.objects().insert( bucket='my_bucket', body=body, media_body=googleapiclient.http.MediaIoBaseUpload( gcs_image, 'application/octet-stream')) resp = req.execute() ``` if `gcs_image = open('img.jpg', 'r')` the code works and correctly save my image on Cloud Storage. How can I directly upload a bytes image? (for example from an OpenCV/Numpy array: `gcs_image = cv2.imread('img.jpg')`)
2017/09/06
[ "https://Stackoverflow.com/questions/46078088", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7219743/" ]
If you want to upload your image from a file:

```
import os
from google.cloud import storage

def upload_file_to_gcs(bucket_name, local_path, local_file_name, target_key):
    try:
        client = storage.Client()
        bucket = client.bucket(bucket_name)
        full_file_path = os.path.join(local_path, local_file_name)
        bucket.blob(target_key).upload_from_filename(full_file_path)
        return bucket.blob(target_key).public_url

    except Exception as e:
        print(e)
        return None
```

but if you want to upload bytes directly:

```
import os
from google.cloud import storage

def upload_data_to_gcs(bucket_name, data, target_key):
    try:
        client = storage.Client()
        bucket = client.bucket(bucket_name)
        bucket.blob(target_key).upload_from_string(data)
        return bucket.blob(target_key).public_url

    except Exception as e:
        print(e)
        return None
```

Note that `target_key` is the full object key of the uploaded object, i.e. any prefix plus the file name.
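For the OpenCV/NumPy case in the question, a short hedged sketch of how the bytes could be produced and passed to the `upload_data_to_gcs` helper above (assuming `cv2` is installed; the bucket and key names are just examples):

```python
import cv2

img = cv2.imread('img.jpg')             # NumPy array (BGR)
ok, buf = cv2.imencode('.jpg', img)     # encode the array to JPEG bytes in memory
if ok:
    url = upload_data_to_gcs('my_bucket', buf.tobytes(), 'my_image.jpg')
```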
`MediaIoBaseUpload` expects an [`io.Base`](https://docs.python.org/3/library/io.html#io.IOBase)-like object and raises the following error:

```
'numpy.ndarray' object has no attribute 'seek'
```

upon receiving an ndarray object. To solve it I am using `TemporaryFile` and `numpy.ndarray().tofile()`

```
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient import discovery
import googleapiclient
import numpy as np
import cv2
from tempfile import TemporaryFile

scopes = ['https://www.googleapis.com/auth/devstorage.full_control']
credentials = ServiceAccountCredentials.from_json_keyfile_name('serviceAccount.json', scopes)
service = discovery.build('storage','v1',credentials = credentials)
body = {'name':'my_image.jpg'}

with TemporaryFile() as gcs_image:
    cv2.imread('img.jpg').tofile(gcs_image)

    req = service.objects().insert(
        bucket='my_bucket', body=body,
        media_body=googleapiclient.http.MediaIoBaseUpload(
        gcs_image, 'application/octet-stream'))

    resp = req.execute()
```

Be aware that googleapiclient is non-idiomatic and **maintenance only** (it's not actively developed anymore). I would recommend using the [idiomatic client library](https://googlecloudplatform.github.io/google-cloud-python/latest/index.html) instead.
7,585
35,144,550
My ubuntu is 14.04 LTS. When I install cryptography, the error is: ``` Installing egg-scripts. uses namespace packages but the distribution does not require setuptools. Getting distribution for 'cryptography==0.2.1'. no previously-included directories found matching 'documentation/_build' zip_safe flag not set; analyzing archive contents... six: module references __path__ Installed /tmp/easy_install-oUz7ei/cryptography-0.2.1/.eggs/six-1.10.0-py2.7.egg Searching for cffi>=0.8 Reading https://pypi.python.org/simple/cffi/ Best match: cffi 1.5.0 Downloading https://pypi.python.org/packages/source/c/cffi/cffi-1.5.0.tar.gz#md5=dec8441e67880494ee881305059af656 Processing cffi-1.5.0.tar.gz Writing /tmp/easy_install-oUz7ei/cryptography-0.2.1/temp/easy_install-Yf2Yl3/cffi-1.5.0/setup.cfg Running cffi-1.5.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-oUz7ei/cryptography-0.2.1/temp/easy_install-Yf2Yl3/cffi-1.5.0/egg-dist-tmp-A2kjMD c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory #include <ffi.h> ^ compilation terminated. error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 An error occurred when trying to install cryptography 0.2.1. Look above this message for any errors that were output by easy_install. While: Installing egg-scripts. Getting distribution for 'cryptography==0.2.1'. Error: Couldn't install: cryptography 0.2.1 ``` I don't know why it was failed. What is the reason. Is there something necessary when install it on ubuntu system?
2016/02/02
[ "https://Stackoverflow.com/questions/35144550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2890633/" ]
The answer is on the docs of `cryptography`'s [installation section](https://cryptography.io/en/latest/installation/#building-cryptography-on-linux) which pretty much reflects Angelos' answer: Quoting it: > > For Debian and **Ubuntu**, the following command will ensure that the > required dependencies are installed: > > > > ``` > $ sudo apt-get install build-essential libssl-dev libffi-dev python-dev > > ``` > > For Fedora and RHEL-derivatives, the following command will ensure > that the required dependencies are installed: > > > > ``` > $ sudo yum install gcc libffi-devel python-devel openssl-devel > > ``` > > You should now be able to build and install cryptography with the > usual > > > > ``` > $ pip install cryptography > > ``` > > If you're using Python 3, please use `python3-dev` instead of `python-dev` in the first command. (thanks to @chasmani) If you're installing this on `Ubuntu 18.04`, please use `libssl1.0` instead of `libssl-dev` in the first command. (thanks to @pobe)
I had the same problem when pip installing the cryptography module on Ubuntu 14.04. I solved it by installing libffi-dev: ``` apt-get install -y libffi-dev ``` Then I got the following error: ``` build/temp.linux-x86_64-3.4/_openssl.c:431:25: fatal error: openssl/aes.h: No such file or directory #include <openssl/aes.h> ^ compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ``` Which I resolved by installing libssl-dev: ``` apt-get install -y libssl-dev ```
7,590
70,021,042
```
import spacy
nlp = spacy.load('en_core_web_sm')
from spacy.lemmatizer import Lemmatizer
from spacy.lang.en import LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES
lemmatizer = Lemmatizer(LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES)
lemmatizer('chunkles', 'NOUN')
```

Can anyone help me? I'm using version 3 of Python.
2021/11/18
[ "https://Stackoverflow.com/questions/70021042", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17447993/" ]
The official documentation shows that as of spaCy 3.0 the lemmatizer has become a standalone pipeline component, so the imports used above no longer exist. Therefore, you should install a spaCy version lower than 3.0 (or use the lemmatizer through the pipeline). The link is as follows: <https://spacy.io/api/lemmatizer>
Try:

```
doc = nlp('chuckles')
doc[0].lemma_
```
7,593
9,345,250
I have tried:

```
>>> l = [1,2,3]
>>> x = 1
>>> x in l and lambda: print("Foo")
x in l && print "Horray"
          ^
SyntaxError: invalid syntax
```

A bit of googling revealed that `print` is a statement in `python2` whereas it's a function in `python3`. But I have tried the above snippet in `python3` and it throws a SyntaxError exception. Any idea on how I can do it in **one line**?

(Readability or good programming practice is not an issue here)
2012/02/18
[ "https://Stackoverflow.com/questions/9345250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1218583/" ]
``` l = [1, 2, 3] x = 1 if x in l: print "Foo" ``` I'm not being a smart ass, this is the way to do it in **one line**. Or, if you're using Python3: ``` if x in l: print("Foo") ```
Getting rid of the shortcomings of print as a statement in Python2.x using `from __future__ import print_function` is the first step. Then the following all work: ``` x in l and (lambda: print("yes"))() # what an overkill! (x in l or print("no")) and print("yes") # note the order, print returns None print("yes") if x in l else print("no") # typical A if Cond else Y print("yes" if x in l else "no") # a more condensed form ``` For even more fun, if you're into this, you can consider this - prints and returns True or False, depending on the `x in l` condition (to get the False I used the double not): ``` def check_and_print(x, l): return x in l and not print("yes") or not not print("no") ``` That was ugly. To make the print transparent, you could define 2 other version of print, which return True or False. This could actually be useful for logging: ``` def trueprint(*args, **kwargs): print(*args, **kwargs) return True def falseprint(*args, **kwargs): return not trueprint(*args, **kwargs) result = x in l and trueprint("yes") or falseprint("no") ```
7,594
61,642,246
What I want to make is angrybirds game. There is a requirement 1.Draw a rectangle randomly between 100 and 200 in length and length 10 in length. 2. Receive the user inputting the launch speed and the launch angle. 3. Project shells square from origin (0,0). 4. If the shell is hit, we'll end it, or we'll continue from number two. So this is what I wrote ``` import turtle as t import math import random def square(): for i in range(4): t.forward(10) t.left(90) def fire(): x = 0 y = 0 speed = int(input("속도:")) angle = int(input("각도:")) vx = speed * math.cos(angle * 3.14/180.0) vy = speed * math.sin(angle * 3.14/180.0) while t.ycor() >= 0: vx = vx vy = vy - 10 x = x + vx y = y + by t.goto(x,y) d = t.distance(d1+5, 5) if d < 10: print("Game End") else: t.up() t.goto(0,0) t.down() fire() d1 = random.randint(100,200) t.up() t.forward(d1) t.down() square() t.up() t.goto(0,0) t.down() fire() ``` I want to get an answer to this problem. The problem is I want to calculate a minimum distance between a point(target point is (d1+5,5)) and a parabola which turtle draw. so I try to find an answer searching in google, python book, but I can't find it. please help me
2020/05/06
[ "https://Stackoverflow.com/questions/61642246", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13483726/" ]
The local declaration has to be written this way:

```
int (*localArr)[M][N];   // pointer to an M x N array
// int *localArr[M][N];  // whereas this would be an M x N array of pointers to int
```
What do you want to achieve? If you just want to print your 2d array then why don't you use this approach? ``` void print(int localArr[M][N]) { for (int i = 0; i < M; i++) { for (int j = 0; j < N; j++) { cout << localArr[i][j]; } } } ``` If there are some constraints then Nitheesh is right!
7,598
35,859,927
Say that in the terminal I run `cd Desktop`; as you know, it moves you into that directory. How do I do the same thing in Python, using `raw_input("")` to let the user type the directory (for example, Desktop) to move into?
2016/03/08
[ "https://Stackoverflow.com/questions/35859927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Your code structure is very unconventional and I suspect you're rather new to scheme/racket. Your procedure can be written in a much more idiomatic way. The first criticism I'd probably make about your code is that it makes the assumption that the lists you're unzipping will only have 2 elements each. * What about unzipping 3 lists of 5 elements or 5 lists of 3 elements ? * What about unzipping 4 lists of 4 elemens ? * What about unzipping 1 list of 7 elements or 7 lists of 1 element ? * What about unzipping nothing ? These questions all point to a fundamental concept that helps shape well-structured procedures: ***"What is a "total" procedure ?"*** A **total procedure** is one that is defined for all values of an accepted type. What that means to us is that, if we write an `unzip` procedure, it should * accept an empty list * accept any number of lists * accept lists of any length1 Let's take a look at an `unzip` procedure that does that now. It's likely this procedure can be improved, but at the very least, it's easy to read and comprehend ``` (define (unzip xs (ys empty)) ; if no xs are given, return ys (cond [(empty? xs) empty] ; if the first input is empty, return the final answer; reversed [(empty? (car xs)) (reverse ys)] ; otherwise, unzip the tail of each xs, and attach each head to ys [else (unzip (map cdr xs) (cons (map car xs) ys))])) (unzip '((1 2) (3 4) (5 6))) ; => '((1 3 5) (2 4 6)) ``` Let's step through the evaluation. ``` ; initial call (unzip '((1 2) (3 4) (5 6))) ; (empty? xs) nope ; (empty? (car xs)) nope ; (unzip (map cdr xs) (cons (map car xs) ys)) ; substitue values (unzip (map cdr '((1 2) (3 4) (5 6))) (cons (map car '((1 2) (3 4) (5 6))) empty)) ; eval (map cdr xs) (unzip '((2) (4) (6)) (cons (map car '((1 2) (3 4) (5 6))) empty)) ; eval (map car xs) (unzip '((2) (4) (6)) (cons '(1 3 5) empty)) ; eval cons ; then recurse unzip (unzip '((2) (4) (6)) '((1 3 5))) ; (empty? xs) nope ; (empty? (car xs)) nope ; (unzip (map cdr xs) (cons (map car xs) ys)) ; substitue values (unzip (map cdr '((2) (4) (6))) (cons (map car '((2) (4) (6))) '((1 3 5)))) ; eval (map cdr xs) (unzip '(() () ()) (cons (map car '((2) (4) (6))) '((1 3 5)))) ; eval (map car xs) (unzip '(() () ()) (cons '(2 4 5) '((1 3 5)))) ; eval cons ; then recurse (unzip '(() () ()) '((2 4 5) (1 3 5))) ; (empty? xs) nope ; (empty? (car xs)) yup! ; (reverse ys) ; substituion (reverse '((2 4 5) (1 3 5))) ; return '((1 3 5) (2 4 5)) ``` Here's another thing to think about. Did you notice that `unzip` is basically doing the same thing as `zip` ? Let's look at your input little closer ``` '((1 2) (3 4) (5 6)) ^ ^ ``` Look at the columns. If we were to `zip` these, we'd get ``` '((1 3 5) (2 4 6)) ``` ***"Wait, so do you mean that a `unzip` is just another `zip` and vice versa ?"*** Yup. ``` (unzip '((1 2) (3 4) (5 6))) ; => '((1 3 5) (2 4 6)) (unzip (unzip '((1 2) (3 4) (5 6)))) ; '((1 2) (3 4) (5 6)) (unzip (unzip (unzip '((1 2) (3 4) (5 6))))) ; '((1 3 5) (2 4 6)) ``` Knowing this, if you already had a `zip` procedure, your definition to `unzip` becomes insanely easy ``` (define unzip zip) ``` Which basically means: **You don't need an `unzip` procedure**, just re-`zip` it ``` (zip '((1 2) (3 4) (5 6))) ; => '((1 3 5) (2 4 6)) (zip (zip '((1 2) (3 4) (5 6)))) ; '((1 2) (3 4) (5 6)) (zip (zip (zip '((1 2) (3 4) (5 6))))) ; '((1 3 5) (2 4 6)) ``` Anyway, I'm guessing your `unzip` procedure implementation is a bit of homework. 
The long answer your professor is expecting is probably something along the lines of the procedure I originally provided. The sneaky answer is `(define unzip zip)` --- ***"So is this `unzip` procedure considered a total procedure ?"*** * What about unzipping 3 lists of 5 elements or 5 lists of 3 elements ? ``` (unzip '((a b c d e) (f g h i j) (k l m n o p))) ; => '((a f k) (b g l) (c h m) (d i n) (e j o)) (unzip '((a b c) (d e f) (g h i) (k l m) (n o p))) ; => '((a d g k n) (b e h l o) (c f i m p)) ``` * What about unzipping 4 lists of 4 elemens ? ``` (unzip '((a b c d) (e f g h) (i j k l) (m n o p))) ; => '((a e i m) (b f j n) (c g k o) (d h l p)) ``` * What about unzipping 1 list of 7 elements or 7 lists of 1 element ? ``` (unzip '((a b c d e f g))) ; => '((a) (b) (c) (d) (e) (f) (g)) (unzip '((a) (b) (c) (d) (e) (f) (g))) ; => '((a b c d e f g)) ``` * What about unzipping nothing ? ``` (unzip '()) ; => '() ``` * What about unzipping 3 empty lists ? ``` (unzip '(() () ())) ; => '() ``` > > **1** We said that `unzip` should "accept lists of any length" but we're bending the rules just a little bit here. It's true that `unzip` accepts lists of any length, but it's also true that each list much be the same length as the others. For lists of varying length, an objective "correct" solution is not possible and for this lesson, we'll leave the behavior for mixed-length lists as *undefined*. > > > > ``` > ; mixed length input is undefined > (unzip '((a) (b c d) (e f))) ; => ??? > > ``` > > --- **A couple side notes** Things like ``` (car (car x)) (car (cdr (car x))) ``` Can be simplified to ``` (caar x) (cadar x) ``` The following [pair accessor short-hand procedures](https://docs.racket-lang.org/reference/pairs.html#%28part._.Pair_.Accessor_.Shorthands%29) exist ``` caar ; (car (car x)) cadr ; (car (cdr x)) cdar ; (cdr (car x)) cddr ; (cdr (cdr x)) caaar ; (car (car (car x))) caadr ; (car (car (cdr x))) cadar ; (car (cdr (car x))) caddr ; (car (cdr (cdr x))) cdaar ; (cdr (car (car x))) cdadr ; (cdr (car (cdr x))) cddar ; (cdr (cdr (car x))) cdddr ; (cdr (cdr (cdr x))) caaaar ; (car (car (car (car x)))) caaadr ; (car (car (car (cdr x)))) caadar ; (car (car (cdr (car x)))) caaddr ; (car (car (cdr (cdr x)))) cadaar ; (car (cdr (car (car x)))) cadadr ; (car (cdr (car (cdr x)))) caddar ; (car (cdr (cdr (car x)))) cadddr ; (car (cdr (cdr (cdr x)))) cdaaar ; (cdr (car (car (car x)))) cdaadr ; (cdr (car (car (cdr x)))) cdadar ; (cdr (car (cdr (car x)))) cdaddr ; (cdr (car (cdr (cdr x)))) cddaar ; (cdr (cdr (car (car x)))) cddadr ; (cdr (cdr (car (cdr x)))) cdddar ; (cdr (cdr (cdr (car x)))) cddddr ; (cdr (cdr (cdr (cdr x)))) ```
It is combining the lists correctly, but it's not combining the correct lists. Extracting the local definitions makes them testable in isolation: ``` (define (front a) (if (null? a) '() (cons (car (car a)) (unzip (cdr a))))) (define (back b) (if (null? b) '() (cons (car (cdr (car b))) (unzip (cdr b))))) (define (unzip l) (list (front l) (back l))) (define test '((1 2) (3 4) (5 6))) ``` Test: ``` > (front test) '(1 (3 (5 () ()) (6 () ())) (4 (5 () ()) (6 () ()))) > (front '((1 2))) '(1 () ()) > (back '((1 2))) '(2 () ()) ``` Weird... ``` > (unzip '()) '(() ()) > (unzip '((1 2))) '((1 () ()) (2 () ())) ``` It looks like *something* is correct, but the lists' tails are wrong. If you look carefully at the definitions of `front` and `back`, they're recursing to `unzip`. But they should recurse to themselves - `front` is the "first first" followed by the rest of the "firsts", and `back` is the "first second" followed by the rest of the "seconds". `unzip` has nothing to do with this. ``` (define (front a) (if (null? a) '() (cons (car (car a)) (front (cdr a))))) (define (back b) (if (null? b) '() (cons (car (cdr (car b))) (back (cdr b))))) ``` And now... ``` > (front test) '(1 3 5) > (back test) '(2 4 6) > (unzip test) '((1 3 5) (2 4 6)) ```
7,600
62,678,802
I have a client for whom I have created a program that utilizes a variety of data and machine learning packages. The client would like for the program to be easily run without installing any type of python environment. Is this possible? I am assuming the best bet would be to transform the .py file into a .exe file but am unsure of how to do this if I have packages that need to be installed before the program can be run. Are there websites that exist that allow you to easily host complex .py files on them to be run by anyone that accesses the URL?
2020/07/01
[ "https://Stackoverflow.com/questions/62678802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12512519/" ]
This usually happens when you're opening someone else's project after unzipping it and your current Android Studio version is older than the version the project was compiled with. The way to solve it is

1. Go to Help > About to see your Android Studio version
2. Go to File > Project Structure and set your Android Gradle Plugin version to match your Android Studio version
3. Change the Gradle version to the one you usually use.

Build the project and it should run without any errors.

**Edit**: It is tough to get the Android Gradle plugin version directly due to the new naming convention going forward (2020.3.1). Refer to this guide for such cases [Android Gradle plugin release](https://developer.android.com/studio/releases/gradle-plugin) or use an older Gradle plugin like 4.2. If it is higher than 7.0.0, use 7.0.2 or 7.0.3.

If everything else fails, download the Android Studio Canary build and run your project there.

**EDIT FEB 2022**
Sometimes when you go to File > Project Structure the Gradle plugin will not show the option. In this case go to the project-level build.gradle file.

Change the below

```
id 'com.android.application' version '7.0.3' apply false
id 'com.android.library' version '7.0.2' apply false
id 'org.jetbrains.kotlin.android' version '1.5.21' apply false
```

versions from whatever you have to a lower one (make sure that it exists).
Check the version of Android Studio, or the IDEA Plugin. For example, Android Studio **4.0** and Android Plugin 10.**4.0** require a 4.0.x version of the Android tools. Therefore in build.gradle, change e.g. `com.android.tools.build:gradle:4.2.0-beta2"` to `com.android.tools.build:gradle:4.0.2"`. **Update:** This doesn't apply for 7+ as IntelliJ have changed their version numbers to be date-based. Tools version 7.1 works with Android Studio/Plugin 2021.1, and 7.0 works with 2020.3.
7,601
55,125,763
I'm trying to find the size, *in points*, of some text using Pillow in python. As I understand it, font sizes in points correspond to real physical inches on a target display or surface, with 72 points per inch. When I use Pillow's `textsize` method, I can find the size in pixels of some text rendered at a given font size (in points), but don't know how to get back to a coordinate system based in inches, because I don't have (and can't set) the pixel density of the image: ``` from PIL import Image, ImageFont, ImageDraw image = Image.new('RGBA', (400, 300), (255, 255, 255)) font = ImageFont.truetype('/Library/Fonts/Arial.ttf', 16) image.info['dpi'] = 100 print( ImageDraw.Draw(image).textsize('Loreum ipsum', font=font) ) image.info['dpi'] = 1000 print( ImageDraw.Draw(image).textsize('Loreum ipsum', font=font) ) ``` will print ``` (101, 18) (101, 18) ``` How do I get the size *in points* of the given text?
2019/03/12
[ "https://Stackoverflow.com/questions/55125763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10686733/" ]
**TL;DR** There is no implicit pixel density, because the Pillow documentation is incorrect. When you create the font, you are specifying the `size` in *pixels* even though the Pillow documentation says it's in points. It's actually doing all of these operations in pixels. **More detail** The Pillow documentation for `ImageFont.truetype` says that the `size` argument is in points. However, looking at the [source](https://github.com/python-pillow/Pillow/blob/44da2c3878c9def8a103b19b0915aa774985aee1/src/PIL/ImageFont.py#L193) for the `ImageFont` module, it passes the `size` argument to `core.getfont`. ``` self.font = core.getfont( font, size, index, encoding, layout_engine=layout_engine ) ``` That call [ultimately leads](https://github.com/python-pillow/Pillow/blob/44da2c3878c9def8a103b19b0915aa774985aee1/src/_imagingft.c#L313) to a call to `FT_Set_Pixel_Sizes`, which is provided by the FreeType library. This call forwards the given `size`, unmodified. ``` error = FT_Set_Pixel_Sizes(self->face, 0, size); ``` The [documentation](https://www.freetype.org/freetype2/docs/reference/ft2-base_interface.html#ft_set_pixel_sizes) for `FT_Set_Pixel_Sizes` states > > > ``` > FT_EXPORT( FT_Error ) > FT_Set_Pixel_Sizes( FT_Face face, > FT_UInt pixel_width, > FT_UInt pixel_height ); > > ``` > > Call FT\_Request\_Size to request the nominal size (in **pixels**). > > > `face` A handle to the target face object. > > > `pixel_width` The nominal width, in **pixels**. > > > `pixel_height` The nominal height, in **pixels**. > > > So really there are no physical distances involved here at all, and there is no assumed DPI for either `ImageFont.truetype` or `ImageDraw.textsize`, despite the misleading Pillow documentation. Everything is in pixels, so no DPI is required. **Sidenote**: You may notice that the size you requested (16) is not exactly equal to the size you are getting back (18), but FreeType mentions that the size you ask for is not necessarily the size you get, and is at the discretion of the font itself.
Images don't have an "implicit" pixel density, they just have different numbers of pixels.

The size of anything measured in pixels will depend on the display device's DPI or dots-per-inch. For example on a 100 DPI device, 12 pixels would appear to be 12/100 or 0.12 inches long.

To convert inches to points, multiply them by 72. So 12 pixels → 0.12 inches \* 72 → 8.64 points.
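As a rough illustration of that conversion (the function name and the 100 DPI figure are just examples):

```python
def pixels_to_points(pixels, dpi):
    inches = pixels / float(dpi)  # physical length on a display with the given DPI
    return inches * 72            # 72 points per inch

print(pixels_to_points(12, 100))  # 8.64
```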
7,611
39,980,658
I am starting to work with the [Django REST framework](http://www.django-rest-framework.org/) for a mini-reddit project I already developed. The problem is that I am stuck in this situation: A `Minisub` is like a subreddit. It has, among others, a field named `managers` which is `ManyToMany` with `User`. An `Ad` is an advertising which will be displayed on the minisub, and it has a field named `minisubs` which is `ManyToMany` with `Minisub`. It has also a `author` field, foreign key with `User`. I would like to allow these managers to add some ads on their minisubs through a DRF API. It is actually working. But I want to check that they put in `minisubs` only minisubs where they are managers. I found a way like that: ```python class AdSerializer(serializers.HyperlinkedModelSerializer): # ... def validate_minisubs(self, value): for m in value: if user not in m.managers.all(): raise serializers.ValidationError("...") return value ``` My question is: How to get `user` ? I can't find a way to get the value `Ad.author` (this field is set automatically in the serial data according to the user authentication). Maybe I don't find a way because there is no ways ? The place to do this is somewhere else ? Thanks in advance.
2016/10/11
[ "https://Stackoverflow.com/questions/39980658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2595458/" ]
You may get it out of the serializer this way:

```
class YourModelSerializer(serializers.HyperlinkedModelSerializer):

    class Meta:
        model = YourModel

    def validate_myfield(self, value):
        instance = getattr(self, 'instance', None)
        ...
```
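A related option, sketched here on the assumption that the view passes the request into the serializer context (DRF's generic views and viewsets do this by default), is to read the user from `self.context`:

```python
class AdSerializer(serializers.HyperlinkedModelSerializer):
    # ...
    def validate_minisubs(self, value):
        user = self.context['request'].user  # requires 'request' in the serializer context
        for m in value:
            if user not in m.managers.all():
                raise serializers.ValidationError("You do not manage this minisub.")
        return value
```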
I believe that this is a job for the [permissions](http://www.django-rest-framework.org/api-guide/permissions/#permissions). If you are performing CRUD operations for inserting that into a database, then you can have a permission class that returns `True` if the user is a manager. A permission instance has access to the request, which you can use to get the user and check whether they are a manager: <http://www.django-rest-framework.org/api-guide/permissions/#custom-permissions>
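A rough sketch of what such a custom permission could look like (the class name and field access are illustrative, not taken from the question):

```python
from rest_framework import permissions

class IsMinisubManager(permissions.BasePermission):
    """Object-level check: only managers of the minisub may act on it."""

    def has_object_permission(self, request, view, obj):
        # obj is assumed to be a Minisub instance here
        return request.user in obj.managers.all()
```

It would then be listed in the view's `permission_classes` alongside the authentication checks.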
7,612
33,722,333
It is very nice and easy to run Python from the command line. Especially for testing purposes. The only drawback is that after making a change in the script, I have to restart Python, do all the imports over again, create the objects and enter the parameters. ``` $ python >>> from foo import bar >>> from package.file import Class >>> c = Class >>> c.name = "John" >>> c.age = 33 >>> c.function() >>> from datetime import timedelta, datetime >>> now = datetime.now().year() >>> next_year = now + timedelta(year=1) >>> etc... ``` Can someone tell me if there is an easier way then doing all the work over and over again every time I make a change in the Python code?
2015/11/15
[ "https://Stackoverflow.com/questions/33722333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5039579/" ]
You could consider turning your testing into an actual Python script, which can be run like this, checking the output afterwards:

```
$ python my_tests.py
```

However, a much better way would be to write some unit tests which you can run in a similar way. <https://docs.python.org/2/library/unittest.html>. The unittest framework will run all the tests you've defined and gather the results into a report.

If you need some steps to be done interactively, then you can achieve that by writing your setup into a script, and then executing the script before doing your interactive tests. See this other SO question: [Is there a possibility to execute a Python script while being in interactive mode](https://stackoverflow.com/questions/4624416/is-there-a-posibility-to-execute-a-python-script-while-being-in-interactive-mode)
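For illustration, a minimal self-contained unittest example (the function under test is invented here, standing in for your own code):

```python
import unittest
from datetime import datetime, timedelta

def next_year(dt):
    return dt + timedelta(days=365)

class NextYearTests(unittest.TestCase):
    def test_adds_roughly_one_year(self):
        self.assertEqual(next_year(datetime(2015, 1, 1)).year, 2016)

if __name__ == '__main__':
    unittest.main()
```

Running `python my_tests.py` re-imports everything fresh on each run, so there is nothing to reload by hand.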
Use IPython with a [notebook](http://jupyter.org/) instead. Much better for interactive computing.
7,613
32,616,406
I'm writing a python application that allows users to write their own plugins and extend the core functionality I provide - ``` $ tree project_dir/ . ├── application.py ├── plugins │   ├── __init__.py │   ├── example_plugin.py │   ├── plugin1.py │   ├── plugin2.py │   └── plugin3 │   ├── sounds │   │   └── test.wav │   └── player.py └── plugins_manager.py ``` plugins\_manager.py - ``` class Manager(object): def __init__(self): self.plugins = {} def register_plugin(self, name, plugin_func): self.plugins[name] = plugin_func ``` application.py initializes `Manager` instance globally - ``` manager = Manager() def main(): print manager.plugins main() ``` Each plugin is required to import the `Manager` instance from `application.py` and register itself like, plugin1.py - ``` from application import manager PLUGIN_NAME = "AAC_Player" def plugin_function(options): # do something manager.register_plugin(PLUGIN_NAME, plugin_function) ``` Now when I run application.py, obviously nothing gets printed. How do I make the plugins register themselves (call `.register_plugin()`) at program startup? So on that lines, a more generalised question would be - How can I make python execute a line of code that's global in a file without actually running the file? Suggestions on improving the plugin architecture welcome!
2015/09/16
[ "https://Stackoverflow.com/questions/32616406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2417277/" ]
You can use the `__import__()` builtin to import the plugins, and then include the `register_plugin()` call in either the plugin file `example_plugin.py` or in `__init__.py` if it's a directory. For example, let's say this is your project structure: ``` ./ application.py plugins_manager.py plugins/ __init__.py plugin1.py plugin2.py plugin3/ __init__.py ``` Plugins have these contents: ``` $ cat plugins/plugin1.py print 'Plugin 1' $ cat plugins/plugin2.py print 'Plugin 2' $ cat plugins/plugin3/__init__.py print 'Plugin 3' ``` In `plugins_manager.py`, identify the plugins and import them in: ```py from os import listdir from os.path import exists, isdir, basename, join, splitext def is_plugin(filename): filepath = join('plugins', filename) _, ext = splitext(filepath) # Ignore plugins/__init__.py if filename == '__init__.py': return False # Find single file plugins if ext == '.py': return True # Find plugins packaged in directories if isdir(filepath) and exists(join(filepath, '__init__.py')): return True return False plugin_names = [ splitext(p)[0] for p in listdir('plugins/') if is_plugin(p) ] plugins = [ __import__('plugins.' + p) for p in plugin_names ] ``` Should get output similar to: ``` Plugin 1 Plugin 2 Plugin 3 ``` Note that in this case the `plugins` variable contains a list of the module objects imported.
Strictly speaking I'd say there is no way to run code without it being invoked somehow. To do this, the running program can use ``` import importlib ``` so that once you've found the file you can import it with: ``` mod = importlib.import_module(import_name, pkg_name) ``` and if that file provides a known function (Run in this case) you can call it with: ``` mod.Run(your_args) ``` This works for Python 2.7. Version 3 might be different.
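Put together, a hedged end-to-end sketch (the module path `plugins.plugin1` and the `Run` entry point are just examples, not part of the original question):

```python
import importlib

mod = importlib.import_module('plugins.plugin1')  # importing runs the module's top-level code
if hasattr(mod, 'Run'):
    mod.Run()
```

In the plugin architecture above, the act of importing is what triggers each plugin's top-level `register_plugin()` call.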
7,616
50,716,680
I have a file with contents like this (I don't wish to change the contents of the file in any way): ``` . . lines I don't need. . . abc # I know where it starts and the data can be anything, not just abc efg # I know where it ends. . . lines I don't need. . . ``` I know the line numbers (index) from where my useful data starts and ends. The useful lines can have any unpredictable data. Now I wish to make a list out of this data, like this: ``` [['a','b','c'],['e','f','g']] ``` Please note that there are no spaces in between a, b and so on in the input file so i guess the split() function won't work. What would be the best way to achieve this in python?
2018/06/06
[ "https://Stackoverflow.com/questions/50716680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6027453/" ]
I **guess** the compiler error that you see is referring to the fact that you are using `listener` inside its own defining context.

Try this for a change:

In UserManager:

```
func allUsers(completion: @escaping ([User]) -> Void) -> ListenerRegistration? {
    return db.collection("users").addSnapshotListener { querySnapshot, error in
        if let documents = querySnapshot?.documents {
            var users = [User]()
            for document in documents {
                let user = User(snapshot: document)
                users.append(user)
            }
            completion(users)
        }
    }
}
```

In ViewController:

```
override func viewDidLoad() {
    super.viewDidLoad()

    self.listener = UserManager.shared.allUsers(completion: { (users) in
        self.users = users
        self.tableView.reloadData()
    })
}

deinit {
    self.listener.remove()
}
```
I think that *getDocuments* instead of *addSnapshotListener* is what you are looking for. Using this method the listener is automatically detached at the end of the request... It will be something similar to:

```
func allUsers(completion: @escaping ([User]) -> Void) {
    db.collection("users").getDocuments { querySnapshot, error in
        if let documents = querySnapshot?.documents {
            var users = [User]()
            for document in documents {
                let user = User(snapshot: document)
                users.append(user)
            }
            completion(users)
        }
    }
}
```
7,617
47,747,516
I hope I will get help here. I'm writing a program that will read and export to txt the devices' live logging events every two minutes. Everything works fine until I generate the exe file. What is more interesting, the program works in my environment (geckodriver and Python libraries installed), but does not work on computers without a Python environment, even if I generate the exe with --onedir. Any ideas or tips? Part of the code is below (without the tkinter parts):

```
browser = webdriver.Firefox()

def logs():
    global writing
    global browser
    logs_content = browser.find_element_by_css_selector(".content")
    if writing:
        curent_time = datetime.datetime.now()
        threading.Timer(120, logs).start()
        save_path = 'C:/Users/' + getpass.getuser() + '/Desktop/Logs ' + curent_time.strftime("%d-%B-%Y") + '.txt'
        with open(save_path, "w") as logs_txt:
            logs_txt.write(logs_content.text)

def enter_to_IDE():
    username = browser.find_element_by_id("username")
    username_input = login.get()
    username.send_keys(username_input)
    browser.find_element_by_id("next-step-btn").click()
    time.sleep(5)
    password_css = browser.find_element_by_id("password")
    password_input = password.get()
    password_css.send_keys(password_input)
    browser.find_element_by_id("login-user-btn").click()
    time.sleep(10)
    logs()

def US_shard():
    global browser
    browser.get('link')
    enter_to_IDE()

def EU_shard():
    global browser
    browser.get('link')
    enter_to_IDE()
```
2017/12/11
[ "https://Stackoverflow.com/questions/47747516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9082203/" ]
The problem is that the `deny from all` denies **everything** including the error documents. But hey, .htaccess files work in cascade, so you can 1. create a subfolder in your web root (assuming your webroot is `/www` - `/www/errordocs` 2. => in there put your ErrorDocuments like 403.html etc. 3. create another `.htaccess` **there** - `/www/errordocs/.htaccess` 4. => into this `/www/errordocs/.htaccess` put `allow from all` 5. In the main `.htaccess` in the webroot (`/www/.htaccess` ) put `ErrorDocument 403 /errordocs/403.html` etc.. If this is still not working for you, check there are public/others/everyone READ permissions on both the folder and the file `/www/errordocs` => `755` `/www/errordocs/.htaccess` => `640` `/www/errordocs/403.html` => `644` Don't be confused - Windows OS **also** has permissions, you will need at least Read permissions for `Everyone` on these, [more on the Windows permissions here](https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-2000-server/bb727008(v=technet.10)?redirectedfrom=MSDN). **Just remember, files in this folder will all be public!** (don't put there anything you don't want public :-)
yes I can help a Little bit to solve permission issue , I was encountered by same problem , You just need to give permission +777 to your /app if you have linux machine , go inside your web folder ``` sudo chmod -R +777 /app ``` and do the same to any other folders you write there and for 403 error I think you missed "l" If I am not wrong , ``` Order deny,allow Deny from all Allow from 192.168.1.0/24 ErrorDocument 403 /403.html ``` :) :)
7,618
23,006,023
I'm trying to install pyOpenSSL using pip, python version is 2.7, OS is linux. After pyOpenSSL installed, when I tried to import the module in python, I got the following error: ``` Python 2.7.5 (default, Jun 27 2013, 03:17:39) [GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import OpenSSL Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/OpenSSL/__init__.py", line 8, in <module> from OpenSSL import rand, crypto, SSL File "/usr/local/lib/python2.7/site-packages/OpenSSL/SSL.py", line 84, in <module> OP_NO_TICKET = _lib.SSL_OP_NO_TICKET AttributeError: 'FFILibrary' object has no attribute 'SSL_OP_NO_TICKET' >>> ``` I tried to uninstall pyOpenSSL and install it again, but got the same error.
2014/04/11
[ "https://Stackoverflow.com/questions/23006023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/855643/" ]
This is because older pyopenssl versions do not define `SSL_OP_NO_TICKET`. Clone the latest pyopenssl from <https://github.com/pyca/pyopenssl.git> and install it, and then it'll be fine.
The fix is described here: <https://github.com/pyca/pyopenssl/issues/130> Indeed, you can apply it manually (not really recommended, but easy) Or download archive from github The link to the fix: <https://github.com/pyca/pyopenssl/commit/e7a6939a22a4290fff7aafe39dd0db85157d5e05> And the fix applied to SSL.py ``` -OP_NO_TICKET = _lib.SSL_OP_NO_TICKET +try: + OP_NO_TICKET = _lib.SSL_OP_NO_TICKET +except AttributeError: + pass ```
7,619
32,156,008
I am using Calendar and receive a list of lists of lists of tuples from it

```
calendar.Calendar.yeardays2calendar(calendar.Calendar(), year, 1)
```

Output is:

```
[[[[(0, 0), (0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6)], [(5, 0), (6, 1), (7, 2), (8, 3), (9, 4), (10, 5), (11, 6)], [(12, 0), (13, 1),...
```

I want to flatten it into a simple list of tuples, preserving their order. What is the best way to map a list of any depth into a plain list in Python 2.7? Example of what I want:

```
[(0, 0), (0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6),(5, 0), (6, 1), (7, 2), (8, 3), (9, 4), (10, 5), (11, 6), (12, 0), (13, 1)...
```

I tried code from other questions, but it didn't help. Sorry for the silly question, I'm new to Python.

UPD: I tried the functions from here [python list comprehensions; compressing a list of lists?](https://stackoverflow.com/questions/1077015/python-list-comprehensions-compressing-a-list-of-lists) and they didn't help either.
2015/08/22
[ "https://Stackoverflow.com/questions/32156008", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4386321/" ]
Python has a function to flatten one nesting level. It goes by the unfortunate name `itertools.chain.from_iterable()`. If you apply it three times, it will flatten three levels: ``` import itertools flatten1 = itertools.chain.from_iterable flattened_data = flatten1(flatten1(flatten1(your_data))) for a, b in flattened_data: # whatever ``` More generically, a function that flattens `n` levels would be ``` def flatten_n(n, iterable): for x in reduce(apply, [itertools.chain.from_iterable] * n, iterable): yield x ``` A function that recursively flattens all lists could look like this: ``` def flatten_lists(a): if isinstance(a, list): for b in a: for x in flatten_lists(b): yield x else: yield a ```
Try this: ``` def flatten(x): if isinstance(x, list): return [a for i in x for a in flatten(i)] else: return [x] ``` This answer is similar to this: <https://stackoverflow.com/a/2158522/1628832> but checking for the specific `list` type instead of an iterable. For optimization, memory efficiency, etc.. you can use `yield` operation too. Demo ``` >>> year = 2015 >>> x = calendar.Calendar.yeardays2calendar(calendar.Calendar(), year, 1) >>> flatten(x) [(0, 0), (0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 0), (6, 1), (7, 2), (8, 3), (9, 4), (10, 5), (11, 6), (12, 0), (13, 1), (14, 2), (15, 3), (16, 4), (17, 5), (18, 6), (19, 0), (20, 1), (21, 2), (22, 3), (23, 4), (24, 5), (25, 6), ...] ```
7,620
56,721,424
I have two pandas columns, both converted to datetime format, and can't subtract one from the other. ``` df['date_listed'] = pd.to_datetime(df['date_listed'], errors='coerce').dt.floor('d') df['date_unconditional'] = pd.to_datetime(df['date_unconditional'], errors='coerce').dt.floor('d') print df['date_listed'][:5] print df['date_unconditional'][:5] 0 2013-01-01 1 2013-01-01 2 2015-04-08 3 2016-03-24 4 2016-04-27 Name: date_listed, dtype: datetime64[ns] 0 2018-10-15 1 2018-06-12 2 2018-08-28 3 2018-08-29 4 2018-10-29 Name: date_unconditional, dtype: datetime64[ns] ``` The formats seem to be correct to be able to do a subtraction, but then I get this mistake: ``` df['date_listed_to_sale'] = (df['date_sold'] - df['date_listed']).dt.days print df['date_listed_to_sale'][:5] TypeErrorTraceback (most recent call last) <ipython-input-139-85a5efbde0f1> in <module>() ----> 1 df['date_listed_to_sale'] = (df['date_sold'] - df['date_listed']).dt.days 2 print df['date_listed_to_sale'][:5] /Users/virt_env/virt1/lib/python2.7/site-packages/pandas/core/ops.pyc in wrapper(left, right) 1581 rvalues = rvalues.values 1582 -> 1583 result = safe_na_op(lvalues, rvalues) 1584 return construct_result(left, result, 1585 index=left.index, name=res_name, dtype=None) /Users/virt_env/virt1/lib/python2.7/site-packages/pandas/core/ops.pyc in safe_na_op(lvalues, rvalues) 1531 if is_object_dtype(lvalues): 1532 return libalgos.arrmap_object(lvalues, -> 1533 lambda x: op(x, rvalues)) 1534 raise 1535 pandas/_libs/algos.pyx in pandas._libs.algos.arrmap() /Users/virt_env/virt1/lib/python2.7/site-packages/pandas/core/ops.pyc in <lambda>(x) 1531 if is_object_dtype(lvalues): 1532 return libalgos.arrmap_object(lvalues, -> 1533 lambda x: op(x, rvalues)) 1534 raise 1535 TypeError: ufunc subtract cannot use operands with types dtype('S1') and dtype('<M8[ns]') ``` I added errors='coerce' thinking it may resolve the problem, it didn't. I would appreciate some help with this.
2019/06/23
[ "https://Stackoverflow.com/questions/56721424", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4718221/" ]
You could use [destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment) and [spread](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment) operations and then map to recombine. ```js data = [ ['Jenny', 'id100', 'F', 'English', 'Science', 'Math', 'Physics'], ['Johnny', 'id101', 'M', 'Science', 'Sports', 'Gym', 'English'] ]; result = []; data.map(row => { var [name, id, gender, ...prefs] = row; prefs.map((x) => result.push([name, id, gender, x])); }) console.log(result); ``` As pointed out in the comments, since some of the features [might not be available](https://developers.google.com/apps-script/guides/services/#basic_javascript_features), here's a more conservative alternative. ```js data.forEach(function(row) { prefs = row.slice(3); prefs.map(function(x) { result.push([row[0], row[1], row[2], x]) }); }) ```
Try this: ``` function addingRows() { var ss=SpreadsheetApp.getActive(); var sh=ss.getSheetByName('Sheet1'); var rg=sh.getRange(2, 1, sh.getLastRow()-1,sh.getLastColumn()); var vA=rg.getValues(); var vB=[]; for(var i=0;i<vA.length;i++) { vt=vA[i].slice(3); for(var j=0;j<vt.length;j++) { vB.push([vA[i][0],vA[i][1],vA[i][2],vt[j]]) } } var osh=ss.getSheetByName('Sheet2'); osh.appendRow(['Name','ID','Gender','Course']); osh.getRange(2,1,vB.length,4).setValues(vB); } ``` My Sheet1: [![enter image description here](https://i.stack.imgur.com/nN6sZ.jpg)](https://i.stack.imgur.com/nN6sZ.jpg) My Sheet2: [![enter image description here](https://i.stack.imgur.com/EtTV2.jpg)](https://i.stack.imgur.com/EtTV2.jpg) * [Array.slice](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice)
7,623
4,192,744
If I enter Baltic characters in textctrl and click button **test1** I have an error ``` "InicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)" ``` Button **test2** works fine. ``` #!/usr/bin/python # -*- coding: UTF-8 -*- import wx class MyFrame(wx.Frame): def __init__(self, parent, id, title): wx.Frame.__init__(self, parent, id, title, (-1, -1), wx.Size(450, 300)) self.panel = wx.Panel(self) self.input_area = wx.TextCtrl(self.panel, -1, '',(5,5),(200,200), style=wx.TE_MULTILINE) self.output_list = wx.ListCtrl(self.panel, -1, (210,5), (200,200), style=wx.LC_REPORT) self.output_list.InsertColumn(0, 'column') self.output_list.SetColumnWidth(0, 100) self.btn1 = wx.Button(self.panel, -1, 'test1', (5,220)) self.btn1.Bind(wx.EVT_BUTTON, self.OnTest1) self.btn2 = wx.Button(self.panel, -1, 'test2', (100,220)) self.btn2.Bind(wx.EVT_BUTTON, self.OnTest2) self.Centre() def OnTest1(self, event): self.output_list.InsertStringItem(0,str(self.input_area.GetValue()).decode('utf-8')) def OnTest2(self, event): self.output_list.InsertStringItem(0,"ąčęėįš".decode('utf-8')) class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, 'encoding') frame.Show(True) return True app = MyApp(0) app.MainLoop() ``` Update 1 -------- I have tried this code on two Windows 7 Ultimate x64 computers. Both have **python 2.7** and **wxPython2.8 win64 unicode** for python 2.7 In both machines I have the same error.
2010/11/16
[ "https://Stackoverflow.com/questions/4192744", "https://Stackoverflow.com", "https://Stackoverflow.com/users/509289/" ]
The [documentation](http://www.novell.com/documentation/suse91/suselinux-adminguide/html/ch12s03.html) for SUSE Linux provides a good explanation of why Linux is booted with a RAMDisk: > > As soon as the Linux kernel has been > booted and the root file system (/) > mounted, programs can be run and > further kernel modules can be > integrated to provide additional > functions. **To mount the root file > system, certain conditions must be > met. The kernel needs the > corresponding drivers to access the > device on which the root file system > is located** (especially SCSI > drivers). **The kernel must also contain > the code needed to read the file > system** (ext2, reiserfs, romfs, etc.). > It is also conceivable that the root > file system is already encrypted. In > this case, a password is needed to > mount the file system. > > > For the problem of SCSI drivers, a > number of different solutions are > possible. The kernel could contain all > imaginable drivers, but this might be > a problem because different drivers > could conflict with each other. Also, > the kernel would become very large > because of this. Another possibility > is to provide different kernels, each > one containing just one or a few SCSI > drivers. This method has the problem > that a large number of different > kernels are required, a problem then > increased by the differently optimized > kernels (Athlon optimization, SMP). > **The idea of loading the SCSI driver as > a module leads to the general problem > resolved by the concept of an initial > ramdisk: running user space programs > even before the root file system is > mounted.** > > > This prevents a potential chicken-or-egg situation where the root file system cannot be loaded until the device on which it is located can be accessed, but that device can't be accessed until the root file system has been loaded: > > **The initial ramdisk** (also called initdisk or initrd) **solves precisely the problems described above. The Linux kernel provides an option of having a small file system loaded to a RAM disk and running programs there before the actual root file system is mounted.** The loading of initrd is handled by the boot loader (GRUB, LILO, etc.). Boot loaders only need BIOS routines to load data from the boot medium. **If the boot loader is able to load the kernel, it can also load the initial ramdisk. Special drivers are not required.** > > > Of course, a RAMDisk is not *strictly necessary* for the boot process to take place. For example, you could compile a kernel that contained all necessary hardware drivers and modules to be loaded at startup. But apparently this is too much work for most people, and the RAMDisk proved to be a simpler, more scalable solution.
The reason that most Linux distributions use a ramfs (initramfs) when booting, is because its contents can be included in the kernel file, or provided by the bootloader. They are therefore available immediately at boot, without the kernel having to load them from somewhere. That allows the kernel to run userspace programs that e.g. configure devices, load modules, setup that nifty RAID array that contains all filesystems or even ask the user for the password to his encrypted root filesystem. When this configuration is done, the first script that is called just exec()s /sbin/init from the (now configured and available) root filesystem. I have seen quite a few systems where the drivers themselvess for the disk controllers and the rootfs are loaded via modules in an initramfs, rather than being included in the kernel image. You do not strictly *need* an initramfs to boot - if your kernel image contains all drivers necessary to access the rootfs and you don't need any special configuration or user input (like RAID arrays or encrypted filesystems) to mount it, it is often possible to directly start /sbin/init from the rootfs. See also: <http://www.kernel.org/doc/Documentation/filesystems/ramfs-rootfs-initramfs.txt> <http://www.kernel.org/doc/Documentation/initrd.txt> As a side note, some systems (rescue disks, embedded and such) may use a ramfs as the root filesystem when the actual root filesystem is in a medium that may be removed or is not writable (CD, Flash MTDs etc).
7,624
62,026,013
I have created a Python script which automatically wishes a person a happy birthday when their birthdate arrives. I added this script to Windows startup, but it runs every time I start my PC and sends the birthday wish each time as well. I want to run that script only once a day. What should I do?
2020/05/26
[ "https://Stackoverflow.com/questions/62026013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12942284/" ]
Try this at the start of the file:

```
import datetime

actualday = datetime.datetime.today().day      # get the current day
actualmonth = datetime.datetime.today().month  # get the current month

bday = 1    # day of the birthday
bmonth = 1  # month of the birthday

if actualday == bday and actualmonth == bmonth:
    # code that sends the wish
```

This way the wishing code only runs when today matches the birthday; on any other day the script simply exits.
You can run this program when the system boots ([How to start a python file while Windows starts?](https://stackoverflow.com/questions/4438020/how-to-start-a-python-file-while-windows-starts)). After that, you need to check the date when the system starts, like:

```
import datetime

dayToday = datetime.datetime.today().day
monthToday = datetime.datetime.today().month

birthdayDay = 1
birthdayMonth = 10

if dayToday == birthdayDay and monthToday == birthdayMonth:
    print("HPBD")
```
7,625
50,225,903
I am trying a POC running a python script in a back-end implemented in PHP. The web server is Apache in a Docker container. This is the PHP code: ``` $command = escapeshellcmd('/usr/local/test/script.py'); $output = shell_exec($command); echo $output; ``` When I execute the python script using the back-end we are getting a permission denied error for creating the file. My python script: ``` #!/usr/bin/env python file = open("/tmp/testfile.txt","w+") file.write("Hello World") file.close() ``` This is the error I'm getting: > > IOError: [Errno 13] Permission denied: 'testfile.txt' > > > For the directory im working with the permissions are as follows, > > drwxrwsr-x 2 1001 www-data 4096 May 8 05:35 . > > > drwxrwxr-x 3 1001 1001 4096 May 3 08:49 .. > > > Any thoughts on this? How do I overcome this problem?
2018/05/08
[ "https://Stackoverflow.com/questions/50225903", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4681701/" ]
To start, it is incredibly bad practice to have relative paths in *any* scripting environment. Start by rewriting your code to use full paths such as `/usr/local/test/script.py` and `/tmp/testfile.txt`. My guess is your script is attempting to write to a different spot than you think it is. When you *know* exactly where the files are being written, go to that directory, run `ls -la`, and check the permissions on the directory. You want it to be writable by the same user or group that the web server runs as. Looking at the permissions you have shown, the owning user cannot write to the directory, only the group and everyone else can. You need to add user write permissions; `chmod u+w /tmp` will do the job.
I believe the problem is that you are trying to write to an existing file in the `/tmp/` directory. Typically `/tmp/` [will have the sticky permission bit set.](https://unix.stackexchange.com/questions/71622/what-are-correct-permissions-for-tmp-i-unintentionally-set-it-all-public-recu) That means that only the owner of a file has permission to write or delete it. Group write permissions on files do not matter if the sticky bit is set on the parent directory. So if this is the contents of your /tmp ``` $ ls -al /tmp drwxrwxrwt 5 root root 760 Apr 30 12:00 . drwxr-xr-x 21 root root 4096 Apr 30 12:00 .. -rw-rw---- 2 1001 www-data 80 May 8 12:00 testfile.txt ``` We might assume that users in the group `www-data` should be able to write to `testfile.txt`. But that is not the case, since `.` (the /tmp/ directory itself) has the sticky bit set (the `t` in the permissions section indicates this). The reason why the sticky bit is set here is that everyone should be able to write files there, but not have to worry that other users might modify our temporary files. To avoid permission errors, you can use the standard library [tempfile](https://docs.python.org/3/library/tempfile.html#tempfile-examples) module. This code will create a unique filename such as `testfile.JCDxK2.txt`, so it doesn't matter if `testfile.txt` already exists. ``` #!/usr/bin/env python import tempfile with tempfile.NamedTemporaryFile( mode='w', prefix='testfile.', suffix='.txt', delete=False, ) as file: file.write("Hello World") ```
7,626
73,523,116
I was wondering how I can get the product dimensions and weight from an Amazon page. This is the page: [https://www.amazon.com/Vic-Firth-American-5B-Drumsticks/dp/B0002F73Z8/ref=psdc\_11966431\_t1\_B0064RNNP2?th=1](https://rads.stackoverflow.com/amzn/click/com/B0002F73Z8) There is a place where it says item weight: 3.2 ounces, product dimensions: 16 x 0.6 x 0.6. I am new to web scraping and Python, so if you could please help me, that would be awesome!
2022/08/29
[ "https://Stackoverflow.com/questions/73523116", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17215378/" ]
What you need to do is first install Selenium on your computer. Then, after setting it up, open the page in your browser and click Inspect. Find the element you want to scrape and copy its XPath. You can follow the [Selenium with Python](https://selenium-python.readthedocs.io/) docs or any tutorial for more detail.
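For illustration, here is a minimal sketch of that workflow in Python with Selenium. The XPath below is only a placeholder assumption (copy the real one from the Inspect panel), and Amazon may block automated requests or change its markup:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a matching chromedriver is available on PATH
driver.get("https://www.amazon.com/Vic-Firth-American-5B-Drumsticks/dp/B0002F73Z8")

# Hypothetical XPath: replace it with the one you copied via Inspect
rows = driver.find_elements(By.XPATH, "//table[contains(@id, 'productDetails')]//tr")
for row in rows:
    print(row.text)  # rows such as "Item Weight 3.2 ounces" should appear here

driver.quit()
```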
I would advise you look into requests, Beautiful Soup and Selenium. These are useful libraries/tools for web scraping. Also I believe Amazon specifically blocks a lot of scraping requests so you will need to mimic a regular users browser for it to work.
7,627
35,173,118
I am using httplib in my Python code with the following set:

```
import httplib
httplib.HTTPConnection.debuglevel = 2
```

I get all the info I need, but httplib prints it to the console. I don't know of any way to get those logs written to a log file instead of the console.
2016/02/03
[ "https://Stackoverflow.com/questions/35173118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2606665/" ]
If you take a look at the httplib source (<https://hg.python.org/cpython/file/2.7/Lib/httplib.py>) you'll see the debugging is done via print statements, so you can't use logging configuration to intercept the logs, and because print is a statement, you can't monkeypatch it to do your bidding. You have a few options: 1. Use an alternative to httplib 2. Subclass HTTPConnection and do something ugly with `__getattribute__` (or just write pass-throughs for every method you want to log) and log when calls happen. 3. Depending on what you're doing, just redirect your program's stdout into a file.
Maybe try this:

```
import sys

f = open('output.txt', 'w')
sys.stdout = f
```

This redirects stdout, which is where httplib prints its debug output, to a file.
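A slightly more complete sketch of that idea, assuming Python 2 as in the question (the URL is just a placeholder), swaps `sys.stdout` around the request and restores it afterwards so the rest of the program keeps printing to the console:

```python
import sys
import httplib

httplib.HTTPConnection.debuglevel = 2

log_file = open('httplib_debug.log', 'w')
old_stdout = sys.stdout
sys.stdout = log_file          # httplib's debug prints now land in the file
try:
    conn = httplib.HTTPConnection('example.com')
    conn.request('GET', '/')
    resp = conn.getresponse()
finally:
    sys.stdout = old_stdout    # restore normal console output
    log_file.close()
```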
7,629
40,518,865
I am trying to figure out how to use a barely documented feature of a poorly documented API. I have distilled the chunk of code that is giving me trouble down to this for simplicity: ``` def build_custom_args(args): custom_args = {} for key in args: custom_args.update(key.get()) print(custom_args) ``` I can tell from the function that constructs `args` that it is a list. Problem is, a list of *what?* No matter what I put in the list, key.get() raises one exception of another. For example, if I execute that code like so: `build_custom_args(['foo'])` then I get an understandable error message: `AttributeError: 'str' object has no attribute 'get'`. Or, try a dictionary: `build_custom_args([{'foo': 'bar'}])`, but the error is raised: `TypeError: get expected at least 1 arguments, got 0`. As best I can tell `args` is a list of some standard python objects - there is no indication that these are special objects with a custom `get()` method. Am I missing something obvious? Is there some standard python object that has a `get()` method which takes no positional arguments? Is this a syntax of some older version of Python? Or have I found a bug in the API? **Edit** The accepted answer shows that I was mistaken believing that `args` had to be a python built-in. Jack's answer is worth a look because it actually does solve the problem "what Python built-in could `args` be that would cause this function to not throw an error?"
2016/11/10
[ "https://Stackoverflow.com/questions/40518865", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2197234/" ]
Well... I found this in the GitHub link you have provided:

```
class Email(object):
    # code...
    def get(self):
```

As a matter of fact, a lot of those classes have `get` methods. Since Python is duck typed, there isn't really a way to determine which object is the "correct" one, but since you're just building a dictionary it shouldn't matter too much.
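To illustrate the duck-typing point, here is a minimal sketch of the kind of object `build_custom_args` appears to expect, namely anything whose no-argument `get()` returns a dict. The `Email` class below is just a stand-in, not the library's real implementation:

```python
class Email(object):
    """Stand-in for one of the API's classes that exposes a no-argument get()."""
    def __init__(self, address):
        self.address = address

    def get(self):
        # returns a dict, so custom_args.update(key.get()) works
        return {"email": self.address}


def build_custom_args(args):
    custom_args = {}
    for key in args:
        custom_args.update(key.get())
    print(custom_args)


build_custom_args([Email("user@example.com")])  # {'email': 'user@example.com'}
```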
AFAIK the only class with a `get` [method that can take zero arguments](https://docs.python.org/3/library/queue.html?highlight=get#queue.Queue.get) is a `Queue`. Which in your example is probably a `Queue` of `dict`s
7,630
8,159,414
I'm trying to debug my [DJango Paypal IPN integration](https://stackoverflow.com/questions/8145907/what-might-cause-the-django-ipn-tool-to-fail-when-posting-to-a-django-based-rece) but I'm struggling. The Django dev server reports a 500 error to the console (but no other details) and the IPN test tool reports a 500 error but no other details. I've tried disabling the `DEBUG` mode to try and get it to send me emails but despite setting up the EMAIL\_HOST to something suitable, I'm not seeing any emails. I tried and verifying that the email system is working with a call to [send\_mail](https://docs.djangoproject.com/en/dev/topics/email/) ..but that succeeds and I still see no emails regarding internal server errors. What could I be missing? **edit** I'm running the dev server from PyCharm and the console output looks like this: ``` runnerw.exe C:\Python26\python.exe manage.py runserver 192.168.1.4:80 Validating models... 0 errors found Django version 1.4 pre-alpha, using settings 'settings' Development server is running at http://192.168.1.4:80/ Quit the server with CTRL-BREAK. Verifying... ...response: VERIFIED IpnEndPoint.on_process Valid: {u'last_name': u'Smith', u'txn_id': u'491116223', u'receiver_email': u'seller@paypalsandbox.com', u'payment_status': u'Completed', u'tax': u'2.02', u'payer_status': u'unverified', u'residence_country': u'US', u'invoice': u'abc1234', u'address_state': u'CA', u'item_name1': u'something', u'txn_type': u'cart', u'item_number1': u'AK-1234', u'quantity1': u'1', u'payment_date': u'14:03:49 Nov 16, 2011 PST', u'first_name': u'John', u'mc_shipping': u'3.02', u'address_street': u'123, any street', u'charset': u'windows-1252', u'custom': u'xyz123', u'notify_version': u'2.4', u'address_name': u'John Smith', u'address_zip': u'95131', u'test_ipn': u'1', u'receiver_id': u'TESTSELLERID1', u'payer_id': u'TESTBUYERID01', u'mc_handling1': u'1.67', u'verify_sign': u'A8SIYWSxkrwNPfuNewSuxsIAatvMAi2mxYjlYvaiWh3Z4BuIQojK3KBO', u'mc_handling': u'2.06', u'mc_gross_1': u'9.34', u'address_country_code': u'US', u'address_city': u'San Jose', u'address_status': u'confirmed', u'address_country': u'United States', u'mc_fee': u'0.44', u'mc_currency': u'USD', u'payer_email': u'buyer@paypalsandbox.com', u'payment_type': u'instant', u'mc_shipping1': u'1.02'} Logging Transaction.. [16/Nov/2011 22:20:49] "POST /IPN/ HTTP/1.0" 500 104946 ```
2011/11/16
[ "https://Stackoverflow.com/questions/8159414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15369/" ]
Given that you are running a pre-alpha version of Django, I would recommend asking this question on the django-users mailing list: <https://groups.google.com/group/django-users>
Have a look at [django-sentry](http://readthedocs.org/docs/sentry/en/latest/index.html). It logs 500 errors (and also supports regular logging), making the dynamic "orange 500 pages" browsable after the fact. This is especially useful when you never get the original error page, e.g. when using AJAX or remote APIs, as in your case.
7,631
48,373,718
I have a Spring Boot web application without a registration page; users cannot register themselves. I'm manually creating passwords using another Spring web application, which gives me the encoded password on request. I'm using the approach from the link below to generate the encoded password: <http://www.baeldung.com/spring-security-registration-password-encoding-bcrypt> But I'm looking for a simple Python equivalent, so I can generate the passwords easily on the CLI.
2018/01/22
[ "https://Stackoverflow.com/questions/48373718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1550594/" ]
I was facing the same problem, but the following worked for me: ``` bcrypt.gensalt(rounds = 10, prefix=b"2a") ``` This seems to be in sync with the bean `BCryptPasswordEncoder` in SpringBoot :)
bcrypt is a Python module which can be installed using `pip install bcrypt`. The equivalent of `BCryptPasswordEncoder()` would be to `import bcrypt` and then execute `bcrypt.hashpw(password, bcrypt.gensalt())` to hash a password (note that the password must be passed as bytes). Source: <https://pypi.python.org/pypi/bcrypt/3.1.0>
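Putting the two answers together, a small CLI sketch might look like the following; it assumes the `bcrypt` package is installed and that a `$2a$` prefix with 10 rounds matches the Spring defaults you are using:

```python
import getpass
import bcrypt

password = getpass.getpass("Password: ").encode("utf-8")

# prefix b"2a" and 10 rounds to stay close to Spring's BCryptPasswordEncoder defaults
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=10, prefix=b"2a"))
print(hashed.decode("utf-8"))

# verification works the same way on both sides
assert bcrypt.checkpw(password, hashed)
```

The printed hash can then be pasted into wherever your Spring application stores encoded passwords.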
7,634
34,030,599
Hoping you can help me figure this one out. I'm slogging through the [Heroku 'Getting Started with Python'](https://devcenter.heroku.com/articles/getting-started-with-python#declare-app-dependencies) page where you install the necessary dependencies to run an app locally. It all works well up until the 2nd to last step, running the command `virtualenv venv`. I run that, then the next command, `pip install -r requirements.txt --allow-all-external`, and that's where I get this error: ``` Error: pg_config executable not found. Please add the directory containing pg_config to the PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. ---------------------------------------- Cleaning up... Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_nickcode/psycopg2 Storing debug log for failure in /tmp/tmpb5tCeK ``` I don't understand why I'm getting this error. Can anyone help me make sense of it? My version of requirements.txt: ``` dj-database-url==0.3.0 Django==1.8.1 django-postgrespool==0.3.0 gunicorn==19.3.0 psycopg2==2.6 SQLAlchemy==1.0.4 whitenoise==1.0.6 ```
2015/12/01
[ "https://Stackoverflow.com/questions/34030599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5266746/" ]
pg\_config is part of PostgreSQL ( <http://www.postgresql.org/docs/8.1/static/app-pgconfig.html> ). So you need to install PostgreSQL (including its development files) so that `pg_config` is available on your PATH before installing psycopg2.
``` brew install postgresql ``` resolves this issue on macOS
7,635
68,565,656
I'm using Selenium to automate a report, but when I download it the last number changes as if it were a sequence, for example: 0001849191, 0001849192 and so on. I can't just follow the sequence, because if someone else generates the report it can skip a number, and then Python doesn't realise the sequence moved on and ends up not renaming anything. So I'd like to check which file was downloaded last and then rename that .xlsx file. Here's the code I tried, without success:

```
filepath = r'C:\Users\Luis.Serpa\Downloads'
filename = max([filepath +'\'+ f for f in os.listdir(filepath)], key=os.path.getmtime)
```
2021/07/28
[ "https://Stackoverflow.com/questions/68565656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16541350/" ]
You can use the glob and os modules to get the latest file. First import the os and glob modules:

```
import os
import glob
```

Then define the path. My example builds the path to the current user's Downloads folder.

```
home = os.path.expanduser('~')
path = os.path.join(home, 'Downloads')
```

The "\*" is necessary:

```
path_a = path + "/*"
list_of_files = glob.glob(path_a)  # * means all; if you need a specific format, use *.csv
latest_file = max(list_of_files, key=os.path.getctime)
```

This is the name of the new file:

```
new_file = os.path.join(path, "b.kml")
print(latest_file)  # prints a.txt, which was the latest file I created
os.rename(latest_file, new_file)
```

The file name becomes b.kml.
First off, `'\'` is an invalid string, because `\` is used for escape sequences. `'\\'` would get you a string with a single backslash in it. But more than that, it's best not to construct paths this way; you should use [os.path.join](https://docs.python.org/3/library/os.path.html#os.path.join) to create paths out of components: ```py filename = max([os.path.join(filepath, f) for f in os.listdir(filepath)], key=os.path.getmtime) ``` This code works just fine to find the latest file on my OS, if I give a `filepath` to my own downloads folder. For a more modern python3 approach that has better cross-platform path support, you can also use [pathlib](https://docs.python.org/3/library/pathlib.html) Once you have the path of the file, you can rename it with [os.rename](https://docs.python.org/3/library/os.html#os.rename).
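Putting those pieces together, a minimal sketch with `pathlib` could look like this; the folder path and the new file name are just placeholders taken from the question:

```python
from pathlib import Path

downloads = Path(r"C:\Users\Luis.Serpa\Downloads")   # or Path.home() / "Downloads"

# newest .xlsx by modification time, regardless of what number Selenium got
latest = max(downloads.glob("*.xlsx"), key=lambda p: p.stat().st_mtime)

# rename the freshly downloaded report to a fixed, predictable name
latest.rename(downloads / "report.xlsx")
```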
7,638
16,840,554
What's a pythonic approach for reading a line from a file but not advancing where you are in the file? For example, if you have a file of ``` cat1 cat2 cat3 ``` and you do `file.readline()` you will get `cat1\n` . The next `file.readline()` will return `cat2\n` . Is there some functionality like `file.some_function_here_nextline()` to get `cat1\n` then you can later do `file.readline()` and get back `cat1\n`?
2013/05/30
[ "https://Stackoverflow.com/questions/16840554", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1431282/" ]
As far as I know, there's no builtin functionality for this, but such a function is easy to write, since most Python `file` objects support `seek` and `tell` methods for jumping around within a file. So, the process is very simple: * Find the current position within the file using `tell`. * Perform a `read` (or `write`) operation of some kind. * `seek` back to the previous file pointer. This allows you to do nice things like read a chunk of data from the file, analyze it, and then potentially overwrite it with different data. A simple wrapper for the functionality might look like: ``` def peek_line(f): pos = f.tell() line = f.readline() f.seek(pos) return line print peek_line(f) # cat1 print peek_line(f) # cat1 ``` --- You could implement the same thing for other `read` methods just as easily. For instance, implementing the same thing for `file.read`: ``` def peek(f, length=1): pos = f.tell() data = f.read(length) # Might try/except this line, and finally: f.seek(pos) f.seek(pos) return data print peek(f, 4) # cat1 print peek(f, 4) # cat1 ```
You could use wrap the file up with [itertools.tee](http://docs.python.org/2/library/itertools.html#itertools.tee) and get back two iterators, bearing in mind the caveats stated in the documentation For example ``` from itertools import tee import contextlib from StringIO import StringIO s = '''\ cat1 cat2 cat3 ''' with contextlib.closing(StringIO(s)) as f: handle1, handle2 = tee(f) print next(handle1) print next(handle2) cat1 cat1 ```
7,639
68,877,064
I'm pretty new to Python and I'm having trouble computing this double summation. ![Summation](https://i.stack.imgur.com/C1TdQ.png) I already tried using

```
x = sum(sum((math.pow(j, 2) * (k+1)) for k in range(1, M-1)) for j in range(N))
```

and using 2 for loops, but nothing seems to work.
2021/08/21
[ "https://Stackoverflow.com/questions/68877064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16722883/" ]
The solution is incredibly easy - Instead of using just the relevant part of the Alchemy key: 40Oo3XScVabXXXX8sePUEp9tb90gXXXX I used the whole URL: <https://eth-rinkeby.alchemyapi.io/v2/40Oo3XScVabXXXX8sePUEp9tb90gXXXX>
I had the same experience, but when using Hardhat rather than Truffle. My internet connection was fine; try switching from Git Bash to the terminal (CMD). Use a completely new terminal and avoid Git Bash and PowerShell.
7,646
63,247,247
I downloaded pyttsx3 in command prompt I have a big error this the code: ``` import pyttsx3 engine = pyttsx3.init() engine.say("I will speak this text") engine.runAndWait() ``` output error: > > > > > > > > > > > > > > > > > > > > > ''' = RESTART: C:\Users\RAMS\AppData\Local\Programs\Python\Python38\text t speech.py Traceback (most recent call last): File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\pyttsx3\_*init*\_.py", line 20, in init eng = \_activeEngines[driverName] File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\weakref.py", line 131, in **getitem** o = self.datakey KeyError: None ``` During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\pyttsx3\drivers\sapi5.py", line 3, in <module> from comtypes.gen import SpeechLib # comtypes ImportError: cannot import name 'SpeechLib' from 'comtypes.gen' (C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\gen\__init__.py) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py", line 123, in WINFUNCTYPE return _win_functype_cache[(restype, argtypes, flags)] KeyError: (<class 'ctypes.HRESULT'>, (<class 'comtypes.automation.tagVARIANT'>, <class 'ctypes.wintypes.LP_c_long'>), 0) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\text t speech.py", line 2, in <module> engine = pyttsx3.init() File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\pyttsx3\__init__.py", line 22, in init eng = Engine(driverName, debug) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\pyttsx3\engine.py", line 30, in __init__ self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\pyttsx3\driver.py", line 50, in __init__ self._module = importlib.import_module(name) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\pyttsx3\drivers\sapi5.py", line 6, in <module> engine = comtypes.client.CreateObject("SAPI.SpVoice") File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\client\__init__.py", line 250, in CreateObject return _manage(obj, clsid, interface=interface) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\client\__init__.py", line 188, in _manage obj = GetBestInterface(obj) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\client\__init__.py", line 110, in GetBestInterface mod = GetModule(tlib) File 
"C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\client\_generate.py", line 110, in GetModule mod = _CreateWrapper(tlib, pathname) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\client\_generate.py", line 184, in _CreateWrapper mod = _my_import(fullname) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\client\_generate.py", line 24, in _my_import return __import__(fullname, globals(), locals(), ['DUMMY']) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\gen\_C866CA3A_32F7_11D2_9602_00C04F8EE628_0_5_4.py", line 1467, in <module> ISpeechBaseStream._methods_ = [ File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\__init__.py", line 329, in __setattr__ self._make_methods(value) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\site-packages\comtypes\__init__.py", line 698, in _make_methods prototype = WINFUNCTYPE(restype, *argtypes) File "C:\Users\RAMS\AppData\Local\Programs\Python\Python38\lib\ctypes\__init__.py", line 125, in WINFUNCTYPE class WinFunctionType(_CFuncPtr): TypeError: item 1 in _argtypes_ passes a union by value, which is unsupported ``` ''' this is the error I don't know why In command prompt pip install pyttsx3: ``` Requirement already satisfied: pyttsx3 in c:\users\rams\appdata\local\programs\p ython\python38\lib\site-packages (2.90) Requirement already satisfied: comtypes; platform_system == "Windows" in c:\user s\rams\appdata\local\programs\python\python38\lib\site-packages (from pyttsx3) ( 1.1.7) Requirement already satisfied: pypiwin32; platform_system == "Windows" in c:\use rs\rams\appdata\local\programs\python\python38\lib\site-packages (from pyttsx3) (223) Requirement already satisfied: pywin32; platform_system == "Windows" in c:\users \rams\appdata\local\programs\python\python38\lib\site-packages (from pyttsx3) (2 28) ```
2020/08/04
[ "https://Stackoverflow.com/questions/63247247", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14047733/" ]
Looks like the Vercel docs are currently outdated (AWS SDK V2 instead of V3). You can pass the credentials object to the AWS service when you instantiate it. Use an environment variable that is not reserved by adding the name of your app to it for example. *.env.local* ``` YOUR_APP_AWS_ACCESS_KEY_ID=[your key] YOUR_APP_AWS_SECRET_ACCESS_KEY=[your secret] ``` Add these env variables to your Vercel deployment settings (or Netlify, etc) and pass them in when you start up your AWS service client. ``` import { S3Client } from '@aws-sdk/client-s3' ... const s3 = new S3Client({ region: 'us-east-1', credentials: { accessKeyId: process.env.TRENDZY_AWS_ACCESS_KEY_ID ?? '', secretAccessKey: process.env.TRENDZY_AWS_SECRET_ACCESS_KEY ?? '', }, }) ``` (note: undefined check so Typescript stays happy)
If I'm not mistaken, you want to make `AWS_ACCESS_KEY_ID` into a runtime variable as well. Currently, it is a build time variable, which won't be accessible in your node application. ``` // replace this env: { //..others AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID }, // with this module.exports = { serverRuntimeConfig: { //..others AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID } } ``` Reference: <https://nextjs.org/docs/api-reference/next.config.js/environment-variables>
7,650
31,264,522
I've made a simple combobox in python using Tkinter, I want to retrieve the value selected by the user. After searching, I think I can do this by binding an event of selection and call a function that will use something like box.get(), but this is not working. When the program starts the method is automatically called and it doesn't print the current selection. When I select any item from the combobox no method gets called. Here is a snippet of my code: ``` self.box_value = StringVar() self.locationBox = Combobox(self.master, textvariable=self.box_value) self.locationBox.bind("<<ComboboxSelected>>", self.justamethod()) self.locationBox['values'] = ('one', 'two', 'three') self.locationBox.current(0) ``` This is the method that is supposed to be called when I select an item from the box: ``` def justamethod (self): print("method is called") print (self.locationBox.get()) ``` Can anyone please tell me how to get the selected value? EDIT: I've corrected the call to justamethod by removing the brackets when binding the box to a function as suggested by James Kent. But now I'm getting this error: TypeError: justamethod() takes exactly 1 argument (2 given) EDIT 2: I've posted the solution to this problem. Thank You.
2015/07/07
[ "https://Stackoverflow.com/questions/31264522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5018299/" ]
I've figured out what's wrong in the code. First, as James said the brackets should be removed when binding justamethod to the combobox. Second, regarding the type error, this is because justamethod is an event handler, so it should take two parameters, self and event, like this, ``` def justamethod (self, event): ``` After making these changes the code is working well.
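For reference, a minimal self-contained version with both fixes applied (no parentheses in the `bind` call, and an `event` parameter on the handler) would look roughly like this:

```python
import Tkinter as tk   # on Python 3 these are `tkinter` and `tkinter.ttk`
import ttk

class App(object):
    def __init__(self, master):
        self.box_value = tk.StringVar()
        self.locationBox = ttk.Combobox(master, textvariable=self.box_value)
        self.locationBox['values'] = ('one', 'two', 'three')
        self.locationBox.current(0)
        self.locationBox.bind("<<ComboboxSelected>>", self.justamethod)  # no ()
        self.locationBox.pack()

    def justamethod(self, event):          # event handler takes (self, event)
        print("method is called")
        print(self.locationBox.get())

root = tk.Tk()
App(root)
root.mainloop()
```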
``` from tkinter import ttk from tkinter import messagebox from tkinter import Tk root = Tk() root.geometry("400x400") #^ width - heghit window :D cmb = ttk.Combobox(root, width="10", values=("prova","ciao","come","stai")) #cmb = Combobox class TableDropDown(ttk.Combobox): def __init__(self, parent): self.current_table = tk.StringVar() # create variable for table ttk.Combobox.__init__(self, parent)# init widget self.config(textvariable = self.current_table, state = "readonly", values = ["Customers", "Pets", "Invoices", "Prices"]) self.current(0) # index of values for current table self.place(x = 50, y = 50, anchor = "w") # place drop down box def checkcmbo(): if cmb.get() == "prova": messagebox.showinfo("What user choose", "you choose prova") elif cmb.get() == "ciao": messagebox.showinfo("What user choose", "you choose ciao") elif cmb.get() == "come": messagebox.showinfo("What user choose", "you choose come") elif cmb.get() == "stai": messagebox.showinfo("What user choose", "you choose stai") elif cmb.get() == "": messagebox.showinfo("nothing to show!", "you have to be choose something") cmb.place(relx="0.1",rely="0.1") btn = ttk.Button(root, text="Get Value",command=checkcmbo) btn.place(relx="0.5",rely="0.1") root.mainloop() ```
7,652
70,461,032
How do I get a user's keyboard input with Python and print the name of that key? e.g: > the user clicked on "SPACE" and the output is "SPACE", the user clicked on "CTRL" and the output is "CTRL". > For better understanding: I'm using the pygame library, and I built a settings controller for my game. It works fine, but I'm only able to use the keys in my dict, and I don't know how to add the other keyboard keys. See the example: ``` class KeyboardSettings(): def __init__(self,description,keyboard): self.keyboard = keyboard self.default = keyboard self.description = description self.active = False def activate_change(self,x,y): fixed_rect = self.rect.move(x_fix,y_fix) pos = pygame.mouse.get_pos() if fixed_rect.collidepoint((pos)): if pygame.mouse.get_pressed()[0]: self.active = True elif pygame.mouse.get_pressed()[0]: self.active = False ``` This is a part of my class. In my script I load all objects related to that class. The related objects are the optional keys in the game, e.g. ``` SHOOT1 = KeyboardSettings("SHOOT","q") move_right = KeyboardSettings("move_right","d") #and more keys key_obj_lst = [SHOOT1,move_right....] #also i built a dict a-z, 0,9 dict_key = { 'a' : pygame.K_a, 'b' : pygame.K_b, 'c' : pygame.K_c, ... 'z' : pygame.K_z, '0' : pygame.K_0, ... '1' : pygame.K_1, '9' : pygame.K_9, ``` then in the game loop: ``` for event in pygame.event.get(): if event.type == pygame.KEYDOWN: for k in key_obj_lst: #define each key by user if k.active: k.keyboard = event.unicode default = False if k.keyboard in dict_key: if event.key == dict_key[k.keyboard]: if k.description == 'Moving Right': moving_right = True if k.description == 'SHOOT': SHOOT = True ``` The code works perfectly, but I truly don't know how to add keys that are not letters or numbers, such as "ENTER", "SPACE", etc.
2021/12/23
[ "https://Stackoverflow.com/questions/70461032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17408857/" ]
`pygame` provides a function for getting the name of the key pressed: [`pygame.key.name`](https://www.pygame.org/docs/ref/key.html#pygame.key.name) And so you can use it to get the name of the key, there is no need to use a dictionary for this: ```py import pygame pygame.init() screen = pygame.display.set_mode((500, 400)) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: exit() if event.type == pygame.KEYDOWN: key_name = pygame.key.name(event.key) print(key_name) ```
Install the module first with `pip install keyboard`, then use it:

```
import keyboard  # use the keyboard module

while True:
    if keyboard.is_pressed('SPACE'):
        print('You pressed SPACE!')
        break
    elif keyboard.is_pressed("ENTER"):
        print("You pressed ENTER.")
        break
```
7,653
52,295,117
I tried to use the Basemap package to plot a map in PyCharm, but something goes wrong with

```
from mpl_toolkits.basemap import Basemap
```

The traceback is as follows:

```
Traceback (most recent call last):
  File "/Users/yupeipei/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2963, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-4-0a24a3a77efd>", line 7, in <module>
    from mpl_toolkits.basemap import Basemap
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 20, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/yupeipei/anaconda3/lib/python3.6/site-packages/mpl_toolkits/basemap/__init__.py", line 146, in <module>
    pyproj_datadir = os.environ['PROJ_LIB']
  File "/Users/yupeipei/anaconda3/lib/python3.6/os.py", line 669, in __getitem__
    raise KeyError(key) from None
KeyError: 'PROJ_LIB'
```

I'm confused by this error in PyCharm, because the same script runs correctly in Jupyter and Spyder! The interpreter configured in PyCharm is ../anaconda3/lib/python3.6, which is the same Anaconda environment. Has anyone met this error before? Could anyone help me solve it?
2018/09/12
[ "https://Stackoverflow.com/questions/52295117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9101918/" ]
Following mewahl's comment, I've added this to my .bashrc (I use bash):

> export PROJ\_LIB=/path/to/your/installation/of/anaconda/**share/proj/**

and now basemap (and others) work.
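If editing shell startup files is inconvenient (for example when PyCharm doesn't pick up your .bashrc), a workaround sketch is to set the variable from Python before importing Basemap. The exact path is an assumption and depends on where your Anaconda is installed:

```python
import os

# adjust this to your own Anaconda installation (e.g. .../anaconda3/share/proj)
os.environ["PROJ_LIB"] = os.path.join(
    os.path.expanduser("~"), "anaconda3", "share", "proj"
)

from mpl_toolkits.basemap import Basemap  # import only after PROJ_LIB is set
```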
I faced the same problem. I installed Anaconda and then installed Basemap with `conda install -c anaconda basemap`. I used Anaconda's built-in IDE, Spyder, which I find better than PyCharm; the only problem with Spyder is the lack of IntelliSense. I solved the Proj4 problem by setting the path. A separate problem was the kernel restarting when loading a larger .json dataset; I used Notepad++ and 010 Editor to re-save the file in small chunks and finally merged all the outputs.
7,655
36,419,442
I'm getting the error "The role defined for the function cannot be assumed by Lambda" when I'm trying to create a lambda function with create-function command. > > aws lambda create-function > > --region us-west-2 > > --function-name HelloPython > > --zip-file fileb://hello\_python.zip > > --role arn:aws:iam::my-acc-account-id:role/default > > --handler hello\_python.my\_handler > > --runtime python2.7 > > --timeout 15 > > --memory-size 512 > > >
2016/04/05
[ "https://Stackoverflow.com/questions/36419442", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3045354/" ]
I got this problem while testing a Lambda function. What worked for me was correcting the JSON formatting.
For me, the issue was that I had set the wrong default region environment key.
7,665
65,751,105
I'm using the Python Prometheus client and am having trouble pushing metrics to VictoriaMetrics (VM). There is a function called `push_to_gateway`, and I tried to replace the Prometheus URL with the VM one: `http://prometheus:9091 -> http://vm:8428/api/v1/write`. But VM responded with a 400 status code.
2021/01/16
[ "https://Stackoverflow.com/questions/65751105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6689249/" ]
It is possible to use the `push_to_gateway` method with VictoriaMetrics; check the examples in this gist: <https://gist.github.com/f41gh7/85b2eb895bb63b93ce46ef73448c62d0>
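For reference, a minimal `prometheus_client` push sketch looks like the following. The gateway address used here is a placeholder assumption: check the gist above for the exact endpoint VictoriaMetrics expects rather than the plain Pushgateway `:9091` default.

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge("job_last_success_unixtime", "Last time the job finished", registry=registry)
g.set_to_current_time()

# placeholder address: point this at the endpoint shown in the gist for VictoriaMetrics
push_to_gateway("vm:8428", job="batch_job", registry=registry)
```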
Also, pls take a look at the client I recently created: <https://github.com/gistart/prometheus-push-client> > > supports pushes directly to VictoriaMetrics via UDP and HTTP using InfluxDB line protocol > > > > > to StatsD or statsd-exporter in StatsD format via UDP > > > > > to pushgateway or prom-aggregation-gateway in OpenMetrics format via HTTP > > >
7,675
21,202,434
I have to randomly choose names from a list in Python **using random.randint.** I have done this so far, but I am unable to figure out how to print them without repetition; some of the names repeat after 10 to 15 names. Please help me out. **I am not allowed to use any high-level functions.** I should do it with simple functions. Here is my program:

```
import random

names = [about 150 names]

print([names[random.randint(0, len(names)-1)] for i in range(user_input)])
```
2014/01/18
[ "https://Stackoverflow.com/questions/21202434", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3209210/" ]
Assuming that you can only use **randint** and basic operations (loops, assignments and slicing), you can use the "in place shuffling" trick (a partial Fisher-Yates shuffle) to achieve this:

```
copy = names[:]
for i in xrange(user_input):
    # pick a random index from the part of the list that hasn't been chosen yet
    swap = random.randint(i, len(copy) - 1)
    copy[i], copy[swap] = copy[swap], copy[i]
print copy[:user_input]
```

Each of the first `user_input` positions is filled with a name drawn at random from the remaining ones, so no name can repeat.
Generate an array of random numbers of the same length as `names` ``` sortarr = [random.randint(0, 10*len(names)) for i in range(len(names))] ``` and sort your `names` array based on this new array ``` names = [name for sa, name in sorted(zip(sortarr, names))] ``` What it does is assigns some random numbers to the `names`. They can be repeating, but it will not make repeating names because if two numbers are the same, they will be assigned to some arbitrary names.
7,677
18,942,003
I have this Python skeleton, where main.py is fine and server.py is fine. But the moment chat() is launched i see the GUI, but the moment `gtk.main()` was executed it does not allow any activity under server.py module nor in chat.py itself. How can i have the chat.py flexible so that it does not interupt other class and run all like multi-task? Note: os.system('/var/tmp/chat.py') when i execute from server.py then i have no issue but i problem is i cant communicate in that way (so trying to avoid that method) Any idea, why and how can i make chat.py work independently without causing my whole application to be blocked until chat.py is exit? main.py: ``` #!/usr/bin/python from myglobal import bgcolors from myglobal import parsePresets from server import server from chat import chat t = server(58888) t.start() ``` server.py ``` class server(threading.Thread): def __init__(self, port): threading.Thread.__init__(self) self.port = port self.selfv = chat() self.selfv.run() def run(self): host = '' s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind((host, self.port)) #s.setblocking(0) # non-blocking s.listen(1) conn, addr = s.accept() serverFlag = True while serverFlag: try: data = conn.recv(1024) except socket.error: s.listen(1) conn, addr = s.accept() continue if not data: s.listen(1) conn, addr = s.accept() else: line = data conn.send('ok') conn.close() ``` chat.py ``` class chat(object): def listener(self, sock, *args): conn, addr = sock.accept() gobject.io_add_watch(conn, gobject.IO_IN, self.handler) return True def handler(self, conn, *args): line = conn.recv(4096) if not len(line): return False else: return True def __init__(self): self.window = gtk.Window(gtk.WINDOW_TOPLEVEL) self.window.set_size_request(800, 450) self.window.move(0,0) self.window.set_name("main window") self.window.connect("delete-event", gtk.main_quit) self.drawingarea = gtk.DrawingArea() self.window.add(self.drawingarea) def run(self): self.sock = socket.socket() self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.sock.bind(('', 58881)) self.sock.listen(1) gobject.io_add_watch(self.sock, gobject.IO_IN, self.listener) self.window.show_all() self.window.set_keep_above(True) if(self.window.get_window().get_state() == gtk.gdk.WINDOW_STATE_MAXIMIZED): self.window.unmaximize() gtk.main() def quit(self): gtk.main_quit() #if __name__=='__main__': # s=SelfView() # s.run() #gobject.MainLoop.run() ```
2013/09/22
[ "https://Stackoverflow.com/questions/18942003", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Instead of the default `sent_tokenize`, what you'll need is the realignment feature that is already pre-coded pre-trained in the `punkt` sentence tokenizer. ``` >>> import nltk >>> st2 = nltk.data.load('tokenizers/punkt/english.pickle') >>> sent = 'A problem. She said: "I don\'t know about it."' >>> st2.tokenize(sent, realign_boundaries=True) ['A problem.', 'She said: "I don\'t know about it."'] ``` see `6 Punkt Tokenizer` section from <http://nltk.googlecode.com/svn/trunk/doc/howto/tokenize.html>
The default sentence tokenizer is `PunktSentenceTokenizer` that detects a new sentence each time it founds a period except, for example, the period belongs to an acronym like U.S.A. In nltk documentation there are examples of how to train a new sentence splitter with different corpus. You can find it [here.](http://nltk.googlecode.com/svn/trunk/doc/book/ch06.html#sec-further-examples-of-supervised-classification) So I guess that your problem can't be solved with the default sentence tokenizer and you have to train a new one and try.
7,683
23,386,290
I want to use the logging module instead of printing for debug information and documentation. The goal is to print on the console with DEBUG level and log to a file with INFO level. I read through a lot of documentation, the cookbook and other tutorials on the logging module but couldn't figure out, how I can use it the way I want it. (I'm on python25) I want to have the names of the modules in which the logs are written in my logfile. The documentation says I should use `logger = logging.getLogger(__name__)` but how do I declare the loggers used in classes in other modules / packages, so they use the same handlers like the main logger? To recognize the 'parent' I can use `logger = logging.getLogger(parent.child)` but where do I know, who has called the class/method?` The example below shows my problem, if I run this, the output will only have the `__main__` logs in and ignore the logs in `Class` This is my **Mainfile:** ``` # main.py import logging from module import Class logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) # create file handler which logs info messages fh = logging.FileHandler('foo.log', 'w', 'utf-8') fh.setLevel(logging.INFO) # create console handler with a debug log level ch = logging.StreamHandler() ch.setLevel(logging.DEBUG) # creating a formatter formatter = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s') # setting handler format fh.setFormatter(formatter) ch.setFormatter(formatter) # add the handlers to the logger logger.addHandler(fh) logger.addHandler(ch) if __name__ == '__main__': logger.info('Script starts') logger.info('calling class Class') c = Class() logger.info('calling c.do_something()') c.do_something() logger.info('calling c.try_something()') c.try_something() ``` **Module:** ``` # module.py imnport logging class Class: def __init__(self): self.logger = logging.getLogger(__name__) # What do I have to enter here? self.logger.info('creating an instance of Class') self.dict = {'a':'A'} def do_something(self): self.logger.debug('doing something') a = 1 + 1 self.logger.debug('done doing something') def try_something(self): try: logging.debug(self.dict['b']) except KeyError, e: logging.exception(e) ``` Output in **console:** ``` - __main__ - INFO : Script starts - __main__ - INFO : calling class Class - __main__ - INFO : calling c.do_something() - __main__ - INFO : calling c.try_something() No handlers could be found for logger "module" ``` Besides: Is there a way to get the module names were the logs ocurred in my logfile, without declaring a new logger in each class like above? Also like this way I have to go for `self.logger.info()` each time I want to log something. I would prefer to use `logging.info()` or `logger.info()` in my whole code. Is a global logger perhaps the right answer for this? But then I won't get the modules where the errors occur in the logs... And my last question: Is this pythonic? Or is there a better recommendation to do such things right.
2014/04/30
[ "https://Stackoverflow.com/questions/23386290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2889642/" ]
In your main module, you're configuring the logger named `'__main__'` (or whatever `__name__` equates to in your case), while in module.py you're using a different logger. You either need to configure loggers per module, or you can configure the root logger (by configuring `logging.getLogger()`) in your main module, which will apply by default to all loggers in your project. I recommend using configuration files for configuring loggers. This link should give you a good idea of good practices: <http://victorlin.me/posts/2012/08/26/good-logging-practice-in-python> EDIT: use `%(module)s` in your formatter to include the module name in the log message.
The generally recommended logging setup is having at most 1 logger per module. If your project is [properly packaged](https://packaging.python.org/), `__name__` will have the value of `"mypackage.mymodule"`, except in your main file, where it has the value `"__main__"` If you want more context about the code that is logging messages, note that you can set your formatter with a [formatter string](https://docs.python.org/3/library/logging.html#logrecord-attributes) like `%(funcName)s`, which will add the function name to all messages. If you **really** want per-class loggers, you can do something like: ``` class MyClass: def __init__(self): self.logger = logging.getLogger(__name__+"."+self.__class__.__name__) ```
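A minimal sketch of the "configure the root logger once, use `getLogger(__name__)` everywhere else" pattern, reusing the handler setup from the question, could look like this; it is the usual way to get the module name into the log lines without wiring handlers in every module:

```python
# main.py -- configure handlers once, on the root logger
import logging

root = logging.getLogger()              # root logger: all module loggers propagate here
root.setLevel(logging.DEBUG)

fh = logging.FileHandler('foo.log', 'w', 'utf-8')
fh.setLevel(logging.INFO)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)

fmt = logging.Formatter('- %(name)s - %(levelname)-8s: %(message)s')
fh.setFormatter(fmt)
ch.setFormatter(fmt)
root.addHandler(fh)
root.addHandler(ch)

# module.py -- no handler setup needed, just a named logger
logger = logging.getLogger(__name__)    # the name shows up as "module" in the output
logger.info('creating an instance of Class')
```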
7,684
65,222,770
i am new to all this and im trying to make a shoot 'em up game. after i try to run the code, i encounter an error `TypeError: 'int' object is not subscriptable` at: line 269: `collision = isCollision(enemyX[i],enemyY[i],knifeX,knifeY)` line 118: `distance = math.sqrt((math.pow(enemyX[i] - knifeX,2)) + (math.pow(enemyY[i] - knifeY,2)))` not just here but pretty much every where in the loop if i put an 'i' on it ``` import pygame from pygame import mixer mixer.init() import random import math #Define some colors BLACK = (0,0,0) WHITE = (255,255,255) #intialize the pygame pygame.init() #create the screen screen = pygame.display.set_mode((700,583)) #Caption and icon pygame.display.set_caption("Yoshi & the rise of mushroom ") icon = pygame.image.load("Yoshi_icon.png") pygame.display.set_icon(icon) #Player playerImg = pygame.image.load("YoshiMario.png") playerX = 370 playerY = 480 playerX_change = 0 playerY_change = 0 #Enemy enemyImg = [] enemyX = [] enemyY = [] enemyX_change = [] enemyY_change = [] num_of_enemies = 10 for i in range(num_of_enemies): enemyImg.append(pygame.image.load('msh1.png')) enemyImg.append(pygame.image.load('msh2.png')) enemyX.append(random.randint(0,583)) enemyY.append(random.randint(50,150)) enemyX_change.append(2) enemyY_change.append(20) #Knife # ready - you cant see the knife on the screen # fire - the knife is currently moving knifeImg = pygame.image.load('diamondsword3.png') knifeX = 0 knifeY = 480 knifeX_change = 0 knifeY_change = 10 knife_state = "ready" #Score score_value = 0 font = pygame.font.Font('freesansbold.ttf',28) testX = 10 testY = 10 #Game Over Text over_font = pygame.font.Font('freesansbold.ttf',64) def show_score(x,y): score = font.render("Score : "+ str(score_value),True,(255,255,255)) screen.blit(score, (x, y)) def game_over_text(): over_text = over_font.render("GAME OVER",True,(255,255,255)) screen.blit(over_text, (150, 250)) def player(x,y): screen.blit(playerImg, (x, y)) def enemy(x,y,i): screen.blit(enemyImg[i], (x, y)) def fire_knife( x, y ): """ Start a knife flying upwards from the player """ global knife_state, knifeX, knifeY knifeX = x + 16 knifeY = y + 10 knife_state = "fire" def knife_hits( enemyX, enemyY ): """ Return True if the knife hits the enemy at the given (x,y). If so, prepare the knife for firing again. 
""" global knife_state, knifeX, knifeY collision_result = False if ( knife_state == "fire" and isCollision( enemyX[i], enemyY[i], knifeX, knifeY ) ): knife_state = "ready" collision_result = True return collision_result def draw_knife( screen ): """ If the knife is flying, draw it to the screen """ global knife_state, knifeImg, knifeX, knifeY if ( knife_state == "fire" ): screen.blit( knifeImg, ( knifeX, knifeY ) ) def update_knife(): """ Make any knife fly up the screen, resetting at the top """ global knife_state, knifeX, knifeY, knifeY_change # if the knife is already flying, move it if ( knife_state == "fire" ): knifeY -= knifeY_change if ( knifeY <= 0 ): knife_state = "ready" # went off-screen def isCollision(enemyX,enemyY,knifeX,knifeY): distance = math.sqrt((math.pow(enemyX[i] - knifeX,2)) + (math.pow(enemyY[i] - knifeY,2))) if distance < 27: return True else: return False #used to manage how fast the screen updates clock = pygame.time.Clock() font = pygame.font.Font(None,28) frame_count = 0 frame_rate = 60 start_time = 90 #game loop running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False #if keystroke is pressed check whether its right or left if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: playerX_change = -2 if event.key == pygame.K_RIGHT: playerX_change = 2 if event.key == pygame.K_UP: playerY_change = -2 if event.key == pygame.K_DOWN: playerY_change = 2 if event.key == pygame.K_SPACE: if knife_state is "ready": knife_Sound = mixer.Sound("knife_hitwall1.wav") knife_Sound.play() # get the current x coordinate of yoshi knifeX = playerX fire_knife(playerX,playerY) if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT: playerX_change = 0 if event.key == pygame.K_UP or event.key == pygame.K_DOWN: playerY_change = 0 ## -- Timer going up -- #Calculate total seconds total_seconds = frame_count // frame_rate #divide by 60 to get total minures minutes = total_seconds // 60 #use modulus (remainder) to get seconds seconds = total_seconds % 60 #use python string formatting to format in leading zeros output_string = "Time : {0:02}:{1:02}".format(minutes, seconds) # Blit to the screen text = font.render(output_string, True, (255,255,255)) screen.blit(text, [10,40]) # --- Timer going down --- # --- Timer going up --- # Calculate total seconds total_seconds = start_time - (frame_count // frame_rate) if total_seconds < 0: total_seconds = 0 # Divide by 60 to get total minutes minutes = total_seconds // 60 # Use modulus (remainder) to get seconds seconds = total_seconds % 60 # Use python string formatting to format in leading zeros output_string = "Time left: {0:02}:{1:02}".format(minutes, seconds) # Blit to the screen text = font.render(output_string, True,(255,255,255)) screen.blit(text, [10,70]) # ALL CODE TO DRAW SHOULD GO ABOVE THIS COMMENT frame_count += 1 # Limit frames per second clock.tick(frame_rate) # Go ahead and update the screen with what we've drawn. 
pygame.display.flip() # RGB - Red, Green, Blue screen.fill((0, 255, 0)) #add a wallpaper bgimage=pygame.image.load("Background.png") screen.blit(bgimage, (0, 0)) # 5 = 5 + -0.1 ->5 = 5 - 0.1 # 5 = 5 + 0.1 # checking for boundaries of yoshi/mushroom so it doesnt go out of bounds playerX += playerX_change if playerX < 0: playerX = 0 elif playerX > 645: playerX = 645 playerY += playerY_change if playerY < 0: playerY = 0 elif playerY > 500: playerY = 500 # enemy movement for i in range(num_of_enemies): #Game Over if enemyY[i]> 440: for j in range(num_of_enemies): enemyY[j] = 2000 game_over_text() break enemyX[i] += enemyX_change[i] if enemyX[i] <= 0: enemyX_change[i] = 2 enemyY[i] += enemyY_change[i] elif enemyX[i] > 645: enemyX_change[i] = -2 enemyY[i] += enemyY_change[i] update_knife() # move the flying knife (if any) if ( knife_hits( enemyX[i], enemyY[i] ) ): score_value += 1 print(score_value) enemyX[i] = random.randint(0,735) enemyY[i] = random.randint(50,150) else: draw_knife( screen ) # paint the flying knife (if any) player(playerX,playerY) enemy(enemyX[i],enemyY[i],i) pygame.display.update() # collision collision = isCollision(enemyX[i],enemyY[i],knifeX,knifeY) if collision: pop_Sound = mixer.Sound('pop.wav') pop_Sound.play() knifeY = 480 knife_state = "ready" score_value += 1 enemyX[i] = random.randint(0,735) enemyY[i] = random.randint(50,150) enemy(enemyX[i],enemyY[i],i) # knife movement if knifeY <= 0: knifeY = 480 knife_state = "ready" if knife_state == "fire": fire_knife(knifeX,knifeY) knifeY -= knifeY_change playerX += playerX_change playerY += playerY_change player(playerX,playerY) show_score(testX,testY) pygame.display.update() ```
2020/12/09
[ "https://Stackoverflow.com/questions/65222770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11592606/" ]
You call `isCollision()` with `enemyX[i]`, but then in `isCollision`, you try to access `enemyX[i]`. So you're trying to do `enemyX[i][i]`. But since `enemyX[i]` is an integer, trying to get a subscript of it, `[i]`, is invalid so that's why you get that error.
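For illustration, a minimal sketch of the fix, using the names from the question's code — the indexing stays at the call site, so inside the function the arguments are already plain numbers:

```
import math

def isCollision(enemyX, enemyY, knifeX, knifeY):
    # enemyX and enemyY are single coordinates here, because the caller
    # already passes enemyX[i] and enemyY[i]
    distance = math.sqrt(math.pow(enemyX - knifeX, 2) + math.pow(enemyY - knifeY, 2))
    return distance < 27

# called exactly as before in the game loop:
# collision = isCollision(enemyX[i], enemyY[i], knifeX, knifeY)
```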
You're already sending the specific number with e.g. `enemyX[i]` when you call `isCollision(enemyX[i], ...)`. Once in that function, you can just use the passed argument (which you also happen to call `enemyX` - it's just a different one from the caller's `enemyX`) by itself, e.g. `math.pow(enemyX -...`. I recommend reconsidering those variable names.
7,685
38,218,609
I started to learn Scrapy but I got stuck at a weird point where I couldn't set the default shell to ipython. The operating system of my laptop is Ubuntu 15.10. I also installed ipython and scrapy. They run well without causing any errors. According to Scrapy's [official tutorial](http://doc.scrapy.org/en/latest/topics/shell.html), I can change my default scrapy shell by entering this in the global configuration file

```
[settings]
shell = ipython
```

The problem is I couldn't locate the configuration file. I tried following the instructions from [another page](http://doc.scrapy.org/en/latest/topics/commands.html#topics-config-settings). I made these three config files: 1. `/etc/scrapy.cfg` (system-wide), 2. `~/.config/scrapy.cfg` ($XDG\_CONFIG\_HOME) and 3. `~/.scrapy.cfg` ($HOME) for global (user-wide) settings, but it didn't help at all. What should I do?

---

I followed the instructions in the first answer by paul trmbrth. There still seems to be a problem though.

[![enter image description here](https://i.stack.imgur.com/MqEpr.png)](https://i.stack.imgur.com/MqEpr.png)

It seems like I do have the right configuration file in the right place. But I still cannot open the scrapy shell with ipython, as you can see in the screenshot. Any idea?
2016/07/06
[ "https://Stackoverflow.com/questions/38218609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5538922/" ]
Another way to configure (or test) the shell used by `scrapy shell` is the [`SCRAPY_PYTHON_SHELL` environment variable](http://doc.scrapy.org/en/latest/topics/shell.html#configuring-the-shell). So running: ``` paul@paul:~$ SCRAPY_PYTHON_SHELL=ipython scrapy shell ``` would use `ipython` as first choice, whatever setting in `*scrapy.cfg` you may have. To check where scrapy is looking for config files, and what it finds, you can start the `python` interpreter and run [what `scrapy shell` does](https://github.com/scrapy/scrapy/blob/ebef6d7c6dd8922210db8a4a44f48fe27ee0cd16/scrapy/shell.py#L67): ``` $ python Python 3.5.1+ (default, Mar 30 2016, 22:46:26) [GCC 5.3.1 20160330] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from scrapy.utils.conf import get_config, get_sources >>> get_sources() ['/etc/scrapy.cfg', 'c:\\scrapy\\scrapy.cfg', '/home/paul/.config/scrapy.cfg', '/home/paul/.scrapy.cfg', ''] >>> cfg = get_config() >>> cfg.sections() ['deploy', 'settings'] >>> cfg.options('settings') ['shell'] >>> cfg.get('settings', 'shell') 'bpython' ```
If you are inside the project you can use this:

```
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
settings.get('IMPORT_API_URL')
```

If you are outside of the project, you can use this:

```
import os

from scrapy.settings import Settings

settings = Settings()
settings_module_path = os.environ.get('SCRAPY_ENV', 'project.settings.dev')
settings.setmodule(settings_module_path, priority='project')
settings.get('BASE_URL')
```
7,687
41,780,388
Please refrain from calling this a duplicate; I am completely new to the idea of accessing USB devices via Python. The other questions and answers were often too high-level for me to comprehend. I have a QR code scanner that is USB plug and play. I can't find it on the command line for whatever reason and it has me stumped. When the scanner scans a QR code I want its data to be sent to my Python script so I can assign it to a variable for comparison against a database. I don't understand how to access a USB device and retrieve the information with Python. I have read quite a bit about it and still nothing. Is there a somewhat simple way of doing this?
2017/01/21
[ "https://Stackoverflow.com/questions/41780388", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4562973/" ]
With this (second) call:

```
name_array = realloc(name_array, sizeof(char *));
```

you are still allocating just one char pointer. So, you can't store two pointers; you need to increase the size:

```
name_array = realloc(name_array, 2 * sizeof *name_array);
```

Now, you'd be fine. Note that the `p = realloc(p, ..);` style of realloc() could lead to memory leaks if `realloc()` fails. Also, you'd be better off using a format string to avoid a potential format string attack (if it's going to be user-inputted):

```
/* Print the names */
printf ("%s\n", name_array[0]);
printf ("%s\n", name_array[1]);
```
The error is better reported by address sanitizer than valgrind. If you compile your code like:

```
gcc test.c -fsanitize=address -g
```

and then run it, it will report a heap-buffer-overflow error on line 19 of your code. This is the line where you assign the second element of name\_array while having allocated memory for only one element.
7,688
16,223,412
I have a relatively big project that has many dependencies, and I would like to distribute this project around, but installing these dependencies was a bit of a pain, and takes a very long time (pip install takes quite some time). So I was wondering if it was possible to migrate a whole virtualenv to another machine and have it running. I tried copying the whole virtualenv, but whenever I try running something, this virtualenv still uses the path of my old machine. For instance when I run

```
source activate
pserve development.ini
```

I get

```
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
```

This is my old directory. So is there a way to have virtualenv reconfigure this path with a new path? I tried `sed -i 's/sshum/dev1/g' *` in the bin directory and it solved that issue. However, I'm getting a different issue now; my guess is that this sed changed something. I've confirmed that I have `libssl-dev` installed but when I run `python` I get:

```
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
```

But when I run `aptitude search libssl` I see:

```
i A libssl-dev - SSL development libraries, header files and documentation 
```

I also tried `virtualenv --relocatable backend` but no go.
2013/04/25
[ "https://Stackoverflow.com/questions/16223412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1515864/" ]
Export the virtual environment *from within the virtual environment:*

```
pip freeze > requirements.txt
```

As an example, here it is for the myproject virtual environment:

[![enter image description here](https://i.stack.imgur.com/EJUEk.png)](https://i.stack.imgur.com/EJUEk.png)

Once on the new machine and inside the new environment, copy the requirements.txt into the *new project folder* and run the terminal command:

```
sudo pip install -r requirements.txt
```

Then you should have all the packages previously available in the old virtual environment.
When you create a new virtualenv it is configured for the computer it is running on. I even think that it is configured for that specific directory it is created in. So I think you should always create a fresh virtualenv when you move you code. What might work is copying the lib/Pythonx.x/site-packages in your virtualenv directory, but I don't think that is a particularly good solution. What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: [How do I install from a local cache with pip?](https://stackoverflow.com/questions/4806448/how-do-i-install-from-a-local-cache-with-pip)
7,689
24,209,181
I have uploaded my first Django Application called **survey** (Its a work in progress) using `mod_wsgi` with `Apache` on an `Ubuntu` VM but I don't know what the URL of it should be. My VM has been made public through a proxyPass at <http://phaedrus.scss.tcd.ie/bias_experiment>. When working on my application locally I simply go to `http://127.0.0.1:8000/surveythree/` Based on my urls.py (below) I thought that I would simply have to go to <http://phaedrus.scss.tcd.ie/bias_experiment/surveythree/> to see my Survey application online. However I cant seem to find it... My question: **What URL should I be using to locate my Survey Application based on my below settings?** Or have I missed some other step in the process? The project has been uploaded, I have restarted the server, I have set it running with `python manage.py runserver` Some of the urls I have tried * <http://phaedrus.scss.tcd.ie/bias_experiment/surveythree/> * <http://phaedrus.scss.tcd.ie/bias_experiment/src/surveythree/> * <http://phaedrus.scss.tcd.ie/bias_experiment/src/bias_experiment/surveythree/> Below is my setup and what I have tried so far. NOTE: I have a Bias\_Experiment Django Project created in Pydev. It has three applications contained within an src folder. * survey (my working project) * polls (a tutorial i was following) * bias\_experiment (the root application with my settings file etc) **My URL patterns** from bias\_experiment/src/bias\_experiment/urls.py ``` urlpatterns = patterns('', url(r'^polls/', include('polls.urls', namespace="polls")), url(r'^admin/', include(admin.site.urls)), url(r'^surveythree/$', SurveyWizard.as_view([SurveyForm1, SurveyForm2, SurveyForm3, SurveyForm4, SurveyForm5])), ) ``` **My virtual host** located at /etc/apache2/sites-available/bias\_experiment ``` <VirtualHost *:80> ServerAdmin myemail@gmail.com ServerName phaedrus.scss.tcd.ie/bias_experiment ServerAlias phaedrus.scss.tcd.ie WSGIScriptAlias /bias_experiment /var/www/bias_experiment/src/bias_experiment/index.wsgi Alias /static/ /var/www/bias_experiment/src/bias_experiment/static/ <Location "/static/"> Options -Indexes </Location > </VirtualHost > ``` **My WSGI file** located at /var/www/bias\_experiment/src/bias\_experiment/index.wsgi ``` import os import sys import site # This is to add the src folder sys.path.append('/var/www/bias_experiment/src/bias_experiment') os.environ['DJANGO_SETTINGS_MODULE'] = 'bias_experiment.settings' # Activate your virtual env activate_env=os.path.expanduser("/var/www/bias_experiment/bin/activate_this.py") execfile(activate_env, dict(__file__=activate_env)) import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() ``` **This is the project structure** ![enter image description here](https://i.stack.imgur.com/4lSyZ.png) I had a [previous question related to this](https://stackoverflow.com/questions/24188604/django-application-not-visable) which had multiple issues pointed out to me which I have since fixed so I am re-posting this here. I have been following several tutorials as details in that question. Any help with this would be massively appreciated. 
Thanks Deepend EDIT: My Apache Error Log: `tail /var/log/apache2/error.log` ``` (bias_experiment)spillab@kdeg-vm-18:/var/www/bias_experiment/src$ sudo su root@kdeg-vm-18:/var/www/bias_experiment/src# tail /var/log/apache2/error.log [Fri Jun 13 16:21:04 2014] [error] [client 134.226.38.233] File does not exist: /var/www/bias_experiment/surveythree, referer: https://stackoverflow.com/questions/24209181/what-should-be-the-url-of-my-django-application/24209864?noredirect=1 [Fri Jun 13 16:22:36 2014] [error] [client 134.226.38.233] PHP Notice: Use of undefined constant PHP_SELF - assumed 'PHP_SELF' in /var/www/bias_experiment/brendy.php on line 24, referer: http://phaedrus.scss.tcd.ie/bias_experiment/ [Fri Jun 13 16:22:36 2014] [error] [client 134.226.38.233] PHP Notice: Undefined index: brendy in /var/www/bias_experiment/brendy.php on line 27, referer: http://phaedrus.scss.tcd.ie/bias_experiment/ [Fri Jun 13 16:22:36 2014] [error] [client 134.226.38.233] PHP Notice: Use of undefined constant action - assumed 'action' in /var/www/bias_experiment/brendy.php on line 72, referer: http://phaedrus.scss.tcd.ie/bias_experiment/ [Fri Jun 13 16:22:36 2014] [error] [client 134.226.38.233] PHP Notice: Undefined index: action in /var/www/bias_experiment/brendy.php on line 72, referer: http://phaedrus.scss.tcd.ie/bias_experiment/ [Fri Jun 13 16:22:36 2014] [error] [client 134.226.38.233] PHP Warning: include(footer.php): failed to open stream: No such file or directory in /var/www/bias_experiment/brendy.php on line 118, referer: http://phaedrus.scss.tcd.ie/bias_experiment/ [Fri Jun 13 16:22:36 2014] [error] [client 134.226.38.233] PHP Warning: include(): Failed opening 'footer.php' for inclusion (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/bias_experiment/brendy.php on line 118, referer: http://phaedrus.scss.tcd.ie/bias_experiment/ [Fri Jun 13 16:22:37 2014] [error] [client 134.226.38.233] File does not exist: /var/www/bias_experiment/special.css, referer: http://phaedrus.scss.tcd.ie/bias_experiment/brendy.php [Fri Jun 13 16:22:37 2014] [error] [client 134.226.38.233] File does not exist: /var/www/bias_experiment/images, referer: http://phaedrus.scss.tcd.ie/bias_experiment/brendy.php [Fri Jun 13 16:22:37 2014] [error] [client 134.226.38.233] File does not exist: /var/www/bias_experiment/images, referer: http://phaedrus.scss.tcd.ie/bias_experiment/brendy.php root@kdeg-vm-18:/var/www/bias_experiment/src# ```
2014/06/13
[ "https://Stackoverflow.com/questions/24209181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1214163/" ]
The URL should be <http://phaedrus.scss.tcd.ie/bias_experiment/surveythree/> I think there is one tiny error in the Apache configuration, which might be my fault (sorry): you need a trailing slash, so: ``` WSGIScriptAlias /bias_experiment/ /var/www/bias_experiment/src/bias_experiment/index.wsgi ``` Also note that you don't need to run manage.py runserver, that's pointless as Apache is serving your app.
Try with these changes:

Apache conf:

```
WSGIApplicationGroup %{GLOBAL}
ServerName phaedrus.scss.tcd.ie
WSGIScriptAlias /bias_experiment/ /var/www/bias_experiment/src/bias_experiment/index.wsgi
WSGIDaemonProcess bias_experiment processes=4 threads=25 display-name=%{GROUP}
WSGIProcessGroup bias_experiment
WSGIPassAuthorization On
```

And you need to restart the Apache server.
7,691
24,618,832
I wonder if anyone has an elegant solution to being able to pass a python list, a numpy vector (shape(n,)) or a numpy vector (shape(n,1)) to a function. The idea would be to generalize a function such that any of the three would be valid without adding complexity. Initial thoughts: ``` 1) Use a type checking decorator function and cast to a standard representation. 2) Add type checking logic inline (significantly less ideal than #1). 3) ? ``` I do not generally use python builtin array types, but suspect a solution to this question would also support those.
2014/07/07
[ "https://Stackoverflow.com/questions/24618832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/839375/" ]
You can convert the three types to a "canonical" type, which is a 1-dim array, using:

```
arr = np.asarray(arr).ravel()
```

Put in a decorator:

```
import numpy as np
import functools

def takes_1dim_array(func):
    @functools.wraps(func)
    def f(arr, *a, **kw):
        arr = np.asarray(arr).ravel()
        return func(arr, *a, **kw)
    return f
```

Then:

```
@takes_1dim_array
def func(arr):
    print(arr.shape)
```
I think the simplest thing to do is to start off your function with [`numpy.atleast_2d`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.atleast_2d.html). Then, all 3 of your possibilities will be converted to the `x.shape == (n, 1)` case, and you can use that to simplify your function. For example, ``` def sum(x): x = np.atleast_2d(x) return np.dot(x, np.ones((x.shape[0], 1))) ``` `atleast_2d` returns a view on that array, so there won't be much overhead if you pass in something that's already an `ndarray`. However, if you plan to modify `x` and therefore want to make a copy instead, you can do `x = np.atleast_2d(np.array(x))`.
7,694
20,714,517
**Hi, I ran into an encoding error with Python Django. In my views.py, I have the following:** ``` from django.shortcuts import render from django.http import HttpResponse from django.template.loader import get_template from django.template import Context # Create your views here. def hello(request): name = 'Mike' html = '<html><body>Hi %s, this seems to have !!!!worked!</body></html>' % name return HttpResponse(html) def hello2(request): name = 'Andrew' html = '<html><body>Hi %s, this seems to have !!!!worked!</body></html>' % name return HttpResponse(html) # -*- coding: utf-8 -*- def hello3_template(request): name = u'哈哈' t = get_template('hello3.html') html = t.render(Context({'name' : name})) return HttpResponse(html) ``` **I got the following error:** SyntaxError at /hello3\_template/ ================================= Non-ASCII character '\xe5' in file D:\WinPython-32bit-2.7.5.3\django\_test\article\views.py on line 19, but no encoding declared; see <http://www.python.org/peps/pep-0263.html> for details (views.py, line 19) I look up that link, but I am still puzzled on how to resolve it. Could you help? Thanks, smallbee **As lalo points out, the following line has to be on the top** ``` # -*- coding: utf-8 -*- ``` **Thank you, all.**
2013/12/21
[ "https://Stackoverflow.com/questions/20714517", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1139783/" ]
Well, here you are: Put `# -*- coding: utf-8 -*-` at the top of the file; it defines the encoding.

The [docs](http://www.python.org/dev/peps/pep-0263/) say:

> Python will default to ASCII as standard encoding if no other
> encoding hints are given.
>
> To define a source code encoding, a magic comment must
> be placed into the source files either as first or second
> line in the file, such as:

So, your code must begin:

```
# -*- coding: utf-8 -*-
from django.shortcuts import render
from django.http import HttpResponse
from django.template.loader import get_template
...
```

Hope this helps
If you read [PEP 263](http://www.python.org/dev/peps/pep-0263/), it clearly says: > > To define a source code encoding, a magic comment must be placed into the source files either as first or second line in the file… > > > (The original proposal said that it had to be the first line after the #!, if any, but presumably it turned out to be easier to implement with the "first or second line" rule.) The actual reference docs describe the same thing, in a less friendly but more rigorous way, for [3.3](http://docs.python.org/3/reference/lexical_analysis.html#encoding-declarations) and [2.7](http://docs.python.org/2.7/reference/lexical_analysis.html#encoding-declaration). A "magic comment" that appears later in the file is not magic, it's just a comment to mislead your readers without affecting the Python compiler. UTF-8 for `u'哈哈'` is `'\xe5\x93\x88\xe5\x93\x88'`, so those are the bytes in the file. In recent Python versions (including 2.7 and all 3.x), the default encoding is always ASCII unless the file starts with a UTF BOM (as some Microsoft editors like to do); even in 2.3-2.6 it's usually ASCII; in earlier versions it's Latin-1. Trying to interpret `'\xe5\x93\x88\xe5\x93\x88'` will fail with the exact exception you saw.
7,695
49,510,289
I have a gigantic excel workbook with a lot of personal data. Each person has a unique numeric identifier, but has multiple rows of information. I want to filter all the content through that identifier, and then copy the resulting rows to a template excel workbook and save the results. I'm trying to do this with Python and openpyxl. I thought that applying an AutoFilter and then copying the results would solve the problem. But it seems that openpyxl can only apply the AutoFilter and [not do the actual filtering?](https://openpyxl.readthedocs.io/en/2.5/filters.html) I tried to follow the answer to [this question](https://stackoverflow.com/questions/47918217/python3-openpyxl-copying-data-from-row-that-contains-certain-value-to-new-sheet), but it won't do anything. I want to filter the number in column D (4). ``` import openpyxl, os from openpyxl.utils import range_boundaries #Intitializes workbooks print('Opening data file...') min_col, min_row, max_col, max_row = range_boundaries("A:AG") wb = openpyxl.load_workbook('Data.xlsx') ws = wb.active template = openpyxl.load_workbook('Template.xlsx') templatews = template.active #Asks for numeric identifier print('Done! Now introduce identifier:') filterNumber = input() #Does the actual thing for row in ws.iter_rows(): if row[3].value == str(filterNumber): templatews.append((cell.value for cell in row[min_col-1:max_col])) #Saves the results template.save('templatesave.xlsx') print('All done! Have fun!') ``` Any insight on this will be appreciated. Thanks! EDIT: corrected column number according to @alexis suggestion, although it has not solved the issue. SOLVED: it turns out that the IF statement asks for an integer, not a string. Using **int()** solved the problem. ``` for row in ws.iter_rows(): if row[3].value == int(filterNumber): templatews.append((cell.value for cell in row[min_col-1:max_col])) ```
2018/03/27
[ "https://Stackoverflow.com/questions/49510289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9553462/" ]
Yes, all of your answers are correct. An `int` will always take up `sizeof(int)` bytes: 8 stored as an `int` (assuming a 32-bit `int`) will take 4 bytes, whereas 8 stored as a `char` will take up one byte. The way to think about your last question IMO is that data is stored as *bytes*. *char* and *int* are ways of interpreting bytes, so in text files you write bytes, but if you want to write human-readable "8" into a text file, you must write this in some encoding, such as ASCII, where bytes correspond to human-readable characters. So, to write "8" you would need to write the byte `0x38` (the ASCII value of `8`). So, in files you have *data*, not *int* or *chars*.
When we consider the memory needed for an `int` or for a `char`, we consider the value as a whole. Integers are commonly stored using a word of memory, which is 4 bytes or 32 bits, so (unsigned) integers from 0 up to 4,294,967,295 (2^32 - 1) can be stored in an `int` variable. Since we need 32 bits in total (32/8 = 4), we need 4 bytes for an `int` variable.

But to store an ASCII character we need only 7 bits. The ASCII table has 128 characters, with values from 0 through 127, so 7 bits are sufficient to represent a character in ASCII. (However, most computers typically reserve 1 bit more, i.e. 8 bits, for an ASCII character.)

And about your question:

> and if I create an int variable as the number 12345 and a character
> array of "12345" the character array will have consumed more memory?

Yes, from the above definition it is true. In the first case (the int value) it needs just 4 bytes, and in the second case it needs 5 bytes in total. The reason is that in the first case `12345` is a single integer value and in the second case `"12345"` is a total of 5 ASCII characters. In the second case you actually need one more byte to hold the `'\0'` character as part of the string (it marks the end of the string).
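If it helps to see those sizes concretely, here is a small illustrative sketch (the exact width of a C `int` is platform-dependent; 4 bytes is just the common case):

```
import struct

print(struct.calcsize('i'))   # typically 4 - bytes occupied by an int such as 12345
print(len(b"12345"))          # 5 - one byte per ASCII character
print(len(b"12345\0"))        # 6 - including the terminating '\0' of a C string
```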
7,696
57,230,353
I am working on a Python program for displaying photos on the Raspberry Pi (Model B Revision 2.0 with 512MB RAM). It uses Tk for displaying the images. The program is mostly finished, but I ran into an issue where the program is terminated by the kernel because of low memory. This seems to happen randomly. I do not understand why this happens. I have noticed that during image switching, the CPU spikes up significantly (up to 90%). I therefore thought that it might be an issue with the CPU not keeping up between two images and then falling behind and running out of memory. To test this I increased the timeout between showing images to 1 minute, but that did not help. My question is, whether I am doing something wrong/inefficiently in the code (see below)? If not: I am considering switching to PyQt, because it seems to accelerate graphics with OpenGL (from what I read). Is this true and/or do you think that this might help with the issue I am facing? This is my current Python code: ``` # From: https://stackoverflow.com/questions/19838972/how-to-update-an-image-on-a-canvas import os from pathlib import Path from tkinter import * from PIL import Image, ExifTags, ImageTk import ipdb class MainWindow(): def __init__(self, main): self.my_images = [] self._imageDirectory = str(Path.home().joinpath("./Pictures/rpictureframe")) self.main = main w, h = main.winfo_screenwidth(), root.winfo_screenheight() self.w, self.h = w, h main.attributes("-fullscreen", True) # REF: https://stackoverflow.com/questions/45136287/python-tkinter-toggle-quit-fullscreen-image-with-double-mouse-click main.focus_set() self.canvas = Canvas(main, width=w, height=h) self.canvas.configure(background="black", highlightthickness=0) self.canvas.pack() self.firstCall = True # set first image on canvas self.image_on_canvas = self.canvas.create_image(w/2, h/2, image = self.getNextImage()) ### replacing getNextImage instead of getNextImageV1 here fails @property def imageDirectory(self): return self._imageDirectory @imageDirectory.setter def setImageDirectory(self,imageDirectory): self._imageDirectory = imageDirectory def getNextImage(self): if self.my_images == []: self.my_images = os.listdir(self.imageDirectory) currentImagePath = self.imageDirectory + "/" + self.my_images.pop() self.currentImage = self.readImage(currentImagePath, self.w, self.h) return self.currentImage def readImage(self,imagePath,w,h): pilImage = Image.open(imagePath) pilImage = self.rotateImage(pilImage) pilImage = self.resizeImage(pilImage,w,h) return ImageTk.PhotoImage(pilImage) def rotateImage(self,image): # REF: https://stackoverflow.com/a/26928142/653770 try: for orientation in ExifTags.TAGS.keys(): if ExifTags.TAGS[orientation]=='Orientation': break exif=dict(image._getexif().items()) if exif[orientation] == 3: image=image.rotate(180, expand=True) elif exif[orientation] == 6: image=image.rotate(270, expand=True) elif exif[orientation] == 8: image=image.rotate(90, expand=True) except (AttributeError, KeyError, IndexError): # cases: image don't have getexif pass return image def resizeImage(self,pilImage,w,h): imgWidth, imgHeight = pilImage.size if imgWidth > w or imgHeight > h: ratio = min(w/imgWidth, h/imgHeight) imgWidth = int(imgWidth*ratio) imgHeight = int(imgHeight*ratio) pilImage = pilImage.resize((imgWidth,imgHeight), Image.ANTIALIAS) return pilImage def update_image(self): # REF: https://stackoverflow.com/questions/7573031/when-i-use-update-with-tkinter-my-label-writes-another-line-instead-of-rewriti/7582458# 
self.canvas.itemconfig(self.image_on_canvas, image = self.getNextImage()) ### replacing getNextImage instead of getNextImageV1 here fails self.main.after(5000, self.update_image) root = Tk() app = MainWindow(root) app.update_image() root.mainloop() ``` **UPDATE:** Below you will find the current code that still produces the out-of-memory issue. You can find the dmesg out-of-memory error here: <https://pastebin.com/feTFLSxq> Furthermore this is the periodic (every second) output from `top`: <https://pastebin.com/PX99VqX0> I have plotted the columns 6 and 7 (memory usage) of the `top` output: [![Plot top memory usage.](https://i.stack.imgur.com/2OEF2.png)](https://i.stack.imgur.com/2OEF2.png) As you can see, there does not appear to be a continues increase in memory usage as I would expect from a memory leak. This is my current code: ``` # From: https://stackoverflow.com/questions/19838972/how-to-update-an-image-on-a-canvas import glob from pathlib import Path from tkinter import * from PIL import Image, ExifTags, ImageTk class MainWindow(): def __init__(self, main): self.my_images = [] self._imageDirectory = str(Path.home().joinpath("Pictures/rpictureframe")) self.main = main w, h = main.winfo_screenwidth(), root.winfo_screenheight() self.w, self.h = w, h # main.attributes("-fullscreen", True) # REF: https://stackoverflow.com/questions/45136287/python-tkinter-toggle-quit-fullscreen-image-with-double-mouse-click main.focus_set() self.canvas = Canvas(main, width=w, height=h) self.canvas.configure(background="black", highlightthickness=0) self.canvas.pack() # set first image on canvas self.image_on_canvas = self.canvas.create_image(w / 2, h / 2, image=self.getNextImage()) ### replacing getNextImage instead of getNextImageV1 here fails @property def imageDirectory(self): return self._imageDirectory @imageDirectory.setter def setImageDirectory(self, imageDirectory): self._imageDirectory = imageDirectory def getNextImage(self): if self.my_images == []: # self.my_images = os.listdir(self.imageDirectory) self.my_images = glob.glob(f"{self.imageDirectory}/*.jpg") currentImagePath = self.my_images.pop() self.currentImage = self.readImage(currentImagePath, self.w, self.h) return self.currentImage def readImage(self, imagePath, w, h): with Image.open(imagePath) as pilImage: pilImage = self.rotateImage(pilImage) pilImage = self.resizeImage(pilImage, w, h) return ImageTk.PhotoImage(pilImage) def rotateImage(self, image): # REF: https://stackoverflow.com/a/26928142/653770 try: for orientation in ExifTags.TAGS.keys(): if ExifTags.TAGS[orientation] == 'Orientation': break exif = dict(image._getexif().items()) if exif[orientation] == 3: image = image.rotate(180, expand=True) elif exif[orientation] == 6: image = image.rotate(270, expand=True) elif exif[orientation] == 8: image = image.rotate(90, expand=True) except (AttributeError, KeyError, IndexError): # cases: image don't have getexif pass return image def resizeImage(self, pilImage, w, h): imgWidth, imgHeight = pilImage.size if imgWidth > w or imgHeight > h: ratio = min(w / imgWidth, h / imgHeight) imgWidth = int(imgWidth * ratio) imgHeight = int(imgHeight * ratio) pilImage = pilImage.resize((imgWidth, imgHeight), Image.ANTIALIAS) return pilImage def update_image(self): # REF: https://stackoverflow.com/questions/7573031/when-i-use-update-with-tkinter-my-label-writes-another-line-instead-of-rewriti/7582458# self.canvas.itemconfig(self.image_on_canvas, image=self.getNextImage()) ### replacing getNextImage instead of getNextImageV1 here fails 
self.main.after(30000, self.update_image) root = Tk() app = MainWindow(root) app.update_image() root.mainloop() ```
2019/07/27
[ "https://Stackoverflow.com/questions/57230353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/653770/" ]
I believe there is a memory leak when you open the image files with PIL and don't close them. To avoid it, you must call `Image.close()`, or better yet consider using the `with` syntax. ``` def readImage(self,imagePath,w,h): with Image.open(imagePath) as pilImage: pilImage = self.rotateImage(pilImage) pilImage = self.resizeImage(pilImage,w,h) return ImageTk.PhotoImage(pilImage) ```
I ran the code on my machine and I noticed similar spikes. After some memory adjustments on a virtual machine I had a system without swap (turned off to get the crash "faster") and approximately 250Mb free memory. While base memory usage was somewhere around 120Mb, the image change was between 190Mb and 200Mb (using images with a file size of 6,6Mb and 5184x3456 pixels), similar to your plot. Then I copied a bigger (panorama) image (8,1Mb with 20707x2406 pixels) to the folder - and voila, the machine got stuck. I could see that the memory usage of the process reached 315Mb and the system became unusable (after 1 minute I "pulled the plug" on the VM). So I think your problem has nothing to do with the actual code, but with the pictures you are trying to load (and the limited amount of RAM/swap on your system). Maybe skipping the rotate and resize functions might mitigate your problem...
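As one possible mitigation (an assumption on my side, not something verified on the Pi): for JPEGs, PIL can be told to decode at a reduced size with `draft()` before the final resize, so the full-resolution bitmap never has to sit in RAM. A sketch:

```
from PIL import Image

def read_image_small(image_path, max_w, max_h):
    # draft() hints the JPEG decoder to decode at a reduced size,
    # thumbnail() then scales down to the final dimensions in place
    with Image.open(image_path) as img:
        img.draft('RGB', (max_w, max_h))
        img.thumbnail((max_w, max_h), Image.ANTIALIAS)
        return img.copy()   # copy so the pixel data survives closing the file
```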
7,701
20,578,798
I'm new to Python dictionaries as well as nesting. Here's what I'm trying to find - I have objects that all have the same attributes: color and height. I need to compare all the attributes and make a list of all the ones that match.

```
matchList = []
dict = {obj1:{'color': (1,0,0), 'height': 10.6},
        obj2:{'color': (1,0.5,0), 'height': 5},
        obj3:{'color': (1,0.5,0), 'height': 5},
        obj4:{'color': (1,0,0), 'height': 10.6}}
```

I need to find a way to compare each of the objects to one another and create a nested list of all the ones that match. So if obj1 and obj4 match, and obj2 and obj3 match, I want this as my result:

```
matchList = [[obj1, obj4], [obj2, obj3]]
```

How would I go about doing this?
2013/12/14
[ "https://Stackoverflow.com/questions/20578798", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3101267/" ]
**Update:** Found this great article from [Matt Gaunt](https://plus.google.com/+MattGaunt/posts) (a Google Employee) on adding a Translucent theme to Android apps. It is very thorough and addresses some of the issues many people seem to be having while implementing this style: [Translucent Theme in Android](http://blog.gauntface.co.uk/2014/01/10/translucent-theme-in-android/) Just add the following to your custom style. This prevents the shifting of the content behind the ActionBar and up to the top of the window, not sure about the bottom of the screen though. ``` <item name="android:fitsSystemWindows">true</item> ``` Credit: [Transparent Status Bar System UI on Kit-Kat](https://stackoverflow.com/questions/20167755/transparent-status-bar-system-ui-on-4-4-kit-kat)
It looks like all you need to do is add this element to the themes you want a translucent status bar on: ``` <item name="android:windowTranslucentStatus">true</item> ```
7,702
48,921,068
I am using `python 3` and `django 1.11` and I have the following data in a `.sql` file. How do I use `DecimalField` and `DateField` components to represent the fields correctly? I was thinking of doing something like this:

e.g. `per_diem = models.DecimalField(max_digits=12, decimal_places=2,null=False)`

```
CREATE TABLE employee_per_diem (
  employee_per_diem_id serial primary key,
  employee_month_id integer references employee_month not gs,
  travel_date date not null,
  return_date date not null,
  days_travelled integer not null,
  per_diem float default 0 not null,
  cash_paid float default 0 not null,
  tax_amount float default 0 not null,
  full_amount float default 0 not null,
);
```

Am I adding the `null` correctly?
2018/02/22
[ "https://Stackoverflow.com/questions/48921068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8342189/" ]
The first one is the status code constant from the Servlet API's `HttpServletResponse` interface, documented [here](https://tomcat.apache.org/tomcat-5.5-doc/servletapi/javax/servlet/http/HttpServletResponse.html):

> SC\_NOT\_FOUND - Status code (404) indicating that the requested
> resource is not available.

The second one is from the Spring Framework's HTTP status code constants, documented [here](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/http/HttpStatus.html):

> NOT\_FOUND 404 Not Found.

For the Spring Framework (and Spring Boot), the second one is the one that is widely used.
There is no difference; it is the same HTTP status code, just defined as a constant in two different libraries.
7,710
30,578,381
I am trying to compile opencv on Slackware 4.1. However I encountered the following error each time. ``` In file included from /usr/include/gstreamer-0.10/gst/pbutils/encoding-profile.h:29:0, from /tmp/SBo/opencv-2.4.11/modules/highgui/src/cap_gstreamer.cpp:65: /usr/include/gstreamer-0.10/gst/pbutils/gstdiscoverer.h:35:9: error: 'GstMiniObjectClass' does not name a type typedef GstMiniObjectClass GstDiscovererStreamInfoClass; /usr/include/gstreamer-0.10/gst/pbutils/gstdiscoverer.h:83:9: error: 'GstMiniObjectClass' does not name a type typedef GstMiniObjectClass GstDiscovererContainerInfoClass; /usr/include/gstreamer-0.10/gst/pbutils/gstdiscoverer.h:104:9: error: 'GstMiniObjectClass' does not name a type typedef GstMiniObjectClass GstDiscovererAudioInfoClass; ^ /usr/include/gstreamer-0.10/gst/pbutils/gstdiscoverer.h:129:9: error: 'GstMiniObjectClass' does not name a type typedef GstMiniObjectClass GstDiscovererVideoInfoClass; ^ /usr/include/gstreamer-0.10/gst/pbutils/gstdiscoverer.h:159:9: error: 'GstMiniObjectClass' does not name a type typedef GstMiniObjectClass GstDiscovererSubtitleInfoClass; ^ /usr/include/gstreamer-0.10/gst/pbutils/gstdiscoverer.h:202:9: error: 'GstMiniObjectClass' does not name a type typedef GstMiniObjectClass GstDiscovererInfoClass; ^ In file included from /tmp/SBo/opencv-2.4.11/modules/highgui/src/cap_gstreamer.cpp:65:0: /usr/include/gstreamer-0.10/gst/pbutils/encoding-profile.h:47:9: error: 'GstMiniObjectClass' does not name a type typedef GstMiniObjectClass GstEncodingProfileClass; ^ /usr/include/gstreamer-0.10/gst/pbutils/encoding-profile.h:66:9: error: 'GstEncodingProfileClass' does not name a type typedef GstEncodingProfileClass GstEncodingContainerProfileClass; ^ /usr/include/gstreamer-0.10/gst/pbutils/encoding-profile.h:85:9: error: 'GstEncodingProfileClass' does not name a type typedef GstEncodingProfileClass GstEncodingVideoProfileClass; ^ /usr/include/gstreamer-0.10/gst/pbutils/encoding-profile.h:104:9: error: 'GstEncodingProfileClass' does not name a type typedef GstEncodingProfileClass GstEncodingAudioProfileClass; ^ /tmp/SBo/opencv-2.4.11/modules/highgui/src/cap_gstreamer.cpp: In member function 'virtual bool CvCapture_GStreamer::grabFrame()': /tmp/SBo/opencv-2.4.11/modules/highgui/src/cap_gstreamer.cpp:232:57: error: 'gst_app_sink_pull_sample' was not declared in this scope sample = gst_app_sink_pull_sample(GST_APP_SINK(sink)); ^ make[2]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/src/cap_gstreamer.cpp.o] Error 1 ``` The configuration report for the compilation is as follows: ``` -- General configuration for OpenCV 2.4.11 ===================================== -- Version control: unknown -- -- Platform: -- Host: Linux 3.10.17 i686 -- CMake: 2.8.12 -- CMake generator: Unix Makefiles -- CMake build tool: /usr/bin/gmake -- Configuration: Release -- -- C/C++: -- Built as dynamic libs?: YES -- C++ Compiler: /usr/bin/c++ (ver 4.8.2) -- C++ flags (Release): -O2 -march=i486 -mtune=i686 -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -fdiagnostics-show-option -pthread -march=i686 -fomit-frame-pointer -msse -msse2 -msse3 -mfpmath=sse -ffunction-sections -O2 -DNDEBUG -DNDEBUG -- C++ flags (Debug): -O2 -march=i486 -mtune=i686 -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations 
-Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -fdiagnostics-show-option -pthread -march=i686 -fomit-frame-pointer -msse -msse2 -msse3 -mfpmath=sse -ffunction-sections -g -O0 -DDEBUG -D_DEBUG -- C Compiler: /usr/bin/cc -- C flags (Release): -O2 -march=i486 -mtune=i686 -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -fdiagnostics-show-option -pthread -march=i686 -fomit-frame-pointer -msse -msse2 -msse3 -mfpmath=sse -ffunction-sections -O2 -DNDEBUG -DNDEBUG -- C flags (Debug): -O2 -march=i486 -mtune=i686 -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -fdiagnostics-show-option -pthread -march=i686 -fomit-frame-pointer -msse -msse2 -msse3 -mfpmath=sse -ffunction-sections -g -O0 -DDEBUG -D_DEBUG -- Linker flags (Release): -- Linker flags (Debug): -- Precompiled headers: NO -- -- OpenCV modules: -- To be built: core flann imgproc highgui features2d calib3d ml video legacy objdetect photo gpu ocl nonfree contrib stitching superres ts videostab -- Disabled: world -- Disabled by dependency: - -- Unavailable: androidcamera dynamicuda java python viz -- -- GUI: -- QT 4.x: YES (ver 4.8.5 EDITION = OpenSource) -- QT OpenGL support: NO -- OpenGL support: NO -- VTK support: NO -- -- Media I/O: -- ZLib: /usr/lib/libz.so (ver 1.2.8) -- JPEG: /usr/lib/libjpeg.so (ver 80) -- PNG: /usr/lib/libpng.so (ver 1.4.12) -- TIFF: /usr/lib/libtiff.so (ver 42 - 3.9.7) -- JPEG 2000: build (ver 1.900.1) -- OpenEXR: build (ver 1.7.1) -- -- Video I/O: -- DC1394 1.x: NO -- DC1394 2.x: YES (ver 2.2.2) -- FFMPEG: NO -- codec: NO -- format: NO -- util: NO -- swscale: NO -- gentoo-style: NO -- GStreamer: -- base: YES (ver 0.10.36) -- video: YES (ver 0.10.36) -- app: YES (ver 0.10.36) -- riff: YES (ver 0.10.36) -- pbutils: YES (ver 0.10.36) -- OpenNI: NO -- OpenNI PrimeSensor Modules: NO -- PvAPI: NO -- GigEVisionSDK: NO -- UniCap: NO -- UniCap ucil: NO -- V4L/V4L2: Using libv4l1 (ver 0.9.5) / libv4l2 (ver 0.9.5) -- XIMEA: NO -- Xine: NO -- -- Other third-party libraries: -- Use IPP: NO -- Use Eigen: NO -- Use TBB: NO -- Use OpenMP: NO -- Use GCD NO -- Use Concurrency NO -- Use C=: NO -- Use Cuda: NO -- Use OpenCL: YES -- -- OpenCL: -- Version: dynamic -- Include path: /tmp/SBo/opencv-2.4.11/3rdparty/include/opencl/1.2 -- Use AMD FFT: NO -- Use AMD BLAS: NO -- -- Python: -- Interpreter: /usr/bin/python2 (ver 2.7.5) -- -- Java: -- ant: NO -- JNI: /usr/lib/java/include /usr/lib/java/include/linux /usr/lib/java/include -- Java tests: NO -- -- Documentation: -- Build Documentation: NO -- Sphinx: NO -- PdfLaTeX compiler: /usr/share/texmf/bin/pdflatex -- Doxygen: YES (/usr/bin/doxygen) -- -- Tests and samples: -- Tests: YES -- Performance tests: YES -- C/C++ Examples: NO -- -- Install path: /usr -- -- cvconfig.h is in: /tmp/SBo/opencv-2.4.11/build ``` I looked through the opencv requirement from below link <http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html#linux-installation> That I need ffmpeg or libav packages, which I cannot find under standard slackware 14.1 packages. 
But I installed gstreamer completely instead (gstreamer, gst-plugins-base, and gst-plugins-good), and the error I encountered above definitely has something to do with gstreamer.
2015/06/01
[ "https://Stackoverflow.com/questions/30578381", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4962063/" ]
It worked for me after I set `WITH_GSTREAMER_0_10` to ON.
I am using Ubuntu 12.04 but got the same error. This can be avoided by using the **-D WITH\_GSTREAMER=OFF** parameter. As advised [here](http://code.opencv.org/issues/3953) and [here](https://stackoverflow.com/questions/23669638/installing-opencv-on-ubuntu-12-04). Then [here](https://gist.github.com/melvincabatuan/c8a95cf4beef39614ce0) they advise to update gstreamer, but this didn't fix it for me. I still want to verify this with a fresh installation.
7,711
70,819,915
This is the code: ``` name = input("Enter file: ") handle = open(name) counts = dict() filetext = handle.read() for line in handle: words = line.split() for word in words: counts[word] = counts.get(word, 0) + 1 print(words) print(counts) bigcount = None bigword = None for word,count in counts.items(): if bigcount == None or count > bigcount: bigword = word bigcount = count print(filetext) print("Most common word: ", bigword, bigcount) print(counts.items()) this is the output: Enter file: pls.txt Traceback (most recent call last): File "D:\Tools\Coding\PyCharm Community Edition 2021.2.3\bin\pythonProject2\Mostcommonword.py", line 10, in <module> print(words) NameError: name 'words' is not defined ``` Process finished with exit code 1 When running the program, instead of returning the most common number, it returned None. I managed to find out that the reason for that is that the "words" list is completely empty, for some reason. The good thing about simple problems is that I know what's going on. The bad thing is that there are not many ways to fix it at all.
2022/01/23
[ "https://Stackoverflow.com/questions/70819915", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18007432/" ]
After you do `handle.read()`, the file is positioned at end-of-file. There's nothing left for the `for line in handle:` to read. You either need to rewind in between (`handle.seek(0)`), or just skip the first read altogether.
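A minimal sketch of the rewind option, using the names from the question:

```
name = input("Enter file: ")
counts = dict()

with open(name) as handle:
    filetext = handle.read()   # this leaves the file positioned at end-of-file
    handle.seek(0)             # rewind so the loop below sees the lines again
    for line in handle:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
```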
You could consider building a list of lines as you iterate over the file handle rather than doing a read/seek. Also, you can work out the most frequently occurring word as you work through the file rather than doing a second pass of your dictionary. Something like this: ``` D = dict() M = 0 B = None C = list() with open('<Your filename goes here>') as txt: for line in txt: C.append(line) for word in line.strip().split(): D[word] = D.get(word, 0) + 1 if D[word] > M: B = word M = D[word] print(''.join(C)) print(f'Most common word with {M} occurrences is {B}') ```
7,717