| qid (int64, 46k-74.7M) | question (string, 54-37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 17-26k chars) | response_k (string, 26-26k chars) |
|---|---|---|---|---|---|
17,128,878
|
I was trying to install `autoclose.vim` to Vim. I noticed I didn't have a `~/.vim/plugin` folder, so I accidentally made a `~/.vim/plugins` folder (notice the extra 's' in plugins). I then added `au FileType python set rtp += ~/.vim/plugins` to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in `~/.vim/plugin` but not in `~/.vim/plugins`?
|
2013/06/15
|
[
"https://Stackoverflow.com/questions/17128878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2467761/"
] |
[:help load-plugins](http://vimdoc.sourceforge.net/htmldoc/starting.html#load-plugins) outlines how plugins are loaded.
Adding a folder to your `rtp` alone does not suffice; it must have a `plugin` subdirectory. For example, given `:set rtp+=/tmp/foo`, a file `/tmp/foo/plugin/bar.vim` would be detected and loaded, but neither `/tmp/foo/plugins/bar.vim` nor `/tmp/foo/bar.vim` would be.
|
All folders in the `rtp` (runtimepath) option need to have the same folder structure as your `$VIMRUNTIME` (`$VIMRUNTIME` is usually `/usr/share/vim/vim{version}`). So it should have the same subdirectory names e.g. `autoload`, `doc`, `plugin` (whichever you need, but having the same names is key). The plugins should be in their corresponding subdirectory.
Let's say `/path/to/dir` (in your case `~/.vim`) is in your `rtp`; `vim` will
* look for global plugins in `/path/to/dir/plugin`
* look for file-type plugins in `/path/to/dir/ftplugin`
* look for syntax files in `/path/to/dir/syntax`
* look for help files in `/path/to/dir/doc`
and so on...
`vim` only looks for a couple of recognized subdirectories† in `/path/to/dir`. If you have some unrecognized subdirectory name in there (like `/path/to/dir/plugins`), `vim` won't see it.
† "recognized" here means that a subdirectory of the same name can be found in `/usr/share/vim/vim{version}` or wherever you have `vim` installed.
|
17,128,878
|
I was trying to install `autoclose.vim` to Vim. I noticed I didn't have a `~/.vim/plugin` folder, so I accidentally made a `~/.vim/plugins` folder (notice the extra 's' in plugins). I then added `au FileType python set rtp += ~/.vim/plugins` to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in `~/.vim/plugin` but not in `~/.vim/plugins`?
|
2013/06/15
|
[
"https://Stackoverflow.com/questions/17128878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2467761/"
] |
[:help load-plugins](http://vimdoc.sourceforge.net/htmldoc/starting.html#load-plugins) outlines how plugins are loaded.
Adding a folder to your `rtp` alone does not suffice; it must have a `plugin` subdirectory. For example, given `:set rtp+=/tmp/foo`, a file `/tmp/foo/plugin/bar.vim` would be detected and loaded, but neither `/tmp/foo/plugins/bar.vim` nor `/tmp/foo/bar.vim` would be.
|
In addition to @Nikita Kouevda's answer: modifying `rtp` on the `FileType` event may be too late for vim to load any plugins from the modified runtimepath. If the event fires after the vimrc is sourced, plugins from the new addition are not guaranteed to be loaded; if it fires after the `VimEnter` event, plugins from the new addition are guaranteed *not* to be sourced automatically.
If you want to *source* autoclose only when you edit python files you should use `:au FileType python :source ~/.vim/macros/autoclose.vim` (note: `macros` or any other subdirectory except `plugin` and directories found in `$VIMRUNTIME` or even any directory not found in runtimepath at all).
If you want to *use* autoclose only when you edit python files, you should check the plugin source and documentation; there must be support on the plugin side for this to work.
Or, if autoclose does not support this, use the `:au FileType` command from the paragraph above, but prepend the `source` call with something that records vim state (commands, mappings and autocommands), append the same after `source`, find the differences in state, and delete those differences on each `:au BufEnter` when the filetype is not python (restoring them otherwise). This is hacky and may introduce strange bugs. An example of state-recording and diff-determining code may be found [here](https://github.com/MarcWeber/vim-addon-manager/blob/extended-autoload/autoload/vam/autoloading.vim).
|
17,128,878
|
I was trying to install `autoclose.vim` to Vim. I noticed I didn't have a `~/.vim/plugin` folder, so I accidentally made a `~/.vim/plugins` folder (notice the extra 's' in plugins). I then added `au FileType python set rtp += ~/.vim/plugins` to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in `~/.vim/plugin` but not in `~/.vim/plugins`?
|
2013/06/15
|
[
"https://Stackoverflow.com/questions/17128878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2467761/"
] |
You are on the right track with `set rtp+=...`, but there is a bit more to it (`rtp` is non-recursive, help indexing, many corner cases) than meets the eye, so it is not a very good idea to do it yourself unless you are ready for a months-long drop in productivity.
If you want to store all your plugins in a special directory you should use a proper `runtimepath`/plugin-management solution. I suggest [Pathogen](http://www.vim.org/scripts/script.php?script_id=2332) (`rtp`-manager) or [Vundle](http://www.vim.org/scripts/script.php?script_id=3458) (plugin-manager) but there are many others.
|
All folders in the `rtp` (runtimepath) option need to have the same folder structure as your `$VIMRUNTIME` (`$VIMRUNTIME` is usually `/usr/share/vim/vim{version}`). So it should have the same subdirectory names e.g. `autoload`, `doc`, `plugin` (whichever you need, but having the same names is key). The plugins should be in their corresponding subdirectory.
Let's say `/path/to/dir` (in your case `~/.vim`) is in your `rtp`; `vim` will
* look for global plugins in `/path/to/dir/plugin`
* look for file-type plugins in `/path/to/dir/ftplugin`
* look for syntax files in `/path/to/dir/syntax`
* look for help files in `/path/to/dir/doc`
and so on...
`vim` only looks for a couple of recognized subdirectories† in `/path/to/dir`. If you have some unrecognized subdirectory name in there (like `/path/to/dir/plugins`), `vim` won't see it.
† "recognized" here means that a subdirectory of the same name can be found in `/usr/share/vim/vim{version}` or wherever you have `vim` installed.
|
17,128,878
|
I was trying to install `autoclose.vim` to Vim. I noticed I didn't have a `~/.vim/plugin` folder, so I accidentally made a `~/.vim/plugins` folder (notice the extra 's' in plugins). I then added `au FileType python set rtp += ~/.vim/plugins` to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in `~/.vim/plugin` but not in `~/.vim/plugins`?
|
2013/06/15
|
[
"https://Stackoverflow.com/questions/17128878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2467761/"
] |
In addition to @Nikita Kouevda's answer: modifying `rtp` on the `FileType` event may be too late for vim to load any plugins from the modified runtimepath. If the event fires after the vimrc is sourced, plugins from the new addition are not guaranteed to be loaded; if it fires after the `VimEnter` event, plugins from the new addition are guaranteed *not* to be sourced automatically.
If you want to *source* autoclose only when you edit python files you should use `:au FileType python :source ~/.vim/macros/autoclose.vim` (note: `macros` or any other subdirectory except `plugin` and directories found in `$VIMRUNTIME` or even any directory not found in runtimepath at all).
If you want to *use* autoclose only when you edit python files, you should check the plugin source and documentation; there must be support on the plugin side for this to work.
Or, if autoclose does not support this, use the `:au FileType` command from the paragraph above, but prepend the `source` call with something that records vim state (commands, mappings and autocommands), append the same after `source`, find the differences in state, and delete those differences on each `:au BufEnter` when the filetype is not python (restoring them otherwise). This is hacky and may introduce strange bugs. An example of state-recording and diff-determining code may be found [here](https://github.com/MarcWeber/vim-addon-manager/blob/extended-autoload/autoload/vam/autoloading.vim).
|
All folders in the `rtp` (runtimepath) option need to have the same folder structure as your `$VIMRUNTIME` (`$VIMRUNTIME` is usually `/usr/share/vim/vim{version}`). So it should have the same subdirectory names e.g. `autoload`, `doc`, `plugin` (whichever you need, but having the same names is key). The plugins should be in their corresponding subdirectory.
Let's say `/path/to/dir` (in your case `~/.vim`) is in your `rtp`; `vim` will
* look for global plugins in `/path/to/dir/plugin`
* look for file-type plugins in `/path/to/dir/ftplugin`
* look for syntax files in `/path/to/dir/syntax`
* look for help files in `/path/to/dir/doc`
and so on...
`vim` only looks for a couple of recognized subdirectories† in `/path/to/dir`. If you have some unrecognized subdirectory name in there (like `/path/to/dir/plugins`), `vim` won't see it.
† "recognized" here means that a subdirectory of the same name can be found in `/usr/share/vim/vim{version}` or wherever you have `vim` installed.
|
53,494,097
|
I am trying to get hands-on with Selenium and WebDriver with Python.
```
from selenium import webdriver
PROXY = "119.82.253.95:61853"
url = 'http://google.co.in/search?q=book+flights'
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % PROXY)
driver = webdriver.Chrome(options=chrome_options, executable_path="/usr/local/bin/chromedriver")
driver.get(url)
driver.implicitly_wait(20)
```
When I access it normally, without a proxy, everything works fine. But when I try to access it through the proxy, it shows a captcha with the message "Our systems have detected unusual traffic from your computer". How do I avoid it?
|
2018/11/27
|
[
"https://Stackoverflow.com/questions/53494097",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2954789/"
] |
`fscanf` is a non-starter. The only way to read empty fields would be to use `"%c"` to read delimiters (and that would require you to know which fields were empty beforehand -- not very useful). Otherwise, depending on the *format specifier* used, `fscanf` would simply consume the tabs as leading whitespace or experience a *matching failure* or *input failure*.
Continuing from the comment, in order to tokenize based on delimiters that may separate empty fields, you will need to use `strsep` as `strtok` will consider consecutive delimiters as one.
While it is a bit unclear where the tabs are located in your string, a short example of tokenizing with `strsep` could be as follows. Note that `strsep` takes a pointer-to-pointer as its first argument, e.g.
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main (void) {
    int n = 0;
    const char *delim = "\t\n";
    char *s = strdup ("usrid\tUser Id 0\t15\tstring\td\tk\ty\ty\t\t\t0\t0"),
         *toks = s, /* tokenize with separate pointer to preserve s */
         *p;

    while ((p = strsep (&toks, delim)))
        printf ("token[%2d]: '%s'\n", n++ + 1, p);

    free (s);
}
```
(**note:** since `strsep` will modify the address held by the string pointer, you need to preserve a pointer to the beginning of `s` so it can be freed when no longer needed -- thanks JL)
**Example Use/Output**
```
$ ./bin/strtok_tab
token[ 1]: 'usrid'
token[ 2]: 'User Id 0'
token[ 3]: '15'
token[ 4]: 'string'
token[ 5]: 'd'
token[ 6]: 'k'
token[ 7]: 'y'
token[ 8]: 'y'
token[ 9]: ''
token[10]: ''
token[11]: '0'
token[12]: '0'
```
Look things over and let me know if you have further questions.
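(As a cross-language aside, Python's string splitting shows the same distinction this answer describes: splitting on an explicit separator preserves empty fields, while splitting on whitespace runs collapses them the way `strtok` collapses consecutive delimiters. A quick sketch:)

```python
line = "a\t\tb"

# Explicit separator: the empty field between consecutive tabs is preserved
print(line.split("\t"))  # ['a', '', 'b']

# No separator: runs of whitespace are collapsed, so the empty field is lost
print(line.split())      # ['a', 'b']
```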
|
>
> I wanna use fscanf to read consecutive tabs as empty fields and store them in a structure.
>
>
>
Ideally, code should read a *line*, as with `fgets()` and then parse the *string*.
Yet staying with `fscanf()`, this can be done in a loop.
---
The main idea is to use `"%[^\t\n]"` to read one token. If the next character is a `'\t'`, then the return value will not be 1. Test for that. A width limit is wise.
Then read the separator and look for tab, end-of-line or if end-of-file/error occurred.
```
#define TABS_PER_LINE 12
#define TOKENS_PER_LINE (TABS_PER_LINE + 1)
#define TOKEN_SIZE 100
#define TOKEN_FMT_N "99"
int fread_tab_delimited_line(FILE *istream, int n, char token[n][TOKEN_SIZE]) {
    for (int i = 0; i < n; i++) {
        int token_count = fscanf(istream, "%" TOKEN_FMT_N "[^\t\n]", token[i]);
        if (token_count != 1) {
            token[i][0] = '\0'; // Empty token
        }
        char separator;
        int term_count = fscanf(istream, "%c", &separator); // fgetc() makes more sense here
        // if end-of-file or end-of-line
        if (term_count != 1 || separator == '\n') {
            if (i == 0 && token_count != 1 && term_count != 1) {
                return 0;
            }
            return i + 1;
        }
        if (separator != '\t') {
            return -1; // Token too long
        }
    }
    return -1; // Too many tokens found
}
```
Sample driving code
```
void test_tab_delimited_line(FILE *istream) {
    char token[TOKENS_PER_LINE][TOKEN_SIZE];
    long long line_count = 0;
    int token_count;
    while ((token_count = fread_tab_delimited_line(istream, TOKENS_PER_LINE, token)) > 0) {
        printf("Line %lld\n", ++line_count);
        for (int i = 0; i < token_count; i++) {
            printf("%d: <%s>\n", i, token[i]);
        }
    }
    if (token_count < 0) {
        puts("Trouble reading any tokens.");
    }
}
```
|
63,322,884
|
I have a Python script that is responsible for verifying the existence of a process by its name. I am using the pip module `pgrep`. The problem is that it does not let me kill the processes with the `kill` module from pip or with `os.kill`, because there are several processes that I want to kill and they are saved in a list, for example
`pid = [2222, 4444, 6666]`
How could I kill those processes at once, since the above modules don't give me results?
|
2020/08/09
|
[
"https://Stackoverflow.com/questions/63322884",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14063362/"
] |
You would loop over processes using a `for` loop. Ideally you should send a `SIGTERM` before resorting to `SIGKILL`, because it can allow processes to exit more gracefully.
```
import time
import os
import signal
# send all the processes a SIGTERM
for p in pid:
    os.kill(p, signal.SIGTERM)
# give them a short time to do any cleanup
time.sleep(2)
# in case still exist - send them a SIGKILL to definitively remove them
# if they are already exited, just ignore the error and carry on
for p in pid:
    try:
        os.kill(p, signal.SIGKILL)
    except ProcessLookupError:
        pass
```
|
Try this; it may work:
```
import psutil

processes = {'pro1', 'pro2', 'pro3'}
for proc in psutil.process_iter():
    if proc.name() in processes:
        proc.kill()
```
For more information, see the [psutil documentation](https://psutil.readthedocs.io/en/latest/).
|
53,546,396
|
How do I truncate digits after the decimal point in Python without rounding?
Example: I have x = 2.97656
I want it to be 2.9, not 3.0
Thank you
|
2018/11/29
|
[
"https://Stackoverflow.com/questions/53546396",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9705031/"
] |
If you don't want to use `round()` you can use `math.floor()`:
```
import math
x = 2.97656
print(math.floor(x * 10) / 10)
#Output = 2.9
```
|
You can use `round(var, precision)`.
See this link for more info:
**<https://www.geeksforgeeks.org/precision-handling-python/>**
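Note that `round` rounds to the nearest value, so for the question's example it yields 3.0 rather than the desired 2.9; a quick comparison with the floor-based truncation from the other answer:

```python
import math

x = 2.97656

# round() rounds to the nearest value at the given precision
print(round(x, 1))              # 3.0

# floor-based truncation keeps the digit unchanged
print(math.floor(x * 10) / 10)  # 2.9
```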
|
64,575,636
|
I'm trying to convert json data into a dict by using load() but I'm unable to do so if I have more than one object. For example, the code below works perfectly, I can dump 'dog' into a json file and then I can load 'dog' and print it out as a dict.
```
import json
dog = {
"name":"Sally",
"color": "yellow",
"breed": "lab",
"age": 2,
},
with open("Pets.json","w") as output_file:
    json.dump(dog,output_file)
with open("Pets.json","r") as infile:
    dog_dict = json.load(infile)
print(dog_dict)
```
Output:
[{'name': 'Sally', 'color': 'yellow', 'breed': 'lab', 'age': 2}]
However, let's say I add an object 'cat' to the existing code:
```
dog = {
"name":"Sally",
"color": "yellow",
"breed": "lab",
"age": 2,
},
cat = {
"name":"Daniel",
"color": "black",
"breed": "unknown",
"age": 8,
}
with open("Pets.json","w") as output_file:
    json.dump(dog,output_file)
    json.dump(cat,output_file)
with open("Pets.json","r") as infile:
    dog_dict = json.load(infile)
    cat_dict = json.load(infile)
print(dog_dict)
print(cat_dict)
```
I can successfully dump 'dog' and 'cat' into the json file, but when I try to load both 'dog' and 'cat' as dicts, I get an error message:
```
dog_dict = json.load(infile)
File "/usr/lib/python3.8/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.8/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 65 (char 64)
```
|
2020/10/28
|
[
"https://Stackoverflow.com/questions/64575636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14209856/"
] |
You should only dump one JSON value per file. `json.load` tries to parse the whole file; it does not stop at the first valid JSON object it finds.
You could combine them into an array
```
j_obj = [dog, cat]
```
Or create a new dict
```
j_obj = {'dog': dog, 'cat': cat}
```
Then `j_obj` can be dumped to a file and read back, and you'll still be able to get `dog` and `cat` individually if you need them that way.
A quick note: in your first example, the trailing `,` after the dog object actually makes what you're dumping a JSON array, which is what you are printing out:
```
[{'name': 'Sally', 'color': 'yellow', 'breed': 'lab', 'age': 2}]
```
It's not just a dog dictionary.
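Putting the two options together, a minimal sketch of the dictionary variant (done in memory with `dumps`/`loads`, so no file is needed):

```python
import json

dog = {"name": "Sally", "color": "yellow", "breed": "lab", "age": 2}
cat = {"name": "Daniel", "color": "black", "breed": "unknown", "age": 8}

# One JSON value holding both pets, keyed by name
j_obj = {"dog": dog, "cat": cat}
text = json.dumps(j_obj)

loaded = json.loads(text)
print(loaded["dog"]["name"])  # Sally
print(loaded["cat"]["age"])   # 8
```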
|
The JSON module doesn't append automatically.
If you want your JSON to contain a number of objects, use an array: insert your dictionaries into it, then dump the array.
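A minimal sketch of that array approach, reusing the pets from the question (it writes `Pets.json` to the current directory, as the question's code does):

```python
import json

dog = {"name": "Sally", "color": "yellow", "breed": "lab", "age": 2}
cat = {"name": "Daniel", "color": "black", "breed": "unknown", "age": 8}

# Dump one array holding both objects...
with open("Pets.json", "w") as output_file:
    json.dump([dog, cat], output_file)

# ...and load it back as one list, unpacking the two dicts
with open("Pets.json") as infile:
    dog_dict, cat_dict = json.load(infile)

print(dog_dict["name"])  # Sally
print(cat_dict["name"])  # Daniel
```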
|
64,575,636
|
I'm trying to convert json data into a dict by using load() but I'm unable to do so if I have more than one object. For example, the code below works perfectly, I can dump 'dog' into a json file and then I can load 'dog' and print it out as a dict.
```
import json
dog = {
"name":"Sally",
"color": "yellow",
"breed": "lab",
"age": 2,
},
with open("Pets.json","w") as output_file:
    json.dump(dog,output_file)
with open("Pets.json","r") as infile:
    dog_dict = json.load(infile)
print(dog_dict)
```
Output:
[{'name': 'Sally', 'color': 'yellow', 'breed': 'lab', 'age': 2}]
However, let's say I add an object 'cat' to the existing code:
```
dog = {
"name":"Sally",
"color": "yellow",
"breed": "lab",
"age": 2,
},
cat = {
"name":"Daniel",
"color": "black",
"breed": "unknown",
"age": 8,
}
with open("Pets.json","w") as output_file:
    json.dump(dog,output_file)
    json.dump(cat,output_file)
with open("Pets.json","r") as infile:
    dog_dict = json.load(infile)
    cat_dict = json.load(infile)
print(dog_dict)
print(cat_dict)
```
I can successfully dump 'dog' and 'cat' into the json file, but when I try to load both 'dog' and 'cat' as dicts, I get an error message:
```
dog_dict = json.load(infile)
File "/usr/lib/python3.8/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.8/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 65 (char 64)
```
|
2020/10/28
|
[
"https://Stackoverflow.com/questions/64575636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14209856/"
] |
Before dumping, include your objects in a list, then dump them:
```
dog = {
"name":"Sally",
"color": "yellow",
"breed": "lab",
"age": 2,
},
cat = {
"name":"Daniel",
"color": "black",
"breed": "unknown",
"age": 8,
}
all_objects = [dog, cat]
with open("Pets.json","w") as output_file:
    json.dump(all_objects, output_file)
```
|
The JSON module doesn't append automatically.
If you want your JSON to contain a number of objects, use an array: insert your dictionaries into it, then dump the array.
|
63,074,629
|
I am a newbie to Python dictionaries. Excuse me for my mistakes.
I want to create a list of **all** the keys which have the maximum and minimum values in a Python dictionary. I searched about it on Google but didn't get any answer.
I have written the following code:
```
a = {1:1, 2:3, 4:3, 3:2, 5:1, 6:3}
maxi = [keys for keys, values in a.items() if keys == max(a, key=a.get)]
mini = [keys for keys, values in a.items() if keys == min(a, key=a.get)]
print(maxi)
print(mini)
```
My output:
```
[2]
[1]
```
Expected output:
```
[2,4,6]
[1,5]
```
What did I do wrong? Is there any better (or other) way to do this?
I would be more than happy for your help.
Thanks in advance!
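(For reference: the comprehensions above compare each key against the single key returned by `max(a, key=a.get)`, so only that one key can ever match. Comparing *values* against the extreme value instead gives the expected lists; a minimal sketch:)

```python
a = {1: 1, 2: 3, 4: 3, 3: 2, 5: 1, 6: 3}

# Compute the extreme values once, then collect every key holding them
max_value = max(a.values())
min_value = min(a.values())

maxi = [k for k, v in a.items() if v == max_value]
mini = [k for k, v in a.items() if v == min_value]

print(maxi)  # [2, 4, 6]
print(mini)  # [1, 5]
```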
|
2020/07/24
|
[
"https://Stackoverflow.com/questions/63074629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13285566/"
] |
The `ngModel` binding might have precedence here. You could ignore the `value` attribute and set `updatedStockValue` in its definition.
Try the following
```js
@Component({
  selector: 'app-stock-status',
  template: `
    <input type="number" min="0" required [(ngModel)]="updatedStockValue"/>
    <button class="btn btn-primary" [style.background]="color" (click)="stockValueChanged()">Change Stock Value</button>
  `,
  styleUrls: ['./stock-status.component.css']
})
export class AppComponent {
  updatedStockValue: number = 0;
  ...
}
```
|
You can initialize a variable in the template with ng-init if you don't want to do it in the controller.
```
<input type="number" min='0' required [(ngModel)]='updatedStockValue'
ng-init="updatedStockValue=0"/>
```
|
68,500,403
|
I am using Pandas to analyze a dataset which includes a column named "Age on Intake" (floating-point numbers). I have been trying to further categorize the data into a few small age buckets using the function I wrote. However, I keep getting the error **"*'<=' not supported between instances of 'str' and 'int'*"**. How could I fix this, please?
**My function:**
```
def convert_age(num):
    if num <= 7:
        return "0-7 days"
    elif num <= 21:
        return "1-3 weeks"
    elif num <= 42:
        return "3-6 weeks"
    elif num <= 84:
        return "7-12 weeks"
    elif num <= 168:
        return "12 weeks - 6 months"
    elif num <= 365:
        return "6-12 months"
    elif num <= 730:
        return "1-2 years"
    elif num <= 1095:
        return "2-3 years"
    else:
        return "3+ years"

df['Age on Intake'] = df['Age on Intake'].apply(convert_age)
```
**The df['Age on Intake'] column includes floating numbers:**
```
0 95.0
1 1096.0
2 111.0
3 111.0
4 397.0
...
21474 NaN
21475 NaN
21476 365.0
21477 699.0
21478 61.0
Name: Age on Intake, Length: 21479, dtype: float64
```
**Error Message I get:**
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-31-ca12621d6b19> in <module>
22 return "3+ years"
23
---> 24 df['Age on Intake'] = df['Age on Intake'].apply(convert_age)
25
26
/opt/anaconda3/lib/python3.8/site-packages/pandas/core/series.py in apply(self, func, convert_dtype, args, **kwds)
4198 else:
4199 values = self.astype(object)._values
-> 4200 mapped = lib.map_infer(values, f, convert=convert_dtype)
4201
4202 if len(mapped) and isinstance(mapped[0], Series):
pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()
<ipython-input-31-ca12621d6b19> in convert_age(num)
3 def convert_age(num):
4
----> 5 if num <=7:
6 return "0-7 days"
7 elif num <= 21:
TypeError: '<=' not supported between instances of 'str' and 'int'
```
|
2021/07/23
|
[
"https://Stackoverflow.com/questions/68500403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16494766/"
] |
`for` loops in Rust act on iterators, so if you want succinct semantics, change your code to use them. There's not really that much other choice - what's ergonomic in C isn't necessarily ergonomic in Rust, and vice versa.
If your `next` functions follow a common pattern, you can create a structure that implements `Iterator` that takes the `next` function as a `FnMut` closure.
In my opinion, your "useless variable" is only there because you've special cased getting the first element by doing it without the `next` function. If you changed your code so that `next(None)` returns the first item, you wouldn't need that.
|
It would be most idiomatic to convert the code to use an `Iterator`, but that is "non-trivial" in this case due to how `next` works. The simplest version I could create that stays close to the C code, yet is IMO reasonably idiomatic, is an `on_each` style function that accepts a closure.
```
#[derive(Default)]
pub struct SomeData {}

fn next(data: &mut SomeData) -> bool { todo!() }
fn process(data: &mut SomeData) { todo!() }
fn condition1(data: &SomeData) -> bool { todo!() }
fn condition2(data: &SomeData) -> bool { todo!() }
fn condition3(data: &SomeData) -> bool { todo!() }

pub fn on_each_data(f: impl for<'a> Fn(&'a mut SomeData)) {
    let mut data = SomeData::default();
    f(&mut data);
    while next(&mut data) {
        f(&mut data);
    }
}

pub fn iterate_data_2() {
    on_each_data(|data| {
        if condition1(data) {
            return;
        }
        if condition2(data) {
            return;
        }
        if condition3(data) {
            return;
        }
        process(data)
    });
}
```
|
55,639,746
|
I am new to python and Jupyter Notebook
The objective of the code I am writing is to ask the user to enter 10 different integers. The program is supposed to return the highest odd number entered by the user.
My code is as follows:
```
i=1
c=1
y=1
while i<=10:
    c=int(input('Enter an integer number: '))
    if c%2==0:
        print('The number is even')
    elif c> y
        y=c
    print('y')
    i=i+1
```
My loop is running over and over again, and I don't get a solution.
I guess the code is well written. It must be a slight detail I am not seeing.
Any help would be much appreciated!
|
2019/04/11
|
[
"https://Stackoverflow.com/questions/55639746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10023598/"
] |
You have `elif c > y`, you should just need to add a colon there so it's `elif c > y:`
|
Yup.
```
i=1
c=1
y=1
while i<=10:
    c=int(input('Enter an integer number: '))  # This line was off
    if c%2==0:
        print('The number is even')
    elif c> y:  # Need also ':'
        y=c
    print('y')
    i=i+1
```
|
55,639,746
|
I am new to python and Jupyter Notebook
The objective of the code I am writing is to ask the user to enter 10 different integers. The program is supposed to return the highest odd number entered by the user.
My code is as follows:
```
i=1
c=1
y=1
while i<=10:
    c=int(input('Enter an integer number: '))
    if c%2==0:
        print('The number is even')
    elif c> y
        y=c
    print('y')
    i=i+1
```
My loop is running over and over again, and I don't get a solution.
I guess the code is well written. It must be a slight detail I am not seeing.
Any help would be much appreciated!
|
2019/04/11
|
[
"https://Stackoverflow.com/questions/55639746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10023598/"
] |
You have `elif c > y`, you should just need to add a colon there so it's `elif c > y:`
|
You can write this in a much more compact fashion, like so.
Start by asking for 10 numbers on a single line, separated by commas. Then split the string by `,` into a list of numbers, and exit if exactly 10 numbers are not provided.
```
numbers_str = input("Input 10 integers separated by a comma(,) >>> ")
numbers = [int(number.strip()) for number in numbers_str.split(',')]
if len(numbers) != 10:
    print("You didn't enter 10 numbers! try again")
    exit()
```
A bad run of the code above might be
```
Input 10 integers separated by a comma(,) >>> 1,2,3,4
You didn't enter 10 numbers! try again
```
Assuming 10 integers are provided, loop through the elements, considering only odd numbers and updating the highest odd number as you go.
```
largest = None
for number in numbers:
    if number % 2 != 0 and (not largest or number > largest):
        largest = number
```
Finally, check whether `largest` is `None`, which means there were no odd numbers; inform the user in that case, otherwise display the largest odd number.
```
if largest is None:
    print("You didn't enter any odd numbers")
else:
    print("Your largest odd number was:", largest)
```
Possible outputs are
```
Input 10 integers separated by a comma(,) >>> 1,2,3,4,5,6,7,8,9,10
Your largest odd number was: 9
```
```
Input 10 integers separated by a comma(,) >>> 2,4,6,8,2,4,6,8,2,4
You didn't enter any odd numbers
```
|
32,893,568
|
I'm trying to parse a JSON string with an escape character (of some sort, I guess):
```
{
"publisher": "\"O'Reilly Media, Inc.\""
}
```
The parser works if I remove the `\"` characters from the string. The exceptions raised by different parsers are:
**json**
```
File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 17 column 20 (char 392)
```
**ujson**
```
ValueError: Unexpected character in found when decoding object value
```
How do I make the parser escape these characters?
update:
[](https://i.stack.imgur.com/cY8l2.png)
*ps. json is imported as ujson in this example*
[](https://i.stack.imgur.com/2d195.png)
This is what my ide shows
The comma was just added accidentally; the json has no trailing comma at the end and is valid.
[](https://i.stack.imgur.com/uuFTB.png)
the string definition.
|
2015/10/01
|
[
"https://Stackoverflow.com/questions/32893568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4597501/"
] |
You almost certainly did not define properly escaped backslashes. If you define the string properly the JSON parses *just fine*:
```
>>> import json
>>> json_str = r'''
... {
... "publisher": "\"O'Reilly Media, Inc.\""
... }
... ''' # raw string to prevent the \" from being interpreted by Python
>>> json.loads(json_str)
{u'publisher': u'"O\'Reilly Media, Inc."'}
```
Note that I used a *raw string literal* to define the string in Python; if I did not, the `\"` would be interpreted by Python and a regular `"` would be inserted. You'd have to *double* the backslash otherwise:
```
>>> print '\"'
"
>>> print '\\"'
\"
>>> print r'\"'
\"
```
Reencoding the parsed Python structure back to JSON shows the backslashes re-appearing, with the `repr()` output for the string using the same double backslash:
```
>>> json.dumps(json.loads(json_str))
'{"publisher": "\\"O\'Reilly Media, Inc.\\""}'
>>> print json.dumps(json.loads(json_str))
{"publisher": "\"O'Reilly Media, Inc.\""}
```
If you did not escape the `\` escape you'll end up with unescaped quotes:
```
>>> json_str_improper = '''
... {
... "publisher": "\"O'Reilly Media, Inc.\""
... }
... '''
>>> print json_str_improper
{
"publisher": ""O'Reilly Media, Inc.""
}
>>> json.loads(json_str_improper)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 3 column 20 (char 22)
```
Note that the `\"` sequences now are printed as `"`, the backslash is gone!
|
Your JSON is invalid. If you have questions about your JSON objects, you can always validate them with [JSONlint](http://jsonlint.com). In your case you have an object
```
{
"publisher": "\"O'Reilly Media, Inc.\"",
}
```
and you have an extra comma indicating that something else should be coming. So JSONlint yields
>
> Parse error on line 2:
> ...edia, Inc.\"", }
> ---------------------^
> Expecting 'STRING'
>
>
>
which would begin to help you find where the error was.
Removing the comma for
```
{
"publisher": "\"O'Reilly Media, Inc.\""
}
```
yields
>
> Valid JSON
>
>
>
Update: I'm keeping the stuff in about JSONlint as it may be helpful to others in the future. As for your well formed JSON object, I have
```
import json
d = {
"publisher": "\"O'Reilly Media, Inc.\""
}
print "Here is your string parsed."
print(json.dumps(d))
```
yielding
>
> Here is your string parsed.
> {"publisher": "\"O'Reilly Media, Inc.\""}
>
>
> Process finished with exit code 0
>
>
>
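If you'd rather stay in Python, the standard `json` module reports a similar position for the parse error; a small sketch (the sample string below is mine, mirroring the object above with its trailing comma left in):

```python
import json

# Same object as above, with the trailing comma left in (sample string is mine)
bad = '{\n    "publisher": "\\"O\'Reilly Media, Inc.\\"",\n}'

try:
    json.loads(bad)
except ValueError as e:  # json's decode errors subclass ValueError
    print(e)             # reports the line/column of the parse error, much like JSONlint
```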
|
35,901,517
|
I get the following error when I run my code which has been annotated with @profile:
```
Wrote profile results to monthly_spi_gamma.py.prof
Traceback (most recent call last):
File "/home/james.adams/anaconda2/lib/python2.7/site-packages/kernprof.py", line 233, in <module>
sys.exit(main(sys.argv))
File "/home/james.adams/anaconda2/lib/python2.7/site-packages/kernprof.py", line 223, in main
prof.runctx('execfile_(%r, globals())' % (script_file,), ns, ns)
File "/home/james.adams/anaconda2/lib/python2.7/cProfile.py", line 140, in runctx
exec cmd in globals, locals
File "<string>", line 1, in <module>
File "monthly_spi_gamma.py", line 1, in <module>
import indices
File "indices.py", line 14, in <module>
@profile
NameError: name 'profile' is not defined
```
Can anyone comment as to what may solve the problem? I am using Python 2.7 (Anaconda) on Windows 7.
|
2016/03/09
|
[
"https://Stackoverflow.com/questions/35901517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/85248/"
] |
I worked this out by using the -l option, i.e.
```
$ kernprof.py -l my_code.py
```
|
```
kernprof -l -b web_app.py
```
This worked for me. If we run
```
kernprof --help
```
we see an option (`-b`) to put `profile` in the builtin namespace:
```
usage: kernprof [-h] [-V] [-l] [-b] [-o OUTFILE] [-s SETUP] [-v] [-u UNIT]
[-z]
script ...
Run and profile a python script.
positional arguments:
script The python script file to run
args Optional script arguments
optional arguments:
-h, --help show this help message and exit
-V, --version show program's version number and exit
-l, --line-by-line Use the line-by-line profiler instead of cProfile.
Implies --builtin.
-b, --builtin Put 'profile' in the builtins. Use
'profile.enable()'/'.disable()', '@profile' to
decorate functions, or 'with profile:' to profile a
section of code.
-o OUTFILE, --outfile OUTFILE
Save stats to <outfile> (default: 'scriptname.lprof'
with --line-by-line, 'scriptname.prof' without)
-s SETUP, --setup SETUP
Code to execute before the code to profile
-v, --view View the results of the profile in addition to saving
it
-u UNIT, --unit UNIT Output unit (in seconds) in which the timing info is
displayed (default: 1e-6)
-z, --skip-zero Hide functions which have not been called
```
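As a side note, if the same script is sometimes run without kernprof, a common fallback (my own suggestion, not part of kernprof) is to define a no-op `profile` so the decorator does not raise `NameError`:

```python
# If the script is also run without kernprof, '@profile' raises NameError.
# Define a no-op fallback when kernprof has not injected 'profile' into builtins.
try:
    profile  # injected into builtins by 'kernprof -l' or 'kernprof -b'
except NameError:
    def profile(func):
        return func

@profile
def work(x):
    return x * 2
```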
|
32,838,802
|
Say that I have a color image, and naturally this will be represented by a 3-dimensional array in python, say of shape (n x m x 3) and call it img.
I want a new 2-d array, call it "narray" to have a shape (3,nxm), such that each row of this array contains the "flattened" version of R,G,and B channel respectively. Moreover, it should have the property that I can easily reconstruct back any of the original channel by something like
```
narray[0,].reshape(img.shape[0:2]) #so this should reconstruct back the R channel.
```
The question is how can I construct the "narray" from "img"? The simple img.reshape(3,-1) does not work, as the order of the elements is not what I want.
Thanks
|
2015/09/29
|
[
"https://Stackoverflow.com/questions/32838802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4929035/"
] |
You need to use [`np.transpose`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html) to rearrange dimensions. Now, `n x m x 3` is to be converted to `3 x (n*m)`, so send the last axis to the front and shift right the order of the remaining axes `(0,1)`. Finally, reshape to have `3` rows. Thus, the implementation would be:
```
img.transpose(2,0,1).reshape(3,-1)
```
Sample run -
```
In [16]: img
Out[16]:
array([[[155, 33, 129],
[161, 218, 6]],
[[215, 142, 235],
[143, 249, 164]],
[[221, 71, 229],
[ 56, 91, 120]],
[[236, 4, 177],
[171, 105, 40]]])
In [17]: img.transpose(2,0,1).reshape(3,-1)
Out[17]:
array([[155, 161, 215, 143, 221, 56, 236, 171],
[ 33, 218, 142, 249, 71, 91, 4, 105],
[129, 6, 235, 164, 229, 120, 177, 40]])
```
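To confirm the round trip the question asks for (reconstructing a channel from a row), a quick self-check; the random sample image here is my own:

```python
import numpy as np

img = np.random.randint(0, 256, size=(4, 2, 3))   # an n x m x 3 sample image
narray = img.transpose(2, 0, 1).reshape(3, -1)    # 3 x (n*m)

# Each row flattens one channel, so a plain reshape recovers it exactly
assert np.array_equal(narray[0].reshape(img.shape[:2]), img[..., 0])
assert np.array_equal(narray[2].reshape(img.shape[:2]), img[..., 2])
```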
|
[ORIGINAL ANSWER]
Let's say we have an array `img` of size `m x n x 3` to transform into an array `new_img` of size `3 x (m*n)`
Initial Solution:
```
new_img = img.reshape((img.shape[0]*img.shape[1]), img.shape[2])
new_img = new_img.transpose()
```
[EDITED ANSWER]
**Flaw**: The reshape starts from the first dimension and reshapes the remainder, so this solution can mix values from the third dimension, which in the case of images would be semantically incorrect.
Adapted Solution:
```
# Dimensions: [m, n, 3]
new_img = img.transpose()
# Dimensions: [3, n, m]
new_img = new_img.reshape(new_img.shape[0], (new_img.shape[1]*new_img.shape[2]))
```
Strict Solution:
```
# Dimensions: [m, n, 3]
new_img = img.transpose((2, 0, 1))
# Dimensions: [3, m, n]
new_img = new_img.reshape(new_img.shape[0], (new_img.shape[1]*new_img.shape[2]))
```
The strict solution is the better way forward because it accounts for the order of dimensions; the results from the `Adapted` and `Strict` solutions will be identical in terms of values (`set(new_img[0,...])`), but with the order shuffled.
|
32,838,802
|
Say that I have a color image, and naturally this will be represented by a 3-dimensional array in python, say of shape (n x m x 3) and call it img.
I want a new 2-d array, call it "narray" to have a shape (3,nxm), such that each row of this array contains the "flattened" version of R,G,and B channel respectively. Moreover, it should have the property that I can easily reconstruct back any of the original channel by something like
```
narray[0,].reshape(img.shape[0:2]) #so this should reconstruct back the R channel.
```
The question is how can I construct the "narray" from "img"? The simple img.reshape(3,-1) does not work, as the order of the elements is not what I want.
Thanks
|
2015/09/29
|
[
"https://Stackoverflow.com/questions/32838802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4929035/"
] |
You need to use [`np.transpose`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html) to rearrange dimensions. Now, `n x m x 3` is to be converted to `3 x (n*m)`, so send the last axis to the front and shift right the order of the remaining axes `(0,1)`. Finally, reshape to have `3` rows. Thus, the implementation would be:
```
img.transpose(2,0,1).reshape(3,-1)
```
Sample run -
```
In [16]: img
Out[16]:
array([[[155, 33, 129],
[161, 218, 6]],
[[215, 142, 235],
[143, 249, 164]],
[[221, 71, 229],
[ 56, 91, 120]],
[[236, 4, 177],
[171, 105, 40]]])
In [17]: img.transpose(2,0,1).reshape(3,-1)
Out[17]:
array([[155, 161, 215, 143, 221, 56, 236, 171],
[ 33, 218, 142, 249, 71, 91, 4, 105],
[129, 6, 235, 164, 229, 120, 177, 40]])
```
|
If you have the scikit-image module installed, then you can use `rgb2grey` (or `rgb2gray`) to convert a photo from color to gray (from 3D to 2D):
```
from skimage import io, color
lina_color = io.imread(path+img)
lina_gray = color.rgb2gray(lina_color)
In [33]: lina_color.shape
Out[33]: (1920, 1280, 3)
In [34]: lina_gray.shape
Out[34]: (1920, 1280)
```
|
32,838,802
|
Say that I have a color image, and naturally this will be represented by a 3-dimensional array in python, say of shape (n x m x 3) and call it img.
I want a new 2-d array, call it "narray" to have a shape (3,nxm), such that each row of this array contains the "flattened" version of R,G,and B channel respectively. Moreover, it should have the property that I can easily reconstruct back any of the original channel by something like
```
narray[0,].reshape(img.shape[0:2]) #so this should reconstruct back the R channel.
```
The question is how can I construct the "narray" from "img"? The simple img.reshape(3,-1) does not work, as the order of the elements is not what I want.
Thanks
|
2015/09/29
|
[
"https://Stackoverflow.com/questions/32838802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4929035/"
] |
[ORIGINAL ANSWER]
Let's say we have an array `img` of size `m x n x 3` to transform into an array `new_img` of size `3 x (m*n)`
Initial Solution:
```
new_img = img.reshape((img.shape[0]*img.shape[1]), img.shape[2])
new_img = new_img.transpose()
```
[EDITED ANSWER]
**Flaw**: The reshape starts from the first dimension and reshapes the remainder, so this solution can mix values from the third dimension, which in the case of images would be semantically incorrect.
Adapted Solution:
```
# Dimensions: [m, n, 3]
new_img = img.transpose()
# Dimensions: [3, n, m]
new_img = new_img.reshape(new_img.shape[0], (new_img.shape[1]*new_img.shape[2]))
```
Strict Solution:
```
# Dimensions: [m, n, 3]
new_img = img.transpose((2, 0, 1))
# Dimensions: [3, m, n]
new_img = new_img.reshape(new_img.shape[0], (new_img.shape[1]*new_img.shape[2]))
```
The strict solution is the better way forward because it accounts for the order of dimensions; the results from the `Adapted` and `Strict` solutions will be identical in terms of values (`set(new_img[0,...])`), but with the order shuffled.
|
If you have the scikit-image module installed, then you can use `rgb2grey` (or `rgb2gray`) to convert a photo from color to gray (from 3D to 2D):
```
from skimage import io, color
lina_color = io.imread(path+img)
lina_gray = color.rgb2gray(lina_color)
In [33]: lina_color.shape
Out[33]: (1920, 1280, 3)
In [34]: lina_gray.shape
Out[34]: (1920, 1280)
```
|
71,140,438
|
I am a beginner in Python and would really appreciate if someone could help me with the following:
I would like to run this script 10 times and for that change for every run the sub-batch (from 0-9):
E.g. the first run would be:
```
python $GWAS_TOOLS/gwas_summary_imputation.py \
-by_region_file $DATA/eur_ld.bed.gz \
-gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \
-parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \
-parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \
-window 100000 \
-parsimony 7 \
-chromosome 1 \
-regularization 0.1 \
-frequency_filter 0.01 \
-sub_batches 10 \
-sub_batch 0 \
--standardise_dosages \
-output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz
```
The second run would be
```
python $GWAS_TOOLS/gwas_summary_imputation.py \
-by_region_file $DATA/eur_ld.bed.gz \
-gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \
-parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \
-parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \
-window 100000 \
-parsimony 7 \
-chromosome 1 \
-regularization 0.1 \
-frequency_filter 0.01 \
-sub_batches 10 \
-sub_batch 1 \
--standardise_dosages \
-output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz
```
I am sure this can be done with a loop, but I'm not quite sure how to do it in Python.
Thank you so much for any advice,
Sally
|
2022/02/16
|
[
"https://Stackoverflow.com/questions/71140438",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18222525/"
] |
While we can't show you how to retrofit a loop to the python code without actually seeing the python code, you could just use a shell loop to accomplish what you want without touching the python code.
For bash shell, it would look like this:
```
for sub_batch in {0..9}; do \
python $GWAS_TOOLS/gwas_summary_imputation.py \
-by_region_file $DATA/eur_ld.bed.gz \
-gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \
-parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \
-parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \
-window 100000 \
-parsimony 7 \
-chromosome 1 \
-regularization 0.1 \
-frequency_filter 0.01 \
-sub_batches 10 \
-sub_batch $sub_batch \
--standardise_dosages \
-output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz
done
```
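If you would rather drive the loop from Python itself (which the question also asks about), `subprocess` can launch the same command for each sub-batch; a sketch, where the helper names `build_cmd` and `run_all` are my own and the long fixed argument list is abbreviated:

```python
import os
import subprocess

def build_cmd(sub_batch):
    """Argument list for one sub-batch run; only the changing options are spelled out."""
    gwas_tools = os.environ.get("GWAS_TOOLS", ".")
    return [
        "python", os.path.join(gwas_tools, "gwas_summary_imputation.py"),
        "-sub_batches", "10",
        "-sub_batch", str(sub_batch),
        # ... the remaining fixed options, exactly as in the shell version ...
    ]

def run_all():
    """Run all ten sub-batches in sequence."""
    for sub_batch in range(10):  # 0..9
        subprocess.check_call(build_cmd(sub_batch))
```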
|
A loop in Python over 0 to 9 is very easy:
```py
for i in range(0, 10):
do stuff
```
|
71,140,438
|
I am a beginner in Python and would really appreciate if someone could help me with the following:
I would like to run this script 10 times and for that change for every run the sub-batch (from 0-9):
E.g. the first run would be:
```
python $GWAS_TOOLS/gwas_summary_imputation.py \
-by_region_file $DATA/eur_ld.bed.gz \
-gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \
-parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \
-parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \
-window 100000 \
-parsimony 7 \
-chromosome 1 \
-regularization 0.1 \
-frequency_filter 0.01 \
-sub_batches 10 \
-sub_batch 0 \
--standardise_dosages \
-output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz
```
The second run would be
```
python $GWAS_TOOLS/gwas_summary_imputation.py \
-by_region_file $DATA/eur_ld.bed.gz \
-gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \
-parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \
-parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \
-window 100000 \
-parsimony 7 \
-chromosome 1 \
-regularization 0.1 \
-frequency_filter 0.01 \
-sub_batches 10 \
-sub_batch 1 \
--standardise_dosages \
-output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz
```
I am sure this can be done with a loop, but I'm not quite sure how to do it in Python.
Thank you so much for any advice,
Sally
|
2022/02/16
|
[
"https://Stackoverflow.com/questions/71140438",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18222525/"
] |
What you're looking for seems to be a way to run the same command on the command line a set number of times.
If you're on Linux using the bash shell, this can be done using a shell loop:
```
for i in {0..9}; do
python $GWAS_TOOLS/gwas_summary_imputation.py \
-by_region_file $DATA/eur_ld.bed.gz \
-gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \
-parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \
-parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \
-window 100000 \
-parsimony 7 \
-chromosome 1 \
-regularization 0.1 \
-frequency_filter 0.01 \
-sub_batches 10 \
-sub_batch $i \
--standardise_dosages \
-output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz
done
```
If you're on Windows, the same can be achieved using PowerShell, where line continuations use a backtick rather than a backslash:
```
for ($i=0; $i -le 9; $i++) {
    python $GWAS_TOOLS/gwas_summary_imputation.py `
        -by_region_file $DATA/eur_ld.bed.gz `
        -gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz `
        -parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet `
        -parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet `
        -window 100000 `
        -parsimony 7 `
        -chromosome 1 `
        -regularization 0.1 `
        -frequency_filter 0.01 `
        -sub_batches 10 `
        -sub_batch $i `
        --standardise_dosages `
        -output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz
}
```
|
A loop in Python over 0 to 9 is very easy:
```py
for i in range(0, 10):
do stuff
```
|
36,680,407
|
I am on RHEL6 with Python 2.6 and need to install rrdtool with Python bindings. I have to upload and install packages manually, as the network admin blocks outgoing yum and pip traffic for security reasons. During installation, compiling `_rrdtoolmodule.c` fails with an error about a missing `rrd.h`. Where can I locate that file, or am I missing something?
```
[user@host ~]$ sudo pip install py-rrdtool-1.0b1.tar.gz
[sudo] password for user:
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Processing ./py-rrdtool-1.0b1.tar.gz
Installing collected packages: py-rrdtool
Running setup.py install for py-rrdtool
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-a5tFI5-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-krfsUz-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.6
copying rrdtool.py -> build/lib.linux-x86_64-2.6
running build_ext
building '_rrdtool' extension
creating build/temp.linux-x86_64-2.6
creating build/temp.linux-x86_64-2.6/src
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/local/include -I/usr/include/python2.6 -c src/_rrdtoolmodule.c -o build/temp.linux-x86_64-2.6/src/_rrdtoolmodule.o
src/_rrdtoolmodule.c:34:17: error: rrd.h: No such file or directory
In file included from src/rrd_extra.h:37,
from src/_rrdtoolmodule.c:35:
src/rrd_format.h:59: error: expected specifier-qualifier-list before ‘rrd_value_t’
src/rrd_format.h:295: error: expected specifier-qualifier-list before ‘rrd_value_t’
src/_rrdtoolmodule.c: In function ‘PyRRD_create’:
src/_rrdtoolmodule.c:93: warning: implicit declaration of function ‘rrd_create’
src/_rrdtoolmodule.c:94: warning: implicit declaration of function ‘rrd_get_error’
src/_rrdtoolmodule.c:94: warning: passing argument 2 of ‘PyErr_SetString’ makes pointer from integer without a cast
/usr/include/python2.6/pyerrors.h:78: note: expected ‘const char *’ but argument is of type ‘int’
src/_rrdtoolmodule.c:95: warning: implicit declaration of function ‘rrd_clear_error’
src/_rrdtoolmodule.c: In function ‘PyRRD_update’:
src/_rrdtoolmodule.c:122: warning: implicit declaration of function ‘rrd_update’
src/_rrdtoolmodule.c:123: warning: passing argument 2 of ‘PyErr_SetString’ makes pointer from integer without a cast
/usr/include/python2.6/pyerrors.h:78: note: expected ‘const char *’ but argument is of type ‘int’
src/_rrdtoolmodule.c: In function ‘PyRRD_fetch’:
src/_rrdtoolmodule.c:145: error: ‘rrd_value_t’ undeclared (first use in this function)
src/_rrdtoolmodule.c:145: error: (Each undeclared identifier is reported only once
src/_rrdtoolmodule.c:145: error: for each function it appears in.)
src/_rrdtoolmodule.c:145: error: ‘data’ undeclared (first use in this function)
src/_rrdtoolmodule.c:145: error: ‘datai’ undeclared (first use in this function)
src/_rrdtoolmodule.c:145: warning: left-hand operand of comma expression has no effect
src/_rrdtoolmodule.c:154: warning: implicit declaration of function ‘rrd_fetch’
src/_rrdtoolmodule.c:156: warning: passing argument 2 of ‘PyErr_SetString’ makes pointer from integer without a cast
/usr/include/python2.6/pyerrors.h:78: note: expected ‘const char *’ but argument is of type ‘int’
src/_rrdtoolmodule.c:165: error: expected ‘;’ before ‘dv’
src/_rrdtoolmodule.c:191: error: ‘dv’ undeclared (first use in this function)
src/_rrdtoolmodule.c: In function ‘PyRRD_graph’:
src/_rrdtoolmodule.c:245: warning: implicit declaration of function ‘rrd_graph’
src/_rrdtoolmodule.c:247: warning: passing argument 2 of ‘PyErr_SetString’ makes pointer from integer without a cast
/usr/include/python2.6/pyerrors.h:78: note: expected ‘const char *’ but argument is of type ‘int’
src/_rrdtoolmodule.c: In function ‘PyRRD_tune’:
src/_rrdtoolmodule.c:297: warning: implicit declaration of function ‘rrd_tune’
src/_rrdtoolmodule.c:298: warning: passing argument 2 of ‘PyErr_SetString’ makes pointer from integer without a cast
/usr/include/python2.6/pyerrors.h:78: note: expected ‘const char *’ but argument is of type ‘int’
src/_rrdtoolmodule.c: In function ‘PyRRD_last’:
src/_rrdtoolmodule.c:324: warning: implicit declaration of function ‘rrd_last’
src/_rrdtoolmodule.c:325: warning: passing argument 2 of ‘PyErr_SetString’ makes pointer from integer without a cast
/usr/include/python2.6/pyerrors.h:78: note: expected ‘const char *’ but argument is of type ‘int’
src/_rrdtoolmodule.c: In function ‘PyRRD_resize’:
src/_rrdtoolmodule.c:350: warning: implicit declaration of function ‘rrd_resize’
src/_rrdtoolmodule.c:351: warning: passing argument 2 of ‘PyErr_SetString’ makes pointer from integer without a cast
/usr/include/python2.6/pyerrors.h:78: note: expected ‘const char *’ but argument is of type ‘int’
src/_rrdtoolmodule.c: In function ‘PyRRD_info’:
src/_rrdtoolmodule.c:380: warning: passing argument 2 of ‘PyErr_SetString’ makes pointer from integer without a cast
/usr/include/python2.6/pyerrors.h:78: note: expected ‘const char *’ but argument is of type ‘int’
src/_rrdtoolmodule.c:423: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:423: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:423: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:423: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:423: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:423: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:424: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:424: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:424: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:424: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:424: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:424: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:426: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:426: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:426: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:426: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:426: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:426: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:443: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:443: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:443: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:443: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:443: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:443: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:455: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:455: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:455: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:455: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:455: error: ‘unival’ has no member named ‘u_val’
src/_rrdtoolmodule.c:455: error: ‘unival’ has no member named ‘u_val’
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-a5tFI5-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-krfsUz-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-a5tFI5-build
```
|
2016/04/17
|
[
"https://Stackoverflow.com/questions/36680407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/79311/"
] |
The one-hour difference is due to Daylight Saving Time, which by definition is not reflected in Unix timestamps.
You may want to consider [moment-timezone.js](http://momentjs.com/timezone/docs/) to cope with DST in time conversions.
|
You can use [Date.parse()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/parse) in javascript.
```js
const isoDate = new Date();
const convertToUnix = Date.parse(isoDate.toISOString());
```
|
8,198,162
|
I have a script for deleting images older than a date.
Can I pass this date as an argument when I call to run the script?
Example: This script `delete_images.py` deletes images older than a date (YYYY-MM-DD)
```
python delete_images.py 2010-12-31
```
Script (works with a fixed date (xDate variable))
```
import os, glob, time
root = '/home/master/files/' # one specific folder
#root = 'D:\\Vacation\\*' # or all the subfolders too
# expiration date in the format YYYY-MM-DD
### I have to pass the date from the script ###
xDate = '2010-12-31'
print '-'*50
for folder in glob.glob(root):
print folder
# here .jpg image files, but could be .txt files or whatever
for image in glob.glob(folder + '/*.jpg'):
# retrieves the stats for the current jpeg image file
# the tuple element at index 8 is the last-modified-date
stats = os.stat(image)
# put the two dates into matching format
lastmodDate = time.localtime(stats[8])
expDate = time.strptime(xDate, '%Y-%m-%d')
print image, time.strftime("%m/%d/%y", lastmodDate)
# check if image-last-modified-date is outdated
if expDate > lastmodDate:
try:
print 'Removing', image, time.strftime("(older than %m/%d/%y)", expDate)
os.remove(image) # commented out for testing
except OSError:
print 'Could not remove', image
```
|
2011/11/19
|
[
"https://Stackoverflow.com/questions/8198162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/871976/"
] |
The quick but crude way is to use `sys.argv`.
```
import sys
xDate = sys.argv[1]
```
A more robust, extendable way is to use the [argparse](http://docs.python.org/library/argparse.html#module-argparse) module:
```
import argparse
parser=argparse.ArgumentParser()
parser.add_argument('xDate')
args=parser.parse_args()
```
Then to access the user-supplied value you'd use `args.xDate` instead of `xDate`.
Using the `argparse` module you automatically get a help message for free when a user types
```
delete_images.py -h
```
It also gives a helpful error message if the user fails to supply the proper inputs.
You can also easily set up a default value for `xDate`, convert `xDate` into a `datetime.date` object, and, as they say on TV, "much, much more!".
---
I see later in your script you use
```
expDate = time.strptime(xDate, '%Y-%m-%d')
```
to convert the `xDate` string into a time tuple. You could do this with `argparse` so `args.xDate` is automatically a time tuple. For example,
```
import argparse
import time
def mkdate(datestr):
return time.strptime(datestr, '%Y-%m-%d')
parser=argparse.ArgumentParser()
parser.add_argument('xDate',type=mkdate)
args=parser.parse_args()
print(args.xDate)
```
when run like this:
```
% test.py 2000-1-1
```
yields
```
time.struct_time(tm_year=2000, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=5, tm_yday=1, tm_isdst=-1)
```
---
PS. Whatever method you choose to use (sys.argv or argparse), it would be a good idea to pull
```
expDate = time.strptime(xDate, '%Y-%m-%d')
```
outside of the `for-loop`. Since the value of `xDate` never changes, you only need to compute `expDate` once.
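Putting the `argparse` version and the hoisting suggestion together, the top of the script could look like this sketch (the names follow the original script; the `parse_args` wrapper is my own):

```python
import argparse
import time

def mkdate(datestr):
    """Convert a YYYY-MM-DD string into a time tuple."""
    return time.strptime(datestr, '%Y-%m-%d')

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('xDate', type=mkdate)
    return parser.parse_args(argv)

# In the script itself, expDate is then computed once, before the folder loop:
#   args = parse_args()
#   expDate = args.xDate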
|
The command line options can be accessed via the list `sys.argv`. So you can simply use
```
xDate = sys.argv[1]
```
(`sys.argv[0]` is the name of the current script.)
|
8,198,162
|
I have a script for deleting images older than a date.
Can I pass this date as an argument when I call to run the script?
Example: This script `delete_images.py` deletes images older than a date (YYYY-MM-DD)
```
python delete_images.py 2010-12-31
```
Script (works with a fixed date (xDate variable))
```
import os, glob, time
root = '/home/master/files/' # one specific folder
#root = 'D:\\Vacation\\*' # or all the subfolders too
# expiration date in the format YYYY-MM-DD
### I have to pass the date from the script ###
xDate = '2010-12-31'
print '-'*50
for folder in glob.glob(root):
print folder
# here .jpg image files, but could be .txt files or whatever
for image in glob.glob(folder + '/*.jpg'):
# retrieves the stats for the current jpeg image file
# the tuple element at index 8 is the last-modified-date
stats = os.stat(image)
# put the two dates into matching format
lastmodDate = time.localtime(stats[8])
expDate = time.strptime(xDate, '%Y-%m-%d')
print image, time.strftime("%m/%d/%y", lastmodDate)
# check if image-last-modified-date is outdated
if expDate > lastmodDate:
try:
print 'Removing', image, time.strftime("(older than %m/%d/%y)", expDate)
os.remove(image) # commented out for testing
except OSError:
print 'Could not remove', image
```
|
2011/11/19
|
[
"https://Stackoverflow.com/questions/8198162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/871976/"
] |
The quick but crude way is to use `sys.argv`.
```
import sys
xDate = sys.argv[1]
```
A more robust, extendable way is to use the [argparse](http://docs.python.org/library/argparse.html#module-argparse) module:
```
import argparse
parser=argparse.ArgumentParser()
parser.add_argument('xDate')
args=parser.parse_args()
```
Then to access the user-supplied value you'd use `args.xDate` instead of `xDate`.
Using the `argparse` module you automatically get a help message for free when a user types
```
delete_images.py -h
```
It also gives a helpful error message if the user fails to supply the proper inputs.
You can also easily set up a default value for `xDate`, convert `xDate` into a `datetime.date` object, and, as they say on TV, "much, much more!".
---
I see later in your script you use
```
expDate = time.strptime(xDate, '%Y-%m-%d')
```
to convert the `xDate` string into a time tuple. You could do this with `argparse` so `args.xDate` is automatically a time tuple. For example,
```
import argparse
import time
def mkdate(datestr):
return time.strptime(datestr, '%Y-%m-%d')
parser=argparse.ArgumentParser()
parser.add_argument('xDate',type=mkdate)
args=parser.parse_args()
print(args.xDate)
```
when run like this:
```
% test.py 2000-1-1
```
yields
```
time.struct_time(tm_year=2000, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=5, tm_yday=1, tm_isdst=-1)
```
---
PS. Whatever method you choose to use (sys.argv or argparse), it would be a good idea to pull
```
expDate = time.strptime(xDate, '%Y-%m-%d')
```
outside of the `for-loop`. Since the value of `xDate` never changes, you only need to compute `expDate` once.
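A minimal sketch of that hoisting (the file names here are placeholders, not from the original script):

```python
import time

xDate = '2010-12-31'                        # would come from sys.argv or argparse
expDate = time.strptime(xDate, '%Y-%m-%d')  # computed once, outside the loop

for image in ['a.jpg', 'b.jpg']:            # placeholder file names
    # per-file work only; expDate is reused on every iteration
    print(image, time.strftime("(older than %m/%d/%y)", expDate))
```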
|
You can use runtime arguments for this. Please see the following link: <http://www.faqs.org/docs/diveintopython/kgp_commandline.html>
|
8,198,162
|
I have a script for deleting images older than a date.
Can I pass this date as an argument when I run the script?
Example: This script `delete_images.py` deletes images older than a date (YYYY-MM-DD)
```
python delete_images.py 2010-12-31
```
Script (works with a fixed date (xDate variable))
```
import os, glob, time
root = '/home/master/files/' # one specific folder
#root = 'D:\\Vacation\\*' # or all the subfolders too
# expiration date in the format YYYY-MM-DD
### I have to pass the date from the script ###
xDate = '2010-12-31'
print '-'*50
for folder in glob.glob(root):
print folder
# here .jpg image files, but could be .txt files or whatever
for image in glob.glob(folder + '/*.jpg'):
# retrieves the stats for the current jpeg image file
# the tuple element at index 8 is the last-modified-date
stats = os.stat(image)
# put the two dates into matching format
lastmodDate = time.localtime(stats[8])
expDate = time.strptime(xDate, '%Y-%m-%d')
print image, time.strftime("%m/%d/%y", lastmodDate)
# check if image-last-modified-date is outdated
if expDate > lastmodDate:
try:
print 'Removing', image, time.strftime("(older than %m/%d/%y)", expDate)
os.remove(image) # commented out for testing
except OSError:
print 'Could not remove', image
```
|
2011/11/19
|
[
"https://Stackoverflow.com/questions/8198162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/871976/"
] |
The quick but crude way is to use `sys.argv`.
```
import sys
xDate = sys.argv[1]
```
A more robust, extendable way is to use the [argparse](http://docs.python.org/library/argparse.html#module-argparse) module:
```
import argparse
parser=argparse.ArgumentParser()
parser.add_argument('xDate')
args=parser.parse_args()
```
Then to access the user-supplied value you'd use `args.xDate` instead of `xDate`.
Using the `argparse` module you automatically get a help message for free when a user types
```
delete_images.py -h
```
It also gives a helpful error message if the user fails to supply the proper inputs.
You can also easily set up a default value for `xDate`, convert `xDate` into a `datetime.date` object, and, as they say on TV, "much, much more!".
---
I see later in your script you use
```
expDate = time.strptime(xDate, '%Y-%m-%d')
```
to convert the `xDate` string into a time tuple. You could do this with `argparse` so `args.xDate` is automatically a time tuple. For example,
```
import argparse
import time
def mkdate(datestr):
return time.strptime(datestr, '%Y-%m-%d')
parser=argparse.ArgumentParser()
parser.add_argument('xDate',type=mkdate)
args=parser.parse_args()
print(args.xDate)
```
when run like this:
```
% test.py 2000-1-1
```
yields
```
time.struct_time(tm_year=2000, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=5, tm_yday=1, tm_isdst=-1)
```
---
PS. Whatever method you choose to use (sys.argv or argparse), it would be a good idea to pull
```
expDate = time.strptime(xDate, '%Y-%m-%d')
```
outside of the `for-loop`. Since the value of `xDate` never changes, you only need to compute `expDate` once.
|
A little more polish on unutbu's answer:
```
import argparse
import time
def mkdate(datestr):
try:
return time.strptime(datestr, '%Y-%m-%d')
except ValueError:
raise argparse.ArgumentTypeError(datestr + ' is not a proper date string')
parser=argparse.ArgumentParser()
parser.add_argument('xDate',type=mkdate)
args=parser.parse_args()
print(args.xDate)
```
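To see the custom error in action, you can call `mkdate` directly (this demo is mine, not part of the original answer):

```python
import argparse
import time

def mkdate(datestr):
    try:
        return time.strptime(datestr, '%Y-%m-%d')
    except ValueError:
        raise argparse.ArgumentTypeError(datestr + ' is not a proper date string')

print(mkdate('2010-12-31').tm_year)   # 2010
try:
    mkdate('not-a-date')
except argparse.ArgumentTypeError as err:
    print(err)                        # not-a-date is not a proper date string
```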
|
8,198,162
|
I have a script for deleting images older than a date.
Can I pass this date as an argument when I run the script?
Example: This script `delete_images.py` deletes images older than a date (YYYY-MM-DD)
```
python delete_images.py 2010-12-31
```
Script (works with a fixed date (xDate variable))
```
import os, glob, time
root = '/home/master/files/' # one specific folder
#root = 'D:\\Vacation\\*' # or all the subfolders too
# expiration date in the format YYYY-MM-DD
### I have to pass the date from the script ###
xDate = '2010-12-31'
print '-'*50
for folder in glob.glob(root):
print folder
# here .jpg image files, but could be .txt files or whatever
for image in glob.glob(folder + '/*.jpg'):
# retrieves the stats for the current jpeg image file
# the tuple element at index 8 is the last-modified-date
stats = os.stat(image)
# put the two dates into matching format
lastmodDate = time.localtime(stats[8])
expDate = time.strptime(xDate, '%Y-%m-%d')
print image, time.strftime("%m/%d/%y", lastmodDate)
# check if image-last-modified-date is outdated
if expDate > lastmodDate:
try:
print 'Removing', image, time.strftime("(older than %m/%d/%y)", expDate)
os.remove(image) # commented out for testing
except OSError:
print 'Could not remove', image
```
|
2011/11/19
|
[
"https://Stackoverflow.com/questions/8198162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/871976/"
] |
The command line options can be accessed via the list `sys.argv`. So you can simply use
```
xDate = sys.argv[1]
```
(`sys.argv[0]` is the name of the current script.)
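A self-contained sketch (the assignment to `sys.argv` only simulates a command line for the demo):

```python
import sys

# simulate: python delete_images.py 2010-12-31
sys.argv = ['delete_images.py', '2010-12-31']

xDate = sys.argv[1]   # sys.argv[0] is the script name itself
print(xDate)          # 2010-12-31
```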
|
You can use runtime arguments for this. Please see the following link: <http://www.faqs.org/docs/diveintopython/kgp_commandline.html>
|
8,198,162
|
I have a script for deleting images older than a date.
Can I pass this date as an argument when I run the script?
Example: This script `delete_images.py` deletes images older than a date (YYYY-MM-DD)
```
python delete_images.py 2010-12-31
```
Script (works with a fixed date (xDate variable))
```
import os, glob, time
root = '/home/master/files/' # one specific folder
#root = 'D:\\Vacation\\*' # or all the subfolders too
# expiration date in the format YYYY-MM-DD
### I have to pass the date from the script ###
xDate = '2010-12-31'
print '-'*50
for folder in glob.glob(root):
print folder
# here .jpg image files, but could be .txt files or whatever
for image in glob.glob(folder + '/*.jpg'):
# retrieves the stats for the current jpeg image file
# the tuple element at index 8 is the last-modified-date
stats = os.stat(image)
# put the two dates into matching format
lastmodDate = time.localtime(stats[8])
expDate = time.strptime(xDate, '%Y-%m-%d')
print image, time.strftime("%m/%d/%y", lastmodDate)
# check if image-last-modified-date is outdated
if expDate > lastmodDate:
try:
print 'Removing', image, time.strftime("(older than %m/%d/%y)", expDate)
os.remove(image) # commented out for testing
except OSError:
print 'Could not remove', image
```
|
2011/11/19
|
[
"https://Stackoverflow.com/questions/8198162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/871976/"
] |
A little more polish on unutbu's answer:
```
import argparse
import time
def mkdate(datestr):
try:
return time.strptime(datestr, '%Y-%m-%d')
except ValueError:
raise argparse.ArgumentTypeError(datestr + ' is not a proper date string')
parser=argparse.ArgumentParser()
parser.add_argument('xDate',type=mkdate)
args=parser.parse_args()
print(args.xDate)
```
|
You can use runtime arguments for this. Please see the following link: <http://www.faqs.org/docs/diveintopython/kgp_commandline.html>
|
47,555,613
|
It appears, based on a [urwid example](http://urwid.org/tutorial/#horizontal-menu), that `u'\N{HYPHEN BULLET}'` will create a unicode character that is a hyphen intended for a bullet.
The names for unicode characters seem to be defined at [fileformat.info](http://www.fileformat.info/info/unicode/char/b.htm), and some coverage of using Unicode in Python appears in the [howto documentation](https://docs.python.org/2/howto/unicode.html), though there is no mention of the `\N{}` syntax there.
If you pull all these docs together you get the idea that the constant `u"\N{HYPHEN BULLET}"` creates a ⁃
However, this is all a theory based on pulling all this data together. I can find no documentation for `\N{}` in the Python docs.
My question is whether my theory of operation is correct and whether it is documented anywhere?
|
2017/11/29
|
[
"https://Stackoverflow.com/questions/47555613",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4360746/"
] |
Not every gory detail can be found in a how-to. The [table of escape sequences](https://docs.python.org/2/reference/lexical_analysis.html#string-literals) in the reference manual includes:
Escape Sequence: `\N{name}`
Meaning: Character named `name` in the Unicode database (Unicode only)
|
The `\N{}` syntax is documented in the [Unicode HOWTO](https://docs.python.org/3/howto/unicode.html?highlight=unicode%20howto#the-string-type), at least.
The names are documented in the Unicode standard, such as:
```
http://www.unicode.org/Public/UCD/latest/ucd/NamesList.txt
```
The `unicodedata` module can look up a name for a character:
```
>>> import unicodedata as ud
>>> ud.name('A')
'LATIN CAPITAL LETTER A'
>>> print('\N{LATIN CAPITAL LETTER A}')
A
```
|
47,555,613
|
It appears, based on a [urwid example](http://urwid.org/tutorial/#horizontal-menu), that `u'\N{HYPHEN BULLET}'` will create a unicode character that is a hyphen intended for a bullet.
The names for unicode characters seem to be defined at [fileformat.info](http://www.fileformat.info/info/unicode/char/b.htm), and some coverage of using Unicode in Python appears in the [howto documentation](https://docs.python.org/2/howto/unicode.html), though there is no mention of the `\N{}` syntax there.
If you pull all these docs together you get the idea that the constant `u"\N{HYPHEN BULLET}"` creates a ⁃
However, this is all a theory based on pulling all this data together. I can find no documentation for `\N{}` in the Python docs.
My question is whether my theory of operation is correct and whether it is documented anywhere?
|
2017/11/29
|
[
"https://Stackoverflow.com/questions/47555613",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4360746/"
] |
You are correct that `u"\N{CHARACTER NAME}"` produces a valid unicode character in Python.
It is not documented much in the Python docs, but after some searching I found a reference to it on effbot.org
[http://effbot.org/librarybook/ucnhash.htm](https://web.archive.org/web/20200719141800/http://effbot.org/librarybook/ucnhash.htm)
>
> The ucnhash module
> ------------------
>
>
> (Implementation, 2.0 only) This module is an implementation module,
> which provides a name to character code mapping for Unicode string
> literals. If this module is present, you can use \N{} escapes to map
> Unicode character names to codes.
>
>
> In Python 2.1, the functionality of this module was moved to the
> **unicodedata** module.
>
>
>
Checking the documentation for [`unicodedata`](https://docs.python.org/3/library/unicodedata.html) shows that the module is using the data from the Unicode Character Database.
>
> unicodedata — Unicode Database
> ------------------------------
>
>
> This module provides access to the Unicode Character Database (UCD)
> which defines character properties for all Unicode characters. The
> data contained in this database is compiled from the UCD version
> 9.0.0.
>
>
>
The full data can be found at: <https://www.unicode.org/Public/9.0.0/ucd/UnicodeData.txt>
The data has the structure: `HEXVALUE;CHARACTER NAME;etc..` so you could use this data to look up characters.
For example:
```
# 0041;LATIN CAPITAL LETTER A;Lu;0;L;;;;;N;;;;0061;
>>> u"\N{LATIN CAPITAL LETTER A}"
'A'
# FF7B;HALFWIDTH KATAKANA LETTER SA;Lo;0;L;<narrow> 30B5;;;;N;;;;;
>>> u"\N{HALFWIDTH KATAKANA LETTER SA}"
'サ'
```
|
The `\N{}` syntax is documented in the [Unicode HOWTO](https://docs.python.org/3/howto/unicode.html?highlight=unicode%20howto#the-string-type), at least.
The names are documented in the Unicode standard, such as:
```
http://www.unicode.org/Public/UCD/latest/ucd/NamesList.txt
```
The `unicodedata` module can look up a name for a character:
```
>>> import unicodedata as ud
>>> ud.name('A')
'LATIN CAPITAL LETTER A'
>>> print('\N{LATIN CAPITAL LETTER A}')
A
```
|
55,837,477
|
How can I convert all txt files with delimiter '|' in a directory path to csv and save them to a location using Python?
I have tried this code, which is hardcoded.
```
import csv
txt_file = r"SentiWS_v1.8c_Positive.txt"
csv_file = r"NewProcessedDoc.csv"
with open(txt_file, "r") as in_text:
in_reader = csv.reader(in_text, delimiter = '|')
with open(csv_file, "w") as out_csv:
out_writer = csv.writer(out_csv, newline='')
for row in in_reader:
out_writer.writerow(row)
```
I am expecting csv files with the same file names as the txt files in the directory path.
|
2019/04/24
|
[
"https://Stackoverflow.com/questions/55837477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11186737/"
] |
You are trying to instantiate a typealias and are getting an `interface doesn't have a constructor` error. To my understanding, typealiases with function types work in three steps:
1. Define the typealias itself
```
typealias MyHandler = (Int, String) -> Unit
```
2. declare an action of that type
```
val myHandler: MyHandler = {intValue, stringValue ->
// do something
}
```
3. use that action, e.g.
```
class Foo(val action: MyHandler) {
val stateOne: Boolean = false
// ...
fun bar() {
if (stateOne) {
action.invoke(1, "One")
} else {
action.invoke(0, "notOne")
}
}
}
```
|
`typealias` are just an alias for the type :) in other words, it's just another name for the type.
Imagine having to write `(Int, String) -> Unit` all the time. With `typealias` you can define something like you did to help out and write less, i.e. instead of:
```
fun Foo(handler: (Int, String) -> Unit)
```
You can write:
```
fun Foo(handler: MyHandler)
```
They also help give hints, meaning they can give you a way to describe types in a more contextualized way. Imagine implementing an app where, in its entire domain, time is represented as an `Int`. One approach we could follow is defining:
```
typealias Time = Int
```
From there on, every time you want to code something specifically with time, instead of using `Int` you can provide more context to others by using `Time`. This is not a new type, it's just another name for an `Int`, so therefore everything that works with integers works with it too.
There's more if you want to have a [look](https://kotlinlang.org/docs/reference/type-aliases.html)
|
55,837,477
|
How can I convert all txt files with delimiter '|' in a directory path to csv and save them to a location using Python?
I have tried this code, which is hardcoded.
```
import csv
txt_file = r"SentiWS_v1.8c_Positive.txt"
csv_file = r"NewProcessedDoc.csv"
with open(txt_file, "r") as in_text:
in_reader = csv.reader(in_text, delimiter = '|')
with open(csv_file, "w") as out_csv:
out_writer = csv.writer(out_csv, newline='')
for row in in_reader:
out_writer.writerow(row)
```
I am expecting csv files with the same file names as the txt files in the directory path.
|
2019/04/24
|
[
"https://Stackoverflow.com/questions/55837477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11186737/"
] |
You are trying to instantiate a typealias and are getting an `interface doesn't have a constructor` error. To my understanding, typealiases with function types work in three steps:
1. Define the typealias itself
```
typealias MyHandler = (Int, String) -> Unit
```
2. declare an action of that type
```
val myHandler: MyHandler = {intValue, stringValue ->
// do something
}
```
3. use that action, e.g.
```
class Foo(val action: MyHandler) {
val stateOne: Boolean = false
// ...
fun bar() {
if (stateOne) {
action.invoke(1, "One")
} else {
action.invoke(0, "notOne")
}
}
}
```
|
Adding to [theThapa](https://stackoverflow.com/a/55838293/14619383) and [Fred](https://stackoverflow.com/a/55842709/14619383), function typealiases are a way to declare the type of a function, which can then be used later.
For example, the following shows a good example of how to declare it and use it:
```
import kotlin.test.*
import java.util.*
//define alias
typealias MyHandler = (Int, Int) -> Int
fun main(args: Array<String>) {
//define the function
val myHandler: MyHandler = {intValue, bValue ->
intValue + bValue
}
// class Foo takes MyHandler type as parameter to instantiate
class Foo(val action: MyHandler) {
val stateOne: Boolean = false
fun bar() {
if (stateOne) {
println(action.invoke(1, 45))
} else {
println(action.invoke(0, 65))
}
}
}
// instantiate Foo class along with constructor whose parameter is of type MyHandler
val f = Foo(myHandler)
f.bar()
}
```
|
55,837,477
|
How can I convert all txt files with delimiter '|' in a directory path to csv and save them to a location using Python?
I have tried this code, which is hardcoded.
```
import csv
txt_file = r"SentiWS_v1.8c_Positive.txt"
csv_file = r"NewProcessedDoc.csv"
with open(txt_file, "r") as in_text:
in_reader = csv.reader(in_text, delimiter = '|')
with open(csv_file, "w") as out_csv:
out_writer = csv.writer(out_csv, newline='')
for row in in_reader:
out_writer.writerow(row)
```
I am expecting csv files with the same file names as the txt files in the directory path.
|
2019/04/24
|
[
"https://Stackoverflow.com/questions/55837477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11186737/"
] |
`typealias` are just an alias for the type :) in other words, it's just another name for the type.
Imagine having to write `(Int, String) -> Unit` all the time. With `typealias` you can define something like you did to help out and write less, i.e. instead of:
```
fun Foo(handler: (Int, String) -> Unit)
```
You can write:
```
fun Foo(handler: MyHandler)
```
They also help give hints, meaning they can give you a way to describe types in a more contextualized way. Imagine implementing an app where, in its entire domain, time is represented as an `Int`. One approach we could follow is defining:
```
typealias Time = Int
```
From there on, every time you want to code something specifically with time, instead of using `Int` you can provide more context to others by using `Time`. This is not a new type, it's just another name for an `Int`, so therefore everything that works with integers works with it too.
There's more if you want to have a [look](https://kotlinlang.org/docs/reference/type-aliases.html)
|
Adding to [theThapa](https://stackoverflow.com/a/55838293/14619383) and [Fred](https://stackoverflow.com/a/55842709/14619383), function typealiases are a way to declare the type of a function, which can then be used later.
For example, the following shows a good example of how to declare it and use it:
```
import kotlin.test.*
import java.util.*
//define alias
typealias MyHandler = (Int, Int) -> Int
fun main(args: Array<String>) {
//define the function
val myHandler: MyHandler = {intValue, bValue ->
intValue + bValue
}
// class Foo takes MyHandler type as parameter to instantiate
class Foo(val action: MyHandler) {
val stateOne: Boolean = false
fun bar() {
if (stateOne) {
println(action.invoke(1, 45))
} else {
println(action.invoke(0, 65))
}
}
}
// instantiate Foo class along with constructor whose parameter is of type MyHandler
val f = Foo(myHandler)
f.bar()
}
```
|
50,279,728
|
I have a code like this:
```
x = []
for fitur in self.fiturs:
x.append(fitur[0])
a = [x , rpxy_list]
join = zip(*a)
print join
```
and `self.fiturs` contains:
```
F1,1,1,1,1,0,1,1,0,0,1
F2,1,0,0,0,0,0,1,0,1,1
F3,1,0,0,0,0,0,1,1,1,1
F4,1,0,0,0,0,0,1,1,1,0
F5,14,24,22,22,22,16,18,19,26,22
F6,8.0625,6.2,6.2609,6.6818,6.2174,6.3333,7.85,6.0833,6.9655,6.9167
F7,0,0,0,0,0,0,1,0,1,0
F8,1,0,2,0,0,0,2,0,0,0
F9,1,0,0,0,1,1,0,0,0,0
F10,8,4,3,3,3,6,8,5,8,4
F11,0,0,1,0,0,1,0,0,0,0
F12,1,0,0,0,1,0,1,1,1,1
```
**rpxy\_list** contains the floats,
and the output of the program is:
```
C:\Users\USER\PycharmProjects\Skripsi\venv\Scripts\python.exe C:/Users/USER/PycharmProjects/Skripsi/coba.py
[('F1', 0.2182178902359924), ('F1', 0.2182178902359924), ('F2', 0.408248290463863), ('F3', 0.2), ('F4', 0.408248290463863), ('F5', 0.37142857142857144), ('F6', 0.5053765608632352), ('F7', 0.5), ('F8', 0.6201736729460423), ('F9', 0.2182178902359924), ('F10', 0.6864064729836441), ('F11', 0.5), ('F12', 0.0), ('F13', 0), ('F14', 0), ('F15', 0), ('F16', 0), ('F17', 0), ('F18', 0), ('F19', 0), ('F20', 0), ('F21', 0), ('F22', 0), ('F23', 0.2672612419124244), ('F24', 0.4364357804719848), ('F25', 0), ('F26', 0), ('F27', 0), ('F28', 0), ('F29', 0), ('F30', 0), ('F31', 0), ('F32', 0), ('F33', 0), ('F34', 0), ('F35', 0), ('F36', 0), ('F37', 0.7808688094430304)]
Process finished with exit code 0
```
And I just want the output like this:
```
['F1', 0.2182178902359924]
['F2', 0.408248290463863]
etc
```
What should I do with my code?
|
2018/05/10
|
[
"https://Stackoverflow.com/questions/50279728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9665999/"
] |
It looks okay for the most part.
With Spark 2 you can try something like this by eliminating extra values there,
```
case class Rating(name:Int, product:Int, rating:Int)
val spark:SparkSession = ???
val df = spark.read.csv("/path/to/file")
.map({
case Row(u: Int, p: Int, r:Int) => Rating(u, p, r)
})
```
Hope this helps. Cheers.
|
My problem was related to NaN values further down the road.
I fixed it using this:
`predictions.select([to_null(c).alias(c) for c in predictions.columns]).na.drop()`
I also had to add `from pyspark.sql.functions import col, isnan, when, trim`.
|
45,690,043
|
I have a str like
`rjg[]u[ur"fur[ufrng[]"gree`,
and I want to replace "[" and "]" between double quotes with #; the result is
`rjg[]u[ur"fur[ufrng[]"gree` => `rjg[]u[ur"fur#ufrng##"gree`,
How can I get this in Python?
|
2017/08/15
|
[
"https://Stackoverflow.com/questions/45690043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6298732/"
] |
A one-liner solution:
```
import re
text = 'rjg[]u[ur"fur[ufrng[]"gree'
text = re.sub(r'(")([^"]+)(")', lambda pat: pat.group(1)+pat.group(2).replace(']', '#').replace('[', '#')+pat.group(3), text)
print text
```
Output:
```
rjg[]u[ur"fur#ufrng##"gree
```
|
I would try
```
L = data.split('"')
for i in range(1, len(L), 2):
L[i] = re.sub(r'[\[\]]', '#', L[i])
result = '"'.join(L)
```
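Wrapped into a runnable form with the question's sample string (this assumes balanced quotes, since the odd-indexed chunks after splitting on `"` are the quoted ones):

```python
import re

data = 'rjg[]u[ur"fur[ufrng[]"gree'
L = data.split('"')
# chunks at odd indexes sit between pairs of double quotes
for i in range(1, len(L), 2):
    L[i] = re.sub(r'[\[\]]', '#', L[i])
result = '"'.join(L)
print(result)   # rjg[]u[ur"fur#ufrng##"gree
```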
|
45,690,043
|
I have a str like
`rjg[]u[ur"fur[ufrng[]"gree`,
and I want to replace "[" and "]" between double quotes with #; the result is
`rjg[]u[ur"fur[ufrng[]"gree` => `rjg[]u[ur"fur#ufrng##"gree`,
How can I get this in Python?
|
2017/08/15
|
[
"https://Stackoverflow.com/questions/45690043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6298732/"
] |
I would try
```
L = data.split('"')
for i in range(1, len(L), 2):
L[i] = re.sub(r'[\[\]]', '#', L[i])
result = '"'.join(L)
```
|
An option would be using [`str`](https://docs.python.org/3/library/stdtypes.html#str) built-in functions [`split()`](https://docs.python.org/3/library/stdtypes.html#str.split) and [`replace()`](https://docs.python.org/3/library/stdtypes.html#str.replace) like below (**without regex**):
```
s = 'rjg[]u[ur"fur[ufrng[]"gree'
l = s.split('"')
new_string = '"'.join(w.replace('[', '#').replace(']', '#') for w in l[1:-1])
res = '"'.join((l[0], new_string, l[-1]))
```
**Output:**
```
>>> res
'rjg[]u[ur"fur#ufrng##"gree'
```
|
45,690,043
|
I have a str like
`rjg[]u[ur"fur[ufrng[]"gree`,
and I want to replace "[" and "]" between double quotes with #; the result is
`rjg[]u[ur"fur[ufrng[]"gree` => `rjg[]u[ur"fur#ufrng##"gree`,
How can I get this in Python?
|
2017/08/15
|
[
"https://Stackoverflow.com/questions/45690043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6298732/"
] |
I would try
```
L = data.split('"')
for i in range(1, len(L), 2):
L[i] = re.sub(r'[\[\]]', '#', L[i])
result = '"'.join(L)
```
|
A one-liner without regular expressions (though your solution is very nice, @jpnkls):
```
>>> text = 'rjg[]u[ur"fur[ufrng[]"gre[e]"abc[d"ef]"'
>>> '\"'.join([substr.replace('[', '#').replace(']', '#') if n % 2 == 1 else substr for n, substr in enumerate(text.split('\"')[:-1])]+[text.split('\"')[-1]])
rjg[]u[ur"fur#ufrng##"gre[e]"abc#d"ef]"
```
This still works for uneven numbers of quotes and a quote at the beginning or the end.
|
45,690,043
|
I have a str like
`rjg[]u[ur"fur[ufrng[]"gree`,
and I want to replace "[" and "]" between double quotes with #; the result is
`rjg[]u[ur"fur[ufrng[]"gree` => `rjg[]u[ur"fur#ufrng##"gree`,
How can I get this in Python?
|
2017/08/15
|
[
"https://Stackoverflow.com/questions/45690043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6298732/"
] |
A one-liner solution:
```
import re
text = 'rjg[]u[ur"fur[ufrng[]"gree'
text = re.sub(r'(")([^"]+)(")', lambda pat: pat.group(1)+pat.group(2).replace(']', '#').replace('[', '#')+pat.group(3), text)
print text
```
Output:
```
rjg[]u[ur"fur#ufrng##"gree
```
|
An option would be using [`str`](https://docs.python.org/3/library/stdtypes.html#str) built-in functions [`split()`](https://docs.python.org/3/library/stdtypes.html#str.split) and [`replace()`](https://docs.python.org/3/library/stdtypes.html#str.replace) like below (**without regex**):
```
s = 'rjg[]u[ur"fur[ufrng[]"gree'
l = s.split('"')
new_string = '"'.join(w.replace('[', '#').replace(']', '#') for w in l[1:-1])
res = '"'.join((l[0], new_string, l[-1]))
```
**Output:**
```
>>> res
'rjg[]u[ur"fur#ufrng##"gree'
```
|
45,690,043
|
I have a str like
`rjg[]u[ur"fur[ufrng[]"gree`,
and I want to replace "[" and "]" between double quotes with #; the result is
`rjg[]u[ur"fur[ufrng[]"gree` => `rjg[]u[ur"fur#ufrng##"gree`,
How can I get this in Python?
|
2017/08/15
|
[
"https://Stackoverflow.com/questions/45690043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6298732/"
] |
A one-liner solution:
```
import re
text = 'rjg[]u[ur"fur[ufrng[]"gree'
text = re.sub(r'(")([^"]+)(")', lambda pat: pat.group(1)+pat.group(2).replace(']', '#').replace('[', '#')+pat.group(3), text)
print text
```
Output:
```
rjg[]u[ur"fur#ufrng##"gree
```
|
A one-liner without regular expressions (though your solution is very nice, @jpnkls):
```
>>> text = 'rjg[]u[ur"fur[ufrng[]"gre[e]"abc[d"ef]"'
>>> '\"'.join([substr.replace('[', '#').replace(']', '#') if n % 2 == 1 else substr for n, substr in enumerate(text.split('\"')[:-1])]+[text.split('\"')[-1]])
rjg[]u[ur"fur#ufrng##"gre[e]"abc#d"ef]"
```
This still works for uneven numbers of quotes and a quote at the beginning or the end.
|
49,638,674
|
I have a string `s`, and I want to remove `'.mainlog'` from it. I tried:
```
>>> s = 'ntm_MonMar26_16_59_41_2018.mainlog'
>>> s.strip('.mainlog')
'tm_MonMar26_16_59_41_2018'
```
Why did the `n` get removed from `'ntm...'`?
Similarly, I had another issue:
```
>>> s = 'MonMar26_16_59_41_2018_rerun.mainlog'
>>> s.strip('.mainlog')
'MonMar26_16_59_41_2018_reru'
```
Why does Python insist on removing `n`'s from my strings? How can I properly remove `.mainlog` from my strings?
|
2018/04/03
|
[
"https://Stackoverflow.com/questions/49638674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/868546/"
] |
From Python documentation:
<https://docs.python.org/2/library/string.html#string.strip>
It strips all of the characters you mentioned ('.', 'm', 'a', 'i', ...).
You can use `str.replace` instead.
```
s.replace('.mainlog', '')
```
|
You are using the wrong function. `strip` removes characters from the beginning and end of the string. By default it removes whitespace, but you can give it a set of characters to remove.
You should use this instead:
```
s.replace('.mainlog', '')
```
Or:
```
import os.path
os.path.splitext(s)[0]
```
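A quick comparison of the two approaches on the question's string:

```python
import os.path

s = 'ntm_MonMar26_16_59_41_2018.mainlog'

# str.replace removes every occurrence of the exact substring
print(s.replace('.mainlog', ''))   # ntm_MonMar26_16_59_41_2018

# os.path.splitext splits off only the final extension
stem, ext = os.path.splitext(s)
print(stem)                        # ntm_MonMar26_16_59_41_2018
print(ext)                         # .mainlog
```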
|
49,638,674
|
I have a string `s`, and I want to remove `'.mainlog'` from it. I tried:
```
>>> s = 'ntm_MonMar26_16_59_41_2018.mainlog'
>>> s.strip('.mainlog')
'tm_MonMar26_16_59_41_2018'
```
Why did the `n` get removed from `'ntm...'`?
Similarly, I had another issue:
```
>>> s = 'MonMar26_16_59_41_2018_rerun.mainlog'
>>> s.strip('.mainlog')
'MonMar26_16_59_41_2018_reru'
```
Why does Python insist on removing `n`'s from my strings? How can I properly remove `.mainlog` from my strings?
|
2018/04/03
|
[
"https://Stackoverflow.com/questions/49638674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/868546/"
] |
If you read the docs for [`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip) you will see that:
>
> The chars argument is a string specifying the **set of characters** to be removed.
>
>
>
So all the characters in `'.mainlog'` (`['.', 'm', 'a', 'i', 'n', 'l', 'o', 'g']`) are stripped just from the beginning and end.
---
What you want is [`str.replace`](https://docs.python.org/3/library/stdtypes.html#str.replace) to replace all occurrences of `'.mainlog'` with nothing:
```
s.replace('.mainlog', '')
#'ntm_MonMar26_16_59_41_2018'
```
|
You are using the wrong function. `strip` removes characters from the beginning and end of the string. By default it removes whitespace, but you can give it a set of characters to remove.
You should use this instead:
```
s.replace('.mainlog', '')
```
Or:
```
import os.path
os.path.splitext(s)[0]
```
|
49,638,674
|
I have a string `s`, and I want to remove `'.mainlog'` from it. I tried:
```
>>> s = 'ntm_MonMar26_16_59_41_2018.mainlog'
>>> s.strip('.mainlog')
'tm_MonMar26_16_59_41_2018'
```
Why did the `n` get removed from `'ntm...'`?
Similarly, I had another issue:
```
>>> s = 'MonMar26_16_59_41_2018_rerun.mainlog'
>>> s.strip('.mainlog')
'MonMar26_16_59_41_2018_reru'
```
Why does Python insist on removing `n`'s from my strings? How can I properly remove `.mainlog` from my strings?
|
2018/04/03
|
[
"https://Stackoverflow.com/questions/49638674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/868546/"
] |
If you read the docs for [`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip) you will see that:
>
> The chars argument is a string specifying the **set of characters** to be removed.
>
>
>
So all the characters in `'.mainlog'` (`['.', 'm', 'a', 'i', 'n', 'l', 'o', 'g']`) are stripped just from the beginning and end.
---
What you want is [`str.replace`](https://docs.python.org/3/library/stdtypes.html#str.replace) to replace all occurrences of `'.mainlog'` with nothing:
```
s.replace('.mainlog', '')
#'ntm_MonMar26_16_59_41_2018'
```
|
From Python documentation:
<https://docs.python.org/2/library/string.html#string.strip>
It strips all of the characters you mentioned ('.', 'm', 'a', 'i', ...).
You can use `str.replace` instead.
```
s.replace('.mainlog', '')
```
|
49,638,674
|
I have a string `s`, and I want to remove `'.mainlog'` from it. I tried:
```
>>> s = 'ntm_MonMar26_16_59_41_2018.mainlog'
>>> s.strip('.mainlog')
'tm_MonMar26_16_59_41_2018'
```
Why did the `n` get removed from `'ntm...'`?
Similarly, I had another issue:
```
>>> s = 'MonMar26_16_59_41_2018_rerun.mainlog'
>>> s.strip('.mainlog')
'MonMar26_16_59_41_2018_reru'
```
Why does Python insist on removing `n`'s from my strings? How can I properly remove `.mainlog` from my strings?
|
2018/04/03
|
[
"https://Stackoverflow.com/questions/49638674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/868546/"
] |
From Python documentation:
<https://docs.python.org/2/library/string.html#string.strip>
It strips all of the characters you mentioned ('.', 'm', 'a', 'i', ...).
You can use `str.replace` instead.
```
s.replace('.mainlog', '')
```
|
The argument to the strip function, in this case `.mainlog`, is not treated as a string; it's a set of individual characters.
`strip` removes all leading and trailing characters that are in that set.
We'd get the same result if we passed in the argument `aiglmno.`.
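A quick check of that claim, using the string from the question:

```python
s = 'ntm_MonMar26_16_59_41_2018.mainlog'

# the argument is treated as a set of characters, so its order is irrelevant
assert s.strip('.mainlog') == s.strip('aiglmno.')
print(s.strip('.mainlog'))   # tm_MonMar26_16_59_41_2018
```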
|
49,638,674
|
I have a string `s`, and I want to remove `'.mainlog'` from it. I tried:
```
>>> s = 'ntm_MonMar26_16_59_41_2018.mainlog'
>>> s.strip('.mainlog')
'tm_MonMar26_16_59_41_2018'
```
Why did the `n` get removed from `'ntm...'`?
Similarly, I had another issue:
```
>>> s = 'MonMar26_16_59_41_2018_rerun.mainlog'
>>> s.strip('.mainlog')
'MonMar26_16_59_41_2018_reru'
```
Why does Python insist on removing `n`'s from my strings? How can I properly remove `.mainlog` from my strings?
|
2018/04/03
|
[
"https://Stackoverflow.com/questions/49638674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/868546/"
] |
If you read the docs for [`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip) you will see that:
>
> The chars argument is a string specifying the **set of characters** to be removed.
>
>
>
So all the characters in `'.mainlog'` (`['.', 'm', 'a', 'i', 'n', 'l', 'o', 'g']`) are stripped just from the beginning and end.
---
What you want is [`str.replace`](https://docs.python.org/3/library/stdtypes.html#str.replace) to replace all occurrences of `'.mainlog'` with nothing:
```
s.replace('.mainlog', '')
#'ntm_MonMar26_16_59_41_2018'
```
|
The argument to the strip function — in this case `.mainlog` — is not treated as a substring; it's treated as a set of individual characters.
strip removes all leading and trailing characters that are in that set.
We'd get the same result if we passed in the argument `aiglmno.`.
|
44,218,387
|
This is what I encountered when trying to import the thread package:
```
>>> import thread
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/thread.py", line 3
    print('This is ultran00b's package - thread')
```
I tried uninstalling and installing again, but it won't work.
|
2017/05/27
|
[
"https://Stackoverflow.com/questions/44218387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7074612/"
] |
The `thread` module was removed in Python 3 (it survives only as the internal `_thread` module). Use `threading` instead:
```
import threading
```
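A minimal usage sketch (the worker function and values are illustrative, not from the question):

```python
import threading

results = []

def worker(n):
    # Each thread appends its square to the shared list.
    # list.append is safe to call from multiple threads in CPython.
    results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9]
```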
|
Are you trying to import the `Thread` class?
Use:
```
from threading import Thread
```
|
2,990,819
|
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does one exist?
Update:
The goal is to have the same template files for different languages.
```
<html>
{{head}}
{{ var|escape }}
{{body}}
</html>
```
can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution.
Any other template system available in Java, PHP and Python is also suitable.
|
2010/06/07
|
[
"https://Stackoverflow.com/questions/2990819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108826/"
] |
* <http://www.jangod.org/> (There is now also <https://github.com/HubSpot/jinjava>)
* run django via jython on jvm
* use <http://mustache.github.com/>
|
Sure, there are all sorts of template engines for Java. I've used FreeMarker, Velocity and StringTemplate. I'm not sure what you mean by Django-like syntax; each engine has its own variations on a templating approach.
For a comparison of some different engines check out [here](http://java-source.net/open-source/template-engines).
|
2,990,819
|
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does one exist?
Update:
The goal is to have the same template files for different languages.
```
<html>
{{head}}
{{ var|escape }}
{{body}}
</html>
```
can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution.
Any other template system available in Java, PHP and Python is also suitable.
|
2010/06/07
|
[
"https://Stackoverflow.com/questions/2990819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108826/"
] |
If you want the same templates for different languages, you might take a look at Clearsilver.
Clearsilver is a language-neutral template engine which helps separate presentation from code by inserting a language-neutral hierarchical data format (HDF) between your code and templates. Think of HDF like XML, but much simpler.
It is used for many high-traffic websites, including Yahoo! Groups, Gmail Static HTML, orkut.com, wunderground.com, and others. Implementation languages used with it include C/C++, Python, Java, Ruby, PHP, C#, and others. The Python framework also includes a Page-Class dispatcher and a simple ORM which is a bit Ruby-on-Rails-like in that it makes mapping between database tables, HDF, and templates take very little code.
The main Clearsilver implementation is in C with language-specific wrappers. There is also a 100% Java implementation made by Google and open-sourced, called JSilver.
<http://www.clearsilver.net/>
<http://code.google.com/p/jsilver/>
|
Sure, there are all sorts of template engines for Java. I've used FreeMarker, Velocity and StringTemplate. I'm not sure what you mean by Django-like syntax; each engine has its own variations on a templating approach.
For a comparison of some different engines check out [here](http://java-source.net/open-source/template-engines).
|
2,990,819
|
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does one exist?
Update:
The goal is to have the same template files for different languages.
```
<html>
{{head}}
{{ var|escape }}
{{body}}
</html>
```
can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution.
Any other template system available in Java, PHP and Python is also suitable.
|
2010/06/07
|
[
"https://Stackoverflow.com/questions/2990819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108826/"
] |
I've developed [Jtwig](http://jtwig.org). You could give it a try. It's being used in some projects with success. It's easy to set up, with nice integration with Spring Web MVC.
Just include the dependency using maven or a similar system.
```
<dependency>
<groupId>com.lyncode</groupId>
<artifactId>jtwig-spring</artifactId>
<version>2.0.3</version>
</dependency>
```
And configure the view resolver bean to return the Jtwig one.
```
@Bean
public ViewResolver viewResolver() {
JtwigViewResolver viewResolver = new JtwigViewResolver();
viewResolver.setPrefix("/WEB-INF/views/");
viewResolver.setSuffix(".twig");
return viewResolver;
}
```
Or, if you use XML-based configuration:
```
<bean id="viewResolver" class="com.lyncode.jtwig.mvc.JtwigViewResolver">
<property name="prefix" value="/WEB-INF/views/"/>
<property name="suffix" value=".twig"/>
</bean>
```
|
Sure, there are all sorts of template engines for Java. I've used FreeMarker, Velocity and StringTemplate. I'm not sure what you mean by Django-like syntax; each engine has its own variations on a templating approach.
For a comparison of some different engines check out [here](http://java-source.net/open-source/template-engines).
|
2,990,819
|
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does one exist?
Update:
The goal is to have the same template files for different languages.
```
<html>
{{head}}
{{ var|escape }}
{{body}}
</html>
```
can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution.
Any other template system available in Java, PHP and Python is also suitable.
|
2010/06/07
|
[
"https://Stackoverflow.com/questions/2990819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108826/"
] |
* <http://www.jangod.org/> (There is now also <https://github.com/HubSpot/jinjava>)
* run django via jython on jvm
* use <http://mustache.github.com/>
|
If you want the same templates for different languages, you might take a look at Clearsilver.
Clearsilver is a language-neutral template engine which helps separate presentation from code by inserting a language-neutral hierarchical data format (HDF) between your code and templates. Think of HDF like XML, but much simpler.
It is used for many high-traffic websites, including Yahoo! Groups, Gmail Static HTML, orkut.com, wunderground.com, and others. Implementation languages used with it include C/C++, Python, Java, Ruby, PHP, C#, and others. The Python framework also includes a Page-Class dispatcher and a simple ORM which is a bit Ruby-on-Rails-like in that it makes mapping between database tables, HDF, and templates take very little code.
The main Clearsilver implementation is in C with language-specific wrappers. There is also a 100% Java implementation made by Google and open-sourced, called JSilver.
<http://www.clearsilver.net/>
<http://code.google.com/p/jsilver/>
|
2,990,819
|
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does one exist?
Update:
The goal is to have the same template files for different languages.
```
<html>
{{head}}
{{ var|escape }}
{{body}}
</html>
```
can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution.
Any other template system available in Java, PHP and Python is also suitable.
|
2010/06/07
|
[
"https://Stackoverflow.com/questions/2990819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108826/"
] |
* <http://www.jangod.org/> (There is now also <https://github.com/HubSpot/jinjava>)
* run django via jython on jvm
* use <http://mustache.github.com/>
|
You can use [Mustache.java](https://github.com/spullara/mustache.java) and [Handlebars.java](https://github.com/jknack/handlebars.java). Mustache is very minimalistic. Handlebars is similar and compatible with Mustache, but you can **very easily** write your own extensions.
|
2,990,819
|
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does one exist?
Update:
The goal is to have the same template files for different languages.
```
<html>
{{head}}
{{ var|escape }}
{{body}}
</html>
```
can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution.
Any other template system available in Java, PHP and Python is also suitable.
|
2010/06/07
|
[
"https://Stackoverflow.com/questions/2990819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108826/"
] |
I've developed [Jtwig](http://jtwig.org). You could give it a try. It's being used in some projects with success. It's easy to set up, with nice integration with Spring Web MVC.
Just include the dependency using maven or a similar system.
```
<dependency>
<groupId>com.lyncode</groupId>
<artifactId>jtwig-spring</artifactId>
<version>2.0.3</version>
</dependency>
```
And configure the view resolver bean to return the Jtwig one.
```
@Bean
public ViewResolver viewResolver() {
JtwigViewResolver viewResolver = new JtwigViewResolver();
viewResolver.setPrefix("/WEB-INF/views/");
viewResolver.setSuffix(".twig");
return viewResolver;
}
```
Or, if you use XML-based configuration:
```
<bean id="viewResolver" class="com.lyncode.jtwig.mvc.JtwigViewResolver">
<property name="prefix" value="/WEB-INF/views/"/>
<property name="suffix" value=".twig"/>
</bean>
```
|
If you want the same templates for different languages, you might take a look at Clearsilver.
Clearsilver is a language-neutral template engine which helps separate presentation from code by inserting a language-neutral hierarchical data format (HDF) between your code and templates. Think of HDF like XML, but much simpler.
It is used for many high-traffic websites, including Yahoo! Groups, Gmail Static HTML, orkut.com, wunderground.com, and others. Implementation languages used with it include C/C++, Python, Java, Ruby, PHP, C#, and others. The Python framework also includes a Page-Class dispatcher and a simple ORM which is a bit Ruby-on-Rails-like in that it makes mapping between database tables, HDF, and templates take very little code.
The main Clearsilver implementation is in C with language-specific wrappers. There is also a 100% Java implementation made by Google and open-sourced, called JSilver.
<http://www.clearsilver.net/>
<http://code.google.com/p/jsilver/>
|
2,990,819
|
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does one exist?
Update:
The goal is to have the same template files for different languages.
```
<html>
{{head}}
{{ var|escape }}
{{body}}
</html>
```
can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution.
Any other template system available in Java, PHP and Python is also suitable.
|
2010/06/07
|
[
"https://Stackoverflow.com/questions/2990819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108826/"
] |
If you want the same templates for different languages, you might take a look at Clearsilver.
Clearsilver is a language-neutral template engine which helps separate presentation from code by inserting a language-neutral hierarchical data format (HDF) between your code and templates. Think of HDF like XML, but much simpler.
It is used for many high-traffic websites, including Yahoo! Groups, Gmail Static HTML, orkut.com, wunderground.com, and others. Implementation languages used with it include C/C++, Python, Java, Ruby, PHP, C#, and others. The Python framework also includes a Page-Class dispatcher and a simple ORM which is a bit Ruby-on-Rails-like in that it makes mapping between database tables, HDF, and templates take very little code.
The main Clearsilver implementation is in C with language-specific wrappers. There is also a 100% Java implementation made by Google and open-sourced, called JSilver.
<http://www.clearsilver.net/>
<http://code.google.com/p/jsilver/>
|
You can use [Mustache.java](https://github.com/spullara/mustache.java) and [Handlebars.java](https://github.com/jknack/handlebars.java). Mustache is very minimalistic. Handlebars is similar and compatible with Mustache, but you can **very easily** write your own extensions.
|
2,990,819
|
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does one exist?
Update:
The goal is to have the same template files for different languages.
```
<html>
{{head}}
{{ var|escape }}
{{body}}
</html>
```
can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution.
Any other template system available in Java, PHP and Python is also suitable.
|
2010/06/07
|
[
"https://Stackoverflow.com/questions/2990819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108826/"
] |
I've developed [Jtwig](http://jtwig.org). You could give it a try. It's being used in some projects with success. It's easy to set up, with nice integration with Spring Web MVC.
Just include the dependency using maven or a similar system.
```
<dependency>
<groupId>com.lyncode</groupId>
<artifactId>jtwig-spring</artifactId>
<version>2.0.3</version>
</dependency>
```
And configure the view resolver bean to return the Jtwig one.
```
@Bean
public ViewResolver viewResolver() {
JtwigViewResolver viewResolver = new JtwigViewResolver();
viewResolver.setPrefix("/WEB-INF/views/");
viewResolver.setSuffix(".twig");
return viewResolver;
}
```
Or, if you use XML-based configuration:
```
<bean id="viewResolver" class="com.lyncode.jtwig.mvc.JtwigViewResolver">
<property name="prefix" value="/WEB-INF/views/"/>
<property name="suffix" value=".twig"/>
</bean>
```
|
You can use [Mustache.java](https://github.com/spullara/mustache.java) and [Handlebars.java](https://github.com/jknack/handlebars.java). Mustache is very minimalistic. Handlebars is similar and compatible with Mustache, but you can **very easily** write your own extensions.
|
55,656,522
|
I installed Python 3.7.3 on Windows 10, but I can't install Python packages via pip in Git Bash (Git SCM), due to my company's internet proxy.
I tried to create environment variables for the proxy via the following, but it didn't work:
* export http\_proxy='proxy.com:8080'
* export https\_proxy='proxy.com:8080'
I found a temporary solution that works for me: inserting the following aliases into the .bashrc file:
* alias python='winpty python.exe'
* alias pip='pip --proxy=proxy.com:8080'
The above works, but I am looking for a nicer solution so that I don't need to set aliases for every command I use. I was thinking about something like an environment variable, but haven't found out how to set it up in a Windows Git Bash environment yet.
Do you have an idea on how to do it?
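Not from the thread, but one alias-free option worth checking: pip can read its proxy from a per-user configuration file, so it only has to be set once (the path below is the standard Windows location; the proxy host/port are the placeholders from the question):

```ini
; %APPDATA%\pip\pip.ini
[global]
proxy = http://proxy.com:8080
```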
|
2019/04/12
|
[
"https://Stackoverflow.com/questions/55656522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8691122/"
] |
One approach here would be to use lookarounds to ensure that you match *only* islands of exactly two sixes:
```
String regex = "(?<!6)66(?!6)";
String text = "6678793346666786784966";
Pattern pattern = Pattern.compile(regex);
Matcher matcher = pattern.matcher(text);
```
This finds a count of two, for the input string you provided (the two matches being the `66` at the very start and end of the string).
The regex pattern uses two lookarounds to assert that what comes before the first 6 and after the second 6 are *not* other sixes:
```
(?<!6) assert that what precedes is NOT 6
66 match and consume two 6's
(?!6) assert that what follows is NOT 6
```
|
You need to use
```
String regex = "(?<!6)66(?!6)";
```
See the [regex demo](https://regex101.com/r/3QHER6/2).
[](https://i.stack.imgur.com/6b4St.png)
**Details**
* `(?<!6)` - no `6` right before the current location
* `66` - `66` substring
* `(?!6)` - no `6` right after the current location.
See the [Java demo](https://ideone.com/UrxExY):
```
String regex = "(?<!6)66(?!6)";
String text = "6678793346666786784966";
Pattern pattern = Pattern.compile(regex);
Matcher matcher = pattern.matcher(text);
int match=0;
while (matcher.find()) {
match++;
}
System.out.println("count is "+match); // => count is 2
```
|
55,656,522
|
I installed Python 3.7.3 on Windows 10, but I can't install Python packages via pip in Git Bash (Git SCM), due to my company's internet proxy.
I tried to create environment variables for the proxy via the following, but it didn't work:
* export http\_proxy='proxy.com:8080'
* export https\_proxy='proxy.com:8080'
I found a temporary solution that works for me: inserting the following aliases into the .bashrc file:
* alias python='winpty python.exe'
* alias pip='pip --proxy=proxy.com:8080'
The above works, but I am looking for a nicer solution so that I don't need to set aliases for every command I use. I was thinking about something like an environment variable, but haven't found out how to set it up in a Windows Git Bash environment yet.
Do you have an idea on how to do it?
|
2019/04/12
|
[
"https://Stackoverflow.com/questions/55656522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8691122/"
] |
One approach here would be to use lookarounds to ensure that you match *only* islands of exactly two sixes:
```
String regex = "(?<!6)66(?!6)";
String text = "6678793346666786784966";
Pattern pattern = Pattern.compile(regex);
Matcher matcher = pattern.matcher(text);
```
This finds a count of two, for the input string you provided (the two matches being the `66` at the very start and end of the string).
The regex pattern uses two lookarounds to assert that what comes before the first 6 and after the second 6 are *not* other sixes:
```
(?<!6) assert that what precedes is NOT 6
66 match and consume two 6's
(?!6) assert that what follows is NOT 6
```
|
This didn't take long to come up with. I like regular expressions but I don't use them unless really necessary. Here is one loop method that appears to work.
```
char TARGET = '6';
int GROUPSIZE = 2;
// String with random termination character that's not a TARGET
String s = "6678793346666786784966" + "z";
int consecutiveCount = 0;
int groupCount = 0;
for (char c : s.toCharArray()) {
if (c == TARGET) {
consecutiveCount++;
}
else {
// if current character is not a TARGET, update group count if
// consecutive count equals GROUPSIZE
if (consecutiveCount == GROUPSIZE) {
groupCount++;
}
// in any event, reset consecutive count
consecutiveCount = 0;
}
}
System.out.println(groupCount);
```
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
The fix based on the comment led to this fun way. It assumes no commas occur in the string entries of the list to be joined (which would be problematic anyway, so is a reasonable assumption.)
```
def special_join(my_list):
    return ", ".join(my_list)[::-1].replace(",", "dna ", 1)[::-1]

In [51]: special_join(["foo", "bar", "baz", "bah"])
Out[51]: 'foo, bar, baz and bah'
In [52]: special_join(["foo"])
Out[52]: 'foo'
In [53]: special_join(["foo", "bar"])
Out[53]: 'foo and bar'
```
|
In case you need a solution where negative indexing isn't supported (e.g. a Django QuerySet):
```
def oxford_join(string_list):
if len(string_list) < 1:
text = ''
elif len(string_list) == 1:
text = string_list[0]
elif len(string_list) == 2:
text = ' and '.join(string_list)
else:
text = ', '.join(string_list)
text = '{parts[0]}, and {parts[2]}'.format(parts=text.rpartition(', ')) # oxford comma
return text
oxford_join(['Apples', 'Oranges', 'Mangoes'])
```
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
The fix based on the comment led to this fun way. It assumes no commas occur in the string entries of the list to be joined (which would be problematic anyway, so is a reasonable assumption.)
```
def special_join(my_list):
    return ", ".join(my_list)[::-1].replace(",", "dna ", 1)[::-1]

In [51]: special_join(["foo", "bar", "baz", "bah"])
Out[51]: 'foo, bar, baz and bah'
In [52]: special_join(["foo"])
Out[52]: 'foo'
In [53]: special_join(["foo", "bar"])
Out[53]: 'foo and bar'
```
|
Just special-case the last one. Something like this:
```
'%s and %s' % (', '.join(mylist[:-1]), mylist[-1])
```
There's probably not going to be a more concise method.
Note this will fail in the zero-item case, and produces a stray leading " and " for a one-item list.
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
This expression does it:
```
print ", ".join(data[:-2] + [" and ".join(data[-2:])])
```
As seen here:
```
>>> data
['foo', 'bar', 'baaz', 'bah']
>>> while data:
... print ", ".join(data[:-2] + [" and ".join(data[-2:])])
... data.pop()
...
foo, bar, baaz and bah
foo, bar and baaz
foo and bar
foo
```
|
The fix based on the comment led to this fun way. It assumes no commas occur in the string entries of the list to be joined (which would be problematic anyway, so is a reasonable assumption.)
```
def special_join(my_list):
    return ", ".join(my_list)[::-1].replace(",", "dna ", 1)[::-1]

In [51]: special_join(["foo", "bar", "baz", "bah"])
Out[51]: 'foo, bar, baz and bah'
In [52]: special_join(["foo"])
Out[52]: 'foo'
In [53]: special_join(["foo", "bar"])
Out[53]: 'foo and bar'
```
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
This expression does it:
```
print ", ".join(data[:-2] + [" and ".join(data[-2:])])
```
As seen here:
```
>>> data
['foo', 'bar', 'baaz', 'bah']
>>> while data:
... print ", ".join(data[:-2] + [" and ".join(data[-2:])])
... data.pop()
...
foo, bar, baaz and bah
foo, bar and baaz
foo and bar
foo
```
|
Good answers are already available. This one works in all test cases and is slightly different from some of the others.
```
def grammar_join(words):
return reduce(lambda x, y: x and x + ' and ' + y or y,
(', '.join(words[:-1]), words[-1])) if words else ''
tests = ([], ['a'], ['a', 'b'], ['a', 'b', 'c'])
for test in tests:
print grammar_join(test)
```
---
```
a
a and b
a, b and c
```
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
Try this, it takes into consideration the edge cases and uses `format()`, to show another possible solution:
```
def my_join(lst):
if not lst:
return ""
elif len(lst) == 1:
return str(lst[0])
return "{} and {}".format(", ".join(lst[:-1]), lst[-1])
```
Works as expected:
```
my_join([])
=> ""
my_join(["x"])
=> "x"
my_join(["x", "y"])
=> "x and y"
my_join(["x", "y", "z"])
=> "x, y and z"
```
|
Just special-case the last one. Something like this:
```
'%s and %s' % (', '.join(mylist[:-1]), mylist[-1])
```
There's probably not going to be a more concise method.
Note this will fail in the zero-item case, and produces a stray leading " and " for a one-item list.
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
This expression does it:
```
print ", ".join(data[:-2] + [" and ".join(data[-2:])])
```
As seen here:
```
>>> data
['foo', 'bar', 'baaz', 'bah']
>>> while data:
... print ", ".join(data[:-2] + [" and ".join(data[-2:])])
... data.pop()
...
foo, bar, baaz and bah
foo, bar and baaz
foo and bar
foo
```
|
Try this, it takes into consideration the edge cases and uses `format()`, to show another possible solution:
```
def my_join(lst):
if not lst:
return ""
elif len(lst) == 1:
return str(lst[0])
return "{} and {}".format(", ".join(lst[:-1]), lst[-1])
```
Works as expected:
```
my_join([])
=> ""
my_join(["x"])
=> "x"
my_join(["x", "y"])
=> "x and y"
my_join(["x", "y", "z"])
=> "x, y and z"
```
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
Good answers are already available. This one works in all test cases and is slightly different from some of the others.
```
def grammar_join(words):
return reduce(lambda x, y: x and x + ' and ' + y or y,
(', '.join(words[:-1]), words[-1])) if words else ''
tests = ([], ['a'], ['a', 'b'], ['a', 'b', 'c'])
for test in tests:
print grammar_join(test)
```
---
```
a
a and b
a, b and c
```
|
In case you need a solution where negative indexing isn't supported (e.g. a Django QuerySet):
```
def oxford_join(string_list):
if len(string_list) < 1:
text = ''
elif len(string_list) == 1:
text = string_list[0]
elif len(string_list) == 2:
text = ' and '.join(string_list)
else:
text = ', '.join(string_list)
text = '{parts[0]}, and {parts[2]}'.format(parts=text.rpartition(', ')) # oxford comma
return text
oxford_join(['Apples', 'Oranges', 'Mangoes'])
```
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
This expression does it:
```
print ", ".join(data[:-2] + [" and ".join(data[-2:])])
```
As seen here:
```
>>> data
['foo', 'bar', 'baaz', 'bah']
>>> while data:
... print ", ".join(data[:-2] + [" and ".join(data[-2:])])
... data.pop()
...
foo, bar, baaz and bah
foo, bar and baaz
foo and bar
foo
```
|
Just special-case the last one. Something like this:
```
'%s and %s' % (', '.join(mylist[:-1]), mylist[-1])
```
There's probably not going to be a more concise method.
Note this will fail in the zero-item case, and produces a stray leading " and " for a one-item list.
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
This expression does it:
```
print ", ".join(data[:-2] + [" and ".join(data[-2:])])
```
As seen here:
```
>>> data
['foo', 'bar', 'baaz', 'bah']
>>> while data:
... print ", ".join(data[:-2] + [" and ".join(data[-2:])])
... data.pop()
...
foo, bar, baaz and bah
foo, bar and baaz
foo and bar
foo
```
|
In case you need a solution where negative indexing isn't supported (e.g. a Django QuerySet):
```
def oxford_join(string_list):
if len(string_list) < 1:
text = ''
elif len(string_list) == 1:
text = string_list[0]
elif len(string_list) == 2:
text = ' and '.join(string_list)
else:
text = ', '.join(string_list)
text = '{parts[0]}, and {parts[2]}'.format(parts=text.rpartition(', ')) # oxford comma
return text
oxford_join(['Apples', 'Oranges', 'Mangoes'])
```
|
19,838,976
|
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"?
```
["foo"] --> "foo"
["foo","bar"] --> "foo and bar"
["foo","bar","baz"] --> "foo, bar and baz"
["foo","bar","baz","bah"] --> "foo, bar, baz and bah"
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19838976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277170/"
] |
Try this, it takes into consideration the edge cases and uses `format()`, to show another possible solution:
```
def my_join(lst):
if not lst:
return ""
elif len(lst) == 1:
return str(lst[0])
return "{} and {}".format(", ".join(lst[:-1]), lst[-1])
```
Works as expected:
```
my_join([])
=> ""
my_join(["x"])
=> "x"
my_join(["x", "y"])
=> "x and y"
my_join(["x", "y", "z"])
=> "x, y and z"
```
|
Good answers are already available. This one works in all test cases and is slightly different from some of the others.
```
def grammar_join(words):
return reduce(lambda x, y: x and x + ' and ' + y or y,
(', '.join(words[:-1]), words[-1])) if words else ''
tests = ([], ['a'], ['a', 'b'], ['a', 'b', 'c'])
for test in tests:
print grammar_join(test)
```
---
```
a
a and b
a, b and c
```
|
13,555,386
|
I'm trying to start a Celery worker from the command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the next error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
I forgot to create a celery object in tasks.py:
```
from celery import Celery
from celery import task
celery = Celery('tasks', broker='amqp://guest@localhost//') #!
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
@task()
def add_photos_task( lad_id ):
...
```
After that, the worker starts normally:
```
celery -A tasks worker --loglevel=info
```
|
When you run `celery -A tasks worker --loglevel=info`, your celery app must be exposed at module level in the module `tasks`. It shouldn't be wrapped in a function or an `if __name__ == '__main__':` block.
If you call `make_celery` in another file, you should import the celery app into the file you are passing to celery.
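To see why module-level placement matters, here is a minimal, Celery-free sketch (the `FakeCelery` class and the temp-file module are stand-ins, not real Celery API) of what `-A tasks` effectively does: import the module, then read its `celery` attribute:

```python
import importlib.util
import os
import tempfile
import textwrap

# Hypothetical tasks.py whose app object is created at module level.
# FakeCelery stands in for celery.Celery so the sketch runs anywhere.
source = textwrap.dedent("""
    class FakeCelery:
        pass

    celery = FakeCelery()   # module-level: visible after a plain import
""")

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "tasks.py")
    with open(path, "w") as fh:
        fh.write(source)
    spec = importlib.util.spec_from_file_location("tasks", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    print(hasattr(module, "celery"))  # True: find_app would succeed
```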
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
Celery uses a `celery` module for storing your app's configuration; you can't just point it at a Python file with tasks and start Celery.
You should define a `celery` file (for Celery > 3.0; previously it was `celeryconfig.py`):
>
> celeryd --app app.celery -l info
>
>
>
This example shows how to start Celery with a config file at `app/celery.py`.
Here is an example of such a file: <https://github.com/Kami/libcloud-sandbox/blob/master/celeryconfig.py>
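For reference, a minimal Celery 3.x app module of the kind described here might look like the sketch below (the module layout, broker URL, and `app.tasks` name are illustrative assumptions, not taken from the original answer):

```python
# app/celery.py -- illustrative sketch, assuming Celery 3.x is installed
from celery import Celery

celery = Celery('app',
                broker='amqp://guest@localhost//',
                include=['app.tasks'])  # modules containing @task functions

if __name__ == '__main__':
    celery.start()
```

With this layout, `celeryd --app app.celery -l info` can find the `celery` attribute.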
|
My problem was that I put the `celery` variable inside the `if __name__ == '__main__':` block:
```
if __name__ == '__main__':  # Remove this guard
    app = Flask(__name__)
    celery = make_celery(app)
```
when it should be defined at module level, outside the guard.
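A small, self-contained sketch of why the guard breaks things: code under `if __name__ == '__main__':` never runs when the module is merely *imported*, which is exactly what the celery CLI does, so the attribute never comes into existence (the temp-file module here is a stand-in for a real `tasks.py`):

```python
import importlib.util
import os
import tempfile
import textwrap

# Hypothetical tasks.py that only creates the app when run as a script.
source = textwrap.dedent("""
    if __name__ == '__main__':
        celery = object()   # never executes on a plain import
""")

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "tasks.py")
    with open(path, "w") as fh:
        fh.write(source)
    spec = importlib.util.spec_from_file_location("tasks", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    print(hasattr(module, "celery"))  # False: hence the AttributeError
```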
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
I forgot to create a celery object in tasks.py:
```
from celery import Celery
from celery import task
celery = Celery('tasks', broker='amqp://guest@localhost//') #!
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
@task()
def add_photos_task( lad_id ):
...
```
After that, the worker starts normally:
```
celery -A tasks worker --loglevel=info
```
|
My problem was that I put the `celery` variable inside the `if __name__ == '__main__':` block:
```
if __name__ == '__main__':  # Remove this guard
    app = Flask(__name__)
    celery = make_celery(app)
```
when it should be defined at module level, outside the guard.
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
I forgot to create a celery object in tasks.py:
```
from celery import Celery
from celery import task
celery = Celery('tasks', broker='amqp://guest@localhost//') #!
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
@task()
def add_photos_task( lad_id ):
...
```
After that, the worker starts normally:
```
celery -A tasks worker --loglevel=info
```
|
Celery uses a `celery` module for storing your app's configuration; you can't just point it at a Python file with tasks and start Celery.
You should define a `celery` file (for Celery > 3.0; previously it was `celeryconfig.py`):
>
> celeryd --app app.celery -l info
>
>
>
This example shows how to start Celery with a config file at `app/celery.py`.
Here is an example of such a file: <https://github.com/Kami/libcloud-sandbox/blob/master/celeryconfig.py>
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
For anyone who is getting the same error message for an apparently different reason, note that if any of the imports in your initialization file fail, your app will raise this totally ambiguous `AttributeError` rather than the exception that initially caused it.
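A tiny sketch of the mechanism (the stub module and `find_app` here are illustrative stand-ins for celery internals): the CLI boils down to importing your module and reading `module.celery`, so a module whose initialization never got that far reports a missing attribute instead of the original failure:

```python
import types

# Stand-in for a tasks module whose initialization failed before the
# app object was created (e.g. an earlier import blew up).
tasks = types.ModuleType("tasks")

def find_app(module):
    # roughly what celery's find_app does: `return sym.celery`
    return module.celery

try:
    find_app(tasks)
except AttributeError as exc:
    print(type(exc).__name__)  # AttributeError
```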
|
My problem was that I put the `celery` variable inside the `if __name__ == '__main__':` block:
```
if __name__ == '__main__':  # Remove this guard
    app = Flask(__name__)
    celery = make_celery(app)
```
when it should be defined at module level, outside the guard.
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
My problem was that I put the `celery` variable inside the `if __name__ == '__main__':` block:
```
if __name__ == '__main__':  # Remove this guard
    app = Flask(__name__)
    celery = make_celery(app)
```
when it should be defined at module level, outside the guard.
|
Try starting celery with:
`celeryd --config=my_app.my_config --loglevel=INFO --purge -Q my_queue`
Here is the relevant code in my `tasks.py`:
```
@task(name="my_queue", routing_key="my_queue")
def add_photos_task( lad_id ):
```
And here is `my_config.py`:
```
CELERY_IMPORTS = \
(
"my_app.tasks",
)
CELERY_ROUTES = \
{
"my_queue":
{
"queue": "my_queue"
},
}
CELERY_QUEUES = \
{
"my_queue":
{
"exchange": "my_app",
"exchange_type": "direct",
"binding_key": "my_queue"
},
}
celery = Celery(broker='amqp://guest@localhost//')
```
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
I forgot to create a celery object in tasks.py:
```
from celery import Celery
from celery import task
celery = Celery('tasks', broker='amqp://guest@localhost//') #!
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
@task()
def add_photos_task( lad_id ):
...
```
After that, the worker starts normally:
```
celery -A tasks worker --loglevel=info
```
|
For anyone who is getting the same error message for an apparently different reason, note that if any of the imports in your initialization file fail, your app will raise this totally ambiguous `AttributeError` rather than the exception that initially caused it.
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
When you run `celery -A tasks worker --loglevel=info`, your celery app must be exposed at module level in the module `tasks`. It shouldn't be wrapped in a function or an `if __name__ == '__main__':` block.
If you call `make_celery` in another file, you should import the celery app into the file you are passing to celery.
|
Try starting celery with:
`celeryd --config=my_app.my_config --loglevel=INFO --purge -Q my_queue`
Here is the relevant code in my `tasks.py`:
```
@task(name="my_queue", routing_key="my_queue")
def add_photos_task( lad_id ):
```
And here is `my_config.py`:
```
CELERY_IMPORTS = \
(
"my_app.tasks",
)
CELERY_ROUTES = \
{
"my_queue":
{
"queue": "my_queue"
},
}
CELERY_QUEUES = \
{
"my_queue":
{
"exchange": "my_app",
"exchange_type": "direct",
"binding_key": "my_queue"
},
}
celery = Celery(broker='amqp://guest@localhost//')
```
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
For anyone who is getting the same error message for an apparently different reason, note that if any of the imports in your initialization file fail, your app will raise this totally ambiguous `AttributeError` rather than the exception that initially caused it.
|
Try starting celery with:
`celeryd --config=my_app.my_config --loglevel=INFO --purge -Q my_queue`
Here is the relevant code in my `tasks.py`:
```
@task(name="my_queue", routing_key="my_queue")
def add_photos_task( lad_id ):
```
And here is `my_config.py`:
```
CELERY_IMPORTS = \
(
"my_app.tasks",
)
CELERY_ROUTES = \
{
"my_queue":
{
"queue": "my_queue"
},
}
CELERY_QUEUES = \
{
"my_queue":
{
"exchange": "my_app",
"exchange_type": "direct",
"binding_key": "my_queue"
},
}
celery = Celery(broker='amqp://guest@localhost//')
```
|
13,555,386
|
I try to start a Celery worker server from a command line:
```
celery -A tasks worker --loglevel=info
```
The code in tasks.py:
```
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
from celery import task
@task()
def add_photos_task( lad_id ):
...
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/bin/celery", line 8, in <module>
load_entry_point('celery==3.0.12', 'console_scripts', 'celery')()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main
main()
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app
return sym.celery
AttributeError: 'module' object has no attribute 'celery'
```
Does anybody know why the 'celery' attribute cannot be found? Thank you for your help.
The operating system is Linux Debian 5.
**Edit**. Maybe this is a clue. Could anyone explain the following comment on a function (why must it make sure it finds modules in the current directory)?
```
# from celery/utils/imports.py
def import_from_cwd(module, imp=None, package=None):
"""Import module, but make sure it finds modules
located in the current directory.
Modules located in the current directory has
precedence over modules located in `sys.path`.
"""
if imp is None:
imp = importlib.import_module
with cwd_in_path():
return imp(module, package=package)
```
|
2012/11/25
|
[
"https://Stackoverflow.com/questions/13555386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749288/"
] |
I forgot to create a celery object in tasks.py:
```
from celery import Celery
from celery import task
celery = Celery('tasks', broker='amqp://guest@localhost//') #!
import os
os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings"
@task()
def add_photos_task( lad_id ):
...
```
After that, the worker starts normally:
```
celery -A tasks worker --loglevel=info
```
|
Try starting celery with:
`celeryd --config=my_app.my_config --loglevel=INFO --purge -Q my_queue`
Here is the relevant code in my `tasks.py`:
```
@task(name="my_queue", routing_key="my_queue")
def add_photos_task( lad_id ):
```
And here is `my_config.py`:
```
CELERY_IMPORTS = \
(
"my_app.tasks",
)
CELERY_ROUTES = \
{
"my_queue":
{
"queue": "my_queue"
},
}
CELERY_QUEUES = \
{
"my_queue":
{
"exchange": "my_app",
"exchange_type": "direct",
"binding_key": "my_queue"
},
}
celery = Celery(broker='amqp://guest@localhost//')
```
|
31,800,998
|
Issue: Remove the hyperlinks, numbers and signs like `^&*$ etc` from twitter text. The tweet file is in CSV tabulated format as shown below:
```
s.No. username tweetText
1. @abc This is a test #abc example.com
2. @bcd This is another test #bcd example.com
```
Being a novice at Python, I searched and strung together the following code, thanks to the code given [here](https://stackoverflow.com/questions/8376691/how-to-remove-hashtag-user-link-of-a-tweet-using-regular-expression):
```
import re
fileName="path-to-file//tweetfile.csv"
fileout=open("Output.txt","w")
with open(fileName,'r') as myfile:
data=myfile.read().lower() # read the file and convert all text to lowercase
clean_data=' '.join(re.sub("(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)"," ",data).split()) # regular expression to strip the html out of the text
fileout.write(clean_data+'\n') # write the cleaned data to a file
fileout.close()
myfile.close()
print "All done"
```
It does the data stripping, but the output file format is not what I want: everything ends up on a single line, like
`s.no username tweettext 1 abc this is a cleaned tweet 2 bcd this is another cleaned tweet 3 efg this is yet another cleaned tweet`
How can I fix this code to give me an output like given below:
```
s.No. username tweetText
1 abc This is a test
2 bcd This is another test
3 efg This is yet another test
```
I think something needs to be added in the regular expression code but I don't know what it could be. Any pointers or suggestions will be helpful.
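Not part of the original post, but one fix worth noting: applying the same regular expression line by line (instead of to the whole file at once) preserves the row structure. The sample tweets below are placeholders mirroring the table above:

```python
import re

# The OP's pattern: strip @mentions, URLs, and anything that isn't
# alphanumeric, space, or tab.
pattern = re.compile(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)")

def clean_line(line):
    # lowercase, substitute matches with spaces, then collapse whitespace
    return ' '.join(pattern.sub(' ', line.lower()).split())

tweets = [
    "s.No.\tusername\ttweetText",
    "1.\t@abc\tThis is a test #abc example.com",
    "2.\t@bcd\tThis is another test #bcd example.com",
]

for line in tweets:
    print(clean_line(line))  # one cleaned row per input row
```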
|
2015/08/04
|
[
"https://Stackoverflow.com/questions/31800998",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4195053/"
] |
To remove `&` and other HTML entities from a string you can use [html\_entity\_decode](http://php.net/manual/en/function.html-entity-decode.php):
```
while ($row = mysql_fetch_array($result)) {
$row['value'] = html_entity_decode($row['value']);
$row['id'] = (int) $row['client_id'];
$row_set[] = $row;
}
```
|
Change `htmlentities` to `html_entity_decode()`.
So the **final code** will be:
```
$term = trim(strip_tags($_GET['term']));
$term = str_replace(' ', '%', $term);
$qstring = "SELECT name as value, client_id FROM goa WHERE name LIKE '" . $term . "%' limit 0,5000";
$result = mysql_query($qstring);
$qcount = 0;
if ($result) {
    while ($row = mysql_fetch_array($result)) {
        $row['value'] = html_entity_decode(stripslashes($row['value'])); // change
        $row['id'] = (int) $row['client_id'];
        $row_set[] = $row; // build an array
        $qcount = $qcount + 1;
    }
}
echo json_encode($row_set); // format the array into JSON data
```
[`html_entity_decode()` example in W3Schools](http://www.w3schools.com/php/func_string_html_entity_decode.asp)
|
58,926,146
|
I trained a model with RBF kernel-based support vector machine regression. I want to know the features that are very important or major contributing features for the RBF kernel-based support vector machine. I know there is a method to know the most contributing features for linear support vector regression based on weight vectors which are the size of the vectors. However, for the RBF kernel-based support vector machine, since the features are transformed into a new space, I have no clue how to extract the most contributing features. I am using scikit-learn in python. Is there a way to extract the most contributing features in RBF kernel-based support vector regression or non-linear support vector regression?
```
from sklearn import svm
svm = svm.SVC(gamma=0.001, C=100., kernel = 'linear')
```
In this case:
[Determining the most contributing features for SVM classifier in sklearn](https://stackoverflow.com/questions/41592661/determining-the-most-contributing-features-for-svm-classifier-in-sklearn)
does work very well. However, if the kernel is changed in to
```
from sklearn import svm
svm = svm.SVC(gamma=0.001, C=100., kernel = 'rbf')
```
The above answer doesn't work.
|
2019/11/19
|
[
"https://Stackoverflow.com/questions/58926146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8739662/"
] |
Let me summarize the comments as an answer:
As you can read [here](https://stackoverflow.com/questions/52640386/how-do-i-solve-the-future-warning-min-groups-self-n-splits-warning-in):
>
> Weights assigned to the features (coefficients in the primal
> problem). This is only available in the case of linear kernel.
>
>
>
but it also wouldn't make sense for non-linear kernels. In linear SVM the resulting separating plane is in the same space as your input features, therefore its coefficients can be viewed as weights of the input's "dimensions".
In other kernels, the separating plane exists in another space - a result of kernel transformation of the original space. Its coefficients are not directly related to the input space. In fact, for the rbf kernel the transformed space is infinite-dimensional.
>
> As mentioned in the comments, things you can do:
>
>
>
Play with the features (leave some out) and see how the accuracy changes; this will give you an idea of which features are important.
If you use another classifier, such as a random forest, you will get feature importances, but for that other algorithm, not for your SVM; so it does not necessarily answer your question.
|
In relation to the inspection of non-linear SVM models (e.g. using the RBF kernel), here I share an answer posted in another thread which might be useful for this purpose.
The method is based on "[sklearn.inspection.permutation\_importance](https://stackoverflow.com/a/67910281/13670156)".
And here, a comprehensive discussion of the significance of ["permutation\_importance" applied to SVM models](http://rasbt.github.io/mlxtend/user_guide/evaluate/feature_importance_permutation/).
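As a rough sketch of the idea behind permutation importance (hand-rolled here for illustration, not the sklearn implementation): shuffle one feature column at a time and measure how much the model's error grows; features whose shuffling hurts the score most contribute most. The toy `model_predict` below is a stand-in for a trained RBF SVR.

```python
import random

# Toy "model": a stand-in for a trained regressor (e.g. an RBF SVR).
# It depends heavily on feature 0 and ignores feature 1.
def model_predict(row):
    return 3.0 * row[0]

def mse(X, y):
    return sum((model_predict(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, n_repeats=10, seed=0):
    rng = random.Random(seed)
    baseline = mse(X, y)
    importances = []
    for j in range(len(X[0])):          # one importance score per feature
        increases = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)            # break the feature/target link
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            increases.append(mse(Xp, y) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

X = [[float(i), float(i % 3)] for i in range(20)]
y = [3.0 * row[0] for row in X]         # target depends only on feature 0
imp = permutation_importance(X, y)
print(imp[0] > imp[1])                  # True: feature 0 dominates
```

In practice you would swap `model_predict`/`mse` for your fitted estimator and scorer; `sklearn.inspection.permutation_importance` is a ready-made version of this same idea.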
|
42,149,079
|
I've managed to install pymol on windows following the instructions [here](https://stackoverflow.com/questions/27885397/how-do-i-install-a-python-package-with-a-whl-file) and using the file Pmw-2.0.1-py2-none-any.whl from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pymol)
Various folders have appeared in `C:\Users\Python27\Lib\site-packages` (`Pmw` and `Pmw-2.0.1.dist-info`). However, I can't actually work out how to run pymol.
It used to be provided as a .exe format which could just be run in the usual way for windows applications. The folders that have installed just contain lots of python scripts, but I can't find anything which actually launches the programme.
|
2017/02/09
|
[
"https://Stackoverflow.com/questions/42149079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2923519/"
] |
Try changing
```
lastRow44 = Cells(Rows.Count, "A").End(xlUp).Row
LastRow3 = Worksheets("Temp").Cells(Rows.Count, "A").End(xlUp).Offset(1, 0).Row
```
to
```
lastRow44 = Sheets("Temp").Cells(Rows.Count, 1).End(xlUp).Row
LastRow3 = Worksheets("Temp").Cells(Rows.Count, 1).End(xlUp).Offset(1, 0).Row
```
Also, I am not sure what you are trying to accomplish with
```
Range("A" & LastRow3).End(xlDown).Offset(0, 11).Formula = _
"=Sum(("M" & LastRow3).End(xlDown).Offset(0, 11) & lastRow44 & ")"
```
What your formula is doing is first going to the last row that you defined, and then searching downward (as if you hit Ctrl + down-arrow). If this is not what you intend, try removing the `.End(xlDown)` portion from both.
Lastly, if you know you are using an offset of 11, why not use column "M" instead of "A", and simply not offset?
|
How about something like this:
```
lastRow44 = Cells(Rows.Count, "A").End(xlUp).Row
For x = 50 To LastRow3
    Range("A" & x).Formula = "=SUM(M" & x & ":M" & lastRow44 & ")"
Next x
```
|
40,687,397
|
I am trying to update my chromedriver.exe file as outlined here.
[Python selenium webdriver "Session not created" exception when opening Chrome](https://stackoverflow.com/questions/40373801/python-selenium-webdriver-session-not-created-exception-when-opening-chrome)
The problem is, I do not know the location of the old chromedriver on my Windows machine, and therefore can't update. Any help is appreciated!
|
2016/11/18
|
[
"https://Stackoverflow.com/questions/40687397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5960274/"
] |
If you don't want to expose your constructors for some reasons, you can easily hide them behind a factory method based on templates and perfect forwarding:
```
class Foo {
    // constructors defined somewhere, deliberately not public
    Foo( Param1, Param2 );
    Foo( Param1, Param3, Param4 );
    Foo( Param1, Param4 );
    Foo( Param1, Param2, Param4 );

public:
    template<typename... Args>
    static auto factory(Args&&... args) {
        Foo foo{std::forward<Args>(args)...};
        // do whatever you want here
        return foo;
    }
};
```
No need to throw anything at runtime.
If a constructor that accepts those parameters doesn't exist, you'll receive a compile-time error.
---
Otherwise, another idiomatic way of doing that is by using [named constructors](https://en.m.wikibooks.org/wiki/More_C%2B%2B_Idioms/Named_Constructor).
I copy-and-paste directly the example from the link above:
```
class Game {
public:
static Game createSinglePlayerGame() { return Game(0); }
static Game createMultiPlayerGame() { return Game(1); }
protected:
Game (int game_type);
};
```
Not sure this fits your requirements anyway.
---
That said, think about what's the benefit of doing this:
```
CreateFoo({ Param1V, Param3V });
```
Or even worse, this:
```
FooParams params{ Param1V, Param3V };
CreateFoo(params);
```
Instead of this:
```
new Foo{Param1V, Param3V};
```
By introducing an intermediate class you are not actually helping the users of your class.
They still have to remember the required params for each specific case.
|
As a user, I prefer
```
Foo* CreateFoo(Param1* P1, Param2* P2, Param3* P3, Param4* P4);
```
Why should I construct a `struct` just to pass some (maybe NULL) parameters?
|
72,173,142
|
I need to create a formula that, when dragged down, jumps a certain predefined number of cells. For example, I have this column:
[](https://i.stack.imgur.com/y7c78.png)
However I want a formula that when I drag down it jumps 6 rows, something like =A(1+6) in the second row and so on, so it gets to look like this:
[](https://i.stack.imgur.com/NOfCg.png)
Is there a "pythonic" way to do that or I need to create some regexextract in a new column + query formula getting only non blank cells?
Example sheet in this link: <https://docs.google.com/spreadsheets/d/1RYzX31i8sBFROwFrQGql_eZ6tPu69KDesqzQ3hSj028/edit#gid=0>
|
2022/05/09
|
[
"https://Stackoverflow.com/questions/72173142",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5606352/"
] |
Try in B2
```
=offset($A$1;5*row(A2)-10;)
```
|
try instead:
```
=QUERY(A1:A; "skipping 5"; 0)
```
[](https://i.stack.imgur.com/09MiK.png)
|
41,131,038
|
Given an interactive python script
```
#!/usr/bin/python
import sys
name = raw_input("Please enter your name: ")
age = raw_input("Please enter your age: ")
print("Happy %s.th birthday %s!" % (age, name))
while 1:
r = raw_input("q for quit: ")
if r == "q":
sys.exit()
```
I want to interact with it from an expect script
```
#!/usr/bin/expect -f
set timeout 3
puts "example to interact"
spawn python app.py
expect {
"name: " { send "jani\r"; }
"age: " { send "12\r"; }
"quit: " { send "q\r"; }
}
puts "bye"
```
The expect script does not seem to interact with the python application; it just runs over it.
Is the problem with the python code or with the expect code?
|
2016/12/13
|
[
"https://Stackoverflow.com/questions/41131038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1922202/"
] |
Storing an unsigned integer straight *in* a pointer portably isn't allowed, but you can:
* do the reverse: you can store your pointer in an unsigned integer; specifically, `uintptr_t` is explicitly guaranteed by the standard to be big enough to let pointers survive the roundtrip;
* use a `union`:
```
union NodePtr {
Octree *child;
uint32_t value;
};
```
here `child` and `value` share the same memory location, and you are allowed to read only from the one where you last wrote; when you are in a terminal node you use `value`, otherwise use `child`.
|
Well, you can store int as a pointer with casts:
```
uint32_t i = 123;
Octree* ptr = reinterpret_cast<Octree*>(i);
uint32_t ii = reinterpret_cast<uint32_t>(ptr);
std::cout << ii << std::endl; //Prints 123
```
But if you do it this way, I can't see how you would detect that a given `Octree*` actually stores data and is not a pointer to another `Octree`.
|
12,569,356
|
I'm a very beginner with Python classes and JSON and I'm not sure I'm going in the right direction.
Basically, I have a web service that accepts a JSON request in a POST body like this:
```
{ "item" :
{
"thing" : "foo",
"flag" : true,
"language" : "en_us"
},
"numresults" : 3
}
```
I started going down the route of creating a class for "item" like this:
```
class Item(object):
    def __init__(self):
        self.name = "item"

    @property
    def thing(self):
        return self.thing

    @thing.setter
    def thing(self, value):
        self.thing = value
    ...
```
So, my questions are:
1. Am I going in the right direction?
2. How do I turn the Python object into a JSON string?
I've found a lot of information about JSON in python, I've looked at jsonpickle, but I can't seem to create a class that ends up outputting the nested dictionaries needed.
EDIT:
Thanks to Joran's suggestion, I stuck with a class using properties and added a method like this:
```
def jsonify(self):
    return json.dumps({ "item" : self.__dict__ }, indent=4)
```
and that worked perfectly.
Thanks everyone for your help.
|
2012/09/24
|
[
"https://Stackoverflow.com/questions/12569356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/476638/"
] |
Just add one method to your class that returns a dictionary:
```
def jsonify(self):
    return { 'Class Whatever': {
        'data1': self.data1,
        'data2': self.data2,
        ...
        }
    }
```
and call a JSON-encoding function (e.g. `json.dumps`) on the result, or call it before you return so the method hands back a JSON string directly.
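A minimal end-to-end sketch of this approach (class and field names are illustrative):

```python
import json

class Item(object):
    def __init__(self, thing, flag, language):
        self.thing = thing
        self.flag = flag
        self.language = language

    def jsonify(self):
        # __dict__ holds the instance attributes, so nesting it under a
        # key reproduces the {"item": {...}} shape from the question.
        return json.dumps({"item": self.__dict__}, indent=4)

payload = Item("foo", True, "en_us").jsonify()
print(json.loads(payload)["item"]["flag"])  # True
```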
|
Take a look at the [`colander` project](http://docs.pylonsproject.org/projects/colander/en/latest/); it lets you define an object-oriented 'schema' that is easily serializable to and from JSON.
```
import colander
class Item(colander.MappingSchema):
    thing = colander.SchemaNode(colander.String(),
                                validator=colander.OneOf(['foo', 'bar']))
    flag = colander.SchemaNode(colander.Boolean())
    language = colander.SchemaNode(colander.String(),
                                   validator=colander.OneOf(supported_languages))

class Items(colander.SequenceSchema):
    item = Item()
```
Then load these from JSON:
```
items = Items().deserialize(json.loads(jsondata))
```
and `colander` validates the data for you, returning a set of python objects that then can be acted upon.
Alternatively, you'd have to create specific per-object handling to be able to turn Python objects into JSON structures and vice-versa.
|
12,569,356
|
I'm a very beginner with Python classes and JSON and I'm not sure I'm going in the right direction.
Basically, I have a web service that accepts a JSON request in a POST body like this:
```
{ "item" :
{
"thing" : "foo",
"flag" : true,
"language" : "en_us"
},
"numresults" : 3
}
```
I started going down the route of creating a class for "item" like this:
```
class Item(object):
    def __init__(self):
        self.name = "item"

    @property
    def thing(self):
        return self.thing

    @thing.setter
    def thing(self, value):
        self.thing = value
    ...
```
So, my questions are:
1. Am I going in the right direction?
2. How do I turn the Python object into a JSON string?
I've found a lot of information about JSON in python, I've looked at jsonpickle, but I can't seem to create a class that ends up outputting the nested dictionaries needed.
EDIT:
Thanks to Joran's suggestion, I stuck with a class using properties and added a method like this:
```
def jsonify(self):
    return json.dumps({ "item" : self.__dict__ }, indent=4)
```
and that worked perfectly.
Thanks everyone for your help.
|
2012/09/24
|
[
"https://Stackoverflow.com/questions/12569356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/476638/"
] |
Take a look at the [`colander` project](http://docs.pylonsproject.org/projects/colander/en/latest/); it lets you define an object-oriented 'schema' that is easily serializable to and from JSON.
```
import colander
class Item(colander.MappingSchema):
    thing = colander.SchemaNode(colander.String(),
                                validator=colander.OneOf(['foo', 'bar']))
    flag = colander.SchemaNode(colander.Boolean())
    language = colander.SchemaNode(colander.String(),
                                   validator=colander.OneOf(supported_languages))

class Items(colander.SequenceSchema):
    item = Item()
```
Then load these from JSON:
```
items = Items().deserialize(json.loads(jsondata))
```
and `colander` validates the data for you, returning a set of python objects that then can be acted upon.
Alternatively, you'd have to create specific per-object handling to be able to turn Python objects into JSON structures and vice-versa.
|
The colander and dict suggestions are both great.
I'd just add that you should step back for a moment and decide how you'll use these objects.
A potential problem with this general approach is that you'll need a list or function or some other mapping to decide which attributes are exported (you see this in the colander definition and the jsonify definition).
You might want this behavior - or you might not.
The sqlalchemy project, for example, has classes whose field names are defined declaratively, much like in colander, *but* the objects themselves store a lot of ancillary data relating to the database and db connection.
In order to get at just the underlying field data that you care about, you need to either access a private internal dict that wraps the data, or query the class for the mapped columns:
```
def columns_as_dict(self):
    return dict((col.name, getattr(self, col.name)) for col in sqlalchemy_orm.class_mapper(self.__class__).mapped_table.c)
```
I don't mean to overcomplicate this - I just want to suggest that you think about exactly how you're likely to use these objects. If you only want the same info out of your object that you put in it, then a simple solution is fine. But if these objects are going to have some other sort of 'private' data, you might need to maintain some distinction between the different types of information.
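One hedged way to keep that distinction (illustrative names, not sqlalchemy's API): maintain an explicit export list, so ancillary 'private' attributes never leak into the serialized output.

```python
import json

class Record(object):
    # Only names listed here are exported; everything else stays private.
    _exported = ("thing", "flag", "language")

    def __init__(self, thing, flag, language):
        self.thing = thing
        self.flag = flag
        self.language = language
        self._db_handle = object()  # ancillary data we never serialize

    def to_dict(self):
        return {name: getattr(self, name) for name in self._exported}

rec = Record("foo", True, "en_us")
print(json.dumps(rec.to_dict(), sort_keys=True))
# {"flag": true, "language": "en_us", "thing": "foo"}
```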
|
12,569,356
|
I'm a very beginner with Python classes and JSON and I'm not sure I'm going in the right direction.
Basically, I have a web service that accepts a JSON request in a POST body like this:
```
{ "item" :
{
"thing" : "foo",
"flag" : true,
"language" : "en_us"
},
"numresults" : 3
}
```
I started going down the route of creating a class for "item" like this:
```
class Item(object):
    def __init__(self):
        self.name = "item"

    @property
    def thing(self):
        return self.thing

    @thing.setter
    def thing(self, value):
        self.thing = value
    ...
```
So, my questions are:
1. Am I going in the right direction?
2. How do I turn the Python object into a JSON string?
I've found a lot of information about JSON in python, I've looked at jsonpickle, but I can't seem to create a class that ends up outputting the nested dictionaries needed.
EDIT:
Thanks to Joran's suggestion, I stuck with a class using properties and added a method like this:
```
def jsonify(self):
    return json.dumps({ "item" : self.__dict__ }, indent=4)
```
and that worked perfectly.
Thanks everyone for your help.
|
2012/09/24
|
[
"https://Stackoverflow.com/questions/12569356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/476638/"
] |
Just add one method to your class that returns a dictionary:
```
def jsonify(self):
    return { 'Class Whatever': {
        'data1': self.data1,
        'data2': self.data2,
        ...
        }
    }
```
and call a JSON-encoding function (e.g. `json.dumps`) on the result, or call it before you return so the method hands back a JSON string directly.
|
The colander and dict suggestions are both great.
I'd just add that you should step back for a moment and decide how you'll use these objects.
A potential problem with this general approach is that you'll need a list or function or some other mapping to decide which attributes are exported (you see this in the colander definition and the jsonify definition).
You might want this behavior - or you might not.
The sqlalchemy project, for example, has classes whose field names are defined declaratively, much like in colander, *but* the objects themselves store a lot of ancillary data relating to the database and db connection.
In order to get at just the underlying field data that you care about, you need to either access a private internal dict that wraps the data, or query the class for the mapped columns:
```
def columns_as_dict(self):
    return dict((col.name, getattr(self, col.name)) for col in sqlalchemy_orm.class_mapper(self.__class__).mapped_table.c)
```
I don't mean to overcomplicate this - I just want to suggest that you think about exactly how you're likely to use these objects. If you only want the same info out of your object that you put in it, then a simple solution is fine. But if these objects are going to have some other sort of 'private' data, you might need to maintain some distinction between the different types of information.
|
6,800,280
|
This is a follow-up to this previous question: [Complicated COUNT query in MySQL](https://stackoverflow.com/questions/6580684/complicated-count-query-in-mysql). None of the answers worked under all conditions, and I have had trouble figuring out a solution as well. I will be awarding a 75 point bounty to the first person that provides a fully correct answer (I will award the bounty as soon as it is available, and as reference I've done this before: [Improving Python/django view code](https://stackoverflow.com/questions/6245755/improving-python-django-view-code)).
I want to get the count of video credits a user has, not allowing duplicates (i.e., for every video a user can be credited in it 0 or 1 times). I want to find three counts: the number of videos a user has uploaded (easy) -- `Uploads`; the number of videos the user is credited in among videos not uploaded by the user -- `Credited_by_others`; and the total number of videos a user has been credited in -- `Total_credits`.
I have three tables:
```
CREATE TABLE `userprofile_userprofile` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`full_name` varchar(100) NOT NULL,
...
)
CREATE TABLE `videos_video` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` int(11) NOT NULL,
`uploaded_by_id` int(11) NOT NULL,
...
KEY `userprofile_video_e43a31e7` (`uploaded_by_id`),
CONSTRAINT `uploaded_by_id_refs_id_492ba9396be0968c` FOREIGN KEY (`uploaded_by_id`) REFERENCES `userprofile_userprofile` (`id`)
)
```
**Note that the `uploaded_by_id` is the same as the `userprofile.id`**
```
CREATE TABLE `videos_videocredit` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`video_id` int(11) NOT NULL,
`profile_id` int(11) DEFAULT NULL,
`position` int(11) NOT NULL
...
KEY `videos_videocredit_fa26288c` (`video_id`),
KEY `videos_videocredit_141c6eec` (`profile_id`),
CONSTRAINT `profile_id_refs_id_31fc4a6405dffd9f` FOREIGN KEY (`profile_id`) REFERENCES `userprofile_userprofile` (`id`),
CONSTRAINT `video_id_refs_id_4dcff2eeed362a80` FOREIGN KEY (`video_id`) REFERENCES `videos_video` (`id`)
)
```
Here is a step-by-step to illustrate:
1) create 2 users:
```
insert into userprofile_userprofile (id, full_name) values (1, 'John Smith');
insert into userprofile_userprofile (id, full_name) values (2, 'Jane Doe');
```
2) a user uploads a video. He does not yet credit anyone -- including himself -- in it.
```
insert into videos_video (id, title, uploaded_by_id) values (1, 'Hamlet', 1);
```
The result should be as follows:
```
**User** **Uploads** **Credited_by_others** **Total_credits**
John Smith 1 0 1
Jane Doe 0 0 0
```
3) the user who uploaded the video now credits himself in the video. Note this should not change anything, since the user has already received a credit for uploading the film and I am not allowing duplicate credits:
```
insert into videos_videocredit (id, video_id, profile_id, position) values (1, 1, 1, 'director')
```
The result should now be as follows:
```
**User** **Uploads** **Credited_by_others** **Total_credits**
John Smith 1 0 1
Jane Doe 0 0 0
```
4) The user now credits himself two more times in the same video (i.e., he has had multiple 'positions' in the video). In addition, he credits Jane Doe three times for that video:
```
insert into videos_videocredit (id, video_id, profile_id, position) values (2, 1, 1, 'writer')
insert into videos_videocredit (id, video_id, profile_id, position) values (3, 1, 1, 'producer')
insert into videos_videocredit (id, video_id, profile_id, position) values (4, 1, 2, 'director')
insert into videos_videocredit (id, video_id, profile_id, position) values (5, 1, 2, 'editor')
insert into videos_videocredit (id, video_id, profile_id, position) values (6, 1, 2, 'decorator')
```
The result should now be as follows:
```
**User** **Uploads** **Credited_by_others** **Total_credits**
John Smith 1 0 1
Jane Doe 0 1 1
```
5) Jane Doe now uploads a video. She does not credit herself, but credits John Smith twice in the video:
```
insert into videos_video (id, title, uploaded_by_id) values (2, 'Othello', 2)
insert into videos_videocredit (id, video_id, profile_id, position) values (7, 2, 1, 'writer')
insert into videos_videocredit (id, video_id, profile_id, position) values (8, 2, 1, 'producer')
```
The result should now be as follows:
```
**User** **Uploads** **Credited_by_others** **Total_credits**
John Smith 1 1 2
Jane Doe 1 1 2
```
So, I would like to find those three fields for each user -- `Uploads`, `Credited_by_others`, and `Total_credits`. Data should never be Null, but instead be 0 when the field has no count. Thank you.
|
2011/07/23
|
[
"https://Stackoverflow.com/questions/6800280",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/651174/"
] |
Couldn't there be a problem with `<Files .*>`? I think this expects a shell-style wildcard pattern, so you should use just `<Files *>`.
|
The order of your htaccess, should be...
```
RewriteEngine On
<Files .*>
Order allow,deny
Allow from all
</Files>
Options FollowSymLinks
RewriteRule ^photos.+$ thumbs.php [L,QSA]
RewriteRule ^[a-zA-Z0-9\-_]*$ index.php [L,QSA]
RewriteRule ^[a-zA-Z0-9\-_]+\.html$ index.php [L,QSA]
```
|
6,800,280
|
This is a follow-up to this previous question: [Complicated COUNT query in MySQL](https://stackoverflow.com/questions/6580684/complicated-count-query-in-mysql). None of the answers worked under all conditions, and I have had trouble figuring out a solution as well. I will be awarding a 75 point bounty to the first person that provides a fully correct answer (I will award the bounty as soon as it is available, and as reference I've done this before: [Improving Python/django view code](https://stackoverflow.com/questions/6245755/improving-python-django-view-code)).
I want to get the count of video credits a user has, not allowing duplicates (i.e., for every video a user can be credited in it 0 or 1 times). I want to find three counts: the number of videos a user has uploaded (easy) -- `Uploads`; the number of videos the user is credited in among videos not uploaded by the user -- `Credited_by_others`; and the total number of videos a user has been credited in -- `Total_credits`.
I have three tables:
```
CREATE TABLE `userprofile_userprofile` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`full_name` varchar(100) NOT NULL,
...
)
CREATE TABLE `videos_video` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` int(11) NOT NULL,
`uploaded_by_id` int(11) NOT NULL,
...
KEY `userprofile_video_e43a31e7` (`uploaded_by_id`),
CONSTRAINT `uploaded_by_id_refs_id_492ba9396be0968c` FOREIGN KEY (`uploaded_by_id`) REFERENCES `userprofile_userprofile` (`id`)
)
```
**Note that the `uploaded_by_id` is the same as the `userprofile.id`**
```
CREATE TABLE `videos_videocredit` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`video_id` int(11) NOT NULL,
`profile_id` int(11) DEFAULT NULL,
`position` int(11) NOT NULL
...
KEY `videos_videocredit_fa26288c` (`video_id`),
KEY `videos_videocredit_141c6eec` (`profile_id`),
CONSTRAINT `profile_id_refs_id_31fc4a6405dffd9f` FOREIGN KEY (`profile_id`) REFERENCES `userprofile_userprofile` (`id`),
CONSTRAINT `video_id_refs_id_4dcff2eeed362a80` FOREIGN KEY (`video_id`) REFERENCES `videos_video` (`id`)
)
```
Here is a step-by-step to illustrate:
1) create 2 users:
```
insert into userprofile_userprofile (id, full_name) values (1, 'John Smith');
insert into userprofile_userprofile (id, full_name) values (2, 'Jane Doe');
```
2) a user uploads a video. He does not yet credit anyone -- including himself -- in it.
```
insert into videos_video (id, title, uploaded_by_id) values (1, 'Hamlet', 1);
```
The result should be as follows:
```
**User** **Uploads** **Credited_by_others** **Total_credits**
John Smith 1 0 1
Jane Doe 0 0 0
```
3) the user who uploaded the video now credits himself in the video. Note this should not change anything, since the user has already received a credit for uploading the film and I am not allowing duplicate credits:
```
insert into videos_videocredit (id, video_id, profile_id, position) values (1, 1, 1, 'director')
```
The result should now be as follows:
```
**User** **Uploads** **Credited_by_others** **Total_credits**
John Smith 1 0 1
Jane Doe 0 0 0
```
4) The user now credits himself two more times in the same video (i.e., he has had multiple 'positions' in the video). In addition, he credits Jane Doe three times for that video:
```
insert into videos_videocredit (id, video_id, profile_id, position) values (2, 1, 1, 'writer')
insert into videos_videocredit (id, video_id, profile_id, position) values (3, 1, 1, 'producer')
insert into videos_videocredit (id, video_id, profile_id, position) values (4, 1, 2, 'director')
insert into videos_videocredit (id, video_id, profile_id, position) values (5, 1, 2, 'editor')
insert into videos_videocredit (id, video_id, profile_id, position) values (6, 1, 2, 'decorator')
```
The result should now be as follows:
```
**User** **Uploads** **Credited_by_others** **Total_credits**
John Smith 1 0 1
Jane Doe 0 1 1
```
5) Jane Doe now uploads a video. She does not credit herself, but credits John Smith twice in the video:
```
insert into videos_video (id, title, uploaded_by_id) values (2, 'Othello', 2)
insert into videos_videocredit (id, video_id, profile_id, position) values (7, 2, 1, 'writer')
insert into videos_videocredit (id, video_id, profile_id, position) values (8, 2, 1, 'producer')
```
The result should now be as follows:
```
**User** **Uploads** **Credited_by_others** **Total_credits**
John Smith 1 1 2
Jane Doe 1 1 2
```
So, I would like to find those three fields for each user -- `Uploads`, `Credited_by_others`, and `Total_credits`. Data should never be Null, but instead be 0 when the field has no count. Thank you.
|
2011/07/23
|
[
"https://Stackoverflow.com/questions/6800280",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/651174/"
] |
I managed to find the solution; the issue was really simple: I was missing a `+` before `FollowSymLinks`. A simple issue, but a few hours wasted finding the solution. The weirdest part is that on the same hosting, in a different domain/folder, it works the same without the `+`... Anyway, thank you for your help.
This is my final working code:
```
RewriteEngine On
Options +FollowSymLinks
RewriteRule ^photos.+$ thumbs.php [L,QSA]
RewriteRule ^[a-zA-Z0-9\-_]*$ index.php [L,QSA]
RewriteRule ^[a-zA-Z0-9\-_]+\.html$ index.php [L,QSA]
```
|
The order of your htaccess, should be...
```
RewriteEngine On
<Files .*>
Order allow,deny
Allow from all
</Files>
Options FollowSymLinks
RewriteRule ^photos.+$ thumbs.php [L,QSA]
RewriteRule ^[a-zA-Z0-9\-_]*$ index.php [L,QSA]
RewriteRule ^[a-zA-Z0-9\-_]+\.html$ index.php [L,QSA]
```
|
44,732,839
|
I am trying to process a txt file using pandas.
However, I get the following error from `read_csv`:
>
> CParserError Traceback (most recent call
> last) in ()
> 22 Col.append(elm)
> 23
> ---> 24 revised=pd.read\_csv(Path+file,skiprows=Header+1,header=None,delim\_whitespace=True)
> 25
> 26 TimeSeries.append(revised)
>
>
> C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in
> parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col,
> usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters,
> true\_values, false\_values, skipinitialspace, skiprows, skipfooter,
> nrows, na\_values, keep\_default\_na, na\_filter, verbose,
> skip\_blank\_lines, parse\_dates, infer\_datetime\_format, keep\_date\_col,
> date\_parser, dayfirst, iterator, chunksize, compression, thousands,
> decimal, lineterminator, quotechar, quoting, escapechar, comment,
> encoding, dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines,
> skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints,
> use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision)
> 560 skip\_blank\_lines=skip\_blank\_lines)
> 561
> --> 562 return \_read(filepath\_or\_buffer, kwds)
> 563
> 564 parser\_f.**name** = name
>
>
> C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in
> \_read(filepath\_or\_buffer, kwds)
> 323 return parser
> 324
> --> 325 return parser.read()
> 326
> 327 \_parser\_defaults = {
>
>
> C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in
> read(self, nrows)
> 813 raise ValueError('skip\_footer not supported for iteration')
> 814
> --> 815 ret = self.\_engine.read(nrows)
> 816
> 817 if self.options.get('as\_recarray'):
>
>
> C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in
> read(self, nrows) 1312 def read(self, nrows=None): 1313
>
> try:
> -> 1314 data = self.\_reader.read(nrows) 1315 except StopIteration: 1316 if self.\_first\_chunk:
>
>
> pandas\parser.pyx in pandas.parser.TextReader.read
> (pandas\parser.c:8748)()
>
>
> pandas\parser.pyx in pandas.parser.TextReader.\_read\_low\_memory
> (pandas\parser.c:9003)()
>
>
> pandas\parser.pyx in pandas.parser.TextReader.\_read\_rows
> (pandas\parser.c:9731)()
>
>
> pandas\parser.pyx in pandas.parser.TextReader.\_tokenize\_rows
> (pandas\parser.c:9602)()
>
>
> pandas\parser.pyx in pandas.parser.raise\_parser\_error
> (pandas\parser.c:23325)()
>
>
> CParserError: Error tokenizing data. C error: Expected 4 fields in
> line 6, saw 8
>
>
>
Does anyone know how I can fix this problem?
My python script and an example txt file I want to process are shown below.
```
Path='data/NanFung/OCTA_Tower/test/'
files=os.listdir(Path)
TimeSeries=[]
Cols=[]
for file in files:
new=open(Path+file)
Supplement=[]
Col=[]
data=[]
Header=0
#calculate how many rows should be skipped
for line in new:
if line.startswith('Timestamp'):
new1=line.split(" ")
new1[-1]=str(file)[:-4]
break
else:
Header += 1
#clean col name
for elm in new1:
if len(elm)>0:
Col.append(elm)
revised=pd.read_csv(Path+file,skiprows=Header+1,header=None,delim_whitespace=True)
TimeSeries.append(revised)
Cols.append(Col)
```
txt file
```
history:/NIKL6215_ENC_1/CH$2d19$2d1$20$20CHW$20OUTLET$20TEMP
20-Oct-12 8:00 PM CT to ?
Timestamp Trend Flags Status Value (ºC)
------------------------- ----------- ------ ----------
20-Oct-12 8:00:00 PM HKT {start} {ok} 15.310 ºC
21-Oct-12 12:00:00 AM HKT { } {ok} 15.130 ºC
```
|
2017/06/24
|
[
"https://Stackoverflow.com/questions/44732839",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7124344/"
] |
It fails because the part of the file you're reading looks like this:
```
Timestamp Trend Flags Status Value (ºC)
------------------------- ----------- ------ ----------
20-Oct-12 8:00:00 PM HKT {start} {ok} 15.310 ºC
21-Oct-12 12:00:00 AM HKT { } {ok} 15.130 ºC
```
But there are no consistent delimiters here. `read_csv` does not understand how to read fixed-width formats like yours. You might consider using a delimited file, such as with tab characters between the columns.
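As an alternative, pandas also ships `read_fwf` for fixed-width files. Here is a sketch against a made-up, ASCII-only sample mimicking the asker's log; the column widths and `colspecs` offsets are illustrative assumptions, not taken from the real file:

```python
import io
import pandas as pd

# Hypothetical sample resembling the asker's fixed-width log (ASCII-only).
rows = [
    ("Timestamp", "Trend Flags", "Status", "Value (C)"),
    ("-" * 25, "-" * 11, "-" * 6, "-" * 10),
    ("20-Oct-12 8:00:00 PM HKT", "{start}", "{ok}", "15.310 C"),
    ("21-Oct-12 12:00:00 AM HKT", "{ }", "{ok}", "15.130 C"),
]
widths = (25, 11, 6, 10)
sample = "\n".join(
    " ".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in rows
)

# Explicit [start, end) character offsets for each field; skiprows=[1]
# drops the dashed separator row.
colspecs = [(0, 25), (26, 37), (38, 44), (45, 55)]
df = pd.read_fwf(io.StringIO(sample), colspecs=colspecs, skiprows=[1])
print(df)
```

With real data you would pass the file path instead of the `StringIO` buffer and measure the offsets from the dashed separator row.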
|
Include this line before
========================
```
file_name = Path+file  # then change the line below as shown
```
Then change
```
revised=pd.read_csv(Path+file,skiprows=Header+1,header=None,delim_whitespace=True)
```
to
```
revised=pd.read_csv(file_name,skiprows=Header+1,header=None,sep=" ")
```
|
27,647,922
|
I'm working on my Python script, in which I've created a list to store the elements from the arrays.
I have a problem with the if statement: I'm trying to find the elements with the value `375`, but the if statement never matches.
Here is the code:
```
program_X = list()
#create the rows to count for 69 program buttons
for elem in programs_button:
program_width.append(elem.getWidth())
program_X.append(elem.getX())
program_X = map(str, program_X)
#get the list of position_X for all buttons
for pos_X in programs_X:
#find the position with 375
if pos_X == 375:
print pos_X
```
Here is the list of elements that I use to print from the arrays:
```
14:08:55 T:1260 NOTICE: 375
14:08:55 T:1260 NOTICE: 724.06
14:08:55 T:1260 NOTICE: 1610.21
14:08:55 T:1260 NOTICE: 2496.39
14:08:55 T:1260 NOTICE: 2845.45
14:08:55 T:1260 NOTICE: 3194.51
14:08:55 T:1260 NOTICE: 3543.57
14:08:55 T:1260 NOTICE: 3892.63
14:08:55 T:1260 NOTICE: 4241.69
14:08:55 T:1260 NOTICE: 4590.75
14:08:55 T:1260 NOTICE: 4939.81
14:08:55 T:1260 NOTICE: 5288.87
14:08:55 T:1260 NOTICE: 5637.93
14:08:55 T:1260 NOTICE: 5986.99
14:08:55 T:1260 NOTICE: 6336.05
14:08:55 T:1260 NOTICE: 6685.11
14:08:55 T:1260 NOTICE: 7034.17
14:08:55 T:1260 NOTICE: 7383.23
14:08:55 T:1260 NOTICE: 7732.29
14:08:55 T:1260 NOTICE: 8081.35
14:08:55 T:1260 NOTICE: 8430.41
14:08:55 T:1260 NOTICE: 8779.47
14:08:55 T:1260 NOTICE: 9665.59
14:08:55 T:1260 NOTICE: 10014.65
14:08:55 T:1260 NOTICE: 10363.71
14:08:55 T:1260 NOTICE: 10712.77
14:08:55 T:1260 NOTICE: 11061.83
14:08:55 T:1260 NOTICE: 11410.89
14:08:55 T:1260 NOTICE: 11759.95
14:08:55 T:1260 NOTICE: 12109.01
14:08:55 T:1260 NOTICE: 12458.07
14:08:55 T:1260 NOTICE: 12807.13
14:08:55 T:1260 NOTICE: 13156.19
14:08:55 T:1260 NOTICE: 13505.25
14:08:55 T:1260 NOTICE: 13854.31
14:08:55 T:1260 NOTICE: 14203.37
14:08:55 T:1260 NOTICE: 14552.43
14:08:55 T:1260 NOTICE: 14901.49
14:08:55 T:1260 NOTICE: 15250.55
14:08:55 T:1260 NOTICE: 15599.61
14:08:55 T:1260 NOTICE: 15948.67
14:08:55 T:1260 NOTICE: 16297.73
14:08:55 T:1260 NOTICE: 17183.85
14:08:55 T:1260 NOTICE: 17532.91
14:08:55 T:1260 NOTICE: 17881.97
14:08:55 T:1260 NOTICE: 18231.03
14:08:55 T:1260 NOTICE: 18580.09
14:08:55 T:1260 NOTICE: 18929.15
14:08:55 T:1260 NOTICE: 19278.21
14:08:55 T:1260 NOTICE: 19627.27
14:08:55 T:1260 NOTICE: 19976.33
14:08:55 T:1260 NOTICE: 20325.39
14:08:55 T:1260 NOTICE: 20674.45
14:08:55 T:1260 NOTICE: 21023.51
14:08:55 T:1260 NOTICE: 21372.57
14:08:55 T:1260 NOTICE: 21721.63
14:08:55 T:1260 NOTICE: 22070.69
14:08:55 T:1260 NOTICE: 22419.75
14:08:55 T:1260 NOTICE: 22768.81
14:08:55 T:1260 NOTICE: 23117.87
14:08:55 T:1260 NOTICE: 23466.93
14:08:55 T:1260 NOTICE: 24353.05
14:08:55 T:1260 NOTICE: 24702.11
14:08:55 T:1260 NOTICE: 25051.17
14:08:55 T:1260 NOTICE: 25400.23
14:08:55 T:1260 NOTICE: 25749.29
14:08:55 T:1260 NOTICE: 26098.35
14:08:55 T:1260 NOTICE: 26447.41
14:08:55 T:1260 NOTICE: 26796.47
14:08:55 T:1260 NOTICE: 375
14:08:55 T:1260 NOTICE: 724.06
14:08:55 T:1260 NOTICE: 1610.21
14:08:55 T:1260 NOTICE: 1959.27
14:08:55 T:1260 NOTICE: 2308.33
14:08:55 T:1260 NOTICE: 3194.45
14:08:55 T:1260 NOTICE: 3543.51
14:08:55 T:1260 NOTICE: 4241.6
14:08:55 T:1260 NOTICE: 4590.66
14:08:55 T:1260 NOTICE: 4939.72
14:08:55 T:1260 NOTICE: 5825.9
14:08:55 T:1260 NOTICE: 6174.96
```
Can you please help me get past the if statement when I'm trying to find the elements equal to `375`?
|
2014/12/25
|
[
"https://Stackoverflow.com/questions/27647922",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4275381/"
] |
Since `program_X` contains string elements after the `map` call:
```
program_X = map(str, program_X)
          ^
```
you need to change the following:
```
if pos_X == 375:
```
to
```
if pos_X == '375':
```
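To illustrate why the original comparison never matches, here is a small sketch; the values are a hypothetical subset of the logged positions:

```python
# After map(str, ...) the list holds strings, so an int comparison
# never matches; compare against '375', or convert back to numbers.
program_X = [375, 724.06, 1610.21, 375]
program_X_str = list(map(str, program_X))

matches_int = [x for x in program_X_str if x == 375]         # never matches
matches_str = [x for x in program_X_str if x == '375']       # string compare
matches_num = [x for x in program_X_str if float(x) == 375]  # numeric compare
```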
|
If you are storing strings in the list this way,
```
program_X = ['14:08:55 T:1260 NOTICE: 8081.35', ...]
```
Then use the `in` keyword to check for the value
```
for pos_X in programs_X:
#find the position with 375
if '375' in pos_X:
print pos_X
```
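One caveat worth showing: `in` does substring matching, so `'375'` would also match a line whose number merely contains those digits. Comparing the last field exactly is stricter (the sample lines below are hypothetical):

```python
# Substring matching vs. exact comparison of the last whitespace-separated
# field; '375' is a substring of '13750.55', so `in` matches both lines.
lines = [
    '14:08:55 T:1260 NOTICE: 375',
    '14:08:55 T:1260 NOTICE: 13750.55',
]
loose = [l for l in lines if '375' in l]
strict = [l for l in lines if l.split()[-1] == '375']
```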
|
45,692,894
|
Problem
-------
Some reoccurring events, that don't really end at some point (like club meetings?), depend on other conditions (like holiday season). However, manually adding these exceptions would be necessary every year, as the dates might differ.
**Research**
* I have found out about `exdate` (see the image of ["iCalendar components and their properties"](https://en.wikipedia.org/wiki/ICalendar) on Wikipedia [(2)](http://www.kanzaki.com/docs/ical/exdate.html))
* Also found a possible workaround: 'just writing a [script](https://stackoverflow.com/questions/3408097/parsing-files-ics-icalendar-using-python) to process such events'. This would still mean I need to process a `.ics` file manually and import it into my calendar, which implies some limitations:
	+ exceptions can not be determined for all time spans (e.g. holidays are not fixed more than three years in advance)
	+ these events would probably be separate and not recurring/'grouped', which makes further edits harder
Question
--------
>
> Is there a way to specify recurring exceptions in iCal?
>
>
>
* To clarify, I have a recurring event *and* recurring exceptions.
* So for instance I have an *infinitely recurring weekly* event that depends on the month; it might only take place *if it's not* e.g. January, August, or December.
>
> Is there a way to use another event (/calendar) to filter events by boolean logic?
>
>
>
If one could use a second event (or several) to plug into `exdate` this would solve the first problem and add some more possibilities.
---
**note**
if this question is too specific and the original problem could be solved by other means (other calendar-formats), feel free to comment/edit/answer
|
2017/08/15
|
[
"https://Stackoverflow.com/questions/45692894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4550784/"
] |
[RFC2445 defines an `EXRULE`](https://www.rfc-editor.org/rfc/rfc2445#section-4.8.5.2) (exception rule) property. You can use that in addition to the `RRULE` to define recurring exceptions.
However, RFC2445 was superseded by [RFC5545, which unfortunately deprecates the `EXRULE`](https://www.rfc-editor.org/rfc/rfc5545#appendix-A.3) property. So, client support is questionable.
As you already proposed, automatically adding `EXDATE` properties is a possible solution.
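If you generate those `EXDATE`s with a script, note that python-dateutil (a tooling assumption on my part, not something the iCal spec requires) still implements exception-rule logic via `rruleset.exrule`:

```python
# A sketch: a weekly event minus every occurrence falling in
# January, August, or December (the months are illustrative).
from datetime import datetime
from dateutil.rrule import rrule, rruleset, WEEKLY

rs = rruleset()
rs.rrule(rrule(WEEKLY, dtstart=datetime(2017, 1, 2), count=60))
rs.exrule(rrule(WEEKLY, dtstart=datetime(2017, 1, 2), bymonth=(1, 8, 12)))

dates = list(rs)  # occurrences with the excluded months filtered out
```

Each remaining datetime could then be emitted (or its excluded counterparts written as `EXDATE` lines) into the `.ics` file.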
|
`BYMONTH` would be another possibility, e.g. here's a rule for a club meeting that occurs on the first Wednesday of every month except December (which is their Christmas party, so no business meeting):
```
RRULE:FREQ=MONTHLY;BYDAY=1WE;BYMONTH=1,2,3,4,5,6,7,8,9,10,11
```
|
42,519,094
|
I am trying to start a Python 3.6 project by creating a virtualenv to keep the dependencies. I currently have both Python 2.7 and 3.6 installed on my machine, as I have been coding in 2.7 up until now and I wish to try out 3.6. I am running into a problem with the different versions of Python not detecting modules I am installing inside the virtualenv.
For example, I create a virtualenv with the command: `virtualenv venv`
I then activate the virtualenv and install Django with the command: `pip install django`
My problems arise when I launch either Python 2.7 or 3.6 with the commands
`py -2` or `py -3`: neither of the interactive shells detects Django as being installed.
Django is only detected when I run the `python` command, which defaults to 2.7 when I want to use 3.6. Does anyone know a possible fix for this so I can get my virtualenv working correctly? Thanks! If it matters at all, I am on a machine running Windows 7.
|
2017/02/28
|
[
"https://Stackoverflow.com/questions/42519094",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4588188/"
] |
You have to select the interpreter when you create the virtualenv.
```
virtualenv --python=PYTHON36_EXE my_venv
```
Substitute the path to your Python 3.6 installation in place of `PYTHON36_EXE`. Then, after you've activated the environment, the `python` executable will be bound to 3.6 and you can just `pip install django` as usual.
|
The key is that `pip` installs things for a specific version of Python, and to a very specific location. Basically, the `pip` command in your virtual environment is set up specifically for the interpreter that your virtual environment is using. So even if you explicitly call another interpreter with that environment activated, it will not pick up the packages `pip` installed for the default interpreter.
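One way to see this concretely (a generic probe, not part of the answer above) is to run the same two lines under each interpreter and compare the paths:

```python
# Each interpreter reports its own executable and its own site-packages
# directory; pip installs only into the location of the interpreter that
# pip itself runs under, which is why `py -2`/`py -3` don't see the venv.
import sys
import sysconfig

executable = sys.executable
purelib = sysconfig.get_paths()["purelib"]
print(executable)
print(purelib)
```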
|
42,519,094
|
I am trying to start a Python 3.6 project by creating a virtualenv to keep the dependencies. I currently have both Python 2.7 and 3.6 installed on my machine, as I have been coding in 2.7 up until now and I wish to try out 3.6. I am running into a problem with the different versions of Python not detecting modules I am installing inside the virtualenv.
For example, I create a virtualenv with the command: `virtualenv venv`
I then activate the virtualenv and install Django with the command: `pip install django`
My problems arise when I launch either Python 2.7 or 3.6 with the commands
`py -2` or `py -3`: neither of the interactive shells detects Django as being installed.
Django is only detected when I run the `python` command, which defaults to 2.7 when I want to use 3.6. Does anyone know a possible fix for this so I can get my virtualenv working correctly? Thanks! If it matters at all, I am on a machine running Windows 7.
|
2017/02/28
|
[
"https://Stackoverflow.com/questions/42519094",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4588188/"
] |
Create virtual environment based on python3.6
```
virtualenv -p python3.6 env36
```
Activate it:
```
source env36/bin/activate
```
The `env36` environment is now activated and its `pip` is available; you can install `Django` as usual, and the package will be stored under `env36/lib/python3.6/site-packages`:
```
pip install django
```
|
The key is that `pip` installs things for a specific version of Python, and to a very specific location. Basically, the `pip` command in your virtual environment is set up specifically for the interpreter that your virtual environment is using. So even if you explicitly call another interpreter with that environment activated, it will not pick up the packages `pip` installed for the default interpreter.
|
21,426,329
|
I followed mbrochh's instruction <https://github.com/mbrochh/vim-as-a-python-ide> to build my Vim as a Python IDE. But things go wrong when opening Vim after I put `jedi-vim` into `~/.vim/bundle`. The following are the warnings:
```
Error detected while processing CursorMovedI Auto commands for "buffer=1":
Traceback (most recent call last):
Error detected while processing CursorMovedI Auto commands for "buffer=1":
  File "<string>", line 1, in <module>
Error detected while processing CursorMovedI Auto commands for "buffer=1":
NameError: name 'jedi_vim' is not defined
```
I hope someone can figure out the problem and thanks for your help.
|
2014/01/29
|
[
"https://Stackoverflow.com/questions/21426329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1263069/"
] |
If you’re trying to use Vundle to install the jedi-vim plugin, I don’t think you should have to place it under `~/.vim/bundle`. Instead, make sure you have Vundle set up correctly, as [described in its “Quick start”](https://github.com/gmarik/vundle#quick-start), and then try adding this line to your `~/.vimrc` after the lines where Vundle is set up:
```
Plugin 'davidhalter/jedi-vim'
```
Then run `:PluginInstall` and the plugin should be installed.
|
Make sure that you have installed jedi.
I solved my problem with the commands below:
```
cd ~/.vim/bundle/jedi-vim
git submodule update --init
```
|
21,426,329
|
I followed mbrochh's instruction <https://github.com/mbrochh/vim-as-a-python-ide> to build my Vim as a Python IDE. But things go wrong when opening Vim after I put `jedi-vim` into `~/.vim/bundle`. The following are the warnings:
```
Error detected while processing CursorMovedI Auto commands for "buffer=1":
Traceback (most recent call last):
Error detected while processing CursorMovedI Auto commands for "buffer=1":
  File "<string>", line 1, in <module>
Error detected while processing CursorMovedI Auto commands for "buffer=1":
NameError: name 'jedi_vim' is not defined
```
I hope someone can figure out the problem and thanks for your help.
|
2014/01/29
|
[
"https://Stackoverflow.com/questions/21426329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1263069/"
] |
If you’re trying to use Vundle to install the jedi-vim plugin, I don’t think you should have to place it under `~/.vim/bundle`. Instead, make sure you have Vundle set up correctly, as [described in its “Quick start”](https://github.com/gmarik/vundle#quick-start), and then try adding this line to your `~/.vimrc` after the lines where Vundle is set up:
```
Plugin 'davidhalter/jedi-vim'
```
Then run `:PluginInstall` and the plugin should be installed.
|
Dependencies exist in the jedi-vim git repo as submodules. *I expect you are using Pathogen as your extension manager.* Use `git clone` with the `--recursive` option.
>
> cd ~/.vim/bundle/ && git clone --recursive <https://github.com/davidhalter/jedi-vim.git>
>
>
>
Dave Halter has this instruction in the [docs on github](https://github.com/davidhalter/jedi-vim#installation).
---
BTW, this is common behavior for all Vim extensions with dependencies, such as flake8-vim. Furthermore, if you clone any repo that has dependencies without `--recursive`, you can run into very unexpected issues. So this question is to a great extent about [git recursive cloning](http://git-scm.com/docs/git-clone) and [git submodules](http://git-scm.com/book/en/Git-Tools-Submodules).
|
21,426,329
|
I followed mbrochh's instruction <https://github.com/mbrochh/vim-as-a-python-ide> to build my Vim as a Python IDE. But things go wrong when opening Vim after I put `jedi-vim` into `~/.vim/bundle`. The following are the warnings:
```
Error detected while processing CursorMovedI Auto commands for "buffer=1":
Traceback (most recent call last):
Error detected while processing CursorMovedI Auto commands for "buffer=1":
  File "<string>", line 1, in <module>
Error detected while processing CursorMovedI Auto commands for "buffer=1":
NameError: name 'jedi_vim' is not defined
```
I hope someone can figure out the problem and thanks for your help.
|
2014/01/29
|
[
"https://Stackoverflow.com/questions/21426329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1263069/"
] |
If you’re trying to use Vundle to install the jedi-vim plugin, I don’t think you should have to place it under `~/.vim/bundle`. Instead, make sure you have Vundle set up correctly, as [described in its “Quick start”](https://github.com/gmarik/vundle#quick-start), and then try adding this line to your `~/.vimrc` after the lines where Vundle is set up:
```
Plugin 'davidhalter/jedi-vim'
```
Then run `:PluginInstall` and the plugin should be installed.
|
(Using Ubuntu 14.04 LTS with Python 2.7)
I had a very similar issue and found that I needed to integrate Jedi into my Python installation.
I did the following...
```
sudo apt-get install python-pip
sudo pip install jedi
```
Then if you haven't done so already, you can then add Jedi to VIM via Pathogen as follows...
```
mkdir -p ~/.vim/autoload ~/.vim/bundle
curl -so ~/.vim/autoload/pathogen.vim https://raw.githubusercontent.com/tpope/vim-pathogen/master/autoload/pathogen.vim
```
Then... add this line to your '**~/.vimrc**' file (Create it if it doesn't already exist.)
```
call pathogen#infect()
```
Then Save and Quit.
Lastly...
```
cd ~/.vim/bundle
git clone git://github.com/davidhalter/jedi-vim.git
```
That's it.
|
21,426,329
|
I followed mbrochh's instruction <https://github.com/mbrochh/vim-as-a-python-ide> to build my Vim as a Python IDE. But things go wrong when opening Vim after I put `jedi-vim` into `~/.vim/bundle`. The following are the warnings:
```
Error detected while processing CursorMovedI Auto commands for "buffer=1":
Traceback (most recent call last):
Error detected while processing CursorMovedI Auto commands for "buffer=1":
  File "<string>", line 1, in <module>
Error detected while processing CursorMovedI Auto commands for "buffer=1":
NameError: name 'jedi_vim' is not defined
```
I hope someone can figure out the problem and thanks for your help.
|
2014/01/29
|
[
"https://Stackoverflow.com/questions/21426329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1263069/"
] |
Make sure that you have installed jedi.
I solved my problem with the commands below:
```
cd ~/.vim/bundle/jedi-vim
git submodule update --init
```
|
Dependencies exist in the jedi-vim git repo as submodules. *I expect you are using Pathogen as your extension manager.* Use `git clone` with the `--recursive` option.
>
> cd ~/.vim/bundle/ && git clone --recursive <https://github.com/davidhalter/jedi-vim.git>
>
>
>
Dave Halter has this instruction in the [docs on github](https://github.com/davidhalter/jedi-vim#installation).
---
BTW, this is common behavior for all Vim extensions with dependencies, such as flake8-vim. Furthermore, if you clone any repo that has dependencies without `--recursive`, you can run into very unexpected issues. So this question is to a great extent about [git recursive cloning](http://git-scm.com/docs/git-clone) and [git submodules](http://git-scm.com/book/en/Git-Tools-Submodules).
|
21,426,329
|
I followed mbrochh's instruction <https://github.com/mbrochh/vim-as-a-python-ide> to build my Vim as a Python IDE. But things go wrong when opening Vim after I put `jedi-vim` into `~/.vim/bundle`. The following are the warnings:
```
Error detected while processing CursorMovedI Auto commands for "buffer=1":
Traceback (most recent call last):
Error detected while processing CursorMovedI Auto commands for "buffer=1":
  File "<string>", line 1, in <module>
Error detected while processing CursorMovedI Auto commands for "buffer=1":
NameError: name 'jedi_vim' is not defined
```
I hope someone can figure out the problem and thanks for your help.
|
2014/01/29
|
[
"https://Stackoverflow.com/questions/21426329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1263069/"
] |
(Using Ubuntu 14.04 LTS with Python 2.7)
I had a very similar issue and found that I needed to integrate Jedi into my Python installation.
I did the following...
```
sudo apt-get install python-pip
sudo pip install jedi
```
Then if you haven't done so already, you can then add Jedi to VIM via Pathogen as follows...
```
mkdir -p ~/.vim/autoload ~/.vim/bundle
curl -so ~/.vim/autoload/pathogen.vim https://raw.githubusercontent.com/tpope/vim-pathogen/master/autoload/pathogen.vim
```
Then... add this line to your '**~/.vimrc**' file (Create it if it doesn't already exist.)
```
call pathogen#infect()
```
Then Save and Quit.
Lastly...
```
cd ~/.vim/bundle
git clone git://github.com/davidhalter/jedi-vim.git
```
That's it.
|
Dependencies exist in the jedi-vim git repo as submodules. *I expect you are using Pathogen as your extension manager.* Use `git clone` with the `--recursive` option.
>
> cd ~/.vim/bundle/ && git clone --recursive <https://github.com/davidhalter/jedi-vim.git>
>
>
>
Dave Halter has this instruction in the [docs on github](https://github.com/davidhalter/jedi-vim#installation).
---
BTW, this is common behavior for all Vim extensions with dependencies, such as flake8-vim. Furthermore, if you clone any repo that has dependencies without `--recursive`, you can run into very unexpected issues. So this question is to a great extent about [git recursive cloning](http://git-scm.com/docs/git-clone) and [git submodules](http://git-scm.com/book/en/Git-Tools-Submodules).
|
47,972,811
|
I am on CentOS 7. I installed tk, tk-devel, and tkinter through yum. I can import tkinter in Python 3, but not in Python 2.7. Any ideas?
Success in Python 3 (Anaconda):
```
Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter
>>>
```
But fail on Python 2.7 (CentOS default):
```
Python 2.7.5 (default, Aug 4 2017, 00:39:18)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import Tkinter
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/lib-tk/Tkinter.py", line 39, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libTix.so: cannot open shared object file: No such file or directory
```
I read some answers said
>
> If it fails with "No module named \_tkinter", your Python configuration needs to be modified to include this module (which is an extension module implemented in C). Do not edit Modules/Setup (it is out of date). You may have to install Tcl and Tk (when using RPM, install the -devel RPMs as well) and/or edit the setup.py script to point to the right locations where Tcl/Tk is installed. If you install Tcl/Tk in the default locations, simply rerunning "make" should build the \_tkinter extension.
>
>
>
I have reinstalled tk, tk-devel, and tkinter through yum, but the problem is the same.
How can I configure it to work on Python 2.7?
|
2017/12/25
|
[
"https://Stackoverflow.com/questions/47972811",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9139945/"
] |
For python 3 use:
```
import tkinter
```
For python 2 use:
```
import Tkinter
```
If these do not work, install the Tk bindings (the commands below are for Debian/Ubuntu; on CentOS use the matching `yum` packages instead). For python 3:
```
sudo apt-get install python3-tk
```
or, for python 2:
```
sudo apt-get install python-tk
```
you can find more details [here](https://www.techinfected.net/2015/09/how-to-install-and-use-tkinter-in-ubuntu-debian-linux-mint.html)
|
For python2.7 try
```
import Tkinter
```
With a capital T. It should already be pre-installed in the default CentOS 7 Python setup; if not, run `yum install tkinter`.
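A quick way to check which name resolves (and whether the underlying C extension loads) under each interpreter is a small probe like this; it is a generic sketch, not specific to CentOS:

```python
# The Tk binding is named Tkinter on Python 2 and tkinter on Python 3;
# an ImportError here usually means the Tk support package is missing.
import sys

name = "tkinter" if sys.version_info[0] >= 3 else "Tkinter"
try:
    __import__(name)
    status = "available"
except ImportError as exc:
    status = "missing: %s" % exc

print(name, status)
```

Run it once with `python` and once with `python3` (or `py -2`/`py -3`) to compare the two installations.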
|
17,273,393
|
In my Python code I have a global `requests.session` instance:
```
import requests
session = requests.session()
```
How can I mock it with `Mock`? Is there any decorator for this kind of operations? I tried following:
```
session.get = mock.Mock(side_effect=self.side_effects)
```
but (as expected) this code doesn't return `session.get` to its original state after each test, like the `@mock.patch` decorator does.
|
2013/06/24
|
[
"https://Stackoverflow.com/questions/17273393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1325846/"
] |
Use `mock.patch` to patch session in your module. Here you go, a complete working example <https://gist.github.com/k-bx/5861641>
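For illustration, here is a self-contained sketch of the same idea: a throwaway module named `mymod` (a made-up name standing in for your real module) holding a global `session`, patched by its dotted path:

```python
# Build a throwaway module with a global session, then patch it by name.
# mock.patch restores the original attribute when the with-block exits.
import sys
import types
from unittest import mock  # on Python 2: pip install mock, then `import mock`

mymod = types.ModuleType("mymod")
mymod.session = object()  # stands in for requests.session()
sys.modules["mymod"] = mymod

with mock.patch("mymod.session") as fake_session:
    fake_session.get.return_value.status_code = 200
    response = mymod.session.get("http://example.com")
    assert response.status_code == 200

# Outside the block the original object is back in place.
restored = not isinstance(mymod.session, mock.Mock)
```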
|
With some inspiration from the previous answer and from
[mock-attributes-in-python-mock](https://stackoverflow.com/questions/16867509/mock-attributes-in-python-mock),
I was able to mock a session defined like this:
```
import requests

class MyClient(object):
    """A client that owns a requests session."""
    def __init__(self):
        self.session = requests.session()
```
with that: (the call to *get* returns a response with a *status\_code* attribute set to 200)
```
import mock  # on Python 3: from unittest import mock

def test_login_session():
    with mock.patch('path.to.requests.session') as patched_session:
        # Arrange: instantiate the client under test
        test_client = MyClient()
        type(patched_session().get.return_value).status_code = mock.PropertyMock(return_value=200)
        # Act
        resp = test_client.login_cookie()
        # Assert
        assert resp is None
```
|
17,273,393
|
In my Python code I have a global `requests.session` instance:
```
import requests
session = requests.session()
```
How can I mock it with `Mock`? Is there any decorator for this kind of operations? I tried following:
```
session.get = mock.Mock(side_effect=self.side_effects)
```
but (as expected) this code doesn't return `session.get` to its original state after each test, like the `@mock.patch` decorator does.
|
2013/06/24
|
[
"https://Stackoverflow.com/questions/17273393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1325846/"
] |
Since `requests.session()` returns an instance of the `Session` class, it is also possible to use `patch.object()`:
```
from requests import Session
from unittest.mock import patch
@patch.object(Session, 'get')
def test_foo(mock_get):
mock_get.return_value = 'bar'
```
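A runnable variant of the above (the URL is a placeholder; no request is actually sent because `get` is replaced on the class):

```python
# patch.object replaces Session.get on the class itself, so every session
# instance created inside the decorated function sees the mock.
import requests
from requests import Session
from unittest.mock import patch

@patch.object(Session, 'get')
def fetch(mock_get):
    mock_get.return_value = 'bar'
    session = requests.session()
    return session.get('http://example.com')  # returns the mocked value

result = fetch()
```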
|
Use `mock.patch` to patch session in your module. Here you go, a complete working example <https://gist.github.com/k-bx/5861641>
|
17,273,393
|
In my Python code I have a global `requests.session` instance:
```
import requests
session = requests.session()
```
How can I mock it with `Mock`? Is there any decorator for this kind of operations? I tried following:
```
session.get = mock.Mock(side_effect=self.side_effects)
```
but (as expected) this code doesn't return `session.get` to its original state after each test, like the `@mock.patch` decorator does.
|
2013/06/24
|
[
"https://Stackoverflow.com/questions/17273393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1325846/"
] |
Use `mock.patch` to patch session in your module. Here you go, a complete working example <https://gist.github.com/k-bx/5861641>
|
I discovered [the requests\_mock library](https://pypi.org/project/requests-mock/). It saved me a lot of bother. With pytest...
```
def test_success(self, requests_mock):
    """They give us a token."""
    requests_mock.get(
        "https://example.com/api/v1/login",
        text=('{"result":1001, "errMsg":null,'
              '"token":"TEST_TOKEN", "expire":1799}'))
    auth_token = the_module_I_am_testing.BearerAuth('test_apikey')
    assert auth_token == 'TEST_TOKEN'
```
The module I am testing has my `BearerAuth` class which hits an endpoint for a token to start a requests.session with.
|
17,273,393
|
In my Python code I have a global `requests.session` instance:
```
import requests
session = requests.session()
```
How can I mock it with `Mock`? Is there any decorator for this kind of operations? I tried following:
```
session.get = mock.Mock(side_effect=self.side_effects)
```
but (as expected) this code doesn't return `session.get` to its original state after each test, like the `@mock.patch` decorator does.
|
2013/06/24
|
[
"https://Stackoverflow.com/questions/17273393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1325846/"
] |
Since `requests.session()` returns an instance of the `Session` class, it is also possible to use `patch.object()`:
```
from requests import Session
from unittest.mock import patch
@patch.object(Session, 'get')
def test_foo(mock_get):
mock_get.return_value = 'bar'
```
|
With some inspiration from the previous answer and from
[mock-attributes-in-python-mock](https://stackoverflow.com/questions/16867509/mock-attributes-in-python-mock),
I was able to mock a session defined like this:
```
import requests

class MyClient(object):
    """A client that owns a requests session."""
    def __init__(self):
        self.session = requests.session()
```
with that: (the call to *get* returns a response with a *status\_code* attribute set to 200)
```
import mock  # on Python 3: from unittest import mock

def test_login_session():
    with mock.patch('path.to.requests.session') as patched_session:
        # Arrange: instantiate the client under test
        test_client = MyClient()
        type(patched_session().get.return_value).status_code = mock.PropertyMock(return_value=200)
        # Act
        resp = test_client.login_cookie()
        # Assert
        assert resp is None
```
|
17,273,393
|
In my Python code I have a global `requests.session` instance:
```
import requests
session = requests.session()
```
How can I mock it with `Mock`? Is there any decorator for this kind of operations? I tried following:
```
session.get = mock.Mock(side_effect=self.side_effects)
```
but (as expected) this code doesn't return `session.get` to its original state after each test, like the `@mock.patch` decorator does.
|
2013/06/24
|
[
"https://Stackoverflow.com/questions/17273393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1325846/"
] |
Since `requests.session()` returns an instance of the `Session` class, it is also possible to use `patch.object()`:
```
from requests import Session
from unittest.mock import patch
@patch.object(Session, 'get')
def test_foo(mock_get):
mock_get.return_value = 'bar'
```
|
I discovered [the requests\_mock library](https://pypi.org/project/requests-mock/). It saved me a lot of bother. With pytest...
```
def test_success(self, requests_mock):
    """They give us a token."""
    requests_mock.get(
        "https://example.com/api/v1/login",
        text=('{"result":1001, "errMsg":null,'
              '"token":"TEST_TOKEN", "expire":1799}'))
    auth_token = the_module_I_am_testing.BearerAuth('test_apikey')
    assert auth_token == 'TEST_TOKEN'
```
The module I am testing has my `BearerAuth` class which hits an endpoint for a token to start a requests.session with.
|