| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
14,075,337
|
I have a csv file with columns date, time, price, mag, signal.
It has 62035 rows; there are 42 times of day associated with each unique date in the file.
For each date, whenever there is an 'S' in the signal column, I want to append the corresponding price at the time the 'S' occurred. Below is my attempt.
```
from pandas import *
from numpy import *
from io import *
from os import *
from sys import *
DF1 = read_csv('___.csv')
idf=DF1.set_index(['date','time','price'],inplace=True)
sStore=[]
for i in idf.index[i][0]:
    sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
sStore.head()
```
```
Traceback (most recent call last)
<ipython-input-7-8769220929e4> in <module>()
1 sStore=[]
2
----> 3 for time in idf.index[i][0]:
4
5 sStore.append([idf.index[j][2] for j in idf[j][1] if idf['signal']=='S'])
NameError: name 'i' is not defined
```
I do not understand why the `i` index is not permitted here. Thanks.
I also find it strange that:
idf.index.levels[0] shows the dates "not parsed", as they appear in the file but out of order, despite parse\_date=True being passed as an argument to set\_index.
I bring this up since I was thinking of sidestepping the problem with something like:
```
for i in idf.index.levels[0]:
    sStore.append([idf.index[j][2] for j in idf.index.levels[1] if idf['signal']=='S'])
sStore.head()
```
**My edit 12/30/2012 based on DSM's comment below:**
I would like to use your idea to get the P&L, as I commented below: where the number of S signals does not equal the number of B signals for any given date, we difference using the price at the closing time, 1620.
```
v=[df["signal"]=="S"]
t=[df["time"]=="1620"]
u=[df["signal"]!="S"]
df["price"][[v and (u and t)]]
```
That is, "give me the price at 1620 (even when it doesn't give a "sell signal", S), so that I can difference it with the extra B's" -- for the special case where B>S. This ignores the symmetric concern (where S>B), but for now I want to understand this logical issue.
On traceback, this expression gives:
```
ValueError: boolean index array should have 1 dimension
```
Note that in order to invoke df["time"] **I do not set\_index here**. Trying the union operator | gives:
```
TypeError: unsupported operand type(s) for |: 'list' and 'list'
```
**Looking at Max Fellows's approach**,
@Max Fellows
The point is to close out the positions at the end of the day, so we need to capture the price at the close to "unload" all those B's and S's which were accumulated but didn't net each other out.
If I say:
```
filterFunc1 = lambda row: row["signal"] == "S" and ([row["signal"] != "S"][row["price"]=="1620"])
filterFunc2 =lambda row: ([row["price"]=="1620"][row["signal"] != "S"])
filterFunc=filterFunc1 and filterFunc2
filteredData = itertools.ifilter(filterFunc, reader)
```
On traceback:
```
IndexError: list index out of range
```
|
2012/12/28
|
[
"https://Stackoverflow.com/questions/14075337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1374969/"
] |
Using @Max Fellows' handy example data, we can have a look at it in `pandas`. [BTW, you should always try to provide a short, self-contained, correct example (see [here](http://sscce.org/) for more details), so that the people trying to help you don't have to spend time coming up with one.]
First, `import pandas as pd`. Then:
```
In [23]: df = pd.read_csv("sample.csv", names="date time price mag signal".split())
In [24]: df.set_index(["date", "time"], inplace=True)
```
which gives me
```
In [25]: df
Out[25]:
price mag signal
date time
12/28/2012 1:30 10 foo S
2:15 11 bar N
3:00 12 baz S
4:45 13 fibble N
5:30 14 whatsit S
6:15 15 bobs N
7:00 16 widgets S
7:45 17 weevils N
8:30 18 badger S
9:15 19 moose S
11/29/2012 1:30 10 foo N
2:15 11 bar N
3:00 12 baz S
4:45 13 fibble N
5:30 14 whatsit N
6:15 15 bobs N
7:00 16 widgets S
7:45 17 weevils N
8:30 18 badger N
9:15 19 moose N
[etc.]
```
We can see which rows have a signal of `S` easily:
```
In [26]: df["signal"] == "S"
Out[26]:
date time
12/28/2012 1:30 True
2:15 False
3:00 True
4:45 False
5:30 True
6:15 False
[etc..]
```
and we can select using this too:
```
In [27]: df["price"][df["signal"] == "S"]
Out[27]:
date time
12/28/2012 1:30 10
3:00 12
5:30 14
7:00 16
8:30 18
9:15 19
11/29/2012 3:00 12
7:00 16
12/29/2012 3:00 12
7:00 16
8/9/2008 3:00 12
7:00 16
Name: price
```
This is a `Series` with every date, time, and price where there's an `S`. And if you simply want a list:
```
In [28]: list(df["price"][df["signal"] == "S"])
Out[28]: [10.0, 12.0, 14.0, 16.0, 18.0, 19.0, 12.0, 16.0, 12.0, 16.0, 12.0, 16.0]
```
**Update**:
`v=[df["signal"]=="S"]` makes `v` a Python `list` containing a `Series`. That's not what you want. `df["price"][[v and (u and t)]]` doesn't make much sense to me either: `v` and `u` are mutually exclusive, so if you `and` them together, you'll get nothing. For these logical vector operations you can use `&` and `|` instead of `and` and `or`. Using the reference data again:
```
In [85]: import pandas as pd
In [86]: df = pd.read_csv("sample.csv", names="date time price mag signal".split())
In [87]: v=df["signal"]=="S"
In [88]: t=df["time"]=="4:45"
In [89]: u=df["signal"]!="S"
In [90]: df[t]
Out[90]:
date time price mag signal
3 12/28/2012 4:45 13 fibble N
13 11/29/2012 4:45 13 fibble N
23 12/29/2012 4:45 13 fibble N
33 8/9/2008 4:45 13 fibble N
In [91]: df["price"][t]
Out[91]:
3 13
13 13
23 13
33 13
Name: price
In [92]: df["price"][v | (u & t)]
Out[92]:
0 10
2 12
3 13
4 14
6 16
8 18
9 19
12 12
13 13
16 16
22 12
23 13
26 16
32 12
33 13
36 16
Name: price
```
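To connect this back to the P&L idea in the question, here is a hedged sketch of netting B against S per date and valuing any imbalance at the 16:20 close. The tiny DataFrame is invented for illustration; column names follow the question.

```python
import pandas as pd

# Made-up one-day sample; "1620" is the closing time from the question.
df = pd.DataFrame({
    "date":   ["12/28/2012"] * 3,
    "time":   ["0930", "1100", "1620"],
    "price":  [10.0, 12.0, 11.0],
    "signal": ["B", "S", "N"],
})

for date, day in df.groupby("date"):
    sells = day.loc[day["signal"] == "S", "price"]
    buys  = day.loc[day["signal"] == "B", "price"]
    close = day.loc[day["time"] == "1620", "price"].iloc[0]
    # any unmatched positions are closed out at the 16:20 price
    pnl = sells.sum() - buys.sum() + (len(buys) - len(sells)) * close
    print(date, pnl)   # prints: 12/28/2012 2.0
```

The boolean masks here play the same role as `v`, `t`, and `u` above, but `groupby` keeps the per-date bookkeeping out of your hands.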
[Note: this question has now become too long and meandering. I suggest spending some time working through the examples in the `pandas` documentation at the console to get a feel for it.]
|
This is what I think you're trying to accomplish based on your edit: for every date in your CSV file, group the date along with a list of prices for each item with a signal of "S".
You didn't include any sample data in your question, so I made a test one that I hope matches the format you described:
```
12/28/2012,1:30,10.00,"foo","S"
12/28/2012,2:15,11.00,"bar","N"
12/28/2012,3:00,12.00,"baz","S"
12/28/2012,4:45,13.00,"fibble","N"
12/28/2012,5:30,14.00,"whatsit","S"
12/28/2012,6:15,15.00,"bobs","N"
12/28/2012,7:00,16.00,"widgets","S"
12/28/2012,7:45,17.00,"weevils","N"
12/28/2012,8:30,18.00,"badger","S"
12/28/2012,9:15,19.00,"moose","S"
11/29/2012,1:30,10.00,"foo","N"
11/29/2012,2:15,11.00,"bar","N"
11/29/2012,3:00,12.00,"baz","S"
11/29/2012,4:45,13.00,"fibble","N"
11/29/2012,5:30,14.00,"whatsit","N"
11/29/2012,6:15,15.00,"bobs","N"
11/29/2012,7:00,16.00,"widgets","S"
11/29/2012,7:45,17.00,"weevils","N"
11/29/2012,8:30,18.00,"badger","N"
11/29/2012,9:15,19.00,"moose","N"
12/29/2012,1:30,10.00,"foo","N"
12/29/2012,2:15,11.00,"bar","N"
12/29/2012,3:00,12.00,"baz","S"
12/29/2012,4:45,13.00,"fibble","N"
12/29/2012,5:30,14.00,"whatsit","N"
12/29/2012,6:15,15.00,"bobs","N"
12/29/2012,7:00,16.00,"widgets","S"
12/29/2012,7:45,17.00,"weevils","N"
12/29/2012,8:30,18.00,"badger","N"
12/29/2012,9:15,19.00,"moose","N"
8/9/2008,1:30,10.00,"foo","N"
8/9/2008,2:15,11.00,"bar","N"
8/9/2008,3:00,12.00,"baz","S"
8/9/2008,4:45,13.00,"fibble","N"
8/9/2008,5:30,14.00,"whatsit","N"
8/9/2008,6:15,15.00,"bobs","N"
8/9/2008,7:00,16.00,"widgets","S"
8/9/2008,7:45,17.00,"weevils","N"
8/9/2008,8:30,18.00,"badger","N"
8/9/2008,9:15,19.00,"moose","N"
```
And here's a method using Python 2.7 and built-in libraries to group it in the way it sounds like you want:
```
import csv
import itertools
import time
from collections import OrderedDict
with open("sample.csv", "r") as file:
    reader = csv.DictReader(file,
                            fieldnames=["date", "time", "price", "mag", "signal"])

    # Reduce the size of the data set by filtering out the non-"S" rows.
    filterFunc = lambda row: row["signal"] == "S"
    filteredData = itertools.ifilter(filterFunc, reader)

    # Sort by date so we can use the groupby function.
    dateKeyFunc = lambda row: time.strptime(row["date"], r"%m/%d/%Y")
    sortedData = sorted(filteredData, key=dateKeyFunc)

    # Group by date: create a new dictionary of date to a list of prices.
    datePrices = OrderedDict((date, [row["price"] for row in rows])
                             for date, rows
                             in itertools.groupby(sortedData, dateKeyFunc))

    for date, prices in datePrices.iteritems():
        print "{0}: {1}".format(time.strftime(r"%m/%d/%Y", date),
                                ", ".join(str(price) for price in prices))
>>> 08/09/2008: 12.00, 16.00
>>> 11/29/2012: 12.00, 16.00
>>> 12/28/2012: 10.00, 12.00, 14.00, 16.00, 18.00, 19.00
>>> 12/29/2012: 12.00, 16.00
```
The type conversions are up to you, since you may be using other libraries to do your CSV reading, but that should hopefully get you started -- and take careful note of @DSM's comment about import \*.
|
70,352,477
|
I am wondering if it is possible to do grouping of lists in python in a simple way without importing any libraries.
An example input could be:
```py
a="0 0 1 2 1"
```
and output
```
[(0,0),(1,1),(2)]
```
I am thinking something like the implementation of itertools groupby
```py
class groupby:
    # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
    # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
    def __init__(self, iterable, key=None):
        if key is None:
            key = lambda x: x
        self.keyfunc = key
        self.it = iter(iterable)
        self.tgtkey = self.currkey = self.currvalue = object()
    def __iter__(self):
        return self
    def __next__(self):
        self.id = object()
        while self.currkey == self.tgtkey:
            self.currvalue = next(self.it)    # Exit on StopIteration
            self.currkey = self.keyfunc(self.currvalue)
        self.tgtkey = self.currkey
        return (self.currkey, self._grouper(self.tgtkey, self.id))
    def _grouper(self, tgtkey, id):
        while self.id is id and self.currkey == tgtkey:
            yield self.currvalue
            try:
                self.currvalue = next(self.it)
            except StopIteration:
                return
            self.currkey = self.keyfunc(self.currvalue)
```
but shorter
|
2021/12/14
|
[
"https://Stackoverflow.com/questions/70352477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12868928/"
] |
I just realized how to do it.
```py
{k:[v for v in a.split() if v == k] for k in set(a.split())}
```
or
```py
[tuple(v for v in a.split() if v == k) for k in set(a.split())]
```
however, this solution cannot be used on dictionaries, because dicts are not hashable -- so maybe that's one of the reasons why the itertools implementation is so much more complicated.
|
How about using `a.split(' ')` and then a for loop to do that?
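A sketch of that idea, grouping by value to match the output the question asks for (the split-plus-dict pattern is my reading of what this answer intends):

```python
a = "0 0 1 2 1"

# Group equal items together by value, without importing anything.
# Dicts preserve insertion order in Python 3.7+, so groups come out
# in first-seen order.
groups = {}
for item in a.split(' '):
    groups.setdefault(item, []).append(int(item))

result = [tuple(g) for g in groups.values()]
print(result)   # [(0, 0), (1, 1), (2,)]
```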
|
70,352,477
|
I am wondering if it is possible to do grouping of lists in python in a simple way without importing any libraries.
An example input could be:
```py
a="0 0 1 2 1"
```
and output
```
[(0,0),(1,1),(2)]
```
I am thinking something like the implementation of itertools groupby
```py
class groupby:
    # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
    # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
    def __init__(self, iterable, key=None):
        if key is None:
            key = lambda x: x
        self.keyfunc = key
        self.it = iter(iterable)
        self.tgtkey = self.currkey = self.currvalue = object()
    def __iter__(self):
        return self
    def __next__(self):
        self.id = object()
        while self.currkey == self.tgtkey:
            self.currvalue = next(self.it)    # Exit on StopIteration
            self.currkey = self.keyfunc(self.currvalue)
        self.tgtkey = self.currkey
        return (self.currkey, self._grouper(self.tgtkey, self.id))
    def _grouper(self, tgtkey, id):
        while self.id is id and self.currkey == tgtkey:
            yield self.currvalue
            try:
                self.currvalue = next(self.it)
            except StopIteration:
                return
            self.currkey = self.keyfunc(self.currvalue)
```
but shorter
|
2021/12/14
|
[
"https://Stackoverflow.com/questions/70352477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12868928/"
] |
To group the items by value (as opposed to grouping by changes as itertools groupby would do), you can use a list comprehension that embeds a dictionary to organize the data into lists:
```
a="0 0 1 2 1"
groups = [ g[v] for g in [dict()] for v in map(int,a.split())
if g.setdefault(v,[]).append(v) or len(g[v])==1 ]
print(groups)
[[0, 0], [1, 1], [2]]
```
Each list (value in the internal (`g`) dictionary) is augmented with the items in `a` but only returned once (when encountering a new grouping key).
Using a dictionary to group the items avoids the need to sort the items (which would be required if using itertools.groupby)
If you want the grouping to work exactly like groupby, then the internal dictionary needs to be cleared when a new key is encountered:
```
groups = [ g.setdefault(v,g.clear() or [v])
for g in [dict()] for v in map(int,a.split())
if v not in g or g[v].append(v) ]
print(groups)
[[0, 0], [1], [2], [1]]
```
which you can expand to produce the same output as groupby (including computed grouping keys and working from an iterable):
```
a="0 0 1 2 1"
ia = map(int,a.split()) # an iterable
key = lambda v: v > 0 # a grouping key function
groups = [ (k,g.setdefault(k,g.clear() or [v]))
for g in [dict()] for v in ia for k in [key(v)]
if k not in g or g[k].append(v) ]
print(groups)
[(False, [0, 0]), (True, [1, 2, 1])]
```
*Note that all the above solutions produce lists of grouped items (as opposed to groupby's item iterators).*
|
How about using `a.split(' ')` and then a for loop to do that?
|
63,315,233
|
If I have two matrices a and b, is there any function I can use to find the matrix x that, when dot-multiplied by a, makes b? Looking for Python solutions, for matrices in the form of numpy arrays.
|
2020/08/08
|
[
"https://Stackoverflow.com/questions/63315233",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12818944/"
] |
This problem of finding X such that `A*X=B` is equivalent to searching for the "inverse of A", i.e. a matrix such that `X = Ainverse * B`.
For information, in math `Ainverse` is written `A^(-1)` ("A to the power -1", but you can say "A inverse" instead).
In numpy, this is a builtin function to find the inverse of a matrix `a`:
```
import numpy as np
ainv = np.linalg.inv(a)
```
see for instance [this tutorial](https://scriptverse.academy/tutorials/python-matrix-inverse.html) for explanations.
You need to be aware that some matrices are not "invertible"; the most obvious examples (roughly) are:
* matrices that are not square
* matrices that represent a projection
numpy can still approximate a solution in certain cases.
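As a hedged sketch of that last point: for a square, invertible A it is usually preferable to call `np.linalg.solve` rather than form the inverse explicitly, and for singular or non-square A the pseudo-inverse `np.linalg.pinv` gives a least-squares approximation. The matrices below are invented for illustration.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # small invertible example
B = np.array([[1.0],
              [2.0]])

# Solve A @ X = B directly, without forming inv(A).
X = np.linalg.solve(A, B)
assert np.allclose(A @ X, B)

# The pseudo-inverse gives the same answer here, and a least-squares
# approximation when A is singular or not square.
X_approx = np.linalg.pinv(A) @ B
assert np.allclose(X, X_approx)
```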
|
if `A` is a full rank, square matrix
```
import numpy as np
from numpy.linalg import inv
X = inv(A) @ B
```
if not, then such a matrix does not exist, but we can approximate it
```
import numpy as np
from numpy.linalg import inv
X = inv(A.T @ A) @ A.T @ B
```
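As a side note, the normal-equations formula above is exactly the least-squares solution; a hedged sketch showing that `np.linalg.lstsq` computes the same X (and is generally more stable numerically). The tall example matrix is invented.

```python
import numpy as np

# A tall (non-square) system: more equations than unknowns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
B = np.array([[0.9], [2.1], [2.9]])

# Normal equations, as in the answer above.
X_normal = np.linalg.inv(A.T @ A) @ A.T @ B

# Built-in least-squares solver; returns (solution, residuals, rank, sv).
X_lstsq, *_ = np.linalg.lstsq(A, B, rcond=None)

assert np.allclose(X_normal, X_lstsq)
```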
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
How about:
```
>>> tups = [(1, 2), (3, 4)]
>>> ', '.join(map(str, tups))
'(1, 2), (3, 4)'
```
|
The most pythonic solution is
```
tuples = [(1, 2), (3, 4)]
tuple_strings = ['(%s, %s)' % tuple for tuple in tuples]
result = ', '.join(tuple_strings)
```
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
you might want to use something as simple as:
```
>>> l = [(1,2), (3,4)]
>>> str(l).strip('[]')
'(1, 2), (3, 4)'
```
.. which is handy, but not guaranteed to work correctly
|
I think this is pretty neat:
```python
>>> l = [(1,2), (3,4)]
>>> "".join(str(l)).strip('[]')
'(1,2), (3,4)'
```
Try it, it worked like a charm for me.
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
I think this is pretty neat:
```python
>>> l = [(1,2), (3,4)]
>>> "".join(str(l)).strip('[]')
'(1,2), (3,4)'
```
Try it, it worked like a charm for me.
|
Three more :)
```
l = [(1,2), (3,4)]
unicode(l)[1:-1]
# u'(1, 2), (3, 4)'
("%s, "*len(l) % tuple(l))[:-2]
# '(1, 2), (3, 4)'
", ".join(["%s"]*len(l)) % tuple(l)
# '(1, 2), (3, 4)'
```
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
How about
```
l = [(1, 2), (3, 4)]
print repr(l)[1:-1]
# (1, 2), (3, 4)
```
|
Three more :)
```
l = [(1,2), (3,4)]
unicode(l)[1:-1]
# u'(1, 2), (3, 4)'
("%s, "*len(l) % tuple(l))[:-2]
# '(1, 2), (3, 4)'
", ".join(["%s"]*len(l)) % tuple(l)
# '(1, 2), (3, 4)'
```
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
How about:
```
>>> tups = [(1, 2), (3, 4)]
>>> ', '.join(map(str, tups))
'(1, 2), (3, 4)'
```
|
I think this is pretty neat:
```python
>>> l = [(1,2), (3,4)]
>>> "".join(str(l)).strip('[]')
'(1,2), (3,4)'
```
Try it, it worked like a charm for me.
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
You can try something like this ([see also on ideone.com](http://ideone.com/VfbJp)):
```
myList = [(1,2),(3,4)]
print ",".join("(%s,%s)" % tup for tup in myList)
# (1,2),(3,4)
```
|
The most pythonic solution is
```
tuples = [(1, 2), (3, 4)]
tuple_strings = ['(%s, %s)' % tuple for tuple in tuples]
result = ', '.join(tuple_strings)
```
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
The most pythonic solution is
```
tuples = [(1, 2), (3, 4)]
tuple_strings = ['(%s, %s)' % tuple for tuple in tuples]
result = ', '.join(tuple_strings)
```
|
Three more :)
```
l = [(1,2), (3,4)]
unicode(l)[1:-1]
# u'(1, 2), (3, 4)'
("%s, "*len(l) % tuple(l))[:-2]
# '(1, 2), (3, 4)'
", ".join(["%s"]*len(l)) % tuple(l)
# '(1, 2), (3, 4)'
```
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
you might want to use something as simple as:
```
>>> l = [(1,2), (3,4)]
>>> str(l).strip('[]')
'(1, 2), (3, 4)'
```
.. which is handy, but not guaranteed to work correctly
|
Three more :)
```
l = [(1,2), (3,4)]
unicode(l)[1:-1]
# u'(1, 2), (3, 4)'
("%s, "*len(l) % tuple(l))[:-2]
# '(1, 2), (3, 4)'
", ".join(["%s"]*len(l)) % tuple(l)
# '(1, 2), (3, 4)'
```
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
you might want to use something as simple as:
```
>>> l = [(1,2), (3,4)]
>>> str(l).strip('[]')
'(1, 2), (3, 4)'
```
.. which is handy, but not guaranteed to work correctly
|
You can try something like this ([see also on ideone.com](http://ideone.com/VfbJp)):
```
myList = [(1,2),(3,4)]
print ",".join("(%s,%s)" % tup for tup in myList)
# (1,2),(3,4)
```
|
3,292,643
|
Which is the most pythonic way to convert a list of tuples to string?
I have:
```
[(1,2), (3,4)]
```
and I want:
```
"(1,2), (3,4)"
```
My solution to this has been:
```
l=[(1,2),(3,4)]
s=""
for t in l:
s += "(%s,%s)," % t
s = s[:-1]
```
Is there a more pythonic way to do this?
|
2010/07/20
|
[
"https://Stackoverflow.com/questions/3292643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170912/"
] |
How about:
```
>>> tups = [(1, 2), (3, 4)]
>>> ', '.join(map(str, tups))
'(1, 2), (3, 4)'
```
|
Three more :)
```
l = [(1,2), (3,4)]
unicode(l)[1:-1]
# u'(1, 2), (3, 4)'
("%s, "*len(l) % tuple(l))[:-2]
# '(1, 2), (3, 4)'
", ".join(["%s"]*len(l)) % tuple(l)
# '(1, 2), (3, 4)'
```
|
14,897,426
|
My XML file test.xml contains the following tags
```
<?xml version="1.0" encoding="ISO-8859-1"?>
<AppName>
<author>Subho Halder</author>
<description> Description</description>
<date>2012-11-06</date>
<out>Output 1</out>
<out>Output 2</out>
<out>Output 3</out>
</AppName>
```
I want to count the number of times the `<out>` tag has occurred.
This is my python code so far which I have written:
```
from xml.dom.minidom import parseString
file = open('test.xml','r')
data = file.read()
file.close()
dom = parseString(data)
if (len(dom.getElementsByTagName('author'))!=0):
xmlTag = dom.getElementsByTagName('author')[0].toxml()
author = xmlTag.replace('<author>','').replace('</author>','')
print author
```
Can someone help me out here?
|
2013/02/15
|
[
"https://Stackoverflow.com/questions/14897426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122840/"
] |
Try [`len(dom.getElementsByTagName('out'))`](http://docs.python.org/2/library/xml.dom.html#document-objects)
```
from xml.dom.minidom import parseString
file = open('test.xml','r')
data = file.read()
file.close()
dom = parseString(data)
print len(dom.getElementsByTagName('out'))
```
gives
```
3
```
|
I would recommend using lxml:
```
import lxml.etree
doc = lxml.etree.parse('test.xml')
count = doc.xpath('count(//out)')
```
You can look up more information on XPATH [here](http://infohost.nmt.edu/tcc/help/pubs/pylxml/web/xpath.html).
|
14,897,426
|
My XML file test.xml contains the following tags
```
<?xml version="1.0" encoding="ISO-8859-1"?>
<AppName>
<author>Subho Halder</author>
<description> Description</description>
<date>2012-11-06</date>
<out>Output 1</out>
<out>Output 2</out>
<out>Output 3</out>
</AppName>
```
I want to count the number of times the `<out>` tag has occurred.
This is my python code so far which I have written:
```
from xml.dom.minidom import parseString
file = open('test.xml','r')
data = file.read()
file.close()
dom = parseString(data)
if (len(dom.getElementsByTagName('author'))!=0):
xmlTag = dom.getElementsByTagName('author')[0].toxml()
author = xmlTag.replace('<author>','').replace('</author>','')
print author
```
Can someone help me out here?
|
2013/02/15
|
[
"https://Stackoverflow.com/questions/14897426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122840/"
] |
Try [`len(dom.getElementsByTagName('out'))`](http://docs.python.org/2/library/xml.dom.html#document-objects)
```
from xml.dom.minidom import parseString
file = open('test.xml','r')
data = file.read()
file.close()
dom = parseString(data)
print len(dom.getElementsByTagName('out'))
```
gives
```
3
```
|
If you want, you can also use [ElementTree](https://docs.python.org/2/library/xml.etree.elementtree.html). With the function below you will get a dictionary with the tag names as keys and the number of times each tag is encountered in your XML file.
```
import xml.etree.ElementTree as ET
from collections import Counter

def count_tags(filename):
    my_tags = []
    for event, element in ET.iterparse(filename):
        my_tags.append(element.tag)
    my_keys = Counter(my_tags).keys()
    my_values = Counter(my_tags).values()
    my_dict = dict(zip(my_keys, my_values))
    return my_dict
```
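A hedged usage sketch: `iterparse` also accepts a file-like object, so the sample document from the question can be counted without touching disk (the trimmed XML below is adapted from the question).

```python
import io
import xml.etree.ElementTree as ET
from collections import Counter

xml = """<?xml version="1.0"?>
<AppName>
  <out>Output 1</out>
  <out>Output 2</out>
  <out>Output 3</out>
</AppName>"""

# Count every tag as its end event is parsed.
tags = Counter(el.tag for _, el in ET.iterparse(io.StringIO(xml)))
print(tags["out"])   # 3
```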
|
56,911,541
|
I am new to python/flask so I need your help.
I have a multiselect dropdown in my HTML file like this:
```
<select multiple id="mymultiselect" name="mymultiselect">
<option value="1">India</option>
<option value="2">USA</option>
<option value="3">Japanthing</option>
</select>
```
As I have multiple buttons in the form tag, and each button points to a different route, I am using the GET method instead of POST.
But in Python I am not able to get the value from `request.args.get('mymultiselect')`; it returns None.
I tried `request.args.getlist('mymultiselect')` as well, but no luck.
Could you please help me with this?
|
2019/07/06
|
[
"https://Stackoverflow.com/questions/56911541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11746629/"
] |
One way would be to create a pipe in the parent process. Each child process closes the write end of the pipe, then calls `read` (which blocks). When the parent process is ready, it closes both ends of its pipe. This makes all the children return from `read` and they can now close their read end of the pipe and proceed with their code.
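A minimal sketch of that pipe trick in Python, using `os.fork` directly (Unix only; the number of children is arbitrary):

```python
import os

# Parent creates a pipe; each child closes the write end and blocks on read().
# When the parent closes both ends, no write end remains open, so every
# child's read() returns EOF and they all proceed at once.
r, w = os.pipe()

children = []
for _ in range(3):
    pid = os.fork()
    if pid == 0:          # child process
        os.close(w)       # child keeps only the read end
        os.read(r, 1)     # blocks until the parent closes the pipe
        os.close(r)
        os._exit(0)       # the child's real work would go here
    children.append(pid)

os.close(r)               # parent releases all children at once
os.close(w)
statuses = [os.waitpid(pid, 0)[1] for pid in children]
print(statuses)           # [0, 0, 0]
```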
|
For situations like this, I sometimes use [atomic](https://en.cppreference.com/w/c/atomic) operations. You can use atomic flags to notify the sub-processes, but that creates busy-waiting, so use it only in very special cases.
Another method is to create an event with something like `pthread_cond_broadcast()` and `pthread_cond_timedwait()`. All sub-processes wait on your condition; when you are ready to go, signal it from the parent process.
|
45,734,960
|
I want to write my own small mailserver application in Python with **aiosmtpd**
a) for educational purposes, to better understand mailservers
b) to realize my own features
So my question is: what is missing (besides aiosmtpd) for a **Mail-Transfer-Agent** that can send and receive emails to/from other full MTAs (gmail.com, yahoo.com, ...)?
I'm guessing:
1.) Of course a ***domain*** and static IP
2.) A valid ***certificate*** for this domain
...should be doable with Let's Encrypt
3.) ***Encryption***
...should be doable with SSL/Context/STARTTLS with aiosmtpd itself
4.) ***Resolving*** MX DNS entries for outgoing emails!?
...should be doable with the Python library dnspython
5.) ***Error*** handling for SMTP communication errors, error replies from other MTAs, bouncing!?
6.) A ***queue*** for handling inbound and pending outbound emails!?
Are there any other ***"essential"*** features missing?
Of course I know there are a lot more "advanced" features for a mailserver, like spam checking, malware checking, certificate validation, blacklisting, rules, mailboxes and more...
Thanks for all hints!
---
EDIT:
Let me clarify what I have in mind:
I want to write a mailserver for a club. Its main purpose will be a mailing-list server. There will be different lists for different groups of the club.
Let's say my domain is *myclub.org*; then there will be, for example, *youth@myclub.org*, *trainer@myclub.org* and so on.
**Only members** will be allowed to use this mailserver, and only the members will receive emails from it. No one else will be allowed to send emails to this mailserver, nor will anyone else receive emails from it. The members' email addresses and their group(s) are stored in a database.
In the future I want to integrate some other useful features, for example:
* Auto-reminders
* A chatbot, where members can control services and request information by email
What I don't need:
* User mailboxes
* POP/IMAP access
* A web interface
**Open relay issue**:
* I want to reject any [FROM] email address that is not in the members database during SMTP negotiation.
* I want to check the sending mailservers for a valid certificate.
* The number of emails/member/day will be limited.
* I'm not sure if I really need spam detection for the incoming emails?
**Losing emails issue**:
I think I will need a "lightweight" retry mechanism. However, if an outgoing email can't be delivered after some retries, it will be dropped and only the administrator will be notified, not the sender. The members should not be bothered by email delivery issues. Is there any **Python library** that can **generate RFC 3464 compliant** error reply emails?
**Reboot issue**:
I'm not sure if I really need persistent storage for emails that are not yet sent? In my use case, all outgoing emails should usually be delivered within a few seconds (if no delivery problem occurs). Before a (planned) reboot I can check for an empty send queue.
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45734960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6002171/"
] |
The most important thing about running your own SMTP server is that you **must not be an open relay**. That means you must not accept messages from strangers and relay them to any destination on the internet, since that would enable spammers to send spam through your SMTP server -- which would quickly get you blocked.
Thus, your server should
* relay from *authenticated* users/senders to remote destinations, or
* relay from strangers to *your own domains*.
Since your question talks about resolving MX records for outgoing email, I'm assuming you want your server to accept emails from authenticated users. Thus you need to consider how your users will authenticate themselves to the server. aiosmtpd currently has an [open pull request](https://github.com/aio-libs/aiosmtpd/pull/66) providing a basic SMTP AUTH implementation; you may use that, or you may implement your own (by subclassing `aiosmtpd.smtp.SMTP` and implementing the `smtp_AUTH()` method).
---
The second-most important thing about running your own SMTP server is that you **must not lose emails without notifying the sender**. When you accept an email from an authenticated user to be relayed to an external destination, you should let the user know (by sending an [RFC 3464 Delivery Status Notification](https://www.rfc-editor.org/rfc/rfc3464) via email) if the message is delayed or if it is not delivered at all.
You should not drop the email immediately if the remote destination fails to receive it; you should retry later, and keep retrying until you deem that you have tried for long enough. Postfix, for instance, waits 10 minutes before retrying after the first delivery attempt fails, then 20 minutes if the second attempt fails, and so on, until delivery has been attempted for a couple of days.
You should also take care to allow the host running your mail server to be rebooted, meaning you should store queued messages on disk. For this you might be able to use the [mailbox module](https://docs.python.org/3/library/mailbox.html).
---
Of course, I haven't covered every little detail, but I think the above two points are the most important, and you didn't seem to cover them in your question.
|
You may consider the following features:
* Message threading
* Support for Delivery status
* Support for POP and IMAP protocols
* Support for protocols such as RFC 2821 SMTP and RFC 2033 LMTP email message transport
* Support for multiple message tagging
* Support for PGP/MIME (RFC 2015)
* Support for list-reply
* Lets each user manage their own mail lists
* Control of message headers during composition
* Support for address groups
* Prevention of mailing list loops
* Junk mail control
|
45,734,960
|
I want to write my own small mailserver application in Python with **aiosmtpd**
a) for educational purposes to better understand mailservers
b) to realize my own features
So my question is, what is missing (besides aiosmtpd) for a **Mail-Transfer-Agent** that can send and receive emails to/from other full MTAs (gmail.com, yahoo.com ...)?
I'm guessing:
1.) Of course a ***domain*** and static ip
2.) Valid ***certificate*** for this domain
...should be doable with Let's Encrypt
3.) ***Encryption***
...should be doable with SSL/Context/Starttls... with aiosmtpd itself
4.) ***Resolving*** MX DNS entries for outgoing emails!?
...should be doable with python library dnspython
5.) ***Error*** handling for SMTP communication errors, error replies from other MTAs, bouncing!?
6.) ***Queue*** for handling inbound and pending outbound emails!?
Are there any other ***"essential"*** features missing?
Of course I know there are a lot more "advanced" features for a mailserver, like spam checking, malware checking, certificate validation, blacklisting, rules, mailboxes and more...
Thanks for all hints!
---
EDIT:
Let me clarify what is in my mind:
I want to write a mailserver for a club. Its main purpose will be a mailing-list-server. There will be different lists for different groups of the club.
Let's say my domain is *myclub.org*; then there will be, for example, *youth@myclub.org*, *trainer@myclub.org* and so on.
**Only members** will be allowed to use this mailserver, and only the members will receive emails from this mailserver. No one else will be allowed to send emails to this mailserver, nor will anyone else receive emails from it. The members' email addresses and their group(s) are stored in a database.
In the future I want to integrate some other useful features, for example:
* Auto-reminders
* A chatbot, where members can control services and request information by email
What i don't need:
* User Mailboxes
* POP/IMAP access
* Webinterface
**Open relay issue**:
* I want to reject any [FROM] email address that is not in the members database during SMTP negotiation.
* I want to check the sending mailservers for a valid certificate.
* The number of emails/member/day will be limited.
* I'm not sure if I really need spam detection for the incoming emails?
**Losing emails issue**:
I think i will need a "lightweight" retry mechanism. However if an outgoing email can't be delivered after some retries, it will be dropped and only the administrator will be notified, not the sender. The members should not be bothered by email delivery issues. Is there any **Python Library** that can **generate RFC3464 compliant** error reply emails?
**Reboot issue**:
I'm not sure if I really need persistent storage for emails that are not yet sent? In my use case, all the outgoing emails should usually be delivered within a few seconds (if no delivery problem occurs). Before a (planned) reboot I can check for an empty send queue.
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45734960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6002171/"
] |
aiosmtpd is an excellent tool for writing custom routing and header rewriting rules for email. However, aiosmtpd is not an MTA, since it does not do message queuing or DSN generation. One popular choice of MTA is postfix, and since postfix can be configured to relay all emails for a domain to another local SMTP server (such as aiosmtpd), a natural choice is to use postfix as the internet-facing frontend and aiosmtpd as the business-logic backend.
Advantages of using postfix as the middle-man instead of letting aiosmtpd face the public internet:
* No need to handle DNS MX lookups in aiosmtpd -- just relay through postfix (localhost:25)
* No worry about non-compliant SMTP clients in aiosmtpd
* No worry about STARTTLS in aiosmtpd -- configure this in postfix instead (simpler and more battle-hardened)
* No worry about retrying failed email deliveries and sending delivery status notifications
* aiosmtpd can be configured to respond with "transient failure" (SMTP 4xx code) upon programming errors, so no email is lost as long as the programming error is fixed within 4 days
---
Here's how you might configure postfix to work with a local SMTP server powered by e.g. aiosmtpd.
We're going to run postfix on port 25 and aiosmtpd on port 20381.
To specify that postfix should relay emails for `example.com` to an SMTP server running on port 20381, add the following to `/etc/postfix/main.cf`:
```
transport_maps = hash:/etc/postfix/smtp_transport
relay_domains = example.com
```
And create `/etc/postfix/smtp_transport` with the contents:
```
# Table of special transport method for domains in
# virtual_mailbox_domains. See postmap(5), virtual(5) and
# transport(5).
#
# Remember to run
# postmap /etc/postfix/smtp_transport
# and update relay_domains in main.cf after changing this file!
example.com smtp:127.0.0.1:20381
```
Run `postmap /etc/postfix/smtp_transport` after creating that file (and every time you modify it).
---
On the aiosmtpd side, there are a few things to consider.
The most important is how you handle bounce emails. The short story is that you should set the envelope sender to an email address you control that is dedicated to receiving bounces, e.g. `bounce@example.com`. When email arrives at this address, it should be stored somewhere so you can process bounces, e.g. by removing member email addresses from your database.
Another important thing to consider is how you tell your members' email providers that you are doing mailing list forwarding. You might want to add the following headers when forwarding emails to `GROUP@example.com`:
```
Sender: bounce@example.com
List-Name: GROUP
List-Id: GROUP.example.com
List-Unsubscribe: <mailto:postmaster@example.com?subject=unsubscribe%20GROUP>
List-Help: <mailto:postmaster@example.com?subject=list-help>
List-Subscribe: <mailto:postmaster@example.com?subject=subscribe%20GROUP>
Precedence: bulk
X-Auto-Response-Suppress: OOF
```
Here, I used `postmaster@example.com` as the recipient for list unsubscribe requests. This should be an address that forwards to the email administrator (that is, you).
Below is a skeleton (untested) that does the above. It stores bounce emails in a directory named `bounces` and forwards emails with a valid From:-header (one that appears in `MEMBERS`) according to the list of groups (in `GROUPS`).
```
import os
import email
import email.utils
import mailbox
import smtplib

import aiosmtpd.controller

LISTEN_HOST = '127.0.0.1'
LISTEN_PORT = 20381
DOMAIN = 'example.com'
BOUNCE_ADDRESS = 'bounce'
POSTMASTER = 'postmaster'
BOUNCE_DIRECTORY = os.path.join(
    os.path.dirname(__file__), 'bounces')


def get_extra_headers(list_name, is_group=True, skip=()):
    list_id = '%s.%s' % (list_name, DOMAIN)
    bounce = '%s@%s' % (BOUNCE_ADDRESS, DOMAIN)
    postmaster = '%s@%s' % (POSTMASTER, DOMAIN)
    unsub = '<mailto:%s?subject=unsubscribe%%20%s>' % (postmaster, list_name)
    help = '<mailto:%s?subject=list-help>' % (postmaster,)
    sub = '<mailto:%s?subject=subscribe%%20%s>' % (postmaster, list_name)
    headers = [
        ('Sender', bounce),
        ('List-Name', list_name),
        ('List-Id', list_id),
        ('List-Unsubscribe', unsub),
        ('List-Help', help),
        ('List-Subscribe', sub),
    ]
    if is_group:
        headers.extend([
            ('Precedence', 'bulk'),
            ('X-Auto-Response-Suppress', 'OOF'),
        ])
    headers = [(k, v) for k, v in headers if k.lower() not in skip]
    return headers


def store_bounce_message(message):
    mbox = mailbox.Maildir(BOUNCE_DIRECTORY)
    mbox.add(message)


MEMBERS = ['foo@example.net', 'bar@example.org',
           'clubadmin@example.org']

GROUPS = {
    'group1': ['foo@example.net', 'bar@example.org'],
    POSTMASTER: ['clubadmin@example.org'],
}


class ClubHandler:
    def validate_sender(self, message):
        from_ = message.get('From')
        if not from_:
            return False
        realname, address = email.utils.parseaddr(from_)
        if address not in MEMBERS:
            return False
        return True

    def translate_recipient(self, local_part):
        try:
            return GROUPS[local_part]
        except KeyError:
            return None

    async def handle_RCPT(self, server, session, envelope, address, rcpt_options):
        local, domain = address.split('@')
        if domain.lower() != DOMAIN:
            return '550 wrong domain'
        if local.lower() == BOUNCE_ADDRESS:
            envelope.is_bounce = True
            return '250 OK'
        translated = self.translate_recipient(local.lower())
        if translated is None:
            return '550 no such user'
        envelope.rcpt_tos.extend(translated)
        return '250 OK'

    async def handle_DATA(self, server, session, envelope):
        if getattr(envelope, 'is_bounce', False):
            if len(envelope.rcpt_tos) > 0:
                return '500 Cannot mix the bounce address with other recipients'
            store_bounce_message(envelope.original_content)
            return '250 OK'
        message = email.message_from_bytes(envelope.original_content)
        if not self.validate_sender(message):
            return '500 I do not know you'
        for header_key, header_value in get_extra_headers('club'):
            message[header_key] = header_value
        bounce = '%s@%s' % (BOUNCE_ADDRESS, DOMAIN)
        with smtplib.SMTP('localhost', 25) as smtp:
            smtp.sendmail(bounce, envelope.rcpt_tos, message.as_bytes())
        return '250 OK'


if __name__ == '__main__':
    controller = aiosmtpd.controller.Controller(
        ClubHandler(), hostname=LISTEN_HOST, port=LISTEN_PORT)
    controller.start()
    print("Controller started")
    try:
        while True:
            input()
    except (EOFError, KeyboardInterrupt):
        controller.stop()
```
|
34,147,515
|
When playing around with the Python interpreter, I stumbled upon this conflicting case regarding the `is` operator:
If the evaluation takes place inside the function, it returns `True`; if it is done outside, it returns `False`.
```
>>> def func():
...     a = 1000
...     b = 1000
...     return a is b
...
>>> a = 1000
>>> b = 1000
>>> a is b, func()
(False, True)
```
Since the `is` operator compares the `id()`s of the objects involved, this means that `a` and `b` point to the same `int` instance when declared inside of function `func` but, on the contrary, point to different objects when outside of it.
Why is this so?
---
**Note**: I am aware of the difference between identity (`is`) and equality (`==`) operations as described in [Understanding Python's "is" operator](https://stackoverflow.com/questions/13650293/understanding-pythons-is-operator). In addition, I'm also aware about the caching that is being performed by python for the integers in range `[-5, 256]` as described in ["is" operator behaves unexpectedly with integers](https://stackoverflow.com/questions/306313/pythons-is-operator-behaves-unexpectedly-with-integers).
This **isn't the case here** since the numbers are outside that range and **I do** want to evaluate identity and **not** equality.
|
2015/12/08
|
[
"https://Stackoverflow.com/questions/34147515",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4952130/"
] |
tl;dr:
------
As the [reference manual](https://docs.python.org/3.5/reference/executionmodel.html#structure-of-a-program) states:
>
> A block is a piece of Python program text that is executed as a unit.
> The following are blocks: a module, a function body, and a class definition.
> **Each command typed interactively is a block.**
>
>
>
This is why, in the case of a function, you have a **single** code block which contains a **single** object for the numeric literal
`1000`, so `id(a) == id(b)` will yield `True`.
In the second case, you have **two distinct code objects** each with their own different object for the literal `1000` so `id(a) != id(b)`.
Take note that this behavior doesn't manifest with `int` literals only, you'll get similar results with, for example, `float` literals (see [here](https://stackoverflow.com/questions/38834770/is-operator-gives-unexpected-results-on-floats/38835101#38835101)).
Of course, comparing objects (except for explicit `is None` tests) should always be done with the equality operator `==` and *not* `is`.
*Everything stated here applies to the most popular implementation of Python, CPython. Other implementations might differ so no assumptions should be made when using them.*
---
Longer Answer:
--------------
To get a little clearer view and additionally verify this *seemingly odd* behaviour we can look directly in the [`code`](https://docs.python.org/3.5/reference/datamodel.html#the-standard-type-hierarchy) objects for each of these cases using the [`dis`](https://docs.python.org/3.5/library/dis.html) module.
**For the function `func`**:
Along with all other attributes, function objects also have a `__code__` attribute that allows you to peek into the compiled bytecode for that function. Using [`dis.code_info`](https://docs.python.org/3.5/library/dis.html#dis.code_info) we can get a nice pretty view of all stored attributes in a code object for a given function:
```
>>> print(dis.code_info(func))
Name: func
Filename: <stdin>
Argument count: 0
Kw-only arguments: 0
Number of locals: 2
Stack size: 2
Flags: OPTIMIZED, NEWLOCALS, NOFREE
Constants:
0: None
1: 1000
Variable names:
0: a
1: b
```
We're only interested in the `Constants` entry for function `func`. In it, we can see that we have two values, `None` (always present) and `1000`. We only have a **single** int instance that represents the constant `1000`. This is the value that `a` and `b` are going to be assigned to when the function is invoked.
Accessing this value is easy via `func.__code__.co_consts[1]` and so, another way to view our `a is b` evaluation in the function would be like so:
```
>>> id(func.__code__.co_consts[1]) == id(func.__code__.co_consts[1])
```
Which, of course, will evaluate to `True` because we're referring to the same object.
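We can verify this directly in a plain script (CPython; `count` is used so we don't have to assume which index the constant lands at):

```python
def func():
    a = 1000
    b = 1000
    return a is b

# The compiler stored the literal 1000 only once for this code block,
# so both assignments reuse the same int object.
print(func.__code__.co_consts.count(1000))  # 1
print(func())                               # True
```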
**For each interactive command:**
As noted previously, each interactive command is interpreted as a single code block: parsed, compiled and evaluated independently.
We can get the code objects for each command via the [`compile`](https://docs.python.org/3.5/library/functions.html#compile) built-in:
```
>>> com1 = compile("a=1000", filename="", mode="single")
>>> com2 = compile("b=1000", filename="", mode="single")
```
For each assignment statement, we will get a similar looking code object which looks like the following:
```
>>> print(dis.code_info(com1))
Name: <module>
Filename:
Argument count: 0
Kw-only arguments: 0
Number of locals: 0
Stack size: 1
Flags: NOFREE
Constants:
0: 1000
1: None
Names:
0: a
```
The same command for `com2` looks the same but *has a fundamental difference*: each of the code objects `com1` and `com2` have different int instances representing the literal `1000`. This is why, in this case, when we do `a is b` via the `co_consts` argument, we actually get:
```
>>> id(com1.co_consts[0]) == id(com2.co_consts[0])
False
```
Which agrees with what we actually got.
*Different code objects, different contents.*
---
**Note:** I was somewhat curious as to how exactly this happens in the source code, and after digging through it I believe I finally found it.
During the compilation phase, the [**`co_consts`**](https://github.com/python/cpython/blob/master/Python/compile.c#L595) attribute is represented by a dictionary object. In [`compile.c`](https://github.com/python/cpython/blob/master/Python/compile.c) we can actually see the initialization:
```
/* snippet for brevity */
u->u_lineno = 0;
u->u_col_offset = 0;
u->u_lineno_set = 0;
u->u_consts = PyDict_New();
/* snippet for brevity */
```
During compilation this is checked for already existing constants. See [@Raymond Hettinger's answer below](https://stackoverflow.com/a/39325641/4952130) for a bit more on this.
---
### Caveats:
* Chained statements will evaluate to an identity check of `True`
It should be more clear now why exactly the following evaluates to `True`:
```
>>> a = 1000; b = 1000;
>>> a is b
```
In this case, by chaining the two assignment commands together we tell the interpreter to compile these **together**. As in the case for the function object, only one object for the literal `1000` will be created resulting in a `True` value when evaluated.
* Execution on a module level yields `True` again:
As previously mentioned, the reference manual states that:
>
> ... The following are blocks: **a module** ...
>
>
>
So the same premise applies: we will have a single code object (for the module) and so, as a result, single values stored for each different literal.
* The same **doesn't** apply for **mutable** objects:
Meaning that unless we explicitly initialize to the same mutable object (for example with `a = b = []`), the identity of the objects will never be equal, for example:
```
a = []; b = []
a is b # always evaluates to False
```
Again, in [the documentation](https://docs.python.org/3/reference/datamodel.html#objects-values-and-types), this is specified:
>
> after a = 1; b = 1, a and b may or may not refer to the same object with the value one, depending on the implementation, but after c = []; d = [], c and d are guaranteed to refer to two different, unique, newly created empty lists.
>
>
>
|
At the interactive prompt, each entry is [compiled in *single* mode](https://docs.python.org/3/library/functions.html#compile), which processes one complete statement at a time. The compiler itself (in [Python/compile.c](https://hg.python.org/cpython/file/tip/Python/compile.c)) tracks the constants in a dictionary called [*u\_consts*](https://hg.python.org/cpython/file/tip/Python/compile.c#l114) that maps each constant object to its index.
In the [*compiler\_add\_o()*](https://hg.python.org/cpython/file/tip/Python/compile.c#l1099) function, you see that before adding a new constant (and incrementing the index), the dict is checked to see whether the constant object and index already exist. If so, they are reused.
In short, that means that repeated constants in one statement (such as in your function definition) are folded into one singleton. In contrast, your `a = 1000` and `b = 1000` are two separate statements, so no folding takes place.
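Both behaviours are easy to reproduce with `compile()` (CPython-specific, like everything else here):

```python
# One code block: both 1000 literals are folded into a single constant.
chained = compile("a = 1000; b = 1000", "<demo>", "exec")
print(chained.co_consts.count(1000))        # 1

# Two separately compiled blocks: each carries its own 1000 object.
c1 = compile("a = 1000", "<demo>", "single")
c2 = compile("b = 1000", "<demo>", "single")
print(c1.co_consts[0] is c2.co_consts[0])   # False
```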
FWIW, this is all just a CPython implementation detail (i.e. not guaranteed by the language). This is why the references given here are to the C source code rather than the language specification which makes no guarantees on the subject.
Hope you enjoyed this insight into how CPython works under the hood :-)
|
23,819,117
|
I use python 3.4 and I can format strings in two ways:
```
print("%d %d" %(1, 2))
```
and
```
print("{:d} {:d}".format(1, 2))
```
In the [documentation](https://docs.python.org/2/library/string.html) they show examples only using 'format'. Does it mean that using '%' is not good, or does it not matter which version to use?
|
2014/05/23
|
[
"https://Stackoverflow.com/questions/23819117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654650/"
] |
Quoting from the [official documentation](https://docs.python.org/2/library/stdtypes.html#str.format),
>
> This method of string formatting is the new standard in Python 3, and should be preferred to the % formatting described in String Formatting Operations in new code.
>
>
>
So, `format` is the recommended way going forward.
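For simple cases the two styles produce identical results, so existing `%` code keeps working while you migrate:

```python
old_style = "%d %d" % (1, 2)
new_style = "{:d} {:d}".format(1, 2)
print(old_style == new_style == "1 2")  # True
```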
|
In addition to the recommendation on the official site, the `format()` method is more flexible, powerful and readable than the `%` operator.
For example:
```
>>> '{2}, {1}, {0}'.format(*'abc')
'c, b, a'
>>> coord = {'latitude': '37.24N', 'longitude': '-115.81W'}
>>> 'Coordinates: {latitude}, {longitude}'.format(**coord)
'Coordinates: 37.24N, -115.81W'
>>> "Units destroyed: {players[0]}".format(players = [1, 2, 3])
'Units destroyed: 1'
```
and more, and more, and more... It is hard to do something similar with the `%` operator.
|
61,417,765
|
Disclaimer: I am new to Python and trying to learn. I have a list of dictionaries containing address information that I would like to iterate over and then pass into a function as arguments.
`print(data)`
`[{'firstName': 'John', 'lastName': 'Smith', 'address': '123 Lane', 'country': 'United States', 'state': 'TX', 'city': 'Springfield', 'zip': '12345'}, {'firstName': 'Mary', 'lastName': 'Smith', 'address': '321 Lanet', 'country': 'United States', 'state': 'Washington', 'city': 'Springfield', 'zip': '54321'}]`
I iterate over the list and attempt to pass in the values, but the values are passed as a list instead of individually. I'm not sure how to correct this. I'm still grasping arguments and keyword arguments. Any help and guidance is appreciated.
```
from usps import USPSApi, Address
input_name = [li['lastName'] for li in data]
input_address = [li['address'] for li in data]
input_city = [li['city'] for li in data]
input_state = [li['state'] for li in data]
input_zip = [li['zip'] for li in data]
input_country = [li['country'] for li in data]
address = Address(
name = input_name,
address_1= input_address,
city= input_city,
state=input_state,
zipcode=input_zip
)
usps = USPSApi('------', test=True)
validation = usps.validate_address(address)
data_results = validation.result
print(data_results)
```
|
2020/04/24
|
[
"https://Stackoverflow.com/questions/61417765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3950634/"
] |
You already have all the logic to process a *single* data point, just expand it to *multiple* data points as shown below using a loop.
```
from usps import USPSApi, Address
for item in data:
kwargs = dict()
kwargs['name'] = item['lastName']
kwargs['address_1'] = item['address']
kwargs['city'] = item['city']
kwargs['state'] = item['state']
kwargs['zipcode'] = item['zip']
address = Address(**kwargs)
usps = USPSApi('------', test=True)
validation = usps.validate_address(address)
data_results = validation.result
print(data_results)
```
Without syntactic sugar it becomes
```
for item in data:
name = item['lastName']
address_ = item['address']
city = item['city']
state = item['state']
zip_ = item['zip']
address = Address(
name=name,
address_1=address_,
city=city,
state=state,
zipcode=zip_)
```
|
If the intention is to get the key's values in a variable:
```
d_data = [{'firstName': 'John', 'lastName': 'Smith', 'address': '123 Lane', 'country': 'United States', 'state': 'TX', 'city': 'Springfield', 'zip': '12345'}, {'firstName': 'Mary', 'lastName': 'Smith', 'address': '321 Lanet', 'country': 'United States', 'state': 'Washington', 'city': 'Springfield', 'zip': '54321'}]
input_name = d_data[0]['lastName']
input_address = d_data[0]['address']
input_city = d_data[0]['city']
input_state = d_data[0]['state']
input_zip = d_data[0]['zip']
input_country = d_data[0]['country']
print(input_name)
print(input_address)
print(input_city)
print(input_zip)
print(input_country)
```
**OUTPUT:**
```
Smith
123 Lane
Springfield
12345
United States
```
|
55,239,297
|
I have never done development in a career as a software programmer.
I was given this domain name on NameCheap along with server disk space. I have now designed a Django app and am trying to deploy it on the server, but I ran into the problems stated below:
```
[ E 2019-03-19 06:23:19.7356 598863/T2n age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /home/username/IOT: The application process exited prematurely.
App 644163 output: File "/home/username/IOT/passenger_wsgi.py", line 1, in <module>
App 644163 output: File "/home/username/virtualenv/IOT/3.7/lib64/python3.7/imp.py", line 171, in load_source
```
**edited:** Reading more about the software supporting WSGI, the host is using Phusion Passenger; you can read more here: www.phusionpassenger.com
this is my passenger\_wsgi.py:
```
from myproject.wsgi import application
```
I had tried several tutorials:
1. <https://www.youtube.com/watch?v=ffqMZ5IcmSY&ab_channel=iFastNetLtd.InternetServices>
2. <https://smartlazycoding.com/django-tutorial/deploy-a-django-website-to-a2-hosting>
3. <https://hostpresto.com/community/tutorials/how-to-setup-a-python-django-website-on-hostpresto/>
4. <https://www.helloworldhost.com/knowledgebase/9/Deploy-Django-App-on-cPanel-HelloWorldHost.html>
5. <https://www.helloworldhost.com/knowledgebase/9/Deploy-Django-App-on-cPanel-HelloWorldHost.html>
6. [how to install django on cpanel](https://stackoverflow.com/questions/15625151/how-to-install-django-on-cpanel)
It would be very much appreciated if you could help.
|
2019/03/19
|
[
"https://Stackoverflow.com/questions/55239297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10865416/"
] |
When you set up a Python app in cPanel, you specify the folder where you have set up the app. That folder will contain your passenger\_wsgi.py file. You have to upload your Django project to that same folder. To make sure that you have uploaded to the right directory, do this simple check: your `manage.py` and `passenger_wsgi.py` should be in the same folder. Now edit your `passenger_wsgi.py` and replace everything with the following code:
```
from myapp.wsgi import application
```
After this, do not forget to restart the python app. I have written a step by step guide for deploying django app on shared hosting using cpanel. Check it [here](https://blog.umersoftwares.com/2019/09/django-shared-hosting.html).
|
The answer is very simple: my server is using a program named Passenger; see the official website for more information: <https://www.phusionpassenger.com/>
The error was also simple: **Passenger can't find my application**. All I did was move my project and app folders to the same level as passenger\_wsgi.py, and it works like a charm.
|
62,903,138
|
Here is my attempt:
```
int* globalvar = new int[8];
void cpp_init(){
for (int i = 0; i < 8; i++)
globalvar[i] = 0;
}
void writeAtIndex(int index, int value){
globalvar[index] = value;
}
int accessIndex(int index){
return globalvar[index];
}
BOOST_PYTHON_MODULE(MpUtils){
def("cpp_init", &cpp_init);
def("writeAtIndex", &writeAtIndex);
def("accessIndex", &accessIndex);
}
```
and in the python file
```
def do_stuff(index):
writeAtIndex(index, randint(1, 100))
time.sleep(index/10)
print([accessIndex(i) for i in range(8)])
if __name__ == '__main__':
freeze_support()
processes = []
cpp_init()
for i in range(0, 10):
processes.append( Process( target=do_stuff, args=(i,) ) )
for process in processes:
process.start()
for process in processes:
process.join()
```
And the output is this:
```
[48, 0, 0, 0, 0, 0, 0, 0]
[0, 23, 0, 0, 0, 0, 0, 0]
[0, 0, 88, 0, 0, 0, 0, 0]
[0, 0, 0, 9, 0, 0, 0, 0]
[0, 0, 0, 0, 18, 0, 0, 0]
[0, 0, 0, 0, 0, 59, 0, 0]
[0, 0, 0, 0, 0, 0, 12, 0]
[0, 0, 0, 0, 0, 0, 0, 26]
```
Could someone explain why this is not working? I tried printing `globalvar` and it's always the same value. There shouldn't be any race conditions, as waiting 0.1 to 0.8 seconds should be more than enough time for the computer to write something. Shouldn't C++ be accessing the location of the pointer directly?
Thanks
|
2020/07/14
|
[
"https://Stackoverflow.com/questions/62903138",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11627201/"
] |
Processes can generally access only their own memory space.
You can possibly use the [shared\_memory](https://docs.python.org/3/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory) module of multiprocessing to share the same array across processes. See example in the linked page.
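A minimal sketch of that idea (the function names here are illustrative; `multiprocessing.shared_memory` requires Python 3.8+): each child process attaches to the same block by name and writes one byte into it, so the writes are visible to the parent:

```python
from multiprocessing import Process, shared_memory

def write_slot(name, index, value):
    # attach to the existing block by name and write a single byte
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[index] = value
    shm.close()

def demo():
    shm = shared_memory.SharedMemory(create=True, size=8)
    try:
        procs = [Process(target=write_slot, args=(shm.name, i, i + 1))
                 for i in range(8)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return list(shm.buf)  # every child wrote into the same buffer
    finally:
        shm.close()
        shm.unlink()

if __name__ == '__main__':
    print(demo())
```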
|
The following loop will create a list of processes with consecutive first `args`, from `0` to `9`. (First process has `0`, second one `1` and so on.)
```
for i in range(0, 10):
processes.append( Process( target=do_stuff, args=(i,) ) )
```
The writing may take a list of numbers, but the call to `writeAtIndex` will be called with the first element only:
```
def do_stuff(index):
writeAtIndex(index, randint(1, 100))
```
Function `writeAtIndex` takes one `int`, not a list:
```
void writeAtIndex(int index, int value)
```
That's why your output has a value on each position.
And as said by @Voo and in the [docs](https://docs.python.org/3/library/multiprocessing.html):
>
> The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads.
>
>
>
>
> Process objects represent activity that is run in a separate process.
>
>
>
So there is a `globalvar` for each process.
If you want to share data between threads you can use the [threading](https://docs.python.org/3/library/threading.html#module-threading) module. There are details you should read though:
>
> due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor.
>
>
>
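To illustrate the contrast (a sketch; the variable names are made up): threads within one process do share module-level state, so the threading analogue of the example above sees all the writes:

```python
import threading

results = [0] * 8  # one list, shared by every thread in this process

def write_at(index, value):
    results[index] = value

threads = [threading.Thread(target=write_at, args=(i, i + 1)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [1, 2, 3, 4, 5, 6, 7, 8]
```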
|
33,697,379
|
Now this could be me being **very** stupid here but please see the below code.
I am trying to work out what percentage of my carb goal I've already consumed at the point I run the script. I get the totals and store them in `carbsConsumed` and `carbsGoal`. `carbsPercent` then calculates the percentage consumed. However, `carbsPercent` returns 0 each time. Any thoughts?
```
#!/usr/bin/env python2.7
import myfitnesspal
from datetime import datetime
username = 'someusername'
password = 'somepassword'
date = datetime.now()
client = myfitnesspal.Client(username, password)
day = client.get_date(date.year, date.month, date.day)
#day = client.get_date(2015,11,12)
carbs = 'carbohydrates'
carbsConsumed = day.totals[carbs]
carbsGoal = day.goals[carbs]
carbsPercent = (carbsConsumed / carbsGoal) * 100
print 'Carbs consumed: ' + str(carbsConsumed)
print 'Carbs goal: ' + str(carbsGoal)
print 'Percentage consumed: ' + str(carbsPercent)
```
|
2015/11/13
|
[
"https://Stackoverflow.com/questions/33697379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4461239/"
] |
Try this:
```
carbsPercent = (float(carbsConsumed) / carbsGoal) * 100
```
The problem is that in Python 2.7 the default division mode for integers is integer (floor) division, so 1000/1200 = 0. You force Python to do float division by casting at least one operand *in the division* to a float.
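A quick check with made-up numbers (the real values come from myfitnesspal; the cast makes this behave the same on Python 2 and 3):

```python
carbs_consumed = 150   # placeholder values for illustration
carbs_goal = 200

# without float(), Python 2 would compute 150 / 200 == 0
percent = (float(carbs_consumed) / carbs_goal) * 100
print(percent)  # 75.0
```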
|
For easily portable code, in `python2`, see <https://stackoverflow.com/a/10768737/610569>:
```
from __future__ import division
carbsPercent = (carbsConsumed / carbsGoal) * 100
```
E.g.
```
$ python
>>> from __future__ import division
>>> 6 / 5
1.2
$ python3
>>> 6 / 5
1.2
```
|
46,520,136
|
Is there any way to run a Python function and a Java function in parallel in one app, and get the result of each function for further processing?
|
2017/10/02
|
[
"https://Stackoverflow.com/questions/46520136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6700881/"
] |
There are at least three ways to achieve that.
a) You could use `java.lang.Runtime` (as in `Runtime.getRuntime().exec(...)`) to launch an external process (from Java side), the external process being your Python script.
b) You could do the same as a), just using Python as launcher.
c) You could use some Python-Java binding, and from Java side use a separate `Thread` to run your Python code.
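Option b) can be sketched with Python's standard `subprocess` module (the command below is just a stand-in for whatever `java -jar app.jar` invocation you actually need):

```python
import subprocess

def run_external(cmd):
    # launch the external program without blocking...
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    # ... do other Python work here while the child runs in parallel ...
    out, _ = proc.communicate()  # then collect its result
    return out.strip()

print(run_external(["echo", "42"]))
```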
|
You should probably look at **[Jython](https://wiki.python.org/jython/WhyJython)**, which supports both `Java` and `Python`.
|
40,239,866
|
I'm trying to list the containers under an azure account using the python sdk - why do I get the following?
```
>>> azure.storage.blob.baseblobservice.BaseBlobService(account_name='x', account_key='x').list_containers()
>>> <azure.storage.models.ListGenerator at 0x7f7cf935fa58>
```
Surely the above is a call to the function and not a reference to the function itself.
|
2016/10/25
|
[
"https://Stackoverflow.com/questions/40239866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1693831/"
] |
You get that because, according to the [source code](https://github.com/Azure/azure-storage-python/blob/54e60c8c029f41d2573233a70021c4abf77ce67c/azure-storage-blob/azure/storage/blob/baseblobservice.py#L609), `list_containers` returns `ListGenerator(resp, self._list_containers, (), kwargs)`.
You can access what you want as follows:
python2:
```
>>> from azure.storage.blob.baseblobservice import BaseBlobService
>>> blob_service = BaseBlobService(account_name='x', account_key='x')
>>> containers = blob_service.list_containers()
>>> for c in containers:
print c.name
```
python3
```
>>> from azure.storage.blob.baseblobservice import BaseBlobService
>>> blob_service = BaseBlobService(account_name='x', account_key='x')
>>> containers = blob_service.list_containers()
>>> for c in containers:
print(c.name)
```
|
for python 3 and more recent distribution of `azure` libraries, you can do:
```
from azure.storage.blob import BlockBlobService
block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
containers = block_blob_service.list_containers()
for c in containers:
print(c.name)
```
|
40,239,866
|
I'm trying to list the containers under an azure account using the python sdk - why do I get the following?
```
>>> azure.storage.blob.baseblobservice.BaseBlobService(account_name='x', account_key='x').list_containers()
>>> <azure.storage.models.ListGenerator at 0x7f7cf935fa58>
```
Surely the above is a call to the function and not a reference to the function itself.
|
2016/10/25
|
[
"https://Stackoverflow.com/questions/40239866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1693831/"
] |
You get that because, according to the [source code](https://github.com/Azure/azure-storage-python/blob/54e60c8c029f41d2573233a70021c4abf77ce67c/azure-storage-blob/azure/storage/blob/baseblobservice.py#L609), `list_containers` returns `ListGenerator(resp, self._list_containers, (), kwargs)`.
You can access what you want as follows:
python2:
```
>>> from azure.storage.blob.baseblobservice import BaseBlobService
>>> blob_service = BaseBlobService(account_name='x', account_key='x')
>>> containers = blob_service.list_containers()
>>> for c in containers:
print c.name
```
python3
```
>>> from azure.storage.blob.baseblobservice import BaseBlobService
>>> blob_service = BaseBlobService(account_name='x', account_key='x')
>>> containers = blob_service.list_containers()
>>> for c in containers:
print(c.name)
```
|
According to **[source code](https://github.com/Azure/azure-storage-python/blob/54e60c8c029f41d2573233a70021c4abf77ce67c/azure-storage-blob/azure/storage/blob/baseblobservice.py#L573)**
You can access the containers in the storage account by using the snippet below:
```
from azure.storage.blob.baseblobservice import BaseBlobService
blob_service = BaseBlobService(account_name='<your account name>', account_key='<your account key>')
containers = blob_service.list_containers()
for container in containers:
print ("Container name: {}".format(container.name))
```
For more info on setting up the account name and account key, visit this [link](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python#copy-your-credentials-from-the-azure-portal).
|
40,239,866
|
I'm trying to list the containers under an azure account using the python sdk - why do I get the following?
```
>>> azure.storage.blob.baseblobservice.BaseBlobService(account_name='x', account_key='x').list_containers()
>>> <azure.storage.models.ListGenerator at 0x7f7cf935fa58>
```
Surely the above is a call to the function and not a reference to the function itself.
|
2016/10/25
|
[
"https://Stackoverflow.com/questions/40239866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1693831/"
] |
for python 3 and more recent distribution of `azure` libraries, you can do:
```
from azure.storage.blob import BlockBlobService
block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
containers = block_blob_service.list_containers()
for c in containers:
print(c.name)
```
|
According to **[source code](https://github.com/Azure/azure-storage-python/blob/54e60c8c029f41d2573233a70021c4abf77ce67c/azure-storage-blob/azure/storage/blob/baseblobservice.py#L573)**
You can access the containers in the storage account by using the snippet below:
```
from azure.storage.blob.baseblobservice import BaseBlobService
blob_service = BaseBlobService(account_name='<your account name>', account_key='<your account key>')
containers = blob_service.list_containers()
for container in containers:
print ("Container name: {}".format(container.name))
```
For more info on setting up the account name and account key, visit this [link](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python#copy-your-credentials-from-the-azure-portal).
|
67,401,326
|
I have a JSON file as below. Using Python, I want to fetch the record where source\_name = 'Abc'. How can I achieve it?
---json file: file.json
```
[{
"source_name" :"Abc",
"target_table" : "TABLE1",
"schema": "col1 string, col2 string"
},
{
"source_name" :"xyz",
"target_table" : "TABLE2",
"schema": "col3 string, col4 string"
}
]
```
final o/p ---
```
{
"source_name" :"Abc",
"target_table" : "TABLE1",
"schema": "col1 string, col2 string"
}
```
|
2021/05/05
|
[
"https://Stackoverflow.com/questions/67401326",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4050708/"
] |
You can use list comprehensions, which is considered to be more pythonic since the initial data is a list of dictionaries.
```
import json

with open('file.json') as json_file:
    data = json.load(json_file)

# using a list comprehension
result = [element for element in data if element['source_name'] == "Abc"]

# without a list comprehension
for element in data:
    if element['source_name'] == "Abc":
        print(element)
```
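If only the first matching record is needed, `next` with a generator expression stops at the first hit (the sample data is inlined here for illustration):

```python
data = [
    {"source_name": "Abc", "target_table": "TABLE1", "schema": "col1 string, col2 string"},
    {"source_name": "xyz", "target_table": "TABLE2", "schema": "col3 string, col4 string"},
]

# returns the first matching dict, or None when nothing matches
match = next((e for e in data if e["source_name"] == "Abc"), None)
print(match)
```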
|
Try this.
```
data = [{
"source_name" :"Abc",
"target_table" : "TABLE1",
"schema": "col1 string, col2 string"
},
{
"source_name" :"xyz",
"target_table" : "TABLE2",
"schema": "col3 string, col4 string"
}
]
l = len(data)
n = 0
while n < l:
    if data[n]["source_name"] == "Abc":
        print(data[n])
    n += 1
```
final output
```
{
'source_name': 'Abc',
'target_table': 'TABLE1',
'schema': 'col1 string, col2 string'
}
```
|
4,128,462
|
I just started working in an environment that uses multiple programming languages, doesn't have source control and doesn't have automated deploys.
I am interested in recommending VSS for source control and using CruiseControl.net for automated deploys but I have only used CC.NET with ASP.NET applications. Is it possible to use CC.NET to deploy python, php, ASP.NET and ??? apps all from the same instance?
|
2010/11/08
|
[
"https://Stackoverflow.com/questions/4128462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/226897/"
] |
Yes you can.
CC.NET is written in .NET but can handle any project. Your project language does not matter; you can still use Batch, PowerShell, NAnt or MSBuild scripts. You may also use CruiseControl or Hudson, as you like.
As for the source control provider, I would prefer svn (or even git), but that's more a matter of habit: from my point of view VSS is too tied to VS, and I don't like the lock-on-checkout-by-default behaviour.
|
VSS is [unsafe for any source](http://www.developsense.com/testing/VSSDefects.html) and is damn near useless outside visual studio. And cruise control is painful to learn and make work at best. Your heart is in the right place, but you probably want slightly different technical solutions. For SCM, you probably want either subversion (cf [VisualSVN Server](http://www.visualsvn.com/server/)) or Mercurial ([on IIS7](http://www.jeremyskinner.co.uk/mercurial-on-iis7/)). For continuious integration, I'd look at TeamCity first or Hudson second. Either of which is vastly superior to CCNet in terms of ease of use.
|
39,615,436
|
When using the LDA model, I get different topics each time, and I want to reproduce the same set. I have searched for similar questions on Google, such as [this](https://groups.google.com/forum/#!topic/gensim/s1EiOUsqT8s).
I fixed the seed as shown in the article with `numpy.random.seed(1000)`, but it doesn't work. I read `ldamodel.py` and found the code below:
```
def get_random_state(seed):
"""
Turn seed into a np.random.RandomState instance.
Method originally from maciejkula/glove-python, and written by @joshloyal
"""
if seed is None or seed is numpy.random:
return numpy.random.mtrand._rand
if isinstance(seed, (numbers.Integral, numpy.integer)):
return numpy.random.RandomState(seed)
if isinstance(seed, numpy.random.RandomState):
return seed
raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
' instance' % seed)
```
So I use the code:
```
lda = models.LdaModel(
corpus_tfidf,
id2word=dic,
num_topics=2,
random_state=numpy.random.RandomState(10)
)
```
But it's still not working.
|
2016/09/21
|
[
"https://Stackoverflow.com/questions/39615436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6858032/"
] |
First: there is no problem.
For international sites Unicode is often the preferred choice, as it can combine all possible scripts on one page (if the fonts can display them).
For a Chinese-only site, say combined with plain ASCII, you can display the page in a Chinese encoding.
Now the XML can be in UTF-8, a Unicode encoding that uses multiple bytes per Chinese character. For Chinese another encoding might be more compact, but the ASCII tags in XML and HTML make UTF-8 still sufficiently efficient.
Java internally holds Strings as chars in Unicode (UTF-16 actually), so there is no problem too.
Only the output encoding of the JSP needs a choice. The JSP itself can also be in UTF-8 with hard-coded Chinese strings, and still specify another encoding.
The following would be for Mainland China, resp. Taiwan/Hong Kong/Macau.
```
<%@ page contentType="text/html; charset=GB18030" %>
<%@ page contentType="text/html; charset=Big5" %>
```
Check that the installed java has the charset.
|
Use dom4j to re-read and re-save the XML file:
```
org.dom4j.io.SAXReader reader=new SAXReader();
org.dom4j.Document doc=reader.read(new File(yourFilePath));
org.dom4j.io.OutputFormat format=new OutputFormat();
format.setEncoding("utf-8");
org.dom4j.io.XMLWriter writer=new XMLWriter(new FileOutputStream(path),format);
writer.write(doc);
writer.close();
```
|
12,201,498
|
I am a beginner Python learner. I am trying to create a basic dictionary quiz where a random meaning of a word comes up and the user has to input the correct word. I used the following method, but the randomness doesn't work: I always get the first word first, and once the last word finishes I get an infinite stream of 'None' until I kill it. Using Python 3.2.
```
from random import choice
print("Welcome , let's get started")
input()
def word():
print('Humiliate')
a = input(':')
while a == 'abasement':
break
else:
word()
# --------------------------------------------------------- #
def word1():
print('Swelling')
a = input(':')
while a == 'billowing':
break
else:
word()
# ------------------------------------------------------------ #
wooo = [word(),word1()]
while 1==1:
print(choice(wooo))
```
Is there any faster way of doing this that gets real randomness? I tried classes but they seem harder than this. Also, is there any way I can make Python not care about whether the input is a capital letter or not?
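A common pattern for this kind of quiz (a sketch, with made-up helper names) is to keep the word/meaning pairs in one dict, draw with `random.choice`, and normalise case with `.lower()`:

```python
import random

WORDS = {'abasement': 'Humiliate', 'billowing': 'Swelling'}

def pick():
    # random.choice needs a sequence, so materialise the dict items
    return random.choice(list(WORDS.items()))

def check(word, guess):
    # .strip().lower() makes the comparison case- and whitespace-insensitive
    return guess.strip().lower() == word

word, meaning = pick()
print(meaning)                    # the clue shown to the player
print(check(word, word.upper()))  # True — case no longer matters
```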
|
2012/08/30
|
[
"https://Stackoverflow.com/questions/12201498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1570162/"
] |
```
#!/usr/bin/env perl
use strict;
use warnings;
my $data = <<END;
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 152.225.244.1
DHCP Server . . . . . . . . . . . : 10.204.40.57
DNS Servers . . . . . . . . . . . : 10.204.127.11
10.207.2.50
10.200.10.6
Primary WINS Server . . . . . . . : 10.207.40.145
Secondary WINS Server . . . . . . : 10.232.40.38
Lease Obtained. . . . . . . . . . : Tuesday, August 28, 2012 6:45:12 AM
Lease Expires . . . . . . . . . . : Sunday, September 02, 2012 6:45:12 A
END
my @ips = ();
if ($data =~ /^DNS Servers[\s\.:]+((\d{2}\.\d{3}\.\d{1,3}\.\d{1,3}\s*)+)/m) {
@ips = split(/\s+/, $1);
print "$_\n" foreach(@ips);
}
```
|
I would use [`unpack`](http://perldoc.perl.org/functions/unpack.html) instead of regular expressions for parsing column-based data:
```
#!/usr/bin/env perl
use strict;
use warnings;
while (<DATA>) {
my ($ip) = unpack 'x36 A*';
print "$ip\n";
}
__DATA__
DNS Servers . . . . . . . . . . . : 10.204.127.11
10.207.2.50
10.200.10.6
Primary WINS Server . . . . . . . : 10.207.40.145
Secondary WINS Server . . . . . . : 10.232.40.38
```
You may have to adjust the number `36` to the actual number of characters that should be skipped.
|
12,201,498
|
I am a beginner Python learner. I am trying to create a basic dictionary quiz where a random meaning of a word comes up and the user has to input the correct word. I used the following method, but the randomness doesn't work: I always get the first word first, and once the last word finishes I get an infinite stream of 'None' until I kill it. Using Python 3.2.
```
from random import choice
print("Welcome , let's get started")
input()
def word():
print('Humiliate')
a = input(':')
while a == 'abasement':
break
else:
word()
# --------------------------------------------------------- #
def word1():
print('Swelling')
a = input(':')
while a == 'billowing':
break
else:
word()
# ------------------------------------------------------------ #
wooo = [word(),word1()]
while 1==1:
print(choice(wooo))
```
Is there any faster way of doing this that gets real randomness? I tried classes but they seem harder than this. Also, is there any way I can make Python not care about whether the input is a capital letter or not?
|
2012/08/30
|
[
"https://Stackoverflow.com/questions/12201498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1570162/"
] |
```
#!/usr/bin/env perl
use strict;
use warnings;
my $data = <<END;
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 152.225.244.1
DHCP Server . . . . . . . . . . . : 10.204.40.57
DNS Servers . . . . . . . . . . . : 10.204.127.11
10.207.2.50
10.200.10.6
Primary WINS Server . . . . . . . : 10.207.40.145
Secondary WINS Server . . . . . . : 10.232.40.38
Lease Obtained. . . . . . . . . . : Tuesday, August 28, 2012 6:45:12 AM
Lease Expires . . . . . . . . . . : Sunday, September 02, 2012 6:45:12 A
END
my @ips = ();
if ($data =~ /^DNS Servers[\s\.:]+((\d{2}\.\d{3}\.\d{1,3}\.\d{1,3}\s*)+)/m) {
@ips = split(/\s+/, $1);
print "$_\n" foreach(@ips);
}
```
|
Match
```
DNS.+?:(\s*([\d.]+).)+
```
and pull out the groups. This assumes you have the entire multi-line string in one blob, and that the extracted text may contain newlines and other whitespace.
The last dot is to match the newline; you need to use the `/m` option.
|
12,201,498
|
I am a beginner Python learner. I am trying to create a basic dictionary quiz where a random meaning of a word comes up and the user has to input the correct word. I used the following method, but the randomness doesn't work: I always get the first word first, and once the last word finishes I get an infinite stream of 'None' until I kill it. Using Python 3.2.
```
from random import choice
print("Welcome , let's get started")
input()
def word():
print('Humiliate')
a = input(':')
while a == 'abasement':
break
else:
word()
# --------------------------------------------------------- #
def word1():
print('Swelling')
a = input(':')
while a == 'billowing':
break
else:
word()
# ------------------------------------------------------------ #
wooo = [word(),word1()]
while 1==1:
print(choice(wooo))
```
Is there any faster way of doing this that gets real randomness? I tried classes but they seem harder than this. Also, is there any way I can make Python not care about whether the input is a capital letter or not?
|
2012/08/30
|
[
"https://Stackoverflow.com/questions/12201498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1570162/"
] |
```
#!/usr/bin/env perl
use strict;
use warnings;
my $data = <<END;
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 152.225.244.1
DHCP Server . . . . . . . . . . . : 10.204.40.57
DNS Servers . . . . . . . . . . . : 10.204.127.11
10.207.2.50
10.200.10.6
Primary WINS Server . . . . . . . : 10.207.40.145
Secondary WINS Server . . . . . . : 10.232.40.38
Lease Obtained. . . . . . . . . . : Tuesday, August 28, 2012 6:45:12 AM
Lease Expires . . . . . . . . . . : Sunday, September 02, 2012 6:45:12 A
END
my @ips = ();
if ($data =~ /^DNS Servers[\s\.:]+((\d{2}\.\d{3}\.\d{1,3}\.\d{1,3}\s*)+)/m) {
@ips = split(/\s+/, $1);
print "$_\n" foreach(@ips);
}
```
|
Personally, I'd go in a different direction. Instead of manually parsing the output of ipconfig, I'd use the Win32::IPConfig module.
[Win32::IPConfig - IP Configuration Settings for Windows NT/2000/XP/2003](http://search.cpan.org/~jmacfarla/Win32-IPConfig-0.10/lib/Win32/IPConfig.pm)
```
use Win32::IPConfig;
use Data::Dumper;
my $host = shift || "127.0.0.1";
my $ipconfig = Win32::IPConfig->new($host);
my @searchlist = $ipconfig->get_searchlist;
print Dumper \@searchlist;
```
|
12,201,498
|
I am a beginner Python learner. I am trying to create a basic dictionary quiz where a random meaning of a word comes up and the user has to input the correct word. I used the following method, but the randomness doesn't work: I always get the first word first, and once the last word finishes I get an infinite stream of 'None' until I kill it. Using Python 3.2.
```
from random import choice
print("Welcome , let's get started")
input()
def word():
print('Humiliate')
a = input(':')
while a == 'abasement':
break
else:
word()
# --------------------------------------------------------- #
def word1():
print('Swelling')
a = input(':')
while a == 'billowing':
break
else:
word()
# ------------------------------------------------------------ #
wooo = [word(),word1()]
while 1==1:
print(choice(wooo))
```
Is there any faster way of doing this that gets real randomness? I tried classes but they seem harder than this. Also, is there any way I can make Python not care about whether the input is a capital letter or not?
|
2012/08/30
|
[
"https://Stackoverflow.com/questions/12201498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1570162/"
] |
```
#!/usr/bin/env perl
use strict;
use warnings;
my $data = <<END;
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 152.225.244.1
DHCP Server . . . . . . . . . . . : 10.204.40.57
DNS Servers . . . . . . . . . . . : 10.204.127.11
10.207.2.50
10.200.10.6
Primary WINS Server . . . . . . . : 10.207.40.145
Secondary WINS Server . . . . . . : 10.232.40.38
Lease Obtained. . . . . . . . . . : Tuesday, August 28, 2012 6:45:12 AM
Lease Expires . . . . . . . . . . : Sunday, September 02, 2012 6:45:12 A
END
my @ips = ();
if ($data =~ /^DNS Servers[\s\.:]+((\d{2}\.\d{3}\.\d{1,3}\.\d{1,3}\s*)+)/m) {
@ips = split(/\s+/, $1);
print "$_\n" foreach(@ips);
}
```
|
Match against this regex (see [**in action**](http://regexr.com?320ji)):
```
DNS Servers.*:\s*(.*(?:[\n\r]+\s+.*(?:[\n\r]+\s+.*)?)?)
```
The first capture group will be your three IPs (at most three), as you requested. You will surely need to trim whitespace.
**Edit:** Regex fixed to match at most three IPs. If there are fewer IPs, it matches only those.
|
58,801,994
|
I have images with borders like the one below. Can I use OpenCV or Python to remove borders like this from images?
I used the following code to crop, but it didn't work.
```
copy = Image.fromarray(img_final_bin)
try:
bg = Image.new(copy.mode, copy.size, copy.getpixel((0, 0)))
except:
return None
diff = ImageChops.difference(copy, bg)
diff = ImageChops.add(diff, diff, 2.0, -100)
bbox = diff.getbbox()
if bbox:
return np.array(copy.crop(bbox))
```
|
2019/11/11
|
[
"https://Stackoverflow.com/questions/58801994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9186998/"
] |
Here's an approach using thresholding + contour filtering. The idea is to threshold to obtain a binary image. From here we find contours and filter using a maximum area threshold. We draw all contours that pass this filter onto a blank mask then perform bitwise operations to remove the border. Here's the result with the removed border
[](https://i.stack.imgur.com/ThU1A.png)
```
import cv2
import numpy as np
image = cv2.imread('1.png')
mask = np.zeros(image.shape, dtype=np.uint8)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
area = cv2.contourArea(c)
if area < 10000:
cv2.drawContours(mask, [c], -1, (255,255,255), -1)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(image,image,mask=mask)
result[mask==0] = (255,255,255)
cv2.imwrite('result.png', result)
cv2.waitKey()
```
|
You could also use Werk24's API for reading Technical Drawings: [www.werk24.io](https://werk24.io)
```py
from werk24 import W24TechreadClient, W24AskCanvasThumbnail
async with W24TechreadClient.make_from_env() as session:
response = await session.read_drawing(document_bytes,[W24AskCanvasThumbnail()])
```
It also allows you to extract the measures etc. See:
[Fully read](https://i.stack.imgur.com/ny2Gi.jpg)
I work at Werk24.
|
70,972,961
|
I have this list of objects which have p1, and p2 parameter (and some other stuff).
```
Job("J1", p1=8, p2=3)
Job("J2", p1=7, p2=11)
Job("J3", p1=5, p2=12)
Job("J4", p1=10, p2=5)
...
Job("J222",p1=22,p2=3)
```
With python, I can easily get max value of p2 by
```
(max(job.p2 for job in instance.jobs))
```
but how do I get the value of p1, where p2 is the biggest?
Seems simple, but I can't get it anyway... Could you guys help me?
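A sketch of what seems to be wanted (the `Job` class here is a minimal stand-in with just the fields from the question): `max` with a `key` returns the whole object, so you can read any attribute off it.

```python
class Job:
    def __init__(self, name, p1, p2):
        self.name, self.p1, self.p2 = name, p1, p2

jobs = [Job("J1", p1=8, p2=3), Job("J2", p1=7, p2=11),
        Job("J3", p1=5, p2=12), Job("J4", p1=10, p2=5)]

# max by p2 gives back the Job itself, not the p2 value
best = max(jobs, key=lambda job: job.p2)
print(best.p1)  # → 5  (J3 has the largest p2)
```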
|
2022/02/03
|
[
"https://Stackoverflow.com/questions/70972961",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18110528/"
] |
You can do it like this:
```html
<style>
.input-wrapper {
display: flex;
position: relative;
}
.input-wrapper .icon {
position: absolute;
top: 50%;
transform: translateY(-50%);
padding: 0 10px;
}
.input-wrapper input {
padding: 0 0 0 25px;
height: 50px;
}
</style>
<div class="input-wrapper">
<i class="icon">2</i>
<input type="text" class="form-control ps-5" placeholder="Name" name="s" required="">
</div>
```
|
There are two styling options you can use: block and flex.
Block will not be responsive while flex will be. I hope this solves your issue.
|
70,777,985
|
I'm trying to create a pandas multiIndexed dataframe that is a summary of the unique values in each column.
Is there an easier way to have this information summarized besides creating this dataframe?
Either way, it would be nice to know how to complete this code challenge. Thanks for your help! Here is the toy dataframe and the solution I attempted using a for loop with a dictionary and a value\_counts dataframe. Not sure if it's possible to incorporate MultiIndex.from\_frame or .from\_product here somehow...
Original Dataframe:
```
data = pd.DataFrame({'A': ['case', 'case', 'case', 'case', 'case'],
'B': [2001, 2002, 2003, 2004, 2005],
'C': ['F', 'M', 'F', 'F', 'M'],
'D': [0, 0, 0, 1, 0],
'E': [1, 0, 1, 0, 1],
'F': [1, 1, 0, 0, 0]})
A B C D E F
0 case 2001 F 0 1 1
1 case 2002 M 0 0 1
2 case 2003 F 0 1 0
3 case 2004 F 1 0 0
4 case 2005 M 1 1 0
```
Desired outcome:
```
unique percent
A case 100
B 2001 20
2002 20
2003 20
2004 20
2005 20
C F 60
M 40
D 0 80
1 20
E 0 40
1 60
F 0 60
1 40
```
My failed for loop attempt:
```
def unique_values(df):
values = {}
columns = []
df = pd.DataFrame(values, columns=columns)
for col in data:
df2 = data[col].value_counts(normalize=True)*100
values = values.update(df2.to_dict)
columns = columns.append(col*len(df2))
return df
unique_values(data)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-84-a341284fb859> in <module>
11
12
---> 13 unique_values(data)
<ipython-input-84-a341284fb859> in unique_values(df)
5 for col in data:
6 df2 = data[col].value_counts(normalize=True)*100
----> 7 values = values.update(df2.to_dict)
8 columns = columns.append(col*len(df2))
9 return df
TypeError: 'method' object is not iterable
```
Let me know if there's something obvious I'm missing! Still relatively new to EDA and pandas, any pointers appreciated.
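One hedged sketch of a possible approach (not necessarily the best one): feeding a dict of `value_counts` Series to `pd.concat` builds the (column, value) MultiIndex directly, which avoids the manual dict/column bookkeeping attempted above.

```python
import pandas as pd

data = pd.DataFrame({'A': ['case', 'case', 'case', 'case', 'case'],
                     'B': [2001, 2002, 2003, 2004, 2005],
                     'C': ['F', 'M', 'F', 'F', 'M'],
                     'D': [0, 0, 0, 1, 0],
                     'E': [1, 0, 1, 0, 1],
                     'F': [1, 1, 0, 0, 0]})

# one value_counts Series per column; concat stacks them into a MultiIndex
percent = pd.concat({col: data[col].value_counts(normalize=True) * 100
                     for col in data.columns})
summary = percent.rename_axis(['column', 'unique']).to_frame('percent')
print(summary)
```

The rows within each column come back sorted by frequency rather than by value, but a `sort_index` on the result would fix that if needed.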
|
2022/01/19
|
[
"https://Stackoverflow.com/questions/70777985",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17119804/"
] |
Well, there are many ways to handle errors, it really depends on what you want to do in case of an error.
In your current setup, the solution is straightforward: **first, `NotificationException` should extend `RuntimeException`**; then, in case of an HTTP error, `.block()` will throw a `NotificationException`. It is good practice to add it to the signature of the method, accompanied by a Javadoc entry.
In another method, you just need to catch the exception and do what you want with it.
```java
/**
* @param notification
* @throws NotificationException in case of a HTTP error
*/
public void sendNotification(String notification) throws NotificationException {
final WebClient webClient = WebClient.builder()
.defaultHeader(CONTENT_TYPE, APPLICATION_JSON_VALUE)
.build();
webClient.post()
.uri("http://localhost:9000/api")
.body(BodyInserters.fromValue(notification))
.retrieve()
.onStatus(HttpStatus::isError, clientResponse -> Mono.error(NotificationException::new))
.toBodilessEntity()
.block();
log.info("Notification delivered successfully");
}
public void someOtherMethod() {
try {
sendNotification("test");
} catch (NotificationException e) {
// Treat exception
}
}
```
In a more reactive style, you could return a `Mono` and use [`onErrorResume()`](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#onErrorResume-java.lang.Class-java.util.function.Function-).
```java
public Mono<Void> sendNotification(String notification) {
final WebClient webClient = WebClient.builder()
.defaultHeader(CONTENT_TYPE, APPLICATION_JSON_VALUE)
.build();
return webClient.post()
.uri("http://localhost:9000/api")
.body(BodyInserters.fromValue(notification))
.retrieve()
.onStatus(HttpStatus::isError, clientResponse -> Mono.error(NotificationException::new))
.bodyToMono(Void.class);
}
public void someOtherMethod() {
sendNotification("test")
.onErrorResume(NotificationException.class, ex -> {
log.error(ex.getMessage());
return Mono.empty();
})
.doOnSuccess(unused -> log.info("Notification delivered successfully"))
.block();
}
```
|
Using imperative/blocking style you can surround it with a try-catch:
```java
try {
webClient.post()
.uri("http://localhost:9000/api")
.body(BodyInserters.fromValue(notification))
.retrieve()
.onStatus(HttpStatus::isError, clientResponse -> Mono.error(NotificationException::new))
.toBodilessEntity()
.block();
} catch(NotificationException e) {...}
```
A reactive solution would be to use the `onErrorResume` operator like this:
```java
webClient.post()
.uri("http://localhost:9000/api")
.body(BodyInserters.fromValue(notification))
.retrieve()
.onErrorResume(e -> someOtherMethod())
.toBodilessEntity();
```
Here, the reactive method `someOtherMethod()` will be executed in case of any error.
|
25,737,589
|
I run a django app via gunicorn, supervisor and nginx as reverse proxy and struggle to make my gunicorn access log show the actual ip instead of 127.0.0.1:
Log entries look like this at the moment:
```
127.0.0.1 - - [09/Sep/2014:15:46:52] "GET /admin/ HTTP/1.0" ...
```
supervisord.conf
```
[program:gunicorn]
command=/opt/middleware/bin/gunicorn --chdir /opt/middleware -c /opt/middleware/gunicorn_conf.py middleware.wsgi:application
stdout_logfile=/var/log/middleware/gunicorn.log
```
gunicorn\_conf.py
```
#!python
from os import environ
from gevent import monkey
import multiprocessing
monkey.patch_all()
bind = "0.0.0.0:9000"
x_forwarded_for_header = "X-Real-IP"
policy_server = False
worker_class = "socketio.sgunicorn.GeventSocketIOWorker"
accesslog = '-'
```
my nginx module conf
```
server {
listen 80;
root /opt/middleware;
index index.html index.htm;
client_max_body_size 200M;
server_name _;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect off;
real_ip_header X-Real-IP;
}
}
```
I tried all sorts of combinations in the location {} block, but can't see that it makes any difference. Any hint appreciated.
|
2014/09/09
|
[
"https://Stackoverflow.com/questions/25737589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/640916/"
] |
The problem is that you need to configure [`gunicorn`'s logging](http://docs.gunicorn.org/en/0.17.0/configure.html#logging), because it will (by default) not display any custom headers.
From the documentation, we find out that the default access log format is controlled by [`access_log_format`](http://docs.gunicorn.org/en/0.17.0/configure.html#access-log-format) and is set to the following:
```
"%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
```
where:
* `h` is the remote address
* `l` is `-` (not used)
* `u` is `-` (not used, reserved)
* `t` is the time stamp
* `r` is the status line
* `s` is the status of the request
* `b` is length of response
* `f` is referrer
* `a` is user agent
You can also customize it with the following extra variables that are not used by default:
* `T` - request time (in seconds)
* `D` - request time (in microseconds)
* `p` - the process id
* `{Header}i` - request header (custom)
* `{Response}o` - response header (custom)
To `gunicorn`, all requests are coming from nginx so it will display that as the remote IP. To get it to log any custom headers (what you are sending from `nginx`) you'll need to adjust this parameter and add the appropriate variables, in your case you would set it to the following:
```
%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" "%({X-Real-IP}i)s"
```
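Applied to the `gunicorn_conf.py` from the question, that would look something like this (a sketch; it assumes the `X-Real-IP` header set by the nginx config shown above):

```python
# gunicorn_conf.py (excerpt) -- log the client IP forwarded by nginx
accesslog = '-'
access_log_format = ('%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s '
                     '"%(f)s" "%(a)s" "%({X-Real-IP}i)s"')
```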
|
Note that headers containing `-` should be referred to here by replacing `-` with `_`, thus `X-Forwarded-For` becomes `%({X_Forwarded_For}i)s`.
|
16,640,610
|
So I tried to use the parser generator [waxeye](http://waxeye.org/), but when I try to use the tutorial example program in Python with the generated parser I get this error:
```
AttributeError: 'module' object has no attribute 'Parser'
```
Here's the part of the code it refers to:
```
import waxeye
import parser
p = parser.Parser()
```
The last line causes the error. I put the parser generated by waxeye (parser.py) in the same directory as `__init__.py`.
Does anyone have any idea what's wrong with my code?
---
**Edit 20-05-2013:**
Beginning of the parser.py file:
```
from waxeye import Edge, State, FA, WaxeyeParser
class Parser (WaxeyeParser):
```
|
2013/05/19
|
[
"https://Stackoverflow.com/questions/16640610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1334531/"
] |
Can you try this code and then tell me the result?
Be sure to check all the file names.
```
@Override
protected void onCreate(Bundle savedInstanceState) {
//.......
DatabaseHelper dbHelper = new DatabaseHelper(this);
dbHelper.createDatabase();
dbHelper.openDatabase();
// do stuff
Cursor data =dbHelper.Sample_use_of_helper();
dbHelper.close();
}
class DatabaseHelper extends SQLiteOpenHelper {
private static String DB_PATH = "/data/data/com.mycomp.myapp/databases/";
private static String DB_NAME = "database.db";
private SQLiteDatabase myDataBase;
private final Context myContext;
public DatabaseHelper (Context context) {
super(context, DB_NAME, null, 1);
this.myContext = context;
}
public void createDatabase() throws IOException {
boolean vtVarMi = isDatabaseExist();
if (!vtVarMi) {
this.getReadableDatabase();
try {
copyDataBase();
} catch (IOException e) {
throw new Error("Error copying database");
}
}
}
private void copyDataBase() throws IOException {
// Open your local db as the input stream
InputStream myInput = myContext.getAssets().open(DB_NAME);
// Path to the just created empty db
String outFileName = DB_PATH + DB_NAME;
// Open the empty db as the output stream
OutputStream myOutput = new FileOutputStream(outFileName);
// transfer bytes from the inputfile to the outputfile
byte[] buffer = new byte[1024];
int length;
while ((length = myInput.read(buffer)) > 0) {
myOutput.write(buffer, 0, length);
}
// Close the streams
myOutput.flush();
myOutput.close();
myInput.close();
}
private boolean isDatabaseExist() {
SQLiteDatabase kontrol = null;
try {
String myPath = DB_PATH + DB_NAME;
kontrol = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);
} catch (SQLiteException e) {
kontrol = null;
}
if (kontrol != null) {
kontrol.close();
}
return kontrol != null;
}
public void openDatabase() throws SQLException {
// Open the database
String myPath = DB_PATH + DB_NAME;
myDataBase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READWRITE);
}
public Cursor Sample_use_of_helper() {
return myDataBase.query("TABLE_NAME", null, null, null, null, null, null);
}
@Override
public synchronized void close() {
if (myDataBase != null)
myDataBase.close();
super.close();
}
@Override
public void onCreate(SQLiteDatabase db) {
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
}
}
```
|
Remove the **.db** extension from both **DB\_NAME** and **.open("database.db")**.
It looks ugly, but the same thing is currently working for me....
Also, before installing the new APK, please do ***Clear Data*** for the previously installed APK.
And if it still doesn't work, please mention your database size.
|
48,630,957
|
I'm using subprocess in Python and passing an EC2 instance id at runtime. When I pass the instance id, it is treated as a literal string and the call throws an error saying the instance id is not valid. Please don't suggest the boto3 package, as my company has restricted its use. My question is how I can pass a value made of characters, integers and special characters at runtime.
Pgm:
```
#!/bin/python
import subprocess
import sys
def describe_one(instanceid):
subprocess.call(["aws", "ec2", "describe-instances", "--instance-ids", '"instanceid"'])
if __name__ == "__main__":
instanceid=sys.argv[1]
describe_one(instanceid)
```
Output
```
# ./sample.py i-061e41edcc2afc01
```
>
> An error occurred (InvalidInstanceID.Malformed) when calling the DescribeInstances operation: Invalid id: ""instanceid""
>
>
>
|
2018/02/05
|
[
"https://Stackoverflow.com/questions/48630957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9318698/"
] |
Are you intending to pass the literal string `"instanceid"`?
Surely you mean to use its value. Since `subprocess.call` receives a list of arguments, no shell is involved and no extra quoting is needed; pass the variable itself:
```
subprocess.call(["aws", "ec2", "describe-instances", "--instance-ids", instanceid])
```
Wrapping the value in quotes (for example `'"{}"'.format(instanceid)`) would send the quote characters to the AWS CLI verbatim and fail with the same `InvalidInstanceID.Malformed` error.
|
This has nothing to do with "passing integer or character types", whatever that means.
You are passing the literal value `"instanceid"`, complete with quotes, rather than the value of the `instanceid` argument to your script. You don't need either set of quotes.
(And really, you should take up this silly "restriction" about using boto3 with your boss. You can't be expected to do your job without the correct tools.)
|
17,436,045
|
I'm sure this is something easy to do for someone with programming skills (unlike me). I am playing around with the Google Sites API. Basically, I want to be able to batch-create a bunch of pages, instead of having to do them one by one using the slow web form, which is a pain.
I have installed all the necessary files, read the [documentation](https://developers.google.com/google-apps/sites/docs/1.0/developers_guide_python), and successfully ran [this](https://code.google.com/p/gdata-python-client/source/browse/samples/sites/sites_example.py) sample file. As a matter of fact, this sample file already has the python code for creating a page in Google Sites:
```
elif choice == 4:
print "\nFetching content feed of '%s'...\n" % self.client.site
feed = self.client.GetContentFeed()
try:
selection = self.GetChoiceSelection(
feed, 'Select a parent to upload to (or hit ENTER for none): ')
except ValueError:
selection = None
page_title = raw_input('Enter a page title: ')
parent = None
if selection is not None:
parent = feed.entry[selection - 1]
new_entry = self.client.CreatePage(
'webpage', page_title, '<b>Your html content</b>',
parent=parent)
if new_entry.GetAlternateLink():
print 'Created. View it at: %s' % new_entry.GetAlternateLink().href
```
I understand the creation of a page revolves around `page_title`, `new_entry` and `CreatePage`. However, instead of creating one page at a time, I want to create many.
I've done some research, and I gather I need something like
```
page_titles = input("Enter a list of page titles separated by commas: ").split(",")
```
to gather a list of page titles (like page1, page2, page3, etc. -- I plan to use a text editor or spreadsheet to generate a long list of comma separated names).
Now I am trying to figure out how to get that string and "feed" it to new\_entry so that it creates a separate page for each value in the string. I can't figure out how to do that. Can anyone help, please?
In case it helps, this is what the Google API needs to create a page:
```
entry = client.CreatePage('webpage', 'New WebPage Title', html='<b>HTML content</b>')
print 'Created. View it at: %s' % entry.GetAlternateLink().href
```
Thanks.
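A hedged sketch of how the pieces could fit together (`make_titles` and `create_pages` are hypothetical helper names, and `client` is assumed to be the authenticated client from the sample script):

```python
def make_titles(raw):
    """Turn 'page1, page2, page3' into a list of clean title strings."""
    return [t.strip() for t in raw.split(",") if t.strip()]

def create_pages(client, titles):
    # one CreatePage call per title, mirroring the sample's single call
    return [client.CreatePage('webpage', t, html='<b>HTML content</b>')
            for t in titles]

# titles = make_titles(raw_input('Enter a list of page titles separated by commas: '))
# for entry in create_pages(client, titles):
#     print 'Created. View it at: %s' % entry.GetAlternateLink().href
```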
|
2013/07/02
|
[
"https://Stackoverflow.com/questions/17436045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2544281/"
] |
Variable 'initialization' will always trigger an event. From the last Verilog standard([IEEE 1364-2005](http://standards.ieee.org/findstds/standard/1364-2005.html)):
>
> If a variable declaration assignment is used (see 6.2.1), the variable
> shall take this value as if the assignment occurred in a blocking
> assignment in an initial construct.
>
>
>
Also be aware
>
> If the same variable is assigned different values both in an initial
> block and in a variable declaration assignment, the order of the
> evaluation is undefined.
>
>
>
|
A good reference for order of events is this paper:
<http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA_rev1_2.pdf>
page 6 has a diagram of the order of events as far as evaluation of variables and triggering goes.
|
28,379,373
|
In Python, is there any way at all to get a function to use a variable and return it without passing it in as an argument?
|
2015/02/07
|
[
"https://Stackoverflow.com/questions/28379373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3983128/"
] |
What you ask **is** possible. If a function doesn't re-bind (e.g, assign to) a name, but uses it, that name is taken as a *global* one, e.g:
```
def f():
print(foo)
foo = 23
f()
```
will do what you ask. However, it's a dubious idea already: why not just `def f(foo):` and call `f(23)`, so much more direct and clearer?!
If the function needs to bind the name, it's even more dubious, even though the `global` statement allows it...:
```
def f():
global foo
print(foo)
foo += 1
foo = 23
f()
print(foo)
```
Here, the **much** better alternative would be:
```
def f(foo):
print(foo)
foo += 1
return foo
foo = f(23)
print(foo)
```
Same effect - much cleaner, better-maintainable, preferable structure.
**Why** are you looking for the inferior (though feasible) approach, when a better one is easier and cleaner...?!
|
As @Alex Martelli pointed out, you *can*... but *should* you? That's the more relevant question, IMO.
I understand the *appeal* of what you're asking. It seems like it should *just be* good form because otherwise you'd have to pass the variables everywhere, which of course seems wasteful.
I won't presume to know how far along you are in your Python-Fu (I can only guess), but back when I was more of a beginner there were times when I had the same question you do now. Using `global` felt ugly (everyone said *so*), yet I had this irresistible urge to do exactly what you're asking. In the end, I went with passing variables in to functions.
**Why?**
Because it's a code smell. You may not see it, but there are good reasons to avoid it in the long run.
*However, there is a name for what you're asking*:
### Classes
What you need are *instance* variables. Technically, it might just be a step away from your current situation. So, *you might be wondering*: Why such a fuss about it? After all, instance variables are widely accepted and global variables *seem* similar in nature. So what's the big problem with using global variables in regular functions?
The problem is mostly *conceptual*. Classes are designed for grouping related behaviour (functions) and state (player position variables) and have a wide array of functionality that goes along with it (which wouldn't otherwise be possible with only functions). Classes are also ***semantically significant***, in that they signal a clear intention in your mind and to future readers about the way the program is organized and the way it operates.
I won't go into much further detail other than to say this: You should probably reconsider your strategy because the road it leads to inevitably is this:
* 1) Increased coupling in your codebase
* 2) Undecipherable flow of execution
* 3) Debugging bonanza
* 4) **Spaghetti** code.
Right now, it may seem like an exaggeration. The point is you'd greatly benefit from learning how to organize your program in a way that makes sense, ***conceptually***, instead of using what currently requires the least amount of code or what seems intuitive to you *right now*. You may be working on a small application for all I know, with only ***1 module***, so it might resemble the use of a single class.
The *worst* thing that can happen is you'll adopt bad coding practices. In the long run, this might hurt you.
|
46,318,530
|
Hello all I'm very new to python, so just bear with me.
I have a sample json file in this format.
```
[{
"5":"5",
"0":"0",
"1":"1"},{
"14":"14",
"11":"11",
"15":"15"},{
"25":"25",
"23":"23",
"22":"22"}]
```
I would like to convert the JSON file to CSV in a specific way. All the values in one map, i.e. between `{` and `}` in the JSON, should be converted to a CSV file (say FILE\_A.csv).
For the above example,
FILE\_A.csv,
```
DAILY,COUNT
5,5
0,0
1,1
```
FILE\_B.csv,
```
WEEKLY,COUNT
14,14
11,11
15,15
```
FILE\_C.csv,
```
MONTHLY,COUNT
25,25
23,23
22,22
```
There will only be 3 maps in the json.
**Can anyone suggest the best way to convert the 3 maps in the json to 3 different csv files of the above structure?**
Thanks in advance.
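One possible sketch (hedged: the DAILY/WEEKLY/MONTHLY header names and FILE\_A/B/C file names are taken from the example above, and the three maps are assumed to arrive in that order):

```python
import csv
import json

sample = ('[{"5":"5","0":"0","1":"1"},'
          '{"14":"14","11":"11","15":"15"},'
          '{"25":"25","23":"23","22":"22"}]')
maps = json.loads(sample)  # in practice: json.load(open("input.json"))

headers = ["DAILY", "WEEKLY", "MONTHLY"]
filenames = ["FILE_A.csv", "FILE_B.csv", "FILE_C.csv"]

# one CSV per map: a two-column header row, then one row per key/value pair
for header, filename, mapping in zip(headers, filenames, maps):
    with open(filename, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow([header, "COUNT"])
        for key, value in mapping.items():
            writer.writerow([key, value])
```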
|
2017/09/20
|
[
"https://Stackoverflow.com/questions/46318530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7923318/"
] |
`bytes(iterator)` creates a bytes object from an iterator using the internal C-API `_PyBytes_FromIterator` function, which uses the special `_PyBytes_Writer` protocol. It internally uses a buffer, which is resized when it overflows according to the rule:
```
bufsize += bufsize / OVERALLOCATE_FACTOR
```
On Linux OVERALLOCATE\_FACTOR=4; on Windows OVERALLOCATE\_FACTOR=2.
In other words, this process looks like writing to a file in RAM. At the end, the contents of the buffer are returned.
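A minimal illustration (not the C internals, just the observable behaviour) that `bytes()` drains an arbitrary iterator of small ints in a single pass:

```python
gen = (i * 2 % 256 for i in range(10))
b = bytes(gen)
print(b == bytes([0, 2, 4, 6, 8, 10, 12, 14, 16, 18]))  # → True
print(list(gen))  # → []  (the generator is exhausted)
```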
|
This is probably more a remark or discussion starter than an answer, but I think it's better to format it like this.
I'm just hooking in because I find this topic very interesting as well.
I would recommend pasting the real call and generator mock, since IMHO the generator expression example does not fit your question very well.
Also, the example code you pasted does not work.
Normally you have a generator like this (in your case calling the module instead of generating numbers, of course):
Example slightly modified from [dabeaz](http://www.dabeaz.com/coroutines/Coroutines.pdf)
**Update: deleted the explicit byte creation.**
```
def genbytes():
for i in range(100):
yield i**2
```
You would the probably call it with something like this:
```
for newbyte in genbytes():
print(newbyte)
print(id(newbyte))
input("press key...")
```
which leads to:
>
> Do I have to assume that for every element the generator yields, there
> is a new bytes object created
>
>
>
I would say absolutely yes. That's the way `yield` works, and it's what we can see above: each bytes value has a new id.
Normally this shouldn't be a problem, since you want to consume the bytes one by one and then collect them into something like, as you suggested, a `bytearray` using `bytearray.append`.
But even if new bytes objects are produced, at least this does not produce the 33 MB of input at once and return it.
I'll add this excerpt from [PEP 289](https://www.python.org/dev/peps/pep-0289/), where the equivalence of a generator expression and the generator in "function" style is pointed out:
>
> The semantics of a generator expression are equivalent to creating an
> anonymous generator function and calling it. For example:
>
>
>
```
g = (x**2 for x in range(10))
print g.next()
```
is equivalent to:
```
def __gen(exp):
for x in exp:
yield x**2
g = __gen(iter(range(10)))
print g.next()
```
So `bytes(gen)` also calls `gen.next()` (which `yield`s ints in this case, but probably bytes in your real case) while iterating over the `gen`.
|
46,318,530
|
Hello all I'm very new to python, so just bear with me.
I have a sample json file in this format.
```
[{
"5":"5",
"0":"0",
"1":"1"},{
"14":"14",
"11":"11",
"15":"15"},{
"25":"25",
"23":"23",
"22":"22"}]
```
I would like to convert the JSON file to CSV in a specific way. All the values in one map, i.e. between `{` and `}` in the JSON, should be converted to a CSV file (say FILE\_A.csv).
For the above example,
FILE\_A.csv,
```
DAILY,COUNT
5,5
0,0
1,1
```
FILE\_B.csv,
```
WEEKLY,COUNT
14,14
11,11
15,15
```
FILE\_C.csv,
```
MONTHLY,COUNT
25,25
23,23
22,22
```
There will only be 3 maps in the json.
**Can anyone suggest the best way to convert the 3 maps in the json to 3 different csv files of the above structure?**
Thanks in advance.
|
2017/09/20
|
[
"https://Stackoverflow.com/questions/46318530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7923318/"
] |
`bytes(iterator)` creates a bytes object from an iterator using the internal C-API `_PyBytes_FromIterator` function, which uses the special `_PyBytes_Writer` protocol. It internally uses a buffer, which is resized when it overflows according to the rule:
```
bufsize += bufsize / OVERALLOCATE_FACTOR
```
On Linux OVERALLOCATE\_FACTOR=4; on Windows OVERALLOCATE\_FACTOR=2.
In other words, this process looks like writing to a file in RAM. At the end, the contents of the buffer are returned.
|
Creating immutable types from arbitrary sized iterables is a common task for Python. Most notably, `tuple` is an ubiquitous immutable type. This makes it a necessity to *efficiently* create such instances – constantly creating and discarding intermediate instances would not be efficient.
Using a generator as the source of `bytes` will be reasonably fast. However, directly providing data as a fixed `tuple` or `list` would be faster, and a [readable buffer](https://docs.python.org/3/c-api/buffer.html) can even be directly copied to a new `bytes` object. If the data is provided as a generator, intermediate `tuple`, `list` or `bytearray` will not offer a benefit.
---
The general task of creating a fixed size instance from an iterable is not unique to immutable types. Even a mutable type such as `list` has a fixed size at any specific point in time. While filling a `list` by `.append`ing elements during iteration, Python cannot know the final size ahead of time.
Looking at the size of a `list` in such a scenario reveals the general strategy of turning an arbitrary size iterable into a specific sized container:
```py
>>> items = []
>>> for i in (i*2 for i in range(10)):
>>> items.append(i)
>>> print(i // 2, "\t", sys.getsizeof(items))
0 88
1 88
2 88
3 88
4 120
5 120
6 120
7 120
8 184
9 184
```
As can be seen, Python does not grow the `list` for each item. Instead, when the `list` is too small an entire chunk of space is added that can hold *several* items before resizing is required again.
A naive implementation of `bytes` when presented with an arbitrary sized iterable would first convert its argument to a `list`, then copy the content to a new `bytes` instance. This would already *create only one `bytes` instance* and one temporary, auxiliary `list` – much more efficient than creating a new instance per element.
---
However, it is possible to go one step further: `bytes` is an immutable sequence of immutable integers, so it does not have to preserve the identity of its items: since the value of each `bytes` item fits exactly into a byte / C `char`, the items can be written directly to memory.
In the [CPython 3.9 implementation, constructing `bytes` from iterables](https://github.com/python/cpython/blob/f26fec4f74ae7da972595703bc39d3d30b8cfdf9/Objects/bytesobject.c#L2804-L2845) has some special cases for *direct access buffers*, guaranteed-size *`list` and `tuple`*, and a generic path for *any* other iterable that is not a string. The case of a generator also falls into the latter path.
[This generic path](https://github.com/python/cpython/blob/3.9/Objects/bytesobject.c#L2742-L2802) uses [a `struct` as a primitive, low-level "list" for `char`s](https://github.com/python/cpython/blob/3.9/Include/cpython/bytesobject.h#L44-L65) – it uses a `char` array for storage, and holds metadata like current and maximum size. The byte value of each item fetched from the iterable/generator is simply copied directly to the current end of the `char` array; if the `char` array becomes too small, it is replaced by a new, larger `char` array. Once the iterable/generator is exhausted, the `char` array can be directly used as the internal storage of the new `bytes` object.
---
When looking at the timings, the important part is whether the data already comes from a generator, or whether one can chose other formats.
When there already is a generator, feeding it directly to `bytes` is the fastest – though not by much.
```
%timeit bytes(i*2 % 256 for i in range(2**20))
107 ms ± 411 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit bytes(bytearray(i*2 % 256 for i in range(2**20)))
111 ms ± 946 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit bytes(tuple(i*2 % 256 for i in range(2**20)))
114 ms ± 1.13 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit bytes(list(i*2 % 256 for i in range(2**20)))
115 ms ± 741 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
However, if the initial data structure can be freely chosen, a `list` is faster at the cost of intermediate storage.
```py
%timeit bytes([i*2 % 256 for i in range(2**20)])
94.5 ms ± 1.11 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
When directly creating the data in a C/Cython module, providing a `bytes`/`bytesarray` is by far fastest and has low overhead.
|
57,643,746
|
I have a pipeline with a set of PTransforms and my method is getting very long.
I'd like to write my DoFns and my composite transforms in a separate package and use them back in my main method. With Python it's pretty straightforward; how can I achieve that with Scio? I don't see any example of doing that. :(
```
withFixedWindows(
FIXED_WINDOW_DURATION,
options = WindowOptions(
trigger = groupedWithinTrigger,
timestampCombiner = TimestampCombiner.END_OF_WINDOW,
accumulationMode = AccumulationMode.ACCUMULATING_FIRED_PANES,
allowedLateness = Duration.ZERO
)
)
.sumByKey
// How to write this in an another file and use it here?
.transform("Format Output") {
_
.withWindow[IntervalWindow]
.withTimestamp
}
```
|
2019/08/25
|
[
"https://Stackoverflow.com/questions/57643746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9726037/"
] |
You can try the "Latin-1" encoding (also known as ISO 8859-1 or ISO/IEC 8859-1). Note that `encoding` must be an argument to `fread` itself, not to `file.path`:
```
library(data.table)
type <- fread("C:/Users/Alonso/Desktop/Tesis_MGII/Avance_mayo/escrito/natural-disasters-by-type.csv", encoding = "Latin-1")
```
|
Use the `encoding` option in your `read.csv` call.
The following code sample works for me:
```
file <- textConnection("# ---------------------
#
# ---------------------
Año, Número
2001, 3152
2002, 3200
2003, 3500
2004, 3700
2005, 3850
2006, 4200", encoding = c("UTF-8"))
file
# read data from textConnection
desaster <- read.csv(file, skip=3, head=TRUE, blank.lines.skip = TRUE, sep=",", encoding="UTF-8")
desaster
```
|
57,643,746
|
I have a pipeline with a set of PTransforms and my method is getting very long.
I'd like to write my DoFns and my composite transforms in a separate package and use them in my main method. With Python it's pretty straightforward; how can I achieve that with Scio? I don't see any example of doing that. :(
```
withFixedWindows(
FIXED_WINDOW_DURATION,
options = WindowOptions(
trigger = groupedWithinTrigger,
timestampCombiner = TimestampCombiner.END_OF_WINDOW,
accumulationMode = AccumulationMode.ACCUMULATING_FIRED_PANES,
allowedLateness = Duration.ZERO
)
)
.sumByKey
// How to write this in an another file and use it here?
.transform("Format Output") {
_
.withWindow[IntervalWindow]
.withTimestamp
}
```
|
2019/08/25
|
[
"https://Stackoverflow.com/questions/57643746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9726037/"
] |
You can try the "Latin-1" encoding (also known as ISO 8859-1 or ISO/IEC 8859-1).
```
library(data.table)
type <- fread("C:/Users/Alonso/Desktop/Tesis_MGII/Avance_mayo/escrito/natural-disasters-by-type.csv", encoding = "Latin-1")
```
|
For what it's worth: `read_csv` imports columns with row names and column names in Spanish for me without specifying any encoding, and `ggplot2` is able to graph them. The defaults should be UTF-8 and perfectly capable of handling Spanish special characters. You do not even need to add backticks. Unfortunately, the reprex add-in I normally use does not recognise those same characters!
If you're on Windows, make sure you are saving as UTF-8 `.csv` and it should all work.
|
61,996,944
|
I'm completely new to the Python world, so I've been struggling with this issue for a couple days now. I thank you guys in advance.
I have been trying to separate a single row of text into three different columns. To explain myself better, here's where I am.
So this is my pandas dataframe from a csv:
In[2]:
```
df = pd.read_csv('raw_csv/consejo_judicatura_guerrero.csv', header=None)
df.columns = ["institution"]
df
```
Out[2]:
```
institution
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012)
```
Then, I first try to separate the **1.1.2.** into a new column called **number**, which I more or less nailed:
In[3]:
```
new_df = pd.DataFrame(df['institution'].str.split('. ',1).tolist(),columns=['number', 'institution'])
```
Out[3]:
```
number institution
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012)
```
Finally, trying to split the **(CNCOO00012)** into a new column called **unit\_id**, I get the following:
In[4]:
```
new_df['institution'] = pd.DataFrame(new_df['institution'].str.split('(').tolist(),columns=['institution', 'unit_id'])
```
Out[4]:
```
------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-70d13206881c> in <module>
----> 1 new_df['institution'] = pd.DataFrame(new_df['institution'].str.split('(').tolist(),columns=['institution', 'unit_id'])
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
472 if is_named_tuple(data[0]) and columns is None:
473 columns = data[0]._fields
--> 474 arrays, columns = to_arrays(data, columns, dtype=dtype)
475 columns = ensure_index(columns)
476
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py in to_arrays(data, columns, coerce_float, dtype)
459 return [], [] # columns if columns is not None else []
460 if isinstance(data[0], (list, tuple)):
--> 461 return _list_to_arrays(data, columns, coerce_float=coerce_float, dtype=dtype)
462 elif isinstance(data[0], abc.Mapping):
463 return _list_of_dict_to_arrays(
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py in _list_to_arrays(data, columns, coerce_float, dtype)
491 else:
492 # list of lists
--> 493 content = list(lib.to_object_array(data).T)
494 # gh-26429 do not raise user-facing AssertionError
495 try:
pandas/_libs/lib.pyx in pandas._libs.lib.to_object_array()
TypeError: object of type 'NoneType' has no len()
```
What can I do to successfully achieve this task?
|
2020/05/25
|
[
"https://Stackoverflow.com/questions/61996944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13611327/"
] |
You can use `assign` with `str.split` like below, but the format of the text needs to be fixed.
```
df.assign(number = df.institution.str.split().str[0], \
unit_id = df.institution.str.split().str[-1])
```
Output:
```
institution number unit_id
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012) 1.1.2. (CNCOO00012)
```
Or, if you want to strip `()` from `unit_id`, use
```
df.assign(number = df.institution.str.split().str[0], \
unit_id = df.institution.str.split().str[-1].str.strip('()'))
institution number unit_id
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012) 1.1.2. CNCOO00012
```
|
Just a thought, but what about using named capture groups in a regular expression? For example, use the following after you have imported your CSV file:
```
df.iloc[:,0].str.extract(r'^(?P<number>[\d.]*)\s+(?P<institution>.*)\s+\((?P<unit_id>[A-Z\d]*)\)$')
```
This would expand your dataframe as such:
```
 number institution unit_id
0 1.1.2. Consejo Nacional de Ciencias CNCOO00012
```
---
About the regular expression's pattern:

* `^` - A start-of-string anchor.
* `(?P<number>[\d.]*)` - A named capture group, number, made up of zero or more characters (greedy) from a character class of dots and digits.
* `\s+` - One or more spaces.
* `(?P<institution>.*)` - A named capture group, institution, made up of zero or more characters (greedy) other than newline.
* `\s+\(` - One or more spaces followed by a literal opening parenthesis.
* `(?P<unit_id>[A-Z\d]*)` - A named capture group, unit\_id, made up of zero or more characters (greedy) from a character class of uppercase letters and digits.
* `\)$` - A closing parenthesis followed by the end-of-string anchor.
[Online Demo](https://regex101.com/r/ZliNc0/4)
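The same pattern can be exercised directly with the standard `re` module on the question's sample row (the capture-group name typo fixed to `institution`):

```python
import re

# Named-group pattern from the answer, applied to the sample row.
pattern = re.compile(r'^(?P<number>[\d.]*)\s+(?P<institution>.*)\s+\((?P<unit_id>[A-Z\d]*)\)$')
row = "1.1.2. Consejo Nacional de Ciencias (CNCOO00012)"
m = pattern.match(row)
print(m.groupdict())
# {'number': '1.1.2.', 'institution': 'Consejo Nacional de Ciencias', 'unit_id': 'CNCOO00012'}
```

`str.extract` simply applies this per row, turning each named group into a column.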
|
61,996,944
|
I'm completely new to the Python world, so I've been struggling with this issue for a couple days now. I thank you guys in advance.
I have been trying to separate a single row of text into three different columns. To explain myself better, here's where I am.
So this is my pandas dataframe from a csv:
In[2]:
```
df = pd.read_csv('raw_csv/consejo_judicatura_guerrero.csv', header=None)
df.columns = ["institution"]
df
```
Out[2]:
```
institution
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012)
```
Then, I first try to separate the **1.1.2.** into a new column called **number**, which I more or less nailed:
In[3]:
```
new_df = pd.DataFrame(df['institution'].str.split('. ',1).tolist(),columns=['number', 'institution'])
```
Out[3]:
```
number institution
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012)
```
Finally, trying to split the **(CNCOO00012)** into a new column called **unit\_id**, I get the following:
In[4]:
```
new_df['institution'] = pd.DataFrame(new_df['institution'].str.split('(').tolist(),columns=['institution', 'unit_id'])
```
Out[4]:
```
------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-70d13206881c> in <module>
----> 1 new_df['institution'] = pd.DataFrame(new_df['institution'].str.split('(').tolist(),columns=['institution', 'unit_id'])
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
472 if is_named_tuple(data[0]) and columns is None:
473 columns = data[0]._fields
--> 474 arrays, columns = to_arrays(data, columns, dtype=dtype)
475 columns = ensure_index(columns)
476
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py in to_arrays(data, columns, coerce_float, dtype)
459 return [], [] # columns if columns is not None else []
460 if isinstance(data[0], (list, tuple)):
--> 461 return _list_to_arrays(data, columns, coerce_float=coerce_float, dtype=dtype)
462 elif isinstance(data[0], abc.Mapping):
463 return _list_of_dict_to_arrays(
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py in _list_to_arrays(data, columns, coerce_float, dtype)
491 else:
492 # list of lists
--> 493 content = list(lib.to_object_array(data).T)
494 # gh-26429 do not raise user-facing AssertionError
495 try:
pandas/_libs/lib.pyx in pandas._libs.lib.to_object_array()
TypeError: object of type 'NoneType' has no len()
```
What can I do to successfully achieve this task?
|
2020/05/25
|
[
"https://Stackoverflow.com/questions/61996944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13611327/"
] |
You can use `assign` with `str.split` like below, but the format of the text needs to be fixed.
```
df.assign(number = df.institution.str.split().str[0], \
unit_id = df.institution.str.split().str[-1])
```
Output:
```
institution number unit_id
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012) 1.1.2. (CNCOO00012)
```
Or, if you want to strip `()` from `unit_id`, use
```
df.assign(number = df.institution.str.split().str[0], \
unit_id = df.institution.str.split().str[-1].str.strip('()'))
institution number unit_id
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012) 1.1.2. CNCOO00012
```
|
If your data is tidy enough you can perform all three steps in one command using `pd.Series.str.extract()`, which can split a series of strings into columns by regex, i.e.:
```
df.institution.str.extract(r'(?P<number>[0-9.]+) (?P<institution>[A-Za-z ]+) \((?P<unit_id>[A-Z0-9]+)')
```
If the missing values cause a problem you can add `dropna()`:
```
df.institution.dropna().str.extract(r'(?P<number>[0-9.]+) (?P<institution>[A-Za-z ]+) \((?P<unit_id>[A-Z0-9]+)')
```
Output:
```
number institution unit_id
0 1.1.2. Consejo Nacional de Ciencias CNCOO00012
```
|
36,002,647
|
Given an iterator `i`, I want an iterator that yields each element `n` times, i.e., the equivalent of this function
```
def duplicate(i, n):
for x in i:
for k in range(n):
yield x
```
Is there a one-liner for this?
Related question: [duplicate each member in a list - python](https://stackoverflow.com/questions/2449077/duplicate-each-member-in-a-list-python), but the `zip` solution doesn't work here.
|
2016/03/15
|
[
"https://Stackoverflow.com/questions/36002647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2201385/"
] |
```
itertools.chain.from_iterable(itertools.izip(*itertools.tee(source, n)))
```
Example:
```
>>> x = (a**2 for a in xrange(5))
>>> list(itertools.chain.from_iterable(itertools.izip(*itertools.tee(x, 3))))
[0, 0, 0, 1, 1, 1, 4, 4, 4, 9, 9, 9, 16, 16, 16]
```
Another way:
```
itertools.chain.from_iterable(itertools.repeat(item, n) for item in source)
>>> x = (a**2 for a in xrange(5))
>>> list(itertools.chain.from_iterable(itertools.repeat(item, 3) for item in x))
[0, 0, 0, 1, 1, 1, 4, 4, 4, 9, 9, 9, 16, 16, 16]
```
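Note that `izip` and `xrange` are Python 2 names; a Python 3 spelling of the same idea would be:

```python
from itertools import chain, tee

# tee() makes n independent copies of the iterator; zip() interleaves
# them element by element, and chain.from_iterable() flattens the tuples.
x = (a**2 for a in range(5))
print(list(chain.from_iterable(zip(*tee(x, 3)))))
# [0, 0, 0, 1, 1, 1, 4, 4, 4, 9, 9, 9, 16, 16, 16]
```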
|
Use a generator expression:
```
>>> x = (n for n in range(4))
>>> i = (v for v in x for _ in range(3))
>>> list(i)
[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
```
|
36,002,647
|
Given an iterator `i`, I want an iterator that yields each element `n` times, i.e., the equivalent of this function
```
def duplicate(i, n):
for x in i:
for k in range(n):
yield x
```
Is there a one-liner for this?
Related question: [duplicate each member in a list - python](https://stackoverflow.com/questions/2449077/duplicate-each-member-in-a-list-python), but the `zip` solution doesn't work here.
|
2016/03/15
|
[
"https://Stackoverflow.com/questions/36002647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2201385/"
] |
This is my simple solution if you want to duplicate each element the same number of times. It returns a generator expression, which should be memory-efficient.
```
def duplicate(i, n):
return (k for k in i for j in range(n))
```
An example usage could be,
```
print (list(duplicate(range(1, 10), 3)))
```
Which prints,
>
> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8,
> 8, 9, 9, 9]
>
>
>
|
```
itertools.chain.from_iterable(itertools.izip(*itertools.tee(source, n)))
```
Example:
```
>>> x = (a**2 for a in xrange(5))
>>> list(itertools.chain.from_iterable(itertools.izip(*itertools.tee(x, 3))))
[0, 0, 0, 1, 1, 1, 4, 4, 4, 9, 9, 9, 16, 16, 16]
```
Another way:
```
itertools.chain.from_iterable(itertools.repeat(item, n) for item in source)
>>> x = (a**2 for a in xrange(5))
>>> list(itertools.chain.from_iterable(itertools.repeat(item, 3) for item in x))
[0, 0, 0, 1, 1, 1, 4, 4, 4, 9, 9, 9, 16, 16, 16]
```
|
36,002,647
|
Given an iterator `i`, I want an iterator that yields each element `n` times, i.e., the equivalent of this function
```
def duplicate(i, n):
for x in i:
for k in range(n):
yield x
```
Is there a one-liner for this?
Related question: [duplicate each member in a list - python](https://stackoverflow.com/questions/2449077/duplicate-each-member-in-a-list-python), but the `zip` solution doesn't work here.
|
2016/03/15
|
[
"https://Stackoverflow.com/questions/36002647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2201385/"
] |
This is my simple solution if you want to duplicate each element the same number of times. It returns a generator expression, which should be memory-efficient.
```
def duplicate(i, n):
return (k for k in i for j in range(n))
```
An example usage could be,
```
print (list(duplicate(range(1, 10), 3)))
```
Which prints,
>
> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8,
> 8, 9, 9, 9]
>
>
>
|
Use a generator expression:
```
>>> x = (n for n in range(4))
>>> i = (v for v in x for _ in range(3))
>>> list(i)
[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
```
|
4,081,230
|
**Update 2010-11-02 7p:** Shortened description; posted initial bash solution.
---
**Description**
I'd like to create a semantic file structure to better organize my data. I don't want to go a route like recoll, strigi, or beagle; I want no gui and full control. The closest might be oyepa or even closer, [Tagsistant](http://www.tagsistant.net/index.php).
Here's the idea: one maintains a "regular" tree of their files. For example, mine are organized in project folders like this:
```
,---
| ~/proj1
| ---- ../proj1_file1[tag1-tag2].ext
| ---- ../proj1_file2[tag3]_yyyy-mm-dd.ext
| ~/proj2
| ---- ../proj2_file3[tag2-tag4].ext
| ---- ../proj1_file4[tag1].ext
`---
```
proj1, proj2 are very short abbreviations I have for my projects.
Then what I want to do is recursively go through the directory and get the following:
* proj ID
* tags
* extension
Together, these form a complete "tag list" for each file.
Then in a user-defined directory, a "semantic hierarchy" will be created based on these tags. This gets a bit long, so just take a look at the directory structure created for all files containing tag2 in the name:
```
,---
| ~/tag2
| --- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| ---../tag1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../tag4
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| --- ../proj1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
`---
```
In other words, directories are created with all combinations of a file's tags, and each contains a symlink to the actual files having those tags. I have omitted the file type directories, but these would also exist. It looks really messy in type, but I think the effect would be very cool. One could then find a given file along a number of "tag breadcrumbs."
My thoughts so far:
* ls -R in a top directory to get all the file names
* identify those files with a [ and ] in the filename (tagged files)
* with what's left, enter a loop:
+ strip out the proj ID, tags, and extension
+ create all the necessary dirs based on the tags
+ create symlinks to the file in all of the dirs created
**First Solution! 2010-11-3 7p**
Here's my current working code. It only works on files in the top level directory, does not figure out extension types yet, and only works on 2 tags + the project ID for a total of 3 tags per file. It is a hacked manual chug solution but maybe it would help someone see what I'm doing and how this could be muuuuch better:
```
#!/bin/bash
########################
#### User Variables ####
########################
## set top directory for the semantic filer
## example: ~/semantic
## result will be ~/semantic/tag1, ~/semantic/tag2, etc.
top_dir=~/Desktop/semantic
## set document extensions, space separated
## example: "doc odt txt"
doc_ext="doc odt txt"
## set presentation extensions, space separated
pres_ext="ppt odp pptx"
## set image extensions, space separated
img_ext="jpg png gif"
#### End User Variables ####
#####################
#### Begin Script####
#####################
cd $top_dir
ls -1 | (while read fname;
do
if [[ "$fname" == *\[* ]]
then
tag_names=$( echo $fname | sed -e 's/-/ /g' -e 's/_.*\[/ /' -e 's/\].*$//' )
num_tags=$(echo $tag_names | wc -w)
current_tags=( `echo $tag_names | sed -e 's/ /\n/g'` )
echo ${current_tags[0]}
echo ${current_tags[1]}
echo ${current_tags[2]}
case $num_tags in
3)
mkdir -p ./${current_tags[0]}/${current_tags[1]}/${current_tags[2]}
mkdir -p ./${current_tags[0]}/${current_tags[2]}/${current_tags[1]}
mkdir -p ./${current_tags[1]}/${current_tags[0]}/${current_tags[2]}
mkdir -p ./${current_tags[1]}/${current_tags[2]}/${current_tags[0]}
mkdir -p ./${current_tags[2]}/${current_tags[0]}/${current_tags[1]}
mkdir -p ./${current_tags[2]}/${current_tags[1]}/${current_tags[0]}
cd "$top_dir/${current_tags[0]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[1]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[2]}/$fname"
cd "$top_dir/${current_tags[1]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[0]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[2]}/$fname"
cd "$top_dir/${current_tags[2]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[0]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[1]}/$fname"
cd "$top_dir"
;;
esac
fi
done
)
```
It's actually pretty neat. If you want to try it, do this:
* create a dir somewhere
* use touch to create a bunch of files with the format above: proj\_name[tag1-tag2].ext
* define the top\_dir variable
* run the script
* play around!
**ToDo**
* make this work using an "ls -R" in order to get into sub-dirs in my actual tree
* robustness check
* consider switching languages; hey, I've always wanted to learn perl and/or python!
Still open to any suggestions you have. Thanks!
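Since learning Python is on the ToDo list, here is a rough sketch of the same idea in Python. Everything here (the `NAME_RE` pattern, `tag_list`, `link_all`) is my own naming, and it assumes the `proj_name[tag1-tag2].ext` filename format described above:

```python
import itertools
import os
import re

# Pattern assumption: "<proj>_<anything>[tag1-tag2-...]<anything>"
NAME_RE = re.compile(r'^([^_]+)_.*\[([^\]]+)\]')

def tag_list(fname):
    """Return [proj, tag1, tag2, ...] for a tagged filename, else None."""
    m = NAME_RE.match(fname)
    if not m:
        return None
    proj, tags = m.group(1), m.group(2).split('-')
    return [proj] + tags

def link_all(top_dir, src_path):
    """Create every tag-permutation directory and symlink the file into each."""
    fname = os.path.basename(src_path)
    tags = tag_list(fname)
    if tags is None:
        return
    for n in range(1, len(tags) + 1):
        for combo in itertools.permutations(tags, n):
            d = os.path.join(top_dir, *combo)
            os.makedirs(d, exist_ok=True)
            dest = os.path.join(d, fname)
            if not os.path.lexists(dest):  # skip links that already exist
                os.symlink(src_path, dest)
```

Walking the real tree with `os.walk` and calling `link_all` per file would replace the `ls -R` step; it also removes the hard-coded three-tag `case` from the bash version, since permutations of any length are generated.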
|
2010/11/02
|
[
"https://Stackoverflow.com/questions/4081230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/495152/"
] |
>
> A privileged client can invoke the private constructor reflectively with the aid of the AccessibleObject.setAccessible method. If you need to defend against this, modify the constructor. My question is: How exactly can a private constructor be invoked? And what is AccessibleObject.setAccessible?
>
>
>
Obviously a private constructor can be invoked by the class itself (e.g. from a static factory method). Reflectively, what Bloch is talking about is this:
```
import java.lang.reflect.Constructor;
public class PrivateInvoker {
public static void main(String[] args) throws Exception{
//compile error
// Private p = new Private();
//works fine
Constructor<?> con = Private.class.getDeclaredConstructors()[0];
con.setAccessible(true);
Private p = (Private) con.newInstance();
}
}
class Private {
private Private() {
System.out.println("Hello!");
}
}
```
>
> 2.What approach do you experts follow with singletons:
>
>
> ...
>
>
>
Typically, the first is favoured. The second (assuming you were to test if `TestInstance` is null before returning a new instance) gains lazy-loading at the cost of needing to be synchronized or being thread-unsafe.
I wrote the above when your second example didn't assign the instance to `TestInstance` at declaration. As stated now, the above consideration is irrelevant.
>
> Isn't the second approach more flexible in case we have to make a check for new instance every time or same instance every time?
>
>
>
It's not about flexibility, it's about when the cost of creating the one (and only) instance is incurred. If you do option a) it's incurred at class loading time. That's typically fine since the class is only loaded once it's needed anyway.
I wrote the above when your second example didn't assign the instance to `TestInstance` at declaration. As stated now, in both cases the Singleton will be created at class load.
>
> What if I try to clone the class/object?
>
>
>
A singleton should not allow cloning for obvious reasons. A CloneNotSupportedException should be thrown, and will be automatically unless you for some reason implement `Cloneable`.
>
> a single-element enum type is the best way to implement a singleton. Why? and How?
>
>
>
Examples for this are in the book, as are justifications. What part didn't you understand?
|
>
> A privileged client can invoke the private constructor reflectively with
> the aid of the
> AccessibleObject.setAccessible method.
> If you need to defend against this, modify the
> constructor. My question is: How
> exactly can a private constructor be
> invoked? And what is
> AccessibleObject.setAccessible?
>
>
>
You can use Java reflection to call private constructors.
>
> What approach do you experts follow with singletons:
>
>
>
I am a fan of using an enum to actually do this. This is in the book also. If that's not an option, it is much simpler to do option (a) because you don't have to check or worry about the instance already being created.
>
> What if I try to clone the
> class/object?
>
>
>
Not sure what you mean. Do you mean `clone()` or something else that I don't know of?
>
> a single-element enum type is the best way to implement a singleton.
> Why? and How?
>
>
>
Ahh my own answer. haha. This is the best way because in this case, the java programming language guarantees a singleton instead of the developer having to check for a singleton. It's almost as if the singleton was part of the framework/language.
Edit:
I didn't see that it was a getter before. Updating my answer to this - it's better to use a function such as getInstance because you can control what happens through a getter, but you can't do the same if everybody is using a reference directly instead. Think of the future: if you ended up doing SomeClass.INSTANCE and then later you wanted to make it lazy so it doesn't load right away, you would need to change this everywhere it is being used.
|
4,081,230
|
**Update 2010-11-02 7p:** Shortened description; posted initial bash solution.
---
**Description**
I'd like to create a semantic file structure to better organize my data. I don't want to go a route like recoll, strigi, or beagle; I want no gui and full control. The closest might be oyepa or even closer, [Tagsistant](http://www.tagsistant.net/index.php).
Here's the idea: one maintains a "regular" tree of their files. For example, mine are organized in project folders like this:
```
,---
| ~/proj1
| ---- ../proj1_file1[tag1-tag2].ext
| ---- ../proj1_file2[tag3]_yyyy-mm-dd.ext
| ~/proj2
| ---- ../proj2_file3[tag2-tag4].ext
| ---- ../proj1_file4[tag1].ext
`---
```
proj1, proj2 are very short abbreviations I have for my projects.
Then what I want to do is recursively go through the directory and get the following:
* proj ID
* tags
* extension
Together, these form a complete "tag list" for each file.
Then in a user-defined directory, a "semantic hierarchy" will be created based on these tags. This gets a bit long, so just take a look at the directory structure created for all files containing tag2 in the name:
```
,---
| ~/tag2
| --- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| ---../tag1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../tag4
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| --- ../proj1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
`---
```
In other words, directories are created with all combinations of a file's tags, and each contains a symlink to the actual files having those tags. I have omitted the file type directories, but these would also exist. It looks really messy in type, but I think the effect would be very cool. One could then find a given file along a number of "tag breadcrumbs."
My thoughts so far:
* ls -R in a top directory to get all the file names
* identify those files with a [ and ] in the filename (tagged files)
* with what's left, enter a loop:
+ strip out the proj ID, tags, and extension
+ create all the necessary dirs based on the tags
+ create symlinks to the file in all of the dirs created
**First Solution! 2010-11-3 7p**
Here's my current working code. It only works on files in the top level directory, does not figure out extension types yet, and only works on 2 tags + the project ID for a total of 3 tags per file. It is a hacked manual chug solution but maybe it would help someone see what I'm doing and how this could be muuuuch better:
```
#!/bin/bash
########################
#### User Variables ####
########################
## set top directory for the semantic filer
## example: ~/semantic
## result will be ~/semantic/tag1, ~/semantic/tag2, etc.
top_dir=~/Desktop/semantic
## set document extensions, space separated
## example: "doc odt txt"
doc_ext="doc odt txt"
## set presentation extensions, space separated
pres_ext="ppt odp pptx"
## set image extensions, space separated
img_ext="jpg png gif"
#### End User Variables ####
#####################
#### Begin Script####
#####################
cd $top_dir
ls -1 | (while read fname;
do
if [[ "$fname" == *\[* ]]
then
tag_names=$( echo $fname | sed -e 's/-/ /g' -e 's/_.*\[/ /' -e 's/\].*$//' )
num_tags=$(echo $tag_names | wc -w)
current_tags=( `echo $tag_names | sed -e 's/ /\n/g'` )
echo ${current_tags[0]}
echo ${current_tags[1]}
echo ${current_tags[2]}
case $num_tags in
3)
mkdir -p ./${current_tags[0]}/${current_tags[1]}/${current_tags[2]}
mkdir -p ./${current_tags[0]}/${current_tags[2]}/${current_tags[1]}
mkdir -p ./${current_tags[1]}/${current_tags[0]}/${current_tags[2]}
mkdir -p ./${current_tags[1]}/${current_tags[2]}/${current_tags[0]}
mkdir -p ./${current_tags[2]}/${current_tags[0]}/${current_tags[1]}
mkdir -p ./${current_tags[2]}/${current_tags[1]}/${current_tags[0]}
cd "$top_dir/${current_tags[0]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[1]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[2]}/$fname"
cd "$top_dir/${current_tags[1]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[0]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[2]}/$fname"
cd "$top_dir/${current_tags[2]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[0]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[1]}/$fname"
cd "$top_dir"
;;
esac
fi
done
)
```
It's actually pretty neat. If you want to try it, do this:
* create a dir somewhere
* use touch to create a bunch of files with the format above: proj\_name[tag1-tag2].ext
* define the top\_dir variable
* run the script
* play around!
**ToDo**
* make this work using an "ls -R" in order to get into sub-dirs in my actual tree
* robustness check
* consider switching languages; hey, I've always wanted to learn perl and/or python!
Still open to any suggestions you have. Thanks!
|
2010/11/02
|
[
"https://Stackoverflow.com/questions/4081230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/495152/"
] |
>
> A privileged client can invoke the private constructor reflectively with
> the aid of the
> AccessibleObject.setAccessible method.
> If you need to defend against this, modify the
> constructor. My question is: How
> exactly can a private constructor be
> invoked? And what is
> AccessibleObject.setAccessible?
>
>
>
You can use Java reflection to call private constructors.
>
> What approach do you experts follow with singletons:
>
>
>
I am a fan of using an enum to actually do this. This is in the book also. If that's not an option, it is much simpler to do option (a) because you don't have to check or worry about the instance already being created.
>
> What if I try to clone the
> class/object?
>
>
>
Not sure what you mean. Do you mean `clone()` or something else that I don't know of?
>
> a single-element enum type is the best way to implement a singleton.
> Why? and How?
>
>
>
Ahh my own answer. haha. This is the best way because in this case, the java programming language guarantees a singleton instead of the developer having to check for a singleton. It's almost as if the singleton was part of the framework/language.
Edit:
I didn't see that it was a getter before. Updating my answer to this - it's better to use a function such as getInstance because you can control what happens through a getter, but you can't do the same if everybody is using a reference directly instead. Think of the future: if you ended up doing SomeClass.INSTANCE and then later you wanted to make it lazy so it doesn't load right away, you would need to change this everywhere it is being used.
|
The first rule of the Singleton (Anti-) Pattern is *don't use it*. The second rule is *don't use it* for the purpose of making it easy to get at a single instance of a class that you want multiple other objects to share, *especially* if it is a dependency of those classes. Use dependency injection instead. There are valid uses for Singletons, but people have a tendency to badly misuse them because they're so "easy" to use. They make it very difficult to test classes that depend on them and make systems inflexible.
As far as your questions, I think **1**, **2** and **3** can all be answered by saying "use an `enum`-singleton". Then you don't need to worry about constructor accessibility issues, `clone()`ing, etc. As far as **4**, the above is a good argument for them. I also have to ask, did you read the section in Effective Java that explicitly answers your question?
|
4,081,230
|
**Update 2010-11-02 7p:** Shortened description; posted initial bash solution.
---
**Description**
I'd like to create a semantic file structure to better organize my data. I don't want to go a route like recoll, strigi, or beagle; I want no gui and full control. The closest might be oyepa or even closer, [Tagsistant](http://www.tagsistant.net/index.php).
Here's the idea: one maintains a "regular" tree of their files. For example, mine are organized in project folders like this:
```
,---
| ~/proj1
| ---- ../proj1_file1[tag1-tag2].ext
| ---- ../proj1_file2[tag3]_yyyy-mm-dd.ext
| ~/proj2
| ---- ../proj2_file3[tag2-tag4].ext
| ---- ../proj1_file4[tag1].ext
`---
```
proj1, proj2 are very short abbreviations I have for my projects.
Then what I want to do is recursively go through the directory and get the following:
* proj ID
* tags
* extension
Together, these form a complete "tag list" for each file.
Then in a user-defined directory, a "semantic hierarchy" will be created based on these tags. This gets a bit long, so just take a look at the directory structure created for all files containing tag2 in the name:
```
,---
| ~/tag2
| --- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| ---../tag1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../tag4
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| --- ../proj1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
`---
```
In other words, directories are created for every ordering of a file's tags, and each contains a symlink to the actual files having those tags. I have omitted the file type directories, but these would also exist. It looks really messy in type, but I think the effect would be very cool. One could then find a given file along a number of "tag bread crumbs."
My thoughts so far:
* ls -R in a top directory to get all the file names
* identify those files with a [ and ] in the filename (tagged files)
* with what's left, enter a loop:
+ strip out the proj ID, tags, and extension
+ create all the necessary dirs based on the tags
+ create symlinks to the file in all of the dirs created
**First Solution! 2010-11-3 7p**
Here's my current working code. It only works on files in the top level directory, does not figure out extension types yet, and only works on 2 tags + the project ID for a total of 3 tags per file. It is a hacked manual chug solution but maybe it would help someone see what I'm doing and how this could be muuuuch better:
```
#!/bin/bash
########################
#### User Variables ####
########################
## set top directory for the semantic filer
## example: ~/semantic
## result will be ~/semantic/tag1, ~/semantic/tag2, etc.
top_dir=~/Desktop/semantic
## set document extensions, space separated
## example: "doc odt txt"
doc_ext="doc odt txt"
## set presentation extensions, space separated
pres_ext="ppt odp pptx"
## set image extensions, space separated
img_ext="jpg png gif"
#### End User Variables ####
#####################
#### Begin Script####
#####################
cd "$top_dir" || exit 1
ls -1 | (while read -r fname;
do
if [[ $fname == *\[* ]]
then
# turn "proj1_file1[tag1-tag2].ext" into "proj1 tag1 tag2"
tag_names=$( echo "$fname" | sed -e 's/-/ /g' -e 's/_.*\[/ /' -e 's/\].*$//' )
num_tags=$(echo "$tag_names" | wc -w)
current_tags=( $tag_names )   # word splitting into the array is intended here
echo "${current_tags[0]}"
echo "${current_tags[1]}"
echo "${current_tags[2]}"
case $num_tags in
3)
mkdir -p "./${current_tags[0]}/${current_tags[1]}/${current_tags[2]}"
mkdir -p "./${current_tags[0]}/${current_tags[2]}/${current_tags[1]}"
mkdir -p "./${current_tags[1]}/${current_tags[0]}/${current_tags[2]}"
mkdir -p "./${current_tags[1]}/${current_tags[2]}/${current_tags[0]}"
mkdir -p "./${current_tags[2]}/${current_tags[0]}/${current_tags[1]}"
mkdir -p "./${current_tags[2]}/${current_tags[1]}/${current_tags[0]}"
cd "$top_dir/${current_tags[0]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[1]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[2]}/$fname"
cd "$top_dir/${current_tags[1]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[0]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[2]}/$fname"
cd "$top_dir/${current_tags[2]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[0]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[1]}/$fname"
cd "$top_dir"
;;
esac
fi
done
)
```
It's actually pretty neat. If you want to try it, do this:
* create a dir somewhere
* use touch to create a bunch of files with the format above: proj\_name[tag1-tag2].ext
* define the top\_dir variable
* run the script
* play around!
**ToDo**
* make this work using an "ls -R" in order to get into sub-dirs in my actual tree
* robustness check
* consider switching languages; hey, I've always wanted to learn perl and/or python!
Still open to any suggestions you have. Thanks!
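Since the ToDo mentions trying another language, here is a rough sketch of the core idea in Java (purely illustrative; the class and method names are invented): parse the project ID and tags out of a filename, then emit every ordering of those tags as a directory path.

```java
import java.util.*;

// Parse "proj1_file1[tag1-tag2].ext" into its tag list (project ID first),
// then print every ordering of the tags as a slash-separated path.
class TagPaths {
    // Extract [projID, tag1, tag2, ...] from a tagged filename.
    static List<String> tagsOf(String fname) {
        int us = fname.indexOf('_');
        int lb = fname.indexOf('[');
        int rb = fname.indexOf(']');
        if (us < 0 || lb < 0 || rb < lb) return Collections.emptyList();
        List<String> tags = new ArrayList<>();
        tags.add(fname.substring(0, us));                            // project ID
        tags.addAll(Arrays.asList(fname.substring(lb + 1, rb).split("-")));
        return tags;
    }

    // Recursively build every permutation of the remaining tags.
    static void permute(List<String> rest, Deque<String> path, List<String> out) {
        if (rest.isEmpty()) {
            out.add(String.join("/", path));
            return;
        }
        for (String t : rest) {
            List<String> next = new ArrayList<>(rest);
            next.remove(t);
            path.addLast(t);
            permute(next, path, out);
            path.removeLast();
        }
    }

    public static void main(String[] args) {
        List<String> out = new ArrayList<>();
        permute(tagsOf("proj1_file1[tag1-tag2].ext"), new ArrayDeque<>(), out);
        out.forEach(System.out::println); // prints the 3! = 6 orderings
    }
}
```

Each emitted path would become one `mkdir -p` target, with a symlink to the real file placed at every level.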
|
2010/11/02
|
[
"https://Stackoverflow.com/questions/4081230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/495152/"
] |
>
> A privileged client can invoke the private constructor reflectively with the aid of the AccessibleObject.setAccessible method. If you need to defend against this, modify the constructor. My question is: How exactly can a private constructor be invoked? And what is AccessibleObject.setAccessible?
>
>
>
You can use Java reflection to call private constructors.
>
> What approach do you experts follow with singletons:
>
>
>
I am a fan of using an enum to do this; it is in the book as well. If that's not an option, option (a) is much simpler because you don't have to check for, or worry about, the instance having already been created.
>
> What if I try to clone the
> class/object?
>
>
>
Not sure what you mean. Do you mean `clone()`, or something else that I don't know of?
>
> a single-element enum type is the best way to implement a singleton.
> Why? and How?
>
>
>
Ahh, my own answer, haha. This is the best way because the Java programming language itself guarantees a single instance, instead of the developer having to enforce it. It's almost as if the singleton were part of the framework/language.
Edit:
I didn't see that it was a getter before. Updating my answer to this: it's better to use a function such as getInstance because you can control what happens inside the getter, which you can't do if everybody uses the field reference directly. Think of the future: if you exposed SomeClass.INSTANCE and later wanted to make it lazy so it doesn't load right away, you would need to change every place it is used.
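For reference, a minimal enum singleton might look like this (a sketch; the type and method names are invented for illustration, not from the book):

```java
// A minimal enum singleton: the JVM guarantees exactly one INSTANCE,
// even against reflection and serialization.
enum Counter {
    INSTANCE;

    private int value = 0;

    // Instance methods work as on any other class.
    int next() {
        return ++value;
    }
}

class EnumSingletonDemo {
    public static void main(String[] args) {
        // Both references point at the same object.
        System.out.println(Counter.INSTANCE == Counter.INSTANCE); // true
        System.out.println(Counter.INSTANCE.next());
        System.out.println(Counter.INSTANCE.next());
    }
}
```

Note that with an enum you give up lazy initialization control in exchange for the language-level guarantee.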
|
Example singleton lazy init:
Class main:
```
public class Main {
public static void main(String[] args) {
System.out.println(Singleton.getInstance("first").value);
System.out.println(Singleton.getInstance("second").value);
System.out.println(Singleton.getInstance("third").value);
}
}
```
Class singleton:
```
public class Singleton {
private static Singleton instance;
public String value;
private Singleton (String s){
this.value =s;
}
public static Singleton getInstance(String param) {
if (instance == null)
instance = new Singleton(param);
return instance;
}
}
```
When the application starts, the console will contain the following lines (the later parameters are ignored, because the instance is created only once):
```
first
first
first
```
|
>
> A privileged client can invoke the private constructor reflectively with the aid of the AccessibleObject.setAccessible method. If you need to defend against this, modify the constructor. My question is: How exactly can a private constructor be invoked? And what is AccessibleObject.setAccessible?
>
>
>
Obviously a private constructor can be invoked by the class itself (e.g. from a static factory method). Reflectively, what Bloch is talking about is this:
```
import java.lang.reflect.Constructor;
public class PrivateInvoker {
public static void main(String[] args) throws Exception{
//compile error
// Private p = new Private();
//works fine
Constructor<?> con = Private.class.getDeclaredConstructors()[0];
con.setAccessible(true);
Private p = (Private) con.newInstance();
}
}
class Private {
private Private() {
System.out.println("Hello!");
}
}
```
>
> 2.What approach do you experts follow with singletons:
>
>
> ...
>
>
>
Typically, the first is favoured. The second (assuming you were to test if `TestInstance` is null before returning a new instance) gains lazy-loading at the cost of needing to be synchronized or being thread-unsafe.
I wrote the above when your second example didn't assign the instance to `TestInstance` at declaration. As stated now, the above consideration is irrelevant.
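If lazy loading ever does matter, a common way to get it without synchronizing every call is the initialization-on-demand holder idiom (a sketch; class names are invented):

```java
// Lazy AND thread-safe without synchronized: the JVM loads the nested
// Holder class, and thus creates INSTANCE, only on the first getInstance call.
class LazySingleton {
    private LazySingleton() { }

    private static class Holder {
        static final LazySingleton INSTANCE = new LazySingleton();
    }

    static LazySingleton getInstance() {
        return Holder.INSTANCE;
    }
}

class HolderDemo {
    public static void main(String[] args) {
        System.out.println(LazySingleton.getInstance() == LazySingleton.getInstance()); // true
    }
}
```

Class initialization is serialized by the JVM, which is what makes this safe without explicit locking.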
>
> Isn't the second approach more flexible in case we have to make a check for new instance every time or same instance every time?
>
>
>
It's not about flexibility, it's about when the cost of creating the one (and only) instance is incurred. If you do option a) it's incurred at class loading time. That's typically fine since the class is only loaded once it's needed anyway.
I wrote the above when your second example didn't assign the instance to `TestInstance` at declaration. As stated now, in both cases the Singleton will be created at class load.
>
> What if I try to clone the class/object?
>
>
>
A singleton should not allow cloning, for obvious reasons. A CloneNotSupportedException should be thrown, and it will be thrown automatically unless you, for some reason, implement `Cloneable`.
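A sketch of what that defense looks like for a non-enum singleton (class name invented; an enum singleton makes this moot):

```java
// Defensive clone() for a non-enum singleton: the class does not implement
// Cloneable, and the override refuses cloning explicitly in any case.
class CloneSafeSingleton {
    static final CloneSafeSingleton INSTANCE = new CloneSafeSingleton();

    private CloneSafeSingleton() { }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException("singleton");
    }
}

class CloneDemo {
    public static void main(String[] args) {
        try {
            CloneSafeSingleton.INSTANCE.clone(); // same-package access to protected clone()
        } catch (CloneNotSupportedException e) {
            System.out.println("clone refused"); // always taken
        }
    }
}
```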
>
> a single-element enum type is the best way to implement a singleton. Why? and How?
>
>
>
Examples for this are in the book, as are justifications. What part didn't you understand?
|
Singletons are a good pattern to learn, especially as an introductory design pattern. Beware, however, that they often end up being one of the most [over-used patterns](http://aabs.wordpress.com/2007/03/08/singleton-%E2%80%93-the-most-overused-pattern/). It's gotten to the point that some consider them an ["anti-pattern"](http://accu.org/index.php/journals/337). The best advice is to ["use them wisely."](http://www.ibm.com/developerworks/webservices/library/co-single.html)
For a great introduction to this, and many other useful patterns (personally, I find Strategy, Observer, and Command to be far more useful than Singleton), check out [Head First Design Patterns](https://rads.stackoverflow.com/amzn/click/com/0596007124).
|
The first rule of the Singleton (Anti-) Pattern is *don't use it*. The second rule is *don't use it* for the purpose of making it easy to get at a single instance of a class that you want multiple other objects to share, *especially* if it is a dependency of those classes. Use dependency injection instead. There are valid uses for Singletons, but people have a tendency to badly misuse them because they're so "easy" to use. They make it very difficult to test classes that depend on them and make systems inflexible.
As far as your questions, I think **1**, **2** and **3** can all be answered by saying "use an `enum`-singleton". Then you don't need to worry about constructor accessibility issues, `clone()`ing, etc. As far as **4**, the above is a good argument for them. I also have to ask: did you read the section in Effective Java that explicitly answers your question?
|
>
> A privileged client can invoke the private constructor reflectively with the aid of the AccessibleObject.setAccessible method. If you need to defend against this, modify the constructor. My question is: how exactly can a private constructor be invoked? And what is AccessibleObject.setAccessible?
>
>
>
Obviously a private constructor can be invoked by the class itself (e.g. from a static factory method). Reflectively, what Bloch is talking about is this:
```
import java.lang.reflect.Constructor;

public class PrivateInvoker {
    public static void main(String[] args) throws Exception {
        // compile error:
        // Private p = new Private();

        // works fine:
        Constructor<?> con = Private.class.getDeclaredConstructors()[0];
        con.setAccessible(true);
        Private p = (Private) con.newInstance();
    }
}

class Private {
    private Private() {
        System.out.println("Hello!");
    }
}
```
>
> 2. What approach do you experts follow with singletons:
>
>
> ...
>
>
>
Typically, the first is favoured. The second (assuming you were to test if `TestInstance` is null before returning a new instance) gains lazy-loading at the cost of needing to be synchronized or being thread-unsafe.
I wrote the above when your second example didn't assign the instance to `TestInstance` at declaration. As stated now, the above consideration is irrelevant.
>
> Isn't the second approach more flexible in case we have to make a check for new instance every time or same instance every time?
>
>
>
It's not about flexibility, it's about when the cost of creating the one (and only) instance is incurred. If you do option a) it's incurred at class loading time. That's typically fine since the class is only loaded once it's needed anyway.
I wrote the above when your second example didn't assign the instance to `TestInstance` at declaration. As stated now, in both cases the Singleton will be created at class load.
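For comparison, the lazy-plus-synchronized trade-off mentioned above can be sketched in Python (a sketch with names of my own, not from the book):

```python
import threading

class LazySingleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        # Double-checked locking: pay for the lock only until the
        # instance exists; afterwards reads skip it entirely.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance

assert LazySingleton.get_instance() is LazySingleton.get_instance()
```

In Java the same shape additionally needs the field to be `volatile` (or an inner holder class) to be safe; in CPython the unlocked first read is benign because of the GIL.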
>
> What if I try to clone the class/object?
>
>
>
A singleton should not allow cloning, for obvious reasons. A CloneNotSupportedException should be thrown, and will be thrown automatically unless you implement `Cloneable` for some reason.
>
> a single-element enum type is the best way to implement a singleton. Why? and How?
>
>
>
Examples for this are in the book, as are justifications. What part didn't you understand?
|
The first rule of the Singleton (Anti-) Pattern is *don't use it*. The second rule is *don't use it* for the purpose of making it easy to get at a single instance of a class that you want multiple other objects to share, *especially* if it is a dependency of those classes. Use dependency injection instead. There are valid uses for Singletons, but people have a tendency to badly misuse them because they're so "easy" to use. They make it very difficult to test classes that depend on them and make systems inflexible.
As far as your questions, I think **1**, **2** and **3** can all be answered by saying "use an `enum`-singleton". Then you don't need to worry about constructor accessibility issues, `clone()`ing, etc. As far as **4**, the above is a good argument for them. I also have to ask, did you read the section in Effective Java that explicitly answers your question?
|
4,081,230
|
**Update 2010-11-02 7p:** Shortened description; posted initial bash solution.
---
**Description**
I'd like to create a semantic file structure to better organize my data. I don't want to go a route like recoll, strigi, or beagle; I want no GUI and full control. The closest might be oyepa, or even closer, [Tagsistant](http://www.tagsistant.net/index.php).
Here's the idea: one maintains a "regular" tree of their files. For example, mine are organized in project folders like this:
```
,---
| ~/proj1
| ---- ../proj1_file1[tag1-tag2].ext
| ---- ../proj1_file2[tag3]_yyyy-mm-dd.ext
| ~/proj2
| ---- ../proj2_file3[tag2-tag4].ext
| ---- ../proj1_file4[tag1].ext
`---
```
proj1, proj2 are very short abbreviations I have for my projects.
Then what I want to do is recursively go through the directory and get the following:
* proj ID
* tags
* extension
Together, these form a complete "tag list" for each file.
Then in a user-defined directory, a "semantic hierarchy" will be created based on these tags. This gets a bit long, so just take a look at the directory structure created for all files containing tag2 in the name:
```
,---
| ~/tag2
| --- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| ---../tag1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../tag4
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| --- ../proj1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
`---
```
In other words, directories are created with all combinations of a file's tags, and each contains a symlink to the actual files having those tags. I have omitted the file type directories, but these would also exist. It looks really messy in type, but I think the effect would be very cool. One could then find a given file along a number of "tag bread crumbs."
My thoughts so far:
* ls -R in a top directory to get all the file names
* identify those files with a [ and ] in the filename (tagged files)
* with what's left, enter a loop:
+ strip out the proj ID, tags, and extension
+ create all the necessary dirs based on the tags
+ create symlinks to the file in all of the dirs created
**First Solution! 2010-11-3 7p**
Here's my current working code. It only works on files in the top level directory, does not figure out extension types yet, and only works on 2 tags + the project ID for a total of 3 tags per file. It is a hacked manual chug solution but maybe it would help someone see what I'm doing and how this could be muuuuch better:
```
#!/bin/bash
########################
#### User Variables ####
########################
## set top directory for the semantic filer
## example: ~/semantic
## result will be ~/semantic/tag1, ~/semantic/tag2, etc.
top_dir=~/Desktop/semantic
## set document extensions, space separated
## example: "doc odt txt"
doc_ext="doc odt txt"
## set presentation extensions, space separated
pres_ext="ppt odp pptx"
## set image extensions, space separated
img_ext="jpg png gif"
#### End User Variables ####
#####################
#### Begin Script####
#####################
cd $top_dir
ls -1 | (while read fname;
do
if [[ $fname == *\[* ]]
then
tag_names=$( echo "$fname" | sed -e 's/-/ /g' -e 's/_.*\[/ /' -e 's/\].*$//' )
num_tags=$( echo "$tag_names" | wc -w )
current_tags=( $tag_names )
echo ${current_tags[0]}
echo ${current_tags[1]}
echo ${current_tags[2]}
case $num_tags in
3)
mkdir -p "./${current_tags[0]}/${current_tags[1]}/${current_tags[2]}"
mkdir -p "./${current_tags[0]}/${current_tags[2]}/${current_tags[1]}"
mkdir -p "./${current_tags[1]}/${current_tags[0]}/${current_tags[2]}"
mkdir -p "./${current_tags[1]}/${current_tags[2]}/${current_tags[0]}"
mkdir -p "./${current_tags[2]}/${current_tags[0]}/${current_tags[1]}"
mkdir -p "./${current_tags[2]}/${current_tags[1]}/${current_tags[0]}"
cd "$top_dir/${current_tags[0]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[1]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[2]}/$fname"
cd "$top_dir/${current_tags[1]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[0]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[2]}/$fname"
cd "$top_dir/${current_tags[2]}"
echo "$PWD"
ln -s "$top_dir/$fname"
ln -s "$top_dir/$fname" "./${current_tags[0]}/$fname"
ln -s "$top_dir/$fname" "./${current_tags[1]}/$fname"
cd "$top_dir"
;;
esac
fi
done
)
```
It's actually pretty neat. If you want to try it, do this:
* create a dir somewhere
* use touch to create a bunch of files with the format above: proj\_name[tag1-tag2].ext
* define the top\_dir variable
* run the script
* play around!
**ToDo**
* make this work using an "ls -R" in order to get into sub-dirs in my actual tree
* robustness check
* consider switching languages; hey, I've always wanted to learn perl and/or python!
Still open to any suggestions you have. Thanks!
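Since the ToDo list mentions moving to Python, here is a hedged sketch of how the recursive walk plus tag permutations could look. The paths and the exact filename regex are my assumptions, derived from the examples above:

```python
import itertools
import os
import re

# Assumed locations -- adjust to your own tree.
SOURCE_DIR = os.path.expanduser("~/projects")
SEMANTIC_DIR = os.path.expanduser("~/Desktop/semantic")

# proj_name[tag1-tag2].ext  ->  ("proj", ["tag1", "tag2"], "ext")
NAME_RE = re.compile(r"^([^_\[]+)_[^\[]*\[([^\]]+)\](?:_[^.]*)?\.(\w+)$")

def parse_tags(fname):
    """Return (proj_id, tags, extension), or None for untagged files."""
    m = NAME_RE.match(fname)
    if m is None:
        return None
    return m.group(1), m.group(2).split("-"), m.group(3)

def link_file(target, fname, tags):
    """Turn every ordering of every subset of tags into a directory chain,
    and drop a symlink to the file at each level."""
    for n in range(1, len(tags) + 1):
        for combo in itertools.permutations(tags, n):
            d = os.path.join(SEMANTIC_DIR, *combo)
            os.makedirs(d, exist_ok=True)
            link = os.path.join(d, fname)
            if not os.path.lexists(link):
                os.symlink(target, link)

def build():
    # os.walk replaces the "ls -R" ToDo item.
    for root, _dirs, files in os.walk(SOURCE_DIR):
        for fname in files:
            parsed = parse_tags(fname)
            if parsed is not None:
                proj, tags, _ext = parsed
                link_file(os.path.join(root, fname), fname, [proj] + tags)
```

`parse_tags` handles both sample shapes above (`proj1_file1[tag1-tag2].ext` and `proj1_file2[tag3]_yyyy-mm-dd.ext`, with the project ID treated as an extra tag); call `build()` to generate the tree.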
|
2010/11/02
|
[
"https://Stackoverflow.com/questions/4081230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/495152/"
] |
|
Example singleton lazy init:
Class main:
```
public class Main {
    public static void main(String[] args) {
        System.out.println(Singleton.getInstance("first").value);
        System.out.println(Singleton.getInstance("second").value);
        System.out.println(Singleton.getInstance("third").value);
    }
}
```
Class singleton:
```
public class Singleton {
    private static Singleton instance;
    public String value;

    private Singleton(String s) {
        this.value = s;
    }

    public static Singleton getInstance(String param) {
        if (instance == null)
            instance = new Singleton(param);
        return instance;
    }
}
```
When the application starts, the console will contain the following lines:
```
first
first
first
```
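This pitfall (the parameter of every call after the first being silently ignored) is language-independent. Here is a hedged Python sketch of the same lazy initializer, plus one way to fail loudly instead of silently (names are illustrative):

```python
class Singleton:
    _instance = None

    def __init__(self, value):
        self.value = value

    @classmethod
    def get_instance(cls, value):
        if cls._instance is None:
            cls._instance = cls(value)
        elif cls._instance.value != value:
            # Fail loudly instead of silently ignoring the new argument.
            raise ValueError("singleton already initialized with %r"
                             % cls._instance.value)
        return cls._instance

print(Singleton.get_instance("first").value)  # first
```

A parameterized `get_instance` is usually a design smell for exactly this reason: the singleton can only honour one caller's argument.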
|
|
8,342,891
|
I'm very new to Python and have a question. Currently I'm using this to calculate the time between when a message goes out and when it comes back in. The resulting start time, time delta, and unique ID are then written to a file. I'm using Python 2.7.2.
Currently it subtracts the two times and the result is 0.002677777777 (the leading 0.00 parts are hours and minutes), which I don't need. How do I format it so that it starts with seconds (e.g. 59.178533, with 59 being seconds)? Also, sometimes Python will display the number as 8.9999999e-05. How can I force it to always display the full decimal number?
```
def time_deltas(infile):
    entries = (line.split() for line in open(infile, "r"))
    ts = {}
    for e in entries:
        if " ".join(e[2:5]) == "OuchMsg out: [O]":
            ts[e[8]] = e[0]
        elif " ".join(e[2:5]) == "OuchMsg in: [A]":
            in_ts, ref_id = e[0], e[7]
            out_ts = ts.pop(ref_id, None)
            yield (float(out_ts), ref_id[1:-1], float(in_ts) - float(out_ts))

INFILE = 'C:/Users/klee/Documents/Ouch20111130_cartprop04.txt'

import csv
with open('output_file.csv', 'w') as f:
    csv.writer(f).writerows(time_deltas(INFILE))
```
Here's a small sample of the data:
```
61336.206267 - OuchMsg out: [O] enter order. RefID [F25Q282588] OrdID [X1F3500687 ]
```
|
2011/12/01
|
[
"https://Stackoverflow.com/questions/8342891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1057243/"
] |
If `float(out_ts)` is in hours, then `'{0:.3f}'.format(float(out_ts) * 3600.)` will give you the text representation of seconds with three digits after the decimal point.
|
>
> How can I force it to always display the exact number.
>
>
>
This is sometimes not possible because of floating-point inaccuracies.
What you *can* do, on the other hand, is format your floats as you wish. See [here](http://docs.python.org/library/stdtypes.html#string-formatting-operations) for instructions.
E.g., replace your `yield` line with:
```
yield ("%.5f" % float(out_ts), ref_id[1:-1], "%.5f" % (float(in_ts) - float(out_ts)))
```
to round the numeric values to 5 digits after the decimal point (`ref_id` is a string, so it is left unformatted). There are many other formatting options for different formats.
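As a minimal illustration (the two timestamps below are hypothetical values shaped like the sample log line), fixed-point formatting keeps small deltas out of scientific notation:

```python
out_ts = 61336.206267           # hypothetical "out" timestamp
in_ts = 61336.295800            # hypothetical "in" timestamp
delta = in_ts - out_ts

print(delta)                    # raw float repr; tiny values may show as e-05
print("%.6f" % delta)           # 0.089533 -- always fixed-point
print("{0:.6f}".format(delta))  # 0.089533 -- same, str.format style
```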
|
2,036,260
|
>
> **Possible Duplicate:**
>
> [Django Unhandled Exception](https://stackoverflow.com/questions/1925898/django-unhandled-exception)
>
>
>
I'm randomly getting 500 server errors and trying to diagnose the problem. The setup is:
Apache + mod\_python + Django
My 500.html page is being served by Django, but I have no clue what is causing the error. My Apache access.log and error.log files don't contain any valuable debugging info, besides showing the request returned a 500.
Is there a mod\_python or general python error log somewhere (Ubuntu server)?
Thanks!
|
2010/01/10
|
[
"https://Stackoverflow.com/questions/2036260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/87719/"
] |
Yes, you should have an entry in your apache confs as to where the error log is for the virtual server.
For instance the name of my virtual server is djangoserver, and in my /etc/apache2/sites-enabled/djangoserver file is the line
```
ErrorLog /var/log/apache2/djangoserver-errors.log
```
Although now that I reread your question, it appears you already have the apache log taken care of. I don't believe there is any separate log for mod\_python or python itself.
Is this a production set up?
If not, you might wish to turn on [Debug mode](http://docs.djangoproject.com/en/1.1/ref/settings/#debug); Django then produces very detailed screens with the error information. In your project's settings.py, set
```
DEBUG = True
TEMPLATE_DEBUG = DEBUG
```
to disable the generic 500 error screens and see the detailed breakdown.
|
Salsa makes several good suggestions. I would add that the Django development server is an excellent environment for tracking these things down. Sometimes I even run it from the production directory (gasp!) with `./manage.py runserver 0.0.0.0:8000` so I *know* I'm running the same code.
Admittedly sometimes something will fail under Apache and not under the dev server, but that is a hint in and of itself.
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to defaults and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch, or is it a problem with Xcode that a re-install would fix?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```
*2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}*
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
I had a similar problem, where I installed Xcode 11.1, and installed the components and everything within the same folder where I had Xcode 10.2.1. Then, I tried to go back to Xcode 10.2.1 and couldn't open it, as it was asking me to install components again, and when I tried I was getting this error.
>
> The package “MobileDeviceDevelopment.pkg” is untrusted.
>
>
>
So, the workaround that fixed it for me was navigating to...
```
/Users/YourUser/Applications/Xcode\ 10.2.1.app/Contents/Resources/
```
Then, deleting **MobileDeviceDevelopment.pkg** and everything went back to normal :)
I hope this helps anyone else with this issue. Cheers!
|
You may solve this issue by setting the date of your Mac to October 1st, 2019, but this is just a hack! The real solution (suggested by Apple) is this:
All you have to do is upgrade Xcode
-----------------------------------
**But** there is a [known Issues on apple developers site](https://developer.apple.com/documentation/xcode_release_notes/xcode_11_1_release_notes#3400988)
>
> Xcode may fail to update from the Mac App Store after updating to macOS Catalina. (56061273)
>
>
>
Apple suggests this:
>
> To trigger a new download you can delete the existing Xcode.app or temporarily change the file extension so it is no longer visible to the App Store.
>
>
>
---
Always working solution for all Xcode issues:
---------------------------------------------
1. Go [here](http://developer.apple.com/account) and log in.
2. Then [**download the xip from here**](https://developer.apple.com/services-account/download?path=/Developer_Tools/Xcode_11.2.1/Xcode_11.2.1.xip).
More information [here on this answer](https://stackoverflow.com/a/58285944/5623035).
---
Answer to this specific issue
-----------------------------
Get rid of those packages.
```
cd /Applications/Xcode.app/Contents/Resources/Packages
sudo rm -rf MobileDevice.pkg
sudo rm -rf MobileDeviceDevelopment.pkg
```
Xcode will install all of them again for you.
|
58,550,284
|
On macOS Catalina:
```
cd /Applications/Xcode.app/Contents/Resources/Packages
sudo rm -rf MobileDevice.pkg
sudo rm -rf MobileDeviceDevelopment.pkg
```
Try again.
This removes the packages that Xcode downloaded. I don't know exactly how Apple handles it, but if you remove them, Xcode will download them again and revalidate.
**One remark: I'm on Xcode Version 11.0 (11A420a); if you are not, this is not guaranteed to work.**
|
For me, I just uninstalled (deleted the app from the Applications folder) and then went back to app store and clicked the cloud icon and it downloaded fresh and installed. Now all is good and back to normal.
|
58,550,284
|
Try to run `Xcode-beta` instead of `Xcode` to install the additional components. After that you'll be able to use the `Xcode` release.
|
This requires Xcode 11.1 to be installed.
I was not able to update to Xcode 11.1 until I updated macOS Catalina to 10.15.1. After updating my macOS, I was able to install Xcode 11.1, which also allowed the installation of the additional components package.
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to default and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch or it is a problem of Xcode and re-install would do the job?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```
2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
I had a similar problem, where I installed Xcode 11.1 and its components in the same folder where I had Xcode 10.2.1. Then, when I tried to go back to Xcode 10.2.1, it couldn't open, as it was asking me to install components again, and when I tried I was getting this error.
>
> The package “MobileDeviceDevelopment.pkg” is untrusted.
>
>
>
So, the workaround that fixed it for me was navigating to...
```
/Users/YourUser/Applications/Xcode\ 10.2.1.app/Contents/Resources/
```
Then I deleted **MobileDeviceDevelopment.pkg** and everything went back to normal :)
I hope this helps anyone else with this issue. Cheers!
|
This requires Xcode 11.1 to be installed.
I was not able to update to Xcode 11.1 until I updated macOS Catalina to 10.15.1. After updating my macOS, I was able to install Xcode 11.1, which also allowed the installation of the additional components package.
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to default and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch or it is a problem of Xcode and re-install would do the job?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```
2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
```
rm -rf /Applications/Xcode.app/Contents/Resources/Packages/*.pkg
```
This will work; then re-open Xcode.
|
This requires Xcode 11.1 to be installed.
I was not able to update to Xcode 11.1 until I updated macOS Catalina to 10.15.1. After updating my macOS, I was able to install Xcode 11.1, which also allowed the installation of the additional components package.
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to default and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch or it is a problem of Xcode and re-install would do the job?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```
2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
At macOS Catalina
```
cd /Applications/Xcode.app/Contents/Resources/Packages
sudo rm -rf MobileDevice.pkg
sudo rm -rf MobileDeviceDevelopment.pkg
```
Try again.
This removes the packages that Xcode downloaded; once you delete them, Xcode will download them again and revalidate them. I don't really understand how Apple does it, but it works.
**Some remarks: I'm on Xcode Version 11.0 (11A420a); if you are not, this is not guaranteed to work.**
|
Here's what I did to resolve:
Right-click Xcode.app > Show Package Contents > Contents > Developer > Platforms > iPhoneOS.platform > Device Support
I am on Xcode 10.2.1. I had downloaded a 13.7 folder and its contents from an external GitHub site and imported that folder here for running my app on a physical iPhone XR. I am prevented from upgrading to Catalina on my dev machine. Deleting the 13.7 folder and then re-launching Xcode resolved the issue for me.
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to default and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch or it is a problem of Xcode and re-install would do the job?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```
2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
At macOS Catalina
```
cd /Applications/Xcode.app/Contents/Resources/Packages
sudo rm -rf MobileDevice.pkg
sudo rm -rf MobileDeviceDevelopment.pkg
```
Try again.
This removes the packages that Xcode downloaded; once you delete them, Xcode will download them again and revalidate them. I don't really understand how Apple does it, but it works.
**Some remarks: I'm on Xcode Version 11.0 (11A420a); if you are not, this is not guaranteed to work.**
|
Reinstall Xcode 11.1 from <https://developer.apple.com/download/more/> . Afterwards the update works.
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to default and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch or it is a problem of Xcode and re-install would do the job?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```
2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
You may solve this issue by setting the date of your Mac to October 1st, 2019, but this is just a hack! The real solution (suggested by Apple) is this:
All you have to do is upgrade Xcode
-----------------------------------
**But** there is a [known Issues on apple developers site](https://developer.apple.com/documentation/xcode_release_notes/xcode_11_1_release_notes#3400988)
>
> Xcode may fail to update from the Mac App Store after updating to macOS Catalina. (56061273)
>
>
>
Apple suggests this:
>
> To trigger a new download you can delete the existing Xcode.app or temporarily change the file extension so it is no longer visible to the App Store.
>
>
>
---
Always working solution for all Xcode issues:
---------------------------------------------
1. Go [here](http://developer.apple.com/account) and log in.
2. Then [**download the .xip from here**](https://developer.apple.com/services-account/download?path=/Developer_Tools/Xcode_11.2.1/Xcode_11.2.1.xip).
More information is available [in this answer](https://stackoverflow.com/a/58285944/5623035).
---
Answer to this specific issue
-----------------------------
Get rid of those packages.
```
cd /Applications/Xcode.app/Contents/Resources/Packages
sudo rm -rf MobileDevice.pkg
sudo rm -rf MobileDeviceDevelopment.pkg
```
Xcode will install all of them again for you.
|
For me, I just uninstalled (deleted the app from the Applications folder) and then went back to app store and clicked the cloud icon and it downloaded fresh and installed. Now all is good and back to normal.
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to default and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch or it is a problem of Xcode and re-install would do the job?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```
2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
I had a similar problem, where I installed Xcode 11.1 and its components in the same folder where I had Xcode 10.2.1. Then, when I tried to go back to Xcode 10.2.1, it couldn't open, as it was asking me to install components again, and when I tried I was getting this error.
>
> The package “MobileDeviceDevelopment.pkg” is untrusted.
>
>
>
So, the workaround that fixed it for me was navigating to...
```
/Users/YourUser/Applications/Xcode\ 10.2.1.app/Contents/Resources/
```
Then I deleted **MobileDeviceDevelopment.pkg** and everything went back to normal :)
I hope this helps anyone else with this issue. Cheers!
|
For me, I just uninstalled (deleted the app from the Applications folder) and then went back to app store and clicked the cloud icon and it downloaded fresh and installed. Now all is good and back to normal.
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to default and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch or it is a problem of Xcode and re-install would do the job?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```
2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
Set the date of your Mac to October 1st, 2019.
|
I had a similar problem, where I installed Xcode 11.1 and its components in the same folder where I had Xcode 10.2.1. Then, when I tried to go back to Xcode 10.2.1, it couldn't open, as it was asking me to install components again, and when I tried I was getting this error.
>
> The package “MobileDeviceDevelopment.pkg” is untrusted.
>
>
>
So, the workaround that fixed it for me was navigating to...
```
/Users/YourUser/Applications/Xcode\ 10.2.1.app/Contents/Resources/
```
Then I deleted **MobileDeviceDevelopment.pkg** and everything went back to normal :)
I hope this helps anyone else with this issue. Cheers!
|
30,668,557
|
I have a problem loading an external dll using Python through Python for .NET. I have tried different methodologis following stackoverflow and similar. I will try to summarize the situation and to describe all the steps that I've done.
I have a DLL named, for example, Test.NET.dll. I checked it with dotPeek and, clicking on it, I can see x64 and .NET Framework v4.5. On my computer I have installed the .NET Framework 4.
I have also installed Python for .NET in different ways. I think that the best one is to download the .whl from this website: [LINK](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pythonnet). I have downloaded and installed pythonnet‑2.0.0.dev1‑cp27‑none‑win\_amd64.whl. I imagine it will work with .NET 4.0, since it *requires the Microsoft .NET Framework 4.0.*
Once I have installed everything, I can do these commands:
```
>>> import clr
>>> import System
>>> print System.Environmnet.Version
>>> print System.Environment.Version
4.0.30319.34209
```
It seems to work. Then, I tried to load my DLL by typing these commands:
```
>>> import clr
>>> dllpath= r'C:\Program Files\API\Test.NET'
>>> clr.AddReference(dllpath)
Traceback (most recent call last):
File "<pyshell#20>", line 1, in <module>
clr.AddReference(dllpath)
FileNotFoundException: Unable to find assembly 'C:\Program Files\API\Test.NET'.
at Python.Runtime.CLRModule.AddReference(String name)
```
I have also tried to add '.dll' at the end of the path but nothing changed. Then, I have also tried different solutions as described in [LINK](https://stackoverflow.com/questions/14633695/how-to-install-python-for-net-on-windows), [LINK](https://stackoverflow.com/questions/13259617/python-for-net-unable-to-find-assembly-error), [LINK](https://github.com/geographika/PythonDotNet), and much more.... Unfortunately, it doesn't work and I get different errors. I know that exists IronPython but I was trying to avoid to use it.
Thanks for your help!
|
2015/06/05
|
[
"https://Stackoverflow.com/questions/30668557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3075816/"
] |
>
> The return value of this decorator is ignored.
>
>
>
This is irrelevant to your situation. The return value of parameter decorators is ignored because they don't need to be able to replace anything (unlike method and class decorators which can replace the descriptor).
>
> My problem is that when I try to implement the decorator. **All the changes that I do to the target are ignored.**
>
>
>
The target is the object's prototype. It works fine:
```
class MyClass {
myMethod(@logParam myParameter: string) {}
}
function logParam(target: any, methodKey: string, parameterIndex: number) {
target.test = methodKey;
// and parameterIndex says which parameter
}
console.log(MyClass.prototype["test"]); // myMethod
var c = new MyClass();
console.log(c["test"]); // myMethod
```
Just change that situation to put the data where you want it (using `Reflect.defineMetadata` is probably best).
|
For my case I used a simple solution **without reflect metadata** API but **using method decoration**.
You can modify it for your purposes:
```
type THandler = (param: any, paramIndex: number, params: any[]) => void;
/**
* @example
* @Params((param1) => someHandlerFn(param1), ...otherParams)
* public async someMethod(param1, ...otherParams) { ...
*/
function Params(...handlers: THandler[]): MethodDecorator {
return (_target, _propertyKey, descriptor: PropertyDescriptor) => {
const originalMethod = descriptor.value;
descriptor.value = async function (...args: any[]) {
args.forEach((arg, index) => {
const handler = handlers[index];
if (handler) {
handler(arg, index, args);
}
});
return originalMethod.apply(this, args);
};
};
}
```
With usage:
```
export class SomeClass {
@Params(console.log, (param2) => {
if (param2.length < 3) {
console.log('param2 length must be more than 2 symbols');
}
}) // ...add your handlers
async someMethod(param1: string, param2: string) {
// ...some implementation
}
}
```
|
49,369,438
|
I'm trying to create a blobstore entry from an image data-uri object, but am getting stuck.
Basically, I'm posting via ajax the data-uri as text, an example of the payload:
```
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAPA...
```
I'm trying to receive this payload with the following handler. I'm assuming I need to convert the `data-uri` back into an image before storing? So I am using the PIL library.
My python handler is as follows:
```
import os
import urllib
import webapp2
from google.appengine.ext.webapp import template
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
from google.appengine.api import images
class ImageItem(db.Model):
section = db.StringProperty(required=False)
description = db.StringProperty(required=False)
img_url = db.StringProperty()
blob_info = blobstore.BlobReferenceProperty()
when = db.DateTimeProperty(auto_now_add=True)
#Paste upload handler
class PasteUpload(webapp2.RequestHandler):
def post(self):
from PIL import Image
import io
import base64
data = self.request.body
#file_name = data['file_name']
img_data = data.split('data:image/png;base64,')[1]
#Convert base64 to jpeg bytes
f = Image.open(io.BytesIO(base64.b64decode(img_data)))
img = ImageItem(description=self.request.get('description'), section=self.request.get('section') )
img.blob_info = f.key()
img.img_url = images.get_serving_url( f.key() )
img.put()
```
This is likely all kinds of wrong. I get the following error when posting:
```
img.blob_info = f.key()
AttributeError: 'PngImageFile' object has no attribute 'key'
```
What am I doing wrong here? Is there an easier way to do this? I'm guessing I don't need to convert the `data-uri` into an image to store as a blob?
I also want this Handler to return the URL of the image created in the blobstore.
|
2018/03/19
|
[
"https://Stackoverflow.com/questions/49369438",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/791793/"
] |
There are a couple of ways to view your question and the sample code you posted, and it's a little confusing what you need because you are mixing strategies and technologies.
**POST base64 to `_ah/upload/...`**
Your service uses `create_upload_url()` to make a one-time upload URL/session for your client. Your client makes a POST to that URL and the data never touches your service (no HTTP-request-size restrictions, no CPU-time spent handling the POST). An App Engine internal "blob service" receives that POST and saves the body as a Blob in the Blobstore. App Engine then hands control back to your service in the `BlobstoreUploadHandler` class you write and then you can determine how you want to respond to the successful POST. In the case of the example/tutorial, `PhotoUploadHandler` redirects the client to the photo that was just uploaded.
That POST from your client must be encoded as `multipart/mixed` and use the fields shown in the example HTML `<form>`.
The multipart form can take the optional parameter, `Content-Transfer-Encoding`, and the App Engine internal handler will properly decode base64 data. From `blob_upload.py`:
```
base64_encoding = (form_item.headers.get('Content-Transfer-Encoding') ==
'base64')
...
if base64_encoding:
blob_file = cStringIO.StringIO(base64.urlsafe_b64decode(blob_file.read()))
...
```
Here's a complete multipart form I tested with cURL, based on the fields used in the example. I found out how to do this over at [Is there a way to pass the content of a file to curl?](https://stackoverflow.com/questions/31061379/is-there-a-way-to-pass-the-content-of-a-file-to-curl):
**myconfig.txt**:
```
header = "Content-length: 435"
header = "Content-type: multipart/mixed; boundary=XX
data-binary = "@myrequestbody.txt"
```
**myrequestbody.txt**:
```
--XX
Content-Disposition: form-data; name="file"; filename="test.gif"
Content-Type: image/gif
Content-Transfer-Encoding: base64
R0lGODdhDwAPAIEAAAAAzMzM/////wAAACwAAAAADwAPAAAIcQABCBxIsODAAAACAAgAIACAAAAiSgwAIACAAAACAAgAoGPHACBDigwAoKTJkyhTqlwpQACAlwIEAJhJc6YAAQByChAAoKfPn0CDCh1KtKhRAAEAKF0KIACApwACBAAQIACAqwECAAgQAIDXr2DDAggIADs=
--XX
Content-Disposition: form-data; name="submit"
Submit
--XX--
```
and then run like:
```
curl --config myconfig.txt "http://127.0.0.1:8080/_ah/upload/..."
```
You'll need to create/mock-up the multipart form in your client.
Also, as an alternative to Blobstore, you can use Cloud Storage if you want to save a little on storage costs or have some need to share the data without your API. Follow the documentation for [Setting Up Google Cloud Storage](https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/setting-up-cloud-storage), and then modify your service to create the upload URL for your bucket of choice:
```
create_upload_url(gs_bucket_name=...)
```
It's a little more complicated than just that, but reading the section *Using the Blobstore API with Google Cloud Storage* in the Blobstore document will get you pointed in the right direction.
**POST base64 directly to your service/handler**
Kind of like you coded in the original post, your service receives the POST from your client and you then decide if you need to manipulate the image and where you want to store it (Datastore, Blobstore, Cloud Storage).
If you need to manipulate the image, then using PIL is good:
```
from io import BytesIO
from PIL import Image
from StringIO import StringIO
data = self.request.body
#file_name = data['file_name']
img_data = data.split('data:image/png;base64,')[1]
# Decode base64 and open as Image
img = Image.open(BytesIO(base64.b64decode(img_data)))
# Create thumbnail
img.thumbnail((128, 128))
# Save img output as blob-able string
output = StringIO()
img.save(output, format=img.format)
img_blob = output.getvalue()
# now you choose how to save img_blob
```
If you don't need to manipulate the image, just stop at `b64decode()`:
```
img_blob = base64.b64decode(img_data)
```
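The split-and-decode step above can also be written as a small generic helper that parses the data-URI header instead of hard-coding `data:image/png;base64,`. This is a minimal sketch; the helper name and the sample payload are illustrative, not from the original code:

```python
import base64

def data_uri_to_bytes(data_uri):
    """Split a 'data:<mime>;base64,<payload>' URI and return the raw bytes."""
    header, _, payload = data_uri.partition(",")
    if not header.startswith("data:") or not header.endswith(";base64"):
        raise ValueError("not a base64 data URI")
    return base64.b64decode(payload)

# Hypothetical payload for illustration only.
uri = "data:image/png;base64," + base64.b64encode(b"PNG-ish demo").decode("ascii")
print(data_uri_to_bytes(uri))  # → b'PNG-ish demo'
```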
|
An image object (<https://cloud.google.com/appengine/docs/standard/python/refdocs/google.appengine.api.images>) isn't a Datastore entity, so it has no key. You need to actually save the image to blobstore[2] or Google Cloud Storage[1] then get a serving url for your image.
[1] <https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/setting-up-cloud-storage>
[2] <https://cloud.google.com/appengine/docs/standard/python/blobstore/>
|
59,156,316
|
I have a python code like this to interact with an API
```
from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session
import json
from pprint import pprint
key = "[SOME_KEY]" # FROM API PROVIDER
secret = "[SOME_SECRET]" # FROM API PROVIDER
api_client = BackendApplicationClient(client_id=key)
oauth = OAuth2Session(client=api_client)
url = "[SOME_URL_FOR_AN_API_ENDPOINT]"
# GETTING TOKEN AFTER PROVIDING KEY AND SECRET
token = oauth.fetch_token(token_url="[SOME_OAUTH_TOKEN_URL]", client_id=key, client_secret=secret)
# GENERATING AN OAuth2Session OBJECT; WITH THE TOKEN:
client = OAuth2Session(key, token=token)
body = {
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
response = client.post(url, data=json.dumps(body))
pprint(response.json())
```
When I run this py file, I get this response from the API, **that I have to include the content type in the header**. How do I include the header with Oauth2Session?
```
{'detailedMessage': 'Your request was missing the Content-Type header. Please '
'add this HTTP header and try your request again.',
'errorId': '0a8868ec-d9c0-42cb-9570-59059e5b39a9',
'simpleMessage': 'Your field could not be created at this time.',
'statusCode': 400,
'statusName': 'Bad Request'}
```
|
2019/12/03
|
[
"https://Stackoverflow.com/questions/59156316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11719873/"
] |
Have you tried sending a `headers` parameter with this request?
```py
headers = {"Content-Type": "application/json"}
response = client.post(url, data=json.dumps(body), headers=headers)
```
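Since the point is simply attaching a `Content-Type` header to the POST, here is the same idea with only the standard library (a sketch with a placeholder URL; nothing is actually sent, we just build the request and inspect it):

```python
import json
import urllib.request

body = {"key1": "value1", "key2": "value2", "key3": "value3"}

# Build the POST request with an explicit Content-Type header, mirroring
# the headers= argument passed to client.post() above.
req = urllib.request.Request(
    "https://example.invalid/api",  # placeholder URL, never contacted here
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib normalizes header names to capitalized form internally.
print(req.get_header("Content-type"))  # → application/json
```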
|
This is how I was able to configure the POST request for exchanging the code for a token.
```py
from requests_oauthlib import OAuth2Session
from oauthlib.oauth2 import WebApplicationClient, BackendApplicationClient
from requests.auth import HTTPBasicAuth
client_id = CLIENT_ID
client_secret = CLIENT_SECRET
authorization_base_url = AUTHORIZE_URI
token_url = TOKEN_URI
redirect_uri = REDIRECT_URI
auth = HTTPBasicAuth(client_id, client_secret)
scope = SCOPE
header = {
'User-Agent': 'myapplication/0.0.1',
'Content-Type': 'application/x-www-form-urlencoded',
'Accept': 'application/json',
}
# Create the Authorization URI
# Not included here but store the state in a safe place for later
the_first_session = OAuth2Session(client_id=client_id, redirect_uri=redirect_uri, scope=scope)
authorization_url, state = the_first_session.authorization_url(authorization_base_url)
# Browse to the Authorization URI
# Login and Auth with the OAuth provider
# Now to respond to the callback
the_second_session = OAuth2Session(client_id, state=state)
body = 'grant_type=authorization_code&code=%s&redirect_uri=%s&scope=%s' % (request.GET.get('code'), redirect_uri, scope)
token = the_second_session.fetch_token(token_url, code=request.GET.get('code'), auth=auth, headers=header, body=body)
```
|
31,932,371
|
I have both Python 2.7 and Python 3.4 installed on my MacBook, as both are needed sometimes.
Python 2.7 is shipped by Apple itself.
Python 3.4 is installed by Mac OS X 64-bit/32-bit installer in the link
<https://www.python.org/downloads/release/python-343/>
Here is how I installed Meld on Mac OS X 10.10:
1. Install apple development command line tools;
2. Install Homebrew;
3. `homebrew install homebrew/x11/meld`
4. When launching Meld, it says:
```
**bash: /usr/local/bin/meld: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory**
```
From my research, some people recommend to modify the first line in `/usr/local/bin/pip`, i.e.,
```none
#!/usr/local/opt/python/bin/python2.7
```
This file is missing. However, if I want to be able to use both Python 2.7 and Python 3.4 for Meld, what should I do to make it work?
|
2015/08/11
|
[
"https://Stackoverflow.com/questions/31932371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4778233/"
] |
This should solve the problem: `brew link --overwrite python`
|
I use Linux. I have no clue about Apple and the special things they do. But judging from the error messages given it seems that
1. bash starts *the meld program*
2. *the meld program* refers to *the python program in **someplace***
3. **someplace** is wrong in *the meld program*, causing the bash error message
Now here is how you debug and solve such problems:
1. open a terminal (black window where you can enter commands)
2. find out where *the python program* is installed by entering `which python` into the terminal window.
```
$ which python
/usr/bin/python
```
In my case we can see that python is installed in `/usr/bin/python`
3. Find out where *the meld program* is installed. In the original question this is in `/usr/local/bin/meld`, as we can deduce from the error message in 4. But we can try again the *which* command and ask where the system finds a meld: `which meld`:
```
$ which meld
/usr/bin/meld
```
In my case this is in `/usr/bin/meld`.
4. now you open the file that `which meld` reported in step 3. and make a backup copy of it
5. now you open the file that `which meld` reported in step 3. in your editor of choice and change the first line, such that it starts with `#!` followed by the path of *the python program* from step 2. in my case the first line of meld is: `#!/usr/bin/python`. Save the changes to the file and retry starting meld. You probably need some admin access rights to be able to save the file.
1. first start it in the terminal by entering `meld`, so it is easier to read the error messages.
2. if meld started in the terminal, start meld your usual way from the gui, and hope that it works, too
The original question mentioned several versions of python. I have also several versions installed, I can ask the system for the different versions:
```
$ which python2
/usr/bin/python2
$ which python2.7
/usr/bin/python2.7
$ which python3
/usr/bin/python3
$ which python3.4
/usr/bin/python3.4
```
If you have several versions installed: try one at a time when editing the file from step 3.
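Steps 4 and 5 can be sketched in the terminal. This uses a hypothetical `/tmp/demo-meld` script standing in for the real meld launcher, and GNU `sed` as found on Linux (on a Mac, BSD `sed` needs `sed -i ''` instead of `sed -i`):

```shell
# A stand-in launcher with the same broken interpreter line as meld
cat > /tmp/demo-meld <<'EOF'
#!/usr/local/opt/python/bin/python2.7
print("hello from meld")
EOF

# Step 4: make a backup copy
cp /tmp/demo-meld /tmp/demo-meld.bak

# Step 5: rewrite line 1 to point at the python that `which` finds
sed -i "1s|.*|#!$(which python3 || which python)|" /tmp/demo-meld

head -1 /tmp/demo-meld   # now starts with #! followed by a real python path
chmod +x /tmp/demo-meld && /tmp/demo-meld
```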
|
31,932,371
|
I have both Python 2.7 and Python 3.4 installed on my MacBook, as both are needed sometimes.
Python 2.7 is shipped by Apple itself.
Python 3.4 is installed by Mac OS X 64-bit/32-bit installer in the link
<https://www.python.org/downloads/release/python-343/>
Here is how I installed Meld on Mac OS X 10.10:
1. Install apple development command line tools;
2. Install Homebrew;
3. `homebrew install homebrew/x11/meld`
4. When launching Meld, it says:
```
**bash: /usr/local/bin/meld: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory**
```
From my research, some people recommend to modify the first line in `/usr/local/bin/pip`, i.e.,
```none
#!/usr/local/opt/python/bin/python2.7
```
This file is missing. However, if I want to be able to use both Python 2.7 and Python 3.4 for Meld, what should I do to make it work?
|
2015/08/11
|
[
"https://Stackoverflow.com/questions/31932371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4778233/"
] |
This should solve the problem: `brew link --overwrite python`
|
It is highly recommended not to use the system Python on OS X, but to install a separate version of Python (for a list of the reasons see for example this page [here](https://github.com/MacPython/wiki/wiki/Which-Python)).
The reason boils down to the fact that OS X depends on its own Python installation, which you should not tamper with, leaving you unable to update Python or work with some modules.
You can easily install a separate version of Python with homebrew by running:
```
brew install python
```
This should give you a working installation of python 2.7 (`brew install python3` for 3.x) that is located in `/usr/local/bin/python`. This is (if I understand your error messages correctly) also the place Meld is looking for it.
(I like [this](http://nerderati.com/2014/09/03/installing-matplotlib-numpy-scipy-nltk-and-pandas-on-os-x-without-going-crazy/) blogpost for more info on how to setup Python on osx.)
|
8,380,733
|
I'm trying to run a custom django command as a scheduled task on Heroku. I am able to execute the custom command locally via: `python manage.py send_daily_email`. (note: I do NOT have any problems with the custom management command itself)
However, Heroku is giving me the following exception when trying to "Run" the task through Heroku Scheduler addon:
```
Traceback (most recent call last):
File "bin/send_daily_visit_email.py", line 2, in <module>
from django.conf import settings
ImportError: No module named django.conf
```
I placed a python script in **/bin/send\_daily\_email.py**, and it is the following:
```
#! /usr/bin/python
from django.conf import settings
settings.configure()
from django.core import management
management.call_command('send_daily_email') #delegates off to custom command
```
Within Heroku, however, I am able to run `heroku run bin/python` - launch the python shell - and successfully import `settings` from `django.conf`
I am pretty sure it has something to do with my `PYTHON_PATH` or visibility to Django's `SETTINGS_MODULE`, but I'm unsure how to resolve the issue. Could someone point me in the right direction? Is there an easier way to accomplish what I'm trying to do here?
Thank you so much for your tips and advice in advance! New to Heroku! :)
**EDIT:**
Per Nix's comment, I made some adjustments, and discovered that by specifying my exact python path, I got past the Django setup.
I now receive:
```
File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 155, in call_command
raise CommandError("Unknown command: %r" % name)
django.core.management.base.CommandError: Unknown command: 'send_daily_email'
```
Although, I can see 'send\_daily\_email' when I run `heroku run bin/python app/manage.py`.
I'll keep an update if I come across the answer.
|
2011/12/05
|
[
"https://Stackoverflow.com/questions/8380733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/873197/"
] |
You are probably using a different interpreter.
Check that the `python` on your shell's path is the same interpreter as the `/usr/bin/python` you reference in your script. There could be a different one on your path, which would explain why it works when you run `python manage.py` but not in your shell script, which explicitly references `/usr/bin/python`.
---
Typing `which python` will tell you what interpreter is being found on your path.
|
In addition, this can also be resolved by adding your home directory to your Python path. A quick and unobtrusive way to accomplish that is to add it to the PYTHONPATH environment variable (which is generally /app on the Heroku Cedar stack).
Add it via the heroku config command:
```
$ heroku config:add PYTHONPATH=/app
```
That should do it! For more details: <http://tomatohater.com/2012/01/17/custom-django-management-commands-on-heroku/>
|
5,077,765
|
I am using Google App Engine's datastore and want to retrieve an entity whose key value is written as
```
ID/Name
id=1
```
Can anyone suggest a GQL query to view that entity in the datastore admin console, and also a way to retrieve it in my python program?
|
2011/02/22
|
[
"https://Stackoverflow.com/questions/5077765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/617462/"
] |
From your application use the [get\_by\_id()](http://code.google.com/intl/it/appengine/docs/python/datastore/modelclass.html#Model_get_by_id) class method of the Model:
```
entity = YourModel.get_by_id(1)
```
From Datastore viewer you should use the `KEY` function:
```
SELECT * FROM YourModel WHERE __key__ = KEY('YourModel',1)
```
|
An application can retrieve a model instance for a given Key using the [get()](http://code.google.com/appengine/docs/python/datastore/functions.html#get) function.
```
class member(db.Model):
firstName=db.StringProperty(verbose_name='First Name',required=False)
lastName=db.StringProperty(verbose_name='Last Name',required=False)
...
id = int(self.request.get('id'))
entity= member.get(db.Key.from_path('member', id))
```
I'm not sure how to return a specific entity in the admin console.
|
8,062,564
|
I am trying to apply image filters using python's [PIL](http://www.pythonware.com/products/pil/). The code is straightforward:
```
im = Image.open(fnImage)
im = im.filter(ImageFilter.BLUR)
```
This code works as expected on PNGs, JPGs and on 8-bit TIFs. However, when I try to apply this code on 16-bit TIFs, I get the following error
```
ValueError: image has wrong mode
```
Note that PIL was able to load, resize and save 16-bit TIFs without complains, so I assume that this problem is filter-related. However, [ImageFilter documentation](http://www.pythonware.com/library/pil/handbook/imagefilter.htm) says nothing about 16-bit support
Is there any way to solve it?
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8062564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17523/"
] |
Your TIFF image's mode is most likely a "I;16".
In the current version of ImageFilter, kernels can only be applied to
"L" and "RGB" images (see source of ImageFilter.py)
Try converting first to another mode:
```
im = im.convert('L')
```
If it fails, try:
```
im.mode = 'I'
im = im.point(lambda i:i*(1./256)).convert('L').filter(ImageFilter.BLUR)
```
Remark: Possible duplicate from [Python and 16 Bit Tiff](https://stackoverflow.com/questions/7247371/python-and-16-bit-tiff)
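For reference, the mapping that the `point()` lambda above applies can be checked in plain Python (no PIL needed): each 16-bit sample (0–65535) is divided by 256 and truncated into the 8-bit range (0–255).

```python
def scale_16_to_8(value):
    # Same per-pixel arithmetic as im.point(lambda i: i * (1./256))
    return int(value * (1.0 / 256))

print(scale_16_to_8(0))      # 0   (black stays black)
print(scale_16_to_8(65535))  # 255 (full 16-bit white maps to 8-bit white)
print(scale_16_to_8(32768))  # 128 (mid-grey)
```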
|
To move ahead, try using [ImageMagick](http://www.imagemagick.org/script/index.php) and look for the PythonMagick hooks to the program. On the command prompt, you can use `convert.exe image-16.tiff -blur 2x2 output.tiff`. I didn't manage to install PythonMagick on my Windows OS, as the source needs compiling.
|
13,103,806
|
So I have been working on a two player "guess the number" program.
But I'm just having trouble with one thing.
So here's code:
```
import time
import random
thenumber = random.randint(1, 10)
print "Welcome to Noah's Two Player guess a number game."
print "What is player one's name?"
player1 = raw_input()
print "What is player two's name?"
player2 = raw_input()
print "Well " + player1 + " and " + player2 + ", are you ready to play?"
choice = raw_input()
if choice == yes:
print player1 + ", pick a number from 1 to 10."
player1guess = raw_input()
print player2 + ", pick a number from 1 to 10."
player2guess = raw_input()
print "Calculating..."
time.sleep(3)
p1 = thenumber - player1guess
p2 = thenumber - player2guess
if p1 > p2:
print player1 + " won!"
elif p2 > p1:
print player2 + " won!"
```
Everything is running smoothly until I get this error:
```
Traceback (most recent call last):
File "C:\Python27\Script 1", line 11, in <module>
if choice == yes:
NameError: name 'yes' is not defined
```
To my knowledge, I don't think I have done anything wrong, but then again I am a beginner to python.
Someone please help me with this.
EDIT: (This is python 2.7 if it makes a difference)
|
2012/10/27
|
[
"https://Stackoverflow.com/questions/13103806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779877/"
] |
I guess you need quotes around `yes`: `choice == 'yes'`, otherwise Python will think that `yes` is a variable.
|
try `if choice == 'yes':`. You're comparing it with a symbol that isn't defined.
Note there are no 'variables' in Python, only symbols and their values (with their types internally understood by the language interpreter). Think of them as names you give to various objects. They're all symbols.
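A minimal demonstration of the difference: the bare name raises the same `NameError` as in the question, while the string literal compares normally.

```python
choice = "yes"

# String literal: ordinary comparison
print(choice == "yes")  # True

# Bare name: `yes` was never assigned, so looking it up raises NameError
try:
    choice == yes
except NameError as err:
    print(err)  # name 'yes' is not defined
```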
|
73,697,975
|
I am working on a Django application where I have used Twilio to send SMS and WhatsApp messages and the SendGrid API for sending the emails. The problem occurs in scheduled messages in all three. For example, if I have scheduled an email to be sent at 06:24 PM, then I am receiving the email at 11:54 PM, hence a difference of 5 hours and 30 minutes. The same problem also arises with SMS and WhatsApp.
I think the problem might be due to the timezone. I am scheduling the messages from India and the Twilio account is in the USA. The time difference of 5 hours 30 minutes matches the UTC offset, as India is 5 hours 30 minutes ahead of UTC. But I have not found a solution for this.
Also the time is specified as the UNIX time. I am getting a datetime object whose value is like: "2022-09-13 11:44:00". I am converting it to unix time with the "time module" of python. Hence the code looks like:
```
message = MAIL(...)
finalScheduleTime = int(time.mktime(scheduleTime.timetuple()))
message.send_at = finalScheduleTime
```
In the above code, scheduleTime is the datetime object.
So, is there any specific way to solve this problem?
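`time.mktime` interprets a naive datetime in the server's local timezone, which is a common source of exactly this 5 h 30 min shift when the server runs on UTC. One hedged sketch of a fix is to make the datetime timezone-aware before taking the Unix timestamp (a fixed UTC+05:30 offset is used here for illustration; on Python 3.9+, `zoneinfo.ZoneInfo("Asia/Kolkata")` is the more robust choice):

```python
from datetime import datetime, timezone, timedelta

IST = timezone(timedelta(hours=5, minutes=30))  # India Standard Time, UTC+05:30

# The naive wall-clock time from the question, explicitly tagged as IST
schedule_time = datetime(2022, 9, 13, 11, 44, tzinfo=IST)

# .timestamp() on an aware datetime gives unambiguous Unix time
send_at = int(schedule_time.timestamp())

# The same wall-clock time read as UTC lands 19800 s (5 h 30 min) later,
# which is exactly the delay described in the question
as_utc = int(datetime(2022, 9, 13, 11, 44, tzinfo=timezone.utc).timestamp())
print(as_utc - send_at)  # 19800
```

`message.send_at = send_at` would then carry the intended instant regardless of the server's local timezone.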
|
2022/09/13
|
[
"https://Stackoverflow.com/questions/73697975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19663784/"
] |
If you need to replace missing values in the `country` column with the last value after splitting `city` on `,`, use:
```
df['country'] = df['country'].fillna(df['city'].str.split(',').str[-1])
```
Or if you need to assign the result to the whole `country` column:
```
df['country'] = df['city'].str.split(',').str[-1]
```
|
You can use [`str.extract`](https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extract.html) with a word regex `\w+` anchored to the end of the string (`$`) to get the last word:
```
# replacing all values
df['country'] = df['city'].str.extract('(\w+)$', expand=False)
# only updating NaNs
df.loc[df['country'].isna(), 'country'] = df['city'].str.extract('(\w+)$', expand=False)
```
output:
```
city country
0 Toronto,Canada Canada
```
Alternatively, you can use the `([^,]+)$` regex that is more permissive (any terminal character except `,`)
|
71,431,272
|
I'm having a problem with calling my TextInputs from one class in another. I have a special class inheriting from TextInput, which makes moving with arrows possible. In my MainScreen I want to grab the letters from the TextInputs and later on do something with them, but I need to pass them into this class first. I don't know how I am supposed to do that. I checked that it fails because no ids with the keys id1, id2, ... can be found, but I don't know why or how to overcome this.
***.py***
```
import kivy
import sys
kivy.require('1.11.1')
sys.path.append("c:\\users\\dom\\appdata\\local\\programs\\python\\python39\\lib\\site-packages")
from kivy.app import App
from kivy.config import Config
from kivy.core.window import Window
from kivy.uix.widget import Widget
from kivy.lang import Builder
from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.gridlayout import GridLayout
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.uix.textinput import TextInput
from kivy.uix.label import Label
from kivy.app import App
from random import choice
import math
from kivy.properties import StringProperty
import string
letters= string.ascii_lowercase
Builder.load_file("GuessWord.kv")
def set_color(letter, color):
COLORS_DICT = {"W" : "ffffff", "Y" : "ffff00", "G" : "00ff00"}
c = COLORS_DICT.get(color, "W")
return f"[color={c}]{letter}[/color]"
class MyTextInput(TextInput):
focused = 'id1'
def change_focus(self, *args):
app = App.get_running_app()
if app.root is not None:
# Now access the container.
layout = app.root.ids["layout"]
# Access the required widget and set its focus.
print("Changefocus", MyTextInput.focused)
layout.ids[MyTextInput.focused].focus = True
def keyboard_on_key_down(self, window, keycode, text, modifiers):
focusedid = int(MyTextInput.focused[2])
if keycode[1] == "backspace":
if self.text=="":
if int(MyTextInput.focused[2]) > 1:
focusedid -= 1
MyTextInput.focused = "id" + str(focusedid)
else:
self.text = self.text[:-1]
if keycode[1] == "right":
if int(MyTextInput.focused[2]) < 5:
focusedid += 1
MyTextInput.focused = "id" + str(focusedid)
elif int(MyTextInput.focused[2]) == 5:
MyTextInput.focused = "id" + str(1)
elif keycode[1] == "left":
if int(MyTextInput.focused[2]) > 1:
focusedid -= 1
MyTextInput.focused = "id" + str(focusedid)
elif int(MyTextInput.focused[2]) == 1:
MyTextInput.focused = "id" + str(5)
elif keycode[1] in letters:
if int(MyTextInput.focused[2]) < 5:
self.text=text
focusedid += 1
MyTextInput.focused = "id" + str(focusedid)
self.change_focus()
print("After changing", MyTextInput.focused)
return True
#TextInput.keyboard_on_key_down(self, window, keycode, text, modifiers)
class MainScreen(Widget):
def GetLetters(self):
app = App.get_running_app()
if app.root is not None:
t1=app.root.ids['id1']
t2=app.root.ids['id2']
t3=app.root.ids['id3']
t4=app.root.ids['id4']
t5=app.root.ids['id5']
listofletters=[t1,t2,t3,t4,t5]
for letter in listofletters:
print(letter.text)
class TestingappApp(App):
def build(self):
return MainScreen()
TestingappApp().run()
```
***.kv***
```
<MainScreen>:
BoxLayout:
size: root.size
orientation: "vertical"
CustomBox:
id: layout
cols:5
Button:
pos_hint: {'center_x':0.5,'y':0.67}
size_hint: 0.5,0.1
text: "Click here to get letters!"
on_release: root.GetLetters()
<CustomBox@BoxLayout>:
MyTextInput:
id: id1
focus: True
focused: "id1"
multiline: False
MyTextInput:
id: id2
focused: "id2"
multiline: False
MyTextInput:
id: id3
focused: "id3"
multiline: False
MyTextInput:
id: id4
focused: "id4"
multiline: False
MyTextInput:
id: id5
focused: "id5"
multiline: False
```
|
2022/03/10
|
[
"https://Stackoverflow.com/questions/71431272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15846202/"
] |
I finally figured this one out. I ended up using a callback in the onchange attribute that set the value of the current property in the loop.
```
<div class="form-section">
@foreach (var property in DataModel.GetType().GetProperties())
{
var propertyString = $"DataModel.{property.Name.ToString()}";
@if (property.PropertyType == typeof(DateTime))
{
<input type="date" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, DateTime.Parse(e.Value.ToString())))" />
}
else if (property.PropertyType == typeof(int))
{
<input type="number" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, Int32.Parse(e.Value.ToString())))" />
}
else
{
<input type="number" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, Decimal.Parse(e.Value.ToString())))" />
}
}
</div>
```
|
I think what @nocturns2 said is correct, you could try this code:
```
@if (property.PropertyType == typeof(DateTime))
{
DateTime.TryParse(YourPropertyString, out var parsedValue);
var result = (YourType)(object)parsedValue;
<InputDate id="@property.Name" @bind-Value="@result" />
}
```
|
71,431,272
|
I'm having a problem with calling my TextInputs from one class in another. I have a special class inheriting from TextInput, which makes moving with arrows possible. In my MainScreen I want to grab the letters from the TextInputs and later on do something with them, but I need to pass them into this class first. I don't know how I am supposed to do that. I checked that it fails because no ids with the keys id1, id2, ... can be found, but I don't know why or how to overcome this.
***.py***
```
import kivy
import sys
kivy.require('1.11.1')
sys.path.append("c:\\users\\dom\\appdata\\local\\programs\\python\\python39\\lib\\site-packages")
from kivy.app import App
from kivy.config import Config
from kivy.core.window import Window
from kivy.uix.widget import Widget
from kivy.lang import Builder
from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.gridlayout import GridLayout
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.uix.textinput import TextInput
from kivy.uix.label import Label
from kivy.app import App
from random import choice
import math
from kivy.properties import StringProperty
import string
letters= string.ascii_lowercase
Builder.load_file("GuessWord.kv")
def set_color(letter, color):
COLORS_DICT = {"W" : "ffffff", "Y" : "ffff00", "G" : "00ff00"}
c = COLORS_DICT.get(color, "W")
return f"[color={c}]{letter}[/color]"
class MyTextInput(TextInput):
focused = 'id1'
def change_focus(self, *args):
app = App.get_running_app()
if app.root is not None:
# Now access the container.
layout = app.root.ids["layout"]
# Access the required widget and set its focus.
print("Changefocus", MyTextInput.focused)
layout.ids[MyTextInput.focused].focus = True
def keyboard_on_key_down(self, window, keycode, text, modifiers):
focusedid = int(MyTextInput.focused[2])
if keycode[1] == "backspace":
if self.text=="":
if int(MyTextInput.focused[2]) > 1:
focusedid -= 1
MyTextInput.focused = "id" + str(focusedid)
else:
self.text = self.text[:-1]
if keycode[1] == "right":
if int(MyTextInput.focused[2]) < 5:
focusedid += 1
MyTextInput.focused = "id" + str(focusedid)
elif int(MyTextInput.focused[2]) == 5:
MyTextInput.focused = "id" + str(1)
elif keycode[1] == "left":
if int(MyTextInput.focused[2]) > 1:
focusedid -= 1
MyTextInput.focused = "id" + str(focusedid)
elif int(MyTextInput.focused[2]) == 1:
MyTextInput.focused = "id" + str(5)
elif keycode[1] in letters:
if int(MyTextInput.focused[2]) < 5:
self.text=text
focusedid += 1
MyTextInput.focused = "id" + str(focusedid)
self.change_focus()
print("After changing", MyTextInput.focused)
return True
#TextInput.keyboard_on_key_down(self, window, keycode, text, modifiers)
class MainScreen(Widget):
def GetLetters(self):
app = App.get_running_app()
if app.root is not None:
t1=app.root.ids['id1']
t2=app.root.ids['id2']
t3=app.root.ids['id3']
t4=app.root.ids['id4']
t5=app.root.ids['id5']
listofletters=[t1,t2,t3,t4,t5]
for letter in listofletters:
print(letter.text)
class TestingappApp(App):
def build(self):
return MainScreen()
TestingappApp().run()
```
***.kv***
```
<MainScreen>:
BoxLayout:
size: root.size
orientation: "vertical"
CustomBox:
id: layout
cols:5
Button:
pos_hint: {'center_x':0.5,'y':0.67}
size_hint: 0.5,0.1
text: "Click here to get letters!"
on_release: root.GetLetters()
<CustomBox@BoxLayout>:
MyTextInput:
id: id1
focus: True
focused: "id1"
multiline: False
MyTextInput:
id: id2
focused: "id2"
multiline: False
MyTextInput:
id: id3
focused: "id3"
multiline: False
MyTextInput:
id: id4
focused: "id4"
multiline: False
MyTextInput:
id: id5
focused: "id5"
multiline: False
```
|
2022/03/10
|
[
"https://Stackoverflow.com/questions/71431272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15846202/"
] |
I finally figured this one out. I ended up using a callback in the onchange attribute that set the value of the current property in the loop.
```
<div class="form-section">
@foreach (var property in DataModel.GetType().GetProperties())
{
var propertyString = $"DataModel.{property.Name.ToString()}";
@if (property.PropertyType == typeof(DateTime))
{
<input type="date" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, DateTime.Parse(e.Value.ToString())))" />
}
else if (property.PropertyType == typeof(int))
{
<input type="number" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, Int32.Parse(e.Value.ToString())))" />
}
else
{
<input type="number" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, Decimal.Parse(e.Value.ToString())))" />
}
}
</div>
```
|
Here's a componentized version with an example component for DateOnlyFields
```
@using System.Reflection
@using System.Globalization
<input type="date" value="@GetValue()" @onchange=this.OnChanged @attributes=UserAttributes />
@code {
[Parameter] [EditorRequired] public object? Model { get; set; }
[Parameter] [EditorRequired] public PropertyInfo? Property { get; set; }
[Parameter] public EventCallback ValueChanged { get; set; }
[Parameter(CaptureUnmatchedValues = true)] public IDictionary<string, object> UserAttributes { get; set; } = new Dictionary<string, object>();
private string GetValue()
{
var value = this.Property?.GetValue(Model);
return value is null
? DateTime.Now.ToString("yyyy-MM-dd")
: ((DateOnly)value).ToString("yyyy-MM-dd");
}
private void OnChanged(ChangeEventArgs e)
{
if (BindConverter.TryConvertToDateOnly(e.Value, CultureInfo.InvariantCulture, out DateOnly value))
this.Property?.SetValue(this.Model, value);
else
this.Property?.SetValue(this.Model, DateOnly.MinValue);
if (this.ValueChanged.HasDelegate)
this.ValueChanged.InvokeAsync();
}
}
```
And using it:
```
@foreach (var property in dataModel.GetType().GetProperties())
{
@if (property.PropertyType == typeof(DateOnly))
{
<div class="form-text">
<DateOnlyEditor class="form-control" Model=this.dataModel Property=property ValueChanged=this.DataChanged />
</div>
}
}
<div class="p-3">
Date: @dataModel.SelectedDate
</div>
@code {
private DataModel dataModel = new DataModel();
private EditContext? editContext;
public class DataModel
{
public DateOnly SelectedDate { get; set; } = new DateOnly(2020, 12, 25);
public int SelectedNumber { get; set; }
public decimal SelectDecimal { get; set; }
}
protected override void OnInitialized()
{
this.editContext = new EditContext(dataModel);
base.OnInitialized();
}
void DataChanged()
{
StateHasChanged();
}
}
```
|
22,727,782
|
I'd previously used Anaconda to handle python, but I'm trying to start working with virtual environments.
I set up virtualenv and virtualenvwrapper, and have been trying to add modules, specifically scrapy and lxml, for a project I want to try.
Each time I pip install, I hit an error.
For scrapy:
```
File "/home/philip/Envs/venv/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1003, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
---------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/philip/Envs/venv/build/cryptography
Storing debug log for failure in /home/philip/.pip/pip.log
```
For lxml:
```
In file included from src/lxml/lxml.etree.c:346:0:
/home/philip/Envs/venv/build/lxml/src/lxml/includes/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
#include "libxml/xmlversion.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up... Command /home/philip/Envs/venv/bin/python -c "import setuptools, tokenize;__file__='/home/philip/Envs/venv/build/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-zIsPdl-record/install-record.txt
--single-version-externally-managed --compile --install-headers /home/philip/Envs/venv/include/site/python2.7 failed with error code 1 in /home/philip/Envs/venv/build/lxml Storing debug log for failure in /home/philip/.pip/pip.log
```
I tried to install it following [scrapy's documentation](http://doc.scrapy.org/en/latest/topics/ubuntu.html#topics-ubuntu), but scrapy was still not listed when I called for python's installed modules.
Any ideas? Thanks--really appreciate it!
I'm on Ubuntu 13.10 if it matters. Other modules I've tried have installed fine (though I've only gone for a handful).
|
2014/03/29
|
[
"https://Stackoverflow.com/questions/22727782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3115915/"
] |
I had the same problem in Ubuntu 14.04. I solved it with the instructions of the page linked by @jdigital and the openssl dev library pointed out by @user3115915. Just to help others:
```
sudo apt-get install libxslt1-dev libxslt1.1 libxml2-dev libxml2 libssl-dev
sudo pip install scrapy
```
|
In my case, I solved the problem by installing all the libraries that Manuel mentions, plus one extra library: libffi-dev
<https://askubuntu.com/questions/499714/error-installing-scrapy-in-virtualenv-using-pip>
|
22,727,782
|
I'd previously used Anaconda to handle python, but I'm trying to start working with virtual environments.
I set up virtualenv and virtualenvwrapper, and have been trying to add modules, specifically scrapy and lxml, for a project I want to try.
Each time I pip install, I hit an error.
For scrapy:
```
File "/home/philip/Envs/venv/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1003, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
---------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/philip/Envs/venv/build/cryptography
Storing debug log for failure in /home/philip/.pip/pip.log
```
For lxml:
```
In file included from src/lxml/lxml.etree.c:346:0:
/home/philip/Envs/venv/build/lxml/src/lxml/includes/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
#include "libxml/xmlversion.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up... Command /home/philip/Envs/venv/bin/python -c "import setuptools, tokenize;__file__='/home/philip/Envs/venv/build/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-zIsPdl-record/install-record.txt
--single-version-externally-managed --compile --install-headers /home/philip/Envs/venv/include/site/python2.7 failed with error code 1 in /home/philip/Envs/venv/build/lxml Storing debug log for failure in /home/philip/.pip/pip.log
```
I tried to install it following [scrapy's documentation](http://doc.scrapy.org/en/latest/topics/ubuntu.html#topics-ubuntu), but scrapy was still not listed when I called for python's installed modules.
Any ideas? Thanks--really appreciate it!
I'm on Ubuntu 13.10 if it matters. Other modules I've tried have installed fine (though I've only gone for a handful).
|
2014/03/29
|
[
"https://Stackoverflow.com/questions/22727782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3115915/"
] |
I had the same problem in Ubuntu 14.04. I solved it with the instructions of the page linked by @jdigital and the openssl dev library pointed out by @user3115915. Just to help others:
```
sudo apt-get install libxslt1-dev libxslt1.1 libxml2-dev libxml2 libssl-dev
sudo pip install scrapy
```
|
Updated from @Mario C. and @Manuel,
Here are the combined commands:
```
sudo apt-get install libxslt1-dev libxslt1.1 libxml2-dev libxml2 libssl-dev libffi-dev
sudo pip install scrapy
```
|
22,727,782
|
I'd previously used Anaconda to handle Python, but I'm trying to start working with virtual environments.
I set up virtualenv and virtualenvwrapper, and have been trying to add modules, specifically scrapy and lxml, for a project I want to try.
Each time I pip install, I hit an error.
For scrapy:
```
File "/home/philip/Envs/venv/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1003, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
---------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/philip/Envs/venv/build/cryptography
Storing debug log for failure in /home/philip/.pip/pip.log
```
For lxml:
```
In file included from src/lxml/lxml.etree.c:346:0:
/home/philip/Envs/venv/build/lxml/src/lxml/includes/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
#include "libxml/xmlversion.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up... Command /home/philip/Envs/venv/bin/python -c "import setuptools, tokenize;__file__='/home/philip/Envs/venv/build/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-zIsPdl-record/install-record.txt
--single-version-externally-managed --compile --install-headers /home/philip/Envs/venv/include/site/python2.7 failed with error code 1 in /home/philip/Envs/venv/build/lxml Storing debug log for failure in /home/philip/.pip/pip.log
```
I tried to install it following [scrapy's documentation](http://doc.scrapy.org/en/latest/topics/ubuntu.html#topics-ubuntu), but scrapy was still not listed when I listed Python's installed modules.
Any ideas? Thanks--really appreciate it!
I'm on Ubuntu 13.10 if it matters. Other modules I've tried have installed fine (though I've only gone for a handful).
|
2014/03/29
|
[
"https://Stackoverflow.com/questions/22727782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3115915/"
] |
In my case, I solved the problem by installing all the libraries that Manuel mentions, plus one extra library: libffi-dev
<https://askubuntu.com/questions/499714/error-installing-scrapy-in-virtualenv-using-pip>
|
Updated from @Mario C. and @Manuel.
Here are the commands:
```
sudo apt-get install libxslt1-dev libxslt1.1 libxml2-dev libxml2 libssl-dev libffi-dev
sudo pip install scrapy
```
|
65,093,883
|
I often debug my Python code by plotting NumPy arrays in the VS Code debugger.
Often I spend more than 3s looking at a plot. When I do, VS Code prints the extremely
long warning below. It's very annoying because I then have to scroll up a lot
all the time to see previous debugging output. Where is this PYDEVD\_WARN\_EVALUATION\_TIMEOUT
variable? How do I turn it off?
I included the warning below for completeness, thanks a lot for your help!
>
> Evaluating: plt.show() did not finish after 3.00s seconds.
> This may mean a number of things:
>
>
> * This evaluation is really slow and this is expected.
> In this case it's possible to silence this error by raising the timeout, setting the
> PYDEVD\_WARN\_EVALUATION\_TIMEOUT environment variable to a bigger value.
> * The evaluation may need other threads running while it's running:
> In this case, it's possible to set the PYDEVD\_UNBLOCK\_THREADS\_TIMEOUT
> environment variable so that if after a given timeout an evaluation doesn't finish,
> other threads are unblocked or you can manually resume all threads.
>
>
> Alternatively, it's also possible to skip breaking on a particular thread by setting a
> `pydev_do_not_trace = True` attribute in the related threading.Thread instance
> (if some thread should always be running and no breakpoints are expected to be hit in it).
> * The evaluation is deadlocked:
> In this case you may set the PYDEVD\_THREAD\_DUMP\_ON\_WARN\_EVALUATION\_TIMEOUT
> environment variable to true so that a thread dump is shown along with this message and
> optionally, set the PYDEVD\_INTERRUPT\_THREAD\_TIMEOUT to some value so that the debugger
> tries to interrupt the evaluation (if possible) when this happens.
>
>
>
|
2020/12/01
|
[
"https://Stackoverflow.com/questions/65093883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8726027/"
] |
I found a way to adapt launch.json that takes care of this problem.
```
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"env": {"DISPLAY":":1",
"PYTHONPATH": "${workspaceRoot}",
"PYDEVD_WARN_EVALUATION_TIMEOUT": "500"},
"cwd": "${workspaceFolder}",
"console": "integratedTerminal"
}
]
}
```
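If you would rather not touch launch.json, the same environment variable can be set from Python before the debugger (pydevd) initializes. A minimal sketch — the value 500 is arbitrary, pick whatever timeout suits you:

```python
import os

# Raise pydevd's evaluation-warning timeout to 500 seconds.
# This only takes effect if set before the debugger starts up.
os.environ["PYDEVD_WARN_EVALUATION_TIMEOUT"] = "500"
print(os.environ["PYDEVD_WARN_EVALUATION_TIMEOUT"])  # → 500
```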
|
If you are keen to suppress the warning, the Python documentation (section 28.6.3, "Temporarily Suppressing Warnings") shows how:
<https://docs.python.org/2/library/warnings.html#temporarily-suppressing-warnings>
Here's the code in case the link dies in the future.
```
import warnings
def fxn():
warnings.warn("deprecated", DeprecationWarning)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
```
You should be ready to go with a simple copy paste.
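Note that the suppression only applies inside the `with` block; a quick way to convince yourself, using only the standard library, is to record warnings outside it:

```python
import warnings

def fxn():
    warnings.warn("deprecated", DeprecationWarning)

# Suppressed inside the context manager...
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fxn()

# ...but still raised outside it.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    fxn()

print(len(caught))  # → 1
```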
|
65,093,883
|
I often debug my Python code by plotting NumPy arrays in the VS Code debugger.
Often I spend more than 3s looking at a plot. When I do, VS Code prints the extremely
long warning below. It's very annoying because I then have to scroll up a lot
all the time to see previous debugging output. Where is this PYDEVD\_WARN\_EVALUATION\_TIMEOUT
variable? How do I turn it off?
I included the warning below for completeness, thanks a lot for your help!
>
> Evaluating: plt.show() did not finish after 3.00s seconds.
> This may mean a number of things:
>
>
> * This evaluation is really slow and this is expected.
> In this case it's possible to silence this error by raising the timeout, setting the
> PYDEVD\_WARN\_EVALUATION\_TIMEOUT environment variable to a bigger value.
> * The evaluation may need other threads running while it's running:
> In this case, it's possible to set the PYDEVD\_UNBLOCK\_THREADS\_TIMEOUT
> environment variable so that if after a given timeout an evaluation doesn't finish,
> other threads are unblocked or you can manually resume all threads.
>
>
> Alternatively, it's also possible to skip breaking on a particular thread by setting a
> `pydev_do_not_trace = True` attribute in the related threading.Thread instance
> (if some thread should always be running and no breakpoints are expected to be hit in it).
> * The evaluation is deadlocked:
> In this case you may set the PYDEVD\_THREAD\_DUMP\_ON\_WARN\_EVALUATION\_TIMEOUT
> environment variable to true so that a thread dump is shown along with this message and
> optionally, set the PYDEVD\_INTERRUPT\_THREAD\_TIMEOUT to some value so that the debugger
> tries to interrupt the evaluation (if possible) when this happens.
>
>
>
|
2020/12/01
|
[
"https://Stackoverflow.com/questions/65093883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8726027/"
] |
I found a way to adapt launch.json that takes care of this problem.
```
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"env": {"DISPLAY":":1",
"PYTHONPATH": "${workspaceRoot}",
"PYDEVD_WARN_EVALUATION_TIMEOUT": "500"},
"cwd": "${workspaceFolder}",
"console": "integratedTerminal"
}
]
}
```
|
Based on [here](https://github.com/microsoft/vscode-python/issues/2752#issuecomment-471640985), you can simply set the `timeout` parameter of the debug configuration (launch.json), for example something like this:
```
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Django",
"type": "python",
"request": "launch",
"timeout": 10,
"program": "${workspaceFolder}/src/manage.py",
"args": [
"runserver"
],
"django": true
}
]
}
```
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to `scipy`, `numpy` and `matplotlib`. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
With matplotlib you can use (as shown in the matplotlib [documentation](https://matplotlib.org/2.0.0/users/image_tutorial.html))
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img=mpimg.imread('image_name.png')
```
And plot the image if you want
```
imgplot = plt.imshow(img)
```
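To see what `imread` actually returns for a PNG, here is a small self-contained round trip (the filename `tiny.png` is just illustrative): matplotlib reads PNGs natively as float32 arrays scaled to [0, 1].

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # no display needed
import matplotlib.pyplot as plt

# Write a tiny 4x4 solid-red RGBA image to PNG, then read it back.
arr = np.zeros((4, 4, 4))
arr[..., 0] = 1.0  # red channel
arr[..., 3] = 1.0  # fully opaque
plt.imsave("tiny.png", arr)

img = plt.imread("tiny.png")
print(img.shape)  # (4, 4, 4) -- RGBA
print(img.dtype)  # float32, values in [0, 1]
```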
|
From [documentation](https://matplotlib.org/3.1.1/api/image_api.html?highlight=matplotlib%20image#module-matplotlib.image.imread):
>
> Matplotlib can only read PNGs natively. Further image formats are supported via the optional dependency on Pillow.
>
>
>
So in case of `PNG` we may use `plt.imread()`. In other cases it's probably better to use `Pillow` directly.
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to `scipy`, `numpy` and `matplotlib`. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
With matplotlib you can use (as shown in the matplotlib [documentation](https://matplotlib.org/2.0.0/users/image_tutorial.html))
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img=mpimg.imread('image_name.png')
```
And plot the image if you want
```
imgplot = plt.imshow(img)
```
|
You can try to use cv2 (OpenCV) like this:
```py
import cv2
image = cv2.imread('image_path.png')
cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to `scipy`, `numpy` and `matplotlib`. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
>
> If you just want to **read an image in Python** using the specified
> libraries only, I will go with `matplotlib`
>
>
>
**In matplotlib :**
```
import matplotlib.image
read_img = matplotlib.image.imread('your_image.png')
```
|
You can try to use cv2 (OpenCV) like this:
```py
import cv2
image = cv2.imread('image_path.png')
cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to `scipy`, `numpy` and `matplotlib`. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
>
> If you just want to **read an image in Python** using the specified
> libraries only, I will go with `matplotlib`
>
>
>
**In matplotlib :**
```
import matplotlib.image
read_img = matplotlib.image.imread('your_image.png')
```
|
**Easy way** (in a Jupyter/IPython notebook):
```
from IPython.display import Image
Image(filename="Covid.jpg")
```
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to `scipy`, `numpy` and `matplotlib`. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
You can also use [Pillow](https://python-pillow.org/) like this:
```
from PIL import Image
image = Image.open("image_path.jpg")
image.show()
```
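A quick self-contained sketch of the Pillow round trip (the filename is illustrative; `show()` is skipped since it needs a display):

```python
from PIL import Image

# Create a tiny 4x4 solid-red image, save it as PNG, and read it back.
im = Image.new("RGB", (4, 4), color=(255, 0, 0))
im.save("tiny_pil.png")

reloaded = Image.open("tiny_pil.png")
print(reloaded.size)              # (4, 4)
print(reloaded.mode)              # RGB
print(reloaded.getpixel((0, 0)))  # (255, 0, 0)
```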
|
I read all the answers, but I think one of the best methods is to use the [OpenCV](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html) library.
```
import cv2
img = cv2.imread('your_image.png',0)
```
And for displaying the image, use the following code:
```
from matplotlib import pyplot as plt
plt.imshow(img, cmap = 'gray', interpolation = 'bicubic')
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.show()
```
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to `scipy`, `numpy` and `matplotlib`. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
You can also use these lines of code; here is an example that may help you:
```
import cv2
image = cv2.imread('/home/pictures/1.jpg')
plt.imshow(image)
plt.show()
```
You can pass any path to **`imread()`**, so you can also use `str()` and `+` to combine a dynamic directory with a fixed filename like this:
```
path = '/home/pictures/'
for i in range(2) :
image = cv2.imread(str(path)+'1.jpg')
plt.imshow(image)
plt.show()
```
Both do the same thing.
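As a side note, string concatenation works, but `os.path.join` from the standard library is the more idiomatic way to build those paths (directory and filenames here are illustrative):

```python
import os

path = "/home/pictures"
for i in range(2):
    filename = os.path.join(path, f"{i + 1}.jpg")
    print(filename)
# /home/pictures/1.jpg
# /home/pictures/2.jpg
```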
|
You can try to use cv2 (OpenCV) like this:
```py
import cv2
image = cv2.imread('image_path.png')
cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to `scipy`, `numpy` and `matplotlib`. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
You can also use [Pillow](https://python-pillow.org/) like this:
```
from PIL import Image
image = Image.open("image_path.jpg")
image.show()
```
|
You can try to use cv2 (OpenCV) like this:
```py
import cv2
image = cv2.imread('image_path.png')
cv2.imshow('image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to `scipy`, `numpy` and `matplotlib`. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
```
import matplotlib.pyplot as plt
image = plt.imread('images/my_image4.jpg')
plt.imshow(image)
```
*Using `matplotlib.pyplot.imread` is recommended by warning messages in Jupyter.*
|
You can also use these lines of code; here is an example that may help you:
```
import cv2
image = cv2.imread('/home/pictures/1.jpg')
plt.imshow(image)
plt.show()
```
You can pass any path to **`imread()`**, so you can also use `str()` and `+` to combine a dynamic directory with a fixed filename like this:
```
path = '/home/pictures/'
for i in range(2) :
image = cv2.imread(str(path)+'1.jpg')
plt.imshow(image)
plt.show()
```
Both do the same thing.
|