| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
8,117,249
|
I think I might be repeating the question, but I didn't find any of the existing answers suited to my requirement.
Pardon my ignorance.
I have a program running which continuously spits out some binary data from a server. It never stops until it's killed.
I want to wrap it in a Python script to read the output and process it as and when it arrives.
I tried out a few of the subprocess ideas on Stack Overflow, but to no avail.
Please suggest an approach.
```
p = subprocess.Popen(args, stderr=PIPE, stdin=PIPE, stdout=PIPE, shell=False)
#p.communicate#blocks forever as expected
#p.stdout.read/readlines/readline-->blocks
#select(on p.stdout.fileno())-->blocks
```
what is the best method?
|
2011/11/14
|
[
"https://Stackoverflow.com/questions/8117249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1044911/"
] |
Read with a length limit:
```
proc = subprocess.Popen(args, stdin=None, stdout=subprocess.PIPE, stderr=None)
while True:
    chunk = proc.stdout.read(1024)
    # chunk is <= 1024 bytes
```
---
This is the code from your comment, slightly modified. It works for me:
```
import subprocess

class container(object):
    pass

self = container()
args = ['yes', 'test ' * 10]
self.p = subprocess.Popen(args, stdin=None, stderr=None,
                          stdout=subprocess.PIPE, shell=False)
while True:
    chunk = self.p.stdout.read(1024)
    print 'printing chunk'
    print chunk
```
|
You could run the other program with its output directed to a file and then use Python's *f.readline()* to tail the file.
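A minimal sketch of that tailing loop in Python (the path, polling interval, and the `max_polls` escape hatch are illustrative assumptions; a real tail would loop forever):

```python
import time

def tail(path, poll_interval=0.5, max_polls=None):
    """Yield lines appended to the file at `path`, polling like `tail -f`.

    `max_polls` bounds how many empty reads are tolerated before giving up,
    so this sketch can terminate; pass None to poll indefinitely.
    """
    polls = 0
    with open(path) as f:
        while max_polls is None or polls < max_polls:
            line = f.readline()
            if line:
                yield line
            else:
                polls += 1
                time.sleep(poll_interval)
```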
|
8,117,249
|
I think I might be repeating the question, but I didn't find any of the existing answers suited to my requirement.
Pardon my ignorance.
I have a program running which continuously spits out some binary data from a server. It never stops until it's killed.
I want to wrap it in a Python script to read the output and process it as and when it arrives.
I tried out a few of the subprocess ideas on Stack Overflow, but to no avail.
Please suggest an approach.
```
p = subprocess.Popen(args, stderr=PIPE, stdin=PIPE, stdout=PIPE, shell=False)
#p.communicate#blocks forever as expected
#p.stdout.read/readlines/readline-->blocks
#select(on p.stdout.fileno())-->blocks
```
what is the best method?
|
2011/11/14
|
[
"https://Stackoverflow.com/questions/8117249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1044911/"
] |
Read with a length limit:
```
proc = subprocess.Popen(args, stdin=None, stdout=subprocess.PIPE, stderr=None)
while True:
    chunk = proc.stdout.read(1024)
    # chunk is <= 1024 bytes
```
---
This is the code from your comment, slightly modified. It works for me:
```
import subprocess

class container(object):
    pass

self = container()
args = ['yes', 'test ' * 10]
self.p = subprocess.Popen(args, stdin=None, stderr=None,
                          stdout=subprocess.PIPE, shell=False)
while True:
    chunk = self.p.stdout.read(1024)
    print 'printing chunk'
    print chunk
```
|
It sounds like you could use an [asynchronous version of the subprocess module](http://code.google.com/p/subprocdev/source/browse/subprocess.py). For more information, check out the developer's [blog](http://subdev.blogspot.com/).
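As an alternative sketch on modern Python 3, the usual workaround is a reader thread that feeds a queue, so the main loop never blocks on the pipe (the chunk size and the `None` sentinel are assumptions of this sketch, not part of the linked project):

```python
import queue
import subprocess
import threading

def stream_output(args, chunk_size=1024):
    """Run `args` and push its stdout onto a queue from a background thread."""
    proc = subprocess.Popen(args, stdout=subprocess.PIPE)
    q = queue.Queue()

    def reader():
        while True:
            chunk = proc.stdout.read1(chunk_size)  # returns as soon as data arrives
            if not chunk:
                break
            q.put(chunk)
        q.put(None)  # sentinel: the stream is closed

    threading.Thread(target=reader, daemon=True).start()
    return proc, q
```

The main thread can then call `q.get(timeout=...)` to process chunks as they arrive without ever blocking indefinitely.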
|
54,469,599
|
Here's the code, it's from <https://plot.ly/python/line-and-scatter/>
=====================================================================
```
import plotly.plotly as py
import plotly.graph_objs as go
# Create random data with numpy
import numpy as np
N = 100
random_x = np.linspace(0, 1, N)
random_y0 = np.random.randn(N)+5
random_y1 = np.random.randn(N)
random_y2 = np.random.randn(N)-5
# Create traces
trace0 = go.Scatter(
x = random_x,
y = random_y0,
mode = 'markers',
name = 'markers'
)
trace1 = go.Scatter(
x = random_x,
y = random_y1,
mode = 'lines+markers',
name = 'lines+markers'
)
trace2 = go.Scatter(
x = random_x,
y = random_y2,
mode = 'lines',
name = 'lines'
)
data = [trace0, trace1, trace2]
py.iplot(data, filename='scatter-mode')
```
When I copy and paste it into jupyter notebook I get the following error:
PlotlyError: Because you didn't supply a 'file\_id' in the call, we're assuming you're trying to snag a figure from a url. You supplied the url, '', we expected it to start with '<https://plot.ly>'.
Run help on this function for more information.
whole error:
```
Aw, snap! We didn't get a username with your request.
Don't have an account? https://plot.ly/api_signup
Questions? accounts@plot.ly
---------------------------------------------------------------------------
PlotlyError Traceback (most recent call last)
<ipython-input-7-70bd62361f83> in <module>()
27
28 data = [trace0, trace1, trace2]
---> 29 py.iplot(data, filename='scatter-mode')
c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\plotly\plotly.py in iplot(figure_or_data, **plot_options)
162 embed_options['height'] = str(embed_options['height']) + 'px'
163
--> 164 return tools.embed(url, **embed_options)
165
166
c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in embed(file_owner_or_url, file_id, width, height)
394 else:
395 url = file_owner_or_url
--> 396 return PlotlyDisplay(url, width, height)
397 else:
398 if (get_config_defaults()['plotly_domain']
c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in __init__(self, url, width, height)
1438 def __init__(self, url, width, height):
1439 self.resource = url
-> 1440 self.embed_code = get_embed(url, width=width, height=height)
1441 super(PlotlyDisplay, self).__init__(data=self.embed_code)
1442
c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in get_embed(file_owner_or_url, file_id, width, height)
299 "'{1}'."
300 "\nRun help on this function for more information."
--> 301 "".format(url, plotly_rest_url))
302 urlsplit = six.moves.urllib.parse.urlparse(url)
303 file_owner = urlsplit.path.split('/')[1].split('~')[1]
PlotlyError: Because you didn't supply a 'file_id' in the call, we're assuming you're trying to snag a figure from a url. You supplied the url, '', we expected it to start with 'https://plot.ly'.
Run help on this function for more information.
```
|
2019/01/31
|
[
"https://Stackoverflow.com/questions/54469599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9493894/"
] |
You can try this code:
```
Sub Sample()
    ' Define object variables
    Dim listRange As Range
    Dim cellValue As Range
    ' Define other variables
    Dim itemsQuantity As Integer
    Dim stringResult As String
    Dim separator As String
    Dim counter As Integer
    ' Define the range where the options are located
    Set listRange = Range("A1:A4")
    itemsQuantity = listRange.Cells.Count
    counter = 1
    For Each cellValue In listRange
        ' Pick the separator for inner items, the penultimate item and the last item
        Select Case counter
            Case Is < itemsQuantity - 1
                separator = ", "
            Case itemsQuantity - 1
                separator = " And "
            Case Else
                separator = vbNullString
        End Select
        stringResult = stringResult & cellValue.Value & separator
        counter = counter + 1
    Next cellValue
    ' Assemble the final sentence
    stringResult = "You have entered " & stringResult & "."
    MsgBox stringResult
End Sub
```
Customize the `' Define the range where the options are located` portion.
Cheers!
|
Column to Sentence
==================
Features
--------
* At least two cells of data in Range, or else "" is returned.
* Only first column of Range is processed (`Resize`).
Usage in Excel
--------------
[](https://i.stack.imgur.com/FnDPU.jpg)
The Code
--------
```
Function CCE(Range As Range) As String
    Application.Volatile
    Const strFirst = "You have entered "    ' First String
    Const strDEL = ", "                     ' Delimiter
    Const strDELLast = " and "              ' Last Delimiter
    Const strLast = "."                     ' Last String
    Dim vnt1 As Variant                     ' Source Array
    Dim vnt0 As Variant                     ' Zero Array
    Dim i As Long                           ' Arrays Row Counter
    ' Copy Source Range's first column to 2D 1-based 1-column Source Array.
    vnt1 = Range.Resize(, 1)
    ' Note: Join can be used only on a 0-based 1D array.
    ' Resize Zero Array to hold all data from Source Array.
    ReDim vnt0(UBound(vnt1) - 1)
    ' Copy data from Source Array to Zero Array.
    For i = 1 To UBound(vnt1)
        If vnt1(i, 1) = "" Then Exit For
        vnt0(i - 1) = vnt1(i, 1)
    Next
    ' If no "" was found, "i" has to be at least 3, ensuring that
    ' Source Range contains at least 2 cells.
    If i < 3 Then Exit Function
    ReDim Preserve vnt0(i - 2)
    ' Join data from Zero Array to CCE.
    CCE = Join(vnt0, strDEL)
    ' Replace last occurrence of strDEL with strDELLast.
    CCE = WorksheetFunction.Replace( _
        CCE, InStrRev(CCE, strDEL), Len(strDEL), strDELLast)
    ' Add First and Last Strings.
    CCE = strFirst & CCE & strLast
End Function
```
|
54,469,599
|
Here's the code, it's from <https://plot.ly/python/line-and-scatter/>
=====================================================================
```
import plotly.plotly as py
import plotly.graph_objs as go
# Create random data with numpy
import numpy as np
N = 100
random_x = np.linspace(0, 1, N)
random_y0 = np.random.randn(N)+5
random_y1 = np.random.randn(N)
random_y2 = np.random.randn(N)-5
# Create traces
trace0 = go.Scatter(
x = random_x,
y = random_y0,
mode = 'markers',
name = 'markers'
)
trace1 = go.Scatter(
x = random_x,
y = random_y1,
mode = 'lines+markers',
name = 'lines+markers'
)
trace2 = go.Scatter(
x = random_x,
y = random_y2,
mode = 'lines',
name = 'lines'
)
data = [trace0, trace1, trace2]
py.iplot(data, filename='scatter-mode')
```
When I copy and paste it into jupyter notebook I get the following error:
PlotlyError: Because you didn't supply a 'file\_id' in the call, we're assuming you're trying to snag a figure from a url. You supplied the url, '', we expected it to start with '<https://plot.ly>'.
Run help on this function for more information.
whole error:
```
Aw, snap! We didn't get a username with your request.
Don't have an account? https://plot.ly/api_signup
Questions? accounts@plot.ly
---------------------------------------------------------------------------
PlotlyError Traceback (most recent call last)
<ipython-input-7-70bd62361f83> in <module>()
27
28 data = [trace0, trace1, trace2]
---> 29 py.iplot(data, filename='scatter-mode')
c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\plotly\plotly.py in iplot(figure_or_data, **plot_options)
162 embed_options['height'] = str(embed_options['height']) + 'px'
163
--> 164 return tools.embed(url, **embed_options)
165
166
c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in embed(file_owner_or_url, file_id, width, height)
394 else:
395 url = file_owner_or_url
--> 396 return PlotlyDisplay(url, width, height)
397 else:
398 if (get_config_defaults()['plotly_domain']
c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in __init__(self, url, width, height)
1438 def __init__(self, url, width, height):
1439 self.resource = url
-> 1440 self.embed_code = get_embed(url, width=width, height=height)
1441 super(PlotlyDisplay, self).__init__(data=self.embed_code)
1442
c:\users\appdata\local\programs\python\python37-32\lib\site-packages\plotly\tools.py in get_embed(file_owner_or_url, file_id, width, height)
299 "'{1}'."
300 "\nRun help on this function for more information."
--> 301 "".format(url, plotly_rest_url))
302 urlsplit = six.moves.urllib.parse.urlparse(url)
303 file_owner = urlsplit.path.split('/')[1].split('~')[1]
PlotlyError: Because you didn't supply a 'file_id' in the call, we're assuming you're trying to snag a figure from a url. You supplied the url, '', we expected it to start with 'https://plot.ly'.
Run help on this function for more information.
```
|
2019/01/31
|
[
"https://Stackoverflow.com/questions/54469599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9493894/"
] |
You can try this code:
```
Sub Sample()
    ' Define object variables
    Dim listRange As Range
    Dim cellValue As Range
    ' Define other variables
    Dim itemsQuantity As Integer
    Dim stringResult As String
    Dim separator As String
    Dim counter As Integer
    ' Define the range where the options are located
    Set listRange = Range("A1:A4")
    itemsQuantity = listRange.Cells.Count
    counter = 1
    For Each cellValue In listRange
        ' Pick the separator for inner items, the penultimate item and the last item
        Select Case counter
            Case Is < itemsQuantity - 1
                separator = ", "
            Case itemsQuantity - 1
                separator = " And "
            Case Else
                separator = vbNullString
        End Select
        stringResult = stringResult & cellValue.Value & separator
        counter = counter + 1
    Next cellValue
    ' Assemble the final sentence
    stringResult = "You have entered " & stringResult & "."
    MsgBox stringResult
End Sub
```
Customize the `' Define the range where the options are located` portion.
Cheers!
|
**Array solution via `Join` with simple transposition**
* Your post assumes a flexible range in column `A:A`, so the first step `[1]` gets the last row number and defines the data range.
* In step `[2]` you assign the found data range to an **array** which has to be variant. The `Application.Transpose` function changes the original column data to a "flat" array in only *one code line* and reduces its 2-dim default dimension to a simple 1-dim array. Furthermore the last element is simply *enriched* by insertion of " and ". This allows you to avoid a complicated split & find action.
* Step `[3]` *concatenates* the 1-dim array via the **`Join`** function, inserting a user-defined delimiter (here a comma ","). Finally the superfluous comma before " and " is removed by replacing ", and " with " and ".
* Step `[4]` displays the resulting message box.
**Example code**
```
Option Explicit ' declaration head of your code module

Sub displayMsg()
    ' [0] declare constants and variables
    Const LNK$ = " and ", COMMA$ = ","              ' define linking constant " and " plus comma delimiter
    Dim v As Variant, msg$, lastRow&                ' variant datafield array, message string, last row number
    Dim ws As Worksheet, rng As Range               ' declare worksheet and range objects *)
    Set ws = ThisWorkbook.Worksheets("MySheetName") ' << change to your sheet name *)
    ' [1] define flexible range object in column A:A via last row number
    lastRow = ws.Range("A" & ws.Rows.Count).End(xlUp).Row
    Set rng = ws.Range("A1:A" & lastRow)            ' e.g. A1:A4, if n = 4
    ' [2] get 2-dim column data to "flat" 1-dim array
    v = Application.Transpose(rng)                  ' read into array and make it "flat"
    v(UBound(v)) = LNK & v(UBound(v))               ' insert " and " into last array element
    ' [3] concatenate elements and delete the superfluous comma before " and "
    msg = Replace(Join(v, COMMA), COMMA & LNK, LNK) ' get wanted message string
    ' [4] display message
    MsgBox "You have entered " & msg & ".", vbInformation, UBound(v) & " elements"
End Sub
```
**Alternative referencing**
\*) Instead of referencing a worksheet `ws` by e.g. `ThisWorkbook.Worksheets("MySheetName")`, you can simply use the worksheet's **CodeName** as listed in the VB Editor (without declaring `ws` or setting it), coding as follows:
```
' [1] define flexible range object in column A:A via last row number
lastRow = Sheet1.Range("A" & Sheet1.Rows.Count).End(xlUp).Row
Set rng = Sheet1.Range("A1:A" & lastRow)
```
*Enjoy it* :-)
|
6,978,204
|
I made a set of XMLRPC client-server programs in python and set up a little method for authenticating my clients. However, after coding pretty much the whole thing, I realized that once a client was authenticated, the flag I had set for it was global in my class i.e. as long as one client is authenticated, all clients are authenticated. I don't know why, but I was under the impression that whenever SimpleXMLRPCServer was connected to by a client, it would create a new set of variables in my program.
Basically the way it's set up now is
```
class someclass:
authenticate(self, username, pass):
#do something here
if(check_for_authentication(username, pass))
self.authenticated=True
other_action(self, vars):
if authenticated:
#do whatever
else:
return "Not authorized."
server=SimpleXMLRPCServer.SimpleXMLRPCServer("0.0.0.0", 8000)
server.register_instance(someclass())
server.serve_forever()
```
I need either a way to hack this into what I am looking for (i.e. the authenticated flag needs to be set for each client that connects), or another protocol that can do this more easily. After some searching I have been looking at Twisted, but since this is already written, I'd rather modify it than rewrite it. I know for now I could just always get the username and password from the client, but in the interest of resources (having to authenticate on every request) and saving bandwidth (which some of my clients have in very limited quantities), I'd rather not do that.
Also, this is my first time trying to secure something like this(and I am not trained in internet security), so if I am overlooking some glaring error in my logic, please tell me. Basically, I can't have someone sending me fake variables in "other\_actions"
|
2011/08/08
|
[
"https://Stackoverflow.com/questions/6978204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/400612/"
] |
Something like this would work:
```
class SomeClass(object):
    authenticated = {}

    def authenticate(self, username, password):
        # do something here
        if check_for_authentication(username, password):
            # make_unique_token can probably be just a hash
            # of the millisecond time and the username
            self.authenticated[make_unique_token(username)] = True

    def other_action(self, vars):
        # This will return True if the user is authenticated
        # and None otherwise, which evaluates to False
        if self.authenticated.get(vars.get('authentication-token')):
            # do whatever
            pass
        else:
            return "Not authorized."

server = SimpleXMLRPCServer.SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_instance(SomeClass())
server.serve_forever()
You just need to pass them an authentication token once they've logged in.
I assume you know you can't actually use `pass` as a variable name. Please remember to accept answers to your questions (I noticed you haven't for your last several).
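The `make_unique_token` helper above is hypothetical; one way it could be sketched on Python 3 (the exact recipe here, a hash over the username, the current time and some randomness, is an assumption):

```python
import hashlib
import secrets
import time

def make_unique_token(username):
    """Derive an unguessable per-login token (sketch).

    Mixing in secrets.token_hex makes the token unpredictable even if
    two logins for the same user land on the same timestamp.
    """
    raw = "%s:%f:%s" % (username, time.time(), secrets.token_hex(16))
    return hashlib.sha256(raw.encode()).hexdigest()
```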
|
You have to decide. If you really want to use one instance for all clients, you have to store the "authenticated" state somewhere else. I am not familiar with SimpleXMLRPCServer, but if you could get the connection object somewhere, or at least its source address, you could maintain a set() in which all authenticated clients/connections are registered.
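A sketch of that idea — a small registry keyed by whatever per-client value you can obtain (using a source address here is an assumption about what the server exposes):

```python
class AuthRegistry:
    """Track which client addresses have authenticated (sketch)."""

    def __init__(self):
        self._authed = set()

    def login(self, addr, username, password, check):
        # `check` is the caller-supplied credential checker
        if check(username, password):
            self._authed.add(addr)
            return True
        return False

    def is_authenticated(self, addr):
        return addr in self._authed

    def logout(self, addr):
        self._authed.discard(addr)
```

Each RPC method would then consult `is_authenticated(client_addr)` instead of a single shared boolean flag.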
|
70,102,585
|
I'm [using Nikola](https://getnikola.com/), a static website generator, to build a website. I am automating its building through [Github Actions](https://github.com/getnikola/nikola-action). I also wanted to use [Pandoc](https://pandoc.org) to help convert my markdown to html, but I noted that Pandoc was not included in the original action. Therefore, I had to try to figure out myself how to include it. However, I've been thwarted time and again by `FileNotFound` errors.
First, I tried to edit the action so that it installed Pandoc on the Ubuntu environment. Below is my edited version of the action. I only added the `Install Pandoc on Ubuntu` step.
```yaml
on: [push]
jobs:
nikola-build:
runs-on: ubuntu-latest
steps:
- name: Install Pandoc on Ubuntu
run: sudo apt-get install -y pandoc
- name: Check out
uses: actions/checkout@v2
- name: Build and Deploy Nikola
uses: getnikola/nikola-action@v3
with:
dry_run: false
```
When this failed again, informing me that Pandoc could not be found, I added a `requirements.txt` file to my repository:
```
Pandoc
```
I tried running the action again. Both installations (the action step I wrote and `pip install pandoc`) ran without any issues and were successful. And yet, when it came to the step where Nikola starts to build the website, rendering fails no matter what is done, because Pandoc cannot be found:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/nikola/plugins/compile/pandoc.py", line 76, in compile
subprocess.check_call(['pandoc', '-o', dest, source] + self._get_pandoc_options(source))
File "/usr/local/lib/python3.8/subprocess.py", line 359, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/local/lib/python3.8/subprocess.py", line 340, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/local/lib/python3.8/subprocess.py", line 858, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/lib/python3.8/subprocess.py", line 1704, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'pandoc'
```
I've looked absolutely everywhere for solutions to similar problems, but they are few and outdated. I would greatly appreciate any insight into this problem, what is at fault, what I can do to fix it, etc.
|
2021/11/24
|
[
"https://Stackoverflow.com/questions/70102585",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17501797/"
] |
`pandoc` is not a Python package. It is a separate and very powerful command line tool. `nikola` invokes the command line tool to do its work. You need to install it using the `sudo apt install pandoc` command line that they suggest.
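As a sketch, you can also fail fast from Python before the build starts by checking whether the binary is on the PATH (the error message and the suggested apt command are illustrative):

```python
import shutil

def require_pandoc():
    """Fail fast with a clear message if the pandoc binary is not on PATH."""
    path = shutil.which("pandoc")
    if path is None:
        raise FileNotFoundError(
            "pandoc executable not found on PATH; install it with e.g. "
            "'sudo apt-get install -y pandoc'"
        )
    return path
```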
|
I discovered that the system was unable to find Pandoc because the entire project is built inside a Docker container; my earlier attempts had installed Pandoc on the host system itself, which is why they failed. I was able to solve the problem by modifying the [shell script](https://github.com/getnikola/nikola-action/blob/master/entrypoint.sh) to install Pandoc inside the container.
|
25,851,090
|
The results from the code below in Python 2.7 struck me as a contradiction. The `is` operator is supposed to work with object identity and so is `id`. But their results diverge when I'm looking at a user-defined method. Why is that?
```
py-mach >>class Hello(object):
... def hello():
... pass
...
py-mach >>Hello.hello is Hello.hello
False
py-mach >>id(Hello.hello) - id(Hello.hello)
0
```
I found the following excerpt from the description of the [Python data model](https://docs.python.org/2/reference/datamodel.html) somewhat useful. But it didn't really make everything clear. Why does the `id` function return the same integer if the user-defined method objects are constructed anew each time?
>
> User-defined method objects may be created when getting an attribute of a class (perhaps via an instance of that class), if that attribute is a user-defined function object, an unbound user-defined method object, or a class method object. When the attribute is a user-defined method object, a new method object is only created if the class from which it is being retrieved is the same as, or a derived class of, the class stored in the original method object; otherwise, the original method object is used as it is.
>
>
>
|
2014/09/15
|
[
"https://Stackoverflow.com/questions/25851090",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1988435/"
] |
The Python documentation for the [id function](https://docs.python.org/2/library/functions.html#id) states:
>
> Return the "identity" of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. **Two objects with non-overlapping lifetimes may have the same id() value.**
>
>
>
(emphasis mine)
When you do `id(Hello.hello) == id(Hello.hello)`, the method object is created only briefly and is considered "dead" after the first call to 'id'. Because of the call to `id`, you only need `Hello.hello` to be alive for a short period of time -- enough to obtain the id. Once you get that id, the object is dead and the second `Hello.hello` can reuse that address, which makes it appear as if the two objects have the same id.
This is in contrast to doing `Hello.hello is Hello.hello` -- both instances have to live long enough to be compared to each other, so you end up having two live instances.
If you instead tried:
```
>>> a = Hello.hello
>>> b = Hello.hello
>>> id(a) == id(b)
False
```
...you'd get the expected value of `False`.
|
This is a "simple" consequence of how the memory allocator works. It is very similar to the case:
```
>>> id([]) == id([])
True
```
Basically, Python doesn't guarantee that ids don't get reused -- it only guarantees that an id is unique *as long as the object is alive*. In this case, the first object passed to `id` is dead after the call to `id`, and (C)Python re-uses that `id` when creating the second object.
Never rely on this behavior as it is *allowed* by the language reference, but certainly not *required*.
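The same effect can be sketched on Python 3 with bound methods (on Python 3, `Hello.hello` itself is a plain function, so an instance attribute access is needed to create fresh method objects; whether the two `id()` calls happen to coincide is a CPython implementation detail, so it is observed but not asserted):

```python
class Hello:
    def hello(self):
        pass

h = Hello()

# Two attribute accesses create two bound-method objects; while both are
# referenced they are alive simultaneously, so `is` must be False.
a = h.hello
b = h.hello
same_object = a is b   # False: two distinct live objects
equal_value = a == b   # True: same underlying function and instance

# With no references held, the first method object can die before the
# second is created, so CPython may (but need not) hand out the same id.
ids_matched = id(h.hello) == id(h.hello)
```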
|
58,040,654
|
I have this JSON dataset. From this dataset I only want the "column\_names" key and its values and the "data" key and its values. Each value of column\_names corresponds to a value in data. How do I combine only these two keys in Python for analysis?
```
{"dataset":{"id":42635350,"dataset_code":"MSFT","column_names":
["Date","Open","High","Low","Close","Volume","Dividend","Split",
"Adj_Open","Adj_High","Adj_Low","Adj_Close","Adj_Volume"],
"frequency":"daily","type":"Time Series",
"data":[["2017-12-28",85.9,85.93,85.55,85.72,10594344.0,0.0,1.0,83.1976157998082,
83.22667201021558,82.85862667838872,83.0232785373639,10594344.0],
["2017-12-27",85.65,85.98,85.215,85.71,14678025.0,0.0,1.0,82.95548071308001,
83.27509902756123,82.53416566217294,83.01359313389476,14678025.0]
for cnames in data['dataset']['column_names']:
print(cnames)
for cdata in data['dataset']['data']:
print(cdata)
```
The for loops give me the column names and data values I want, but I am not sure how to combine them into a Python data frame for analysis.
Ref: the above piece of code is from the Quandl website.
|
2019/09/21
|
[
"https://Stackoverflow.com/questions/58040654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10331731/"
] |
```
data = {
    "dataset": {
        "id": 42635350,
        "dataset_code": "MSFT",
        "column_names": ["Date", "Open", "High", "Low", "Close", "Volume", "Dividend", "Split",
                         "Adj_Open", "Adj_High", "Adj_Low", "Adj_Close", "Adj_Volume"],
        "frequency": "daily",
        "type": "Time Series",
        "data": [
            ["2017-12-28", 85.9, 85.93, 85.55, 85.72, 10594344.0, 0.0, 1.0, 83.1976157998082,
             83.22667201021558, 82.85862667838872, 83.0232785373639, 10594344.0],
            ["2017-12-27", 85.65, 85.98, 85.215, 85.71, 14678025.0, 0.0, 1.0, 82.95548071308001,
             83.27509902756123, 82.53416566217294, 83.01359313389476, 14678025.0]
        ]
    }
}
```
The following code should do what you want:
```
import pandas as pd

# build an empty frame with the right columns, then fill it row by row
df = pd.DataFrame(columns=data['dataset']['column_names'])
for i, data_row in enumerate(data['dataset']['data']):
    df.loc[i] = data_row
```
|
The following snippet should work for you
```py
import pandas as pd
df = pd.DataFrame(data['dataset']['data'],columns=data['dataset']['column_names'])
```
Check the following link to learn more
<https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html>
|
58,040,654
|
I have this JSON dataset. From this dataset I only want the "column\_names" key and its values and the "data" key and its values. Each value of column\_names corresponds to a value in data. How do I combine only these two keys in Python for analysis?
```
{"dataset":{"id":42635350,"dataset_code":"MSFT","column_names":
["Date","Open","High","Low","Close","Volume","Dividend","Split",
"Adj_Open","Adj_High","Adj_Low","Adj_Close","Adj_Volume"],
"frequency":"daily","type":"Time Series",
"data":[["2017-12-28",85.9,85.93,85.55,85.72,10594344.0,0.0,1.0,83.1976157998082,
83.22667201021558,82.85862667838872,83.0232785373639,10594344.0],
["2017-12-27",85.65,85.98,85.215,85.71,14678025.0,0.0,1.0,82.95548071308001,
83.27509902756123,82.53416566217294,83.01359313389476,14678025.0]
for cnames in data['dataset']['column_names']:
print(cnames)
for cdata in data['dataset']['data']:
print(cdata)
```
The for loops give me the column names and data values I want, but I am not sure how to combine them into a Python data frame for analysis.
Ref: the above piece of code is from the Quandl website.
|
2019/09/21
|
[
"https://Stackoverflow.com/questions/58040654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10331731/"
] |
```
data = {
    "dataset": {
        "id": 42635350,
        "dataset_code": "MSFT",
        "column_names": ["Date", "Open", "High", "Low", "Close", "Volume", "Dividend", "Split",
                         "Adj_Open", "Adj_High", "Adj_Low", "Adj_Close", "Adj_Volume"],
        "frequency": "daily",
        "type": "Time Series",
        "data": [
            ["2017-12-28", 85.9, 85.93, 85.55, 85.72, 10594344.0, 0.0, 1.0, 83.1976157998082,
             83.22667201021558, 82.85862667838872, 83.0232785373639, 10594344.0],
            ["2017-12-27", 85.65, 85.98, 85.215, 85.71, 14678025.0, 0.0, 1.0, 82.95548071308001,
             83.27509902756123, 82.53416566217294, 83.01359313389476, 14678025.0]
        ]
    }
}
```
The following code should do what you want:
```
import pandas as pd

# build an empty frame with the right columns, then fill it row by row
df = pd.DataFrame(columns=data['dataset']['column_names'])
for i, data_row in enumerate(data['dataset']['data']):
    df.loc[i] = data_row
```
|
```
cols = data['dataset']['column_names']
data = data['dataset']['data']
```
It's quite simple
```
labeled_data = [dict(zip(cols, d)) for d in data]
```
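Applied to the question's columns and rows (truncated here for brevity), the comprehension yields one dict per data row:

```python
# a small slice of the question's column names and data rows
cols = ["Date", "Open", "High"]
rows = [["2017-12-28", 85.9, 85.93],
        ["2017-12-27", 85.65, 85.98]]

# zip pairs each column name with the value in the same position
labeled = [dict(zip(cols, r)) for r in rows]
```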
|
21,585,730
|
I want to launch a program from python, in this case abaqus (a finite element analysis software), using:
```
os.system('abaqus job=' + JobName + ' user=' + UELname + ' interactive')
```
After, say, 5 minutes of the program running, I want to execute a Python script that monitors some output files generated by abaqus. If a certain condition is met, then the Python script will terminate the abaqus job. There's a catch here. To read the output files I need to run the Python script from abaqus:
```
os.system('abaqus cae noGUI=results2.py')
```
My question is this:
Can I do this simply by:
```
os.system('abaqus job=' + JobName + ' user=' + UELname + ' interactive')
time.sleep(300)
os.system('abaqus cae noGUI=results2.py')
```
I know that using the `interactive` keyword makes the system wait for the abaqus job to finish before doing other stuff. Therefore, I assume this is not as simple as I'd like it to be. Any ideas?
|
2014/02/05
|
[
"https://Stackoverflow.com/questions/21585730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1262767/"
] |
Did you try [subprocess](http://docs.python.org/2/library/subprocess.html) module?
|
The logic seems fine. I would suggest you use [subprocess](http://docs.python.org/2/library/subprocess.html) instead of os.system. Since you are calling shell commands, you can run all of them at once like this:
```
cmdToRun = 'abaqus job=' + JobName + ' user=' + UELname + ' interactive; sleep 300; abaqus cae noGUI=results2.py'
```
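A rough non-blocking sketch with `subprocess` (the command names and timings come from the question; starting the job without waiting on it, rather than using `interactive`, is an assumption about how you want to drive abaqus):

```python
import subprocess
import time

def run_with_monitor(job_cmd, monitor_cmd, delay, poll=1.0):
    """Start `job_cmd` without blocking, wait `delay` seconds, then run
    `monitor_cmd` every `poll` seconds until the job exits.

    `monitor_cmd` would be e.g. ['abaqus', 'cae', 'noGUI=results2.py'].
    """
    job = subprocess.Popen(job_cmd)   # returns immediately, job runs in background
    time.sleep(delay)
    while job.poll() is None:         # poll() is None while the job is running
        subprocess.call(monitor_cmd)  # blocks only for the monitor script
        time.sleep(poll)
    return job.returncode
```

The monitor script itself can then terminate the job (e.g. via `job.terminate()` if it were wired through) when its condition is met.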
|
749,680
|
I tried using the Process class as always but that didn't work. All I am doing is trying to run a Python file like someone double clicked it.
Is it possible?
EDIT:
Sample code:
```
string pythonScript = @"C:\callme.py";
string workDir = System.IO.Path.GetDirectoryName ( pythonScript );
Process proc = new Process ( );
proc.StartInfo.WorkingDirectory = workDir;
proc.StartInfo.UseShellExecute = true;
proc.StartInfo.FileName = pythonScript;
proc.StartInfo.Arguments = "1, 2, 3";
```
I don't get any error, but the script isn't run. When I run the script manually, I see the result.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/749680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51816/"
] |
Here's my code for executing a Python script from C#, with redirected standard input and output (I pass info in via the standard input), copied from an example on the web somewhere. The Python location is hard-coded, as you can see; feel free to refactor.
```
private static string CallPython(string script, string pyArgs, string workingDirectory, string[] standardInput)
{
    ProcessStartInfo startInfo;
    Process process;
    string ret = "";
    try
    {
        startInfo = new ProcessStartInfo(@"c:\python25\python.exe");
        startInfo.WorkingDirectory = workingDirectory;
        if (pyArgs.Length != 0)
            startInfo.Arguments = script + " " + pyArgs;
        else
            startInfo.Arguments = script;
        startInfo.UseShellExecute = false;
        startInfo.CreateNoWindow = true;
        startInfo.RedirectStandardOutput = true;
        startInfo.RedirectStandardError = true;
        startInfo.RedirectStandardInput = true;
        process = new Process();
        process.StartInfo = startInfo;
        process.Start();
        // write to standard input
        foreach (string si in standardInput)
        {
            process.StandardInput.WriteLine(si);
        }
        // drain standard error first; fail if the script wrote anything there
        string s;
        string err = "";
        while ((s = process.StandardError.ReadLine()) != null)
        {
            err += s;
        }
        if (err.Length != 0)
            throw new System.Exception(err);
        while ((s = process.StandardOutput.ReadLine()) != null)
        {
            ret += s;
        }
        return ret;
    }
    catch (System.Exception ex)
    {
        string problem = ex.Message;
        return problem;
    }
}
```
|
[Process.Start](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.start.aspx) should work. if it doesn't, would you post your code and the error you are getting?
|
749,680
|
I tried using the Process class as always but that didn't work. All I am doing is trying to run a Python file like someone double clicked it.
Is it possible?
EDIT:
Sample code:
```
string pythonScript = @"C:\callme.py";
string workDir = System.IO.Path.GetDirectoryName ( pythonScript );
Process proc = new Process ( );
proc.StartInfo.WorkingDirectory = workDir;
proc.StartInfo.UseShellExecute = true;
proc.StartInfo.FileName = pythonScript;
proc.StartInfo.Arguments = "1, 2, 3";
```
I don't get any error, but the script isn't run. When I run the script manually, I see the result.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/749680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51816/"
] |
[Process.Start](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.start.aspx) should work. if it doesn't, would you post your code and the error you are getting?
|
You forgot proc.Start() at the end. The code you have should work if you call Start().
|
749,680
|
I tried using the Process class as always but that didn't work. All I am doing is trying to run a Python file like someone double clicked it.
Is it possible?
EDIT:
Sample code:
```
string pythonScript = @"C:\callme.py";
string workDir = System.IO.Path.GetDirectoryName ( pythonScript );
Process proc = new Process ( );
proc.StartInfo.WorkingDirectory = workDir;
proc.StartInfo.UseShellExecute = true;
proc.StartInfo.FileName = pythonScript;
proc.StartInfo.Arguments = "1, 2, 3";
```
I don't get any error, but the script isn't run. When I run the script manually, I see the result.
|
2009/04/14
|
[
"https://Stackoverflow.com/questions/749680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51816/"
] |
Here's my code for executing a Python script from C#, with redirected standard input and output (I pass info in via standard input), copied from an example somewhere on the web. The Python location is hard-coded, as you can see; that can be refactored.
```
private static string CallPython(string script, string pyArgs, string workingDirectory, string[] standardInput)
{
ProcessStartInfo startInfo;
Process process;
string ret = "";
try
{
startInfo = new ProcessStartInfo(@"c:\python25\python.exe");
startInfo.WorkingDirectory = workingDirectory;
if (pyArgs.Length != 0)
startInfo.Arguments = script + " " + pyArgs;
else
startInfo.Arguments = script;
startInfo.UseShellExecute = false;
startInfo.CreateNoWindow = true;
startInfo.RedirectStandardOutput = true;
startInfo.RedirectStandardError = true;
startInfo.RedirectStandardInput = true;
process = new Process();
process.StartInfo = startInfo;
process.Start();
// write to standard input
foreach (string si in standardInput)
{
process.StandardInput.WriteLine(si);
}
string s;
while ((s = process.StandardError.ReadLine()) != null)
{
ret += s;
throw new System.Exception(ret);
}
while ((s = process.StandardOutput.ReadLine()) != null)
{
ret += s;
}
return ret;
}
catch (System.Exception ex)
{
string problem = ex.Message;
return problem;
}
}
```
|
You forgot proc.Start() at the end. The code you have should work if you call Start().
|
20,503,671
|
Ok so I'm trying to run a C program from a python script. Currently I'm using a test C program:
```
#include <stdio.h>
int main() {
while (1) {
printf("2000\n");
sleep(1);
}
return 0;
}
```
To simulate the program that I will be using, which takes readings from a sensor constantly.
Then I'm trying to read the output (in this case `"2000"`) from the C program with subprocess in python:
```
#!usr/bin/python
import subprocess
process = subprocess.Popen("./main", stdout=subprocess.PIPE)
while True:
for line in iter(process.stdout.readline, ''):
print line,
```
but this is not working. From using print statements, it runs the `.Popen` line then waits at `for line in iter(process.stdout.readline, ''):`, until I press Ctrl-C.
Why is this? This is exactly what most examples that I've seen have as their code, and yet it does not read the file.
Is there a way of making it run only when there is something to be read?
|
2013/12/10
|
[
"https://Stackoverflow.com/questions/20503671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2836175/"
] |
Your program isn't hung, it is just running very slowly. Your program is using buffered output; the `"2000\n"` data is not written to stdout immediately, but will eventually make it. In your case, it might take `BUFSIZ/strlen("2000\n")` seconds (probably 1638 seconds, with a typical `BUFSIZ` of 8192) to fill the buffer.
After this line:
```
printf("2000\n");
```
add
```
fflush(stdout);
```
|
See [readline docs](http://docs.python.org/2/library/io.html#io.TextIOBase.readline).
Your code:
```
process.stdout.readline
```
Is waiting for EOF or a newline.
I cannot tell what you are ultimately trying to do, but adding a newline to your printf, e.g., `printf("2000\n");`, should at least get you started.
|
20,503,671
|
Ok so I'm trying to run a C program from a python script. Currently I'm using a test C program:
```
#include <stdio.h>
int main() {
while (1) {
printf("2000\n");
sleep(1);
}
return 0;
}
```
To simulate the program that I will be using, which takes readings from a sensor constantly.
Then I'm trying to read the output (in this case `"2000"`) from the C program with subprocess in python:
```
#!usr/bin/python
import subprocess
process = subprocess.Popen("./main", stdout=subprocess.PIPE)
while True:
for line in iter(process.stdout.readline, ''):
print line,
```
but this is not working. From using print statements, it runs the `.Popen` line then waits at `for line in iter(process.stdout.readline, ''):`, until I press Ctrl-C.
Why is this? This is exactly what most examples that I've seen have as their code, and yet it does not read the file.
Is there a way of making it run only when there is something to be read?
|
2013/12/10
|
[
"https://Stackoverflow.com/questions/20503671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2836175/"
] |
It is a block buffering issue.
What follows is an extended for your case version of my answer to [Python: read streaming input from subprocess.communicate()](https://stackoverflow.com/a/17698359/4279) question.
Fix stdout buffer in C program directly
---------------------------------------
`stdio`-based programs as a rule are line buffered if they are running interactively in a terminal, and block buffered when their stdout is redirected to a pipe. In the latter case, you won't see new lines until the buffer fills up or is flushed.
To avoid calling `fflush()` after each `printf()` call, you could force line buffered output by calling in a C program at the very beginning:
```
setvbuf(stdout, (char *) NULL, _IOLBF, 0); /* make line buffered stdout */
```
As soon as a newline is printed the buffer is flushed in this case.
### Or fix it without modifying the source of C program
There is `stdbuf` utility that allows you to change buffering type without modifying the source code e.g.:
```
from subprocess import Popen, PIPE
process = Popen(["stdbuf", "-oL", "./main"], stdout=PIPE, bufsize=1)
for line in iter(process.stdout.readline, b''):
print line,
process.communicate() # close process' stream, wait for it to exit
```
There are also other utilities available, see [Turn off buffering in pipe](https://unix.stackexchange.com/q/25372/1321).
Or use pseudo-TTY
-----------------
To trick the subprocess into thinking that it is running interactively, you could use [`pexpect` module](http://pexpect.readthedocs.org/en/latest/) or its analogs, for code examples that use `pexpect` and `pty` modules, see [Python subprocess readlines() hangs](https://stackoverflow.com/a/12471855/4279). Here's a variation on the `pty` example provided there (it should work on Linux):
```
#!/usr/bin/env python
import os
import pty
import sys
from select import select
from subprocess import Popen, STDOUT
master_fd, slave_fd = pty.openpty() # provide tty to enable line buffering
process = Popen("./main", stdin=slave_fd, stdout=slave_fd, stderr=STDOUT,
bufsize=0, close_fds=True)
timeout = .1 # ugly but otherwise `select` blocks on process' exit
# code is similar to _copy() from pty.py
with os.fdopen(master_fd, 'r+b', 0) as master:
input_fds = [master, sys.stdin]
while True:
fds = select(input_fds, [], [], timeout)[0]
if master in fds: # subprocess' output is ready
data = os.read(master_fd, 512) # <-- doesn't block, may return less
if not data: # EOF
input_fds.remove(master)
else:
os.write(sys.stdout.fileno(), data) # copy to our stdout
if sys.stdin in fds: # got user input
data = os.read(sys.stdin.fileno(), 512)
if not data:
input_fds.remove(sys.stdin)
else:
master.write(data) # copy it to subprocess' stdin
if not fds: # timeout in select()
if process.poll() is not None: # subprocess ended
# and no output is buffered <-- timeout + dead subprocess
assert not select([master], [], [], 0)[0] # race is possible
os.close(slave_fd) # subproces don't need it anymore
break
rc = process.wait()
print("subprocess exited with status %d" % rc)
```
### Or use `pty` via `pexpect`
`pexpect` wraps `pty` handling into [higher level interface](http://pexpect.readthedocs.org/en/latest/overview.html):
```
#!/usr/bin/env python
import pexpect
child = pexpect.spawn("./main")
for line in child:
print line,
child.close()
```
[Q: Why not just use a pipe (popen())?](http://pexpect.readthedocs.org/en/latest/FAQ.html#whynotpipe) explains why pseudo-TTY is useful.
|
See [readline docs](http://docs.python.org/2/library/io.html#io.TextIOBase.readline).
Your code:
```
process.stdout.readline
```
Is waiting for EOF or a newline.
I cannot tell what you are ultimately trying to do, but adding a newline to your printf, e.g., `printf("2000\n");`, should at least get you started.
|
20,503,671
|
Ok so I'm trying to run a C program from a python script. Currently I'm using a test C program:
```
#include <stdio.h>
int main() {
while (1) {
printf("2000\n");
sleep(1);
}
return 0;
}
```
To simulate the program that I will be using, which takes readings from a sensor constantly.
Then I'm trying to read the output (in this case `"2000"`) from the C program with subprocess in python:
```
#!usr/bin/python
import subprocess
process = subprocess.Popen("./main", stdout=subprocess.PIPE)
while True:
for line in iter(process.stdout.readline, ''):
print line,
```
but this is not working. From using print statements, it runs the `.Popen` line then waits at `for line in iter(process.stdout.readline, ''):`, until I press Ctrl-C.
Why is this? This is exactly what most examples that I've seen have as their code, and yet it does not read the file.
Is there a way of making it run only when there is something to be read?
|
2013/12/10
|
[
"https://Stackoverflow.com/questions/20503671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2836175/"
] |
It is a block buffering issue.
What follows is an extended for your case version of my answer to [Python: read streaming input from subprocess.communicate()](https://stackoverflow.com/a/17698359/4279) question.
Fix stdout buffer in C program directly
---------------------------------------
`stdio`-based programs as a rule are line buffered if they are running interactively in a terminal, and block buffered when their stdout is redirected to a pipe. In the latter case, you won't see new lines until the buffer fills up or is flushed.
To avoid calling `fflush()` after each `printf()` call, you could force line buffered output by calling in a C program at the very beginning:
```
setvbuf(stdout, (char *) NULL, _IOLBF, 0); /* make line buffered stdout */
```
As soon as a newline is printed the buffer is flushed in this case.
### Or fix it without modifying the source of C program
There is `stdbuf` utility that allows you to change buffering type without modifying the source code e.g.:
```
from subprocess import Popen, PIPE
process = Popen(["stdbuf", "-oL", "./main"], stdout=PIPE, bufsize=1)
for line in iter(process.stdout.readline, b''):
print line,
process.communicate() # close process' stream, wait for it to exit
```
There are also other utilities available, see [Turn off buffering in pipe](https://unix.stackexchange.com/q/25372/1321).
Or use pseudo-TTY
-----------------
To trick the subprocess into thinking that it is running interactively, you could use [`pexpect` module](http://pexpect.readthedocs.org/en/latest/) or its analogs, for code examples that use `pexpect` and `pty` modules, see [Python subprocess readlines() hangs](https://stackoverflow.com/a/12471855/4279). Here's a variation on the `pty` example provided there (it should work on Linux):
```
#!/usr/bin/env python
import os
import pty
import sys
from select import select
from subprocess import Popen, STDOUT
master_fd, slave_fd = pty.openpty() # provide tty to enable line buffering
process = Popen("./main", stdin=slave_fd, stdout=slave_fd, stderr=STDOUT,
bufsize=0, close_fds=True)
timeout = .1 # ugly but otherwise `select` blocks on process' exit
# code is similar to _copy() from pty.py
with os.fdopen(master_fd, 'r+b', 0) as master:
input_fds = [master, sys.stdin]
while True:
fds = select(input_fds, [], [], timeout)[0]
if master in fds: # subprocess' output is ready
data = os.read(master_fd, 512) # <-- doesn't block, may return less
if not data: # EOF
input_fds.remove(master)
else:
os.write(sys.stdout.fileno(), data) # copy to our stdout
if sys.stdin in fds: # got user input
data = os.read(sys.stdin.fileno(), 512)
if not data:
input_fds.remove(sys.stdin)
else:
master.write(data) # copy it to subprocess' stdin
if not fds: # timeout in select()
if process.poll() is not None: # subprocess ended
# and no output is buffered <-- timeout + dead subprocess
assert not select([master], [], [], 0)[0] # race is possible
os.close(slave_fd) # subproces don't need it anymore
break
rc = process.wait()
print("subprocess exited with status %d" % rc)
```
### Or use `pty` via `pexpect`
`pexpect` wraps `pty` handling into [higher level interface](http://pexpect.readthedocs.org/en/latest/overview.html):
```
#!/usr/bin/env python
import pexpect
child = pexpect.spawn("./main")
for line in child:
print line,
child.close()
```
[Q: Why not just use a pipe (popen())?](http://pexpect.readthedocs.org/en/latest/FAQ.html#whynotpipe) explains why pseudo-TTY is useful.
|
Your program isn't hung, it is just running very slowly. Your program is using buffered output; the `"2000\n"` data is not written to stdout immediately, but will eventually make it. In your case, it might take `BUFSIZ/strlen("2000\n")` seconds (probably 1638 seconds, with a typical `BUFSIZ` of 8192) to fill the buffer.
After this line:
```
printf("2000\n");
```
add
```
fflush(stdout);
```
|
55,352,756
|
From a user-given job description, I need to extract the keywords or phrases, using Python and its libraries. I am open to suggestions and guidance from the community and, in case it's simple, please guide me through it.
Example of user input:
`user_input = "i want a full stack developer. Specialization in python is a must".`
Expected output:
`keywords = ['full stack developer', 'python']`
|
2019/03/26
|
[
"https://Stackoverflow.com/questions/55352756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10486777/"
] |
Well, a good keyword set is a good method, but the key is how to build it. There are many ways to do it.
The simplest one is searching for open keyword sets on the web. That depends on your luck and your knowledge. Your keywords (like "python", "java", "machine learning") are common tags on Stack Overflow and recruitment websites. Don't break the law!
The second one is IE (Information Extraction), which is more complex than the first. There are many algorithms, like "TextRank", "Entropy", "Apriori", "HMM", "TF-IDF", "Conditional Random Fields", and so on.
Good luck.
For matching keywords/phrases, a `Trie` is much faster.
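As a minimal sketch of the keyword-set approach (assuming a small hand-built key list, matched by plain substring search; a trie would only matter at scale):

```python
# Match a hand-built keyword/phrase set against the input by substring search.
user_input = "i want a full stack developer. Specialization in python is a must"
keys = ['python', 'full stack developer', 'java', 'machine learning']

# Lower-case the input so matching is case-insensitive.
keywords = [k for k in keys if k in user_input.lower()]
print(keywords)  # ['python', 'full stack developer']
```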
|
Well, I answered my own question. Thanks anyway to those who replied.
```
user_input = "i want a full stack developer. Specialization in python is a must"
keys = ['python', 'full stack developer', 'java', 'machine learning']
keywords = []
for word in keys:
    if word in user_input:  # keep only the keys that occur in the input
        keywords.append(word)
print(keywords)
```
Output was as expected!
|
55,373,000
|
I wanted to import `train_test_split` to split my dataset into a test dataset and a training dataset but an import error has occurred.
I tried all of these but none of them worked:
```
conda upgrade scikit-learn
pip uninstall scipy
pip3 install scipy
pip uninstall sklearn
pip uninstall scikit-learn
pip install sklearn
```
Here is the code which yields the error:
```
from sklearn.preprocessing import train_test_split
X_train, X_test, y_train, y_test =
train_test_split(X,y,test_size=0.2,random_state=0)
```
And here is the error:
```
from sklearn.preprocessing import train_test_split
Traceback (most recent call last):
File "<ipython-input-3-e25c97b1e6d9>", line 1, in <module>
from sklearn.preprocessing import train_test_split
ImportError: cannot import name 'train_test_split' from 'sklearn.preprocessing' (C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\__init__.py)
```
|
2019/03/27
|
[
"https://Stackoverflow.com/questions/55373000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11264930/"
] |
`train_test_split` isn't in `preprocessing`; it is in `model_selection` (and, in old releases, `cross_validation`), so you meant:
```
from sklearn.model_selection import train_test_split
```
Or, on scikit-learn versions older than 0.20, where the deprecated module still exists:
```
from sklearn.cross_validation import train_test_split
```
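A quick sanity check of the corrected import (assumes scikit-learn is installed):

```python
from sklearn.model_selection import train_test_split

# A toy dataset: with test_size=0.2, ten samples split into 8 train / 2 test.
X = list(range(10))
y = [i % 2 for i in range(10)]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
print(len(X_train), len(X_test))  # 8 2
```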
|
`train_test_split` is not present in `preprocessing`. It is present in the `model_selection` module, so try:
```
from sklearn.model_selection import train_test_split
```
and it will work.
|
46,210,757
|
Assume I have a python dictionary with 2 keys.
```
dic = {0:'Hi!', 1:'Hello!'}
```
What I want to do is to extend this dictionary by duplicating itself, but change the key value.
For example, if I have a code
```
dic = {0:'Hi!', 1:'Hello'}
multiplier = 3
def DictionaryExtend(number_of_multiplier, dictionary):
"Function code"
```
then the result should look like
```
>>> DictionaryExtend(multiplier, dic)
>>> dic
>>> dic = {0:'Hi!', 1:'Hello', 2:'Hi!', 3:'Hello', 4:'Hi!', 5:'Hello'}
```
In this case, I changed the key values by adding the multiplier at each duplication step. What's an efficient way of doing this?
Plus, I'm also planning to do the same job for a list variable. I mean, extend a list by duplicating itself and change some values, like the above example. Any suggestion for this would be helpful, too!
|
2017/09/14
|
[
"https://Stackoverflow.com/questions/46210757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3794041/"
] |
It's not immediately clear why you might want to do this. If the keys are always consecutive integers then you probably just want a list.
Anyway, here's a snippet:
```
def dictExtender(multiplier, d):
return dict(zip(range(multiplier * len(d)), list(d.values()) * multiplier))
```
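A quick usage check of that snippet (its definition is repeated here so the example is self-contained):

```python
def dictExtender(multiplier, d):
    # Repeat the dict's values `multiplier` times and re-key them 0..n-1.
    return dict(zip(range(multiplier * len(d)), list(d.values()) * multiplier))

dic = {0: 'Hi!', 1: 'Hello'}
result = dictExtender(3, dic)
print(result)
# {0: 'Hi!', 1: 'Hello', 2: 'Hi!', 3: 'Hello', 4: 'Hi!', 5: 'Hello'}
```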
|
I don't think you need to use inheritance to achieve that. It's also unclear what the keys should be in the resulting dictionary.
If the keys are always consecutive integers, then why not use a list?
```
origin = ['Hi', 'Hello']
extended = origin * 3
extended
>> ['Hi', 'Hello', 'Hi', 'Hello', 'Hi', 'Hello']
extended[4]
>> 'Hi'
```
If you want to perform a different operation with the keys, then simply:
```
mult_key = lambda key: [key,key+2,key+4] # just an example, this can be any custom implementation but beware of duplicate keys
dic = {0:'Hi', 1:'Hello'}
extended = { mkey:dic[key] for key in dic for mkey in mult_key(key) }
extended
>> {0:'Hi', 1:'Hello', 2:'Hi', 3:'Hello', 4:'Hi', 5:'Hello'}
```
|
46,210,757
|
Assume I have a python dictionary with 2 keys.
```
dic = {0:'Hi!', 1:'Hello!'}
```
What I want to do is to extend this dictionary by duplicating itself, but change the key value.
For example, if I have a code
```
dic = {0:'Hi!', 1:'Hello'}
multiplier = 3
def DictionaryExtend(number_of_multiplier, dictionary):
"Function code"
```
then the result should look like
```
>>> DictionaryExtend(multiplier, dic)
>>> dic
>>> dic = {0:'Hi!', 1:'Hello', 2:'Hi!', 3:'Hello', 4:'Hi!', 5:'Hello'}
```
In this case, I changed the key values by adding the multiplier at each duplication step. What's an efficient way of doing this?
Plus, I'm also planning to do the same job for a list variable. I mean, extend a list by duplicating itself and change some values, like the above example. Any suggestion for this would be helpful, too!
|
2017/09/14
|
[
"https://Stackoverflow.com/questions/46210757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3794041/"
] |
You can try `itertools` to repeat the values and `OrderedDict` to maintain input order.
```
import itertools as it
import collections as ct
def extend_dict(multiplier, dict_):
"""Return a dictionary of repeated values."""
return dict(enumerate(it.chain(*it.repeat(dict_.values(), multiplier))))
d = ct.OrderedDict({0:'Hi!', 1:'Hello!'})
multiplier = 3
extend_dict(multiplier, d)
# {0: 'Hi!', 1: 'Hello!', 2: 'Hi!', 3: 'Hello!', 4: 'Hi!', 5: 'Hello!'}
```
---
Regarding handling other collection types, it is not clear what output is desired, but the following modification reproduces the latter and works for lists as well:
```
def extend_collection(multiplier, iterable):
"""Return a collection of repeated values."""
repeat_values = lambda x: it.chain(*it.repeat(x, multiplier))
try:
iterable = iterable.values()
except AttributeError:
result = list(repeat_values(iterable))
else:
result = dict(enumerate(repeat_values(iterable)))
return result
lst = ['Hi!', 'Hello!']
multiplier = 3
extend_collection(multiplier, lst)
# ['Hi!', 'Hello!', 'Hi!', 'Hello!', 'Hi!', 'Hello!']
```
|
I don't think you need to use inheritance to achieve that. It's also unclear what the keys should be in the resulting dictionary.
If the keys are always consecutive integers, then why not use a list?
```
origin = ['Hi', 'Hello']
extended = origin * 3
extended
>> ['Hi', 'Hello', 'Hi', 'Hello', 'Hi', 'Hello']
extended[4]
>> 'Hi'
```
If you want to perform a different operation with the keys, then simply:
```
mult_key = lambda key: [key,key+2,key+4] # just an example, this can be any custom implementation but beware of duplicate keys
dic = {0:'Hi', 1:'Hello'}
extended = { mkey:dic[key] for key in dic for mkey in mult_key(key) }
extended
>> {0:'Hi', 1:'Hello', 2:'Hi', 3:'Hello', 4:'Hi', 5:'Hello'}
```
|
46,210,757
|
Assume I have a python dictionary with 2 keys.
```
dic = {0:'Hi!', 1:'Hello!'}
```
What I want to do is to extend this dictionary by duplicating itself, but change the key value.
For example, if I have a code
```
dic = {0:'Hi!', 1:'Hello'}
multiplier = 3
def DictionaryExtend(number_of_multiplier, dictionary):
"Function code"
```
then the result should look like
```
>>> DictionaryExtend(multiplier, dic)
>>> dic
>>> dic = {0:'Hi!', 1:'Hello', 2:'Hi!', 3:'Hello', 4:'Hi!', 5:'Hello'}
```
In this case, I changed the key values by adding the multiplier at each duplication step. What's an efficient way of doing this?
Plus, I'm also planning to do the same job for a list variable. I mean, extend a list by duplicating itself and change some values, like the above example. Any suggestion for this would be helpful, too!
|
2017/09/14
|
[
"https://Stackoverflow.com/questions/46210757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3794041/"
] |
You can try `itertools` to repeat the values and `OrderedDict` to maintain input order.
```
import itertools as it
import collections as ct
def extend_dict(multiplier, dict_):
"""Return a dictionary of repeated values."""
return dict(enumerate(it.chain(*it.repeat(dict_.values(), multiplier))))
d = ct.OrderedDict({0:'Hi!', 1:'Hello!'})
multiplier = 3
extend_dict(multiplier, d)
# {0: 'Hi!', 1: 'Hello!', 2: 'Hi!', 3: 'Hello!', 4: 'Hi!', 5: 'Hello!'}
```
---
Regarding handling other collection types, it is not clear what output is desired, but the following modification reproduces the latter and works for lists as well:
```
def extend_collection(multiplier, iterable):
"""Return a collection of repeated values."""
repeat_values = lambda x: it.chain(*it.repeat(x, multiplier))
try:
iterable = iterable.values()
except AttributeError:
result = list(repeat_values(iterable))
else:
result = dict(enumerate(repeat_values(iterable)))
return result
lst = ['Hi!', 'Hello!']
multiplier = 3
extend_collection(multiplier, lst)
# ['Hi!', 'Hello!', 'Hi!', 'Hello!', 'Hi!', 'Hello!']
```
|
It's not immediately clear why you might want to do this. If the keys are always consecutive integers then you probably just want a list.
Anyway, here's a snippet:
```
def dictExtender(multiplier, d):
return dict(zip(range(multiplier * len(d)), list(d.values()) * multiplier))
```
|
46,210,757
|
Assume I have a python dictionary with 2 keys.
```
dic = {0:'Hi!', 1:'Hello!'}
```
What I want to do is to extend this dictionary by duplicating itself, but change the key value.
For example, if I have a code
```
dic = {0:'Hi!', 1:'Hello'}
multiplier = 3
def DictionaryExtend(number_of_multiplier, dictionary):
"Function code"
```
then the result should look like
```
>>> DictionaryExtend(multiplier, dic)
>>> dic
>>> dic = {0:'Hi!', 1:'Hello', 2:'Hi!', 3:'Hello', 4:'Hi!', 5:'Hello'}
```
In this case, I changed the key values by adding the multiplier at each duplication step. What's an efficient way of doing this?
Plus, I'm also planning to do the same job for a list variable. I mean, extend a list by duplicating itself and change some values, like the above example. Any suggestion for this would be helpful, too!
|
2017/09/14
|
[
"https://Stackoverflow.com/questions/46210757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3794041/"
] |
It's not immediately clear why you might want to do this. If the keys are always consecutive integers then you probably just want a list.
Anyway, here's a snippet:
```
def dictExtender(multiplier, d):
return dict(zip(range(multiplier * len(d)), list(d.values()) * multiplier))
```
|
You don't need to extend anything, you need to pick a better input format or a more appropriate type.
As others have mentioned, you need a list, not an extended dict or OrderedDict. Here's an example with `lines.txt`:
```
1:Hello!
0: Hi.
2: pylang
```
And here's a way to parse the lines in the correct order:
```
def extract_number_and_text(line):
number, text = line.split(':')
return (int(number), text.strip())
with open('lines.txt') as f:
lines = f.readlines()
data = [extract_number_and_text(line) for line in lines]
print(data)
# [(1, 'Hello!'), (0, 'Hi.'), (2, 'pylang')]
sorted_text = [text for i,text in sorted(data)]
print(sorted_text)
# ['Hi.', 'Hello!', 'pylang']
print(sorted_text * 2)
# ['Hi.', 'Hello!', 'pylang', 'Hi.', 'Hello!', 'pylang']
print(list(enumerate(sorted_text * 2)))
# [(0, 'Hi.'), (1, 'Hello!'), (2, 'pylang'), (3, 'Hi.'), (4, 'Hello!'), (5, 'pylang')]
```
|
46,210,757
|
Assume I have a python dictionary with 2 keys.
```
dic = {0:'Hi!', 1:'Hello!'}
```
What I want to do is to extend this dictionary by duplicating itself, but change the key value.
For example, if I have a code
```
dic = {0:'Hi!', 1:'Hello'}
multiplier = 3
def DictionaryExtend(number_of_multiplier, dictionary):
"Function code"
```
then the result should look like
```
>>> DictionaryExtend(multiplier, dic)
>>> dic
>>> dic = {0:'Hi!', 1:'Hello', 2:'Hi!', 3:'Hello', 4:'Hi!', 5:'Hello'}
```
In this case, I changed the key values by adding the multiplier at each duplication step. What's an efficient way of doing this?
Plus, I'm also planning to do the same job for a list variable. I mean, extend a list by duplicating itself and change some values, like the above example. Any suggestion for this would be helpful, too!
|
2017/09/14
|
[
"https://Stackoverflow.com/questions/46210757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3794041/"
] |
You can try `itertools` to repeat the values and `OrderedDict` to maintain input order.
```
import itertools as it
import collections as ct
def extend_dict(multiplier, dict_):
"""Return a dictionary of repeated values."""
return dict(enumerate(it.chain(*it.repeat(dict_.values(), multiplier))))
d = ct.OrderedDict({0:'Hi!', 1:'Hello!'})
multiplier = 3
extend_dict(multiplier, d)
# {0: 'Hi!', 1: 'Hello!', 2: 'Hi!', 3: 'Hello!', 4: 'Hi!', 5: 'Hello!'}
```
---
Regarding handling other collection types, it is not clear what output is desired, but the following modification reproduces the latter and works for lists as well:
```
def extend_collection(multiplier, iterable):
"""Return a collection of repeated values."""
repeat_values = lambda x: it.chain(*it.repeat(x, multiplier))
try:
iterable = iterable.values()
except AttributeError:
result = list(repeat_values(iterable))
else:
result = dict(enumerate(repeat_values(iterable)))
return result
lst = ['Hi!', 'Hello!']
multiplier = 3
extend_collection(multiplier, lst)
# ['Hi!', 'Hello!', 'Hi!', 'Hello!', 'Hi!', 'Hello!']
```
|
You don't need to extend anything, you need to pick a better input format or a more appropriate type.
As others have mentioned, you need a list, not an extended dict or OrderedDict. Here's an example with `lines.txt`:
```
1:Hello!
0: Hi.
2: pylang
```
And here's a way to parse the lines in the correct order:
```
def extract_number_and_text(line):
number, text = line.split(':')
return (int(number), text.strip())
with open('lines.txt') as f:
lines = f.readlines()
data = [extract_number_and_text(line) for line in lines]
print(data)
# [(1, 'Hello!'), (0, 'Hi.'), (2, 'pylang')]
sorted_text = [text for i,text in sorted(data)]
print(sorted_text)
# ['Hi.', 'Hello!', 'pylang']
print(sorted_text * 2)
# ['Hi.', 'Hello!', 'pylang', 'Hi.', 'Hello!', 'pylang']
print(list(enumerate(sorted_text * 2)))
# [(0, 'Hi.'), (1, 'Hello!'), (2, 'pylang'), (3, 'Hi.'), (4, 'Hello!'), (5, 'pylang')]
```
|
40,914,325
|
I'm a beginner to Python and I would like to start with automation.
Below is the task I'm trying to do.
```
ssh -p 2024 root@10.54.3.32
root@10.54.3.32's password:
```
I try to ssh to a particular machine and it prompts for a password. But I have no clue how to give the input to this console. I have tried this:
```
import sys
import subprocess
con = subprocess.Popen("ssh -p 2024 root@10.54.3.32", shell=True,stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr =subprocess.PIPE)
print con.stdout.readlines()
```
If I execute this, output will be like
```
python auto.py
root@10.54.3.32's password:
```
But I have no clue how to give the input to this. If someone could help me out with this, I would be very grateful. Also, could you please help me with executing commands on the remote machine via ssh after logging in?
I would proceed with my automation if this is done.
I tried `con.communicate()` since stdin is in `PIPE` mode, but no luck.
If this can't be accomplished with subprocess, could you please suggest an alternate way (some other module) to execute commands on a remote console, useful for automation? Most of my automation depends on executing commands on a remote console.
Thanks
|
2016/12/01
|
[
"https://Stackoverflow.com/questions/40914325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3275349/"
] |
I have implemented this through pexpect. You may need to `pip install pexpect` before you run the code:
```
import pexpect
from pexpect import pxssh
accessDenied = None
unreachable = None
auth = None  # initialize so the check below doesn't raise NameError on timeout
username = 'someuser'
ipaddress = 'mymachine'
password = 'somepassword'
command = 'ls -al'
try:
ssh = pexpect.spawn('ssh %s@%s' % (username, ipaddress))
ret = ssh.expect([pexpect.TIMEOUT, '.*sure.*connect.*\(yes/no\)\?', '[P|p]assword:'])
if ret == 0:
unreachable = True
elif ret == 1: #Case asking for storing key
ssh.sendline('yes')
ret = ssh.expect([pexpect.TIMEOUT, '[P|p]assword:'])
if ret == 0:
accessDenied = True
elif ret == 1:
ssh.sendline(password)
auth = ssh.expect(['[P|p]assword:', '#']) #Match for the prompt
elif ret == 2: #Case asking for password
ssh.sendline(password)
auth = ssh.expect(['[P|p]assword:', '#']) #Match for the prompt
if not auth == 1:
accessDenied = True
else:
(command_output, exitstatus) = pexpect.run("ssh %s@%s '%s'" % (username, ipaddress, command), events={'(?i)password':'%s\n' % password}, withexitstatus=1, timeout=1000)
print(command_output)
except pxssh.ExceptionPxssh as e:
print(e)
accessDenied = 'Access denied'
if accessDenied:
print('Could not connect to the machine')
elif unreachable:
print('System unreachable')
```
This works only on Unix-like systems, as pexpect is not available for Windows. You may use plink.exe if you need to run on Windows. `paramiko` is another module you may try, though I had a few issues with it before.
|
I have implemented this through paramiko. You may need to `pip install paramiko` before you run the code:
```
import time

import paramiko

username = 'root'
password = 'calvin'
host = '192.168.0.1'

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=str(username), password=str(password))

chan = ssh.invoke_shell()
time.sleep(1)
print("Connection successful")
```
If you want to send a command and grab the output, simply perform the following steps:
```
chan.send('Your Command\n')  # trailing newline so the remote shell executes it
if chan is not None and chan.recv_ready():
resp = chan.recv(2048)
while (chan.recv_ready()):
resp += chan.recv(2048)
output = str(resp, 'utf-8')
print(output)
```
|
42,207,798
|
When I use the lxml library in Python to get data from an HTML page (a YouTube video title), it does not return the text correctly. It returns text like this: "à·à·à¶½à¶±à·à¶§à¶ºà¶±à"
Here my code,
```
page = requests.get("https://www.youtube.com/watch?v=MZMapfEg5g8")
source = html.fromstring(page.content)
links = source.xpath('//link[@type="text/xml+oembed"]')
for href in links:
return href.attrib['title']
```
The language I need is Sinhala, and it's Unicode.
|
2017/02/13
|
[
"https://Stackoverflow.com/questions/42207798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6440193/"
] |
Because your `on` clause is `1 = 0` nothing matches, so all rows are inserted.
Changing your `on` clause to `a = b` will yield your expected results of `2,4,5,1,3`.
rextester for `on a = b`: <http://rextester.com/OPLL86727>
It might be helpful to be more explicit with aliasing your source and target:
```
declare @t1 table (a int)
declare @t2 table (b int)
insert into @t1 (a) values ( 1 ),(3),(5)
insert into @t2 (b) values ( 2 ),(4),(5)
;with source as (
select * from @t1
)
merge into @t2 as target
using source
on source.a = target.b
when not matched then
insert (b) values (a);
select *
from @t2;
```
|
You are matching on 1=0 which will always fire the insert. You should use On Source.a = @t2.b
|
55,877,915
|
I am trying to gather weather data from an API and then store that Data in a Database for use later.
I have been able to access the data and print it out using a for loop, but I would like to assign each iteration of that for loop to a variable to be stored in a different location in a database.
How would I be able to do so?
My Current Code Below:
```
#!/usr/bin/python3
from urllib.request import urlopen
import json
apikey="redacted"
# Latitude & longitude
lati="-26.20227"
longi="28.04363"
# Add units=si to get it in sensible ISO units
url="https://api.forecast.io/forecast/"+apikey+"/"+lati+","+longi+"?units=si"
meteo=urlopen(url).read()
meteo = meteo.decode('utf-8')
weather = json.loads(meteo)
cTemp = (weather['currently']['temperature'])
cSum = (weather['currently']['summary'])
cRain1 = (weather['currently']['precipProbability'])
cRain2 = cRain1*100
daily = (weather['daily']['summary'])
print (cTemp)
print (cSum)
print (cRain2)
print (daily)
#Everthing above this line works as expected, I am focusing on the below code
dailyTHigh = (weather['daily']['data'])
for i in dailyTHigh:
print (i['temperatureHigh'])
```
Gives me an output of the following:
```
12.76
Clear
0
No precipitation throughout the week, with high temperatures rising to 24°C on Friday.
22.71
22.01
22.82
23.13
23.87
23.71
23.95
22.94
```
How would I go about assigning each of the 8 High Temperatures to a different variable?
ie,
var1 = 22.71
var2 = 22.01
etc
Thanks in advance,
|
2019/04/27
|
[
"https://Stackoverflow.com/questions/55877915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6908101/"
] |
IMO you need some sort of dynamic-length data structure to which you can append the data inside a `for` loop and then access it by `index`.
Therefore, you can create a `list` and append all the values from the `for` loop into it as shown below:
```
list = []
for i in dailyTHigh:
list.append(i['temperatureHigh'])
```
Now you will be able to access the values of `list` as shown below:
```
for i in range(0, len(list)):
    print(list[i])
```
The above approach is good because you don't need to know the number of items in advance, which you would if you assigned each value to its own variable, and you can still access each value easily by index.
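Putting the answer together with the temperatures from the question (the `dailyTHigh` structure is mocked here, since the real value needs an API key):

```python
# Mocked stand-in for weather['daily']['data'] from the question
dailyTHigh = [
    {'temperatureHigh': 22.71},
    {'temperatureHigh': 22.01},
    {'temperatureHigh': 22.82},
]

highs = []
for day in dailyTHigh:
    highs.append(day['temperatureHigh'])

# Each day's high is now addressable by index, e.g. for a DB insert
for i in range(len(highs)):
    print(highs[i])
```

With the list in hand, the values can be bound to a parameterised SQL statement instead of eight separate variables.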
|
IMO you can use a stack data structure and store the data in FILO (First In, Last Out) form. This way you can also manage the data more efficiently, even if it gets bigger in size in the future.
|
55,877,915
|
I am trying to gather weather data from an API and then store that Data in a Database for use later.
I have been able to access the data and print it out using a for loop, but I would like to assign each iteration of that for loop to a variable to be stored in a different location in a database.
How would I be able to do so?
My Current Code Below:
```
#!/usr/bin/python3
from urllib.request import urlopen
import json
apikey="redacted"
# Latitude & longitude
lati="-26.20227"
longi="28.04363"
# Add units=si to get it in sensible ISO units
url="https://api.forecast.io/forecast/"+apikey+"/"+lati+","+longi+"?units=si"
meteo=urlopen(url).read()
meteo = meteo.decode('utf-8')
weather = json.loads(meteo)
cTemp = (weather['currently']['temperature'])
cSum = (weather['currently']['summary'])
cRain1 = (weather['currently']['precipProbability'])
cRain2 = cRain1*100
daily = (weather['daily']['summary'])
print (cTemp)
print (cSum)
print (cRain2)
print (daily)
#Everthing above this line works as expected, I am focusing on the below code
dailyTHigh = (weather['daily']['data'])
for i in dailyTHigh:
print (i['temperatureHigh'])
```
Gives me an output of the following:
```
12.76
Clear
0
No precipitation throughout the week, with high temperatures rising to 24°C on Friday.
22.71
22.01
22.82
23.13
23.87
23.71
23.95
22.94
```
How would I go about assigning each of the 8 High Temperatures to a different variable?
ie,
var1 = 22.71
var2 = 22.01
etc
Thanks in advance,
|
2019/04/27
|
[
"https://Stackoverflow.com/questions/55877915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6908101/"
] |
IMO you need some sort of dynamic-length data structure to which you can append the data inside a `for` loop and then access it by `index`.
Therefore, you can create a `list` and append all the values from the `for` loop into it as shown below:
```
list = []
for i in dailyTHigh:
list.append(i['temperatureHigh'])
```
Now you will be able to access the values of `list` as shown below:
```
for i in range(0, len(list)):
    print(list[i])
```
The above approach is good because you don't need to know the number of items in advance, which you would if you assigned each value to its own variable, and you can still access each value easily by index.
|
Just to neaten up on my above comment to the accepted answer
```
#Everthing above this line works as expected, I am focusing on the below code
dailyTHigh = (weather['daily']['data'])
list = []
for i in dailyTHigh:
list.append(i['temperatureHigh'])
var1 = list[0]
var2 = list[1]
```
This saves each list item to its own variable; I know there will always be 8 values, so this works for me.
```
print (var1)
```
Just for testing, this gives me what I was looking for.
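Since the number of values is fixed at eight, Python's iterable unpacking does the same job in one assignment and fails loudly if the count ever changes (a sketch with sample values standing in for the API data):

```python
# Sample values standing in for the eight API temperatures
temps = [22.71, 22.01, 22.82, 23.13, 23.87, 23.71, 23.95, 22.94]

# One assignment replaces var1 = temps[0], var2 = temps[1], ...
var1, var2, var3, var4, var5, var6, var7, var8 = temps

print(var1)  # 22.71
print(var8)  # 22.94
```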
|
55,877,915
|
I am trying to gather weather data from an API and then store that Data in a Database for use later.
I have been able to access the data and print it out using a for loop, but I would like to assign each iteration of that for loop to a variable to be stored in a different location in a database.
How would I be able to do so?
My Current Code Below:
```
#!/usr/bin/python3
from urllib.request import urlopen
import json
apikey="redacted"
# Latitude & longitude
lati="-26.20227"
longi="28.04363"
# Add units=si to get it in sensible ISO units
url="https://api.forecast.io/forecast/"+apikey+"/"+lati+","+longi+"?units=si"
meteo=urlopen(url).read()
meteo = meteo.decode('utf-8')
weather = json.loads(meteo)
cTemp = (weather['currently']['temperature'])
cSum = (weather['currently']['summary'])
cRain1 = (weather['currently']['precipProbability'])
cRain2 = cRain1*100
daily = (weather['daily']['summary'])
print (cTemp)
print (cSum)
print (cRain2)
print (daily)
#Everthing above this line works as expected, I am focusing on the below code
dailyTHigh = (weather['daily']['data'])
for i in dailyTHigh:
print (i['temperatureHigh'])
```
Gives me an output of the following:
```
12.76
Clear
0
No precipitation throughout the week, with high temperatures rising to 24°C on Friday.
22.71
22.01
22.82
23.13
23.87
23.71
23.95
22.94
```
How would I go about assigning each of the 8 High Temperatures to a different variable?
ie,
var1 = 22.71
var2 = 22.01
etc
Thanks in advance,
|
2019/04/27
|
[
"https://Stackoverflow.com/questions/55877915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6908101/"
] |
Just to neaten up on my above comment to the accepted answer
```
#Everthing above this line works as expected, I am focusing on the below code
dailyTHigh = (weather['daily']['data'])
list = []
for i in dailyTHigh:
list.append(i['temperatureHigh'])
var1 = list[0]
var2 = list[1]
```
This saves each list item to its own variable; I know there will always be 8 values, so this works for me.
```
print (var1)
```
Just for testing, this gives me what I was looking for.
|
IMO you can use a stack data structure and store the data in FILO (First In, Last Out) form. This way you can also manage the data more efficiently, even if it gets bigger in size in the future.
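For what it's worth, a plain Python list already behaves as the stack described above: `append` pushes and `pop` removes the most recently added item (a minimal sketch, not taken from the question's code):

```python
stack = []
for temp in [22.71, 22.01, 22.82]:
    stack.append(temp)  # push

# pop() hands items back in reverse insertion order (first in, last out)
while stack:
    print(stack.pop())
```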
|
38,326,357
|
**Following is my Python code for the scraping and output efforts:**
```
html = urlopen("http://www.imdb.com/news/top")
imdbNews = BeautifulSoup(html)
lines = []
for headLine in imdbNews.findAll("h2"):
#headLine.encode('ascii', 'ignore')
imdb_news = headLine.get_text()
lines.append(imdb_news)
#f = open("output.txt", "a")
#f.write(imdb_news)
#f.close()
```
**The lines commented out with # were my attempts at getting rid of the Unicode errors, but they just result in more errors that I can't seem to wrap my head around. The current code gives the following output:**
```
[u'Warner Bros. Brings \u2018Wonder Woman,\u2019 \u2018Suicide Squad,\u2019 \u2018Fantastic Beasts\u2019 to Comic-Con',
u"\u2018Ghostbusters': Is There a Post-Credit Scene?",
u'Javier Bardem Eyed for Frankenstein Role in Universal\u2019s Monster Universe (Exclusive)',
u'\u2018Battlefield\u2019 Video Game Being Developed for TV Series by Paramount Television & Anonymous Content',
u'\u2018Ghostbusters\u2019 Review Roundup: Critics Generally Positive On Female-Led Blockbuster',
u'\u2018Assassin\u2019s Creed\u2019 Movie Won\u2019t Make Money, Ubisoft Chief Says',
u"Fargo Taps The Leftovers' Carrie Coon as Female Lead in Season 3",
u'Ridley Scott Long-Time Collaborator Julie Payne Dies at 64',
u'Ridley Scott Longtime Collaborator Julie Payne Dies at 64',
u'15 Highest Paid Music Stars of 2016, From The Weeknd to Taylor Swift (Photos)',
u'South Africa\u2019s Pubcaster Draws Ire From Demonstrators, the Government',
u'Jerry Greer, Son of Country Music Singer Craig Morgan, Dies at 19',
u'Queen Latifah Says Racism Is "Still Alive and Kicking" at VH1 Hip Hop Honors',
u'Jerry Greer, Son of Country Singer Craig Morgan, Found Dead After Boating Accident',
u'[Watch] Emmy Awards movie/mini slugfest: \u2018The People v. O.J. Simpson\u2019 and \u2018Fargo\u2019 battle for the win',
u'Amanda Evans Wraps Videovision\u2019s Thriller \u2018Serpent\u2019',
u'\u2018Oslo\u2019 Theater Review: The Handshake That Shook the World',
u'\u2018The Bachelorette\u2019 Recap: JoJo Tames Some Wild Horses',
u'Disney Accelerator Names 9 Startups to Participate in 2016 Mentorship Program',
u'Karlovy Vary Film Review: \u2018The Teacher\u2019',
u'Top News',
u'Movie News',
u'TV News',
u'Celebrity News']
```
**How do I get rid of the `u'` and `\u2018`, `\u2019`, etc., and get my results into a txt file?**
|
2016/07/12
|
[
"https://Stackoverflow.com/questions/38326357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6578757/"
] |
To avoid reflection, you can use a generic method:
```
public void DoSomething(MyClass a) => MakeSomeStaff(a, () => { /* Do method body */ });
private void MakeSomeStaff<T>(T item, Action action) where T: class
{
if (item == null)
throw new Exception();
action();
}
```
|
**EDIT: Had an idea that abuses operator overloading, original answer at the bottom:**
Use operator overloading to throw on null
```
public struct Some<T> where T : class {
public T Value { get; }
public Some(T value)
{
if (ReferenceEquals(value, null))
throw new Exception();
Value = value;
}
public override string ToString() => Value.ToString();
public static implicit operator T(Some<T> some) => some.Value;
public static implicit operator Some<T>(T value) => new Some<T>(value);
}
private void DoThingsInternal(string foo) =>
Console.Out.WriteLine($"string len:{foo.Length}");
public void DoStuff(Some<string> foo)
{
DoThingsInternal(foo);
string fooStr = foo;
string fooStrBis = foo.Value;
// do stuff
}
```
**Original answer**
You can use an extension method to throw for you
```
public static class NotNullExt{
public static T Ensure<T>(this T value,string message=null) where T:class
{
        if (ReferenceEquals(null, value)) throw new Exception(message ?? "Null value");
return value;
}
}
public void DoSomething(MyClass a) {
a=a.Ensure("foo");
// go ...
}
```
|
25,824,417
|
Before Django 1.7 I used to define a per-project `fixtures` directory in the settings:
```
FIXTURE_DIRS = ('myproject/fixtures',)
```
and use that to place my `initial_data.json` fixture storing the default **groups** essential for the whole project. This has been working well for me as I could keep the design clean by separating per-project data from app-specific data.
Now with Django 1.7, `initial_data` fixtures have been deprecated, [suggesting](https://docs.djangoproject.com/en/1.7/howto/initial-data/#automatically-loading-initial-data-fixtures) to include [data migrations](https://docs.djangoproject.com/en/1.7/topics/migrations/#data-migrations) together with app's schema migrations; leaving no obvious choice for global per-project initial data.
Moreover the new [migrations framework](https://docs.djangoproject.com/en/1.7/topics/migrations/) installs all legacy initial data fixtures **before** executing migrations for the compliant apps (including the `django.contrib.auth` app). This behavior causes my fixture containing default groups to **fail installation**, since the `auth_group` table is not present in the DB yet.
Any suggestions on how to (elegantly) make fixtures run **after** all the migrations, or at least after the auth app migrations? Or any other ideas to solve this problem?
I find fixtures a great way of providing initial data and would like a simple, clean way of declaring them for automatic installation. The new [RunPython](https://docs.djangoproject.com/en/1.7/ref/migration-operations/#runpython) is just too cumbersome and I consider it overkill for most purposes; it also seems to be available only for per-app migrations.
|
2014/09/13
|
[
"https://Stackoverflow.com/questions/25824417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2263517/"
] |
If you absolutely want to use fixtures, just use `RunPython` and `call_command` in your data migrations.
```
from django.db import migrations
from django.core.management import call_command
def add_data(apps, schema_editor):
call_command('loaddata', 'thefixture.json')
def remove_data(apps, schema_editor):
call_command('flush')
class Migration(migrations.Migration):
dependencies = [
('roundtable', '0001_initial'),
]
operations = [
migrations.RunPython(
add_data,
reverse_code=remove_data),
]
```
However, it is recommended to load data using Python code and the Django ORM, as you won't have to face integrity issues.
[Source](http://andrewsforge.com/article/upgrading-django-to-17/part-2-migrations-in-django-16-and-17/#data-migrations-in-django-17).
|
I recommend using factories instead of fixtures; fixtures are a mess and difficult to maintain. It is better to use FactoryBoy with Django.
|
28,250,578
|
How do I set the proper document root for Vagrant? Right now it takes the docroot from the wrong place. I'm trying to run a Laravel project, so it has to be not /var/www/project but /var/www/project/public...
My YAML file:
```
---
vagrantfile-local:
vm:
box: puphpet/debian75-x64
box_url: puphpet/debian75-x64
hostname: ''
memory: '512'
cpus: '1'
chosen_provider: virtualbox
network:
private_network: 192.168.56.101
forwarded_port:
1ztIcBOBAG3R:
host: '7958'
guest: '22'
post_up_message: ''
provider:
virtualbox:
modifyvm:
natdnshostresolver1: on
vmware:
numvcpus: 1
parallels:
cpus: 1
provision:
puppet:
manifests_path: puphpet/puppet
manifest_file: site.pp
module_path: puphpet/puppet/modules
options:
- '--verbose'
- '--hiera_config /vagrant/puphpet/puppet/hiera.yaml'
- '--parser future'
synced_folder:
2jsmp5Xo8wAe:
owner: www-data
group: www-data
source: 'C:\\Users\\Vygandas\\Documents\\git\\project.x\\web'
target: /var/www/projectx
sync_type: default
rsync:
args:
- '--verbose'
- '--archive'
- '-z'
exclude:
- .vagrant/
auto: 'false'
usable_port_range:
start: 10200
stop: 10500
ssh:
host: null
port: null
private_key_path: null
username: vagrant
guest_port: null
keep_alive: true
forward_agent: false
forward_x11: false
shell: 'bash -l'
vagrant:
host: detect
server:
install: '1'
packages: { }
users_groups:
install: '1'
groups: { }
users: { }
cron:
install: '1'
jobs: { }
firewall:
install: '1'
rules: null
apache:
install: '1'
settings:
user: www-data
group: www-data
default_vhost: true
manage_user: false
manage_group: false
sendfile: 0
modules:
- rewrite
vhosts:
495wa1uc3p0z:
servername: projectx.dev
serveraliases:
- www.projectx.dev
docroot: /var/www/projectx/public
port: '80'
setenv:
- 'APP_ENV dev'
directories:
8yngfatheg7u:
provider: directory
path: /var/www/projectx/public
options:
- Indexes
- FollowSymlinks
- MultiViews
allow_override:
- All
require:
- all
- granted
custom_fragment: ''
engine: php
custom_fragment: ''
ssl_cert: ''
ssl_key: ''
ssl_chain: ''
ssl_certs_dir: ''
mod_pagespeed: 0
nginx:
install: '0'
settings:
default_vhost: 1
proxy_buffer_size: 128k
proxy_buffers: '4 256k'
upstreams: { }
vhosts:
ksovqgz8jsgn:
proxy: ''
server_name: awesome.dev
server_aliases:
- www.awesome.dev
www_root: /var/www/awesome
listen_port: '80'
location: \.php$
index_files:
- index.html
- index.htm
- index.php
envvars:
- 'APP_ENV dev'
engine: php
client_max_body_size: 1m
ssl_cert: ''
ssl_key: ''
php:
install: '1'
version: '56'
composer: '1'
composer_home: ''
modules:
php:
- cli
- intl
- mcrypt
- gd
- imagick
- mysql
pear: { }
pecl:
- pecl_http
ini:
display_errors: On
error_reporting: '-1'
session.save_path: /var/lib/php/session
timezone: America/Chicago
mod_php: 0
hhvm:
install: '0'
nightly: 0
composer: '1'
composer_home: ''
settings:
host: 127.0.0.1
port: '9000'
ini:
display_errors: On
error_reporting: '-1'
timezone: null
xdebug:
install: '0'
settings:
xdebug.default_enable: '1'
xdebug.remote_autostart: '0'
xdebug.remote_connect_back: '1'
xdebug.remote_enable: '1'
xdebug.remote_handler: dbgp
xdebug.remote_port: '9000'
xhprof:
install: '0'
wpcli:
install: '0'
version: v0.17.1
drush:
install: '0'
version: 6.3.0
ruby:
install: '1'
versions:
gA1kSNQgqjbS:
version: ''
nodejs:
install: '0'
npm_packages: { }
python:
install: '1'
packages: { }
versions:
S0v3NX4H3glU:
version: ''
mysql:
install: '1'
override_options: { }
root_password: '123'
adminer: 0
databases:
3kES6Zw0Brtz:
grant:
- ALL
name: projectx
host: localhost
user: projectxuser
password: '123'
sql_file: ''
postgresql:
install: '0'
settings:
root_password: '123'
user_group: postgres
encoding: UTF8
version: '9.3'
databases: { }
adminer: 0
mariadb:
install: '0'
override_options: { }
root_password: '123'
adminer: 0
databases: { }
version: '10.0'
sqlite:
install: '0'
adminer: 0
databases: { }
mongodb:
install: '0'
settings:
auth: 1
port: '27017'
databases: { }
redis:
install: '0'
settings:
conf_port: '6379'
mailcatcher:
install: '1'
settings:
smtp_ip: 0.0.0.0
smtp_port: 1025
http_ip: 0.0.0.0
http_port: '1080'
mailcatcher_path: /usr/local/rvm/wrappers/default
from_email_method: inline
beanstalkd:
install: '0'
settings:
listenaddress: 0.0.0.0
listenport: '13000'
maxjobsize: '65535'
maxconnections: '1024'
binlogdir: /var/lib/beanstalkd/binlog
binlogfsync: null
binlogsize: '10485760'
beanstalk_console: 0
binlogdir: /var/lib/beanstalkd/binlog
rabbitmq:
install: '0'
settings:
port: '5672'
elastic_search:
install: '1'
settings:
version: 1.4.1
java_install: true
solr:
install: '0'
settings:
version: 4.10.2
port: '8984'
```
I can access it only via <http://projectx.dev/public> ... Please help me O\_o
|
2015/01/31
|
[
"https://Stackoverflow.com/questions/28250578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2470912/"
] |
I can see you have:
```
vhosts:
  495wa1uc3p0z:
    servername: projectx.dev
    serveraliases:
      - www.projectx.dev
    docroot: /var/www/projectx/public
    port: '80'
    setenv:
      - 'APP_ENV dev'
    directories:
      8yngfatheg7u:
        provider: directory
        path: /var/www/projectx/public
        options:
          - Indexes
          - FollowSymlinks
          - MultiViews
        allow_override:
          - All
        require:
          - all
          - granted
        custom_fragment: ''
    engine: php
    custom_fragment: ''
    ssl_cert: ''
    ssl_key: ''
    ssl_chain: ''
    ssl_certs_dir: ''
```
Which tells me you probably started with `/var/www/projectx`, ran `$ vagrant up`, changed `/var/www/projectx` to `/var/www/projectx/public` and didn't do a `$ vagrant provision` to apply the changes.
|
It was right that I needed to run "vagrant provision" after modifications, but another point is that the config should look like this:
```
vhosts:
495wa1uc3p0z:
servername: projectx.dev
docroot: /var/www/projectx/public
port: '80'
setenv:
- 'APP_ENV dev'
directories:
495wa1uc3p0z:
provider: directory
path: /var/www/projectx/public
options:
- Indexes
- FollowSymlinks
- MultiViews
allow_override:
- All
allow:
- All
custom_fragment: ''
```
|
57,417,939
|
I am currently running a python script in a batch file. In the python, I have some print function to monitor the running code. The printed information then will be shown in the command window. In the meantime, I also want to save all these print-out text to a log-file, so I can track them in the long run.
Currently, to do this, I need both the print function in the Python script and the text.write function to write to a text file. This causes some maintenance trouble, because every time I change some printed text I also need to change the text in the write function. It also doesn't feel like the most efficient way to do it.
For example:
```
start_time = datetime.now()
print("This code is run at " + str(start_time) + "\n")
log_file.write("This code is run at " + str(start_time) + "\n")
```
I would like to use the print function in Python so I can see the output in the command window, and save all the printed information to a log file at the same time.
|
2019/08/08
|
[
"https://Stackoverflow.com/questions/57417939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11730027/"
] |
For a better solution in the long run, consider the built-in [logging module](https://docs.python.org/3/library/logging.html). You can log to multiple destinations, such as stdout and files, and get log rotation, formatting, and importance levels.
Example:
```py
import logging
from datetime import datetime

logging.basicConfig(filename='log_file', filemode='w', level=logging.DEBUG)

start_time = datetime.now()
logging.info("This code is run at %s", start_time)
```
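To get both the console output and the log file from a single call, which is what the question asks for, attach two handlers to one logger (a sketch; the logger name and filename are arbitrary):

```python
import logging

logger = logging.getLogger("myscript")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())         # to the console
logger.addHandler(logging.FileHandler("run.log"))  # to a file

logger.info("This code is run at some start time")
```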
|
Just make a function
```
def print_and_log(text):
print(text)
with open("logfile.txt", "a") as logfile:
logfile.write(text+"\n")
```
Then wherever you need to print, use this function and it will also log.
|
47,979,852
|
I just do this:
```
t = Variable(torch.randn(5))
t =t.cuda()
print(t)
```
but it takes 5 to 10 minutes, every time.
I used cuda samples to test bandwidth, it's fine.
Then I used pdb to find out what takes the most time.
I found that in `/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__`:
```
def _lazy_new(cls, *args, **kwargs):
_lazy_init()
# We need this method only for lazy init, so we can remove it
del _CudaBase.__new__
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
```
it takes about 5 minutes in the `return`.
I don't know how to solve my problem with this information.
My environment is: Ubuntu 16.04 + CUDA 9.1
|
2017/12/26
|
[
"https://Stackoverflow.com/questions/47979852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9141892/"
] |
There's a CUDA version mismatch between the CUDA my PyTorch was compiled with and the CUDA I'm running. I divided the official installation command
>
> conda install pytorch torchvision cuda90 -c pytorch
>
>
>
into two sections:
>
> conda install -c soumith magma-cuda90
>
>
> conda install pytorch torchvision -c soumith
>
>
>
The second command installed pytorch-0.2.0 by default, which matches CUDA 8.0. After I updated my PyTorch to 0.3.0, this command takes only one second.
|
Try doing it this way:
```
torch.cuda.synchronize()
t = Variable(torch.randn(5))
t =t.cuda()
print(t)
```
Then, it should be *blazing fast* depending on your GPU memory, at least on every *re-run* it should be.
|
45,468,073
|
I have successfully installed Pandas through Anaconda in PyCharm. Unfortunately, when I run `import pandas`, this is what I get as the output:
```
/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7
"/Users/PycharmProjects/Security upload/Security
upload.py"
Traceback (most recent call last):
File "/Users/PycharmProjects/Security upload/Security
upload.py", line 3, in <module>
import pandas
File "/Users/Library/Python/2.7/lib/python/site-
packages/pandas/__init__.py", line 23, in <module>
from pandas.compat.numpy import *
File "/Users/Library/Python/2.7/lib/python/site-
packages/pandas/compat/__init__.py", line 361, in <module>
from dateutil import parser as _date_parser
File "/Users/Library/Python/2.7/lib/python/site-
packages/dateutil/parser.py", line 43, in <module>
from . import tz
File "/Users/Library/Python/2.7/lib/python/site-
packages/dateutil/tz/__init__.py", line 1, in <module>
from .tz import *
File "/Users/Library/Python/2.7/lib/python/site-
packages/dateutil/tz/tz.py", line 23, in <module>
from ._common import tzname_in_python2, _tzinfo, _total_seconds
File "/Users/Library/Python/2.7/lib/python/site-
packages/dateutil/tz/_common.py", line 2, in <module>
from six.moves import _thread
ImportError: cannot import name _thread
```
Could someone provide some insight on how to approach a solution?
|
2017/08/02
|
[
"https://Stackoverflow.com/questions/45468073",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7678127/"
] |
According to [here](https://github.com/opencobra/cobrapy/issues/490) and [here](https://github.com/awslabs/aws-shell/issues/161), you need to fix your dateutil package.
```
pip uninstall python-dateutil
pip install python-dateutil --upgrade
```
Maybe this:
```
sudo pip uninstall python-dateutil
sudo pip install python-dateutil==2.2
```
|
I was facing the same issue. I started installing Jupyter and got a few errors;
reinstalling IPython worked for me:
```
sudo -H pip install --ignore-installed -U ipython
```
I also needed to reinstall pyzmq
```
sudo -H pip install --ignore-installed -U pyzmq
```
After this I re-ran `import pandas` in IPython and it worked.
|
66,996,203
|
I am trying to flip the lat/long in an exported CSV but am having a hard time getting Python to recognize the rows in order to reorder them. I need the below data to read W#### N#####, W#### N#### so that QGIS's WKT layer import will work correctly later, after I finish the formatting for WKT using Linestring().
```
Example Data:
name,start_y,start_x,end_y,end_x
name2: 10,N 42.50105, W 122.87444, N 42.50079, W 122.74144
name3: 11,N 42.49398, W 123.47816, N 42.49453, W 123.29451
name4: 12,N 42.48980, W 123.47812, N 42.49036, W 123.29027
name5: 13,N 42.49403, W 123.20165, N 42.49411, W 123.12354
```
The code I'm trying to use is:
```
with open('mycsv.csv', 'r') as infile, open('mycsv.csv', 'a') as outfile:
# output dict needs a list for new column ordering
writer = csv.DictWriter(outfile, fieldnames= ['name', 'start_x', 'start_y', 'end_x', 'end_y'], extrasaction='ignore', delimiter = ',')
# reorder the header first
writer.writeheader()
for row in csv.DictReader(infile):
# writes the reordered rows to the new file
writer.writerow(row)
```
When I use this code the csv stays the same. So I ran:
```
import csv
import sys

f = open(sys.argv[0], 'r')
reader = csv.reader(f,delimiter=",")
num_cols = len(next(reader)) # Read first line and count columns
print(num_cols)
```
and it tells me that it's only counting 1 column, so it makes sense that the first snippet isn't working: it's not reading the CSV as separate columns, but as one single line. What am I missing? Python 3.9 is what I'm using. Thanks in advance!
PS: this is my first Python program and I have no formal coding education, so please excuse any rookie mistakes.
|
2021/04/08
|
[
"https://Stackoverflow.com/questions/66996203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15578731/"
] |
* The `Access Key ID` can be found in `IAM` user management.
* The `Secret Access Key` is issued together with the Access Key ID under IAM; AWS shows it only once at creation, so keep it private and use it to authenticate the aws cli.
* You can find the `Region` in the information for the bucket that you want to use, or at the top right of the page.
* I imagine the path is simply where in the bucket you want to store information, so it is for you to decide.
* The `Bucket` is simply the name of the bucket that you want to use.
|
* For the region: just check the upper right of the console; you can choose any one. The default region when you access a resource from the AWS Management Console is US East (Ohio) (us-east-2).
* For the bucket: `s3 -> navigation pane -> buckets ->` search for your bucket; if it doesn't exist, create one in a specific region. If you want the ARN of the bucket (AWS identifies resources by ARN), click on the bucket and go to properties.
* For the Access Key ID and Secret Access Key: you will find these under `IAM -> users -> select your name -> under credentials` (if you don't find one, you need to create one; by default it is not created).
* For the ACL: choose your bucket and look under permissions.
* For the path: it can be `/` for bucket-level permissions or `/*` for object-level permissions.
|
3,211,031
|
```
def file_open(filename):
fo=open(filename,'r')
#fo.seek(5)
fo.read(3)
fo.close()
file_open("file_ro.py")
```
I expect the above program to return the first 3 bytes from the file, but it returns nothing. When I ran these lines in the interactive Python prompt, I got the expected output!
|
2010/07/09
|
[
"https://Stackoverflow.com/questions/3211031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246365/"
] |
`fo.read()` *returns* the data that was read and you never assign it to anything. You are talking about 'output', but your code isn't supposed to output anything. Are you trying to print those three bytes? In that case you are looking for something like
```
f = open('file_ro.py', 'r')
print f.read(3)
```
You are getting the 'expected output' in the interactive prompt because the prompt echoes the result of each expression that is not assigned anywhere (and is not `None`), just like your `fo.read(3)` line. In a script, nothing is echoed, so the returned data silently disappears.
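To see the returned value outside the REPL, assign it and print it; a self-contained check (using a temporary file so it runs anywhere):

```python
import tempfile

# Create a throwaway file with known contents
with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmp:
    tmp.write("hello world")
    path = tmp.name

f = open(path, 'r')
data = f.read(3)   # read() returns the data; keep it in a variable
f.close()
print(data)        # prints: hel
```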
|
```
import sys
def file_open(filename):
fo=open(filename,'r')
#fo.seek(5)
read_data=fo.read(3)
fo.close()
print read_data
file_open("file.py")
```
|
3,211,031
|
```
def file_open(filename):
fo=open(filename,'r')
#fo.seek(5)
fo.read(3)
fo.close()
file_open("file_ro.py")
```
I expect the above program to return the first 3 bytes from the file, but it returns nothing. When I run the same lines in the interactive Python prompt, I get the expected output!
|
2010/07/09
|
[
"https://Stackoverflow.com/questions/3211031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246365/"
] |
While your own answer *prints* the bytes read, it doesn't *return* them, so you won't be able to use the result somewhere else. Also, there's room for a few other improvements:
* `file_open` isn't a good name for the function, since it reads and returns bytes from a file rather than just opening it.
* You should make sure that you close the file even if `fo.read(3)` fails. You can use [the with statement](http://effbot.org/zone/python-with-statement.htm) to solve this issue.
The modified code could look something like this:
```
def read_first_bytes(filename):
with open(filename,'r') as f:
return f.read(3)
```
Usage:
```
>>> print read_first_bytes("file.py")
```
|
```
import sys
def file_open(filename):
fo=open(filename,'r')
#fo.seek(5)
read_data=fo.read(3)
fo.close()
print read_data
file_open("file.py")
```
|
3,211,031
|
```
def file_open(filename):
fo=open(filename,'r')
#fo.seek(5)
fo.read(3)
fo.close()
file_open("file_ro.py")
```
I expect the above program to return the first 3 bytes from the file, but it returns nothing. When I run the same lines in the interactive Python prompt, I get the expected output!
|
2010/07/09
|
[
"https://Stackoverflow.com/questions/3211031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246365/"
] |
While your own answer *prints* the bytes read, it doesn't *return* them, so you won't be able to use the result somewhere else. Also, there's room for a few other improvements:
* `file_open` isn't a good name for the function, since it reads and returns bytes from a file rather than just opening it.
* You should make sure that you close the file even if `fo.read(3)` fails. You can use [the with statement](http://effbot.org/zone/python-with-statement.htm) to solve this issue.
The modified code could look something like this:
```
def read_first_bytes(filename):
with open(filename,'r') as f:
return f.read(3)
```
Usage:
```
>>> print read_first_bytes("file.py")
```
|
`fo.read()` *returns* the data that was read and you never assign it to anything. You are talking about 'output', but your code isn't supposed to output anything. Are you trying to print those three bytes? In that case you are looking for something like
```
f = open('file_ro.py', 'r')
print f.read(3)
```
You are getting the 'expected output' in the interactive prompt because the prompt echoes the result of each expression that is not assigned anywhere (and is not `None`), just like your `fo.read(3)` line. In a script, nothing is echoed, so the returned data silently disappears.
|
60,827,864
|
I am trying to have a master dag which will create further dags based on my need.
I have the following python file inside the *dags\_folder* in *airflow.cfg*.
This code creates the master DAG in the database. The master DAG should read a text file and create a DAG for each line in the file, but the DAGs created inside the master DAG are not added to the database. What is the correct way to create them?
Version details:
Python version: 3.7
Apache-airflow version: 1.10.8
```
import datetime as dt
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
root_dir = "/home/user/TestSpace/airflow_check/res"
print("\n\n ===> \n Dag generator")
default_args = {
'owner': 'airflow',
'start_date': dt.datetime(2020, 3, 22, 00, 00, 00),
'concurrency': 1,
'retries': 0
}
def greet(_name):
message = "Greetings {} at UTC: {} Local: {}\n".format(_name, dt.datetime.utcnow(), dt.datetime.now())
f = open("{}/greetings.txt".format(root_dir), "a+")
print("\n\n =====> {}\n\n".format(message))
f.write(message)
f.close()
def create_dag(dag_name):
with DAG(dag_name, default_args=default_args,
schedule_interval='*/2 * * * *',
catchup=False
) as i_dag:
i_opr_greet = PythonOperator(task_id='greet', python_callable=greet,
op_args=["{}_{}".format("greet", dag_name)])
i_echo_op = BashOperator(task_id='echo', bash_command='echo `date`')
i_opr_greet >> i_echo_op
return i_dag
def create_all_dags():
all_lines = []
f = open("{}/../dag_names.txt".format(root_dir), "r")
for x in f:
all_lines.append(str(x))
f.close()
for line in all_lines:
print("Dag creation for {}".format(line))
globals()[line] = create_dag(line)
with DAG('master_dag', default_args=default_args,
schedule_interval='*/1 * * * *',
catchup=False
) as dag:
echo_op = BashOperator(task_id='echo', bash_command='echo `date`')
create_op = PythonOperator(task_id='create_dag', python_callable=create_all_dags)
echo_op >> create_op
```
|
2020/03/24
|
[
"https://Stackoverflow.com/questions/60827864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3211801/"
] |
You have 2 options:
1. **Use SubDagOperator**: [Example DAG](https://github.com/apache/airflow/blob/1.10.9/airflow/operators/subdag_operator.py). Use it if your Schedule Interval can be the same.
2. **Write a Python DAG File**: From your master DAG, create Python files in your AIRFLOW\_HOME containing DAGs. You can use the Jinja2 templating engine for this.
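A rough sketch of option 2, rendering a standalone DAG file per name (using stdlib `string.Template` instead of Jinja2 for brevity; the template contents and schedule are illustrative):

```python
from string import Template

# Illustrative template for a generated DAG file
DAG_TEMPLATE = Template('''\
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
import datetime as dt

with DAG("$dag_name",
         start_date=dt.datetime(2020, 3, 22),
         schedule_interval="*/2 * * * *",
         catchup=False) as dag:
    echo = BashOperator(task_id="echo", bash_command="echo `date`")
''')

def render_dag_file(dag_name):
    """Return the source of a standalone DAG file for dag_name."""
    return DAG_TEMPLATE.substitute(dag_name=dag_name)

# The master task would write one such file per line of dag_names.txt
# into the dags_folder, where the scheduler picks them up.
print(render_dag_file("my_generated_dag"))
```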
|
Have a look at the TriggerDagRunOperator:
<https://airflow.apache.org/docs/stable/_api/airflow/operators/dagrun_operator/index.html>
Example usage:
<https://github.com/apache/airflow/blob/master/airflow/example_dags/example_trigger_controller_dag.py>
|
52,345,911
|
I'm working on a Python (3.6) project in which I need to clone a GitHub repo with the following directory structure:
```
|parent_DIR
|--sub_DIR
|file1....
|file2....
|--sub_DIR2
|file1...
```
Now I need to get the following info:
```
1. Parent directory name
2. How many subdirectories are
3. names of subdirectories
```
Here's how I'm cloning the GitHub repo:
**from views.py:**
```
# clone the github repo
tempdir = tempfile.mkdtemp()
saved_unmask = os.umask(0o077)
out_dir = os.path.join(tempdir)
Repo.clone_from(data['repo_url'], out_dir)
```
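The three pieces of information the question asks for can then be read off the clone directory with the standard library; a sketch (building a stand-in tree instead of a real clone so it is runnable):

```python
import os
import tempfile

# Build a small stand-in directory tree (instead of a real clone)
out_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(out_dir, "sub_DIR"))
os.makedirs(os.path.join(out_dir, "sub_DIR2"))

parent_name = os.path.basename(out_dir)                    # 1. parent directory name
subdirs = [d for d in os.listdir(out_dir)
           if os.path.isdir(os.path.join(out_dir, d))]     # 3. names of subdirectories
n_subdirs = len(subdirs)                                   # 2. how many there are

print(parent_name, n_subdirs, sorted(subdirs))
```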
|
2018/09/15
|
[
"https://Stackoverflow.com/questions/52345911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7644562/"
] |
You can try your pop-over viewController's modalPresentationStyle to either `.overCurrentContext` or `.overFullScreen`.
>
> case overCurrentContext:
>
>
>
> >
> > A presentation style where the content is displayed over another view controller’s content.
> >
> >
> >
>
>
>
This means it will present the next viewController over the viewController's content.
**So in case of Container ViewControllers:**
So if you have a `tabBarController`, the `tabBar` will still allow the user to interact with it.
>
> case overFullScreen
>
>
>
> >
> > A view presentation style in which the presented view covers the screen.
> >
> >
> >
>
>
>
This means it will present the next viewController over the full screen, so the `tabBar` will not be interactive until the presentation finishes.
```
func presentNextController() {
// In case your viewController is in storyboard or any other initialisation
    guard let nextVC = storyboard?.instantiateViewController(withIdentifier: "nextVC") as? NextViewController else { return }
nextVC.modalPresentationStyle = .overFullScreen
// set your custom transitioning delegate
self.present(nextVC, animated: true, completion: nil)
}
```
|
You need to set your new view controller's
```
modalPresentationStyle = .overCurrentContext
```
Do this when you initialise your view controller, or in the storyboard.
|
37,888,565
|
I'm having an issue with ctypes. I think my type conversion is correct, but the error isn't making sense to me.
Error on the line `arg = ct.c_char_p(logfilepath)`:
TypeError: bytes or integer address expected instead of str instance
I tried in both Python 3.5 and 3.4.
The function I'm calling:
```
stream_initialize('stream_log.txt')
```
`stream_initialize` code:
```
def stream_initialize(logfilepath):
f = shim.stream_initialize
arg = ct.c_char_p(logfilepath)
result = f(arg)
if result:
print(find_shim_error(result))
```
|
2016/06/17
|
[
"https://Stackoverflow.com/questions/37888565",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5344673/"
] |
`c_char_p` takes a `bytes` object, so you have to convert your `str` to `bytes` first:
```
ct.c_char_p(logfilepath.encode('utf-8'))
```
Another solution is using the `c_wchar_p` type which takes a `string`.
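Both variants are easy to check directly, since `ctypes` ships with Python:

```python
import ctypes as ct

s = "stream_log.txt"

# bytes in, bytes out
p1 = ct.c_char_p(s.encode('utf-8'))
print(p1.value)        # b'stream_log.txt'

# or keep the str and use the wide-character type
p2 = ct.c_wchar_p(s)
print(p2.value)        # 'stream_log.txt'
```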
|
*For completeness' sake*:
It is also possible to call it as `stream_initialize(b'stream_log.txt')`. Note the `b` in front of the string, which causes it to be interpreted as a `bytes` object.
|
42,441,687
|
I use `pyspark.sql.functions.udf` to define a UDF that uses a class imported from a .py module written by me.
```
from czech_simple_stemmer import CzechSimpleStemmer #this is my class in my module
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
...some code here...
def clean_one_raw_doc(my_raw_doc):
... calls something from CzechSimpleStemmer ...
udf_clean_one_raw_doc = udf(clean_one_raw_doc, StringType())
```
When I call
```
df = spark.sql("SELECT * FROM mytable").withColumn("output_text", udf_clean_one_raw_doc("input_text"))
```
I get a typical huge error message where probably this is the relevant part:
```
File "/data2/hadoop/yarn/local/usercache/ja063930/appcache/application_1472572954011_132777/container_e23_1472572954011_132777_01_000003/pyspark.zip/pyspark/serializers.py", line 431, in loads
return pickle.loads(obj, encoding=encoding)
ImportError: No module named 'czech_simple_stemmer'
```
Do I understand it correctly that pyspark distributes `udf_clean_one_raw_doc` to all the worker nodes but `czech_simple_stemmer.py` is missing there in the nodes' python installations (being present only on the edge node where I run the spark driver)?
And if yes, is there any way how I could tell pyspark to distribute this module too? I guess I could probably copy manually `czech_simple_stemmer.py` to all the nodes' pythons but 1) I don't have the admin access to the nodes, and 2) even if I beg the admin to put it there and he does it, then in case I need to do some tuning to the module itself, he'd probably kill me.
|
2017/02/24
|
[
"https://Stackoverflow.com/questions/42441687",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4985473/"
] |
`SparkContext.addPyFile("my_module.py")` will do it.
|
from the spark-submit [documentation](http://spark.apache.org/docs/2.0.1/submitting-applications.html)
>
> For Python, you can use the --py-files argument of spark-submit to add
> .py, .zip or .egg files to be distributed with your application. If
> you depend on multiple Python files we recommend packaging them into a
> .zip or .egg.
>
>
>
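For example, a minimal invocation shipping the module from the question (the job file name is illustrative):

```shell
# Distribute the dependency so every executor can import it
spark-submit --py-files czech_simple_stemmer.py my_spark_job.py
```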
|
47,944,185
|
It's my first post, I hope it will be well done.
I'm trying to run the following Zipline algorithm with local AAPL data:
```
import pandas as pd
from collections import OrderedDict
import pytz
from zipline.api import order, symbol, record, order_target
from zipline.algorithm import TradingAlgorithm
data = OrderedDict()
data['AAPL'] = pd.read_csv('AAPL.csv', index_col=0, parse_dates=['Date'])
panel = pd.Panel(data)
panel.minor_axis = ['Open', 'High', 'Low', 'Close', 'Volume', 'Price']
panel.major_axis = panel.major_axis.tz_localize(pytz.utc)
print panel["AAPL"]
def initialize(context):
context.security = symbol('AAPL')
def handle_data(context, data):
MA1 = data[context.security].mavg(50)
MA2 = data[context.security].mavg(100)
date = str(data[context.security].datetime)[:10]
current_price = data[context.security].price
current_positions = context.portfolio.positions[symbol('AAPL')].amount
cash = context.portfolio.cash
value = context.portfolio.portfolio_value
current_pnl = context.portfolio.pnl
# code (this will come under handle_data function only)
if (MA1 > MA2) and current_positions == 0:
number_of_shares = int(cash / current_price)
order(context.security, number_of_shares)
record(date=date, MA1=MA1, MA2=MA2, Price=
current_price, status="buy", shares=number_of_shares, PnL=current_pnl, cash=cash, value=value)
elif (MA1 < MA2) and current_positions != 0:
order_target(context.security, 0)
record(date=date, MA1=MA1, MA2=MA2, Price=current_price, status="sell", shares="--", PnL=current_pnl, cash=cash,
value=value)
else:
record(date=date, MA1=MA1, MA2=MA2, Price=current_price, status="--", shares="--", PnL=current_pnl, cash=cash,
value=value)
#initializing trading enviroment
algo_obj = TradingAlgorithm(initialize=initialize, handle_data=handle_data)
#run algo
perf_manual = algo_obj.run(panel)
#code
#calculation
print "total pnl : " + str(float(perf_manual[["PnL"]].iloc[-1]))
buy_trade = perf_manual[["status"]].loc[perf_manual["status"] == "buy"].count()
sell_trade = perf_manual[["status"]].loc[perf_manual["status"] == "sell"].count()
total_trade = buy_trade + sell_trade
print "buy trade : " + str(int(buy_trade)) + " sell trade : " + str(int(sell_trade)) + " total trade : " + str(int(total_trade))
```
I was inspired by <https://www.quantinsti.com/blog/introduction-zipline-python/> and <https://www.quantinsti.com/blog/importing-csv-data-zipline-backtesting/>.
I get this error :
```
Traceback (most recent call last):
File "C:/Users/main/Desktop/docs/ALGO_TRADING/_DATAS/_zipline_data_bundle /temp.py", line 51, in <module>
algo_obj = TradingAlgorithm(initialize=initialize, handle_data=handle_data)
File "C:\Python27-32\lib\site-packages\zipline\algorithm.py", line 273, in __init__
self.trading_environment = TradingEnvironment()
File "C:\Python27-32\lib\site-packages\zipline\finance\trading.py", line 99, in __init__
self.bm_symbol,
File "C:\Python27-32\lib\site-packages\zipline\data\loader.py", line 166, in load_market_data
environ,
File "C:\Python27-32\lib\site-packages\zipline\data\loader.py", line 230, in ensure_benchmark_data
last_date,
File "C:\Python27-32\lib\site-packages\zipline\data\benchmarks.py", line 50, in get_benchmark_returns
last_date
File "C:\Python27-32\lib\site-packages\pandas_datareader\data.py", line 137, in DataReader
session=session).read()
File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 181, in read
params=self._get_params(self.symbols))
File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 79, in _read_one_data
out = self._read_url_as_StringIO(url, params=params)
File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 90, in _read_url_as_StringIO
response = self._get_response(url, params=params)
File "C:\Python27-32\lib\site-packages\pandas_datareader\base.py", line 139, in _get_response
raise RemoteDataError('Unable to read URL: {0}'.format(url))
pandas_datareader._utils.RemoteDataError: Unable to read URL: http://www.google.com/finance/historical?q=SPY&startdate=Dec+29%2C+1989&enddate=Dec+20%2C+2017&output=csv
```
I don't understand the URL "<http://www.google.com/finance/historical?q=SPY&startdate=Dec+29%2C+1989&enddate=Dec+20%2C+2017&output=csv>".
I didn't request any online data, and my stock is 'AAPL', not 'SPY'...
What does this error mean to you?
Thanks a lot for your help !
C.
|
2017/12/22
|
[
"https://Stackoverflow.com/questions/47944185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9131416/"
] |
The only reference and workaround I found regarding this issue is [here](https://github.com/pydata/pandas-datareader/issues/394):
```py
from pandas_datareader.google.daily import GoogleDailyReader
@property
def url(self):
return 'http://finance.google.com/finance/historical'
GoogleDailyReader.url = url
```
|
do:
```
pip install fix_yahoo_finance
```
then modify the file: zipline/lib/pythonx.x/site-packages/zipline/data/benchmarks.py
add the following two statements to the file:
```
import fix_yahoo_finance as yf
yf.pdr_override()
```
then change the following instruction:
```
data = pd_reader.DataReader(symbol, 'google', first_date, last_date)
```
to:
```
data = pd_reader.get_data_yahoo(symbol, first_date, last_date)
```
|
56,057,132
|
I define a function here that changes all user-defined attribute names to upper case:
```
def up(name, parent, attr):
user_defined_attr = ((k, v) for k, v in attr.items() if not k.startswith('_'))
up_attr = {k.upper(): v for k,v in user_defined_attr}
return type(name, parent, up_attr)
```
For example:
```
my_class = up('my_class', (object,), {'some_attr': 'some_value'})
hasattr(my_class, 'SOME_ATTR')
True
```
Here are some words from the Python docs about `__metaclass__`:
[https://docs.python.org/2/reference/datamodel.html?highlight=**metaclass**#**metaclass**](https://docs.python.org/2/reference/datamodel.html?highlight=__metaclass__#__metaclass__)
```
The appropriate metaclass is determined by the following precedence rules:
If dict['__metaclass__'] exists, it is used.
Otherwise, if there is at least one base class, its metaclass is used (this looks for a __class__ attribute first and if not found, uses its type).
Otherwise, if a global variable named __metaclass__ exists, it is used.
Otherwise, the old-style, classic metaclass (types.ClassType) is used.
```
So I did some test
```
>>> def up(name, parent, attr):
... user_defined_attr = ((k, v) for k, v in attr.items() if not k.startswith('_'))
... up_attr = {k.upper(): v for k,v in user_defined_attr}
... return type(name, parent, up_attr)
...
>>>
>>>
>>> __metaclass__ = up
>>>
>>> class C1(object):
... attr1 = 1
...
>>> hasattr(C1, 'ATTR1')
False
```
It is not working for the global-variable case. Why?
|
2019/05/09
|
[
"https://Stackoverflow.com/questions/56057132",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5245972/"
] |
Okay, seems like this behaviour cannot be avoided, so you should parse dates manually. But the way to parse it is pretty simple.
If we are parsing date in ISO 8601 format, the mask of date string looks like this:
```
<yyyy>-<mm>-<dd>T<hh>:<mm>:<ss>(.<ms>)?(Z|(+|-)<hh>:<mm>)?
```
1. Getting date and time separately
-----------------------------------
The `T` in the string separates the date from the time, so we can just split the ISO string on `T`:
```
var isoString = `2019-05-09T13:26:10.979Z`
var [dateString, timeString] = isoString.split("T")
```
2. Extracting date parameters from date string
----------------------------------------------
So we have `dateString == "2019-05-09"`, and it is now simple to get these parameters separately:
```
var [year, month, date] = dateString.split("-").map(Number)
```
3. Handling time string
-----------------------
The time string needs more careful handling because of its variability.
Here we have `timeString == "13:26:10Z"`, but `timeString == "13:26:10"` and `timeString == "13:26:10+01:00"` are also possible.
```
var clearTimeString = timeString.split(/[Z+-]/)[0]
var [hours, minutes, seconds] = clearTimeString.split(":").map(Number)
var offset = 0 // we will store offset in minutes, but in negation of native JS Date getTimezoneOffset
if (timeString.includes("Z")) {
// then clearTimeString references the UTC time
offset = new Date().getTimezoneOffset() * -1
} else {
var clearOffset = timeString.split(/[+-]/)[1]
if (clearOffset) {
// then we have offset tail
var negation = timeString.includes("+") ? 1 : -1 // detecting is offset positive or negative
var [offsetHours, offsetMinutes] = clearOffset.split(":").map(Number)
offset = (offsetMinutes + offsetHours * 60) * negation
} // otherwise we do nothing because there is no offset marker
}
```
At this point we have our data representation in numeric format:
`year`, `month`, `date`, `hours`, `minutes`, `seconds` and `offset` in minutes.
4. Using ...native JS Date constructor
--------------------------------------
Yes, we cannot avoid it, because it is too useful: JS `Date` automatically normalizes out-of-range (negative or too large) field values. So we can pass all parameters in raw form, and the `Date` constructor will create the right date for us automatically!
```
new Date(year, month - 1, date, hours, minutes + offset, seconds)
```
Voila! Here is fully working example.
```js
function convertHistoricalDate(isoString) {
var [dateString, timeString] = isoString.split("T")
var [year, month, date] = dateString.split("-").map(Number)
var clearTimeString = timeString.split(/[Z+-]/)[0]
var [hours, minutes, seconds] = clearTimeString.split(":").map(Number)
var offset = 0 // we will store offset in minutes, but in negation of native JS Date getTimezoneOffset
if (timeString.includes("Z")) {
// then clearTimeString references the UTC time
offset = new Date().getTimezoneOffset() * -1
} else {
var clearOffset = timeString.split(/[+-]/)[1]
if (clearOffset) {
// then we have offset tail
var negation = timeString.includes("+") ? 1 : -1 // detecting is offset positive or negative
var [offsetHours, offsetMinutes] = clearOffset.split(":").map(Number)
offset = (offsetMinutes + offsetHours * 60) * negation
} // otherwise we do nothing because there is no offset marker
}
return new Date(year, month - 1, date, hours, minutes + offset, seconds)
}
var testDate1 = convertHistoricalDate("1894-01-01T00:00:00+01:00")
var testDate2 = convertHistoricalDate("1893-01-01T00:00:00+01:00")
var testDate3 = convertHistoricalDate("1894-01-01T00:00:00-01:00")
var testDate4 = convertHistoricalDate("1893-01-01T00:00:00-01:00")
console.log(testDate1.toLocaleDateString(), testDate1.toLocaleTimeString())
console.log(testDate2.toLocaleDateString(), testDate2.toLocaleTimeString())
console.log(testDate3.toLocaleDateString(), testDate3.toLocaleTimeString())
console.log(testDate4.toLocaleDateString(), testDate4.toLocaleTimeString())
```
Note
----
In this case we are getting a `Date` instance with all of its own values (like `.getHours()`) **normalized**, including the timezone offset. `testDate1.toISOString()` will still return a surprising result, but if you work with the date through its local accessors it will probably fit your needs.
Hope that helped :)
|
This might be a case of browsers following their own conventions for parsing date formats (though I am not sure about this part). Anyway, a simple fix is to apply the `toISOString` method:
```js
const today = new Date();
console.log(today.toISOString());
```
|
19,202,921
|
Here is my input:
```
>>> from decimal import Decimal
>>> Decimal('114.3')
Decimal('114.3')
>>> Decimal(114.3)
Decimal('114.2999999999999971578290569595992565155029296875')
```
I thought those two instances of Decimal would be equal, but the Decimal built from a float seems to lose precision. Why is this? My Python version is 2.7.3. Thanks in advance!
|
2013/10/05
|
[
"https://Stackoverflow.com/questions/19202921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/699931/"
] |
The second line is giving you the exact decimal value of the binary floating-point number which approximates 114.3. This is almost all about binary floating point, not much about Decimal. See [the docs](http://docs.python.org/2/tutorial/floatingpoint.html) for details.
Later: if using Python 3, see [these docs](http://docs.python.org/3/tutorial/floatingpoint.html) instead. Same basic thing, but Python 3 has more tools to help you explore cases "like this".
|
When you do `Decimal(114.3)`, you are creating a regular float object and then passing it to Decimal. The accuracy is lost due to binary floating-point imprecision when the float 114.3 is created, before Decimal ever gets to see it. There's no way to get that accuracy back. That's why Decimal accepts string representations as input, so it can see what you actually typed and use the right level of precision.
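Both the problem and the usual string-based workaround are easy to verify:

```python
from decimal import Decimal

exact = Decimal('114.3')        # parsed from the string you typed
via_float = Decimal(114.3)      # inherits the binary approximation of 114.3
repaired = Decimal(str(114.3))  # str() gives '114.3', so precision survives

print(exact == via_float)       # False
print(exact == repaired)        # True
```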
|
16,739,894
|
I've found [this library](https://github.com/pythonforfacebook/facebook-sdk/) (it seems to be the official one) and then [found this](https://stackoverflow.com/questions/10488913/how-to-obtain-a-user-access-token-in-python), but every time I find an answer, half of it is links to the [Facebook API documentation](https://developers.facebook.com/docs/reference/php/facebook-getAccessToken/), which talks about JavaScript or PHP and how to extract the token from links!
How do I do this in a simple Python script?
NB: what I really don't understand is why we need a library at all; can't we just use `urllib` and a regex to extract the `token` the way we extract other information?
|
2013/05/24
|
[
"https://Stackoverflow.com/questions/16739894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/861487/"
] |
JavaScript and PHP are mentioned because they are web development languages: you need a web front end for the user to grant permission before you can obtain a user access token.
Rephrased: **you cannot obtain a user access token purely programmatically; there must be manual user interaction.**
In Python this involves setting up a web server. For example, a script to update a feed using facepy:
```
import web
from facepy import GraphAPI
from urlparse import parse_qs

urls = ('/', 'index')
app_id = "YOUR_APP_ID"
app_secret = "APP_SECRET"
post_login_url = "http://0.0.0.0:8080/"

class index:
    def GET(self):
        user_data = web.input(code=None)
        if not user_data.code:
            # send the user to Facebook's login/permission dialog
            dialog_url = ("http://www.facebook.com/dialog/oauth?" +
                          "client_id=" + app_id +
                          "&redirect_uri=" + post_login_url +
                          "&scope=publish_stream")
            return "<script>top.location.href='" + dialog_url + "'</script>"
        else:
            # exchange the code Facebook sent back for an access token
            graph = GraphAPI()
            response = graph.get(
                path='oauth/access_token',
                client_id=app_id,
                client_secret=app_secret,
                redirect_uri=post_login_url,
                code=user_data.code
            )
            data = parse_qs(response)
            graph = GraphAPI(data['access_token'][0])
            graph.post(path='me/feed', message='Your message here')
            return "Posted!"

if __name__ == "__main__":
    web.application(urls, globals()).run()
```
|
Here is a Gist I made using `Tornado`, since the other answer uses `web.py`:
<https://gist.github.com/abdelouahabb/5647185>
|
16,739,894
|
I've found [this library](https://github.com/pythonforfacebook/facebook-sdk/) (it seems to be the official one) and then [found this](https://stackoverflow.com/questions/10488913/how-to-obtain-a-user-access-token-in-python), but every time I find an answer, half of it is links to the [Facebook API documentation](https://developers.facebook.com/docs/reference/php/facebook-getAccessToken/), which talks about JavaScript or PHP and how to extract the token from links!
How do I do this in a simple Python script?
NB: what I really don't understand is why we need a library at all; can't we just use `urllib` and a regex to extract the `token` the way we extract other information?
|
2013/05/24
|
[
"https://Stackoverflow.com/questions/16739894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/861487/"
] |
JavaScript and PHP are mentioned because they are web development languages: you need a web front end for the user to grant permission before you can obtain a user access token.
Rephrased: **you cannot obtain a user access token purely programmatically; there must be manual user interaction.**
In Python this involves setting up a web server. For example, a script to update a feed using facepy:
```
import web
from facepy import GraphAPI
from urlparse import parse_qs

urls = ('/', 'index')
app_id = "YOUR_APP_ID"
app_secret = "APP_SECRET"
post_login_url = "http://0.0.0.0:8080/"

class index:
    def GET(self):
        user_data = web.input(code=None)
        if not user_data.code:
            # send the user to Facebook's login/permission dialog
            dialog_url = ("http://www.facebook.com/dialog/oauth?" +
                          "client_id=" + app_id +
                          "&redirect_uri=" + post_login_url +
                          "&scope=publish_stream")
            return "<script>top.location.href='" + dialog_url + "'</script>"
        else:
            # exchange the code Facebook sent back for an access token
            graph = GraphAPI()
            response = graph.get(
                path='oauth/access_token',
                client_id=app_id,
                client_secret=app_secret,
                redirect_uri=post_login_url,
                code=user_data.code
            )
            data = parse_qs(response)
            graph = GraphAPI(data['access_token'][0])
            graph.post(path='me/feed', message='Your message here')
            return "Posted!"

if __name__ == "__main__":
    web.application(urls, globals()).run()
```
|
Not sure if this helps anyone, but I was able to get an oauth\_access\_token by following this code.
```
from facepy import utils
app_id = 134134134134 # must be integer
app_secret = "XXXXXXXXXXXXXXXXXX"
oauth_access_token = utils.get_application_access_token(app_id, app_secret)
```
Hope this helps.
|
16,739,894
|
I've found [this library](https://github.com/pythonforfacebook/facebook-sdk/) (it seems to be the official one) and then [found this](https://stackoverflow.com/questions/10488913/how-to-obtain-a-user-access-token-in-python), but every time I find an answer, half of it is links to the [Facebook API documentation](https://developers.facebook.com/docs/reference/php/facebook-getAccessToken/), which talks about JavaScript or PHP and how to extract the token from links!
How do I do this in a simple Python script?
NB: what I really don't understand is why we need a library at all; can't we just use `urllib` and a regex to extract the `token` the way we extract other information?
|
2013/05/24
|
[
"https://Stackoverflow.com/questions/16739894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/861487/"
] |
Not sure if this helps anyone, but I was able to get an oauth\_access\_token by following this code.
```
from facepy import utils
app_id = 134134134134 # must be integer
app_secret = "XXXXXXXXXXXXXXXXXX"
oauth_access_token = utils.get_application_access_token(app_id, app_secret)
```
Hope this helps.
|
Here is a Gist I made using `Tornado`, since the other answer uses `web.py`:
<https://gist.github.com/abdelouahabb/5647185>
|
54,958,169
|
I have the three following dataframes:
```
df_A = pd.DataFrame( {'id_A': [1, 1, 1, 1, 2, 2, 3, 3],
'Animal_A': ['cat','dog','fish','bird','cat','fish','bird','cat' ]})
df_B = pd.DataFrame( {'id_B': [1, 2, 2, 3, 4, 4, 5],
'Animal_B': ['dog','cat','fish','dog','fish','cat','cat' ]})
df_P = pd.DataFrame( {'id_A': [1, 1, 2, 3],
'id_B': [2, 3, 4, 5]})
df_A
id_A Animal_A
0 1 cat
1 1 dog
2 1 fish
3 1 bird
4 2 cat
5 2 fish
6 3 bird
7 3 cat
df_B
id_B Animal_B
0 1 dog
1 2 cat
2 2 fish
3 3 dog
4 4 fish
5 4 cat
6 5 cat
df_P
id_A id_B
0 1 2
1 1 3
2 2 4
3 3 5
```
And I would like to add a column to df\_P that gives the number of animals shared between id\_A and id\_B. What I'm doing is:
```
df_P["n_common"] = np.nan
for i in df_P.index.tolist():
id_A = df_P["id_A"][i]
id_B = df_P["id_B"][i]
df_P.iloc[i,df_P.columns.get_loc('n_common')] = len(set(df_A['Animal_A'][df_A['id_A']==id_A]).intersection(df_B['Animal_B'][df_B['id_B']==id_B]))
```
The result being:
```
df_P
id_A id_B n_common
0 1 2 2.0
1 1 3 1.0
2 2 4 2.0
3 3 5 1.0
```
Is there a faster, more pythonic, way to do this? Is there a way to avoid the for loop?
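For intuition, the counting logic at the core of the loop (a set intersection per pair) can be sketched without pandas on the same toy data:

```python
from collections import defaultdict

# Same toy data as the dataframes above
A = [(1, 'cat'), (1, 'dog'), (1, 'fish'), (1, 'bird'),
     (2, 'cat'), (2, 'fish'), (3, 'bird'), (3, 'cat')]
B = [(1, 'dog'), (2, 'cat'), (2, 'fish'), (3, 'dog'),
     (4, 'fish'), (4, 'cat'), (5, 'cat')]
pairs = [(1, 2), (1, 3), (2, 4), (3, 5)]

# Build one set of animals per id on each side
sets_A, sets_B = defaultdict(set), defaultdict(set)
for i, animal in A:
    sets_A[i].add(animal)
for i, animal in B:
    sets_B[i].add(animal)

# One intersection size per (id_A, id_B) pair
n_common = [len(sets_A[a] & sets_B[b]) for a, b in pairs]
print(n_common)  # [2, 1, 2, 1]
```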
|
2019/03/02
|
[
"https://Stackoverflow.com/questions/54958169",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11139882/"
] |
Not sure if it is faster or more pythonic, but it avoids the for loop :)
```
import pandas as pd
df_A = pd.DataFrame( {'id_A': [1, 1, 1, 1, 2, 2, 3, 3],
'Animal_A': ['cat','dog','fish','bird','cat','fish','bird','cat' ]})
df_B = pd.DataFrame( {'id_B': [1, 2, 2, 3, 4, 4, 5],
'Animal_B': ['dog','cat','fish','dog','fish','cat','cat' ]})
df_P = pd.DataFrame( {'id_A': [1, 1, 2, 3],
'id_B': [2, 3, 4, 5]})
df = pd.merge(df_A, df_P, on='id_A')
df = pd.merge(df_B, df, on='id_B')
df = df[df['Animal_A'] == df['Animal_B']].groupby(['id_A', 'id_B'])['Animal_A'].count().reset_index()
df.rename({'Animal_A': 'n_common'},inplace=True,axis=1)
```
|
You can try the below:
```
(df_A.merge(df_B, left_on=['Animal_A'], right_on=['Animal_B'])
     .groupby(['id_A', 'id_B']).count().reset_index()
     .merge(df_P)
     .drop('Animal_B', axis=1)
     .rename(columns={'Animal_A': 'count'}))
```
|
42,282,577
|
I have this HTML
```html
<div class="callout callout-accordion" style="background-image: url("/images/expand.png");">
<span class="edit" data-pk="bandwidth_bar">Bandwidth Settings</span>
<span class="telnet-arrow"></span>
</div>
```
I'm trying to select **span** with text = `Bandwidth Settings`, and click on the **div** with class name = `callout`.
```python
if driver.find_element_by_tag_name("span") == ("Bandwidth Settings"):
print "Found"
time.sleep(100)
driver.find_element_by_tag_name("div").find_element_by_class_name("callout").click()
print "Not found"
time.sleep(100)
```
I kept getting
```none
Testing started at 1:59 PM ...
Not found
Process finished with exit code 0
```
---
What did I miss?
---
### Select the parent *div*
```python
if driver.find_element_by_xpath("//span[text()='Bandwidth Settings']") is None:
print "Not Found"
else:
print "Found"
span = driver.find_element_by_xpath("//span[text()='Bandwidth Settings']")
div = span.find_element_by_xpath('..')
div.click()
```
I got
>
> WebDriverException: Message: unknown error: Element
>
>
>
|
2017/02/16
|
[
"https://Stackoverflow.com/questions/42282577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4480164/"
] |
One way would be to use `find_element_by_xpath(xpath)` like this:
```
if driver.find_element_by_xpath("//span[contains(.,'Bandwidth Settings')]") is None:
print "Not found"
else:
print "Found"
...
```
For an exact match (as you asked for in your comment), use `"//span[text()='Bandwidth Settings']"`
On your *edited* question, try one of these:
Locate directly (if there is no other matching element):
```
driver.find_element_by_css_selector("div[style*='/images/telenet/expand.png']")
```
Locate via *span* (provided there isn't any other *div* on that level):
```
driver.find_element_by_xpath("//span[contains(.,'Bandwidth Settings')]/../div")
```
|
The code that you need to use:
```
from selenium.common.exceptions import NoSuchElementException
try:
span = driver.find_element_by_xpath('//span[text()="Bandwidth Settings"]')
print "Found"
except NoSuchElementException:
print "Not found"
```
If you need to select the parent `div` element:
```
div = span.find_element_by_xpath('./parent::div')
```
|
42,282,577
|
I have this HTML
```html
<div class="callout callout-accordion" style="background-image: url("/images/expand.png");">
<span class="edit" data-pk="bandwidth_bar">Bandwidth Settings</span>
<span class="telnet-arrow"></span>
</div>
```
I'm trying to select **span** with text = `Bandwidth Settings`, and click on the **div** with class name = `callout`.
```python
if driver.find_element_by_tag_name("span") == ("Bandwidth Settings"):
print "Found"
time.sleep(100)
driver.find_element_by_tag_name("div").find_element_by_class_name("callout").click()
print "Not found"
time.sleep(100)
```
I kept getting
```none
Testing started at 1:59 PM ...
Not found
Process finished with exit code 0
```
---
What did I miss?
---
### Select the parent *div*
```python
if driver.find_element_by_xpath("//span[text()='Bandwidth Settings']") is None:
print "Not Found"
else:
print "Found"
span = driver.find_element_by_xpath("//span[text()='Bandwidth Settings']")
div = span.find_element_by_xpath('..')
div.click()
```
I got
>
> WebDriverException: Message: unknown error: Element
>
>
>
|
2017/02/16
|
[
"https://Stackoverflow.com/questions/42282577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4480164/"
] |
One way would be to use `find_element_by_xpath(xpath)` like this:
```
if driver.find_element_by_xpath("//span[contains(.,'Bandwidth Settings')]") is None:
print "Not found"
else:
print "Found"
...
```
For an exact match (as you asked for in your comment), use `"//span[text()='Bandwidth Settings']"`
On your *edited* question, try one of these:
Locate directly (if there is no other matching element):
```
driver.find_element_by_css_selector("div[style*='/images/telenet/expand.png']")
```
Locate via *span* (provided there isn't any other *div* on that level):
```
driver.find_element_by_xpath("//span[contains(.,'Bandwidth Settings')]/../div")
```
|
If you have sizzle on the page ([jQuery](https://en.wikipedia.org/wiki/JQuery)) you can select spans by their text like so:
```javascript
$("span:contains('Bandwidth Settings')")
```
Which would be selected like so using the [C#](https://en.wikipedia.org/wiki/C_Sharp_%28programming_language%29) bindings:
```csharp
By.CssSelector("span:contains('Bandwidth Settings')")
```
|
24,857,779
|
I used the os.rename() method to rename a directory in my python script. The script is called automatically by the scheduler every day. Sometimes os.rename() fails with the error,
```
[Error 5] Access is denied
```
But all other times its working fine.
Code,
```
try:
if(os.path.exists(Downloaded_Path)):
os.rename(Downloaded_Path, Downloaded_Path + "_ByClientTool")
except Exception,e:
print "Error !!", str(e)
return 1
```
|
2014/07/21
|
[
"https://Stackoverflow.com/questions/24857779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1553605/"
] |
The error means that the user account that the scheduler uses to run the program does not have permissions to rename that directory.
One common reason for the fact that it sometimes works and sometimes does not is that the program creates some of the directories it needs to rename but not others.
* The directories created directly by the program have modify permissions for the user running the program, so it can rename those.
* But, directories that were previously created by something else may restrict the access for the user running the program by default.
Read about Windows File and Folder permissions: <http://technet.microsoft.com/en-us/library/bb727008.aspx>
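To make this failure easier to diagnose, the rename can be wrapped so a permission problem is reported instead of only raising. This is a sketch, not the original script; the function name and messages are invented, and note that `os.access` on Windows only reflects the read-only attribute, not full ACLs, so it is a hint rather than a guarantee:

```python
import os

def try_rename(src, dst):
    """Attempt a rename and report the likely cause on failure (illustrative)."""
    if not os.path.exists(src):
        print("Source does not exist: %s" % src)
        return False
    # On Windows this only checks the read-only flag, not ACLs.
    if not os.access(src, os.W_OK):
        print("No write access to %s for the account running this process" % src)
        return False
    try:
        os.rename(src, dst)
        return True
    except OSError as e:
        print("Rename failed: %s" % e)
        return False
```

Running this from the scheduler's own account (rather than your interactive session) shows whether that account can actually modify the directory.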
|
This will also fail if the host names are not "network qualified" the same way.
```
>>> os.renames(r'\\host.domain.com\joan\rocks', r'\\host\joan\jett\rocks')
WindowsError: [Error 5] Access is denied
>>> os.renames(r'\\host\joan\rocks', r'\\host\joan\jett\rocks')
>>>
>>> os.renames(r'\\host.domain.com\joan\rocks', r'\\host.domain.com\joan\jett\rocks')
>>>
```
|
24,857,779
|
I used the os.rename() method to rename a directory in my python script. The script is called automatically by the scheduler every day. Sometimes os.rename() fails with the error,
```
[Error 5] Access is denied
```
But all other times its working fine.
Code,
```
try:
if(os.path.exists(Downloaded_Path)):
os.rename(Downloaded_Path, Downloaded_Path + "_ByClientTool")
except Exception,e:
print "Error !!", str(e)
return 1
```
|
2014/07/21
|
[
"https://Stackoverflow.com/questions/24857779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1553605/"
] |
I had a similar problem on Windows 10: sometimes my python script could not rename a directory even though I could manually rename it without a problem.
I used Sysinternal's handle.exe tool to find that explorer.exe had a handle to a sub-directory of the directory I was trying to rename. It turns out explorer was adding this sub-directory to its "Quick Access" section which prevented my script from renaming the folder.
I wound up disabling the "Show frequently used folders in Quick access" option from Explorer -> View -> Options -> General -> Privacy.
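Since the handle held by Explorer is usually transient, another pragmatic workaround is to retry the rename a few times before giving up. This is a sketch; `rename_with_retry`, the attempt count, and the delay are arbitrary choices, not part of the original script:

```python
import os
import time

def rename_with_retry(src, dst, attempts=5, delay=1.0):
    """Retry os.rename, since Explorer or an indexer may briefly hold a handle."""
    for i in range(attempts):
        try:
            os.rename(src, dst)
            return True
        except OSError:
            if i == attempts - 1:
                raise  # still failing after all attempts: surface the error
            time.sleep(delay)
```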
|
The error means that the user account that the scheduler uses to run the program does not have permissions to rename that directory.
One common reason for the fact that it sometimes works and sometimes does not is that the program creates some of the directories it needs to rename but not others.
* The directories created directly by the program have modify permissions for the user running the program, so it can rename those.
* But, directories that were previously created by something else may restrict the access for the user running the program by default.
Read about Windows File and Folder permissions: <http://technet.microsoft.com/en-us/library/bb727008.aspx>
|
24,857,779
|
I used the os.rename() method to rename a directory in my python script. The script is called automatically by the scheduler every day. Sometimes os.rename() fails with the error,
```
[Error 5] Access is denied
```
But all other times its working fine.
Code,
```
try:
if(os.path.exists(Downloaded_Path)):
os.rename(Downloaded_Path, Downloaded_Path + "_ByClientTool")
except Exception,e:
print "Error !!", str(e)
return 1
```
|
2014/07/21
|
[
"https://Stackoverflow.com/questions/24857779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1553605/"
] |
The error means that the user account that the scheduler uses to run the program does not have permissions to rename that directory.
One common reason for the fact that it sometimes works and sometimes does not is that the program creates some of the directories it needs to rename but not others.
* The directories created directly by the program have modify permissions for the user running the program, so it can rename those.
* But, directories that were previously created by something else may restrict the access for the user running the program by default.
Read about Windows File and Folder permissions: <http://technet.microsoft.com/en-us/library/bb727008.aspx>
|
So if you have any file, application or folder open in the directory you are trying to rename, you'll get that error. You need to close them so that Windows removes them from the quick access list. This worked for me.
|
24,857,779
|
I used the os.rename() method to rename a directory in my python script. The script is called automatically by the scheduler every day. Sometimes os.rename() fails with the error,
```
[Error 5] Access is denied
```
But all other times its working fine.
Code,
```
try:
if(os.path.exists(Downloaded_Path)):
os.rename(Downloaded_Path, Downloaded_Path + "_ByClientTool")
except Exception,e:
print "Error !!", str(e)
return 1
```
|
2014/07/21
|
[
"https://Stackoverflow.com/questions/24857779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1553605/"
] |
I had a similar problem on Windows 10: sometimes my python script could not rename a directory even though I could manually rename it without a problem.
I used Sysinternal's handle.exe tool to find that explorer.exe had a handle to a sub-directory of the directory I was trying to rename. It turns out explorer was adding this sub-directory to its "Quick Access" section which prevented my script from renaming the folder.
I wound up disabling the "Show frequently used folders in Quick access" option from Explorer -> View -> Options -> General -> Privacy.
|
This will also fail if the host names are not "network qualified" the same way.
```
>>> os.renames(r'\\host.domain.com\joan\rocks', r'\\host\joan\jett\rocks')
WindowsError: [Error 5] Access is denied
>>> os.renames(r'\\host\joan\rocks', r'\\host\joan\jett\rocks')
>>>
>>> os.renames(r'\\host.domain.com\joan\rocks', r'\\host.domain.com\joan\jett\rocks')
>>>
```
|
24,857,779
|
I used the os.rename() method to rename a directory in my python script. The script is called automatically by the scheduler every day. Sometimes os.rename() fails with the error,
```
[Error 5] Access is denied
```
But all other times its working fine.
Code,
```
try:
if(os.path.exists(Downloaded_Path)):
os.rename(Downloaded_Path, Downloaded_Path + "_ByClientTool")
except Exception,e:
print "Error !!", str(e)
return 1
```
|
2014/07/21
|
[
"https://Stackoverflow.com/questions/24857779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1553605/"
] |
So if you have any file, application or folder open in the directory you are trying to rename, you'll get that error. You need to close them so that Windows removes them from the quick access list. This worked for me.
|
This will also fail if the host names are not "network qualified" the same way.
```
>>> os.renames(r'\\host.domain.com\joan\rocks', r'\\host\joan\jett\rocks')
WindowsError: [Error 5] Access is denied
>>> os.renames(r'\\host\joan\rocks', r'\\host\joan\jett\rocks')
>>>
>>> os.renames(r'\\host.domain.com\joan\rocks', r'\\host.domain.com\joan\jett\rocks')
>>>
```
|
24,857,779
|
I used the os.rename() method to rename a directory in my python script. The script is called automatically by the scheduler every day. Sometimes os.rename() fails with the error,
```
[Error 5] Access is denied
```
But all other times its working fine.
Code,
```
try:
if(os.path.exists(Downloaded_Path)):
os.rename(Downloaded_Path, Downloaded_Path + "_ByClientTool")
except Exception,e:
print "Error !!", str(e)
return 1
```
|
2014/07/21
|
[
"https://Stackoverflow.com/questions/24857779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1553605/"
] |
I had a similar problem on Windows 10: sometimes my python script could not rename a directory even though I could manually rename it without a problem.
I used Sysinternal's handle.exe tool to find that explorer.exe had a handle to a sub-directory of the directory I was trying to rename. It turns out explorer was adding this sub-directory to its "Quick Access" section which prevented my script from renaming the folder.
I wound up disabling the "Show frequently used folders in Quick access" option from Explorer -> View -> Options -> General -> Privacy.
|
So if you have any file, application or folder open in the directory you are trying to rename, you'll get that error. You need to close them so that Windows removes them from the quick access list. This worked for me.
|
36,428,178
|
Even with the most basic of code, my .txt file is coming out empty, and I can't understand why. I'm running this subroutine in `python 3` to gather information from the user. When I open the .txt file in both notepad and N++, I get an empty file.
Here's my code :
```
def Setup():
fw = open('AutoLoader.txt', 'a')
x = True
while x == True:
print("Enter new location to enter")
new_entry = str(input('Start with \'web\' if it\'s a web page\n'))
fw.write(new_entry)
y = input('New Data? Y/N\n')
if y == 'N' or y == 'n':
fw.close
break
fw.close
Start()
```
|
2016/04/05
|
[
"https://Stackoverflow.com/questions/36428178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Try replacing `fw.close` with `fw.close()`. Without the parentheses you only reference the bound method; it is never called, so the file is never closed and its write buffer may never be flushed to disk.
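A minimal demonstration of the difference (the filename is arbitrary):

```python
import os

f = open("demo_close.txt", "w")
f.write("hello")
f.close          # bare attribute access: evaluates the bound method, calls nothing
print(f.closed)  # False -- the file is still open, buffer possibly unflushed
f.close()        # actually calls close(), flushing the buffer to disk
print(f.closed)  # True

os.remove("demo_close.txt")  # clean up the demo file
```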
|
It works on Python 3.4:
```
def Setup():
fw = open('AutoLoader3.4.txt', 'a+')
x = True
while x == True:
print("Enter new location to enter")
new_entry = str(input('Start with \'web\' if it\'s a web page\n'))
fw.write(new_entry)
y = input('New Data? Y/N\n')
if y == 'N' or y == 'n':
fw.close()
break
fw.close()
Setup()
```
|
36,428,178
|
Even with the most basic of code, my .txt file is coming out empty, and I can't understand why. I'm running this subroutine in `python 3` to gather information from the user. When I open the .txt file in both notepad and N++, I get an empty file.
Here's my code :
```
def Setup():
fw = open('AutoLoader.txt', 'a')
x = True
while x == True:
print("Enter new location to enter")
new_entry = str(input('Start with \'web\' if it\'s a web page\n'))
fw.write(new_entry)
y = input('New Data? Y/N\n')
if y == 'N' or y == 'n':
fw.close
break
fw.close
Start()
```
|
2016/04/05
|
[
"https://Stackoverflow.com/questions/36428178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Try replacing `fw.close` with `fw.close()`. Without the parentheses you only reference the bound method; it is never called, so the file is never closed and its write buffer may never be flushed to disk.
|
Since we don't know what `Start()` does, it is ignored in this answer.
I wouldn't bother to close the file myself, but let a `with` statement do the job properly.
Following script works at least:
```
#!/usr/bin/env python3
def Setup():
with open('AutoLoader.txt', 'a') as fw:
while True:
print("Enter new location to enter")
new_entry = str(input("Start with 'web' if it's a web page\n"))
fw.write(new_entry + "\n")
y = input('New Data? Y/N\n')
if y in ['N', 'n']:
break
#Start()
Setup()
```
See:
```
nico@ometeotl:~/temp$ ./test_script3.py
Enter new location to enter
Start with 'web' if it's a web page
First user's entry
New Data? Y/N
N
nico@ometeotl:~/temp$ ./test_script3.py
Enter new location to enter
Start with 'web' if it's a web page
Another user's entry
New Data? Y/N
N
nico@ometeotl:~/temp$ cat AutoLoader.txt
First user's entry
Another user's entry
nico@ometeotl:~/temp$
```
Also note that a possibly missing AutoLoader.txt at start would be created automatically.
|
36,428,178
|
Even with the most basic of code, my .txt file is coming out empty, and I can't understand why. I'm running this subroutine in `python 3` to gather information from the user. When I open the .txt file in both notepad and N++, I get an empty file.
Here's my code :
```
def Setup():
fw = open('AutoLoader.txt', 'a')
x = True
while x == True:
print("Enter new location to enter")
new_entry = str(input('Start with \'web\' if it\'s a web page\n'))
fw.write(new_entry)
y = input('New Data? Y/N\n')
if y == 'N' or y == 'n':
fw.close
break
fw.close
Start()
```
|
2016/04/05
|
[
"https://Stackoverflow.com/questions/36428178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
It works on Python 3.4:
```
def Setup():
fw = open('AutoLoader3.4.txt', 'a+')
x = True
while x == True:
print("Enter new location to enter")
new_entry = str(input('Start with \'web\' if it\'s a web page\n'))
fw.write(new_entry)
y = input('New Data? Y/N\n')
if y == 'N' or y == 'n':
fw.close()
break
fw.close()
Setup()
```
|
Since we don't know what `Start()` does, it is ignored in this answer.
I wouldn't bother to close the file myself, but let a `with` statement do the job properly.
Following script works at least:
```
#!/usr/bin/env python3
def Setup():
with open('AutoLoader.txt', 'a') as fw:
while True:
print("Enter new location to enter")
new_entry = str(input("Start with 'web' if it's a web page\n"))
fw.write(new_entry + "\n")
y = input('New Data? Y/N\n')
if y in ['N', 'n']:
break
#Start()
Setup()
```
See:
```
nico@ometeotl:~/temp$ ./test_script3.py
Enter new location to enter
Start with 'web' if it's a web page
First user's entry
New Data? Y/N
N
nico@ometeotl:~/temp$ ./test_script3.py
Enter new location to enter
Start with 'web' if it's a web page
Another user's entry
New Data? Y/N
N
nico@ometeotl:~/temp$ cat AutoLoader.txt
First user's entry
Another user's entry
nico@ometeotl:~/temp$
```
Also note that a possibly missing AutoLoader.txt at start would be created automatically.
|
26,506,642
|
if d is a dict, python docs provide us with this method:
```
dict.get = get(...)
D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
```
Another famous pattern is:
```
try:
    D[k]
except:
d
```
But in my benchmarks (done with timeit), i have found that the following approach is the best:
```
if k in D:
D[k]
else:
d
```
The try/except pattern IS the fastest if no exception is raised, but if no exception is ever going to be raised, then we don't need this pattern at all. I now need to refactor all my code, which is written with a lot of .get()'s, because I need to squeeze some performance out of a limited system. Am I right or wrong? Why is the try/except pattern recommended when it looks like it's the slowest?
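For reference, the three patterns can be timed side by side with the `timeit` module. This is a sketch; the absolute numbers depend on the interpreter and on how often keys are missing, which is why the hit (key present) and miss (key absent) cases are measured separately:

```python
import timeit

D = {i: i for i in range(100)}

def via_get(k):
    return D.get(k)

def via_if(k):
    return D[k] if k in D else None

def via_try(k):
    try:
        return D[k]
    except KeyError:
        return None

# Measure each pattern on a present key (hit) and an absent key (miss).
for name, fn in [("get", via_get), ("if/else", via_if), ("try/except", via_try)]:
    hit = timeit.timeit(lambda: fn(50), number=100000)
    miss = timeit.timeit(lambda: fn(-1), number=100000)
    print("%-10s hit=%.4fs  miss=%.4fs" % (name, hit, miss))
```

On CPython the try/except version typically does well on hits (single lookup, no method call) but is by far the slowest on misses, where the exception machinery kicks in.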
|
2014/10/22
|
[
"https://Stackoverflow.com/questions/26506642",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3599101/"
] |
Looks like the `D[k] if k in D else d` pattern is about twice faster than .get, at least for some usages.
.get
```
$ python -m timeit -s 'D={}; k=xrange(0,100000)' 'D.get(k)'
10000000 loops, best of 3: 0.0934 usec per loop
```
if/else
```
$ python -m timeit -s 'D={}; k=xrange(0,100000)' 'D[k] if k in D else None'
10000000 loops, best of 3: 0.0487 usec per loop
```
|
Maybe you should give `pypy` a try and compile that on your embedded system. `if-then-else` and `get` have similar results. Some benchmarks:
PyPy:
```
$ pypy3 -m timeit -s 'd={}; k=0' 'd[k] if k in d else None'
1000000000 loops, best of 3: 0.0008 usec per loop
$ pypy3 -m timeit -s 'd={}; k=0' 'd.get(k)'
1000000000 loops, best of 3: 0.000803 usec per loop
```
Python 3:
```
$ python -m timeit -s 'd={}; k=0' 'd.get(k)' 1 ↵
10000000 loops, best of 3: 0.101 usec per loop
$ python -m timeit -s 'd={}; k=0' 'd[k] if k in d else 0'
10000000 loops, best of 3: 0.0372 usec per loop
```
|
37,033,709
|
I know that it's often [best practice](https://softwareengineering.stackexchange.com/questions/213935/why-use-classes-when-programming-a-tkinter-gui-in-python) to write Tkinter GUI code using object-oriented programming (OOP), but I'm trying to keep things simple because I'm new to Python.
I have written the following code to create a simple GUI:
```
#!/usr/bin/python3
from tkinter import *
from tkinter import ttk
def ChangeLabelText():
MyLabel.config(text = 'You pressed the button!')
def main():
Root = Tk()
MyLabel = ttk.Label(Root, text = 'The button has not been pressed.')
MyLabel.pack()
MyButton = ttk.Button(Root, text = 'Press Me', command = ChangeLabelText)
MyButton.pack()
Root.mainloop()
if __name__ == "__main__": main()
```
[The GUI looks like this.](https://i.stack.imgur.com/ppWic.png)
I thought the text in the GUI (MyLabel) would change to "You pressed the button!" when the button is clicked, but I get the following error when I click the button:
```
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\elsey\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:/Users/elsey/Documents/question code.py", line 6, in ChangeLabelText
MyLabel.config(text = 'You pressed the button!')
NameError: name 'MyLabel' is not defined
```
What am I doing wrong? Any guidance would be appreciated.
|
2016/05/04
|
[
"https://Stackoverflow.com/questions/37033709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6030297/"
] |
`MyLabel` is local to `main()`, so you cannot access it that way from `ChangeLabelText()`.
If you do not want to change the design of your program, then you will need to change the definition of `ChangeLabelText()` like what follows:
```
def ChangeLabelText(m):
m.config(text = 'You pressed the button!')
```
And withing main() you will need to pass `MyLabel` as an argument to `ChangeLabelText()`.
But again, you will have a problem if you write `command = ChangeLabelText(MyLabel)` when you create `MyButton`: the call is executed immediately, while the button is being constructed, so you will not get the desired result.
To resolve this latter problem, you will have to use (and may want to read about) [`lambda`](http://www.secnetix.de/olli/Python/lambda_functions.hawk)
Full program
============
So your program becomes:
```
#!/usr/bin/python3
from tkinter import *
from tkinter import ttk
def ChangeLabelText(m):
m.config(text = 'You pressed the button!')
def main():
Root = Tk()
MyLabel = ttk.Label(Root, text = 'The button has not been pressed.')
MyLabel.pack()
MyButton = ttk.Button(Root, text = 'Press Me', command = lambda: ChangeLabelText(MyLabel))
MyButton.pack()
Root.mainloop()
if __name__ == "__main__":
main()
```
Demo
====
Before clicking:
[](https://i.stack.imgur.com/MqwvI.png)
After clicking:
[](https://i.stack.imgur.com/9RyzN.png)
|
> *but I'm trying to keep things simple because I'm new to Python*
Hopefully this helps in understanding that classes *are* the simple way; otherwise you have to jump through hoops and manually keep track of many variables. Also, the Python Style Guide suggests CamelCase for class names and lower\_case\_with\_underscores for variables and functions: <https://www.python.org/dev/peps/pep-0008/>
```
from tkinter import *
from tkinter import ttk
class ChangeLabel():
def __init__(self):
root = Tk()
self.my_label = ttk.Label(root, text = 'The button has not been pressed.')
self.my_label.pack()
## not necessary to keep a reference to this button
## because it is not referenced anywhere else
ttk.Button(root, text = 'Press Me',
command = self.change_label_text).pack()
root.mainloop()
def change_label_text(self):
self.my_label.config(text = 'You pressed the button!')
if __name__ == "__main__":
CL=ChangeLabel()
```
|
37,033,709
|
I know that it's often [best practice](https://softwareengineering.stackexchange.com/questions/213935/why-use-classes-when-programming-a-tkinter-gui-in-python) to write Tkinter GUI code using object-oriented programming (OOP), but I'm trying to keep things simple because I'm new to Python.
I have written the following code to create a simple GUI:
```
#!/usr/bin/python3
from tkinter import *
from tkinter import ttk
def ChangeLabelText():
MyLabel.config(text = 'You pressed the button!')
def main():
Root = Tk()
MyLabel = ttk.Label(Root, text = 'The button has not been pressed.')
MyLabel.pack()
MyButton = ttk.Button(Root, text = 'Press Me', command = ChangeLabelText)
MyButton.pack()
Root.mainloop()
if __name__ == "__main__": main()
```
[The GUI looks like this.](https://i.stack.imgur.com/ppWic.png)
I thought the text in the GUI (MyLabel) would change to "You pressed the button!" when the button is clicked, but I get the following error when I click the button:
```
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\elsey\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:/Users/elsey/Documents/question code.py", line 6, in ChangeLabelText
MyLabel.config(text = 'You pressed the button!')
NameError: name 'MyLabel' is not defined
```
What am I doing wrong? Any guidance would be appreciated.
|
2016/05/04
|
[
"https://Stackoverflow.com/questions/37033709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6030297/"
] |
`MyLabel` is local to `main()`, so you cannot access it that way from `ChangeLabelText()`.
If you do not want to change the design of your program, then you will need to change the definition of `ChangeLabelText()` like what follows:
```
def ChangeLabelText(m):
m.config(text = 'You pressed the button!')
```
And withing main() you will need to pass `MyLabel` as an argument to `ChangeLabelText()`.
But again, you will have a problem if you write `command = ChangeLabelText(MyLabel)` when you create `MyButton`: the call is executed immediately, while the button is being constructed, so you will not get the desired result.
To resolve this latter problem, you will have to use (and may want to read about) [`lambda`](http://www.secnetix.de/olli/Python/lambda_functions.hawk)
Full program
============
So your program becomes:
```
#!/usr/bin/python3
from tkinter import *
from tkinter import ttk
def ChangeLabelText(m):
m.config(text = 'You pressed the button!')
def main():
Root = Tk()
MyLabel = ttk.Label(Root, text = 'The button has not been pressed.')
MyLabel.pack()
MyButton = ttk.Button(Root, text = 'Press Me', command = lambda: ChangeLabelText(MyLabel))
MyButton.pack()
Root.mainloop()
if __name__ == "__main__":
main()
```
Demo
====
Before clicking:
[](https://i.stack.imgur.com/MqwvI.png)
After clicking:
[](https://i.stack.imgur.com/9RyzN.png)
|
Are you sure you don't want to do it as a class (I think it makes the code a bit cleaner as your project grows)? Here is a way to accomplish what you're looking for:
```
#!/usr/bin/python3
from tkinter import *
from tkinter import ttk
class myWindow:
def __init__(self, master):
self.MyLabel = ttk.Label(root, text = 'The button has not been pressed.')
self.MyLabel.pack()
self.MyButton = ttk.Button(root, text = 'Press Me', command = self.ChangeLabelText)
self.MyButton.pack()
def ChangeLabelText(self, event=None):
self.MyLabel.config(text = 'You pressed the button!')
if __name__ == "__main__":
root = Tk()
mainWindow = myWindow(root)
root.mainloop()
```
On a Mac, it looks like this before pressing the button:
[](https://i.stack.imgur.com/riZhv.png)
And when you press it:
[](https://i.stack.imgur.com/YVJEy.png)
But basically, in order to change the text of a Label or Button later, you need to keep an active reference to it. Here we do that by building the window as a class and storing each widget as an attribute in the form `self.widget_name = widget()`.
|
37,033,709
|
I know that it's often [best practice](https://softwareengineering.stackexchange.com/questions/213935/why-use-classes-when-programming-a-tkinter-gui-in-python) to write Tkinter GUI code using object-oriented programming (OOP), but I'm trying to keep things simple because I'm new to Python.
I have written the following code to create a simple GUI:
```
#!/usr/bin/python3
from tkinter import *
from tkinter import ttk
def ChangeLabelText():
MyLabel.config(text = 'You pressed the button!')
def main():
Root = Tk()
MyLabel = ttk.Label(Root, text = 'The button has not been pressed.')
MyLabel.pack()
MyButton = ttk.Button(Root, text = 'Press Me', command = ChangeLabelText)
MyButton.pack()
Root.mainloop()
if __name__ == "__main__": main()
```
[The GUI looks like this.](https://i.stack.imgur.com/ppWic.png)
I thought the text in the GUI (MyLabel) would change to "You pressed the button!" when the button is clicked, but I get the following error when I click the button:
```
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\elsey\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:/Users/elsey/Documents/question code.py", line 6, in ChangeLabelText
MyLabel.config(text = 'You pressed the button!')
NameError: name 'MyLabel' is not defined
```
What am I doing wrong? Any guidance would be appreciated.
|
2016/05/04
|
[
"https://Stackoverflow.com/questions/37033709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6030297/"
] |
Are you sure you don't want to do it as a class (I think it makes the code a bit cleaner as your project grows)? Here is a way to accomplish what you're looking for:
```
#!/usr/bin/python3
from tkinter import *
from tkinter import ttk
class myWindow:
def __init__(self, master):
self.MyLabel = ttk.Label(root, text = 'The button has not been pressed.')
self.MyLabel.pack()
self.MyButton = ttk.Button(root, text = 'Press Me', command = self.ChangeLabelText)
self.MyButton.pack()
def ChangeLabelText(self, event=None):
self.MyLabel.config(text = 'You pressed the button!')
if __name__ == "__main__":
root = Tk()
mainWindow = myWindow(root)
root.mainloop()
```
On a Mac, it looks like this before pressing the button:
[](https://i.stack.imgur.com/riZhv.png)
And when you press it:
[](https://i.stack.imgur.com/YVJEy.png)
But basically, in order to change the text of a Label or Button later, you need to keep an active reference to it. Here we do that by building the window as a class and storing each widget as an attribute in the form `self.widget_name = widget()`.
|
> *but I'm trying to keep things simple because I'm new to Python*
Hopefully this helps in understanding that classes *are* the simple way; otherwise you have to jump through hoops and manually keep track of many variables. Also, the Python Style Guide suggests CamelCase for class names and lower\_case\_with\_underscores for variables and functions: <https://www.python.org/dev/peps/pep-0008/>
```
from tkinter import *
from tkinter import ttk
class ChangeLabel():
def __init__(self):
root = Tk()
self.my_label = ttk.Label(root, text = 'The button has not been pressed.')
self.my_label.pack()
## not necessary to keep a reference to this button
## because it is not referenced anywhere else
ttk.Button(root, text = 'Press Me',
command = self.change_label_text).pack()
root.mainloop()
def change_label_text(self):
self.my_label.config(text = 'You pressed the button!')
if __name__ == "__main__":
CL=ChangeLabel()
```
|
39,026,120
|
I have a python string from which I need to remove parentheses. The standard way is `text = re.sub(r'\([^)]*\)', '', text)`, which removes the content within the parentheses.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, the `And good luck)` part is still left over. I know I can scan through the entire string, keep a counter of `(` and `)`, and when the numbers are balanced, index the locations of `(` and `)` and remove the content in the middle, but is there a better/cleaner way of doing that? It doesn't need to be regex; whatever works is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
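For comparison, the counter-based scan described above takes only a few lines. This is a sketch; the final `split`/`join` collapses the doubled space left behind where a group is removed, so the example yields exactly `Hi this is a test sentence`:

```python
def strip_parens(text):
    # Keep only characters at nesting depth 0; an unbalanced ')' is ignored.
    depth = 0
    kept = []
    for ch in text:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth = max(depth - 1, 0)
        elif depth == 0:
            kept.append(ch)
    # Normalize whitespace left over where a parenthesized group was cut out.
    return ' '.join(''.join(kept).split())

print(strip_parens('Hi this is a test ( a b ( c d) e) sentence'))
# Hi this is a test sentence
```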
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
With the re module (replace the innermost parenthesis until there's no more replacement to do):
```
import re
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
nb_rep = 1
while (nb_rep):
(s, nb_rep) = re.subn(r'\([^()]*\)', '', s)
print(s)
```
With the [regex module](https://pypi.python.org/pypi/regex) that allows recursion:
```
import regex
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
print(regex.sub(r'\([^()]*+(?:(?R)[^()]*)*+\)', '', s))
```
Where `(?R)` refers to the whole pattern itself.
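A quick check of the `re.subn` loop on the asker's example sentence. Note that stripping the group leaves its surrounding spaces behind, so a whitespace-collapsing step is added here to match the asker's expected output exactly:

```python
import re

s = 'Hi this is a test ( a b ( c d) e) sentence'
nb_rep = 1
while nb_rep:
    # Remove the innermost parenthesized groups, repeating until none remain.
    (s, nb_rep) = re.subn(r'\([^()]*\)', '', s)

# Removing '( a b ( c d) e)' leaves two spaces between 'test' and 'sentence';
# collapse runs of whitespace to a single space:
s = ' '.join(s.split())
print(s)  # Hi this is a test sentence
```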
|
First I split the line into tokens that do not contain the parentheses, then join them back into a new line:

```
import re

line = "(Data with in (Boo) And good luck)"
new_line = "".join(re.split(r'(?:[()])', line))
print(new_line)
# Data with in Boo And good luck
```
|
39,026,120
|
I have a python string that I need to remove parentheses. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have `And good luck)` part left. I know I can scan through the entire string and try to keep a counter of number of `(` and `)` and when the numbers are balanced, index the location of `(` and `)` and remove the content within middle, but is there a better/cleaner way for doing that? It doesn't need to be regex, whatever it will work is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
First I split the line into tokens that do not contain the parentheses, then join them back into a new line:

```
import re

line = "(Data with in (Boo) And good luck)"
new_line = "".join(re.split(r'(?:[()])', line))
print(new_line)
# Data with in Boo And good luck
```
|
No regex...
```
>>> a = 'Hi this is a test ( a b ( c d) e) sentence'
>>> o = ['(' == t or t == ')' for t in a]
>>> o
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, True, False, False,
False, False, False, True, False, False, False, False, True, False, False,
True, False, False, False, False, False, False, False, False, False]
>>> start,end=0,0
>>> for n,i in enumerate(o):
... if i and not start:
... start = n
... if i and start:
... end = n
...
>>>
>>> start
18
>>> end
32
>>> a1 = ' '.join(''.join(i for n,i in enumerate(a) if (n<start or n>end)).split())
>>> a1
'Hi this is a test sentence'
>>>
```
|
39,026,120
|
I have a python string that I need to remove parentheses. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have `And good luck)` part left. I know I can scan through the entire string and try to keep a counter of number of `(` and `)` and when the numbers are balanced, index the location of `(` and `)` and remove the content within middle, but is there a better/cleaner way for doing that? It doesn't need to be regex, whatever it will work is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
First I split the line into tokens that do not contain the parentheses, then join them back into a new line:

```
import re

line = "(Data with in (Boo) And good luck)"
new_line = "".join(re.split(r'(?:[()])', line))
print(new_line)
# Data with in Boo And good luck
```
|
Assuming (1) there are always matching parentheses and (2) we only remove the parentheses and everything in between them (i.e. surrounding spaces around the parentheses are untouched), the following should work.
It's basically a state machine that maintains the current depth of nested parentheses. We keep the character if it's (1) not a parenthesis and (2) the current depth is 0.
*No regexes. No recursion. A single pass through the input string without any intermediate lists.*
```
tests = [
"Hi this is a test ( a b ( c d) e) sentence",
"(Data with in (Boo) And good luck)",
]
delta = {
'(': 1,
')': -1,
}
def remove_paren_groups(input):
depth = 0
for c in input:
d = delta.get(c, 0)
depth += d
if d != 0 or depth > 0:
continue
yield c
for input in tests:
print ' IN: %s' % repr(input)
print 'OUT: %s' % repr(''.join(remove_paren_groups(input)))
```
Output:
```
IN: 'Hi this is a test ( a b ( c d) e) sentence'
OUT: 'Hi this is a test sentence'
IN: '(Data with in (Boo) And good luck)'
OUT: ''
```
|
39,026,120
|
I have a python string that I need to remove parentheses. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have `And good luck)` part left. I know I can scan through the entire string and try to keep a counter of number of `(` and `)` and when the numbers are balanced, index the location of `(` and `)` and remove the content within middle, but is there a better/cleaner way for doing that? It doesn't need to be regex, whatever it will work is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
First I split the line into tokens that do not contain the parentheses, then join them back into a new line:

```
import re

line = "(Data with in (Boo) And good luck)"
new_line = "".join(re.split(r'(?:[()])', line))
print(new_line)
# Data with in Boo And good luck
```
|
Referenced from [here](https://www.w3resource.com/python-exercises/re/python-re-exercise-50.php) (Python 2 code):

```
import re
item = "example (.com) w3resource github (.com) stackoverflow (.com)"
### Add these lines in case of non-ASCII problems:
# -*- coding: utf-8 -*-
item = item.decode('ascii', errors='ignore').encode()
print re.sub(r" ?\([^)]+\)", "", item)
```
|
39,026,120
|
I have a python string that I need to remove parentheses. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have `And good luck)` part left. I know I can scan through the entire string and try to keep a counter of number of `(` and `)` and when the numbers are balanced, index the location of `(` and `)` and remove the content within middle, but is there a better/cleaner way for doing that? It doesn't need to be regex, whatever it will work is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
With the re module (replace the innermost parenthesis until there's no more replacement to do):
```
import re
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
nb_rep = 1
while (nb_rep):
(s, nb_rep) = re.subn(r'\([^()]*\)', '', s)
print(s)
```
With the [regex module](https://pypi.python.org/pypi/regex) that allows recursion:
```
import regex
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
print(regex.sub(r'\([^()]*+(?:(?R)[^()]*)*+\)', '', s))
```
Where `(?R)` refers to the whole pattern itself.
|
No regex...
```
>>> a = 'Hi this is a test ( a b ( c d) e) sentence'
>>> o = ['(' == t or t == ')' for t in a]
>>> o
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, True, False, False,
False, False, False, True, False, False, False, False, True, False, False,
True, False, False, False, False, False, False, False, False, False]
>>> start,end=0,0
>>> for n,i in enumerate(o):
... if i and not start:
... start = n
... if i and start:
... end = n
...
>>>
>>> start
18
>>> end
32
>>> a1 = ' '.join(''.join(i for n,i in enumerate(a) if (n<start or n>end)).split())
>>> a1
'Hi this is a test sentence'
>>>
```
|
39,026,120
|
I have a python string that I need to remove parentheses. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have `And good luck)` part left. I know I can scan through the entire string and try to keep a counter of number of `(` and `)` and when the numbers are balanced, index the location of `(` and `)` and remove the content within middle, but is there a better/cleaner way for doing that? It doesn't need to be regex, whatever it will work is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
With the re module (replace the innermost parenthesis until there's no more replacement to do):
```
import re
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
nb_rep = 1
while (nb_rep):
(s, nb_rep) = re.subn(r'\([^()]*\)', '', s)
print(s)
```
With the [regex module](https://pypi.python.org/pypi/regex) that allows recursion:
```
import regex
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
print(regex.sub(r'\([^()]*+(?:(?R)[^()]*)*+\)', '', s))
```
Where `(?R)` refers to the whole pattern itself.
|
Assuming (1) there are always matching parentheses and (2) we only remove the parentheses and everything in between them (i.e. surrounding spaces around the parentheses are untouched), the following should work.
It's basically a state machine that maintains the current depth of nested parentheses. We keep the character if it's (1) not a parenthesis and (2) the current depth is 0.
*No regexes. No recursion. A single pass through the input string without any intermediate lists.*
```
tests = [
"Hi this is a test ( a b ( c d) e) sentence",
"(Data with in (Boo) And good luck)",
]
delta = {
'(': 1,
')': -1,
}
def remove_paren_groups(input):
depth = 0
for c in input:
d = delta.get(c, 0)
depth += d
if d != 0 or depth > 0:
continue
yield c
for input in tests:
print ' IN: %s' % repr(input)
print 'OUT: %s' % repr(''.join(remove_paren_groups(input)))
```
Output:
```
IN: 'Hi this is a test ( a b ( c d) e) sentence'
OUT: 'Hi this is a test sentence'
IN: '(Data with in (Boo) And good luck)'
OUT: ''
```
|
39,026,120
|
I have a python string that I need to remove parentheses. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have `And good luck)` part left. I know I can scan through the entire string and try to keep a counter of number of `(` and `)` and when the numbers are balanced, index the location of `(` and `)` and remove the content within middle, but is there a better/cleaner way for doing that? It doesn't need to be regex, whatever it will work is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
With the re module (replace the innermost parenthesis until there's no more replacement to do):
```
import re
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
nb_rep = 1
while (nb_rep):
(s, nb_rep) = re.subn(r'\([^()]*\)', '', s)
print(s)
```
With the [regex module](https://pypi.python.org/pypi/regex) that allows recursion:
```
import regex
s = r'Sainte Anne -(Data with in (Boo) And good luck) Charenton'
print(regex.sub(r'\([^()]*+(?:(?R)[^()]*)*+\)', '', s))
```
Where `(?R)` refers to the whole pattern itself.
|
Referenced from [here](https://www.w3resource.com/python-exercises/re/python-re-exercise-50.php) (Python 2 code):

```
import re
item = "example (.com) w3resource github (.com) stackoverflow (.com)"
### Add these lines in case of non-ASCII problems:
# -*- coding: utf-8 -*-
item = item.decode('ascii', errors='ignore').encode()
print re.sub(r" ?\([^)]+\)", "", item)
```
|
39,026,120
|
I have a python string that I need to remove parentheses. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have `And good luck)` part left. I know I can scan through the entire string and try to keep a counter of number of `(` and `)` and when the numbers are balanced, index the location of `(` and `)` and remove the content within middle, but is there a better/cleaner way for doing that? It doesn't need to be regex, whatever it will work is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
No regex...
```
>>> a = 'Hi this is a test ( a b ( c d) e) sentence'
>>> o = ['(' == t or t == ')' for t in a]
>>> o
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, True, False, False,
False, False, False, True, False, False, False, False, True, False, False,
True, False, False, False, False, False, False, False, False, False]
>>> start,end=0,0
>>> for n,i in enumerate(o):
... if i and not start:
... start = n
... if i and start:
... end = n
...
>>>
>>> start
18
>>> end
32
>>> a1 = ' '.join(''.join(i for n,i in enumerate(a) if (n<start or n>end)).split())
>>> a1
'Hi this is a test sentence'
>>>
```
|
Referenced from [here](https://www.w3resource.com/python-exercises/re/python-re-exercise-50.php) (Python 2 code):

```
import re
item = "example (.com) w3resource github (.com) stackoverflow (.com)"
### Add these lines in case of non-ASCII problems:
# -*- coding: utf-8 -*-
item = item.decode('ascii', errors='ignore').encode()
print re.sub(r" ?\([^)]+\)", "", item)
```
|
39,026,120
|
I have a python string that I need to remove parentheses. The standard way is to use `text = re.sub(r'\([^)]*\)', '', text)`, so the content within the parentheses will be removed.
However, I just found a string that looks like `(Data with in (Boo) And good luck)`. With the regex I use, it will still have `And good luck)` part left. I know I can scan through the entire string and try to keep a counter of number of `(` and `)` and when the numbers are balanced, index the location of `(` and `)` and remove the content within middle, but is there a better/cleaner way for doing that? It doesn't need to be regex, whatever it will work is great, thanks.
Someone asked for expected result so here's what I am expecting:
`Hi this is a test ( a b ( c d) e) sentence`
Post replace I want it to be `Hi this is a test sentence`, instead of `Hi this is a test e) sentence`
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39026120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1294529/"
] |
Assuming (1) there are always matching parentheses and (2) we only remove the parentheses and everything in between them (i.e. surrounding spaces around the parentheses are untouched), the following should work.
It's basically a state machine that maintains the current depth of nested parentheses. We keep the character if it's (1) not a parenthesis and (2) the current depth is 0.
*No regexes. No recursion. A single pass through the input string without any intermediate lists.*
```
tests = [
"Hi this is a test ( a b ( c d) e) sentence",
"(Data with in (Boo) And good luck)",
]
delta = {
'(': 1,
')': -1,
}
def remove_paren_groups(input):
depth = 0
for c in input:
d = delta.get(c, 0)
depth += d
if d != 0 or depth > 0:
continue
yield c
for input in tests:
print ' IN: %s' % repr(input)
print 'OUT: %s' % repr(''.join(remove_paren_groups(input)))
```
Output:
```
IN: 'Hi this is a test ( a b ( c d) e) sentence'
OUT: 'Hi this is a test sentence'
IN: '(Data with in (Boo) And good luck)'
OUT: ''
```
|
Referenced from [here](https://www.w3resource.com/python-exercises/re/python-re-exercise-50.php) (Python 2 code):

```
import re
item = "example (.com) w3resource github (.com) stackoverflow (.com)"
### Add these lines in case of non-ASCII problems:
# -*- coding: utf-8 -*-
item = item.decode('ascii', errors='ignore').encode()
print re.sub(r" ?\([^)]+\)", "", item)
```
|
3,433,806
|
I was reading on web2py framework for a hobby project of mine I am doing. I learned how to program in Python when I was younger so I do have a grasp on it. Right now I am more of a PHP dev but kindda loathe it.
I just have this doubt that pops in: Is there a way to use "Vanilla" python on the backend? I mean Vanilla like PHP, without a Framework. How does templating work in that way? I mean, with indentation and Everything it kinda misses the point.
Anyway I am trying web2py and really liking it.
|
2010/08/08
|
[
"https://Stackoverflow.com/questions/3433806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/399621/"
] |
There is no reason to do that :) but if you insist, you can write on top of [WSGI](http://wsgi.org/wsgi/).
If you like the vanilla style, I suggest trying a micro-framework such as web.py.
|
Without a framework, you use WSGI. To do this, you write a function `application` like so:

```
def application(environ, start_response):
    start_response("200 OK", [('Content-Type', 'text/plain')])
    return ["hello world"]
```

`environ` contains the CGI variables and other request data. Normally, `application` will call other functions with the same call signature, so you get a chain of functions, each of which handles a particular aspect of processing the request.
You are of course responsible for handling your own templates. Nothing about it is built into the language.
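The standard library even ships a reference server (`wsgiref.simple_server`) that can run such an `application` function directly, with no framework involved. A minimal sketch (host, port, and response body here are illustrative):

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello world']  # WSGI bodies are iterables of bytes in Python 3

# Port 0 asks the OS for any free port.
httpd = make_server('127.0.0.1', 0, application)
print('Serving on port', httpd.server_port)
# httpd.serve_forever()   # uncomment to actually handle requests (blocks)
httpd.server_close()
```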
|
3,433,806
|
I was reading on web2py framework for a hobby project of mine I am doing. I learned how to program in Python when I was younger so I do have a grasp on it. Right now I am more of a PHP dev but kindda loathe it.
I just have this doubt that pops in: Is there a way to use "Vanilla" python on the backend? I mean Vanilla like PHP, without a Framework. How does templating work in that way? I mean, with indentation and Everything it kinda misses the point.
Anyway I am trying web2py and really liking it.
|
2010/08/08
|
[
"https://Stackoverflow.com/questions/3433806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/399621/"
] |
There is no reason to do that :) but if you insist, you can write on top of [WSGI](http://wsgi.org/wsgi/).
If you like the vanilla style, I suggest trying a micro-framework such as web.py.
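For reference, a minimal WSGI application is just a callable; the sketch below (names and response body are illustrative) can be handed to any WSGI server, or invoked directly with a stub environ to see what a server would get back:

```python
def application(environ, start_response):
    # environ: dict of CGI-style request variables supplied by the server.
    # start_response: callable the app uses to set status and headers.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from plain WSGI']

# Direct invocation with a stub environ and a no-op start_response:
body = application({'REQUEST_METHOD': 'GET'}, lambda status, headers: None)
print(body)  # [b'hello from plain WSGI']
```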
|
The mixing of logic, content, and presentation as naïvely encouraged by PHP is an abomination. It is the polar opposite of good design practice, and should not be imported to other languages (it shouldn't even be used in PHP, and thankfully the PHP world in general is ever so slowly moving away from it).
You should learn about [Model-View-Controller (MVC)](http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) which, while not the final word on good real-world design, forms an important basis for modern web development practices, and serves as common ground, or a sort of *lingua franca*, in discussions about application layout.
Most of the time, you should be using some form of web framework, particularly one that provides templating. web2py is not a bad choice. Other popular frameworks include [Pylons](http://pylonshq.com/) and [Django](http://www.djangoproject.com/).
Most Python web frameworks are very modular. You can use them in their entirety for everything in your app, or just bits and pieces. You might, for example, use Django's URL dispatcher, but not its models/ORM, or maybe you use everything in it except its templating engine, pulling in, say, [Jinja](http://jinja.pocoo.org/). It's up to you.
You can even write traditional CGI scripts (take a look at the [CGI module](http://docs.python.org/library/cgi.html)), while still using a templating engine of your choice.
You should start learning about all of these things and finding what works best for you. But the one thing you should *not* do is try to treat Python web development like PHP.
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got ProfilerNotRunningError with this message "summary\_ops\_v2.py:1161] Trace already enabled".
Why trace already enabled? How can I solve the problem?
I tried to solve it with new log directions(I thought then it would make the trace be renewed), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
I was facing the same issue, and even customizing the `log_dir` option using `datetime` didn't work. Check this page, which helped me: <https://github.com/tensorflow/tensorboard/issues/2819>. I just added `profile_batch = 100000000` to the callback, as in `TensorBoard(log_dir=log_dir, .., profile_batch=100000000)`.
|
I faced this issue because of mixed slashes in the TensorBoard logdir (on Windows).
This happened when I used:
```
logdir_path = os.path.join(r'D:something\models', 'model1')
```
Solved using:
```
logdir_path = os.path.normpath(os.path.join(r'D:something\models', 'model1'))
callbacks = [tf.keras.callbacks.TensorBoard(log_dir=logdir_path)]
```
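To see why the normalization matters, `ntpath` (the Windows flavour of `os.path`, importable on any platform) shows how mixed separators get unified; the path below is a hypothetical example:

```python
import ntpath

# A Windows-style path with mixed '/' and '\' separators, as can result
# from joining a raw string prefix with forward-slash components:
mixed = 'D:/models\\model1'
print(ntpath.normpath(mixed))  # D:\models\model1
```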
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got ProfilerNotRunningError with this message "summary\_ops\_v2.py:1161] Trace already enabled".
Why trace already enabled? How can I solve the problem?
I tried to solve it with new log directions(I thought then it would make the trace be renewed), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
I was facing the same issue, and even customizing the log\_dir option using datetime didn't work. This page helped me: <https://github.com/tensorflow/tensorboard/issues/2819>. I just added 'profile\_batch = 100000000' to this callback as:
TensorBoard(log\_dir=log\_dir, .., profile\_batch = 100000000)
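As a sketch of that fix (assuming a Keras `TensorBoard` callback; `100000000` is simply a batch index that is never reached, so the profiler never starts and the error is avoided):

```python
import datetime
import tensorflow as tf

# A fresh, timestamped log directory per run.
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

# profile_batch pushed far past any batch index that will actually occur,
# which effectively disables the profiler and sidesteps ProfilerNotRunningError.
tensorboard_cb = tf.keras.callbacks.TensorBoard(
    log_dir=log_dir,
    histogram_freq=1,
    profile_batch=100000000,
)
```

Pass `callbacks=[tensorboard_cb]` to `model.fit(...)` as in the question.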
|
```
callbacks = [tensorflow.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1, profile_batch = 100000000)]
```
Simply adding the profile batch solved my problem.
The problem mostly occurs in `Tensorflow V2.1` and on `Windows`.
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got a ProfilerNotRunningError with the message "summary\_ops\_v2.py:1161] Trace already enabled".
Why is the trace already enabled? How can I solve this problem?
I tried to solve it with a new log directory (I thought that would renew the trace), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
I was facing the same issue, and even customizing the log\_dir option using datetime didn't work. This page helped me: <https://github.com/tensorflow/tensorboard/issues/2819>. I just added 'profile\_batch = 100000000' to this callback as:
TensorBoard(log\_dir=log\_dir, .., profile\_batch = 100000000)
|
The proposed suggestions to set 'profile\_batch = 100000000' work. But the original point was to enable profiling (which is essentially disabled when you set it to 100000000).
What made it work for me with any profile\_batch was installing the 2019 Visual C++ redistributable from this link (I'm on Windows 10):
<https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads>
You will need to restart your machine for it to take effect.
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got a ProfilerNotRunningError with the message "summary\_ops\_v2.py:1161] Trace already enabled".
Why is the trace already enabled? How can I solve this problem?
I tried to solve it with a new log directory (I thought that would renew the trace), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
The proposed suggestions to set 'profile\_batch = 100000000' work. But the original point was to enable profiling (which is essentially disabled when you set it to 100000000).
What made it work for me with any profile\_batch was installing the 2019 Visual C++ redistributable from this link (I'm on Windows 10):
<https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads>
You will need to restart your machine for it to take effect.
|
I faced this issue because of mixed slashes in the TensorBoard logdir (on Windows).
This happened when I used:
```
logdir_path = os.path.join(r'D:something\models', 'model1')
```
Solved using:
```
logdir_path = os.path.normpath(os.path.join(r'D:something\models', 'model1'))
callbacks = [tf.keras.callbacks.TensorBoard(log_dir=logdir_path)]
```
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got a ProfilerNotRunningError with the message "summary\_ops\_v2.py:1161] Trace already enabled".
Why is the trace already enabled? How can I solve this problem?
I tried to solve it with a new log directory (I thought that would renew the trace), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
```
callbacks = [tensorflow.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1, profile_batch = 100000000)]
```
Simply adding the profile batch solved my problem.
The problem mostly occurs in `Tensorflow V2.1` and on `Windows`.
|
[](https://i.stack.imgur.com/0wRMl.png)
Add this to your code
```
callbacks = [tensorflow.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1, profile_batch = 100000000)]
```
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got a ProfilerNotRunningError with the message "summary\_ops\_v2.py:1161] Trace already enabled".
Why is the trace already enabled? How can I solve this problem?
I tried to solve it with a new log directory (I thought that would renew the trace), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
I assume you are running this code on Windows, and I think you might have run into this issue: <https://github.com/tensorflow/tensorboard/issues/2279#issuecomment-512089344>
Just use backslashes as the path delimiter:
```
log_dir="logs\\fit\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
```
|
[](https://i.stack.imgur.com/0wRMl.png)
Add this to your code
```
callbacks = [tensorflow.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1, profile_batch = 100000000)]
```
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got a ProfilerNotRunningError with the message "summary\_ops\_v2.py:1161] Trace already enabled".
Why is the trace already enabled? How can I solve this problem?
I tried to solve it with a new log directory (I thought that would renew the trace), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
I assume you run this code on Windows, and I think you might have run into this issue: <https://github.com/tensorflow/tensorboard/issues/2279#issuecomment-512089344>
Just use backslashes as path delimiter:
```
log_dir="logs\\fit\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
```
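As a cross-platform sketch (not part of the original answer), `pathlib.PureWindowsPath` builds the same log path with Windows separators regardless of how the segments are written, so no backslashes need to be hand-escaped:

```python
# Hypothetical sketch: building a Windows-style log path without hand-written backslashes.
# PureWindowsPath always renders with "\" separators, even when segments use "/".
import datetime
from pathlib import PureWindowsPath

log_dir = str(PureWindowsPath("logs") / "fit"
              / datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
# log_dir looks like 'logs\\fit\\20190619-170210'
```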
|
Proposed suggestions to set `profile_batch = 100000000` work, but the original point was to enable profiling (which is essentially disabled when you set it to 100000000).
What made it work for me with any `profile_batch` value was installing the 2019 C++ redistributable from this link (I'm on Windows 10):
<https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads>
You will need to restart your machine for it to take effect.
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got ProfilerNotRunningError with this message "summary\_ops\_v2.py:1161] Trace already enabled".
Why is the trace already enabled? How can I solve the problem?
I tried to solve it with new log directories (I thought that would make the trace be renewed), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
```
callbacks = [tensorflow.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1, profile_batch = 100000000)]
```
Simply adding the profile batch solved my problem.
The problem mainly occurs in `Tensorflow V2.1` and on `Windows`.
|
I faced this issue because of mixed slashes in the TensorBoard logdir (on Windows).
This happened when I used:
```
logdir_path = os.path.join(r'D:something\models', 'model1')
```
Solved using:
```
logdir_path = os.path.normpath(os.path.join(r'D:something\models', 'model1'))
callbacks = [tf.keras.callbacks.TensorBoard(log_dir=logdir_path)]
```
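To illustrate what `normpath` does with mixed separators, here is a small sketch (an illustration, not from the original answer) using `ntpath`, the Windows flavor of `os.path`, so the behavior can be shown on any OS:

```python
# Hypothetical sketch: ntpath.normpath collapses mixed "/" and "\" separators
# into consistent backslashes, which is what fixes the TensorBoard logdir on Windows.
import ntpath

mixed = "D:\\something\\models/model1"   # mixed separators, as in the answer above
print(ntpath.normpath(mixed))            # D:\something\models\model1
```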
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got ProfilerNotRunningError with this message "summary\_ops\_v2.py:1161] Trace already enabled".
Why is the trace already enabled? How can I solve the problem?
I tried to solve it with new log directories (I thought that would make the trace be renewed), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
I was facing the same issue, and even customizing the `log_dir` option using datetime didn't work. Check this page: <https://github.com/tensorflow/tensorboard/issues/2819>, which helped me. I just added `profile_batch = 100000000` to the callback:
TensorBoard(log\_dir=log\_dir, .., profile\_batch = 100000000)
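A minimal sketch of the callback described above (the TensorFlow call itself is left commented out so the keyword arguments can be shown on their own; `profile_batch=100000000` simply points the profiler at a batch index that never occurs, effectively disabling profiling):

```python
import datetime

# Keyword arguments for tf.keras.callbacks.TensorBoard as described in the answer.
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_kwargs = dict(log_dir=log_dir, histogram_freq=1, profile_batch=100000000)
# callback = tf.keras.callbacks.TensorBoard(**tensorboard_kwargs)  # requires TensorFlow
```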
|
[](https://i.stack.imgur.com/0wRMl.png)
Add this to your code
```
callbacks = [tensorflow.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1, profile_batch = 100000000)]
```
|
56,663,388
|
I'm trying to learn and use tensorboard and followed [these guideline codes](https://www.tensorflow.org/tensorboard/r2/get_started) with a few modifications.
When I run the code
```
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
, I got ProfilerNotRunningError with this message "summary\_ops\_v2.py:1161] Trace already enabled".
Why is the trace already enabled? How can I solve the problem?
I tried to solve it with new log directories (I thought that would make the trace be renewed), but it happened again.
```py
import tensorflow as tf
import datetime
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_summary_writer = tf.summary.create_file_writer(log_dir)
tensorboard_callback = [tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)]
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
```
* error -->
[enter image description here](https://i.stack.imgur.com/vE6hb.jpg)
```
Epoch 1/5
W0619 17:02:10.383985 15544 summary_ops_v2.py:1161] Trace already enabled
32/60000 [..............................] - ETA: 15:05 - loss: 2.3275 - accuracy: 0.0625
---------------------------------------------------------------------------
ProfilerNotRunningError Traceback (most recent call last)
<ipython-input-23-0c608b0df5ad> in <module>
3 epochs=5,
4 validation_data=(x_test, y_test),
----> 5 callbacks=[tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
641 max_queue_size=max_queue_size,
642 workers=workers,
--> 643 use_multiprocessing=use_multiprocessing)
644
645 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
662 validation_steps=validation_steps,
663 validation_freq=validation_freq,
--> 664 steps_name='steps_per_epoch')
665
666 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
392 # Callbacks batch end.
393 batch_logs = cbks.make_logs(model, batch_logs, batch_outs, mode)
--> 394 callbacks._call_batch_hook(mode, 'end', batch_index, batch_logs)
395 progbar.on_batch_end(batch_index, batch_logs)
396
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
230 for callback in self.callbacks:
231 batch_hook = getattr(callback, hook_name)
--> 232 batch_hook(batch, logs)
233 self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
234
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_train_batch_end(self, batch, logs)
513 """
514 # For backwards compatibility.
--> 515 self.on_batch_end(batch, logs=logs)
516
517 def on_test_batch_begin(self, batch, logs=None):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in on_batch_end(self, batch, logs)
1600 self._total_batches_seen += 1
1601 if self._is_tracing:
-> 1602 self._log_trace()
1603 elif (not self._is_tracing and
1604 self._total_batches_seen == self._profile_batch - 1):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\keras\callbacks.py in _log_trace(self)
1634 name='batch_%d' % self._total_batches_seen,
1635 step=self._total_batches_seen,
-> 1636 profiler_outdir=os.path.join(self.log_dir, 'train'))
1637 self._is_tracing = False
1638
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py in trace_export(name, step, profiler_outdir)
1216
1217 if profiler:
-> 1218 _profiler.save(profiler_outdir, _profiler.stop())
1219
1220 trace_off()
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\profiler.py in stop()
101 if _profiler is None:
102 raise ProfilerNotRunningError(
--> 103 'Cannot stop profiling. No profiler is running.')
104 with c_api_util.tf_buffer() as buffer_:
105 pywrap_tensorflow.TFE_ProfilerSerializeToString(
ProfilerNotRunningError: Cannot stop profiling. No profiler is running.
```
|
2019/06/19
|
[
"https://Stackoverflow.com/questions/56663388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11668817/"
] |
I assume you run this code on Windows, and I think you might have run into this issue: <https://github.com/tensorflow/tensorboard/issues/2279#issuecomment-512089344>
Just use backslashes as path delimiter:
```
log_dir="logs\\fit\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
```
|
```
callbacks = [tensorflow.keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1, profile_batch = 100000000)]
```
Simply adding the profile batch solved my problem.
The problem mainly occurs in `Tensorflow V2.1` and on `Windows`.
|
58,320,969
|
I’m working on the script below on my MacBook Air, and I’m unclear where this syntax error comes from; I tried searching on Google for why it breaks at the `=` sign in the print function.
I understand there are different ways to call print and have tried many of them, but I'm unclear whether I’m using the correct Python version (both 2 and 3 are installed).
Can you please help out?
I get an error in line `61`:
```
print("The interest rate is too high to trade {}".format(total_profit) , end="\n", file=output_file)
```
**Script:**
```
## Initial values from the Python script
principal=1000
coupon=0.06
frequency=2
r=0.03
transaction_fee = 0.003*principal
## Amendments to the variables as per question 7
probabilitypayingout=0.85
probabolilitynotpayingout=0.15
notpayingoutfullamount=200
maturity=7
market_price=1070
avoidtradingaboveinterestrate=0.02
#!/usr/bin/env python3.7
import numpy as np
# Open a file to store output
output_file = open("outputfile.txt", "w")
# print variables of this case
print("The variables used for this calculation are: \n - Probability of paying out the full principal {}.".format(probabilitypayingout), "\n - Probability of paying out partial principal {}.".format(probabolilitynotpayingout), "\n - Amount in case of paying out partial principal {}.".format(notpayingoutfullamount), "\n - Market price bond {}.".format(market_price), "\n - Bond maturity in years {}.".format(maturity), "\n - Coupon rate bond {}.".format(coupon), "\n - Principal bond {}.".format(principal), "\n - Frequency coupon bond {}.".format(frequency) , "\n - Risk free rate {}.".format(r) , "\n - Avoid trading aboe interest rate {}.".format(avoidtradingaboveinterestrate), "\n \n" )
# calculate true value and decide whether to trade
true_price=0
principalpayout=(probabilitypayingout*principal)+(probabolilitynotpayingout*notpayingoutfullamount)
for t in range(1,maturity*frequency+1):
if t<(maturity*frequency):
true_price = true_price + (coupon/frequency)*principal/(1+(r/frequency))**t # Present value of coupon payments
else:
true_price = true_price + (coupon/frequency)*principal/(1+(r/frequency))**t + principalpayout/(1+(r/frequency))**t # Present value of coupons and principal
print("The price of the bond according to the pricing model is {}, while the current market price is {}.".format(true_price, market_price))
if true_price-transaction_fee>market_price:
profit = true_price-transaction_fee-market_price
print("The trade is executed and if the pricing model is correct, the profit will be {}".format(profit), "after deduction of trading fees.")
else:
print("The trade was not executed, because the expected profit after transaction fees is negative.")
# Fifth, mimic changes in market conditions by adjusting the interest rate and market price. The indented code below the "for" line is repeated 1,000 times.
total_profit=0
for n in range(0,1000):
# Adds some random noise to the interest rate and market price, so each changes slightly (each time code is executed, values will differ because they are random)
change_r=np.random.normal(0,0.015)
change_market_price=np.random.normal(0,40)
r_new = r + change_r
market_price_new = market_price + change_market_price
# Sixth, execute trading algorithm using new values
true_price_new=0
if r_new>avoidtradingaboveinterestrate:
print("The interest rate is too high to trade {}".format(total_profit) , end="\n", file=output_file)
output_file.close()
else:
for t in range(1,maturity*frequency+1):
if t<(maturity*frequency):
true_price_new = true_price_new + (coupon/frequency)*principal/(1+(r_new/frequency))**t
else:
true_price_new = true_price_new + (coupon/frequency)*principal/(1+(r_new/frequency))**t + principalpayout/(1+(r_new/frequency))**t
if true_price_new-transaction_fee>market_price_new:
trading_profit = true_price_new-transaction_fee-market_price_new
total_profit = total_profit + trading_profit
print("The trade was executed and is expected to yield a profit of {}. The total profit from trading is {}.".format(trading_profit,total_profit), end="\n", file=output_file )
print("The total profit from trading is {}".format(total_profit), end="\n", file=output_file)
output_file.close()
```
|
2019/10/10
|
[
"https://Stackoverflow.com/questions/58320969",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12194431/"
] |
There are a few issues to fix here.
Firstly don't use `id` attributes in content which can be dynamically appended multiple times. You'll end up with duplicate ids which is invalid and will cause issues in your JS. Use common `class` attributes instead.
Secondly you can attach an event handler to all of the elements with that common class to remove the row. You can use DOM traversal methods to find the single `tr` related to the clicked button instead of targeting it by `id`.
Finally you will need to use a delegated event handler as the button is dynamically created, and is not present in the DOM when the page loads. With all that said, try this:
```js
$(document).ready(function() {
$("#btnAdd").click(function() {
    $("table").append('<tr><td class="col-md-8"><input type="list" class="form-control"></td><td><button type="button" class="btn btn-info btn-rounded btn-sm my-0">Omhoog</button></td><td><button type="button" class="btn btn-info btn-rounded btn-sm my-0">Omlaag</button></td><td><button type="button" class="btn btn-danger btn-rounded btn-sm my-0 delete">Verwijder</button></td></tr>'
);
});
$('table').on('click', '.delete', function() {
$(this).closest('tr').remove();
});
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<button id="btnAdd">Add</button>
<table></table>
```
Note that I added the `delete` class to the 'Verwijder' button.
|
You need event delegation for this. Also it's not ideal to use an id with a fixed number for each of your operations. If your table has 1000 rows, you don't want to copy-paste your function 1000 times.
I created an example based on classes:
```js
$(document).ready(function() {
$(".btn-add").click(function() {
$("table#myTable").append(
"<tr id='tr2'><td class='col-md-8'><input type='list' class='form-control'></input></td><td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td><td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td><td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td></tr>"
);
});
$("#myTable").on('click', '.btn-remove', function() {
$(this).closest('tr').remove();
});
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<button class="btn-add">Add a row </button>
<table id="myTable">
<tr>
<td class='col-md-8'><input type='list' class='form-control'></td>
<td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td>
<td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td>
<td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td>
</tr>
</table>
I don't have the id myTable, so nothing happens here
<table>
<tr>
<td class='col-md-8'><input type='list' class='form-control'></td>
<td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td>
<td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td>
<td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td>
</tr>
</table>
```
|
58,320,969
|
I’m working on the script below on my MacBook Air, and I’m unclear where this syntax error comes from; I tried searching on Google for why it breaks at the `=` sign in the print function.
I understand there are different ways to call print and have tried many of them, but I'm unclear whether I’m using the correct Python version (both 2 and 3 are installed).
Can you please help out?
I get an error in line `61`:
```
print("The interest rate is too high to trade {}".format(total_profit) , end="\n", file=output_file)
```
**Script:**
```
## Initial values from the Python script
principal=1000
coupon=0.06
frequency=2
r=0.03
transaction_fee = 0.003*principal
## Amendments to the variables as per question 7
probabilitypayingout=0.85
probabolilitynotpayingout=0.15
notpayingoutfullamount=200
maturity=7
market_price=1070
avoidtradingaboveinterestrate=0.02
#!/usr/bin/env python3.7
import numpy as np
# Open a file to store output
output_file = open("outputfile.txt", "w")
# print variables of this case
print("The variables used for this calculation are: \n - Probability of paying out the full principal {}.".format(probabilitypayingout), "\n - Probability of paying out partial principal {}.".format(probabolilitynotpayingout), "\n - Amount in case of paying out partial principal {}.".format(notpayingoutfullamount), "\n - Market price bond {}.".format(market_price), "\n - Bond maturity in years {}.".format(maturity), "\n - Coupon rate bond {}.".format(coupon), "\n - Principal bond {}.".format(principal), "\n - Frequency coupon bond {}.".format(frequency) , "\n - Risk free rate {}.".format(r) , "\n - Avoid trading aboe interest rate {}.".format(avoidtradingaboveinterestrate), "\n \n" )
# calculate true value and decide whether to trade
true_price=0
principalpayout=(probabilitypayingout*principal)+(probabolilitynotpayingout*notpayingoutfullamount)
for t in range(1,maturity*frequency+1):
if t<(maturity*frequency):
true_price = true_price + (coupon/frequency)*principal/(1+(r/frequency))**t # Present value of coupon payments
else:
true_price = true_price + (coupon/frequency)*principal/(1+(r/frequency))**t + principalpayout/(1+(r/frequency))**t # Present value of coupons and principal
print("The price of the bond according to the pricing model is {}, while the current market price is {}.".format(true_price, market_price))
if true_price-transaction_fee>market_price:
profit = true_price-transaction_fee-market_price
print("The trade is executed and if the pricing model is correct, the profit will be {}".format(profit), "after deduction of trading fees.")
else:
print("The trade was not executed, because the expected profit after transaction fees is negative.")
# Fifth, mimic changes in market conditions by adjusting the interest rate and market price. The indented code below the "for" line is repeated 1,000 times.
total_profit=0
for n in range(0,1000):
# Adds some random noise to the interest rate and market price, so each changes slightly (each time code is executed, values will differ because they are random)
change_r=np.random.normal(0,0.015)
change_market_price=np.random.normal(0,40)
r_new = r + change_r
market_price_new = market_price + change_market_price
# Sixth, execute trading algorithm using new values
true_price_new=0
if r_new>avoidtradingaboveinterestrate:
print("The interest rate is too high to trade {}".format(total_profit) , end="\n", file=output_file)
output_file.close()
else:
for t in range(1,maturity*frequency+1):
if t<(maturity*frequency):
true_price_new = true_price_new + (coupon/frequency)*principal/(1+(r_new/frequency))**t
else:
true_price_new = true_price_new + (coupon/frequency)*principal/(1+(r_new/frequency))**t + principalpayout/(1+(r_new/frequency))**t
if true_price_new-transaction_fee>market_price_new:
trading_profit = true_price_new-transaction_fee-market_price_new
total_profit = total_profit + trading_profit
print("The trade was executed and is expected to yield a profit of {}. The total profit from trading is {}.".format(trading_profit,total_profit), end="\n", file=output_file )
print("The total profit from trading is {}".format(total_profit), end="\n", file=output_file)
output_file.close()
```
|
2019/10/10
|
[
"https://Stackoverflow.com/questions/58320969",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12194431/"
] |
I have written code which will add a `tr` at the end (by clicking the Add button) and remove the last added row (by clicking the Remove button).
```
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
$(document).ready(function() {
var rowCount = $('#myTable tr').length;
$("table#myTable").append(
"<tr id='#tr"+(rowCount+1)+"'><td class='col-md-8'><input type='list' class='form-control'></input></td>/tr>"
);
$("#btnAdd").click(function() {
rowCount++;
$("table#myTable").append(
      "<tr id='#tr"+(rowCount+1)+"'><td class='col-md-8'><input type='list' class='form-control'></input></td></tr>");
});
$("#btnRemove").click(function() {
$('#myTable tr:last').remove();
});
});
</script>
```
```
<body>
<button id="btnAdd">Add</button>
<button id="btnRemove">Remove</button>
<table id="myTable"></table>
</body>
```
|
You need event delegation for this. Also it's not ideal to use an id with a fixed number for each of your operations. If your table has 1000 rows, you don't want to copy-paste your function 1000 times.
I created an example based on classes:
```js
$(document).ready(function() {
$(".btn-add").click(function() {
$("table#myTable").append(
"<tr id='tr2'><td class='col-md-8'><input type='list' class='form-control'></input></td><td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td><td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td><td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td></tr>"
);
});
$("#myTable").on('click', '.btn-remove', function() {
$(this).closest('tr').remove();
});
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<button class="btn-add">Add a row </button>
<table id="myTable">
<tr>
<td class='col-md-8'><input type='list' class='form-control'></td>
<td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td>
<td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td>
<td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td>
</tr>
</table>
I don't have the id myTable, so nothing happens here
<table>
<tr>
<td class='col-md-8'><input type='list' class='form-control'></td>
<td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omhoog</button></td>
<td><button type='button' class='btn btn-info btn-rounded btn-sm my-0'>Omlaag</button></td>
<td><button type='button' class='btn btn-danger btn-rounded btn-sm my-0 btn-remove'>Verwijder</button></td>
</tr>
</table>
```
|
14,109,915
|
I am currently playing around with an example from the book Violent Python. You can see my implementation [here](https://github.com/igniteflow/violent-python/blob/master/pwd-crackers/unix-pwd-crack.py)
I am now trying to implement the same script in Go to compare performance; note I am completely new to Go. Opening the file and iterating over the lines is fine, however I cannot figure out how to use the "crypto" library to hash the string in the same way as Python's crypt.crypt(str\_to\_hash, salt). I thought it might be something like
```
import "crypto/des"
des.NewCipher([]byte("abcdefgh"))
```
However, no cigar. Any help would be much appreciated as it'd be really interesting to compare Go's parallel performance to Python's multithreaded.
Edit:
[Python docs for crypt.crypt](http://docs.python.org/2/library/crypt.html)
|
2013/01/01
|
[
"https://Stackoverflow.com/questions/14109915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343223/"
] |
`crypt` is very easy to wrap with cgo, eg
```
package main
import (
"fmt"
"unsafe"
)
// #cgo LDFLAGS: -lcrypt
// #define _GNU_SOURCE
// #include <crypt.h>
// #include <stdlib.h>
import "C"
// crypt wraps C library crypt_r
func crypt(key, salt string) string {
data := C.struct_crypt_data{}
ckey := C.CString(key)
csalt := C.CString(salt)
out := C.GoString(C.crypt_r(ckey, csalt, &data))
C.free(unsafe.Pointer(ckey))
C.free(unsafe.Pointer(csalt))
return out
}
func main() {
fmt.Println(crypt("abcdefg", "aa"))
}
```
Which produces this when run
```
aaTcvO819w3js
```
Which is identical to python `crypt.crypt`
```
>>> from crypt import crypt
>>> crypt("abcdefg","aa")
'aaTcvO819w3js'
>>>
```
(Updated to free the CStrings - thanks @james-henstridge)
|
E.g.
```
package main
import (
"crypto/des"
"fmt"
"log"
)
func main() {
b, err := des.NewCipher([]byte("abcdefgh"))
if err != nil {
log.Fatal(err)
}
msg := []byte("Hello!?!")
fmt.Printf("% 02x: %q\n", msg, msg)
b.Encrypt(msg, msg)
fmt.Printf("% 02x: %q\n", msg, msg)
b.Decrypt(msg, msg)
fmt.Printf("% 02x: %q\n", msg, msg)
}
```
(Also: <http://play.golang.org/p/czYDRjtWNR>)
---
Output:
```
48 65 6c 6c 6f 21 3f 21: "Hello!?!"
3e 41 67 99 2d 9a 72 b9: ">Ag\x99-\x9ar\xb9"
48 65 6c 6c 6f 21 3f 21: "Hello!?!"
```
|
14,109,915
|
I am currently playing around with an example from the book Violent Python. You can see my implementation [here](https://github.com/igniteflow/violent-python/blob/master/pwd-crackers/unix-pwd-crack.py)
I am now trying to implement the same script in Go to compare performance; note I am completely new to Go. Opening the file and iterating over the lines is fine, however I cannot figure out how to use the "crypto" library to hash the string in the same way as Python's crypt.crypt(str\_to\_hash, salt). I thought it might be something like
```
import "crypto/des"
des.NewCipher([]byte("abcdefgh"))
```
However, no cigar. Any help would be much appreciated as it'd be really interesting to compare Go's parallel performance to Python's multithreaded.
Edit:
[Python docs for crypt.crypt](http://docs.python.org/2/library/crypt.html)
|
2013/01/01
|
[
"https://Stackoverflow.com/questions/14109915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343223/"
] |
`crypt` is very easy to wrap with cgo, eg
```
package main
import (
"fmt"
"unsafe"
)
// #cgo LDFLAGS: -lcrypt
// #define _GNU_SOURCE
// #include <crypt.h>
// #include <stdlib.h>
import "C"
// crypt wraps C library crypt_r
func crypt(key, salt string) string {
data := C.struct_crypt_data{}
ckey := C.CString(key)
csalt := C.CString(salt)
out := C.GoString(C.crypt_r(ckey, csalt, &data))
C.free(unsafe.Pointer(ckey))
C.free(unsafe.Pointer(csalt))
return out
}
func main() {
fmt.Println(crypt("abcdefg", "aa"))
}
```
Which produces this when run
```
aaTcvO819w3js
```
Which is identical to python `crypt.crypt`
```
>>> from crypt import crypt
>>> crypt("abcdefg","aa")
'aaTcvO819w3js'
>>>
```
(Updated to free the CStrings - thanks @james-henstridge)
|
I believe there isn't currently any publicly available package for Go which implements the old-fashioned Unix "salted" DES based `crypt()` functionality. This is different from the normal symmetrical DES encryption/decryption which is implemented in the `"crypto/des"` package (as you have discovered).
You would have to implement it on your own. There are plenty of existing implementations in different languages (mostly C), for example in [FreeBSD sources](http://svnweb.freebsd.org/base/head/secure/lib/libcrypt/crypt-des.c?view=markup) or in [glibc](https://www.gnu.org/software/libc/). If you implement it in Go, please publish it. :)
For new projects it is much better to use some stronger password hashing algorithm, such as [bcrypt](https://en.wikipedia.org/wiki/Bcrypt). A good implementation is available in the [go.crypto](https://code.google.com/p/go.crypto/) repository. The documentation is available [here](http://godoc.org/code.google.com/p/go.crypto/bcrypt). Unfortunately this does not help if you need to work with pre-existing legacy password hashes.
*Edited to add*: I had a look at Python's `crypt.crypt()` implementation and found out that it is just a wrapper around the libc implementation. It would be simple to implement the same wrapper for Go. However your idea of comparing a Python implementation to a Go implementation is already ruined: you would have to implement **both** of them yourself to make any meaningful comparisons.
|
14,109,915
|
I am currently playing around with an example from the book Violent Python. You can see my implementation [here](https://github.com/igniteflow/violent-python/blob/master/pwd-crackers/unix-pwd-crack.py)
I am now trying to implement the same script in Go to compare performance; note I am completely new to Go. Opening the file and iterating over the lines is fine, however I cannot figure out how to use the "crypto" library to hash the string in the same way as Python's crypt.crypt(str\_to\_hash, salt). I thought it might be something like
```
import "crypto/des"
des.NewCipher([]byte("abcdefgh"))
```
However, no cigar. Any help would be much appreciated as it'd be really interesting to compare Go's parallel performance to Python's multithreaded.
Edit:
[Python docs for crypt.crypt](http://docs.python.org/2/library/crypt.html)
|
2013/01/01
|
[
"https://Stackoverflow.com/questions/14109915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/343223/"
] |
`crypt` is very easy to wrap with cgo, eg
```
package main
import (
"fmt"
"unsafe"
)
// #cgo LDFLAGS: -lcrypt
// #define _GNU_SOURCE
// #include <crypt.h>
// #include <stdlib.h>
import "C"
// crypt wraps C library crypt_r
func crypt(key, salt string) string {
data := C.struct_crypt_data{}
ckey := C.CString(key)
csalt := C.CString(salt)
out := C.GoString(C.crypt_r(ckey, csalt, &data))
C.free(unsafe.Pointer(ckey))
C.free(unsafe.Pointer(csalt))
return out
}
func main() {
fmt.Println(crypt("abcdefg", "aa"))
}
```
Which produces this when run
```
aaTcvO819w3js
```
Which is identical to python `crypt.crypt`
```
>>> from crypt import crypt
>>> crypt("abcdefg","aa")
'aaTcvO819w3js'
>>>
```
(Updated to free the CStrings - thanks @james-henstridge)
|
Good news! There's actually an open source implementation of what you're looking for. [Osutil](https://github.com/kless/osutil "Osutil") has a crypt package that reimplements `crypt` in pure Go.
<https://github.com/kless/osutil/tree/master/user/crypt>
|
21,892,080
|
I'm using Python's [Watchdog](http://pythonhosted.org/watchdog/) to monitor a given directory for new files being created. When a file is created, some code runs that spawns a subprocess shell command to run different code to process this file. This should run for every new file that is created. I've tested this out when one file is created, and things work great, but am having trouble getting it working when multiple files are created, either at the same time, or one after another.
My current problem is this... the processing code run in the shell takes a while to run and will not finish before a new file is created in the directory. There's nothing I can do about that. While this code is running, watchdog will not recognize that a new file has been created, and will not proceed with the code.
So I think I need to spawn a new process for each new file, or do something get things to run concurrently, and not wait until one file is done before processing the next one.
So my questions are:
1.) In reality I will have 4 files, in different series, created at the same time, in one directory. What's the best way get watchdog to run the code on file creation for all 4 files at once?
2.) When the code is running for one file, how do I get watchdog to begin processing the next file in the same series without waiting until processing for the previous file has completed. This is necessary because the files are particular and I need to pause the processing of one file until another file is finished, but the order in which they are created may vary.
Do I need to combine my watchdog with multiprocessing or threading somehow? Or do I need to implement multiple observers? I'm kind of at a loss. Thanks for any help.
```
class MonitorFiles(FileSystemEventHandler):
'''Sub-class of watchdog event handler'''
def __init__(self, config=None, log=None):
self.log = log
self.config = config
def on_created(self, event):
file = os.path.basename(event.src_path)
self.log.info('Created file {0}'.format(event.src_path))
dosWatch.go(event.src_path, self.config, self.log)
def on_modified(self, event):
file = os.path.basename(event.src_path)
ext = os.path.splitext(file)[1]
if ext == '.fits':
self.log.warning('Modifying a FITS file is not allowed')
return
def on_deleted(self, event):
self.log.critical('Nothing should ever be deleted from here!')
return
```
### Main Monitoring
```
def monitor(config, log):
'''Uses the Watchdog package to monitor the data directory for new files.
See the MonitorFiles class in dosClasses for actual monitoring code'''
event_handler = dosclass.MonitorFiles(config, log)
# add logging the the event handler
log_handler = LoggingEventHandler()
# set up observer
observer = Observer()
observer.schedule(event_handler, path=config.fitsDir, recursive=False)
observer.schedule(log_handler, config.fitsDir, recursive=False)
observer.start()
log.info('Begin MaNGA DOS!')
log.info('Start watching directory {0} for new files ...'.format(config.fitsDir))
# monitor
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.unschedule_all()
observer.stop()
log.info('Stop watching directory ...')
log.info('End MaNGA DOS!')
log.info('--------------------------')
log.info('')
observer.join()
```
In the above, my monitor method sets up watchdog to monitor the main directory. The MonitorFiles class defines what happens when a file is created. It basically calls this dosWatch.go method which eventually calls a subprocess.Popen to run a shell command.
|
2014/02/19
|
[
"https://Stackoverflow.com/questions/21892080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3329870/"
] |
Here's what I ended up doing, which solved my problem. I used multiprocessing to start a separate watchdog monitoring process to watch for each file separately. Watchdog already queues up new files for me, which is fine for me.
As for point 2 above, I needed, e.g., file2 to be processed before file1, even though file1 was created first. So while handling file1 I check for the output of the file2 processing. If it is found, processing of file1 goes ahead; if not, it exits. While handling file2, I check whether file1 was already created, and if so, I process file1. (Code for this is not shown.)
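The ordering check described above is not shown; a rough, hypothetical sketch of the idea (the helper name and the notion of testing for the other file's output artifact are assumptions, not the actual project code):

```python
import os

def ready_to_process(required_output):
    """Return True once the companion file's processing output exists.

    Hypothetical helper: `required_output` is whatever artifact the other
    file's pipeline writes when it finishes (an assumption here).
    """
    return os.path.exists(required_output)

# e.g. before processing file1, require that file2's output is on disk;
# if it is not there yet, exit and let a later watchdog event retry.
```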
### Main Monitoring of Cameras
```
def monitorCam(camera, config, mainlog):
'''Uses the Watchdog package to monitor the data directory for new files.
See the MonitorFiles class in dosClasses for actual monitoring code. Monitors each camera.'''
mainlog.info('Process Name, PID: {0},{1}'.format(mp.current_process().name,mp.current_process().pid))
#init cam log
camlog = initLogger(config, filename='manga_dos_{0}'.format(camera))
camlog.info('Camera {0}, PID {1} '.format(camera,mp.current_process().pid))
config.camera=camera
event_handler = dosclass.MonitorFiles(config, camlog, mainlog)
    # add logging to the event handler
log_handler = LoggingEventHandler()
# set up observer
observer = Observer()
observer.schedule(event_handler, path=config.fitsDir, recursive=False)
observer.schedule(log_handler, config.fitsDir, recursive=False)
observer.daemon=True
observer.start()
camlog.info('Begin MaNGA DOS!')
camlog.info('Start watching directory {0} for new files ...'.format(config.fitsDir))
camlog.info('Watching directory {0} for new files from camera {1}'.format(config.fitsDir,camera))
# monitor
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.unschedule_all()
observer.stop()
camlog.info('Stop watching directory ...')
camlog.info('End MaNGA DOS!')
camlog.info('--------------------------')
camlog.info('')
#observer.join()
if observer.is_alive():
camlog.info('still alive')
else:
camlog.info('thread ending')
```
### Start of Multiple Camera Processes
```
def startProcess(camera,config,log):
''' Uses multiprocessing module to start 4 different camera monitoring processes'''
jobs=[]
#pdb.set_trace()
#log.info(mp.log_to_stderr(logging.DEBUG))
for i in range(len(camera)):
log.info('Starting to monitor camera {0}'.format(camera[i]))
print 'Starting to monitor camera {0}'.format(camera[i])
try:
p = mp.Process(target=monitorCam, args=(camera[i],config, log), name=camera[i])
p.daemon=True
jobs.append(p)
p.start()
except KeyboardInterrupt:
log.info('Ending process: {0} for camera {1}'.format(mp.current_process().pid, camera[i]))
p.terminate()
log.info('Terminated: {0}, {1}'.format(p,p.is_alive()))
for i in range(len(jobs)):
jobs[i].join()
return
```
|
I'm not sure it would make much sense to do a thread per file. The [GIL](https://wiki.python.org/moin/GlobalInterpreterLock) will probably eliminate any advantage you'd see from doing that and might even impact performance pretty badly and lead to some unexpected behavior. I haven't personally found `watchdog` to be very reliable. You might consider implementing your own file watcher which can be done fairly easily as in the django framework (see [here](https://github.com/django/django/blob/master/django/utils/autoreload.py)) by creating a dict with the modified timestamp for each file.
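A minimal sketch of that timestamp-dict approach (the function name and return convention are my own assumptions):

```python
import os
import time  # a caller would sleep between scans

def poll_changes(path, mtimes):
    """Return files under `path` that are new or modified since the last call.

    `mtimes` maps file path -> last seen st_mtime and is updated in place,
    playing the role of the modified-timestamp dict mentioned above.
    """
    changed = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        try:
            mtime = os.stat(full).st_mtime
        except OSError:
            continue  # file disappeared between listdir() and stat()
        if mtimes.get(full) != mtime:
            mtimes[full] = mtime
            changed.append(full)
    return changed

# A caller would loop: dispatch each changed file, time.sleep(1), repeat.
```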
|
25,663,543
|
I'm trying to run celery worker in OS X (Mavericks). I activated virtual environment (python 3.4) and tried to start Celery with this argument:
```
celery worker --app=scheduling -linfo
```
Where `scheduling` is my celery app.
But I ended up with this error: `dbm.error: db type is dbm.gnu, but the module is not available`
Complete stacktrace:
```
Traceback (most recent call last):
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/kombu/utils/__init__.py", line 320, in __get__
return obj.__dict__[self.__name__]
KeyError: 'db'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/other/PhoenixEnv/bin/celery", line 9, in <module>
load_entry_point('celery==3.1.9', 'console_scripts', 'celery')()
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/__main__.py", line 30, in main
main()
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/celery.py", line 80, in main
cmd.execute_from_commandline(argv)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/celery.py", line 768, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/base.py", line 308, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/celery.py", line 760, in handle_argv
return self.execute(command, argv)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/celery.py", line 692, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/worker.py", line 175, in run_from_argv
return self(*args, **options)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/base.py", line 271, in __call__
ret = self.run(*args, **kwargs)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bin/worker.py", line 209, in run
).start()
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/__init__.py", line 100, in __init__
self.setup_instance(**self.prepare_args(**kwargs))
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/__init__.py", line 141, in setup_instance
self.blueprint.apply(self, **kwargs)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bootsteps.py", line 221, in apply
step.include(parent)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bootsteps.py", line 347, in include
return self._should_include(parent)[0]
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/bootsteps.py", line 343, in _should_include
return True, self.create(parent)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/components.py", line 220, in create
w._persistence = w.state.Persistent(w.state, w.state_db, w.app.clock)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/state.py", line 161, in __init__
self.merge()
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/state.py", line 169, in merge
self._merge_with(self.db)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/kombu/utils/__init__.py", line 322, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/state.py", line 238, in db
return self.open()
File "/Users/other/PhoenixEnv/lib/python3.4/site-packages/celery/worker/state.py", line 165, in open
self.filename, protocol=self.protocol, writeback=True,
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/shelve.py", line 223, in __init__
Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/dbm/__init__.py", line 91, in open
"available".format(result))
dbm.error: db type is dbm.gnu, but the module is not available
```
Please help.
|
2014/09/04
|
[
"https://Stackoverflow.com/questions/25663543",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/493329/"
] |
try this :-
```
var value=$(this).attr("href");
```
[Demo](http://jsfiddle.net/rmbvo9oh/2/)
|
In JavaScript, when you bind an event to an element, the `this` variable is being available (of course depending on the element and event). So, when you click an element and bind a function, within that function scope the clicked element is referenced as `this`. Writing `$(this)` you make a jQuery object out of it:
```
$(document).ready(function(){
$("a").click(function(){
var value = $(this).attr("href");
alert("hello------> " + value);
});
});
```
|
11,301,863
|
I have a django FileField, which I use to store wav files on the Amazon S3 server. I have set up a Celery task to read that file, convert it to mp3, and store it in another FileField. The problem I am facing is that I am unable to pass the input file to ffmpeg, as the file is not a physical file on the hard disk. To work around that, I used stdin to feed the input stream of the file from Django's FileField. Here is the example:
```
output_file = NamedTemporaryFile(suffix='.mp3')
subprocess.call(['ffmpeg', '-y', '-i', '-', output_file.name], stdin=recording_wav)
```
where recording\_wav file is: , which is actually stored on the amazon s3 server.
The error for the above subprocess call is:
```
AttributeError: 'cStringIO.StringO' object has no attribute 'fileno'
```
How can I do this? Thanks in advance for the help.
**Edit:**
Full traceback:
```
[2012-07-03 04:09:50,336: ERROR/MainProcess] Task api.tasks.convert_audio[b7ab4192-2bff-4ea4-9421-b664c8d6ae2e] raised exception: AttributeError("'cStringIO.StringO' object has no attribute 'fileno'",)
Traceback (most recent call last):
File "/home/tejinder/envs/tmai/local/lib/python2.7/site-packages/celery/execute/trace.py", line 181, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/tejinder/projects/tmai/../tmai/apps/api/tasks.py", line 56, in convert_audio
subprocess.Popen(['ffmpeg', '-y', '-i', '-', output_file.name], stdin=recording_wav)
File "/usr/lib/python2.7/subprocess.py", line 672, in __init__
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
File "/usr/lib/python2.7/subprocess.py", line 1043, in _get_handles
p2cread = stdin.fileno()
File "/home/tejinder/envs/tmai/local/lib/python2.7/site-packages/django/core/files/utils.py", line 12, in <lambda>
fileno = property(lambda self: self.file.fileno)
File "/home/tejinder/envs/tmai/local/lib/python2.7/site-packages/django/core/files/utils.py", line 12, in <lambda>
fileno = property(lambda self: self.file.fileno)
AttributeError: 'cStringIO.StringO' object has no attribute 'fileno'
```
|
2012/07/02
|
[
"https://Stackoverflow.com/questions/11301863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/867365/"
] |
Use [`subprocess.Popen.communicate`](http://docs.python.org/library/subprocess.html?highlight=popen.communicate#subprocess.Popen.communicate) to pass the input to your subprocess:
```
command = ['ffmpeg', '-y', '-i', '-', output_file.name]
process = subprocess.Popen(command, stdin=subprocess.PIPE)
process.communicate(recording_wav)
```
For extra fun, you could use the ffmpeg's output to avoid your NamedTemporaryFile:
```
command = ['ffmpeg', '-y', '-i', '-', '-f', 'mp3', '-']
process = subprocess.Popen(command, stdin=subprocess.PIPE)
recording_mp3, errordata = process.communicate(recording_wav)
```
|
You need to create a pipe, pass the read end of the pipe to the subprocess, and dump the data into the write end.
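A rough sketch of that approach, using `cat` as a stand-in for the ffmpeg command so the example stays self-contained:

```python
import os
import subprocess

data = b"pretend this is the wav payload\n"

# Create a pipe; the child reads from one end, we write to the other.
read_fd, write_fd = os.pipe()
proc = subprocess.Popen(["cat"], stdin=read_fd, stdout=subprocess.PIPE)
os.close(read_fd)           # the child holds its own copy of the read end
os.write(write_fd, data)    # dump the in-memory data into the write end
os.close(write_fd)          # closing our end signals EOF to the child
out, _ = proc.communicate()
```

Caveat: for payloads larger than the OS pipe buffer (typically ~64 KB), a single blocking `os.write` can deadlock, so write from a separate thread or use `Popen.communicate` with `stdin=subprocess.PIPE` instead.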
|
211,046
|
What's a good way to generate an icon in-memory in python? Right now I'm forced to use pygame to draw the icon, then I save it to disk as an .ico file, and then I load it from disk as an ICO resource...
Something like this:
```
if os.path.isfile(self.icon):
icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE
hicon = win32gui.LoadImage(hinst,
self.icon,
win32con.IMAGE_ICON,
0,
0,
icon_flags)
```
...where self.icon is the filename of the icon I created.
Is there any way to do this in memory? EDIT: All I want to do is create an icon with a 2-digit number displayed on it (weather-taskbar style).
|
2008/10/17
|
[
"https://Stackoverflow.com/questions/211046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15055/"
] |
You can use [wxPython](http://wxpython.org/) for this.
```
from wx import EmptyIcon
icon = EmptyIcon()
icon.CopyFromBitmap(your_wxBitmap)
```
The [wxBitmap](http://docs.wxwidgets.org/stable/wx_wxbitmap.html#wxbitmap) can be generated in memory using [wxMemoryDC](http://docs.wxwidgets.org/stable/wx_wxmemorydc.html#wxmemorydc), look [here](http://docs.wxwidgets.org/stable/wx_wxdc.html) for operations you can do on a DC.
This icon can then be applied to a wxFrame (a window) or a wxTaskBarIcon using:
```
frame.SetIcon(icon)
```
|
You can probably create a object that mimics the python file-object interface.
<http://docs.python.org/library/stdtypes.html#bltin-file-objects>
|
211,046
|
What's a good way to generate an icon in-memory in python? Right now I'm forced to use pygame to draw the icon, then I save it to disk as an .ico file, and then I load it from disk as an ICO resource...
Something like this:
```
if os.path.isfile(self.icon):
icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE
hicon = win32gui.LoadImage(hinst,
self.icon,
win32con.IMAGE_ICON,
0,
0,
icon_flags)
```
...where self.icon is the filename of the icon I created.
Is there any way to do this in memory? EDIT: All I want to do is create an icon with a 2-digit number displayed on it (weather-taskbar style).
|
2008/10/17
|
[
"https://Stackoverflow.com/questions/211046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15055/"
] |
You can use [wxPython](http://wxpython.org/) for this.
```
from wx import EmptyIcon
icon = EmptyIcon()
icon.CopyFromBitmap(your_wxBitmap)
```
The [wxBitmap](http://docs.wxwidgets.org/stable/wx_wxbitmap.html#wxbitmap) can be generated in memory using [wxMemoryDC](http://docs.wxwidgets.org/stable/wx_wxmemorydc.html#wxmemorydc), look [here](http://docs.wxwidgets.org/stable/wx_wxdc.html) for operations you can do on a DC.
This icon can then be applied to a wxFrame (a window) or a wxTaskBarIcon using:
```
frame.SetIcon(icon)
```
|
This is working for me and doesn't require wx.
```
from ctypes import *
from ctypes.wintypes import *
CreateIconFromResourceEx = windll.user32.CreateIconFromResourceEx
size_x, size_y = 32, 32
LR_DEFAULTCOLOR = 0
with open("my32x32.png", "rb") as f:
png = f.read()
hicon = CreateIconFromResourceEx(png, len(png), 1, 0x30000, size_x, size_y, LR_DEFAULTCOLOR)
```
|
34,195,014
|
I have a dataframe that has aggregated people by location like so
```
location_id | score | number_of_males | number_of_females
1 | 20 | 2 | 1
2 | 45 | 1 | 2
```
I want to create a new dataframe that unaggregates this one, so I get something like
```
location_id | score | number_of_males | number_of_females
1 | 20 | 1 | 0
1 | 20 | 1 | 0
1 | 20 | 0 | 1
2 | 45 | 1 | 0
2 | 45 | 0 | 1
2 | 45 | 0 | 0
```
Or even better
```
location_id | score | sex
1 | 20 | male
1 | 20 | male
1 | 20 | female
2 | 45 | male
2 | 45 | female
2 | 45 | female
```
I want to do something like
```
import pandas as pd
aggregated_df = pd.DataFrame.from_csv(SOME_PATH)
unaggregated_df = df = pd.DataFrame(columns=['location_id', 'score', 'sex'])
for row in aggregated_df:
for column in ['number_of_males', 'number_of_females']:
for number_of_people in range(0, row[column]):
if column == 'number_of_males':
sex = 'male'
else:
sex = 'female'
unaggregated_df.append([{'location_id': row['location_id'],
'score': row['score'],
'sex': sex}],
ignore_index=True)
```
I am having trouble getting the dict to append even though this seems to be supported in [pandas](http://pandas.pydata.org/pandas-docs/stable/merging.html#appending-rows-to-a-dataframe)
Is there a more pandthonic (panda's version of pythonic) way to accomplish this?
|
2015/12/10
|
[
"https://Stackoverflow.com/questions/34195014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1096662/"
] |
Here is a way to get your result using `group_by`:
```
ids = ['location_id','score']
def foo(d):
return pd.Series(d['number_of_males'].values*['male'] +
d['number_of_females'].values*['female'])
pd.melt(df.groupby(ids).apply(foo).reset_index(), id_vars=ids).drop('variable', 1)
#Out[13]:
# location_id score value
#0 1 20 male
#1 2 45 male
#2 1 20 male
#3 2 45 female
#4 1 20 female
#5 2 45 female
```
|
Up to this point I could do it with pandas functions:
```
print df
location_id score number_of_males number_of_females
1 20 2 1
2 45 1 2
```
Converting the two columns to one,
```
df.set_index(['location_id','score']).stack().reset_index()
Out[102]:
location_id score level_2 0
0 1 20 number_of_males 2
1 1 20 number_of_females 1
2 2 45 number_of_males 1
3 2 45 number_of_females 2
```
But then I have to iterate with a Python loop to increase the number of rows :(
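For what it's worth, the expansion can be done without a Python loop by repeating the stacked rows with `Index.repeat`; a sketch against the question's sample data (the column names are assumptions matching that data):

```python
import pandas as pd

df = pd.DataFrame({"location_id": [1, 2], "score": [20, 45],
                   "number_of_males": [2, 1], "number_of_females": [1, 2]})

# Stack the two count columns into long form, as above
long = df.set_index(["location_id", "score"]).stack().reset_index()
long.columns = ["location_id", "score", "sex", "count"]
# 'number_of_males' -> 'male', 'number_of_females' -> 'female'
long["sex"] = long["sex"].str.replace("number_of_", "").str.rstrip("s")

# Repeat each row `count` times, then drop the helper column
result = (long.loc[long.index.repeat(long["count"])]
              .drop(columns="count")
              .reset_index(drop=True))
```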
|
61,265,226
|
I use Python boto3.
When I upload a file to S3, AWS Lambda moves the file to another bucket. I can get the object URL from the Lambda event, like
`https://xxx.s3.amazonaws.com/xxx/xxx/xxxx/xxxx/diamond+white.side.jpg`
The object key is `xxx/xxx/xxxx/xxxx/diamond+white.side.jpg`
This is a simple example: I can replace "+" to get the object key, but there are other, more complicated situations. I need to get the object key from the object URL. How can I do it?
Thanks!!
|
2020/04/17
|
[
"https://Stackoverflow.com/questions/61265226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11303583/"
] |
You should use `urllib.parse.unquote` and then replace `+` with space.
From my knowledge, `+` is the only exception from URL parsing, so you should be safe if you do that by hand.
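A sketch of that with `urllib.parse` (note that `unquote_plus` performs both steps, `+`-to-space plus percent-decoding, in one call; the example URL mirrors the question's):

```python
from urllib.parse import unquote_plus, urlparse

url = "https://xxx.s3.amazonaws.com/xxx/xxx/xxxx/xxxx/diamond+white.side.jpg"

# The object key is the URL path without the leading slash, decoded the
# same way S3 event notifications encode it (space -> '+', percent-escapes).
key = unquote_plus(urlparse(url).path.lstrip("/"))
# -> 'xxx/xxx/xxxx/xxxx/diamond white.side.jpg'
```

If the object key genuinely contains a literal `+` rather than an encoded space, skip the decoding step and keep the path as-is.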
|
I think this is what you want:
```
url_data = "https://xxx.s3.amazonaws.com/xxx/xxx/xxxx/xxxx/diamond+white.side.jpg".split("/")[3:]
object_key = "/".join(url_data)
```
|
71,738,691
|
I am writing a simple piece of code in Python which should give me all the data available in my Oracle table. The connection is fine.
```
select column1,column2,column3 from table1.
```
The columns have the following values:
[](https://i.stack.imgur.com/dWjEf.png)
This is a huge table with 24 million rows. The issue is that I am getting null values in multiple columns, even though they do have values. What I suspected is that the initial rows of these columns have smaller values (2 digits only), and that is why anything bigger than 2 digits gets ignored by Python. How can I write a select statement that takes everything from the Oracle table, even if the initial few hundred rows of a column are null?
But as suggested here, this is not the reason; I am not sure why this is happening. Any help will be appreciated.
I am using Python 3.10.
|
2022/04/04
|
[
"https://Stackoverflow.com/questions/71738691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15238129/"
] |
I think this is because Ubuntu 16.04 includes a fairly old version of git that does not support the `--progress` flag to the `git submodule update` command. I've [opened an issue](https://github.com/alire-project/alire/issues/966) against Alire to see if we might be able to remove this flag.
In the meantime, I'd recommend upgrading git to the latest version. You may also want to consider a more recent Ubuntu version as Alire hasn't been tested extensively on older releases. Alire's integration tests are currently run on Ubuntu 20.04.
|
I *think* you need to say
```
alr index --update-all
```
`--update-all` is a bit misleading, but given that the error message mentions "index" it was the only likely thing in `alr index --help` (you find the possible commands, e.g. "index" here, by just `alr --help`).
|
65,390,129
|
I create a virtual environment; let's say test\_venv, and I activate it. All successful.
HOWEVER, the path of the Python interpreter doesn't change. I have illustrated the situation below.
For clarification, the python path SHOULD BE `~/Desktop/test_venv/bin/python`.
```
>>> python3 -m venv Desktop/test_venv
>>> source Desktop/test_venv/bin/activate
(test_venv) >>> which python
/usr/bin/python
```
|
2020/12/21
|
[
"https://Stackoverflow.com/questions/65390129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14392583/"
] |
#### *Please make sure to read Note #2.*
---
**This is what you should do if you don't want to create a new virtual environment**:
In the `venv/bin` folder there are 3 files that store your venv path explicitly.
If that path is wrong they fall back to the normal Python path, so you should change the path there to your new path.
change: `set -gx VIRTUAL_ENV "what/ever/path/you/need"` in `activate.fish`
change: `VIRTUAL_ENV="what/ever/path/you/need"` in `activate`
change: `setenv VIRTUAL_ENV "what/ever/path/you/need"` in `activate.csh`
**Note #1:**
the path is to `/venv` and not to `/venv/bin`
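A self-contained sketch of that edit, using `sed` on a stand-in `activate` file (the directory `moved_venv` is an assumption for illustration; point `NEW_PATH` at the directory your venv actually lives in now):

```shell
# NEW_PATH is hypothetical -- substitute your venv's real new location
NEW_PATH="$PWD/moved_venv"

# Stand-in activate file so the sketch runs on its own
mkdir -p "$NEW_PATH/bin"
echo 'VIRTUAL_ENV="/old/location/moved_venv"' > "$NEW_PATH/bin/activate"

# Rewrite the stored path (repeat the same idea for activate.fish and activate.csh)
sed -i "s|^VIRTUAL_ENV=.*|VIRTUAL_ENV=\"$NEW_PATH\"|" "$NEW_PATH/bin/activate"
cat "$NEW_PATH/bin/activate"
```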
**Note #2:**
If you reached this page it means that you are probably **not following Python's best practice for a project structure**.
If you were, creating a new virtual environment would be just a matter of one command line.
Please consider using one of the following methods:
* add a [`requirements.txt`](https://pip.pypa.io/en/stable/user_guide/#requirements-files) to your project - *for very small projects.*
* [implement a `setup.py` script](https://docs.python.org/3/distutils/setupscript.html) - *for real projects.*
* use a tool like [Poetry](https://python-poetry.org/) - *just like the latter though somewhat user-friendlier for some tasks.*
Thank you Khalaimov Dmitrii, I hadn't thought it was because I moved the folder.
|
This is not an answer specifically to your question, but it matches the title of the question. I faced a similar problem and couldn't find a solution on the Internet. Maybe someone can use my experience.
I created a virtual environment for my Python project. Some time later my Python interpreter also stopped changing after virtual environment activation, similar to what you described.
**My problem was that I had moved the project folder to a different directory some time ago.** If I return the folder to its original directory, then everything starts working again.
The resolution is as follows: save all package requirements (for example, using 'pip freeze' or 'poetry') and remove the 'venv' folder (in your case, the 'test\_venv' folder). After that, create the virtual environment again, activate it and install all the requirements.
This approach resolved my problem.
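The rebuild steps above, sketched as shell commands (the name `test_venv` comes from the question; the `pip freeze`/`pip install` lines are commented out because they only make sense against your real environment):

```shell
# 1. While the old venv is still active, save its packages and leave it:
#      pip freeze > requirements.txt && deactivate

# 2. Remove the broken venv and recreate it in the current location:
rm -rf test_venv
python3 -m venv test_venv

# 3. Activate the fresh venv -- `python` now resolves inside it:
. test_venv/bin/activate
command -v python

# 4. Reinstall the saved packages:
#      pip install -r requirements.txt
```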
|
65,390,129
|
I create a virtual environment; let's say test\_venv, and I activate it. All successful.
HOWEVER, the path of the Python interpreter doesn't change. I have illustrated the situation below.
For clarification, the python path SHOULD BE `~/Desktop/test_venv/bin/python`.
```
>>> python3 -m venv Desktop/test_venv
>>> source Desktop/test_venv/bin/activate
(test_venv) >>> which python
/usr/bin/python
```
|
2020/12/21
|
[
"https://Stackoverflow.com/questions/65390129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14392583/"
] |
This is not an answer specifically to your question, but it matches the title of the question. I faced a similar problem and couldn't find a solution on the Internet. Maybe someone can use my experience.
I created a virtual environment for my Python project. Some time later my Python interpreter also stopped changing after virtual environment activation, similar to what you described.
**My problem was that I had moved the project folder to a different directory some time ago.** If I return the folder to its original directory, then everything starts working again.
The resolution is as follows: save all package requirements (for example, using 'pip freeze' or 'poetry') and remove the 'venv' folder (in your case, the 'test\_venv' folder). After that, create the virtual environment again, activate it and install all the requirements.
This approach resolved my problem.
|
Check the value of VIRTUAL\_ENV in `/venv/bin/activate`. If you renamed or moved your project directory, the value may still be the old one. PyCharm doesn't update your venv files if you used PyCharm to rename the project. You can delete the venv and create a new one if the path is wrong, or try the answer that explains where to change it.
|