| text | url | dump | lang | source |
|---|---|---|---|---|
> To: tutor at python.org
> From: alan.gauld at btinternet.com
> Date: Mon, 6 Sep 2010 08:27:31 +0100
> Subject: Re: [Tutor] why do i get None as output
>
> "Roelof Wobben" <rwobben at hotmail.com> wrote
> > def make_empty(seq):
>
> _______________________________________________
> Tutor maillist - Tutor at python.org
> To unsubscribe or change subscription options:

Okay, I put a return seq in the program and it looks now like this:

def encapsulate(val, seq):
    if type(seq) == type(""):
        return str(val)
    if type(seq) == type([]):
        return [val]
    return (val,)

def insert_in_middle(val, seq):
    middle = len(seq)/2
    return seq[:middle] + encapsulate(val, seq) + seq[middle:]

def make_empty(seq):
    """
    >>> make_empty([1, 2, 3, 4])
    []
    >>> make_empty(('a', 'b', 'c'))
    ()
    >>> make_empty("No, not me!")
    ''
    """
    if type(seq) == type([]):
        seq = []
    elif type(seq) == type(()):
        seq = ()
    else:
        seq = ""
    return seq

if __name__ == "__main__":
    import doctest
    doctest.testmod()

This works, but I don't think it's what the exercise means. I think I have to use encapsulate and insert_in_middle, and I don't use them.

Roelof

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <>
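(A minimal sketch of an alternative, not from the thread, assuming the exercise allows slicing: an empty slice preserves the sequence's own type.)

def make_empty(seq):
    # seq[:0] is [], () or '' depending on whether seq is a list,
    # tuple or string, so no type checks are needed.
    return seq[:0]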
|
https://mail.python.org/pipermail/tutor/2010-September/078316.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
csUserRenderBufferManager Class Reference
Helper class to manage multiple render buffers, usually provided by the user. More...
#include <cstool/userrndbuf.h>
Detailed Description
Helper class to manage multiple render buffers, usually provided by the user.
Definition at line 51 of file userrndbuf.h.
Member Function Documentation
Add a buffer.
Returns false if a buffer of the same name was already added.
Retrieve a buffer.
Remove a buffer.
Returns false if no buffer of the specified name was added.
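A minimal sketch of how a manager with these semantics might be implemented (the method names are taken from the descriptions above; everything else is assumed, and the real class stores csRef<iRenderBuffer> handles):

#include <map>
#include <string>

struct iRenderBuffer; // stand-in for the real Crystal Space buffer interface

class UserRenderBufferSketch
{
  std::map<std::string, iRenderBuffer*> buffers;
public:
  // Add a buffer; returns false if a buffer of the same name was already added.
  bool AddRenderBuffer (const std::string& name, iRenderBuffer* buf)
  {
    return buffers.insert (std::make_pair (name, buf)).second;
  }
  // Retrieve a buffer, or 0 if no buffer of that name was added.
  iRenderBuffer* GetRenderBuffer (const std::string& name) const
  {
    std::map<std::string, iRenderBuffer*>::const_iterator it = buffers.find (name);
    return it == buffers.end () ? 0 : it->second;
  }
  // Remove a buffer; returns false if no buffer of the specified name was added.
  bool RemoveRenderBuffer (const std::string& name)
  {
    return buffers.erase (name) != 0;
  }
};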
The documentation for this class was generated from the following file:
- cstool/userrndbuf.h
Generated for Crystal Space 1.2.1 by doxygen 1.5.3
|
http://www.crystalspace3d.org/docs/online/api-1.2/classcsUserRenderBufferManager.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
On 28 February 2012 09:54, Stefan Behnel <stefan_ml at behnel.de> wrote:

> -------- Original Message --------
> Subject: Re: [cython-users] What's up with PyEval_InitThreads() in python 2.7?
>
> Mike Cui, 28.02.2012 10:18:
>>> Thanks for the test code, you hadn't mentioned that you use a "with gil"
>>> block. Could you try the latest github version of Cython?
>>
>> Ahh, much better!
>>
>> #if CYTHON_REFNANNY
>> #ifdef WITH_THREAD
>> __pyx_gilstate_save = PyGILState_Ensure();
>> #endif
>> #endif /* CYTHON_REFNANNY */
>> __Pyx_RefNannySetupContext("callback");
>> #if CYTHON_REFNANNY
>> #ifdef WITH_THREAD
>> PyGILState_Release(__pyx_gilstate_save);
>> #endif
>> #endif /* CYTHON_REFNANNY */
>
> Hmm, thanks for posting this - it can be further improved. There's no
> reason for code bloat here, it should all just go into the
> __Pyx_RefNannySetupContext() macro.
>
> Stefan
> _______________________________________________
> cython-devel mailing list
> cython-devel at python.org
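A sketch of the consolidation Stefan suggests; the inner helper name below is assumed, not taken from Cython's actual sources:

/* Fold the GIL handling into the setup macro itself, so every call
 * site shrinks to a single __Pyx_RefNannySetupContext("name"); line. */
#if CYTHON_REFNANNY && defined(WITH_THREAD)
  #define __Pyx_RefNannySetupContext(name)                          \
    do {                                                            \
      PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();   \
      __Pyx_RefNannySetupContextCore(name); /* assumed helper */    \
      PyGILState_Release(__pyx_gilstate_save);                      \
    } while (0)
#else
  #define __Pyx_RefNannySetupContext(name) __Pyx_RefNannySetupContextCore(name)
#endif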
|
https://mail.python.org/pipermail/cython-devel/2012-February/001980.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
filequeue 0.3.1
A thread-safe queue object which is interchangeable with the stdlib Queue. Any overflow goes into a compressed file to keep excessive amounts of queued items out of memory.
Contents
Overview
filequeue is a Python library that provides a thread-safe queue which is a subclass of Queue.Queue from the stdlib.
filequeue.FileQueue will overflow into a compressed file if the number of items exceeds maxsize, instead of blocking or raising Full like the regular Queue.Queue.
There is also filequeue.PriorityFileQueue and filequeue.LifoFileQueue implementations.
Note filequeue.FileQueue and filequeue.LifoFileQueue will only behave the same as Queue.Queue and Queue.LifoQueue respectively if they are initialised with maxsize=0 (the default). See __init__ docstring for details (help(FileQueue))
Note filequeue.PriorityFileQueue won't currently work exactly the same as a straight replacement for Queue.PriorityQueue. The interface is very slightly different (an extra optional keyword argument on put and __init__), and although it will work, it won't behave the same. It might still be useful to people, though, and hopefully I'll be able to address this in a future version.
Requirements:
- Python 2.5+ or Python 3.x
Why?
The motivation came from wanting to queue a lot of work, without consuming lots of memory.
The interface of filequeue.FileQueue matches that of Queue.Queue (or queue.Queue in Python 3.x). The idea is that most people will use Queue.Queue and can swap in a filequeue.FileQueue only if memory usage becomes an issue. (The same applies to filequeue.LifoFileQueue.)
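A minimal usage sketch (assumed from the description above; the API mirrors Queue.Queue):

from filequeue import FileQueue

q = FileQueue(maxsize=100)  # items beyond maxsize overflow to a file
for i in range(1000):
    q.put(i)                # never blocks or raises Full
while not q.empty():
    item = q.get()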
Issues
Any issues please post on the github page.
Changelog
0.3.1 (2013-01-10)
- Added unittests for LifoFileQueue from Queue.
0.3.0 (2013-01-10)
- Added LifoFileQueue implementation that returns the most recently added items first.
- Reverted the file type from gzip to a regular file for the time being.
0.2.3 (2012-11-27)
- Fix for PriorityFileQueue where it wasn't returning items in the correct order according to the priority.
- Added import * into __init__.py to make the namespace a bit nicer.
- Added the unit tests from the stdlib's Queue (quickly edited out the full checks and LifoQueue tests).
0.2.2 (2012-11-27)
- Initial public release.
- Downloads (All Versions):
- 14 downloads in the last day
- 114 downloads in the last week
- 348 downloads in the last month
- Author: Paul Wiseman
- Keywords: queue thread-safe file gzip
- License: BSD
- Categories
- Development Status :: 3 - Alpha
- Intended Audience :: Developers
- License :: OSI Approved :: BSD License
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.5
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.0
- Programming Language :: Python :: 3.1
- Programming Language :: Python :: 3.2
- Programming Language :: Python :: 3.3
- Topic :: Utilities
- Package Index Owner: GP89
- DOAP record: filequeue-0.3.1.xml
|
https://pypi.python.org/pypi/filequeue
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
I have Thinking Sphinx set up with my search form, but it is returning all users instead of the users matching the data I searched for.
In the users controller:
Code Ruby:
def index
  @users = params[:query].blank? ? User.all : User.search(params[:query])
end
user_index.rb (inside /indices):
Code Ruby:
ThinkingSphinx::Index.define :user, :with => :active_record do
  # fields
  indexes name, :as => :user, :sortable => true
  indexes [ethnicity, religion, about_me, sexuality, children, user_smoke, user_drink, age, gender]

  # attributes
  has id, created_at, updated_at
end
I have nothing for thinking sphinx inside my User model, not sure if I need to go that route. Any help would be appreciated
|
http://www.sitepoint.com/forums/printthread.php?t=1163061&pp=25&page=1
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
A degenerate zero-tetrahedron saturated block that corresponds to attaching a Mobius band to a single annulus boundary. More...
#include <subcomplex/nsatblocktypes.h>
A degenerate zero-tetrahedron saturated block that corresponds to attaching a Mobius band to a single annulus boundary.
This is a degenerate case of the layered solid torus (see the class NSatLST), where instead of joining a solid torus to an annulus boundary we join a Mobius band. The Mobius band can be thought of as a zero-tetrahedron solid torus with two boundary triangles, which in fact are opposite sides of the same triangle. By attaching a zero-tetrahedron Mobius band to an annulus boundary, we are effectively joining the two triangles of the annulus together.
The meridinal disc of this zero-tetrahedron solid torus meets the three edges of the annulus in 1, 1 and 2 places, so it is in fact a degenerate (1,1,2) layered solid torus. Note that the weight 2 edge is the boundary edge of the Mobius strip.
Constructs a clone of the given block structure.
Adjusts the given Seifert fibred space to insert the contents of this saturated block.
In particular, the space should be adjusted as though an ordinary solid torus (base orbifold a disc, no twists or exceptional fibres) had been replaced by this block. This description does not make sense for blocks with twisted boundary; the twisted case is discussed below.
If the argument reflect is true, it should be assumed that this saturated block is being reflected before being inserted into the larger Seifert fibred space. That is, any twists or exceptional fibres should be negated before being added.
Regarding the signs of exceptional fibres: Consider a saturated block containing a solid torus whose meridinal curve runs p times horizontally around the boundary in order through annuli 0,1,... and follows the fibres q times from bottom to top (as depicted in the diagram in the NSatBlock class notes). Then this saturated block adds a positive (p, q) fibre to the underlying Seifert fibred space.
If the ring of saturated annuli bounding this block is twisted then the situation becomes more complex. It can be proven that such a block must contain a twisted reflector boundary in the base orbifold (use Z_2 homology with fibre-reversing paths to show that the base orbifold must contain another twisted boundary component, and then recall that real boundaries are not allowed inside blocks).
In this twisted boundary case, it should be assumed that the twisted reflector boundary is already stored in the given Seifert fibred space. This routine should make any further changes that are required (there may well be none). That is, the space should be adjusted as though a trivial Seifert fibred space over the annulus with one twisted reflector boundary (and one twisted puncture corresponding to the block boundary) had been replaced by this block. In particular, this routine should not add the reflector boundary itself.
Implements regina::NSatBlock.
Returns a newly created clone of this saturated block structure.
A clone of the correct subclass of NSatBlock will be returned. For this reason, each subclass of NSatBlock must implement this routine.
Implements regina::NSatBlock.
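A sketch of the virtual clone idiom this describes (the real Regina classes carry much more state; this reduced interface is assumed):

class NSatBlock {
public:
    virtual ~NSatBlock() {}
    // Each subclass must return a newly created copy of itself.
    virtual NSatBlock* clone() const = 0;
};

class NSatMobius : public NSatBlock {
    int position_; // which annulus edge the weight two edge joins (0, 1 or 2)
public:
    explicit NSatMobius(int position) : position_(position) {}
    NSatMobius(const NSatMobius& src) : NSatBlock(src), position_(src.position_) {}
    virtual NSatBlock* clone() const { return new NSatMobius(*this); }
};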
Determines whether the given annulus is a boundary annulus for a block of this type (Mobius band).
This routine is a specific case of NSatBlock::isBlock(); see that routine for further details.
Returns null if none was found.
Describes how the Mobius band is attached to the boundary annulus.
The class notes discuss the weight two edge of the Mobius band (or equivalently the boundary edge of the Mobius band). The return value of this routine indicates which edge of the boundary annulus this weight two edge is joined to.
In the NSatAnnulus class notes, the three edges of the annulus are denoted vertical, horizontal and boundary, and the vertices of each triangle are given markings 0, 1 and 2.
The return value of this routine is 0, 1 or 2, indicating which of these edges the weight two edge is joined to.
Writes an abbreviated name or symbol for this block to the given output stream.
This name should reflect the particular block type, but need not provide thorough details.
The output should be no more than a handful of characters long, and no newline should be written. In TeX mode, no leading or trailing dollar signs should be written.
Implements regina::NSatBlock.
Writes this object in short text format to the given output stream.
The output should be human-readable, should fit on a single line, and should not end with a newline.
Implements regina::ShareableObject.
|
http://regina.sourceforge.net/engine-docs/classregina_1_1NSatMobius.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Qt, or rather DOM, offers a simple way to generate elements with a namespace attached:
QDomDocument::createElementNS( const QString &nsURI, const QString &qName )
doc.createElement( "holiday" );
with
doc.createElementNS( "urn:kde:developer:tutorials:QtDom:holidays", "h:holiday" );
Again, we can write our own helper method "addElementNS":
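A sketch of such a helper, assuming it simply wraps createElementNS() and appendChild():

QDomElement addElementNS( QDomDocument &doc, QDomNode &parent,
                          const QString &nsURI, const QString &qName,
                          const QString &text = QString() )
{
    // Create the element in the given namespace and hang it under parent.
    QDomElement e = doc.createElementNS( nsURI, qName );
    if ( !text.isNull() )
        e.appendChild( doc.createTextNode( text ) );
    parent.appendChild( e );
    return e;
}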
Unfortunately, the correct way laid out above does not produce valid XML with Qt versions at least up to 4.2.0, so a workaround is needed. With it in place, the output becomes the correct XML code.
Of course, no one forces you to use Qt's DOM classes to generate the XML code. After all, the resulting XML is only simple text! The most straightforward approach would thus be to directly generate the text that contains the XML.
Of course, this approach works fine if your only goal is to create the XML for outputting it to a file or piping it into another application, library or web service.
To find the elements again, use
QDomNodeList elementsByTagNameNS ( const QString & nsURI, const QString & localName ) const
or walk the children manually:

QDomElement e = parent.firstChildElement( "holiday" );
while ( !e.isNull() ) {
    if ( e.namespaceURI() == nsURI ) {
        // Do whatever we need to do with the holiday
    }
    e = e.nextSiblingElement( "holiday" );
}
Initial Author: Reinhold Kainhofer
|
http://techbase.kde.org/index.php?title=Development/Tutorials/QtDOM_Tutorial&diff=60725&oldid=7382
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
There are many DLLs that help you export data to Excel, but they often cause errors or require payment to unlock all their features.
I want to show how we can use Microsoft's original Office DLL to export data to a standard Excel file. In this tip, we use Microsoft.Office.Interop.Excel.dll, which is on your PC when you install Microsoft Office.
I use the Microsoft.Office.Interop.Excel namespace. The way I use it is very simple. If you want to learn more about it, you can see the Microsoft.Office.Interop.Excel namespace page on the Microsoft website.
This is all the code you need in your form. First of all, you need to declare references to the objects we want to use. The important thing is about using Microsoft.Office.Interop.Excel.dll: you should first install the latest version of Microsoft Office, then set a reference to Microsoft.Office.Interop.Excel.
There are two zip files that you can download and run. If you have a favorite query, use it. But if you don't have any database or query, don't worry: you can download Create_Pubs_DB.zip and run it in SQL Server. After that, you can extract the source and run it.
'References that we need
Imports System.Data.SqlClient
Imports System.Data
Imports System.IO.Directory
Imports Microsoft.Office.Interop.Excel 'Before you add this reference to your project,
' you need to install Microsoft Office and find last version of this file.
Imports Microsoft.Office.Interop
Public Class Form1
Private Sub btnBrowse_Click(sender As System.Object, _
e As System.EventArgs) Handles btnBrowse.Click
'Initialize the objects before use
Dim dataAdapter As New SqlClient.SqlDataAdapter()
Dim dataSet As New DataSet
Dim command As New SqlClient.SqlCommand
Dim datatableMain As New System.Data.DataTable()
Dim connection As New SqlClient.SqlConnection
'Assign your connection string to connection object
connection.ConnectionString = "Data Source=.;" & _
"Initial Catalog=pubs;Integrated Security=True"
command.Connection = connection
command.CommandType = CommandType.Text
'You can use any command select
command.CommandText = "Select * from Authors"
dataAdapter.SelectCommand = command
Dim f As FolderBrowserDialog = New FolderBrowserDialog
Try
If f.ShowDialog() = DialogResult.OK Then
'This section help you if your language is not English.
System.Threading.Thread.CurrentThread.CurrentCulture = _
System.Globalization.CultureInfo.CreateSpecificCulture("en-US")
Dim oExcel As Excel.Application
Dim oBook As Excel.Workbook
Dim oSheet As Excel.Worksheet
oExcel = CreateObject("Excel.Application")
oBook = oExcel.Workbooks.Add(Type.Missing)
oSheet = oBook.Worksheets(1)
Dim dc As System.Data.DataColumn
Dim dr As System.Data.DataRow
Dim colIndex As Integer = 0
Dim rowIndex As Integer = 0
'Fill data to datatable
connection.Open()
dataAdapter.Fill(datatableMain)
connection.Close()
'Export the Columns to excel file
For Each dc In datatableMain.Columns
colIndex = colIndex + 1
oSheet.Cells(1, colIndex) = dc.ColumnName
Next
'Export the rows to excel file
For Each dr In datatableMain.Rows
rowIndex = rowIndex + 1
colIndex = 0
For Each dc In datatableMain.Columns
colIndex = colIndex + 1
oSheet.Cells(rowIndex + 1, colIndex) = dr(dc.ColumnName)
Next
Next
'Set final path
Dim fileName As String = "\ExportedAuthors" + ".xls"
Dim finalPath = f.SelectedPath + fileName
txtPath.Text = finalPath
oSheet.Columns.AutoFit()
'Save file in final path
oBook.SaveAs(finalPath, XlFileFormat.xlWorkbookNormal, Type.Missing, _
Type.Missing, Type.Missing, Type.Missing, XlSaveAsAccessMode.xlExclusive, _
Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing)
'Release the objects
ReleaseObject(oSheet)
oBook.Close(False, Type.Missing, Type.Missing)
ReleaseObject(oBook)
oExcel.Quit()
ReleaseObject(oExcel)
'Sometimes the Office application does not quit after automation,
'so I am calling the GC.Collect method.
GC.Collect()
MessageBox.Show("Export done successfully!")
End If
Catch ex As Exception
MessageBox.Show(ex.Message, "Warning", MessageBoxButtons.OK)
End Try
End Sub
Private Sub ReleaseObject(ByVal o As Object)
Try
While (System.Runtime.InteropServices.Marshal.ReleaseComObject(o) > 0)
End While
Catch
Finally
o = Nothing
End Try
End Sub
End Class
|
http://www.codeproject.com/Tips/669509/How-to-export-data-to-Excel-in-VB-NET
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Part of twisted.python
This module aims to provide a unified, object-oriented view of Python's runtime hierarchy.
Python is a very dynamic language with wide variety of introspection utilities. However, these utilities can be hard to use, because there is no consistent API. The introspection API in python is made up of attributes (__name__, __module__, func_name, etc) on instances, modules, classes and functions which vary between those four types, utility modules such as 'inspect' which provide some functionality, the 'imp' module, the "compiler" module, the semantics of PEP 302 support, and setuptools, among other things.
At the top, you have "PythonPath", an abstract representation of sys.path which includes methods to locate top-level modules, with or without loading them. The top-level exposed functions in this module for accessing the system path are "walkModules", "iterModules", and "getModule". From most to least specific, here are the objects provided:

PythonPath      # sys.path
  |
  v
PathEntry       # one entry on sys.path: an importer
  |
  v
PythonModule    # a module or package that can be loaded
  |
  v
PythonAttribute # an attribute of a module (function or class)
  |
  v
PythonAttribute # an attribute of a function or class
  |
  v
...

Here's an example of idiomatic usage: this is what you would do to list all of the modules outside the standard library's python-files directory:

import os
stdlibdir = os.path.dirname(os.__file__)

from twisted.python.modules import iterModules

for modinfo in iterModules():
    if (modinfo.pathEntry.filePath.path != stdlibdir
        and not modinfo.isPackage()):
        print 'unpackaged: %s: %s' % (
            modinfo.name, modinfo.filePath.path)
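getModule works similarly; a small sketch (walkModules on the returned module object is assumed here to recurse into the package, mirroring the top-level walkModules):

from twisted.python.modules import getModule

# Look up the 'twisted' package on sys.path without importing it.
twistedPackage = getModule('twisted')
for modinfo in twistedPackage.walkModules():
    print modinfo.name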
|
http://twistedmatrix.com/documents/8.2.0/api/twisted.python.modules.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Each unit test tests one bit of functionality in the software. Unit tests are entirely automated and complete quickly. Unit tests for the entire system are gathered into one test suite, and may all be run in a single batch. The result of a unit test is simple: either it passes, or it doesn't. All this means you can test the entire system at any time without inconvenience, and quickly see what passes and what fails.
The Twisted development team adheres to the practice of Extreme Programming (XP), and the usage of unit tests is a cornerstone XP practice. Unit tests are a tool to give you increased confidence. You changed an algorithm -- did you break something? Run the unit tests. If a test fails, you know where to look, because each test covers only a small amount of code, and you know it has something to do with the changes you just made. If all the tests pass, you're good to go, and you don't need to second-guess yourself or worry that you just accidentally broke someone else's program.
You don't have to write a test for every single method you write, only production methods that could possibly break.
-- Kent Beck, Extreme Programming Explained, p. 58.
From the root of the Twisted source tree, run Trial:
$ bin/trial twisted
You'll find that having something like this in your emacs init files is quite handy:
(defun runtests ()
  (interactive)
  (compile "python /somepath/Twisted/bin/trial /somepath/Twisted"))
(global-set-key [(alt t)] 'runtests)
Always, always, always be sure all the tests pass before committing any code. If someone else checks out code at the start of a development session and finds failing tests, they will not be happy and may decide to hunt you down.
Since this is a geographically dispersed team, the person who can help you get your code working probably isn't in the room with you. You may want to share your work in progress over the network, but you want to leave the main Subversion tree in good working order. So use a branch, and merge your changes back in only after your problem is solved and all the unit tests pass again.
Please don't add new modules to Twisted without adding tests for them too. Otherwise we could change something which breaks your module and not find out until later, making it hard to know exactly what the change that broke it was, or until after a release, and nobody wants broken code in a release.
Tests go into dedicated test packages such as twisted/test/ or twisted/conch/test/, and are named test_foo.py, where foo is the name of the module or package being tested. Extensive documentation on using the PyUnit framework for writing unit tests can be found in the links section below.
One deviation from the standard PyUnit documentation: To ensure that any variations in test results are due to variations in the code or environment and not the test process itself, Twisted ships with its own, compatible, testing framework. That just means that when you import the unittest module, you will use from twisted.trial import unittest instead of the standard import unittest.

As long as you have followed the module naming and placement conventions, trial will be smart enough to pick up any new tests you write.
PyUnit provides a large number of assertion methods to be used when writing tests. Many of these are redundant. For consistency, Twisted unit tests should use the assert forms rather than the fail forms. Also, use assertEqual, assertNotEqual, and assertAlmostEqual rather than assertEquals, assertNotEquals, and assertAlmostEquals. assertTrue is also preferred over assert_. You may notice this convention is not followed everywhere in the Twisted codebase.

What follows are some guidelines to follow when writing tests for the Twisted test suite. Many tests predate these guidelines and so do not follow them. When in doubt, follow the guidelines given here, not the example of old unit tests.
Most unit tests should avoid performing real, platform-implemented I/O operations. Real I/O is slow, unreliable, and unwieldy. When implementing a protocol, twisted.test.proto_helpers.StringTransport can be used instead of a real TCP transport. StringTransport is fast, deterministic, and can easily be used to exercise all possible network behaviors.
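A minimal sketch of this pattern (MyLineProtocol is a hypothetical protocol under test):

from twisted.test.proto_helpers import StringTransport

transport = StringTransport()
proto = MyLineProtocol()          # hypothetical protocol class
proto.makeConnection(transport)
proto.dataReceived('hello\r\n')   # feed bytes without any real socket
written = transport.value()       # everything the protocol wrote back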
Most unit tests should also avoid waiting for real time to pass. Unit tests which construct and advance a twisted.internet.task.Clock are fast and deterministic.
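For example, a sketch of exercising a delayed call without real waiting:

from twisted.internet.task import Clock

clock = Clock()
results = []
clock.callLater(5, results.append, 'fired')
clock.advance(5)   # move the clock's time forward deterministically
assert results == ['fired']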
Since unit tests are avoiding real I/O and real time, they can usually avoid using a real reactor. The only exceptions to this are unit tests for a real reactor implementation. Unit tests for protocol implementations or other application code should not use a reactor. Unit tests for real reactor implementations should not use the global reactor, but should instead use twisted.internet.test.reactormixins.ReactorBuilder so they can be applied to all of the reactor implementations automatically. In no case should new unit tests use the global reactor.
Trial, the Twisted unit test framework, has some extensions which are designed to encourage developers to add new tests. One common situation is that a test exercises some optional functionality: maybe it depends upon certain external libraries being available, maybe it only works on certain operating systems. The important common factor is that nobody considers these limitations to be a bug.
To make it easy to test as much as possible, some tests may be skipped in certain situations. Individual test cases can raise the SkipTest exception to indicate that they should be skipped, and the remainder of the test is not run. In the summary (the very last thing printed, at the bottom of the test output) the test is counted as a skip instead of a success or fail. This should be used inside a conditional which looks for the necessary prerequisites:
class SSHClientTests(unittest.TestCase):
    def test_sshClient(self):
        if not ssh_path:
            raise unittest.SkipTest("cannot find ssh, nothing to test")
        foo() # do actual test after the SkipTest
You can also set the .skip attribute on the method, with a string to indicate why the test is being skipped. This is convenient for temporarily turning off a test case, but it can also be set conditionally (by manipulating the class attributes after they've been defined):
class SomeThingTests(unittest.TestCase):
    def test_thing(self):
        dotest()
    test_thing.skip = "disabled locally"
class MyTestCase(unittest.TestCase):
    def test_one(self):
        ...
    def test_thing(self):
        dotest()

if not haveThing:
    MyTestCase.test_thing.im_func.skip = "cannot test without Thing"
    # but test_one() will still run
Finally, you can turn off an entire TestCase at once by setting the .skip attribute on the class. If you organize your tests by the functionality they depend upon, this is a convenient way to disable just the tests which cannot be run.
class TCPTestCase(unittest.TestCase):
    ...
class SSLTestCase(unittest.TestCase):
    if not haveSSL:
        skip = "cannot test without SSL support"
    # but TCPTestCase will still run
    ...
Two good practices which arise from the XP development process are sometimes at odds with each other:

- unit tests are written before the code they test, and
- code is committed frequently, with the full test suite passing.

These two goals will sometimes conflict. The unit tests that are written first, before any implementation has been done, are certain to fail. We want developers to commit their code frequently, for reliability and to improve coordination between multiple people working on the same problem together. While the code is being written, other developers (those not involved in the new feature) should not have to pay attention to failures in the new code. We should not dilute our well-indoctrinated Failing Test Horror Syndrome by crying wolf when an incomplete module has not yet started passing its unit tests. To do so would either teach the module author to put off writing or committing their unit tests until after all the functionality is working, or it would teach the other developers to ignore failing test cases. Both are bad things.
.todo is intended to solve this problem. When a developer first starts writing the unit tests for functionality that has not yet been implemented, they can set the .todo attribute on the test methods that are expected to fail. These methods will still be run, but their failure will not be counted the same as normal failures: they will go into an expected failures category. Developers should learn to treat this category as a second-priority queue, behind actual test failures.

As the developer implements the feature, the tests will eventually start passing. This is surprising: after all, those tests are marked as being expected to fail. The .todo tests which nevertheless pass are put into an unexpected success category. The developer should remove the .todo tag from these tests. At that point, they become normal tests, and their failure is once again cause for immediate action by the entire development team.
The life cycle of a test is thus:

1. Test is created, marked .todo. Test fails: expected failure.
2. Code is written, test starts to pass: unexpected success.
3. The .todo tag is removed. Test passes: success.
4. Code is broken, test fails: failure. Developers spring into action.
5. Code is fixed, test passes once more: success.
Any test which remains marked with .todo for too long should be examined. Either it represents functionality which nobody is working on, or the test is broken in some fashion and needs to be fixed. Generally, .todo may be of use while you are developing a feature, but by the time you are ready to commit anything, all the tests you have written should be passing. In other words, you should rarely, if ever, feel the need to add a test marked todo to trunk. When you do, consider whether a ticket in the issue tracker would be more useful.
Trial provides line coverage information, which is very useful to ensure old code has decent coverage. Passing the --coverage option to Trial will generate the coverage information in a file called coverage, which can be found in the _trial_temp folder. This option requires Python 2.3.3 or newer.
Please add a test-case-name tag to the source file that is covered by your new test. This is a comment at the beginning of the file which looks like one of the following:
# -*- test-case-name: twisted.test.test_defer -*-
or
#!/usr/bin/env python # -*- test-case-name: twisted.test.test_defer -*-
This format is understood by emacs to mark File Variables. The intention is to accept test-case-name anywhere emacs would on the first or second line of the file (but not in the File Variables: block that emacs accepts at the end of the file). If you need to define other emacs file variables, you can either put them in the File Variables: block or use a semicolon-separated list of variable definitions:
# -*- test-case-name: twisted.test.test_defer; fill-column: 75; -*-
If the code is exercised by multiple test cases, those may be marked by using a comma-separated list of tests, as follows (note: not all tools can handle this yet; trial --testmodule does, though):
# -*- test-case-name: twisted.test.test_defer,twisted.test.test_tcp -*-
The test-case-name tag will allow trial --testmodule twisted/dir/myfile.py to determine which test cases need to be run to exercise the code in myfile.py. Several tools (as well as twisted-dev.el's F9 command) use this to automatically run the right tests.
See also Tips for writing tests for Twisted code.
|
http://twistedmatrix.com/trac/export/34350/trunk/doc/core/development/policy/test-standard.xhtml
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
19 September 2008 17:12 [Source: ICIS news]
By Nigel Davis
LONDON (ICIS news)--Towards the end of an extraordinary week some confidence has returned to the world’s financial markets while manufacturing industry still has to take stock.
At the time of writing the
It was expected to take on billions of dollars worth of mortgage-related debt, the toxic loans that are the root cause of this unprecedented financial collapse.
Inter-bank lending is all but frozen and the future of once strong financial institutions still under threat.
Markets reacted positively to news of the
But the recent turmoil will not easily be left behind.
The knock-on effect on companies and markets could in itself be unprecedented.
Manufacturing industry is heading for a steeper than expected slowdown and chemicals growth is expected to stall. There is continuing high risk of a major economic downturn.
Weaker demand is already having an impact on important polymer and other chemicals markets in North America and
Projections of sector output have been lowered substantially since the start of the year based on the spreading slowdown and fears of recession in western economies.
This banking sector turmoil, however, could dent the decoupling theory which suggests that the fast growing economies in
Of vital importance for chemical makers is the health of demand from
The balance then between
Slowing demand growth began to take its toll in the second quarter: the American Chemistry Council’s global chemicals production index (for chemicals excluding pharmaceuticals) fell from 9.5 to 6.1 between April and June.
Chemicals growth in North America and Western Europe has slowed markedly and the statistics are showing demand growth under pressure in
"We were riding on the astronomical growth of
The chemicals world needs continued strong demand growth from
Without that demand growth, product will back up into markets in Europe,
Some firms are more vulnerable than others. The coming downturn will test production and operating efficiencies and, indeed, the balance sheets of some.
Turmoil in the financial markets could temporarily disrupt merger and acquisition (M&A) activity and add risk to existing deals, Scott Anderson, senior economist with US financial services company Wells Fargo, said on Thursday.
But on the other hand, the disruption to credit markets could help generate more interest in consolidation.
At the ICIS Chemical Purchasing summit in Boston, Massachusetts, this week, Laurence Alexander, an analyst with the investment bank Jefferies, said: “The main question is whether the credit market dislocations will have an impact on credit availability next year, and how sensitive the market will be to balance sheet ratios.
"It’s possible the market may penalise companies for having high debt levels."
You are fine if you can continue to service debt, but physical market dynamics suggest that cashflows are likely to diminish over the coming quarters. Woe betide any chemical producer in need of refinancing over the next six months.
Chemical companies have come rapidly off a plateau of performance still manifest in 2007 but clearly weakened in the first half of 2008.
How that performance might be affected by the storms raging in the financial markets may not be clear but the prognosis is not good.
“The credit crisis is like an uncontrollable forest fire, with the potential damage to the
The financial market turmoil threatens chemical companies worldwide as well as the markets they serve.
Bookmark Paul Hodges' Chemicals and the Economy and John Richardson’s Asian Chemical Connections blogs
|
http://www.icis.com/Articles/2008/09/19/9157788/insight-the-threat-from-financial-turmoil.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Package Access Specifier - Java Beginners
specifier and identifier
specifier and identifier what is the difference between specifier and identifier in java
date format - Java Beginners
date format how to 45day in dd-mmm-yyyy date format. Hi friend,
Code to solve the problem :
import java.util.Date;
import java.util.Locale;
import java.util.Calendar;
import java.text.SimpleDateFormat
date format - Java Beginners
modifier and specifier
modifier and specifier what is diff between Access modifier and specifier
Access Specifier
Access Specifier What's the usage of getter and setter in access specifier?
Thank You
How to format text file? - Java Beginners
How to format text file? I want the answer for following Query
***(Java code)
How to open,read and format text file(notepad) or MS-word document in the same directory that of this required java program???? Hi
Printing numbers in pyramid format - Java Beginners
Printing numbers in pyramid format Q) Can you please tel me the code to print the numbers in the following format:
1
2 3
4 5 6
7 8 9 10
Hi Friend,
Try
convert .txt file in .csv format - Java Beginners
convert .txt file in .csv format Dear all,
I hope you are doing good.
I am looking to convert .txt file in .csv format.
The contents might have different names and values.
e.g.
start:
id:XXXX
name:abc
address:xyz
Java Number Format
Java Number Format
... the format of the number. In java this is
achieved by the java.text.NumberFormat...:/
Decimal Format Issue Java
Decimal Format Issue Java Decimal Format Issue Java
how to save web pages in a partcular document in html format using java.. - Java Beginners
how to save web pages in a partcular document in html format using java.. i have to save a particular web page in a folder in html format using java.
the url will be the input of the program.
the program when run will first
java format date
java format date Hi,
How can I format the date in the following pattern?
yyyy-MM-dd
import java.util.*;
import java.text.*;
class SimpleDateFormatExample
{
public static void main(String[] args
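(The snippet above is cut off; a complete sketch of the usual SimpleDateFormat approach, with the body assumed:)

import java.util.*;
import java.text.*;

class SimpleDateFormatExample {
    public static void main(String[] args) {
        // Format the current date in the yyyy-MM-dd pattern.
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
        System.out.println(sdf.format(new Date()));
    }
}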
Programming - Java Beginners
types are limited to methods.
class A{
access-specifier non-access-specifier return-type Method-Name(arguments){
..................
.............code.....
}
}
Access specifiers in java are:
public, default, protected, private
nmber printed in pyramid format
nmber printed in pyramid format how to print this format using java
1
23
345
4567
56789
JAVA - Java Beginners
Core Java Interview questions... of a Class. In Java, Access Specifiers are keywords that give rights to access a class in different ways. Four types of Access Specifiers can be found in Java
Print the following format
Print the following format how to print the following format given string "00401121" in java
0-**
1-*
2-*
3-
4
String to date with no format
into a date .. this is simple .. but i want to do it without knowing the date format..
Here is a situation .. say i hav 100 dates .. and all are in the same format. but i want to write a java program to find out dis format for me. the result
Date Format required to change in java
Date Format required to change in java can we change the format of date like sql as per our requirements???If yes then please give the detail solution
Mysql Date Format
.
format:-The format specifier used here are
%W:- This is used...
Mysql Date Format
Mysql Date Format explains you the way of formatting Date.
Understand
java beginners - Java Beginners
java beginners the pattern was not in this format
*
* * *
* * * * *
* * * * * * *
it is like this
*
* * *
* * * * *
* * * * * * *
thanks
Hi Friend,
If you want the following
Sql Date and Time Format
Sql Date and Time Format
The Tutorial illustrate an example from the 'Sql Date
and Time Format... to format date and time.
SYNTAX:- DATE_FORMAT(date,format)
date
String to Date format
String to Date format My Date format is String Eg(Jan 20 2011 ).... How to Change This values Like to database date format Eg(2011-01-20) using java
import java.util.*;
import java.text.*;
class ChangeDateFormat
how to print pdf format
how to print pdf format Hi every body iam doing school project iam using backend as oracle front end java .how to print student marks list /attendence in pdf format. please help me. thanks in advance.
Here
txt to cvs format
file from one directory and writes an new one in cvs format in another one. I am running into a problem on how to correctly format it . for now this is what i have...);
//convert(outpath);
}
}
Java convert text file to CSV file
convert current date into date format
convert current date into date format How to get current date and convert into date format in Java or PHP
Format
format
help - Java Beginners
to : what is java and where we use java? Hi friend
Java Date Format Example
Java Date Format Example
You can format date in many ways. In this tutorial Format class is used to format the java date in to the different formats. This class is of java.text.Format package in java.
You can use the Format class
Java Date format - Java Server Faces Questions
Java Date format Code to calculate days between two dates in format... dates in format(DD/MM/YYYY)
import java.util.Date;
import... format
String strDate1 = "19/12/2008";
String strDate2 = "19/10/2007
java - Java Beginners
java how to convert jtextfield into string format and store in oracle database
Formatting a Number Using a Custom Format
Formatting a Number Using a Custom Format
In this section, you will learn how to format a number using a custom format.
A number can be formatted in different ways. Java has designed several
classes to format the number. Using
Regarding Date Format - Java Server Faces Questions
Regarding Date Format How I can convert this(18-Aug-08) date format to Gregorian calendar date format?plz send me the syntax and code plz!! hai,,
java - Java Beginners
java How to format the output of one java file.The output will be formatted like a html table
format.That means how to allign the text which is output of java
java beginners
java beginners When an object is falling because of gravity, the following formula can be used to determine the distance of the object falls... DecimalFormat from library text to have the output in the following format "0.00"
java encoding - Java Beginners
java encoding how to encode String variable... format. Please visit the given tutorial link with example that shows how you can Convert a Character into the ASCII Format at roseindia.
http
Decimal Format Example
Decimal Format Example
In this example we are going to format
a decimal value... of DecimalFormat(String
format_type) class .In DecimalFormat class we are passing
java - Java Beginners
java i want to change my date format in java language
example my...";
System.out.println("Initial Date format: " +date);
SimpleDateFormat df = new...());
System.out.println("New Date Format: " +newDate);
}
catch(Exception e
struts2.2.1 date Format example
struts2.2.1 date Format example.
In this example, We will discuss about the different type of date format
using struts2.2.1.
Directory structure of example...
Format Example</title>
</head>
<body>
<h2>Date
Convert a Character into the ASCII Format
Convert a Character into the ASCII Format
... a character
data into the ASCII format.
The java.lang package provides the functionality to convert the character
data into the ASCII format
Description
java - Java Beginners
with example
thanks
Hi friend,
Jasper is a program to read Java class files in binary byte code format. The program is capable of generating ASCII... the inheritance hierarchy and composition maps from the java class files
java - Java Beginners
java Hi sir ..
Write a program in java for Password Generator that should ask from user in the format as given below. Please reply with output as well.
Input the string:- Insert string of any length
Password Type
Java show files in tree format
Java show files in tree format
In this section you will learn how to display the whole file system in tree
format.
Description of code:
The java.swing... in a systematic format
Static final variables - Java Beginners
check the access specifier.
the following example may be useful for u.
class Code - Java Beginners
java Code Write a Java Program that find out the age of person... of Birth in format MM-dd-yyyy");
Scanner input=new Scanner(System.in);
String dateOfBirth=input.nextLine() ;
System.out.println("Enter Current date in format
Java files - Java Beginners
Java files i want to get an example on how to develop a Java OO... mark records from a plain-text file in Comma-separated-
value (CSV) format...; the second field is a timestamp in a long format
(as used by e.g.
Save Java Textarea Content in Html Format using FileMenu
Save Java Textarea Content in Html Format using FileMenu Hi..
How to save the textarea content to html format...Using FileMenu Option
How to convert multiple files in java to .zip format
How to convert multiple files in java to .zip format i receive multiple files from a remote URL as
for (String reportid : reportList) {
inputStream = AWSFileUtil.getInputStream
Java programs - Java Beginners
Java programs Hello
Please write the following programs for me using GUI.Thanks You.
1. Write a java program that reads the first name, last name... in the format: last, first initial. Also print the employee pay. Use GUI.
2. Write
Java applicaton - Java Beginners
Java applicaton Write a Java GUI application Index2.java based on the program in project1 that inputs several lines of text and uses String method.... Store the totals for each letter in an array, and print the values in tabular format
java code - Java Beginners
java code Dear
Sir
i need code for the following format of stars...://
Thanks.
Hello
Please try the following code may
java programe - Java Beginners
java programe write a java programe to compute n! for any number.
1!
2!=2*1
3!=3*2*1
n!=n*(n-1)(n-2)
//int type is limited so... (Exception nfe)
{
System.out.println("Invalid number format. Please enter
java - Java Beginners
java i need to display each digit of a number in array format. how do i do that, not getting any idea. Hi Friend,
Try the following code:
import java.util.*;
class DigitsToArray{
public static void main
Convert to java - Java Beginners
to Java codes,please...thanks!
var iTanggalM = 0;
var iTanggalH = 0;
var..._Tanggal(format) {
var namaBulanE = new Array( "January","February","March","April...)
//iTahunJ : int tahun Jawa
//FORMAT :
//1 (default) (Indonesia
java beginners - Java Beginners
the following links: beginners what is StringTokenizer?
what is the funciton
java - Java Beginners
java hello,i work on one project automated placement cell and i need to view and print the resume of student/employee in pdf format .i need a help how to generate .pdf file format using javacode...its a window based application
java - Java Beginners
address
Format: passwd.username@server.com"
please do the favour.
Regards
how to use stringtokenizer on table? and display in table format.
how to use stringtokenizer on table? and display in table format. table is retrieved from mysql database and each row contains data like(java,c,c++,.net
Parsing a Date Using a Custom Format
Parsing a Date Using a Custom Format
In this section, you will learn how to parse a date using a custom format.
Java has provide many classes for handling... and parsing the dates. You have
to use a pattern of characters to specify the format
java porgram - Java Beginners
java porgram what is java IDE. Hi Friend,
Integrated... understand java and its structure that enables to develop a better code. There are many IDEs in use like Eclipse, JDeveloper, JBuilder, NetBeans, Sun Java Studio
Java Code - Java Beginners
Java Code Create a class Computer that stores information about different types of Computers available with the dealer. The information to be stored... of Computer in the following Format.
Computer Name : IBM
RAM Size : 512 MB
Processor
Java Date - Java Beginners
Java Date I have to find the previous date's beginning time and end time in the long milliseconds format. for example today date is '23rd April... main(String[] args) { Date now = new Date(); DateFormat df Hi friend
Simple Date Format Exception
Simple Date Format Exception
Simple Date Format Exception inherits from a package name... and normalization of date. Simple Date-format
provides you to work on choosing any
Number Format Exception
. A Number Format Exception occurs in the java code when a programmer
tries to convert a String into a number. The Number might be int,float or any
java...
Number Format Exception
java - Java Beginners
the user specifies his/her mail to blog address
Format: passwd.username@server.com
Java - Java Beginners
Java Iam developing some JSP pages, I have to display these pages in HTML format with dynamic data, some of the information is coming from the database.
One more thing while we are entering the data into the JSP page
JAVA - Java Beginners
be either in the format: "firstNames lastName" e.g. "Fred Bloggs", "John Joe Smith....
Parameters:
nameString - the name, may be in format "firstnames lastname
JAVA - Java Beginners
JAVA public class Name
Stores the name of an individual. The name is entered as a single string and may be either in the format: "firstNames....
Parameters:
nameString - the name, may be in format "firstnames lastname
Java display file content in hexadecimal format
Java display file content in hexadecimal format
In this section, you will learn how to read the file content and display it
in hexadecimal format.
Reading... in hexadecimal format. For this we have used FileInputStream class to open
a file
Jmeter - Java Beginners
Jmeter Hello,
I just generate a script JMeter at the time of my execution logs are in binary format, I can not see them in a text format.
Could you help me please on this point.
Thank you in advance
java code - Java Beginners
java code can you design a program that records and reports the weekly sales amounts for the salespeople of a company.Each of the n salespeople...,print the sales in a tabular format that displays the IDnumbers and sales
java project - Java Beginners
java project HAVE START A J2ME PROJECT WITH NO CLEAR IDEA .. ANY ONE GUIDE ME
my id is shahzad.aziz1@gmail.com
replay on my id plz
A detail... will be delivered to M-News application in format of XML.
c. Internet Users
Java code - Java Beginners
Java code I need help in writing code for converting julian date to gregorian date format.
Please help me out asap Hi Friend,
Try the following code:
import java.util.*;
import java.text.DateFormatSymbols
java - Java Beginners
main(String[]args){
System.out.println("Enter your Date of Birth in format dd-MM
java - Java Beginners
){
System.out.println("Enter your Date of Birth in format dd-MM-yyyy");
Scanner input=new
Java Code - Java Beginners
Java Code A Java Program to load Image using Swings JFileChooser,edit that and save it in JPEG Format on the system. Hi Friend,
Try the following code:
import java.awt.*;
import java.io.*;
import javax.swing.
Programming with Java - Java Beginners
Programming with Java Using valid Java code from Chapter 1 to 6, create an object oriented(Java application ) program with a minimum of two... with dynamic allocation, in your java test program class to initialize
Java.. pls. - Java Beginners
Java.. pls. Hi, sir. it compiler but when search or purchase it didt go true.
Sir, can u help me sir. Please Sir..
Import java.io.*;
public... the format in which you want to display the data on selecting search and purchase
Java Program - Java Beginners
Java Program Create a class Computer that stores information about different types of Computers
available with the dealer. The information... of Computer in the
following
Format.
Computer Name : IBM
RAM Size : 512 MB
core java - Java Beginners
core java "Helo man&sir can you share or gave me a java code hope..., print the sales in a tabular format that displays the ID numbers and sales....
core java
jsp
servlet
Friend use Core JAVA .thank you so much.hope you
Java Project - Java Beginners
Java Project Create a class Computer that stores information about... of Computer in the following
Format.
Computer Name : IBM
RAM Size : 512 MB... question in my java project.
But this program is shoing error on the 50th line
java - Java Beginners
DATE_FORMAT = "dd/MM/yyyy";
SimpleDateFormat sdf = new SimpleDateFormat(DATE_FORMAT);
String value5= sdf.format(date);
Connection con = null;
try
java problem - Java Beginners
java problem Room.java
This file defines a class of Room objects... with
format as described below:
Room with beds, tariff , and guest
named .
or
Room... from a small Java program.
HotelMain.java
The aim of this class is to provide
java using JSP - Java Beginners
java using JSP hi...i has been created one JSp page.in this page have one links name as "Show datas in Excel format".I want to create Excel file...,
Try the following code:
1)showExcel.jsp:
Show datas in Excel format
java - Java Beginners
java hello sir . i have some problem in my java program .i have inserted records into sql server database.
the table name is "proj". now i want to show all records from the proj table on a panel in a proper table format.
i
Jmeter - Java Beginners
Jmeter hello,
My problem is that I have a server that sends me a response in format-type: iso-8859-1.
And Jmeter software can't read this format
Can you help me please.
Best regards. Hi friend
Set Data Format in Excel Using POI 3.0
Set Data Format in Excel
Using POI 3.0
In this program we are setting data format in excel file
using Java.
POI version 3.0 provides a new feature for manipulating
|
http://www.roseindia.net/tutorialhelp/comment/59754
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Opened 6 years ago
Closed 4 years ago
Last modified 4 years ago
#8758 closed (fixed)
get_tag_uri in /django/utils/feedgenerator.py breaks with port numbers
Description
The following function:
def get_tag_uri(url, date): "Creates a TagURI. See" tag = re.sub('^http://', '', url) if date is not None: tag = re.sub('/', ',%s:/' % date.strftime('%Y-%m-%d'), tag, 1) tag = re.sub('#', '/', tag) return u'tag:' + tag }} - ... breaks for domain names with a port number, such as as this produces the following TAG value: {{{ tag:example.org:8080,2007-09-21:/ }}} From and you can see that the TAG URI should not contain the port number. The following patch should be able to extract the domain name from the link: {{{ - tag = re.sub('^http://', '', url) + tag = str(urllib.splitport(urllib.splithost(urllib.splittype(url)[1])[0])[0]) }}}
Attachments (2)
Change History (8)
comment:1 Changed 6 years ago by Daniel Pope <dan@…>
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 5 years ago by arthurk
- Owner changed from nobody to arthurk
- Status changed from new to assigned
Changed 5 years ago by arthurk
Changed 5 years ago by arthurk
regression test
comment:3 Changed 5 years ago by arthurk
- Version changed from SVN to 1.0
I added a diff and a unit test for a new get_tag_uri method which relies on urlparse instead of going through some regular expressions. Please review.
comment:4 Changed 5 years ago by arthurk
- Triage Stage changed from Unreviewed to Accepted
comment:5 Changed 4 years ago by russellm
- Resolution set to fixed
- Status changed from assigned to closed
comment:6 Changed 4 years ago by russellm
Why not use urlparse.urlparse(url).hostname ?
Note: See TracTickets for help on using tickets.
|
https://code.djangoproject.com/ticket/8758
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
In this post I will show you how to create a WCF service with both the SOAP and REST paradigms. For the purpose of this post, I am going to create a calculator service with the following characteristics:
- Service will have both SOAP and REST enabled
- REST service will use the JSON message format
In the later part of the post, I will show you the way to consume both types of service in a managed (console) client.
Idea
- There would be two ServiceContracts: one for SOAP and one for REST.
- There would be one service definition file.
- There would be two bindings enabled on the service: one binding corresponds to SOAP and the other to REST.
- For SOAP, basicHttpBinding would be used, and for REST, webHttpBinding would be used.
- Both SOAP and REST will have the same base address.
Note: I have taken different service contracts for REST and SOAP. However, you can have both on the same ServiceContract too.
Create Service
Let us create two service contracts: one for SOAP and one for the REST service. I have created a service contract IService1 for SOAP as below:
IService1.cs
[ServiceContract]
public interface IService1
{
    [OperationContract]
    int Add(int Number1, int Number2);
    [OperationContract]
    int Sub(int Number1, int Number2);
    [OperationContract]
    int Mul(int Number1, int Number2);
    [OperationContract]
    int Div(int Number1, int Number2);
}
Next I have created a service contract IService2 for REST as below:
IService2.cs
[ServiceContract]
public interface IService2
{
    [OperationContract]
    [WebGet(UriTemplate = "/Add/{Number1}/{Number2}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    int AddRest(string Number1, string Number2);

    [OperationContract]
    [WebGet(UriTemplate = "/Sub/{Number1}/{Number2}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    int SubRest(string Number1, string Number2);

    [OperationContract]
    [WebGet(UriTemplate = "/Mul/{Number1}/{Number2}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    int MulRest(string Number1, string Number2);

    [OperationContract]
    [WebGet(UriTemplate = "/Div/{Number1}/{Number2}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    int DivRest(string Number1, string Number2);
}
In the above service I am explicitly setting the message format to JSON, so it is a JSON-enabled REST service. Since URI template parameters always arrive as strings, the input parameters of the functions are of string type.
Implementing Service
The service is implemented in a simple way. All the operation contracts perform trivial calculator functions.
Service1.svc.cs
using System;

namespace MultipleBindingWCF
{
    public class Service1 : IService1, IService2
    {
        public int Add(int Number1, int Number2)
        {
            return Number1 + Number2;
        }
        public int Sub(int Number1, int Number2)
        {
            return Number1 - Number2;
        }
        public int Mul(int Number1, int Number2)
        {
            return Number1 * Number2;
        }
        public int Div(int Number1, int Number2)
        {
            return Number1 / Number2;
        }
        public int AddRest(string Number1, string Number2)
        {
            int num1 = Convert.ToInt32(Number1);
            int num2 = Convert.ToInt32(Number2);
            return num1 + num2;
        }
        public int SubRest(string Number1, string Number2)
        {
            int num1 = Convert.ToInt32(Number1);
            int num2 = Convert.ToInt32(Number2);
            return num1 - num2;
        }
        public int MulRest(string Number1, string Number2)
        {
            int num1 = Convert.ToInt32(Number1);
            int num2 = Convert.ToInt32(Number2);
            return num1 * num2;
        }
        public int DivRest(string Number1, string Number2)
        {
            int num1 = Convert.ToInt32(Number1);
            int num2 = Convert.ToInt32(Number2);
            return num1 / num2;
        }
    }
}
Configuring Service
This section is very important. We need to configure the service for both basicHttpBinding and webHttpBinding.
Very first, we need to set the service behavior as below; at a minimum it enables metadata publishing, and its name must match the behaviorConfiguration used on the service:
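<serviceBehaviors>
  <behavior name="servicebehavior">
    <serviceMetadata httpGetEnabled="true"/>
  </behavior>
</serviceBehaviors>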
After setting the service behavior, we need to set the endpoint behavior for the REST-enabled endpoint as below; the webHttp element is what enables the REST programming model:
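<endpointBehaviors>
  <behavior name="restbehavior">
    <webHttp/>
  </behavior>
</endpointBehaviors>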
Next we need to create the endpoints. The SOAP endpoint with basicHttpBinding gets created as below,
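<endpoint name="SOAPEndPoint"
          contract="MultipleBindingWCF.IService1"
          binding="basicHttpBinding"
          address="soap" />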
In the above configuration,
- Name of the endpoint is SOAPEndPoint. You are free to give any name; however, at the time of consuming the service this endpoint name is required.
- This endpoint will be available at the baseaddress/soap address.
- Binding used in the endpoint is basicHttpBinding
- Contract is IService1. If you remember, we created this contract for SOAP.
Now we need to create the REST endpoint as below,
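<endpoint name="RESTEndPoint"
          contract="MultipleBindingWCF.IService2"
          binding="webHttpBinding"
          address="rest"
          behaviorConfiguration="restbehavior"/>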
In the above configuration,
- Name of the endpoint is RESTEndPoint.
- The REST service will be called at the URL baseaddress/rest/add/parameter1/parameter2
- Binding used is webHttpBinding
- Endpoint configuration is restbehavior. We set the endpoint behavior with this name previously.
Putting it all together, the service configuration will look as below,
<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
  <system.serviceModel>
    <behaviors>
      <serviceBehaviors>
        <behavior name="servicebehavior">
          <serviceMetadata httpGetEnabled="true"/>
        </behavior>
      </serviceBehaviors>
      <endpointBehaviors>
        <behavior name="restbehavior">
          <webHttp/>
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
      <service name="MultipleBindingWCF.Service1" behaviorConfiguration="servicebehavior">
        <endpoint name="SOAPEndPoint"
                  contract="MultipleBindingWCF.IService1"
                  binding="basicHttpBinding"
                  address="soap" />
        <endpoint name="RESTEndPoint"
                  contract="MultipleBindingWCF.IService2"
                  binding="webHttpBinding"
                  address="rest"
                  behaviorConfiguration="restbehavior"/>
        <endpoint contract="IMetadataExchange"
                  binding="mexHttpBinding"
                  address="mex" />
      </service>
    </services>
  </system.serviceModel>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
  </system.webServer>
</configuration>
Now the service is ready to be hosted. For the purpose of this post I am hosting it on the Cassini server.
Consuming Service
The service is enabled for both SOAP and REST. So consumption of the service needs to be done accordingly.
Consumption of the SOAP service can be done in the usual way in a .NET client by adding a Service Reference and making a call to the service as below,
static void CallingSOAPfunction()
{
    Service1Client proxy = new Service1Client("SOAPEndPoint");
    var result = proxy.Add(7, 2);
    Console.WriteLine(result);
}
If you notice, I am providing the endpoint name to say explicitly which endpoint of the service needs to be called.
The REST service works with the JSON message format. For deserialization of the message, I am using DataContractJsonSerializer. This class is in the namespace System.Runtime.Serialization.Json.
static void CallingRESTfunction()
{
    WebClient RESTProxy = new WebClient();
    byte[] data = RESTProxy.DownloadData(new Uri(""));
    Stream stream = new MemoryStream(data);
    DataContractJsonSerializer obj = new DataContractJsonSerializer(typeof(string));
    string result = obj.ReadObject(stream).ToString();
    Console.WriteLine(result);
}
In the above code I am downloading data from the REST URI using WebClient and deserializing it using DataContractJsonSerializer.
Now you know how to enable REST and SOAP on the same WCF service. I hope this post is useful. Thanks for reading.
While this is all well and good, this isn’t REST. This is RPC using XML or JSON at the message level. Note to your readers that if they want to build a real REST service, it is much more complicated than just adding an endpoint.
Hi,
I would be very happy if you could share an article explaining a real REST service. As for me, this is a WCF REST service created using the .NET Framework. Thanks
Absolutely… There is a maturity model associated with endpoints that claim to be REST. You can find an explanation here.
Effectively, what you are demonstrating above is at Level 0. There are no resources, there are no verbs, and there is no hypermedia. This is OK and it isn’t bad. I just don’t want readers to be under the impression that they have implemented REST.
Thanks, I agree… there is great scope to make complex REST with verbs and media. However, the purpose of this post was to display a basic understanding of consumption in WP7. I may cover complex REST in a further post.
Thanks for your time and feedback
Some of you out there tend to overstate things. The title of this article was “How to enable REST and SOAP both on the same WCF service”. It did not say “How to know everything there is to know about REST”! For what was intended by the article, I think the author did a great job and provided me with a succinct answer to a problem I have been working on most of the day; how do I add a second endpoint to an existing SOAPy WCF service in order to provide REST services to a jQuery/AJAX client. I understand that I have to worry about verbs and URI templates, and relative paths, etc., etc in order to provide a true REST service. But the author accomplished what the title of his article stated. He told me how to enable REST and SOAP both on the same WCF service (without killing mex by the way!).
hey folks my purpose while writing this post was to focus on how to part ! I hope you find it useful
Thanks for this post. It was really useful for me.
Is it possible to use a byte array datatype in a REST WCF service?
Glad to know that it was useful.
Thanks
/DJ
|
http://debugmode.net/2011/12/22/how-to-enable-rest-and-soap-both-on-the-same-wcf-service/
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
GLSL Programming/Unity/Curved Glass
This tutorial covers refraction mapping and its implementation with cube maps.
It is a variation of Section “Reflecting Surfaces”, which should be read first.
Refraction Mapping
In Section “Reflecting Surfaces”, we reflected view rays and then performed texture lookups in a cube map in the reflected direction. Here, we refract view rays at a curved, transparent surface and then perform the lookups with the refracted direction. The effect will ignore the second refraction when the ray leaves the transparent object again; however, many people hardly notice the differences since such refractions are usually not part of our daily life.
Instead of the reflect function, we are using the refract function; thus, the fragment shader could be:
#ifdef FRAGMENT

void main()
{
   float refractiveIndex = 1.5;
   vec3 refractedDirection = refract(normalize(viewDirection),
      normalize(normalDirection), 1.0 / refractiveIndex);
   gl_FragColor = textureCube(_Cube, refractedDirection);
}

#endif
Note that refract takes a third argument, which is the refractive index of the outside medium (e.g. 1.0 for air) divided by the refractive index of the object (e.g. 1.5 for some kinds of glass). Also note that the first argument has to be normalized, which isn't necessary for reflect.
Complete Shader Code
With the adapted fragment shader (and the same vertex shader as in Section “Reflecting Surfaces”), the complete shader code becomes:
Shader "GLSL shader with refraction mapping" {
   Properties {
      _Cube ("Environment Map", Cube) = "" {}
   }
   SubShader {
      Pass {
         GLSLPROGRAM

         #include "UnityCG.glslinc"

         uniform samplerCube _Cube;

         varying vec3 normalDirection;
         varying vec3 viewDirection;

         #ifdef VERTEX

         void main()
         {
            mat4 modelMatrix = _Object2World;
            mat4 modelMatrixInverse = _World2Object;

            normalDirection = normalize(vec3(
               vec4(gl_Normal, 0.0) * modelMatrixInverse));
            viewDirection = vec3(modelMatrix * gl_Vertex
               - vec4(_WorldSpaceCameraPos, 1.0));

            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
         }

         #endif

         #ifdef FRAGMENT

         void main()
         {
            float refractiveIndex = 1.5;
            vec3 refractedDirection = refract(normalize(viewDirection),
               normalize(normalDirection), 1.0 / refractiveIndex);
            gl_FragColor = textureCube(_Cube, refractedDirection);
         }

         #endif

         ENDGLSL
      }
   }
}
Summary
Congratulations. This is the end of another tutorial. We have seen:
- How to adapt reflection mapping to refraction mapping using the refract instruction.
Further Reading
If you still want to know more
- about reflection mapping and cube maps, you should read Section “Reflecting Surfaces”.
- about the refract instruction, you could look it up in the “OpenGL ES Shading Language 1.0.17 Specification” available at the “Khronos OpenGL ES API Registry”.
|
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Curved_Glass
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
The PathAttribute allows setting an attribute at a given position in a Path.
The PathAttribute object allows attributes consisting of a name and a value to be specified for the endpoints of path segments. The attributes are exposed to the delegate as Attached Properties. The value of an attribute at any particular point is interpolated from the PathAttributes bounding the point.
The example below shows a path with the items scaled to 30% with opacity 50% at the top of the path and scaled 100% with opacity 100% at the bottom. Note the use of the PathView.iconScale and PathView.iconOpacity attached properties to set the scale and opacity of the delegate.
import Qt 4.7

Rectangle {
    width: 240; height: 200

    Component {
        id: delegate
        Item {
            width: 80; height: 80
            scale: PathView.iconScale
            opacity: PathView.iconOpacity
            Column {
                Image { anchors.horizontalCenter: name.horizontalCenter; width: 64; height: 64; source: icon }
                Text { text: name; font.pointSize: 16 }
            }
        }
    }

    PathView {
        anchors.fill: parent
        model: menuModel    // a model providing the 'name' and 'icon' roles
        delegate: delegate
        path: Path {
            startX: 120; startY: 100
            PathAttribute { name: "iconScale"; value: 1.0 }
            PathAttribute { name: "iconOpacity"; value: 1.0 }
            PathQuad { x: 120; y: 25; controlX: 260; controlY: 75 }
            PathAttribute { name: "iconScale"; value: 0.3 }
            PathAttribute { name: "iconOpacity"; value: 0.5 }
            PathQuad { x: 120; y: 100; controlX: -20; controlY: 75 }
        }
    }
}
See also Path.
name : string

the name of the attribute to change
value : string
the new value of the attribute.
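For example, the following PathAttribute sets an attribute named iconScale (the name here is illustrative) to 0.5 at its point on the path:

PathAttribute { name: "iconScale"; value: 0.5 }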
|
http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qml-pathattribute.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Multi Object Tracker for TLD.
#include <opencv2/tracking/tracker.hpp>
Multi Object Tracker for TLD.
TLD is a novel tracking framework. The implementation is based on [103] .
The Median Flow algorithm (see cv::TrackerMedianFlow) was chosen as a tracking component in this implementation, following the authors. The tracker is supposed to be able to handle rapid motions, partial occlusions, object absence, etc.
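A rough usage sketch (the addTarget and boundingBoxes members come from the MultiTracker_Alt base class, and the exact signatures may differ between OpenCV versions):

#include <opencv2/tracking/tracker.hpp>

cv::MultiTrackerTLD trackers;

// 'frame' is the first frame of the video; call addTarget once per object,
// giving its initial bounding box and a TLD tracker instance:
trackers.addTarget(frame, cv::Rect2d(100, 100, 50, 80), cv::TrackerTLD::create());

// For each subsequent frame, use the TLD-optimized update:
trackers.update_opt(frame);

// The updated object locations are then available in trackers.boundingBoxes.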
|
https://docs.opencv.org/3.4.8/d2/d33/classcv_1_1MultiTrackerTLD.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
netinet_tcp (0p) - Linux Man Pages
netinet_tcp: definitions for the Internet Transmission Control Protocol (TCP)
PROLOG
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
NAME
netinet/tcp.h --- definitions for the Internet Transmission Control Protocol (TCP)
SYNOPSIS
#include <netinet/tcp.h>
DESCRIPTION
The <netinet/tcp.h> header shall define the following symbolic constant for use as a socket option at the IPPROTO_TCP level:

TCP_NODELAY Avoid coalescing of small segments.

The implementation need not allow the value of the option to be set via setsockopt() or retrieved via getsockopt().

APPLICATION USAGE
None.
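For example, an application can pass the constant to setsockopt() to disable coalescing of small segments on an already-created TCP socket (sockfd below):

#include <netinet/tcp.h>
#include <sys/socket.h>

int flag = 1;
if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) == -1) {
    /* handle error */
}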
RATIONALE
None.
FUTURE DIRECTIONS
None.
SEE ALSO
<sys/socket.h>
The System Interfaces volume of POSIX.1-2008, getsockopt(), setsockopt() .
|
https://www.systutorials.com/docs/linux/man/docs/linux/man/0p-netinet_tcp/
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
Module threading types
This page talks only about plugins with the "module" suffix; services, sources, tools, and any other plugins are not discussed here.
As of art 3.00.00, modules can be characterized in two dimensions:
- workflow behavior: is the module a producer, filter, analyzer or output module? This is the module workflow type, or just module type.
- processing behavior: is the module shared or replicated across schedules? This is the module threading type.
Whereas determining module workflow type is usually straightforward, figuring out the module threading type can be more difficult. This page discusses the tradeoffs involved in making such a decision. Before reading this page, you must understand the concept of a schedule (see Schedules and transitions).
Shared modules
art supports shared modules and replicated modules. Shared modules are module instances shared across all schedules, the number of which is specified by the user (see the scheduler configuration section). For example, if a user's job configuration looks like:
services.scheduler: {
  num_schedules: 2
}
physics: {
  producers: {
    m1: { module_type: MySharedModule1 }
    m2: { module_type: MySharedModule2 }
    m3: { module_type: MySharedModule3 }
  }
  tp: [m1, m2, m3]
}
created processing infrastructure could be illustrated like this:
Shared modules see all events.
serialize<art::InEvent>(...)
One of the reasons for using a shared module is if the library you are using cannot guarantee thread safety. In such a case, it is possible to tell the framework that event-level calls relying on a given library should be serialized:
MyProducer::MyProducer(Parameters const& p)
  : SharedProducer{p}
  // other initializations
{
  produces<MyProduct>();
  serialize<art::InEvent>("NameOfThreadUnsafeResource");
}
The argument given to the
serialize function is called the resource name, which is of a type convertible to
std::string. Whenever it makes sense, a function should be introduced that, when called, returns the resource name as an
std::string. This approach avoids the typographical errors for which string-based interfaces are prone (e.g.):
serialize<art::InEvent>("NameOfThreadUnsafeResource");           // 1. discouraged, but not forbidden
serialize<art::InEvent>(ThreadUnsafeResource::resource_name());  // 2. encouraged
serialize(ThreadUnsafeResource::resource_name());                // 3. equivalent to 2
For resources that have a well-defined type, the type should have a
static member function called
resource_name(). If there is no obvious type to which a resource name should be attributed (e.g. GENIE), it is encouraged that those libraries introduce a free function along the lines of:
namespace GENIE {
  inline std::string resource_name() { return "GENIE"; }
}
so that a user can make a call like
GENIE::resource_name().
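A shared producer that relies on GENIE would then register the resource in its constructor, following the same pattern as above (the module name is illustrative):

MyGenieProducer::MyGenieProducer(Parameters const& p)
  : SharedProducer{p}
{
  produces<MyProduct>();
  serialize(GENIE::resource_name());  // event-level calls serialized with all other GENIE users
}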
Other things to note:
- Explicitly specifying the template argument art::InEvent is optional--the InEvent value is the default.
- Specifying an empty resource-name list--i.e. 'serialize();'--indicates that the module's event-level function may be called at any point irrespective of any other module, but the module in question still processes one event at a time.
- Multiple resource names may be specified in the serialize call: serialize(TFileService::resource_name(), "CLHEP_random_engine"). The framework then places the event-level call in the corresponding queues. It is not until the call is at the front of each queue that the framework invokes the user's member function.
But what does it do?
A call to serialize tells the framework to create a queue associated with each resource name. The event-level calls are serially executed for all modules that register for the same queue. For example, suppose a user has configured modules 'm1' and 'm3' to use the same resource:
MySharedModule1::MySharedModule1(...) { serialize(TFileService::resource_name()); }  // c'tor for m1
MySharedModule2::MySharedModule2(...) { serialize(); }                               // c'tor for m2
MySharedModule3::MySharedModule3(...) { serialize(TFileService::resource_name()); }  // c'tor for m3
then the m1 and m3 event-level calls would not be invoked at the same time on different events. For module m2, however, no resource name has been provided. If m2 were configured on a different path, it would be possible for m2 to execute in parallel with m1 and m3 since there is no resource that is shared between them.
Suppose, however, that string literals were specified instead of making a function call:
MySharedModule1::MySharedModule1(...) { serialize("TFileService"); }  // c'tor for m1
MySharedModule2::MySharedModule2(...) { serialize(); }                // c'tor for m2
MySharedModule3::MySharedModule3(...) { serialize("TfileService"); }  // c'tor for m3 (oops: 'f' instead of 'F')
Such a spelling error results in the framework not properly serializing 'm1' and 'm3' calls. This is why the resource_name() function call is strongly preferred.
Legacy modules
Legacy modules are those that inherit from EDProducer, EDFilter, EDAnalyzer or OutputModule. To guarantee that old workflows still work, all legacy modules are shared modules with maximum serialization enabled. The serialization for legacy modules is enabled implicitly by the framework, which parses all 'serialize' calls and assigns each legacy module to each resource queue.
async<art::InEvent>()
If you can guarantee that the external libraries the module uses are thread-safe, and that the data members are used in a thread-safe manner, then the async<art::InEvent>() call may be made in the module's constructor:
MyProducer::MyProducer(Parameters const& p)
  : SharedProducer{p}
  // other initializations
{
  produces<MyProduct>();
  async<art::InEvent>();
}
The async call is an instruction to the framework that the event-level function (produce in this case) may be called concurrently with any other module's member functions. A module that makes the async call is an asynchronous shared module.
An asynchronous shared module is optimal for both memory use and processing efficiency.
Replicated modules
There may be cases where you can ensure that all external libraries used by a module are thread-safe, but it is rather difficult to make individual data members of your module thread-safe (e.g. CLHEP random number engines). In such a situation, a replicated module can be used. A replicated module is one where, for a given module configuration, one module instance is created per schedule. Assume that the type of module m2 is changed to a MyReplicatedModule:
services.scheduler: {
  num_schedules: 2
}
physics: {
  producers: {
    m1: { module_type: MySharedModule1 }
    m2: { module_type: MyReplicatedModule }
    m3: { module_type: MySharedModule3 }
  }
  tp: [m1, m2, m3]
}
the processing infrastructure would then look like this:
where separate m2 instances have been created for each schedule. You should choose a replicated module if:
- the module does not need to see every event
- the module does not need to create SubRun or Run data products (current limitation of art which will be fixed in a newer version)
- the external libraries used by the module are thread-safe
- only module data members are not intrinsically thread-safe
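For reference, a minimal replicated producer might look like the following sketch (assuming art 3; the exact base-class and ProcessingFrame signatures may differ between art versions):

#include "art/Framework/Core/ReplicatedProducer.h"

class MyReplicatedModule : public art::ReplicatedProducer {
public:
  MyReplicatedModule(fhicl::ParameterSet const& p, art::ProcessingFrame const& frame)
    : ReplicatedProducer{p, frame}
    // per-replica data members (e.g. a CLHEP random engine) are initialized here
  {
    produces<MyProduct>();
  }

  void produce(art::Event& e, art::ProcessingFrame const&) override
  {
    // Each replica sees only the events assigned to its schedule, so
    // its data members need no synchronization.
  }
};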
|
https://cdcvs.fnal.gov/redmine/projects/art/wiki/Module_threading_types
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
You can generate Data Matrix in MS Excel spreadsheet, MS Access, Crystal Reports.
To print Barcode in C#, you need both barcode true type font and cruflbcsn.dll.
To get cruflbcsn.dll, you can download it from the Barcodesoft website.

Code 39 Barcode in C#
using cruflbcsn;
cruflbcsn.ILinear pLinear = new CLinear();
textBox2.Text = pLinear.Code39(textBox1.Text);
|
https://www.barcodesoft.net/barcode-csharp.aspx
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
In Part 2, we showed you how to use Istio’s built-in features and integrations with third-party tools to visualize your service mesh, including the metrics that we introduced in Part 1. While Istio’s containerized architecture makes it straightforward to plug in different kinds of visualization software like Kiali and Grafana, you can get deeper visibility into your service mesh and reduce the time you spend troubleshooting by monitoring Istio with a single platform.
In this post, we’ll show you how to use Datadog to monitor Istio, including how to:
- Collect metrics, traces, and logs automatically from Istio’s internal components and the services running within your mesh
- Use dashboards to visualize Istio metrics alongside metrics from Kubernetes and your containerized applications
- Visualize request traces between services in your mesh to find bottlenecks and misconfigurations
- Search and analyze all of the logs in your mesh to understand trends and get context
- Set alerts to get notified automatically of issues within your mesh
With Datadog, you can seamlessly navigate between Istio metrics, traces, and logs to place your Istio data in the context of your infrastructure as a whole. You can also use alerts to get notified automatically of possible issues within your Istio deployment.
Istio currently has full support only for Kubernetes, with alpha support for Consul and Nomad. As a result, we’ll assume that you’re running Istio with Kubernetes.
How to run Datadog in your Istio mesh
The Datadog Agent is open source software that collects metrics, traces, and logs from your environment and sends them to Datadog. Datadog’s Istio integration queries Istio’s Prometheus endpoints automatically, meaning that you don’t need to run your own Prometheus server to collect data from Istio. In this section, we’ll show you how to set up the Datadog Agent to get deep visibility into your Istio service mesh.
Set up the Datadog Agent
To start monitoring your Istio Kubernetes cluster, you’ll need to deploy:
- A node-based Agent that runs on every node in your cluster, gathering metrics, traces, and logs to send to Datadog
- A Cluster Agent that runs as a Deployment, communicating with the Kubernetes API server and providing cluster-level metadata to node-based Agents
With this approach, we can avoid the overhead of having all node-based Agents communicate with the Kubernetes control plane, as well as enrich metrics collected from node-based Agents with cluster-level metadata, such as the names of services running within the cluster.
You can install the Datadog Cluster Agent and node-based Agents by taking the following steps, which we’ll lay out in more detail below.
- Assign permissions that allow the Cluster Agent and node-based Agents to communicate with each other and to access your metrics, traces, and logs.
- Apply Kubernetes manifests for both the Cluster Agent and node-based Agents to deploy them to your cluster.
Configure permissions for the Cluster Agent and node-based Agents
Both the Cluster Agent and node-based Agents take advantage of Kubernetes’ built-in role-based access control (RBAC), and the first step is enabling the following:
- A ClusterRole that declares a named set of permissions for accessing Kubernetes resources, in this case to allow the Agent to collect data on your cluster
- A ClusterRoleBinding that assigns the ClusterRole to the service account that the Datadog Agent will use to access the Kubernetes API server
The Datadog Agent GitHub repository contains manifests that enable RBAC for the Cluster Agent and node-based Agents. One of these grants permissions to the Datadog Cluster Agent’s ClusterRole:
rbac-cluster-agent.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: datadog-cluster-agent
  namespace: <DATADOG_NAMESPACE>
rules:
- apiGroups:
  - ""
  resources:
  - services
  - events
  - endpoints
  - pods
  - nodes
  - componentstatuses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "autoscaling"
  resources:
  - horizontalpodautoscalers
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  - datadogtoken
  - datadog-leader-election
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
  - get
  - update
- nonResourceURLs:
  - "/version"
  - "/healthz"
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: datadog-cluster-agent
  namespace: <DATADOG_NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: datadog-cluster-agent
subjects:
- kind: ServiceAccount
  name: datadog-cluster-agent
  namespace: <DATADOG_NAMESPACE>
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: datadog-cluster-agent
  namespace: <DATADOG_NAMESPACE>
You’ll also need to create a manifest that grants the appropriate permissions to the node-based Agent’s ClusterRole.
rbac-agent.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: datadog-agent
  namespace: <DATADOG_NAMESPACE>
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - nodes/spec
  - nodes/proxy
  verbs:
  - get
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: datadog-agent
  namespace: <DATADOG_NAMESPACE>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: datadog-agent
  namespace: <DATADOG_NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: datadog-agent
subjects:
- kind: ServiceAccount
  name: datadog-agent
  namespace: <DATADOG_NAMESPACE>
Next, deploy the resources you’ve created.
$ kubectl apply -f /path/to/rbac-cluster-agent.yaml
$ kubectl apply -f /path/to/rbac-agent.yaml
You can verify that all of the appropriate ClusterRoles exist in your cluster by running this command:
$ kubectl get clusterrole | grep datadog
datadog-agent           1h
datadog-cluster-agent   1h
Enable secure communication between Agents
Next, we’ll ensure that the Cluster Agent and node-based Agents can securely communicate by creating a Kubernetes secret, which stores a cryptographic token that the Agents can access.
To generate the token (a 32-character string that we’ll encode in Base64), run the following:
echo -n '<32_CHARACTER_LONG_STRING>' | base64
Create a file named dca-secret.yaml and add your newly created token:
dca-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: datadog-auth-token
  namespace: <DATADOG_NAMESPACE>
type: Opaque
data:
  token: <NEW_SECRET_TOKEN>
Once you’ve added your token to the manifest,
apply it to create the secret:
$ kubectl apply -f /path/to/dca-secret.yaml
Run the following command to confirm that you’ve created the secret:
$ kubectl get secret | grep datadog
datadog-auth-token   Opaque   1   21h
Configure the Cluster Agent
To configure the Cluster Agent, create the following manifest, which declares two Kubernetes resources:
- A Deployment that adds an instance of the Cluster Agent container to your cluster
- A Service that allows the Datadog Cluster Agent to communicate with the rest of your cluster
This manifest links these resources to the service account we deployed above and points to the newly created secret. Make sure to add your Datadog API key where indicated. (Or use a Kubernetes secret as we did for the Cluster Agent authorization token.)
datadog-cluster-agent.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: datadog-cluster-agent
  namespace: <DATADOG_NAMESPACE>
spec:
  template:
    metadata:
      labels:
        app: datadog-cluster-agent
      name: datadog-agent
    spec:
      serviceAccountName: datadog-cluster-agent
      containers:
      - image: datadog/cluster-agent:latest
        imagePullPolicy: Always
        name: datadog-cluster-agent
        env:
        - name: DD_API_KEY
          value: "<DATADOG_API_KEY>"
        - name: DD_COLLECT_KUBERNETES_EVENTS
          value: "true"
        - name: DD_EXTERNAL_METRICS_PROVIDER_ENABLED
          value: "true"
        - name: DD_CLUSTER_AGENT_AUTH_TOKEN
          valueFrom:
            secretKeyRef:
              name: datadog-auth-token
              key: token
---
apiVersion: v1
kind: Service
metadata:
  name: datadog-cluster-agent
  namespace: <DATADOG_NAMESPACE>
  labels:
    app: datadog-cluster-agent
spec:
  ports:
  - port: 5005 # Has to be the same as the one exposed in the Cluster Agent. Default is 5005.
    protocol: TCP
  selector:
    app: datadog-cluster-agent
Configure the node-based Agent
The node-based Agent collects metrics, traces, and logs from each node and sends them to Datadog. We’ll ensure that an Agent pod runs on each node in the cluster, even for newly launched nodes, by declaring a DaemonSet. Create the following manifest, adding your Datadog API key where indicated:
datadog-agent.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: datadog-agent
  namespace: <DATADOG_NAMESPACE>
spec:
  template:
    metadata:
      labels:
        app: datadog-agent
      name: datadog-agent
    spec:
      serviceAccountName: datadog-agent
      containers:
      - image: datadog/agent:latest
        imagePullPolicy: Always
        name: datadog-agent
        env:
        - name: DD_API_KEY
          value: "<DATADOG_API_KEY>"
        - name: DD_COLLECT_KUBERNETES_EVENTS
          value: "true"
        - name: DD_TAGS
          value: "env:<YOUR_ENV_NAME>"
Disable automatic sidecar injection for Datadog Agent pods
You’ll also want to prevent Istio from automatically injecting Envoy sidecars into your Datadog Agent pods and interfering with data collection. You need to disable automatic sidecar injection for both the Cluster Agent and node-based Agents by revising each manifest to include the following annotation:
[...]
spec:
  [...]
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
[...]
Then deploy the Datadog Agents:
$ kubectl apply -f /path/to/datadog-cluster-agent.yaml
$ kubectl apply -f /path/to/datadog-agent.yaml
Use the following kubectl command to verify that your Cluster Agent and node-based Agent pods are running. There should be one pod named datadog-agent-<STRING> running per node, and a single instance of datadog-cluster-agent-<STRING>.
$ kubectl -n <DATADOG_NAMESPACE> get pods
NAME                                    READY   STATUS    RESTARTS   AGE
datadog-agent-bqtdt                     1/1     Running   0          4d22h
datadog-agent-gb5fs                     1/1     Running   0          4d22h
datadog-agent-lttmq                     1/1     Running   0          4d22h
datadog-agent-vnkqx                     1/1     Running   0          4d22h
datadog-cluster-agent-9b5b56d6d-jwg2l   1/1     Running   0          5d22h
Once you’ve deployed the Cluster Agent and node-based Agents, Datadog will start to report host– and platform-level metrics from your Kubernetes cluster.
Before you can get metrics from Pilot, Galley, Mixer, Citadel, and services within your mesh, you’ll need to set up Datadog’s Istio integration.
Set up the Istio integration
The Datadog Agent’s Istio integration automatically queries Istio’s Prometheus metrics endpoints, enriches all of the data with tags, and forwards it to the Datadog platform. The Datadog Cluster Agent uses a feature called endpoints checks to detect Istio’s Kubernetes services, identify the pods that back them, and send configurations to the Agents on the nodes running those pods. Each node-based Agent then uses these configurations to query the Istio pods running on the local node for data.
If you horizontally scale an Istio component, there is a risk that requests to that component’s Kubernetes service will load balance randomly across the component’s pods. Endpoints checks enable the Datadog Agent to bypass Istio’s Kubernetes services and query the backing pods directly, avoiding the risk of load balancing queries.
The Datadog Agent uses Autodiscovery to track the services exposing Istio’s Prometheus endpoints. We can enable the Istio integration by annotating these services. The annotations contain Autodiscovery templates—when the Cluster Agent detects that a currently deployed service contains a relevant annotation, it will identify each backing pod, populate the template with the pod’s IP address, and send the resulting configuration to a node-based Agent. We’ll create one Autodiscovery template per Istio component—each Agent will only load configurations for Istio pods running on its own node.
Note that you’ll need to run versions 6.17+ or 7.17+ of the node-based Agent and version 1.5.2+ of the Datadog Cluster Agent.
Run the following script to annotate each Istio service using kubectl patch. Since there are multiple ways to install Istio, this approach lets you annotate your services without touching their manifests.
#!/bin/bash

kubectl -n istio-system patch service istio-telemetry --patch "$(cat<<EOF
metadata:
  annotations:
    ad.datadoghq.com/endpoints.check_names: '["istio"]'
    ad.datadoghq.com/endpoints.init_configs: '[{}]'
    ad.datadoghq.com/endpoints.instances: |
      [
        {
          "istio_mesh_endpoint": "",
          "mixer_endpoint": "",
          "send_histograms_buckets": true
        }
      ]
EOF
)"

kubectl -n istio-system patch service istio-galley --patch "$(cat<<EOF
metadata:
  annotations:
    ad.datadoghq.com/endpoints.check_names: '["istio"]'
    ad.datadoghq.com/endpoints.init_configs: '[{}]'
    ad.datadoghq.com/endpoints.instances: |
      [
        {
          "galley_endpoint": "",
          "send_histograms_buckets": true
        }
      ]
EOF
)"

kubectl -n istio-system patch service istio-pilot --patch "$(cat<<EOF
metadata:
  annotations:
    ad.datadoghq.com/endpoints.check_names: '["istio"]'
    ad.datadoghq.com/endpoints.init_configs: '[{}]'
    ad.datadoghq.com/endpoints.instances: |
      [
        {
          "pilot_endpoint": "",
          "send_histograms_buckets": true
        }
      ]
EOF
)"

kubectl -n istio-system patch service istio-citadel --patch "$(cat<<EOF
metadata:
  annotations:
    ad.datadoghq.com/endpoints.check_names: '["istio"]'
    ad.datadoghq.com/endpoints.init_configs: '[{}]'
    ad.datadoghq.com/endpoints.instances: |
      [
        {
          "citadel_endpoint": "",
          "send_histograms_buckets": true
        }
      ]
EOF
)"
When the Cluster Agent identifies a Kubernetes service that contains these annotations, it uses them to fill in configuration details for the Istio integration. The %%host%% template variable becomes the IP of a pod backing the service. The Cluster Agent sends the configuration to a Datadog Agent running on the same node, and the Agent uses the configuration to query the pod’s metrics endpoint.
You can also provide a value for the option send_histograms_buckets—if this option is enabled (the default), the Datadog Agent will tag any histogram-based metrics with the upper_bound prefix, indicating the name of the metric’s quantile bucket.
Next, update the node-based Agent and Cluster Agent manifests to enable endpoints checks. The Datadog Cluster Agent sends endpoint check configurations to node-based Agents using cluster checks, and you will need to enable these as well. In the node-based Agent manifest, add the following environment variables:
datadog-agent.yaml
# [...]
spec:
  template:
    spec:
      containers:
      - image: datadog/agent:latest
        # [...]
        env:
        # [...]
        - name: DD_EXTRA_CONFIG_PROVIDERS
          value: "endpointschecks clusterchecks"
If you set DD_EXTRA_CONFIG_PROVIDERS to endpointschecks, the node-based Agents will collect endpoint check configurations from the Cluster Agent. We also need to add the value clusterchecks, which tells the node-based Agent to pull configurations from the Cluster Agent.
Now add the following environment variables to the Cluster Agent manifest:
datadog-cluster-agent.yaml
# [...]
spec:
  template:
    spec:
      containers:
      - image: datadog/cluster-agent:latest
        # [...]
        env:
        # [...]
        - name: DD_CLUSTER_CHECKS_ENABLED
          value: "true"
        - name: DD_EXTRA_CONFIG_PROVIDERS
          value: "kube_endpoints kube_services"
        - name: DD_EXTRA_LISTENERS
          value: "kube_endpoints kube_services"
The DD_EXTRA_CONFIG_PROVIDERS and DD_EXTRA_LISTENERS variables tell the Cluster Agent to query the Kubernetes API server for the status of currently active endpoints and services.
Finally, apply the changes.
$ kubectl apply -f path/to/datadog-agent.yaml
$ kubectl apply -f path/to/datadog-cluster-agent.yaml
After running these commands, you should expect to see Istio metrics flowing into Datadog. The easiest way to confirm this is to navigate to our out-of-the-box dashboard for Istio, which we’ll explain in more detail later.
Finally, enable the Istio integration by clicking the tile in your Datadog account.
You can also use Autodiscovery to collect metrics, traces, and logs from the applications running in your mesh with minimal configuration. Consult Datadog’s documentation for the configuration details you’ll need to include.
Get high-level views of your Istio mesh
When running a complex distributed system using Istio, you’ll want to ensure that your nodes, containers, and services are performing as expected. This goes for both Istio’s internal components (Pilot, Mixer, Galley, Citadel, and your mesh of Envoy proxies) and the services that Istio manages. Datadog helps you visualize the health and performance of your entire Istio deployment in one place.
Visualize all of your Istio metrics together
After installing the Datadog Agent and enabling the Istio integration, you’ll have access to an out-of-the-box dashboard showing key Istio metrics. You can see request throughput and latency from throughout your mesh, as well as resource utilization metrics for each of Istio’s internal components.
You can then clone the out-of-the-box Istio dashboard and customize it to produce the most helpful view for your environment. Datadog imports tags automatically from Docker, Kubernetes, and Istio, as well as from the mesh-level metrics that Mixer exports to Prometheus (e.g., source_app and destination_service_name). You can use tags to group and filter dashboard widgets to get visibility into Istio’s performance. For example, the following timeseries graph and toplist use the adapter tag to show how many dispatches Mixer makes to each adapter.
You can also quickly understand the scope of an issue (does it affect a host, a pod, or your whole cluster?) by using Datadog’s mapping features: the host map and container map. Using the container map, you can easily localize issues within your Kubernetes cluster. And if issues are due to resource constraints within your Istio nodes, this will become apparent within the host map.
You can color the host map based on the current value of any metric (and the container map based on any resource metric), making it clear which parts of your infrastructure are underperforming or overloaded. You can then use tags to group and filter the maps, helping you answer any questions about your infrastructure.
The dashboard above shows CPU utilization in our Istio deployment. In the upper-left widget, we can see that this metric is high for two hosts. To investigate, we can use the container map on the bottom left to see if any container running within those hosts is facing unusual load. Istio’s components might run on any node in your cluster—the same goes for the pods running your services. To monitor our pods regardless of where they are running, we can group containers by the service tag, making it clear which Istio components or mesh-level services are facing the heaviest demand. The kube_namespace tag allows us to view components and services separately.
Get insights into mesh activity
Getting visibility into traffic between Istio-managed services is key to understanding the health and performance of your service mesh. With Datadog’s distributed tracing and application performance monitoring, you can trace requests between your Istio-managed services to understand your mesh and troubleshoot issues. You can display your entire service topology using the Service Map, visualize the path of each request through your mesh using flame graphs, and get a detailed performance portrait of each service. From APM, you can easily navigate to related metrics and logs, allowing you to troubleshoot more quickly than you would with dedicated graphing, tracing, and log collection tools.
Set up tracing
Receiving traces
First, you’ll need to instruct the node-based Agents to accept traces. Edit the node-based Agent manifest to include the following attributes.
datadog-agent.yaml
[...]
env:
  [...]
  - name: DD_APM_ENABLED
    value: "true"
  - name: DD_APM_NON_LOCAL_TRAFFIC
    value: "true"
  - name: DD_APM_ENV
    value: "istio-demo"
[...]
DD_APM_ENABLED instructs the Agent to collect traces. DD_APM_NON_LOCAL_TRAFFIC configures the Agent to listen for traces from containers on other hosts. Finally, if you want to keep traces from your Istio cluster separate from other projects within your organization, use the DD_APM_ENV variable to customize the env: tag for your traces (env:none by default). You can then filter by this tag within Datadog.
Next, forward port 8126 from the node-based Agent container to its host, allowing the host to listen for distributed traces.
datadog-agent.yaml
[...]
ports:
  [...]
  - containerPort: 8126
    hostPort: 8126
    name: traceport
    protocol: TCP
[...]
This example configures Datadog to trace requests between Envoy proxies, so you can visualize communication between your services without having to instrument your application code. If you want to trace activity within an application, e.g., a function call, you can use Datadog’s tracing libraries to either auto-instrument your application or declare traces within your code for fine-grained benchmarking and troubleshooting.
Finally, create a service for the node-based Agent, so it can receive traces from elsewhere in the mesh. We’ll use a headless service to avoid needlessly allocating a cluster IP to the Agent. Create the following manifest and apply it using kubectl apply:
dd-agent-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: datadog-agent
  name: datadog-agent
  namespace: <DATADOG_NAMESPACE>
spec:
  clusterIP: None
  ports:
  - name: dogstatsdport
    port: 8125
    protocol: UDP
    targetPort: 8125
  - name: traceport
    port: 8126
    protocol: TCP
    targetPort: 8126
  selector:
    app: datadog-agent
After you apply this configuration, the Datadog Agent should be able to receive traces from Envoy proxies throughout your cluster. In the next step, you’ll configure Istio to send traces to the Datadog Agent.
Sending traces
Istio has built-in support for distributed tracing using several possible backends, including Datadog. You need to configure tracing by setting three options:
1. pilot.traceSampling is the percentage of requests that Istio will record as traces. Set this to 100.00 to send all traces to Datadog—you can then determine within Datadog how long to retain your traces.
2. global.proxy.tracer instructs Istio to use a particular tracing backend, in our case datadog.
3. tracing.enabled instructs Istio to record traces of requests within your service mesh.
Run the following command to enable Istio to send traces automatically to Datadog:
helm upgrade --install istio <ISTIO_INSTALLATION_PATH>/install/kubernetes/helm/istio --namespace istio-system --set pilot.traceSampling=100.0,global.proxy.tracer=datadog,tracing.enabled=true
Visualize mesh topology with the Service Map
Datadog automatically generates a Service Map from distributed traces, allowing you to quickly understand how services communicate within your mesh. The Service Map gives you a quick read into the results of your Istio configuration, so you can identify issues and determine where you might begin to optimize your network.
If you have set up alerts for any of your services (we’ll introduce these in a moment), the Service Map will show their status. In this example, an alert has triggered for the productpage service in the default namespace. We can navigate directly from the Service Map to see which alerts have triggered.
And if you click on “View service overview,” you can get more context into service-level issues by viewing request rates, error rates, and latencies for a single service over time. For example, we can navigate to the overview of the productpage service to see when the service started reporting a high rate of errors, and correlate the beginning of the issue with metrics, traces, and logs from the same time.
Understand your Istio logs
If services within your mesh fail to communicate as expected, you’ll want to consult logs to get more context. As traffic flows throughout your Istio mesh, Datadog can help you cut through the complexity by collecting all of your Istio logs in one platform for visualization and analysis.
Set up Istio log collection
To enable log collection, edit the datadog-agent.yaml manifest you created earlier to provide a few more environment variables:
DD_LOGS_ENABLED: switches on Datadog log collection
DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL: tells each node-based Agent to collect logs from all containers running on that node
DD_AC_EXCLUDE: filters out logs from certain containers before they reach Datadog, such as, in our case, those from Datadog Agent containers
datadog-agent.yaml
[...]
env:
  [...]
  - name: DD_LOGS_ENABLED
    value: "true"
  - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
    value: "true"
  - name: DD_AC_EXCLUDE
    value: "name:datadog-agent name:datadog-cluster-agent"
[...]
Next, edit the file to mount the node-based Agent container to the local node’s Docker socket. Since you’ll be deploying the Datadog Agent pod as a DaemonSet, each Agent will read logs from the Docker socket on its local node, enrich them with tags imported from Docker, Kubernetes, and your cloud provider, and send them to Datadog. Istio’s components publish logs to stdout and stderr by default, meaning that the Datadog Agent can collect all of your Istio logs from the Docker socket.
datadog-agent.yaml
(...)
volumeMounts:
  (...)
  - name: dockersocket
    mountPath: /var/run/docker.sock
(...)
volumes:
  (...)
  - hostPath:
      path: /var/run/docker.sock
    name: dockersocket
(...)
Note that if you plan to run more than 10 containers in each pod, you’ll want to configure the Agent to use a Kubernetes-managed log file instead of the Docker socket.
Once you run kubectl apply -f path/to/datadog-agent.yaml, you should start seeing your logs within Datadog.
Discover trends with Log Patterns
Once you’re collecting logs from your Istio mesh, you can start exploring them in Datadog. The Log Patterns view helps you extract trends by displaying common strings within your logs and generalizing the fields that vary into regular expressions. The result is a summary of common log types. This is especially useful for reducing noise within your Istio-managed environment, where you might be gathering logs from all of Istio’s internal components in addition to Envoy proxies and the services in your mesh.
In this example, we used the sidebar to display only the patterns having to do with our Envoy proxies. We also filtered out INFO-level logs. Now that we know which error messages are especially common—Mixer is having trouble connecting to its upstream services—we can determine how urgent these errors are and how to go about resolving them.
Set alerts for automatic monitoring
When running a complex distributed system, it’s impossible to watch every host, pod, and container for possible issues. You’ll want some way to automatically get notified when something goes wrong in your Istio mesh. Datadog allows you to set alerts on any kind of data it collects, including metrics, logs, and request traces.
In this example, we’re creating an alert that will notify us whenever requests to the
productpage service in Istio’s “Bookinfo” sample application take place at an unusual frequency, using APM data and Datadog’s anomaly detection algorithm.
You can also get automated insights into aberrant trends with Datadog’s Watchdog feature, which automatically flags performance anomalies in your dynamic service mesh. With Watchdog, you can easily detect issues like heavy request traffic, service outages, or spikes in demand, without setting up any alerts. Watchdog searches your APM-based metrics (request rates, request latencies, and error rates) for possible issues, and presents these to you as a feed when you first log in.
A view of your mesh at every scale
In this post, we’ve shown you how to use Datadog to get comprehensive visibility into metrics, traces, and logs from throughout your Istio mesh. Integrated views allow you to navigate easily between data sources, troubleshoot issues, and manage the complexity that comes with running a service mesh. If you’re not already using Datadog, you can sign up for a free trial.
|
https://www.datadoghq.com/blog/istio-datadog/
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
gatsby-plugin-page-transitions
** NOT COMPATIABLE WITH GATSBY 2 **
The API and the features this plugin provides is no longer possible with Gatsby 2. For simple page fade transitions the Gatsby team has provided an adequate example.
With Gatsby 2, the plugin will FAIL TO BUILD because the
replaceHistory API has been removed. While the replacement
onRouteUpdate callback allows you to detect URL changes, it only does so when the URL has ALREADY been updated.
This plugin needs to know BEFORE the URL changes, and relies on replacing the history and letting
history.block() give the page time to complete the exit transition for custom/multiple transitions before unmounting.
Gatsby 2’s removal of
replaceHistory means that exit transitions will always be bugged, because the page isn’t blocked and your component will disappear immediately as it unmounts.
The official example works by using
gatsby-plugin-layout to load a layout component with the
TransitionGroup inside that never unmounts, letting the
TransitionGroup handle exit transition timing. This should be adequate for most users, but renders this plugin redundant in the value it provides.
** ONLY APPLICABLE FOR GATSBY 1 **
Add page transitions to your Gatsby site.
Allows you to declaratively add page transitions, as well as specify unique transition strategies for any page on an individual basis.
Examples of usage can be found in the Github repository here.
Examples cover:
- Default Transition
- Custom Transition
- No Transition
- Multiple Transitions
Install
- Install the gatsby-plugin-page-transitions plugin:
npm install --save gatsby-plugin-page-transitions
or
yarn add gatsby-plugin-page-transitions
Usage
- Add into gatsby-config.js.
// gatsby-config.js
module.exports = {
  plugins: [
    'gatsby-plugin-page-transitions'
  ]
}
- Import PageTransition, and wrap the root of each page where transitions are desired with the PageTransition component
import React from 'react';
import PageTransition from 'gatsby-plugin-page-transitions';

const Index = () => (
  <PageTransition>
    <div>
      <span>Some</span>
      <span>Content</span>
      <span>Here</span>
    </div>
  </PageTransition>
)
Pages that are not wrapped with the PageTransition element navigate immediately, allowing you to declaratively specify which pages have transitions.
Configuration Options
If no options are specified, the plugin defaults to:
- Transition time of 250ms. This is the amount of time the browser blocks navigation, waiting for the animation to finish.
- Opacity ease-in-out transition from the react-transition-group examples, here.
There is a convenience option to let you modify the transition time on the default opacity transitions, like so:
// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-plugin-page-transitions',
      options: {
        transitionTime: 500
      }
    }
  ]
}
Advanced
If you want to specify your own transition styles when the component enters or leaves, you can do so by passing props into the
PageTransition component.
The component takes in 3 props:
- duration: How long the browser should wait for the animation until navigating. This number should match the CSS transition time you choose.
- defaultStyle: JS style object of what the component looks like by default
- transitionStyles: Object with keys of the transition states (entering, entered, exiting, exited) that have JS style objects of the styles of each transition state.
These follow the transitional styling convention from react-transition-group here.
The plugin is a wrapper around
react-transition-group, so please see their documentation for implementation details.
For an example, if you wanted a transition to:
- Slide in and out from the left
- Lasting 500ms
- Transitions with a cubic-bezier function
Just pass your desired transition down as props into the
PageTransition element.
import React from 'react';
import PageTransition from 'gatsby-plugin-page-transitions';

const Index = () => (
  <PageTransition
    defaultStyle={{
      transition: 'left 500ms cubic-bezier(0.47, 0, 0.75, 0.72)',
      left: '100%',
      position: 'absolute',
      width: '100%',
    }}
    transitionStyles={{
      entering: { left: '0%' },
      entered: { left: '0%' },
      exiting: { left: '100%' },
    }}
    transitionTime={500}
  >
    <div>
      <span>Some</span>
      <span>Content</span>
      <span>Here</span>
    </div>
  </PageTransition>
)
Notice that the 500ms string is specified as the transition length in the JS CSS object. The component needs to be passed 500 in the transitionTime prop, so the browser can wait for the animation to finish before navigating to the next path.
You can use this method to specify unique transition strategies for each page individually, or wrap PageTransition yourself for a custom reusable transition.
Page Transition Event
At a high level the plugin operates this way:
- User clicks a link to another page.
- Page change is caught, and navigation is paused for however long the transitionTime is specified.
- Page transition event 'gatsby-plugin-page-transition::exit' is fired.
- Rendered components listening to the page transition event play the transition.
- Pause is released, and browser navigates.
If you require even more control, such as making different elements on the page transition in different ways, you’ll need to listen for the page’s transition event. Full implementation found in the examples here.
If you are using react-transition-group’s Transition component as specified here, then your page might generically look something like this:
import React from 'react'
import PageTransition from 'gatsby-plugin-page-transitions'
import Transition from 'react-transition-group/Transition'

const pageTransitionEvent = 'gatsby-plugin-page-transition::exit';

const defaultStyle = {
  // Default transition styling
}

const transitionStyles = {
  // Transition styling
}

class CustomComponent extends React.Component {
  constructor (props) {
    super(props)
    this.listenHandler = this.listenHandler.bind(this)
    this.state = { in: false }
  }

  componentDidMount () {
    global.window.addEventListener(pageTransitionEvent, this.listenHandler)
    this.setState({ in: true })
  }

  listenHandler () {
    this.setState({ in: false })
  }

  componentWillUnmount () {
    global.window.removeEventListener(pageTransitionEvent, this.listenHandler)
  }

  render () {
    return (
      <PageTransition transitionTime={500}>
        <Transition in={this.state.in} timeout={500}>
          {(state) => (
            <div style={{
              ...defaultStyle,
              ...transitionStyles[state]
            }}>
              Elements
            </div>
          )}
        </Transition>
      </PageTransition>
    )
  }
}
This component is doing several things:
- Per react-transition-group, there is local state this.state.in tracking if transitioning elements should be "in" or not
- this.state.in begins as false, with elements not "in".
- On mount, this.state.in is flipped to true, and elements are transitioned in.
- On mount, component begins listening to gatsby-plugin-page-transition::exit
- When user navigates away, global window gatsby-plugin-page-transition::exit event fires
- <PageTransition transitionTime={500}> component will handle the event by pausing navigation, with an allotted transitionTime of 500ms
- Local component listenHandler sets this.state.in to false, and elements are transitioned out. Transitions should take 500ms or less, if they are to complete before page navigation.
- Transitions complete, page navigates, component cleans up listeners.
|
https://www.gatsbyjs.org/packages/gatsby-plugin-page-transitions/
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
#include <sys/stat.h>

int utimensat(int fd, const char *path, const struct timespec times[2], int flag);

#include <sys/time.h>

int utimes(const char *path, const struct timeval times[2]);
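For example, a caller can set a file's access time to the current time while leaving its modification time unchanged; AT_FDCWD comes from <fcntl.h>, and UTIME_NOW and UTIME_OMIT from <sys/stat.h>:

#include <fcntl.h>
#include <sys/stat.h>

struct timespec times[2];
times[0].tv_nsec = UTIME_NOW;   /* access time: set to current time */
times[1].tv_nsec = UTIME_OMIT;  /* modification time: leave unchanged */
if (utimensat(AT_FDCWD, "file.txt", times, 0) == -1) {
    /* handle error */
}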
Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see .
|
https://man.linuxreviews.org/man3p/utimensat.3p.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
AWS Fargate has steadily gained traction in Amazon Elastic Container Service (ECS) environments because it allows users to run containerized applications without thinking about their underlying infrastructure. Today, AWS announced that support for Amazon Elastic Kubernetes Service (EKS) on AWS Fargate is now generally available, giving Amazon EKS users the option to seamlessly manage their infrastructure with AWS Fargate instead of manually provisioning EC2 worker nodes.
Datadog is pleased to work with AWS for the launch of Amazon EKS on AWS Fargate, so you can automatically collect metrics and get deep visibility into your environment. This integration also includes support for Autodiscovery, so the Datadog Agent can immediately detect applications running in your cluster and collect monitoring data from them. You can also configure Datadog APM to collect distributed traces from applications running on Amazon EKS to monitor their performance in real time.
In this post, we’ll show you an example of how you can deploy the Datadog Agent to get visibility into an application that runs on Amazon EKS using AWS Fargate.
Deploy Datadog on Amazon EKS on AWS Fargate
AWS Fargate abstracts away the underlying infrastructure of Amazon EKS and provides on-demand compute capacity for containers. So instead of deploying the Datadog Agent to your nodes, as you would in a regular Kubernetes cluster, you’ll need to run the Datadog Agent as a sidecar container on each pod to ensure that all your pods are monitored. If you’re running a mixed Amazon EKS cluster, with some pods running on AWS Fargate and some pods running on Amazon EC2 instances, you should still deploy the Agent as a DaemonSet to the EC2 instances.
You will also need to set up role-based access control (RBAC) in Amazon EKS so the Datadog Agent can query the Kubernetes API for monitoring data; see the documentation for full details on setting up the Datadog service account, ClusterRole, and ClusterRoleBinding. If your application also utilizes RBAC, you’ll need to ensure that the Agent container and your application containers have all the permissions they need, for example by creating a ClusterRoleBinding that links your application’s service account with the Agent’s ClusterRole, or by adding the Datadog Agent’s required permissions to the ClusterRole associated with your application’s service account.
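For example, a ClusterRoleBinding along these lines (the binding name and service account here are illustrative) would grant the Agent’s ClusterRole to your application’s service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: datadog-agent-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: datadog-agent
subjects:
- kind: ServiceAccount
  name: <YOUR_APP_SERVICE_ACCOUNT>
  namespace: default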
To deploy Datadog in Amazon EKS on AWS Fargate, define three environment variables in your deployment manifest:
- DD_API_KEY: your Datadog API key
- DD_EKS_FARGATE: set this to true
- DD_KUBERNETES_KUBELET_NODENAME: set this as specified in the example below.
The following example will deploy the containerized Datadog Agent as a sidecar in the same pod as a Redis container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
      name: redis
      annotations:
        ad.datadoghq.com/redis.check_names: '["redisdb"]'
        ad.datadoghq.com/redis.init_configs: '[{}]'
        ad.datadoghq.com/redis.instances: |
          [
            {
              "host": "%%host%%",
              "port": "6379"
            }
          ]
    spec:
      serviceAccountName: datadog-agent
      containers:
      - name: redis
        image: redis:latest
        args:
        - "redis-server"
        ports:
        - containerPort: 6379
      - image: datadog/agent
        name: datadog-agent
        env:
        - name: DD_API_KEY
          value: "<YOUR_DATADOG_API_KEY>"
        - name: DD_TAGS
          value: "[clustername:my-eks-fargate-cluster]"
        - name: DD_EKS_FARGATE
          value: "true"
        - name: DD_KUBERNETES_KUBELET_NODENAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
In this example, we’ve added pod annotations to configure Autodiscovery, which means the Agent will automatically run a check on the Redis container in the same pod.
You can also deploy Datadog’s Cluster Agent on Amazon EKS on AWS Fargate by following the steps in our documentation. The Cluster Agent will run on a single pod and collect events from the API server. It will also complete cluster checks if you’ve configured any (e.g., if you’d like to run an HTTP check to monitor the latency of an NGINX service).
To monitor any of the AWS services you’re running alongside Amazon EKS on AWS Fargate, such as Application Load Balancer, make sure to enable our AWS integration if you haven’t done so already.
Monitor Amazon EKS on AWS Fargate with Datadog
Once you’ve deployed the Agent to your application running on Amazon EKS on AWS Fargate, you should see Kubernetes metrics appear in Datadog, along with data from services detected via Autodiscovery (e.g., Redis in the example above). Although Datadog can collect host-level metrics from any EKS nodes that aren’t managed by Fargate, you won’t see any system-level metrics from any of your Fargate-provisioned “hosts” because AWS manages that infrastructure for you.
This integration includes standard Kubernetes and Docker tags (e.g., docker_image, pod_name, kube_deployment) that you can use to filter for any subset of your container infrastructure. If you included a DD_TAGS environment variable in your manifest, you will also be able to leverage any custom tags specified there.
Deep visibility, right out of the gate
With Amazon EKS on AWS Fargate, teams can spend more time developing their container applications, and less time managing the underlying infrastructure. To learn more about running Amazon EKS pods on AWS Fargate, check out the official documentation. For more information about Amazon EKS monitoring, read our in-depth guide.
We’re pleased to provide real-time visibility into your dynamic container environments, regardless of how they’re managed or where they’re running. If you’re already a Datadog customer, consult our documentation to learn how to get started. Otherwise, sign up for a free trial.
|
https://www.datadoghq.com/blog/eks-fargate-monitoring/?lang_pref=en
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
ustat - get file system statistics
#include <sys/types.h>
#include <ustat.h>

int ustat(dev_t dev, struct ustat *buf);
The ustat() function returns information about a mounted file system. The dev argument is a device number identifying a device containing a mounted file system (see makedev(3C)). The buf argument is a pointer to a ustat structure that includes the following members:

daddr_t f_tfree;     /* Total free blocks */
ino_t   f_tinode;    /* Number of free inodes */
char    f_fname[6];  /* Filsys name */
char    f_fpack[6];  /* Filsys pack name */

The f_fname and f_fpack members may not contain significant information on all systems; in this case, these members will contain the null character as the first character.
Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.
The ustat() function will fail if:

ECOMM      The dev argument is on a remote machine and the link to that machine is no longer active.
EFAULT     The buf argument points to an illegal address.
EINTR      A signal was caught during the execution of the ustat() function.
EINVAL     The dev argument is not the device number of a device containing a mounted file system.
ENOLINK    The dev argument refers to a device on a remote machine and the link to that machine is no longer active.
EOVERFLOW  One of the values returned cannot be represented in the structure pointed to by buf.
The statvfs(2) function should be used in favor of ustat().
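For illustration, a minimal sketch that reports the free space of the file system containing the root directory, using stat(2) to obtain the device number:

#include <sys/types.h>
#include <sys/stat.h>
#include <ustat.h>
#include <stdio.h>

int main(void)
{
    struct stat st;
    struct ustat u;

    if (stat("/", &st) == -1) {         /* device containing "/" */
        perror("stat");
        return 1;
    }
    if (ustat(st.st_dev, &u) == -1) {   /* query the mounted file system */
        perror("ustat");
        return 1;
    }
    printf("free blocks: %ld, free inodes: %ld\n",
        (long)u.f_tfree, (long)u.f_tinode);
    return 0;
}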
stat(2), statvfs(2), makedev(3C), lfcompile(5)
The NFS revision 2 protocol does not permit the number of free files to be provided to the client; therefore, when ustat() has completed on an NFS file system, f_tinode is always -1.
|
http://man.eitan.ac.il/cgi-bin/man.cgi?section=2&topic=ustat
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
My service implementation uses Hibernate DAO objects which should be managing their own connections via the c3p0 connection pool. All my service objects and DAO objects are marked as singletons in my Spring applicationContext.xml file. I don't think any of my transactions are failing, as it would most likely throw an exception if there was a problem performing a transaction, and all exceptions are logged (at least I believe they are).
I will most likely put the following c3p0 settings in place and wait to see what comes out of this in production.

c3p0.unreturnedConnectionTimeout=30
c3p0.debugUnreturnedConnectionStackTraces=true

This should give me a stack trace of what checked out a connection, where that connection hasn't been returned to the pool in 30 seconds. It will be a slight performance hit on production but I should be able to find the problem very quickly.

Thanks.

Jeff

-----Original Message-----
From: Dan Retzlaff [mailto:dretzl...@gmail.com]
Sent: Wednesday, March 21, 2012 5:04 PM
To: users@wicket.apache.org
Subject: Re: LDM - correct construction

Jeffrey,

That won't prevent a connection from being released. The LDM holds a reference to the page, and the page has a serializable proxy to the service implementation. No problem there. Injecting the service into the LDM is only an advantage if you want to share it among pages; then the LDM can be static and its page reference goes away.

It sounds like your service implementation may have an issue. Does your service manage its own connections? If so, is it a singleton? Other than that, I'd guess there's a bug in your transaction management such that a transaction gets started but not finished.

HTH,
Dan

On Wed, Mar 21, 2012 at 1:45 PM, Jeffrey Schneller <jeffrey.schnel...@envisa.com> wrote:
> Is this the correct construction of a LDM where I need to use a
> service bean to access my database? The IMyService bean is injected
> on the page and passed to the LDM. Does this make the LDM hold a
> reference to the IMyService bean and possibly keep a connection from
> being put back into the c3p0 db connection pool? After some period of
> time my application is blocked with all threads waiting on a
> connection to the db.
>
> Should I be injecting the IMyService bean into the LDM using the
> commented out code? Thanks for any help.
>
> public class MyLDM extends LoadableDetachableModel<com.example.MyObject> {
>
>     //@SpringBean
>     private IMyService service;
>
>     private String id;
>
>     public MyLDM(String id, IMyService service) {
>         this.id = id;
>         this.service = service;
>     }
>
>     //public MyLDM(String id) {
>     //    this.id = id;
>     //}
>
>     @Override
>     protected com.example.MyObject load() {
>         //InjectorHolder.getInjector().inject(this);
>         return service.getMyObject(this.id);
>     }
> }

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@wicket.apache.org
For additional commands, e-mail: users-h...@wicket.apache.org
|
https://www.mail-archive.com/users@wicket.apache.org/msg70100.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
I just completed an implementation of an immutable hash array mapped trie (HAMT) in C#. The HAMT is an ingenious hash tree first described by Phil Bagwell. It's used in many different domains because of its time and space efficiency, although only some languages use the immutable variant. For instance, Clojure uses immutable HAMTs to implement arrays/vectors which are essential to its concurrency.
The linked implementation is pretty much the bare minimum supporting add, remove and lookup operations, so if you're interested in learning more about it, it's a good starting point. Many thanks also to krukow's fine article which helped me quickly grasp the bit-twiddling needed for the HAMT. The tree interface is basically this:
/// <summary>
/// An immutable hash-array mapped trie.
/// </summary>
/// <typeparam name="K">The type of keys.</typeparam>
/// <typeparam name="T">The type of values.</typeparam>
public class Tree<K, T> : IEnumerable<KeyValuePair<K, T>>
{
    /// <summary>
    /// The number of elements in the tree.
    /// </summary>
    public virtual int Count { get; }

    /// <summary>
    /// Find the value for the given key.
    /// </summary>
    /// <param name="key">The key to lookup.</param>
    /// <returns>
    /// The value corresponding to <paramref name="key"/>.
    /// </returns>
    /// <exception cref="KeyNotFoundException">
    /// Thrown if the key is not found in this tree.
    /// </exception>
    public T this[K key] { get; }

    /// <summary>
    /// Add the given key-value pair to the tree.
    /// </summary>
    /// <param name="key">The key.</param>
    /// <param name="value">The value for the given key.</param>
    /// <returns>A tree containing the key-value pair.</returns>
    public Tree<K, T> Add(K key, T value);

    /// <summary>
    /// Remove the element with the given key.
    /// </summary>
    /// <param name="key">The key to remove.</param>
    /// <returns>
    /// A tree without the value corresponding to
    /// <paramref name="key"/>.
    /// </returns>
    public Tree<K, T> Remove(K key);
}
No benchmarks yet, it's still early stages. The implementation is based on a few functions from my Sasa class library, primarily some bit-twiddling functions from Sasa.Binary.
The whole implementation is literally 200 lines of code, excluding comments. The only deficiency of the current implementation is that it doesn't properly handle hash collisions. A simple linear chain on collision would be a simple extension. I also need an efficient tree merge operation. I was initially implementing Okasaki's Patricia trees because of their efficient merge, but HAMTs are just so much better in every other way. If anyone has any pointers to efficient merging for HAMTs, I'd be much obliged!
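For the curious, such a collision bucket might look roughly like the hypothetical sketch below; Bucket is not part of the linked implementation, just one way the extension could be structured:

using System.Collections.Generic;

// Leaf node holding every entry whose key produces the same full hash code.
// Lookup degrades to a linear scan, but only within this one bucket.
sealed class Bucket<K, T>
{
    public readonly int Hash;
    public readonly KeyValuePair<K, T>[] Items;

    public Bucket(int hash, KeyValuePair<K, T>[] items)
    {
        Hash = hash;
        Items = items;
    }

    public bool TryGet(K key, out T value)
    {
        foreach (var kv in Items)
        {
            if (EqualityComparer<K>.Default.Equals(kv.Key, key))
            {
                value = kv.Value;
                return true;
            }
        }
        value = default(T);
        return false;
    }
}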
State of Sasa v0.9.4
Sasa itself is currently undergoing an aggressive reorganization in preparation for the 0.9.4 release. A lot of the optional abstractions are moving from Sasa core into their own assemblies. A lot of the useful abstractions are relatively stand-alone. It currently stands as follows, with dependencies listed between []:
Production Ready
- Sasa [standalone]: tuples, option types that work with both types and structs, string extensions, IEnumerable extensions, thread and null-safe event extensions, type-safe enum extensions, lightweight type-safe wrappers for some system classes, eg. WeakReference and Delegate, extensions for code generation and debugging, and generic number extensions. The goal is to provide only essential extensions to address deficiencies in the system class libraries.
- Sasa.Binary [standalone]: low-level bit twiddling functions, endian conversions, and portable BinaryReader and BinaryWriter.
- Sasa.Collections [Sasa, Sasa.Binary]: efficient immutable collections library, including purely functional stacks, queues, lists, and trees. Tree needs some more testing obviously, since it's a rather new addition.
- Sasa.Mime [standalone]: a simple library encapsulating standard media types and file extensions. It also provides an interface for extending these associations at runtime.
- Sasa.Statistics [standalone]: a few basic numerical calculations, like standard deviation, and Pierce's criterion used to remove outliers from a data set.
- Sasa.Net [Sasa, Sasa.Collections]: MIME mail message parsing to System.Net.Mail.MailMessage (most libraries provide unfamiliar, custom mail and attachment classes), a POP3 client, and RFC822 header parsing.
- Sasa.Contracts [Sasa]: I've used the runtime preconditions subset of the standard .NET contracts for years. I haven't gotten around to adding postconditions and invariants support to ilrewriter.
- ilrewriter.exe [Sasa]: the IL rewriter currently only erases Sasa.TypeConstraint<T> from your code, which allows you to specify type constraints that C# normally disallows, ie. T : Delegate, or T : Enum.
Beta
These abstractions work, but haven't seen the production use or stress testing the above classes have.
- Sasa.TM [Sasa]: software transactional memory, super-fast thread-local data (much faster than ThreadLocal<T>!).
- Sasa.Reactive [Sasa]: building on Rx.NET, this provides Property<T> which is a mutable, reactive cell with a getter/setter. Any changes automatically propagate to observers. NamedProperty<T> inherits from Property<T> and further implements INotifyPropertyChanged and INotifyPropertyChanging.
- Sasa.Parsing [Sasa]: implements a simple, extensible Pratt parser. Grammars are generic and can be extended via standard inheritance. The test suite is extensive, although I've only use this in private projects, not production code.
- Sasa.Linq [standalone]: base classes for LINQ expression visitors and query providers. Not too uncommon these days, but I've had them in Sasa for many years.
Currently Broken
These assemblies are undergoing some major refactoring, and are currently rather broken.
- Sasa.Dynamics [Sasa, Sasa.Collections]: blazingly fast, type-safe runtime reflection. This code underwent significant refactoring, and I recently realized that the patterns being used here could be abstracted even further by providing multiple dispatch for .NET. See the multiple-dispatch branch of the repository for the current status of that work. This should be complete for Sasa v0.9.4.
- Sasa.Serialization [Sasa, Sasa.Dynamics]: a compact, fast serializer based on Sasa.Dynamics. Waiting on the completion of Sasa.Dynamics.
Deprecated
These assemblies are now deprecated, either because they saw little use, were overly complex, or better alternatives now exist.
- Sasa.Concurrency [Sasa]: primarily an overly complex implementation of futures based on Alice ML semantics. In a future release, Sasa.Concurrency will strip out futures, absorb Sasa.TM, and also provide a deterministic concurrency library based on concurrent revision control, which I believe to be inherently superior to STM.
Toy
These assemblies are not really meant for serious use, primarily because they don't fit with standard .NET idioms.
- Sasa.FP [Sasa, Sasa.Collections]: some less useful functional idioms, like delegate currying and tupling, binomial trees, trivial immutable sets, and either types.
- Sasa.Arrow [Sasa]: a convenient interface for arrows. Definitely not a idiomatic .NET!
- Sasa.Linq.Expressions [Sasa]: extensions to compose LINQ expressions. Also provides some typed expression trees, as opposed to the standard untyped ones. In theory it should work, and the code all type checks, but pretty much 0 testing at the moment.
|
https://higherlogics.blogspot.com/2012/04/immutable-hash-array-mapped-trie-in-c.html
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
#include <vectmath.h>

CLRV(v)          v_i = 0
UNITV(v,j)       v_i = delta_ij
SETV(v,u)        v_i = u_i
ADDV(v,u,w)      v_i = u_i + w_i
SUBV(v,u,w)      v_i = u_i - w_i
MULVS(v,u,s)     v_i = u_i s
DIVVS(v,u,s)     v_i = u_i / s
DOTVP(s,v,u)     s = v_i u_i
dotvp(v,u)       returns v_i u_i
ABSV(s,v)        s = sqrt(v_i v_i)
absv(v)          returns sqrt(v_i v_i)
DISTV(s,v,u)     s = sqrt((v_i - u_i) (v_i - u_i))
distv(v,u)       returns sqrt((v_i - u_i) (v_i - u_i))
CROSSVP(s,v,u)   s = eps_ij v_i u_j          [only in 2-D]
CROSSVP(v,u,w)   v_i = eps_ijk u_j w_k       [only in 3-D]
CLRM(p)          p_ij = 0
SETMI(p)         p_ij = delta_ij
SETM(p,q)        p_ij = q_ij
TRANM(p,q)       p_ij = q_ji
ADDM(p,q,r)      p_ij = q_ij + r_ij
SUBM(p,q,r)      p_ij = q_ij - r_ij
MULM(p,q,r)      p_ij = q_ik r_kj
MULMS(p,q,s)     p_ij = q_ij s
DIVMS(p,q,s)     p_ij = q_ij / s
MULMV(v,p,u)     v_i = p_ij u_j
OUTVP(p,v,u)     p_ij = v_i u_j
TRACEM(s,p)      s = Tr(p)
tracem(p)        returns Tr(p)
SETVS(v,s)       v_i = s
ADDVS(v,u,s)     v_i = u_i + s
SETMS(p,s)       p_ij = s
MULVV(w,v,u)     w_i = v_i * u_i
SADDV(v,u)       v_i = v_i + u_i
SSUBV(v,u)       v_i = v_i - u_i
SMULVS(v,s)      v_i = v_i * s
SDIVVS(v,s)      v_i = v_i / s
SMULVV(v,u)      v_i = v_i * u_i

real s;
vector v, u, w;
matrix p, q, r;
To specify how many dimensions vectors have, define ONE of
the following symbols to the C preprocessor:
TWODIM - for 2-D vectors
THREEDIM - for 3-D vectors
NDIM - for N-D vectors

The symbols TWODIM and THREEDIM are flags, and may be defined with any value whatsoever. NDIM is used as a value, namely (of course) the number of dimensions. If either TWODIM or THREEDIM is defined, vectmath.h will define NDIM to be 2 or 3 respectively, in case NDIM is subsequently needed. If none of these symbols is defined when vectmath.h is included, the default is THREEDIM. Note that CROSSVP is defined only for TWODIM and THREEDIM.
Unless the symbol NOTYPEDEF is defined when vectmath.h is included, vector and matrix will be defined as NDIM and NDIM by NDIM arrays of real numbers, respectively (see stdinc(3NEMO) for a description of real). These may be used to declare vector and matrix objects to be manipulated.
The exact definitions
of these operations correspond to the index language expressions given
above. Some of these definitions imply restrictions on placing the same
object on both sides of the assignment operation. For example,
ADDV(v, v, u);

does exactly the right thing, but
CROSSVP(v, v, u);

puts garbage into v, instead of the cross-product of v and u.
Most of these
operations are implemented as macros which expand in line. A subtle point
connected with this is that syntactically, a reference to one of these
macros is a statement, not an expression. This means that, for example,
the code fragment
if (foobar)
    ADDV(x, y, z);
else
    SUBV(x, y, z);

will not compile, because the terminating semi-colon following the ADDV is seen as a separate (null) statement. The best way to solve this problem is to code the above example as follows:
if (foobar) {
    ADDV(x, y, z);
} else {
    SUBV(x, y, z);
}

The enclosing curly-brackets ensure that the code is syntactically correct both before and after macro expansion.
Those operators which return a scalar
value instead of storing it (currently, dotvp, absv, distv, and tracem)
are implemented by functions which are called from macro expansions. These
functions need to know if their arguments are floating or double arrays;
vectmath.h uses the preprocessor symbol SINGLEPREC to determine the precision
in use. Mixed-precision expressions yield incorrect results; for example,
float x[NDIM];
double y[NDIM];
...
dotvp(x,y);

will fail. Consistent use of the abstractions real, vector, and matrix is suggested. Some impure, but useful, vector and matrix operations are also defined. For example, SETVS sets each element of a vector to a scalar, ADDVS adds a scalar to each element of a vector, and MULVV multiplies two vectors with each other, element by element.

Bugs

You cannot (easily) mix vectors and matrices of different dimensionality within the same source code.

History

30-nov-86 (man) Created JEB
22-oct-90 Merged in some starlab macros PJT
27-apr-92 Added SMULVV and documented starlab macros PJT
|
http://bima.astro.umd.edu/nemo/man_html/vectmath.3.html
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
Opened 5 years ago
Closed 5 years ago
#7200 closed Feature Requests (fixed)
Unable to build boost.thread modularized
Description
Building cmake-modularized boost.thread from master, using Xcode 4.4 (Apple LLVM), fails on: /thread.cpp:27:10: fatal error:
'libs/thread/src/pthread/timeconv.inl' file not found
#include <libs/thread/src/pthread/timeconv.inl>
Replacing the include with : #include "timeconv.inl" passes.
I guess CMakeScript can not fix it, because there is no lib directory in include paths.
Attachments (1)
Change History (7)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
I'm afraid that I'm not able to add the parent of libs/thread to the cmake file, because I was using this repository: github.com/ryppl/boost-zero.git
As mentioned on the boost CMakeModularizationStatus wiki page, the repository above already has an overlay cmake script, but the directory structure of that repository (after git cloning) looks like:
<root>
boost
...
thread
...
I think that there should not be any requirement for directory structure outside its own, because that would break the idea of modularization (it should depend only on libraries found by ryppl_find_and_use_package). Am I getting it wrong?
comment:3 Changed 5 years ago by
I don't know anything about CMake and the modularization.
Changed to feature request as CMake is not supported yet.
comment:4 Changed 5 years ago by
Committed in trunk revision 80125. Committed in trunk revision 80126.
comment:5 Changed 5 years ago by
comment:6 Changed 5 years ago by
Changed 5 years ago by
I have replaced #include "timeconv.inl" by #include <libs/thread/src/pthread/timeconv.inl> as other platforms complain. Could you try to add the parent directory of libs/thread to the cmake file?
|
https://svn.boost.org/trac10/ticket/7200
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
I give the prompt at the end to check out another customer; they have to pick Y or N. If they pick N it will go to the goodbye, store closing, etc... but if they pick Y then it just sits there; it does not know to go back up top to customer two. How can I change this?
Bryan
Code:
#include <iostream>
#include <fstream>
#include <cctype>
#include <string>
//#include "grocery.h"
using namespace std;

int ProdNum;
string ProdName;
float ProdPrice;
char Taxable;
int Quantity;
const int tax_rate = .075;

int main()
{
    ifstream OpenFile("inventory.txt");
    char ch;
    while (!OpenFile.eof())
    {
        OpenFile.get(ch);
        cout << ch;
    }
    cout << endl << endl;
    cout << "Thanks for shopping at OUR Market." << endl << endl;
    {
        int customer = 1;
        cout << "Enter product number and Quantity for customer " << customer++ << endl;
        cout << "Enter product code 0 to end purchases." << endl << endl;
        do
        {
            cin >> ProdNum;
            Quantity;
        } while (ProdNum != 0);
        cout << "Thanks for shopping, your receipt is printed below" << endl << endl;
    }
    {
        cout << "Do you want to checkout another customer? (Y) or (N)" << endl;
        char more;
        do
        {
            cin >> more;
        } while (more != 'N');
        cout << "Close the store for the night. Bye." << endl;
    }
    return 0;
}
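For what it's worth, one way to restructure it is to wrap the per-customer section in an outer do/while keyed off that answer. A minimal sketch reusing the names above, not the full program:

Code:
char more;
do
{
    // ... read product numbers and print the receipt for this customer ...
    cout << "Do you want to checkout another customer? (Y) or (N)" << endl;
    cin >> more;
} while (more == 'Y');
cout << "Close the store for the night. Bye." << endl;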
|
https://cboard.cprogramming.com/cplusplus-programming/45206-continue-yes-no.html
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
DMXChangeScreensAttributes man page
DMXChangeScreensAttributes — change back-end screen attributes
Synopsis
#include <X11/extensions/dmxext.h>
int DMXChangeScreensAttributes(Display *dpy, int screen_count, int *screens, int mask_count, unsigned int *masks, DMXScreenAttributes *attr, int *error_screen);
Description
DMXChangeScreensAttributes() changes the geometries and positions of the DMX screen and DMX root windows on the back-end X servers. screen_count specifies the number of screens to be changed. For each screen, the screen number is placed in screens, an attribute mask is placed in masks, and a DMXScreenAttributes structure is included in attr.
An explanation of the DMXScreenAttributes structure is given in DMXGetScreenAttributes(3).
The values that are used to compute each value in masks are as follows:

DMXScreenWindowWidth
DMXScreenWindowHeight
DMXScreenWindowXoffset
DMXScreenWindowYoffset
DMXRootWindowWidth
DMXRootWindowHeight
DMXRootWindowXoffset
DMXRootWindowYoffset
DMXRootWindowXorigin
DMXRootWindowYorigin
In general, mask_count should be equal to screen_count. However, as a convenience, mask_count may be less than screen_count, and the last entry in masks will then be used for all of the remaining screens. For example, this allows identical changes to be made to several screens using only one mask.
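As an illustration, a call that resizes the root window of back-end screen 0 might look like this sketch; the DMXScreenAttributes field names are an assumption here and should be checked against DMXGetScreenAttributes(3):

#include <X11/Xlib.h>
#include <X11/extensions/dmxext.h>

int resize_screen0(Display *dpy)
{
    int screen = 0;
    unsigned int mask = DMXRootWindowWidth | DMXRootWindowHeight;
    DMXScreenAttributes attr;
    int error_screen;

    attr.rootWindowWidth  = 1280;   /* assumed field names */
    attr.rootWindowHeight = 1024;

    /* one screen, one mask, one attribute structure */
    return DMXChangeScreensAttributes(dpy, 1, &screen, 1, &mask,
                                      &attr, &error_screen);
}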
Return Value
On success, 0 is returned. Otherwise, error_screen is set to the value of the first screen in the list that caused the error and a non-zero value is returned. If screen_count or mask_count is less than 1, or if any of the attribute values are not within the appropriate bounding boxes, DmxBadValue is returned. If a protocol error occurs, DmxBadReply is returned.
DMXChangeScreensAttributes() can generate BadLength (if the data provided does not match the data implicitly required by the screen_count and mask_count values), BadValue (if the values in screens are not valid), and BadAlloc errors.
See Also
DMXGetScreenCount(3), DMXGetScreenAttributes(3), DMX(3), Xdmx(1)
Referenced By
DMX(3), DMXAddScreen(3), DMXGetScreenAttributes(3), DMXGetScreenCount(3).
|
https://www.mankier.com/3/DMXChangeScreensAttributes
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
X10 is a programming language with extended static checking. What got me looking at it was the following paper:
Constrained Types for Object-Oriented Languages, N. Nystrom, V. Saraswat, J. Parlsberg and C. Grothoff, OOPSLA, 2008. [DOI] [PDF]
Constrained types (which are a form of dependent type) are quite interesting to me, since Whiley supports something similar. A simple example from the paper is this:
class List(length: int){length >= 0} { ... }
This is a constrained list type whose constraint states that the length cannot be negative. I find the notation here is a bit curious. X10 divides fields up into two kinds: properties and normal fields. The distinction is that properties are immutable values, whilst fields make up the mutable state of an object. Thus, constraints can only be imposed over the properties of a class. This implies our constrained list cannot have anything added to it, or removed from it. But, I suppose we can still change the contents of a given cell.
Constraints can also be given for methods, like so:
def search(value: T, lo: int, hi: int)
{0 <= lo, lo <= hi, hi < length}: ...
The first question that springs to mind here is: what can we do inside a constraint? Obviously, we’ve already seen properties, parameters and ints being used … but what else? In particular, can we call impure methods from constraints? Unfortunately, I don’t have definite answer here. As far as I can tell, X10 has no strong notion of a [[pure function]]. The spec specifically states that X10 functions are “not mathematical functions”. On the other hand, I haven’t seen a single constraint which involves a method invocation, so perhaps you simply can’t call methods/functions from constraints. Sadly, the spec is rather brief on this point.
An interesting design choice they’ve made with X10 is to rely on “pluggable constraint systems”, which presumably stems from work on “pluggable type systems” (see e.g. this):
The X10 compiler allows programs to extend the semantics of the language with compiler plugins. Plugins may be used to support different constraint systems.
Now, let’s be clear: i’m not a fan of this. The problem is really that the meaning of programs is no longer clearly defined, and relies on third-party plugins which may be poorly maintained, or subsequently become unavailable, etc. I think the problem is compounded by the following:
If constraints cannot be solved, an error is reported
To me, this all translates into the following scenario:
“I download and compile an X10 program, but it fails telling me I need such and such plugin; but, it turns out, such and such author is not maintaining it any more and I can’t find it anywhere.”
I’m assuming here that it will be obvious which plugins you need to compile a given program. If not, then you’re faced with a real challenge deciding which plugin(s) you need.
Anyway, that’s my 2c on X10 … let’s see how it pans out!!
UPDATE: I have been reliably informed that constraints may call “property methods” which are effectively macros that expand inline. Thus, they are not true functions and cannot, for example, recurse.
Now, let’s be clear: i’m not a fan of this. The problem is really that the meaning of programs is no longer clearly defined, and relies on third-parties plugins which may be poorly maintained, or subsequently become unavailable, etc.
So… how is this any different from the current situation with software libraries?
Hey Andrew,
So, I do agree with you here, up to a point. In fact, messing around trying to find libraries when compiling some program is what got me thinking about this.
Having these plugins as part of the compiler just seems more fundamental to me. But, perhaps people will end up distributing the necessary plugins with their code. And, of course, I'm sure developers would naturally gravitate towards plugins that are well-known, widely used and generally available.
I am in the process of reading the x10 spec, and I have a question about atomic blocks, maybe somebody can explain.
It says that the atomic block “is executed by an activity as if in a single step during which all other concurrent activities in the same place are blocked”.
Does it mean that I can never have parallel processing of unrelated pieces of data just because they happen to reside in one place?
In similar scenario in Java it will be quite different: synchronized blocks can be processed simultaneously if they are using different monitors.
Hi Leonid,
I’m not a expect on X10. However, atomic blocks have been proposed for lots of languages, including Java. The problem with synchronisation is that you have to synchronise on something. Sounds weird, but image you have two array lists and you want to move an item from one to the other, whilst ensuring that no other thread can ever see the intermediate state where the item is in neither. To do this, you have to lock both array lists and the problem is the order in which you do it can lead to deadlock, if other threads are doing something similar.
Atomic blocks help this situation by ensuring that intermediate state arising during the block is never seen by others. I believe they are a better primitive to use than traditional synchronisation. I would expect that the X10 compiler would allow atomic blocks that cannot interfere with each other to run in parallel. This is probably why the spec says:
“For the sake of efficient implementation X10 v2.0 requires that the atomic block
be analyzable, that is, the set of locations that are read and written by the Block-
Statement are bounded and determined statically.”
This enables the compiler to figure it all out. The following is a good paper talking about this kind of thing for Java:
Dave Cunningham, Khilan Gudka, Susan Eisenbach:
Keep Off the Grass: Locking the Right Path for Atomicity. Proceedings of Compiler Construction, 2008.
|
http://whiley.org/2010/08/05/the-x10-programming-language/
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
Introduction
When you have many Strings or other type of values to display in a list then a ListActivity is used. It will simply display all the values in a list and you can also use its various types of listeners.
A ListActivity needs an Adapter to set its List. It may be an ArrayAdapter or a SimpleCursorAdapter. In this tutorial, we will see how to use an ArrayAdapter in a ListActivity.
Step 1:
In this step, first of all, create an Android project as in the following.
================
Main.xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    ... >

    <ListView
        android:id="@android:id/list"
        ... />

</LinearLayout>
In this file, there is one ListView, which holds a list of values in it. This will help us to set content in it. ListActivity needs a ListView which can bind various sources; either array or cursor. Android provides some standard layout resources which reside in "android.R.layout". They have names such as "simple_list_item_1", "simple_list_item_2" etc. To bind that standard layout with your view objects, you need to specify the "id" of your view objects like "@android:id/list" for ListView. This ListView will bind with a standard layout list.
Step 3:
ListDemoActivity.java
public class ListDemoActivity extends ListActivity
{
    String listArray[] = {"C-Sharp", "Android", "Java", ".Net"};

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        ArrayAdapter<String> array = new ArrayAdapter<String>(this,
                android.R.layout.simple_list_item_1, listArray);
        setListAdapter(array);
    }
}
Description:
public ArrayAdapter (Context context, int textViewResourceId,String[] object)
context
The current context.
textViewResourceId
The resource ID for a layout file containing a TextView to use when instantiating views.
objects
The objects to represent in the ListView.
And in the last statement, we set "setListAdapter" as "ArrayAdapter" object "array".
Testing:
If you run this program now, you can see the following screen in your emulator. You need to create an AVD (Android Virtual Device) of Version 2.2 or higher.
So, for the click event, ListActivity already has a method; you just need to override it.
protected void onListItemClick(ListView listView, View view, int position, long id)
ListView
The ListView where the click happened
view
The view that was clicked within the ListView
position
The position of the view in the list
id
The row id of the item that was clicked
So use the following code after the "onCreate" method:
@Override
protected void onListItemClick(ListView l, View v, int position, long id)
{
    super.onListItemClick(l, v, position, id);
    Toast.makeText(getBaseContext(),
            l.getItemAtPosition(position).toString(),
            Toast.LENGTH_SHORT).show();
}
In this code, you can see that, we took "Toast" to show text on the screen. Toast has a "makeText" method to show text on the screen having the following arguments:
context
The context to use. Usually your Application or Activity object.
text
The text to show. Can be formatted text.
duration
How long to display the message. Either LENGTH_SHORT or LENGTH_LONG
Now, it's time for testing. Run your program and click on any item. You will see the item's value on the screen as seen below!
In this tutorial, we learned what a ListActivity is, how to set an Adapter on a list, what types of Adapter we can use in it, and finally how to get items on a click event.
|
http://www.c-sharpcorner.com/UploadFile/88b6e5/how-to-use-listactivity-in-your-android-application/
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
LINQ is a set of extensions to the .NET Framework that encompass language-integrated query, set, and transform operations. It extends C# and Visual Basic with native language syntax for queries and provides class libraries to take advantage of these capabilities.
In LINQ we use a format similar to SQL, with some differences such as the position and style of the Select and Where clauses. We use the System.Linq namespace.
Benefits of LINQ:
It provides rich Meta data.
Compile-time syntax checking
LINQ is statically typed and provides IntelliSense (previously available only in imperative code).
It provides query in concise way.
We can choose between LINQ to Objects, LINQ to XML, and LINQ to SQL.
LINQ to objects
The term "LINQ to Objects" refers to the use of LINQ queries with any IEnumerable or IEnumerable <T> collection directly, without the use of an intermediate LINQ provider or API such as or LINQ to XML, LINQ to SQL.
LINQ to SQL.
In LINQ to SQL, the query is converted into an object model and sent to the database for execution. After execution, the results are returned. This lets us easily query and maintain a relational database.
LINQ to XML
LINQ to XML provides an easy query interface for XML files. We use LINQ to XML to read and write data from/to an XML file, for example using the file for persistence when maintaining a list of objects. LINQ to XML can be used for storing application settings, storing persistent objects, or any other data that needs to be saved.
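For example, a minimal LINQ to XML sketch might look like this (the inventory.xml file and its item elements are hypothetical):

using System;
using System.Linq;
using System.Xml.Linq;

class XmlDemo
{
    static void Main()
    {
        // Load the document and query element attributes with LINQ.
        XDocument doc = XDocument.Load("inventory.xml");
        var names = from item in doc.Descendants("item")
                    select (string)item.Attribute("name");

        foreach (string name in names)
            Console.WriteLine(name);
    }
}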
How to use LINQ in our program
Step 1: Open a console application and write this code:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        string[] Country = { "India", "SriLanka", "China", "Nepal", "Newzeland", "South Africa", "America", "England" };

        // In this section using LINQ Query
        IEnumerable<string> query = from s in Country
                                    where s.Length == 5
                                    orderby s
                                    select s.ToUpper();

        foreach (string item in query)
            Console.WriteLine(item);
    }
}
Step 2: Run it.
Output:
CHINA
INDIA
NEPAL
|
https://www.mindstick.com/Articles/204/linq-language-integrated-query
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
In November 1999, Jeet Sukumaran proposed a framework based on virtual functions, and later sketched a template-based approach. Ed Brey pointed out that Microsoft Visual C++ does not support in-class member initializations and suggested the enum workaround. Dave Abrahams highlighted quantization issues.
The first public release of this random number library materialized in March 2000 after extensive discussions on the boost mailing list. Many thanks to Beman Dawes for his original min_rand class, portability fixes, documentation suggestions, and general guidance. Harry Erwin sent a header file which provided additional insight into the requirements. Ed Brey and Beman Dawes wanted an iterator-like interface.
Beman Dawes managed the formal review, during which Matthias Troyer, Csaba Szepesvari, and Thomas Holenstein gave detailed comments. The reviewed version became an official part of boost on 17 June 2000.
Gary Powell contributed suggestions for code cleanliness. Dave Abrahams and Howard Hinnant suggested to move the basic generator templates from namespace boost::detail to boost::random. Ed Brey asked to remove superfluous warnings and helped with uint64_t handling. Andreas Scherer tested with MSVC. Matthias Troyer contributed a lagged Fibonacci generator. Michael Stevens found a bug in the copy semantics of normal_distribution and suggested documentation improvements.
|
http://www.boost.org/doc/libs/1_49_0/doc/html/boost_random/history_and_acknowledgements.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
Sander Rossel wrote:
I'm kind of done with Chrome. It just doesn't play YouTube sound for some video's. I fixed it once, but it came back for no apparent reason.
Sander Rossel wrote: That's the worst advice ever
Sander Rossel wrote:I'm kind of done with Chrome. It just doesn't play YouTube sound for some video's.
Sander Rossel wrote:YouTube is also owned by Google by the way.
Sander Rossel wrote:The only problem I had with Windows was that IE wouldn't work.
The only problem I had with IE was that Silverlight wouldn't work.
Brent Jenkins wrote:By the way, Safari on OSX works really well, much better than Chrome or Firefox in my opinion
IndifferentDisdain wrote:But then you have to use OSX; isn't that like burning down the house for the termites?
IndifferentDisdain wrote:I was on iOS, got bored with it (static icons are so 2007)
IndifferentDisdain wrote:I ended up really just using the iMac as a VM server for a Windows
Marc Clifton wrote:klunky and poor performing
Marc Clifton wrote:the sh*t that is HTML/CSS/Javascript
Sander Rossel wrote:Opera is saying they are the exact opposite!
Sander Rossel wrote:They can switch gangs by simply adjusting their CSS.
Marc Clifton wrote:Then again, the fact that anything can render the sh*t that is HTML/CSS/Javascript pretty much amazes me.
Marc Clifton wrote:Then again, the fact that anything can render the sh*t that is HTML/CSS/Javascript pretty much amazes me
Jacquers wrote:middle clicking on the tab to close it
Jacquers wrote:he only major annoyance is that middle clicking on the tab to close it
Richard Deeming wrote:If it goes wrong, you can fix it yourself!
|
http://www.codeproject.com/Lounge.aspx?msg=4865614
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
28 September 2012 10:50 [Source: ICIS news]
SINGAPORE (ICIS)--
The company had informed its customers of the shutdown and had built up sufficient cargo to supply them until 8 October, the source said.
The company will assess the viability of restarting the MA plant based on the market situation for MA and its feedstock benzene after
There is no fixed date for resumption of operations at the moment,
|
http://www.icis.com/Articles/2012/09/28/9599439/chinas-changzhou-shuguang-chemical-factory-to-shut-ma.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
Hello friends,
I need to find the second largest number in an array of n elements. I have tried C code for that...
#include <stdio.h>
int main(void) {
int pri, sec, i, v;
int arr[] = {4,10,3,8,6,7,2,7,9,2,0};
pri = sec = 0;
for (i = 0; arr[i]; ++i) {
v = arr[i];
if (v > pri) sec = pri, pri = v;
if (v > sec && v < pri) sec = v;
}
printf("pri is %d, sec is %d\n", pri, sec);
return 0;
}
But I want to know how to find the complexity of the algorithm in the worst case. And is there any algorithm with complexity 2n-3 which could do the same task?
thnx,
raniiii
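For reference: the loop above does at most two comparisons per element, so its worst case is about 2n comparisons. The classic 2n-3 bound comes from seeding pri and sec with one comparison of the first two elements and then doing at most two comparisons for each of the remaining n-2 elements, i.e. 1 + 2(n-2) = 2n-3. A sketch, assuming n >= 2 and dropping the 0-sentinel convention used above:

int second_largest(const int a[], int n)
{
    int pri, sec, i;
    if (a[0] > a[1]) { pri = a[0]; sec = a[1]; }   /* 1 comparison */
    else             { pri = a[1]; sec = a[0]; }
    for (i = 2; i < n; ++i) {                      /* <= 2 comparisons each */
        if (a[i] > pri)      { sec = pri; pri = a[i]; }
        else if (a[i] > sec) { sec = a[i]; }
    }
    return sec;
}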
|
http://forums.devshed.com/software-design-43/algorithm-largest-element-array-351510.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
using System;
using System.Diagnostics;
namespace AngleSharp
{
/// <summary>
/// A set of useful helpers concerning errors.
/// </summary>
static class Errors
{
/// <summary>
/// Retrieves a string describing the error of a given error code.
/// </summary>
/// <param name="code">A specific error code.</param>
/// <returns>The description of the error.</returns>
[DebuggerStepThrough]
public static string GetError(ErrorCode code)
{
switch (code)
{
case ErrorCode.EOF:
return "Unexpected end of the given file.";
case ErrorCode.IndexSizeError:
return "The index is not in the allowed range.";
case ErrorCode.WrongDocumentError:
return "The object is in the wrong document.";
case ErrorCode.NotFoundError:
return "The object can not be found here.";
case ErrorCode.NotSupportedError:
return "The operation is not supported.";
case ErrorCode.InvalidStateError:
return "The object is in an invalid state.";
case ErrorCode.InvalidModificationError:
return "The object can not be modified in this way.";
case ErrorCode.NamespaceError:
return "The operation is not allowed by Namespaces in XML.";
case ErrorCode.InvalidAccessError:
return "The object does not support the operation or argument.";
case ErrorCode.SecurityError:
return "The operation is insecure.";
case ErrorCode.NetworkError:
return "A network error occurred.";
case ErrorCode.AbortError:
return "The operation was aborted.";
case ErrorCode.URLMismatchError:
return "The given URL does not match another URL.";
case ErrorCode.QuotaExceededError:
return "The quota has been exceeded.";
case ErrorCode.TimeoutError:
return "The operation timed out.";
case ErrorCode.InvalidNodeTypeError:
return "The supplied node is incorrect or has an incorrect ancestor for this operation.";
case ErrorCode.DataCloneError:
return "The object can not be cloned.";
case ErrorCode.EncodingError:
return "The encoding operation (either encoded or decoding) failed.";
case ErrorCode.ItemNotFound:
return "The specified item could not be found.";
case ErrorCode.SyntaxError:
return "The given string has a syntax error and is unparsable";
case ErrorCode.InUse:
return "The element is already in use.";
case ErrorCode.HierarchyRequestError:
return "The requested hierarchy is not possible.";
case ErrorCode.InvalidCharacter:
return "Invalid character detected.";
case ErrorCode.NoModificationAllowed:
return "No modification allowed.";
case ErrorCode.BogusComment:
return "Bogus comment detected.";
case ErrorCode.AmbiguousOpenTag:
return "Ambiguous open tag symbol found.";
case ErrorCode.TagClosedWrong:
return "The tag has been closed inappropriately.";
case ErrorCode.ClosingSlashMisplaced:
return "The closing slash symbol is misplaced and has been ignored.";
case ErrorCode.UndefinedMarkupDeclaration:
return "Undefined markup declaration ignored.";
case ErrorCode.LineBreakUnexpected:
return "This position does not support a linebreak (LF, FF).";
case ErrorCode.CommentEndedWithEM:
return "Comment ended unexpectedly with an exclamation mark.";
case ErrorCode.CommentEndedWithDash:
return "Comment ended unexpectedly with a dash.";
case ErrorCode.CommentEndedUnexpected:
return "Unexpected character detected at the end of the comment.";
case ErrorCode.DoctypeUnexpected:
return "The doctype found an unexpected character.";
case ErrorCode.TagCannotBeSelfClosed:
return "The given tag cannot be self-closed.";
case ErrorCode.EndTagCannotBeSelfClosed:
return "End tags can never be self-closed.";
case ErrorCode.EndTagCannotHaveAttributes:
return "End tags cannot carry attributes.";
case ErrorCode.NULL:
return "No character has been found using replacement character instead.";
case ErrorCode.CharacterReferenceInvalidCode:
return "The entered character code is invalid. A proper replacement character has been returned.";
case ErrorCode.CharacterReferenceInvalidNumber:
return "The given character code is invalid. A replacement character has been returned.";
case ErrorCode.CharacterReferenceInvalidRange:
return "The given character code is within an invalid range.";
case ErrorCode.CharacterReferenceSemicolonMissing:
return "The given character code has not been closed properly.";
case ErrorCode.CharacterReferenceWrongNumber:
return "The given character code must be a number, but no number has been detected.";
case ErrorCode.CharacterReferenceNotTerminated:
return "The character reference has not been terminated by semi-colon.";
case ErrorCode.CharacterReferenceAttributeEqualsFound:
return "The character reference in an attribute contains an invalid character.";
case ErrorCode.AttributeNameExpected:
return "The provided character is not valid for the beginning of another attribute.";
case ErrorCode.AttributeNameInvalid:
return "The scanned character is not allowed in attribute names.";
case ErrorCode.AttributeValueInvalid:
return "The character cannot be used in attribute values.";
case ErrorCode.DoubleQuotationMarkUnexpected:
return "The double quotation mark is illegal.";
case ErrorCode.SingleQuotationMarkUnexpected:
return "The single quotation mark is misplaced.";
case ErrorCode.DoctypeInvalidCharacter:
return "The scanned character is either not allowed in doctypes or misplaced.";
case ErrorCode.DoctypePublicInvalid:
return "The doctype's public identifier contains an illegal character.";
case ErrorCode.DoctypeSystemInvalid:
return "The doctype's system identifier contains an illegal character.";
case ErrorCode.DoctypeUnexpectedAfterName:
return "The character is not allowed after the doctype's name.";
case ErrorCode.AttributeDuplicateOmitted:
return "The specified attribute has already been added and has been omitted.";
case ErrorCode.TokenNotPossible:
return "The given token is not allowed in the current state.";
case ErrorCode.DoctypeTagInappropriate:
return "The doctype tag can only be placed on top of the document.";
case ErrorCode.TagMustBeInHead:
return "This tag must be included in the head element.";
case ErrorCode.HeadTagMisplaced:
return "The head tag can only be placed once inside the html element.";
case ErrorCode.HtmlTagMisplaced:
return "The html tag can only be placed once as the root element.";
case ErrorCode.BodyTagMisplaced:
return "The body tag can only be placed once inside the html element.";
case ErrorCode.FramesetMisplaced:
return "The frameset element has been misplaced.";
case ErrorCode.IllegalElementInSelectDetected:
return "The given tag cannot be a child element of a select node.";
case ErrorCode.IllegalElementInTableDetected:
return "The given tag cannot be a child element of a table node.";
case ErrorCode.ImageTagNamedWrong:
return "The tag name of the image tag is actually img and not image.";
case ErrorCode.TagInappropriate:
return "The given tag cannot be applied at the current position.";
case ErrorCode.InputUnexpected:
return "The input element is unexpected and has been ignored.";
case ErrorCode.FormInappropriate:
return "The given form tag is inappropriate and has been ignored.";
case ErrorCode.TagCannotEndHere:
return "The ending of the given tag has been misplaced.";
case ErrorCode.TagCannotStartHere:
return "The given tag cannot start here.";
case ErrorCode.SelectNesting:
return "It is not possible to nest select tags.";
case ErrorCode.TableNesting:
return "It is not possible to nest table tags.";
case ErrorCode.DoctypeInvalid:
return "The given doctype tag is invalid.";
case ErrorCode.DoctypeMissing:
return "The expected doctype tag is missing. Quirks mode has been activated.";
case ErrorCode.TagClosingMismatch:
return "The given closing tag and the currently open tag do not match.";
case ErrorCode.CaptionNotInScope:
return "No caption tag has been found within the local scope.";
case ErrorCode.SelectNotInScope:
return "No select tag has been found within the local scope.";
case ErrorCode.TableRowNotInScope:
return "No tr tag has been found within the local scope.";
case ErrorCode.TableNotInScope:
return "No table tag has been found within the local scope.";
case ErrorCode.ParagraphNotInScope:
return "No p tag has been found within the local scope.";
case ErrorCode.BodyNotInScope:
return "No body tag has been found within the local scope.";
case ErrorCode.BlockNotInScope:
return "No block element has been found within the local scope.";
case ErrorCode.TableCellNotInScope:
return "No td or th tag has been found within the local scope.";
case ErrorCode.TableSectionNotInScope:
return "No thead, tbody or tfoot tag has been found within the local scope.";
case ErrorCode.ObjectNotInScope:
return "No object element has been found within the local scope.";
case ErrorCode.HeadingNotInScope:
return "No h1, h2, h3, h4, h5 or h6 tag has been found within the local scope.";
case ErrorCode.ListItemNotInScope:
return "No li, dt, or dd tag has been found within the local scope.";
case ErrorCode.FormNotInScope:
return "No form tag has been found within the local scope.";
case ErrorCode.ButtonInScope:
return "No button tag has been found within the local scope.";
case ErrorCode.NobrInScope:
return "No nobr has been found within the local scope.";
case ErrorCode.ElementNotInScope:
return "No element has been found within the local scope.";
case ErrorCode.TagDoesNotMatchCurrentNode:
return "The given end tag does not match the current node.";
case ErrorCode.HeadingNested:
return "The previous heading has not been closed properly.";
case ErrorCode.AnchorNested:
return "The previous anchor element has not been closed properly.";
case ErrorCode.CurrentNodeIsNotRoot:
return "The current node is not the root of the document.";
case ErrorCode.CurrentNodeIsRoot:
return "The current node is the root of the document.";
case ErrorCode.TagInvalidInFragmentMode:
return "This tag is invalid in fragment mode.";
case ErrorCode.FormAlreadyOpen:
return "Another form is already on the stack of open elements.";
case ErrorCode.FormClosedWrong:
return "The form's ending tag is misplaced.";
case ErrorCode.BodyClosedWrong:
return "The body has been closed wrong.";
case ErrorCode.FormattingElementNotFound:
return "An expected formatting element has not been found.";
case ErrorCode.NotSupported:
return "The action is not supported in the current context.";
default:
return "An unexpected error occurred.";
}
}
}
}
|
http://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=609053&zep=AngleSharp%2FFoundation%2FErrors.cs&rzp=%2FKB%2Flibrary%2F609053%2F%2FSource.zip
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
I'm doing a practice exercise from a book and I am having problems with initializing an array structure in a class.
I get multiple errors. One says a brace-enclosed initializer is not allowed here before the '{' token.
Another says ISO C++ forbids initialization of member 'drink'.
A third suggests making 'drink' static.
A fourth says: invalid in-class initialization of static data member of non-integral type 'drinkInfo [3]'.
Can you help me find the error in the code?
Code:
#include <iostream>
#include <string>
using namespace std;
struct drinkInfo{
string name;
double price;
int quantity;
drinkInfo(string n, double p, int q){
name = n;
price = p;
quantity = q;
}
};
class machine {
private:
drinkInfo drink[3] = { drinkInfo("cola", 0.75, 20),
drinkInfo("sprite", 0.75, 20),
drinkInfo("root beer", 0.75, 20) };
};
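For context: those messages are what pre-C++11 compilers emit, because brace-initializing a non-static member array inside the class body only became legal in C++11 (e.g. when compiling with -std=c++11). One workaround that stays within older C++ is to build the collection in the constructor instead, for example with a vector; a sketch reusing the drinkInfo struct above:

Code:
#include <vector>

class machine {
private:
    std::vector<drinkInfo> drinks;
public:
    machine() {
        drinks.push_back(drinkInfo("cola", 0.75, 20));
        drinks.push_back(drinkInfo("sprite", 0.75, 20));
        drinks.push_back(drinkInfo("root beer", 0.75, 20));
    }
};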
|
http://cboard.cprogramming.com/cplusplus-programming/127070-initializing-structure-class-printable-thread.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
I think that the problem you are having here might be related to one I ran into last week on avrfreaks. Could you please scrounge up four more wires and test your LCD in the 8-bit mode? I don't think that pictures are necessary, just a description of what happens should be OK. Don
Sorry, but how would I connect the wires and modify the code for 8-bit mode?
#include <LiquidCrystal.h>

// Don't forget --> LCD RW pin to ground
// LiquidCrystal lcd(RS,EN,D4,D5,D6,D7);
LiquidCrystal lcd(7,8,9,10,11,12);

void setup()
{
  lcd.begin(16, 4);
  for (int i=33; i<113; i++)   // send 80 sequential ASCII characters to the LCD DDRAM
  {
    lcd.print(i,BYTE);         // display each character as it is stored
//  delay(100);                // un-comment to observe addressing sequence
  }
}

void loop()
{
}
// include the library code:
#include <LiquidCrystal.h>

// LiquidCrystal lcd(RS,EN,D0,D1,D2,D3,D4,D5,D6,D7);
LiquidCrystal lcd(7,8,2,3,4,5,9,10,11,12);

void setup()
{
  // set up the LCD's number of columns and rows:
  lcd.begin(16, 4);
  // ...
}
Busy flag indicates that KS0108B is operating or no operating. When busy flag is high, KS0108B is in internal operating. When busy flag is low, KS0108B can accept the data or instruction.
As for missing part of "Hello, world" that also could be an artifact of the display not being ready after reset.
Interfacing to the MPUThe HD44780U can send data in either two 4-bit operations or one 8-bit operation, thus allowing interfacing with 4- or 8-bit MPUs.• For 4-bit interface data, only four bus lines (DB4 to DB7) are used for transfer. ...The busy flag must be checked (one instruction) after the 4-bit data has been transferred twice. Two more 4-bit operations then transfer the busy flag and address counter data.
Because the busy flag is set to 1 while an instruction is being executed, check it to make sure it is 0 before sending another instruction from the MPU.
The busy flag must be checked (one instruction) after the 4-bit data has been transferred twice. Two more 4-bit operations then transfer the busy flag and address counter data.(my emphasis)
... and this may take longer than actually checking it".
I suggest you use the liquidcrystal440 library - I use my 16x4 LCD in 4 bit mode with no issues using this library.
That's because John took the trouble to correctly interpret the datasheet while writing the library and to ask for help when he needed it.
|
http://forum.arduino.cc/index.php?topic=54458.msg390333
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
sigwaitinfo, sigtimedwait - wait for queued signals (REALTIME)
#include <signal.h>

int sigwaitinfo(const sigset_t *set, siginfo_t *info);
int sigtimedwait(const sigset_t *set, siginfo_t *info,
    const struct timespec *timeout);
The sigwaitinfo() function behaves the same as the sigwait() function if the info argument is NULL. If the info argument is non-NULL, the sigwaitinfo() function behaves the same as sigwait(), except that additional information about the selected signal is stored in the siginfo_t structure referenced by info.
The sigtimedwait() function behaves the same as sigwaitinfo() except that if none of the signals specified by set are pending, sigtimedwait() waits only for the time interval specified in the timespec structure referenced by timeout. If timeout is the NULL pointer, the behaviour is unspecified.
Upon successful completion (that is, one of the signals specified by set is pending or is generated) sigwaitinfo() and sigtimedwait() will return the selected signal number. Otherwise, the function returns a value of -1 and sets errno to indicate the error.
The sigwaitinfo() and sigtimedwait() functions will fail if:
- [ENOSYS]
- The functions sigwaitinfo() and sigtimedwait() are not supported by this implementation.
The sigtimedwait() function will also fail if:
- [EAGAIN]
- No signal specified by set was generated within the specified timeout period.
The sigwaitinfo() and sigtimedwait() functions may fail if:
- [EINTR]
- The wait was interrupted by an unblocked, caught signal. It will be documented in system documentation whether this error will cause these functions to fail.
The sigtimedwait() function may also fail if:
- [EINVAL]
- The timeout argument specified a tv_nsec value less than zero or greater than or equal to 1000 million.
An implementation only checks for this error if no signal is pending in set and it is necessary to wait.
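For illustration, a minimal sketch that blocks SIGUSR1 and then waits for it with a 5 second timeout; the signal must be blocked before waiting, and error checking is abbreviated:

#include <signal.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    sigset_t set;
    siginfo_t info;
    struct timespec timeout;

    timeout.tv_sec = 5;
    timeout.tv_nsec = 0;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    sigprocmask(SIG_BLOCK, &set, NULL);   /* block before waiting */

    if (sigtimedwait(&set, &info, &timeout) == -1)
        perror("sigtimedwait");           /* EAGAIN on timeout */
    else
        printf("got signal %d\n", info.si_signo);
    return 0;
}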
pause(), pthread_sigmask(), sigaction(), <signal.h>, sigpending(), sigsuspend(), sigwait(), <time.h>.
Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995) and the POSIX Threads Extension (1003.1c-1995)
|
http://pubs.opengroup.org/onlinepubs/007908775/xsh/sigwaitinfo.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
Introduction
The .NET framework contains a number of 'Dictionary' classes including:
All of these associate a key with a value and
are represented internally by a collection of key/value pairs. However, none of
them permit duplicate keys and so, if you try to add an item whose key is
already present, an exception is thrown.
There is a Lookup<K, V> class which is a collection of keys mapped to one or
more values. However, this is intended for use in LINQ queries and is not really
suitable for general purpose use.

A strategy for a workaround
The absence of a general purpose dictionary class which permits duplicate keys
is a serious shortcoming of the framework, in my opinion.
The most usual way to work around this shortcoming is to associate a list of
values, rather than a single value, with each key in the dictionary.
In fact, the GroupBy extension method which is used in LINQ works by reading the
input elements into a temporary dictionary of lists so that all elements with
the same key end up in the list associated with that key. The lists are then
emitted as a sequence of objects which implement the IGrouping interface.
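To make the bookkeeping concrete, here is a minimal sketch of the workaround (my own illustration, not code from the framework or from the class presented below):

    // Each key maps to a list of values which must be created on
    // first use and maintained by hand by the caller.
    Dictionary<string, List<int>> map = new Dictionary<string, List<int>>();

    // adding an item means checking for (and possibly creating) the list first
    List<int> list;
    if (!map.TryGetValue("Dave", out list))
    {
        list = new List<int>();
        map["Dave"] = list;
    }
    list.Add(1);

    // a containership check must also go through the list
    bool hasValue = map.ContainsKey("Dave") && map["Dave"].Contains(1);

Every add, remove, and lookup needs this sort of list management, which is exactly the housekeeping the class below takes over.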
Although this strategy works, it is tedious to employ in practice because you
need to write code to manipulate the list associated with each key every time an
item is added, removed or checked for containership. Also, when enumerating the
dictionary, you have to enumerate first the keys and then their associated
lists.

The Lexicon class
It would be nice if we had a class which did all the 'housekeeping' of
maintaining the lists for us and which could be used in a natural way. I have
therefore written such a class.
As Dictionary is already a long name, I decided to simply use the synonym
'Lexicon' for the new class rather than append yet another prefix to
'Dictionary'.
The Lexicon<K, V> class contains all the same members as the Dictionary<K, V>
class but has some new ones to deal with the possibility that a key could have
multiple values. The usage of some existing members has had to be modified for
the same reason and some additional overloads have been added.
Although the Lexicon<K, V> class is a wrapper for a Dictionary<K, List<V>>, I
decided that it would not be appropriate to formally implement the generic and
non-generic IDictionary interfaces because these interfaces implicitly assume
that classes which implement them will not support duplicate keys. However, it
does implement all the other interfaces which Dictionary implements.
It also implements a new interface I've created called ILexicon<K, V> which
contains signatures for its essential members.
This table lists its main constructors with a brief description of what they do:
Lexicon() - creates a new empty Lexicon
Lexicon(dictionary) - creates a new Lexicon from a generic IDictionary
Lexicon(lexicon) - creates a new Lexicon from an ILexicon
Lexicon(capacity) - creates a new Lexicon with the specified initial capacity
Lexicon(comparer) - creates a new Lexicon which uses a specific IEqualityComparer to test for equality of keys
This table lists its main properties:
Count - gets the total number of items in the Lexicon
KeyCount - gets the total number of unique keys
this[key] - gets or sets the list of values with this key
this[key, index] - gets or sets the value of the item at this index within this key's list of values
Keys - gets a collection of all unique keys in the Lexicon
ValueLists - gets the lists of values of all items in the Lexicon in the same order as the Keys property
Values - gets an enumeration of all values in the Lexicon in the same order as the Keys and ValueLists properties
And, finally, this table lists its main
methods:
Add(key, value) - adds an item to the Lexicon with this key and value
AddList(key, list) - adds items to the Lexicon with this key from this list of values
AddRange(keyValuePairs) - adds an enumerable collection of KeyValuePairs to the Lexicon
ChangeValue(key, oldValue, newValue) - changes the value of the first item with this key and this oldValue to this newValue
ChangeValueAt(key, index, newValue) - changes the value of the item with this index in this key's list of values to newValue
Clear() - clears the Lexicon of all items
Contains(key, value) - indicates if an item with this key/value pair is within the Lexicon
ContainsKey(key) - indicates if an item with this key is within the Lexicon
ContainsValue(value) - indicates if an item with this value is within the Lexicon
ContainsValue(value, out key) - indicates if an item with this value is within the Lexicon and, if so, returns the first key found with that value as an output parameter
CopyTo(array, index) - copies the key/value pairs of the Lexicon to an array starting at the specified index
FindKeyIndexPairs(value) - finds all key/index pairs in the Lexicon which have this value
GetValueCount(key) - gets the number of all values in this key's list
IndexOfValue(key, value) - gets the first index of this value within this key's list
Remove(key, value) - removes the first item in the Lexicon with this key and value
RemoveAt(key, index) - removes the item at this index in this key's list of values
RemoveKey(key) - removes all items in the Lexicon with this key
TryGetValueList(key, out value) - tries to get the list of values with this key
TryGetValueAt(key, index, out value) - tries to get the value of the item at this index within this key's list of values
Notes on members
Lexicon's constructors mirror those of the generic Dictionary class except that
there is an additional constructor to create a new Lexicon from an existing
ILexicon.
The Count property returns the total number of key/value pairs in the Lexicon.
The indexer which takes a sole key argument gets or sets the whole list of
values for that key. If there's already a list for that key, it is replaced (not
merged) by the 'setter'. Otherwise, a new entry for that key is created.
The indexer which takes an additional index parameter gets or sets the
individual value at that index in the key's value list. If the key doesn't exist
or there's no value at that index (and it's the next index to be assigned) then
a new entry is created.
The Keys property returns just the unique keys (however many times they're
duplicated) and the KeyCount property returns how many such keys there are.
The ValueLists property returns the lists of values corresponding to the unique
keys and the Values property returns an enumeration of all values in all
key/value pairs in the Lexicon.
The Add, Contains and Remove methods also have overloads (not shown) which take
a KeyValuePair structure as a parameter.
The AddList method adds a key with a list of values to the Lexicon. If the key
already exists, its existing list of values is merged with the new one, not
replaced by it. This differs from the indexer behaviour.
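For example, given the members described above, the two behaviours differ as follows (a short illustration of my own):

    Lexicon<string, int> lex = new Lexicon<string, int>();
    lex["Ann"] = new List<int> { 1, 2 };       // new entry with two values

    lex.AddList("Ann", new List<int> { 3 });   // merges: "Ann" now holds 1, 2, 3
    lex["Ann"] = new List<int> { 9 };          // replaces: "Ann" now holds only 9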
The FindKeyIndexPairs method does what it says on the tin for a given value.
Notice, though, that this method will be slow for a large Lexicon as it needs to
iterate through every key/value pair within it. It has therefore been
implemented using deferred execution (i.e. an iterator).
The 'ChangeValue' and 'Remove' family of methods all return a bool value to
indicate whether the operation was a success or not.
The other members are largely self-explanatory.
When Lexicons are created (or added to) from other objects, they are copied
first so that subsequent changes don't affect the original objects. However,
when lists of values are retrieved by external code, they are not copied and so
may be manipulated directly by that code.

Enumerating the Lexicon
The best way to enumerate a Lexicon object is to use the enumerator returned by
the GetEnumerator method which returns all key/value pairs in the same order as
the keys are enumerated in the underlying Dictionary object.
However, it's also possible to enumerate using the Keys property and then use
the indexer to get the list of values for that key and iterate through those.
The ValueLists property enables the lists of values for each key to be
enumerated separately.
As mentioned in the previous section, the Values property and FindKeyIndexPairs
method are implemented as generic IEnumerables and so can also be enumerated
using the foreach statement.

Example of usage
The code for the Lexicon class and ILexicon interface can be downloaded from the
link accompanying this article as it is too long to include in the body of the
article itself. Both these types are included in a namespace called Lexicons and
can be used in .NET 2.0 or later.
The following console application shows how to use some of its principal
members:

using System;
using System.Collections.Generic;
using Lexicons;

class Program
{
    static void Main(string[] args)
    {
        // create and populate a Lexicon object
        Lexicon<string, int> lex = new Lexicon<string, int>();
        lex.Add("Dave", 1);
        lex.Add("John", 2);
        lex.Add("Dave", 3);
        lex.Add("Stan", 4);
        lex.Add("Dave", 5);
        lex.Add(new KeyValuePair<string, int>("Fred", 6));

        // iterate through key/value pairs
        Console.WriteLine("The lexicon initially contains the following key/value pairs\n");
        foreach (KeyValuePair<string, int> kvp in lex)
        {
            Console.WriteLine("{0} : {1}", kvp.Key, kvp.Value);
        }

        // add a new entry to the Lexicon
        lex["Alan"] = new List<int> { 7 };

        // add some more values for the new key
        lex.AddList("Alan", new List<int> { 8, 9 });

        // add another new entry
        lex.Add("Mary", 10);

        // iterate the Lexicon again, this time using the Keys collection
        Console.WriteLine("\nFollowing the addition of new entries, the lexicon now contains\n");
        foreach (string key in lex.Keys)
        {
            foreach (int value in lex[key])
            {
                Console.WriteLine("{0} : {1}", key, value);
            }
        }

        Console.WriteLine("\nDave has {0} values", lex.GetValueCount("Dave"));
        lex.RemoveKey("Dave");           // remove key and all its values
        lex.Remove("Alan", 8);           // remove a single value
        lex.ChangeValue("Fred", 6, 5);   // change a value

        // iterate the Lexicon again
        Console.WriteLine("\nFollowing some removals and a change, the lexicon now contains\n");
        foreach (KeyValuePair<string, int> kvp in lex)
        {
            Console.WriteLine("{0} : {1}", kvp.Key, kvp.Value);
        }

        if (lex.Contains("Stan", 4))
        {
            Console.WriteLine("\nStan has a value of 4");
        }

        // create an array of key/value pairs and copy the Lexicon's contents to it
        KeyValuePair<string, int>[] kvpa = new KeyValuePair<string, int>[lex.Count];
        lex.CopyTo(kvpa, 0);
        Console.WriteLine("There are currently {0} key value pairs in the Lexicon", kvpa.Length);

        // try and get the value at index 1 for Alan
        int val;
        bool b = lex.TryGetValueAt("Alan", 1, out val);
        if (b) Console.WriteLine("Alan has a value of {0} at index 1", val);

        // create a new dictionary
        Dictionary<string, int> dict = new Dictionary<string, int>();
        dict["Nora"] = 3;
        dict["John"] = 4; // uses a key already in the Lexicon

        // create a new Lexicon from the Dictionary
        Lexicon<string, int> lex2 = new Lexicon<string, int>(dict);

        // add some more members to it
        lex2["Zena"] = new List<int> { 11 };
        lex2["Myra", 0] = 12;

        // merge with existing Lexicon
        lex.AddRange(lex2);

        lex.Remove(new KeyValuePair<string, int>("Stan", 4)); // effectively remove Stan
        lex.RemoveAt("Mary", 0);                              // effectively remove Mary

        // iterate the Lexicon again
        Console.WriteLine("\nFollowing a number of changes, the lexicon now contains\n");
        foreach (KeyValuePair<string, int> kvp in lex)
        {
            Console.WriteLine("{0} : {1}", kvp.Key, kvp.Value);
        }

        Console.WriteLine("\nNora has a value of 3 at index {0}", lex.IndexOfValue("Nora", 3));
        lex["Zena", 1] = 1; // add a new value for Zena

        if (lex.ContainsValue(12))
        {
            Console.WriteLine("The lexicon contains a value of 12");
        }

        string k;
        if (lex.ContainsValue(5, out k)) Console.Write("{0} had a value of 5 ", k);
        lex.ChangeValue(k, 5, 2);
        if (lex[k, 0] == 2) Console.WriteLine("but now has a value of 2");

        Console.WriteLine("\nThe following key/index pairs have a value of 2\n");
        foreach (KeyValuePair<string, int> kip in lex.FindKeyIndexPairs(2))
        {
            Console.WriteLine("Key : {0} Value : 2 Index : {1}", kip.Key, kip.Value);
        }

        Console.ReadKey();
    }
}

Example output
A screenshot of the output is shown below.

Conclusion
I hope you will find the Lexicon class to be a useful adjunct to the other
'Dictionary' classes in the .NET Framework.
It is not intended as a replacement for the generic Dictionary class and should
only be used in situations where keys may be duplicated. As a List object needs
to be assigned to each key, it will clearly use more memory and be slightly
slower than an 'ordinary' Dictionary. This is not ideal in situations where the
duplicated keys are relatively sparse but an inevitable consequence of the
strategy used.
I am currently working on sorted versions of the Lexicon and a 'ToLexicon'
extension method for use in LINQ queries. I am also considering the use of a
different strategy to address the issue of Lexicons with sparse duplicate keys.
The results will be the subject of a future article.
|
http://www.c-sharpcorner.com/UploadFile/b942f9/a-dictionary-class-which-permits-duplicate-keys/
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
Edward Hyde, 1st Earl of Clarendon, was an English historian and statesman and grandfather to two British monarchs, Mary II and Queen Anne.
-------------------------------------------------------------
Dictionary of National Biography, 1885-1900, Volume 28
Hyde, Edward (1609-1674)
by Charles Harding Firth
HYDE, EDWARD, Earl of Clarendon (1609-1674), descended from a family of Hydes established at Norbury in Cheshire, son of Henry Hyde of Dinton, Wiltshire, by Mary, daughter of Edward Langford of Trowbridge, was born on 18 Feb. 1608-9 (Lister, Life of Clarendon, i. 1; The Life of Clarendon, written by himself, ed. 1857, i. § 1). In Lent term 1622 Hyde entered Magdalen Hall, Oxford; failed, in spite of a royal mandate, to obtain a demyship at Magdalen College, and graduated B.A. on 14 Feb. 1626 (Lister, i. 4; Wood, Athenæ Oxon. ed. Bliss, iii. 1018). He left the university 'rather with the opinion of a young man of parts and pregnancy of wit, than that he had improved it much by industry' (Life, i. 8). His father had destined him for the church, but the death of two elder brothers made him heir to the paternal estate, and in 1625 he became a member of the Middle Temple (Lister, i. 6). In spite of the care which his uncle, Chief Justice Sir Nicholas Hyde [q.v.], bestowed on his legal education, he preferred to devote himself to polite learning and history, and sought the society of wits and scholars. In February 1634 Hyde was one of the managers of the masque which the Inns of Court presented to the king as a protest against Prynne's illiberal attack upon the drama (Whitelocke, Memorials, f. 19). Jonson, Selden, Waller, Hales, and other eminent writers were among his friends. In his old age he used to say 'that he owed all the little he knew and the little good that was in him to the friendship and conversation of the most excellent men in their several kinds that lived in that age,' but always recalled with most fondness his 'entire and unreserved' friendship with Lord Falkland (Life, i. 25, 35).
In 1629 Hyde married Anne, daughter of Sir George Ayliffe of Gretenham, Wiltshire. She died six months later, but the marriage connected him with the Villiers family, and gained him many powerful friends (Lister, i. 9; Life, i. 13). This connection was one of the motives which induced Hyde to vindicate Buckingham's memory in his earliest historical work, a tract entitled 'The Difference and Disparity between the Estate and Condition of George, Duke of Buckingham, and Robert, Earl of Essex' (Reliquiæ Wottonianæ, ed. 1685, pp. 185-202). According to Hyde's friend, Sir John Bramston, Charles I was so pleased with this piece that he wished the author to write Buckingham's life (Autobiography of Sir John Bramston, p. 255).
Hyde's second marriage, 10 July 1634, with Frances, daughter of Sir Thomas Aylesbury, one of the masters of requests, still further improved his fortunes (Chester, Westminster Registers, p. 167). He had been called to the bar on 22 Nov. 1633, began now seriously to devote himself to his profession, and soon acquired a good practice in the court of requests. In December 1634 he was appointed keeper of the writs and rolls of the common pleas (Bramston, p. 255; Doyle, Official Baronage, i. 402). The courage and ability with which Hyde conducted the petition of the London merchants against the late lord treasurer, Portland, gained him the favour of Laud. He was consequently 'used with more countenance by all the judges in Westminster Hall and the eminent practisers, than is usually given to men of his years' (Life, i. 23). His income grew, he increased his paternal estate by buying adjoining land, and he made influential friends.
Hyde began his political career as a member of the popular party. Although he did not share the hostility of the puritans to Laud's ecclesiastical policy, nor the common animosity of the lawyers to the churchmen, he was deeply stirred by the perversions and violations of the law which marked the twelve years of the king's personal rule (1628-40). In the Short parliament of 1640 he sat for Wootton Bassett, was a member of seven important committees, and gained great applause by attacking the jurisdiction of the earl marshal's court (Lister, i. 62; Life, i. 78). According to his own account, which cannot be implicitly trusted, he endeavoured to mediate between the king and the commons, and used his influence with Laud to prevent a dissolution.
In the Long parliament Hyde represented Saltash, and, as before, principally directed his reforming zeal to questions connected with the administration of the law. He renewed his motion against the marshal's court, obtained a committee, and produced a report which practically abolished that institution. Hyde also acted as chairman of the committees which examined into the jurisdictions of the council of Wales and the council of the North, and gained great popularity by his speech against the latter (26 April 1641; Rushworth, iv. 230). He took a leading part in the proceedings against the judges, and laid before the lords (6 July 1641) the charge against the barons of the exchequer (ib. iv. 333). In the proceedings against Strafford he acted with the popular party, helped to prepare the articles of impeachment, was added on 25 March 1641 to the committee for expediting the trial, and on 28 April took up a message to the lords begging that special precautions might be taken to prevent Strafford's escape (Commons Journals, ii. 112, 130). Hyde's name does not appear in the list of those voting against the attainder bill, and it is hardly possible to doubt that he voted for that measure. He may have ultimately joined the party who were contented with Strafford's exclusion from affairs of state; but the story of his interview with Essex on this subject contains manifest impossibilities (Rebellion, iii. 161; Gardiner, ix. 840).
Church questions soon led Hyde to separate himself from the popular party. He opposed, in February 1641, the reception of the London petition against episcopacy, and in May the demand of the Scots for the assimilation of the English ecclesiastical system to the Scottish (ib. ix. 281, 377). He opposed also, differing for the first time with Falkland, the bill for the exclusion of the clergy from secular office, and was from the beginning the most indefatigable adversary of the Root and Branch Bill. The house went into committee on that bill on 11 July 1641, and its supporters, hoping to silence Hyde, made him chairman. In this capacity he so successfully obstructed the measure that it was dropped (Rebellion, iii. 150-6, 240-2). Hyde's attitude attracted the notice of the king, who sent for him and urged him to persist in the church's defence (Life, i. 93). At the opening of the second session his severance from his former friends was still more marked, and Secretary Nicholas recommended him to the king as one of the chief champions of the royal prerogative (Evelyn, Diary, ed. 1879, iv. 116). He resisted Pym's attempt to make the grant of supplies for the reconquest of Ireland dependent on parliament's approval of the king's choice of councillors, and opposed the Grand Remonstrance, though admitting that the narrative part of it was
'true and modestly expressed' (Gardiner, x. 55, 76; Verney, Notes on the Long Parliament, pp. 121, 126). He sought by an attempted protest to prevent the printing (ib. vi. 79 n.). The House of Commons expelled him (11 Aug. 1642), and he was one of the eleven persons who were to be excepted from pardon (21 Sept.), an exception which was repeated in subsequent propositions for peace (Husbands, p. 633).
During his stay at Oxford, from October 1642 to March 1645, Hyde lived in All Souls College. In the spring of 1643 he at last exchanged the position of secret adviser for that of an avowed and responsible servant of the crown. On 22 Feb. he was admitted to the privy council and knighted, and on 3 March appointed chancellor of the exchequer (Life, ii. 77; Black, Oxford Docquets, p.351). The king wished to raise him still higher. 'I must make Ned Hyde secretary of state, for the truth is I can trust nobody else,' said an intercepted letter from Charles to the queen. But Hyde was unwilling to supersede his friend Nicholas, and refused the offered post both now, and later after Falkland's death. Promotion so rapid for a man of his age and rank aroused general jealousy, especially among the members of his own profession. Courtiers considered him an upstart, and soldiers regarded him with the hostility which they felt for the privy council in general (cf. Rebellion, vii. 278-82; Life, ii. 73, iii. 37). As chancellor of the exchequer Hyde, in his endeavours to raise money for the support of the war, was concerned in procuring the loan known as 'the Oxford engagement,' and became personally bound for the repayment of some of the sums lent to the king (Cal. Committee for Advance of Money, p. 1002; Clarendon State Papers, ii. 154). His attempt to bring the Bristol custom-dues into the exchequer brought him into collision with Ashburnham, the treasurer of the army (Life, iii. 33).
In the autumn of 1643 the king created a secret committee, or 'junto,' who were consulted on all important matters before they were discussed in the privy council. It consisted of Hyde and five others, and met every Friday at Oriel College (Life, iii. 37, 58; Clarendon State Papers, ii. 286, 290). In the different conferences for peace Hyde was habitually employed in the most delicate personal negotiations, a duty for which his former intimacy with many of the parliament's commissioners specially qualified him. Over-estimating, as his history shows, the influence of personal causes in producing the civil war, he believed that judicious concessions to the leaders would suffice to end it. In the summer of 1642 he had made special efforts to win over the Earl of Pembroke (ib. ii. 144-8; Rebellion, vi. 401 n.) During the Oxford negotiations in March 1643 he intrigued to gain the Earl of Northumberland, and vainly strove to persuade the king to appoint him lord high admiral (Life, iii. 4-12). In the following summer, when Bedford, Clare, and Holland deserted the parliament, Hyde stood almost alone in recommending that the deserters should be well received by king, queen, and court, and held the failure to adopt this plan the greatest oversight committed by the king (Rebellion, vii. 185, 244). When it was too late, Hyde's policy was adopted. In February 1645, during the Uxbridge negotiations, he and three others were empowered to promise places of profit to repentant parliamentarians, but his conferences with Denbigh, Pembroke, Whitelocke, and Hollis led to no result (ib. viii. 243-8; Whitelocke, Memorials, f. 127; Harleian Miscellany, vii. 559).
Throughout these negotiations Hyde opposed any real concessions on the main questions at issue between king and parliament. At Uxbridge (January 1645) he was the principal figure among the king's commissioners, prepared all the papers, and took the lead in all the debates (Rebellion, vii. 252). He defended Ormonde's truce with the Irish rebels, and disputed with Whitelocke on the question of the king's right to the militia (ib. viii. 256). Already, in an earlier negotiation with the Scottish commissioners (February 1643), he had earned their detestation by opposing their demands for ecclesiastical uniformity, and at Uxbridge he was as persistent in defending episcopacy. Nevertheless, he was prepared to accept a limited measure of toleration, but regarded the offers made at Uxbridge as the extreme limit of reasonable concessions (Clarendon State Papers, ii. 237).
The most characteristic result of Hyde's influence during this period was the calling of the Oxford parliament (December 1643). He saw the strength which the name of a parliament gave the popular party, and was anxious to deprive them of that advantage. Some of the king's advisers urged him to dissolve the Long parliament by proclamation, and to declare the act for its continuance invalid from the beginning. Hyde opposed this course, arguing that it would alienate public opinion (Life, iii. 40). His hope was to deprive the Long parliament of all moral authority by showing that it was neither free nor representative (Rebellion, vii. 326). With this object, when the Scots accepted the Long parliament's invitation to send an army into England, Hyde proposed the letter of the royalist peers to the Scottish privy council, and the summoning of the royalist members of parliament to meet at Oxford (ib. vii. 323). Both expedients proved ineffectual. The Oxford parliament was helpful in raising money, but useless in negotiating with the parliament at Westminster, while the king resented its independence and its demands for peace.
With the failure of Hyde's policy the king fell completely under the influence of less scrupulous and less constitutional advisers. On 4 March 1645 Hyde was despatched to Bristol as one of the council charged with the care of the prince of Wales and the government of the west. The king was anxious to place so trustworthy a servant near the prince, and glad no doubt to remove so strong an opponent of his Irish plans. Already Charles had given to Glamorgan 'those strange powers and instructions' which Hyde subsequently pronounced to be 'inexcusable to justice, piety, and prudence' (Clarendon State Papers, ii. 337; Life, iii. 50; Rebellion, viii. 253).
The arrival of the prince in the west was followed by a series of disputes between his council and the local military commanders. Hyde, who was the moving spirit of the council, paints in the blackest colours the misconduct of Goring and Grenville; but the king's initial error in appointing semi-independent military commanders, and then setting a board of privy councillors to control them, was largely responsible for the failure of the campaign. Hyde complains bitterly that, but for the means used at court to diminish the power of the council, they would have raised the best army that had been in England since the rebellion began, and, with Hopton to command it, might have effected much (Lister, iii. 20; Rebellion, ix. 7 n, 43). But when Hopton at last took over the command of Goring's 'dissolute, undisciplined, beaten army,' it was too late for success, and his defeat at Torrington (16 Feb. 1646) obliged the prince's councillors to provide for the safety of their charge.
The king had at first ordered the prince to take refuge in France, and then, on the remonstrance of his council, suggested Denmark. Hyde's aim was to keep the prince as long as possible in English territory, and as long as possible out of France. As no ship could be found fit for the Danish voyage, the prince and his council established themselves at Scilly (4 March 1646), and, when the parliamentary fleet rendered the islands untenable, removed to Jersey (17 April). On the pretext that Jersey was insecure, the queen at once ordered the prince to join her in France, and, against the advice of Hyde and his council, the prince obeyed (Clarendon State Papers, ii. 240, 352; Rebellion, x. 3-48). Hyde distrusted the French government, feared the influence of the queen, and was afraid of alienating English public opinion (Clarendon State Papers, ii. 235, 287).
Though Hyde's opposition to the queen in this matter was the main cause of her subsequent hostility to him, his policy was in other respects diametrically opposed to that which she advocated. She pressed the king to buy the support of the Scots by sacrificing the church. Hyde expected nothing good from their aid, and would not pay their price (ib. ii. 291, 339). He was equally hostile to her plans for restoring the king by French or foreign forces (ib. ii. 307, 329, 339). He was resolved not to sacrifice a foot of English territory, and signed a bond with Hopton, Capel, and Carteret to defend Jersey against Lord Jermyn's scheme for its sale to France (19 Oct. 1646; ib. ii. 279). During the king's negotiations with the parliament and the army Hyde's great fear was that Charles should concede too much. 'Let them,' he wrote, 'have all circumstantial 'temporary concessions, …. distribute as many personal obligations as can be expected, but take heed of removing landmarks and destroying foundations. … Either no peace can be made, or it must be upon the old foundations of government in church and state' (ib. ii. 326, 333, 379). Hyde faithfully practised the principles which he preached, declining either to make his peace with the parliament or to compound for his estate. 'We must play out the game,' he wrote,
'with that courage as becomes gamesters who were first engaged by conscience against all motives and temptations of interest, and let the world know that we were carried on only by conscience' (ib. iii. 24). Hyde was already in great straits for money. But he told Nicholas that they had no reason to blush for a poverty which was not brought upon them by their own faults (ib. ii. 310). Throughout the fourteen years of his exile he bore privation with the same cheerful courage.
During his residence in Jersey Hyde lived first in lodgings in St. Helier, and afterwards with Sir George Carteret in Elizabeth Castle. He occupied his enforced leisure by keeping up a voluminous correspondence, and by composing his 'History of the Rebellion,' which he began at Scilly on 18 March 1646. In a will drawn up on 4 April 1647 he directed that the unfinished manuscript should be delivered to Secretary Nicholas, who was to deal with it as the king should direct. If the king decided that any part of it should be published, Nicholas and other assistant editors were empowered to make whatever suppressions or additions they thought fit (Clarendon State Papers, ii. 289, 357). Hyde had also an immediate practical purpose in view. 'As soon as I found myself alone,' he wrote to Nicholas, 'I thought the best way to provide myself for new business against the time I should be called to it, was to look over the faults of the old, and so I resolved to write the history of these evil times' (ib. ii. 288). By April 1648 he had carried his narrative down to the commencement of the campaign of 1644. Meanwhile, in February 1648 the Long parliament resolved to present no further addresses to the king, and published a scandalous declaration of its reasons. Hyde at once printed a vindication of his master: 'A full Answer to an infamous and traitorous Pamphlet entitled A Declaration of the Commons of England expressing their reasons of passing the late Resolutions of no further addresses to be made to the King' (published July 28, 1648. An earlier and briefer version of the same answer was published 3 May).
On the outbreak of the second civil war, Hyde was summoned by the queen and the prince to join them at Paris. He left Jersey 26 June 1648, and made his way to Dieppe, whence he took ship for Dunkirk (Clarendon State Papers, ii. 406; Hoskins, Charles II in the Channel Islands, ii. 202). Finding at Dunkirk that the prince was with the fleet in the Thames, he followed him thither. On his way he fell into the hands of an Ostend corsair (13-23 July), who robbed him of all his clothes and money, nor did he succeed in joining Prince Charles till the prince's return to the Hague (7-17 Sept.: Life, v. 10-23; Rebellion, xi. 23, 78). There he found the little court distracted by feuds and intrigues. Hyde set himself to reconcile conflicting interests and to provide the fleet with supplies for a new expedition (Rebellion, xi. 127, 152; Warburton, Prince Rupert, iii. 274, 276, 279). He advised the prince not to trust the Scots, whose emissaries were urging him to visit Scotland, and was resolved that he himself would go neither to Scotland nor to Ireland. In any case, the Scots would not have allowed him to accompany the prince, and he held it safer to see the result of the negotiations at Newport before risking himself in Ireland. The king's concessions during the treaty had filled him with disgust and alarm.
'The best,' he wrote, 'which is proposed is that which I would not consent to, to preserve the kingdom from ashes' (Clarendon State Papers, ii. 459). When the army interrupted the treaty and brought the king to trial, Hyde vainly exerted himself to save his master's life. He drew up a letter from the prince to Fairfax, and after the king's death a circular to the sovereigns and states of Europe, invoking their aid to avenge the king's execution (Cal. State Papers, Dom. 1649-50, p. 5; Cal. Clarendon Papers, i. 465; cf. Warburton, iii. 283). Hyde's enemies thought his influence then at an end, but in spite of the queen's advice, Charles II retained as councillors all the old members of his father's privy council who were with him at the Hague (Rebellion, xii. 2).
The question whether the new king should establish himself in Scotland or Ireland required immediate decision. As the presbyterian leaders demanded the king's acceptance of the covenant, and ' all the most extravagant propositions which were ever offered to his father,' Hyde advised the refusal of their invitation. He had conferred with Montrose, and expected more good from his expedition than from a treaty with Hamilton and Argyll. The Scots and their partisans regarded Hyde as their chief antagonist, and succeeded in suppressing the inaugural declaration which he drew up for the new king (ib. xii. 32; Clarendon State Papers, ii. 467, 473, 527). In the end Charles resolved to go to Ireland, but to pay a visit to his mother in France on the way. Hyde, who termed Ireland the nearest road to Whitehall, approved the first half of the plan, but objected to the sojourn in Paris. Accordingly, when Cottington proposed that they both should go on an embassy to Spain, Hyde embraced the chance of an honourable retreat (Nicholas Papers, i. 124; Rebellion, xii. 34). His friends complained that he was abandoning the king just when his guidance was most necessary. But Hyde felt that a change of counsellors would ultimately re-establish his own influence, and expected to rejoin the king in Ireland within a few months.
The chief objects of the embassy were to procure a loan of money from the king of Spain, to obtain by his intervention aid from the pope and the catholic powers, and to negotiate a conjunction between Owen O'Neill and Ormonde for the recovery of Ireland. The ambassadors left Paris on 29 Sept. 1649, and reached Madrid on 26 Nov. The Spanish government received them coldly (Guizot, Cromwell, transl. 1854, i. 419-26). Their money was soon exhausted, and Hyde was troubled by the 'miserable wants and distresses' of his wife, whom he had left in Flanders (Lister, i. 361). The subjugation of Ireland, and the defeat of Charles II at Dunbar, destroyed any hope of Spanish aid, while the share taken by a servant of the ambassadors in Ascham's murder made their presence inconvenient to the Spanish government. In December 1650 they were ordered to leave Spain. Hyde was treated with personal favour, and promised the special privileges of an ambassador during his intended residence at Antwerp (Rebellion, xiii. 25, 31). He left Spain in March 1651, and rejoined his family at Antwerp in the following June.
In November 1651 Charles II, immediately after his escape from Worcester, summoned Hyde to Paris. He joyfully obeyed the summons, and for the rest of the exile was the king's most trusted adviser. He was immediately appointed one of the committee of four with whom the king consulted in all his affairs, and a member of the similar committee which corresponded with the Scottish royalists (Rebellion, xiii. 123, 140). Till August 1654 he filled Nicholas's place as secretary of state. He accompanied the king in his removals to Cologne (October 1654) and Bruges (April 1658), and was formally declared lord chancellor on 13 Jan. 1658 (Lister, i. 441).
For the first two years of this period repeated attempts were made to shake the king's confidence in Hyde. Papists and presbyterians both petitioned for his removal (Rebellion, xiv. 63). In 1653 Sir Robert Long incited Sir Richard Grenville to accuse Hyde of secret correspondence with Cromwell, but the king cleared him by a declaration in council, asserting that the charge was a malicious calumny (13 Jan. 1654; Lister, i. 384, iii. 63, 69, 75). Long also combined with Lord Gerard and Lord-keeper Herbert to charge Hyde with saying that the king neglected his business and was too much given to pleasure. Charles coolly answered
'that he did really believe the chancellor had used those words, because he had often said that and much more to himself' (ib. iii. 74; Rebellion, xiv. 77). Of all Hyde's adversaries, the queen was the most persistently hostile. He made many efforts to conciliate her, and in 1651 had persuaded the Duke of York to obey her wishes and return to Paris (1651; Rebellion, xiii. 36, 46). But she was so displeased at Hyde's power over the king that she would neither speak to him nor notice him. 'Who is that fat man next the Marquis of Ormonde?' asked Anne of Austria of Charles II during an entertainment at the French court. 'The king told her aloud that was the naughty man who did all the mischief and set him against his mother; at which the queen herself was little less disordered than the chancellor was, who blushed very much.' At the king's request Henrietta allowed Hyde a parting interview before he left France, but only to renew her complaints of his want of respect and her loss of credit (ib. xiv. 62, 67, 93). The Marquis of Ormonde and the chancellor believed 'that the king had nothing at this time (1652) to do but to be quiet, and that all his activity was to consist in carefully avoiding to do anything that might do him hurt, and to expect some blessed conjuncture from the amity of Christian princes, or some such revolution of affairs in England, as might make it seasonable for his majesty to show himself again' (ib. xiii. 140). In the meantime Hyde endeavoured to prevent any act which might alienate English royalists and churchmen. He defeated Berkeley's appointment as master of the court of wards, lest the revival of that institution should lose the king the affection of the gentry; and dissuaded Charles from attending the Huguenot congregation at Charenton, lest it should injure the church. Above all, he opposed any attempt to buy catholic support by promising a repeal of the penal laws or holding out hopes of the king's conversion (cf. Burnet, Own Time, ed. 1836, i. 135; Ranke, Hist. of England, vi. 21).
The first favourable conjuncture which presented itself was the war between the English republic and the United Provinces (1652). Charles proposed a league to the Dutch, and intended to send Hyde as ambassador to Holland, but his overtures were rejected (Rebellion, xiii. 165; Clarendon State Papers, iii. 91-141). When war broke out between Spain and Cromwell, Hyde applied to Don Lewis de Haro, promising in return for aid in restoring his master
'to give the usurper such trouble in his own quarters that he may not have leisure to pursue and supply his new conquests.' Spain agreed to assist Charles with six thousand foot and ships for their transport, 'whenever he could cause a good port town in England to declare for him' (12 April 1656). Thereupon two thousand Irish soldiers in French service deserted and placed themselves at the disposal of Charles II (Rebellion, xv. 22; Clarendon State Papers, iii. 276, 303). But Hyde now as before objected to isolated or premature movements in England, and in the end rested his hopes mainly on some extraordinary accident, such as Cromwell's death or an outbreak of the levellers (Clarendon State Papers, iii. 198, 330, 401). As early as 1649 he had drawn up a paper of considerations on future treaties, showing the advantages of an agreement with the levellers rather than the presbyterians. In 1656 their emissaries applied to Charles, were favourably received, and were promised indemnity for all except actual regicides. Hyde listened to their plots for the assassination of Cromwell without any sign of disapproval (ib. iii. 316, 325, 341, 343; Nicholas Papers, i. 138). On the Protector's death Hyde instructed the king's friends not to stir till some other party rose, then to arm and embody themselves without mentioning the king, and to oppose whichever party was most irreconcilable to his cause. When the Long parliament had succeeded Richard Cromwell, the king's friends were bidden to try to set the army and the parliament by the ears (Clarendon State Papers, iii. 411, 436, 482). The zeal of the royalist leaders in England obliged the king to sanction a rising in August 1659. The date fixed was earlier than Hyde's policy had contemplated, but the fear lest some vigorous dictator should seize power, and the hope of restoring the king without foreign help, reconciled him to the attempt. After its failure he went back to his old policy. 'To have a little patience to sit still till they are in blood' was his advice when Monck and Lambert quarrelled; to obstruct a settlement and demand a free parliament his counsel when the Rump was again restored (ib. iii. 436, 530, 534).
Of Hyde's activity between Cromwell's death and the Restoration the thirteen volumes of his correspondence during that period give ample proof. The heads of all sections of the royalists made their reports to him, and he restrained their impatience, quieted their jealousies, and induced them to work together. He superintended the negotiations, and sanctioned the bargains by which opponents of influence were won to favour the king's return (ib. iii. 417, 443, 497, 673; Burnet, Own Time, i. 61). Hyde's aim was, as it had been throughout, to restore the monarchy, not merely to restore the king. A powerful party wished to impose on Charles II the conditions offered to his father in 1648. Left to himself, Charles might have consented. But, during the negotiations with the levellers in 1656, Hyde had suggested to Ormonde the expedient which the king finally adopted.
'When they are obstinate to insist on an unreasonable proposition that you find it necessary to consent to, let it be with this clause, "If a free parliament shall think fit to ask the same of his majesty"' (Clarendon State Papers, iii. 289). By the declaration of Breda the exceptions to the general amnesty, the limits to toleration, and the ownership of forfeited lands, were left, in accordance with this advice, to be determined by parliament. If the adoption of Hyde's policy rendered some of the king's promises illusory, it insured the co-operation of the two powers whose opposition had caused the civil war.
On the eve of the Restoration an attempt was made to exclude Hyde from power. Catholics and presbyterians regarded him as their greatest enemy, and the French ambassador, Bourdeaux, backed their efforts for his removal. A party in the convention claimed for parliament the appointment of the great officers of state, and wished to deprive Hyde of the chancellorship. But he was strongly supported by the constitutional royalists, and the intrigue completely failed. Hyde entered London with the king, and took his seat in the court of chancery on 1 June 1660 (Campbell, Lives of the Chancellors, iii. 187). As the king's most trusted adviser he became virtually head of the government. He was the most important member of the secret committee of six, which, although styled the committee for foreign affairs, was consulted on all important business before it came to the privy council (Cont. of Life, § 46). For a time he continued to hold the chancellorship of the exchequer, but surrendered it finally to Lord Ashley (13 May 1661; Campbell, iii. 191). Ormonde urged Hyde to resign the chancellorship also, in order to devote himself entirely to the management of public business and to closer attendance on the king. He refused, on the ground that
'England would not bear a favourite, nor any one man who should out of his ambition engross to himself the disposition of public affairs,' adding that first minister 'was a title so newly translated out of French into English, that it was not enough understood to be liked' (ib. p. 85).
On 3 Nov. 1660 Hyde was raised to the peerage by the title of Baron Hyde of Hindon, and at the coronation was further created Viscount Cornbury and Earl of Clarendon (20 April, 1661; Lister, ii. 81). The king gave him 20,000l. to support his new dignity, and offered him also a grant of ten thousand acres in the great level of the Fens. Clarendon declined the land, saying that if he allowed the king to be so profuse to himself he could not prevent extravagant bounties to others. But he accepted at various times smaller estates: ten acres of land in Lambeth, twenty in Westminster, and three manors in Oxfordshire forfeited by the attainder of Sir John Danvers [q.v.] In 1662 he was granted, without his knowledge, 20,000l. in rents due from certain lands in Ireland, but never received more than 6,000l. of this sum, and contracted embarrassing obligations in consequence. Though public opinion accused him of avarice, and several articles of his impeachment allege pecuniary corruption, it is plain that Clarendon made no attempt to enrich himself. Charles mocked at his scruples, but the legitimate profits of the chancellorship were large, and they sufficed him (Cont. p. 180; Lister, ii. 81; iii. 522).
The revelation (3 Sept. 1660) of the secret marriage of the Duke of York to Clarendon's daughter Anne [q.v.] seemed to endanger, but really confirmed his power. According to his own account he was originally informed of it by the king, received the news with passionate indignation, urged his daughter's punishment, and begged leave to resign. Afterwards, finding the marriage perfectly valid, and public opinion less hostile than he expected, he adopted a more neutral attitude. On his part the king was reluctant to appeal to parliament to dissolve the marriage, was resolved not to part with Clarendon, and hoped through Anne's influence to keep the duke's public conduct under some control. Accordingly he supported the duke in recognising the marriage, which was publicly owned in December 1660 (Cont. pp. 48-76; Burnet, i. 302; Ranke, iii. 340; Lister, ii. 68). Clarendon's position thus seemed to be rendered unassailable. But at bottom his views differed widely from the king's. He thought his master too ready to accept new ideas, and too prone to take the French monarchy as his model. His own aim was to restore the constitution as it existed before the civil war. He held that the secret of good government lay in a well-chosen and powerful privy council.
At present king and minister agreed on the necessity of carrying out the promises made at Breda. Clarendon wished the convention to pass the Indemnity Act as quickly as possible, although, like the king, he desired that all actual regicides should be excepted. He was the spokesman of the lords in their dispute with the commons as to the number of exceptions (Old Parl. Hist. xxii. 435, 446, 487). But of the twenty-six regicides condemned in October 1660 only ten were executed, and when in 1661 a bill was introduced for the capital punishment of thirteen more, Charles and the chancellor contrived to prevent it from passing (Lister, ii. 117, iii. 496; Clarendon State Papers, iii. App. xlvi). In his speech at the opening of the parliament of 1661, Clarendon pressed for a confirmation of the acts passed by the convention. He steadily maintained the Act of Indemnity, and opposed the provisos and private bills by which the angry royalists would have destroyed its efficacy. The merit of this firmness Hyde attributes partly to the king. According to Burnet, 'the work from beginning to end was entirely' Clarendon's. At all events the chancellor reaped most of the odium caused by the comprehensiveness of the Act of Indemnity (Burnet, i. 193, 297; Lords' Journals, xi. 240, 379; Cont. pp. 130, 184, 285; Pepys, 20 March 1669). He believed that 'the late rebellion could never be extirpated and pulled up by the roots till the king's regal power should be fully vindicated and the usurpations in both houses of parliament since the year 1640 disclaimed.' In declaring the king's sole power over the militia (1661), and in repealing the Triennial Act (1664), parliament fulfilled these desires (Cont. pp. 284, 510, 990). On ecclesiastical questions Charles and the chancellor were less in harmony. Clarendon's first object was to gradually restore the church to its old position. He seems to have entertained a certain doubt whether the king's adherence to episcopacy could be relied upon, and was anxious to give the presbyterians no opportunity of putting pressure upon him. Hence the anxiety to provide for the appointment of new bishops shown by his correspondence with Barwick in 1659, and the rapidity with which in the autumn of 1660 vacant sees were filled up. In 1661, when the Earl of Bristol, in the hope of procuring some toleration for the catholics, prevailed on the king to delay the progress of the bill for restoring the bishops to their place in the House of Lords, Clarendon's remonstrances converted Charles and frustrated the intrigue (ib. p. 289; Clarendon State Papers, iii. 613, 732; Life of Dr. Barwick, ed. 1724, p.205; Ranke, iii. 370).
On the question of the church lands Clarendon's influence was equally important. After the convention had decided that church and crown lands should revert to their owners, a commission was appointed to examine into sales, compensate bona-fide purchasers, and make arrangements between the clergy and the tenants. Clarendon, who was a member of the commission, admits that it failed to prevent cases of hardship, and lays the blame on the clergy. Burnet censures Clarendon himself for not providing that the large fines which the bishops raised by granting new leases should be applied to the use of the church at large (Own Time, i. 338; Cont. p.189; Somers Tracts, vii. 465).
Of the two ways of establishing the liberty for tender consciences promised in the Declaration of Breda the king preferred toleration, Hyde comprehension (cf. Lords' Journals, xi. 175). In April 1660 he sent Dr. Morley to England to discuss with the presbyterian leaders the terms on which reunion was possible (Clarendon State Papers, iii. 727, 738). After the Restoration bishoprics were offered to several presbyterians, including Baxter, who records the kindness with which Clarendon treated him (Reliquiæ Baxterianæ, ii. 281, 302, 381). Clarendon drafted the king's declaration on ecclesiastical affairs (25 Oct. 1660), promising limited religion (Lister, ii. 295-303; Lords' Journals, xi. 237, 242, 476, 688).
The settlement of Scotland and Ireland, and the course of colonial history also, owed much to Clarendon. The aims of his Scottish policy were to keep Scotland dependent on England and to re-establish episcopacy. He opposed the withdrawal of the Cromwellian garrisons, and regretted the undoing of the union which Cromwell had effected. Mindful of the ill results caused by the separation of Scottish and English affairs, which the first two Stuarts had so jealously maintained, he proposed to set up at Whitehall a council of state for Scotland to control the government at Edinburgh (Rebellion, ii. 17; Cont. pp. 92-106; Burnet, i. 202). His zeal to restore episcopacy in Scotland was notorious. Baillie describes him as corrupting Sharp and overpowering Lauderdale, the two champions on whom the presbyterian party had relied (Letters, iii. 464, 471; Burnet, i. 237). At Clarendon's persuasion the English bishops left Sharp to manage the reintroduction of episcopacy (ib. i. 240). Middleton's selection as the king's commissioner was largely due to his friendship with the chancellor (cf. ib. pp. 273, 365), and Middleton's supersession by Lauderdale in May 1663 put an end to Clarendon's influence over Scottish affairs (Memoir of Sir George Mackenzie, pp. 76, 112;
'Lauderdale and the Restoration in Scotland,' Quarterly Review, April 1884).
Hyde's share in the settlement of Ireland is less easy to define. The fifteenth article of his impeachment alleges that he 'procured the bills for the settlement of Ireland, and received great sums of money for the same' (Miscellaneous Tracts, p. 39). His answer is that he merely acted as one member of the Irish committee, and had no special responsibility for the king's policy; but his council-notes to Charles seem to disprove this plea (Cont. p. 277; Clarendon State Papers, iii. App. xlvii). Sympathising less strongly with the native Irish than the king did, he yet supported the settlement-commissioners against the clamour of the Irish parliament. 'No man,' he wrote to the Earl of Anglesey, 'is more solicitous to establish Ireland upon a true protestant English interest than I am, but there is as much need of temper and moderation and justice in the composing that establishment as ever was necessary in any affair of this world' (ib. iii. App. xxxiv, xxxvi). He was anxious that the king should carry out his original intention of providing for deserving Irishmen out of the confiscated lands which had fallen to the crown, but was out-generalled by the Earl of Orrery (Cont. p. 272). His influence in Ireland increased after the Duke of Ormonde became lord-lieutenant (December 1661), and he supported Ormonde's policy. He did not share the common jealousy of Irish trade, and opposed the prohibition of the importation of Irish cattle (1665-6) with a persistency which destroyed his remaining credit with the English House of Commons (Carte, Ormonde, ed. 1851, iv. 244, 263-7; Cont. pp. 9, 55-9, 89).
In the extension of the colonial dominions of England, and the institution of a permanent system of colonial administration, Hyde took a leading part. He was one of the eight lords proprietors to whom on 24 March 1663 the first Carolina charter was granted, and the settlement they established at Cape Fear was called after him Clarendon County. He helped Baxter to procure the incorporation of the Company for the Propagation of the Gospel in New England, of which he was himself a member (7 Feb. 1662). He joined the general council for foreign plantations (1 Dec. 1660), and the special committee of the privy council charged to settle the government of New England (17 May 1661; Cal. State Papers, Colonial, 1574-1660 p. 492, 1661-8 pp. 30, 71, 125; Reliquiæ Baxterianæ, ii. 290). The policy, which Clarendon probably inspired, endeavoured
'to enforce the Acts of Parliament for the control of the shipping trade, to secure for members of the Church of England civil rights equal to those enjoyed by nonconformists, and to subordinate the Colonial jurisdiction by giving a right of appeal to the Crown in certain cases' (Doyle, The English in America; The Puritan Colonies, ii. 150). To prevent the united resistance of the New England states he supported measures to divide them from each other and to weaken Massachusetts (Cal. State Papers, Colonial, 1661-1668, pp. 198-203, 377; Hutchinson, History of Massachusetts, ed. 1795, i. 544). In dealing with the colonies circumstances made Clarendon tolerant. He granted freedom of conscience to all settlers in Carolina, and instructed the governors of Virginia and Jamaica not to molest nonconformists (Cal. State Papers, Colonial, 1661-8, p. 155; Stoughton, Ecclesiastical History of England, iii. 310). The worst side of his policy is shown in his support of the high-handed conduct of Lord Willoughby in Barbadoes, which was made the basis of the fifteenth article of his impeachment in 1667.
Hyde, although playing a conspicuous part in foreign affairs, exerted little influence upon them. His views were purely negative. He thought a firm peace between the king and his neighbours
'necessary for the reducing his own dominions into that temper of obedience they ought to be in,' and desired to avoid foreign complications (Cont. p. 1170; Courtenay, Life of Temple, i. 127). But his position and his theory of ministerial duty obliged him to accept the responsibility of a policy which he did not originate, and a war of which he disapproved.
Hyde wished the king to marry, but was anxious that he should marry a protestant. The marriage between Charles and Catherine of Braganza was first proposed by the Portuguese ambassador to the king in the summer of 1660, and by the king to the lord chancellor (Ranke, iii. 344). Carte, on the authority of Sir Robert Southwell, describes Clarendon as at first remonstrating against the choice, but finally yielding to the king's decision (Carte, Ormonde, iv. 107, ed. 1851; Burnet, Own Time, i. 300). The council unanimously approved of the marriage, and the chancellor on 8 May 1661 announced the decision to parliament, and prepared a narrative of the negotiations (Lords' Journals, xi. 243; Cont. pp. 149-87; Lister, ii. 126, iii. 119, 513). When it became evident that the queen would give no heir to the throne, it was reported that Clarendon knew she was incapable of bearing children and had planned the marriage to secure the crown for his daughter's issue (Reresby, Memoirs, p. 53, ed. Cartwright; Pepys, 22 Feb. 1664). Clarendon refused a bribe of 10,000l. which Bastide the French agent offered him, but stooped to solicit a loan of 50,000l. for his master and a promise of French support against domestic disturbances. The necessities of the king led to the idea of selling Dunkirk, a transaction which the eleventh article of Clarendon's impeachment charged him with advising and effecting. In his 'Vindication' he replied that the parting with Dunkirk was resolved upon before he heard of it, and that 'the purpose was therefore concealed from him because it was believed he was not of that opinion' (Miscellaneous Tracts, p. 33). The authorship of the proposal was subsequently claimed by the Earl of Sandwich, and is attributed by Clarendon to the Earl of Southampton (Cont. p. 455; Pepys, 25 Feb. 1666). Clarendon had recently rebuked those who murmured at the expense of Dunkirk, and had enlarged on its value to England. But since it was to be sold, he advised that it should be offered to France, and conducted the bargain himself. The treaty was signed on 27 Oct. 1662 (Lister, ii. 167; Ranke, iii. 388; Clarendon State Papers, iii. App. xxi-ii, xxv). Bristol charged him with having got 100,000l. by the transaction, and on 20 Feb. 1665 Pepys notes that the common people had already nicknamed the palace which the chancellor was building near St. James's, 'Dunkirk House.' At the beginning of the reign Mazarin had regarded Clarendon as the most hostile to France of all the ministers of Charles II, but he was now looked upon as the greatest prop of the French alliance (Chéruel, Mazarin, iii. 291, 320-31; Ranke, iii. 339).
Contrary to his intentions, Clarendon also became engaged in the war with Holland. When his administration began, there were disputes of long standing with the United Provinces, and the Portuguese match threatened to involve England in the war between Holland and Portugal. Clarendon endeavoured to mediate between those powers, and refused to allow the English negotiations to be complicated by consideration of the interests of the prince of Orange. He desired peace with Holland because it would compose people's minds in England, and discourage the seditious party which relied on Dutch aid. A treaty providing for the settlement of existing disputes was signed on 4 Sept. 1662. De Witt wrote that it was Clarendon's work, and begged him to confirm and strengthen the friendly relations of the two peoples (Pontalis, Jean De Witt, i. 280; Lister, iii. 167, 175). Amity might have been maintained had the control of English foreign policy been in stronger hands. The king was opposed to war, and convinced by the chancellor's arguments against it (Cont. pp. 450-54). But Charles and Clarendon allowed the pressure of the trading classes and the Duke of York to involve them in hostilities which made war inevitable. Squadrons acting under instructions from the Duke of York, and consisting partly of ships lent from the royal navy, captured Cape Corso (April 1664) and other Dutch establishments on the African coast, and New Amsterdam in America (29 Aug. 1664). The Dutch made reprisals, and war was declared on 22 Feb. 1665. Clarendon held that the African conquest had been made
'without any shadow of justice,' and asserted that, if the Dutch had sought redress peaceably, restitution would have been granted (Lister, iii. 347). Of the attack on the Dutch settlements in America he took a different view, urging that they were English property usurped by the Dutch, and that their seizure was no violation of the treaty. He was fully aware of the intended seizure of the New Netherlands, and appears to have helped the Duke of York to make out his title to that territory (Cal. State Papers, Colonial, 1661-1668, pp. 191, 200; Brodhead, History of New York, ii. 12, 15; Life of James II, i. 400). The narrative of transactions in Africa, laid before parliament on 24 Nov. 1664, was probably his work. After the war began Clarendon talked openly of requiring new cessions from the Dutch, and asserted in its extremest form the king's dominion over the British seas (Lords' Journals, xi. 625, 684; Lister, iii. 424; Ranke, iii. 425; Pepys, 20 March 1669). Rejecting the offered mediation of France, he dreamt of a triple alliance between England, Sweden, and Spain, 'which would be the greatest act of state and the most for the benefit of Christendom that this age hath produced' (Lister, iii. 422; Lords' Journals, xi. 488). Later still, when France had actively intervened on the side of Holland, Clarendon's eyes became open to the designs of Louis XIV on Flanders, and he claims to have prepared the way for the triple alliance (Cont. p. 1066). But the belief that he was entirely devoted to French interests was one of the chief obstacles to the conclusion of any league between England and Spain (Klopp, Der Fall des Hauses Stuart, i. 145, 192; Courtenay, Life of Temple, i. 128). Nor was that belief—erroneous though it was—without some justification. When Charles attempted to bring the war to an end by an understanding with Louis XIV, Clarendon drew the instructions of the Earl of St. Albans (January 1667); and though it is doubtful whether he was cognisant of all his master's intentions, he was evidently prepared to promise that England should remain neutral while France seized Flanders.
In June 1667 the Dutch fleet burnt the ships in the Medway, and on 21 July the treaty of Breda was concluded. Public opinion held Clarendon responsible for the ill-success of the war and the ignominious peace. On the day when the Dutch attacked Chatham, a mob cut down the trees before his house, broke his windows, and set up a gibbet at his gate (Pepys, 14 June 1667; cf. ib. 24 June). According to Clarendon's own account, he took very little part in the conduct of the war, 'never pretending to understand what was fit to be done,' but simply concurring in the advice of military and naval experts (Cont. p. 1026). Clarendon's want of administrative skill was, however, responsible for much. He disliked the new system of committees and boards which the Commonwealth had introduced, and clung to the old plan of appointing great officers of state, as the only one suitable to a monarchy. He thought it necessary to appoint men of quality who would give dignity to their posts, and underrated the services of men of business, while his impatience of opposition and hatred of innovations hindered administrative reform.
As the needs of the government increased, the power of the House of Commons grew, and Clarendon's attempt to restrict their authority only diminished his own. He opposed the proviso for the appropriation of supplies (1665) 'as an introduction to a commonwealth and not fit for a monarchy.' He opposed the bill for the audit of the war accounts (1666) as 'a new encroachment which had no bottom,' and urged the king not to 'suffer parliament to extend its jurisdiction.' He opposed the bill for the prohibition of the Irish cattle trade (1666) as inexpedient in itself, and because its provisions robbed the king of his dispensing power; spoke slightingly of the House of Commons, and told the lords to stand up for their rights. In 1666, finding the House of Commons 'morose and obstinate,' and 'solicitous to grasp as much power and authority as any of their predecessors had done,' he proposed a dissolution, hoping to find a new house more amenable. Again, in June 1667 he advised the king to call a new parliament instead of convening the existing one, which had been prorogued till October (Cont. pp. 964, 1101; Lister, ii. 400). This advice and the immediate prorogation of parliament when it did meet (25-9 July 1667) deeply incensed the commons, and gave Clarendon's enemies an opportunity of asserting that he had advised the king to do without parliaments altogether (Pepys, 25 July 1667; Lister, ii. 402). Still more serious, with men who remembered the Protectorate, was the charge that he had designed to raise a standing army and to govern the kingdom by military power. What gave colour to the rumour was that, during the invasion of June 1667, Clarendon had recommended the king to support the troops guarding the coast by the levy of contributions on the adjacent counties until parliament met (Cont. p. 1104). In private the king himself owned the charge was untrue, but refused to allow his testimony to be used in the chancellor's defence. Popular hatred turned against Clarendon, and poets threatened Charles with the fate of his father unless he parted with the obnoxious minister (Marvell, Last Instructions to a Painter, l. 870).
The court in general had long been hostile to Clarendon, and the king's familiar companions took every opportunity of ridiculing him. Lady Castlemaine and he were avowed enemies. The king suspected him of frustrating his designs on Miss Stewart, and was tired of his reproofs and remonstrances. 'The truth is,' explained Charles to Ormonde, 'his behaviour and humour was grown so unsupportable to myself and to all the world else, that I could no longer endure it, and it was impossible to live with it, and do those things with the parliament that must be done, or the government will be lost' (Ellis, Original Letters, 2nd ser. iv. 39). The king therefore decided to remove the chancellor before parliament again met, and commissioned the Duke of York to urge him to retire of his own accord. Clarendon obtained an interview at Whitehall on 26 Aug. 1667, and told the king that he was not willing to deliver up the seal unless he was deprived of it; that his deprivation of it would mean ruin, because it would show that the king believed him guilty; that, being innocent of transgressing the law, he did not fear the justice of the parliament.
'Parliaments,' he said, 'were not formidable unless the king chose to make them so; it was yet in his own power to govern them, but if they found it was in theirs to govern him, nobody knew what the end would be.' The king did not announce his decision, but seemed deeply offended by some inopportune reflections on Lady Castlemaine. For two or three days the chancellor's friends hoped the king would change his purpose, but finally Charles declared
'that he had proceeded too far to retire, and that he should be looked upon as a child if he receded from his purpose.' On 30 Aug. Sir William Morrice was sent to demand the great seal. When Morrice brought it back to Whitehall, Charles was told by a courtier that 'this was the first time he could ever call him king of England, being freed from this great man' (Pepys, 27 Aug., 7 Oct. 1667; Cont. p. 1134; Lister, iii. 468). On Clarendon himself the blow fell with crushing severity (cf. Carte, Ormonde, v. 57), but he confidently expected to vindicate himself when parliament met.
The next session opened on 10 Oct. 1667. The king's speech referred to the chancellor's dismissal as an act which he hoped would lay the foundation of greater confidence between himself and parliament. The House of Commons replied by warm thanks, which the king received with a promise never to employ the Earl of Clarendon again in any public affairs whatsoever (16 Oct.). Clarendon's enemies, however, were not satisfied, and determined to arraign him for high treason. The attack was opened by Edward Seymour on 26 Oct., and on 29 Oct. a committee was appointed to draw up charges. Its report (6 Nov.) contained seventeen heads of accusation, but the sixteenth article, which accused Clarendon of betraying the king's counsels to his enemies, was the only one which amounted to high treason. The impeachment was presented to the House of Lords on 12 Nov., but they refused (14 Nov.) to commit Clarendon as requested,
'because the House of Commons have only accused him of treason in general, and have not assigned or specified any particular treason.' As they persisted in this refusal, the commons passed a resolution that 'the non-compliance of the lords was an obstruction to the public justice of the kingdom and a precedent of evil and dangerous consequences' (2 Dec.). The dispute between the two houses grew so high, that it seemed as if all intercourse between them would stop, and a paralysis of the government ensue (Lister, iii. 474). The king publicly supported the chancellor's prosecutors, while the Duke of York stood by his father-in-law, but an attack of small-pox soon deprived the duke of any further power to interfere. As it was, York's conduct had increased the hostility of the chancellor's enemies, and they determined to secure themselves against any possibility of his return to power if James became king (4 Nov. 1667; Life of James II, i. 433; Cont. p. 1177).
By the advice of friends Clarendon wrote to the king protesting his innocence of the crimes alleged in his impeachment.
'I do upon my knees,' he added, 'beg your pardon for any overbold or saucy expressions I have ever used to you … a natural disease in old servants who have received too much countenance.' He begged the king to put a stop to the prosecution, and to allow him to spend the small remainder of his life in some parts beyond seas (ib. p. 1181). Charles read the letter, burnt it, and observed 'that he wondered the chancellor did not withdraw himself.' He was anxious that Clarendon should withdraw, but would neither command him to go nor grant him a pass for fear of the commons. Indirectly, through the Duke of York and the Bishop of Hereford, he urged him to fly, and promised
'that he should not be in any degree prosecuted, or suffer in his honour or fortune by his absence' (ib. p. 1185). Relying on this engagement, and alarmed by the rumours of a design to prorogue parliament and try him by a jury of peers, Clarendon left England on the night of 29 Nov., and reached Calais three days later. With Clarendon's flight the dispute between the two houses came to an end. The lords accepted it as a confession of guilt, concurred with the commons in ordering his petition to be burnt, and passed an act for his banishment, by which his return was made high treason and his pardon impossible without the consent of both houses (19 Dec. 1667; Lister, ii. 415-44, iii. 472-77; Cont. pp. 1155-97; Carte, Ormonde, v. 58; Lords' Journals, xii. 178; Commons' Journals, ix. 40-3).
The rest of Clarendon's life was passed in exile. From Calais he went to Rouen (25 Dec.), and then back to Calais (21 Jan. 1668), intending by the advice of his friends to return to England and stand his trial. In April 1668 he made his way to the baths of Bourbon, and thence to Avignon (June 1668). For nearly three years he lived at Montpelier (July 1668-June 1671), removing to Moulins in June 1671, and finally to Rouen in May 1674 (Lister, ii. 478, 481, 487; Cont. p. 1238). During the first part of his exile his hardships and sufferings were very great. At Calais he lay for three months dangerously ill. At Evreux, on 23 April 1668, a company of English sailors in French service, holding Clarendon the cause of the non-payment of their English arrears, broke into his lodgings, plundered his baggage, wounded several of his attendants, and assaulted him with great violence. One of them stunned him by a blow with the flat of a sword, and they were dragging him into the courtyard to despatch him, when he was rescued by the town guard (ib. pp. 1215, 1225). In December 1667 Louis XIV, anxious to conciliate the English government, ordered Clarendon to leave France, and, in spite of his illness, repeated these orders with increasing harshness. After the conclusion of the Triple League had frustrated the hope of a close alliance with England, the French government became more hospitable, but Clarendon always lived in dread of fresh vexations (Cont. pp. 1202-1220, 1353). The Archbishop of Avignon, the governor and magistrates of Montpelier, and the governor of Languedoc, treated him with great civility, and he was cheered by the constant friendship of the Abbé Montague and Lady Mordaunt. His son, Laurence, was twice allowed to visit him, and Lord Cornbury was with him when he died (Correspondence of Henry Hyde, Earl of Clarendon, ed. Singer, i. 645; Lister, iii. 488).
To find occupation, and to divert his mind from his misfortunes, Clarendon 'betook himself to his books,' and studied the French and Italian languages. Never was his pen more active than during these last seven years of his life. His most important task was the completion and revision of his ' History of the Rebellion ' together with the composition of his autobiography. In June 1671, and again in August 1674, he petitioned for leave to return to England, and begged the queen and the Duke of York to intercede for him (Clarendon State Papers, iii. App. xliv, xlv). These entreaties were unanswered, and he died at Rouen on 9 Dec. 1674 (Lister, ii. 488). He was buried in Westminster Abbey on 4 Jan. 1675, at the foot of the steps ascending to Henry VII's chapel, where his second wife had been interred on 17 Aug. 1667 (Chester, Westminster Abbey Register, pp. 167, 185). His two sons, Henry, earl of Clarendon (1638-1709), and Laurence, earl of Rochester (1642-1711), and his daughter, Anne, duchess of York (1637-1671), are separately noticed. A third son, Edward Hyde, baptised 1 April 1645, died on 10 Jan. 1665, and was also buried in Westminster Abbey (ib. p. 161). Clarendon's will is printed in Lister's ' Life of Clarendon ' (ii. 489).
As a statesman, Clarendon's consistency and integrity were conspicuous through many vicissitudes and amid much corruption. He adhered faithfully to the principles he professed in 1641, but the circle of his ideas was fixed then, and it never widened afterwards. No man was fitter to guide a wavering master in constitutional ways, or to conduct a return to old laws and institutions; but he was incapable of dealing with the new forces and new conditions which twenty years of revolution had created.
Clarendon is remarkable as one of the first Englishmen who rose to office chiefly by his gifts as a writer and a speaker. Evelyn mentions his ' eloquent tongue,' and his ' dexterous and happy pen.' Some held that his literary style was not serious enough. Burnet finds a similar fault in his speaking. 'He spoke well ; his style had no flow [flaw ?] in it, but had a just mixture of wit and sense, only he spoke too copiously; he had a great pleasantness in his spirit, which carried him sometimes too far into raillery, in which he showed more wit than discretion.' Pepys admired his eloquence with less reserve.
'I am mad in love with my lord chancellor, for he do comprehend and speak out well, and with the greatest ease and authority that ever I saw man in my life. … His manner and freedom of doing it as if he played with it, and was informing only all the rest of the company, was mighty pretty' (cf. Warwick, Memoirs, p. 195; Evelyn, ii. 296; Pepys, Diary, 13 Oct. 1666).
Apart from his literary works, the mass of state papers and declarations drawn by his hand and his enormous correspondence testify to his unremitting industry. His handwriting is small, cramped, and indistinct. During his residence in Jersey 'he writ daily little less than one sheet of large paper with his own hand,' and seldom spent less than ten hours a day between his books and his papers (Life, v. 5; Clarendon State Papers, ii. 375). The only present he could be induced to accept was a set of all the books printed at the Louvre (Evelyn, iii. 346, 446; Clarendon State Papers, iii. App. xi. xiii). Clarendon was an assiduous reader of the Roman historians. He quotes Tacitus continually in the 'History of the Rebellion,' and modelled his character of Falkland on that of Agricola. He was familiar with the best historical writers of his own period, and criticises Strada, Bentivoglio, and Davila with acuteness. Of English writers, Hooker, whose exordium he imitates in the opening of the 'History of the Rebellion,' seems to have influenced him most. But he did not disdain the lighter literature of his age, praised the amorous poems of Carew, prided himself on the intimacy of Ben Jonson, and thought Cowley had made a flight beyond all other poets. The muses, as Dryden remarks, were once his mistresses, and boasted his early courtship; but the only poetical productions of Clarendon which have survived are some verses on the death of Donne, and the lines prefixed to Davenant's 'Albovine' in 1629.
Clarendon's 'History' is the most valuable of all the contemporary accounts of the civil wars. Clarendon was well aware of one cause of its superiority. 'It is not,' he says, ' a collection of records, or an admission to the view and perusal of the most secret letters and acts of state [that] can enable a man to write a history, if there be an absence of that genius and spirit and soul of an historian which is contracted by the knowledge and course and method of business, and by conversation and familiarity in the inside of courts, and [with] the most active and eminent persons in the government' (Tracts, p. 180). But both from a literary and from an historical point of view the book is singularly unequal. At its best Clarendon's style, though too copious, is strong and clear, and his narrative has a large and easy flow. Often, however, the language becomes involved, and the sentences are encumbered by parentheses. As a work of art the history suffers greatly from its lack of proportion. Some parts of the civil war are treated at disproportionate length, others almost entirely neglected. The progress of the story is continually broken by constitutional digressions and lengthy state papers. The 'History' was, however, originally intended rather as an exact memorial of passages ' than 'a digested relation.' It was not to be published as it stood, but to serve as 'a store' out of which 'somewhat more proper for the public view' might be collected (Rebellion, i. 3). The ' History ' itself is to some extent a manifesto, addressed, in the first place, to the king, but appealing still more to posterity. It was designed to set forth a policy as well as to relate events, and to vindicate not so much the king as the constitutional royalists. To celebrate the memories of eminent and extraordinary persons ' Clarendon held one of the principal ends of history. Hence the portraits which fill so many of his pages. His characters are not simply bundles of characteristics, but consistent and full of life, sketched sometimes with affection, sometimes with light humour. Evelyn described them as 'so just, and tempered without the least ingredient of passion or tincture of revenge, yet with such natural and lively touches, as shew his lordship well knew not only the persons' outsides but their very interiors; whilst he treats the most obnoxious who deserved the severest rebuke, with a becoming generosity and freedom, even where the ill-conduct of those of the pretended loyal party, as well as of the most flagitious, might have justified the worst that could be said of their miscarriages and demerits.' Clarendon promised Berkeley that there should not be 'any untruth nor partiality towards persons or sides ' in his narrative (Macray, Clarendon, i., preface, p. xiii), and he impartially points out the faults of his friends. But lack of insight and knowledge prevented him from recognising the virtues of opponents. He never understood the principles for which presbyterians and independents were contending. In his account of the causes of the rebellion he under-estimates the importance of the religious grievances, and attributes too much to the defects of the king's servants, or the personal ambition of the opposition leaders.
As a record of facts the 'History of the Rebellion' is of very varying value. It was composed at different times, under different conditions, and with different objects. Between 1646 and 1648 Clarendon wrote a ' History of the Rebellion' which ended with the defeat of Hopton at Alresford in March 1644. In July 1646 he wrote, by way of defending the prince's council from the aspersions of Goring and Grenville, an account of the transactions in the west, which is inserted in book ix. Between 1668 and 1670 he wrote a 'Life' of himself, which extended from 1609 to 1660. In 1671 he reverted to his original purpose, took up the unfinished ' History ' and the finished 'Life,' and wove them together into the narrative published as the 'History of the Rebellion.' During this process of revision he omitted passages from both, and made many important additions in order to supply an account of public transactions between 1644 and 1660, which had not been treated with sufficient fulness in his
'Life.' As the original 'History' was written when Clarendon's memory of events was freshest, the parts taken from it are much more accurate than those taken from the 'Life.' On the other hand, as the 'Life' was written simply for his children, it is freer in its criticisms, both of men and events. Most of the characters contained in the 'History of the Rebellion' are extracted from the 'Life.'
The authorities at Clarendon's disposal when the original ' History ' was written supply another reason for its superior accuracy. He obtained assistance from many quarters. From Nicholas he received a number of official papers, and from Hopton the narrative of his campaigns, which forms the basis of the account of the western war given in books vi. and vii. At the king's command Sir Edward Walker sent him relations of the campaigns of 1644 and 1645, and many cavaliers of less note supplied occasional help. When the ' Life ' was written Clarendon was separated from his friends and his papers, and relied upon his memory, a memory which recalled persons with great vividness, but confused and misrepresented events. The additions made in 1671 are more trustworthy, because Clarendon had in the interval procured some of the documents left in England. Ranke's 'History of England' (translation, vi. 3-29) contains an estimate of the 'History of the Rebellion,' and Mr. Gardiner criticises Clarendon's general position as an historian (History of the Great Civil War, ii. 499). George Grenville, lord Lansdowne, attempted to vindicate his relative, Sir Richard Grenville, from Clarendon's censures (Lansdowne, Works, 1732, i. 503), and Lord Ashburnham examines minutely Clarendon's account of John Ashburnham (A Narrative by John Ashburnham, 2 vols. 1830). An excellent dissertation by Dr. Ad. Buff deals with parts of book vi. of the 'Rebellion' (Giessen, 1868).
The 'True Historical Narrative of the Rebellion and Civil Wars in England,' generally termed the ' History of the Rebellion,' was first published at Oxford in 1702-4, in three folio volumes, with an introduction and dedications by Laurence, earl of Rochester. The original manuscripts of the work were given to the university at different dates between 1711 and 1753 (Macray, Annals of the Bodl. Lib. p.225). The first edition was printed, not from the originals, but from a transcript of them made under Clarendon's supervision by his secretary, William Shaw. This was copied for the printers under the supervision of the Earl of Rochester, who received some assistance in editing it from Dr. Aldrich, dean of Christ Church, and Sprat, bishop of Rochester. The editors, in accordance with the discretion given them by Clarendon's will, softened and altered a few expressions, but made no material changes in the text. A few years later, however, John Oldmixon published a series of attacks on them, and on the university, for supposed interpolations and omissions (Clarendon and Whitelocke compared, 1727; History of England during the Reigns of the Royal House of Stuart, preface, pp. 9, 227). These charges, based on utterly worthless evidence, were refuted by Dr. John Burton in 'The Genuineness of Lord Clarendon's History vindicated,' 1744, 8vo. Dr. Bandinel's edition, published in 1826, was the first printed from the original manuscripts. It restores the phrases altered by the editors, and adds in the appendix passages omitted by Clarendon in the revision of 1671-2. The most complete and correct text is that edited and annotated by the Rev. W. D. Macray (Oxford, 1888, 6 vols., 8vo). An account of the manuscripts of the 'History of the Rebellion' is given in the prefaces of Dr. Bandinel and Mr. Macray, and in Lewis's ' Lives of the Contemporaries of Lord Clarendon' (vol. i. Introduction, pt. ii.)
A list of editions of the 'History' is given in Bliss's edition of Wood (Athenæ Oxon. iii. 1017). A supplement to the 'History of the Rebellion,' containing eighty-five portraits and illustrative papers, was published in 1717, 8vo. The Sutherland 'Clarendon' presented to the Bodleian Library in 1837 contains many thousand portraits, views, and maps, illustrating the text of Clarendon's historical works. A catalogue of the collection (2 vols. 4to) was published in 1837 (Macray, Annals of the Bodl. Lib. p.331). The work usually known as the 'Life of Clarendon' was originally published in 1759 ('The Life of Edward, Earl of Clarendon. … Being a Continuation of the History of the Grand Rebellion from the Restoration to his Banishment in 1667. Written by Himself,' Oxford, 1759, folio). It consists of two parts: the ' Life ' proper, written between 1668 and 1670, dealing with the period before 1660; and the 'Continuation,' commenced in 1672. The first consists of that portion only of the original life which was not incorporated in the 'History of the Rebellion.' The second contains an account of Clarendon's ministry and second exile. The 'History of the Reign of King Charles II, from the Restoration to the end of the year 1667,' 2 vols. 4to, n.d., is a surreptitious edition of the last work, published about 1755 (Lowndes, p. 468).
The minor works of Clarendon are the following: 1. 'The Difference and Disparity between the Estate and Condition of George, Duke of Buckingham, and Robert, Earl of Essex' (Reliquiæ Wottonianæ, ed. 1685, p.185). 2. Speeches delivered in the Long parliament on the lord president's court and council in the north, and on the impeachment of the judges (Rushworth Historical Collections, iv. 230, 333). 3. Declarations and manifestos written for Charles I between 1642 and 1648. These are too numerous to be mentioned separately; the titles of the most important have been already given. Many are contained in the 'History of the Rebellion ' itself, and the rest may be found in Rushworth's 'Collections,' in Husband's Collection of Ordinances and Declarations ' (1643), and in the old ' Parliamentary History' (24 vols. 1751-62). 4. Anonymous pamphlets written on behalf of the king. 'Two Speeches made in the House of Peers on Monday, 19 Dec. 1642 ' (Somers Tracts, ed. Scott, vi. 576). 'Transcendent and Multiplied Rebellion and Treason, discovered by the Laws of the Land,' 1645;
'A Letter from a True and Lawful Member of Parliament … to one of the Lords of his Highness's Council,' 1656 (see Cal. Clarendon State Papers, i. 295, iii. 79; History of the Rebellion, ed. Macray, vi. 1, xiv. 151). 5. 'Animadversions on a Book entitled Fanaticism fanatically imputed to the Church of England, by Dr. Stillingfleet, and the imputation refuted and retorted by Sam. Cressy,' 1674, 8vo (Lister, ii. 567). 6. 'A Brief View and Survey of the dangerous and pernicious errors to Church and State in Mr. Hobbes's book entitled Leviathan,' Oxford, 1676 (see Clarendon State Papers, iii. App. p. xlii). 7. 'The History of the Rebellion and Civil War in Ireland,' 1720, 8vo. This is a vindication of Charles I and the Duke of Ormonde from the Bishop of Ferns and other catholic writers. It was made use of by Nalson in his 'Historical Collections,' 1682, and by Borlase in his 'History of the Irish Rebellion,' 1680. A manuscript is in the library of Trinity College, Dublin (Hist. MSS. Comm. 8th Rep. p. 583). 8. 'A Collection of several Tracts of Edward, Earl of Clarendon,' 1727, fol. This contains (a) the 'Vindication' written by Clarendon in 1668 in answer to the articles of impeachment against him, the substance of which is embodied in the 'Continuation;' (b) 'Reflections upon several Christian Duties, Divine and Moral, by way of Essays;' (c)
'Two Dialogues on Education, and on the Respect due to Age;' (d) 'Contemplations on the Psalms.' 9. 'Religion and Policy, and the Countenance and Assistance each should give to the other, with a Survey of the Power and Jurisdiction of the Pope in the dominion of other Princes,' Oxford, 1811, 2 vols. 8vo. A work entitled 'A Collection of several Pieces of Edward, Earl of Clarendon, to which is prefixed an Account of his Lordship's Life, Conduct, and Character, by a learned and impartial pen,' was published in 1727, 8vo. The second volume is a reprint of the 'History of the Rebellion in Ireland.' The first contains a reprint of Clarendon's speeches between 1660 and 1666 extracted from the 'Journals of the House of Lords.' Bliss and the Bodleian 'Catalogue' attribute to Clarendon (on insufficient evidence) a tract entitled 'A Letter sent from beyond seas to one of the chief Ministers of the Nonconforming Party. By a Lover of the Established Government both of Church and State,' dated Saumur, 7 May 1674. Two letters written by Clarendon in 1668 to the Duke and Duchess of York on the conversion of the latter to Catholicism, are printed in the 'Harleian Miscellany' (iii. 555, ed. Park); with the letter he addressed to the House of Lords on his flight from England (v. 185), under the title of 'News from Dunkirk House.' The great collection of Clarendon's papers and correspondence, acquired at different times by the Bodleian Library, comprises over one hundred volumes. A selection from these papers, edited by Dr. Scrope and Thomas Monkhouse, was published between 1767 and 1786 (State Papers collected by Edward, Earl of Clarendon, 3 vols. folio, Oxford). They are calendared up to 1657 (3 vols. 8vo; vol. i. ed. by Ogle and Bliss, 1872; vols. ii. and iii. ed. by W. D. Macray, 1869, 1876). A number of the post-restoration papers are printed in the third volume of Lister's 'Life of Clarendon.' Letters to Sir Edward Nicholas are printed in the
'Nicholas Papers,' edited by G. F. Warner, Camden Society, 1886; to Sir Richard Browne, in the appendix to the 'Diary of John Evelyn,' edited by Bray, 1827, and by Wheatley, 1879; to Prince Rupert, in Warburton's 'Prince Rupert' (3 vols. 1849); to Dr. John Barwick in Barwick's
'Life of Barwick,' 1724; to Lord Mordaunt and others in 1659-60 (Hist. MSS. Comm. 10th Rep. pt. vi. pp. 189-216).
[Clarendon's autobiographical works and letters form the basis of the Life of Clarendon published in 1837 by Thomas Lister. Lord Campbell's memoir in his Lives of the Chancellors (iii. 110-271) has no independent value. An earlier life of little value is contained in Lives of all the Lord Chancellors, but more especially of those two great opposites, Edward, earl of Clarendon, and Bulstrode, lord Whitelocke, 2 vols. 18mo, 1708. Macdiarmid's Lives of British Statesmen, 1807, 4to, and J. H. Browne's Lives of Prime Ministers of England, 1858, 8vo, contain lives of considerable length, and shorter memoirs are given in Lodge's Portraits and Foss's Judges of England. The life of Clarendon given by Wood differs considerably in the first two editions of that work (see Bliss's edition, iii. 1018). Charges of corruption brought against Clarendon in the lives of judges Glyn and Jenkins led to the expulsion of Wood from the university and the burning of his book (1693). These and other charges are brought together in Historical Inquiries respecting the Character of Edward Hyde, Earl of Clarendon, by George Agar Ellis, 1827, and answered in Lewis's Lives of the Contemporaries of Lord Clarendon, 1852, vol. i. preface, pt. i.; and in Lister's Life, vol. ii. chap. xix. Other authorities are quoted in the text.]
C. H. F.
|
http://www.geni.com/people/Sir-Edward-Hyde-of-Dinton/6000000006444674622
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
#include <freeswan.h>
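The functions discussed below have these prototypes, as declared in freeswan.h (parameter names are indicative):

const char *ttoul(const char *src, size_t srclen, int base, unsigned long *n);
size_t ultot(unsigned long n, int format, char *dst, size_t dstlen);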
Ttoul converts a text-string number into a binary unsigned long value; ultot does the reverse conversion, back to a text version. Numbers are specified in text as decimal (e.g. 123), octal with a leading zero (e.g. 012), or hexadecimal with a leading 0x (e.g. 0x1f), in either upper or lower case.
The dstlen parameter of ultot specifies the size of the dst buffer; under no circumstances are more than dstlen bytes written to dst, and a result that will not fit is truncated. The header defines a constant, ULTOT_BUF, which is the size of a buffer just large enough for worst-case results.
The format parameter of ultot must be one of:

'o'   octal conversion with leading 0
8     octal conversion with no leading 0
'd'   decimal conversion
10    same as 'd'
'x'   hexadecimal conversion, with leading 0x
16    hexadecimal conversion with no leading 0x
17    like 16, but zero-padded on the left to eight digits
Ttoul returns NULL for success and a pointer to a string-literal error message for failure; see DIAGNOSTICS. Ultot returns 0 for a failure, and otherwise returns the size of buffer needed to hold the full conversion result, including the terminating NUL. Fatal errors in ttoul are: empty input; unknown base; non-digit character found; number too large for an unsigned long.
Fatal errors in ultot are: unknown format.
Written for the FreeS/WAN project by Henry Spencer.
Conversion of 0 with format o yields 00.
Ultot format 17 is a bit of a kludge. The usage fragment above, completed into a minimal sketch (the string converted here is an arbitrary example):

const char *error;
unsigned long n;

error = ttoul("0x1f", 0, 0, &n);  /* srclen 0 means strlen(src); base 0 means guess from the text */
if (error != NULL) {
    /* something went wrong */
}
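For the reverse conversion, a similar sketch (the value 31 is an arbitrary example; ULTOT_BUF comes from freeswan.h as described above):

char buf[ULTOT_BUF];

if (ultot(31UL, 'x', buf, sizeof(buf)) == 0) {
    /* unknown format */
}
/* buf now contains "0x1f" */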
|
http://www.makelinux.net/man/3/I/ipsec_ttoul
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
-- | Query and update documents residing on a MongoDB server(s) {-# LANGUAGE OverloadedStrings, RecordWildCards, NamedFieldPuns, TupleSections, FlexibleContexts, FlexibleInstances, UndecidableInstances, MultiParamTypeClasses, GeneralizedNewtypeDeriving, StandaloneDeriving, TypeSynonymInstances, RankNTypes, ImpredicativeTypes #-} module Database.MongoDB.Query ( -- * Connected slaveOk, -- ** Query Query(..), QueryOption(..), Projector, Limit, Order, BatchSize, explain, find, findOne, count, distinct, -- *** Cursor Cursor, next, nextN, rest, -- ** System.IO.Error (try) import Control.Concurrent.MVar import Control.Pipeline (Resource(..)) import qualified Database.MongoDB.Internal.Protocol as P import Database.MongoDB.Internal.Protocol hiding (Query, QueryOption(..), send, call) import Database.MongoDB.Connection (MasterOrSlaveOk(..)) import Data.Bson import Data.Word import Data.Int import Data.Maybe (listToMaybe, catMaybes) import Data.UString as U (dropWhile, any, tail) import Database.MongoDB.Internal.Util (loop, (<.>), true1, MonadIO') -- plus Applicative instances of ErrorT & ReaderT send :: (Context Connection m, Throw IOError m, MonadIO m) => [Notice] -> m () -- ^ Send notices as a contiguous batch to server with no reply. Throw IOError if connection fails. send ns = throwLeft . liftIO . try . flip P.send ns =<< context call :: (Context Connection m, Throw IOError m, MonadIO m) => [Notice] -> Request -> m (forall n. (Throw IOError n, MonadIO n) => n Reply) -- ^ Send notices and request as a contiguous batch to server and return reply promise, which will block when invoked until reply arrives. This call will throw IOError if connection fails on send, and promise will throw IOError if connection fails on receive. call ns r = do conn <- context promise <- throwLeft . liftIO $ try (P.call conn ns r) return (throwLeft . liftIO $ try promise) -- * Connected Monad newtype Connected m a = Connected (ErrorT Failure (ReaderT WriteMode (ReaderT MasterOrSlaveOk (ReaderT Connection m))) a) deriving (Context Connection, Context MasterOrSlaveOk, Context WriteMode, Throw Failure, MonadIO, Monad, Applicative, Functor) -- ^ Monad with access to a 'Connection', 'MasterOrSlaveOk', and 'WriteMode', and throws a 'Failure' on read/write failure and IOError on connection failure deriving instance (Throw IOError m) => Throw IOError (Connected m) instance MonadTrans Connected where lift = Connected . lift . lift . lift . lift runConn :: Connected m a -> Connection -> m (Either Failure a) -- ^ Run action with access to connection. It starts out assuming it is master (invoke 'slaveOk' inside it to change that) and that writes don't need to be check (invoke 'writeMode' to change that). Return Left Failure if error in execution. Throws IOError if connection fails during execution. runConn (Connected action) = runReaderT (runReaderT (runReaderT (runErrorT action) Unsafe) Master) -- | A monad with access to a 'Connection', 'MasterOrSlaveOk', and 'WriteMode', and throws 'Failure' on read/write failure and 'IOError' on connection failure class (Context Connection m, Context MasterOrSlaveOk m, Context WriteMode m, Throw Failure m, Throw IOError m, MonadIO' m) => Conn m instance (Context Connection m, Context MasterOrSlaveOk m, Context WriteMode m, Throw Failure m, Throw IOError m, MonadIO' m) => Conn m -- | Read or write exception like cursor expired or inserting a duplicate key. 
-- Note, unexpected data from the server is not a Failure, rather it is a programming error (you should call 'error' in this case) because the client and server are incompatible and requires a programming change. data Failure = db col = U.any (== '$') col && db <.> col /= "local.oplog.$main" -- * Selection data Selection = Select {selector :: Selector, coll :: Collection} deriving (Show, Eq) -- ^ Selects documents in collection that match selector { 'WriteFailure' if it reports an error. write notice = do mode <- context case mode of Unsafe -> send [notice] Safe -> do me <- getLastError [notice] maybe (return ()) (throw . uncurry WriteFailure) -- ** MasterOrSlaveOk slaveOk :: (Conn m) => m a -> m a -- ^ Ok to execute given action against slave, ie. eventually consistent reads slaveOk = push (const SlaveOk)Conn m) => Bool -> [Notice] -> Query -> m CursorState' -- ^ Send query request and return cursor state runQuery isExplain ns q = do db <- thisDatabase slaveOK <- context call' ns (queryRequest isExplain slaveOK q db) connection is closed, so you can open another connection to the same server and continue using the cursor. modifyCursorState' :: (Conn m) => Cursor -> (FullCollection -> BatchSize -> CursorState' -> Connected (ErrorT IOError IO) (CursorState', a)) -> m a -- ^ Analogous to 'modifyMVar' but with Conn monad modifyCursorState' (Cursor fcol batch var) act = do conn <- context e <- liftIO . modifyMVar var $ \cs' -> do ee <- runErrorT $ runConn (act fcol batch cs') conn return $ case ee of Right (Right (cs'', a)) -> (cs'', Right a) Right (Left failure) -> (cs', Left $ throw failure) Left ioerror -> (cs', Left $ throw ioerror) either id return e getCursorState :: (Conn m) => Cursor -> m CursorState -- ^ Extract current cursor status getCursorState (Cursor _ _ var) = cursorState =<< liftIO (readMVar var) data CursorState' = Delayed (forall n. (Throw Failure n, Throw IOError n, MonadIO n) => n CursorState) | CursorState CursorState -- ^ A cursor state or a promised cursor state which may fail call' :: (Conn m) => [Notice] -> (Request, Limit) -> m CursorState' -- ^ Send notices and request and return promised cursor state call' ns (req, remainingLimit) = do promise <- call ns req return $ Delayed (fromReply remainingLimit =<< promise) cursorState :: (Conn m) => CursorState' -> m CursorState -- ^ Convert promised cursor state to cursor state or failure cursorState (Delayed promise) = promise cursorState (CursorState cs) = return cs :: ErrorT (runConn (close cursor) conn :: ErrorT IOError IO (Either Failure ())) >> (ErrorT IOError connection only, however, other connections may read from it while the original one is still alive. Note, reading from a temporary collection after its original connection)@.Conn" =:. -}
|
http://hackage.haskell.org/package/mongoDB-0.7/docs/src/Database-MongoDB-Query.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
Quoting Eric W. Biederman (ebiederm@xmission.com):
> "Ser.

No, you shortcut security/commoncap.c:get_file_caps() if (bprm->nosuid),
which is set if test_tsk_thread_flag(current, TIF_NOSUID) at exec.

So if we're in this new no-suid mode, then file capabilities are not
honored. Which is the right thing to do.

> >> removes
> the set of circumstances that lead to the sendmail capabilities bug.
>
> So any kernel feature that requires capabilities only because not
> doing so would break backwards compatibility with suid applications.
> This includes namespace manipulation, like plan 9.
> This includes unsharing pid and network and sysvipc namespaces.
>
> There are probably other useful but currently root only features
> that this will allow to be used by unprivileged processes, that
> I am not aware of.
>
> In addition to the fact that knowing privileges can not be escalated
> by a process is a good feature all by itself. Run this in a chroot
> and the programs will never be able to gain root access even if
> there are suid binaries available for them to execute.
>
> Eric
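A hedged sketch of the shortcut being described; bprm->nosuid belongs to the patch under discussion rather than the mainline tree, and the function body is reduced to the relevant branch:

static int get_file_caps(struct linux_binprm *bprm)
{
	if (bprm->nosuid) {
		/* No-suid mode: file capabilities are not honored. */
		bprm_clear_caps(bprm);
		return 0;
	}

	/* ... normal file-capability processing ... */
	return 0;
}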
|
http://lkml.org/lkml/2009/12/30/236
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
Collections
Posted on March 1st, 2001
To summarize what we’ve seen so far, your first, most efficient choice to hold a group of objects should be an array, and you’re forced into this choice if you want to hold a group of primitives. In the remainder of the chapter we’ll look at the more general case, when you don’t know at the time you’re writing the program how many objects you’re going to need, or if you need a more sophisticated way to store your objects. Java provides four types of collection classes to solve this problem: Vector, BitSet, Stack, and Hashtable. Although compared to other languages that provide collections this is a fairly meager supply, you can nonetheless solve a surprising number of problems using these tools.
Among their other characteristics – Stack, for example, implements a LIFO (last-in, first-out) sequence, and Hashtable is an associative array that lets you associate any object with any other object – the Java collection classes will automatically resize themselves. Thus, you can put in any number of objects and you don’t need to worry about how big to make the collection while you’re writing the program.
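To make the Stack and Hashtable characterizations concrete, here is a brief sketch against the java.util API of the period (the keys and values are arbitrary examples):

//: StackAndTable.java
// LIFO Stack and associative Hashtable in brief
import java.util.*;

public class StackAndTable {
  public static void main(String[] args) {
    Stack stack = new Stack();
    stack.push("first");
    stack.push("second");
    System.out.println(stack.pop()); // prints "second" (last in, first out)

    Hashtable table = new Hashtable();
    table.put("key", "value"); // associate any object with any other object
    System.out.println(table.get("key")); // prints "value"
  }
} ///:~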
Disadvantage: unknown type
The “disadvantage” to using the Java collections is that you lose type information when you put an object into a collection. This happens because, when the collection was written, the programmer of that collection had no idea what specific type you wanted to put in the collection, and making the collection hold only your type would prevent it from being a general-purpose tool. So instead, the collection holds handles to objects of type Object, which is of course every object in Java, since it’s the root of all the classes. (Of course, this doesn’t include primitive types, since they aren’t inherited from anything.) This is a great solution, except for these reasons:
- Since the type information is thrown away when you put an object handle into a collection, any type of object can be put into your collection, even if you mean it to hold only, say, cats. Someone could just as easily put a dog into the collection.
- Since the type information is lost, the only thing the collection knows it holds is a handle to an Object. You must perform a cast to the correct type before you use it.
On the up side, Java won’t let you misuse the objects that you put into a collection. If you throw a dog into a collection of cats, then go through and try to treat everything in the collection as a cat, you’ll get an exception when you get to the dog. In the same vein, if you try to cast the dog handle that you pull out of the cat collection into a cat, you’ll get an exception at run-time.
Here’s an example:
//: CatsAndDogs.java // Simple collection example (Vector) import java.util.*; class Cat { private int catNumber; Cat(int i) { catNumber = i; } void print() { System.out.println("Cat #" + catNumber); } } class Dog { private int dogNumber; Dog(int i) { dogNumber = i; } void print() { System.out.println("Dog #" + dogNumber); } } public class CatsAndDogs { public static void main(String[] args) { Vector cats = new Vector(); for(int i = 0; i < 7; i++) cats.addElement(new Cat(i)); // Not a problem to add a dog to cats: cats.addElement(new Dog(7)); for(int i = 0; i < cats.size(); i++) ((Cat)cats.elementAt(i)).print(); // Dog is detected only at run-time } } ///:~
You can see that using a Vector is straightforward: create one, put objects in using addElement( ), and later get them out with elementAt( ). (Note that Vector has a method size( ) to let you know how many elements have been added so you don’t inadvertently run off the end and cause an exception.)
The classes Cat and Dog are distinct – they have nothing in common except that they are Objects. (If you don’t explicitly say what class you’re inheriting from, you automatically inherit from Object.) The Vector class, which comes from java.util, holds Objects, so not only can you put Cat objects into this collection using the Vector method addElement( ), but you can also add Dog objects without complaint at either compile-time or run-time. When you go to fetch out what you think are Cat objects using the Vector method elementAt( ), you get back a handle to an Object that you must cast to a Cat. Then you need to surround the entire expression with parentheses to force the evaluation of the cast before calling the print( ) method for Cat, otherwise you’ll get a syntax error. Then, at run-time, when you try to cast the Dog object to a Cat, you’ll get an exception.
This is more than just an annoyance. It's something that can create some difficult-to-find bugs. If one part (or several parts) of a program inserts objects into a collection, and you discover only in a separate part of the program through an exception that a bad object was placed in the collection, then you must find out where the bad insert occurred. You do this by code inspection, which is about the worst debugging tool you have. On the upside, it's convenient to start with some standardized collection classes for programming, despite the scarcity and awkwardness.

Sometimes it works right anyway
It turns out that in some cases things seem to work correctly without casting back to your original type. The first case is quite special: the String class has some extra help from the compiler to make it work smoothly. Whenever the compiler expects a String object and it hasn’t got one, it will automatically call the toString( ) method that’s defined in Object and can be overridden by any Java class. This method produces the desired String object, which is then used wherever it was wanted.
Thus, all you need to do to make objects of your class print out is to override the toString( ) method, as shown in the following example:
//: WorksAnyway.java // In special cases, things just seem // to work correctly. import java.util.*; class Mouse { private int mouseNumber; Mouse(int i) { mouseNumber = i; } // Magic method: public String toString() { return "This is Mouse #" + mouseNumber; } void print(String msg) { if(msg != null) System.out.println(msg); System.out.println( "Mouse number " + mouseNumber); } } class MouseTrap { static void caughtYa(Object m) { Mouse mouse = (Mouse)m; // Cast from Object mouse.print("Caught one!"); } } public class WorksAnyway { public static void main(String[] args) { Vector mice = new Vector(); for(int i = 0; i < 3; i++) mice.addElement(new Mouse(i)); for(int i = 0; i < mice.size(); i++) { // No cast necessary, automatic call // to Object.toString(): System.out.println( "Free mouse: " + mice.elementAt(i)); MouseTrap.caughtYa(mice.elementAt(i)); } } } ///:~
You can see the redefinition of toString( ) in Mouse. In the second for loop in main( ) you find the statement:
System.out.println("Free mouse: " + mice.elementAt(i));
After the ‘+’ sign the compiler expects to see a String object. elementAt( ) produces an Object, so to get the desired String the compiler implicitly calls toString( ). Unfortunately, you can work this kind of magic only with String; it isn't available for any other type. A second approach is to bury the cast inside a method: MouseTrap.caughtYa( ) accepts an Object, so anything could be passed to it, but if the cast inside is incorrect – if you passed the wrong type – you'll get an exception at run-time. This is not as good as compile-time checking but it's still robust. Note that in the use of this method:
MouseTrap.caughtYa(mice.elementAt(i));
no cast is necessary.

Making a type-conscious Vector
You might not want to give up on this issue just yet. A more ironclad solution is to create a new class using the Vector, such that it will accept only your type and produce only your type:
//: GopherVector.java // A type-conscious Vector import java.util.*; class Gopher { private int gopherNumber; Gopher(int i) { gopherNumber = i; } void print(String msg) { if(msg != null) System.out.println(msg); System.out.println( "Gopher number " + gopherNumber); } } class GopherTrap { static void caughtYa(Gopher g) { g.print("Caught one!"); } } class GopherVector { private Vector v = new Vector(); public void addElement(Gopher m) { v.addElement(m); } public Gopher elementAt(int index) { return (Gopher)v.elementAt(index); } public int size() { return v.size(); } public static void main(String[] args) { GopherVector gophers = new GopherVector(); for(int i = 0; i < 3; i++) gophers.addElement(new Gopher(i)); for(int i = 0; i < gophers.size(); i++) GopherTrap.caughtYa(gophers.elementAt(i)); } } ///:~
This is similar to the previous example, except that the new GopherVector class has a private member of type Vector (inheriting from Vector tends to be frustrating, for reasons you’ll see later), and methods just like Vector. However, it doesn’t accept and produce generic Objects, only Gopher objects.
Because a GopherVector will accept only a Gopher, if you were to say:
gophers.addElement(new Pigeon());
you would get an error message at compile time. This approach, while more tedious from a coding standpoint, will tell you immediately if you're using a type improperly.
Note that no cast is necessary when using elementAt( ) – it's always a Gopher.

Parameterized types
This kind of problem isn't isolated – there are numerous cases in which you need to create new types based on other types, and in which it is useful to have specific type information at compile-time. This is the concept of a parameterized type. In C++, this is directly supported by the language in templates. At one point, Java had reserved the keyword generic to someday support parameterized types, but it's uncertain if this will ever occur.
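As it turned out, later versions of Java (5 and up) did add this support, under the name generics. A sketch of the syntax, reusing the Gopher class defined above (the commented-out line shows the Pigeon mistake from earlier being caught at compile time):

//: GopherList.java
// A parameterized (generic) collection, for comparison
import java.util.*;

public class GopherList {
  public static void main(String[] args) {
    List<Gopher> gophers = new ArrayList<Gopher>();
    gophers.add(new Gopher(0));
    // gophers.add(new Pigeon()); // now a compile-time error
    Gopher g = gophers.get(0); // no cast needed
    g.print("Still a Gopher:");
  }
} ///:~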
|
http://www.codeguru.com/java/tij/tij0088.shtml
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
29 August 2012 09:44 [Source: ICIS news]
SINGAPORE (ICIS)--Indian major Reliance Industries Ltd (RIL) has withdrawn all export offers for polypropylene (PP) from 28 August, on the back of low stock levels, a company source said on Wednesday.
RIL plans to resume offers for export beginning from 3 September for October shipments, the source added.
Its low inventories were the result of healthy domestic sales in early August and the recent low output at one of its PP facilities in Jamnagar, the source said. “Previously in early August, import arrivals were low, so, we had good sales locally. Since then, our inventories were on the lower side,” he said.
RIL began offering October shipments in mid-August to countries such as …
Angie Li
|
http://www.icis.com/Articles/2012/08/29/9590643/indias-ril-withdraws-all-pp-export-offers-to-resume-on-3.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
my_product = Product.find(:first)
STDOUT.print my_product.name
my_product.name = "New Product Name"
Active Record uses English pluralization rules to map classes to tables. The model class name is singular and capitalized, while the table name is plural and lowercased. Examples include:

the Product class maps to the products table
the Person class maps to the people table
the Category class maps to the categories table
the LineItem class maps to the line_items table
This singular/plural convention results in code that reads fairly naturally. Notice how this mapping is intelligent in its use of English pluralization rules. Also note that the class names use CamelCase (a Ruby convention), while the table names are all lowercase with underscores between words.
In cases where this does not work (such as interfacing with a legacy database with which you have no control over the names), you can also explicitly tell Active Record what name it should use.
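For example, with the set_table_name class method (the legacy table name shown is invented for illustration):

class Product < ActiveRecord::Base
  set_table_name "legacy_product_master"
end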
The ActiveRecord::Base documentation explains more about Active Record's automatic mapping.
No table stands alone. Well, not usually, anyway. Most database applications use multiple tables with specific relationships between those tables. You can tell Active Record about these relationships in your model classes, and Active Record will generate a slew of navigation methods that make it easy for your code to access related data. The following models:
class Firm < ActiveRecord::Base
has_many :clients
has_one :account
belongs_to :conglomorate
end
allow you to write code such as this:
my_firm = Firm.find(:last)
STDOUT.print my_firm.account.name
STDOUT.print my_firm.conglomerate.employee_count
for c in my_firm.clients
STDOUT.print "Client: " + c.name + "\n"
end
This code will work correctly when the database has clients and accounts tables, each of which has a name column, and a conglomerates table that has an employee_count column.
The ActiveRecord::Associations documentation explains more about associations.
Because you don't want to store just any old thing in your database, you probably want to validate your data before you store it. Active Record contains a suite of macrolike validators that you can add to your model.
class Account < ActiveRecord::Base
validates_presence_of :subdomain, :name, :email_address, :password
validates_uniqueness_of :subdomain
validates_acceptance_of :terms_of_service, :on => :create
validates_confirmation_of :password, :email_address, :on => :create
end
If the built-in validation macros can't do what you need, you can always write your own validation methods.
class Person < ActiveRecord::Base
protected
def validate
errors.add_on_empty %w( first_name last_name )
errors.add("phone_number", "has invalid format") unless phone_number =~ /[0-9]*/
end
def validate_on_create # only runs the first time a new object is saved
unless valid_discount?(membership_discount)
errors.add("membership_discount", "has expired")
end
end
def validate_on_update
errors.add_to_base("No changes have occurred") if unchanged_attributes?
end
end
person = Person.new("first_name" => "David", "phone_number" => "what?")
person.save # => false (and doesn't do the save)
person.errors.empty? # => false
person.errors.count # => 2
person.errors.on "last_name" # => "can't be empty"
person.errors.on "phone_number" # => "has invalid format"
person.errors.each_full { |msg| puts msg } # => "Last name can't be empty\n" +
"Phone number has invalid format"
person.attributes = { "last_name" => "Heinemeier", "phone_number" => "555-555" }
person.save # => true (and person is now saved in the database)
If the validate method exists, Rails will call it just before writing any object to the database. If validation fails, it does not write the object to the database. validate_on_create and validate_on_update are similar, except that the first is called only before Rails creates a new record in the database, while the second is called only when Rails is about to update an existing record.
You can also validate a particular attribute only when some condition is true.
# Conditional validations such as the following are possible:
validates_numericality_of :income, :if => :employed?
validates_confirmation_of :password, :if => :new_password?
# Using blocks:
validates_presence_of :username, :if => Proc.new { |user| user.signup_step > 1 }
The ActiveRecord::Validations documentation explains more about validation.
|
http://www.onlamp.com/pub/a/onlamp/2005/10/13/what_is_rails.html?page=3
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
In this post I describe some of the similarities and differences between the standard ASP.NET Core WebHostBuilder used to build HTTP endpoints, and the HostBuilder used to build non-HTTP services. I discuss the fact that they use similar, but completely separate, abstractions, and how that fact will impact you if you try to take code written for a standard ASP.NET Core application and reuse it with a generic host.
If the generic host is new to you, I recommend checking out Steve Gordon's introductory post. For more detail on the nuts-and-bolts, take a look at the documentation.
How does HostBuilder differ from WebHostBuilder?

ASP.NET Core is used primarily to build HTTP endpoints using the Kestrel web server. A WebHostBuilder defines the configuration, logging, and dependency injection (DI) for your application, as well as the actual HTTP endpoint behaviour. By default, the templates use the CreateDefaultBuilder extension method on WebHost in program.cs:
public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}
This extension method sets up the default configuration providers and logging providers for your app. The UseStartup<T>() extension sets the Startup class where you define the DI services and your app's middleware pipeline.
Generic hosted services have some aspects in common with standard ASP.NET Core apps, and some differences.
Hosted services can use the same configuration, logging, and dependency injection infrastructure as HTTP ASP.NET Core apps. That means you can reuse a lot of the same libraries and classes as you do already (with a big caveat, which I'll come to later).
You can also use a similar pattern for configuring an application, though there's no CreateDefaultBuilder as of yet, so you need to use the ConfigureServices extension methods etc. For example, a basic hosted service might look something like the following:
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        new HostBuilder()
            .ConfigureLogging(logBuilder => logBuilder.AddConsole())
            .ConfigureHostConfiguration(builder => // setup your app's configuration
            {
                builder
                    .SetBasePath(Directory.GetCurrentDirectory())
                    .AddJsonFile("appsettings.json")
                    .AddEnvironmentVariables();
            })
            .ConfigureServices(services => // configure DI, including the actual background services
                services.AddSingleton<IHostedService, PrintTimeService>());
}
There are a number of differences visible in this program.cs file, compared to a standard ASP.NET Core app:
- No default builder - As there's no default builder, you'll need to explicitly call each of the logging/configuration etc extension methods. It makes samples more verbose, but in reality I find this to be the common approach for apps of any size anyway.
- No Kestrel - Kestrel is the HTTP server, so we don't (and you can't) use it for generic hosted services.
- You can't use a Startup class - Notice that I called ConfigureServices directly on the HostBuilder instance. This is possible in a standard ASP.NET Core app, but it's more common to configure your services in a separate Startup class, along with your middleware pipeline. Generic hosted services don't have that capability. Personally, I find that a little frustrating, and would like to see that feature make its way to HostBuilder.
There's actually one other major difference which isn't visible from these samples. The IHostBuilder abstraction and its associates are in a completely different namespace and package to the existing IWebHostBuilder. This causes a lot of compatibility headaches, as you'll see.
Same interfaces, different namespaces
When I started using the generic host, I had made a specific (incorrect) assumption about how the IHostBuilder and IWebHostBuilder were related. Given that they provided very similar cross-cutting functionality to an app (configuration, logging, DI), I assumed that they shared a common base interface. Specifically, I assumed the IWebHostBuilder would be derived from the IHostBuilder - it provides the same functionality and adds HTTP on top, so that seemed logical to me. However, the two interfaces are completely unrelated!
The ASP.NET Core HTTP hosting abstractions
The ASP.NET Core hosting abstractions library, which contains the definition of IWebHostBuilder, is Microsoft.AspNetCore.Hosting.Abstractions. This library contains all the basic classes and interfaces for building an ASP.NET Core web host, e.g.

- IWebHostBuilder
- IWebHost
- IHostingEnvironment
- WebHostBuilderContext

These interfaces all live in the Microsoft.AspNetCore.Hosting namespace. As an example, here's the IWebHostBuilder interface:
public interface IWebHostBuilder
{
    IWebHost Build();
    IWebHostBuilder ConfigureAppConfiguration(Action<WebHostBuilderContext, IConfigurationBuilder> configureDelegate);
    IWebHostBuilder ConfigureServices(Action<WebHostBuilderContext, IServiceCollection> configureServices);
    IWebHostBuilder ConfigureServices(Action<IServiceCollection> configureServices);
    string GetSetting(string key);
    IWebHostBuilder UseSetting(string key, string value);
}
The ASP.NET Core generic host abstractions
The generic host abstractions can be found in the Microsoft.Extensions.Hosting.Abstractions library (Extensions instead of AspNetCore). This library contains equivalents of most of the abstractions found in the HTTP hosting abstractions library:

- IHostBuilder
- IHost
- IHostingEnvironment
- HostBuilderContext

These interfaces all live in the Microsoft.Extensions.Hosting namespace (again, Extensions instead of AspNetCore). The IHostBuilder interface looks like this:
public interface IHostBuilder
{
    IDictionary<object, object> Properties { get; }
    IHost Build();
    IHostBuilder ConfigureAppConfiguration(Action<HostBuilderContext, IConfigurationBuilder> configureDelegate);
    IHostBuilder ConfigureContainer<TContainerBuilder>(Action<HostBuilderContext, TContainerBuilder> configureDelegate);
    IHostBuilder ConfigureHostConfiguration(Action<IConfigurationBuilder> configureDelegate);
    IHostBuilder ConfigureServices(Action<HostBuilderContext, IServiceCollection> configureDelegate);
    IHostBuilder UseServiceProviderFactory<TContainerBuilder>(IServiceProviderFactory<TContainerBuilder> factory);
}
If you compare this interface to the IWebHostBuilder I showed previously, you'll see some similarities, and some differences. On the similarity side:

- Both interfaces have a Build() function that returns their respective "Host" interface.
- Both have a ConfigureAppConfiguration method for setting the app configuration. While both interfaces use the same Microsoft.Extensions.Configuration abstraction IConfigurationBuilder, they each use a different context object - HostBuilderContext or WebHostBuilderContext.
- Both have a ConfigureServices method, though again the type of the context object differs.
There are many more differences between the interfaces. To highlight a few:

- The IHostBuilder has a ConfigureHostConfiguration method, for setting host configuration rather than app configuration. This is equivalent to the UseConfiguration extension method on IWebHostBuilder (which under the hood calls IWebHostBuilder.UseSetting).
- The IHostBuilder has explicit methods for configuring the DI container. This is normally handled in the Startup class for IWebHostBuilder. As HostBuilder doesn't use Startup classes, the functionality is exposed here instead.

These changes, and the lack of a common interface, are just enough to make it difficult to move code that was working in a standard ASP.NET Core app to a generic host app. Which is really annoying!
So why all the changes? To be honest, I haven't dug through GitHub issues and commits to find out, but I'm happy to speculate.
It's always about backward compatibility
The easiest way to avoid breaking something is to not change it! My guess is that's why we're stuck with these two similar-yet-irritatingly-different interfaces. If Microsoft were to introduce a new common interface, they'd have to modify IWebHostBuilder to implement that interface:
public interface IHostBuilderBase
{
    IWebHost Build();
}

public interface IWebHostBuilder : IHostBuilderBase
{
    // IWebHost Build(); <- moved up to base interface

    // ...the remaining IWebHostBuilder members stay as before
}
On first look that might seem fine - as long as they only moved methods from IWebHostBuilder to the base interface, and made sure the signatures were the same, any classes implementing IWebHostBuilder would still correctly implement it. But what if the interface was implemented explicitly? For example:
public class MyWebHostBuilder : IWebHostBuilder
{
    IWebHost IWebHostBuilder.Build() // explicitly implement the interface
    {
        // implementation
    }

    // other methods
}
I'm not 100% sure, but I suspect that would break some things like overload resolution and such, so it would be a no-go for a minor release (and likely a major release, to be honest).
The other advantage of creating a completely separate set of abstractions is a clean slate! For example, the addition of the ConfigureHostConfiguration() method to IHostBuilder suggests an acknowledgment that it should have been a first class citizen for the IWebHostBuilder as well. It also leaves the abstractions free to evolve in their own way.
So if creating a new set of abstraction libraries gives us all these advantages, what's the downside? What do we lose?
Code reuse is out the window
The big problem with the approach of creating new abstractions is that we have new abstractions! Any "reusable" code that was written for use with the Microsoft.AspNetCore.Hosting abstractions has to be duplicated if you want to use it with the Microsoft.Extensions.Hosting abstractions.
Here's a simple example of the problem that I ran into almost immediately. Imagine you've written an extension method on IHostingEnvironment to check if the current environment is Testing:
using System;
using Microsoft.AspNetCore.Hosting;

public static class HostingEnvironmentExtensions
{
    const string Testing = "Testing";

    public static bool IsTesting(this IHostingEnvironment env)
    {
        return string.Equals(env.EnvironmentName, Testing, StringComparison.OrdinalIgnoreCase);
    }
}
It's a simple method that you might use in various places in your app, in the same way the built-in IsProduction() and IsDevelopment() extension methods are used.
Unfortunately, this extension method can't be used in generic hosted services. The IHostingEnvironment used by this code is a different IHostingEnvironment to the generic host abstraction (Extensions namespace vs. AspNetCore namespace).
That means if you have common library code you wanted to share between your HTTP and non-HTTP ASP.NET Core apps, you can't use any of the abstractions found in the hosting abstraction libraries. If you _do_ need to use them, you're left copy-pasting code ☹.
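To get the same helper on the generic host side, you end up writing a near-identical copy against the other namespace. A sketch of that duplication:

using System;
using Microsoft.Extensions.Hosting; // note: Extensions, not AspNetCore

public static class GenericHostingEnvironmentExtensions
{
    const string Testing = "Testing";

    // Identical logic; the only difference is which IHostingEnvironment it extends
    public static bool IsTesting(this IHostingEnvironment env)
    {
        return string.Equals(env.EnvironmentName, Testing, StringComparison.OrdinalIgnoreCase);
    }
}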
Another example of the issue I found is for third-party libraries that are used for configuration, logging, or DI, and that have a dependency on the hosting abstractions.
For example, I commonly use the excellent Serilog library to add logging to my ASP.NET Core apps. The Serilog.AspNetCore library makes it very easy to add an existing Serilog configuration to your app, with a call to UseSerilog() when configuring your WebHostBuilder:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseSerilog(); // <-- Add this line
Unfortunately, even though the underlying configuration libraries are identical between IWebHostBuilder and IHostBuilder, the UseSerilog() extension method is not available. It's an extension method on IWebHostBuilder, not IHostBuilder, which means you can't use the Serilog.AspNetCore library with the generic host.
To get round the issue, I've created a similar library for adding Serilog to generic hosts, called Serilog.Extensions.Hosting that you can find on GitHub. Thanks to everyone in the Serilog project for adopting it officially into the fold, and for making the whole process painless and enjoyable! In my next post I'll cover how to use the library in your generic ASP.NET Core apps.
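The heart of such a library is simply an extension method targeting IHostBuilder instead of IWebHostBuilder. A minimal sketch of the idea (the real Serilog.Extensions.Hosting implementation handles more details, such as logger disposal):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Serilog.Extensions.Logging;

public static class SerilogHostBuilderExtensions
{
    public static IHostBuilder UseSerilog(this IHostBuilder builder, Serilog.ILogger logger = null)
    {
        // Register a Serilog-backed ILoggerFactory with the generic host's DI container
        builder.ConfigureServices((context, services) =>
            services.AddSingleton<ILoggerFactory>(_ => new SerilogLoggerFactory(logger)));
        return builder;
    }
}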
These problems will basically apply to all code written that depends on the hosting abstractions. The only real way around them is to duplicate the code, and tweak some names and namespaces. It all feels like a missed opportunity to create something cleaner, with an easy upgrade path, and is asking for maintainability issues. As I discussed in the previous section, I'm sure the team have their reasons for the approach taken, but for me, it stings a bit.
Summary
In this post I discussed some of the similarities and differences between the hosting abstractions used in HTTP ASP.NET Core apps and the non-HTTP generic host. Many of the APIs are similar, but the main hosting abstractions exist in different libraries and namespaces, and aren't interoperable. That means that code written for one set of abstractions can't be used with the other. Unfortunately, that also means there's likely to be duplicate code required if you want to share behaviour between HTTP and non-HTTP apps.
|
https://andrewlock.net/the-asp-net-core-generic-host-namespace-clashes-and-extension-methods/
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Hello,
I am making a game and I have a script where I can move my object around the screen, but I want it so the object can only be moved inside a border. I followed this tutorial on how to make movable objects:
I was thinking of using a 2D Box Collider set as a trigger, so the object can only move inside that collider, but I'm not sure how to implement that in my script. Here is my script:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class UIDrag : MonoBehaviour {

    float OffsetX;
    float OffsetY;

    public void BeginDrag() {
        OffsetX = transform.position.x - Input.mousePosition.x;
        OffsetY = transform.position.y - Input.mousePosition.y;
    }

    public void OnMouseDrag()
    {
        transform.position = new Vector3(OffsetX + Input.mousePosition.x, OffsetY + Input.mousePosition.y);
    }
}
I used the EventTrigger component for the script:
Will someone please help me?
Answer by madks13 · Aug 17, 2018 at 08:34 PM
Umm, you could add said collider and do the movement inside OnTriggerStay2D.
For this, you would need a few changes:

- Set the collider as a trigger; we don't need collisions with game objects.
- Inside OnMouseDrag, set a variable to true, so we know we are dragging something.
- Inside OnTriggerStay2D, if dragging, change the element's position.
Since the movement is done inside OnTriggerStay2D, the movement will only apply if the drag is inside the collider area.
That said, colliders don't collide with mouse cursors, so you will need some invisible collidable object following the mouse for the trigger to activate. Best practice would be a ghost element: a preview of the element to move, so the user can see where it will be placed.
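A rough sketch of that parent-area approach (the component and member names here are assumptions, not code from this thread):

using UnityEngine;

// Attached to the parent area that owns the trigger collider
public class DragArea : MonoBehaviour
{
    public UIDrag draggable; // the draggable object, exposing an IsDragging flag

    void OnTriggerStay2D(Collider2D other)
    {
        // Only move the object while the ghost collider stays inside this area
        if (draggable != null && draggable.IsDragging)
        {
            draggable.FollowMouse();
        }
    }
}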
This is my new script:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class UIDrag : MonoBehaviour {

    float OffsetX;
    float OffsetY;
    public bool ObjectIsMoveable;

    public void BeginDrag() {
        OffsetX = transform.position.x - Input.mousePosition.x;
        OffsetY = transform.position.y - Input.mousePosition.y;
    }

    public void OnMouseDrag()
    {
        ObjectIsMoveable = true;
    }

    public void OnTriggerStay2D(Collider2D collision)
    {
        if (ObjectIsMoveable == true)
        {
            transform.position = new Vector3(OffsetX + Input.mousePosition.x, OffsetY + Input.mousePosition.y);
        }
    }
}
Now it's not even movable. The BoxCollider2D is set to Trigger, and the box collider is around the object with enough room for it to move, but it's not moving. I'm confused. @madks13
Err, where did you add the collider? The collider logic, OnTriggerStay2D, should be on the object containing the collider. But the collider should not be on the object you are dragging, since the collider would move with said object and you would always be inside the collider. For this method to work, you need to have the collider on the parent area, and that parent area will check the boolean inside the draggable object to see if it can be moved, and move it if true.
Ok, I did what you said. The gameobject is movable but it's not detecting when the object is outside of the collider, so I used OnTriggerExit2D to set the ObjectIsMovable bool to false, but it didn't disable when I wanted it to. The ObjectIsMovable bool does work though; when I manually disable it through the editor, the gameobject stops moving, which is good. So the problem is OnTriggerExit2D isn't working. I have the collider object as the parent of the draggable object. @madks13
Answer by Trevdevs · Aug 17, 2018 at 09:04 PM
Why don't you just clamp the position?
public float minX = -10;
public float maxX = 10;
public float minY = -10;
public float maxY = 10;

public void OnMouseDrag()
{
    transform.position = new Vector3(OffsetX + Input.mousePosition.x, OffsetY + Input.mousePosition.y);

    Vector3 newPosition = transform.position;
    newPosition.x = Mathf.Clamp(newPosition.x, minX, maxX);
    newPosition.y = Mathf.Clamp(newPosition.y, minY, maxY);
    transform.position = newPosition;
}
Since you are using world position, this is not a good idea. It seems to me that the parent elements are movable too, so the clamp would not work in world space. That said, if local space was used, this might be an easier solution.
I've done this using world position and it works fine. When using child objects, just make sure it factors in the offset of the child to the parent and you're golden.
I'm not sure what clamp does.
|
https://answers.unity.com/questions/1543457/i-want-a-border-limit-from-which-object-can-be-mov.html
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Today everything is connected and everything is mobile. People expect access to their data and their apps across all their devices, and this means that as developers we have more work to do than ever. To make it easier to build amazing, enterprise-class mobile apps, Apple and Salesforce have announced a strategic partnership to redefine the mobile customer experience and to empower learning for all.
For developers, this partnership gives you access to the tools and educational resources you’ll need to start building custom apps for iOS that leverage customer data from Salesforce. These resources include an all-new Salesforce Mobile SDK for iOS optimized for Swift, and new learning content on Trailhead built in collaboration with Apple.
Let’s start by taking a look at how Swift makes it easy to build great apps on iOS.
Swift is a modern programming language that is perfect for developing mobile apps, and is designed to be safe, fast, and expressive.
Let’s take a look at how simple it is to read a Swift code sample:
// Declare a hello world function
func HelloWorld() {
    print("Hello World")
}

// Call hello world function
HelloWorld()
As you can see, the syntax is simple and easy to understand – even at first glance! Next, let's take a look at a slightly more complicated code sample where we define a type called Person that contains two variables, for first name and last name. Then we'll also define two instances of the Person type using constants.
// Define the Person type & sayHello function with variables firstName and lastName
struct Person {
    var firstName: String
    var lastName: String

    func sayHello() {
        print("Hello, Trailblazer! My name is \(firstName) \(lastName).")
    }
}

// Define two Person instances using constants
let firstPerson = Person(firstName: "Casper", lastName: "the 👻")
let secondPerson = Person(firstName: "Count", lastName: "🧛♂️")

firstPerson.sayHello()
secondPerson.sayHello()
When coding in Swift you get a unique feature called Optional to define variables that may or may not contain a value, as well as modern capabilities like Switch statements to efficiently handle things like flow control. Let’s take a look at how this works:
...
var animal: String? = "😺"

if animal != nil {
    switch animal! {
    case "🐇", "🐁", "🐀", "🐿":
        print("This animal is a rodent.")
    case "🐕", "🐩", "🐶", "🐺":
        print("This animal is a canine.")
    default:
        print("This animal is not a rodent.")
    }
} else {
    print("There is no animal.")
}
...
The first thing you might have noticed when you look at this code sample is that you can use emojis when defining constants, variables and values! While that’s a fun and useful way to code in the modern era, what’s even more exciting is the power you get from the Optional type and switch statement features. At the start of the code sample, we defined a variable called animal as an optional string indicated by “String?” and set its value to 😺. We then evaluated the Optional to see if it had a value and used the switch statement (via case) to further evaluate our value. Here’s how it looks in the Xcode IDE and you can see the expected value printed out in the console at the bottom of the screenshot below:
Now that you’ve had a quick introduction to Swift, let’s take a look at how you can start building iOS apps that connect to Salesforce data.
The Salesforce Mobile SDK incorporates all the benefits of Swift, Apple's flagship programming language, and allows you to build custom mobile applications on Salesforce with a range of built-in services.
Let’s take a quick look at how easy it is to synchronize data between your iOS app and Salesforce with this Swift code sample:
syncManager.Promises.syncDown(target: target, options: options, soupName: soupName, syncName: syncName)
    .then {
        ..
    }
    .catch SFSmartSync.SyncDownFailed {
    }
Taking a look at this code, you can immediately see that it is both simple and powerful: a single chained call syncs data down from Salesforce into the named soup, with success handled in the then block and failure in the catch block.
In addition to powerful Mobile SDK features like secure storage and out-of-the-box synchronization, you get access to a ton of trusted enterprise services from the Salesforce platform, without having to reinvent the wheel.
As part of this partnership, Salesforce and Apple are working together to optimize Swift support in the Salesforce Mobile SDK. The new SDK will make it easier for developers to use native iOS features and constants and enums are now namespaced for better discoverability. A developer preview of the new SDK will be coming soon!
Together, Salesforce and Apple are launching new learning content on Trailhead to help everyone learn how to build native iOS apps with Swift and Xcode. Effective today, you can take the new Get Started with iOS App Development trail and earn the following badges:
By taking this trail, you will get a hands-on introduction to Swift, create a basic iOS app with Xcode, and learn how to leverage data and services from Salesforce in a secure, trusted environment. Updates to the Salesforce Mobile SDK Basics and Native iOS modules are coming this fall, and all of this content is available free on Trailhead today.
And if you’re joining us at Dreamforce, make sure to stop by the Apple and Salesforce area in the Trailhead Zone (Moscone West, 1st floor) to earn your Swift and Xcode badges on Trailhead.
|
https://developer.salesforce.com/blogs/2018/09/build-ios-apps-with-salesforce.html
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Optimizing VR Hit Game Space Pirate Trainer* to Perform on Intel® Integrated Graphics
Published: 05/29/2018 Last Updated: 05/29/2018
By Cristiano Ferreira (@cristianohh), Dirk Van Welden (@quarkcannon), and Seth Schneider (@SethPSchneider)
Space Pirate Trainer* was one of the original launch titles for HTC Vive*, Oculus Touch*, and Windows Mixed Reality*. Version 1.0 launched in October 2017, and swiftly became a phenomenon, with a presence in VR arcades worldwide, and over 150,000 units sold according to RoadtoVR.com. It's even the go-to choice of many hardware producers to demo the wonders of VR.
"I'm Never Going to Create a VR Game!" These are the words Dirk Van Welden, CEO of I-Illusions and Creative Director of Space Pirate Trainer. He made the comment after first using the Oculus Rift* prototype. He received the unit as a benefit of being an original Kickstarter* backer for the project, but was not impressed, experiencing such severe motion sickness that he was ready to give up on VR in general.
Luckily, positional tracking came along and he was ready to give VR another shot with a prototype of the HTC Vive. After one month of experimentation, the first prototype of Space Pirate Trainer was completed. Van Welden posted it to the SteamVR* forums for feedback and found a growing audience with every new build. The game caught the attention of Valve*, and I-Illusions was invited to their SteamVR developer showcase to introduce Space Pirate Trainer to the press. Van Welden knew he really had something, so he accepted the invitation. The game became wildly popular, even in the pre-beta phase.
What is Mainstream VR?
Mainstream VR is the idea of lowering the barrier of entry to VR, enabling users to play some of the most popular VR games without heavily investing in hardware. For most high-end VR experiences, the minimum specifications on the graphics processing side require an NVIDIA GeForce* GTX 970 or greater. In addition to purchasing an expensive discrete video card, the player also needs to pay hundreds of dollars (USD) for the headset and sensor combo. The investment can quickly add up.
But what if VR games could be made to perform on systems with integrated graphics? This would mean that any on-the-go user with a top-rated Microsoft Surface* Pro, Intel® NUC, or Ultrabook™ device could play some of the top VR experiences. Pair this with a Windows Mixed Reality headset that doesn't require external trackers, and you have a setup for VR anywhere you are. Sounds too good to be true? Van Welden and I thought so too, but what we found is that you can get a very good experience with minimal trade-offs.
Figure 1. Space Pirate Trainer* - pre-beta (left) and version 1.0 (right).
Rendering Two Eyes at 60 fps on a 13 Watt SoC with Integrated Graphics? What?
So, how do you get your VR game to run on integrated graphics? The initial port of Space Pirate Trainer ran at 12 fps without any changes from the NVIDIA GeForce GTX 970 targeted build. The required frame rate for the mainstream space is only 60 fps, but that left us with 48 fps to somehow get back through intense optimization. Luckily, we found a lot of low-hanging fruit—things that greatly affected performance, the loss of which brought little compromise in aesthetics. Here is a side-by-side comparison:
Figure 2. Comparison of Space Pirate Trainer* at 12 fps (left) and 60 fps (right).
Getting Started
VR developers probably have little experience optimizing for integrated graphics, and there are a few very important things to be aware of. In desktop profiling and optimization, thermals (heat generation) are not generally an issue. You can typically consider both the CPU and GPU in isolation and can assume that each will run at their full clock-rate. Unfortunately, this isn't the case for SoCs (System on a Chip). "Integrated graphics" means that the GPU is integrated onto the same chip as the CPU. Every time electricity travels through a circuit, some amount of heat is generated and radiates throughout the part. Since this is happening on both the CPU and GPU side, this can produce great amounts of heat to the total package. To make sure the chip doesn't get damaged, clock rates for either the CPU or the GPU need to be throttled to allow for intermittent cooling.
For consistency, it's very helpful to get a test system with predictable thermal patterns to use as a baseline. This enables you to always have a solid reference point to go back to and verify the performance improvements or regressions as you experiment. For this, we recommend using the GIGABYTE Ultra-Compact PC GB-BKi5HT2-7200* as a baseline, as its thermals are very consistent. Once you've got your game at a consistent 60 fps on this machine, you can target individual original equipment manufacturer (OEM) machines and see how they work. Each laptop has their own cooling solution, so it helps to run the game on popular machines to make sure their cooling solutions can keep up.
Figure 3. System specs for the GIGABYTE Ultra-Compact PC GB-BKi5HT2-7200*.
* Product may vary based on local distribution.
- Features the latest 7th generation Intel® Core™ processors

The above system is currently used at Intel for all of our optimizations to achieve consistent results. For the purpose of this article and the data we provide, we'll be looking at the Microsoft Surface Pro:
Figure 4. System specs for the Microsoft Surface* Pro used for testing.
For this intense optimization exercise, we used Intel® Graphics Performance Analyzers (Intel® GPA), a suite of graphics analysis tools. I won't go into the specifics of each, but for the most part we are going to be utilizing the Graphics Frame Analyzer. Anyway, on to the optimizations!
Optimizations
To achieve 60 fps without compromising much in the area of aesthetics, we tried a number of experiments. The following list shows the biggest bang for the optimization buck as far as performance and minimizing art changes goes. Of course, every game is different, but these are a collection of great first steps for you to experiment with.
Shaders—floor
The first optimization is perhaps the easiest and most effective change to make. The floor of your scenes can take up quite a bit of pixel coverage.
Figure 5. Floor scene, with the floor highlighted.
The above figure shows the floor highlighted in the frame buffer in magenta. In this image, the floor takes up over 60 percent of the scene as far as pixel coverage goes. This means that material optimizations affecting the floor can have huge implications on keeping below frame-budget. Space Pirate Trainer was using the standard Unity* shader with reflection probes to get real-time reflections on the surface. Reflections are an awesome feature to have, but a bit too expensive to calculate and sample every frame on our target system. We replaced the standard shader with a simple Lambert* shader. Not only was the reflection sampling saved, but this also avoided the extra passes required for dynamic lights marked as 'Important' when using the Forward Rendering System used by Windows Mixed Reality titles.
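For reference, a minimal Unity surface shader using the Lambert lighting model looks something like this (a generic sketch, not the game's actual floor shader):

Shader "Custom/SimpleLambert" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // Lambert: simple diffuse lighting, no reflection sampling or extra passes
        #pragma surface surf Lambert
        sampler2D _MainTex;
        struct Input {
            float2 uv_MainTex;
        };
        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}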
Figure 6. Original measurements for rendering the floor, before optimizations.
Figure 7. Measurements for rendering the floor, after optimizations.
Looking at the performance comparison above, we can see that the original cost of rendering the floor was around ~1.5 ms per frame, and the cost with the replacement shader was only ~0.3 ms per frame. This is a 5x performance improvement.
Figure 8. The assembly for our shader was reduced from 267 instructions (left) down to only 47 (right) and had significantly less pixel shader invocations per sample.
As shown in the figure above, the assembly for our shader was reduced from 267 instructions down to only 47 and had significantly less pixel shader invocations per sample.
Figure 9. Side-by-side comparison of the same scene, with a standard shader on the left and the optimized Lambert* shader on the right.
The above image shows the high-end build with no changes except for the replacement of the standard shader with the Lambert shader. Notice that after all these reductions and cuts, we're still left with a good-looking, cohesive floor. Microsoft has also created optimized versions of many of the Unity built-in shaders and added them as part of their Mixed Reality Toolkit. Experiment with the materials in the toolkit and see how they affect the look and performance of your game.
Shaders—material batching with unlit shaders
Draw call batching is the practice of bundling up separate draw calls that share common state properties into batches. The render thread is often a point of bottleneck contention, especially on mobile and VR, and batching is only one of the main tools in your utility belt to alleviate driver bottlenecks. The common properties required to batch draw calls, as far as the Unity engine is concerned, are materials and the textures used by those materials. There are two kinds of batching that the Unity engine is capable of: static batching and dynamic batching.
Static batching is very straightforward to achieve, and typically makes sense to use. As long as all static objects in the scene are marked as static in the inspector, all draw calls associated with the mesh renderer components of those objects will be batched (assuming they share the same materials and textures). It's always best practice to mark all objects that will remain static as such in the inspector for the engine to smartly optimize unnecessary work and remove them from consideration for the various internal systems within the Unity engine, and this is especially true for batching. Keep in mind that for Windows Mixed Reality mainstream, instanced stereoscopic rendering is not implemented yet, so any saved draw calls will count two-fold.
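Static batching can also be triggered from script for objects spawned at runtime. A sketch using Unity's built-in utility (the field name is an assumption):

using UnityEngine;

public class BatchSpawnedProps : MonoBehaviour
{
    public GameObject propsRoot; // parent of spawned props that will never move

    void Start()
    {
        // Combines the meshes of all children sharing materials so they can be
        // drawn in far fewer draw calls. The objects must not move afterwards.
        StaticBatchingUtility.Combine(propsRoot);
    }
}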
Dynamic batching has a little bit more nuance. The only difference in requirements between static and dynamic batching is that the vertex attribute count of dynamic objects must be considered and stay below a certain threshold. Be sure to check the Unity documentation for what that threshold is for your version of Unity. Verify what is actually happening behind the scenes by taking a frame capture in the Intel GPA Graphics Frame Analyzer. See Figures 10 and 11 below for the differences in the frame visualization between a frame of Space Pirate Trainer with batching disabled and enabled.
Figure 10. Batching and instancing disabled; 1,300 draw calls; 1.5 million vertices total; GPU duration 3.5 ms/frame.
Figure 11. Batching and instancing enabled; 8 draw calls; 2 million vertices total; GPU duration 1.7 ms/frame (2x performance improvement).
As shown above, the amount of draw calls required to render 1,300 ships (1.5 million verts total) went from 1,300 all the way down to 8. In the batched example, we actually ended up rendering more ships (2 million vertices total) to drive the point home. Not only does this save a huge amount of time on the render thread, but it also saves quite a bit of time on the GPU by running through the graphics pipeline more efficiently. We actually get a 2x performance improvement by doing so. To maximize the total amount of calls batched, we can also leverage a technique called Texture Atlasing.
A Texture Atlas is essentially a collection of textures and sprites used by different objects packed into a single big texture. To utilize the technique, texture coordinates need to be updated to conform to the change. It may sound complicated, but the Unity engine has utilities to make it easy and automated. Artists can also use their modeling tool of choice to build out atlases in a way they're familiar with. Recalling the batching requirement of shared textures between models, Texture Atlases can be a powerful tool to save you from unnecessary work at runtime, helping to get you rendering at less than 16.6 ms/frame.
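As a sketch of the runtime route, Unity's Texture2D.PackTextures can build an atlas and hand back the UV rects needed to remap each mesh (the sizes and padding here are arbitrary):

using UnityEngine;

public static class AtlasBuilder
{
    // Packs the given textures into one atlas; the returned rects tell you
    // how to remap each object's UVs into atlas space.
    public static Rect[] BuildAtlas(Texture2D[] sources, out Texture2D atlas)
    {
        atlas = new Texture2D(2048, 2048);
        return atlas.PackTextures(sources, 2, 2048);
    }
}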
Key takeaways:
- Make sure all objects that will never move over their lifetime are marked static.
- Make sure dynamic objects you want to batch have fewer vertex attributes than the threshold specified in the Unity docs.
- Make sure to create texture atlases to include as many batchable objects as possible.
- Verify actual behavior with Intel GPA Graphics Frame Analyzer.
Shaders—LOD system for droid lasers
For those unfamiliar with the term, LOD (Level of Detail) systems refer to the idea of swapping various asset types dynamically, based on certain parameters. In this section, we will cover the process of swapping out various materials depending on distance from the camera. The idea being, the further away something is, the fewer resources you should need to achieve optimal aesthetics for lower pixel coverage. Swapping assets in and out shouldn't be apparent to the player. For Space Pirate Trainer, Van Welden created a system to swap out the Unity standard shader used for the droid lasers for a simpler shader that approximates the desired look when the laser is a certain distance from the camera. See the sample code below:
using System.Collections;
using UnityEngine;

public class MaterialLOD : MonoBehaviour
{
    public Transform cameraTransform = null;              // camera transform
    public Material highLODMaterial = null;               // complex material
    public Material lowLODMaterial = null;                // simple material
    public float materialSwapDistanceThreshold = 30.0f;   // swap to low LOD when 30 units away
    public float materialLODCheckFrequencySeconds = 0.1f; // check every 100 milliseconds

    private WaitForSeconds lodCheckTime;
    private MeshRenderer objMeshRenderer = null;
    private Transform objTransform = null;

    // « Imaaaagination » - Imagine coroutine is kicked off in Start(). Go green and conserve slide space.
    IEnumerator Co_Update ()
    {
        objMeshRenderer = GetComponent<MeshRenderer>();
        objTransform = GetComponent<Transform>();
        lodCheckTime = new WaitForSeconds(materialLODCheckFrequencySeconds);
        while (true)
        {
            if (Vector3.Distance(cameraTransform.position, objTransform.position) > materialSwapDistanceThreshold)
            {
                objMeshRenderer.material = lowLODMaterial;  // swap material to simple
            }
            else
            {
                objMeshRenderer.material = highLODMaterial; // swap material to complex
            }
            yield return lodCheckTime;
        }
    }
}
Sample code for the distance-based material swap
This is a very simple update loop that will check the distance of the object being considered for material swapping every 100 ms, and switch out the material if it's over 30 units away. Keep in mind that swapping materials could potentially break batching, so it's always worth experimenting to see how optimizations affect your frame times on various hardware levels.
On top of this manual material LOD system, the Unity engine also has a model LOD system built into the editor (access the documentation here). We always recommend forcing the lowest LOD for as many objects as possible on lower-watt parts. For some key pieces of the scene where high fidelity can make all the difference, it's ok to compromise for more computationally expensive materials and geometry. For instance, in Space Pirate Trainer, Van Welden decided to spare no expense to render the blasters, as they are always a focal point in the scene. These trade-offs are what help the game maintain the look needed, while still maximizing target hardware—and enticing potential VR players.
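One simple way to force lower LODs globally on mainstream hardware is through Unity's quality settings. A sketch, with the memory threshold and level values as assumptions:

using UnityEngine;

public class MainstreamLODSetup : MonoBehaviour
{
    void Start()
    {
        // Crude mainstream check: integrated parts report little graphics memory
        if (SystemInfo.graphicsMemorySize <= 1024)
        {
            // Skip the highest-detail LOD level engine-wide
            QualitySettings.maximumLODLevel = 1;
            // Bias LOD selection toward lower-detail meshes
            QualitySettings.lodBias = 0.5f;
        }
    }
}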
Lighting and post effects—remove dynamic lights
As previously mentioned, real-time lights can heavily affect performance on the GPU while the engine utilizes the forward rendering path. The way this performance impact manifests is through additional passes for models affected by the primary directional light, as well as all lights marked as important in the inspector (up to the Pixel Light Count setting in Quality Settings). If you have a model that's standing in the middle of two important dynamic point lights and the primary directional light, you're looking at at least three passes for that object.
Figure 12. Dynamic point lights at the base of each weapon light up when it fires in Space Pirate Trainer*, contributing 5 ms of frame time for the floor (highlighted).
In Space Pirate Trainer, the point-lights parented to the barrel of the gun were disabled in low settings to avoid these extra passes, saving quite a bit of frame time. Recalling the section about floor rendering, imagine that the whole floor was sent through for rendering three times. Now consider having to do that for each eye; you'd get six total draws of geometry that cover about 60 percent of the pixels on the screen.
Key takeaways:
- Make sure that all dynamic lights are removed/marked unimportant.
- Bake as much light as possible.
- Use light probes for dynamic lighting.
Post-processing effects
Post-processing effects can take a huge cut of your frame budget if care isn't taken. The optimized "High" settings for Space Pirate Trainer utilize Unity's post-processing stack, but still only take around 2.6 ms/frame on our surface target. See the image below:
Figure 13. "High" settings showing 14 passes (reduced from much more); GPU duration of 2.6 ms/frame.
The highlighted section above shows all of the draw calls involved in post-processing effects for Space Pirate Trainer, and the pop-out window shown is the total GPU duration of those selected calls—around 2.6 ms. Initially, Van Welden and the team tested the mobile bloom to replace the typically used effect, but found that it caused distracting flickering. Ultimately, it was decided that bloom should be scrapped and the remaining stylizing effects could be merged into a single custom pass using color lookup tables to approximate the look of the high-end version.
Merging the passes brought the frame time down from the previously noted 2.6 ms/frame to 0.6 ms/frame (4x performance improvement). This optimization is a bit more involved and may require the expertise of a good technical artist for more stylized games, but it's a great trick to keep in your back pocket. Also, even though the mobile version of Bloom* didn't work for Space Pirate Trainer, testing mobile VFX solutions is a great, quick-and-easy experiment to test first. For certain scene setups, they may just work and are much more performant. Check out the frame capture representing the scene on "low" settings with the new post-processing effect pass implemented:
Figure 14. "Low" settings, consolidating all post-processing effects into one pass; GPU duration of 0.6 ms/frame.
HDR usage and vertical flip
Avoiding the use of high-dynamic range (HDR) textures on your "low" tier can benefit performance in numerous ways—the main one being that HDR textures and the techniques that require them (such as tone mapping and bloom) are relatively expensive. There is additional calculation for color finalization and more memory required per-pixel to represent the full color range. On top of this, the use of HDR textures in Unity has the scene rendered upside down. Typically, this isn't an issue as the final render-target flip only takes around 0.3 ms/frame, but when your budget is down to 16.6 ms/frame to render at 60 fps, and you need to do the flip once for each eye (~0.6 ms/frame total), this accounts for quite a significant chunk of your frame.
Figure 15. Single-eye vertical flip, 291 μs (~0.3 ms).
Key takeaways:
- Uncheck HDR boxes on scene camera.
- If post-production effects are a must, use the Unity engine's Post-Processing Stack and not disparate Image Effects that may do redundant work.
- Remove any effect requiring a depth pass (fog, etc.).
Post processing—anti-aliasing
Multisample Anti-Aliasing (MSAA) is expensive. For low settings, it's wise to switch to a temporally stable post-production effect anti-aliasing solution. To get a feel for how expensive MSAA can be on our low-end target, let's look at a capture of Space Pirate Trainer on high settings:
Figure 16. ResolveSubresource cost while using Multisample Anti-Aliasing (MSAA).
The ResolveSubresource API call is the fixed-function aspect of MSAA that determines the final pixels for render targets with MSAA enabled. We can see above that this step alone requires about 1 ms/frame. This is added to the additional work required per-draw that's hard to quantify.
Alternatively, there are several cheaper post-production effect anti-aliasing solutions available, including one Intel has developed called Temporally Stable Conservative Morphological Anti-Aliasing (TSCMAA). TSCMAA is one of the fastest anti-aliasing solutions available to run on Intel® integrated graphics. If rendering at a resolution less than 1280x1280 before upscaling to native head mounted display (HMD) resolution, post-production effect anti-aliasing solutions become increasingly important to avoid jaggies and maintain a good experience.
Figure 17. Temporally Stable Conservative Morphological Anti-Aliasing (TSCMAA) provides an up to 1.5x performance improvement over 4x Multisample Anti-Aliasing (MSAA) with a higher quality output. Notice the aliasing (stair stepping) differences in the edges of the model.
Raycasting CPU-side improvements for lasers
Raycasting operations in general are not super expensive, but when you've got as much action as there is in Space Pirate Trainer, they can quickly become a resource hog. If you're wondering why we were worried about CPU performance when most VR games are GPU-bottlenecked, it's because of thermal throttling. What this means is that any work across a System on Chip (SoC) generates heat across the entire system package. So even though the CPU is not technically the bottleneck, the heat generated by CPU work can contribute enough heat to the package that the GPU frequency, or even its own CPU frequency, can be throttled and cause the bottleneck to shift depending on what's throttled and when.
Heat generation adds a layer of complexity to the optimization process; mobile developers are quite familiar with this concept. Going down the rabbit hole of finding the perfect standardized optimization method for CPUs with integrated GPUs has become a distraction, but it doesn't have to. Just think about holistic optimization as the main goal. Using general good practices on both the CPU and GPU will go a long way in this endeavor. Now that my slight tangent is over, let's get back to the raycasting optimization itself.
The idea of this optimization is that raycast checking frequency can fluctuate based on distance. The farther away the raycast, the more frames you can skip between checks. In his testing, Van Welden found that in the worst case, the actual raycast check and response of far-away objects only varied by a few frames, which is almost undetectable at the frame rate required for VR rendering.
private int raycastSkipCounter = 0;
private int raycastDynamicSkipAmount;
private int distanceSkipUnit = 5;

public bool CheckRaycast()
{
    checkRaycastHit = false;
    raycastSkipCounter++;
    raycastDynamicSkipAmount = (int)(Vector3.Distance(playerOriginPos, transform.position) / distanceSkipUnit);
    if (raycastSkipCounter >= raycastDynamicSkipAmount)
    {
        if (Physics.Raycast(transform.position, moveVector.normalized, out rh,
            transform.localScale.y + moveVector.magnitude * bulletSpeed * Time.deltaTime * Mathf.Clamp(raycastDynamicSkipAmount, 1, 10),
            laserBulletLayerMask)) //---never skip more than 10 frames
        {
            checkRaycastHit = true;
            Collision(rh.collider, rh.point, rh.normal, true);
        }
        raycastSkipCounter = 0;
    }
    return checkRaycastHit;
}
Sample code showing how to do your own raycasting optimization
Render at Lower Resolution and Upscale
Most Windows Mixed Reality headsets have a native resolution of 1.4k, or greater, per eye. Rendering to a target at this resolution can be very expensive, depending on many factors. To target lower-watt integrated graphics components, it's very beneficial to set your render target to a reasonably lower resolution, and then have the holographic API automatically scale it up to fit the native resolution at the end. This dramatically reduces your frame time, while still looking good. For instance, Space Pirate Trainer renders to a target with 1024x1024 resolution for each eye, and then upscales.
Figure 18. The render target resolution is specified at 1024x1024, while the upscale is to 1280x1280.
There are a few factors to consider when lowering your resolution. Obviously, all games are different and lowering resolution significantly can affect different scenes in different ways. For instance, games with a lot of fine text might not be able to go to such a low resolution, or a different trick must be used to maintain text fidelity. This is sometimes achieved by rendering UI text to a full-size render target and then blitting it on top of the lower resolution render target. This technique will save much compute time when rendering scene geometry, but not let overall experience quality suffer.
Another factor to consider is aliasing. The lower the resolution of the render-target you render to, the more potential for aliasing you have. As mentioned before, some quality loss can be recouped using a post-production effect anti-aliasing technique. The pixel invocation savings from rendering your scene at a lower resolution usually come in net positive, after the cost of anti-aliasing is considered.
#define MAINSTREAM_VIEWPORT_HEIGHT_MAX 1400

void App::TryAdjustRenderTargetScaling()
{
    HolographicDisplay^ defaultHolographicDisplay = HolographicDisplay::GetDefault();
    if (defaultHolographicDisplay == nullptr)
    {
        return;
    }

    Windows::Foundation::Size originalDisplayViewportSize = defaultHolographicDisplay->MaxViewportSize;

    if (originalDisplayViewportSize.Height < MAINSTREAM_VIEWPORT_HEIGHT_MAX)
    {
        // we are on a 'mainstream' (low end) device.
        // set the target a little lower.
        float target = 1024.0f / originalDisplayViewportSize.Height;
        Windows::ApplicationModel::Core::CoreApplication::Properties->Insert(
            "Windows.Graphics.Holographic.RenderTargetSizeScaleFactorRequest", target);
    }
}
Sample code for adjusting render-target scaling
Render VR hands first and other sorting considerations
In most VR experiences, some form of hand replacement is rendered to represent the position of the players' actual hand. In the case of Space Pirate Trainer, not only were the hand replacements rendered, but also the players' blasters. It's not hard to imagine these things covering a large amount of pixels across both eye-render targets. Graphics hardware has an optimization called early-z rejection, which allows hardware to compare the depth of a pixel being rendered to the existing depth value from the last rendered pixel. If the current pixel is farther back than the last pixel, the pixel doesn't need to be written and the invocation cost of that pixel shader and all subsequent stages of the graphics pipeline are saved. Graphics rendering works like the reverse painters' algorithm. Painters typically paint from back to front, while you can get tremendous performance benefits rendering your scene in a game from front to back because of this optimization.
Figure 19. Drawing the blasters in Space Pirate Trainer* at the beginning of the frame saves pixel invocations for all pixels covered by them.
It's hard to imagine a scenario where the VR hands, and the props those hands hold, will not be the closest mesh to the camera. Because of this, we can make an informed decision to force the hands to draw first. This is very easy to do in Unity; all you need to do is find the materials associated with the hand meshes, along with the props that can be picked up, and override their RenderQueue property. We can guarantee that they will be rendered before all opaque objects by using the RenderQueue enum available in the UnityEngine.Rendering namespace. See the figures below for an example.
namespace UnityEngine.Rendering
{
    public enum RenderQueue
    {
        Background = 1000,
        Geometry = 2000,
        AlphaTest = 2450,
        GeometryLast = 2500,
        Transparent = 3000,
        Overlay = 4000
    }
}
RenderQueue enumeration in the UnityEngine.Rendering namespace
using UnityEngine;
using UnityEngine.Rendering;

public class RenderQueueUpdate : MonoBehaviour
{
    public Material myVRHandsMaterial;

    // Use this for initialization
    void Start ()
    {
        // Guarantee that VR hands using this material will be rendered before all other opaque geometry.
        myVRHandsMaterial.renderQueue = (int)RenderQueue.Geometry - 1;
    }
}
Sample code for overriding a material's RenderQueue parameter
Overriding the RenderQueue order of the material can be taken further, if necessary, as there is a logical grouping of items dressing a scene at any given moment. The scene can be categorized (see figure below) and ordered as such:
- Draw VR hands and any interactables (weapons, etc.).
- Draw scene dressings.
- Draw large set pieces (buildings, etc.).
- Draw the floor.
- Draw the skybox (usually already done last if using built-in Unity skybox).
Figure 20. Categorizing the scene can help when overriding the RenderQueue order.
The Unity engine's sorting system usually takes care of this very well, but you sometimes find objects that don't follow the rules. As always, check your scene's frame in GPA first to make sure everything is being ordered properly before applying these methods.
Skybox compression
This last one is an easy fix with some potentially advantages. If the skybox textures used in your scene aren't already compressed, a huge gain can be found. Depending on the type of game, the sky can cover a large amount of pixels in every frame; making the sampling be as light as possible for that pixel shader can have a good impact on your frame rate. Additionally, it may also help to lower the resolution on skybox textures when your game detects it's running on a mainstream system. See the performance comparison shown below in Space Pirate Trainer:
Figure 21. A 5x gain in performance can be achieved from simply lowering skybox resolution from 4k to 2k. Additional improvements can be made by compressing the textures.
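A sketch of the detect-and-swap idea (the memory threshold and material names are assumptions):

using UnityEngine;

public class SkyboxQualitySelector : MonoBehaviour
{
    public Material skybox4k; // high-resolution skybox material
    public Material skybox2k; // lower-resolution, compressed skybox material

    void Start()
    {
        // Treat low-memory integrated parts as mainstream and pick the cheap skybox
        bool isMainstream = SystemInfo.graphicsMemorySize <= 1024;
        RenderSettings.skybox = isMainstream ? skybox2k : skybox4k;
    }
}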
Conclusion
By the end, we had Space Pirate Trainer running at 60 fps on the "low" setting on a 13-watt integrated graphics part. Van Welden subsequently fed many of the optimizations back into the original build for higher-end platforms so that everybody could benefit, even on the high end.
Figure 22. Final results: From 12 fps all the way up to 60 fps.
The "high" setting, which previously ran at 12 fps, now runs at 35 fps on the integrated graphics system. Lowering the barrier to VR entry to a 13-watt laptop can put your game in the hands of many more players, and help you get more sales as a result. Download Intel GPA today and start applying these optimizations to your game.
Resources
Intel Graphics Performance Analyzer Toolkit
Unity Optimization Article
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at.
|
https://software.intel.com/content/www/us/en/develop/articles/optimizing-vr-hit-game-space-pirate-trainer-to-perform-on-intel-integrated-graphics.html
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Re: [gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
On 19.12.19 18:37, Michał Górny wrote: > We have a better alternative that lets us limit the impact on the users. > Why not use it? Which one? The CMake bootstrap copy? The adding to stage3 one?
Re: [gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
Hey!

On 19.12.19 17:03, Michał Górny wrote:
>> B) Introduce USE flag "system-expat" to CMake similar to existing
>>    flag "system-jsoncpp", have it off by default, keep reminding
>>    CMake upstream to update their bundle
>>
>> [..]
>
> It violates the policy on bundled libraries.

Same for the dev-util/cmake-bootstrap approach, right?

> What's worse, the awful
> USE flags solution means that most of the Gentoo devs end up using
> bundled libraries just because people are manually required to figure
> out what to do in order to disable them.

I didn't say that it's perfect :) It's the same approach that we have with the system-jsoncpp USE flag already, so that was considered good enough at some point in the past. I guess we want the same for Expat and jsoncpp? Which alternative do you see as better than a new flag system-expat?

Best Sebastian
Re: [gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
Hey!

Thanks everyone for your thoughts so far! From what I heard, these two options seem realistic to me:

A) Ask the KDE team for help with teaming up on a new package dev-util/cmake-bootstrap, keep it in sync with dev-util/cmake, make sure both packages co-exist with full disjoint operation, i.e. zero file conflicts + zero cross package file usage (tricky?).

B) Introduce USE flag "system-expat" to CMake similar to existing flag "system-jsoncpp", have it off by default, keep reminding CMake upstream to update their bundle.

I favor (B) by more than just a bit. Does anyone have strong concerns against moving in the dev-util/cmake[-system-expat] (B) direction? Is it acceptable if I make those changes to the CMake ebuild myself?

Thanks again Sebastian
Re: [gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
On 19.12.19 14:32, Rolf Eike Beer wrote:
> These things _are_ updated regularly

To be fair, they update because I keep opening update requests:
[gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
Hi all, I noticed that dev-util/cmake depends on dev-libs/expat and that libexpat upstream (where I'm involved) is in the process of dropping GNU Autotools altogether in favor of CMake in the near future, potentially the next release (without any known target release date). CMake bundles a (previously outdated and vulnerable) copy of expat so I'm not sure if re-activating that bundle — say with a new use flag "system-expat" — would be a good thing to resort to for breaking the cycle, with regard to security in particular. Do you have any ideas how to avoid a bad circular dependency issue for our users in the future? Are you aware of similar problems and solutions from the past? Thanks and best Sebastian
[gentoo-dev] Packages up for grabs: Gimp and related (gegl, babl, mypaint)
Hello, I need to admit that I don't have enough time to keep up with maintaining the Gimp-related packages well enough in Gentoo. The latest Babl and Gegl ebuilds are using Meson by now; Gimp is up next, and 2.10.14 is just out the door. These packages are up for grabs now:

media-gfx/gimp
media-libs/babl
media-libs/gegl
media-gfx/mypaint (not mine but maintainer-needed and related)
media-gfx/mypaint-brushes
media-libs/libmypaint

Best Sebastian
[gentoo-dev] Last rites: sys-fs/pytagsfs
# Sebastian Pipping (22 May 2019) # Masked for removal in 30 days (bug #686562) # Unfixed bug, dead upstream, not relevant enough sys-fs/pytagsfs
Re: [gentoo-dev] Merge 7 Fedora wallpapers packages to single one with slots?
Hi Alec,

On 27.01.2018 22:58, Alec Warner wrote:
> > I noticed that we have 7 packages on Fedora wallpapers with names that
> > only explain themselves to Fedora insiders:
>
> So traditionally we follow upstream package naming. If we aim to
> deviate, I'd prefer we have strong reasons for it.

Good point.

> Why not just make x11-themes/fedora-backgrounds, a metapackage that
> includes all of the packages?

With one file and use flags for each version or with one ebuild file per slot? Fedora 21 was the last release with a release name so if we package 22+ later, their ebuilds would be non-meta in nature. I'm not sure how to blend that into the use-flag version (yet for a meta package all these files seem overkill too). Do you have some third option in mind?

Best Sebastian
Re: [gentoo-dev] Merge 7 Fedora wallpapers packages to single one with slots?
On 27.01.2018 19:06, Sebastian Pipping wrote: > 11-solar > 12-constantine > 13-goddard > 14-laughlin > 15-lovelock > 16-verne Correction: 10-solar 11-leonidas 12-constantine 13-goddard 14-laughlin 15-lovelock 16-verne
Re: [gentoo-dev] Merge 7 Fedora wallpapers packages to single one with slots?
Hi,

On 27.01.2018 17:32, Michael Orlitzky wrote:
> If you do merge them, then it might be better to use flags for the
> different sub-packages rather than slots. There's no place to describe
> what a slot is for, but having a local USE=solar with a corresponding
> description in metadata.xml is (relatively) discoverable.

Use flags work well if we make a single ebuild offering to install anywhere from one to all of these. That would be natural if the goddard/13 source rpm included files of constantine/12 and solar/11 as well and so on, but it doesn't seem to. I would rather go with one ebuild per major release number of Fedora: needs less use flag configuration as well. About slot names, if "12" is not good enough we could use

11-solar
12-constantine
13-goddard
14-laughlin
15-lovelock
16-verne

or so for SLOT to have a mapping to names?

Best Sebastian
Re: [gentoo-dev] time to retire
Stefan, thanks for your work on Gentoo! All the best Sebastian
[gentoo-dev] Merge 7 Fedora wallpapers packages to single one with slots?
Hi!

I noticed that we have 7 packages on Fedora wallpapers with names that only explain themselves to Fedora insiders:

# eix background | fgrep -B3 Fedora
* x11-themes/constantine-backgrounds
     Available versions:  12.1.1.4-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/goddard-backgrounds
     Available versions:  13.0.0.3-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/laughlin-backgrounds
     Available versions:  14.1.0.3-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/leonidas-backgrounds
     Available versions:  11.0.0.2-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/lovelock-backgrounds
     Available versions:  14.91.1.1-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/solar-backgrounds
     Available versions:  0.92.0.5-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/verne-backgrounds
     Available versions:  (~)15.91.0.1-r1
     Homepage:
     Description:         Fedora official background artwork

Any objections?

Best Sebastian
Re: [gentoo-dev] [RFC] Addition of a new field to metadata.xml
On 01.06.2017 23:18, Jonas Stein wrote:
> 2. Specification
>
> A space separated list of the corresponding Debian packages should be
> written in the field
>
> It should be NONE, if Debian has no corresponding package.
> UNSET or no field, if the creator of the ebuild did not set the field (yet).

Please pick NONE or require absence eventually, but not multiple options. Else we're asking for inconsistent data from the beginning.

> example:
> app-arch/tar/metadata.xml
> tar
>
> app-office/libreoffice-bin/metadata.xml
> libreoffice libreoffice-base libreoffice-base
> libreoffice-dev libreoffice-dmaths libreoffice-draw
> libreoffice-evolution libreoffice-impress

Since the difference between source and binary packages has already been brought up, please adjust "" in some way to indicate whether the text content is a source or a binary package (even if we don't end up supporting both) to be 100% clear. Otherwise people will mix it up, and may not even notice.

Best Sebastian
Re: [gentoo-dev] [rfc] dev-libs/expat[unicode] and dev-libs/libbsd dependency
Hi! Just a quick note for the record: 2.2.0-r2 has these changes now, no need to have that wait for the next release: Best Sebastian signature.asc Description: OpenPGP digital signature
Re: [gentoo-dev] [rfc] dev-libs/expat[unicode] and dev-libs/libbsd dependency
Hi, On 31.05.2017 21:16, Michał Górny wrote: >> How do you evaluate these options: >> >> a) Keep libexpatu.so + change libexpatw.so to CPPFLAGS=-DXML_UNICODE >> >> b) Drop libexpatu.so + change libexpatw.so to CPPFLAGS=-DXML_UNICODE > > Does any other distribution use libexpatu.so? If not, then there's > probably no point in keeping it. I found none but CoreOS, which is derived from Gentoo (..). >> )" > > I'd dare say the feature is 'arc4random', then that should be the name > of the flag. Good point. Best Sebastian signature.asc Description: OpenPGP digital signature
[gentoo-dev] [rfc] dev-libs/expat[unicode] and dev-libs/libbsd dependency
Hi!

The next release of dev-libs/expat is not far away and there are two things that I would appreciate input with, before the next bump in Gentoo:

-DXML_UNICODE_WCHAR_T issues and Gentoo/Debian mismatch
===

With USE=unicode, on Gentoo two extra libraries are built:

* libexpatu.so (with CPPFLAGS=-DXML_UNICODE)
* libexpatw.so (with CPPFLAGS=-DXML_UNICODE_WCHAR_T)
                                ^
However, -DXML_UNICODE_WCHAR_T has only ever worked with 2-byte wchar_t, while 4-byte wchar_t seems mainstream on Linux (and GCC -fshort-wchar would require libc to have the same, if you actually wanted to pass those wchar_t strings to wprintf and friends). So libexpatw.so in Gentoo is not functional at the moment. To make things worse, Debian has libexpatw.so with CPPFLAGS=-DXML_UNICODE, which corresponds to current libexpatu.so in Gentoo, rather than libexpatw.so. How do you evaluate these options:

a) Keep libexpatu.so + change libexpatw.so to CPPFLAGS=-DXML_UNICODE
b) Drop libexpatu.so + change libexpatw.so to CPPFLAGS=-DXML_UNICODE

Depend on dev-libs/libbsd
=

The next release is very likely to add (optional but helpful) support for arc4random_buf that dev-libs/libbsd provides (especially on systems with glibc prior to 2.25) [1]. I wonder if Expat's proximity to @system has any strong implications on whether )"

C) libbsd could even go into DEPEND and RDEPEND directly, or RDEPEND="dev-libs/libbsd"
D) libbsd should not become any kind of future dependency of dev-libs/expat.

Thanks for your time!

Best Sebastian

[1]
[gentoo-dev] Last rites: media-gfx/drqueue
# Sebastian Pipping <sp...@gentoo.org> (08 Oct 2016) # Dead upstream for years, ebuild needs work, 5 open bugs # Masked for removal in 30 days. media-gfx/drqueue
Re: [gentoo-dev] News item: Apache "-D PHP5" needs update to "-D PHP"
On 05.01.2016 20:35, Michael Orlitzky wrote: > I just pushed a new revision with this fix. In eselect-php-0.8.2-r1, > we ship both the new 70_mod_php.conf and the old 70_mod_php5.conf. The > latter comes with a big warning at the top of it, stating that it is for > backwards compatibility only. Cool, sounds like a great idea to me. I guess we don't need a news item any more then? Sebastian
Re: [gentoo-dev] News item: Apache "-D PHP5" needs update to "-D PHP"
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 04.01.2016 11:45, Lars Wendler wrote: > Hi Sebastian, > > to be honest I was very upset when I first stumbled upon this > problem. And yes I only found about it when my apache webserver > started to deliver php source code instead of the real sites. Exactly the same with me. > Doing such a change without getting in contact with me as apache > maintainer before the change was done is very... eh... impolite at > best. Just for the record, it wasn't me :) Best Sebastian -BEGIN PGP SIGNATURE- Version: GnuPG v2 iEYEARECAAYFAlaNbXwACgkQsAvGakAaFgDNmgCfXwHI2i15LT30MFw6eV7cDgyk sZYAnRwFHtwDAG/Z/p5zS4UvFXyvemGX =Xlrd -END PGP SIGNATURE-
[gentoo-dev] News item: Apache "-D PHP5" needs update to "-D PHP"
Hi! Better late than never. Posting 72 hours from now the earliest as advised by GLEP 42. Feedback welcome as usual.

===

Title: Apache "-D PHP5" needs update to "-D PHP"
Author: Sebastian Pipping <sp...@gentoo.org>
Content-Type: text/plain
Posted: 2016-01-04
Revision: 1
News-Item-Format: 1.0
Display-If-Installed: app-eselect/eselect-php[apache2]

With >=app-eselect/eselect-php-0.8.1, to enable PHP support for Apache 2.x, file /etc/conf.d/apache2 no longer needs to read

APACHE2_OPTS=". -D PHP5"

but

APACHE2_OPTS=". -D PHP"

, i.e. without "5" at the end. This change is related to unification in context of the advent of PHP 7.x. With that change, guard "" in file /etc/apache2/modules.d/70_mod_php.conf has a chance to actually pull in PHP support. Without updating APACHE2_OPTS, websites could end up serving PHP code (including configuration files with passwords) unprocessed to website visitors!

The origin of this news item is:

===

Best Sebastian
[gentoo-dev] Re: [gentoo-commits] repo/gentoo:master commit in: media-libs/gegl/
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 09.12.2015 07:41, Michał Górny wrote: > On Tue, 8 Dec 2015 21:54:44 + (UTC) "Sebastian Pipping" > <sp...@gentoo.org> wrote: > >> commit: a1ea06b430e14f68b5b7bf1947a681215157c034 Author: >> Sebastian Pipping gentoo org> AuthorDate: Tue >> Dec 8 21:49:31 2015 + Commit: Sebastian Pipping > gentoo org> CommitDate: Tue Dec 8 21:54:00 2015 >> + URL: >> >> >> media-libs/gegl: Fix ffmpeg/libav dependency (bug #567638) >> >> Package-Manager: portage-2.2.26 >> >> media-libs/gegl/gegl-0.3.4.ebuild | 10 ++ 1 file changed, >> 6 insertions(+), 4 deletions(-) >> >> diff --git a/media-libs/gegl/gegl-0.3.4.ebuild >> b/media-libs/gegl/gegl-0.3.4.ebuild index 764b6c9..c2b9409 >> 100644 --- a/media-libs/gegl/gegl-0.3.4.ebuild +++ >> b/media-libs/gegl/gegl-0.3.4.ebuild @@ -18,7 +18,7 @@ if [[ ${PV} >> == ** ]]; then> + > ...which change is put silently under 'dependency fix' with no > explicit warning, and effectively breaks ~ia64 reverse > dependencies: > > ia-gfx/gimp There > is for that now. If I don't hear from ia64 and/or sparc until tomorrow night, I will drop those keywords from Gimp as well. If it's more urgent, I'm happy with anyone else doing that before me. I hope that's okay for everyone. Else, please let me know. Best, Sebastian -BEGIN PGP SIGNATURE- Version: GnuPG v2 iEYEARECAAYFAlZoxEkACgkQsAvGakAaFgBvmACfRDY19JxNYqClQaYfVREJevp/ GzAAoMHIWJGN39fyNgvL8+RCxvaKbl36 =1w1B -END PGP SIGNATURE-
Re: [gentoo-dev] repoman adding Package-Manager: portage-2.2.20.1 to every single commit
On 19.08.2015 18:33, hasufell wrote:
> I don't want to start a lot of bikeshed, but I think this information is practically useless. If there has been a problem with a commit, ask the developer about his repoman version (which I believe was the reason for this, unless you want me to add Package-Manager: paludis-2.4.0 to every commit ;). Let's just remove it.

With that line removed, how do we notice that people are committing without repoman or not running repoman checks at least? There is quite a risk of things going straight into stable by mistake when repoman is not used. Best, Sebastian
Re: [gentoo-dev] [RFC] Rebooting the Installer Project
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 20.07.2015 10:51, Michał Górny wrote: [..] I think we'd really benefit from having some kind of helper scripts / checklist of tasks to be done prior to/after install. For example, you'd run 'check-my-install' script and it'd tell you what you likely forgot to set up :). +1 Sebastian
[gentoo-dev] Problems updating Qt from 4.8.6 to 4.8.7
Hi there! I'm having trouble updating Qt:4 (dev-qt/qt*-4.8*:4) from 4.8.6 to 4.8.7. Looking at the ebuilds, they require some 4.8.7 versions to be installed already that in turn cannot be installed because other ebuilds require 4.8.6 while not yet upgraded. I am running the latest version of portage. Is there some trick I should know about or am I stuck with Qt 4.8.6 on that box forever? How did you update? Thanks for your help, best, S
Re: [gentoo-dev] Problems updating Qt from 4.8.6 to 4.8.7
On 05.07.2015 20:44, Alexandre Rostovtsev wrote:
> What I usually end up doing is listing my installed dev-qt/qt* ebuilds, and updating all of them together explicitly: emerge -1 qtcore:4 qtgui:4 qtsql:4 etc.

That's what I tried but it doesn't seem to work with this update. Looking at the dependencies of qtgui:

dev-qt/qtgui-4.8.6-r4  DEPEND  ~dev-qt/qtcore-4.8.6
dev-qt/qtgui-4.8.7     DEPEND  ~dev-qt/qtcore-4.8.7

I really wonder if there is any update path from having

dev-qt/qtcore-4.8.6-r2
dev-qt/qtgui-4.8.6-r4

installed before to

dev-qt/qtcore-4.8.7
dev-qt/qtgui-4.8.7

after. Right now, it looks like I have to use emerge -C .. to un-install them completely, temporarily breaking Qt and installing 4.8.7 fresh. I'm still hoping for some way to not need to do that.

> Alternatively, just try emerge --update --deep world - it probably should work if you have a consistent, complete and updateable world set.

That's where I'm coming from. It doesn't stop complaining because of Qt. Best, Sebastian
Re: [gentoo-dev] Re: Review: Apache AddHandler news item
Hello Duncan,

On 06.04.2015 06:53, Duncan wrote:
> Sebastian Pipping posted on Mon, 06 Apr 2015 01:29:19 +0200 as excerpted:
>> Published a slightly improved version now:- apache-addhandler-addtype
>> If there's anything wrong with it, please mail me directly (or put me in CC) so there is zero chance of slipping through. Thanks!
> [also mailing sp...@gentoo.org as requested]

thanks!

> $ echo Apache AddHandler/AddType vulnerability protection | wc -c
> 51
>
> GLEP 42 says max title length 44 chars. 51-44=7 chars too long.

Actually, echo prints a newline that is also counted. So it's 50 and 6 characters too much but you still have a point :)

> Off the top of my head, maybe just s/vulnerability/vuln/ ? That'd cut 9 chars for 42, leaving two to spare. Anyone with a better idea?

I made it say exploit now:

# echo -n 'Apache AddHandler/AddType exploit protection' | wc -c
44

I hope that's correct enough in terms of security language.

> The fix protects against exploits of the related vulnerability. That's the big one. Here's a couple more minor English usage change suggestions as well. (Changes denoted in caps here, obviously lowercase them):
>
> Line 25, add also: may be helpful. Unfortunately, it can ALSO be a security threat.

Fixed.

> Line 74 s/at/in/: You may be using AddHandler or AddType IN other places,

Fixed.

Thanks for the review. Best, Sebastian
Re: [gentoo-dev] Review: Apache AddHandler news item
Published a slightly improved version now: If there's anything wrong with it, please mail me directly (or put me in CC) so there is zero chance of slipping through. Thanks! Best, Sebastian
[gentoo-dev] Current Gentoo Git setup / man-in-the-middle attacks
Hi!

For the current Gentoo Git setup I found these methods working for accessing a repository, betagarden in this case:

git://anongit.gentoo.org/proj/betagarden.git
(git://git.gentoo.org/proj/betagarden.git)
(git://git.overlays.gentoo.org/proj/betagarden.git)
()
git+ssh://g...@git.gentoo.org/proj/betagarden.git
(git+ssh://g...@git.overlays.gentoo.org/proj/betagarden.git)

Those without braces are the ones announced at the repository's page [1]. My concerns about the current set of supported ways of transfer are:

* There does not seem to be support for https://. Please add it.
* Why do we serve Git over git:// and http:// if those are vulnerable to man-in-the-middle attacks (before having waterproof GPG protection for whole repositories in place)?

Especially with ebuilds run by root, we cannot afford MITM. So I would like to propose that

* support for Git access through https:// is activated,
* Git access through http:// and git:// is deactivated, and
* the URLs on gitweb.gentoo.org and the Layman registry are updated accordingly. (Happy to help with the latter.)

Thanks for your consideration. Best, Sebastian

[1]
Re: [gentoo-dev] Current Gentoo Git setup / man-in-the-middle attacks
On 29.03.2015 19:39, Andrew Savchenko wrote: On Sun, 29 Mar 2015 18:41:33 +0200 Sebastian Pipping wrote: So I would like to propose that * support for Git access through https:// is activated, * Git access through http:// and git:// is deactivated, and Some people have https blocked. http:// and git:// must be available read-only. They would not do online banking over http, right? Why would they run code with root privileges from http? Best, Sebastian
Re: [gentoo-dev] Current Gentoo Git setup / man-in-the-middle attacks
On 29.03.2015 19:56, Diamond wrote: Doesn't git:// uses SSH wich is secure? I think that was on github. git:// is the git protocol [1] with absolutely no authentication and no encryption. GitHub does not support git:// but only secure protocols (HTTPS, SSH), see [2]. Best, Sebastian [1] [2]
Re: [gentoo-dev] Review: Apache AddHandler news item
Next round:

* Recipe for handling \.(php|php5|phtml|phps)\. manually added
* AddType (with similar problems) mentioned, too
* Typo momment fixed
(* Internal revision bump to 3, will be committed as revision 1)
(* Date bumped to today)
(* Links renumbered due to new link [2])

Title: Apache AddHandler/AddType vulnerability protection
Author: Sebastian Pipping sp...@gentoo.org
Content-Type: text/plain
Posted: 2015-03-30
Revision: 3
News-Item-Format: 1.0
Display-If-Installed: www-servers/apache

Apache's directives AddHandler [1] (and AddType [3]) document a multi-language website as a context where that behavior may be helpful. Unfortunately, it can be a security threat. Combined with (not just PHP) applications that support file upload, the AddHandler/AddType behavior can lead to execution of uploaded code. Shipping automatic protection for this scenario is not trivial, but you could manually install protection based on this recipe:

<FilesMatch "\.(php|php5|phtml|phps)\.">
        # a) Apache 2.2 / Apache 2.4 + mod_access_compat
        #Order Deny,Allow
        #Deny from all

        # b) Apache 2.4 + mod_authz_core
        #Require all denied

        # c) Apache 2.x + mod_rewrite
        #RewriteEngine on
        #RewriteRule .* - [R=404,L]
</FilesMatch>

* You may be using AddHandler (or AddType) at other places, including off-package files. Please have a look.
* app-admin/eselect-php is not the only package affected. There is a dedicated tracker bug at [4].

As of the moment, affected packages include:

Thanks for their input go to Michael Orlitzky and Marc Schiffbauer.

[1] [2] [3] [4]
Re: [gentoo-dev] Should Gentoo do https by default?
On 27.03.2015 15:33, Hanno Böck wrote: I think defaulting the net to HTTPS is a big step for more security and I think Gentoo should join the trend here. Yes please! Sebastian
Re: [gentoo-dev] Re: Review: Apache AddHandler news item
Hi! I was wondering about the same thing, too. I can commit it as revision 1 for a workaround. If you have some time, please take this question/issue further with the related software and people. Thanks in advance, Sebastian
[gentoo-dev] Review: Apache AddHandler news item
Hi!

In context of [...], mjo and I agreed that a portage news item would be a good idea. Please review my proposal below. Thank you!

Best, Sebastian

===

Title: Apache AddHandler vulnerability protection
Author: Sebastian Pipping sp...@gentoo.org
Content-Type: text/plain
Posted: 2015-03-26
Revision:

[1] [2] [3]
Re: [gentoo-dev] Review: Apache AddHandler news item
On 26.03.2015 18:02, Michael Orlitzky wrote: The most important reason is missing =) If you are relying on the AddHandler behavior to execute secret_database_stuff.php.inc, then once the change is made, Apache will begin serving up your database credentials in plain text. Good point. Changes: * Revision bump * Add section on .php.inc * Add thanks line Title: Apache AddHandler vulnerability protection Author: Sebastian Pipping sp...@gentoo.org Content-Type: text/plain Posted: 2015-03-26 Revision: and Michael Orlitzky. [1] [2] [3]
Re: [gentoo-dev] Review: Apache AddHandler news item
On 26.03.2015 20:50, Marc Schiffbauer wrote: * Sebastian Pipping schrieb am 26.03.15 um 19:15 Uhr: As of the momment, affected packages include: ^ Typo Thanks. Fixed in my local copy. No need to re-paste, I believe. Best, Sebastian
Re: [gentoo-dev] Naming of repositories: gento-x86 edition, bike shedding wanted
On 14.03.2015 23:25, Robin H. Johnson wrote:
> Trying to explain to a new user that the Portage tree refers to the collection of ebuilds used by a PMS-compliant package manager (eg Portage) is problematic.

Full ack. Let's limit portage to the piece of software, please.

> Questions: 0. What names for the tree/repository.

It's not a tree. Ideally, it would be a directed acyclic graph (DAG); there may even be some loops. I would therefore object to any name that has tree in it. Since there are other Gentoo-based distros, I would say the word gentoo should be in there. Plain gentoo may work. I would be happy with any of gentoo-{core,main,master} as well, if plain gentoo causes trouble for a name in some context.

> 1. We have some namespaces in Git: proj, dev, priv, data, sites, exp; should the tree be in one of those namespaces, a new namespace, or be without a namespace? git://anongit.gentoo.org/NEW-NAME.git.

Not in any of those namespaces, please. If in any, make it repos or repositories, please. Thanks for your consideration. Best, Sebastian
Re: [gentoo-dev] Naming of repositories: gento-x86 edition, bike shedding wanted
On 15.03.2015 10:48, Ulrich Mueller wrote: If we want a separate repo/ namespace, we would probably need to consider moving other repositories there -- at least the official ones. Of course, it would be a nice result, having everything hosted on git.g.o as git.g.o/repo/${repo_name}.git. Isn't repo fairly redundant? Everything there is a repository. There are Git repositories that do not contain ebuilds up there. So repo is not redundant if it refers to its overlays kind of meaning. Two examples: Best, Sebastian
Re: [gentoo-dev] [rfc] Rendering the official Gentoo logo / Blender,2.04, Python 2.2
On 07.06.2011 11:15, Mario Bodemann wrote:
> Hi folks, Sebastian told me about the problem of not being able to render the logo in recent blender versions. So this is where I stepped in: I tried it and used the geometries from the old .blender file, and the yellowish reflecting image. Problem was to recreate the exact representation of the original logo, by new means of rendering and relighting. I tried to solve them by creating a new material for the g and carefully adjusting the parameters. Also I added a new modifier for the geometry to get rid of the ugly seam at the sharp edge. (This does not modify the geometry, only the rendering of it.) However, here are my preliminary results:
> - the modified .blend-file [1] (tested with blender 2.57b)
> - new rendered logo image [2]
> - original logo image (for comparison) [3]
> What do you think? Greetings, Mario.
> [1];a=blob_plain;f=g-metal.blend;hb=master
> [2];a=blob_plain;f=g-metal.png;hb=images
> [3];a=blob_plain;f=g-metal-orig.png;hb=images

For the record, I have resurrected that repository at now. For the images [2][3], have a look at the images branch:

Best, Sebastian
Re: [gentoo-dev] collab herd for cooperative pkg maintenance
On 23.03.2015 18:22, Tim Harder wrote: With that in mind, I think it would be an interesting experiment if we had a collaborative herd (probably named collab) that signals the status that anyone is generally free to fix, bump, or do sane things to the pkgs with the caveat that you fix what you break. +1 (to any other non-herd marker in metadata.xml to achieve the same effect, too) Sebastian
Re: [gentoo-dev] Re: [rfc] Rendering the official Gentoo logo / Blender 2.04, Python 2.2
On 23.02.2015 23:34, Daniel Campbell wrote: Can't the logo be remade in a more recent version of Blender? Assuming you can run two separate Blender instances, it would mostly be copying the poly/vertex values from one to the other. I'm not versed in 3-D but it would surprise me if there wasn't a standard mesh format. There was an attempt to port to a recent version of Blender. When comparing renderings, the result is close, but not 1:1. Please check Mario's reply of 2011 in this very thread (). Best, Sebastian
Re: [gentoo-dev] Re: [rfc] Rendering the official Gentoo logo / Blender 2.04, Python 2.2
Hi! Please excuse bringing up a topic as old as this, again. Only bringing up half, actually. On 05.05.2011 07:36, Sebastian Pipping wrote:. While doing digital cleaning over here I ran into my patches to make ancient Blender compile again for Gentoo logo rendering. I have streamlined those into ebuilds and a dedicated overlay and filled the void in the Gentoo wiki with a few words and pointers. The ebuilds are: dev-lang/python/ python-2.2-r8.ebuild media-gfx/blender/ blender-2.26.ebuild blender-2.31a.ebuild So whoever needs to render Blender files from 2003 again at some point should find working ebuilds to do that. Feel free to join keeping them in good installable shape. Thanks and best, Sebastian
Re: [gentoo-dev] CPU_FLAGS_X86 gentoo repository migration complete
On 01.02.2015 23:17, Michał Górny wrote: Hi, developers. Just a quick note: the CPU_FLAGS_X86 conversion of the Gentoo repository is complete now. Cool! Thanks for fixing the freeverb3 ebuild, too. Best, Sebastian
Re: [gentoo-dev] arm64
Thanks! The issue and fix are clear by now (for details:). So I don't need shell access any more, at least not in this context. Best, Sebastian On 25.01.2015 18:49, Tom Gall wrote: Least speaking for myself I can help you out starting Feb 15th, presuming all the stars are in alignment. If someone else doesn’t help you before, please mark it on your calendar and bug me again then cause I’m sure I’ll forget! Best, Tom On Jan 25, 2015, at 11:43 AM, Sebastian Pipping sp...@gentoo.org wrote: Hi! I got a bug report for arm64 against the test suite of uriparser. If I could get a temporary arm64 shell somewhere, that could help me understand the issue. Best, Sebastian
Re: [gentoo-dev] arm64
Hi! I got a bug report for arm64 against the test suite of uriparser. If I could get a temporary arm64 shell somewhere, that could help me understand the issue. Best, Sebastian
[gentoo-dev] Where to install Grub2 background images too?
Hi! Debian is putting Grub2 background (or splash) images into /usr/share/images/grub/ [1] but we do not have an /usr/share/images/ folder. (I'm not referring to full themes, just background images.) If I were to make media-gfx/grub-splashes:2, where would it install to? Thanks, Sebastian [1]
Re: [gentoo-dev] debootstrap equivalent for Gentoo?
Hi! On 28.12.2014 11:26, Johann Schmitz (ercpe) wrote: I wrote gentoo-bootstrap () some time ago to automate the creation of Gentoo Xen DomU's at work. It can do a lot of more things (e.g. installing packages, overlays, etc.). Interesting tool! (shame on me) it doesn't do signature verification yet. I've opened a ticket for it on GitHub now: Best, Sebastian
[gentoo-dev] debootstrap equivalent for Gentoo?
Hi!

I'm wondering if there is an equivalent of debootstrap of Debian anywhere. By equivalent I mean a tool that ..

* I can run like command FOLDER with a chroot-able Gentoo system in FOLDER after and
* for both stage3 and portage tarballs
  * Downloading tarball
  * Downloading signature file
  * Verifying signature
  * Extracting

Has anyone seen something like that? Thanks, Sebastian
Re: [gentoo-dev] Last rites: dev-php/{adodb-ext,eaccelerator,pecl-apc,pecl-id3,pecl-mogilefs,pecl-sca_sdo,suhosin}
Hi Brian,

On 02.10.2014 20:29, Brian Evans wrote:
> # Brian Evans grkni...@gentoo.org ( 1 Oct 2014 )
> # Masked for removal in 30 days.
> # Broken on =dev-lang/php-5.4. No replacements known.
> [..]
> dev-php/suhosin

Is that true for suhosin? Upstream reads "has been tested with PHP 5.4 and 5.5" [1] and there is "dev-php/suhosin: version bump - 0.9.36 is available and has been tested with PHP 5.4 and 5.5" too. So at least to me this looks too early or potentially not even needed. If it is broken for 5.4/5.5 please share details on why it really is and update the bug mentioned above too, please. Thanks! Best, Sebastian

[1]
[gentoo-dev] Packages up for grabs
Hello!

Below are some packages that I fail to take care of as needed and have not been using myself for a while. Please take over whatever you have interest in:

                          Latest          Open bugs
app-text/xmlstarlet       yes             no
media-libs/libmp3splt     yes             yes
media-sound/mp3splt-gtk   yes             yes
media-sound/mp3splt       yes             yes
dev-python/html2text      no/2014.9.8     no
dev-python/inotifyx       no/0.2.2        no
games-action/openlierox   no/0.59_beta10  no
media-gfx/drqueue         yes             yes
media-libs/opencollada    no/?            yes
sys-fs/pytagsfs           yes             yes

Many thanks, Sebastian
Re: [gentoo-dev] New Python eclasses -- a summary and reminder
Looks like great work so far.

On 11.02.2013 01:20, Michał Górny wrote:
> Secondly, I'd like to make it clear that the old python.eclass is 'almost' deprecated. We're in process of converting the in-tree packages to use the new eclasses but that's a lot of work [3]. [..]
> [3]:

I wonder what would be the best way to help with conversion for devs with a few hours to contribute?

- Where does one go for peer review and how many eyes should be on related commits?
- Should package owners be contacted in all cases?
- Are there any conversion sprints already or in the future?

Best, Sebastian
Re: [gentoo-dev] What did we achieve in 2012? What are our resolutions for 2013?
Coming to my mind: There have been continued regular releases of genkernel integrating patches from various people:;a=tags And there has been a constant stream of people asking for user overlay hosting or getting an existing overlay being added to the layman registry that we could serve. Ben, I hope you have the time to make a news post from this thread's collection? Best, Sebastian
Re: [gentoo-dev] Is /var/cache the right place for repositories?
On 20.12.2012 19:14, Ciaran McCreesh wrote: The tree is a database. It belongs in /var/db/. I don't see /var/db in the latest release of the Filesystem Hierarchy Standard: I would prefer something that blends with FHS. Best, Sebastian
Re: [gentoo-dev] Is /var/cache the right place for repositories?
On 20.12.2012 18:27, Ulrich Mueller wrote:
> Now I wonder: After removal of e.g. the Portage tree from a system, it is generally not possible to restore it. (It can be refetched, but not to its previous state.) Same is true for distfiles, at least to some degree. They may have vanished upstream or from mirrors. Maybe /var/lib would be a better choice? It would also take care of the issue with fetch-restricted files.

Thanks for bringing it up. What you address above is the exact reason why Layman's home was moved to /var/lib/layman/ eventually. It has a cache aspect, but it's not a true cache. Best, Sebastian
Re: [gentoo-dev] Lastrites: net-proxy/paros, net-misc/ups-monitor, app-emulation/mol, net-wireless/fsam7400, net-wireless/acx, net-wireless/acx-firmware, net-wireless/linux-wlan-ng-modules, net-wirele
On 11/24/2012 10:12 PM, Pacho Ramos wrote: # Pacho Ramos pa...@gentoo.org (24 Nov 2012) # Upstream dead and no longer runs (#402669). # Removal in a month app-cdr/dvd95 Bug fixed. I just ripped a DVD with dvd95 successfully. + 02 Dec 2012; Sebastian Pipping sp...@gentoo.org package.mask: + Keep dvd95 since bug #402669 is now fixed + # Pacho Ramos pa...@gentoo.org (24 Nov 2012) # Fails to build with gcc-4.7 and maintainer is ok with dropping (#424723). # Removal in a month. app-shells/localshell FYI bug fixed, removal prevented by robbat2. Best, Sebastian
[gentoo-dev] Last rites: app-admin/smolt
# Sebastian Pipping sp...@gentoo.org (27 Nov 2012) # Masked for removal in 30 days. # Server and software development discontinued upstream (bug #438082) app-admin/smolt
[gentoo-dev] Last rites: app-admin/profiler
# Sebastian Pipping sp...@gentoo.org (27 Nov 2012) # Masked for removal in 30 days. # Licensing issues, turned out not distributable (bug #444332) app-admin/profiler
Re: [gentoo-dev] License groups in ebuilds
Re: [gentoo-dev] gtk3 useflag and support of older toolk. Best, Sebastian
[gentoo-dev] Re: [gentoo-dev-announce] Lastrite: 4suite, amara and testoob (mostly for security)
On 05/16/2012 10:40 AM, Samuli Suominen wrote: # Samuli Suominen ssuomi...@gentoo.org (16 May 2012) # Internal copy of vulnerable dev-libs/expat wrt #250930, # CVE-2009-{3720,3560} and CVE-2012-{0876,1147,1148}. # # Fails to compile wrt bug #368089 # Bad migration away from dev-python/pyxml wrt #367745 # # Removal in 30 days. dev-python/4suite Can I veto on 4suite (without fixing things myself yet) ? Thanks, Sebastian
Re: [gentoo-dev] Arbitrary breakage: sys-fs/cryptsetup
On 03/22/2012 03:20 PM, Alexandre Rostovtsev wrote: [1] For one, genkernel should bomb out if it can't comply with a command-line arg instead of just putting non-alert text up. There is already a bug open about this issue: With that bug fixed by now is there still need for a news entry? Best, Sebastian
[gentoo-dev] Re: [gentoo-dev-announce] Last rites: Various horde packages
Hello!

Would it make sense to move these ebuilds to a dedicated overlay? I can think of one ISP that uses both Gentoo and Horde [1] (though I'm not sure which version and if in combination). I imagine that a dedicated overlay could be both a service to people who still rely on Horde and at the same time encourage them to step up to maintenance. With a dedicated overlay we wouldn't need to worry about write access as much as with the main tree.

[1]

On 03/28/2012 06:26 PM, Alex Legler wrote:
> Up for removal in 4 weeks:
> # Alex Legler a...@gentoo.org (28 Nov 2010)
> # Not maintained, multiple security issues.
> # Use the split horde ebuilds instead.

While I don't use horde from a Gentoo perspective, I'm curious: which remaining split ebuilds are you referring to? For Horde 3 webmail: which ebuild would a user need now?

www-apps/horde-webmail
www-apps/horde-groupware

> # Alex Legler a...@gentoo.org (28 Mar 2012)
> # Leftover packages from a packaging attempt of Horde-4
> # These can be readded when someone picks the package up
> dev-php/Horde_ActiveSync
> [..]
> dev-php/Horde_Yaml

For those interested a diff says that these Horde packages remain:

www-apps/horde
www-apps/horde-chora
www-apps/horde-dimp
www-apps/horde-gollem
www-apps/horde-hermes
www-apps/horde-imp
www-apps/horde-ingo
www-apps/horde-jeta
www-apps/horde-kronolith
www-apps/horde-mimp
www-apps/horde-mnemo
www-apps/horde-nag
www-apps/horde-passwd
www-apps/horde-pear
www-apps/horde-turba

Best, Sebastian
Re: [gentoo-dev] [rfc] Which ebuild category should these ebulds go into?
On 02/01/2012 09:42 AM, ScytheMan wrote: Take a look at g15daemon (useful for some logitech keyboards). There you have: app-misc/g15daemon dev-libs/libg15 Great, thanks! Best, Sebastian
[gentoo-dev] [rfc] Which ebuild category should these ebulds go into?
Hello!

Anthoine and I are working on some new ebuilds related to a 3D mouse at the moment. For two of these I wonder what package category makes a good fit. While I would rather save your time on such a simple thing, I would like to avoid moving around things later, too. I have inspected the related metadata.xml files already. Which categories do you advise for?

spacenavd (driver daemon, with optional X support)
 -- sys-apps/spacenavd ?
 -- app-misc/spacenavd ?
 -- .. ?

libspnav (library accessing before-mentioned daemon)
 -- dev-libs/libspnav ?
 -- media-libs/libspnav ?
 -- sys-libs/libspnav ?
 -- .. ?

spnavcfg (X11/GTK GUI tool for configuration)
 -- x11-misc/spnavcfg seems right

Thanks in advance! Best, Sebastian
[gentoo-dev] New maintainer needed: net-misc/aria2
Hello!

While someone else is the official maintainer of net-misc/aria2, I have done the last 5 version bumps or so on net-misc/aria2. I have gotten a little behind with it lately: 1.12.1 is the latest in tree, upstream has 1.13.0, 1.14.0 and the very fresh 1.14.1. One reason for that is that I don't use aria2 myself that much if at all. Another is that the next version bump needs more care than just copy-and-commit: some dependencies have changed. I would like to pass package net-misc/aria2 on to one of you. With the help of the proxy maintainer team this could even be someone who is not yet a Gentoo developer. Besides the test suite, nothing needs patching in the latest ebuild of 1.12.1. There are three open bugs [1]. If you want to take over, please go ahead. Maybe leave a short reply in this thread. Thanks! Best, Sebastian

[1];list_id=712171

Original Message
Subject: aria2 maintenance
Date: Sat, 31 Dec 2011 19:00:56 +0100
From: Sebastian Pipping sp...@gentoo.org
To: ..@gentoo.org

Hello ..,

it looks like you don't really have time or interest to keep aria2 up to speed. More or less the same on my end. Would you mind if I publicly ask for a new maintainer for aria2? If I do not hear from you within a week I take no answer as a yes.

Best, Sebastian
[gentoo-dev] unison needs some love
Hello!

Version in Gentoo: 2.32.52
Version upstream: 2.40.63

The bug is old enough to justify a takeover to me, provided you act with reasonable care. Sebastian
Re: [gentoo-dev] unison needs some love
On 08/03/2011 07:37 PM, Alexis Ballier wrote: I'm more or less alone in the ml herd (maintainer) and I don't use unison :( While you mention the herd: how come this is herd=ml? Best, Sebastian
Re: [gentoo-dev] Last Rites: sys-fs/evms
On 07/03/2011 11:34 AM, Markos Chandras wrote: Hi Sebastian, sys-fs/evms is now gone Thanks for the notification. I have updated genkernel 3.4.17 accordingly. Sebastian
[gentoo-dev] Re: [gentoo-commits] gentoo-x86 commit in net-misc/aria2: aria2-1.12.0.ebuild ChangeLog
On 07/01/2011 10:03 AM, Peter Volkov wrote:
> On Thu, 30/06/2011 at 19:27 +, Sebastian Pipping (sping) wrote:
>> Log: net-misc/aria2: Bump to 1.12.0, looks trivial
>>
>> EAPI=2
>> inherit bash-completion
>> ...
>> pkg_setup() {
>>     if use scripts && use !xmlrpc && use !metalink; then
>>         ewarn "Please also enable the 'xmlrpc' USE flag to actually use the additional scripts"
>>     fi
>> }
>
> This really calls for REQUIRED_USE from EAPI=4.
>
> REQUIRED_USE="scripts? ( ^^ ( xmlrpc metalink ) )"

If we use EAPI 4 in that ebuild we cannot make it stable anytime soon, correct? Sebastian
Re: [gentoo-dev] Re: [gentoo-commits] gentoo commit in xml/htdocs/proj/en/qa: index.xml
On 06/09/2011 03:37 PM, Rich Freeman wrote: do we need some kind of policy around membership on special project teams. QA and Devrel are the most obvious examples, Infra might be another. in my eyes we do. too much power to be unregulated. what does it take to get this rolling? sebastian
Re: [gentoo-dev] Reviving webapp-config
Questions: - What does reviving mean in detail? A re-write? A somewhat compatible re-write? Getting back to maintaining the current code? Why did you choose how you did? - Have you spoken to Andreas Nüsslein who worked on a re-write in context of an earlier GSoC? Best, Sebastian
Re: [gentoo-dev] Reviving webapp-config
On 06/10/2011 05:38 PM, Matthew Summers wrote: Why did you choose how you did? I do not understand this sentence, I intended to write as you did, sorry. If that's still bad English: I wanted to hear about your rationale, which you have explained by now. Thanks. [..] this tool has an important role in Gentoo and therefore needs to be revived. I wished people were thinking like that about genkernel :-) Best, Sebastian
Re: [gentoo-dev] Gentoo package statistics -- GSoC 2011
Re: [gentoo-dev] Last Rites: sys-fs/evms
On 06/03/2011 11:32 PM, Markos Chandras wrote:
> # Markos Chandras hwoar...@gentoo.org (3 Jun 2011)
> # Dead upstream, many open bugs, partially working with
> # kernel-2.6
> # Bugs #159741, #159838, #165120, #165200, #231459,
> # #273902, #278949, #305155, #330523, #369293
> # Masked for removal in 30 days
> # Alternative: sys-fs/lvm2
> sys-fs/evms

EVMS is a soft dependency of genkernel. If sys-fs/evms is removed, EVMS support will have to be removed from genkernel, too. If you go forward please notify the genkernel team once EVMS has been removed so we can update genkernel accordingly. Thanks! Best, Sebastian
[gentoo-dev] Test request: open-iscsi 2.0.872
Hello!

Would be great to have a few people test open-iscsi 2.0.872 before moving it from overlay betagarden to the main tree. To get it installed please run:

# layman -a betagarden
# emerge -av =sys-block/open-iscsi-2.0.872

Important: Please include a description of what you did while testing in your feedback. At best, post your feedback as a reply to bug 340425:

Thanks in advance! Best, Sebastian
[gentoo-dev] Re: Test request: open-iscsi 2.0.872
PS: I noticed the typo in gentoo-users@lists.g.o ^ and sent a new mail to gentoo-user@lists.g.o now. Sebastian
[gentoo-dev] Genkernel needs more hands
Hello,

Genkernel's situation (reduced to the three currently most active players) looks like this to me:

- aidecoe
  - is focussed on the transition to Dracut and related things
  - is fixing bugs in present genkernel from time to time
- xake
  - is fixing bugs in the current genkernel releases
  - likes his patches to be reviewed
  - cannot do releases as he has no developer account, yet
- sping (me)
  - writes and applies patches from time to time. (Commitment varies, at a low right now.)
  - has never used half of the many technologies involved himself (iSCSI, dmraid, netboot, ...)
  - is the bottleneck on some reviews and releases

There are various bugs around, some just need attention, some could use insider knowledge that we lack. Furthermore the kernel configs shipped by Genkernel are mainly from a time before the three of us took over and need fixes, a concept and documentation too. There is no test suite (virtual machines?) that I knew of catching regressions (of which we had few only, in that light). Nevertheless genkernel has fun aspects: it's in much better shape than 3.4.10.907 was, including documentation. It's a core Gentoo tool used by quite a few people so the work you do on Genkernel matters. With that in mind: Are you interested in joining Genkernel? Thanks,

If 2.26 still produced good results, 2.37a already does not. Bisecting involves fixing compilation for each version. I stopped getting 2.30 to compile because it seemed to take forever (longer than fixing 2.26, 2.37a and 2.40 together) and two people had their hands on a port of the logo to Blender 2.57 by then, one of them still has. It's too early to give details. What I can say is that personally I would want a very close match in case of a Blender-based replacement, closer than what I have seen so far. It still seems possible though. Best, Sebastian
Re: [gentoo-dev] Re: [rfc] Rendering the official Gentoo logo / Blender 2.04, Python 2.2
On 05/01/2011 08:06 AM, Michał Górny wrote:
> Isn't it possible to create a better SVG then?

It may be. Of the three variants trying to match the Blender version that I have seen so far, none is a replacement of equal quality on the bling scale to my impression. They feel like tradeoffs, not like the real thing. Maybe they try to come too close to the ray-traced rendering, but I'm not sure if I really want to propose a different direction either.

> I think such a variant would be much more portable and reproducible than blender files.

What I dislike about the idea of moving to a new logo is that we would give up part of our culture just because we were unable to move it from past to present to future. Imagine this dialog:

A: Hey guys, I noticed you have a new logo?
B: Yeah, blender rendering changed - so we dropped it.

I don't really want to be B in that dialog. I see the pragmatic aspect of moving to SVG but it also has the taste of giving up to me. To overcome that taste, a very strong replacement would be needed. If we replace the Blender g we may also need a substitute for the red-white Blender gentoo as seen at *docroot*/images/gentoo-new.gif if just for the sake of consistency. I am wondering what effect the Blender nature of a logo has on the capability and will of people to create fan art based on it compared to an SVG version. It seems like there is only a handful of 3D Gentoo wallpapers but does that mean it would have been more with an SVG version, instead? On what levels could SVG work as a catalyst? If we ported the logo to Blender 2.57 now: what can we do to not be running after Blender rendering changes for all time or to reduce their impact on us? Is this a natural cost or an evil one? Just my 2 cents. Best,

Both of these seem to run fine with Python 2.4.6, which is still in Gentoo. Without good image diffs, I cannot tell for sure if the rendering has changed since Blender 2.04. Best, Sebastian
[gentoo-dev] [rfc] Rendering the official Gentoo logo / Blender 2.04, Python 2.2
Hello!

Gentoo's official logo originates from a Blender file [1] created by Daniel Robbins over 8 years ago. He used Blender 2.04 and Python 1.6 at that time. When rendering that .blend file with Blender 2.49b (or a more recent version), Blender does not apply the reflection texture needed [2] to give the metal look that you know. I don't know why that is. All I know is that Blender does find the file: it's not about the location. Trying Blender 2.04 binaries on a Windows VM, it turned out that Blender 2.04 is still able to render our logo as expected. In my eyes rendering our logo should not depend on a proprietary operating system or binary blobs. The source tarball of Blender 2.04 is hard to find (if available at all), the available sources of 2.03 [7] are incomplete. Binaries of 2.04 [8] are 32bit only and crash on startup on my system. The earliest source tarball after 2.04 that upstream offers for download [3] is Blender 2.26. That version does not compile with GCC 4.4 and turns out to be home with Python 2.2. In hope that this version would be able to render our logo in the way that Blender 2.04 did, I tried fixing compilation against GCC 4.4.5. That worked [4]. The need for Python 2.2 became clear when all of Python 2.4, 2.5 and 2.6 made it segfault in Python-related code instantly. Therefore I tried bringing our old Python 2.2-r7 ebuild to life. Smaller changes like -fPIC were needed but it wasn't too hard. You can find the Python 2.2-r8 in the betagarden overlay [6]. In the end I could do "sudo eselect python set python2.2" to compile and run Blender 2.26 and make it render g-metal.blend (after adjusting the path to the reflection texture) with metal look in a resolution of a few megapixels on transparent background. I have the impression that the rendering is the same as that of Blender 2.04. However, this is not a good long-term solution. For instance Portage doesn't operate under Python 2.2 so an ebuild for Blender 2.26 is a tricky thing to do nowadays. Among the options I see are the following:

A) Find out how to render g-metal.blend with recent Blender (2.57b at best) to give pixel-identical results to Blender 2.04. Needs an advanced Blender user ideally.
B) Port Blender 2.26 to a recent version of Python.

Are there any other options? What do you think? I would also like to encourage you to reproduce the process I described to spot any problems I overlooked. Thanks for reading up to this point. Best, Sebastian

[1] [2] [3] [4];a=summary [5] [6];a=commitdiff;h=a3712c45dee61717cbc09b39ff868af7a3ccaa89 [7] [8]
[gentoo-dev] Why a betagarden overlay
Hello!

First: If betagarden were a normal overlay, I would not be writing about it here. If you're in a hurry just skip the introduction and jump down to section Betagarden overlay.

Introduction
============

The betagarden overlay has been around for a while. I always wanted to write about its purpose and invite you to collaboration but I haven't got to it before. I understand betagarden as a third place supplementing the Gentoo main tree (sometimes known as gentoo-x86 or portage) and the special overlay of Project Sunrise [1]. It fills a gap that these other two repositories leave open. Let's have a look:

Gentoo Main tree
----------------
- Post-publishing review
- Territorial write access: Gentoo Developers (only)
- Full write access: Gentoo QA maybe?
- High quality standards

sunrise overlay
---------------
- Pre-publishing review
- Reduced write access: Anyone passing a simple test [2]
- Full write access: Project Sunrise developers (only)
- High quality standards

From these lists a few things can be observed:

1. Both trees require high quality from ebuilds. This includes
   - Full integration with Gentoo (menu entries, init scripts, etc.)
   - Cleaning the ebuild
   - Support for LDFLAGS
   - ...
2. Gentoo developers who are not fully committed to sunrise do not have full write access to it

-- Wouldn't it be nice to have a place where polishing is optional (as long as the ebuilds are still safe) with more liberal write access?

But there's another group of repositories that I would also like to have a look at:

Gentoo developer overlays
-------------------------
When you go to you see them instantly - most Gentoo devs have one:

dev/aballier.git  Developer overlay  Alexis Ballier
dev/alexxy.git    Developer overlay  Alexey Shvetsov
dev/anarchy.git   Developer overlay  Jory Pratt
dev/angelos.git   Developer overlay  Christoph Mende
[..]

Many of these overlays currently combine two groups of ebuilds:
- Stuff useful to themselves, only
- Stuff useful to a wider audience (that they didn't feel like adding to the Gentoo main tree)

With such a mix it often makes no sense for somebody else to keep that overlay installed over time.

-- Wouldn't it be nice to have the stuff useful to others in a more central place (and reduce your developer overlay to stuff that basically is only interesting to you)?

Hollow and I (sping) have been trying to do that with our overlays: moving stuff useful to others over to betagarden, a shared overlay.

Betagarden overlay
==================

So now that I have shared my view on the Gentoo main tree, the sunrise overlay and developer overlays, let me summarize how betagarden fits in:

- Full write access to all Gentoo Developers. That means more freedom than in the main tree or sunrise.
- Reduced (but essential) quality standards (hence the beta in betagarden)
- Keeping really useful stuff off the developer overlays

How to join
-----------

All devs have write access to betagarden already.

1. Clone git+ssh://g...@git.overlays.gentoo.org/proj/betagarden.git
2. Add yourself to the betagar...@gentoo.org alias:
   # ssh dev.gentoo.org
   # nano -w /var/mail/alias/misc/betagarden
3. Start adding (or moving over existing) ebuilds

If you have trouble pushing commits please contact overl...@gentoo.org. In bugzilla, you can assign bugs to betagar...@gentoo.org by now.

Expected criticism
------------------

I expect some of you to be worried: does that mean people stop adding quality ebuilds to the Gentoo main tree and move on to betagarden? No. If an ebuild is really important it belongs into the main tree. In that case someone will take the time to ensure high quality standards and move it from betagarden to the main tree.
I hope some of you do see something good in this project. Thanks for your interest, Sebastian [1] [2]
Re: [gentoo-dev] News item: Dropping Java support on ia64 (retry)
The sentence

"If there is no interest, the removal of Java support well be done during the second half of March 2011."

seems to have some bugs. I suppose "well be done" was meant to be "will be done"? But maybe "the removal [..] will be done" could use re-writing, too. How about this:

"If there is no interest, Java support will be removed from IA64 during the second half of March 2011."

Best, Sebastian
[gentoo-dev] Downgrading glibc?
Hello!

In relation to bug 354395 [1] I would like to downgrade my glibc back to 2.12.2. Portage doesn't allow me to do that:

 * Sanity check to keep you from breaking your system:
 * Downgrading glibc is not supported and a sure way to destruction
 * ERROR: sys-libs/glibc-2.12.2 failed (setup phase):
 *   aborting to save your system

Can anyone guide me or point me to a guide on how to safely do that manually? Thanks, Sebastian

[1]
Re: [gentoo-dev] Downgrading glibc?
A little update from my side: I was able to downgrade glibc to 2.12.2 and my sound problem [1] is now gone again! If it's not glibc itself, it's one of the packages re-installed after (again, see [1] for the list). If anyone considers masking glibc 2.13 for now: please take my vote. Best, Sebastian

[1]
Re: [gentoo-dev] Downgrading glibc?
On 02/11/11 13:26, Paweł Hajdan, Jr. wrote:
> Just curious, what downgrade method did you use? Just untaring an older glibc package?

This is what I did:

0) Log out of X, log in to root console
1) Collect packages emerged after previous update to glibc from files in PORT_LOGDIR (using simple shell scripting)
2) Emerge glibc 2.12.2
3) Re-emerge packages from (1)
4) Reboot

WARNING: It may not work as well on your system. Best, Sebastian
Re: [gentoo-dev] Re: Downgrading glibc?
On 02/11/2011 01:27 PM, Diego Elio Pettenò wrote:
> It should have been masked _beforehand_, masking it now is going to cause more trouble.

Portage will propose a downgrade of glibc on emerge-update-world, okay. How bad would that be? Does it cause any other trouble?

> Remember: unless you're able to rebuild everything that was built afterwards without _using_ it, your system is going to be totally broken. Sure it sucks, haven't I said that enough times, regarding pushing stuff that's going to break other stuff straight to ~arch?

In your eyes, is there anything we can do to improve the current situation?

Best,
Sebastian
Re: [gentoo-dev] Unused eclasses
-base:inherit eutils php-pear-manylibs-r1
kolab/dev-php/horde-framework-kolab/.svn/text-base/horde-framework-kolab-3.2_rc3-r20080528.ebuild.svn-base:inherit eutils php-pear-manylibs-r1
kolab/dev-php/horde-framework-kolab/horde-framework-kolab-3.2_rc3-r20080529.ebuild:inherit eutils php-pear-manylibs-r1
kolab/dev-php/horde-framework-kolab/horde-framework-kolab-3.2_rc1.ebuild:inherit eutils php-pear-manylibs-r1
kolab/dev-php/Horde_iCalendar/Horde_iCalendar-0.1.0.ebuild:inherit php-pear-r1 eutils
kolab/dev-php/Horde_iCalendar/.svn/text-base/Horde_iCalendar-0.1.0.ebuild.svn-base:inherit php-pear-r1 eutils
kolab/dev-php/Horde_Serialize/Horde_Serialize-0.0.2.ebuild:inherit php-pear-r1 eutils
kolab/dev-php/Horde_Serialize/.svn/text-base/Horde_Serialize-0.0.2.ebuild.svn-base:inherit php-pear-r1 eutils
kolab/dev-php/Horde_DataTree/Horde_DataTree-0.0.3.ebuild:inherit php-pear-r1 eutils
kolab/dev-php/Horde_DataTree/.svn/text-base/Horde_DataTree-0.0.3.ebuild.svn-base:inherit php-pear-r1 eutils
laurentb/dev-php5/phpunit/phpunit-3.5.10.ebuild:inherit php-pear-lib-r1
lordvan/dev-php/PEAR-XML_Feed_Parser/PEAR-XML_Feed_Parser-1.0.3.ebuild:inherit php-pear-r1
ohnobinki/dev-php/PEAR-Services_Yadis/PEAR-Services_Yadis-0.2.3.ebuild:inherit php-pear-r1
ohnobinki/dev-php/PEAR-Services_Facebook/PEAR-Services_Facebook-0.2.8.ebuild:inherit php-pear-r1
php/dev-php/PEAR-PHP_CodeSniffer/PEAR-PHP_CodeSniffer-1.3.0_rc1.ebuild:inherit php-pear-r1
php/dev-php/PEAR-Net_DNS/PEAR-Net_DNS-1.0.1.ebuild:inherit php-pear-r1 depend.php
php/dev-php/PEAR-I18Nv2/PEAR-I18Nv2-0.11.4.ebuild:inherit php-pear-r1
php/dev-php/PEAR-Net_Sieve/PEAR-Net_Sieve-1.2.1.ebuild:inherit php-pear-r1
php/dev-php/PEAR-XML_Util/PEAR-XML_Util-1.1.4.ebuild:inherit php-pear-r1 depend.php
php-4/dev-php4/phpunit/phpunit-1.3.2.ebuild:inherit php-pear-lib-r1
php-4/dev-php4/phpunit/.svn/text-base/phpunit-1.3.2.ebuild.svn-base:inherit php-pear-lib-r1
webapps-experimental/dev-php/PEAR-Net_GeoIP/PEAR-Net_GeoIP-1.0.0_rc1.ebuild:inherit php-pear-r1 depend.php
webapps-experimental/dev-php/PEAR-Net_GeoIP/.svn/text-base/PEAR-Net_GeoIP-1.0.0_rc1.ebuild.svn-base:inherit php-pear-r1 depend.php
webapps-experimental/dev-php/PEAR-Structures_Graph/.svn/text-base/PEAR-Structures_Graph-1.0.2.ebuild.svn-base:inherit php-pear-r1
webapps-experimental/dev-php/PEAR-Structures_Graph/PEAR-Structures_Graph-1.0.2.ebuild:inherit php-pear-r1
zugaina/dev-php/PEAR-Net_Sieve/PEAR-Net_Sieve-1.2.1.ebuild:inherit php-pear-r1

php5_2-sapi.eclass :
dev-zero/dev-lang/php/php-5.2.12.ebuild:inherit versionator php5_2-sapi apache-module

poppler.eclass :
devnull/dev-libs/poppler-glib/poppler-glib-0.10.7.ebuild:inherit autotools poppler flag-o-matic
kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.12.1.ebuild:inherit qt3 poppler
kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.12.0.ebuild:inherit qt3 poppler
kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.12.3.ebuild:inherit qt3 poppler
kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.10.7.ebuild:inherit qt3 poppler
kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.10.6.ebuild:inherit qt3 poppler

tla.eclass :
sunrise/dev-util/tla-tools/tla-tools-20060509.ebuild:inherit tla
sunrise/dev-util/tla-tools/.svn/text-base/tla-tools-20060509.ebuild.svn-base:inherit tla

Ycarus

On 04/02/2011 15:03, Sebastian Pipping wrote:

Re: [gentoo-dev] Upcoming changes to hosting of Git repos on git.gentoo.org (NOT overlays.git.gentoo.org)
On 01/22/11 13:32, Theo Chatzimichos wrote:
> Well, the distinction for unofficial/official overlays happen mostly in layman -L, I don't think users pay attention to our git repo list. Furthermore, I got at least three requests from developers to move their repo from user/ to dev/ (same problem when devs retired). This distinction doesn't make any sense.

Three requests over what time? Compared to a screen height of user repos created, maybe that's not much.

Sebastian
Re: [gentoo-dev] Upcoming changes to hosting of Git repos on git.gentoo.org (NOT overlays.git.gentoo.org)
On 01/22/11 09:55, Robin H. Johnson wrote:
> - On one hand, I would like user repositories to have a separate namespace, so that other users realize a given repo is NOT from a developer.

Seconding that.

> - On the other side, what do we do when a user with a repo becomes a developer (and when they retire?)

To avoid a move, you'd have to give up the distinction. To be able to do path-based distinction, you have to move on status updates. It seems that you cannot have both at the same time.

Sebastian
Re: [gentoo-dev] Upcoming changes to hosting of Git repos on git.gentoo.org (NOT overlays.git.gentoo.org)
On 01/21/11 23:15, Robin H. Johnson wrote:
> On Fri, Jan 21, 2011 at 03:47:03PM -0600, Donnie Berkholz wrote:
>> Sweet, we actually got an invitation to bikeshed! Here's my contributions:
>> gentoo-tree.git
>> gentoo-portage-tree.git
>> portage-tree.git (the name 'portage' derives from bsd ports, so it makes sense to keep that connection to make it recognizable to that audience)
> Please note that I said _location_. I'm not so happy about putting them in the toplevel namespace.

I see. If the long-term goal is to have multiple package trees, then maybe tree/main.git or tree/core.git would make sense and go well with proj/, as that is not plural either: no projs/, no trees/. It could make

  tree/core.git
  tree/science.git
  tree/games.git
  tree/...

some day.

> You need to provide TWO names:
> 1. The current tree that we will start with.
> 2. The read-only graftable tree with full history (going back to the start of Gentoo commits).

Any of these suffixes for the other one would work for me:
* past
* before
* old
* history

"historical" is fine, just a bit long, maybe without need to be.

> As much as I like the original Portage tree, I do agree it's lead to confusing of the source code of the package manager vs. the ebuild tree.

Great to hear that you share this worry.

Best,
Sebastian
[gentoo-dev] genkernel 3.4.11.1 released
Hello!

This release fixes two bugs, both affecting 3.4.11 (not earlier releases).

Bugs fixed
==========
351906  Move application of kernel config after "make mrproper" as that deletes .config (whereas "make clean" does not)
351909  busybox 1.18.1: Return of mdstart as an applet (regression)

Special thanks go to Xake.

Thanks for your interest.

Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 21:08, Jeroen Roovers wrote:
> On Thu, 20 Jan 2011 16:00:06 +0100 Sebastian Pipping sp...@gentoo.org wrote:
>> This release fixes two bugs both affecting 3.4.11 (not earlier releases).
> I'm a Gentoo developer. I've never used genkernel for private purposes. So I don't see why you would send this to gentoo-dev@ and gentoo-dev-announce@.

I don't think a bit of extra visibility can hurt with this. Still, I may take it off the list if another Gentoo developer seconds that request.

Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 21:38, Fabian Groffen wrote:
> Like Jeroen, I don't think new package releases should be announced on these developer-related lists.

It's not about the package, it's about the release itself. I don't send mails on package bumps I do.

Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 21:45, Rich Freeman wrote:
> On Thu, Jan 20, 2011 at 3:38 PM, Fabian Groffen grob...@gentoo.org wrote:
>> Like Jeroen, I don't think new package releases should be announced on these developer-related lists.
> Tend to agree, at least in general. If a genkernel upgrade impacted multiple teams/etc, such as requiring changes to install media, or the handbooks, etc, then I'd consider it completely fair game for the lists. Likewise if some big change that will really impact the distro is being considered I'd consider that fair game as well.

Fair point. I'll keep posting to Planet Gentoo. How about gentoo-dev-announce?

> That said, there are some nice genkernel changes being made and I for one appreciate them (even though I don't yet run it - the mdadm inclusion will probably push me over the edge)!

If you get the chance please try genkernel-99999 (five nines) exposing the experimental branch. That may save both of us the trouble to fix things post release.

Best,
Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 21:55, Fabian Groffen wrote:
> Unless if you are on some git repo, we have commit mails which can serve this purpose very well.

I am on a git repo, and a commit list serves a different purpose: low level tracking of changes.

Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 22:06, Jeroen Roovers wrote:
> Version bumps have no place on the dev-announce list /unless/ they impact developers' work directly.

Fine.

> (and not because you want to celebrate the glory of another version release).

I'm not sure if I'm just interpreting things, but I wish you would speak to me in a way where I would not have to wonder on each mail if you're just trying to piss me off. Thanks.

Sebastian
Re: [gentoo-dev] Tomoyo tools need attention
Bumped to 2.3.0-p20100820. Sebastian
[gentoo-dev] genkernel 3.4.11 released
Hello!

I have just released genkernel 3.4.11 to the testing tree. From a high level point of view this release brings:

- Slightly faster startup
- Updated versions of busybox, LVM, e2fsprogs/blkid
- A few new features, e.g. GnuPG support
- A bunch of bug fixes (see below)

Below you can find details on the changes since 3.4.10.908. Besides the people contributing bug reports special thanks go to:

- Amadeusz Zolnowski (LVM update)
- Christian Giessner (UUID crypt_root)
- dacook (GnuPG 1.x support)
- Denis Kaganovich (Busybox patch porting)
- devsk (Multi-device patch)
- Fabio Erculiani (Slowusb fixes)
- Kai Dietrich (Symlink analysis)
- Kolbjorn Barmen (Arithmetic fix)

Please open bugs for any issues you run into.

New features
============
217959  Add GnuPG 1.x support
315467  Add support for UUID to crypt_root
303529  Add minimal btrfs support
267383  Add virtio support by updating LVM
244651  Run "make firmware_install" if CONFIG_FIRMWARE_IN_KERNEL != y

Component updates
=================
291822  Update e2fsprogs/blkid to 1.41.14
331971  Update busybox to 1.18.1
255196  Update LVM to 2.02.74

Bug fixes
=========
351047  Do not sleep after vgscan
271528  Handle missing kernel .config better
323317  Improve slowusb handling
246370  Check return codes of cpio
307855  Create /bin/vg* symlinks when called as /linuxrc, too
303531  Pick first device when several devices are matching real_root
347213  Fix warning "cannot remove `/var/cache/genkernel/src'"
326593  Allow configuring the list of busybox applets
339789  Fix arithmetic bug in defaults/initrd.scripts

Thanks for your interest.

Sebastian
|
https://www.mail-archive.com/search?l=gentoo-dev%40lists.gentoo.org&q=from:%22Sebastian+Pipping%22&o=newest&f=1
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
The video 'Dealing with the explosion of complexity in web test automation' gives you a good idea of how QF-Test handles a deeply nested DOM structure.
Though they often go unnoticed, at least until the first ComponentNotFoundException occurs, the 'Component' nodes are the heart of a test-suite. Everything else revolves around them. Explaining this requires a little side-tracking:
Live recording of the special webinar 'Component recognition'.
The GUI of an application consists of one or more windows which hold a number of components. The components are nested in a hierarchical structure. Components that hold other components are called containers. As QF-Test itself is a complex application, its main window should serve well as an example:
The window contains a menu bar which holds the menus for QF-Test. Below that is the toolbar with its toolbar buttons. The main area employs a split pane to separate the tree view from the details. The tree view consists of a label ("Test-suite") and the tree itself. The detail view contains a complex hierarchy of various components like text fields, buttons, a table, etc. Actually there are many more components that are not obvious. The tree, for example, is nested in a scroll pane which will show scroll bars if the tree grows beyond the visible area. Also, various kinds of panes mainly serve as containers and background for other components, like the region that contains the "OK" and "Cancel" buttons in the detail view.
SWT In SWT the main GUI components are called Control, Widget or Item. Unless explicitly stated otherwise the term "component", as used in this manual, also applies to these and not only to AWT/Swing/JavaFX Components.
JavaFX The same is valid for JavaFX components called Nodes that build up the component hierarchy known as the scene graph.
Windows In the manual we will use the term component for the GUI elements of native Windows applications, called Controls, too.
Web The internal representation of an HTML page is based on the Document Object Model (DOM) as defined by the W3C, a tree structure consisting of nodes. The root node, a Document, can contain Frame nodes with further Document nodes and/or a root Element with a tree structure of further Element nodes. Though an HTML page with its DOM is quite different from a Swing, JavaFX or SWT interface, the abstractions QF-Test uses work just as well and the general term "component" also applies to DOM nodes.
Actions by the end-user of an application are transformed into events by the Java VM. Every event has a target component. For a mouse click this is the component under the mouse cursor, for a key press it is the component that has the keyboard focus. When an event is recorded by QF-Test, the component information is recorded as well, so that the event can later be replayed for the same component.
This may sound trivial and obvious, but component recognition is actually the most complex part of QF-Test. The reason for this is the necessity to allow for change. QF-Test is a tool designed for regression testing, so when a new version of the SUT is released, tests should continue to run, ideally unchanged. So when the GUI of the SUT changes, QF-Test needs to adapt. If, for example, the "OK" and "Cancel" buttons were moved from the bottom of the detail view to its top, QF-Test would still be able to replay events for these buttons correctly. The extent to which QF-Test is able to adapt varies and depends on the willingness of developers to plan ahead and assist a little bit in making the SUT well-suited to automated testing. But more on that later (section 5.6 and section 5.7).
The recorded components are transformed into 'Window' and 'Component' nodes which form a hierarchy that represents the actual structure of the GUI. These nodes are located under the 'Windows and components' node. The following image shows part of the 'Components' representing QF-Test's main window.
Web Instead of a 'Window', the root Document of a web page is represented as a 'Web page' node. Nested Documents inside Frames are represented as 'Component' nodes.
Every time a sequence is recorded, nodes are generated for components that are not yet represented. When the sequence is discarded later on, the 'Components' remain, hence 'Component' nodes have a tendency to proliferate. The popup menu (right button click) for 'Window' and 'Component' nodes has two items, »Mark unused components...« and »Remove unused components«, which will mark or remove those 'Component' nodes that are no longer being referred to. Be careful though if you are referencing 'Components' across test-suite boundaries or use variable values in 'QF-Test component ID' attributes as these are not taken into account unless the test-suites belong to the same project or the 'Dependencies (reverse includes)' attribute of the 'Test-suite' root node is set correctly .
Note 4.0+ Besides this way of representing components as nodes it is also possible to address components as multi-level sub-items with an XPath-like syntax called QPath as explained in subsection 6.3.2
The attributes of 'Components' and the algorithm for component recognition are explained in detail in section 44.2. Here we will concentrate on the association between 'Component' nodes and the rest of the test-suite.
Windows In order to control native Windows applications QF-Test provides some procedures in the package qfs.autowin of the standard library. You need to determine the criteria for the identification of the Windows components manually, using the procedures provided in the package qfs.autowin.helpers, and then pass them as parameters to the procedures performing the action on the components. For details please refer to chapter 47.
Every node of the test suite has a 'QF-Test ID' attribute which is secondary for most kinds of nodes. For 'Component' nodes however, the 'QF-Test ID' has an important function. It is the unique identifier for the 'Component' node by which events, checks and other nodes that have a target component refer to it. Such nodes have a 'QF-Test component ID' attribute which is set to the 'Component's' 'QF-Test ID'. This level of indirection is important. If the GUI of the SUT changes in a way that QF-Test cannot adapt to automatically, only the 'Component' nodes for the unrecognized components need to be updated to reflect the change and the test will run again.
It is essential to understand that the 'Component's' 'QF-Test ID' is an artificial concept for QF-Test's internal use and should not be confused with the 'Name' attribute, which serves for identifying components in the SUT and is explained in detail in the following section. The actual value of the 'QF-Test ID' is completely irrelevant, except for the requirement to be unique, and it bears no relation whatever to the actual component in the GUI of the SUT. However, the 'QF-Test ID' of the 'Component' is shown in the tree of the test-suite, for 'Component' nodes as well as for events and other nodes that refer to a 'Component'. For this reason, 'Components' should have expressive 'QF-Test IDs' that allude to the actual GUI component.
When creating a 'Component' node, QF-Test has to assign a 'QF-Test ID' automatically. It does its best to create an expressive value from the information available. The option Prepend parent QF-Test ID to component QF-Test ID controls part of this process. If the generated 'QF-Test ID' doesn't suit you, you can change it. QF-Test will warn you if you try to assign a 'QF-Test ID' that is not unique and if you have already recorded events that refer to the 'Component', it will change their 'QF-Test component ID' attribute to reflect the change. Note that this will not cover references with a variable 'QF-Test component ID' attribute.
Note A common mistake is changing the 'QF-Test component ID' attribute of an event instead of the 'QF-Test ID' itself. This will break the association between the event and the 'Component', leading to an UnresolvedComponentIdException. Therefore you should not do this unless you want to change the actual target component of the event.
Experienced testers with a well-structured concept for automated testing will find the component recording feature described in section 4.4 useful. It can be used to record the component hierarchy first in order to get an overview over the structure of the GUI and to assign 'QF-Test IDs' that suit you. Then you can continue to record the sequences and build the test-suite around the components.
The class of a component is a very important attribute as it describes the type of the recorded component. Once QF-Test records a button, it will only look for a button on replay, not for a table or a tree. Thus the component class conveniently serves to partition the components of a GUI. This improves performance and reliability of component recognition, but also helps you associate the component information recorded by QF-Test with the actual component in the GUI.
Besides its role in component identification, the class of a component is also important for registering various kinds of resolvers that can have great influence on the way QF-Test handles components. Resolvers are explained in detail in subsection 49.1.6.
Each toolkit defines its own system-specific classes for components like Buttons or Tables. In case of Buttons, that definition could be javax.swing.JButton for Java Swing, org.eclipse.swt.widgets.Button for Java SWT, javafx.scene.control.ButtonBase for JavaFX, or INPUT:SUBMIT for web applications. In order to allow your tests to run independently of the actually utilized technology QF-Test unifies those classes via so-called generic classes, e.g. all buttons are simply called Button now. This approach provides a certain degree of independence from the dedicated technical classes and will allow you to create tests without taking care of the specific technology. You can find a detailed description of generic classes in chapter 56. In addition to generic classes QF-Test records system-specific classes as 'Extra features' with the state "Ignore". In case of component recognition problems due to too many similar components these can be activated to get stricter component recognition at the expense of flexibility.
Another reason for generic classes is that dedicated technical classes can change during development, e.g. due to the introduction of a new base framework or even another technology. In such cases QF-Test needs to be quite flexible in order to recognize a proper class. Here the concept of generic classes allows you to cope with those changes and, for the most part, to re-use existing tests. You can find more details in subsection 5.4.3.
For Swing, FX and SWT QF-Test works with the actual Java GUI classes, whereas a pseudo class hierarchy is used for web applications.
"NODE" is at the root of the pseudo class hierarchy. It matches any kind of element in the DOM. Derived from "NODE" are "DOCUMENT", "FRAME", "DOM_NODE" and "DIALOG", the types of nodes implementing the pseudo DOM API explained in section 49.11. "DOM_NODE" is further sub-classed according to the tag name of the node, e.g. "H1", "A" or "INPUT", where some tags have an additional subclass like "INPUT:TEXT".
QF-Test can record the class of a component in various ways; therefore it organizes component classes in several categories: the specific class, the technology-specific system class, the generic class and the dedicated type of the generic class. Each category is recorded under 'Extra features'.
The option Record generic class names for components is checked by default. Using this option allows you to record generic classes in order to share and re-use your tests when testing a different technology with just minor changes to the existing tests.
In case you work with one Java engine only and you prefer to work with the "real" Java classes, you can also work without generic class recording. But in this case you should consider checking the option Record system class only. This option makes QF-Test record the technology-specific system class instead of the derived class. If you switch off this option you will get the derived class, which enables very well targeted recognition but can cause maintenance effort in case of changes coming from refactoring.
Web In web applications QF-Test records classes as described in the previous chapter subsection 5.4.2. In case you have to work with a supported AJAX toolkit (see section 46.2), QF-Test records generic classes as well. You shouldn't modify the default options for this technology.
Depending on its class a component has a set of (public) methods and fields which can be used in an 'SUT script' once you have a reference to the object (see subsection 12.2.4). Select the entry »Show the component's methods...« from the context menu of a node under the 'Windows and components' branch to display the methods and fields of the corresponding class or right click on a component in the SUT while you are in component recording mode (see section 4.4).
Web The methods and fields displayed for (HTML) elements in a browser cannot be used directly with an object returned by rc.getComponent(). These are at JavaScript level and require wrapping the method calls in evalJS (cf. section 49.11).
Test automation can be improved tremendously if the developers of the SUT have either planned ahead or are willing to help by defining names for at least some of the components of the SUT. Such names have two effects: They make it easier for QF-Test to locate components even after significant changes were made to the SUT and they are highly visible in the test-suite because they serve as the basis for the 'QF-Test IDs' QF-Test assigns to components. The latter should not be underestimated, especially for components without inherent features like text fields. Nodes that insert text into components called "textName", "textAddress" or "textAccount" are far more readable and maintainable than similar nodes for "text", "text2" or "text3". Indeed, coordinated naming of components is one of the most deciding factors for the efficiency of test automation and the return of investment on QF-Test. If development or management is reluctant to spend the little effort required to set names, please try to have them read this chapter of the manual.
Note Please note that recorded names are stored in the 'Name' attribute of 'Component' nodes. Because they also serve as the basis for the 'QF-Test ID' for the same node, 'Name' and 'QF-Test ID' are often identical. But always keep in mind that the 'QF-Test ID' is used solely within QF-Test and that the 'Name' is playing the critical part in identifying the component in the SUT. If the name of a component changes, it is the 'Name' attribute that must be updated, there is no need to touch the 'QF-Test ID'.
The technique to use for setting names during development depends on the kind of SUT:
Swing All AWT and Swing components are derived from the AWT class Component, so its method setName is the natural standard for Swing SUTs, and some developers make good use of it even without test automation in mind, which is a great help.
JavaFX For JavaFX setId is the counterpart of Swing's setName method to set identifiers for components (called 'Nodes'). Alternatively IDs can be set via the FXML attribute fx:id. While the ID of a 'Node' should be unique within the scene graph, this uniqueness is not enforced. This is analogous to the 'ID' attribute on an HTML element.
SWT Unfortunately SWT has no inherent concept for naming components. An accepted standard convention is to use the method setData(String key, Object value) with the String "name" as the key and the designated name as the value. If present, QF-Test will retrieve that data and use it as the name for the component. Obviously, with no default naming standard, very few SWT applications today have names in place, including Eclipse itself.
Web The natural candidate for naming the DOM nodes of a web application is the 'ID' attribute of a DOM node - not to be confused with the 'QF-Test ID' attribute of QF-Test's 'Component' nodes. Unfortunately the HTML standard does not enforce IDs to be unique. Besides, 'ID' attributes are a double-edged sword because they can play a major role in the internal JavaScript operations of a web application. Thus there is a good chance that 'ID' attributes are defined, but they cannot be defined as freely as the names in a Swing, JavaFX or SWT application. Worse, many DHTML and Ajax frameworks need to generate 'ID' attributes automatically, which can make them unsuited for naming. The option Turn 'ID' attribute into name where "unique enough" determines whether QF-Test uses 'ID' attributes as names.
WebIn case you want to test a web application using a supported AJAX toolkit, please take a look at subsection 46.2.2 for details about assigning IDs.
If developers have implemented some other consistent naming scheme not based on the above methods, those names can still be made accessible to QF-Test by implementing a NameResolver as described in subsection 49.1.6.
The reason for the tremendous impact of names is the fact that they make component recognition reliable over time. Obviously, locating a component that has a unique name assigned is trivial. Without the help of a name, QF-Test uses lots of different kinds of information to locate a component. The algorithm is fault-tolerant and configurable and has been fine-tuned with excellent results. However, every other kind of information besides the name is subject to change as the SUT evolves. At some time, when the changes are significant or small changes have accumulated, component recognition will fail and manual intervention will be required to update the test-suite.
Another aspect of names is that they make testing of multi-lingual applications independent of the current language because the name is internal to the application and does not need to be translated.
There is one critical requirement for names: They must not change over time, not from one version of the SUT to another, not from one invocation of the SUT to the next and not while the SUT executes, for example when a component is destroyed and later created anew. Once a name is set it must be persistent. Unfortunately there is no scheme for setting names automatically that fulfills this requirement. Such schemes typically create names based on the class of a component and an incrementing counter and invariably fail because the result depends on the order of creation of the components. Because names play such a central role in component identification, non-persistent names, specifically automatically generated ones, can cause a lot of trouble. If development cannot be convinced to replace them with a consistent scheme or at least drop them, such names can be suppressed with the help of a NameResolver as described in subsection 49.1.6.
QF-Test does not require ubiquitous use of names. In fact, over-generous use can even be counter-productive because QF-Test also has a concept for components being "interesting" or not. Components that are not considered interesting are abstracted away so they can cause no problem if they change. Typical examples for such components are panels used solely for layout. If a component has a non-trivial name QF-Test will always consider it interesting, so naming trivial components can cause failures if they are removed from the component hierarchy in a later version.
Global uniqueness of names is also not required. Each class of components has its own namespace, so there is no conflict if a button and a text field have the same name. Besides, only the names of components contained within the same window should be unique because this gives the highest tolerance to change. If your component names are unique on a per-window basis, set the options Name override mode (replay) and Name override mode (record) to "Override everything". If names are not unique per window but identically named components are at least located inside differently named ancestors, "Hierarchical resolution" is the next best choice for those options.
Two questions remain: Which components should have names assigned and which names to use?
As a rule of thumb, all components that a user directly interacts with should have a name, for example buttons, menus, text fields, etc. Components that are not created directly, but are automatically generated as children of complex components don't need a name, for example the scroll bars of a JScrollPane, or the list of a JComboBox. The component itself should have a name, however.
If components were not named in the first place and development is only willing to spend as little effort as possible to assign names to help with test automation, a good strategy is to assign names to windows, complex components like trees and tables, and to panels that comprise a number of components representing a kind of form. As long as the structure and geometry of the components within such forms is relatively consistent, this will result in a good compromise for component recognition and useful 'QF-Test ID' attributes. Individual components causing trouble due to changing attributes can either be named by development when identified or taken care of with a NameResolver.
Since QF-Test "knows" the components for which
setName is most useful, it comes
with a feature to locate and report these components. QF-Test even suggests names to assign,
though these aren't necessarily useful. This feature is similar to component recording and
is explained in the documentation for the option Hotkey for components.
Web The suggested names for DOM nodes are currently not very useful.
Unavoidably the components of the SUT are going to change over time. If names are used consistently this is not really a problem, since in that case QF-Test can cope with just about any kind of change.
Without names however, changes tend to accumulate and may reach a point where component recognition fails. To avoid that kind of problem, QF-Test's representation of the SUT's components should be updated every now and then to reflect the current state of affairs. This can be done with the help of the »Update component(s)« menu-item in the context menu that you get by right-clicking on any node under the 'Windows and components' node.
Note This function can change a lot of information in your test-suite at once and it may be difficult to tell whether everything went fine or whether some components have been misidentified. To avoid problems, always create a backup file before updating multiple components. Don't update too many components at once, take things 'Window' by 'Window'. Make sure that the components you are trying to update are visible except for the menu-items. After each step, make sure that your tests still run fine.
Provided that you are connected to the SUT, this function will bring up the following dialog:
If you are connected to multiple SUT clients, you must choose one to update the components for.
Select whether you only want to update the selected 'Component' node or all its child nodes as well.
You can choose to include components that are not currently visible in the SUT. This is mostly useful for menu-items.
The 'QF-Test ID' for an updated node is left unchanged if "Use QF-Test component ID of original node" is selected. Otherwise, updated nodes will receive a 'QF-Test ID' generated by QF-Test. If the 'QF-Test ID' of a node is changed, all nodes referring to that node via their 'QF-Test component ID' attribute will be updated accordingly. QF-Test also checks for references to the component in all suites of the same project and in those suites that are listed in the 'Dependencies (reverse includes)' attribute of the 'Test-suite' node. Those suites are loaded automatically and indirect dependencies are resolved as well.
Note In this case, QF-Test will open modified test-suites automatically, so you can save the changes or undo them.
After pressing "OK", QF-Test will try to locate the selected components in the SUT and fetch current information for them. Components that are not found are skipped. The 'Component' nodes are then updated according to the current structure of the SUT's GUI, which may include moving nodes to different parents.
Note For large component hierarchies this very complex operation can take a while, in extreme cases even a few minutes.
This function is especially useful when names have been set for the first time in the SUT. If you have already generated substantial test-suites before convincing the developers to add names, you can use this function to update your 'Components' to include the new names and update their 'QF-Test IDs' accordingly. This will work best if you can get hold of an SUT version that is identical to the previous one except for the added names.
Note Very important note: When updating whole windows or component hierarchies of significant size you may try to update components that are not currently visible or available. In that case it is very important to avoid false-positive matches for those components. You may want to temporarily adjust the bonus and penalty options for component recognition described in subsection 37.3.4 to prevent this. Specifically, set the 'Feature penalty' to a value below the 'Minimum probability', i.e. to 49 if you have not changed the default settings. Don't forget to restore the original value afterwards.
If you need to change the setting of the options Name override mode (replay) and Name override mode (record) because, for example, component names turned out not to be unique after all, change only the setting for the recording options before updating the components. When finished, change the replay option accordingly.
If your SUT has changed in a way that makes it impossible for QF-Test to locate a component, your test will fail with a ComponentNotFoundException. This should not be confused with an UnresolvedComponentIdException, which is caused by removing a 'Component' node from the test-suite or changing the 'QF-Test component ID' attribute of an 'Event' node to a non-existing 'QF-Test ID'.
There are two videos available explaining in detail how to deal with a ComponentNotFoundException: a simple case is shown in the video 'ComponentNotFoundException - simple case', a more complex case is discussed in the video 'ComponentNotFoundException - complex case'.
Windows In case you have problems with the component recognition with native Windows applications in procedures of the package qfs.autowin of the standard library please continue in chapter 47.
When you get a ComponentNotFoundException, rerun the test with QF-Test's debugger activated so that the test gets suspended and you can look at the node that caused the problem. Here it pays off if your 'QF-Test ID' attributes are expressive, because you need to understand which component the test tried to access. If you cannot figure out what this node is supposed to do, try to deactivate it and rerun the test to see if it runs through now. It could be a stray event that was not filtered during recording. In general your tests should only contain the minimum of nodes required to achieve the desired effect.
If the node needs to be retained, take a look at the SUT to see if the target component is currently visible. If not, you need to modify your test to take that situation into account. If the component is visible, ensure that it was already showing at the time of replay by checking the screenshot in the run-log and try to re-execute the failed node by single-stepping. If execution now works you have a timing problem that you need to handle by either modifying the options for default delays (subsection 37.3.5) or with the help of a 'Wait for component to appear' node or a 'Check' node with a 'Timeout'. As a last resort you can work with a fixed delay.
If the component is visible and replay fails consistently, the cause is indeed a change in the component or one of its parent components. The next step is identifying what changed and where. To do so, re-record a click on the component, then look at the old and new 'Component' node in the hierarchy under 'Windows and components'.
Note You can jump directly from the 'Event' node to the corresponding 'Component' node by pressing [Ctrl-W] or right-clicking and selecting »Locate component«. You can jump back via [Ctrl-Backspace] or »Edit«-»Select previous node«. A clever trick is to mark the 'Component' nodes to compare by setting breakpoints on them to make them easier to spot.
The crucial point is where the hierarchy for those two components branches. If they are located in different 'Window' nodes, the difference is in the 'Window' itself. Otherwise the old and new 'Component' have a common ancestor just above the branching point and the crucial difference is in the respective nodes directly below that branch. When you have located those nodes, examine their attributes top-to-bottom and look for differences.
Note You can open a second QF-Test window via »View«-»New window...« so as to place the detail views of the nodes to compare side to side.
The only differences that will always cause recognition failures are 'Class name' and 'Name'. Differences in 'Feature', structure or geometry attributes can usually be compensated unless they accumulate.
A change in the 'Class name' attribute can be caused by refactoring done by development, in which case you need to update your 'Class name' attribute(s) to reflect the change(s). Another possible cause is obfuscation, a technique for making the names of the application classes illegible for protection against prying eyes. This poses a problem because the class names can then change with each version. You can prevent both refactoring and obfuscation problems by activating the option Record system class only.
If the 'Name' has changed things get more difficult. If the change is apparently intentional, e.g. a typo was fixed, you can update the 'Name' attribute accordingly. More likely the cause is some automatically generated name that may change again anytime. As explained in the previous section, your options in this case are discussing things with development or suppressing such names with the help of a NameResolver as described in subsection 49.1.6.
Changes to the 'Feature' attribute are common for 'Window' nodes, where the 'Feature' represents the window title. When combined with a significant change in geometry such a change can cause recognition to break. This can be fixed by updating the 'Feature' to match the new title or, preferably, by turning it into a regular expression that matches all variants.
Depending on the kind and amount of changes to accommodate there are two ways to deal with the situation:
Note Automatic updates for references from other test-suites require that the suites belong to the same project or the correct setting the 'Dependencies (reverse includes)' attribute of the 'Test-suite' root node.
Hidden fields are not captured by default and therefore not stored under the 'Windows and components' node.
In case you frequently need to access hidden fields you can deactivate the Take visibility of DOM nodes into account option.
Another way to get hidden fields recorded is the following:
To access a hidden field's attributes (e.g. the 'value' attribute) you can create a simple 'SUT script' that fetches the node via rc.getComponent() and reads the attribute through the pseudo DOM API. Details on scripting in general, the used methods and parameters can be found in Scripting, Run-context API and Pseudo DOM API respectively.
|
https://www.qfs.de/en/qf-test-manual/lc/manual-en-user_components.html
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Hi
I write code for microcontrollers. I want to write my own scheduler for a microcontroller, but I am not getting any ideas even after searching a lot.
Scheduling is the process by which an operating...
Thank you laserlight and salem
Okay, so I made a list just to store student fees.

#include <stdio.h>
#include <stdlib.h>

struct node {
    int Fees;
    struct node *next;
};
I want to write a C program that stores the records of students studying in a school. I would like to use less memory to store a record.
Format for a record:
Name :
Class :
Fees :
Year :
...
I know that malloc allocates memory at run time, but again there is the sentence "allocate memory at run time". That's not clear to me.
When we write a program, we build and compile it, then the program runs on the PC, which is called the run time.
We say "allocate memory at RUN time". This confused me a lot. What's the meaning of...
The const qualifier indicates that the value of a variable will not change, while the volatile qualifier indicates that the value of a variable may change.
Can we combine the const qualifier with volatile?
Compile time means that we have written the program and we compile it to check for errors. Run time means that the program has been compiled and now it runs on the device.
Whenever...
I am looking for an example that proves dynamic memory is useful compared to static memory. Dynamic memory is allocated at run time whereas static memory is allocated at compile time.
I found example...
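One classic illustration, sketched here in plain C with made-up variable names, is a buffer whose size is only known once the program runs (the user types it in), so no fixed-size array declared at compile time can fit it exactly:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n;
    printf("How many marks? ");
    if (scanf("%d", &n) != 1 || n <= 0)       /* the size is unknown until RUN time */
        return 1;

    int *marks = malloc(n * sizeof *marks);   /* memory allocated now, at run time */
    if (marks == NULL)
        return 1;

    for (int i = 0; i < n; i++)
        marks[i] = 0;

    free(marks);                              /* hand the memory back when done */
    return 0;
}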
#include <stdio.h>
struct point
{
int x;
char y;
float z;
}var;
That's what I am trying to do, right?
So can you give an idea? Pseudocode would be better.
So can you give me an idea how it can be achieved without sorting?
In a simple way: if I ask someone who doesn't have programming knowledge how his brain works to find a repeated number, what does he...
My second option is sorting, because I don't know much about it.
I have practiced the basics. It would be too early to jump to a sorting method; of course I will learn it, but not now. So that's why I want to...
Just for basic understanding, do you know how that algorithm can be implemented in a program?
Note: not asking for a complete program, just asking for the process or pseudocode.
I have no idea how to write a program to find a repeated number in the given sequence.
Numbers[ ] = [ 6, 4, 2, 1, 3, 1 ]
Program output : Number 1 repeats 2 times
My attempt to make an algorithm ...
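One straightforward process that needs no sorting, sketched in plain C: walk the array; skip any value that already appeared earlier; otherwise count how often it occurs from here on and report it if the count is above one:

#include <stdio.h>

int main(void)
{
    int numbers[] = { 6, 4, 2, 1, 3, 1 };
    int n = sizeof numbers / sizeof numbers[0];

    for (int i = 0; i < n; i++)
    {
        int seen_before = 0;                 /* skip values already counted */
        for (int j = 0; j < i; j++)
            if (numbers[j] == numbers[i])
                seen_before = 1;
        if (seen_before)
            continue;

        int count = 1;                       /* count the remaining occurrences */
        for (int j = i + 1; j < n; j++)
            if (numbers[j] == numbers[i])
                count++;

        if (count > 1)
            printf("Number %d repeats %d times\n", numbers[i], count);
    }
    return 0;
}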
Embedded is the combination of hardware and software.
How are you practicing C/C++?
Do you have any development board?
Which micro were you asking about? There are many: 8051, PIC, ARM?
There are so...
As much as I have seen, we keep the micro in the header file. So I created two files, main.c and other.h.
What could be the best example for the #ifdef, #if, defined, #else and #elif directives?
I do not understand what should be in the main function; that's why I left main empty. I didn't find complete code; that's why I tried with my own code.
I have seen in many links there are more theoretical descriptions of the preprocessor.
C Preprocessor and Macros
I want to experiment by writing some code
#include <stdio.h>
#include "other.h"
What you said in post 2 is given in the page's "integer types" table.
Suppose we want to store numbers from -32768 to 32767.
#include <stdio.h>

int main(void)
{
short a = 32767;
printf("max...
That information is given on this page: C syntax - Wikipedia
Size qualifiers: short, long
Sign qualifiers: signed, unsigned
Data types: int, char, float, double
<Sign qualifiers> ...
I am having trouble understanding these combinations for storing a variable. I do not understand which one should be used for which specific reason.
Let's suppose I want to store an integer number; then I...
Okay so I will leave it here
I was looking for a sample to create a basic scheduler in C programming.
I am trying to run the code given in the example GitHub - EmbeddedApprentice/TaskTurner: A first, very short beginning for the TaskTurner.
When I unzip the folder I get only three files: taskrunner.h...
laserlight
I made the changes:
#include<stdio.h>
//#include "../include/errors.h"
#include<windows.h>
#include "../include/tasks.h"
#include "../include/taskrunner.h"
I am trying to run sample code on my PC.
I have the following three files: taskrunner.h, task.h, runtask.c
#ifndef TASKRUNNER_H
#define TASKRUNNER_H
int taskrunner(Task * TaskList, unsigned int...
|
https://cboard.cprogramming.com/search.php?s=4034049aa1b526c9381d3a7af26c751e&searchid=7058638
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Boost Implementation Variations
Separation of interface and implementation
The interface specifications for boost.org library components (as well as for quality software in general) are conceptually separate from implementations of those interfaces. This may not be obvious, particularly when a component is implemented entirely within a header, but this separation of interface and implementation is always assumed. From the perspective of those concerned with software design, portability, and standardization, the interface is what is important, while the implementation is just a detail.
Dietmar Kühl, one of the original boost.org contributors, comments "The main contribution is the interface, which is augmented with an implementation, proving that it is possible to implement the corresponding class and providing a free implementation."
Implementation variations
There may be a need for multiple implementations of an interface, to accommodate either platform dependencies or performance tradeoffs. Examples of platform dependencies include compiler shortcomings, file systems, thread mechanisms, and graphical user interfaces. The classic example of a performance tradeoff is a fast implementation which uses a lot of memory versus a slower implementation which uses less memory.
Boost libraries generally use a configuration header, boost/config.hpp, to capture compiler and platform dependencies. Although the use of boost/config.hpp is not required, it is the preferred approach for simple configuration problems.
Boost policy
The Boost policy is to avoid platform dependent variations in interface specifications, but supply implementations which are usable over a wide range of platforms and applications. That means boost libraries will use the techniques below described as appropriate for dealing with platform dependencies.
The Boost policy toward implementation variations designed to enhance performance is to avoid them unless the benefits greatly exceed the full costs. The term "full costs" is intended to include both tangible costs like extra maintenance, and intangible costs like increased difficulty in user understanding.
Techniques for providing implementation variations
Several techniques may be used to provide implementation variations. Each is appropriate in some situations, and not appropriate in other situations.
Single general purpose implementation
The first technique is to simply not provide implementation variation at all. Instead, provide a single general purpose implementation, and forgo the increased complexity implied by all other techniques.
Appropriate: When it is possible to write a single portable implementation which has reasonable performance across a wide range of platforms. Particularly appropriate when alternative implementations differ only in esoteric ways.
Not appropriate: When implementation requires platform specific features, or when there are multiple implementations possible with widely differing performance characteristics.
Beman Dawes comments "In design discussions some implementation is often alleged to be much faster than another, yet a timing test discovers no significant difference. The lesson is that while algorithmic differences may affect speed dramatically, coding differences such as changing a class from virtual to non-virtual members or removing a level of indirection are unlikely to make any measurable difference unless deep in an inner loop. And even in an inner loop, modern CPUs often execute such competing code sequences in the same number of clock cycles! A single general purpose implementation is often just fine."
Or as Donald Knuth said, "Premature optimization is the root of all evil." (Computing Surveys, vol 6, #4, p 268).
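To make the point about timing tests concrete, here is a minimal measurement harness; it is only a sketch, the two functions are invented stand-ins for whatever competing implementations are under discussion, and it uses the modern std::chrono facility rather than anything the original Boost code relied on:

#include <chrono>
#include <iostream>

// Hypothetical competing implementations; substitute the real candidates.
long sum_forward(long n)  { long s = 0; for (long i = 0; i < n; ++i) s += i; return s; }
long sum_backward(long n) { long s = 0; for (long i = n - 1; i >= 0; --i) s += i; return s; }

template<class F> double seconds(F f, long n)
{
    auto start = std::chrono::steady_clock::now();
    volatile long sink = f(n);   // keep the call from being optimized away
    (void)sink;
    std::chrono::duration<double> d = std::chrono::steady_clock::now() - start;
    return d.count();
}

int main()
{
    const long n = 100000000L;
    std::cout << "forward:  " << seconds(sum_forward, n)  << " s\n";
    std::cout << "backward: " << seconds(sum_backward, n) << " s\n";
}

If the two numbers come out essentially equal, as they often do for such micro-variations, the single general purpose implementation wins by default.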
Macros
While the evils of macros are well known, there remain a few cases where macros are the preferred solution:
- Preventing multiple inclusion of headers via #include guards.
- Passing minor configuration information from a configuration header to other files.
Appropriate: For small compile-time variations which would otherwise be costly or confusing to install, use, or maintain. More appropriate to communicate within and between library components than to communicate with library users.
Not appropriate: If other techniques will do.
To minimize the negative aspects of macros:
- Only use macros when they are clearly superior to other techniques. They should be viewed as a last resort.
- Names should be all uppercase, and begin with the namespace name. This will minimize the chance of name collisions. For example, the #include guard for a boost header called foobar.h might be named BOOST_FOOBAR_H.
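In practice, a header following both conventions might look like the sketch below; the file name, the guard macro and the use of BOOST_HAS_THREADS are illustrative choices, not taken from any actual Boost header:

// boost/foobar.h -- illustrative only
#ifndef BOOST_FOOBAR_H          // #include guard, named after namespace + header
#define BOOST_FOOBAR_H

#include <boost/config.hpp>     // minor configuration information passed via macros

namespace boost {

#ifdef BOOST_HAS_THREADS        // example of a configuration macro
  // thread-aware declarations go here
#else
  // single-threaded declarations go here
#endif

} // namespace boost

#endif // BOOST_FOOBAR_H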
Separate files
A library component can have multiple variations, each contained in its own separate file or files. The files for the most appropriate variation are copied to the appropriate include or implementation directories at installation time.
The way to provide this approach in boost libraries is to include specialized implementations as separate files in separate sub-directories in the .ZIP distribution file. For example, the structure within the .ZIP distribution file for a library named foobar which has both default and specialized variations might look something like:
foobar.h                   // The default header file
foobar.cpp                 // The default implementation file
readme.txt                 // Readme explains when to use which files
self_contained/foobar.h    // A variation with everything in the header
linux/foobar.cpp           // Implementation file to replace the default
win32/foobar.h             // Header file to replace the default
win32/foobar.cpp           // Implementation file to replace the default
Appropriate: When different platforms require different implementations, or when there are major performance differences between possible implementations.
Not appropriate: When it makes sense to use more than one of the variations in the same installation.
Separate components
Rather than have several implementation variations of a single component, supply several separate components. For example, the Boost library currently supplies scoped_ptr and shared_ptr classes rather than a single smart_ptr class parameterized to distinguish between the two cases. There are several ways to make the component choice:
- Hardwired by the programmer during coding.
- Chosen by programmer written runtime logic (trading off some extra space, time, and program complexity for the ability to select the implementation at run-time.)
Appropriate: When the interfaces for the variations diverge, and when it is reasonable to use more than one of the variations. When run-time selection of implementation is called for.
Not appropriate: When the variations are data type, traits, or specialization variations which can be better handled by making the component a template. Also not appropriate when choice of variation is best done by some setup or installation mechanism outside of the program itself. Thus usually not appropriate to cope with platform differences.
Note: There is a related technique where the interface is specified as an abstract (pure virtual) base class (or an interface definition language), and the implementation choice is passed off to some third-party, such as a dynamic-link library or object-request broker. While that is a powerful technique, it is way beyond the scope of this discussion.
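A rough sketch of the run-time selection case, with invented names standing in for real components, keeps the two implementations behind one abstract interface and lets ordinary program logic pick between them:

#include <iostream>
#include <memory>

struct Compressor                    // common interface for the variations
{
    virtual ~Compressor() {}
    virtual void compress() = 0;
};

struct FastCompressor  : Compressor { void compress() { std::cout << "fast, memory-hungry\n"; } };
struct SmallCompressor : Compressor { void compress() { std::cout << "slower, frugal\n"; } };

// Programmer-written runtime logic selects the implementation.
std::unique_ptr<Compressor> make_compressor(bool low_memory)
{
    if (low_memory)
        return std::unique_ptr<Compressor>(new SmallCompressor);
    return std::unique_ptr<Compressor>(new FastCompressor);
}

int main()
{
    make_compressor(true)->compress();   // prints "slower, frugal"
}

The extra space and time cost mentioned above is the virtual dispatch and the heap allocation, traded for the freedom to decide at run time.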
Template-based approaches
Turning a class or function into a template is often an elegant way to cope with variations. Template-based approaches provide optimal space and time efficiency in return for constraining the implementation selection to compile time.
Important template techniques include:
- Data type parameterization. This allows a single component to operate on a variety of data types, and is why templates were originally invented.
- Traits parameterization. If parameterization is complex, bundling up aspects into a single traits helper class can allow great variation while hiding messy details. The C++ Standard Library provides several examples of this idiom, such as iterator_traits<> (24.3.1 lib.iterator.traits) and char_traits<> (21.2 lib.char.traits).
- Specialization. A template parameter can be used purely for the purpose of selecting a specialization. For example:
SomeClass<fast> my_fast_object;    // fast and small are empty classes
SomeClass<small> my_small_object;  // used just to select specialization
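One possible set of definitions behind that usage, sketched here as an illustration rather than quoted from any Boost library, is a pair of empty tag classes plus one explicit specialization per variation:

struct fast {};    // empty classes used purely as selectors
struct small {};

template<class Selector> class SomeClass;   // primary template, intentionally undefined

template<> class SomeClass<fast>
{
    // implementation trading memory for speed
};

template<> class SomeClass<small>
{
    // implementation trading speed for a small footprint
};

Leaving the primary template undefined means that any selector other than fast or small fails at compile time, which is usually the desired behavior.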
Appropriate: When the need for variation is due to data type or traits, or is performance related like selecting among several algorithms, and when a program might reasonably use more than one of the variations.
Not appropriate: When the interfaces for variations are different, or when choice of variation is best done by some mechanism outside of the program itself. Thus usually not appropriate to cope with platform differences.
|
https://www.boost.org/community/implementation_variations.html
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
In Asynchronous wait on timer, part two, a job was executed concurrently to the ASIO handler in another thread, using a mutex, a lock, and an atomic int to let it work as expected.
With ASIO we can follow a different approach, based on its strand concept, avoiding explicit synchronization.
The point is that we won't run the competing functions directly; instead we will post the calls to a strand object, which ensures they are executed sequentially. Just be sure you use the same strand object.
We have a class, Printer, with two private methods, print1() and print2(), that uses the same member variable, count_, and printing something both to cout.
We post the two functions a first time in the class constructor, asking our strand object to run them.
namespace ba = boost::asio;
// ...
class Printer {
    // ...
    ba::io_context::strand strand_;
    int count_;

    Printer(ba::io_context& io, int count) : strand_(io), count_(count)
    {
        strand_.post(std::bind(&Printer::print1, this));
        strand_.post(std::bind(&Printer::print2, this));
    }

The functions would post themselves again on the same strand, until some condition is satisfied.
void print1()
{
    if (count_ > 0)
    {
        print("one");
        --count_;
        strand_.post(std::bind(&Printer::print1, this));
    }
}

And this is more or less the full story for the Printer class. No need of synchronization, we rely on the strand to have them executed sequentially.
We still have to let ASIO run on two threads, and this is done by calling the run() method of io_context from two different threads. This is kind of interesting on its own, because we bump into a subtle problem due to how std::bind() is implemented.
The official Boost ASIO tutorial suggests using the Boost bind implementation:
std::thread thread(boost::bind(&ba::io_context::run, &io));

It works fine, end of story, one would say. But let's see what happens when using the standard bind implementation:
std::thread thread(std::bind(&ba::io_context::run, &io));
// error C2672: 'std::bind': no matching overloaded function found
// error C2783: 'std::_Binder<std::_Unforced,_Fx,_Types...> std::bind(_Fx &&,_Types &&...)': could not deduce template argument for '_Fx'

Damn it. It tries to be smarter than Boost, and in this peculiar case it doesn't work. The problem is that there are two run() functions in io_context, and bind() doesn't know which one to pick.
A simple solution would be to compile our code against a "clean" ASIO version, getting rid of the deprecated parts, as is the case for the extra run() overload.
If we can't do that, we should give bind some extra help, so that it can determine the function type correctly. An explicit cast would do:
auto run = static_cast<ba::io_context::count_type (ba::io_service::*)()>(&ba::io_context::run);
std::thread thread(std::bind(run, &io));

I have taken the address of the member function run from boost::asio::io_context (also known as io_service, but that name is deprecated too) and explicitly cast it to its actual type.
Can we get the same result in a more readable way? Well, using a lambda could be an idea.
std::thread thread([&io] { io.run(); });

You can get my full C++ code from GitHub. I based it on the Timer.5 example from the official Boost ASIO tutorial.
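Putting it together, here is a minimal sketch of the wiring (it assumes the Printer class shown above, with a print2() symmetric to print1() and a suitable print() helper):

#include <boost/asio.hpp>
#include <thread>

namespace ba = boost::asio;

int main()
{
    ba::io_context io;
    Printer printer(io, 10); // the constructor posts print1 and print2 on the strand

    // Two threads service the same io_context, but the strand still
    // guarantees the two handlers never run concurrently.
    std::thread thread([&io] { io.run(); });
    io.run();
    thread.join();
}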
|
http://thisthread.blogspot.com/2018/03/boost-asio-strand-example.html
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
My problem is that I want to get only the 1st part of the string. Based on my code I can get the "audio/x-aiff" part by finding the space, but what I want to do is get or print the ".aif" part. Is there any way to get that 1st substring? Thanks in advance.
#include <stdio.h> #include <string.h> int main(void) { char *cptr; char str[] = ".aif: audio/x-aiff"; char substr[] = " "; cptr = strstr(str, substr); printf("%s\n", cptr); return 0; }
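One way to get that first part is to compute its length (the distance from the start of the string to the space) and copy just that prefix into its own buffer. A sketch along those lines:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char str[] = ".aif: audio/x-aiff";
    char first[32];
    char *cptr = strchr(str, ' ');          /* locate the separating space */

    if (cptr != NULL)
    {
        size_t len = (size_t)(cptr - str);  /* characters before the space */
        if (len >= sizeof first)
            len = sizeof first - 1;         /* avoid overflowing the buffer */
        memcpy(first, str, len);
        first[len] = '\0';
        printf("%s\n", first);              /* prints ".aif:" */
    }
    return 0;
}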
|
https://www.daniweb.com/programming/software-development/threads/371258/get-sub-string-in-a-string
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Gradual Packaging - allow packaging simple python code with very little effort. Later on (if needed) you can create a more polished package. Follow conventions, support the simple cases. If you need to go out of the simple cases, allow a path towards better packaging.
A nice thing so far, is that I was easily able to rename the project three times. Just by renaming files and folders. It was first called 'blabla.py' then 'package.py' then 'release.py', and finally 'pyrelease.py'. Normally you'd need to modify 10 config files when things happen.
Anyway... I'll try some more days on it and see how it turns out.
Evolution of your python code.
- in your ipython session messing around.
- paste it into badnamechoice.py -> (everything starts with a bad name in a single .py file)
- test_badnamechoice.py -> (then you add a test. Or did you do this first?)
- renamed.py -> (now you rename the file to something better)
- afolderoffiles/ -> (now there is a folder of files)
- add docs/ and .travisci CI files
- configure coverage testing
- configure an annoying mypy static type checking option
- add flakes testing config tweak
- support some weird debian derived distro
- appveyor CI added next to travisci
- pytest config (change because... ARG reasons).
- add a requirements.txt, and modify your setup file to have them too.
- remove testing stuff out of requirements.txt into requirements.dev.txt
- ... (config tweaks happen over the weeks and months)...
- ...
- Giant ball of 30 config file soup. -> (We have arrived at the modern repo!)
"Cool hackathon! What did you do? - I packaged my python code, added test support, setup a nice git repo, a pypi page, a readthedocs page, configured travisci, search the internet for hours on how to upload a python package. Nice one - look at my finished app."Get something out the door really quickly, and gradually improve the packaging. Or maybe your code isn't all that special and it's ok using the default packaging choices.
So, how do we support this workflow?

I started making a tool for doing the zero config releases. It's not done yet, but it is almost able to release itself. This follows on from a series of blog posts and experiments.
My mission is to improve python packaging for newbies, those in the digital arts community. People using pygame. Also for 90% of python programmers who never release a python package, and yet work with python every day. This is based on feedback from different people in the python community who teach python programming, my own experience teaching it and mentoring teams. The scientific python community is another group which finds it hard to create python packages.
The humble experiment so far: pyrelease

(still not sure if it's a good idea... but we will see)

Usage: pyrelease
File layouts (based on simple and common layouts often used):
-----
singlefile.py
-----
mygame/game.py
data/image.png
-----
singlefile.py
test_singlefile.py
-----
singlefile.py
tests/test_singlefile.py
-----
The basic steps at runtime are these.
- Gather facts, like author name and email. (much like ansible if you know it)
- Create setup.py, setup.cfg, MANIFEST.in files in a temp folder
- build sdist in that temp folder
- Upload files to pypi (eventually other release targets, including pyweek)
- tag a new version in git (if they are using git)
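To make the second step concrete, the generated setup.py could be as small as the sketch below. Every value shown is an invented placeholder standing in for a gathered fact, not the tool's actual output:

# Hypothetical setup.py that a tool like pyrelease could generate.
from setuptools import setup

setup(
    name='singlefile',                 # from the .py file name
    version='0.0.2',                   # e.g. last PyPI version, incremented
    author='Rene',                     # e.g. found in ~/.gitconfig
    author_email='rene@example.com',   # placeholder
    description='...',                 # first line of the module docstring
    py_modules=['singlefile'],
    install_requires=['pygame'],       # found by parsing the imports
)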
Tell the user what is happening.

The tool should also teach people about python packaging. It should allow you to see the generated setup.py file, setup.cfg, and MANIFEST.in. It should point to the Python Packaging User Guide. It should open the browser for creating a pypi account, and tell people what config files they need to fill in. It shouldn't require git, or a github account, but support that workflow if they do.
Tell them why adding each bit of config is useful.
A single .py file is the smallest package of python code.

The simplest package of python code is a .py file. It can be copied into a project and it can work. Or it can be copied into the site-packages folder, and the module will work. People upload python code to share all the time, on websites all over the internet.
Where does the packaging metadata live?

Technically python doesn't need any metadata to install a simple .py file. I mentioned in the first two parts of this series various places where data can be gathered: ~/.pypirc files, .gitrc files, .hgrc files. It can find versions as git tags. It can find __author__, __license__, and __version__ variables inside files. The description and longdescription are found at the top of the .py file in the docstring.
Why not template generators like sampleproject and cookiecutter?

These tools have their place. They are good if you want to tweak all the config in there, and you know how all the tools work. However, based on feedback from people, it's all too complex still. They want to share a couple of .py files and that's it. Renaming code easily is important for some types of python code, especially when you don't even know where your experiment is going. Naming things is hard!
They came to python for the elegance, not the packaging config files.

But they still want to share code easily.
Where next with the pyrelease experiment?

First I want to get the tool into shape so it can at least release itself. I'm going to Gradually Package the humble pyrelease - by keeping it to one file from the beginning :)
It should support single file modules, packages, and also /data/ folders. The simple layouts that people use. As well it supports making a script automatically if it finds a main(). As well it finds dependencies by parsing the python code (I'll add requirements.txt later). So if you import pygame, click, flask etc... it adds them to install_requires in the setup.py.
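That kind of dependency discovery can be done without importing the code, by walking the module's syntax tree. A rough sketch with the stdlib ast module (not pyrelease's actual implementation):

import ast

def find_imports(path):
    """Return the top-level module names imported by a .py file."""
    with open(path) as f:
        tree = ast.parse(f.read())
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split('.')[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split('.')[0])
    return found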
- I want to add logging of facts. eg. 'Found author: "Rene" in ~/.gitrc". Could not find author in ~/.hgrc
- Suggest additions for missing things. eg. How to create a pypi account, how to make ~.pypirc, git.
- Have to make uploading to pypi easier, especially when things go wrong.
- thinking of removing the setuptools dependency... only thing I use it for is find_packages so far. I've been writing another more modern implementation of that anyway. Remove distutils dep too? [0]
- Pynsist support (or py2exe, pyinstaller, whatever)
- tests, and tests against test.pypi.python.org
- "add setup files into this repo" for when the tool fails to be good enough.
- notice telling people to go to packaging.python.org if they want more
- decide on convention for screenshots (probably screenshots/ folder)
- bitbucket, and better hg support.
- pyweek upload support.
- try releasing a few more games with it.
- watch some other people try and use it.
- keep blogging, and getting feedback.
[0] Thomas Kluyver - an explanation of some philosophy on 'flit' the tool for minimal config distribution that doesn't use distutils or setuptools.
6 comments:
Hey, I really liked this post. I honestly couldn't agree more about the state of simple package management, and I think your tool would certainly be helpful for many people (me included).
I recently made a tool (trabBuild) to help me with simple packaging. In fact I used it to package and upload itself to PyPi right after I read this post!
My next step with the script was to implement something similar to what you already have here. I checked out your code and I think we might be able to merge some of this together and save a fair amount of work.
Links to code:
PyPi:
GitHub:
Thanks again!
Nice article. I see where you're coming from with the idea that we should be able to publish single file modules. However, there are a few things that bother me still. First of all, while having version numbers, package descriptions and the like in the code is nice in the short term, IMO it's not sustainable in the longer term. First of all, you can't introspect the data from the module without executing the module, and that's a potential problem (less so for a developer tool than for pip, though, so probably OK for pyrelease). Secondly, the needs of your docstring and your long_description diverge fairly quickly - you may want ReST markup in the long description, but (mostly) plain text in your docstring.
Honestly, I'd rather the minimum be the code, plus a single config file containing the metadata. There might be a (tiny) bit of duplication, but the benefit of a clear separation between information about the module and the module itself, would be useful IMO.
I wonder - from what I've seen, flit is a nice lightweight packaging tool that is based on this "code plus a single metadata file" idea. So maybe having pyrelease just generate a flit.ini file for your project is what's needed here.
One thing I am a bit bothered about, though, is the "it's easy to rename" idea. I know where you're coming from (most of my modules start out named test.py!) but naming something for release is a big deal. You're reserving a name on PyPI forever. That really shouldn't be something you do without thinking.
I'd also be interested in where you'll go with regard to testing (which tool? use tox?) and CI (travis and appveyor *need* their own config files, you can't avoid that). And maybe even documentation (you really can't last long with "read the source file"). This sort of question is what prompted me to start the PyPA sampleproject - and it got dreadfully bogged down because there genuinely is no consensus on any of these things :-(
But regardless, thanks for working on this. "How do I set up my environment to build my shiny new idea in Python?" is a very common question, with no (or rather too many) good answer, and we really need something better.
@Scot! Happy to collaborate on things. (your comment got flagged as spam for some reason)
@Paul, thank you very much for the discussion.
A few random thoughts below. The most important and easy win I think is convincing a couple of tools to support using setup.cfg sections for configuration.
---
Many tools now use setup.cfg sections. The holdouts are mypy, pylint, and tox. But 11 or so other tools do apparently (flake, coverage, pytest, etc). I'd like to advocate for at least the python tools supporting setup.cfg.
It's quite easy to parse python files without executing them. As long as people don't do tricky things that is. If they do tricky things, then the tool should be able to fail with an error, and prompt people to use. __version__ __author__ and __license__ are quite common already.
I'm not entirely convinced myself that pyrelease should be a thing. Or if the 'gather meta data' idea is a good thing. However, I'd like to try and continue looking at how far the idea can go. Perhaps it would work nicely combined with ideas from flit, pypackage, sphinx-quickstart and others.
For version, they live in pypi already. At packaging time, the simplest way would be to just increment what you get from pypi. Of course __version__, and git tags could be supported too.
.py files don't really have a tool for managing them at the moment. It would seem a package management tool could support them. They technically require no meta data.
pylint and other tools use per file config. Might be nice to standardise this somehow. Not python constructs but comments.
Yeah, flit with one config file could be easily used by many.
Yeah, releasing to pypi is something to consider. Pypi can remove packages right? I've removed some anyway... Cleaning up pypi of unused old packages might be useful. However, releasing code to internal package indexes(devpi etc) or just publishing to git/folder/web and referencing that in the requirements is popular already (for internal company use). I've seen teams which just copy their python files into place (ansible, fabric devops people), or just 'release' to their git repos and use links to them in the requirements.txt of other packages.
Better support for namespace packages will allow less pollution of the global pypi namespace. That's probably another topic to work on.
I like simple_setup (see below) which is a setup.py file which does the gather metadata trick. The benefit of this is that you can point pip at it to install. pbr uses a similar trick to make the setup.py minimal and keep all the logic within itself. I currently think this is a good idea.
Testing is a good topic, that deserves a lot more words. I recently tried to run all the tests from the 400 most downloaded packages from pypi. It's pretty much impossible. Even with tox.ini ones, there were plenty of failures - that didn't fail on their travisci. However, with a bit of work I managed to automatically find how to run tests with many packages. Even though there is no standard way to test packages (many don't work with setup.py test). For a pyrelease tool, I would probably pick a convention that is supported, and perhaps use some introspection again. If there is a test_bla.py in the folder, I'd try run it with one of the test runners (py.test), or if there's a test folder, do the same. Picking one tool most likely. If people need more than that, then they'll have to configure the test framework.
Running tests for the downstream packages that depend on your package is very useful for catching bugs. It amplifies the size of your test cases by a lot. I know some programs and packages do this already. eg. twisted some years ago asked python to run its test suite before release to lessen the unintentional regressions.
---
If all the python tools use setup.cfg sections, that will already clean up python repos by 3-10 files. There's already pull requests and patches for tox and mypy, but there is some resistance. They already use setup.cfg to configure some of the tools themselves. (see )
Here's a list of tools all aiming to simplify packaging and package management:
pbr
flit
(a setup.py which gathers info from the environment)
pypackage
pipenv
fades
(It's interesting to me that some of them come from a user group, a games company, and a microservices group with hundreds of python packages, and the 'for humans' guy).
|
http://renesd.blogspot.com/2017/02/gradual-packaging-python-part-two.html
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
This chapter describes how to use XLink and XInclude with resources in Oracle XML DB Repository. It contains these topics:
Overview of XLink and XInclude
XLink and XInclude Link Types
XInclude: Compound Documents
Using XLink with Oracle XML DB
Using XInclude with Oracle XML DB
Examining XLink and XInclude Links using DOCUMENT_LINKS View
Configuring Resources for XLink and XInclude
Managing XLink and XInclude Links using DBMS_XDB.processLinks
A document-oriented, or content-management, application often tracks relationships between documents, and those relationships are often represented and manipulated as links of various kinds. Such links can affect application behavior in various ways, including affecting the document content and the response to user operations such as mouse clicks.
W3C has two recommendations that are pertinent in this context, for documents that are managed in XML repositories:
XLink – Defines various types of links between resources. These links can model arbitrary relationships between documents. Those documents can reside inside or outside the repository.
XInclude – Defines ways to include the content of multiple XML documents or fragments in a single infoset. This provides for compound documents, which model inclusion relationships. Compound documents are documents that contain other documents. More precisely, they are file resources that include documents or document fragments. The included objects can be file resources in the same repository or documents or fragments outside the repository.
Each of these standards is very general, and neither is limited to modeling relationships between XML documents. There is no requirement that the documents linked using XLink or included in an XML document using XInclude be XML documents.
Using XLink and XInclude to represent document relationships provides flexibility for applications, facilitates reuse of component documents, and enables their fine-grained manipulation (access control, versioning, metadata, and so on). Whereas using XML data structure (an ancestor–descendants hierarchy) to model relationships requires those relationships to be relatively fixed, using XLink and XInclude to model relationships can easily allow for change in those relationships.
Note: For XML schema-based documents to be able to use XLink and XInclude attributes, the XML schema must either explicitly declare those attributes or allow any attributes.
See Also: the W3C XLink recommendation for information about the XLink standard, and the W3C XInclude recommendation for information about the XInclude standard
This section describes XLink and XInclude link types and the relation between these and Oracle XML DB Repository links. XLink links are more general than repository links. XLink links can be simple or extended. Oracle XML DB supports only simple XLink links, not extended links.
XLink and XInclude links model arbitrary relationships among documents. The meaning and behavior of a relationship are determined by the applications that use the link. They are not inherent in the link itself. XLink and XInclude links can be mapped to Oracle XML DB document links. When document links target Oracle XML DB Repository resources, they can (according to a configuration option) be hard or weak links. In this, they are similar to repository links in that context. Repository links can be navigated using file system-related protocols such as FTP and HTTP. Document links cannot, but they can be navigated using the XPath 2.0 function fn:doc.
See Also: "Hard Links and Weak Links"
XLink and XInclude can provide links to other documents. In the case of XInclude, attributes href and xpointer are used to specify the target document.
XLink links can be simple or extended. Simple links are unidirectional, from a source to a target. Extended links (sometimes called complex) can model relationships between multiple documents, with different directionalities. Both simple and extended links can include link metadata. XLink links are represented in XML data using various attributes of the XLink namespace, which has the predefined prefix xlink. Simple links are represented in XML data using attribute type with value simple, that is, xlink:type = "simple". Extended XLink links are represented using xlink:type = "extended".
Third-party extended Xlink links are not contained in any of the documents whose relationships they model. Third-party links can thus be used to relate documents, such as binary files, that, themselves, have no way of representing a link.
The source end of a simple XLink link (that is, the document containing the link) must be an XML document. The target end of a simple link can be any document. There are no such restrictions for extended links. Example 23-3 shows examples of simple links. The link targets are represented using attribute xlink:href.
XInclude is the W3C recommendation for the syntax and processing model for merging the infosets of multiple XML documents into a single infoset. Element xi:include is used to include another document, specifying its URI as the value of an href attribute. Element xi:include can be nested, so that an included document can itself include other documents.
(However, an inclusion cycle raises an error in Oracle XML DB. The resources are created, but an error is raised when the inclusions are expanded.)
XInclude thus provides for compound documents: repository file resources that include other XML documents or fragments. The included objects can be file resources in the same repository or documents or fragments outside the repository.
A book might be an example of a typical compound document, as managed by a content-management system. Each book includes chapter documents, which can each be managed as separate objects, with their own URLs. A chapter document can have its own metadata and access control, and it can be versioned. A book can include (reference) a specific version of a chapter document. The same chapter document can be included in multiple book documents, for reuse. Because inclusion is modeled using XInclude, content management is simplified. It is easy, for example, to replace one chapter in a book by another.
Example 23-1 illustrates an XML Book element that includes four documents. One of those documents, part1.xml, is also shown. Document part1.xml includes other documents, representing chapters.
Example 23-1 XInclude Used in a Book Document to Include Parts and Chapters
The top-level document representing a book contains element Book.

<Book xmlns:
  <xi:include href="toc.xml"/>
  <xi:include href="part1.xml"/>
  <xi:include href="part2.xml"/>
  <xi:include href="index.xml"/>
</Book>
A major book part, file (resource) part2.xml, contains a Part element, which includes multiple chapter documents.

<?xml version="1.0"?>
<Part xmlns:
  <xi:include href="..."/>
  <xi:include href="..."/>
  <xi:include href="..."/>
  <xi:include href="..."/>
</Part>
These are some additional features of XInclude:
Inclusion of plain text – You can include unparsed, non-XML text using attribute parse with a value of text: parse = "text".
Inclusion of XML fragments – You can use an xpointer attribute in an xi:include element to specify an XML fragment to include, instead of an entire document.
Fallback processing – In case of error, such as inability to access the URI of an included document, an xi:include syntax error, or an xpointer reference that returns null, XInclude performs the treatment specified by element xi:fallback. This generally specifies an alternative element to be included. The alternative element can itself use xi:include to include other documents.
Oracle XML DB supports only simple XLink links, not extended XLink links.
When an XML document containing XLink attributes is added to Oracle XML DB Repository, either as resource content or as user-defined resource metadata, special processing can occur, depending on how the repository or individual repository resources are configured. Element XLinkConfig of the resource configuration document, XDBResConfig.xsd, determines this behavior. In particular, you can configure resources so that XLink links are ignored, or so that they are mapped to Oracle XML DB document links. In the latter case, configuration can specify that the document links are to be hard or weak. Hard and weak document links have the same properties as hard and weak repository links.
The privileges needed to create or update document links are the same as those needed to create or update repository links. Even partially updating a document requires the same privileges needed to delete the entire document and reinsert it. In particular, even if you update just one document link you must have delete and insert privileges for each of the documents linked by the document containing the link.
If configuration maps XLink links to document links, then, whenever a document containing XLink links is added to the repository, the XLink information is extracted and stored in a system link table. Link target (destination) locations are replaced by direct paths that are based on the resource OIDs. Configuration can also specify whether OID paths are to be replaced by named paths (URLs) upon document retrieval. Using OID paths instead of named paths generally offers a performance advantage when links are processed, including when resource contents are retrieved.
You can use XLink within resource content, but not within resource metadata.
Oracle XML DB supports XInclude 1.0 as the standard mechanism for managing compound documents. It does not support attribute xpointer and the inclusion of document fragments, however. Only complete documents can be included (using attribute href).
You can use XInclude to create XML documents that include existing content. You can also configure the implicit decomposition of non-schema-based XML documents, creating a set of repository resources that contain XInclude inclusion references.
The content of included documents must be XML data or plain text (with attribute parse = "text"). You cannot include binary content directly using XInclude, but you can use XLink to link to binary content.
You can use XInclude within resource content, but not within resource metadata.
When you retrieve a compound document from Oracle XML DB Repository, you have a choice:
Retrieve it as is, with the xi:include elements remaining as such. This is the default behavior.
Retrieve it after replacing the xi:include elements with their targets, recursively, that is, after expansion of all inclusions. An error is raised if any xi:include element cannot be resolved.
To retrieve the document in expanded form, use PL/SQL constructor XDBURIType, passing a value of '1' or '3' as the second argument (flags). Example 23-2 illustrates this. These are the possible values for the XDBURIType constructor second argument:
1 – Expand all XInclude inclusions before returning the result. If any such inclusion cannot be resolved according to the XInclude standard fallback semantics, then raise an error.
2 – Suppress all errors that might occur during document retrieval. This includes dangling href pointers.
3 – Same as 1 and 2 together.
Example 23-2 retrieves all documents that are under repository folder /public/bookdir, expanding each inclusion:
Example 23-2 Expanding Document Inclusions using XDBURIType
SELECT XDBURIType(ANY_PATH, '1').getXML() FROM RESOURCE_VIEW WHERE under_path(RES, '/public/bookdir') = 1; XDBURITYPE(ANY_PATH,'1').GETXML() --------------------------------- <Book> <Title>A book</Title> <Chapter id="1"> <Title>Introduction</Title> <Body> <Para>blah blah</Para> <Para>foo bar</Para> </Body> </Chapter> <Chapter id="2"> <Title>Conclusion</Title> <Body> <Para>xyz xyz</Para> <Para>abc abc</Para> </Body> </Chapter> </Book> <Chapter id="1"> <Title>Introduction</Title> <Body> <Para>blah blah</Para> <Para>foo bar</Para> </Body> </Chapter> <Chapter id="2"> <Title>Conclusion</Title> <Body> <Para>xyz xyz</Para> <Para>abc abc</Para> </Body> </Chapter> 3 rows selected.
(The result shown here corresponds to the resource bookfile.xml shown in Example 23-8, together with its included resources, chap1.xml and chap2.xml.)
See Also:
"Versioning, Locking, and Controlling Access to Compound Documents" for information about access control during expansion
Oracle Database PL/SQL Packages and Types Reference for more information about XDBURIType
You validate a compound document the way you would any XML document. However, you can choose to validate it in either form: with xi:include elements as is or after replacing them with their targets.
You can also choose to use one XML schema to validate the unexpanded form, and another to validate the expanded form. For example, you might use one XML schema to validate without first expanding, in order to set up storage structures, and then use another XML schema to validate the expanded document after it is stored.
You can update a compound document just as you would update any resource. This replaces the resource with a new value. It thus corresponds to a resource deletion followed by a resource insertion. This means, in particular, that any xi:include elements in the original resource are deleted. Any xi:include elements in the replacement (inserted) document are processed as usual, according to the configuration defined at the time of insertion.
The components of a compound document are separate resources. They are versioned and locked independently, and their access is controlled independently.
Document links to version-controlled resources (VCRs) always resolve to the latest version of the target resource, or the selected version within the current workspace. You can, however, explicitly refer to any specific version, by identifying the target resource by its OID-based path.
Locking a document that contains xi:include elements does not also lock the included documents. Locking an included document does not also lock documents that include it.
The access control list (ACL) on each referenced document is checked whenever you retrieve a compound document with expansion. This is done using the privileges of the current user (invoker's rights). If privileges are insufficient for any of the included documents, the expansion is canceled and an error is raised.
See Also:
"Expanding Compound-Document Inclusions"
Chapter 24, "Managing Resource Versions" for information about VCRs
Chapter 27, "Repository Access Control" for information about resource ACLs
You can query the read-only public view DOCUMENT_LINKS to obtain system information about document links derived from both XLink and XInclude links. The information in this view includes the following columns, for each link:
SOURCE_ID – The source resource OID. RAW(16).
TARGET_ID – The target resource OID. RAW(16).
TARGET_PATH – Always NULL. Reserved for future use. VARCHAR2(4000).
LINK_TYPE – The document link type: Hard or Weak. VARCHAR2(8).
LINK_FORM – Whether the original link was of form XLink or XInclude. VARCHAR2(8).
SOURCE_TYPE – Always Resource Content. VARCHAR2(17).
You can obtain information about a resource from this view only if one of the following conditions holds:
The resource is a link source, and you have the privilege read-contents or read-properties on it.
The resource is a link target, and you have the privilege read-properties on it.
See Also: Oracle Database Reference for more information on public view DOCUMENT_LINKS
Example 23-3 shows how XLink links are treated when resources are created, and how to obtain system information about document links from view DOCUMENT_LINKS. It assumes that the folder containing the resource has been configured to map XLink links to document hard links.
Example 23-3 Querying Document Links Mapped From XLink Links
DECLARE
  b BOOLEAN;
BEGIN
  b := DBMS_XDB.createResource(
    '/public/hardlinkdir/po101.xml',
    '<PurchaseOrder id="101" xmlns:xlink="http://www.w3.org/1999/xlink">
       <Company xlink:type="simple" xlink:href="oracle.xml">Oracle Corporation</Company>
       <Approver xlink:type="simple" xlink:href="quine.xml">Willard Quine</Approver>
     </PurchaseOrder>');
  b := DBMS_XDB.createResource(
    '/public/hardlinkdir/po102.xml',
    '<PurchaseOrder id="102" xmlns:xlink="http://www.w3.org/1999/xlink">
       <Company xlink:type="simple" xlink:href="oracle.xml">Oracle Corporation</Company>
       <Approver xlink:type="simple" xlink:href="curry.xml">Haskell Curry</Approver>
       <ReferencePO xlink:type="simple" xlink:href="po101.xml"/>
     </PurchaseOrder>');
END;
/

/public/hardlinkdir/po101.xml  /public/hardlinkdir/oracle.xml  Hard  XLink
/public/hardlinkdir/po101.xml  /public/hardlinkdir/quine.xml   Hard  XLink
/public/hardlinkdir/po102.xml  /public/hardlinkdir/oracle.xml  Hard  XLink
/public/hardlinkdir/po102.xml  /public/hardlinkdir/curry.xml   Hard  XLink
/public/hardlinkdir/po102.xml  /public/hardlinkdir/po101.xml   Hard  XLink
See Also:"Mapping XInclude Links to Hard Document Links, with OID Retrieval" for an example of configuring a folder to map XLink links to hard links
Example 23-4 queries view DOCUMENT_LINKS to show all document links.
Example 23-4 Querying Document Links Mapped From XInclude Links
DECLARE
  ret BOOLEAN;
BEGIN
  ret := DBMS_XDB.createResource(
    '/public/hardlinkdir/book.xml',
    '<Book xmlns:xi="http://www.w3.org/2001/XInclude">
       <xi:include href="toc.xml"/>
       <xi:include href="part1.xml"/>
       <xi:include href="part2.xml"/>
       <xi:include href="index.xml"/>
     </Book>');
END;
/

/public/hardlinkdir/book.xml  /public/hardlinkdir/toc.xml    Hard  XInclude
/public/hardlinkdir/book.xml  /public/hardlinkdir/part1.xml  Hard  XInclude
/public/hardlinkdir/book.xml  /public/hardlinkdir/part2.xml  Hard  XInclude
/public/hardlinkdir/book.xml  /public/hardlinkdir/index.xml  Hard  XInclude
You configure XLink and XInclude treatment for Oracle XML DB Repository resources as you would configure any other treatment of repository resources — see "Configuring a Resource". The rest of this section describes the resource configuration file that you use to configure XLink and XInclude. You use elements XLinkConfig and XIncludeConfig, children of element ResConfig, to configure XLink and XInclude treatment, respectively. If one of these elements is absent, then there is no treatment of the corresponding type of links.
Both XLinkConfig and XIncludeConfig can have attribute UnresolvedLink and child elements LinkType and PathFormat. Element XIncludeConfig can also have child element ConflictRule. If the LinkType element content is None, however, then there must be no PathFormat or ConflictRule element.
You cannot define any preconditions for XLinkConfig or XIncludeConfig. During repository resource creation, the ResConfig element of the parent folder determines the treatment of XLink and XInclude links for the new resource. If the parent folder has no ResConfig element, then the repository-wide configuration applies.
Any change to the resource configuration file applies only to documents that are created or updated after the configuration-file change. To process links in existing documents, use PL/SQL procedure DBMS_XDB.processLinks, after specifying the appropriate resource configuration parameters.
See Also:
"Managing XLink and XInclude Links using DBMS_XDB.processLinks"
Chapter 22, "Configuring Oracle XML DB Repository"
A LinkConfig element can have an UnresolvedLink attribute with a value of Error (default value) or Skip. This determines what happens if an XLink or XInclude link cannot be resolved at the time of document insertion into the repository (resource creation).
Error means raise an error and roll back the current operation.
Skip means skip any treatment of the XLink or XInclude link. Skipping treatment creates the resource with no corresponding document links, and sets the resource's HasUnresolvedLinks attribute to true, to indicate that the resource has unresolved links.
Using Skip as the value of attribute UnresolvedLink can be especially useful when you create a resource that contains a cycle of weak links, which would otherwise lead to unresolved-link errors during resource creation. After the resource and all of its linked resources have been created, you can use PL/SQL procedure DBMS_XDB.processLinks to process the skipped links. If all XLink and XInclude links have been resolved by this procedure, then attribute HasUnresolvedLinks is set to false.
Resource attribute HasUnresolvedLinks is also set to true for a resource that has a weak link to a resource that has been deleted. Deleting a resource thus effectively also deletes any weak links pointing to that resource. In particular, whenever the last hard link to a resource is deleted, the resource is itself deleted, and all resources that point to the deleted resource with a weak link have attribute HasUnresolvedLinks set to true.
You use the LinkType element of a resource configuration file to specify the type of document link to be created whenever an XLink or XInclude link is encountered when a document is stored in Oracle XML DB Repository. The LinkType element has these possible values (element content):
None (default) – Ignore XLink or XInclude links: create no corresponding document links.
Hard – Map XLink or XInclude links to hard document links in repository documents.
Weak – Map XLink or XInclude links to weak document links in repository documents.
You use the PathFormat element of a resource configuration file to specify the path format to be used when retrieving documents with xlink:href or xi:include:href attributes. The PathFormat element has these possible values (element content) for hard and weak document links:
OID (default) – Map XLink or XInclude href paths to OID-based paths in repository documents — that is, use OIDs directly.
Named – Map XLink or XInclude href paths to named paths (URLs) in repository documents. The path is computed from the internal OID when the document is retrieved, so retrieval can be slower than in the case of using OID paths directly.
You use the ConflictRule element of a resource configuration file to specify the conflict-resolution rules to use if the path computed for a component document is already present in Oracle XML DB Repository. The ConflictRule element has these possible values (element content):
Error (default) – Raise an error.
Overwrite – Update the document targeted by the existing repository path, replacing it with the document to be included. If the existing document is a version-controlled resource, then it must already be checked out, unless it is autoversioned. Otherwise, an error is raised.
Syspath – Change the path to the included document to a new, system-defined path.
See Also: Chapter 24, "Managing Resource Versions" for information about version-controlled resources
You use the SectionConfig element of a resource configuration file to specify how non-schema-based XML documents are to be decomposed when added to Oracle XML DB Repository, to create a set of resources that contain XInclude inclusion references. You use simple XPath expressions in the resource configuration file to identify which parts of a document to map to separate resources, and which resources to map them to.
Element SectionConfig contains one or more Section elements, each of which contains the following child elements:
sectionPath – Simple XPath 1.0 expression that identifies a section root. This must use only child and descendant axes, and it must not use wildcards.
documentPath (optional) – Simple XPath 1.0 expression that is evaluated to identify the resources to be created from decomposing the document according to sectionPath. The XPath expression must use only child, descendant, and attribute axes.
namespace (optional) – Namespace in effect for sectionPath and documentPath.
Element Section also has a type attribute that specifies the type of section to be created. Value Document means create a document. The default value, None, means do not create anything. Using None is equivalent to removing the SectionConfig element. You can thus set the type attribute to None to disable a SectionConfig element temporarily, without removing it, and then set it back to Document to enable it again.
If an element in the document being added to the repository matches more than one sectionPath value, only the first such expression (in document order) is used.
If no documentPath element is present, then the resource created has a system-defined name, and is put into the folder specified for the original document.
Example 23-5 shows a configuration-file section that configures XInclude treatment, mapping XInclude attributes to Oracle XML DB Repository hard document links. Repository paths in retrieved resources are configured to be based on resource OIDs.
Example 23-5 Mapping XInclude Links to Hard Document Links, with OID Retrieval
<ResConfig> . . . <XIncludeConfig UnresolvedLink="Skip"> <LinkType>Hard</LinkType> <PathFormat>OID</PathFormat> </XIncludeConfig> . . . </ResConfig>
Example 23-6 shows an XLinkConfig section that maps XLink links to weak document links in the repository. In this case, retrieval of a document uses named paths (URLs).
Example 23-6 Mapping XLInk Links to Weak Links, with Named-Path Retrieval
<ResConfig> . . . <XLinkConfig UnresolvedLink="Skip"> <LinkType>Weak</LinkType> <PathFormat>Named</PathFormat> </XLinkConfig> . . . </ResConfig>
Example 23-7 shows a SectionConfig section that specifies that each Chapter element in an input document is to become a separate repository file, when the input document is added to Oracle XML DB Repository. The repository path for the resulting file is specified using configuration element documentPath, and this path is relative to the location of the resource configuration file of Example 23-6.
Example 23-7 Configuring XInclude Document Decomposition
<ResConfig> . . . <SectionConfig> <Section type = "Document"> <sectionPath>//Chapter</sectionPath> <documentPath>concat("chap", @id, ".xml")</documentPath> </Section> </SectionConfig> . . . </ResConfig>
The XPath expression here uses XPath function concat to concatenate the following strings to produce the resulting repository path to use:
chap – (prefix) chap.
The value of attribute id of element Chapter in the input document.
.xml as a file extension.
For example, a repository path of chap27.xml would result from an input document with a Chapter element that has an id attribute with value 27:
<Chapter id="27"> ... </Chapter>
If the configuration document of Example 23-6 and the book document that contains the XInclude elements are in repository folder /public/bookdir, then the individual chapter files generated from XInclude decomposition are in files /public/bookdir/chapN.xml, where the values of N are the values of the id attributes of Chapter elements.
The book document that is added to the repository is derived from the input book document. The embedded Chapter elements in the input book document are replaced by xi:include elements that reference the generated chapter documents — Example 23-8 illustrates this.
Example 23-8 Repository Document, Showing Generated xi:include Elements
SELECT XDBURIType('/public/bookdir/bookfile.xml').getclob() FROM DUAL;

XDBURITYPE('/PUBLIC/BOOKDIR/BOOKFILE.XML').GETCLOB()
--------------------------------------------------------------------------------
<Book>
  <Title>A book</Title>
  <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="chap1.xml"/>
  <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="chap2.xml"/>
</Book>
You can use PL/SQL procedure DBMS_XDB.processLinks to manually process all XLink and XInclude links in a single document or in all documents of a folder. Pass RECURSIVE as the mode argument to this procedure, if you want to process all hard-linked subfolders recursively. All XLink and XInclude links are processed according to the corresponding configuration parameters. If any of the links within a resource cannot be resolved, the resource's HasUnresolvedLinks attribute is set to true, to indicate that the resource has unresolved links. The default value of attribute HasUnresolvedLinks is false.
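For example, a call along these lines (the folder path is illustrative, and the exact form of the RECURSIVE mode argument should be checked in the DBMS_XDB package documentation) processes the links for a whole folder tree:

BEGIN
  -- Resolve skipped XLink/XInclude links under /public/bookdir,
  -- descending into hard-linked subfolders.
  DBMS_XDB.processLinks('/public/bookdir', 'RECURSIVE');
END;
/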
|
http://docs.oracle.com/cd/E18283_01/appdev.112/e16659/xdb_xlink.htm
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
*a student must have taken all 4 tests and have no test grade less than 80
*after the lowest of the four tests is dropped, the student must have an average 90 or better."
I have the code written but I still have a couple of problems:
1)When doing the if statement to decide which students are exempt, I get this error message:
exemptions.java:46: operator >= cannot be applied to double[],int
if (grades >=80 && average >=90)
2)How do I use the parse method with an array so I can read the grades from the text file? (It worked before I added the array)
Thanks again!
import java.io.*;//needed for File and IO exception import java.util.StringTokenizer;//to tokenize strings import java.util.Scanner;//to use scanner class public class exemptions { public static void main(String[] args) throws IOException { String line, fileIn="Grades.txt", fileOut="Exempt.txt";//declares line, Grades and Exempt text files StringTokenizer tokens;//declares StringTokenizer variable tokens String SSnumber;//string variable SSnumber //build input stream FileReader fr=new FileReader(fileIn); BufferedReader inFile=new BufferedReader(fr); //build output stream FileWriter fw=new FileWriter (fileOut); BufferedWriter bw=new BufferedWriter(fw); PrintWriter outFile=new PrintWriter(bw); line=inFile.readLine(); while (line!=null) { double []grades=new double[5];//create an array double lowest=grades[0];//store first variable in grades array in the variable lowest tokens=new StringTokenizer(line); SSnumber=null; SSnumber=tokens.nextToken(); for(int i = 1; i <grades.length; i++)//for loop to get lowest grade { if(grades[i]<lowest) lowest=grades[i]; } double total=0;//accumulator double average; for (int i=0; i < grades.length; i++)//for loop to get average of grades total +=grades[i]; average=(total-lowest)/grades.length;//average equals total minus lowest score divided by //number of grades if (grades >=80 && average >=90)//if grades are above 80 and average is above 90 System.out.println(SSnumber + " Average " + average);//print to screen the student's SS number & average //who are exempt from final outFile.print(SSnumber);//print to text file "exempt.txt" outFile.print(average);//print to text file "exempt.txt" line=inFile.readLine();//read next line from file } inFile.close();//close input file "grades.txt" outFile.close();//close output file "exempt.txt" } }
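For what it's worth, here is one way both problems could be tackled, as a sketch that slots into your while loop (note that four tests suggests an array of length 4; the variable names reuse yours, and average is computed as in your code):

// 2) parse each grade token from the line into the array
double[] grades = new double[4];
for (int i = 0; i < grades.length; i++)
    grades[i] = Double.parseDouble(tokens.nextToken());

// 1) a double[] cannot be compared with >=; test each element instead
boolean allAtLeast80 = true;
for (int i = 0; i < grades.length; i++)
{
    if (grades[i] < 80)
        allAtLeast80 = false;
}

// compute lowest, total and average as before, then:
if (allAtLeast80 && average >= 90)
    System.out.println(SSnumber + " Average " + average);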
|
http://www.dreamincode.net/forums/topic/48404-using-the-parse-methods-with-arrays/
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
I once got a case in which my customer wanted to create a webpart that would allow them to add some custom webparts to a specific zone in the same page by checking the corresponding check boxes in that webpart (almost similar to the functionality in iGoogle).
It will look like the image below :
That "Test" webpart was created as a sample. You can see that there are 2 check boxes in that custom webpart: if we check the Images webpart check box, it will add a new imageview webpart to the bottom webpart zone, and if we check the ListView webpart check box, it will add a list view webpart to the right webpart zone.
Now we can see the challenges that I had faced while developing this webpart.
First I implemented this functionality using the WebPartManager class, and I found that whenever we add a webpart using the AddWebPart() method of the WebPartManager class, it adds the webpart to the specified webpart zone; but if we do a page refresh or a postback, the newly added webpart ends up in some other webpart zone on the page. Bizarre….
After researching more and digging into this concern, I found out that whenever we use the AddWebPart() method of the WebPartManager class, the method gives the WebPart a random ID, and therefore it cannot participate in ViewState management and is not able to persist the zone where we placed it.
After researching more to find an alternate way, I was able to add the WebPart to specific zones using the SPLimitedWebPartManager class instead of WebPartManager. Here the concern was that we have to do a refresh or postback again on the page before we can see the newly added WebParts. To work around that issue, we can reload the page after adding any WebParts to it.
Here is the code snippet. I have hard-coded the zone as oWPManager.Zones[3] (the right zone); we can change it based upon our requirement. Also, here I have tested with the PersonalizationScope as "Shared". (This code snippet was for adding an imageview webpart.)
void oLinksWP_CheckedChanged(object sender, EventArgs e)
{
    if (oChkbxLinksWP.Checked)
    {
        WebPartZone oWPZone = (System.Web.UI.WebControls.WebParts.WebPartZone)oWPManager.Zones[3];
        int iWPCount = oWPZone.WebParts.Count;

        ImageWebPart objImageWP = new ImageWebPart();
        objImageWP.ID = "imgMyWP2";
        objImageWP.Title = "My Image WP";

        SPWeb oWeb = SPContext.Current.Web;
        SPFile oFile = oWeb.GetFile(HttpContext.Current.Request.Url.AbsolutePath.ToString());

        // Use SPLimitedWebPartManager so the webpart keeps its zone across postbacks
        SPLimitedWebPartManager oLWPManager = oFile.GetLimitedWebPartManager(PersonalizationScope.Shared);
        oLWPManager.AddWebPart(objImageWP, oWPZone.ID.ToString(), iWPCount + 1);

        // Reload the page so the newly added webpart becomes visible
        HttpContext.Current.Response.Redirect(HttpContext.Current.Request.Url.AbsolutePath.ToString());
    }
}
The basic difference between WebPartManager and SPLimitedWebPartManager is that WebPartManager comes from the System.Web.UI.WebControls.WebParts namespace, while SPLimitedWebPartManager is from Microsoft.SharePoint.WebPartPages.
Happy coding :)
|
https://blogs.msdn.microsoft.com/sowmyancs/2008/05/29/issue-with-the-webpartmanager-class-while-adding-webparts-dynamically-to-a-specific-webpart-zone/
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Gitweb:;a=commit;h=bdf88217b70dbb18c4ee27a6c497286e040a6705 Commit: bdf88217b70dbb18c4ee27a6c497286e040a6705 Parent: 0ddc9cc8fdfe3df7a90557e66069e3da2c584725 Author: Roland McGrath <[EMAIL PROTECTED]> AuthorDate: Wed Jan 30 13:31:44 2008 +0100 Committer: Ingo Molnar <[EMAIL PROTECTED]> CommitDate: Wed Jan 30 13:31:44 2008 +0100
x86: user_regset header The new header <linux/regset.h> defines the types struct user_regset and struct user_regset_view, with some associated declarations. This new set of interfaces will become the standard way for arch code to expose user-mode machine-specific state. A single set of entry points into arch code can do all the low-level work in one place to fill the needs of core dumps, ptrace, and any other user-mode debugging facilities that might come along in the future. For existing arch code to adapt to the user_regset interfaces, each arch can work from the code it already has to support core files and ptrace. The formats you want for user_regset are the core file formats. The only wrinkle in adapting old ptrace implementation code as user_regset get and set functions is that these functions can be called on current as well as on another task_struct that is stopped and switched out as for ptrace. For some kinds of machine state, you may have to load it directly from CPU registers or otherwise differently for current than for another thread. (Your core dump support already handles this in elf_core_copy_regs for current and elf_core_copy_task_regs for other tasks, so just check there.) The set function should also be made to work on current in case that entails some special cases, though this was never required before for ptrace. Adding this flexibility covers the arch needs to open the door to more sophisticated new debugging facilities that don't always need to context-switch to do every little thing. The copyin/copyout helper functions (in a later patch) relieve the arch code of most of the cumbersome details of the flexible get/set interfaces. Signed-off-by: Roland McGrath <[EMAIL PROTECTED]> Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]> Signed-off-by: Thomas Gleixner <[EMAIL PROTECTED]> --- include/linux/regset.h | 206 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 206 insertions(+), 0 deletions(-) diff --git a/include/linux/regset.h b/include/linux/regset.h new file mode 100644 index 0000000..85d0fb0 --- /dev/null +++ b/include/linux/regset.h @@ -0,0 +1,206 @@ +/* + *. + */ + +#ifndef _LINUX_REGSET_H +#define _LINUX_REGSET_H 1 + +#include <linux/compiler.h> +#include <linux/types.h> +struct task_struct; +struct user_regset; + + +/** + * user_regset_active_fn - type of @active function in &struct user_regset + * @target: thread being examined + * @regset: regset being examined + * + * Return -%ENODEV if not available on the hardware found. + * Return %0 if no interesting state in this thread. + * Return >%0 number of @size units of interesting state. + * Any get call fetching state beyond that number will + * see the default initialization state for this data, + * so a caller that knows what the default state is need + * not copy it all out. + * This call is optional; the pointer is %NULL if there + * is no inexpensive check to yield a value < @n. 
+ */ +typedef int user_regset_active_fn(struct task_struct *target, + const struct user_regset *regset); + +/** + * user_regset_get_fn - type of @get function in &struct user_regset + * @target: thread being examined + * @regset: regset being examined + * @pos: offset into the regset data to access, in bytes + * @count: amount of data to copy, in bytes + * @kbuf: if not %NULL, a kernel-space pointer to copy into + * @ubuf: if @kbuf is %NULL, a user-space pointer to copy into + * + * Fetch_get_fn(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + void *kbuf, void __user *ubuf); + +/** + * user_regset_set_fn - type of @set function in &struct user_regset + * @target: thread being examined + * @regset: regset being examined + * @pos: offset into the regset data to access, in bytes + * @count: amount of data to copy, in bytes + * @kbuf: if not %NULL, a kernel-space pointer to copy from + * @ubuf: if @kbuf is %NULL, a user-space pointer to copy from + * + * Store_set_fn(struct task_struct *target, + const struct user_regset *regset, + unsigned int pos, unsigned int count, + const void *kbuf, const void __user *ubuf); + +/** + * user_regset_writeback_fn - type of @writeback function in &struct user_regset + * @target: thread being examined + * @regset: regset being examined + * @immediate: zero if writeback at completion of next context switch is OK + * + * This call is optional; usually the pointer is %NULL. When + * provided, there is some user memory associated with this regset's + * hardware, such as memory backing cached register data on register + * window machines; the regset's data controls what user memory is + * used (e.g. via the stack pointer value). + * + * Write register data back to user memory. If the @immediate flag + * is nonzero, it must be written to the user memory so uaccess or + * access_process_vm() can see it when this call returns; if zero, + * then it must be written back by the time the task completes a + * context switch (as synchronized with wait_task_inactive()). + * Return %0 on success or if there was nothing to do, -%EFAULT for + * a memory problem (bad stack pointer or whatever), or -%EIO for a + * hardware problem. + */ +typedef int user_regset_writeback_fn(struct task_struct *target, + const struct user_regset *regset, + int immediate); + +/** + * struct user_regset - accessible thread CPU state + * @n: Number of slots (registers). + * @size: Size in bytes of a slot (register). + * @align: Required alignment, in bytes. + * @bias: Bias from natural indexing. + * @core_note_type: ELF note @n_type value used in core dumps. + * @get: Function to fetch values. + * @set: Function to store values. + * @active: Function to report if regset is active, or %NULL. + * @writeback: Function to write data back to user memory, @pos argument must be aligned according to @align; the @count + * argument must be a multiple of @size. These functions are not + * responsible for checking for invalid arguments. + * + * When there is a natural value to use as an index, @bias + * @bias from a segment selector index value computes the regset slot. + * + * If nonzero, @core_note_type + * (@n * @size) and nothing else. The core file note is normally + * omitted when there is an @active function and it returns zero. 
+ */ +struct user_regset { + user_regset_get_fn *get; + user_regset_set_fn *set; + user_regset_active_fn *active; + user_regset_writeback_fn *writeback; + unsigned int n; + unsigned int size; + unsigned int align; + unsigned int bias; + unsigned int core_note_type; +}; + +/** + * struct user_regset_view - available regsets + * @name: Identifier, e.g. UTS_MACHINE string. + * @regsets: Array of @n regsets available in this view. + * @n: Number of elements in @regsets. + * @e_machine: ELF header @e_machine %EM_* value written in core dumps. + * @e_flags: ELF header @e_flags value written in core dumps. + * @ei_osabi: ELF header @e_ident[%EI_OSABI] value written in core dumps. + * + * A regset view is a collection of regsets (&struct user_regset, + * above). This describes all the state of a thread that can be seen + * from a given architecture/ABI environment. More than one view might + * refer to the same &struct user_regset, or more than one regset + * might refer to the same machine-specific state in the thread. For + * example, a 32-bit thread's state could be examined from the 32-bit + * view or from the 64-bit view. Either method reaches the same thread + * register state, doing appropriate widening or truncation. + */ +struct user_regset_view { + const char *name; + const struct user_regset *regsets; + unsigned int n; + u32 e_flags; + u16 e_machine; + u8 ei_osabi; +}; + +/* + * This is documented here rather than at the definition sites because its + * implementation is machine-dependent but its interface is universal. + */ +/** + * task_user_regset_view - Return the process's native regset view. + * @tsk: a thread of the process in question + * + * Return the &struct user_regset_view that is native for the given process. + * For example, what it would access when it called ptrace(). + * Throughout the life of the process, this only changes at exec. + */ +const struct user_regset_view *task_user_regset_view(struct task_struct *tsk); + + +#endif /* <linux/regset.h> */ - To unsubscribe from this list: send the line "unsubscribe git-commits-head" in the body of a message to [EMAIL PROTECTED] More majordomo info at
|
https://www.mail-archive.com/git-commits-head@vger.kernel.org/msg35816.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
package org.polepos.circuits.barcelona;

public class B1 extends B0 {

    private int b1;

    public B1() {
    }

    public B_
|
http://kickjava.com/src/org/polepos/circuits/barcelona/B1.java.htm
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Odoo Help
How to get value from input tag in python ?
I've created an input tag with an id and a name, but whatever I try in Python to get it (I tried self.tag_id and self.tag_name) I get an error saying an object like this does not exist.
Below is the code of my XML:
<record model="ir.ui.view" id="training_test_form_view">
<field name="name">Training Test Form View</field>
<field name="model">training.test_online</field>
<field name="priority" eval="5"/>
<field name="arch" type="xml">
<form string="Training Form" create="false"
delete="false">
<sheet>
<field name="question" nolabel="1" readonly="1"/>
<group>
<input id="cliked_1" name="answer" type="radio"> <field name="answer1" nolabel="1" readonly="1"/></input>
<input id="cliked_2" name="answer" type="radio"> <field name="answer2" nolabel="1" readonly="1"/></input>
<input id="cliked_3" name="answer" type="radio"> <field name="answer3" nolabel="1" readonly="1"/></input>
<input id="cliked_4" name="answer" type="radio"> <field name="answer4" nolabel="1" readonly="1"/></input>
</group>
<button name="next" type="object" />
</sheet>
</form>
</field>
</record>
and example code in Python which is giving the error:
@api.one
def next(self):
    print self.clicked1
Do I need to get it via self, via the context, via JS, or how?
Hello,
For this you have to write some JS; from the JS you pass the data to a Python method, which you also call from there, like this:
First, give the button an id in the XML (id="next"):
openerp.module_name = function(instance) {
    var QWeb = openerp.web.qweb;
    _t = instance.web._t;
    instance.web.View.include({
        load_view: function(context) {
            var self = this;
            var view_loaded_def;
            if ($('#next').length == 1) {
                $('#next').click(function() {
                    // here you can read the form values and also call your
                    // python method, passing the values along
                    console.log('your value......', $('#cliked_1').val());
                });
            }
            return self._super(context);
        },
    });
};
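If you then want to hand the chosen value over to the server side, a sketch along these lines could replace the console.log; the use of instance.web.Model and the argument shape are assumptions here, not tested code:

// Hypothetical sketch: call the server-side next() method for one record.
// record_id is illustrative; next() as declared above takes no extra arguments.
var model = new instance.web.Model('training.test_online');
model.call('next', [[record_id]]).then(function (result) {
    console.log('server returned', result);
});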
Hope this helps!
|
https://www.odoo.com/forum/help-1/question/how-to-get-value-from-input-tag-in-python-90983
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
-- |
module Test.Hspec.QuickCheck (
  property
) where

import Test.Hspec.Internal
import qualified Test.QuickCheck as QC

data QuickCheckProperty a = QuickCheckProperty a

property :: QC.Testable a => a -> QuickCheckProperty a
property = QuickCheckProperty

instance QC.Testable t => SpecVerifier (QuickCheckProperty t) where
  it description (QuickCheckProperty prop) = do
    r <- QC.quickCheckResult prop
    case r of
      QC.Success {}           -> return (description, Success)
      f@(QC.Failure {})       -> return (description, Fail (QC.output f))
      g@(QC.GaveUp {})        -> return (description, Fail ("Gave up after " ++ quantify (QC.numTests g) "test"))
      QC.NoExpectedFailure {} -> return (description, Fail "No expected failure")
|
http://hackage.haskell.org/package/hspec-0.3.0/docs/src/Test-Hspec-QuickCheck.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
django-redactoreditor 1.2.7
Integrates the Redactor Javascript WYSIWYG editor with Django.
This package helps integrate the Redactor Javascript WYSIWYG-editor in Django.
Installation
- Pip install: pip install django-redactoreditor (or add the redactor directory to your Python path)
- Add the redactor application to your INSTALLED_APPS setting.
Usage
The redactor app provides a Django widget called RedactorEditor. It is a drop-in replacement for any TextArea widget. Example usage:
from django import forms
from django.db import models

from redactor.widgets import RedactorEditor

class MyForm(forms.Form):
    about_me = forms.CharField(widget=RedactorEditor())
You can also customize any of the Redactor editor’s settings when instantiating the widget:
class MyForm(forms.Form):
    about_me = forms.CharField(widget=RedactorEditor(redactor_settings={
        'autoformat': True,
        'overlay': False
    }))
Django-redactor also includes a widget with some customizations that make it function and look better in the Django admin:
class MyAdmin(admin.ModelAdmin):
    formfield_overrides = {
        models.TextField: {'widget': AdminRedactorEditor},
    }
Finally, you can connect a custom CSS file to the editable area of the editor:
class MyForm(forms.Form):
    about_me = forms.CharField(widget=RedactorEditor(
        redactor_css="styles/text.css")
    )
Paths used to specify CSS can be either relative or absolute. If a path starts with ‘/’, ‘http://’ or ‘https://’, it will be interpreted as an absolute path, and left as-is. All other paths will be prepended with the value of the STATIC_URL setting (or MEDIA_URL if static is not defined).
For the sake of convenience, there is also a form field that can be used that accepts the same inputs. This field can be used anywhere forms.CharField can and accepts the same arguments, but always renders a Redactor widget:
from redactor.fields import RedactorField

class MyForm(forms.Form):
    about_me = RedactorField(
        in_admin=True,
        redactor_css="styles/text.css",
        redactor_settings={'overlay': True}
    )
jQuery
The redactor javascript library requires jQuery 1.9 or better to function. By default, jQuery is included as part of the field and widget media. However, this can cause issues where other widgets or forms on a page are using a different version of jQuery. It is possible to exclude jQuery from the media of the redactor field and widget if you wish to handle JavaScript dependency management yourself:
class MyForm(forms.Form):
    about_me = RedactorField(include_jquery=False)
Internationalization
If you wish to use Redactor in other languages, you only need to specify the lang setting. The correct javascript language files will be loaded automatically:
class MyForm(forms.Form):
    about_me = forms.CharField(widget=RedactorEditor(redactor_settings={
        'autoformat': True,
        'lang': 'es',
        'overlay': False
    }))
Note
This is a change from version 1.2.1, where the javascript language files needed to be specified by the user.
Django-Redactor is licensed under a Creative Commons Attribution-NonCommercial 3.0 license. However, the noncommercial restrictions of the license (e.g., Section 4(b)) are waived for any user who purchases a legitimate commercial license to the redactor.js library. Open source users are still under the noncommercial clause, but legitimate Imperavi license holders are not.
- Author: James Stevenson
- License: CC licence, see LICENSE.txt
- Categories
- Package Index Owner: mazelife
- DOAP record: django-redactoreditor-1.2.7.xml
|
https://pypi.python.org/pypi/django-redactoreditor/1.2.7
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
I'm making a game that uses Farseer Physics, and the XNA.Framework.Content. So far to get the physics engine and XNA content pipeline to work together, I've added quite a few [ContentSerializerIgnore] attribute tags around the XNA3 branch. I'm also keeping
with the latest release of Farseer, and doing a daily update. My question is if the XNA3 branch could include tags to make its objects intermediate-serializer safe for everyone?
If we included the ContentSerializerIgnore attribute, we would have a dependency on the XNA framework. But because it is an attribute (simply just a class that inherits from Attribute) we might be able to include it.
At the same time, it would be great if there were support for standard serialization (not using the content pipeline). Would you be willing to contribute this to the project?
Standard serialization would be cool! However, I haven't gone too far into using it.
So far, I've only made adjustments to Geom.cs, Body.cs, and GenericList.cs. I've added:
#if(XNA)
[ContentSerializerIgnore]
#endif
For all delegates, and under #if (XNA), using Microsoft.XNA.Framework.Content;. I've written readers and writers using Microsoft.XNA.Framework.Content, so I do not think it will break the standard library.
I'd be more than happy to contribute to the project. Putting attribute tags for the content pipeline probably won't take too long, as there's only about 8 of them. If you like, I could also put in XNA readers and writers (I just got done coding mine).
I'm not too sure, but for standard XML serialization the [XmlIgnoreAttribute] tag will cause the standard serializers to ignore the specific member. It will compile with the [ContentSerializerIgnore] attribute below it, so I think it should work with both.
Before I started hacking at the farseerphysics.dll, I created custom, separate classes, which contained the same values as the body and geom, but it's harder to maintain. This would make it way easier! :)
I've also gone a small part of the way with Intermediate Serializer stuff. Rather than editing the engine code, I created classes for PhysicsSimulatorContent, BodyContent and GeomContent. It was only really a proof of concept idea, and it was pretty messy
with the Geom's storing a Body ID to look up the actual Body at runtime. I ended up scrapping the idea of serializing the whole Sim. How would you get your game objects' references assigned to the correct Geom.Tag anyway? Now I think I will have something
like this for my physics object's content classes:
PhysicsObjectContent
|- List<LinkContent>
|- List<PhysicsElementContent>
|- BodyContent,
|- List<GeomContent>
Where LinkContent is a class with all the information for creating Joints and Springs (LinkType, elementId1, elementId2, anchorPoint1, anchorPoint2 etc).
I don't see any reason why these content classes wouldn't be usable with both XmlSerializer and IntermediateSerializer.
@daswampy: Great. If you could download the latest source code checkin, add the attributes and send it as a patch in our patch section (found under source control) it would be much appreciated.
Standard .NET serialization is just adding [Serializable] to the classes that can be serialized. If some properties/fields do not need to be serialized, put a NonSerialized attribute above them to make the serializer ignore them. (Remember, binary serializers even serialize private fields.) The XML serializer uses the XmlIgnoreAttribute (it only serializes public fields and properties) to mark a member as non-serialized.
Working with the XML serializer is the easiest in our case because we don't need to serialize all the private fields and properties. Everything needed is public (I think). To get the binary serializer to work correctly too, it might be easiest to implement
the ISerializable interface and tell it to only serialize the relevant data.
An example on how that is done, can be found
here.
Hmm, so it looks like there are many ways to serialize the data.
So, which one will the patch be for?
The best way to test that it works is to write a serializer for it, so whichever way, there should be a sample in the AdvancedSamples.
I was not aware that binary serialization is not supported on Xbox. I guess we have to go with XML serialization only then.
But, that also makes it a lot easier :)
All public properties that should not be serialized should be attributed in the following manner:
#if(XNA)
[ContentSerializerIgnore]
#endif
[XmlIgnore]
public SomeType SomeProperty { get; set; }
That should do it. It would be great if you could create a test to see if it works. To smack two flies with one hit, it would be awesome if you made a simple sample for inclusion in AdvancedSamples. Just a simple "Load simulation", "Start
Simulation", "Save Simulation" demo. All the tools are there to create such a sample.
@roonda
I keep all geoms and bodies encapsulated in a physics object. The physics object is kept inside an array. The array is kept in a wrapper. The wrapper is kept in Unit Manager. The unit manager is kept in an arbiter.
I made a struct called Tag; it contains an enum and an int. The tag stores information about which unit manager it belongs to, and what its index is inside the array wrapper (which is inside the unit manager). When a class wants the complete unit (which uses
the physics object as a base class), based off one geom, it gives the arbiter the geom's tag, and the arbiter returns the unit (the arbiter can see all unit managers).
I hope that made sense.
The thing with System.Runtime.Serialization, is that some of the classes are supported on the xbox 360, some aren't. I'm going through the MSDN documentation for it right now. I'm going to ask in the xna forums what is supported and what isn't supported.
Found it:
What about Zune? I know the XNA serializer doesn't work on it; as far as I know only System.Xml works. I think serialization would speed up the loading of my game, since right now it takes an unacceptable 1-2 minutes.
XML serialization is for loading and unloading xml documents. So if you have 5 different enemies, you can have 5 different xml documents which define the properties of the enemies. During run-time, the game will load the xml documents into the enemies. Taking
1-2 minutes to load a game might be due to something else.
I looked a little bit on MSDN and XNA forums to see what the work around would be for Zune xml, turns out that using XNA content pipeline will work for zune. During run-time, the zune will need access to the ContentReader. The ContentReader is used whenever
you do:
Unit unit = game.ContentManager.Load<Unit>("assetLocation");
On MSDN, if you look at the bottom, and around the Microsoft.XNA.Framework.Content, it's supported on the Zune platform. Now, the actual content
pipeline itself, with the xml writers, is definitely not supported on the Zune. This is because Visual Studio writes the xml files to binary .xnb during compile time. The compiler also checks between the readers, writers, and xml documents to make sure that
the xml tags match.
Since the content pipeline, in my mind, is just a fancy way of loading resources, and most of its features are not really something FPE will gain from (features like load once from disk and cache the rest of the time might be worth something), I think that simple XML serialization of the physics engine together with its bodies, geometries, joints, springs and controllers would be a great thing.
Simply saving the state of the whole world in an XML file would make it easier for people to create save/load functions in their game. Having the basic structure in place also makes it possible to extend with binary serialization and send copies of the world
(physics simulator along with dynamics) over the network.
I hope you come up with a solution that works on most platforms. If you come up with a solution in the next few days, I might be able to squeeze it into 2.1.
Don't wait for me for 2.1. I'm going to teach myself System.Xml (it's really just going through some tutorials) over the next few days, while continuing development on my project. I'm not going to port my XNA pipeline solution over until after the standard
xml solution is written.
Probably the best thing would be to write a tool that will allow people to create and save physics objects to xml, and have classes built into farseerphysics.dll to read in the saved pre-created physics objects. Oh my, that's a lot more work than just adding
tags so people can write it themselves :).
I've used this simple generic XML serializer in some of my previous projects:
If you write down the tags the correct places, I will make sure those tags gets used. For now, we only implement standard XML serialization - any XNA pipeline stuff can come later.
Edit: A heavily updated implementation of the generic XML serializer I might add, but it has the fundamental idea.
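For reference, a minimal sketch of such a generic XML serializer; the names are illustrative and this is not the code behind the (now missing) link:

using System.IO;
using System.Xml.Serialization;

public static class XmlHelper<T>
{
    public static void Save(string path, T value)
    {
        // Serialize any public-member type to an XML file.
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        using (Stream stream = File.Create(path))
            serializer.Serialize(stream, value);
    }

    public static T Load(string path)
    {
        // Deserialize the same type back from disk.
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        using (Stream stream = File.OpenRead(path))
            return (T)serializer.Deserialize(stream);
    }
}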
I was tired of manually updating the ContentTypeReaders/Writers for my game, so I just wrote a generic class that takes advantage of the Intermediate Serializer:
If it works on all XNA Platforms (awaiting answer from XNA Forums), I'm going to submit this as the XNA Serializer, then get to work on a .net standard XML serializer. If it doesn't, then it's great for development purposes, when classes are always changing
etc. Maybe someone could put it to good use. It just relies on using a folder directory set-up to be exactly the same as your namespacing.
Basically, it does everything automatically for you :), just need to drop in some tags.
/// <summary>
/// Generic class to deserialize any object based on namespace name
/// Must have a matching content directory tree to match full namespace
/// </summary>
/// <typeparam name="T"></typeparam>
public static class ObjectSerialization<T> where T : new()
{
private static StringBuilder loc = new StringBuilder(System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location));
private static int directorylength = loc.Length - 14;
static public void Serialize(string assetName)
{
T testData = new T();
XmlWriterSettings settings = new XmlWriterSettings();
settings.Indent = true;
settings.ConformanceLevel = ConformanceLevel.Auto;
SetCurrentLocation(assetName);
using (XmlWriter writer = XmlWriter.Create(loc + ".xml", settings))
{
IntermediateSerializer.Serialize(writer, testData, loc.ToString());
}
}
static public T Deserialize(string assetName)
{
SetCurrentLocation(assetName);
//Create the xml reader
using (XmlReader reader = XmlReader.Create(loc.ToString()))
{
//Deserialize the data
return IntermediateSerializer.Deserialize<T>(reader, ".\\");
}
}
private static void SetCurrentLocation(string assetName)
{
//Chop off "bin\x86\Debug" and anything else from the end
loc.Remove(directorylength, (loc.Length - directorylength));
//Set the location to the correct directory
loc.Append("\\content\\" + typeof(T).FullName);
//Using namespaces, so change the . to a \\
loc.Replace(".", "\\");
//Add the assetname
loc.Append("\\" + assetName + ".xml");
}
}
Sent patch for the tags and this previous copy pasted class. I'm not sure if you wanted the body and geom serialized in the springs and joints, so that's up to you.
Going to write a demo screen which will use system.xml next.
My team and I have a fully working set using the 3rd method that works on Xbox, PC and the Zune perfectly, which we have integrated into Farseer.
We will be submitting a patch sometime in the next week or so
They already applied the patch for the [ContentSerializerIgnore]/[XmlIgnore]. It works great now. Been using the IntermediateSerializer until XNA 3.1 comes, which will have automated .xnb serialization. When I submitted the patch, I forgot to add tags to
IsStatic in geom.cs. For IsStatic, it requires that the body be instantiated, which it isn't during the deserialization of a geom. I'll submit another patch w/ just that fix.
IMO, The only editing which should be done to farseer is just tags, no readers/writers/anything. Anything else should be put into an advanced demo screen.
Nonetheless, I look forward to seeing your implementation for it.
|
http://farseerphysics.codeplex.com/discussions/58138
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Dan Harmon says
Interesting problem. There are two questions to be answered: which wire in the box is hot and which is neutral, and which one on the swag fixture is which.
This sounds like the old knob-and-tube wiring from decades ago, used when they didn't care much which wire went where, although the plastic insulation is puzzling. The problem is that the large threaded part of the light bulb can become hot if it is not wired to the neutral, and it is all too easy to touch when changing a bulb. I would like to see you get this right and prevent any future shocks.
We will probably need some back and forth questions and answers, though, and I don't think that is possible in this part of HubPages. If you would either go to my hub on wiring a new light fixture (... or email me through the address in my profile I will be able to ask for more information and answer that way. Simply copy and paste your question into the comment box on the hub or into the email.
I'm guessing that you will need either a non-contact voltage tester (preferable) or a voltmeter of some kind to find the proper wires. Is such a thing available?
sheerplan says
First off I have to say: if you are unsure of any electrical connections then please call a licensed electrician. Electricity kills.
With what you have described (I do not have any pictures to go by) it sounds as if the two wires twisted together may be the neutral wires. The single white wire could be the switch-leg return (hot). But it would be hard to say without seeing it. They should be checked with a multimeter.
Take a close look at the insulation on the wires of the fixture - is there a small black stripe? If so, this is the hot.
If there isn't a stripe look closely at the braided wires. Is one wire colored gold and the other silver? The gold colored wire is the hot. I hope this helps.
framistan says
You can use a "NEON TEST LIGHT" to tell you which terminal is HOT. They only cost a couple of dollars. The picture I uploaded is a sample; they come in various styles. Some of them are built into a regular screwdriver.
|
http://hubpages.com/living/answer/105157/how-do-i-wire-old-ceiling-light-where-all-wires-are-the-same-color-in-the-light-and-ceiling-box
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
I/O and Streams
Need help with simulating unix command in java
david colais
Greenhorn
Posts: 29
posted 4 years ago
Hi,
I have a unix command as follows:
Command : nawk -F, '{x=$4;gsub(" ", "", x);print >x".dat"}' $proj_file
The file passed to it is a .txt file with over 20 columns and over a million rows.
The above command will be performing the action as pasted below.
This command is performing all the work but it is taking a long time.
So I am considering doing this in Java.
How do I achieve the same?
Description of the unix command:
I'm reading a file $proj_file (a variable set elsewhere in my script).
-F, sets "," as the field delimiter
x=$4 assigns the value of the fourth comma-delimited field to the variable "x"
gsub performs a global substitution against the string assigned to x. Basically it eliminates all spaces by changing them to the empty string.
print without further specification writes out the complete input record
> redirects to an output file
x".dat" is the output file, whose name is constructed from the variable "x" (remember: the fourth field of your input record, but with spaces eliminated) and a suffix ".dat".
Let's say I have an input record like this:
123, 456, some more, this is it, rest of, record
My statement will write the complete input line to a file named "thisisit.dat"
Each line containing "this is it" will go to the same output file (awk always appends),
lines containing other data will go to other output files according to the name construction rules above.
In other words, the command distributes the content of an input file to one or several output files, depending on the content of a particular column in the input data.
Right now I have Java code which does the same work in about 27 minutes:

import java.io.*;
import java.util.*;
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class JAwk {
    public static void main(String[] args) throws IOException {
        BufferedReader s = null;
        try {
            String csvFile = "CSVFILE.TXT";
            final int BUFFER_SIZE = 1 << 20 << 2; // 4MiB
            s = new BufferedReader(new FileReader(csvFile), BUFFER_SIZE);
            String line = null;
            while ((line = s.readLine()) != null) {
                String[] atoms = line.split("\\s*,\\s*", 5);
                String fileName = atoms[atoms.length - 2].replaceAll(" ", "");
                PrintWriter out = new PrintWriter(new FileWriter(String.format("%s.dat", fileName), true));
                out.println(line);
                out.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (s != null) {
                s.close();
            }
        }
    }
}

Please, can anyone suggest how to speed up the execution of this program?
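The dominant cost in the listing above is opening and closing a new PrintWriter for every single input line. A minimal sketch of the usual remedy, caching one buffered writer per output file in a map (class and variable names are illustrative), looks like this:

import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class JAwkCached {
    public static void main(String[] args) throws IOException {
        Map<String, PrintWriter> writers = new HashMap<String, PrintWriter>();
        BufferedReader in = new BufferedReader(new FileReader("CSVFILE.TXT"));
        String line;
        while ((line = in.readLine()) != null) {
            String[] atoms = line.split("\\s*,\\s*", 5);
            String fileName = atoms[atoms.length - 2].replaceAll(" ", "") + ".dat";
            PrintWriter out = writers.get(fileName);
            if (out == null) {
                // Open each output file exactly once and keep it for reuse.
                out = new PrintWriter(new BufferedWriter(new FileWriter(fileName, true)));
                writers.put(fileName, out);
            }
            out.println(line);
        }
        in.close();
        for (PrintWriter out : writers.values()) {
            out.close(); // flush and release every cached writer
        }
    }
}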
|
https://coderanch.com/t/559827/java/simulating-unix-command-java
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Keith Barrow wrote:Half of Punt and Denis is doing an excellent Eric Idle impression.
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
_Maxxx_ wrote:Is this the beginning of the 'dumbing down' of programming?
_Maxxx_ wrote:methods called 'Update2Database' or properties of 'Discount4Customer'
_Maxxx_ wrote:methods called 'GiveDiscount ' and 'DeductTax
OriginalGriff wrote:the remake of the original is shaping up well from the two movies so far
mark merrens wrote:Star Trek probably wasn't a documentary
delete this;
OriginalGriff wrote:But Voyager was good as well - once they dumped Kes
mark merrens wrote:the 2nd pilot for Star Trek
|
http://www.codeproject.com/Lounge.aspx?fid=1159&df=90&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=None&select=4484889&fr=2808
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Diffie Hellman key exchange
Overview
The following code snippet demonstrates the Diffie-Hellman key exchange, a cryptographic algorithm that allows two parties to jointly establish a shared secret key over an insecure communication channel. The Diffie-Hellman method lets both sides arrive at the same shared key, and you can read more here.
Header Required
#include <cryptoasymmetric.h>
Library Required
LIBRARY cryptography.lib
Following are the steps involved in this algorithm:
1) Create two objects of RInteger class(a TInteger derived class allowing the construction of variable length big integers) each for a prime Number 'P'& a generator':
RInteger PrimeNum = RInteger::NewPrimeL(1024)//pass here the number of bits of prime number you wish to generate e.g. passing 1024 will generate a 1024 bit prime number.
RInteger Generator=RInteger::NewL(5); // generator is generated by passing a constant which is usually 2 or 5
2) Create an object of CDHKeyPair class using the RInteger objects created in Step 1 :
TRAPD(err,iDHKeyPair = CDHKeyPair::NewL(PrimeNum ,Generator));
3) Next get the 'G'(generator) & 'N'(Prime Number) parameters of Diffie Hellman using the CDHkeyPair object created in Step 2 :
const TInteger& G =iDHKeyPair->PublicKey().G();
const TInteger& N =iDHKeyPair->PublicKey().N();
//Get the Prime Number in a buffer as below :
HBufC8 *PrimeBuffer=N.BufferLC();
TPtr8 PtrPrime=PrimeBuffer->Des();
//Also get the DH value 'x'(a random large integer) as:
const TInteger& xparam =iDHKeyPair->PrivateKey().x();
4) Next, generate the DH public value/parameter (PublicVal), which is to be exchanged with the other party, as below:
const TInteger& PublicVal = (G.TimesL(xparam)).ModuloL(N);
HBufC8 *Buffervalue = PublicVal.BufferLC();
TPtr8 NewPublicVal = Buffervalue->Des();
5) Now send the prime number generated in Step 1 (and extracted into a buffer in Step 3) to the other party by whatever means you wish.
6) The receiving party (which receives the prime number) now repeats Steps 1 to 4, with Step 1 as below:
RInteger PrimeNumReceiver = RInteger::NewL(/* const TDesC8&: the prime number received from the sender */);
RInteger GeneratorReceiver = RInteger::NewL(5);
//Rest of the steps will be same on receiver side but will require the use of
//received prime number where it is required.
7) Now there will be two public values/parameters generated, one at the sender's end and one at the receiver's end, both generated using the same prime number. So now exchange the two public values/parameters, i.e. the sender sends its value to the receiver and vice versa.
8) Now that the public values have been exchanged, the two parties generate a common/shared secret at their ends:
RInteger PrivateValue = RInteger::NewL(/* const TDesC8&: the private parameter 'x' generated on each side in Step 3 */);
RInteger PrimeNum = RInteger::NewL(/* const TDesC8&: the common prime number */);
RInteger ReceivedPublic = RInteger::NewL(/* const TDesC8&: the exchanged public value */);
//The common/Shared secret key will be generated at each end as below:
const TInteger& SharedKey = (ReceivedPublic.TimesL(PrivateValue)).ModuloL(PrimeNum);
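Both ends arrive at the same SharedKey because each side applies the same combining operation to the other side's public value using its own private value; the two private contributions enter symmetrically, so the results match.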
- The shared secret key generated above will be the same at both ends (sender and receiver), and the two parties can now use it however they want, for example as the encryption/decryption key in any symmetric cryptographic algorithm.
|
http://developer.nokia.com/community/wiki/index.php?title=Diffie_Hellman_key_exchange&oldid=162617
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
On a side note, the Bluetooth module has a red LED which doesn't stop flashing. Should I expect the flashing to stop at some point if things are working properly?
Is it possible this isn't going to work because my bluetooth module is "Master-Only"?
#include <WProgram.h>

void setup()
{
    Serial.begin(9600);
    // bluetooth serial:
    Serial1.begin(9600);
}

void loop()
{
    while (Serial1.available()) {
        // get char from bluetooth
        char inChar = (char) Serial1.read();
        // output to serial monitor
        Serial.println(inChar);
    }
    Serial1.println("from Arduino");
    delay(2000);
}
|
http://forum.arduino.cc/index.php?topic=97971.msg745780
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
1 Nov 20:07 2000
statically nested scopes
Jeremy Hylton <jeremy <at> alum.mit.edu>
2000-11-01 19:07:10 GMT
Title: Statically Nested Scopes
Author: Jeremy Hylton <jeremy <at> digicool.com>
Status: Draft
Type: Standards Track
Created: 01-Nov-2000

Abstract

This PEP proposes the addition of statically nested scoping (lexical scoping) for Python 2.1. The current language definition defines exactly three namespaces that are used to resolve names: the local, global, and built-in namespaces. The addition of nested scopes would allow resolution of unbound local names in enclosing functions' namespaces.

One consequence of this change that will be most visible to Python programs is that lambda statements could reference variables in the namespaces where the lambda is defined. Currently, a lambda statement uses default arguments to explicitly create bindings in the lambda's namespace.

Notes

This section describes several issues that will be fleshed out and addressed in the final draft of the PEP. Until that draft is ready, please direct comments to the author.

This change has been proposed many times in the past. It has always been stymied by the possibility of creating cycles that could not be collected by Python's reference counting garbage collector. The addition of the cycle collector in Python 2.0 eliminates this concern.

Guido once explained that his original reservation about nested scopes was a reaction to their overuse in Pascal. In large Pascal programs he was familiar with, block structure was overused as an organizing principle for the program, leading to hard-to-read code.

Greg Ewing developed a proposal "Python Nested Lexical Scoping Enhancement" in Aug. 1999. It is available from

Michael Hudson's bytecodehacks project at provides facilities to support nested scopes using the closure module.

Examples:

    def make_adder(n):
        def adder(x):
            return x + n
        return adder

    add2 = make_adder(2)
    add2(5) == 7

    from Tkinter import *
    root = Tk()
    Button(root, text="Click here",
           command = lambda : root.test.configure(text="..."))

One controversial issue is whether it should be possible to modify the value of variables defined in an enclosing scope.

One part of the issue is how to specify that an assignment in the local scope should refer to the definition of the variable in an enclosing scope. Assignment to a variable in the current scope creates a local variable in the scope. If the assignment is supposed to refer to a global variable, the global statement must be used to prevent a local name from being created. Presumably, another keyword would be required to specify "nearest enclosing scope."

Guido is opposed to allowing modifications (need to clarify exactly why). If you are modifying variables bound in enclosing scopes, you should be using a class, he says.

The problem occurs only when a program attempts to rebind the name in the enclosing scope. A mutable object, e.g. a list or dictionary, can be modified by a reference in a nested scope; this is an obvious consequence of Python's reference semantics. The ability to change mutable objects leads to an inelegant workaround: If a program needs to rebind an immutable object, e.g. a number or tuple, store the object in a list and have all references to the object use this list:

    def bank_account(initial_balance):
        balance = [initial_balance]
        def deposit(amount):
            balance[0] = balance[0] + amount
        def withdraw(amount):
            balance[0] = balance[0] - amount
        return deposit, withdraw

I would prefer for the language to support this style of programming directly rather than encouraging programs to use this somewhat obfuscated style.
Of course, an instance would probably be clearer in this case.

One implementation issue is how to represent the environment that stores variables that are referenced by nested scopes. One possibility is to add a pointer to each frame's statically enclosing frame and walk the chain of links each time a non-local variable is accessed. This implementation has some problems, because access to nonlocal variables is slow and causes garbage to accumulate unnecessarily.

Another possibility is to construct an environment for each function that provides access to only the non-local variables. This environment would be explicitly passed to nested functions.
|
http://permalink.gmane.org/gmane.comp.python.devel/25058
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
User:Modusoperandi/PLS
From Uncyclopedia, the content-free encyclopedia
{{:User:Electrified mocha chinchilla/PLS/Main}}
Consult the talk page or me for any questions or concerns. See the Poo Lit Archives for the results from past competitions.
The competition
What is the Poo Lit Surprise?
A writing competition held biannually. It is designed to jump-start writing quality at Uncyclopedia. As with the previous six competitions, some restrictions apply. Due to community consensus, judges are now barred from entering the competition (see here for the list of judges). Non-noobs (people who've been here longer than 3 months) cannot enter the "Best Article By a Noob" category, and noobs are not restricted to the "Best Article By a Noob" category.
- From 9th March ― 22nd March, entries will be accepted.
- From 23rd March ― 5th April, entries will be locked and judged.
- From 6th April ― 12th April, winners will be announced, articles will be moved into the mainspace, and all entries will be unlocked.
- 13th April ― we will all get on with our lives.
(n.b.: all times are to be measured by UTC, and all phases of the contest end at midnight on the specified day; entries may be accepted late under certain conditions)
Where should I put my entry?
The article should be placed in your namespace between 9th March ― 22nd March.
If you're paranoid, or afflicted with OCD, or just want to tell people to shove off until after the PLS, add {{PLS-WIP}} to your entry.
|
http://uncyclopedia.wikia.com/wiki/User:Modusoperandi/PLS?oldid=5010836
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
In this section we are going to discuss how to reverse a string in Java. There are many ways to reverse a string. The Java API provides the StringBuffer and StringBuilder reverse() methods, which are the easiest way to reverse a string in Java, but in most interviews you will be asked to reverse a string without the StringBuffer and StringBuilder methods. For that we will do it the recursive way as well as with a loop. Here is an example of how to reverse a string in Java.
public class Stringreverse {
    public static void main(String args[]) {
        String word1 = " Welcome to Rose India";
        String reverse = "";

        // Using StringBuffer
        System.out.println("Using String Buffer ");
        String reverses = new StringBuffer(word1).reverse().toString();
        System.out.println("Original String is = " + word1);
        System.out.println("Reverse String is = " + reverses);

        String word2 = "Hello everybody";
        System.out.println();

        // Using StringBuilder
        System.out.println("Using String Builder ");
        reverse = new StringBuilder(word2).reverse().toString();
        System.out.println("Original String is = " + word2);
        System.out.println("Reverse String is = " + reverse);
        System.out.println();

        // Iterating with a loop
        System.out.println("Using Loop ");
        String word3 = "Rose India";
        String rev = "";
        for (int i = word3.length() - 1; i >= 0; i--) {
            rev = rev + word3.charAt(i);
        }
        System.out.println("Original String is = " + word3);
        System.out.println("Reverse String is = " + rev);
    }
}
Output: after compiling and executing the above program.
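The section above also promises a recursive way; a minimal sketch of that approach (the method name is illustrative, not part of the original listing) is:

public static String reverseRecursive(String s) {
    // Base case: empty and one-character strings are their own reverse.
    if (s.length() <= 1) {
        return s;
    }
    // Reverse the tail, then append the first character at the end.
    return reverseRecursive(s.substring(1)) + s.charAt(0);
}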
|
http://www.roseindia.net/java/beginners/index/string-reverse-in-java.shtml
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
sem_init, sem_wait, sem_trywait, sem_post, sem_getvalue, sem_destroy - operations on semaphores
#include <semaphore.h>
int sem_init(sem_t *sem, int pshared, unsigned int value);
int sem_wait(sem_t * sem);
int sem_timedwait(sem_t * sem, const struct timespec *abstime);
int sem_trywait(sem_t * sem);
int sem_post(sem_t * sem);
int sem_post_multiple(sem_t * sem, int number);
int sem_getvalue(sem_t * sem, int * sval);
int sem_destroy(sem_t * sem);
Pthreads-w32 currently does not support process-shared semaphores, thus sem_init always returns with error EPERM if pshared is not zero.
sem_wait atomically decrements sem's count if it is greater than 0 and returns immediately or it suspends the calling thread until it can resume following a call to sem_post or sem_post_multiple.
sem_timedwait atomically decrements sem's count if it is greater than 0 and returns immediately, or it suspends the calling thread. If the abstime time arrives before the thread can resume following a call to sem_post or sem_post_multiple, then sem_timedwait returns with a return code of -1 after having set errno to ETIMEDOUT. If the call can return without suspending then abstime is not checked.
sem_trywait atomically decrements sem's count if it is greater than 0 and returns immediately, or it returns immediately with a return code of -1 after having set errno to EAGAIN. sem_trywait never blocks.
sem_post either releases one thread if there are any waiting on sem, or it atomically increments sem's count.
sem_post_multiple either releases multiple threads if there are any waiting on sem and/or it atomically increases sem's count. If there are currently n waiters, where n is the largest number less than or equal to number, then n waiters are released and sem's count is incremented by number minus n.
sem_getvalue stores in the location pointed to by sval the current count of the semaphore sem. In the Pthreads-w32 implementation: if the value returned in sval is greater than or equal to 0, it was the sem's count at some point during the call to sem_getvalue. If the value returned in sval is less than 0, then its absolute value represents the number of threads waiting on sem at some point during the call to sem_getvalue. POSIX does not require an implementation of sem_getvalue to return a value in sval that is less than 0, but if it does then its absolute value must represent the number of waiters.
sem_destroy destroys a semaphore object, freeing the resources it might hold. No threads should be waiting on the semaphore at the time sem_destroy is called.
sem_wait and sem_timedwait are cancellation points.
These routines are not async-cancel safe.
All semaphore functions return 0 on success, or -1 on error in which case they write an error code in errno.
The sem_init function sets errno to the following codes on error:
pshared is not zero
The sem_timedwait function sets errno to the following error code on error:
if abstime arrives before the waiting thread can resume following a call to sem_post or sem_post_multiple.
The sem_trywait function sets errno to the following error code on error:
if the semaphore count is currently 0
The sem_post and sem_post_multiple functions set errno to the following error code on error:
The sem_destroy function sets errno to the following error code on error:
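As an illustration (not part of the original manual page), a minimal producer/consumer sketch using these calls, with error handling omitted:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t items;              /* counts produced items */

static void *consumer(void *arg)
{
    sem_wait(&items);            /* blocks until the producer posts */
    printf("consumed one item\n");
    return NULL;
}

int main(void)
{
    pthread_t t;

    sem_init(&items, 0, 0);      /* private semaphore, initial count 0 */
    pthread_create(&t, NULL, consumer, NULL);
    sem_post(&items);            /* release the waiting consumer */
    pthread_join(t, NULL);
    sem_destroy(&items);
    return 0;
}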
Xavier Leroy <Xavier.Leroy@inria.fr>
Modified by Ross Johnson for use with Pthreads-w32.
pthread_mutex_init(3) , pthread_cond_init(3) , pthread_cancel(3) .
|
http://www.sourceware.org/pthreads-win32/manual/sem_init.html
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
Minutes of 20 Feb 2002 and 13 March 2002 approved as posted.
-- On Email binding appendices: MarkB sees the current email binding (RFC 2822) as insufficient in our attempt to exercise our binding framework. Noah does not share MarkB's concerns; he thinks the current Email binding is a usable binding, and it shows a "second binding of SOAP". DavidF: we'll set up a conference call on this topic.
-- Primer: incorporated comments. -- Spec: the editors have done a lot of work, expect to publish a snapshot on Friday and to clear the ed to-do list plus most of the issues we might resolve today. -- TBTF: nothing to report. -- Conformance: Oisin sent the report in his regrets email, and DavidF read the report: There is no new version on the website, there will be one Friday morning EST. This version will contain additional assertions, and have removed obsolete ones. Trying to coordinate with IBM and Microsoft and Soapbuilders list. -- Usage Scenarios: waiting on example for S10. -- Requirements doc: ok -- Email binding: setting up the meeting to resolve MarkB's concerns.
in other words there is an instantiation of option B ("Introduce today an abstract attachment 'binding feature', but defer 'implementation' of this feature to other specifications/notes, such as, for example SOAP+Attachment or DIME.") from the list of options proposed [13] for resolving issue 61, "external payload reference". At its meeting this week, the TBTF briefly discussed the attachment feature proposal and with a few reservations they agreed with it. The purpose of this agenda item is to decide whether the WG agrees to (i) adopt option B, and (ii) ask the TBTF to come up with a refined attachment feature proposal in the very near future.

DavidF: we have a proposal for introducing an abstract concept of an attachment and deferring the concrete implementations of this feature. There were some comments on the lists and TBTF concall, mostly agreement.
PaulC: why do we do this? SOAP with Attachments has proven it can be done without our explicit help.
DavidF: there has been a feeling that SOAP 1.2 should acknowledge attachments. The proposal gives us a hook so that later on we could do a concrete implementation. This is also a small piece of work.
PaulC: still, we are doing more than we must.
Noah: I've had a slightly different proposal on xml-dist-app.
[scribe disconnected, reconnected]
DavidF: we probably should now send this back to the TBTF and to the mailing list.
JohnI: we may task the TBTF to give us something next week.
Noah: the TBTF should copy the initial discussion email to the public list.
PaulC: Is it the opinion of the WG that we cannot go to Last Call without solving this?
Jean-Jacques: There is currently an issue closing issue 61. The proposal would enable us to close issue 61. The proposal involves only minor modifications to the HTTP binding. It provides a hook that specs like DIME or S+A could use to add proper support for attachments.
DavidF: We will move this issue back to the TBTF and public discussion for a week, then we'll revisit the topic.
PaulC: I have no objection to waiting a week.
7. Regarding the rewrite of Part 2 sections 2 and 3, is this the right direction?

Asir: this is not changing any functionality, so why do we have to do this for Last Call?
Noah: this is much crisper than the last version...
Jacek: I have sent some comments that I think should be replied to by the authors and probably also seen by the WG.
Gudge: I'll reply when I get to it, I've been busy.
DavidF: we've had external comments that liked the rewrite extremely much.
Asir: we should also have had an appendix with examples and it does not seem to be there.
Gudge: The appendix is in there in a rough form. It was meant to provide explanation on how XML Schema could be used with SOAP Encoding data; if you were expecting any examples on Encoding, that's not what was meant to go in the appendix.
Asir: why do we move from XML Schema?
Gudge: we get a better layering - moving typing to a higher layer, possibly with a different type system than that provided by XML Schema.
Noah: This rewrite greatly clarifies our relation to XML Schema, removing many loose ends, particularly with respect to validation.
Asir: if this is only about validation, can't we only add a comment saying validation is not required? Or just required for builtin simple types?
Noah: that might have been the other way, but SOAP 1.1 was never saying explicitly that it does require validation of simple types, which is the only real reference to XML Schema.
DavidF: we don't want to reopen the issue. Asir, are you suggesting we go back to the old text?
Asir: I was only asking for clarification. I'm not saying we should go back; maybe we don't need to do that before Last Call. No objections to incorporating it now if we have time to review it before LC.
DavidF: there will be time to review the rewrite before submitting the specs to Last Call.

9. Issues having proposals that impact spec and other texts

-- Issue 41
DavidF: we have two proposed solutions, solution 2 being commented on as the way to go.
Amr: we also have an amended proposal in
Henrik: it is fine except it shouldn't suggest we plan to add any such extension ourselves.
Noah: I'm mostly OK with it, but it suggests that it's true that the target URI should be in the envelope. We should be explicit that in some cases it may be needed there, in other cases not.
Chris: we have raised this issue with the WS Architecture group.
DavidF: but we aren't in the position to wait for them to tell us whether or not they are going to handle this.
DavidF: proposal: resolve issue 41 by accepting the amended text (pointed to by Amr) without the square-bracketed text, which would be mentioned in the closing text.
HFN: suggestion that we just send the whole text to xmlp-comments as the closing text, and change nothing in the spec.
DavidF: revised proposal: no change to the spec, amended proposal text going into xmlp-comments as closing text. No objections raised.
DavidF: issue 41 is closed with the revised proposal. Henrik will submit xmlp-comment text.

-- Issue 189
Noah: for SOAP in general there is no issue, the HTTP binding should be clear on the XML version it uses.
Henrik: The current spec (part 1) says that while the examples use XML 1.0, it's not really mandatory as we're based on infoset.
DavidF: any objection to closing 189 by referencing this text?
Noah: I have some editorial comments which I will give to the editors.
No objections raised.
DavidF: Issue 189 is closed with the proposed text. JeanJacquesM will send xmlp-comment text.
-- Issue 186
DavidF: the proposal is that we provide explicit text specifying uniqueness constraints on the attributes ref and id. Is there any objection to closing the issue with this proposal?
HFN: aren't we duplicating some external work?
Noah: XML Schema doesn't apply, DTDs we don't want; any other external work? We could say "these attributes have the semantics as if a DTD was there."
Jacek: this might interfere with other application attributes named id and ref, wouldn't it?
Gudge: no, not really, we'd say this only for SOAP Encoding id and ref attributes.
DavidF: again, any objection to the proposed text?
No objections raised.
DavidF: issue 186 closed with the original proposed text. MartinG will send text to xmlp-comment.

-- Issue 176
Proposal: we'll clarify the rules on how some particular "important" information items can or can not be changed. This has changed slightly with the resolution to 137. Specifically, it clarifies what a SOAP receiver and sender MUST and MUST NOT do with respect to some information items in some places.
Noah: it might need to be changed with some cleanups (DTD related) we make.
DavidF: any objections?
No objections raised.
DavidF: issue 176 is closed with the proposal. Henrik to send text to xmlp-comments.

-- Issue 187
Noah: the bindings should describe some kind of a binding failure, and a badly formed SOAP message would be such (others being transmission link breakage etc.). We'll have to introduce a broad model for binding failures. It should affect the binding framework and the state machines. TBTF should come up with a good answer to the broader issue and we'll resolve 187 as a special case of that.
DavidF: any objections to moving this to the TBTF?
No objections raised.
DavidF: we will send this issue back to the TBTF to make a proposal on failure description.

-- Issue 167
DavidF: it is proposed that we issue a health warning.
Gudge: this issue should be covered by the rewrite of Part 2, sections 2 and 3.
No objections raised to this resolution, with the understanding that Gudge will check 167 is indeed covered by the rewrite.

-- Inconsistencies in versioning model (from the agenda addendum)
Henrik: VersionMismatch is on the namespace URI, the Upgrade header has the whole root element QName. Proposal: we'll check everything on QNames. This carries a lot of editorial work. We'll also mandate sending the Upgrade header.
DavidF: can we agree on this?
No objections were raised.
DavidF: we accept the proposal.

10. Assign people to create proposals

194: Encoding style in SOAP Header/Body: Henrik will create a proposal.
195: Why mandate RPC return value QName? No volunteer.
163: Jacek volunteered to come up with a proposal.
191: Noah volunteered to come up with a proposal.
192: Chris volunteered to summarise the positions in the discussion and create a proposal.
193: The issue list already contains a proposal. We'll put it on agenda for next week's telcon.

DavidF thanks the editors for their hard work, and reminds the WG that we shall have a new version of the spec to read on Friday.

End of call.
|
http://www.w3.org/2000/xp/Group/2/03/20-pminutes.html
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
Matthew Dillon <dillon@apollo.backplane.com> writes:

> It's not an issue for USB, but it is an actual
> error... well, more like a warning.
. ..
> USB is generating an interrupt which is not being handled
> by the interrupt service routine, or which is being
> generated before USB is able to install its service
> routine,

Since

>>> The opinions expressed above are entirely my own <<<
"Necessity is the plea of every infringement of human freedom. It is the
argument of tyrants; it is the creed of slaves." -- William Pitt

===================================================================
RCS file: /home/DragonFly/cvs-mirror/src/sys/kern/kern_intr.c,v
retrieving revision 1.46
diff -u -r1.46 kern_intr.c
--- kern_intr.c	22 Jan 2007 19:37:04 -0000	1.46
+++ kern_intr.c	9 Aug 2007 19:37:17 -0000
@@ -99,10 +99,13 @@
 #endif
 static int livelock_limit = 50000;
 static int livelock_lowater = 20000;
+static int livelock_print = 10;
 SYSCTL_INT(_kern, OID_AUTO, livelock_limit,
         CTLFLAG_RW, &livelock_limit, 0, "Livelock interrupt rate limit");
 SYSCTL_INT(_kern, OID_AUTO, livelock_lowater,
         CTLFLAG_RW, &livelock_lowater, 0, "Livelock low-water mark restore");
+SYSCTL_INT(_kern, OID_AUTO, livelock_print,
+        CTLFLAG_RW, &livelock_print, 0, "Livelock messages printed before being quiet");

 static int emergency_intr_enable = 0;	/* emergency interrupt polling */
 TUNABLE_INT("kern.emergency_intr_enable", &emergency_intr_enable);
@@ -826,8 +829,11 @@
 	 * Otherwise we are livelocked.  Set up a periodic systimer
 	 * to wake the thread up at the limit frequency.
 	 */
-	kprintf("intr %d at %d > %d hz, livelocked limit engaged!\n",
-	       intr, ill_count, livelock_limit);
+	if (livelock_print > 0) {
+		kprintf("intr %d at %d > %d hz, livelocked limit engaged!\n",
+		       intr, ill_count, livelock_limit);
+		livelock_print--;
+	}
 	info->i_state = ISTATE_LIVELOCKED;
 	if ((use_limit = livelock_limit) < 100)
 		use_limit = 100;
@@ -857,8 +863,11 @@
 	if (++lcount >= hz) {
 		info->i_state = ISTATE_NORMAL;
 		systimer_del(&ill_timer);
-		kprintf("intr %d at %d < %d hz, livelock removed\n",
-		       intr, ill_count, livelock_lowater);
+		if (livelock_print > 0) {
+			kprintf("intr %d at %d < %d hz, livelock removed\n",
+			       intr, ill_count, livelock_lowater);
+			livelock_print--;
+		}
 	} else {
 		lcount = 0;
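With the patch applied, the counter is exposed as the kern.livelock_print sysctl (per the SYSCTL_INT declaration above), so, assuming the usual sysctl tool, the warnings could presumably be silenced outright with something like: sysctl kern.livelock_print=0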
|
http://leaf.dragonflybsd.org/mailarchive/kernel/2007-08/msg00051.html
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
Panel Class
Provides a base class for all Panel elements. Use Panel elements to position and arrange child objects in Windows Presentation Foundation (WPF) applications.
Assembly: PresentationFramework (in PresentationFramework.dll)
The Panel type exposes the following members.
A Panel contains a collection of UIElement objects, which are in the Children property.
The following example shows how to use the Children property to add two Button objects to a StackPanel.
using System;
using System.Windows;
using System.Windows.Controls;

namespace SDKSample
{
    public partial class StackpanelExample : Page
    {
        public StackpanelExample()
        {
            // Create two buttons
            Button myButton1 = new Button();
            myButton1.Content = "Button 1";
            Button myButton2 = new Button();
            myButton2.Content = "Button 2";

            // Create a StackPanel
            StackPanel myStackPanel = new StackPanel();

            // Add the buttons to the StackPanel
            myStackPanel.Children.Add(myButton1);
            myStackPanel.Children.Add(myButton2);

            this.Content = myStackPanel;
        }
    }
}
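For comparison, a rough XAML equivalent of the constructor above; this sketch is not part of the original page excerpt:

<StackPanel>
    <Button Content="Button 1"/>
    <Button Content="Button 2"/>
</StackPanel>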
|
http://msdn.microsoft.com/en-us/library/System.Windows.Controls.Panel(v=vs.100).aspx
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
FileAttributes Enumeration
Provides attributes for files and directories.
This enumeration has a FlagsAttribute attribute that allows a bitwise combination of its member values.
Namespace: System.IO
Assembly: mscorlib (in mscorlib.dll)
You can get attributes for files and directories by calling the GetAttributes method, and you can set them by calling the SetAttributes method.
It is not possible to change the compression status of a File object by using the SetAttributes method. Instead, you must actually compress the file using either a compression tool or one of the classes in the System.IO.Compression namespace.
|
http://msdn.microsoft.com/library/windows/apps/system.io.fileattributes(v=vs.110).aspx?cs-save-lang=1&cs-lang=cpp
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
Functionality extensions
Let's start by exploring some of the GCC tricks that extend the standard C language.
Type discovery
GCC lets you refer to the type of an expression with the typeof keyword. The kernel uses typeof to build a generic min macro that works for any comparable type:
#define min(x, y) ({				\
	typeof(x) _min1 = (x);			\
	typeof(y) _min2 = (y);			\
	(void) (&_min1 == &_min2);		\
	_min1 < _min2 ? _min1 : _min2; })
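A quick usage sketch (not from the kernel source); the otherwise pointless pointer comparison exists purely so that mixed-type calls draw a compiler warning:

void example(void)
{
	int a = 3, b = 7;
	int lo = min(a, b);	/* evaluates to 3 */

	/* min(a, 1.5) would draw a warning: the line
	 * (void)(&_min1 == &_min2) compares 'int *' with 'double *'. */
	(void) lo;
}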
Range extension
GCC extends case statements to accept ranges, written as low ... high. The SCSI disk driver uses them to map major-number indexes:
static int sd_major(int major_idx)
{
	switch (major_idx) {
	case 0:
		return SCSI_DISK0_MAJOR;
	case 1 ... 7:
		return SCSI_DISK1_MAJOR + major_idx - 1;
	case 8 ... 15:
		return SCSI_DISK8_MAJOR + major_idx - 8;
	default:
		BUG();
		return 0;	/* shut up gcc */
	}
}
Ranges also work in designated initializers. For example, the following initializes every element of an array of spinlocks:

/* Vector of locks used for various atomic operations */
spinlock_t cris_atomic_locks[] =
	{ [0 ... LOCK_COUNT - 1] = SPIN_LOCK_UNLOCKED };
Ranges also support more complex initializations. For example, the following code specifies initial values for sub-ranges of an array.
int widths[] = { [0 ... 9] = 1, [10 ... 99] = 2, [100] = 3 };
Zero-length arrays
A zero-length array as the final member of a structure acts as a header for a variable-length payload; storage for the trailing elements is allocated together with the structure itself:
struct iso_block_store {
	atomic_t refcount;
	size_t data_size;
	quadlet_t data[0];
};
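Allocation then over-sizes the structure by the payload length, so the header and the data live in one block. A minimal sketch reusing the kernel types above (malloc stands in for the kernel allocator):

#include <stdlib.h>

struct iso_block_store *alloc_block_store(size_t payload)
{
	/* one allocation covers the header plus 'payload' trailing quadlets */
	struct iso_block_store *bs =
		malloc(sizeof(*bs) + payload * sizeof(quadlet_t));

	if (bs != NULL)
		bs->data_size = payload;	/* bs->data[0 .. payload-1] is usable */
	return bs;
}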
Determining the call address
It's sometimes useful to know the caller of a function, for example for debugging or lock diagnostics. GCC provides __builtin_return_address for this; the level argument selects how far up the call stack to look (0 is the current function's return address):
void *__builtin_return_address(unsigned int level);
The kernel uses it, for example, to record which caller disabled bottom halves:

void local_bh_disable(void)
{
	__local_bh_disable((unsigned long)__builtin_return_address(0));
}
Constant detection
GCC's __builtin_constant_p lets a macro determine at compile time whether its argument is a constant, so it can pick a cheaper code path when constant folding is possible:
int __builtin_constant_p( exp )
#define roundup_pow_of_two(n)			\
(						\
	__builtin_constant_p(n) ? (		\
		(n == 1) ? 1 :			\
		(1UL << (ilog2((n) - 1) + 1))	\
	) :					\
	__roundup_pow_of_two(n)			\
)
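A quick illustration of what the builtin reports (hypothetical code, not from the kernel):

int demo(int n)
{
	int k1 = __builtin_constant_p(42);	/* 1: literal */
	int k2 = __builtin_constant_p(6 * 7);	/* 1: folded at compile time */
	int k3 = __builtin_constant_p(n);	/* usually 0: runtime value, but
						   may be 1 if this function is
						   inlined with a constant n */
	return k1 + k2 + k3;
}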
Function attributes
# define __inline__		__inline__ __attribute__((always_inline))
# define __deprecated		__attribute__((deprecated))
# define __attribute_used__	__attribute__((__used__))
# define __attribute_const__	__attribute__((__const__))
# define __must_check		__attribute__((warn_unused_result))
Examples of their use:

int __deprecated __check_region(struct resource *parent,
				unsigned long start, unsigned long n);

static enum unw_register_index __attribute_const__
decode_abreg(unsigned char abreg, int memory);
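The warn_unused_result attribute behind __must_check is easy to demonstrate; parse_header below is a hypothetical function, not a kernel API:

int __must_check parse_header(const char *buf);

void handle(const char *buf)
{
	parse_header(buf);	/* gcc warns: ignoring return value of
				   'parse_header', declared with attribute
				   warn_unused_result */
}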
Optimization extensions
Now, let's explore some of the GCC tricks available to produce the best machine code possible.
Branch prediction hints
With __builtin_expect you can tell the compiler which way a condition usually goes, letting it arrange the generated code so the common path falls through. The kernel wraps the builtin in two macros:
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)
The networking code uses them like this:

unsigned int __skb_checksum_complete(struct sk_buff *skb)
{
	unsigned int sum;

	sum = (u16)csum_fold(skb_checksum(skb, 0, skb->len, skb->csum));
	if (likely(!sum)) {
		if (unlikely(skb->ip_summed == CHECKSUM_HW))
			netdev_rx_csum_fault(skb->dev);
		skb->ip_summed = CHECKSUM_UNNECESSARY;
	}
	return sum;
}
Prefetching
GCC can emit explicit cache-prefetch instructions through a builtin. The rw argument selects a read (0) or write (1) prefetch, and locality hints how long the data should stay cached (0 = no temporal locality, up to 3 = high):

void __builtin_prefetch(const void *addr, int rw, int locality);
#ifndef ARCH_HAS_PREFETCH
#define prefetch(x) __builtin_prefetch(x)
#endif

static inline void prefetch_range(void *addr, size_t len)
{
#ifdef ARCH_HAS_PREFETCH
	char *cp;
	char *end = addr + len;

	for (cp = addr; cp < end; cp += PREFETCH_STRIDE)
		prefetch(cp);
#endif
}
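The classic use is warming the cache for the next element while the current one is processed. A hypothetical sketch (struct node and do_work are illustrative, not kernel code):

struct node { struct node *next; int payload; };

void walk(struct node *head, void (*do_work)(int))
{
	struct node *n;

	for (n = head; n != NULL; n = n->next) {
		__builtin_prefetch(n->next);	/* harmless even when next is NULL */
		do_work(n->payload);
	}
}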
Variable attributes
Attributes apply to variables and types as well as functions, for example to force alignment. The suspend-to-disk code aligns a page directory on a page boundary:
char __nosavedata swsusp_pg_dir[PAGE_SIZE]
	__attribute__ ((aligned (PAGE_SIZE)));
The packed attribute removes padding between structure members, and the two can be combined:

static struct swsusp_header {
	char reserved[PAGE_SIZE - 20 - sizeof(swp_entry_t)];
	swp_entry_t image;
	char orig_sig[10];
	char sig[10];
} __attribute__((packed, aligned(PAGE_SIZE))) swsusp_header;
Going further.
Resources.
- In the developerWorks Linux zone, find more resources for Linux developers, including developers who are new to Linux.
|
http://www.ibm.com/developerworks/library/l-gcc-hacks/
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Larry Franks and Brian Swan on Open Source and Device Development in the Cloud
Recently I was trying to figure out how to allow users to log in to a Ruby (Sinatra) web site using an identity service such as Facebook, Google, or Yahoo. I ended up using Windows Azure AppFabric Access Control Service (ACS), since it has built-in support for several such identity providers out of the box.
The nice thing is that you can use one or all of those, and your web application just needs to understand how to talk to ACS and not the individual services. I worked up an example of how to do this in Ruby, and will explain it after some background on ACS and how to configure it for this example.
ACS is a claims-based, token-issuing provider. This means that when a user authenticates through ACS to Facebook or Google, ACS returns a token to your web application. This token contains various 'claims' as to the identity of the authenticating user.
The tokens returned by ACS are either simple web tokens (SWT) or security assertion markup language (SAML)1.0 or 2.0. The token contains the claims, which are statements that the issuing provider makes about the user being authenticated.
The claims returned may, at a minimum, just contain a unique identifier for the user, the identity provider name, and the dates that the token is valid. Additional claims such as the user name or e-mail address may be provided; it's up to the identity provider as to what is available. You can see the claims returned for each provider, and select which specific ones you want, using the ACS administration web site.
ACS costs $1.99 a month per 100,000 transactions; however, there's currently a promotion running until January 1, 2012, during which you won't be charged for using the service. See the Windows Azure pricing information for more details.
To configure ACS, you'll need a Windows Azure subscription. Sign in at the Windows Azure Management Portal, then navigate to the "Service Bus, Access Control & Caching" section. Perform the following tasks to configure ACS:
In the Identity Providers section, add the identity providers you want to use. Most are fairly straightforward, but Facebook is a little involved, as you'll need an Application ID, Application secret, and Application permissions. You can get those through the Facebook developer site; see the detailed walkthrough of using ACS with Facebook for the full steps.
In the Relying Party Applications section, add your web application. Enter a name, Realm (the URL for your site), the Return URL (where ACS sends tokens to), error URLs, and so on. For token format you can select SAML or SWT. I selected SWT, and that's what the code below uses. You'll need to select the identity providers you've configured. Also be sure to check "Create new rule group". For token signing settings, click Generate to create a new key and save the value off somewhere.
In the Rule Groups section, if you checked "Create new rule group" you'll have a "Default Rule Group" waiting for you here. If not, click Add to add one. Either way, edit the group and click Generate to add some rules to it. Rules are basically how you control what claims are returned to your application in the token. Using Generate is a quick and easy way to populate the list. Once you have the rules configured, click Save.
In the Application Integration section, select Login pages, then select the application name. You’ll be presented with two options; a URL to an ACS-hosted login page for your application and a button to download an example login page to include in your application. For this example, copy the link to the ACS-hosted login page.
The code below will parse the token returned by the ACS-hosted login page and return a hash of claims, or an array containing any errors encountered during validation of the token. It doesn't fail immediately on validation, as it may be useful to examine the validation failures to figure out any problems that were encountered. Also note that the constants need to be populated with values specific to your application and the values entered in the ACS management site.
require 'nokogiri'
require 'time'
require 'base64'
require 'cgi'
require 'openssl'

REALM      = ''   # your application realm URL (value stripped in extraction)
TOKEN_TYPE = ''   # SWT token-type identifier (value stripped in extraction)
ISSUER     = ''   # your ACS namespace issuer URL (value stripped in extraction)
TOKEN_KEY  = 'the key you generated in relying applications above'

class ResponseHandler
  attr_reader :validation_errors, :claims

  def initialize(wresult)
    @validation_errors = []
    @claims = {}
    @wresult = Nokogiri::XML(wresult)
    parse_response()
  end

  def is_valid?
    @validation_errors.empty?
  end

  private

  # parse through the document, performing validation & pulling out claims
  def parse_response
    parse_address()
    parse_expires()
    parse_token_type()
    parse_token()
  end

  # does the address field have the expected address?
  def parse_address
    address = get_element('//t:RequestSecurityTokenResponse/wsp:AppliesTo/addr:EndpointReference/addr:Address')
    @validation_errors << "Address field is empty." and return if address.nil?
    @validation_errors << "Address field is incorrect." unless address == REALM
  end

  # is the expire value valid?
  def parse_expires
    expires = get_element('//t:RequestSecurityTokenResponse/t:Lifetime/wsu:Expires')
    @validation_errors << "Expiration field is empty." and return if expires.nil?
    # NOTE: the original ISO 8601 format regex was garbled in extraction;
    # this simpler date-time pattern is a stand-in.
    @validation_errors << "Invalid format for expiration field." and return unless /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/.match(expires)
    @validation_errors << "Expiration date occurs in the past." unless Time.now.utc.iso8601 < Time.iso8601(expires).iso8601
  end

  # is the token type what we expected?
  def parse_token_type
    token_type = get_element('//t:RequestSecurityTokenResponse/t:TokenType')
    @validation_errors << "TokenType field is empty." and return if token_type.nil?
    @validation_errors << "Invalid token type." unless token_type == TOKEN_TYPE
  end

  # parse the binary token
  def parse_token
    binary_token = get_element('//t:RequestSecurityTokenResponse/t:RequestedSecurityToken/wsse:BinarySecurityToken')
    @validation_errors << "No binary token exists." and return if binary_token.nil?
    decoded_token = Base64.decode64(binary_token)
    name_values = {}
    decoded_token.split('&').each do |entry|
      pair = entry.split('=')
      name_values[CGI.unescape(pair[0]).chomp] = CGI.unescape(pair[1]).chomp
    end
    @validation_errors << "Response token is expired." if Time.now.to_i > name_values["ExpiresOn"].to_i
    @validation_errors << "Invalid token issuer." unless name_values["Issuer"] == "#{ISSUER}"
    @validation_errors << "Invalid audience." unless name_values["Audience"] == "#{REALM}"
    # is HMAC valid?
    token_hmac = decoded_token.split("&HMACSHA256=")
    swt = token_hmac[0]
    @validation_errors << "HMAC does not match computed value." unless name_values['HMACSHA256'] == Base64.encode64(OpenSSL::HMAC.digest(OpenSSL::Digest::Digest.new('sha256'), Base64.decode64(TOKEN_KEY), swt)).chomp
    # remove non-claims from collection and make claims available
    @claims = name_values.reject { |key, value| !key.include? '/claims/' }
  end

  # given an XPath, return the content of the first matching element
  def get_element(xpath_statement)
    begin
      # namespace URIs were stripped in extraction; supply the WS-Trust,
      # WS-Policy, WS-Utility, WS-Security, and WS-Addressing URIs here
      @wresult.xpath(xpath_statement,
                     't'    => '',
                     'wsu'  => '',
                     'wsp'  => '',
                     'wsse' => '',
                     'addr' => '')[0].content
    rescue
      nil
    end
  end
end
So what's going on here? The main pieces are the four parse_* validations: the response address must match your realm, the token must not be expired, the token type must be the SWT type you configured, and the binary token itself must carry a matching HMAC computed with your token signing key. After those checks pass, everything under a '/claims/' path is exposed through the claims hash.
After processing the token, you can test for validity by using is_valid? and then parse through either the claims hash or validation_errors array. I'll leave it up to you to figure out what you want to do with the claims; in my case I just wanted to know the identity provider the user selected and their unique identifier with that provider, so that I could store it along with my site's user-specific information.
As mentioned in the introduction, ACS let’s you use a variety of identity providers without requiring your application to know the details of how to talk to each one. As far as your application goes, it just needs to understand how to use the token returned by ACS. Note that there may be some claim information provided that you can use to gather additional information directly from the identity provider. For example, FaceBook returns an AccessToken claim field that you an use to obtain other information about the user directly from FaceBook.
As always, let me know if you have questions on this or suggestions on how to improve the code.
Thank you very much for the post! I just got this working correctly on a project that I'm working on in Rails. Before reading this (and being a Rails newb), I wasn't sure how to implement ACS in Rails. This gave me everything I needed!
Glad it was useful Tuck.
Hi,
Do you have example maybe for single sign out in Ruby?
Thanks,
Ratko
Hi Ratko,
Single sign-out with ACS is pretty easy. It's just doing a redirect to a specially crafted URL, which then does a GET back to a page on your web site. The URL format is https://<your namespace>.accesscontrol.windows.net/v2/wsfederation?wa=wsignout1.0&wtrealm=<your realm>&wreply=<return page>. The only trick is that the realm and return page values have to be encoded. For example, if my namespace is "mynamespace", my realm is "", and the page I want users to land on once they have been signed out is "", then this would be mynamespace.accesscontrol.windows.net/.../wsfederation
That's it. What this does is log the user out of the services associated with ACS. If they don't log out, you see the behavior where they click the login link on your site, it redirects to ACS, then immediately back to your site without asking them to provide credentials for the identity provider. After hitting the sign-out URL above, the next time they login ACS should prompt them to select the identity provider and provide username/password for it.
Hope this helps.
Also, I'm told that Windows Azure Active Directory is now the way to go vs. ACS. I haven't worked with WAAD, so I don't currently have any recommendations or pointers for that. I'll see if I can work up a sample of using that and post to the blog.
|
http://blogs.msdn.com/b/silverlining/archive/2011/10/03/ruby-web-sites-and-windows-azure-appfabric-access-control.aspx
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Is there such a type as "days"?
[EDIT]
Code:

void Contract::CalculateNoOfDays(Date issue_dt, Date return_dt)
{
    using namespace boost::gregorian;
    std::ostringstream issue_dymonyr, return_dymonyr;

    issue_dymonyr << issue_dt.getMonth() << "-" << issue_dt.getDay()
                  << "-" << issue_dt.getYear();
    return_dymonyr << return_dt.getMonth() << "-" << return_dt.getDay()
                   << "-" << return_dt.getYear();

    try
    {
        date issuedDate(from_simple_string(issue_dymonyr.str()));
        date returnDate(from_simple_string(return_dymonyr.str()));
        days rental_days = returnDate - issuedDate;
    }
    catch(...)
    {
        std::cout << "Bad date entered: " << issue_dymonyr.str()
                  << return_dymonyr.str() << std::endl;
    }
}
Not forgetting how to convert from days to a normal integer value [/EDIT]
Maybe you want to take a look at the examples?
They might give you some insight. Be sure to consult the documentation for the classes, as well. There might be something there.
I am not familiar with Boost::DateTime, however, so I cannot say.
|
http://cboard.cprogramming.com/cplusplus-programming/110978-validating-time-2.html
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Use cond_destroy(3C) to destroy state that is associated with the condition variable pointed to by cv. The space for storing the condition variable is not freed. For POSIX threads, see pthread_cond_destroy(3C).

Syntax
#include <thread.h>

int cond_destroy(cond_t *cv);
cond_destroy() returns 0 if successful. When any of the following conditions is detected, cond_destroy() fails and returns the corresponding value.
EFAULT
Description: cv points to an illegal address.
EBUSY
Description: The system detected an attempt to destroy an active condition variable.
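A minimal lifecycle sketch under Solaris threads (error handling abbreviated; cond_init(3C) is the matching initializer):

#include <thread.h>
#include <stdio.h>

int main(void)
{
	cond_t cv;
	int err;

	/* initialize a process-private condition variable */
	err = cond_init(&cv, USYNC_THREAD, NULL);
	if (err != 0)
		return 1;

	/* ... cond_wait()/cond_signal() traffic happens here ... */

	/* destroy only after no thread is blocked on cv; destroying
	 * an active condition variable fails with EBUSY */
	err = cond_destroy(&cv);
	if (err != 0)
		fprintf(stderr, "cond_destroy failed: %d\n", err);
	return 0;
}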
|
http://docs.oracle.com/cd/E19253-01/816-5137/sthreads-82842/index.html
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
In the financial market, believing is not seeing. When people become convinced a particular investment strategy is a risk-free way of making money, typically everyone ends up losing their shirt. In the case of Japan, the belief that the yen will weaken in the future creates an incentive for firms to borrow at home and invest abroad. One earns the spread on the interest rate plus the appreciation of the foreign currency. The problem is, of course, that the outflow of money reduces the amount circulating in the domestic economy. So the anticipated weakening of the yen fails to materialize. Once firms realize they have made a bad bet, they'll scramble for yen to cover their positions, which strengthens the currency further. From there it spirals downward.
Yeah, that's one reason the BOJ's inflationary policies over the past generation did not cause cpi inflation. People borrowed in yen and invested in dollar bonds and made a risk free fortune.
Then they cough it all back up when the yen swings in the wrong direction.
Yeah, occasionally the market would turn against them, but the net profits still make them rich.
I doubt it. People don't call carry trade "picking pennies in front of a truck" for no reason. FX swings can be quite violent. It's easy (perhaps inevitable) to suffer total wipe-out. A good trade though if you're a money manager. You can hit year after year solid earning targets while seemingly running zero risk. When things go bad--well, it's other people's money.
In the past banks could borrow in yen at less than 1% and buy US treasuries earning 5% and make good risk free returns. They could ignore FX movements until they needed to sell.
But it's all short-term financing. Firms might manage to roll over loans year after year. Then it'll suddenly stop and a mad scramble to cover one's short position begins. If you look at the JPY-AUD chart from a few years back, it looks positively like something from the Roadrunner cartoon. Around late 2008 the drop was straight down, going from ¥100+ to sub-¥60 in a matter of weeks. Those who left the game earlier did well, naturally. Everyone else meanwhile got pancaked.
It's negative-sum game. In the aggregate, everyone loses--including rich people.
The website “Trading Economics” says, “The benchmark interest rate in Japan was last recorded at 0 percent. Interest Rate in Japan is reported by the Bank of Japan. Historically, from 1972 until 2013, Japan Interest Rate averaged 3.26 Percent reaching an all time high of 9 Percent in December of 1973 and a record low of 0 Percent in February of 1999. In Japan, decisions on interest rates are made by the Bank of Japan’s Policy Board in its Monetary Policy Meetings. The BoJ’s official interest rate is the discount rate. Monetary Policy Meetings produce a guideline for money market operations in inter-meeting periods and this guideline is written in terms of a target for the uncollateralized overnight call rate.”
Japan's discount rate has been .30% since December 19, 2008. No wonder Japan has been experiencing a "stall mode" economy for years: the expected return on investment is near zero percent! Of course, the BOJ knows this, so the question is why the BOJ has been following a "stall mode" economic policy for over two decades now.
It’s not terribly difficult to understand how an economy grows, and it’s not via cheap credit expansion. The cost of credit has nothing to do with economic growth, EXPECTED RETURNS ON INVESTMENT has, and if interest rates are near zero percent, there is no expected return (interest) on investment.
What Japan’s massive credit expansion will accomplish is to devalue the Yen and create a stock bubble, exactly what the United States is currently experiencing under Federal Reserve Bank Chairman Ben Bernanke’s massive credit expansion these last five years.
Economics Primer:
What do people do when interest rates increase? They invest more (and consume less) because the return on investment is greater. The proceeds that would have gone towards consumption instead go towards investment because the lure (interest) is higher. It is investment that leads to economic recovery via new technologies that (1) increase the amount of goods that can be purchased, since investment has made those goods less expensive; and (2) cut the cost of doing business.
What this means is that a properly functioning economy has a DECLINING price level. Economists that tell us that declining general prices are bad for the economy are shills for politicians who for political reasons need central banks and the boom in the economy that credit expansion can produce (when interest rates are high, otherwise a stock bubble will result).
Central Banks, and crony economists in the universities and elsewhere that salivate to the mention of “central bank”, tell us that we must maintain a stable price level otherwise the economy will free fall! Nonsense, entrepreneurs don’t need stable prices in order to know where to properly allocate labor, capital and natural resources to their most profitable avenues of production. In fact, the reason entrepreneurs exist is to determine (via competition) what real prices are, hence what the general price level is. What central banks are doing then, is to remove the entrepreneurial discovery process for prices from the market, and we’ve seen how well central banks are doing in that task!
Once inflation takes off, the savers in Japan will probably toss Abe.
Looks like the ECB is losing "the race to the bottom."
NPWFTL
Regards
This is going to end in disaster. Apparently, the BOJ has completely stopped targeting interest rates and is only targeting base money. They also plan to inject $1.4 trillion into their economy by the end of 2014--this is an economy that is a third the size of the US....
They better hope their interest rates don't shift. If they do reach 2% inflation and their interest rates shift by 200 basis points, they'll spend their entire tax revenues on debt service alone. This would come at a time when they spend over 2/3 of their tax revenues on social security with a falling population and falling population workforce. I think this could set off a bond crisis and a currency collapse.
|
http://www.economist.com/comment/1958286
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Stateflow® provides context-sensitive editing assistance with tab completion. You can quickly select syntax-appropriate options for keywords, data, event, and function names.
In R2014a, there is one Stateflow Chart block that defaults to using MATLAB as the action language. You can modify the action language of a chart to use C syntax. For more information, see Modify the Action Language for a Chart.
In previous releases, you could output self and child activity to Simulink®. In R2014a, you can also output leaf-state activity using automatically managed enumerations.
You can use international characters when naming these Stateflow blocks:
Charts
State Transition Tables
Truth Tables
The auto-correction for MATLAB® syntax now inserts explicit casts for literals.
A model with a Bus Selector block between two Stateflow exported graphical functions that share the same memory can produce results that differ from previous releases.
For MATLAB functions and charts, you can manage generation of typedef definitions for imported bus and enumeration types in the Configuration Parameters dialog box, on the Simulation Target pane. You can choose to provide the typedef definition in a custom header file, or have Simulink generate the definitions.
The Stateflow Search & Replace tool has been updated to simplify the controls and improve the color selection of the interface.
Stateflow now supports complex data types for use with Data Store Memory.
You no longer have to use mex -setup to choose a compiler. mex automatically locates and uses a supported installed compiler. You can use mex -setup to change the default compiler. See Changing Default Compiler.
In previous versions of Stateflow, in Moore charts that used MATLAB as the action language, you could assign an output that was dependent on an input. This construct violates Moore semantics, and Stateflow now generates a compiler error. For more information, see Design Considerations for Moore Charts.
The Windows® 64-bit platform now includes LCC-win64 as the default compiler for running simulations. You no longer have to install a separate compiler for simulation in Stateflow and Simulink. You can run simulations in Accelerator and Rapid Accelerator modes using this compiler.
LCC-win64 is used only when another compiler is not configured in MATLAB. To build MEX files, you must install a compiler. See Supported Compilers.
Press the Tab key for automatic word completion of keywords, data, and function names in charts.
You can now use the Pattern Wizard to add loop or decision logic to a previously created pattern in a flow chart.
You can now specify milliseconds and microseconds for absolute time temporal logic in charts.
Charts that use MATLAB as the action language now support continuous time mode with zero crossing detection.
When a chart is closed, you can preview the content of Stateflow charts in Simulink. You can see an outline of the contents of a chart. During simulation you can see chart animation. When a chart is open, you can preview the content of subcharts, Simulink functions, and graphical functions. For details, see Content Preview for Stateflow Objects.
For example, the Temporal Logic chart uses content preview, and the Without Temporal Logic chart does not.
For charts with discrete sample time that are not inside a triggered or enabled subsystem, absolute-time temporal operators generate improved code. The generated code now uses integer counters to track time instead of the Simulink time counter. This allows more efficient code and also enables its use in software-in-the-loop (SIL) and processor-in-the-loop (PIL) simulation modes.
In previous releases, you could output whether or not a state is active by selecting Output state activity in the State properties dialog box. In R2013a, you can now also output to Simulink the child activity of a chart, state, atomic subchart, or State Transition Table as an enumeration.
In R2013a, you can mask a chart, Truth Table block, or State Transition Table block directly. In previous releases, you had to place the Stateflow block in a subsystem, and then mask that subsystem.
In R2013a, MATLAB scripts or functions that rely on the MaskType property of Stateflow blocks need to be updated. For example, get_param(handle_to_SF_block, 'MaskType') now returns an empty value instead of 'Stateflow'. For backward compatibility, using find_system('MaskType','Stateflow') returns all the Stateflow blocks. However, use the Stateflow API instead, as a better practice. See Overview of the Stateflow API. Do not create masks with Mask Type "Stateflow", because the behavior is unpredictable.
You can detect unresolved symbols in a chart without updating the diagram or starting simulation. The Stateflow parser can access all required information for detecting unresolved symbols, such as enumerated data types and exported graphical functions from other charts.
In previous releases, Stateflow parameters in the generated code were not derived from the parameter names. In R2013a, parameter names appear unchanged in the code, which provides better traceability between the chart and the generated code.
Exported graphical functions now support inputs and outputs of complex type.
If your chart specifies output data type using the expression type(data_name), you get an error if data_name is of bus type.
A model created in a previous release might cause an error in R2013a if the chart contains an output with the data type specification type(data_name) if data_name is of bus type.
The new editor unifies Stateflow and Simulink functionality. The Stateflow Editor shares most of the same menu items with the Simulink Editor, and provides the following enhancements:
Unified canvas, for editing Stateflow charts and Simulink models in the same window.
Tabbed windows, for accessing Stateflow charts in the same context as Simulink models.
Model Browser tree, for browsing the complete model hierarchy, including Stateflow charts.
Cross-platform consistency, for accessing the same functionality on Windows, UNIX®, and Mac platforms.
Menu bar changes in the new Stateflow Editor are the same as for the new Simulink Editor.
All changes for the following three Stateflow Editor context menus are the same as for the new Simulink Editor:
From the canvas
From a block
From a signal
The following sections describe changes to context menus that are specific to Stateflow:
In the R2012a Stateflow Editor, right-clicking a Stateflow function displays the chart context menu. In the new Stateflow Editor, each kind of Stateflow function (for example, graphical function or Truth Table) has its own context menu.
In the R2012a Stateflow Editor, right-clicking a Stateflow transition displays the chart context menu. In the new Stateflow Editor, a transition has its own context menu.
There is no longer a Smart menu item. In the R2012a Stateflow Editor, the Smart menu item enabled smart transitions. Smart transitions have ends that slide around the surfaces of states and junctions. When the source and/or destination objects are moved and resized in the chart, these transitions use sliding and other behaviors to enable you to produce an aesthetically pleasing chart. The new Stateflow Editor uses smart transitions all the time.
In the R2012a Stateflow Editor, right-clicking a Stateflow state displays the chart context menu. In the new Stateflow Editor, a state has its own context menu.
The new Stateflow Editor makes it easier to create and modify charts by providing these enhancements:
Smart guides, for aligning objects interactively as you place them in the chart.
Drag margins, which allows all objects within a container to move together. The mouse cursor changes to a double-arrow when you are within the drag margins of an object.
Transition indicator lines, for identifying the label associated with a selected transition.
Just-in-time error notification, for flagging illegal object placement during editing (for example, when two states overlap).
A state transition table is an alternative way of expressing modal logic. Instead of drawing states and transitions graphically in a Stateflow chart, you express the modal logic in tabular format. Stateflow automatically generates a graphical state chart from the tabular format, so you can use animation and in-chart debugging (described in In-chart debugging with visual breakpoints and datatips).
The new block is available in sflib. You can add the block to a new model by entering sfnew('-STT') at the MATLAB command line.
For more information, see Tabular Expression of Modal Logic and Model Bang-Bang Controller with a State Transition Table.
In R2012b, you can use MATLAB as the action language to program Stateflow charts. Benefits of using MATLAB as the action language include:
MATLAB syntax support in state labels and transition labels
You can use the same MATLAB code that you write in a script or enter at the command line.
Automatic identification of unresolved symbols in the new Symbol Wizard
When you update the diagram or start simulation, the Symbol Wizard provides a list of unresolved data in your chart and infers the scope.
Automatic inference of size, type, and complexity for data in the chart, based on usage (unless explicitly defined in the Model Explorer)
Support for control flow logic in state labels
For example, you can write if-else statements directly inside state actions:
StateA
du:
  if (x > 0)
    x = x + 1;
  else
    x = x + 2;
  end
You do not need to create a separate graphical function to define the flow logic.
Automatic correction of common syntax errors.
For example, if you type x++ on a transition segment, the expression is automatically converted to the correct MATLAB syntax, {x=x+1}.
For more information, see MATLAB Syntax for States and Transitions and Model Event-Driven System Using MATLAB Expressions.
In R2012b, the Stateflow debugger includes the following enhancements:
Display of data values when hovering over a state or transition
When you hover over a state or transition, all data values in scope for that object appear in a popup list.
Step Over and Step Out options on the debugger
When you click Step Over, you can skip the entire execution of a function call when the chart is in debug mode.
When you click Step Out, you can skip the rest of the execution for a function call when the chart is in debug mode.
Badges on graphical chart objects to indicate breakpoint settings
When you hover over the badge on an object, you see a list of breakpoints in a popup list.
To modify breakpoint settings, you can click the badge on an object instead of opening the properties dialog box.
In R2012b, you no longer need to launch the Stateflow debugger to stop chart execution at active breakpoints. Lifting this restriction means that chart execution always stops at active breakpoints during simulation, even if the debugger is not running. To prevent unintended interruption of chart execution, Stateflow software automatically disables, but does not delete, existing breakpoints for all objects in charts created in earlier releases. Disabled breakpoints appear as gray badges; you can enable them as needed. See Relationship Between Breakpoints and the Debugger and Set Local Breakpoints.
In addition, in models created in earlier versions, Stateflow software removes When Transition is Tested breakpoints from transitions that do not have conditions. Starting in R2012b, you can set only When Transition is Valid breakpoints on transitions with no conditions.
In R2012b, you can use atomic boxes to reuse graphical functions across multiple charts and models. With atomic boxes, you can reuse models with graphical functions multiple times as referenced blocks in a top model. Because there are no exported graphical functions, you can use more than one instance of that referenced block in the top model.
For more information, see Reusing Functions with an Atomic Box in the Stateflow documentation.
In R2012b, you can convert a state to an atomic subchart when the state accesses chart local data that has any of the following properties:
[M N] size, where M and N are parameters that represent the data dimensions
One of the following non-built-in data types:
Bus type
Alias type
Fixed-point type of nonzero fraction length, such as fixdt(1,16,3)
In previous releases, conversion of a state to an atomic subchart required that the chart local data have a static, deterministic size and a built-in data type.
In R2012b, you can detect undirected local event broadcasts using a new diagnostic in the Diagnostics > Stateflow pane of the Model Configuration Parameters dialog box. You can set the diagnostic level of Undirected event broadcasts to none, warning, or error.
Undirected local event broadcasts can cause unwanted recursive behavior in a chart and inefficient code generation. You can avoid this behavior by using the send operator to create directed local event broadcasts. For more information, see Guidelines for Avoiding Unwanted Recursion in a Chart and Broadcasting Events to Synchronize States in the Stateflow documentation.
For new models created in R2012b and existing models created in previous releases, the default diagnostic setting is warning to discourage the use of undirected local event broadcasts. Models that did not warn in previous releases might now issue a warning because the chart contains an undirected local event broadcast.
In R2012b, you can detect specified transition actions before specified condition actions in transition paths, using a new diagnostic in the Diagnostics > Stateflow pane of the Model Configuration Parameters dialog box. You can set the diagnostic level of Transition action specified before condition action to none, warning, or error.
In a transition path with multiple transition segments, a specified transition action for a transition segment does not execute until the final destination for the entire transition path becomes valid. A specified condition action for a transition segment executes as soon as the condition becomes true. When a transition with a specified transition action precedes a transition with a specified condition action in the same transition path, the condition action for the succeeding transition might execute before the transition action for the preceding transition. When this diagnostic warns for transition paths containing transition actions specified before condition actions, you can identify out-of-order execution.
For more information, see Transition Action Types and Transitions in the Stateflow documentation.
In previous releases, the specification of transition actions before condition actions causes an error during simulation. To suppress this error for all models in future MATLAB sessions, use the following command:
sfpref('ignoreUnsafeTransitionActions',1);
In R2012b, the ignoreUnsafeTransitionActions preference does not exist and the default value of the Transition action specified before condition action diagnostic is warning. The warning occurs for all instances of transition actions specified before condition actions, even if you changed the ignoreUnsafeTransitionActions preference in a previous release.
In R2012b, function-call output events appear on Chart and Truth Table block icons with parentheses after the event name. This appearance is consistent with the rendering of input triggers on Function-Call Subsystem block icons.
The algorithm for resolving a qualified state or data name performs a localized search for states and data that match the given path by looking in each level of the Stateflow hierarchy between the chart level and the parent of the state or data. The algorithm does not perform an exhaustive search of all states and data in the entire chart.
In previous releases, a warning would appear when the search resulted in no matches or multiple matches. In R2012b, this warning has changed to an error. For more information, see Checking State Activity and Using Dot Notation to Identify Data in a Chart in the Stateflow documentation.
Stateflow charts created in earlier releases now generate an error instead of a warning when the search for a qualified state or data name results in no matches or multiple matches.
In R2012b, on 32-bit Windows platforms, you can use the lcc compiler to simulate charts in a folder with the # symbol in its name.
In previous releases, the Mac screen menubar was disabled when Stateflow was installed. This behavior was necessary to enable the Stateflow Editor menu options to work normally on a Mac.
In R2012b, the Mac screen menubar is enabled when Stateflow is installed.
In R2012b, printing the current view of a chart to a figure window is no longer available.
To print the current view of a chart, you can send the output directly to a printer or to a file. Available file formats include PS, EPS, JPG, PNG, and TIFF.
In R2012b, the option to set a breakpoint at End of Broadcast is no longer available for input events.
In previous releases, you could set both Start of Broadcast and End of Broadcast breakpoints for input events. Starting in R2012b, Stateflow ignores End of Broadcast breakpoints on input events for existing models.
You can no longer convert a Stateflow box object to a state object and vice versa.
In previous releases, you were able to convert a box object to a state object and vice versa. You must now delete the box or state and replace it with a new box or state with the same name.
You can use the new sinkedTransitions method to find all inner and outer transitions whose destination is a state, box, or junction.
For more information, see sinkedTransitions in the Stateflow API documentation.
Transition objects now have a DestinationEndPoint property that describes the location of the transition endpoint at the destination object.
For more information, see Transition Properties in the Stateflow API documentation.
In R2012a, inputs and outputs of exported graphical functions can use enumerated data types or structures. For more information, see Rules for Exporting Chart-Level Graphical Functions in the Stateflow documentation.
In R2012a, the Mappings tab in the atomic subchart properties dialog lists all valid scopes for data and event mapping. All valid scopes appear, regardless of whether a data or event of that scope exists in the chart.
In previous releases, the Mappings tab listed only the scopes of data and events that existed in the chart and omitted any scope that did not exist. For more information, see Mapping Variables for Atomic Subcharts in the Stateflow documentation.
In R2011b, selecting Suppress generation of default cases for Stateflow switch statements if unreachable in the Configuration Parameters dialog box would result in decision coverage of less than 100%. In R2012a, you get full decision coverage when suppressing default cases in the generated code for your Stateflow chart.
For more information about decision coverage, see Model Coverage for Stateflow Charts in the Simulink Verification and Validation documentation.
If data in your chart uses an enumerated type with a custom header file, include the header information in the Simulation Target > Custom Code pane of the Configuration Parameters dialog box. In the Header file section, add the following statement:
#include "<custom_header_file_for_enum>.h"
For more information, see Rules for Using Enumerated Data in a Stateflow Chart in the Stateflow User's Guide.
In earlier releases, custom header files for enumerated types did not need to appear in the Configuration Parameters dialog box.
In a future release, the Use Strong Data Typing with Simulink I/O check box will be removed from the Chart properties dialog box because strong data typing will always be enabled.
When this check box is cleared, the chart accepts and outputs only signals of type double. This setting ensures that charts created prior to R11 can interface with Simulink input and output signals without type mismatch errors. For charts created in R11 and newer releases, disabling strong data typing is unnecessary. Also, many Stateflow features do not work when a chart disables strong data typing.
In R2012a, updating the diagram causes a parse warning to appear when a chart disables strong data typing. To prevent the parse warning, select the Use Strong Data Typing with Simulink I/O check box in the Chart properties dialog box.
A new chart property, Saturate on integer overflow, enables you to control the behavior of data with signed integer types when overflow occurs. The check box appears in the Chart properties dialog box.
Arithmetic operations for which you can enable saturation protection are:
Unary minus: –a
Binary operations: a + b, a – b, a * b, a / b, a ^ b
Assignment operations: a += b, a –= b, a *= b, a /= b
For new charts, this check box is selected by default. When you open charts saved in previous releases, the check box is cleared to maintain backward compatibility.
For more information, see Handling Integer Overflow for Chart Data in the Stateflow User's Guide.
A new Logging tab on the Data and State properties dialog boxes enables you to log signals in the same way that you do in Simulink. For more information, see Logging Data Values and State Activity in the Stateflow User's Guide.
You can specify whether or not to always generate default cases for switch-case statements. This optimization works on a per-model basis and applies to the code generated for a state that has multiple substates. Use the corresponding check box on the Code Generation > Code Style pane of the Configuration Parameters dialog box.
This readability optimization is available for embedded real-time (ERT) targets and requires a license for Embedded Coder® software. For new models, this check box is cleared by default. When you open models saved in previous releases, the check box is also cleared to maintain backward compatibility.
For more information, see Code Generation Pane: Code Style in the Embedded Coder documentation.
In R2011b, you can detect inconsistency errors earlier in the model development process. If you select Edit > Update Diagram in the Simulink Editor, you get an error when Stateflow statically detects that there are no active children during execution of a chart or a state.
In previous releases, static detection of these inconsistency errors did not occur until run time.
In previous releases, Mealy and Moore charts automatically applied the initial value of outputs every time the chart woke up. Both chart types ensured that outputs did not depend on previous values of outputs by enforcing the chart property Initialize Outputs Every Time Chart Wakes Up.
In R2011b, this restriction has been lifted. You can now choose whether or not to initialize outputs every time a Mealy or Moore chart wakes up. If you disable this chart property, you enable latching of outputs (carrying over output values computed in previous time steps). This enhancement enables you to model persistent output data for Mealy and Moore charts.
For more information, see Building Mealy and Moore Charts in the Stateflow User's Guide.
You can control the behavior of the Stateflow diagnostic that detects multiple unconditional transitions from the same state or the same junction. Set Transition shadowing to none, warning, or error on the Diagnostics > Stateflow pane of the Configuration Parameters dialog box.
For more information, see Diagnostics Pane: Stateflow in the Simulink Graphical User Interface documentation.
You can use Microsoft® Windows Software Development Kit (SDK) 7.1 as a MEX compiler for simulation on 32- and 64-bit Windows machines. For a list of supported compilers, see Choosing a Compiler in the Stateflow User's Guide.
In R2011b, you can simulate models with Stateflow blocks when the current folder is a UNC path. In previous releases, simulation of those models required that the current folder not be a UNC path.
In R2011b, the Coverage tab of the Stateflow debugger has been removed. In previous releases, clicking the Coverage tab would show the following message:
Coverage feature obsoleted. Please use Simulink Verification and Validation in order to get complete coverage of Simulink/Stateflow objects.
In R2011a, all functionality previously available for the Stateflow Coder™ product is now part of the new Simulink Coder product.
In R2011a, Embedded MATLAB functions have been renamed as MATLAB functions in Stateflow charts. This name change has the following effects:
The function box now shows MATLAB Function instead of eM in the upper-left corner.
Traceability comments in the generated code for embedded real-time targets now use MATLAB Function instead of Embedded MATLAB Function.
For truth table functions in your chart, the Settings > Language menu now provides Stateflow Classic and MATLAB as the choices.
Scripts that use the Stateflow.EMFunction constructor method continue to work. All properties and methods for this object remain the same.
In R2011a, you can enter MATLAB expressions in the Size field of the Data properties dialog box.
This enhancement enables you to use additional constructs, such as:
Variables in the MATLAB base workspace
Enumerated values on the MATLAB search path
Expressions that use fi objects
For more information, see Sizing Stateflow Data in the Stateflow User's Guide.
For the Size field, name conflict resolution works differently from previous releases. In R2011a, when multiple variables with identical names exist, the variable with the highest priority is used:
Mask parameters
Model workspace
MATLAB base workspace
Stateflow data
In previous releases, Stateflow data took precedence over all other variables with identical names.
Previously, you could not change the values of Stateflow data while debugging a chart. Now you can change data values while the chart is in debug mode and see how simulation results change. For more information, see Changing Data Values During Simulation in the Stateflow User's Guide.
When Enable debugging/animation is enabled on the Simulation Target pane of the Configuration Parameters dialog box, this setting applies to all charts in your model. In R2011a, you can enable or disable debugging on a chart-by-chart basis, using the Debug menu in the Stateflow Editor.
This enhancement enables you to focus on debugging a single chart, instead of having to debug all charts in the model. For details, see How to Enable Debugging for Charts in the Stateflow User's Guide.
You can also clear all breakpoints for a specific chart by selecting Debug > Clear All Breakpoints in the Stateflow Editor. For more information, see Clearing All Breakpoints in the Stateflow User's Guide.
In previous releases, you could open the debugger by selecting Tools > Debug in the Stateflow Editor. In R2011a, this menu option has moved to Debug > Stateflow Debugger.
In R2011a, you can use input events in atomic subcharts. For more information, see Making States Reusable with Atomic Subcharts in the Stateflow User's Guide.
In R2011a, the generated function names for atomic subcharts follow the identifier naming rules for subsystem methods on the Code Generation > Symbols pane of the Configuration Parameters dialog box.
This enhancement enables you to control the format of generated function names for atomic subcharts when building an embedded real-time (ERT) target. For more information, see Generating Reusable Code for Unit Testing in the Stateflow User's Guide.
In previous releases, the Stateflow debugger sorted data by scope first, before alphabetically listing data. In R2011a, the debugger sorts data alphabetically in the Browse Data section, without regard to scope. This enhancement helps you find specific data quickly when your chart contains many variables, for example, over a hundred.
Data sorting depends solely on the variable name and not on hierarchy. For example, if you have chart-parented data named arrayOut and state-parented data named arrayData, the list that appears in the Browse Data section is:
S.arrayData
arrayOut
The state name has no effect on data sorting.
For more information, see Watching Data Values During Simulation in the Stateflow User's Guide.
In R2011a, you can highlight the states that are active at the end of a simulation by selecting Maintain Highlighting in the Stateflow debugger.
This enhancement enables you to inspect the active states of a chart after simulation ends, without having to use the SimState method highlightActiveStates. For more information, see Animating Stateflow Charts in the Stateflow User's Guide.
For graphical chart objects, you can now right-click the object to set local breakpoints. This enhancement enables you to set breakpoints more quickly, without having to open the properties dialog box for:
Charts
States
Transitions
Graphical functions
Truth table functions
For more information, see Setting Local Breakpoints in the Stateflow User's Guide.
You can now select a format for signal logging data. Use the Signal logging format parameter on the Data Import/Export pane of the Configuration Parameters dialog box to specify the format:
ModelDataLogs — Simulink.ModelDataLogs format (the default; before R2011a, it was the only supported format)
Dataset — Simulink.Simulation.Dataset format (new in R2011a)
The Dataset format:
Supports logging multiple data values for a given time step, which enhances signal logging of Stateflow data
Uses MATLAB timeseries objects to store logged data (rather than Simulink.Timeseries and Simulink.TsArray objects), which enables you to work with logged data in MATLAB without a Simulink license
Avoids the limitations of the ModelDataLogs format, which Bug Report 495436 describes
For more information, see Logging Data Values and State Activity.
In previous releases, selecting Enable debugging/animation on the Simulation Target pane of the Configuration Parameters dialog box would implicitly set all data and states in a Stateflow chart to be test points. In R2011a, you must select the Test point check box explicitly for data and states to appear in the Signal Selector dialog box of a Scope or Floating Scope block.
If you load models from previous releases that rely on the implicit behavior, mark the appropriate data or states as test points to ensure that they appear in the Signal Selector dialog box. For more information, see Monitoring Test Points in Stateflow Charts in the Stateflow User's Guide.
You can now use buses, but not arrays of buses, as shared data in Stateflow data store memory.
In R2011a, state functions are more readable due to improved inlining heuristics.
In R2011a, you can pass arrays of buses as inputs and outputs of the following Stateflow objects:
Charts
MATLAB functions
Simulink functions
For new charts, the default setting of the States When Enabling chart property is Held. In previous releases, the default setting was Inherit. For more information, see Controlling States When Function-Call Inputs Reenable Charts in the Stateflow User's Guide.
In previous releases, if you set an initial value vector using fixed-point or enumerated values, all elements of that vector would have the same value as the first element.
In R2011a, this bug has been fixed.
If you have any models that rely on the behavior of initial value vectors from previous releases, these models will behave differently in R2011a.
In R2011a, the Mac screen menubar is disabled when Stateflow is installed. This behavior enables Stateflow Editor menu options to work normally on a Mac.
To enable the Mac screen menubar, modify the java.opts file by adding the following line:
-Dapple.laf.useScreenMenuBar=true
To prevent a slowdown in the MATLAB Editor, check that the java.opts file contains the following line:
-Dapple.awt.graphics.UseQuartz=true
A java.opts file can reside in the folder from which you launch MATLAB or in the bin/maci64 subfolder within the MATLAB root folder. A java.opts file in the latter location applies to all users, but individual users might not have permissions to modify a java.opts file there. If there is a java.opts file in both locations with settings that conflict, the setting in the java.opts file in the folder from which you launch MATLAB takes precedence. You might want to check both locations to see whether you have existing java.opts files and then decide which one to modify.
To create a new java.opts file or modify an existing copy in the folder from which you launch MATLAB:
Quit MATLAB.
Relaunch MATLAB and immediately enter the following line in the Command Window:
edit java.opts
To create or modify a java.opts file that applies to all users, you can enter the following line in the MATLAB Command Window at any time:
edit(fullfile(matlabroot,'bin','maci64','java.opts'))
In R2010b, you can use atomic subcharts to:
Break up a chart into standalone parts to facilitate team development
Reuse states across multiple charts and models
Animate and debug multiple charts side-by-side during simulation
Use simulation to test changes, one-by-one, without recompiling the entire chart
Generate reusable code for specific states or subcharts to enhance unit testing
A video demo is available to show you how to reuse the same state multiple times in a chart.
For more information, see Making States Reusable with Atomic Subcharts in the Stateflow User's Guide.
In R2010b, you can use library link charts that specify different data sizes, types, and complexities. Previously, all library charts had to use the same settings for data size, type, and complexity. For more information, see Creating Specialized Chart Libraries for Large-Scale Modeling in the Stateflow User's Guide.
In R2010b, you can control the behavior of the following Stateflow diagnostics in the Diagnostics > Stateflow pane of the Configuration Parameters dialog box:
Unused data and events
Unexpected backtracking
Invalid input data access in chart initialization
No unconditional default transitions
Transition outside natural parent
For more information, see Diagnostics Pane: Stateflow in the Simulink Graphical User Interface.
In R2010b, you can resolve symbols in your chart to symbols defined in custom code while parsing the chart. This enhancement enables more accurate and earlier reporting of unresolved symbols. Previously, the parser assumed that any unresolved chart symbols were defined in custom code. You could not resolve chart symbols to symbols in your custom code until make time. If the chart symbols were undefined in the custom code, a make error would appear.
Also, the Symbol Autocreation Wizard was previously available only for 32-bit Windows platforms that use lcc for the mex compiler. In R2010b, the Symbol Autocreation Wizard is available to help you fix unresolved symbols, regardless of the compiler or platform.
To enable or disable custom-code parsing, you can use the Parse custom code symbols check box on the Simulation Target > Custom Code pane of the Configuration Parameters dialog box.
For more information, see:
Previously, you could not use temporal logic conditions on transitions that originated from junctions. Now you can use temporal logic conditions on transitions from junctions as long as the full transition path connects two states. For more information, see Rules for Using Temporal Logic Operators and Example of Detecting Elapsed Time in the Stateflow User's Guide.
In R2010b, there are several changes to the Data properties dialog box.
Previously, using a Function-Call Split block to branch a function-call output event from a chart to separate subsystems required binding of the event to a state. In R2010b, binding is no longer required.
In R2010b, you cannot pass real values to function inputs of complex type. This restriction applies to the following types of chart functions:
Graphical functions
Truth table functions
Embedded MATLAB® functions
Simulink functions
If you have existing models that pass real values to function inputs of complex type, an error now appears when you try to simulate your model.
In R2010b, the following model configuration produces an error during Real-Time Workshop® code generation:
A Chart block resides in a For Each Subsystem.
The Chart block tries to access global data from Data Store Memory blocks.
You can now combine entry, during, and exit actions in a single line on state labels. This concise syntax provides enhanced readability for your chart and helps eliminate redundant code. For more information, see Combining State Actions to Eliminate Redundant Code in the Stateflow User's Guide.
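As a minimal sketch of the combined syntax (the state, data, and function names here are hypothetical), a single label line executes the same action on both entry and during:

Motor
en, du: speed = computeSpeed(throttle);

Previously, the assignment would have to be written twice, once under en: and once under du:.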
A new diagnostic now detects unused Stateflow data and events during simulation. A warning message appears, alerting you to data and events that you can remove. This enhancement helps you reduce the size of your model by removing objects that have no effect on simulation.
This diagnostic checks for usage of Stateflow data, except for the following types:
Machine-parented data
Inputs and outputs of Embedded MATLAB functions
This diagnostic checks for usage of Stateflow events, except for the following type:
Input events
For more information, see Diagnostic for Detecting Unused Data and Diagnostic for Detecting Unused Events in the Stateflow User's Guide.
You can explicitly pass variable-size chart inputs and outputs as inputs and outputs of the following functions:
Embedded MATLAB functions
Simulink functions
Truth table functions that use Embedded MATLAB action language
For more information, see Using Variable-Size Data in Stateflow Charts in the Stateflow User's Guide.
Chart-level data now support up to 128 bits of fixed-point precision for the following scopes:
Input
Output
Parameter
Data Store Memory
This increase in maximum precision from 32 to 128 bits provides these enhancements:
Supports generating efficient code for targets with non-standard word sizes
Allows charts to work with large fixed-point signals
You can explicitly pass chart-level data with these fixed-point word lengths as inputs and outputs of the following functions:
Embedded MATLAB functions
Simulink functions
Truth table functions that use Embedded MATLAB action language
For more information, see Using Fixed-Point Data in Stateflow Charts in the Stateflow User's Guide.
The new chart property States When Enabling helps you specify how states behave when a function-call input event reenables a chart. You can select one of the following settings in the Chart properties dialog box:
Held — Maintain most recent values of the states.
Reset — Revert to the initial conditions of the states.
Inherit — Inherit this setting from the parent subsystem.
This enhancement helps you more accurately control the behavior of a chart with a function-call input event. For more information, see Controlling States When Function-Call Inputs Reenable Charts and Setting Properties for a Single Chart in the Stateflow User's Guide.
You can now define structures of parameter scope that are tunable. For more information, see Defining Structures of Parameter Scope in the Stateflow User's Guide.
If you prevent inlining for a state, Real-Time Workshop® generated code contains a new static function inner_default_statename when:
Your chart contains a flow graph where an inner transition and default transition reach the same junction inside a state.
This flow graph is complex enough to exceed the inlining threshold.
For more information, see What Happens When You Prevent Inlining in the Stateflow User's Guide.
When you use the sizeof function in generated code to determine vector or matrix dimensions, sizeof always takes an input argument that evaluates to a data type.
When you use custom-code function calls in generated code, vector and matrix input arguments always use pass-by-reference instead of pass-by-value behavior.
The implicit event change(data_name) no longer works for machine-parented data. In R2010a, this implicit event works only with data at the chart level or lower in the hierarchy.
For machine-parented data, consider using change detection operators to determine when data values change. For more information, see Detecting Changes in Data Values in the Stateflow User's Guide.
Support for machine-parented events has been completely removed. In R2010a, an error message appears when you try to simulate a model that uses machine-parented events.
Support for Microsoft Visual Studio® .NET 2003 as a MEX compiler for simulation has been removed because MATLAB and Simulink no longer support this compiler. For information about alternative compilers, see Choosing a Compiler in the Stateflow User's Guide.
For Windows platforms, messages about Stateflow or Embedded MATLAB code generation and compilation status now appear only on the status bar of the Simulink Model Editor when you update diagram. Previously, these messages also appeared in the MATLAB Command Window. This enhancement minimizes distracting messages at the command prompt.
Previously, the Configuration Parameters dialog box showed the Stateflow section of the Optimization pane only when both of the following conditions were true:
Real-Time Workshop and Stateflow licenses were available.
Your model included Stateflow charts or Embedded MATLAB Function blocks.
In R2010a, the Configuration Parameters dialog box shows the Stateflow section of the Optimization pane when both licenses are available. Your model need not include any Stateflow charts or Embedded MATLAB Function blocks.
For a list of optimization parameters, see Optimization Pane: General in the Simulink Graphical User Interface.
In R2010a, Real-Time Workshop Embedded Coder™ software inlines generated code for Stateflow charts, even if the generated code calls a subfunction that accesses global Simulink data. This optimization uses less RAM and ROM.
In existing models, simulation and code generation of Stateflow charts and Truth Table blocks always behave as if the Treat as atomic unit check box in the Subsystem Parameters dialog box is selected. Starting in R2010a, this check box is always selected for consistency with existing behavior.
You can copy a function-call subsystem from a model and paste directly in the Stateflow Editor. This enhancement eliminates the steps of manually creating a Simulink function in your chart and pasting the contents of the subsystem into the new function. You can also copy a Simulink function from a chart and paste directly in a model as a function-call subsystem.
For more information, see Using Simulink Functions in Stateflow Charts in the Stateflow User's Guide.
If a flow graph or Embedded MATLAB function in your chart uses if-elseif-else decision logic, you can choose to generate switch-case statements during Real-Time Workshop Embedded Coder code generation. Switch-case statements provide more readable and efficient code than if-elseif-else statements when multiple decision branches are possible.
When you load models created in R2009a and earlier, this optimization is off to maintain backward compatibility. In previous versions, if-elseif-else logic appeared unchanged in generated code.
For more information, see the Stateflow User's Guide and the Real-Time Workshop Embedded Coder documentation.
In the Pattern Wizard, you can now choose to create a flow graph with switch-case decision logic. For more information, see Modeling Logic Patterns and Iterative Loops Using Flow Graphs in the Stateflow User's Guide.
You can now use more than 254 events in a chart. The previous limit of 254 events no longer applies. This enhancement supports large-scale models with charts that send and receive hundreds of events during simulation. Although Stateflow software does not limit the number of events, the underlying C compiler enforces a theoretical limit of (2^31)-1 events for the generated code.
For more information, see Defining Events in the Stateflow User's Guide.
During single-step mode, the Stateflow Debugger no longer zooms automatically to the chart object that is executing. Instead, the debugger opens the subviewer that contains that object. This enhancement minimizes visual disruptions as you step through your analysis of a simulation.
For more information, see Options to Control Execution Rate in the Debugger in the Stateflow User's Guide.
For Windows platforms, messages about Stateflow compilation status now appear on the status bar of the Simulink Model Editor when you update diagram.
Charts now support input and output data that vary in dimension during simulation. In this release, only Embedded MATLAB functions nested in the charts can manipulate these input and output data.
For more information, see Using Variable-Size Data in Stateflow Charts and Working with Variable-Size Data in MATLAB Functions in the Stateflow User's Guide.
The new compilation report provides compile-time type information for the variables and expressions in your Embedded MATLAB functions. This information helps you find the sources of error messages and understand type propagation issues, particularly for fixed-point data types. For more information, see Working with MATLAB Function Reports in the Simulink User's Guide.
The new compilation report is not supported by the MATLAB internal browser on Sun™ Solaris™ 64-bit platforms. To view the compilation report on Sun Solaris 64-bit platforms, you must have your MATLAB Web preferences configured to use an external browser, for example, Mozilla® Firefox®. To learn how to configure your MATLAB Web preferences, see Web Preferences in the MATLAB documentation.
You can now replace additional math functions with target-specific implementations.
Replacement of abs now works for both floating-point and integer arguments. Previously, replacement of abs with a target function worked only for floating-point arguments.
For more information about Target Function Libraries, see the Real-Time Workshop Embedded Coder documentation.
In a chart, you can now right-click at any level of the hierarchy (for example, states and subcharts) to insert flow graphs using the Patterns context menu. Previously, options in this context menu were available only if you right-clicked at the chart level.
To log chart signals, you can select Tools > Log Chart Signals in the Stateflow Editor. Previously, you had to right-click the Stateflow block in the Model Editor to open the Signal Logging dialog box.
For more information, see What You Can Log During Chart Simulation in the Stateflow User's Guide.
When you call C math functions, such as sin, exp, or pow, double precision applies unless the first input argument is explicitly single precision. For example, if you call the sin function with an integer argument, a cast of the input argument to a floating-point number of type double replaces the original argument. This behavior ensures consistent results between Simulink blocks and Stateflow charts for calls to C math functions.
To force a call to a single-precision version of a C math function, you must explicitly cast the function argument using the single cast operator. This method works only when a single-precision version of the function exists in the selected Target Function Library as it would in the 'C99 (ISO)' Target Function Library. For more information, see Calling C Functions in Actions and Type Cast Operations in the Stateflow User's Guide.
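As an illustrative sketch in Stateflow action language (the variable names are hypothetical), the cast on the argument determines which version of the function is called:

y = sin(k);          // k is an integer: k is cast to double and the double-precision sin is called
y = sin(single(k));  // explicit cast: the single-precision sin is called if the selected Target Function Library provides one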
In the Data properties dialog box, the Lock output scaling against changes by the autoscaling tool check box is now Lock data type setting against changes by the fixed-point tools. Previously, this check box was visible only if you entered an expression or a fixed-point data type, such as fixdt(1,16,0). This check box is now visible for any data type specification. This enhancement enables you to lock the current data type settings on the dialog box against changes that the Fixed-Point Advisor or Fixed-Point Tool chooses.
For more information, see Fixed-Point Data Properties and Automatic Scaling of Stateflow Fixed-Point Data in the Stateflow User's Guide.
If you save a model with Stateflow charts in the format of an earlier version, the charts appear closed when you open the new MDL-file.
You can save the complete simulation state at a specific time and then load that state for further simulation. This enhancement provides these benefits:
Enables running isolated segments of a simulation without starting from time t = 0, which saves time
Enables testing of the same chart configuration with different settings
Enables testing of hard-to-reach chart configurations by loading a specific simulation state
For more information, see Saving and Restoring Simulations with SimState in the Stateflow User's Guide.
In R2009a, you can use enumerated data in Embedded MATLAB functions, truth table functions that use Embedded MATLAB action language, and Truth Table blocks. See Using Enumerated Data in Stateflow Charts in the Stateflow User's Guide.
You can now use true and false as Boolean keywords in Stateflow action language. For more information, see Supported Symbols in Actions in the Stateflow User's Guide.
In R2009a, a new Function Inline Option parameter is available in the State properties dialog box. This parameter enables better control of inlining state functions in generated code, which provides these benefits:
Prevents small changes to a model from causing major changes to the structure of generated code
Enables easier manual inspection of generated code, because of a one-to-one mapping between the code and the model
For more information, see Controlling Inlining of State Functions in Generated Code in the Stateflow User's Guide.
A new diagnostic detects unintended backtracking behavior in flow graphs during simulation. A warning message appears, with suggestions on how to fix the flow graph to prevent unintended backtracking. For more information, see Best Practices for Creating Flow Graphs in the Stateflow User's Guide.
Embedded MATLAB functions in Stateflow charts can now use BLAS libraries to speed up low-level matrix operations during simulation. For more information, see Simulation Target Pane: General in the Simulink documentation.
You can now replace the pow function with a target-specific implementation. For more information about Target Function Libraries, see the Real-Time Workshop Embedded Coder documentation.
In R2009a, the graphical behavior of smart transitions has been enhanced as follows:
Smart transitions maintain straight lines between states and junctions whenever possible. Previously, smart transitions would preserve curved lines.
When you drag a smart transition radially around a junction, the end on the junction follows the tip to maintain a straight line by default. Previously, the end on the junction would maintain its original location and not follow the tip of the transition.
For more information, see What Smart Transitions Do in the Stateflow User's Guide.
When a top-level chart appears in the Stateflow Editor, clicking the up-arrow button in the toolbar causes the chart to close and the Simulink model that contains the chart to appear. This behavior is consistent with clicking the up-arrow button in the toolbar of a Simulink subsystem window.
Previously, clicking the up-arrow button for a top-level chart would cause the Simulink model to appear, but the chart would not close. For more information, see Navigating Subcharts in the Stateflow User's Guide.
In R2009a, type resolution for Stateflow data has been enhanced to support any MATLAB expression that evaluates to a type.
In R2009a, the generated code for managing Stateflow events uses a deterministic numbering method. This enhancement minimizes unnecessary differences in the generated code for charts between R2009a and any future release.
In R2009a, Real-Time Workshop generated code for charts with Simulink functions no longer uses unneeded global variables for the function inputs and outputs. The interface can be represented by local temporary variables or completely eliminated by optimizations, such as expression folding. This enhancement provides reduced RAM consumption and faster execution time.
In a future version of Stateflow software, use of en, du, ex, entry, during, or exit for naming data or events will be disallowed. In R2009a, a warning message appears when you run a model that contains any of these keywords as the names of data or events.
To avoid warning messages, rename any data or event that uses en, du, ex, entry, during, or exit as an identifier.
In a future version of Stateflow software, support for machine-parented events will be removed. In R2009a, a warning message appears when you simulate a model that uses machine-parented events.
You can use a Simulink function to embed a function-call subsystem in a Stateflow chart. You fill this function with Simulink blocks and call it in state actions and on transitions. Like graphical functions, truth table functions, and Embedded MATLAB functions, you can use multiple return values with Simulink functions.
For more information, see Using Simulink Functions in Stateflow Charts in the Stateflow User's Guide.
You can use data of an enumerated type in a Stateflow chart.
For more information, see Using Enumerated Data in Stateflow Charts in the Stateflow User's Guide and Enumerations and Modeling in the Simulink User's Guide.
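A minimal sketch of an enumerated type definition usable in a chart; the type and value names are made up for illustration. The class derives from Simulink.IntEnumType and must be on the MATLAB path:

classdef BasicColors < Simulink.IntEnumType
  enumeration
    Red(0)
    Yellow(1)
    Blue(2)
  end
end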
You can use alignment, distribution, and resizing commands on graphical chart objects, such as states, functions, and boxes.
For more information, see Formatting Chart Objects in the Stateflow User's Guide.
You can use a single dialog box to specify simulation and embeddable code generation options that apply to Stateflow charts and Truth Table blocks.
For more information, see Configuration Parameters Dialog Box in the Simulink Graphical User Interface and Building Targets in the Stateflow User's Guide.
The following sections describe changes in the panes of the Simulation Target dialog box for nonlibrary models.
For details, see Nonlibrary Models: Mapping of GUI Options from the Simulation Target Dialog Box to the Configuration Parameters Dialog Box.
For details, see Nonlibrary Models: Mapping of GUI Options from the Simulation Target Dialog Box to the Configuration Parameters Dialog Box. The description text for a model is now accessible only in the Model Explorer. When you select Simulink Root > Configuration Preferences in the Model Hierarchy pane, the text appears in the Description field for that model.
For nonlibrary models, the following table maps each GUI option in the Simulation Target dialog box to the equivalent in the Configuration Parameters dialog box. The options are listed in order of appearance in the Simulation Target dialog box.
The following sections describe changes in the panes of the Simulation Target dialog box for library models.
In previous releases, the General pane of the Simulation Target dialog box for library models appeared as follows.
In R2008b, these options are no longer available. All library models inherit these option settings from the main model to which the libraries are linked.
For details, see Library Models: Mapping of GUI Options from the Simulation Target Dialog Box to the Configuration Parameters Dialog Box. For older models where these panes contained information, the text is discarded.
For library models, the following table maps each GUI option in the Simulation Target dialog box to the equivalent in the Configuration Parameters dialog box. The options are listed in order of appearance in the Simulation Target dialog box.
The following sections describe enhancements to the Real-Time Workshop pane of the Configuration Parameters dialog box for nonlibrary models.
In previous releases, the Real-Time Workshop > Symbols pane of the Configuration Parameters dialog box appeared as follows.
In R2008b, a new option is available in this pane: Reserved names. You can use this option to specify a set of keywords that the Real-Time Workshop build process should not use. This action prevents naming conflicts between functions and variables from external environments and identifiers in the generated code.
You can also choose to use the reserved names specified in the Simulation Target > Symbols pane to avoid entering the same information twice for the nonlibrary model. Select the Use the same reserved names as Simulation Target check box.
In previous releases, the Real-Time Workshop > Custom Code pane of the Configuration Parameters dialog box appeared as follows.
In R2008b, a new option is available in this pane: Use the same custom code settings as Simulation Target. You can use this option to copy the custom code settings from the Simulation Target > Custom Code pane to avoid entering the same information twice for the nonlibrary model.
The following sections describe changes in the panes of the RTW Target dialog box for library models.
In previous releases, the General pane of the RTW Target dialog box for library models appeared as follows.
In R2008b, these options are no longer available. During Real-Time Workshop code generation, options specified for the main model are used.
For details, see Library Models: Mapping of GUI Options from the RTW Target Dialog Box to the Configuration Parameters Dialog Box.
In previous releases, the Description pane of the RTW Target dialog box appeared as follows.
In R2008b, these options are no longer available. For older models where the Description pane contained information, the text is discarded.
For library models, the following table maps each GUI option in the RTW Target dialog box to the equivalent in the Configuration Parameters dialog box. The options are listed in order of appearance in the RTW Target dialog box.
Previously, you could programmatically set options for simulation and embeddable code generation by accessing the API properties of Target objects sfun and rtw, respectively. In R2008b, the API properties of Target objects sfun and rtw are replaced by parameters that you configure using the commands get_param and set_param.
The following table maps API properties of the Target object sfun for nonlibrary models to the equivalent parameters in R2008b. Object properties are listed in alphabetical order; those not listed in the table do not have equivalent parameters in R2008b.
The following table maps API properties of the Target object sfun for library models to the equivalent parameters in R2008b. Object properties are listed in alphabetical order; those not listed in the table do not have equivalent parameters in R2008b.
The following table maps API properties of the Target object rtw for library models to the equivalent parameters in R2008b. Object properties are listed in alphabetical order; those not listed in the table do not have equivalent parameters in R2008b.
In R2008b, new parameters are added to the Configuration Parameters dialog box for simulation and embeddable code generation.
The following table lists the new simulation parameters that apply to nonlibrary models.
The following table lists the new simulation parameter that applies to library models.
The following table lists the new code generation parameters that apply to nonlibrary models.
The following table lists the new code generation parameters that apply to library models.
Updating Scripts That Set Options Programmatically for Simulation and Embeddable Code Generation
In previous releases, you could use the Stateflow API to set options for simulation and embeddable code generation by accessing the Target object (sfun or rtw) in a Stateflow machine. For example, you could set simulation options programmatically by running these commands in a MATLAB script:
r = slroot;
machine = r.find('-isa','Stateflow.Machine','Name','main_mdl');
t_sim = machine.find('-isa','Stateflow.Target','Name','sfun');
t_sim.setCodeFlag('debug',1);
t_sim.setCodeFlag('overflow',1);
t_sim.setCodeFlag('echo',1);
t_sim.getCodeFlag('debug');
t_sim.getCodeFlag('overflow');
t_sim.getCodeFlag('echo');
In R2008b, you must update your scripts to use the set_param and get_param commands to configure simulation and embeddable code generation. For example, you can update the previous script as follows:
cs = getActiveConfigSet(gcs);
set_param(cs,'SFSimEnableDebug','on');
set_param(cs,'SFSimOverflowDetection','on');
set_param(cs,'SFSimEcho','on');
get_param(cs,'SFSimEnableDebug');
get_param(cs,'SFSimOverflowDetection');
get_param(cs,'SFSimEcho');
Accessing Target Options for Library Models
In previous releases, you could access target options for library models via the Tools menu in the Stateflow Editor or the Contents pane of the Model Explorer. In R2008b, you must use the Tools menu to access target options for library models. For example, to specify parameters for the simulation target, select Tools > Open Simulation Target in the Stateflow Editor.
What Happens When You Load an Older Model in R2008b
When you use R2008b to load a model created in an earlier version, dialog box options and the equivalent object properties for simulation and embeddable code generation targets migrate automatically to the Configuration Parameters dialog box, except in the cases that follow.
For the simulation target of a nonlibrary model, these options and properties do not migrate to the Configuration Parameters dialog box. The information is discarded when you load the model, unless otherwise noted.
For the simulation target of a library model, these options and properties do not migrate to the Configuration Parameters dialog box. The information is discarded when you load the model.
For the embeddable code generation target of a library model, these options and properties do not migrate to the Configuration Parameters dialog box. The information is discarded when you load the model.
What Happens When You Save an Older Model in R2008b
When you use R2008b to save a model created in an earlier version, parameters for simulation and embeddable code generation from the Configuration Parameters dialog box are saved. However, properties of API Target objects sfun and rtw are not saved if those properties do not have an equivalent parameter in the Configuration Parameters dialog box. Properties that do not migrate to the Configuration Parameters dialog box are discarded when you load the model. Therefore, old Target object properties are not saved even if you choose to save the model as an older version (for example, R2007a).
Workaround for Library Models If They No Longer Use Local Custom Code Settings
Behavior in R2008a and Earlier Releases
In R2008a and earlier releases, the main model simulation target had a custom code option Use these custom code settings for all libraries, or the target property ApplyToAllLibs. The library model simulation target had a similar custom code option Use local custom code settings (do not inherit from main model), or the target property UseLocalCustomCodeSettings.
The combination of these two options determined which custom code settings applied to the library model. The case where neither option was selected, that is, "False (main model), False (library model)", was ambiguous, because the main model did not propagate custom code settings and the library model did not specify use of local custom code settings either. In this case, the default behavior was to use local custom code settings for the library model.
Behavior in R2008b
In R2008b, the Use these custom code settings for all libraries option for the main model is removed. The library model either picks up its local custom code settings if specified to do so, or uses the main model custom code settings when the Use local custom code settings option is not selected. This change introduces backward incompatibility for older models that use the "False (main model), False (library model)" setup for specifying custom code settings.
Workaround to Prevent Backward Incompatibility
To resolve the ambiguity in older models, you must explicitly select Use local custom code settings for the library model when you want the local custom code settings to apply:
Open the Stateflow simulation target for the library model.
Load the library model and unlock it.
Open one of the library charts in the Stateflow Editor.
Select Tools > Open Simulation Target.
In the dialog box that appears, select Use local custom code settings (do not inherit from main model).
You can use the Stateflow Pattern Wizard to create commonly used flow graphs such as for-loops in a quick and consistent manner.
For more information, see Modeling Logic Patterns and Iterative Loops Using Flow Graphs.
In the Data properties dialog box, you can initialize vectors and matrices in the Initial value field of the Value Attributes pane.
For more information, see How to Define Vectors and Matrices.
The default mode for ordering parallel states and outgoing transitions is now explicit. When you create a new chart, you define ordering explicitly in the Stateflow Editor. However, if you load a chart that uses implicit ordering, that mode is retained until you switch to explicit ordering.
For more information, see Execution Order for Parallel States and Evaluation Order for Outgoing Transitions.
In R2008b, Real-Time Workshop code generation is enhanced to enable optimized inlining of code generated for Stateflow charts.
When you parse a nonlibrary model, library charts that are not linked to this model are ignored. This enhancement enables more efficient parsing for nonlibrary models.
When you call MATLAB functions in a Stateflow chart, scalar inputs are no longer cast automatically to data of type double. This behavior applies when you use the ml operator to call a built-in or custom MATLAB function. (For details, see ml Namespace Operator.)
Previously, Stateflow generated code for simulation would automatically cast scalar inputs to data of type double when calling MATLAB functions in a chart. This behavior has changed. Stateflow charts created in earlier versions now generate errors during simulation if they contain calls to external MATLAB functions that expect scalar inputs of type double, but the inputs are of a different data type.
To prevent these errors, you can change the data type of a scalar input to double or add an explicit cast to type double in the function call. For example, you can change a function call from ml.function_name(i) to ml.function_name(double(i)).
In R2008b, you can set the Update Method of output data in continuous-time charts to Continuous. In previous releases, only local data could use a continuous update method.
If you enable the option Initialize Outputs Every Time Chart Wakes Up in the Chart properties dialog box, do not use output data as the first argument of a change detection operator. When this option is enabled, the change detection operator returns false if the first argument is an output data. In this case, there is no reason to perform change detection. (For details, see Detecting Changes in Data Values.)
Previously, Stateflow software would allow the use of output data with change detection operators when you enable the option Initialize Outputs Every Time Chart Wakes Up. This behavior has changed. Stateflow charts created in earlier versions now generate errors during parsing to prevent such behavior.
To detect unresolved symbol errors in a chart, you must start simulation or update the model diagram. When you parse a chart without simulation or diagram updates, the Stateflow parser does not have access to all the information needed to check for unresolved symbols, such as exported graphical functions from other charts and enumerated data types. Therefore, the parser now skips unresolved symbol detection to avoid generating false error messages. However, if you start simulation or update the model diagram, you invoke the model compilation process, which has full access to the information needed, and unresolved symbols are flagged.
For more information, see Parsing Stateflow Charts and How to Check for Undefined Symbols.
If you copy and paste a state in the Stateflow Editor, a unique name is generated for the new state only if the original state does not use the default ? label. For more information, see Copying Graphical Objects.
When you load a nonlibrary model with an active configuration reference for Stateflow charts or Truth Table blocks, a copy of the referenced configuration set is created and attached to your model. The new configuration set is marked active, and the configuration reference is marked inactive. This behavior does not apply to library models.
For information about using configuration references, see Manage a Configuration Reference.
In previous releases, you could load a nonlibrary model with an active configuration reference for Stateflow charts or Truth Table blocks. In R2008b, the configuration reference becomes inactive after you load the model, and a warning message appears to explain this change in behavior. To restore the configuration reference to its original active state, follow the instructions in the warning message.
For more information, see Configuration References for Models with Older Simulation Target Settings.
Stateflow charts support data with complex data types. You can perform basic arithmetic (addition, subtraction, and multiplication) and relational operations (equal and not equal) on complex data in Stateflow action language. You can also use complex input and output arguments for Embedded MATLAB functions in your chart.
For more information, see Using Complex Data in Stateflow Charts.
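A minimal sketch in Stateflow action language, assuming x and y are chart data of complex type (the names are illustrative):

y = x * complex(2, -1);  // addition, subtraction, and multiplication are supported
eq = (x == y);           // relational operations: equal and not equal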
You can specify more than one output argument in graphical functions, truth table functions, and Embedded MATLAB functions. Previously, you could specify only one output for these types of functions.
For more information, see Graphical Functions for Reusing Logic Patterns and Iterative Loops, Truth Table Functions for Decision-Making Logic, and Using MATLAB Functions in Stateflow Charts in the Stateflow documentation.
For more information, see Real-Time Workshop Embedded Coder Objects in Generated Code in the Stateflow documentation.
You can use a keyword named sec to define absolute time periods based on simulation time of your chart. Use this keyword as an input argument for temporal logic operators, such as after.
For more information, see Using Temporal Logic in State Actions and Transitions in the Stateflow documentation.
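A minimal illustrative sketch: used as a transition label, the following fires 12.3 seconds of simulation time after the source state becomes active (the value is arbitrary):

after(12.3, sec)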
Code files for simulation and code generation targets now reside in the slprj folder. Previously, generated code files resided in the sfprj folder.
For more information, see Generated Code Files for Targets You Build in the Stateflow documentation.
In R2008a, Real-Time Workshop code generation is enhanced to enable cross-product optimizations between Simulink blocks and Stateflow charts.
You can use the API method fitToView to zoom in on graphical objects in the Stateflow Editor.
For more information, see Zooming a Chart Object with the API in the Stateflow documentation.
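A minimal sketch using the Stateflow API; the state name 'On' is made up for illustration:

rt = sfroot;
st = rt.find('-isa', 'Stateflow.State', 'Name', 'On');
st.fitToView;   % zooms the Stateflow Editor in on this state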
If you copy and paste a state in the Stateflow Editor, a unique name automatically appears for the new state.
For more information, see Copying Graphical Objects in the Stateflow documentation.
The Data Type Assistant in the Data properties dialog box now displays status and details of fixed-point data types.
For more information, see Showing Fixed-Point Details in the Stateflow documentation.
R2008a introduces "What's This?" context-sensitive help for parameters that appear in the Simulink Configuration Parameters dialog box.
When you define a fixed-point data type in a Stateflow chart, you must specify scaling explicitly in the General pane of the Data properties dialog box. To specify the data type, use one of these methods in the dialog box:
Use a predefined option in the Type drop-down menu.
Use the Data Type Assistant to specify the Mode as fixed-point.
For more information, see Defining Data in the Stateflow documentation.
Using enhanced support for modeling continuous-time systems, you can:
Detect zero crossings on state transitions, enabling accurate simulation of dynamic systems with modal behavior.
Support the definition of continuous state variables and their derivatives for modeling hybrid systems as state charts with embedded dynamic equations.
For more information, see Modeling Continuous-Time Systems in Stateflow Charts.
Previously, Stateflow charts implemented continuous time simulation without maintaining mode in minor time steps or detecting zero crossings. Accurate continuous-time simulation requires several constraints on the allowable constructs in Stateflow charts. Charts created in earlier versions may generate errors if they violate these constraints.
Using a new super step property, you can enable Stateflow charts to take multiple transitions in each simulation time step. For more information, see Execution of a Chart with Super Step Semantics in the Stateflow documentation.
You can use a new data property, Data Must Resolve to Simulink signal object, to allow local and output data to explicitly inherit the following properties from Simulink.Signal objects of the same name that you define in the base workspace or model workspace:
Size
Type
Complexity
Minimum value
Maximum value
Initial value
Storage class (in Real-Time Workshop generated code)
For more information, see Resolving Data Properties from Simulink Signal Objects in the Stateflow documentation.
Stateflow software no longer performs implicit signal resolution, a feature supported for output data only. In prior releases, Stateflow software attempted to resolve outputs implicitly to inherit the size, type, complexity, and storage class of Simulink.Signal objects of the same name that existed in the base or model workspace. No other properties could be inherited from Simulink signals.
Now, local as well as output data can inherit additional properties from Simulink.Signal objects, but you must enable signal resolution explicitly. In models developed before Version 7.0 (R2007b) that rely on implicit signal resolution, Stateflow charts may not simulate or may generate code with unexpected storage classes. In these cases, Stateflow software automatically disables implicit signal resolution for chart outputs and generates a warning at model load time about possible incompatibilities. Before loading such a model, make sure you have loaded into the base or model workspace all Simulink.Signal objects that will be used for explicit resolution. After loading, resave your model in Version 7.0 (R2007b) of Stateflow software.
You can use the same dialog box interface for specifying data types in Stateflow charts and Simulink models. For more information, see Setting Data Properties in the Data Dialog Box in the Stateflow documentation.
When running Simulink models in external mode, you can now animate states, and view Stateflow test points in floating scopes and signal viewers. For more information, see Animating Stateflow Charts in the Stateflow documentation.
These Real-Time Workshop targets support Stateflow chart animation in external mode:
Stateflow Coder code generation software supports the Target Function Library published by Real-Time Workshop Embedded Coder software, allowing you to map a subset of built-in math functions and arithmetic operators to target-specific implementations. For more information, see Replacing Operators with Target-Specific Implementations and Replacement of C Math Library Functions with Target-Specific Implementations in the Stateflow documentation.
You can now define fixed-point parameters in Truth Table blocks.
You can use custom storage classes to control Stateflow local data, output data, and data store memory in Real-Time Workshop generated code.
For more information, see Custom Storage Classes in the Real-Time Workshop Embedded Coder documentation.
If you save a Stateflow chart in release 2007b, you will not be able to load the corresponding model in earlier versions of Simulink software. To work around this issue, save your model in the earlier version before loading it, as follows:
In the Simulink model window, select File > Save As.
In the Save as type field, select the version in which you want to load the model.
For example, if you want to load the model in the R2007a version of Simulink software, select Simulink 6.6/R2007a Models (#.mdl).
In previous releases, there was a bug where a default transition action occurred more than once if you used a history junction in a state containing only a single substate. The history junction did not remember the state's last active configuration unless there was more than one substate. This bug has been fixed.
You can use three new operators for detecting changes in Stateflow data values between time steps:
hasChanged
hasChangedFrom
hasChangedTo
For more information, see Detecting Changes in Data Values in the Stateflow documentation.
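A minimal sketch of the three operators as transition conditions, assuming u is chart data and the comparison values are arbitrary:

[hasChanged(u)]         // true if u differs from its value at the previous time step
[hasChangedFrom(u, 0)]  // true if u changed and its previous value was 0
[hasChangedTo(u, 1)]    // true if u changed and its current value is 1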
You can use a new chart property to constrain finite state machines to use either Mealy or Moore semantics. You can create Stateflow charts that implement pure Mealy or Moore semantics as a subset of Stateflow chart semantics. Mealy and Moore charts can be used in simulation and code generation of C and hardware description language (HDL). See Building Mealy and Moore Charts in the Stateflow documentation.
You can use a structure data type to interface Simulink bus signals with Stateflow charts and truth tables, and to define local and temporary structures. You specify Stateflow structure data types as Simulink.Bus objects. See Working with Structures and Bus Signals in Stateflow Charts in the Stateflow documentation.
You can use a new chart option Initialize Outputs Every Time Chart Wakes Up. Use this to initialize the value of outputs every time a chart wakes up, not only at time 0 (see Setting Properties for a Single Chart in the online documentation). When you enable this option, outputs are reset whenever the chart is triggered, whether by a function call, edge trigger, or clock tick. The option ensures that outputs are defined in every chart execution and prevents latching of outputs.
You can use MATLAB code to perform the following customizations of the standard Stateflow user interface:
Add items and submenus that execute custom commands in the Stateflow Editor
Disable or hide menu items in the Stateflow Editor
The MATLAB Workspace Browser is no longer available for debugging Stateflow charts. To view Stateflow data values at breakpoints during simulation, use the MATLAB command line or the Browse Data window in the Stateflow Debugger.
No C compiler ships with Stateflow software for 64-bit Windows operating systems. Because Stateflow software performs simulation through code generation, you must supply your own MEX-supported C compiler if you wish to use Stateflow Chart and Truth Table blocks. The C compilers available at the time of this writing for 64-bit Windows operating systems include the Microsoft Platform SDK and the Microsoft Visual Studio development system.
This release provides an interface that gives Stateflow charts access to global variables in Simulink models. A Simulink model implements global variables as data stores, created either as data store memory blocks or instances of Simulink.Signal objects. Now Stateflow charts can share global data with Simulink models by reading and writing data store memory symbolically using the Stateflow action language. See Sharing Global Data with Multiple Charts in the Stateflow documentation.
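A minimal sketch, assuming the model contains a Data Store Memory block named ErrorCount and the chart declares data of the same name with scope Data Store Memory; the chart then reads and writes the global symbolically in an action:

en: ErrorCount = ErrorCount + 1;   // updates the Simulink data store by name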
The Stateflow data properties dialog box has been enhanced to:
Accommodate fixed-point support
Support parameter expressions in data properties
Stateflow charts now accept Simulink parameters or parameters defined in the MATLAB workspace for the following properties in the data properties dialog box:
Initial Value
Minimum
Maximum
Entries for these parameters can be expressions that meet the following requirements:
Expressions must evaluate to scalar values.
For library charts, the expressions for these properties must evaluate to the same value for all instances of the library chart. Otherwise, a compile-time error appears.
See Defining Data in the Stateflow documentation.
You can now use the Embedded MATLAB action language in Stateflow truth tables. Previously, you were restricted to the Stateflow action language. The Embedded MATLAB action language offers the following advantages:
Supports the use of control loops and conditional constructs in truth table actions
Provides direct access to all MATLAB functions
See Truth Table Functions for Decision-Making Logic in the Stateflow documentation.
A truth table function block is now available as an element in the Simulink library. With this new block, you can call a truth table function directly from your Simulink model. Previously, there was a level of indirection. Your Simulink model had to include a Stateflow block that called a truth table function.
The Simulink truth table block supports the Embedded MATLAB language subset only. You must have a Stateflow software license to use the Truth Table block in Simulink models.
See Truth Table Functions for Decision-Making Logic in the Stateflow documentation.
A new Stateflow function sfgco retrieves the object handles of the most recently selected objects in a Stateflow chart.
Stateflow Coder software now implements a default case in generated switch statements to account for corrupted memory at runtime. In this situation, the default case performs a recovery operation by calling the child entry functions of the state whose variable is out of bounds. Reentering the state resets the variable to a valid value.
This recovery operation is not performed if a Stateflow chart contains any of the following elements:
Local events
Machine-parented events
Implicit events, such as state entry, state exit, and data change
If any of these conditions exist in a chart, state machine processing can become recursive, causing variables to temporarily assume values that are out of range. However, when processing finishes, the variables return to valid values.
You can specify the execution order of parallel states explicitly in Stateflow charts. Previously, the execution order of parallel states was governed solely by implicit rules, based on geometry. A disadvantage of implicit ordering is that it creates a dependency between design layout and execution priority. When you rearrange parallel states in your chart, you may inadvertently change order of execution and affect simulation results. Explicit ordering gives you more control over your designs. See Execution Order for Parallel States in the Stateflow documentation.
You can now directly hyperlink the Simulink subsystem connected to a Stateflow output event by using the context menu option Explore for any state or transition broadcasting event. See Accessing Simulink Subsystems Triggered By Output Events in the Stateflow documentation.
A common modeling error is to create charts where a transition loops out of the logical parent of the source and destination objects. The logical parent is either a common parent of the source and destination objects, or if there is no common parent, the nearest common ancestor.
Consider the following example:
In this chart, transition 1 loops outside of logical parent A, which is the common parent of transition source B and destination C.
This type of illegal looping causes the parent to deactivate and then reactivate. In the previous example, if transition 1 is taken, the exit action of A executes and then the entry action of A executes. Executing these actions unintentionally can cause side effects.
This situation is now detected as a parser warning that indicates how to fix the model.
You can now use color highlighting to differentiate syntax elements in the Stateflow action language. Syntax highlighting is enabled by default. To specify highlighting preferences, select Highlighting Preferences from the chart Edit menu, and then click the colors you want to change. See Differentiating Syntax Elements in the Stateflow Action Language in the Stateflow documentation.
This release introduces enhancements to Stateflow chart notes. The chart notes property dialog box now has a ClickFcn section, which includes the following options:
Use display text as click callback check box
ClickFcn edit field
See Annotations Properties Dialog Box in the Simulink documentation for a description of these new options.
This release adds the following chart viewing enhancements:
New View Menu Viewing Commands
New Shortcut Menu Commands
View Command Shortcut Keys
This release enhances the chart viewing commands. You can now maintain a history of the chart viewing commands, i.e., pan and zoom, that you execute for each chart window. The history allows you to quickly return to a previous view in a window, using commands for traversing the history (see New View Menu Viewing Commands).
This release adds the following viewing commands to the chart's View menu:
View > Back
Displays the previous view in the view history.
View > Forward
Displays the next view in the view history.
View > Go To Parent
Goes to the parent of the current subchart.
The shortcut menu now has Forward and Go To Parent commands. The Back command has been moved to be with these new commands. These commands are the same as those described in New View Menu Viewing Commands.
This release adds viewing command shortcut keys for users running the UNIX operating system or the Windows operating system.
Stateflow charts now support a mode where you can explicitly specify the testing or execution order of transitions sourced by states and junctions. This mode is called the explicit mode. The implicit mode retains the old functionality, where the transition execution order is determined based on a set of rules (parent depth, triggered and conditional properties, and geometry around the source). In addition, the transition numbers, according to their execution order, now appear on the Stateflow Editor at all times, in both implicit and explicit modes.
Old models created in earlier releases load in implicit mode, which produces identical simulation results. Any new charts created use implicit mode by default. To change to explicit mode, use the Chart properties dialog box.
Charts in library models do not require full specification of data type and size. During simulation, library charts can inherit data properties from the main model in which you link them.
This enhancement also affects code generation in library charts. When building simulation and code generation targets, only the library charts that you link in the main model participate in code generation.
In previous releases, library charts required complete specification of data properties. You had to enter these properties for both the library chart and the main model before simulation.
Data in Stateflow charts and Embedded MATLAB functions may now be explicitly typed using the same aliased types that a Simulink model uses. Also, inherited and parameterized data types in Stateflow charts and Embedded MATLAB functions support propagation of aliased types. However, code generated for Stateflow charts and Embedded MATLAB functions does not yet preserve aliased data types.
|
http://www.mathworks.co.uk/help/stateflow/release-notes.html?nocookie=true
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
The official source of product insight from the Visual Studio Engineering Team
This post covers:
· Adding code-behind to a Start Page
· Persisting user settings using the Visual Studio Settings Store
· Sharing a Start Page on the Visual Studio Gallery
In these next steps we will remove the unnecessary XAML in the user control and add a button with an event handler.
1. Open the MyControl.xaml file, switch to the XAML view, and delete the contents of the user control.
2. Add a Button into the UserControl, name it and add a click event handler. This can be automatically ‘wired up’ by pressing the tab key when the ‘<New Event Handler>’ popup appears.
3. Finally, add a caption into the Button content. The end result should be as follows:

<Button x:Name="ChooseImageButton" Click="ChooseImageButton_Click">Choose Background Image</Button>
Add the following using statement at the top of the code file.
using Microsoft.Win32;
By default the namespace for file dialogs is not included in the template.
Navigate to the Button Click event handler which was added from the designer surface. This is where we will add the code to open a file dialog and set the background image. We only want to allow users to pick .jpg or .png files to use as a background so a filter is added to the dialog for files which have these extensions.
/// <summary>
/// Launches the open file dialog box
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void ChooseImageButton_Click(object sender, RoutedEventArgs e)
{
// Create Instance of Open File Dialog
OpenFileDialog dlg = new OpenFileDialog();
// Filter dialog to show only images
dlg.Filter = "Images (*.PNG;*.JPG;)|*.PNG;*.JPG;";
// Show open file dialog box
Nullable<bool> result = dlg.ShowDialog();
}
For more information on the open file dialog, check out the MSDN article:
Now that we have a way to select an image, we need to create a WPF Dependency Property to temporarily store the image location. Using a Dependency Property allows us to use data binding between the source and the Image control on the Start Page.
To easily add a Dependency Property, type “propdp” and press tab twice. This will create the Dependency Property outline for you using a code snippet. Be sure to do this outside of any method in the class.
Edit the Dependency Property template so the end result is as follows:
public string StartPageImage
{
    get { return (string)GetValue(StartPageImageProperty); }
    set { SetValue(StartPageImageProperty, value); }
}

// Using a DependencyProperty as the backing store for StartPageImage. This enables animation, styling, binding, etc...
public static readonly DependencyProperty StartPageImageProperty =
    DependencyProperty.Register("StartPageImage", typeof(string), typeof(MyControl), null);
For more information on dependency properties check out the MSDN article:
Navigate back to the Button Click event handler where the file dialog code was added. To store the image location from the file dialog in this new property, add the following code after the dlg.ShowDialog() call.
// Process the dialog result
if (result == true)
{
    StartPageImage = dlg.FileName;
}
4. Switch back to the StartPage.xaml file and re-expand the Grid with the comment of ‘<!--Left Column-->’.
Scroll down to the Start Page Options section as this is where we will place the control.
5. The namespace for the UserControl has already been included in the template, so all we need to do is add an instance of the control to the page.
<my:MyControl x:Name="ImageController" />
6. Now all that is left to do is to replace the hardcoded Image Source at the top of the StartPage.xaml file with a Binding to the UserControl we just added.
<Image Source="{Binding ElementName=ImageController, Path=StartPageImage}" Stretch="UniformToFill"/>
Open the MyControl.xaml.cs file, navigate to the OnLoaded event handler, and uncomment the settings store example code.
Update the names of the variables in the example code to be more relevant to this scenario.
string path = StartPageSettings.RetrieveString("ImageSource");
Add the following code to check that the stored value is not null and is a valid file path. The end result should look like this:
// Load control user settings from previous session.
string path = StartPageSettings.RetrieveString("ImageSource");

// Apply the setting to the dependency property.
if (path != null && System.IO.File.Exists(path))
{
    StartPageImage = path;
}
Add the following code to the ChooseImageButton_Click event handler to store the image chosen by the user. The end result should look like this:
// Process open file dialog box results.
if (result == true)
{
    StartPageImage = dlg.FileName;
    StartPageSettings.StoreString("ImageSource", dlg.FileName);
}
Copy the file Path to the output directory of the solution; this will make locating the file easier when in the upload process.
One of the quickest ways to get this path is to click on the ‘Open Folder in Windows Explorer’ command in the solution explorer context menu.
Go to the Visual Studio Gallery and click the Upload button.
If you are not already signed in with a Windows Live ID, you will be prompted to sign in at this point.
Once signed in, select ‘Tool’ as the Extension Type and click the ‘Next’ button.
Select the ‘upload my tool’ option and click the browse button to specify the file to upload.
Paste the path copied from the bin directory of the extension and select the ‘MySimpleStartPage’ VSIX file.
Once the path has been added to the page, click the ‘Next’ button.
7. Fill in the metadata for this extension; in this example, we can just check the ‘Start Pages’ category and add a brief description.
8. Before clicking the ‘Create Contribution’ button you will need to agree to the Contribution Agreement.
After clicking the ‘Create Contribution’ button, the Visual Studio Gallery will create a page for your extension based on the information entered here.
Note: The extension is not public until this page is ‘published’.
9. Click the ‘publish’ link to make your extension publicly available.
|
http://blogs.msdn.com/b/visualstudio/archive/2010/07/29/walkthrough-creating-a-custom-start-page-part-2.aspx
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
The Java EE 7 Tutorial
45.2 Basic JMS API Concepts
45.2.1 JMS API Architecture
Figure 45-2 illustrates the way the parts of a JMS application interact. Administrative tools or annotations allow you to bind destinations and connection factories into a JNDI namespace. A JMS client can then use resource injection to access the administered objects in the namespace and then establish a logical connection to the same objects through the JMS provider.
Figure 45-2 JMS API Architecture
Description of "Figure 45-2 JMS API Architecture"
45.2.2 Messaging Styles
Before the JMS API existed, most messaging products supported either the point-to-point or the publish/subscribe style of messaging.
45.2.2.1 Point-to-Point Messaging Style
Point-to-point (PTP) messaging, illustrated in Figure 45-3, has the following characteristics.
Each message has only one consumer.
The receiver can fetch the message whether or not it was running when the client sent the message.
Figure 45-3 Point-to-Point Messaging
Description of "Figure 45-3 Point-to-Point Messaging"
Use PTP messaging when every message you send must be processed successfully by one consumer.
45.2.2.2 Publish/Subscribe Messaging Style
Figure 45-4 illustrates pub/sub messaging.
Figure 45-4 Publish/Subscribe Messaging
Description of "Figure 45-4 Publish/Subscribe Messaging"
45.2.3 Message Consumption
Messages can be consumed in either of two ways.
Synchronously: A consumer explicitly fetches the message from the destination by calling the receive method. The receive method can block until a message arrives or can time out if a message does not arrive within a specified time limit.
Asynchronously: A client can register a message listener with a consumer. Whenever a message arrives at the destination, the JMS provider delivers the message by calling the listener's onMessage method, which acts on the contents of the message. In a Java EE application, a message-driven bean serves as a message listener (it too has an onMessage method), but a client does not need to register it with a consumer.
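As a rough sketch of both styles using the JMS 2.0 simplified API (this fragment assumes a connectionFactory and queue injected elsewhere, and registering a listener this way applies to application clients rather than EJBs):

import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.MessageListener;

// ...inside a client method:
JMSContext context = connectionFactory.createContext();
JMSConsumer consumer = context.createConsumer(queue);

// Synchronous consumption: block for up to five seconds waiting for a message.
String body = consumer.receiveBody(String.class, 5000);

// Asynchronous consumption: the provider calls onMessage as messages arrive.
consumer.setMessageListener(new MessageListener() {
    public void onMessage(Message message) {
        // act on the contents of the message
    }
});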
|
http://docs.oracle.com/javaee/7/tutorial/doc/jms-concepts002.htm
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
There's an old tool I've had on my desktop for many years. It was originally called NetMedic, and I think I paid $50 for it back in, oh, about 1997. At some point it was sold, rebranded as VitalAgentIT and given away for free. Then it was built into some expensive, high-end IT solution and never heard from again. Here's the classic UI, still one of the nicest Windows apps ever.
Without this neat app, I always feel a little blind. It tells you exactly what kind of traffic is moving over your network in a slick, easy-to-grasp user interface.
I've been installing this puppy since NT4.0 days. It has some occasional startup problems with XP Pro, but overall, it's held up remarkably well.
Until XP x64 Edition. It won't run on my HP xw8200 workstation, and I don't really expect it to fare better on Vista, so it's getting to be the end of the road for this app. But it's just so useful, and I haven't found an inexpensive solution to replace it. So it's time to write my own.
Ideally, I'd like to write an almost exact replica of the original NetMedic app. WPF provides an ideal presentation stack to mimic arbitrary UI, so this makes a good project for learning WPF. It would be really neat if I could apply a "NetMedic" style, as well as other cool styles like USS Enterprise control panels. To pick the geekiest possible example.
NetMedic/VitalAgentIT shares similarities with another system monitoring app, Norton System Doctor.
Both programs are built on Win32 APIs. NetMedic is (probably) built on NetMon, and Norton System Doctor is built on the performance monitor API (PerfMon).
Since there might be several different APIs for monitoring, I wanted to abstract the data source. I whipped up the generic ISignalGenerator interface.
public interface ISignalGenerator<T>
{
    void Start();
    void Stop();

    double SampleRate { get; }
    T Seed { get; }
    long StartSample { get; }
    double Gain { get; set; }
    double DCOffset { get; set; }
    double Frequency { get; set; }

    event NewSamplesEventHandler<T> NewSamples;
}
The Start method turns on the firehose, and the performance monitor samples the underlying data stream every SampleRate milliseconds. Data samples are returned through the NewSamples event. The underlying implementation is assumed to be asynchronous.
The .NET Framework provides the convenient PerformanceCounter wrapper type around the Win32 implementation. This type lives in the System.Diagnostics namespace.
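As a minimal illustration of polling a counter (the class and member names here are invented, not from the SystemMedic source):

using System;
using System.Diagnostics;
using System.Timers;

// Sketch: poll a PerformanceCounter on a timer and raise an event per sample.
public class CounterPoller
{
    private readonly PerformanceCounter counter;
    private readonly Timer timer;

    public event Action<float> NewSample;

    public CounterPoller(string category, string counterName, string instance, double sampleRateMs)
    {
        counter = new PerformanceCounter(category, counterName, instance);
        timer = new Timer(sampleRateMs);
        timer.Elapsed += delegate
        {
            Action<float> handler = NewSample;
            if (handler != null)
                handler(counter.NextValue()); // samples the underlying Win32 counter
        };
    }

    public void Start() { timer.Start(); }
    public void Stop() { timer.Stop(); }
}

// Usage: new CounterPoller("Processor", "% Processor Time", "_Total", 1000).Start();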
I wanted to abstract the PerformanceCounter type a bit, and I wanted to give it a buffer for its data stream. The IPerformanceMonitor interface defines the controller part of the Model-View-Controller pattern.
namespace SystemMonitorLib.PerformanceMonitors
{
    public interface IPerformanceMonitor : INotifyPropertyChanged
    {
        // ...
    }
}

namespace SystemMonitorLib
{
    public interface ISignalView<T> : INotifyPropertyChanged
    {
        ISignalGenerator<T> SignalSource { get; }
        T[] DisplayBuffer { get; }
        int DisplayBufferSize { get; set; }
        T MinValue { get; }
        T MaxValue { get; }
    }
}
My PerformanceMonitorControl type derives from System.Windows.Controls.UserControl. As far as layout is concerned, there isn't much to it.
<UserControl x:Class="PerformanceMonitorControlLib.PerformanceMonitorControl"
Ideally, I want to be able to drop individual PerformanceMonitorControl instances onto a WPF page from the Cider Toolbox, then set their properties in Cider's property grid. But we're not quite there yet, so here's some hand-tooled code.
<Window x:Class="SystemMedic.Window1"
The neat thing about this code is that I was able to compose two different PerformanceMonitorControl instances (one a numeric display, the other a signal trace) to form a new display (bolded code). Ultimately, all the hard-coded property values will be factored into styles.
Here's what the app looks like. I call it SystemMedic, since it's trying to be a combination of NetMedic and Norton System Doctor.
There's a lot more I can do with this object model, but this is a start. Ultimately, I want it to look much more similar to NetMedic. It would also be cool to come up with a style that looks like the USS Enterprise displays from, say, Star Trek: The Next Generation. Which is to say, I want to be able to apply arbitrary styling to each control instance. Being able to do that at design time would be the sweetest, and between Sparkle and Cider, it should be quite doable.
Update: By popular demand, I've posted the source code, with the caveat that it's quite prototypical.
Here are a few things you'll need to know:
Since this is a prototype that I haven't touched since May, I'll describe a few design details for future work.
In general, the code needs to be refactored to adhere more closely to the Framework Design Guidelines. I would start by removing the interfaces and replacing them with abstract base classes.
I think the framework should be simplified greatly, as well. I originally had in mind a much more general signal-display framework, which could handle fast (kHz) signals, but that's largely unnecessary for this application.
In any case, have fun, and let me know how it works for you.
|
http://blogs.msdn.com/b/jgalasyn/archive/2006/05/09/594037.aspx
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Fundamental use cases for porting iPhone and Android applications to Qt
Revision as of 04:16, 11 October 2012
A web page can be displayed with the WebView element by setting its url property. Note that the QtWebKit import is also required.
import QtQuick 1.0
import QtWebKit 1.0
WebView {
height: 640
width: 480
url: ""
}
To get a scrolling webview, put the WebView element inside a Flickable.
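A minimal sketch of that arrangement (sizes and URL are placeholders):

import QtQuick 1.0
import QtWebKit 1.0

Flickable {
    width: 480
    height: 640
    contentWidth: webView.width
    contentHeight: webView.height

    WebView {
        id: webView
        url: "http://www.example.com"
        preferredWidth: 480
    }
}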
|
http://developer.nokia.com/community/wiki/index.php?title=Fundamental_use_cases_for_porting_iPhone_and_Android_applications_to_Qt&diff=174961&oldid=111301
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
> From: Christopher Lenz [mailto:cmlenz@gmx.de]
> Dominique Devienne wrote:
>
> If you also define the namespace prefix "xsi", the document is still
> well-formed so everything should be fine. Xerces will not try to validate
> the build file though, since Ant doesn't put the parser in validating
> mode, of course.
Granted. But Peter apparently says that even though the document is
well-formed (declared proper xsi NS prefix), Ant might still refuse it...
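E.g., a build file along these lines (schema location purely illustrative):

<?xml version="1.0"?>
<project name="example" default="build"
         xmlns:
         xsi:
  <target name="build"/>
</project>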
I'm not sure whether it's a valid behavior as far as pure XML processing is
concerned. xsi:schemaLocation is just one example, there are other XML
technologies which use namespace'd XML attributes, like XInclude, XLink,
etc... Ant not supporting any of these might be problematic!?!? --DD
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org
|
http://mail-archives.apache.org/mod_mbox/ant-dev/200311.mbox/%3CD44A54C298394F4E967EC8538B1E00F10248CB2D@lgchexch002.lgc.com%3E
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
ServiceProcessInstaller Class
Installs an executable containing classes that extend ServiceBase. This class is called by installation utilities, such as InstallUtil.exe, when installing a service application.
For a list of all members of this type, see ServiceProcessInstaller Members.
System.Object
System.MarshalByRefObject
System.ComponentModel.Component
System.Configuration.Install.Installer
System.Configuration.Install.ComponentInstaller
System.ServiceProcess.ServiceProcessInstaller
[Visual Basic]
Public Class ServiceProcessInstaller
   Inherits ComponentInstaller

[C#]
public class ServiceProcessInstaller : ComponentInstaller

[C++]
public __gc class ServiceProcessInstaller : public ComponentInstaller

[JScript]
public class ServiceProcessInstaller extends ComponentInstaller

See Also
ServiceProcessInstaller Members | System.ServiceProcess Namespace | ServiceInstaller | ServiceBase | ComponentInstaller | Installers | ServiceAccount
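A typical installer class pairs a ServiceProcessInstaller with a ServiceInstaller inside an Installer-derived class; the service name below is illustrative:

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        // Configure the account the service process runs under.
        ServiceProcessInstaller processInstaller = new ServiceProcessInstaller();
        processInstaller.Account = ServiceAccount.LocalSystem;

        // Configure the individual service hosted by the executable.
        ServiceInstaller serviceInstaller = new ServiceInstaller();
        serviceInstaller.ServiceName = "MyService";
        serviceInstaller.StartType = ServiceStartMode.Automatic;

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}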
|
http://msdn.microsoft.com/en-US/library/system.serviceprocess.serviceprocessinstaller(v=vs.71).aspx
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
System.Speech.Recognition.SrgsGrammar Namespace
With the members of the System.Speech.Recognition.SrgsGrammar namespace, you can programmatically create grammars that comply with the W3C Speech Recognition Grammar Specification Version 1.0 (SRGS).
To create an SRGS grammar programmatically, you construct an empty SrgsDocument instance and add instances of classes that represent SRGS elements. The SrgsItem, SrgsOneOf, SrgsRule, SrgsRuleRef, SrgsSemanticInterpretationTag, and SrgsToken classes represent elements defined in the SRGS specification. Some of the properties of the SrgsDocument class represent attributes in the SRGS specification, such as Root, Mode, Culture, and XmlBase. See SRGS Grammar XML for a reference to the elements and attributes of the SRGS specification as supported by System.Speech.
To add a grammar rule to a SrgsDocument, use the Add method of the SrgsRule class. You can modify the text within an SRGS element using the Text property of a SrgsText instance.
With the SrgsSubset class, you can optimize recognition of phrases in a grammar by specifying subsets of a complete phrase that will be allowed to constitute a match, and by selecting a matching mode from the SubsetMatchingMode enumeration.
See Create Grammars Using SrgsGrammar in the System Speech Programming Guide for .NET Framework 4.0 for more information and examples.
You can also construct SrgsDocument instances from existing SRGS-compliant XML grammar files, from an instance of SrgsRule, or from an instance of GrammarBuilder.
You can use the methods of the SrgsGrammarCompiler class to prepare completed SrgsDocument objects for consumption by a speech recognition engine.
Grammars created with members of the System.Speech.Recognition.SrgsGrammar namespace can be used by constructors of the Grammar class to create Grammar objects.
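As a small illustration (the rule name and words are arbitrary), a grammar can be assembled and handed to a Grammar constructor like this:

using System.Speech.Recognition;
using System.Speech.Recognition.SrgsGrammar;

// ...inside a method:
SrgsDocument doc = new SrgsDocument();

// Build a rule that matches "pick" followed by one of three color words.
SrgsRule colorRule = new SrgsRule("color");
colorRule.Add(new SrgsItem("pick"));
colorRule.Add(new SrgsOneOf("red", "green", "blue"));

doc.Rules.Add(colorRule);
doc.Root = colorRule;

// The finished document can be used to construct a Grammar.
Grammar grammar = new Grammar(doc);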
|
http://msdn.microsoft.com/en-us/library/system.speech.recognition.srgsgrammar(d=printer).aspx
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Peer-to-Peer Collaboration
The Peer-to-Peer Collaboration Infrastructure is a simplified implementation of the Microsoft Windows Peer-to-Peer Infrastructure that leverages the People Near Me service in Windows Vista and later platforms. It is best used for peer-enabled applications within a subnet for which the People Near Me service operates, although it can service internet endpoints or contacts as well. It incorporates the common Contact Manager that is used by Live Messenger and other Live-aware applications to determine contact endpoints, availability, and presence.
A typical peer-to-peer collaboration application consists of the following steps:
Peer determines the identity of a peer who is interested in hosting a collaboration session
A request to host a session is sent, somehow, and the host peer agrees to manage collaboration activity.
The host invites contacts on the subnet (including the requestor) to a session.
All peers who want to collaborate may add the host to their contact managers.
Most peers will send invitation responses, whether accepted or declined, back to the host peer in a timely fashion.
All peers who want to collaborate will subscribe to the host peer.
While the peers are performing their initial collaboration activity, the host peer may add remote peers to its contact manager. It also processes all invitation responses to determine who has accepted, who has declined, and who has not answered. It may cancel invitations to those who have not answered, or perform some other activity.
At this point, the host peer can start a collaboration session with all invited peers, or register an application with the collaboration infrastructure. P2P applications use the Peer-to-Peer Collaboration Infrastructure and the System.Net.PeerToPeer.Collaboration namespace to coordinate communications for games, bulletin boards, conferencing, and other serverless presence applications.
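A rough sketch of the sign-in and invitation steps from the host's side (error handling and application registration omitted; this is illustrative, not the full pattern above):

using System.Net.PeerToPeer.Collaboration;

// Sign in to People Near Me so peers on the subnet can discover this host.
PeerCollaboration.SignIn(PeerScope.NearMe);

// Invite every peer currently visible on the subnet.
foreach (PeerNearMe peer in PeerCollaboration.GetPeersNearMe())
{
    PeerInvitationResponse response = peer.Invite();
    if (response.PeerInvitationResponseType == PeerInvitationResponseType.Accepted)
    {
        // The peer accepted; add it to the contact manager, subscribe, etc.
    }
}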
In an Active Directory domain, domain controllers provide authentication services using Kerberos. In a serverless peer environment, the peers must provide their own authentication. For Peer-to-Peer Networking, any node can act as a CA, removing the requirement of a root certificate in each peer's trusted root store. Authentication is provided using self-signed certificates, formatted as X.509 certificates. These are certificates that are created by each peer, which generates its own public/private key pair and signs the certificate with the private key.
|
http://msdn.microsoft.com/en-us/library/vstudio/bb968787(v=vs.100).aspx
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Asynchronous Programming
What happens if you have a lot of sockets that are waiting to read or write data? Asynchronous programming lets you write code that basically says, "Call my callback when you actually have something for me." Although this approach is used all the time in C, it's even nicer in Python because Python has first-class functions.
These days, there are many servers written asynchronously. nginx is a "simplified version" of Apache that is both very fast and highly concurrent. Squid, the popular open source Web proxy, is also written asynchronously. This makes a lot of sense if you think about what a Web proxy does. It spends all of its time managing a ton of sockets, funneling data between clients and servers.
Asynchronous programming starts with operating system APIs such as select, poll, kqueue, aio, and epoll. These APIs let you write code that basically says, "These are the file descriptors I'm working with. Which of them is ready for me to do some reading or writing?" In Python, libraries like the built-in asyncore module and the popular Twisted framework take these low-level APIs and orchestrate callback systems on top of them.
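For a taste of the lowest layer (the sockets list is assumed to have been created and set non-blocking elsewhere), a select-based read loop looks roughly like this:

import select

while True:
    readable, writable, errored = select.select(sockets, [], [], 1.0)
    for sock in readable:
        data = sock.recv(4096)  # won't block: select said it's ready
        if data:
            print "read %d bytes" % len(data)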
Let's look at an example of asynchronous code. First, the linear (non-asynchronous) code in Example 4.
def handle_request(request):
    data = talk_to_database()
    print "Processing request with data from database."
Re-written asynchronously, you end up with something like Example 5. (You can move use_data into a new top-level function after handle_request, but it's convenient to do it this way to maintain access to request via a closure.)
def handle_request(request):
    def use_data(data):
        print "Processing request with data from database."
    deferred = talk_to_database()
    deferred.addCallback(use_data)
Notice that the talk_to_database function no longer returns a value directly. Rather, it returns a deferred object to which you can attach callbacks.
This is called "continuation passing style". Rather than waiting for a function to simply return, you must pass a callback detailing how to continue once the data is obtained. Because you must use continuation passing style anytime you call a function that might block, it soon permeates your codebase. This can be painful and prevents you from using any library that does blocking I/O unless it's written using continuation passing style.
On the other hand, living in the asynchronous ghetto has its benefits. Aside from the clear concurrency benefits, the Twisted codebase is widely regarded as well-written code, and it provides implementations for most popular protocols.
Subroutines Versus Coroutines
In the beginning, there was the GOTO. It didn't take any parameters, and it was a one-way trip.
A coroutine is like a subroutine, except it doesn't necessarily return. With subroutines, you can do things like:
f -> g -> h (return to g, return to f)
With coroutines, you can do things like:
f -> g -> h -> f
Coroutines can be used for simple cooperative multitasking. The Python Cookbook has a great recipe for coroutines based on generators. Example 6 is a simple version of it.
import itertools

def my_coro(name):
    count = 0
    while True:
        count += 1
        print "%s %s" % (name, count)
        yield

coros = [my_coro('coro1'), my_coro('coro2')]

for coro in itertools.cycle(coros):  # A round-robin scheduler :)
    coro.next()

# Produces:
#
# coro1 1
# coro2 1
# coro1 2
# coro2 2
# ...
Using generators to implement coroutines is definitely a cute hack. By the way, this same trick can be used in Twisted to alleviate some of the need to use callbacks everywhere.
On the other hand, there are some limitations to this technique. Specifically, you can only call yield in the generator. What happens if my_coro calls some function f and f wants to yield? There are some workarounds, but the limitation is actually pretty core to Python. (Because Python isn't stackless, it can't support true continuations in the same way that Scheme can.) I've written about this topic in detail on my blog.
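One common workaround, for example, is manual delegation: the coroutine drives its helper and re-yields at each of the helper's steps. A hypothetical sketch:

def helper():
    # This function "wants to yield" on behalf of the coroutine.
    yield "step 1"
    yield "step 2"

def my_coro(name):
    while True:
        # Workaround: iterate the sub-generator and re-yield for each step.
        for step in helper():
            print "%s %s" % (name, step)
            yield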
|
http://www.drdobbs.com/tools/concurrency-and-python/206103078?pgno=3
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Row-level permissions model
The row-level permissions system is built around a new model, RowLevelPermission:

class RowLevelPermission(models.Model):
    model_id = models.PositiveIntegerField("'Type' ID")
    model_ct = models.ForeignKey(ContentType, verbose_name="'Type' content model", related_name="model_ct")
    owner_id = models.PositiveIntegerField("'Owner' ID")
    owner_ct = models.ForeignKey(ContentType, verbose_name="'Owner' content model", related_name="owner_ct")
    negative = models.BooleanField()
    permission = models.ForeignKey(Permission)

    model = models.GenericForeignKey(fk_field='model_id', ct_field='model_ct')
    owner = models.GenericForeignKey(fk_field='owner_id', ct_field='owner_ct')

    objects = RowLevelPermissionManager()

    class Meta:
        verbose_name = _('row level permission')
        verbose_name_plural = _('row level permissions')
        unique_together = (('model_ct', 'model_id', 'owner_id', 'owner_ct', 'permission'),)
This does not modify the current permissions table at all, and can be layered on top of the existing permission system. Enabling row-level permissions on a model attaches a generic relation to the class:

gen_rel = models.GenericRelation(RowLevelPermission, object_id_field="model_id", content_model_field="model_ct")
new_class.add_to_class("row_level_permissions", gen_rel)
django.db.models.options.Options has been modified to set row_level_permissions as disabled by default.
Owner objects
Currently, you can set up an owner by including the following relation:
row_level_permissions_owned = models.GenericRelation(RowLevelPermission, object_id_field="owner_id", content_model_field="owner_ct", related_name="owner")
I might be changing this around in the near future, but I only expect it to be used a few times, and I don't see a large need to make this a similar process to the enabling of row-level permissions on objects. Please give feedback if you think otherwise.
Checking of row-level permissions
Checking of RLPs is done in the following order: User RLP->Group RLP->User Model Level->Group Model Level, stopping at the first positive or negative result.
The has_perm() method has been modified to now check for row-level permissions and has an optional parameter for a model instance, which is required to check row-level permissions.
def has_perm(self, perm, object=None):
    "Returns True if the user has the specified permission."
    if not self.is_active:
        return False
    if self.is_superuser:
        return True
    if object and object._meta.row_level_permissions:
        row_level_permission = self.check_row_level_permission(perm, object)
        if row_level_permission is not None:
            return row_level_permission
    return perm in self.get_all_permissions()
The check_row_level_permission checks the user RLPs first and then checks the group RLPs. The user RLPs are determined by using a filter method. The group RLP uses an SQL query that works out to be:
SELECT rlp."negative" FROM "auth_user_groups" ug, "auth_rowlevelpermission" rlp WHERE rlp."owner_id"=ug."group_id" AND ug."user_id"=%s AND rlp."owner_ct_id"=%s AND rlp."model_id"=%s AND rlp."model_ct_id"=%s AND rlp."permission_id"=%s;
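In use, a row-level check is an ordinary has_perm call with the extra object argument (the model and permission names below are hypothetical):

# Falls through User RLP -> Group RLP -> model-level permissions, as above.
if user.has_perm('blog.change_entry', entry):
    entry.title = 'Updated title'
    entry.save()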
Integration into administration application
Being worked on. Will post when it is complete or near enough to have a better idea.
|
https://code.djangoproject.com/wiki/RowLevelPermissionsDeveloper?version=5
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Carsten Ziegeler wrote:
> Reinhard Pötz wrote
>>
>> My proposal
>> -----------
>>
>> * I think we should contact the Apache Commons community and talk with
>> them about org.apache.excalibur.sourceresolve.jnet stuff.
>>
>> * Since I don't expect that we can release it from there anytime soon, I
>> think we should release it for now ourselves using our own namespace.
>> As soon as there is the other release available, we can switch and
>> deprecate our own implementation.
>>
>> * The org.apache.excalibur.sourceresolve.jnet.source stuff can be
>> removed from our repository completely because we don't use it.
>> It should be used by the Apache Excalibur project.
>>
> Sounds like a good plan to me.
>
>> Who's going to contact the Apache Commons community? Carsten, do you
>> want to do it yourself? I think the credit is due to you!
> :) Thanks, yepp, I'll contact them later today. Perhaps we are lucky and
> it doesn't take too long to get a proper commons release.
Thanks!
The two classes can be found at
(afaik only this mirror is currently available).
--
Reinhard Pötz Managing Director, {Indoqa} GmbH
Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member, PMC Chair reinhard@apache.org
________________________________________________________________________
|
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200804.mbox/%3C481586F0.7070909@apache.org%3E
|
CC-MAIN-2014-23
|
en
|
refinedweb
|