def Test():
    print 't = ' + `t`

def test2():
    print 'calling Test()'
    Test()
I read in the file and want to execute test2().
So I do the following:
f = open('file','r')
a = f.read()
f.close()
a_c = compile(a,'<string>','exec')
local_ns = {}
global_ns = {}
exec(a_c,global_ns,local_ns)
Now if I dump local_ns, it shows 't', 'Test' and 'test2' as being
members of local_ns. global_ns is still empty.
Now how can I call test2? I have tried local_ns['test2']()
but it says it cannot find 'Test'
I CANNOT use 'import' as this code is actually part of a file
that cannot be run..
Any ideas?????
Thanks!
Lance Ellinghouse
lance@markv.com
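A likely explanation, for reference: when exec is given separate global and local dictionaries, the def statements bind 'Test' and 'test2' in the locals, but name lookups inside those functions go through the globals, which stay empty, so Test() cannot be found. A minimal sketch of one workaround, assuming the same Python 2-era exec semantics as above, is to pass a single dictionary as both namespaces:
f = open('file', 'r')
a = f.read()
f.close()
a_c = compile(a, '<string>', 'exec')
ns = {}                # one dictionary serves as both globals and locals
exec(a_c, ns, ns)      # functions defined here can now see each other
ns['test2']()          # calls Test() successfully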
http://www.python.org/search/hypermail/python-1993/0236.html
The file views/web2py_ajax.html looks like this:
{{
response.files.insert(0,URL('static','js/jquery.js'))
response.files.insert(1,URL('static','css/calendar.css'))
response.files.insert(2,URL('static','js/calendar.js'))
response.include_meta()
response.include_files()
}}
<script type="text/javascript"><!--
// These variables are used by the web2py_ajax_init
// function in web2py.js (which is loaded below).
var
</script>
db = DAL("sqlite://db.db")
db.define_table('child',
    Field('name'),
    Field('weight', 'double'),
    Field('birth_date', 'date'),
    Field('time_of_birth', 'time'))
db.child.name.requires = IS_NOT_EMPTY()
db.child.weight.requires = IS_FLOAT_IN_RANGE(0,100)
db.child.birth_date.requires = IS_DATE()
db.child.time_of_birth.requires = IS_TIME()
with this "default.py" controller:
def index():
    form = SQLFORM(db.child)
    if form.process().accepted:
        response.flash = 'record inserted'
    return dict(form=form)
and the following "default/index.html" view:
{{extend 'layout.html'}} {{=form}}:
<div class="one" id="a">Hello</div> <div class="two" id="b">World</div>
They belong to class "one" and "two" respectively. They have ids equal to "a" and "b" respectively.
In jQuery you can refer to the former with the following CSS-like equivalent notations:
jQuery('.one')     // address object by class "one"
jQuery('#a')       // address object by id "a"
jQuery('DIV.one')  // address by object of type "DIV" with class "one"
jQuery('DIV #a')   // address by object of type "DIV" with id "a"
and to the latter with
jQuery('.two') jQuery('#b') jQuery('DIV.two') jQuery('DIV #b')
or you can refer to both with
jQuery('DIV')
Tag objects are associated to events, such as "onclick". jQuery allows linking these events to effects, for example "slideToggle":
<div class="one" id="a" onclick="jQuery('.two').slideToggle()">Hello</div> <div class="two" id="b">World</div>
Now if you click on "Hello", "World" disappears. If you click again, "World" reappears. You can make a tag hidden by default by giving it a hidden class:
<div class="one" id="a" onclick="jQuery('.two').slideToggle()">Hello</div> <div class="two hidden" id="b">World</div>
You can also link actions to events outside the tag itself. The previous code can be rewritten as follows:
<div class="one" id="a">Hello</div> <div class="two" id="b">World</div> <script> jQuery('.one').click(function(){jQuery('.two').slideToggle()}); </script>:
<div class="one" id="a">Hello</div> <div class="two" id="b">World</div> <script> jQuery(document).ready(function(){ jQuery('.one').click(function(){jQuery('.two').slideToggle()}); }); </script>:
{{=DIV('click me!', _onclick="jQuery(this).fadeOut()")}}
Other useful methods and attributes for handling selected elements
Methods and attributes
jQuery(...).prop(name): Returns the value of the attribute name.
jQuery(...).prop(name, value): Sets the attribute name to value.
jQuery(...).html(): Without arguments, it returns the inner HTML of the selected elements; with a string argument, it replaces the tag content.
jQuery(...).text(): Without arguments, it returns the inner text of the selected element (without tags); with a string argument, it replaces the inner text with the new data.
jQuery(...).css(name, value): With one parameter, it returns the value of the specified CSS property for the selected elements. With two parameters, it sets a new value for the specified CSS property.
jQuery(...).each(function): Loops through the set of selected elements and calls function with each item as argument.
jQuery(...).index(): Without arguments, it returns the index of the first selected element relative to its siblings (e.g., the index of an LI element). If an element is passed as argument, it returns the position of that element relative to the selected set.
jQuery(...).length: This attribute returns the number of elements selected.
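As a quick illustration of these methods (a hypothetical snippet, not from the original text), the following operates on the two divs defined earlier:
<script>
jQuery('DIV').each(function(){
    jQuery(this).css('border', '1px solid red');  // set a CSS property on each div
});
var n = jQuery('DIV').length;                     // number of selected elements
jQuery('#a').html('Hello (' + n + ' divs)');      // replace the inner html of #a
jQuery('#b').text(jQuery('#b').text() + '!');     // replace the inner text of #b
</script>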
db = DAL('sqlite://db.db')
db.define_table('taxpayer',
    Field('name'),
    Field('married', 'boolean'),
    Field('spouse_name'))
the following "default.py" controller:
def index():
    form = SQLFORM(db.taxpayer)
    if form.process().accepted:
        response.flash = 'record inserted'
    return dict(form=form)
and the following "default/index.html" view:
{{extend 'layout.html'}} {{=form}} <script> jQuery(document).ready(function(){ jQuery('#taxpayer_spouse_name__row').hide(); jQuery('#taxpayer_married').change(function(){ if(jQuery('#taxpayer_married').prop('checked')) jQuery('#taxpayer_spouse_name__row').show(); else jQuery('#taxpayer_spouse_name__row').hide();}); }); </script>:
def edit():
    row = db.taxpayer[request.args(0)]
    form = SQLFORM(db.taxpayer, row, deletable=True)
    if form.process().accepted:
        response.flash = 'record updated'
    return dict(form=form)
and the corresponding view "default/edit.html"
{{extend 'layout.html'}} {{=form}}
The
deletable=True argument in the SQLFORM constructor instructs web2py to display a "delete" checkbox in the edit form. It is
False by default.
web2py's "web2py.js" includes the following code:
jQuery(document).ready(function(){ jQuery('input.delete').prop('onclick', 'if(this.checked) if(!confirm( "{{=T('Sure you want to delete this object?')}}")) this.checked=false;'); });:
ajax(url, [name1, name2, ...], target):
def one():
    return dict()

def echo():
    return request.vars.name
and the associated "default/one.html" view:
{{extend 'layout.html'}} <form> <input name="name" onkeyup="ajax('echo', ['name'], 'target')" /> </form> <div id="target"></div>:
def one():
    return dict()

def echo():
    return "jQuery('#target').html(%s);" % repr(request.vars.name)
and the associated "default/one.html" view:
{{extend 'layout.html'}} <form> <input name="name" onkeyup="ajax('echo', ['name'], ':eval')" /> </form> <div id="target"></div>:
def month_input():
    return dict()

def month_selector():
    if not request.vars.month:
        return ''
    months = ['January', 'February', 'March', 'April', 'May', 'June',
              'July', 'August', 'September', 'October', 'November', 'December']
    month_start = request.vars.month.capitalize()
    selected = [m for m in months if m.startswith(month_start)]
    return DIV(*[DIV(k,
                     _onclick="jQuery('#month').val('%s')" % k,
                     _onmouseover="this.style.backgroundColor='yellow'",
                     _onmouseout="this.style.backgroundColor='white'"
                     ) for k in selected])
and the corresponding "default/month_input.html" view:
{{extend 'layout.html'}} <style> #suggestions { position: relative; } .suggestions { background: white; border: solid 1px #55A6C8; } .suggestions DIV { padding: 2px 4px 2px 4px; } </style> <form> <input type="text" id="month" name="month" style="width: 250px" /><br /> <div style="position: absolute;" id="suggestions" class="suggestions"></div> </form> <script> jQuery("#month").keyup(function(){ ajax('month_selector', ['month'], 'suggestions')}); </script>:
<div> <div onclick="jQuery('#month').val('March')" onmouseout="this.style.backgroundColor='white'" onmouseover="this.style.backgroundColor='yellow'">March</div> <div onclick="jQuery('#month').val('May')" onmouseout="this.style.backgroundColor='white'" onmouseover="this.style.backgroundColor='yellow'">May</div> </div>
Here is the final effect:
If the months are stored in a database table such as:
db.define_table('month', Field('name'))
then simply replace the
month_selector action with:
def month_input():
    return dict()

def month_selector():
    if not request.vars.month:
        return ''
    pattern = request.vars.month.capitalize() + '%'
    selected = [row.name for row in db(db.month.name.like(pattern)).select()]
    return ''.join([DIV(k,
                        _onclick="jQuery('#month').val('%s')" % k,
                        _onmouseover="this.style.backgroundColor='yellow'",
                        _onmouseout="this.style.backgroundColor='white'"
                        ).xml() for k in selected])
db = DAL('sqlite://db.db')
db.define_table('post', Field('your_message', 'text'))
db.post.your_message.requires = IS_NOT_EMPTY()
Notice that each post has a single field "your_message" that is required to be not-empty.
Edit the
default.py controller and write two actions:
def index():
    return dict()

def new_post():
    form = SQLFORM(db.post)
    if form.accepts(request, formname=None):
        return DIV("Message posted")
    elif form.errors:
        return TABLE(*[TR(k, v) for k, v in form.errors.items()])
{{extend 'layout.html'}} <div id="target"></div> <form id="myform"> <input name="your_message" id="your_message" /> <input type="submit" /> </form> <script> jQuery('#myform').submit(function() { ajax('{{=URL('new_post')}}', ['your_message'], 'target'); return false; }); </script>:
db = DAL('sqlite://images.db')
db.define_table('item',
    Field('image', 'upload'),
    Field('votes', 'integer', default=0))
Here is the
default controller:
def list_items():
    items = db().select(db.item.ALL, orderby=db.item.votes)
    return dict(items=items)

def download():
    return response.download(request, db)

def vote():
    item = db.item[request.vars.id]
    new_votes = item.votes + 1
    item.update_record(votes=new_votes)
    return str(new_votes)
The download action is necessary to allow the list_items view to download images stored in the "uploads" folder. The vote action is used for the Ajax callback.
Here is the "default/list_items.html" view:
{{extend 'layout.html'}} <form><input type="hidden" id="id" name="id" value="" /></form> {{for item in items:}} <p> <img src="{{=URL('download', args=item.image)}}" width="200px" /> <br /> Votes=<span id="item{{=item.id}}">{{=item.votes}}</span> [<span onclick="jQuery('#id').val('{{=item.id}}'); ajax('vote', ['id'], 'item{{=item.id}}');">vote up</span>] </p> {{pass}}.
http://www.web2py.com/books/default/chapter/29/11
#include <LOCA_Epetra_LowRankUpdateRowMatrix.H>
Inheritance diagram for LOCA::Epetra::LowRankUpdateRowMatrix:
This class implements the Epetra_RowMatrix interface for the operator J + U V^T, where J is an Epetra_RowMatrix and U and V are Epetra_MultiVectors. It is derived from LOCA::Epetra::LowRankUpdateOp to implement the Epetra_Operator interface. The interface here implements the Epetra_RowMatrix interface when the matrix J is itself a row matrix. This allows preconditioners to be computed and scaling in linear systems to be performed when using this operator. The implementation here merely adds the corresponding entries for U V^T to the rows of J. Note however this is only an approximation to the true matrix J + U V^T.
This class assumes U and V have the same distribution as the rows of J.
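In entrywise terms (assuming the J + U V^T form above), the quantity this class adds to entry (i, j) of row i of J is
    (U V^T)_{ij} = \sum_k U_{ik} V_{jk}.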
http://trilinos.sandia.gov/packages/docs/r7.0/packages/nox/doc/html/classLOCA_1_1Epetra_1_1LowRankUpdateRowMatrix.html
Kevin Atkinson wrote:
> I will not apply the patch as is as there are some changes I don't
> approve of.

That's ok, I expected as much:)

> 1)
> I will accept the changes to deal with the fact that the sun compiler
> is to stupid to know that abort doesn't return as those are harmless.

Well, yes CC is stupid, it doesn't check sprintf format strings either, but I'm told it seems to produce better code for SPARC, so here we are.

> 2)
> In may cases you changed:
>
>   String val = config.retrieve("key");
> to
>   String val = String(config.retrieve("key"));
>
> what is the error you are getting without the change? There might be
> a better way to solve the problem.

There is a constructor for String that takes an PosibError (what things return), but there is also an operator= on String that takes a PosibError. Just the constructor ought to be enough, the = operator is redundant IMHO.

Deleting String::operator= (PosibError...) would probably be the easiest and most correct solution, but I was a bit nervous about it, so I chose to use the constructor explicitly. What do you think?

> Same for
> -static void display_menu(O * out, const Choices * choices, int width) {
> +static void display_menu(O * out, const StackPtr<Choices> &choices, int
> +width)

The sun compiler bitched about incompatible types, 'Choices *' is not the same as 'StackPtr<Choices>', which is what is needed in the function. I guess StackPtr<Mumble> just happens to resolve to the same as Mumble * with GNUs STL implementation, but assuming it always does is bad form or at least not something that you can assume all compilers understand.

> 3)
> The C++ standard requires "friend class HashTable", "friend HashTable"
> is not valid C++ and will not compile with gcc

Hmm, that wasn't good.

With:
  friend class HashTable;
I get:
  "vector_hash.hpp", line 243: Error: A typedef name cannot be used in an elaborated type specifier..

With:
  friend class Parms::HashTable;
I get:
  Error: aspeller_default_readonly_ws::ReadOnlyWS::WordLookupParms::HashTable is not defined.

How about:

  #ifdef __SUNPRO_CC
  // Fix for deficient sun compilers:
  friend HashTable
  #else
  friend class HashTable
  #endif

> 4)
> What is the reason for?
> +#if (1)
> + FStream CIN(stdin, false);
> + FStream COUT(stdout, false);
> + FStream CERR(stderr, false);
> +#else
> +#include "iostream.hpp"

It seems the symbols from iostream.cpp were not available when linking the application, maybe because of defective name mangling, maybe because of scoping, it seemed a lot easier to simply define the variables there rather than battle with the build system to figure out what went wrong.

As far as I know libaspell.so is supposed to provide a C interface, not a C++ one, right? In that case referencing C++ symbols in it is wrong, even if g++ lets you get away with it.

This is what happens if I try to link it with the default code:

  CC -g -o .libs/aspell aspell.o check_funs.o checker_string.o ../lib/.libs/libaspell.so -lcurses -R/home/ffr/projects/spell/test/lib
  ild: (undefined symbol) acommon::CERR -- referenced in the text segment of aspell.o
  [Hint: static member acommon::CERR must be defined in the program]
  ild: (undefined symbol) acommon::CIN -- referenced in the text segment of aspell.o
  [Hint: static member acommon::CIN must be defined in the program]
  ild: (undefined symbol) acommon::COUT -- referenced in the text segment of aspell.o
  [Hint: static member acommon::COUT must be defined in the program]

> 5)
> In parm_string you comment out one of my compressions due to a
> conflict with the STL. I have a configure test to deal with this
> problem. It will define the macro "REL_OPS_POLLUTION". Check that
> the macro is defined in settings.h and if it is use an ifndef around
> the comparasion. if the macro is not defined please let me know.

Ah, right, I just checked and it isn't defined.

The problem isn't that it's impossible to have your own == operator, the problem is that stl can make an std::string from a char * which can be made from a ParamString automagicly, that causes a conflict between the two == operators. ... Which IMHO is stupid of CC as the sane thing to do is to select the "nearest" operator for the job.

As the current configure test doesn't define REL_OPS_POLLUTION on my system it might be wrong to expand the test to catch this case as well. I worry about doing what I do now and using the std::string == operator as it might not work the same way and it may be slower, but I havn't seen any problems with the approach while running aspell.

--
Flemming Frandsen / Systems Designer
http://lists.gnu.org/archive/html/aspell-devel/2004-01/msg00009.html
Andrew Morton wrote:
> Andi Kleen <ak@muc.de> wrote:
>> On Thursday 09 February 2006 19:04, Andrew Morton wrote:
>>> Ashok Raj <ashok.raj@intel.com> wrote:
>>>> The problem was with ACPI just simply looking at the namespace doesnt
>>>> exactly give us an idea of how many processors are possible in this platform.
>>> We need to fix this asap - the performance penalty for HOTPLUG_CPU=y,
>>> NR_CPUS=lots will be appreciable.
>> What is this performance penalty exactly?
>
> All those for_each_cpu() loops will hit NR_CPUS cachelines instead of
> hweight(cpu_possible_map) cachelines.

You mean NR_CPUS bits, mostly all included in a single cacheline, and even in a single long word :) for most cases (NR_CPUS <= 32 or 64)
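For context, a hypothetical sketch (not code from the thread) of the per-CPU loop pattern under discussion; the cost difference being debated is whether such a walk visits all NR_CPUS slots or only the CPUs set in cpu_possible_map:

/* Hypothetical illustration only; exact iterator names vary by kernel version. */
#include <linux/cpumask.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, my_counter);

static unsigned long sum_all_slots(void)
{
	unsigned long sum = 0;
	int cpu;

	/* worst case: NR_CPUS iterations, touching NR_CPUS per-CPU areas */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		sum += per_cpu(my_counter, cpu);
	return sum;
}

static unsigned long sum_possible(void)
{
	unsigned long sum = 0;
	int cpu;

	/* only hweight(cpu_possible_map) iterations */
	for_each_possible_cpu(cpu)
		sum += per_cpu(my_counter, cpu);
	return sum;
}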
http://lkml.org/lkml/2006/2/10/84
I’ve been meaning to attack this one since we first published the Clipboard Manager as a Plugin of the Month: working out how to display a preview image of the clipboard contents inside the Clipboard Manager palette. And then I happened to receive a request by email, yesterday, suggesting a couple of enhancements to the tool. A nice reminder. :-)
1. Have the clipboard include image previews, similar to that of the wblock command (instead of needing to immediately rename the item when fast copying multiple items).
2. Have the clipboard store items in memory for use in between autocad sessions (this would be useful on projects that are not completed in one day/session, or really useful when/if autocad encounters a fatal error).
To start with I thought that both of these requests would have a common solution (at their core, at least): understanding how AutoCAD places its data on the clipboard would hopefully allow us to load it into a Database and then create a thumbnail from it. And we’d also be able to save off a copy of the data for future reuse.
After asking some colleagues for their opinions (and many thanks, Davis, Lee, Markus, Murali and Jack for the input! :-) it turns out the format AutoCAD uses to write to the clipboard is defined in an ObjectARX SDK header, clipdata.h. I think I once knew this, but once again I’m a victim of my own over-efficient garbage collection :-). Here’s the relevant structure, edited to fit the width of the blog:
typedef struct tagClipboardInfo {
ACHAR szTempFile[260]; // block temp file name
ACHAR szSourceFile[260]; // file name of drawing from which
// selection was made
ACHAR szSignature[4]; // szClipSignature
int nFlags; // kbDragGeometry: dragging
// geometry from AutoCAD?
AcGePoint3d dptInsert; // original world coordinate of
// insertion point
RECT rectGDI; // GDI coord bounding rectangle of
// sset
void* mpView; // Used to verify that this object
// was created in this view (HWND*)
DWORD m_dwThreadId; // AutoCAD thread that created this
// DataObject
int nLen; // Length of next segment of data,
// if any, starting with chData
int nType; // Type of data, if any
// (eExpandedClipDataTypes)
ACHAR chData[1]; // Start of data, if any.
} ClipboardInfo;
A number of the people I asked pointed out that AutoCAD actually WBLOCKs out the selected objects into a complete, standalone drawing file, which is referenced from this structure. The problem I suspect I’m going to have, at some point, is to map this structure to .NET, although as I know the temporary file is going to be in the first 260 characters, I could probably get away with just pulling out those characters and ignoring the rest. Although reusing them in a later session may then actually involve calling INSERT rather than placing them back into the clipboard and calling PASTECLIP, as in any case it feels as though there are some fields that might be tricky to recreate (thread ID has me nervous, for instance). But anyway – that’s for another day.
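As a rough illustration of that "first 260 characters" idea, here is a hypothetical sketch, not the plugin's actual code: it assumes the AutoCAD data arrives as a MemoryStream under a versioned clipboard format name ("AutoCAD.r17" below is a placeholder you would need to verify for your release) and that szTempFile occupies the first 260 Unicode characters, as in the structure above.

// Hypothetical sketch only: the format name and the MemoryStream assumption
// must be checked against the AutoCAD release in use.
using System;
using System.IO;
using System.Text;
using System.Windows.Forms;

static class ClipDataSketch
{
  // Returns the WBLOCKed temp file path from the clipboard, or null.
  public static string GetTempFileName()
  {
    IDataObject data = Clipboard.GetDataObject();
    if (data == null) return null;

    string fmt = "AutoCAD.r17"; // placeholder format name
    if (!data.GetDataPresent(fmt)) return null;

    using (MemoryStream ms = data.GetData(fmt) as MemoryStream)
    {
      if (ms == null) return null;
      byte[] buf = new byte[260 * 2]; // szTempFile: 260 ACHARs, Unicode
      int read = ms.Read(buf, 0, buf.Length);
      string s = Encoding.Unicode.GetString(buf, 0, read);
      int nul = s.IndexOf('\0');
      return nul >= 0 ? s.Substring(0, nul) : s;
    }
  }
}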
Another of the pieces of input I received had me thinking this was probably over-engineering the solution to the first request. We actually store a bunch of different formats to the clipboard – including a bitmap of the selected contents – which means we probably don’t need to care about the AutoCAD-specific format to address that request.
Rather than extending the existing VB.NET code to accomplish this, I decided to start with a simple C# app showing a palette that only displays the contents of the clipboard via an embedded PictureBox. I chose to go back to C# as I thought, at the time, I’d be mapping the above C++ structure – which would be easier – but anyway. Now that I have some working C# code I’ll be taking a look at extending the existing Clipboard Manager plugin, in due course.
Here’s the C# test code I put together. I placed everything in a single file – the UI is created by the code, rather than being in the Visual Studio designer, to keep it all simple. Oh, and I’m sorry for the lack of comments – if you’re interested in the implementation details but they’re not obvious, please leave a comment on this post.
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.Runtime;
using Autodesk.AutoCAD.Windows;
using System.Runtime.InteropServices;
using System.Windows.Forms;
using System.Drawing;
using System;
namespace ClipboardViewing
{
public enum Msgs
{
WM_DRAWCLIPBOARD = 0x0308,
WM_CHANGECBCHAIN = 0x030D
}
public class ClipboardView : UserControl
{
[DllImport("user32.dll")]
public static extern IntPtr SetClipboardViewer(
IntPtr hWndNewViewer
);
[DllImport("user32.dll", CharSet = CharSet.Auto)]
public static extern IntPtr SendMessage(
IntPtr hWnd, int Msg, IntPtr wParam, IntPtr lParam
);
IntPtr _nxtVwr;
PictureBox _img;
PaletteSet _ps;
public ClipboardView(PaletteSet ps)
{
_img = new PictureBox();
_img.Anchor =
(AnchorStyles)(AnchorStyles.Top |
AnchorStyles.Bottom |
AnchorStyles.Left |
AnchorStyles.Right);
_img.Location = new Point(0, 0);
_img.Size = this.Size;
_img.SizeMode = PictureBoxSizeMode.StretchImage;
Controls.Add(_img);
// Join the clipboard viewer chain; remember the next viewer so
// clipboard messages can be forwarded to it.
_nxtVwr = SetClipboardViewer(this.Handle);
_ps = ps;
}
private void ExtractImage()
{
IDataObject iData;
try
{
iData = Clipboard.GetDataObject();
}
catch (System.Exception ex)
{
MessageBox.Show(ex.ToString());
return;
}
if (iData.GetDataPresent("Bitmap"))
{
object o = iData.GetData("Bitmap");
Bitmap b = o as Bitmap;
if (b != null)
{
_img.Image = b;
if (_ps != null)
{
_ps.Size =
new Size(b.Size.Width / 3, b.Size.Height / 3);
}
}
}
}
protected override void WndProc(ref Message m)
{
switch ((Msgs)m.Msg)
{
// Clipboard contents changed: refresh the preview, then
// forward the message to the next viewer in the chain.
case Msgs.WM_DRAWCLIPBOARD:
ExtractImage();
SendMessage(_nxtVwr, m.Msg, m.WParam, m.LParam);
break;
// A viewer is leaving the chain: repair our link to the next viewer.
case Msgs.WM_CHANGECBCHAIN:
if (m.WParam == _nxtVwr)
_nxtVwr = m.LParam;
else
SendMessage(_nxtVwr, m.Msg, m.WParam, m.LParam);
break;
default:
base.WndProc(ref m);
break;
}
}
}
public class Commands
{
PaletteSet _ps = null;
ClipboardView _cv = null;
[CommandMethod("CBS")]
public void ShowClipboard()
{
if (_ps == null)
{
_ps = new PaletteSet(
"CBS",
new System.Guid("DB716FC9-2BD8-49ca-B3DF-6F2523C9B8E5")
);
if (_cv == null)
_cv = new ClipboardView(_ps);
_ps.Text = "Clipboard";
_ps.DockEnabled =
DockSides.Left | DockSides.Right | DockSides.None;
_ps.Size = new System.Drawing.Size(300, 500);
_ps.Add("ClipboardView", _cv);
}
_ps.Visible = true;
}
}
}
When we run the CBS command, it displays a palette that updates to show the clipboard contents whenever the clipboard is modified. The palette resizes to be a third of the size of the bitmap, so the aspect ratio is maintained. You may recognise the drawing used from the last post.
So that happily ended up being much easier than expected. Now I’ll be taking a look at how best to integrate this into the Clipboard Manager. And in due course how best to handle the persistence and reuse of the data between sessions.
http://through-the-interface.typepad.com/through_the_interface/2010/10/previewing-the-contents-of-the-clipboard-in-an-autocad-palette-using-net.html
Introduction
This article explains how to integrate Adobe Flex with an IBM WebSphere Portal portlet. This article starts by building a Flex project using Adobe Flex Builder and then shows you how to build a Web service. Finally it integrates all in a portlet page. This article shows you how to display and update a simple list of employees. Using the same idea, this article shows a second example of how to get the Flex application to communicate with a JSON object, using a back-end servlet, over the WebSphere Portal framework. This article shows all the code necessary for both examples, and it gives line-by-line explanations. It demonstrates that integrating a rich Internet application (RIA) with WebSphere Portal using the JSR-168 portlet is a feasible and seamless process, thanks to the Flex HTTP service, SOA Web service, and JSON serializations.
Rich Internet applications (RIAs) offer a rich, engaging experience that improves user satisfaction and increases productivity. Using the broad reach of the Internet, RIAs can be deployed across browsers and desktops. Adobe Flex is used to describe user interface (UI) layout and behaviors, and ActionScript 3, a powerful object-oriented programming language, is used to create client logic. Flex provides a consistent look-and-feel across browsers.
WebSphere Portal is a Web application that aggregates contents from different resources in an integrated UI. Portals also provide personalization features and single sign-on through their own built-in security infrastructure; they can also integrate with solutions from independent vendors. JSON (JavaScript™ Object Notation) is a lightweight data-interchange format. JSON reduces the HTTP traffic by introducing a new format for interchanging data between server and client applications.
Prerequisites
You should have a solid understanding of the following technologies:
- Java™ Runtime Environment 1.5.0 or later
- Web server (such as IBM WebSphere Application Server V6.0 or Apache Tomcat V6.0)
- WebSphere Portal V6.0
- IBM Rational® Software Architect V7.0 or later
- Adobe Flex Builder (V2.0 or later)
- MySQL server V5.0 or later
Running an SOA Web service Flex application in WebSphere Portal
This example shows how to run an Adobe Flex application. The Adobe Flex application calls on a Web service named EmployeesDB, which connects to a database through Java Database Connectivity (JDBC). The Web service has two methods:
- The first is called getEmployees(), which returns a list of employees’ names and their identifiers. The list is serialized using an Xstream API for simplicity.
- The second method is called updateEmployee(). It takes an ActionScript object called employee serialized as an XML file.
This example shows you how to build your Flex application and your portlet application, and it demonstrates how easy it is to integrate both worlds, Adobe Flex and WebSphere Portal.
Follow these steps to build a portlet application:
- Open Rational Software Architect.
- Select File - New - Project.
- In the New Project window that displays, expand the Portal folder and select Portlet Project. Click Next.
- In the New Portlet Project window that displays, in the Project name field, enter FlexWebService. In this step you create a basic portlet that extends the GenericPortlet class defined in the Java Portlet Specification Version 1.0 (JSR 168). Do the following:
- Select WebSphere Portal v6.0 in the Target Runtime field.
- Accept the default settings in the EAR Membership section.
- Select JSR 168 Portlet in the Portlet API field.
- Accept the default settings in the Create a portlet section.
- Select the Show advanced settings option.
- Click Next.
Figure 1. Defining the portlet project
- In the Project Facets window that displays, the following settings are selected as default options:
- Dynamic Web Module
- Java
- JSR 168 Portlets
- JSR 168 Portlets on WebSphere Portal
Click Next to accept the default options. See figure 2.
Figure 2. The Project Facets window
- In the Portlet Settings window that displays, accept the default settings and click Next. See figure 3.
Figure 3. The Portlet Settings window
- In the Action and Preferences window that displays, clear all the options as shown in figure 4 and click Finish.
Figure 4. The Action and Preferences window
- Open the FlexWebService project, and right-click the WebContent folder. Create a new folder called movies. The folder structure of the FlexWebService portlet looks like the one shown in figure 5.
Figure 5. Folder structure after adding the new movies folder
- Modify the portlet class FlexWebServicePortlet, found inside the com.ibm.flexwebservice package, to call FlexWebServicePortletView.jsp in doView() method as shown in listing 1.
Listing 1. FlexWebServicePortlet.java
package com.ibm.flexwebservice;

import java.io.*;
import java.util.*;
import javax.portlet.*;

/**
 * A sample portlet based on GenericPortlet
 */
public class FlexWebServicePortlet extends GenericPortlet {

    public static final String JSP_FOLDER = "/_FlexWebService/jsp/";    // JSP folder name

    public static final String VIEW_JSP = "FlexWebServicePortletView";  // JSP file name to be rendered on the view mode

    /**
     * @see javax.portlet.Portlet#init()
     */
    public void init() throws PortletException {
        super.init();
    }

    /**
     * Serve up the <code>view</code> mode.
     *
     * @see javax.portlet.GenericPortlet#doView(javax.portlet.RenderRequest, javax.portlet.RenderResponse)
     */
    public void doView(RenderRequest request, RenderResponse response) throws PortletException, IOException {
        // Set the MIME type for the render response
        response.setContentType(request.getResponseContentType());

        // Invoke the JSP to render
        PortletRequestDispatcher rd = getPortletContext().getRequestDispatcher(getJspFilePath(request, VIEW_JSP));
        rd.include(request, response);
    }

    /**
     * Returns JSP file path.
     *
     * @param request Render request
     * @param jspFile JSP file name
     * @return JSP file path
     */
    private static String getJspFilePath(RenderRequest request, String jspFile) {
        String markup = request.getProperty("wps.markup");
        if (markup == null)
            markup = getMarkup(request.getResponseContentType());
        return JSP_FOLDER + markup + "/" + jspFile + "." + getJspExtension(markup);
    }

    /**
     * Convert MIME type to markup name.
     *
     * @param contentType MIME type
     * @return Markup name
     */
    private static String getMarkup(String contentType) {
        if ("text/vnd.wap.wml".equals(contentType))
            return "wml";
        else
            return "html";
    }

    /**
     * Returns the file extension for the JSP file
     *
     * @param markupName Markup name
     * @return JSP extension
     */
    private static String getJspExtension(String markupName) {
        return "jsp";
    }

}
- Inside the _FlexWebService/jsp/html folder you can find FlexWebServicePortletView.jsp. Define the OBJECT and EMBED tags inside the div tag as shown in lines 17 through 34 of the code shown in listing 2.
Listing 2. FlexWebServicePortletView.jsp
Line:1 <%@page Line:6 #fxJavaScript { width: 90%; height: 500px; } Line:7 #html_controls {width: 15em; margin-top: 1em; padding: 1em; Line:8 color: white; border: solid 1px white; Line:9 font-family: Arial Line:10 } Line:11 body<#html_controls { width: 13em;} Line:12 body {background-color: #869CA7;} Line:13 </style> Line:14 Line:15 <div> Line:16 Line:17 <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" Line:18 Line:20 <param name="movie" value="<%=renderResponse.encodeURL (renderRequest.getContextPath()+"/movies/ WebServiceFlex.swf")%>" /> Line:21 <param name="quality" value="high" /> Line:22 <param name="bgcolor" value="#869ca7" /> Line:23 <param name="allowScriptAccess" value="sameDomain" /> Line:24 <embed src="<%=renderResponse.encodeURL (renderRequest.getContextPath()+"/movies/WebServiceFlex.swf")%>" Line:25 Line:33 </embed> Line:34 </object> Line:35 Line:36 </div>
Inside the Adobe Flex Builder, create a new Flex project by following these steps:
- In the new Flex Project window that displays, enter WebServiceFlex in the Project name field, and click Finish.
The project folder structure now looks like the one shown in figure 6.
Figure 6. Flex project structure
- Modify the WebServiceFlex.mxml file to look like the code shown in listing 3. Line-by-line explanations of the code follow listing 3.
Listing 3. WebServiceFlex.mxml
Line:1 <?xml version="1.0" encoding="utf-8"?> Line:2 <mx:Application xmlns: Line:4 Line:5 Line:6 <mx:Script> Line:7 <![CDATA[ Line:8 Line:9 import mx.controls.Alert; Line:10 import mx.rpc.xml.SimpleXMLEncoder; Line:11 import mx.utils.ObjectUtil; Line:12 import mx.rpc.events.ResultEvent; Line:13 import mx.rpc.events.FaultEvent; Line:14 Line:15 Line:16 [Bindable] Line:17 private var myXML:XML; Line:18 Line:19 private function onCreationComplete():void { Line:20 webService.getEmployees().send; Line:21 } Line:22 Line:23 private function callWS(evt:ResultEvent):void { Line:24 var retVal:Object = evt.result; Line:25 var newMessage:String = retVal as String; Line:26 myXML = new XML(newMessage); Line:27 myGrid.dataProvider = myXML.Employee; Line:28 } Line:29 Line:30 Line:31 private function choosenItem(selectedItem:Object):void { Line:32 formPanel.visible = false; Line:33 employeeObj.name = selectedItem.name; Line:34 employeeObj.identifer = selectedItem.identifer; Line:35 formPanel.visible = true; Line:36 formPanel.setStyle("showEffect","fadeIn"); Line:37 } Line:38 Line:39 private function update(employeeObj:Object):void { Line:40 var xml:XML = objectToXML(employeeObj); Line:41 webService.updateEmployee(xml.toString()); Line:42 webService.getEmployees().send; Line:43 } Line:44 Line:45 private function objectToXML(obj:Object):XML { Line:46 var qName:QName = new QName("Employee"); Line:47 var xmlDocument:XMLDocument = new XMLDocument(); Line:48 var simpleXMLEncoder:SimpleXMLEncoder = new SimpleXMLEncoder(xmlDocument); Line:49 var xmlNode:XMLNode = simpleXMLEncoder.encodeValue(obj, qName, xmlDocument); Line:50 var xml:XML = new XML(xmlDocument.toString()); Line:51 // trace(xml.toXMLString()); Line:52 return xml; Line:53 } Line:54 Line:55 ]]> Line:56 </mx:Script> Line:57 Line:58 <mx:Fade Line:59 Line:60 <mx:WebService Line:62 <mx:operation Line:63 </mx:WebService> Line:64 Line:65 <mx:Panel Line:66 <mx:DataGrid Line:67 <mx:columns> Line:68 <mx:DataGridColumn Line:69 <mx:DataGridColumn Line:70 </mx:columns> Line:71 </mx:DataGrid> Line:72 <mx:PieChart Line:73 <mx:series> Line:74 <mx:PieSeries Line:75 </mx:series> Line:76 </mx:PieChart> Line:77 Line:78 </mx:Panel> Line:79 Line:80 Line:81 <Employee id="employeeObj" Line:82 Line:84 Line:85 <mx:Panel Line:86 <mx:Form Line:87 <mx:FormItem Line:88 <mx:TextInput Line:89 </mx:FormItem> Line:90 <mx:FormItem Line:91 <mx:TextInput Line:92 </mx:FormItem> Line:93 </mx:Form> Line:94 Line:95 <mx:ControlBar> Line:96 <mx:Button Line:97 </mx:ControlBar> Line:98 </mx:Panel> Line:99 Line:100 Line:101 </mx:Application>
Lines 2 through 3 of the code listing show the application tag mx:Application. Flex defines a default application container that lets you add content to your application without explicitly defining another container. You specify the layout to be horizontal, which means that the two panels included in this movie are adjacent to each other. Also, you call the onCreationComplete() function on the event of CreationComplete. CreationComplete is the event that is fired at the end of the component life cycle after the whole component is completely created along with its children, if any.
Lines 19 through 21 show the declaration of the onCreationComplete function. In this function, you invoke the webservice function by calling the getEmployees() method.
After the getEmployees() function is invoked, the callWS() function is called with the return results as shown in lines 23 through 28. In this function you build an XML object and bind it to the dataProvider attribute of the data grid.
Lines 31 to 37 show the label function of the data grid. The function choosenItem passes on the entire row selected in the data grid. You can bind the values of that row to the employee object. The employee object is bound in return to the form fields. Line 36 shows you how to add a fading effect to the form.
Lines 39 to 43 show the update function, which is called when the Update button is clicked. In this function we call the Web service function updateEmployee().
Lines 60 through 63 show a WebService component that calls a Web service. This WebService component calls two Web service operations, getEmployees() and updateEmployee().
Lines 65 through 78 show the data grid and a pie chart embedded inside a panel titled "Employees." The data grid has two columns, name and identifier. You can use the PieSeries chart series with the PieChart control to define the data for the chart. Specify the identifier as a field property so that the chart knows which value to use to create pie wedges of the appropriate size.
Lines 81 through 84 show the initialization of an employee object. This class has only two fields, name and identifier.
Lines 85 to 93 show the form that you can use to update the employee data.
Notice that a fadeIn effect, in the field showEffect="{fadeIn}", shows the workflow of the application. The fade effect declaration and properties can be found at line 58. You can make the form initially invisible by specifying the visible property to false, as in visible="false". You can make the identifier field non-editable by setting its editable property to false, as in editable="false".
Lines 95 to 97 show the Update button. On the click event, it calls the update function and passes on the employee object, which is bound to the form fields, name and identifier.
- Next, right-click the src folder and add a new ActionScript class called Employee.as.
Modify the Employee.as to match the code shown in listing 4.
Listing 4. Employee.as
package
{
    [Bindable]
    public class Employee
    {
        public function Employee()
        {
        }

        public var name:String;

        public var identifer:String;
    }
}
- Click the Export Release Build button
to add the bin-release folder. Inside this folder you can find the WebServiceFlex.swf movie, which you need for the portlet application.
- Move the WebServiceFlex.swf file to the portlet project in the movies folder you previously created.
You can see the EmployeesDB.wsdl code in listing 5.
Listing 5. EmployeesDB.wsdl
Line:1 <?xml version="1.0" encoding="UTF-8"?> Line:2 <wsdl:definitions Line:3 <!--WSDL created by Apache Axis version: 1.4 Line:4 Built on Apr 22, 2006 (06:55:48 PDT)--> Line:5 <wsdl:types> Line:6 <schema elementFormDefault="qualified" targetNamespace= "" xmlns=""> Line:7 <element name="main"> Line:8 <complexType> Line:9 <sequence> Line:10 <element maxOccurs="unbounded" name="arg" type="xsd:string"/> Line:11 </sequence> Line:12 </complexType> Line:13 </element> Line:14 <element name="mainResponse"> Line:15 <complexType/> Line:16 </element> Line:17 <element name="getEmployees"> Line:18 <complexType/> Line:19 </element> Line:20 <element name="getEmployeesResponse"> Line:21 <complexType> Line:22 <sequence> Line:23 <element name="getEmployeesReturn" type="xsd:string"/> Line:24 </sequence> Line:25 </complexType> Line:26 </element> Line:27 <element name="updateEmployee"> Line:28 <complexType> Line:29 <sequence> Line:30 <element name="SubShadeXMLString" type="xsd:string"/> Line:31 </sequence> Line:32 </complexType> Line:33 </element> Line:34 <element name="updateEmployeeResponse"> Line:35 <complexType> Line:36 <sequence> Line:37 <element name="updateEmployeeReturn" type="xsd:string"/> Line:38 </sequence> Line:39 </complexType> Line:40 </element> Line:41 <element name="sendEmail"> Line:42 <complexType> Line:43 <sequence> Line:44 <element name="sendTO" type="xsd:string"/> Line:45 <element name="subject" type="xsd:string"/> Line:46 <element name="message" type="xsd:string"/> Line:47 </sequence> Line:48 </complexType> Line:49 </element> Line:50 <element name="sendEmailResponse"> Line:51 <complexType> Line:52 <sequence> Line:53 <element name="sendEmailReturn" type="xsd:string"/> Line:54 </sequence> Line:55 </complexType> Line:56 </element> Line:57 </schema> Line:58 </wsdl:types> Line:59 Line:60 <wsdl:message Line:61 Line:62 <wsdl:part Line:63 Line:64 </wsdl:message> Line:65 Line:66 <wsdl:message Line:67 Line:68 <wsdl:part Line:69 Line:70 </wsdl:message> Line:71 Line:72 <wsdl:message Line:73 Line:74 <wsdl:part Line:75 Line:76 </wsdl:message> Line:77 Line:78 <wsdl:message Line:79 Line:80 <wsdl:part Line:81 Line:82 </wsdl:message> Line:83 Line:84 <wsdl:message Line:85 Line:86 <wsdl:part Line:87 Line:88 </wsdl:message> Line:89 Line:90 <wsdl:message Line:91 Line:92 <wsdl:part Line:93 Line:94 </wsdl:message> Line:95 Line:96 <wsdl:message Line:97 Line:98 <wsdl:part Line:99 Line:100 </wsdl:message> Line:101 Line:102 <wsdl:message Line:103 Line:104 <wsdl:part Line:105 Line:106 </wsdl:message> Line:107 Line:108 <wsdl:portType Line:109 Line:110 <wsdl:operation Line:111 Line:112 <wsdl:input Line:113 Line:114 <wsdl:output Line:115 Line:116 </wsdl:operation> Line:117 Line:118 <wsdl:operation Line:119 Line:120 <wsdl:input Line:121 Line:122 <wsdl:output Line:123 Line:124 </wsdl:operation> Line:125 Line:126 <wsdl:operation Line:127 Line:128 <wsdl:input Line:129 Line:130 <wsdl:output Line:131 Line:132 </wsdl:operation> Line:133 Line:134 <wsdl:operation Line:135 Line:136 <wsdl:input Line:137 Line:138 <wsdl:output Line:139 Line:140 </wsdl:operation> Line:141 Line:142 </wsdl:portType> Line:143 Line:144 <wsdl:binding Line:145 Line:146 <wsdlsoap:binding Line:147 Line:148 <wsdl:operation Line:149 Line:150 <wsdlsoap:operation Line:151 Line:152 <wsdl:input Line:153 Line:154 <wsdlsoap:body Line:155 Line:156 </wsdl:input> Line:157 Line:158 <wsdl:output Line:159 Line:160 <wsdlsoap:body Line:161 Line:162 </wsdl:output> Line:163 Line:164 </wsdl:operation> Line:165 Line:166 <wsdl:operation Line:167 Line:168 <wsdlsoap:operation 
Line:169 Line:170 <wsdl:input Line:171 Line:172 <wsdlsoap:body Line:173 Line:174 </wsdl:input> Line:175 Line:176 <wsdl:output Line:177 Line:178 <wsdlsoap:body Line:179 Line:180 </wsdl:output> Line:181 Line:182 </wsdl:operation> Line:183 Line:184 <wsdl:operation Line:185 Line:186 <wsdlsoap:operation Line:187 Line:188 <wsdl:input Line:189 Line:190 <wsdlsoap:body Line:191 Line:192 </wsdl:input> Line:193 Line:194 <wsdl:output Line:195 Line:196 <wsdlsoap:body Line:197 Line:198 </wsdl:output> Line:199 Line:200 </wsdl:operation> Line:201 Line:202 <wsdl:operation Line:203 Line:204 <wsdlsoap:operation Line:205 Line:206 <wsdl:input Line:207 Line:208 <wsdlsoap:body Line:209 Line:210 </wsdl:input> Line:211 Line:212 <wsdl:output Line:213 Line:214 <wsdlsoap:body Line:215 Line:216 </wsdl:output> Line:217 Line:218 </wsdl:operation> Line:219 Line:220 </wsdl:binding> Line:221 Line:222 <wsdl:service Line:223 Line:224 <wsdl:port Line:225 Line:226 <wsdlsoap:address Line:227 Line:228 </wsdl:port> Line:229 Line:230 </wsdl:service> Line:231 Line:232 </wsdl:definitions>
The code behind the WSDL file is simple. The EmployeesDB.java file has two methods:
- One method is called getEmployees(), which connects to the database and retrieves all the values in the employees table. It then creates a list employee object and populates its fields, name and identifier. See figure 7.
Figure 7. Employee table in the database
- The second method, called updateEmployee(), takes as input a string in an XML format and then decodes it into an “Employee” bean. Then it updates the employee name in the employee table using the identifier field, which is the primary key for this table.
To see the source code of this Web service, refer to Source code of “EmployeesDB webservices” in the Downloads section at the end of this article.
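That source is not reproduced in the article body, but the following is a minimal sketch of what such a class might look like, assuming JDBC with the connection settings used elsewhere in this article and the XStream API mentioned above. It is an illustration under those assumptions, not the actual downloadable code.

// Hypothetical sketch of the EmployeesDB web service backing the WSDL above.
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import com.thoughtworks.xstream.XStream;

public class EmployeesDB {

    private Connection connect() throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        return DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/flex", "root", "admin");
    }

    // Returns the employee list serialized to XML with XStream.
    public String getEmployees() throws Exception {
        List<Employee> employees = new ArrayList<Employee>();
        Connection conn = connect();
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("select name, identifer from employee");
        while (rs.next()) {
            Employee e = new Employee();
            e.name = rs.getString("name");
            e.identifer = rs.getString("identifer");
            employees.add(e);
        }
        rs.close(); st.close(); conn.close();
        XStream xstream = new XStream();
        xstream.alias("Employee", Employee.class);
        return xstream.toXML(employees);
    }

    // Takes an <Employee> XML fragment and updates the matching row.
    public String updateEmployee(String employeeXML) throws Exception {
        XStream xstream = new XStream();
        xstream.alias("Employee", Employee.class);
        Employee e = (Employee) xstream.fromXML(employeeXML);
        Connection conn = connect();
        PreparedStatement ps = conn.prepareStatement(
            "update employee set name = ? where identifer = ?");
        ps.setString(1, e.name);
        ps.setString(2, e.identifer);
        ps.executeUpdate();
        ps.close(); conn.close();
        return "updated";
    }

    public static class Employee {
        public String name;
        public String identifer;
    }
}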
The last step is to deploy the portlet application to your WebSphere Portal server. The portlet looks like the one shown in figure 8.
Figure 8. FlexWebService portlet as it appears in the Internet Explorer browser
Using JSON
JavaScript Object Notation, usually called JSON, is a lightweight data-interchange format. It is easy for humans to read and write, and it is easy for computers to parse and generate. JSON is based on a subset of the JavaScript Programming Language, Standard ECMA-262 Third Edition. JSON is built on two structures:
- A collection of name/value pairs. In various languages, this collection is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
- An ordered list of values. In most languages, this list is realized as an array, vector, list, or sequence.
These structures are universal data structures. Virtually all modern programming languages support them in one form or another. It makes sense that a data format that is interchangeable with programming languages also be based on these structures.
Ajax is a Web development technology that makes the server responses faster by enabling client-side scripts to retrieve only the required data from the server without retrieving a complete Web page on each request, an approach that can minimize the data transferred from the server.
These requests usually retrieve XML formatted responses, which are then parsed in the JavaScript code to render the results, which complicates the JavaScript code.
The idea of JSON is to make the response a specific data structure that can be easily parsed by the JavaScript code.
JSON has several advantages:
- It is a lightweight data-interchange format.
- It is easy for humans to read and write.
- It is easy for computers to parse and generate.
- It can be parsed trivially using the eval() procedure in JavaScript.
- It is supported by libraries in many languages, including ActionScript, C, C#, ColdFusion, E, Java, JavaScript, ML, Objective CAML, Perl, PHP, Python, Rebol, Ruby, and Lua.
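As a concrete illustration (hypothetical sample values), an array of employee records serialized as JSON, in the shape the servlet shown later returns, looks like this (the article's field-name spelling, identifer, is preserved):
[
  {"name": "Alice", "identifer": "1"},
  {"name": "Bob", "identifer": "2"}
]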
Running Adobe Flex calling JSON objects in a WebSphere Portal portlet
In this example, you can learn how to run an Adobe Flex application. The Flex application calls on a JSON object registered at the back-end servlet in the portal application. The JSON servlet retrieves objects from database. The Flex application also provides a form to update the data grid. Upon submitting the form, an update function in the JSON servlet is called and synchronizes the database accordingly.
Follow these steps to build a portlet application:
- Open Rational Software Architect.
- Select File - New - Project.
- In the New Project window that displays, expand the Portal folder and select Portlet Project. Click Next.
- In the New Portlet Project window that displays, do the following:
- In the Project name field, enter FlexJSONPortlet. You thereby create a basic portlet that extends the GenericPortlet class defined in the Java Portlet Specification Version 1.0 (JSR 168).
- Select WebSphere Portal V6.0 as Target RunTime.
- Accept the default settings for EAR Membership.
- Select JSR 168 Portlet in the Portlet API field.
- Accept the default settings in the Create a portlet section.
- Clear the Show advanced settings option.
Click Next.
Figure 9. Defining the portlet project
- In the Project Facets window that displays, the following settings are selected as default options:
- Dynamic Web Module
- Java
- JSR 168 Portlets
- JSR 168 Portlets on WebSphere Portal
Click Next to accept the default options. See figure 10.
Figure 10. Project Facets
- In the Portlet Settings window that displays, accept the default settings and click Next. See figure 11.
Figure 11. The Portlet Settings window
- In the Action and Preferences window, clear all the options and click Finish.
- Open the FlexJSONPortlet project, and right-click the WebContent folder. Add a new folder called movies. The folder structure of the FlexWebService portlet looks like the one shown in figure 12.
Figure 12. Folder structure after adding the "movies" folder
- Next, modify the portlet class FlexJSONPortlet found inside the com.ibm.flexjsonportlet package, to call FlexJSONPortletView.jsp in the doView() method as in the code in listing 6.
Listing 6. FlexJSONPortlet.java
package com.ibm.flexjsonportlet;

import java.io.*;
import javax.portlet.*;

public class FlexJSONPortlet extends GenericPortlet {

    public static final String JSP_FOLDER = "/_FlexJSONPortlet/jsp/";  // JSP folder name
    public static final String VIEW_JSP = "FlexJSONPortletView";       // JSP file name to be rendered on the view mode

    public void doView(RenderRequest request, RenderResponse response) throws PortletException, IOException {
        // Set the MIME type for the render response
        response.setContentType(request.getResponseContentType());

        // Invoke the JSP to render
        PortletRequestDispatcher rd = getPortletContext().getRequestDispatcher(getJspFilePath(request, VIEW_JSP));
        rd.include(request, response);
    }

    private static String getJspFilePath(RenderRequest request, String jspFile) {
        String markup = request.getProperty("wps.markup");
        if (markup == null)
            markup = getMarkup(request.getResponseContentType());
        return JSP_FOLDER + markup + "/" + jspFile + "." + getJspExtension(markup);
    }

    private static String getMarkup(String contentType) {
        if ("text/vnd.wap.wml".equals(contentType))
            return "wml";
        else
            return "html";
    }

    private static String getJspExtension(String markupName) {
        return "jsp";
    }

}
- Inside the _FlexJSONPortlet/jsp/html folder, you can find FlexJSONPortletView.jsp. Define the OBJECT and EMBED tags inside the div tag as shown in lines 26 through 43. In lines 18 through 21, there is a getURL JavaScript function that is called from the Flex application; it returns the URL for the JSON servlet. See listing 7.
Listing 7. FlexJSONPortletView.jsp
Line:1 <%@page Line:7 #fxJavaScript { width: 90%; height: 500px; } Line:8 #html_controls {width: 15em; margin-top: 1em; padding: 1em; Line:9 color: white; border: solid 1px white; Line:10 font-family: Arial Line:11 } Line:12 body>#html_controls { width: 13em;} Line:13 body {background-color: #869CA7;} Line:14 </style> Line:15 Line:16 Line:17 <script language="JavaScript"> Line:18 function getURL() { Line:19 var Line:29 <param name="movie" value="<%=renderResponse.encodeURL (renderRequest.getContextPath()+ "/movies/jsonrpcserializing.swf")%>" /> Line:30 <param name="quality" value="high" /> Line:31 <param name="bgcolor" value="#869ca7" /> Line:32 <param name="allowScriptAccess" value="sameDomain" /> Line:33 <embed src="<%=renderResponse.encodeURL(renderRequest.getContextPath()+ "/movies/jsonrpcserializing.swf")%>" Line:34 Line:42 </embed> Line:43 </object> Line:44 Line:45 </div>
JSONServlet.java has two methods, doGet() and updateEmployee().
- The doGet() method queries a back-end database using the JDBC class and creates the JSONObject and concatenates it as a JSONArray.
- The updateEmployee() method checks for name and identifier parameters in the current HTTP request to update the database.
See listing 8.
Listing 8. JSONServlet.java
package com.ibm.flexjsonportlet.json;

import java.io.*;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.servlet.*;
import javax.servlet.http.*;
import org.json.JSONArray;
import org.json.JSONObject;

public class JSONServlet extends HttpServlet {

    String url = "jdbc:mysql://localhost:3306/";
    String dbName = "flex";
    String driver = "com.mysql.jdbc.Driver";
    String userName = "root";
    String password = "admin";
    Connection conn = null;

    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        try {
            Class.forName(driver).newInstance();
            conn = DriverManager.getConnection(url + dbName, userName, password);
            System.out.println("Connected to the database");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        updateEmployee(request);
        JSONArray array = new JSONArray();
        try {
            Statement st = conn.createStatement();
            String query = "select * from employee";
            ResultSet rs = st.executeQuery(query);
            while (rs.next()) {
                JSONObject obj = new JSONObject();
                obj.put("name", rs.getString("name") != null ? rs.getString("name") : "");
                obj.put("identifer", rs.getString("identifer") != null ? rs.getString("identifer") : "");
                array.put(obj);
            }
            rs.close();
            st.close();
            System.out.println("query excuted successfully!");
        } catch (Exception e) {
            e.printStackTrace();
        }

        PrintWriter out = new PrintWriter(response.getWriter(), true);
        out.println(array);
    }

    public void updateEmployee(HttpServletRequest request) {
        try {
            String name = request.getParameter("name");
            String identifer = request.getParameter("identifer");
            if (name != null && identifer != null) {
                Statement st = conn.createStatement();
                // Note the space before "where"; the parameters are concatenated
                // directly into the SQL string here.
                st.executeUpdate("update employee set name = '" + name + "', " +
                                 "identifer = " + identifer +
                                 " where identifer = " + identifer + " ");
                st.close();
                System.out.println(name + " - " + identifer);
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
- Modify the web.xml file as shown in listing 9.
Listing 9. web.xml
<web-app>
    <display-name>FlexJSONPortlet</display-name>
    <servlet>
        <description></description>
        <display-name>JSONServlet</display-name>
        <servlet-name>JSONServlet</servlet-name>
        <servlet-class>com.ibm.flexjsonportlet.json.JSONServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>JSONServlet</servlet-name>
        <url-pattern>/JSONServlet</url-pattern>
    </servlet-mapping>
    <jsp-config>
        <taglib>
            <taglib-uri></taglib-uri>
            <taglib-location>/WEB-INF/tld/std-portlet.tld</taglib-location>
        </taglib>
    </jsp-config>
</web-app>
- Open Adobe Flex Builder and create a new Flex project.
In the New Flex Project window that displays, enter jsonrpcserializing in the Project name field, accept all the default settings, and click Finish.
The project folder structure now looks like the one shown in figure 13.
Figure 13. Flex project structure
- Modify the jsonrpcserializing.mxml file as shown in listing 10. Line-by-line explanations of the code appear after listing 10.
Listing 10. jsonrpcserializing.mxml
Line:1 <?xml version="1.0" encoding="utf-8"?> Line:2 <mx:Application xmlns: Line:4 Line:5 <mx:HTTPService Line:7 Line:8 Line:9 <mx:Script> Line:10 <![CDATA[ Line:11 Line:12 import mx.rpc.events.*; Line:13 import com.adobe.serialization.json.JSON; Line:14 import mx.rpc.events.ResultEvent; Line:15 import mx.controls.Alert; Line:16 import mx.collections.ArrayCollection; Line:17 import flash.external.ExternalInterface; Line:18 Line:19 private function onCreationComplete():void { Line:20 if (!ExternalInterface.available) Line:21 Alert.show( "No ExternalInterface available for container" ); Line:22 callJavaScript(); Line:23 Line:24 } Line:25 Line:26 private function callJavaScript():void { Line:27 if (!ExternalInterface.available) Line:28 return; Line:29 var retVal:Object = ExternalInterface.call ("getURL",""); Line:30 var urlString:String = retVal as String; Line:31 service.url = urlString; Line:32 service.send(); Line:33 } Line:34 Line:35 private function onLoad(event:ResultEvent):void { Line:36 //get the raw JSON data and cast to String Line:37 var rawData:String = String(event.result); Line:38 Line:39 //decode the data to ActionScript using the JSON API Line:40 //in this case, the JSON data is a serialize Array of Objects. Line:41 var arr:Array = (JSON.decode(rawData) as Array); Line:42 Line:43 //create a new ArrayCollection passing the de-serialized Array Line:44 //ArrayCollections work better as DataProviders, as they can Line:45 //be watched for changes. Line:46 var dp:ArrayCollection = new ArrayCollection(arr); Line:47 Line:48 //pass the ArrayCollection to the DataGrid as its dataProvider. Line:49 myGrid.dataProvider = dp; Line:50 profitPie.dataProvider = dp; Line:51 } Line:52 Line:53 private function choosenItem(selectedItem:Object):void { Line:54 formPanel.visible = false; Line:55 employeeObj.name = selectedItem.name; Line:56 employeeObj.identifer = selectedItem.identifer; Line:57 formPanel.visible = true; Line:58 formPanel.setStyle("showEffect","fadeIn"); Line:59 } Line:60 Line:61 private function submitForm():void { Line:62 //Alert.show(employeeObj.name); Line:63 service.send(employeeObj); Line:64 } Line:65 Line:66 ]]> Line:67 </mx:Script> Line:68 Line:69 <mx:Fade Line:70 Line:71 <mx:Panel Line:72 <mx:DataGrid Line:73 <mx:columns> Line:74 <mx:DataGridColumn Line:75 <mx:DataGridColumn Line:76 </mx:columns> Line:77 </mx:DataGrid> Line:78 <mx:PieChart Line:79 <mx:series> Line:80 <mx:PieSeries Line:81 </mx:series> Line:82 </mx:PieChart> Line:83 </mx:Panel> Line:84 Line:85 Line:86 <mx:Model Line:87 <root> Line:88 <name>{employeeName.text}</name> Line:89 <identifer>{identifer.text}</identifer> Line:90 </root> Line:91 </mx:Model> Line:92 Line:93 <mx:Panel Line:94 <mx:Form Line:95 <mx:FormItem Line:96 <mx:TextInput Line:97 </mx:FormItem> Line:98 <mx:FormItem Line:99 <mx:TextInput Line:100 </mx:FormItem> Line:101 </mx:Form> Line:102 Line:103 <mx:ControlBar> Line:104 <mx:Button Line:105 </mx:ControlBar> Line:106 </mx:Panel> Line:107 Line:108 </mx:Application>
Lines 2 through 3 show the application tag mx:Application. Flex defines a default application container that lets you start adding content to your application without explicitly defining another container. You can specify the layout to be horizontal, which means that the two panels included in this movie are adjacent to each other. You also register the onCreationComplete() function on the CreationComplete event. CreationComplete is the event that gets fired at the end of the component life cycle, after the component is created along with its children, if any.
Lines 19 through 24 show the declaration of the onCreationComplete function, in which you check for the ExternalInterface instance being declared and call the callJavaScript() function.
Lines 26 through 33 call the inline getURL() function that returns the URL of the JSON servlet. They also set the url attribute of the HTTP service and finally invoke the service by using the send() method.
Lines 35 to 51 show the onLoad() function registered for the HTTPService. This function gets invoked as a result of calling the HTTP service. The onLoad() function gets the raw JSON data and casts it to a String, decodes the data to ActionScript using the JSON API (in this case the JSON data is a serialized array of objects), and creates a new ArrayCollection passing the deserialized Array. ArrayCollections work better as DataProviders, as they can be watched for changes. Finally, it passes the ArrayCollection to the DataGrid as its dataProvider.
Lines 53 to 60 show the label function of the data grid. The function choosenItem passes on the entire row being selected in the data grid. You bind the values of that row to the employee model.
The employee model is bound in return to the form fields. Line 58 shows you how to add a fading effect to the form.
Lines 61 to 64 show the submitForm() function. This function is called when the Update button is clicked.
Lines 71 through 83 show a data grid and a pie chart embedded inside a panel titled Employees. The data grid has two columns, name and identifier. Use the PieSeries chart series with the PieChart control to define the data for the chart. Specify the identifier as a field property so that the chart knows which value to use to create pie wedges of the appropriate size.
Lines 94 to 101 show the form you use to update the employee data. Notice the added "fadeIn" effect in the showEffect="{fadeIn}" field, which shows the workflow of the application. You can find the fade effect declaration and properties at line 69.
Lines 103 to 106 show the Update button. On the click event, it calls the submitForm() function and passes on the employee model, which is bound to the form fields, name and identifier.
- Click the Export Release Build button to create the bin-release folder, in which you can find the jsonrpcserialization.swf movie. Move the jsonrpcserialization.swf file to the movies folder that you previously created in the portlet project. See figure 14.
Figure 14. Folder structure for the jsonrpcserialization project
- Let's return to the portal project; you just need to run it. Your portlet looks like figure 15.
Figure 15. FlexJSON portlet as it displays in the Internet Explorer browser
Conclusion
Adobe Flex is a new tool that can deliver rich Internet applications. Adobe Flex is a client solution that can be applied across browsers and platforms. Adobe Flex gives a portal application many advantages, such as updating information on a browser page without the need to refresh the whole page. In the portal environment, refreshing an entire page to deliver one functionality can be an expensive operation in terms of network resources and overall performance. Adobe Flex provides a way, through its embedded SOA proxy client and its embedded JSON adaptor, to deliver rich functionality within a portal framework. The article has shown you a line-by-line example of how to integrate Adobe Flex into WebSphere Portal. You can use Flex to render the user interface of portlets, overcome the limitations of HTML, and greatly improve the user experience within a portal.
Downloads
Resources
- Participate in the discussion forum.
- Learn more at the developerWorks® WebSphere Portal zone.
- Learn more about Adobe Flex.
- Read the white paper, "Using Macromedia Flex in a Portal Environment."
- Refer to the Adobe Flex Support Center.
- Read the article, "A New Way to Look at Portals" in the Flex Developer Journal.
- Take tutorials, watch videos at the Flex Developer Center.
- Take the tutorial, "Using JSON with Flex 2 and ActionScript3."
- Learn more about JSON..
|
http://www.ibm.com/developerworks/websphere/library/techarticles/0911_el-hadik/0911_el-hadik.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
public class Resubmitted extends Object
A org.apache.spark.scheduler.ShuffleMapTask that completed successfully earlier, but we lost the executor before the stage completed. This means Spark needs to reschedule the task to be re-executed on a different executor.
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public Resubmitted()
public static String toErrorString()
public static boolean countTowardsTaskFailures()
public abstract static boolean canEqual(Object that)
public abstract static boolean equals(Object that)
public abstract static Object productElement(int n)
public abstract static int productArity()
public static scala.collection.Iterator<Object> productIterator()
public static String productPrefix()
|
http://spark.apache.org/docs/latest/api/java/org/apache/spark/Resubmitted.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
A considerable number of Scala developers are attracted by the promises of type safety and functional programming in Scala, as can be seen by the adoption of libraries like Cats and Shapeless. When building an HTTP API, the choices for a pure functional programming approach are limited. Finch is a good candidate to fill that space and provide a full-stack FP experience.
My introduction to Finch happened when I was reviewing talks from the last Scala eXchange 2016 and I stumbled upon this good talk by Sofia Cole and Kingsley Davies.
They covered a lot of ground in just 45 minutes, but the thing that caught my attention was Kingsley speaking about Finch. I probably noticed it because I've used other frameworks before, either formally or in exploratory pet-projects, but I wasn't aware of the existence of Finch itself.
So, as usual when facing a new technology, I tried to understand the so called ‘angle of Finch’. Why should I care about it, when there are plenty of solid alternatives? Let’s do a high-level overview of Finch to see if it is a library worth spending our time on.
Functional Programming in the Web
If we open Finch’s README file we see that Finch describes itself as:
Finch is a thin layer of purely functional basic blocks atop of Finagle for building composable HTTP APIs. Its mission is to provide the developers simple and robust HTTP primitives being as close as possible to the bare metal Finagle API.
It can’t be stated more clearly: Finch is about using functional programming to build HTTP APIs. If you are not interested in functional programming you should stop reading, as you are in the wrong blog.
Finch promotes a healthy separation between HTTP operations and the implementation of your services. Finch’s aim is to manage all the IO in HTTP via a very thin layer that you create, a set of request endpoints that will cover all the HTTP IO in your server. It being a simple and composable layer also means that it will be easy to modify as your API evolves or, if it comes to that, it will be easily replaced by another library or framework.
How does Finch work? Let’s see a very simple example:
// string below (lowercase) matches a String in the URL
val echoEndpoint: Endpoint[String] = get("echo" :: string) { (phrase: String) =>
  Ok(phrase)
}

val echoApi: Service[Request, Response] = echoEndpoint.toServiceAs[Text.Plain]
The example above defines an echoEndpoint that returns whatever the user sends back to them, as a text response. The echoEndpoint defines a single get endpoint, as we can see from the implementation, and will be accessed via the path /echo/<text>. We also define our api at echoApi as a set of endpoints, although in this particular case we have a single endpoint.
Even if this is a simplistic example you can see there is little overhead when defining endpoints. You can easily call one of your services within the endpoint, keeping both layers cleanly separated.
A World of Endpoints
Finch’s core structure is a set of endpoint you use to define your HTTP APIs, aiming to facilitate your development. How does it achieve that? If you think about all the web apps you have built, there are a few things that you commonly do again and again. Finch tries to alleviate these instances of repetition or boilerplate.
Composable Endpoints
Let’s start with the fact that Finch is a set composable endpoints. Let’s assume you are creating a standard Todo list application using a REST API. As you proceed with the implementation you may produce the following list of URI in it:
GET /todo/<id>/task/<id> PATCH /todo/<id>/task/<id> DELETE /todo/<id>/task/<id>
As you can see, we have a lot of repetition. If we decided to modify those endpoints in the future there’s a lot of room for manual error.
Finch solves that via the aforementioned composable endpoints. This means we can define a generic endpoint that matches the path we saw in the example above:
val taskEndpoint: Endpoint[Int :: Int :: HNil] =
  "todo" :: param("todoId").as[Int] :: "task" :: param("taskId").as[Int]
The endpoint taskEndpoint will match the pattern we saw defined previously and will extract both ids as integers. Now we can use it as a building block for other endpoints. See the next example:
final case class Task(id: Int, entries: List[Todo])
final case class Todo(id: Int, what: String)

val getTask: Endpoint[Task] = get(taskEndpoint) { (todoId: Int, taskId: Int) =>
  println(s"Got Task: $todoId/$taskId")
  ???
}

val deleteTask: Endpoint[Task] = delete(taskEndpoint) { (todoId: Int, taskId: Int) =>
  ???
}
We have defined both a get and a delete endpoint, both reusing the previously defined taskEndpoint that matches our desired path. If down the road we need to alter our paths we only have to change one entry in our codebase, and the modification will propagate to all the relevant entry points. You can obviously do much more with endpoint composition, but this example gives you a glimpse of what you can achieve.
Typesafe Endpoints
Reducing the amount of code to be modified is not the only advantage of composable endpoints. If you look again at the previously defined implementation:
val taskEndpoint: Endpoint[Int :: Int :: HNil] =
  "todo" :: param("todoId").as[Int] :: "task" :: param("taskId").as[Int]

val deleteTask: Endpoint[Task] = delete(taskEndpoint) { (todoId: Int, taskId: Int) =>
  ???
}
We see that the deleteTask endpoint maps over two parameters, todoId and taskId, which are extracted from the definition of taskEndpoint. Suppose we were to modify the endpoint to cover a new scenario, like adding API versioning to the path:
val taskEndpoint: Endpoint[Int :: Int :: Int :: HNil] =
  "v" :: param("version").as[Int] :: "todo" :: param("todoId").as[Int] :: "task" :: param("taskId").as[Int]
We can see that the type of the endpoint has changed from Endpoint[Int :: Int :: HNil] to Endpoint[Int :: Int :: Int :: HNil]: an additional Int in the HList. As a consequence, all the endpoints that compose over taskEndpoint will now fail to compile, as they are currently not taking care of the new parameter. We will need to update them as required for the service to run.
This is a very small example, but we already see great benefits. Endpoints are strongly typed, and if you are reading this you probably understand the benefits of strong types and how many errors they prevent. In Finch this means that a change to an endpoint will be enforced by the compiler onto any composition that uses that endpoint, making any refactor safer and ensuring the coherence of the implementation.
Testable Endpoints
The previous section considered the type-safety of endpoints. Unfortunately this only covers the server side of our endpoints. We still need to make sure they are defined consistently with the expectations of clients.
Typical ways to ensure this include defining a set of calls your service must process correctly and processing them as part of your CI/CD step. But running these tests can be cumbersome to set up, due to the need to launch in-memory servers to execute the full service, as well as slow, because you may need to launch the full stack of your application.
Fortunately, Finch’s approach to endpoints provides the means to verify that your service follows the agreed protocol. Endpoints are functions that receive an HTTP request and return a response. As such, you can call an individual endpoint with a customised request and ensure it returns the expected result.
Let’s see an endpoint test taken from the documentation:
// int below (lowercase) matches an integer in the URL
val divOrFail: Endpoint[Int] = post(int :: int) { (a: Int, b: Int) =>
  if (b == 0) BadRequest(new Exception("div by 0"))
  else Ok(a / b)
}

divOrFail(Input.post("/20/10")).value == Some(2)
divOrFail(Input.get("/20/10")).value == None
divOrFail(Input.post("/20/0")).output.map(_.status) == Some(Status.BadRequest)
We test the divOrFail endpoint by passing different Input objects that simulate a request. We see how a get request fails to match the endpoint and returns None, while both post requests behave as expected.
Obviously more complex endpoints may require you to set up some stubs to simulate calls to services, but you can see how Finch provides an easy and fast way to ensure you don’t break an expected protocol when changing your endpoints.
JSON Endpoints
Nowadays, JSON is the lingua franca of REST endpoints. When processing a POST request one of the first tasks is to decode the body from JSON to a set of classes from your model. When sending the response, if you are sending back data, the last step is to encode that fragment of your model as a JSON object. JSON support is essential.
Finch excels in this department by providing support for multiple libraries like Jackson, Argonaut, or Circe. Their JSON documentation gives more details on what they support.
By using libraries like Circe you can delegate all the serialisation to be automatically managed by Finch, with no boilerplate required. For example, look at the following snippet taken from one of Finch's examples:

import io.finch.circe.jacksonSerializer._
import io.circe.generic.auto._

case class Todo(id: UUID, title: String, completed: Boolean, order: Int)

def getTodos: Endpoint[List[Todo]] = get("todos") {
  val list = ... // get a list of Todo objects
  Ok(list)
}
If you look at the getTodos endpoint, it states its return type is List[Todo]. The body obtains such a list and returns it as the response. There is no code to convert that list to the corresponding JSON object that will be sent through the wire; all this is managed for you via the two imports defined at the top of the snippet. Circe automatically creates an encoder (and a decoder) for the Todo case class, and that is used by Finch to manage the serialisation.
Using Circe has an additional benefit for a common scenario. When you receive a POST request to create a new object, usually the data received doesn't include the ID to assign to the object; you create this while saving the values. A standard pattern in these cases is to define your model with an optional id field, as follows:
case class Todo(id: Option[UUID], title: String, completed: Boolean, order: Int)
With Finch and Circe you can standardise the treatment of these scenarios via partial JSON matches, which allow you to deserialise JSON objects with missing fields into a partial function that will return the object when executed. See the following snippet:
def postedTodo: Endpoint[Todo] =
  jsonBody[UUID => Todo].map(_(UUID.randomUUID()))

def postTodo: Endpoint[Todo] = post("todos" :: postedTodo) { t: Todo =>
  todos.incr()
  Todo.save(t)
  Created(t)
}
In it the endpoint postedTodo is matching the jsonBody received as a function UUID => Todo. This will match any JSON object that defines a Todo object but is missing the id. The endpoint itself maps over the result to call the function with a random UUID, effectively assigning a new id to the object and returning a complete Todo object to work with.
Although this looks like nothing more than convenient boilerplate, don't dismiss the relevance of these partial deserialisers. The fact that your endpoint is giving you a full object, complete with a proper id, removes a lot of scenarios where you would need to be aware of the possible lack of id or use copy calls to create new instances. You work with a full and valid model from the moment you process the data in the endpoint, and this reduces the possibility of errors.
Metrics, Metrics, Metrics
The current trends in software architecture towards microservices and canary releases mean knowing what is going on in your application matters more than ever. Logging, although still important, is no longer enough. Unfortunately many frameworks and libraries assume you will use a third party tool, like Kamon or New Relic, to manage your metrics. Which, in a context of microservices, can get expensive quite fast.
Although plain Finch doesn’t include any monitoring by itself, the best practices recommend using Twitter Server when creating a service with Finch. TwitterServer provides extras tooling, including a comprehensive set of metrics along a complete admin interface for your server.
Having a set of relevant metrics by default means you start your service using best practices, instead of trying to retrofit measurements once you realise they are needed. These metrics can also be retrieved via JSON endpoints, which allows you to integrate them with your standard monitoring tools for alerting.
Performance
Performance is always a tricky subject, as benchmarks can be misleading and, if we are honest, for most of the applications we implement the performance of our HTTP library is not the bottleneck.
That said, Finch is built on top of Finagle, a very performant RPC system built by Twitter. Finch developers claim that "Finch performs on 85% of Finagle's throughput". Their tests show that using Finch along with Circe the server can manage 27,126 requests per second on their test hardware. More detailed benchmarks show that Finch is one of the fastest Scala libraries for HTTP.
So there you have it. Finch is not only easy to use, but it also provides more than decent performance, so you don’t have to sacrifice its ease of use even on your most demanding projects.
Good Documentation
You may be convinced at this point to use Finch, but with every new library you learn there comes a crucial question: how well documented is it? It's an unfortunate truth that open source projects often lack good documentation, a fact which increases the complexity of the learning curve.
Luckily for us Finch provides decent documentation for the users, including sections like best practices and a cookbook.
In fact, all of the examples in this post are taken from Finch’s documentation. I can say the documentation provided is enough to get you set up and running, and to start with your first services. For more advanced scenarios you may want to check the source code itself, which is well structured and legible.
Caveats
No tool is perfect, as the worn out “there is no silver bullet” adage reminds us. Finch, albeit quite impressive, has some caveats you need to be aware of before choosing it as your library.
The first and most important one is the lack of Websocket support. Although Finch has an SSE module, it lacks a full Websocket library. In many applications this is not an issue and you can work around it. But if you do need Websockets, you need to look elsewhere.
Related to the above limitation is the fact that Finch is still at version 0.11. Granted, nowadays software in pre-1.0 version can be (and is) stable and usable in production. And Finch is used in production successfully in many places, as stated by their README document. Finch is quite complete and covers the most common needs, but the library is growing and it may lack support for some things you may want. Like the aforementioned Websockets. Before choosing Finch make sure it provides everything you need.
The last caveat is the backbone of Finch, Finagle. Finagle has been developed by Twitter, and although stable and with a strong open source community, Twitter remains the main interested party using it.
In Conclusion
Finch is a good library for creating HTTP services, more so if you are keen on functional programming and interested in building pure services with best practices. It benefits from a simple but powerful abstraction (endpoints), removal of boilerplate by leveraging libraries like Circe, and great tooling (Twitter Server).

There are some caveats to be aware of, but we recommend you build a small service with it. We are confident you will enjoy the experience.
|
https://underscore.io/blog/posts/2017/01/24/finch-functional-web-development.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
notcurses_stdplane - Man Page
Name
notcurses_stdplane — acquire the standard ncplane
Synopsis
#include <notcurses/notcurses.h>
struct ncplane* notcurses_stdplane(struct notcurses* nc);
const struct ncplane* notcurses_stdplane_const(const struct notcurses* nc);
static inline struct ncplane* notcurses_stddim_yx(struct notcurses* nc, int* restrict y, int* restrict x);
static inline const struct ncplane* notcurses_stddim_yx_const(const struct notcurses* nc, int* restrict y, int* restrict x);
int notcurses_enter_alternate_screen(struct notcurses* nc);
int notcurses_leave_alternate_screen(struct notcurses* nc);
Description
notcurses_stdplane returns a handle to the standard ncplane for the context nc. The standard plane always exists, and is always the same size as the screen. It is an error to call ncplane_destroy(3), ncplane_resize(3), or ncplane_move(3) on the standard plane, but it can be freely moved along the z-axis.
The standard plane's virtual cursor is initialized to its uppermost, leftmost cell unless NCOPTION_PRESERVE_CURSOR is provided (see notcurses_init(3)), in which case it is placed wherever the terminal's real cursor was at startup.
notcurses_stddim_yx provides the same function, but also writes the dimensions of the standard plane (and thus the real drawable area) into any non-NULL parameters among y and x.
notcurses_stdplane_const allows a const notcurses to be safely used.
A resize event does not invalidate these references. They can be used until notcurses_stop(3) is called on the associated nc.
notcurses_enter_alternate_screen and notcurses_leave_alternate_screen only have meaning if the terminal implements the "alternate screen" via the smcup and rmcup terminfo(5) capabilities (see the discussion of NCOPTION_NO_ALTERNATE_SCREEN in notcurses_init(3)). If not currently using the alternate screen, and assuming it is supported, notcurses_enter_alternate_screen will switch to the alternate screen. This redraws the contents, repositions the cursor, and usually makes scrollback unavailable. The standard plane will have scrolling disabled upon a move to the alternate plane.
Return Values
notcurses_enter_alternate_screen will return -1 if the alternate screen is unavailable. Both it and notcurses_leave_alternate_screen will return -1 on an I/O failure.
Other functions cannot fail when provided a valid struct notcurses. They will always return a valid pointer to the standard plane.
See Also
notcurses(3), notcurses_init(3), notcurses_plane(3), notcurses_stop(3), terminfo(5)
Authors
nick black <nickblack@linux.com>.
Referenced By
notcurses(3), notcurses_init(3), notcurses_plane(3).
|
https://www.mankier.com/3/notcurses_stdplane
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Asked by:
Help Needed in Transaction
Question
- User1489758560 posted
Hello,

I am using asp.net core 2.2 and I need to use the transaction scope. On my business layer I am using transaction scope as follows:
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
options.Timeout = new TimeSpan(0, 10, 0);
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Method 1, which has a DB connection; I am opening and closing the connection properly
    // Method 2, which has a DB connection; I am opening and closing the connection properly
    // Method 3, which has a DB connection; I am opening and closing the connection properly
    scope.Complete();
}
DAL Layer:
public class EmpDB : DbContext
{
    public EmpDB() { }

    public EmpDB(DbContextOptions<EmpDB> options) : base(options) { }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (!optionsBuilder.IsConfigured)
        {
            optionsBuilder.UseSqlServer(//ConnectionString);
        }
    }

    public int GetEmpId(string EmpEmail)
    {
        using (EmpDB empDB = new EmpDB())
        {
            using (var dbCommand = empDB.Database.GetDbConnection().CreateCommand())
            {
                // proc call with parameters
                if (dbCommand.Connection.State != ConnectionState.Open)
                {
                    dbCommand.Connection.Open();
                }
                int EmpId = (int)dbCommand.ExecuteScalar();
                dbCommand.Connection.Close();
                return EmpId;
            }
        }
    }
}
When I run the API, I am getting the below error:
System.PlatformNotSupportedException: This platform does not support distributed transactions.
   at System.Transactions.Distributed.DistributedTransactionManager.GetDistributedTransactionFromTransmitterPropagationToken(Byte[] propagationToken)
   at System.Transactions.TransactionInterop.GetDistributedTransactionFromTransmitterPropagationoDistributedTrans, String accessToken)(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.Open()
I am not sure what mistake I am making. Please help me to solve this error. I tried searching and people are saying .net core 2.2 supports transaction scope. Please suggest how to solve the issue.

Thursday, April 23, 2020 3:14 AM
All replies
- User-474980206 posted
While not related to your error, TransactionScope uses thread local storage, so it's not compatible with asp.net core, which runs multiple requests on the same thread. You should use SQL Server's 2-phase commit, or use the same connection and use begin tran.

Thursday, April 23, 2020 3:57 AM
- User1120430333 posted
How do you know that Distributed Transaction Coordinator is even enabled on the Web server computer's O/S and on the database server computer's O/S?

April 23, 2020 4:04 AM
- User1489758560 posted
Thank you Rena Ni. In that case, can I use one open connection for multiple DB calls inside the transaction scope?

Will it work? Please suggest.

Friday, April 24, 2020 2:27 PM
- User-474980206 posted
Rather than using the distributed transaction manager (2-phase commit) you can use SQL Server's built-in batch transaction processing. If you use the same connection you can do a transaction batch, which is much more efficient than 2-phase commit.

As the example shows, if you can use the same connection for all transactions, use BeginTransaction or just execute the command.

Note: and again, TransactionScope is not supported with thread-agile applications like asp.net (core or standard). Asp.net standard may support it if no async calls are done.

Also, you appear to be using all sync database calls, which will kill performance in asp.net core.

Friday, April 24, 2020 2:43 PM
- User1489758560 posted
Thank you Bruce, I will go with a native SQL transaction as you suggested. Also, from your statement what I understood is that all the calls in asp.net core must be async. Please correct me if I am wrong.

Friday, April 24, 2020 5:09 PM
- User-474980206 posted
The asp.net core design counts on several requests running on the same thread. The only way this works is to use async I/O. Blocking a thread for a database call (which is really slow, >100ms) will really kill performance.

SQL Server has always supported transaction batches over a single connection. If you are using EF Core, just use the database to begin a transaction:

db.Database.BeginTransactionAsync()

which uses the underlying database technology rather than the distributed transaction manager.

Note: due to its overhead I've never found a case to use DTC. Generally you only need it for multiple database server commits.

Friday, April 24, 2020 8:15 PM
|
https://social.msdn.microsoft.com/Forums/en-US/8e489114-7d86-4485-8e78-1b9eec0837ac/help-needed-in-transaction?forum=aspdotnetcore
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Study Guide: Regular Expressions
Instructions
This is a study guide with links to past lectures, assignments, and handouts, as well as additional practice problems to assist you in learning the concepts.
Assignments
Important: For solutions to these assignments once they have been released, find the links in the front page calendar.
Lectures
Guides
What are Regular Expressions?
Consider the following scenarios:
- You've just written a 500-page book on how to dominate the game of Hog. Your book assumes that all players use six-sided dice. However, in 2035, the newest version of Hog is updated to use 25-sided dice. You now need to find all instances where you mention six-sided dice in your book and update them to refer to 25-sided dice.
- You are in charge of a registry of phone numbers for the SchemeCorp company. You're planning a company social for all employees in the city of Berkeley. To make your guest list, you want to find the phone numbers of all employees living in the Berkeley area codes of 415 or 314.
- You're the communications expert on an interplanetary voyage, and you've received a message from another starship captain with the locations of a large number of potentially habitable planets, represented as strings. You must determine which of these planets lie in your star system.
What do all of these scenarios have in common? They all involve searching for patterns within a larger piece of text. These can include extracting strings that begin with a certain set of characters, contain a certain set of characters, or follow a certain format.
Regular expressions are a powerful tool for solving these kinds of problems. With regular expression operators, we can write expressions to describe a set of strings that match a specified pattern.
For example, the following code defines a function that matches all words that start with the letter "h" (capitalized or lowercase) and end with the lowercase letter "y".
import re

def hy_finder(text):
    """
    >>> hy_finder("Hey! Hurray, I hope you have a lovely day full of harmony.")
    ['Hey', 'Hurray', 'harmony']
    """
    return re.findall(r"\b[Hh][a-z]*y\b", text)
Let's examine the above regular expression piece by piece.
- First, we use r"", which denotes a raw string in Python. Raw strings handle the backslash character \ differently than regular string literals. For example, the \b in this regular expression is treated as a sequence of two characters. If we were to use a string literal without the additional r, \b would be treated as a single character representing an ASCII backspace code (see the quick check after this list).
- We then begin and end our regular expression with \b. This ensures that word boundaries exist before the "h" and after the "y" in the string we want to match.
- We use [Hh] to represent that we want our word to start with either a capital or lowercase "h" character.
- We want our word to contain 0 or more (denoted by the * character) lowercase letters between the "h" and "y". We use [a-z] to refer to this set.
- Finally, we use the character y to denote that our string should end with the lowercase letter "y".
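A quick check (this snippet is not part of the guide) makes the raw-string point concrete:

import re

text = "Hey! Hurray, I hope you have a lovely day full of harmony."
print(len("\b"), len(r"\b"))                  # 1 2: "\b" is a single backspace character, r"\b" is two characters
print(re.findall("\b[Hh][a-z]*y\b", text))    # [] - the non-raw pattern contains literal backspace characters
print(re.findall(r"\b[Hh][a-z]*y\b", text))   # ['Hey', 'Hurray', 'harmony']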
Regular Expression Operators
Regular expressions are most often constructed using combinations of operators. The following special characters represent operators in regular expressions: \, (, ), [, ], {, }, +, *, ?, |, $, ^, and .

We can still build regular expressions without using any of these special characters. However, these expressions would only be able to handle exact matches. For example, the expression potato would match all occurrences of the characters p, o, t, a, t, and o, in that order, within a string.
Leveraging these operators enables us to build much more interesting expressions that can match a wide range of patterns. We'd recommend using interactive tools like regexr.com or regex101.com to practice using these.
Let's take a look at some common operators.
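As a rough illustration (this snippet is not from the original guide), here is how a few of these operators behave in Python:

import re

re.findall(r"ab*c", "ac abc abbbc")      # ['ac', 'abc', 'abbbc']   * : zero or more of the preceding element
re.findall(r"ab+c", "ac abc abbbc")      # ['abc', 'abbbc']         + : one or more
re.findall(r"colou?r", "color colour")   # ['color', 'colour']      ? : zero or one
re.findall(r"[Hh]og", "Hog hog dog")     # ['Hog', 'hog']           [...] : any one character from a set
re.findall(r"cat|dog", "cat dog cow")    # ['cat', 'dog']           | : either alternative
re.findall(r"^d.g$", "dog")              # ['dog']                  ^ $ . : start anchor, end anchor, any character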
Regular Expressions in Python
In Python, we use the re module (see the Python documentation for more information) to write regular expressions. The following are some useful functions in the re module:
- re.search(pattern, string) - returns a match object representing the first occurrence of pattern within string
- re.sub(pattern, repl, string) - substitutes all matches of pattern within string with repl
- re.fullmatch(pattern, string) - returns a match object, requiring that pattern matches the entirety of string
- re.match(pattern, string) - returns a match object, requiring that string starts with a substring that matches pattern
- re.findall(pattern, string) - returns a list of strings representing all matches of pattern within string, from left to right
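As a quick illustration (this example is not from the original guide), here is how these functions behave on a small string:

import re

text = "har hey hoy"

re.search(r"h[a-z]y", text)       # match object for 'hey', the first occurrence of the pattern
re.sub(r"h[a-z]y", "hi", text)    # 'har hi hi'
re.fullmatch(r"h[a-z]y", "hey")   # match object: the pattern matches the entire string
re.fullmatch(r"h[a-z]y", text)    # None: the pattern does not match the entirety of text
re.match(r"h[a-z]r", text)        # match object: text starts with 'har'
re.findall(r"h[a-z]y", text)      # ['hey', 'hoy']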
Practice Problems
Easy
Medium
Q1: Party Planner
You are the CEO of SchemeCorp, a company you founded after learning Scheme in CS61A. You want to plan a social for everyone who works at your company, but only your colleagues who live in Berkeley can help plan the party. You want to add all employees located in Berkeley (based on their phone number) to a party-planning group chat. Given a string representing a list of employee phone numbers for SchemeCorp, write a regular expression that matches all valid phone numbers of employees in the 314 or 510 Berkeley area codes.
In addition, a few of your colleagues are visiting from Los Angeles (area code 310) and Montreal (area code 514) and would like to help. Your regular expression should match their phone numbers as well.
Valid phone numbers can be formatted in two ways. Some employees entered their phone numbers with parentheses around the area code (for example, (786)-375-6454), while some omitted the parentheses (for example, 786-375-6454). A few employees also entered their phone numbers incorrectly, with either greater than or fewer than 10 digits. These phone numbers are not valid and should not be included in the group chat.
import re

def party_planner(text):
    """ Returns all strings representing valid phone numbers with 314, 510,
    310, or 514 area codes. The area code may or may not be surrounded by
    parentheses. Valid phone numbers have 10 digits and follow this format:
    XXX-XXX-XXXX, where each X represents a digit.

    >>> party_planner("(408)-996-3325, (510)-658-7400, (314)-3333-22222")
    ['(510)-658-7400']
    >>> party_planner("314-826-0705, (510)-314-3143, 408-267-7765")
    ['314-826-0705', '(510)-314-3143']
    >>> party_planner("5103143143")
    []
    >>> party_planner("514-300-2002, 310-265-4242") # invite your friends in LA and Montreal
    ['514-300-2002', '310-265-4242']
    """
    return re.findall(__________, text)

Solution:

    return re.findall(r"\(?[53]1[04]\)?-\d{3}-\d{4}", text)
After creating your group chat, you find out that your friends in Montreal and Los Angeles can no longer attend the party-planning meeting. How would you modify your regular expression to no longer match the 514 and 310 area codes?

We can use the | operator to match either a phone number with a 510 area code or a phone number with a 314 area code. The final regular expression looks like this:

\(?510\)?-\d{3}-\d{4}|\(?314\)?-\d{3}-\d{4}
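A quick check of this modified pattern (this snippet is not part of the original solution; the phone numbers below are made up):

import re

berkeley_only = r"\(?510\)?-\d{3}-\d{4}|\(?314\)?-\d{3}-\d{4}"
numbers = "514-300-2002, 310-265-4242, (510)-658-7400, 314-826-0705"
print(re.findall(berkeley_only, numbers))   # ['(510)-658-7400', '314-826-0705']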
|
https://inst.eecs.berkeley.edu/~cs61a/su21/study-guide/regex/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
On 01/07/2012 at 15:17, xxxxxxxx wrote:
Hi, I cannot find any way to enable, disable (grey out) or even hide user controls for my plugin.
For example, I have a checkbox named "Automatic". When this is checked, I want a spline control to be either hidden, or greyed out.
I have come that far that I programmatically, without reloading the plugin and without restarting C4D, can change the value of a checkbox, in the Execute event:
def Execute(self, tag, doc, op, bt, priority, flags) :
tag[1005] = True
This is all.
Considering how simple this is to do, to change the control's default value, I wish it were possible to do this:
tag[1005].Enabled = False
or
tag[1005].Visible = False
But (of course) it has to be way more complicated than this. And I also might want to hide / grey out the group the control in question belongs to, but I have not found a way to access the group itself, at all. Any help is much appreciated!
There is a thread here, but I have not succeeded in making anything here work in my own plugin.
-Ingvar
On 01/07/2012 at 15:36, xxxxxxxx wrote:
You need to look at the function GetDEnabling. Cinema calls this to determine if a control should be enabled or disabled (greyed out). Return true if enabled, false if not. So in your case you would test if the passed ID is for your spline object, if it is you check the status of the checkbox and return the correct value.
You can also show or hide controls but it's more tricky and in C++ you need the function GetDDescription. I don't think it's in the Python API yet, unless they've added an easier way to do it (sorely needed).
On 02/07/2012 at 04:17, xxxxxxxx wrote:
Thanks spedler! This is the right solution.
But still not that easy to find out about.
Here is my code:
def GetDEnabling(self, node, id, t_data, flags, itemdesc) :
    if (id[0].id == MY_SUPER_SPLINE) :
        return node[SOME_CHECKBOX] == 1
    return True
Firstly, it is not obvious to me how to get the id out of the id - lol :))
The id that comes as an argument actually consists of 3 numbers. And to get the one I want, I had to search the Internet, yes. id[0].id is what I need. Not obvious at all, and no example in the doc showing it to me.
Then, the C4D docs about this event handler are strange. First it says the method should return true to set the control enabled, and false to set it disabled (grayed out). And in fact, that works. But in the same document I am told to "Then make sure to include a call to the parent at the end:" and also this: "Note: It is recommended that you include a call to the base class method as your last return."
With this example:
return NodeData.GetDEnabling(node, id, t_data, flags, itemdesc)
I wish the documentation could make up its mind. Am I supposed to return True or False, or am I supposed to return the call to this function?
Anyhow, I was not able to implement this call at all, have no idea how to do it, and the docs give me no example. "NodeData" is unknown is the message I get.
Without the help from you guys, I would have been stuck. This is a good example of the lack of examples in the docs. I wish they had shown sample code, like the one I posted in the beginning of this message. Would have saved me lots of time!
-Ingvar
On 02/07/2012 at 06:48, xxxxxxxx wrote:
Yes, the documentation should be improved because it's currently confusing.
The note:
"It is recommended that you include include a call to the base class method as your last return."
Should be:
"If the passed id element is not processed, you should include a call to the base class method as your last return:"
And here's an example:
def GetDEnabling(self, node, id, t_data, flags, itemdesc) :
    if (id[0].id == MY_SPLINE_ID) :
        return node[MY_CHECKBOX_ID] == 1
    else:
        return NodeData.GetDEnabling(node, id, t_data, flags, itemdesc)
On 02/07/2012 at 06:53, xxxxxxxx wrote:
The other thing which could be improved are the examples. Some have clearly been ported from the C++ original, but not completely. For example, Ingvar wants a GetDEnabling example, and you can find one in the C++ SDK DoubleCircle plugin, but the Python port is missing that bit. Presumably the call wasn't in the API when it was ported, so they really need bringing up to date.
On 02/07/2012 at 08:32, xxxxxxxx wrote:
In:
return NodeData.GetDEnabling(node, id, t_data, flags, itemdesc)
what is "NodeData" supposed to be?
I've tried "self", "node" and the Object Class but no luck.
Cheers
Lennart
On 02/07/2012 at 08:53, xxxxxxxx wrote:
Originally posted by xxxxxxxx
And here's an example:
def GetDEnabling(self, node, id, t_data, flags, itemdesc) :
    if (id[0].id == MY_SPLINE_ID) :
        return node[MY_CHECKBOX_ID] == 1
    else:
        return NodeData.GetDEnabling(node, id, t_data, flags, itemdesc)
Originally posted by xxxxxxxx
Good. As I wrote in my post further up, and as Lennart points out, what is NodeData supposed to be? Have you tried this yourself? The problem is - the example you gave won't run here.
While I am on the air - is there a way to disable (grey out) a whole group? Group IDs do not appear in this GetDEnabling handler, unfortunately. Only the controls themselves.
-Ingvar
On 02/07/2012 at 10:20, xxxxxxxx wrote:
@ingvarai:
NodeData is supoosed to be the NodeData class from the c4d.plugins module. Also, Yannick forgot to pass self as the first argument.
else:
    return c4d.plugins.NodeData.GetDEnabling(self, node, id, t_data, flags, itemdesc)
This might be confusing when you've been working with other languages before, but this is how Python works.
Personally, I prefer using the super() method. As all classes in the c4d module inherit from object, you won't get problems with it.
else:
    return super(MySubclass, self).GetDEnabling(node, id, t_data, flags, itemdesc)
Here's some code that should help you understand.
class Superclass(object) :
    def bar(self) :
        print "Superclass.bar()"

class Subclass(Superclass) :
    def bar(self) :
        print "Subclass.bar()"
        Superclass.bar(self)
        super(Subclass, self).bar()

o = Subclass()
o.bar()
print
Superclass.bar(o)
Subclass.bar(o)
super(Subclass, o).bar()

Output:

Subclass.bar()
Superclass.bar()
Superclass.bar()

Superclass.bar()
Subclass.bar()
Superclass.bar()
Superclass.bar()
Superclass.bar()
-Nik
On 02/07/2012 at 10:56, xxxxxxxx wrote:
Thanks Niklas, one step closer, but I still get a "TypeError: argument 5".
Looking in the SDK, it says "flags" is not used, so I removed it, but then got: "TypeError: GetEnabling() takes exactly 5 arguments (6 given)"
It's only five given.....
Maxon for heavens sake, give a -working- example.
Not only is the SDK a nightmare, but this is the official plugin support.
Ingvar you will soon learn that coding for Cinema is not a walk in the park.
On 02/07/2012 at 11:37, xxxxxxxx wrote:
@tca:
Hm, I actually didn't execute the version I corrected from Yannick's post, but I can't spot the error now by just looking at it. I'll dig in right away.
Yes, indeed. I hope to have enough influence on MAXON, being a beta tester now, to make them enhance the SDK.
Cya in a minute,
-Nik
PS: Not used doesn't mean it does not require the argument.
On 02/07/2012 at 12:29, xxxxxxxx wrote:
@tca, ingvar, Yannick:
I can't get the parent-call to work. (I've never needed it, so I didn't know it doesn't work until now) I guess it's just a bug in the Py4D API. Yannick or Sebastian should be able to give us more information.
If the official support does not have a solution for the parent-call, I'd suggest just to ignore the advice in the SDK and return False instead of the return-value of the parent-call.
def GetDEnabling(self, node, id, t_data, flags, itemdesc) :
    rid = id[0].id
    if rid == c4d.MYTAG_VALUE:
        # ...
        return False
    elif rid == ...:
        # ...
        return True
    return False
@Lennart
> Ingvar you will soon learn that coding for Cinema is not a walk in the park.

Which is illustrated by this:

"Maxon for heavens sake, give a -working- example."

When Maxon cannot do it right themselves, I somehow accept that I myself have problems.. I am asking myself to what extent it is my way of thinking, to what extent the documentation is written in an unusual way and so forth. But I am starting to realize that the way the docs are laid out is unusual to me, and that this is part of the source of my problems. I spend waaaaaaaaay too much time carrying out even the simplest tasks. So it is partly bad or even wrong documentation, and partly me that is not familiar with the way it is documented.

And I have written plugins before. I wrote several for Sony Vegas, the video NLE. What a breeze! The SDK has a full list of Classes, their properties, their methods and mostly everything works on the first or second attempt. Ok, sleeves up, I must get used to the docs. But I often feel like blindfolded, tapping around in the darkness..
And if you want C4D to crash - I mean really crash, you can do this:
def GetDEnabling(self, node, id, t_data, flags, itemdesc) :
    if(id == 1003) :
        Return False
:)))
-Ingvar
On 02/07/2012 at 12:38, xxxxxxxx wrote:
Nik,
how do you do this:
c4d.MYTAG_VALUE:
I have never gotten this to work.
I must redefine MYTAG_VALUE in the pyp file.
I often see the c4d.XXXX. What does it mean, and how do you do it?
Another question:
Is it possible to disable a whole group using Python? (With several user controls)
I see that C4D can do it, with the built in controls.
-Ingvar
@ingvar:
I totally agree with you. Once I started with Py4D, I had experience with COFFEE so it wasn't that hard, because I did already understand the principles of c4d. I've never written any code before COFFEE, but I think it is even harder for people that are already used to something else, especially something better else.
To your "crashy" code: Why should this crash Cinema 4D? ^^ IMHO the easiest way to crash it, is this:
op.InsertUnder(op)
or even
import types
types.FunctionType(types.CodeType(0, 0, 0, 0, "KABOOM", (), (), (), "", "", 0, ""), {})()
That even works for every Python Interpreter
edit :
This only works for descriptions, not for dialogs. When C4D detects a new plugin, it recomputes the "symbolcache". But if you later add new symbols to your description, it might happen that C4D does not add them to the "symbolcache". [citation needed, information out of experience]
You can delete the file under %appdata%/MAXON/Cinema 4D RXX/prefs/symbolcache to fix this. Note that it's called coffeesymbolcache under R12.
Hm, I don't know actually. Only when Cinema 4D asks you about the group in GetDEnabling. Otherwise, you'd have to figure out what ID's are in the group you'd like to disable.
The whole GetDEnabling thingy is a little overcomplicate, imho. Why not just make something like SetDEnabling(id, status)?! Just like Enable() of the GeDialog class..
On 02/07/2012 at 13:07, xxxxxxxx wrote:
Nik,
> You can delete the file under %appdata%/MAXON/Cinema 4D RXX/prefs/symbolcache
Worked! Thank you!
> Only when Cinema 4D asks you about the group in GetDEnabling
Unfortunately it does not. Groups are not iterated, only user controls.
> The whole GetDEnabling thingy is a little overcomplicate
Probably prepared for a more complicated future..
Lots of things seem overcomplicated to me..
-Ingvar
On 02/07/2012 at 13:37, xxxxxxxx wrote:
Thanks Niklas for checking.
On 03/07/2012 at 01:29, xxxxxxxx wrote:
Originally posted by xxxxxxxx
The whole GetDEnabling thingy is a little overcomplicate, imho. Why not just make something like SetDEnabling(id, status)?! Just like Enable() of the GeDialog class..-Nik
My guess is that a lot of the SDK is ancient and comes from early versions of Cinema. To rewrite it now would be a gigantic task because this goes right to the core of Cinema's GUI.
If you think GetDEnabling is bad, wait until you have to use GetDDescription (C++ only ATM). This one requires you to use undocumented function calls that aren't even in the C++ docs.
On 04/09/2017 at 00:45, xxxxxxxx wrote:
ahhm... this thread and others about GetDEnabling() are a bit confusing!
ie. the sdk example does not use the (above discussed) return c4d.plugins.NodeData.GetDEnabling() call
so, is there a way to ghost an userdata entry (in python tag space)?
for example i have a simple op[c4d.USERDATA_ID,5] = "whatever"
now i want to ghost (or unghost) this field... (maybe just to disallow user interactions)
i guess you need to hook GetDEnabling() in like message() ? but then, if i add
def GetDEnabling(self, node, id, t_data, flags, itemdesc) :
    print "hi there!"
to my python tag, it doesn't get called at all. I do not get any console output....
is GetDEnabling even possible in python tags?
On 06/09/2017 at 05:40, xxxxxxxx wrote:
Hi,
neither GetDEnabling() nor GetDDescription() can be overridden in a Python Tag (talking about the scripting tag here, not a TagData plugin implemented in Python). So, no, it's unfortunately not possible to disable parameters in a Python tag. But it should be fairly easy to translate a Python tag into a TagData plugin written in Python (script code goes into Execute()).
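A rough, untested sketch (not from this thread) of what such a TagData plugin could look like; the plugin ID, parameter IDs and the description resource name are placeholders you would replace with your own:

import c4d
from c4d import plugins

PLUGIN_ID = 1000001  # placeholder - use your own unique plugin ID

class MyTag(plugins.TagData):

    def Execute(self, tag, doc, op, bt, priority, flags):
        # the former Python-tag script code goes here
        return c4d.EXECUTIONRESULT_OK

    def GetDEnabling(self, node, id, t_data, flags, itemdesc):
        # grey out parameter 1001 whenever checkbox 1002 is unchecked (both IDs are placeholders)
        if id[0].id == 1001:
            return node[1002] == True
        return True

if __name__ == "__main__":
    plugins.RegisterTagPlugin(id=PLUGIN_ID, str="My Tag",
                              info=c4d.TAG_VISIBLE | c4d.TAG_EXPRESSION,
                              g=MyTag,
                              description="Tmytag",  # needs a matching description resource for the parameters
                              icon=None)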
On 06/09/2017 at 23:12, xxxxxxxx wrote:
ok, thanks for clarifying this, andreas!
|
https://plugincafe.maxon.net/topic/6436/6907_enable--disable--hide-user-controls
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Aladin 10 - October 2017 - (Quick overview...)
Aladin 9 - November 2015 - (Quick overview...)
Aladin 8 - February 2014 - (Quick overview...)
Aladin 7.5 - July 2012
Contributors:
Other direct or indirect contributions:
Softwares:
Courses & astronomical tutorials:
Repackaging:
Since Aladin v10, this data collection list is presented as a "data tree" (left panel) that you can easily browse, filter, explore...
In addition,you can use your own data images (in FITS, JPEG, GIF, PNG), catalogues (FITS, VOTable, Tab-Separated-Value (TSV), Character-Separated-Value (CSV), IPAC-TBL or simple ASCII tables (aligned columns)), or all sky maps (HiPS and HEALPix FITS maps) via the menu "File -> Open local file" or on the command line as one argument or also via the script command "load".
The beta version incorporates new features in test phase for the next official Aladin version. The stability of these features is not totally guaranteed. (details...)
This beta version can be downloaded from the Aladin download page.
Note: if you want to start the Aladin official release in "beta" mode, just add the command line parameter "-beta" => java -jar Aladin.jar -beta

The Aladin user manual is available in French (V10 manual), Italian (V6 manual), and English (V6 manual).
Now, if you need help, a new function has been incorporated since Aladin v10 for "lazy" users. Just leave your mouse on a button or an icon. After 6 seconds, you will get a short help on this function, a little bit longer than the basic tooltip.
If your configuration is supporting it, you can also install and start Aladin Desktop thanks to JNLP technology (ice-tea under linux). Just click on this JNLP URL (Java Webstart installer).
Note that you need to have a Java runtime environment installed on your machine - see the next point for details.
You can also start Aladin Desktop by downloading the Aladin.jar file and using this command line in a console window: java -jar Aladin.jar

This Java Virtual Machine needs to be downloaded for your particular platform.
If you need Aladin in your Web browser, have a look at Aladin Lite, which definitely provides a better solution.
When developing Aladin, we are usually trying hard to optimize its performances. So even a "small" computer is a good choice to run Aladin. However, to deal with large catalogs (>1,000,000 sources), you need enough memory (typically 2Gb Ram).
Note that Aladin Desktop can run neither under Android phones/pads nor Mac mobile phones. Mac phones do not integrate Java technology. And even if Android system is Java based, Google has not implemented legacy Java Oracle libraries required by Aladin (notably AWT). Thus, no mobile support for Aladin Desktop is provided.
If you need Aladin on your mobile phone, have a look at Aladin Lite. To load a very large catalogue such as the Gaia catalogue, you can give Aladin more memory by launching it with a larger Java heap, for instance: java -Xmx2g -jar Aladin.jar
But, if you have used a dedicated Aladin packaging you will have to adapt the Aladin launcher according to your configuration:
Warning: if you plan to increase JVM memory to more than 4GB, you have to install a 64-bit JVM.
Additionally, Aladin provides powerful filter mechanisms to temporarily hide non-relevant collections (by keywords or other coverage constraints). Aladin V11 adds an advanced sort facility for the data discovery tree.
Note that the "Server Selector" form previously used in earlier versions is still available (menu File -> Open server selector... Ctrl+L)).
In addition, do not hesitate to explore the advanced filter mechanisms (menu File -> Filter on data collections...). You will discover that you can filter this list by a lot of advanced constraints (spatial, temporal, energy). Note that these advanced filters can be saved to be reused later. They appear in the filter selector just under the "select" field.
Alternatively, you could create a bookmark for a specific collection in order to avoid searching for it each time you need it (bookmark icon in the "Access Window" associated to each collection).
To solve this issue, you can "scan" these collections (those you have selected in the tree). Aladin will explore these collections in order to produce a "temporary" and "local" -in the displayed field- coverage map for each of them. After this process, Aladin will be able to indicate if these collections will have results (green color) or not (orange results) in the field.
Note: Take care that this scan process can be long and quite "heavy" for the collection servers. So try to convince the data providers to generate their MOC and register it in the VO registry. It's definitely a better solution.
You have two methods to use TAP in Aladin: on resource or on server:
TAP on resource:
Aladin provides a dedicated TAP form for each resource compatible with this protocol. This form is accessible via the check box "by criteria" displayed in the "Access window" associated with each resource in the "data discovery tree". In this case, the TAP form is pre-filled with the metadata corresponding to the resource. You can write an ADQL query directly on the bottom panel, or more easily you can generate it by specifying with the mouse the fields you want to obtain and by adding the constraints one by one. These actions will be automatically translated into the corresponding ADQL syntax. In addition, thanks to the SYNC / ASYNC selector, before submitting your request, you can specify whether you want to launch the request in a-synchronous mode - and later retrieve the result even in another Aladin session.
TAP on server:
In some cases, the TAP request that you want to submit is not limited to a unique table or a unique resource. In this case, it is more suitable to open the TAP form for all the DB tables available on a dedicated TAP server. You can switch from single-resource mode to global mode using the "Mode" selector located at the top right of the form. You can also directly use the menu "File -> Open the server selector -> TAP" and select the TAP server that you want to query.
Note that it is possible in Aladin V11 to take into account a local table, already loaded in Aladin as a catalog plane to perform a join with remote data accessible by TAP. This extremely powerful mechanism is available via the "JOIN" button present in the TAP form.
The dedicated format for images is FITS, with or without WCS fields in the header. These images can be compressed with RICE or GZIP (internally or externally).
Note: This feature can be used to create your own Proposal tool based on this standard and the VOApp/VOObserver Aladin Java interfaces (more...). For instance, see APT (the STScI HST/JWST proposal tool).
Note: You can load just a specific extension (or a set of extensions) by suffixing the FITS filename with the extension numbers embedded in brackets and separated by commas. Example: myMEF.fits[1,4-6,8]
Note: Each CCD image has its own background adjustment. If you want to have the same pixel cut, use the pixel form and apply the same constraints on all CCDs. Note: If each CCD astrometrical solution is not oriented in the same direction, you can have a surprising rotation effect each time you click on another CCD image.
Notice that Aladin does not support FITS internal compressions for tables.
As a script command, Aladin recognizes basic STC-S (IVOA working draft) region specifications (Circle, Ellipse, Box, Polygon, Position) and translates them into regular Aladin draw commands. However, it does not take into account the possible operators Union, Intersection, Difference, Not... It always assumes BARYCENTER reference. Note that the unit specification is supported.
4.18.2 STC-S "s_region" field:
As a field value in a catalog entry, Aladin recognizes STC-S strings (all kinds of region types, and the Union operator). However, in order to help Aladin to recognize such a column, the "s_region" column name is recommended, and/or the dedicated ObsCore utype (see the IVOA ObsCore standard for the definition).
In fact, the RGB FITS is not really a standard but has been adopted by several tools.
Aladin supports these existing conventions:
Several functions are available for manipulating cubes:
Aladin also supports colored RGB cube. See next section.
Usage example:
Note: Thanks to the Galex Viewer team for this feature idea
The list of bookmarks is displayed on the top of the Aladin main panel as a list of words, prefixed by a "bookmark" logo (green.
Since release v10, you can directly create a new bookmark for any resource provided in the data discovery tree via the small "bookmark" icon displayed at the bottom of the associated "access window".
Note: Aladin V10 no longer supports the PLASTIC protocol (the ancestor of SAMP). You can only force the "North up" orientation by clicking on the corresponding button at the bottom left corner of Aladin.
Note that this spectrum widget is very basic. Do not hesitate to explore the QuickViz Aladin plugin for a more powerful cube spectrum tool.
Note that you can append manually the reticle position as a new target by pressing the "location" icon at the left side of this triangle.
If it is known, the default J2000 epoch is replaced by the epoch of the background image (from the EPOCH or DATE-OBS FITS keywords).
For resetting the epoch, just click on the slider label.
Additionally, you can identify the object or retrieve the list of bibliographical references associated with this object by the CDS team.
Since version 10, this function can be directly activated via the "study" icon (orange or green mode), at the bottom left of the view panel..
Since version 10, this function can be directly activated via the "study" icon (green mode), at the bottom left of the view panel.
The messages will appear in the console window in which you launched Aladin. In level 6, the HiPS display strategy is visualized directly in the view.

Usage:
   Aladin -hipsgen ...
   Aladin -mocgen ...
   Aladin -help
   Aladin -version

Options:
   -help: display this help
   -version: display the Aladin release number
   -local: without Internet test access
   -theme=dark|classic: interface theme
   -screen="full|cinema|preview": starts Aladin in full screen, cinema mode or in a simple preview window
   -script="cmd1;cmd2...": script commands passed by parameter
   -nogui: no graphical interface (for script mode only) => noplugin, nobookmarks, nohub
   -noreleasetest: no Aladin new release test
   -[no]samp: no usage of the internal SAMP hub
   -[no]plugin: with/without plugin support
   -[no]beta: with/without new features in beta test

Supported files:
   - properties: property record list for populating the data discovery tree
   - graphics: Aladin or IDL or DS9 regions, MOCs
   - directories: HiPS
   - Aladin backup: ".aj" extension
   - Aladin scripts: ".ajs" extension
New since V10: -theme, properties file support
Example of properties record:
ID = yourAuthority/your/id
obs_title = The title of your collection (a few words)
obs_description = The description of your collection (a sentence or a short paragraph)
tap_service_url =
The service URL can be replaced/completed by one of these alternatives:
sia_service_url =
sia2_service_url =
ssa_service_url =
cs_service_url =
hips_service_url =
It is also possible to edit and save the FITS header.
Not all sliders are displayed by default. Use your preferences form (menu Edit -> User preferences) for adjusting this configuration.
1) With SHIFT:
By moving the mouse pointer over the icon/name, you will get additional plane information displayed at the top of the stack panel.
The current time range can be specified by hand with two ISO dates (ex: 2001-12-20T10:00 2004-07-31) provided on the command field. When a time range has been set, it can be adjusted with mouse actions on the time controller displayed under the space controller in the right bottom panel.
In a time range expression, it is possible to replace one ISO date by the keyword "NaN" to remove a bound. The string "NaN NaN" fully removes the current time range.
Note that Aladin V11 also supports time plots (see below)
You can move a manual contour with the "Select" tool by clicking-and-dragging one of its control points. It means that you have to select the contour and after that click-and-drag one of these control points (little squares on the contour).
Note: If the background image is a HiPS, the contour extraction is possible, but it is done on the current view only.
Recent Aladin versions (9 and following) add new facilities, notably:
- you can switch between the preview pixel mode and the full pixel dynamic mode (if it is provided for the current image or HiPS), either via the pixel form or thanks to the "hdr" icon under the main panel;
- in full pixel dynamic mode, it is possible to recompute the current cut range based only on the currently visible view (button "local cut").
By clicking in the second column at the appropriate field, you will be able to specify if this field contains RA, DEC, PMRA, PMDEC, X or Y information.
You can also select GLON, GLAT and SGLON, SGLAT for galactic and supergalactic coordinates.
By clicking in the second column at the appropriate field, you will be able to specify if this field contains dates and the syntax/system of these dates: JD - Julian Day, MJD - Modified Julian Day, ISOTIME - calendar date in ISO format, YEARS - decimal years, DATE - regular calendar date.
It can take into account all sky or wide image surveys, cube surveys, catalog surveys and density maps.
HiPS are described in the paper 2015A&A...578A.114F. Since 2017, it is an IVOA standard (see the note below). If you do not see this density slider in your stack, activate it via your preferences (Edit->User preferences->Control sliders).
Note: You can also modify permanently the default projection via your preferences (Edit->User preferences->HiPS projection).
HiPS and regular/PNG)..
Note: Since 2017, HiPS is an IVOA recommended standard. The document is available at this address: .
Note: In case of network failure between your location and the current HiPS server, Aladin will silently switch to another one.
Tip: One solution is to convert your HEALPix map in HiPS before usage thanks to this command: java -jar Aladin.jar -hipsgen in=yourHealpixMap.fits creator_did=XXX/test out=YourHips
Good examples of HEALPix fits maps containing polarisation data can be loaded from the LAMBDA site. For instance cut and paste in Aladin this WMAP map - 96MB:
Client side:
Via the menu "Image => Crop image area" (or directly via the "crop" button) you activate the crop tool. Select the region by clicking and dragging the red control rectangle, and confirm your choice. You can directly edit the size of your target image by double clicking the small numbers.
When your background survey is displayed in preview mode (JPEG|PNG), the resulting image will be a simple "screen dump", with a simple astrometric calibration. But when your background survey uses the FITS mode, the resulting image will be produced by a resample algorithm from the original HEALPix pixels to the resulting FITS image. It's better but slower. The resampling algorithm is based on the bilinear approximation method.
The client side method implies that all HiPS tiles required for cropping will be downloaded first before performing the cropping operation. It can take a long time especially for FITS mode.
Server side:
You can also ask the HiPS server to crop for you and send the resulting image to Aladin. It is definitively more efficient but is only possible for HiPS available on the CDS HiPS servers (most HiPS are available on the CDS HiPS servers). This functionality is available from the dedicated Hips2fits form displayed via the "File -> Open server selector... -> Aladin Hips2fits" menu.
Note that this Hips2fits server-side service can be used directly from a Web page or from a URL API:
You can use the menu "Tools => Generate a HiPS..." and follow the instructions.
But as this process can take a while (10 hours or more), it can also be launched from the command line: java -Xmx2g -jar Aladin.jar -hipsgen in=Path creator_did=my/test/id ...
r = r * scaleR + minR
g = g * scaleG + minG
b = b * scaleB + minB
r = max(0, r)
g = max(0, g)
b = max(0, b)
I = (r + g + b) / 3.
fI = arcsinh(Q * I) / sqrt(Q)
I = max(1e-6, I)
r = fI * r / I
g = fI * g / I
b = fI * b / I
The classic method provides beautiful colorful compositions, especially with digitized images or images with low pixel dynamics. In contrast, the Lupton method is suitable for CCD images with very high pixel dynamics. Its specificity is to increase the original color distances (blue stars are more blue, red stars are more red). Make sure that the original values are mapped into a positive range using the factors "scale" and "min". The factor "Q" impacts the behavior of the transfer function applied in the Lupton method (arcsinh / sqrt).
In this context, it is recommended to also generate the JPEG or PNG tiles for efficient remote access. The users will have the choice to switch from fast preview access to the FITS true pixel value (but slower) mode.
The MOC mechanism is based on the HEALPix sky tessellation algorithm. It is essentially a simple way to map regions of the sky into hierarchically grouped predefined cells. Concretely, a MOC is a list of HEALPix cell numbers, stored in a FITS binary table.
Initially designed for space coverages only, MOC has been extended in 2019 to also support the time dimension, as TMOC for Time-MOC and STMOC for Space-Time MOC. Technically, times are coded as a discretization of the time dimension, based on the JD (Julian Day) system instead of HEALPix cells.
MOC is fully described by an IVOA standard available at this address:. TMOC and STMOC are already supported by Aladin but not yet standardized by the IVOA (IVOA note:)
For instance, you can determine in a few mouse clicks the region observed by both SDSS and HST but for which there is no information in Simbad.
Order  Resolution                Order  Resolution          Order  Resolution
0      9133y 171d 11h 22m 32s    10     3d 4h 21m 17.9s     20     262.144 ms
1      2283y 134d 4h 20m 40s     11     19h 5m 19.5s        21     65.536 ms
2      570y 307d 11h 35m 9s      12     4h 46m 19.9s        22     16.384 ms
3      142y 259d 11h 53m 47s     13     1h 11m 34.97s       23     4.096 ms
4      35y 247d 11h 58m 27s      14     17m 53.741824s      24     1.024 ms
5      8y 335d 19h 29m 37s       15     4m 28.435456s       25     256 micro-s
6      2y 83d 22h 52m 24s        16     1m 7.108864s        26     64 micro-s
7      203d 14h 43m 6s           17     16.777216s          27     16 micro-s
8      50d 21h 40m 46.5s         18     4.194304s           28     4 micro-s
9      12d 17h 25m 11.6s         19     1.048576s           29     1 micro-s
The choice of the resolution depends on the purpose, but generally an order between 9 and 12 is a good compromise between precision and compactness.
This Aladin MOC display method is also useful for easily finding small observations, on the sky or in the time plot, that are too small to locate at the true MOC resolution.
Note: You can modify the visual MOC resolution via the "dens" slider under the stack
Notice that these queries by MOC are extremely fast and powerful, even with very complex sky regions.
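For intuition, the MOC standard also defines a compact ASCII serialization that lists, for each order, the HEALPix cells or cell ranges covered; an illustrative (made-up) coverage could be written as:

3/10-12 4/52 55-57 5/230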
This control mode can be used via:
Tip: You can also pass any script command in the "Command" field on the top of the window.

[file] RGB|RGBdiff [x1|v1...] coord|object blink|mosaic [x1] [x2...] timerange|time + | - | * | / ... norm [-cut] [x]
CATALOG: conv [x] ... filter ... kernel ... addcol ... resamp x1 x2 ... xmatch x1 x2 [dist] ... crop [x|v] [[X,Y] WxH] ccat [-uniq] [x1...] flipflop [x|v] [V|H] search {expr|+|-} contour [nn] [nosmooth] [zoom] tag|untag grey|bitpix [-cut] [x] BITPIX select [-tag] browse [x]
GRAPHIC TOOL: draw [color] fct(param)
FOLDER: grid [on|off] md [-localscope] [name] reticle [on|off] mv|rm [name] overlay [on|off] collapse|expand [name]
COVERAGE: cmoc [-order=o] [x1|v1...]
MISCELLANEOUS: backup filename status sync demo [on|off|end] pause [nn] help ... trace mem info msg macro script param call fct list [fct] reset setconf prop=value function ... = ... convert quit
The recent new Aladin commands are: timerange, time, ccat, cmoc, browse, call -batch and setconf, plus support for additional properties.
The help for each command can be accessed via the help script command.
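For instance, a short script of the kind that can be typed in the command field or passed with the -script option could be (target and file names are illustrative):

reset; get simbad M31; zoom 10arcmin; save M31.jpg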
Example of a Perl script controlling Aladin, to create a view with GSC2.3 sources for each object of a list (a sketch: the pipe set-up and loop lines are assumptions):

#!/usr/bin/perl
# Open a command pipe to a headless Aladin session (adapt the path and options to your installation)
open(ALADIN, "| java -jar Aladin.jar -nogui") or die "cannot launch Aladin";
foreach $obj (@objects) {      # @objects holds the target names
   # load a background image, overlay GSC2.3 and Simbad data around the object
   print ALADIN "reset; get aladin(DSS2) $obj; get vizier(GSC2.3); get simbad;\n";
   print ALADIN "zoom 10arcmin; reverse; save $obj.jpg\n";
}
print ALADIN "quit\n";
close ALADIN;
Thus, the "-batch" parameter forces the execution without waiting the stack "synchronization", but by only verifying that the previous command, and only the previous command is ready.
For instance, this function should be launched in batch mode to run properly:
function mybrowse($cat,$coo) {
   get Vizier($cat) $coo 50deg
   browse
}
call -batch mybrowse(gliese,hd1)
Note: The "-batch" mode is notably recommended for collaborative tool controlling Aladin via the VOApp interface (see corresponding section).).
Example: 123.00 +56.5 GAL
Moreover, the HiPS technology is evolving very rapidly. Do not hesitate to install the latest Aladin beta version to be sure that the "hipsgen" internal library is the latest available one.
Have a look at the Aladin plugin Web page to see the list of publicly available Aladin-compatible plugins ().
To install a plugin, you will proceed in 2 steps:
(*) On Windows, the home directory is located in C:\Users\YourName
To execute a plugin:
Note: Your own plugins will not work with a Web Start launching. Only official ones will.
Example:
public class MyApp {
   public void startAladin() {
      Aladin.launch("-trace");
   }
}
See the section VOApp for controlling Aladin from your own applet/application.
This last method may be useful, notably if you are controlling Aladin from another Java application using a classical GUI theme.
MOC has been invented by the CDS in 2008 and it has been endorsed by the IVOA in 2013. It is an international standard since 2014. The last version (2019 October) is available at this address:
This method can be used for image surveys, but also for catalogs, notably huge catalogs, density maps,...
HiPS has been invented by the CDS in 2009 and it has been endorsed by the IVOA in 2016. It is an international standard since 2017: . HiPS is used by a lot of compatible clients, notably Stellarium, WorldWide Telescope and KStars.
TAP and ADQL have been standardized by IVOA in 2010 and the documentation can be found on these addresses: ADQL, TAP
In my previous article I talked about Unity and Visual Studio: using Visual Studio to edit and maintain your Unity code. In that article I talked about working with source code directly in your Unity project.
DLLs are another way to get code into your Unity project. I'm talking about C# code that has been compiled and packaged as a .NET assembly.
The simplest way of working is to store source code directly in your Unity project. However, using DLLs gives you an alternative that has its own benefits. To be sure, it adds complication to your process so you must carefully weigh your options before jumping all the way in.
In this article I'll explain the benefits that will make you want to use DLLs. Then I'll cover the problems that will make you regret that decision. Finally I'll show you how to get the best of both worlds: using source code where it makes sense and using DLLs where they make sense. I'll show you how to compile a DLL yourself and bring it into your Unity project.
Table of Contents generated with DocToc
- The Basics
- Why use source code instead of DLLs?
- Why use DLLs?
- DLLs: not so good for prototyping and exploratory-coding
- The best of both worlds!
- Compiling a DLL to use in Unity
- Referencing the Unity DLLs
- Conclusion
The Basics
The starting point for this article is the understanding that a DLL can simply be dropped into your Unity project: Unity will detect it and you can then start using it. Of course, it's almost never that simple in real scenarios; however, I will demonstrate that it can be that simple. In a future article I'll give you some tools to help solve the issues that will inevitably come up.
Say you have a compiled DLL. You may have compiled it yourself, got it from a mate, got it from the Asset Store or somewhere else, it doesn't matter. You can copy the DLL into your Unity project and (presuming it actually works with Unity) start using it. Of course, any number of problems can and do happen. Especially if you don't know where the DLL came from, what it does or what its dependencies are. These problems can be very difficult to solve and I'll come back to that in the future.
If you have started programming with Unity, the way source code is included in the project and automatically compiled will seem normal. In the traditional .NET programming world in the time before Unity, exes and DLLs (both being .NET assemblies) are the normal method for packaging your code and distributing it as an application.
It's only with Unity that the rules have been changed: Unity automatically compiles code that is included in the Unity project. This is the default way of working with Unity, and is what you learn when introduced to programming through Unity.
Note however that DLLs do get created, it's just that they are created for you (possibly without you even realizing it). Go and check if you like, take a Unity build and search for DLLs in the data directory. You can see for yourself that Unity automatically compiles your code to DLLs. This leads us to the understanding that we can copy pre-compiled DLLs into the project and Unity will recognize them. Indeed we can make use of DLLs that have been created for us by other developers. You have probably even done this already. Many of the packages for sale on the Unity Asset Store include DLLs rather than source code. Also available to us are many .NET libraries that can be installed through nuget (at least the ones that work with Mono/Unity). More about nuget in my next article.
Why use source code instead of DLLs?
I'm going to make arguments for and against using DLLs with Unity. In this line of work there are no perfect answers; we must do our best given our understanding and knowledge at the time and aim to make the right tradeoffs at the right times. I've heard it said that there are no right decisions, only less-worse decisions. We need to think critically about technology to judge one approach against another. That's how serious this is.
So my first advice is... don't use DLLs until you are convinced that they will benefit you. I'll attempt to convince you of the benefit of DLLs in the next section, but if you are uncertain or not ready to commit, then your best bet is to stick with source code. This is the way it works by default with Unity. It's simple and works with very low overhead. Visual Studio integrates directly and you can use it easily for editing and debugging your code.
Why use DLLs?
Ok, I probably just convinced you to use source code over DLLs. Now I must show you there are real benefits to working with DLLs.
- DLLs are an efficient, natural and convenient mechanism for packaging code for sharing with other people and other projects. This already happens extensively, you see many DLLs sold via the Unity Asset Store and nuget is literally full of DLLs to download for free. Github is full of code-libraries that you can easily compile to DLLs to include in your project. Even if you aren't selling your code on the asset store, you still might find it useful to package your code in DLLs for easy sharing between your own (or your friend's) projects. You can even share code between Unity and non-Unity projects. This might be important for you, for example, if you are building a stand-alone dedicated server for your game. This will allow you to share code between your game (a Unity application) and your server (a standard .NET application).
- Using DLLs hides the source code. This is useful when you are distributing your code but don't want to give away the source. Note that it is nearly impossible to completely protect your code when working with Unity (and generally with .NET). Anyone with sufficient technical skills can decompile your DLLs and recover at least partial source code. Obfuscating your code can help a great deal and make decompilation more difficult, but it's just not possible to completely protect against this.
- Using DLLs allows you to bake a code release. You might have tested a code release and signed-off on it. When the code release is baked to a DLL you can be sure that the code can't be modified or tampered with after the DLL is created. This may or may not be important to you. It is very important if you aim to embrace the practices of continuous integration and/or continuous delivery. I'd like to address both of these practices in future articles.
- Building your code to DLLs enables you to use any of the commonly available .NET unit testing frameworks (we use xUnit.net). These frameworks were built to work with DLLs. Why? Remember that DLLs (and exes) are the default method of delivering code in the .NET world. I'll talk more about automated testing and TDD for game development in a future article.
DLLs: not so good for prototyping and exploratory-coding
Here is another pitfall of using DLLs that is worth considering. They will slow down your feedback loop.
Compiling a DLL from source code and copying it to your Unity project takes time. Unity must reload the updated DLL, which also takes time. This is slow compared to having source code directly in the Unity project, which takes almost zero time to update and be ready to run.
Increased turnaround time in the game dev process can and will cause problems that can derail your project. When developing games we must often do prototyping or exploratory-coding, this is all a part of the iterative process of figuring out the game we are developing and finding the fun. When coding in this mode, we must minimize turnaround-time and reduce the cycle time in our feedback loop.
Using DLLs increases turn-around time. The effect is minimized by automated development infrastructure (another thing to talk about in a future article) and having a super-fast PC. The effect can also be reduced by test-driven-development, a technique that is enabled by DLLs and has the potential to drastically reduce your cycle time. TDD however is an advanced skill (despite what some people say) and must be used with care (or it will cause problems for you). So that leaves us in the position that DLLs are not great for rapid evolution of game-play code. DLLs are likely to slow you down and are best used for code that has already stabilized and that has stopped changing regularly.
The best of both worlds!
Fortunately, should we need to use DLLs, we can get the best of both worlds. Source code and DLLs can be combined in the same project:
- Fast-moving and evolving game-play code should be stored as source code in the Unity project.
- The rock solid and stable code (eg core-technology code) that we share from project to project can be stored in DLLs.
As code modules transition from fast-moving to stable we can move them as necessary from source code to DLLs.
Compiling a DLL to use in Unity
Now I'll show you how to create a DLL for use in Unity. Getting real-world DLLs to work in Unity can be fraught with problems. In this section I'll demonstrate that you can easily create and use a DLL with no problems, what could possibly go wrong?
There are many tutorials and getting started guides for Visual Studio and I don't need to replicate those. I'm only going to cover the basics of making a DLL and then being able to use it in Unity.
I'll start with this... making a DLL for Unity in Visual Studio is basically the same as making any DLL in Visual Studio, with just a few small issues that I'll cover here. So any tutorial that shows you how to make DLLs in Visual Studio is going to work... just pay attention to the caveats I mention here, or they could catch you out!
I talked about how to download and install Visual Studio in the last article. So I'll assume that you are ready to follow along.
We first need to create a project for our DLL. When we start Visual Studio we should be at the start page. Click on New Project...
You can also create a project from the File menu. Click New then Project...
Now you will see the New Project window. Here you can select the type, name and location of the project. You'll notice the many types of projects. For Unity we want to create a Class Library.
Here you could also choose a project type for a stand-alone application. For example Console Application for a command line exe. If you require a GUI application then use Windows Forms Application or WPF Application. Any of these choices might be appropriate for building a stand-alone dedicated server that is independent of Unity.
After selecting a name and location for your project now click OK to create the solution and the project (see the previous article for more on solutions and projects).
You have now created a project that looks something like this:
We now need to edit the project's Properties to ensure that the DLL will run under Unity.
In the Solution Explorer right-click on the project and click Properties:
You should be looking at the project properties now. You'll see that by Target framework default is set to the latest version of .NET. At the time of writing this is .NET Framework 4.5.2.
We need to change Target framework to .NET Framework 3.5 for our DLL to be usable under Unity.
Why did we have to change to .NET 3.5? I'm glad you asked.
The Unity scripting engine is built from an ancient version of Mono. Mono is the open source equivalent of the .NET Framework. When working with Unity we are limited to .NET 3.5. This might not seem so bad until you realize that .NET 3.5 is 8 years old! We are missing out on all the new features in .NET 4 and 4.5. Unity is keeping us in the digital dark ages!
As a side note, you may have heard the (not so recent) news that the .NET Framework itself is now open source. This is certainly great news and will hopefully help Unity get its act together and bring their .NET support up-to-date.
Switching to .NET 3.5 causes some of the .NET 4.5 references to go bad. You will have to manually remove these references:
Now you are ready to build the DLL. From the Build menu click Build Solution or use the default hotkey Ctrl+Shift+B.
At this stage you may get a compile error due to switching .NET frameworks. In new projects, Visual Studio automatically creates a stub Class. The generated file imports the System.Threading.Tasks namespace, and this doesn't exist under .NET 3.5. You must either remove the offending using directive or delete the entire file (if you don't need the stub Class).
Build again and you should be able to see the generated DLL in the project's bin/Debug directory.
Now we are almost ready to test the DLL under Unity. First though we really need a class to test. For this example I'll add a function to the generated stub class so it looks as follows. I've added the UnityTest function which returns the floating-point value 5.5.
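A sketch of the resulting class (the MyNewProject namespace matches the one imported later in the article; the class name Class1, the Visual Studio default for a new stub, is an assumption):

namespace MyNewProject
{
    public class Class1
    {
        // Returns a fixed value so we can verify the DLL is being called from Unity.
        public float UnityTest()
        {
            return 5.5f;
        }
    }
}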
Now rebuild the DLL and copy it over to the Unity project. The DLL must go somewhere under the Assets directory. You probably don't want to put everything in the root directory of your project, so please organize it in sub-directory of your choosing. Make sure you copy the pdb file over as well. You'll need this later for debugging (which I'll cover in another article).
Now you can use classes and functions from your DLL in your Unity scripts. Unity automatically generates a reference to your DLL when you copy it to the Assets directory.
When you attempt to use your new class there will likely be an error because the namespace hasn't been imported.
This is easily rectified using Quick Actions (available in Visual Studio 2015). Right-click on the error (where the red squiggly line is). Click Quick Actions.... Alternately you can use the default hotkey Ctrl+.
This brings up the Quick Actions menu. Select the menu item that adds the using directive for you, in this example select using MyNewProject;.
Visual Studio has added the using directive and fixed the error. The red squiggly line is gone.
Now we have created an instance of our class and can call the UnityTest function on it. Typing the name of the object and then . (period) allows intellisense to kick in and list the available options.
There is only the UnityTest function here that we currently care about. So select that and Visual Studio will autocomplete the line of code for you.
Let's modify the code slightly to print out the value returned by UnityTest. This will allow us to verify that our code is working as expected.
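A sketch of the resulting test script (the class name TestScript matches the console output shown below; the local variable name is an assumption):

using UnityEngine;
using MyNewProject;

public class TestScript : MonoBehaviour
{
    void Start()
    {
        // Create an instance of the DLL class and log the value it returns.
        var myClass = new Class1();
        Debug.Log(myClass.UnityTest());
    }
}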
Now we should start Unity and test that this works. When we run this we expect the output 5.5 to appear in the Unity Console. It's a good practice, while coding, to predict the result of running your code. This makes the development process slightly more scientific and it helps to improve your skill as a developer. Programming can be complex and it is often hard to predict the outcomes, if anyone disagrees with that statement you should ask them why their code still has bugs? Improving our predictive skills is one of the best ways to gradually improve our general ability to understand what code is doing.
To test your code you need a scene with a GameObject that has the test script attached. I've talked about this in the previous article so I won't cover it again here.
With the scene setup for testing, click Play in Unity and then take a look at the Console.
If working as expected you'll see something like this in the Console:
5.5
UnityEngine.Debug:Log(Object)
TestScript:Start() (at Assets/TestScript.cs:10)
Referencing the Unity DLLs
The first DLL we created is not dependent on Unity. It doesn't reference the Unity DLL at all. I wanted to start this way to demonstrate that this is possible. That we can create a DLL that is independent of Unity and that can be used in other applications. An example of which, as already mentioned, is a stand-alone dedicated server app. Another example might be a command line helper app. The important take-away is that the DLL can be shared between Unity and non-Unity applications.
However you will most likely want to create DLLs that are designed to work in Unity and that do reference the Unity DLL. So let's cover that now.
We'll upgrade our example DLL to depend on Unity. We'll add a MonoBehaviour script to the DLL that we can use in Unity.
The first thing we need to do is add a reference to the Unity DLL. To do this you must copy the DLL from your Unity installation to the project that will reference it. To find the DLL go into your local Unity installation (for me that is C:\Program Files\Unity) and navigate to the sub-directory Editor\Data\Managed. Here you will find UnityEngine.dll.
Copy the DLL into the folder containing your Visual Studio project.
Switch back to the Visual Studio Solution Explorer. Right-click on your project, click Add, then click Reference....
In the Reference Manager window select Browse, then click the Browse... button.
Navigate to the Visual Studio project directory and select UnityEngine.dll. Then click Add:
Back in the Reference Manager, click OK.
You should now see a reference to UnityEngine from your project:
Now you can add a MonoBehaviour class to your DLL:
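For example, a minimal component of this kind might look as follows (the class name is illustrative):

using UnityEngine;

namespace MyNewProject
{
    public class MyDllBehaviour : MonoBehaviour
    {
        void Start()
        {
            // Runs when the GameObject this component is attached to becomes active.
            Debug.Log("Hello from a MonoBehaviour compiled into a DLL");
        }
    }
}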
After you copy the DLL to your Unity project you can attach the new component to GameObjects in the hierarchy.
Conclusion
In this article I've summarized the benefits and pitfalls of using DLLs with Unity. I've shown how to compile and use your own DLL and how to bring it into your Unity project.
You now have an understanding of the reasons you might want to use DLLs and you have the tools to start using them.
If you do a lot of prototyping or exploratory coding, where a tight feedback loop works best, then source code is your best bet and DLLs may not be a good choice. Quick and streamlined prototyping is one of Unity's benefits. Using DLLs, whilst it can be a good practice, can detract from efficient prototyping, something that is so very important in game development.
In the end, if DLLs are important to you, you may have to mix and match source code and DLLs. Using DLLs where they make sense, where the benefits outweigh the added complexity. Using source code when you need to move fast. Make your own calls and find a balance that works for your project.
As a last word... The reason why DLLs are difficult to use is mainly down to Unity: the out-of-date .NET version combined with terrible error messages (when things go wrong). We should all put pressure on Unity to get this situation improved!
In future articles I'll be talking about nuget and troubleshooting DLL-related issues.
Thanks for reading.
Built-In Handlers
Telerik Testing Framework includes native support for HTML pop-ups for all supported browsers, in addition to a dialog handling framework (under the Win32.Dialogs namespace) with built-in support for handling some of the common browser dialogs like the JavaScript Alert, Upload dialog and Logon dialog. The framework also enables you to extend this support to handle any custom dialog in any manner you need.
HTML Pop-ups - Browser instances that are invoked by some action on an element of the page.
Modal Dialogs - Don't act like standard HTML pop-up windows and require special handling
Alert Dialogs - You only need to define how the dialog should be handled.
Logon Dialogs - It is not a browser window, but a dialog displayed by the Windows operating system.
FileUpload Dialog - You need to pass in the full path to the file to upload and how the dialog should be handled.
Handling File Downloads - You have a number of classes included that make handling of the file download dialogs very easy.
Custom Dialog Handler - You can craft your own dialog handling code and then ask the dialog to call your code instead of its dialog handling code.
There are a few things to note about dialog handing:
The dialog monitor can take 1-N dialogs and they can be added/removed at any point in your test code regardless of whether the monitoring has started or not.
Notice how each dialog takes in an instance of a parent browser that will eventually produce the dialog. Before handling each dialog the DialogMonitor makes sure that the current dialog to be handled belongs to that browser instance. This is done to ensure the integrity of handling and to limit the chances the DialogMonitor could match a browser that looks similar to the dialog it needs to handle. Also this is needed in multi browser support scenarios.
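As a rough illustration of that pattern, an alert handler might be registered as in the sketch below (the class and method names follow the framework's documented style but are assumptions here and should be checked against your Testing Framework version):

// Hypothetical sketch: register an OK-dismissing alert handler, then start monitoring.
Manager manager = new Manager();
manager.Start();
manager.LaunchNewBrowser();
manager.DialogMonitor.AddDialog(AlertDialog.CreateAlertDialog(manager.ActiveBrowser, DialogButton.OK));
manager.DialogMonitor.Start();
// ...perform the action that triggers the JavaScript alert; the monitor dismisses it with OK...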
Trending Projects is available as a weekly newsletter please sign up at Stargazing.dev to ensure you never miss an issue.
1. Robot
A small functional and immutable Finite State Machine library. Using state machines for your components brings the declarative programming approach to application state.
Robot
Tests are located in the test/ folder. Load test/test.html in your browser of choice with any HTTP server you like (I use http-server). Tests are written in QUnit and are…
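A tiny sketch of the state-machine idea, based on Robot's documented API (state and event names are made up):

import { createMachine, state, transition, interpret } from 'robot3';

// A two-state toggle machine: each 'toggle' event flips between off and on.
const machine = createMachine({
  off: state(transition('toggle', 'on')),
  on: state(transition('toggle', 'off'))
});

const service = interpret(machine, () => {
  console.log(service.machine.current); // called on every state change
});
service.send('toggle'); // -> 'on'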
2. Ultimate SAAS template
Template to quickstart a SAAS business. Stop losing time implementing authentication and payment over and over again.
Focus on what brings value to your customers
gmpetrov
/
ultimate-saas-ts
Template to quickstart a SAAS business
🚀 ⚡️ 🧑💻 Ultimate SAAS template Typescript/Next.js/NextAuth.js/Prisma/Stripe/Tailwindcss/Postgresql
My template to quickstart a SAAS project
Stop losing time implementing authentication and payment over and over again.
Focus on what brings value to your customers
Demo
Features
- Authentication with NextAuth.js (Own Your Data
✅)
- Github
- Many other oauth providers available check their docs
- Payment with Stripe
- Stripe checkout
- Stripe billing portal
- Stripe webhooks (products / prices are synced)
- Hosted on vercel for free
Stripe
Check the stripe section of this repo as the steps are very similar
Postgresql
A postgresql db is needed to deploy the app.
You can have a very small instance for free on heroku
Made with
- Typescript
- Next.js
- NextAuth.js
- Prisma
- Postgresql
- Stripe
- Tailwindcss
Develop
……
# create .env
cp .env.example .env
# install dependencies
yarn
# Launch pgsql and maildev
yarn docker:start
# migrate and seed the database
yarn prisma:migrate:dev
yarn prisma:seed
#
3. Pure
A set of small, responsive CSS modules that you can use in every web project.
Pure
A set of small, responsive CSS modules that you can use in every web project
This project is looking for maintainers to support and enhance Pure.css. If you are interested please leave a comment in the Github issue. The entire set of modules clocks in at 3.7KB minified…
4. Xterm.js
Xterm.js is a front-end component written in TypeScript that lets applications bring fully-featured terminals to their users in the browser. It's used by popular projects such as VS Code, Hyper and Theia.
Xterm.js is a front-end component written in TypeScript that lets applications bring fully-featured terminals to their users in the browser. It's used by popular projects such as VS Code, Hyper and Theia.
Features
- Terminal apps just work: Xterm.js works with most terminal apps such as bash, vim, and tmux, including support for curses-based apps and mouse events.
- Performant: Xterm.js is really fast, it even includes a GPU-accelerated renderer.
- Rich Unicode support: Supports CJK, emojis, and IMEs.
- Self-contained: Requires zero dependencies to work.
- Accessible: Screen reader and minimum contrast ratio support can be turned on.
- And much more: Links, theming, addons, well documented API, etc.
What xterm.js is not
- Xterm.js is not a terminal application that you can download and use on your computer.
- Xterm.js is not bash. Xterm.js can be connected to processes like bash and let you interact with…
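The basic in-browser usage follows a small pattern (the container element id is illustrative):

import { Terminal } from 'xterm';

// Create a terminal and attach it to a container element on the page.
const term = new Terminal();
term.open(document.getElementById('terminal'));
term.write('Hello from xterm.js\r\n');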
5. timeago.js
timeago.js is a nano library(less than 2 kb) used to format datetime with *** time ago statement. eg: '3 hours ago'.
hustcc
/
timeago.js
🕗 ⌛ timeago.js is a tiny(2.0 kb) library used to format date with `*** time ago` statement.
timeago.js
timeago.js is a nano library (less than 2 kb) used to format date with *** time ago statement.
- i18n supported.
- Time ago and time in supported.
- Real-time render supported.
- Node and browser supported.
- Well tested.
Official website. React version here: timeago-react. Python version here: timeago.
Such as
just now
12 seconds ago
2 hours ago
3 days ago
3 weeks ago
2 years ago
in 12 seconds
in 3 minutes
in 24 days
in 6 months
Usage
- install
npm install timeago.js
- import
import { format, render, cancel, register } from 'timeago.js';
or import with script tag in html file and access global variable timeago.
<script src="dist/timeago.min.js"></script>
- example
// format the time with locale
format('2016-06-12', 'en_US');
CDN
Alternatively to NPM, you can also use a…
6. GitHub userscripts
Userscripts to add functionality to GitHub.
Mottie
/
GitHub-userscripts
Userscripts to add functionality to GitHub
GitHub userscripts
Userscripts to add functionality to GitHub.
Installation
Make sure you have user scripts enabled in your browser (these instructions refer to the latest versions of the browser):
- Firefox - install Tampermonkey or Greasemonkey (GM v4+ is not supported!).
- Chrome - install Tampermonkey.
- Opera - install Tampermonkey or Violent Monkey.
- Safari - install Tampermonkey.
- Dolphin - install Tampermonkey.
- UC Browser - install Tampermonkey.
Get information or install:
- Learn more about the userscript by clicking on the named link. You will be taken to the specific wiki page.
- Install a script directly from GitHub by clicking on the "install" link in the table below.
- Install a script from GreasyFork (GF) from the userscript site page
- Or, install the scripts from OpenUserJS (OU).
7. DOM to SVG
Library to convert a given HTML DOM node into an accessible SVG "screenshot".
felixfbecker
/
dom-to-svg
Library to convert a given HTML DOM node into an accessible SVG "screenshot".
DOM to SVG
Library to convert a given HTML DOM node into an accessible SVG "screenshot".
Demo
📸
Try out the SVG Screenshots Chrome extension which uses this library to allow you to take SVG screenshots of any webpage You can find the source code at github.com/felixfbecker/svg-screenshots.
Usage
import { documentToSVG, elementToSVG, inlineResources, formatXML } from 'dom-to-svg'

// Capture the whole document
const svgDocument = documentToSVG(document)

// Capture specific element
const svgDocument = elementToSVG(document.querySelector('#my-element'))

// Inline external resources (fonts, images, etc) as data: URIs
await inlineResources(svgDocument.documentElement)

// Get SVG string
const svgString = new XMLSerializer().serializeToString(svgDocument)
The output can be used as-is as valid SVG or easily passed to other packages to pretty-print or compress.
Features
- Does NOT rely on <foreignObject> - SVGs will…
8. Serverless Examples
A collection of boilerplates and examples of serverless architectures built with the Serverless Framework on AWS Lambda, Microsoft Azure, Google Cloud Functions, and more.
serverless
/
examples
Serverless Examples – A collection of boilerplates and examples of serverless architectures built with the Serverless Framework on AWS Lambda, Microsoft Azure, Google Cloud Functions, and more.
Website • Email Updates • Gitter • Forum • Meetups • Twitter • Facebook • Contact Us
Serverless Examples
A collection of ready-to-deploy Serverless Framework services.
Table of Contents
Click to expand
Getting Started
If you are new to serverless, we recommend getting started by creating an HTTP API Endpoint in NodeJS, Python, Java, or Golang.
Examples
Each example contains a README.md with an explanation about the service and its use cases.
Have an example? Submit a PR or open an issue.
To install any of these you can run:
serverless install -u -n my-project
9. dva
React and redux based, lightweight and elm-style framework. (Inspired by elm and choo)
dvajs
/
dva
🌱 React and redux based, lightweight and elm-style framework. (Inspired by elm and choo)
dva
Lightweight front-end framework based on redux, redux-saga and react-router. (Inspired by elm and choo)
Features
- Easy to learn, easy to use: only 6 apis, very friendly to redux users, and API reduce to 0 when use with umi
- Elm concepts: organize models with reducers, effects and subscriptions
- Support HMR: support HMR for components, routes and models with babel-plugin-dva-hmr
- Plugin system: e.g. we have dva-loading plugin to handle loading state automatically
Demos
- Count: Simple count example
- User Dashboard: User management dashboard
- AntDesign Pro:(Demo),out-of-box UI solution for enterprise applications
- HackerNews: (Demo),HackerNews Clone
- antd-admin: (Demo),A admin dashboard application demo built upon Ant Design and Dva.js
- github-stars: (Demo),Github star management application
- Account System: A small inventory management system
- react-native-dva-starter: react-native example integrated dva and react-navigation
Quick…
10. Pigeon Maps
ReactJS Maps without external dependencies
mariusandra
/
pigeon-maps
ReactJS Maps without external dependencies
Pigeon Maps - ReactJS maps without external dependencies
Demo: (using maps from MapTiler, OSM and Stamen)
What is
- Option to block dragging with one finger and mouse wheel scrolling without holding meta key
- Enable/disable touch and mouse events as…
Stargazing 📈
Top risers over last 7 days🔗
- Uptime Kuma +1,991 stars
- Playwright +685 stars
- Awesome +979 stars
- Developer Roadmap +641 stars
- Public APIs +640 stars
Top growth(%) over last 7 days🔗
- Nice Modal React +63%
- Uptime Kuma +38%
- kbar +36%
- envsafe +27%
- DevOp Resources +23%
Top risers over last 30 days🔗
- Public APIs +7,348 stars
- Free Programming Books +4,285 stars
- Free Code Camp +3,932 stars
- Uptime Kuma +3,882 stars
- Awesome +3,602 stars
Top growth(%) over last 30 days🔗
- Nice Modal React +138%
- Uptime Kuma +118%
- Pico +73%
- Medusa +71%
- React Web Editor (2)
Awesome thank you! :)
Keep going
We are given two numbers n and k. We need to find the count of arrays that can be formed using the n numbers whose sum is k.
The number of arrays of size N with sum K is $\dbinom{k - 1}{n - 1}$.
This is a straightforward formula for finding the number of arrays of n positive integers whose sum is k. It follows from a stars-and-bars argument: writing k as an ordered sum of n positive integers amounts to choosing the n - 1 cut positions among the k - 1 gaps between k units. Let's see an example.
Input
n = 1 k = 2
Output
1
The only array that can be formed is [2]
Input
n = 2 k = 4
Output
3
The arrays that can be formed are [1, 3], [2, 2], [3, 1].
Following is the implementation of the above algorithm in C++
#include <bits/stdc++.h>
using namespace std;

int factorial(int n) {
   int result = 1;
   for (int i = 2; i <= n; i++) {
      result *= i;
   }
   return result;
}

int getNumberOfArraysCount(int n, int k) {
   return factorial(n) / (factorial(k) * factorial(n - k));
}

int main() {
   int N = 5, K = 8;
   cout << getNumberOfArraysCount(K - 1, N - 1) << endl;
   return 0;
}
If you run the above code, then you will get the following result.
35
halloumi-ami-pipelines
Project description
Introduction
AMI Pipelines is a library for creating EC2 Image Builder pipelines with configurations on a given path. EC2 Image Builder pipelines are pipelines that can help create AMI images, based on 1 or more steps, called components, in a defined image recipe. These pipelines will create the AMI's as configured. All you need is to create one or more YAML files in a given directory and the library will create the necessary CodePipelines, EC2 Image Builder pipelines and components for you.
Supported parent images:
- CentOS7
- CentOS8
- Ubuntu1804
- Ubuntu2004
This is a sample configuration:
---
pipeline:
  parent_image: AmazonLinux2 # or Ubuntu2004 or CentOS7
  sources: # Sources for use in the source stage of the Codepipeline.
    - name: Bucket
      type: s3
      bucket: kah-imagebuilder-s3-bucket-fra
      object: test.zip
    - name: Codecommit
      type: codecommit
      repo_name: testrepo
      branch: develop
  recipe:
    name: DemoCentos
    components:
      - name: install_cloudwatch_agent # Reference to a name in the component_dependencies section
      - name: another_ec2_ib_component_from_github
      - name: install_nginx
  schedule: cron(0 4 1 * ? *)
  shared_with: # Optional: Share images with another account. Image will be copied.
    - region: eu-west-1
      account_id: 123456789
component_dependencies:
  - name: another_ec2_ib_component_from_github
    type: git
    branch: master
    url: git@github.com:rainmaker2k/echo-world-component.git
  - name: install_cloudwatch_agent
    type: git
    branch: master
    url: git@github.com:rainmaker2k/ec2ib_install_cloudwatch.git
  - name: install_nginx
    branch: master
    type: git
    url: git@github.com:sentiampc/ami-pipelines-base-components.git
    path: nginx # Optional: If you have multiple component configurations in this repository.
  - name: aws_managed_component
    type: aws_arn
    arn: arn:aws:imagebuilder:eu-central-1:aws:component/amazon-cloudwatch-agent-linux/1.0.0
This is a Typescript project, managed through Projen. Projen is a project management tool that will help you manage most of the boilerplate scaffolding, by configuring the .projenrc.js file.
If you have not done so already, install projen through npm:
$ npm install -g projen
or
$ npx projen
Also install yarn.
$ npm install -g yarn
When you first checkout this project run:
$ projen
This will create all the necessary files from what is configured in .projenrc.js, like package.json, .gitignore etc... It will also pull in all the dependencies.
If everything is successful, you can run the build command to compile and package everything.
$ projen build
This will create a dist directory and create distibutable packages for NPM and Pypi.
Examples
Python
Here is an example of a stack in CDK to create the pipelines. This example assumes you have the YAML configurations stored in .\ami_config\
from aws_cdk import core
from ami_pipelines import PipelineBuilder

import os
import yaml
import glob


class DemoPyPipelineStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        print("Creating pipeline")
        pipeline_builder = PipelineBuilder()
        pipelines = pipeline_builder.create(self, "ami_config")
This assumes you have at least one pipeline config YAML in the ami_config directory.
Introduction to AWT Components in Java
Java AWT, short for Abstract Window Toolkit, is a set of APIs used to develop graphical user interfaces or window-based applications. AWT components in Java are platform-dependent components, which means that how components are displayed on a graphical user interface depends on the underlying operating system; AWT components are generally heavyweight components that make heavy use of operating system resources.
Syntax:
Given below is a syntax of how AWT components are used:
// importing awt package
import java.awt.*;
// create a class extending Frame component
class <className> extends Frame{
<className>(){
Button button=new Button("<Text_To_Display_On_Button>"); // create instance of component
button.setBounds(40,90,80,30);// call method to set button position
add(button);// adding component to the container
setSize(400,400);//set size of container
setVisible(true);//set visibility of container to true
}
public static void main(String args[]){
<className> clsobj=new <className>();
}}
The above syntax shows how to use a Button component of the AWT package.
In the above syntax, <className> denotes the name of the Java class, and <Text_To_Display_On_Button> can be set according to the required functionality.
Different AWT Components
An AWT component can be considered as an object that can be made visible on a graphical interface screen and through which interaction can be performed.
In java.awt package, the following components are available:
1. Containers: As the name suggests, this AWT component is used to hold other components.
Basically, there are the following different types of containers available in java.awt package:
a. Window: This is a top-level container and an instance of a window class that does not contain a border or title.
b. Frame: Frame is a Window class child and comprises the title bar, border and menu bars. Therefore, the frame provides a resizable canvas and is the most widely used container used for developing AWT-based applications. Various components such as buttons, text fields, scrollbars etc., can be accommodated inside the frame container.
Java Frame can be created in two ways (a short sketch of both appears after this component list):
- By creating an object of the Frame class.
- By making the Frame class the parent of our class.
c. Dialog: Dialog is also a child class of the Window class, and it provides support for a border as well as a title bar. In order to use a dialog as a container, it always needs an instance of the Frame class associated with it.
d. Panel: It is used for holding graphical user interface components and does not provide support for the title bar, border or menu.
2. Button: This is used to create a button on the user interface with a specified label. We can design code to execute some logic on the click event of a button using listeners.
3. Text Fields: This component of java AWT creates a text box of a single line to enter text data.
5. Label: This component of Java AWT places a single line of descriptive, read-only text on the graphical user interface.
5. Canvas: This generally signifies an area that allows you to draw shapes on a graphical user interface.
6. Choice: This AWT component represents a pop-up menu having multiple choices. The option which the user selects is displayed on top of the menu.
7. Scroll Bar: This is used for providing horizontal or vertical scrolling feature on the GUI.
8. List: This component can hold a list of text items. This component allows a user to choose one or more options from all available options in the list.
9. Checkbox: This component is used to create a checkbox of GUI whose state can be either checked or unchecked.
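As announced under Containers, here is a minimal sketch of the two ways of creating a Frame (window titles and sizes are illustrative):

import java.awt.Button;
import java.awt.Frame;

// Way 1: create a Frame object directly
class FrameByObject {
    public static void main(String[] args) {
        Frame frame = new Frame("Frame created by object");
        frame.add(new Button("Click me"));
        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}

// Way 2: make Frame the parent of our class
class FrameByInheritance extends Frame {
    FrameByInheritance() {
        super("Frame created by inheritance");
        add(new Button("Click me"));
        setSize(300, 200);
        setVisible(true);
    }

    public static void main(String[] args) {
        new FrameByInheritance();
    }
}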
Example of AWT Components in Java
The following example shows the use of different AWT components available in java.
Code:
package com.edubca.awtdemo;
import java.applet.Applet;
// import awt and its subclasses
import java.awt.*;
// class extending applet
public class AWTDemo extends Applet {
// this method gets automatically called
public void init() {
Button button = new Button("Click Here to Submit"); // creating a button
this.add(button); // adding button to container
Checkbox checkbox = new Checkbox("My Checkbox"); // creating a checkbox
this.add(checkbox); //adding checkbox to container
CheckboxGroup checkboxgrp = new CheckboxGroup(); // creating checkbox group
this.add(new Checkbox("Check box Option 1", checkboxgrp, false));
this.add(new Checkbox("Check box Option 2", checkboxgrp, false));
this.add(new Checkbox("Check box Option 3", checkboxgrp, true));
// adding to container
Choice choice = new Choice(); // creating a choice
choice.addItem("Choice Option 1");
choice.addItem("Choice Option 2");
choice.addItem("Choice Option 3");
this.add(choice); //adding choice to container
Label label = new Label("Demo Label"); // creating a label
this.add(label); //adding label to container
TextField textfield = new TextField("Demo TextField", 30); // creating a Textfield
this.add(textfield); // adding Textfield to container
}
}
The above program shows how to use AWT components like buttons, Checkboxes, Checkbox group, Labels, Choice and Text Fields in java code.
Output:
This is a guide to AWT Components in Java. Here we discuss the introduction, the different AWT components in Java, and an example.
#include <vcardhandler.h>
A virtual interface that helps requesting Jabber VCards.
Derive from this interface and register with the VCardManager. See VCardManager for info on how to fetch VCards.
Definition at line 36 of file vcardhandler.h.
Describes possible operation contexts.
Definition at line 42 of file vcardhandler.h.
Virtual destructor.
Definition at line 51 of file vcardhandler.h.
This function is called when a VCard has been successfully fetched. The VCardHandler becomes owner of the VCard object and is responsible for deleting it.
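A minimal usage sketch (the method signatures below are approximations based on the 0.9-era API description and should be checked against vcardhandler.h):

class MyVCardHandler : public gloox::VCardHandler
{
  public:
    virtual void handleVCard( const gloox::JID& jid, const gloox::VCard* vcard )
    {
      // we now own the VCard and are responsible for deleting it
      // ... read the fields we are interested in ...
      delete vcard;
    }

    virtual void handleVCardResult( VCardContext context, const gloox::JID& jid,
                                    gloox::StanzaError se )
    {
      // called with the operation context after store/fetch operations
    }
};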
Inpainting [1] is the process of reconstructing lost or deteriorated parts of images and videos.
The reconstruction is supposed to be performed in a fully automatic way by exploiting the information present in non-damaged regions.
In this example, we show how the masked pixels get inpainted by inpainting algorithm based on ‘biharmonic equation’-assumption [2] [3] [4].
import numpy as np
import matplotlib.pyplot as plt

from skimage import data
from skimage.restoration import inpaint

image_orig = data.astronaut()[0:200, 0:200]

# Create mask with three defect regions: left, middle, right respectively
mask = np.zeros(image_orig.shape[:-1])
mask[20:60, 0:20] = 1
mask[160:180, 70:155] = 1
mask[30:60, 170:195] = 1

# Defect image over the same region in each color channel
image_defect = image_orig.copy()
for layer in range(image_defect.shape[-1]):
    image_defect[np.where(mask)] = 0

image_result = inpaint.inpaint_biharmonic(image_defect, mask, multichannel=True)

fig, axes = plt.subplots(ncols=2, nrows=2)
ax = axes.ravel()

ax[0].set_title('Original image')
ax[0].imshow(image_orig)

ax[1].set_title('Mask')
ax[1].imshow(mask, cmap=plt.cm.gray)

ax[2].set_title('Defected image')
ax[2].imshow(image_defect)

ax[3].set_title('Inpainted image')
ax[3].imshow(image_result)

for a in ax:
    a.axis('off')

fig.tight_layout()
plt.show()
Total running time of the script: ( 0 minutes 3.260 seconds)
Gallery generated by Sphinx-Gallery
Package Details: xapps-git 1.2.2.r0.gd957699-2
Dependencies (7)
- libgnomekbd
- git (git-git) (make)
- gobject-introspection (gobject-introspection-git) (make)
- meson (meson-git) (make)
- python-gobject (python-gobject-git) (make)
- python2-gobject (python2-gobject-git) (make)
- vala (vala-git, vala0.26) (make)
Required by (13)
- cinnamon-git (requires xapps)
- cinnamon-screensaver-git (requires xapps)
- cinnamon-session-git (requires xapps)
- cinnamon-slim (requires xapps)
- mintlocale (requires xapps)
- mintstick (requires xapps)
- mintstick-git (requires xapps)
- nemo-git (requires xapps)
- timeshift (requires xapps)
- xed-git (requires xapps)
- xplayer (requires xapps)
- xplayer-git (requires xapps)
- xreader-git
Latest Comments
eschwartz commented on 2018-04-19 18:13
"working", yes. "good", no. I've taken this opportunity to adopt the package and base it off of my [community] package.
Things you missed:
pygobject-devel is wrong and not needed at all; it does need to be able to import gi under both python and python2 (which means python{,2}-gobject).
The pkgver is wonky.
Some scripts require python in depends, unless you remove them which I have done because they are useless and require e.g. Debian-specific renames of the gist utility.
meson should use the "plain" releasetype in order to fully respect makepkg.conf
oberon2007 commented on 2018-04-16 21:58
I made a working PKGBUILD here:
oberon2007 commented on 2018-04-16 21:22
PKGBUILD needs update!
/tmp/yaourt-tmp-user/aur-xapps-git/./PKGBUILD: line 27: ./autogen.sh: No such file or directory
xapps is using meson build now.
willemw commented on 2017-10-13 07:42
@kernelmd: It needed some more changes. Updated PKGBUILD file:
kernelmd commented on 2017-10-13 06:33
willemw, unfortunately I can't test it atm, as I don't have arch installed. The error should be fixed by installing gtk-doc package. Please try it and reply here with the results, so that I could update the package.
willemw commented on 2017-10-13 06:20
./autogen.sh: line 28: gtkdocize: command not found
==> ERROR: A failure occurred in build().
PKGBUILD of xapps has:
makedepends=('gnome-common' 'gobject-introspection')
kernelmd commented on 2016-11-06 10:04
oberon2007, thank you for feedback. I have just updated the package.
oberon2007 commented on 2016-11-03 10:44
depends=('libgnomekbd') needs to be added.
Thank you for the x-apps packages! :)
|
https://aur.archlinux.org/packages/xapps-git/
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Discussions tagged with this product
Redirecting Folder Redirection GPO
All,
I'm in the middle of an interforest migration from Domain A to Domain B, and Windows Server 2003 to Windows Server 2012R2.
robocopy /move don't move folders on first run
Good day!
I have a question about robocopy:
I wrote a script lately for move files and folders into another specific folder.
More
Robocopy command for duplicating many folders with the same name.
Every year I need to make new budget folders that are named the current year. There hundreds of these folders currently named
Managing local storage on a unique pc
+1 for Robocopy. Just set up a scheduled task for a sync at whatever intervals is appropriate.
You could also turn on local copies
Windows 10, coping very large files
Robocopy all the way. It's much better than xcopy and can be configured tu run in several different ways that may help you:
Robocopy converts dashes to hyphens
I'm working on a powershell script to remove access of former employees from every folder in a file share. I use robocopy's list
Robocopy command to copy modified files with security permisions
Hello All,
I need to migrate the data from an old file server to a new file server. I used RichCopy but many files did not copy
Move Files Older Than 3 Years
I'm working with Robocopy with the following syntax:
robocopy E:\Users E:\UsersOld /E /COPYALL /move /MINAGE:1095
Does the
Robocopy-check for any changed or new files/folders and update destination
I have just robocopied 1.5TB from source to destination with the below command
/E /MIR /W:0 /R:3 /COPY:DAT /DCOPY:T /log:c:\Temp\
CopyProfile - Copy customized default user profile to multiple computers
Hey guys,
I'm starting as a freelance and want to preconfigure a user profile I can set up on multiple different Windows 10
Does Robocopy and FrontPage website folders play well together?
Windows 7 Pro
I have a FrontPage website with a lot of sub-folders beginning with the underscore ("_"); like "_borders". The
Copy Files and permissions with Robocopy
I have configured many a batch file using RoboCopy. If you open a CMD prompt and type Robocopy /? you can see all the options with
Fastest way to virtualize?
Hi everyone,
I have a 5 tb external disk connected via PCI-E to my vm host(1.67 TB used). It is then passed through to a file
File Compare and Copy
I guess I'm not sure what your goal is but it sounds like you just need the /MIR argument and Microsoft Robocopy
Pro Tip: How to Copy Files from One Server to Another
Because I use a lot of Robocopy switches and folder exclusions (e.g. recycle bin, dfsrprivate, etc.), and because I'm lazy and
Syncing Folders between Servers
I have a new mail server - I want to sync folders from the old server to the new server so nothing is missed and the migration is
Robocopy /XD refuses to work
Trying to configure robocopy to copy all but a directory (and its subdirectories) in a single job, thus:
/SD:\\
Robocopy question
When you have a Source and Destination already set up and the files are newer on the Source...
Robocopy will copy only those newer
robo copy delete files from backup after x days of deletion from source
i want to create a backup of files using robo copy.I want files in backup must be deleted after x days of deletion from source.But
robo copy delete files from backup after x days of deletion from source
i want to create a backup of files using robo copy.I want files in backup must be deleted after x days of deletion from source.But
What's the easiest way to back up a single folder throughout the day?
+1 for Robocopy - you can make it continuous if you want, or set it up to run through task scheduler depending on how often you
DFS-R with Robocopy - need proper switches
Hey geniuses of Spiceworks. I need to move our data share from one server location to another while keeping all the security
Help me improving this script memory management!
Hi folks!
I'm having one hell of a bad time trying to improve the memory management of this script I'm creating. The fact is that
Replicating Windows Backups Offsite
I am using Windows Server Backup and Windows Backup and Restore to make backups of servers and workstations to external hard
Backup Thunderbird Profiles via GPO
Hello everyone, and happy Monday!
I'm running into an issue trying to make automated backups of my user's Thunderbird profile
Want to talk me out of this...
I'm currently using 2 instances of Server 2008 R2Hyper-V to host 4 VMs on one server and their replicas on another, identical
Robocopy Knocking Off Workstation
I'm trying to migrate some file shares off a Windows 2008 DC onto a Windows 7 workstation. I've ran the following Robocopy script
Robocopy Cmd line problem - SOLVED
Hi everyone,
Sorry not sure if this is the right area.
I am new to this forum and new to Robocopy. I have taught myself a crash
Robocopy /XD from text file
I need to create a robocopy script that mirrors a directory excluding a list of directories in a text file. How does one go about
-
Powershell and RoboCopy - Looking for somehelp
OK All,
Here's the layout. I have about 2.7 TB worth of data that I want to move (potentially)
$Cred=Get-Credential
Robocopy or other for copying data to new app server?
Howdy-
I have searched around and found plenty of examples for Robocopy to copy files, but none have worked great so far.
I have
Robocopy Script Help
Hi
Can someone tell me how to script this with Robocopy please:
I need to move about 150 servers from an old
Robocopy or What?
i use it with "scheduled task" as well as triggers, and even have some Powershell scripts call some Robocopy scripts.
[WINDOWS][HOW-TO] C:\>delete *.* except for [list]
I also posted this over at /r/software. In case anyone else has similar needs:
/u/nerdshark had this:
$excludedItems =
Moving files into folders using Powershell and csv
Hey guys hoping i can get some help with this. I spent most of the day on it and nothing seems to work :(
I have a csv. In it
Moves files /folders older then X days in Windows
I have a folder name C:\TEST which have several hundreds sub folders in it.
I want to MOVE files and folders older then 365 days
Migrating fileserver VM from 2003 to 2012 R2
We're running a 2003 file server on VMWare 5.5, and will be moving to 2012 R2. This VM is a DFS (namespace only) target. It has
Help with Robocopy
Hey good people,
We have a client who currently use a windows 2003 SBS as their main server, they looking to move to a new server
Robocopy command to copy all shares within a server
If I want to move everything on one server to another server without specifying a specific share, is that possible.
For Example, I
Unable to copy file
I am getting an error saying that I cannot copy the file because the network location is unavailable. I am able to navigate and
Disabling robocopy service
Trying to clean up an old W2003 File Server that was formerly used to backup user folders from workstations on the domain. All
Can Robocopy Help Me Backup my File Server?
I am trying to copy all of the files that are shared from our file server to a USB external harddrive however I keep getting
Robocopy /XF from text file
Need a script to delete a folder's contents each night (via GPO)
Doesn't matter, haha - I just want to be sure I fully understand what exactly that script says before I put it
A Mirrored folder.
I'd check out robocopy and see if you can write a script that runs as a scheduled task to mirror the folders.
Moving files
Program help
Just use Robocopy.
Making Gui for Robocopy script
So I know this has been asked in several different ways and I'm sure this is linked to VBscript/.HTA. But I already have a .bat
Use robocopy for switching file servers
Hi all. I have a massive file copy I need to do as I'm moving data from fileserver1 to fileserver2. There's so much data, and it
Projects
-
Server Room, Racks and Build Agent: Convert basement area into a server room. Arrange for installation of air conditioning and liaise fo...
Hardware Refresh - Software Support Center: Replace mixed Software Support Center machines with W10 machines with i5, 8GB RAM, and 320GB SSD, wi...
VM Migration: Migrate VM from Vmware to Hyper-V, ensuring VM was working and all data was transferred and accessible
-
Migrate from SBS 2003 to 2012 R2: Upgrade server infrastructure for CPA firm. Moving from SBS 2003 to a minimum of 2012 R2.
-
Disaster recovery: Consolidate all data to a NAS, Setup a second NAS at a remote site for disaster recovery.
EU Backup Solutions: Safe Harbour is dead. At the time of writing, nothing concrete is in place to protect our data. Wha...
IT System Cleanup: We was approached by a clieant to attend site to work with them to help clean up their machines and ...
Backup Script: This is to back up log files and .xtr files onto our main server
-
-
Windows Server 2003 to 2012 R2 migration: Migrating from Windows Server 2003 SP2 to Server 2012 R2. Resolving existing DNS and server manageme...
-
-
-
-
Migrate 2003 Domain and File Servers: Migrate existing File Server and AD Domain away from 2003 Servers into 2012 R2 Servers
Windows Server 2012 - DC, DHCP: This new server will replace an existing DC, DHCP, file and print server at the location. Additiona...
-
File Server Migration and Upgrade: File and print server migration from Windows server 2003 to windows server 2012 r2
-
Creative File Transfer: Project to accomplish two goals; migrate 1.8TB of data from a dying Windows Server and configure Tim...
Can you keep it alive for a bit longer?: Dumbleton Hall Hotel was being pressured by their IT support company to purchase a pair of new serve...
-
New File Server: Build a new File Server and move all the files from the current server to the new one. It is critica...
New Backup System: After having no backup for over 6 months, we decided it was time to put some investment into ensurin...
Replace Branch Servers with VMs: replace 2 aging physical hardware with server 2003 - with esxi 5.5 hosting 2 VMs, opportunity to upg...
File system virtualization: Migrate all file systems (user data) to Acopia virtualization using robocopy as primary method of da...
-
|
https://community.spiceworks.com/products/17457-microsoft-robocopy/review/680612/flag
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Arduino Thermistor Theory, Calibration, and Experiment
[Read more about the difference between thermistor and thermocouple here.]
I will be using an NTC 3950 100k thermistor as mentioned above, and it will serve as the primary component used in this tutorial. Additionally, an Arduino board will be needed along with a DHT22 temperature sensor if the user is planning to follow along completely with this experiment. I have added a parts list below with some affiliate links from Amazon.
Thermistors can be approximated by assuming a third-order function called the Steinhart-Hart approximation [source on thermistor calibration]:
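Written out in its standard third-order form, with the coefficient names used in this post, the approximation is:

\[ \frac{1}{T} = C_0 + C_1 \ln(R) + C_2 \left(\ln(R)\right)^3 \]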
where T is the temperature calculated from the thermistor change in resistance, R. The coefficients C0, C1, and C2 need to be found using a non-linear regression method. The Steinhart-Hart equation is often simplified and rewritten as an exponential of first order:
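Matching the coefficients used in the curve fit below, this simplified first-order exponential form is:

\[ R(T) = a e^{-bT} + c \]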
Now we see an approximate method for relating T to the resistance, R. The coefficients a, b, c can be found using a least-squares fit against factory calibration data that can be acquired from the manufacturer. For my thermistor, I found factory tables that allowed me to fit the data using the equation above [example datasheet with table].
Using Python, I was able to download one of the tables for my thermistor and fit the data to an exponential curve using the function above and scipy’s ‘curve_fit’ toolbox. The resulting relationship and coefficients are shown below:
Figure 1: Factory calibration for temperature and resistance relationship for thermistor readings.
Now that we have a relationship between the resistance of the thermistor wire and the temperature measured, we need to understand how we can translate resistance into a meaningful quantity that we can measure using an analog-to-digital converter, namely, we need to convert resistance to voltage. And this is explained in the next section.
Arduino has a 10-bit analog-to-digital converter (ADC) that measures voltage values. Since our thermistor outputs resistance, we need to construct a relationship between our resistance and voltage in order to relate the change in resistance to voltage. We can do this using a simple voltage divider:
Figure 2: Voltage divider circuit for measuring voltage instead of resistance from the thermistor.
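Assuming the output is measured across the thermistor R_2, with the fixed resistor R_1 connected to the supply V_0 (the arrangement implied by the script below), the divider output seen by the ADC is:

\[ V = V_0 \, \frac{R_2}{R_1 + R_2} \]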
For Arduino, we will use 3.3V as our V0 to keep the noise low on the thermistor measurements. The response of the thermistor voltage changes based on the voltage divider resistor chosen (see Figure 3 below). Be sure to select a resistor near the resistor value above for your specific desired temperature range.
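For reference, the selection loop in the script below simply sets the fixed resistor equal to the thermistor resistance at the midpoint of the target temperature range, which centers the divider output near V_0/2:

\[ R_1 = R\!\left(\frac{T_1 + T_2}{2}\right) \]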
The full implementation of the algorithms behind Figures 1 and 3 is given below in Python 3.6.
#!/usr/bin/env python3 # # script for determining resistor pairing with thermistor NTC 3950 100k # # import csv import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit from scipy import stats plt.style.use('ggplot') def exp_func(x,a,b,c): return a*np.exp(-b*x)+c temp_cal,resist_cal = [],[] with open('ntc_3950_thermistor_cal_data.csv',newline='') as csvfile: csvreader = csv.reader(csvfile) for row in csvreader: temp_cal.append(float(row[1])) resist_cal.append(float(row[2])) fit_params,_ = curve_fit(exp_func,temp_cal,resist_cal,maxfev=10000) test_fit = [exp_func(ii,*fit_params) for ii in temp_cal] RMSE = np.sqrt(np.mean(np.power(np.subtract(test_fit,resist_cal),2))) mape = np.mean(np.abs(np.divide(np.subtract(resist_cal,test_fit),resist_cal)))*100 err_percent = 100.0*(RMSE/np.mean(np.abs(resist_cal))) print('RMSE = {0:2.2f} ({1:2.1f}%)'.format(RMSE,err_percent)) fit_txt_eqn = '$R(T) = ae^{-bT}+c$' fit_txt_params = '\n $a = {0:2.1f}$ \n $b = {1:2.5f}$ \n $c = {2:2.1f}$'.format(*fit_params) fit_txt = fit_txt_eqn+fit_txt_params fig1 = plt.figure(figsize=(15,9)) ax = fig1.gca() plt.plot(temp_cal,resist_cal,marker='o',markersize=10,label='Data') plt.plot(temp_cal,test_fit,marker='o',markersize=10,alpha=0.7,label='Fit (Error = {0:2.1f}%)'.format(mape)) plt.text(np.mean((temp_cal)) ,np.mean((resist_cal)),fit_txt,size=20) plt.title('NTC 3950 100k Thermistor Factory Calibration Plot and Fit') plt.xlabel(r'Temperature [$^\circ$C]',fontsize=16) plt.ylabel(r'Resistance [$\Omega$]',fontsize=16) plt.legend() #plt.savefig('thermistor_factory_fit.png',dpi=300,facecolor=[252/255,252/255,252/255]) plt.show() ## voltage divider selection for temperature ranges # # fig2 = plt.figure(figsize=(15,9)) ax3 = fig2.add_subplot(1,1,1) for T_2 in np.linspace(20.0,100.0,7): V_0 = 3.3 T_1 = -40.0 test_temps = np.linspace(T_1,T_2,10) R_2_1 = exp_func((T_1+T_2)/2.0,*fit_params) R_1 = R_2_1*((V_0/(V_0/2.0))-1) print(R_1) ## Thermistor test expectations with various voltage divider resistor values # # R_2 = exp_func(test_temps,*fit_params) V_2 = V_0*(1/(np.divide(R_1,R_2)+1)) ax3.plot(test_temps,V_2,linewidth=4,label='R_1 = {0:2.0f}'.format(R_1)) ax3.set_ylabel('Thermistor Voltage Output [V]',fontsize=18) ax3.set_xlabel('Temperature [$^\circ$C]',fontsize=18) plt.legend() plt.title('Voltage Divider Resistor Selection Response Curves') #plt.savefig('thermistor_resistor_selection.png',dpi=300,facecolor=[252/255,252/255,252/255]) plt.show()
Now that we have a relationship between the voltage read by the Arduino and the temperature measured by the thermistor, and we have selected our voltage divider resistor - we can now test if the system works and if our algorithm is correct! The correct prediction of temperature from the known parameters above is as follows:
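Combining the divider relation with the exponential fit and solving for temperature gives the expression implemented in the Arduino sketch further below:

\[ T = -\frac{1}{b}\,\ln\!\left(\frac{R_1 V}{a\,(V_0 - V)} - \frac{c}{a}\right) \]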
Figure 4: Arduino + Thermistor voltage divider circuit. Also take note of the external reference at 3.3V - we choose 3.3V because the voltage divider circuit will likely never reach the higher voltages due to the operating range we are interested in. The 3.3V choice also results in lower noise for the ADC. I have also attached a 10uF capacitor across the 3.3V and GND pins to lower some of the noise as well.
A few observations can be made regarding the wiring diagram above. The first, is that a 10uF capacitor is placed between the 3.3V and GND pins. Also, it is important to note that we will be using an external voltage reference using the 3.3V pin. And the reason is twofold: the expected voltage from the thermistor will be in the 1.5V range, and secondly, the 3.3V pin has less noise so our voltage readings will be more stable, resulting in more stable temperature readings (read more about the reference voltage here). The Arduino code for measuring temperature using our derivations above and the wiring in Figure 4 is below:
// Arduino code for use with NTC thermistor
#include <math.h>

// pin and fit parameters (the declarations were garbled in the source page;
// the values below mirror the comparison sketch later in this post)
#define therm_pin A0
float T_approx;
float V_0 = 3.3;
float R_1 = 220000.0;
float a = 283786.2;
float b = 0.06593;
float c = 49886.0;
int avg_size = 10;

void setup() {
  Serial.begin(9600);
  pinMode(therm_pin, INPUT);
  // set analog reference to read AREF pin
  analogReference(EXTERNAL);
}

void loop() {
  // loop over several values to lower noise
  float T_sum = 0.0;
  for (int ii = 0; ii < avg_size; ii++) {
    // read the ADC and convert to a voltage (reconstructed step; the exact
    // conversion in the original sketch may differ)
    float voltage = (analogRead(therm_pin) / 1023.0) * V_0;
    // this is where the thermistor conversion happens based on parameters from fit
    T_sum += (-1.0 / b) * (log(((R_1 * voltage) / (a * (V_0 - voltage))) - (c / a)));
  }
  // averaging values from loop
  T_approx = T_sum / float(avg_size);
  // readout for Celsius and Fahrenheit
  Serial.print("Temperature: ");
  Serial.print(T_approx);
  Serial.print(" (");
  Serial.print((T_approx * (9.0 / 5.0)) + 32.0);
  Serial.println(" F)");
  delay(500);
}
The code above averages 10 temperature readings for a more stable output and gives a readout roughly every 500 ms in both Celsius and Fahrenheit. The parameters should be updated for the user-specific thermistor, and the average amount can also be adjusted based on the user’s desired stability.
Capacitor results in smoothed temperature response
Figure 5: Capacitor smoothing effect on ADC for thermistor reading.
In the next section I compare our thermistor to a DHT22 temperature and humidity sensor.
As a simple test, I decided to wire up a DHT22 temperature and humidity sensor to see how well the thermistor equation approximate temperature based on its resistance. The DHT22 is a classic Arduino sensor, so I expected the two to be fairly close when compared at room temperature. I also wanted to see their respective responses when their surrounding temperatures are increased and watch the response with time to get an idea of how the sensors work over actively changing temperature scenarios.
The wiring for the thermistor and DHT22 sensor combination is shown below.
Figure 6: Wiring for comparison between DHT22 sensor and thermistor.
The Arduino code to accompany the DHT22 and thermistor comparison is also given below. It uses the “SimpleDHT” library which can be installed through the Library Manager.
#include <math.h>
#include <SimpleDHT.h>

#define therm_pin A0
#define pinDHT22 2

float T_approx;
float V_0 = 3.3;
float R_1 = 220000.0;
float a = 283786.2;
float b = 0.06593;
float c = 49886.0;
int avg_size = 50;

SimpleDHT22 dht22;

void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  pinMode(therm_pin, INPUT);
  analogReference(EXTERNAL);
}

// the loop routine runs over and over again forever:
void loop() {
  // average several ADC readings and convert to temperature
  // (the read/averaging loop was garbled in the source page and is reconstructed here)
  float T_sum = 0.0;
  for (int ii = 0; ii < avg_size; ii++) {
    float voltage = (analogRead(therm_pin) / 1023.0) * V_0;
    T_sum += (-1.0 / b) * (log(((R_1 * voltage) / (a * (V_0 - voltage))) - (c / a)));
  }
  T_approx = T_sum / float(avg_size);

  Serial.print("Thermistor: ");
  Serial.print(T_approx);
  Serial.print(" (");
  Serial.print((T_approx * (9.0 / 5.0)) + 32.0);
  Serial.println(" F)");

  float temperature = 0;
  dht22.read2(pinDHT22, &temperature, NULL, NULL);
  Serial.print("DHT22: ");
  Serial.print((float)temperature);
  Serial.println(" *C, ");

  Serial.print("Difference: ");
  Serial.print(temperature - T_approx);
  Serial.println(" C");

  delay(500);
}
The code above calculates both temperatures and prints them to the serial monitor every 0.5 seconds. It also averages every 50 readings from the thermistor. The code also prints out the difference between the two temperature sensor methods. Below, I have plotted the temperature difference to show the average deviation between thermistor and DHT22.
Difference Between DHT22 and NTC Thermistor Temperature Readings!
Just to contrast the abilities of the two sensors, the plot below demonstrates the power of the thermistor and the weakness of the DHT22:
Difference Between DHT22 and Thermistor During a Hot Gust
In the plot above, it’s easy to see the power of the thermistor and its ability to handle quick-changing scenarios. The DHT22 is only equipped to handle a 0.5s update rate, and in reality can only resolve ambient temperatures, not large bursts of hot or cold. The plot below really illustrates the deficiencies in the DHT22’s ability to handle bursts of temperature changes. Thermistors have temperature responses that are fairly quick, while the DHT22 takes a few readings. The DHT22 also requires some time to recover from a heating period, primarily because of its housing and slow component response.
Thermistor and DHT22 Thermal Responses
The thermistor is a clear winner when temperature fluctuations are of great importance to measurements. This is why they are often used in experiments where temperatures do fluctuate quickly and accurate measurements are needed.
Figure 7: Glass beaded thermistor next to a DHT22 temperature sensor.
In this article, I discussed thermistors and how to implement them in Arduino by fitting factory calibrated data to acquire accurate coefficients for finding temperature from resistance. I also discussed how to use a voltage divider to measure voltage as a function of resistance output from the thermistor. And lastly, I used a DHT22 temperature sensor to compare the accuracy and advantages of using a thermistor.
Thermistors are used in a wide variety of applications because of their accuracy, high responsivity in rapidly changing environments, and their inexpensive and easy-to-use hardware. One of the difficulties with using thermistors is their non-linear response, however with quality calibration and response curves, the non-linear effects can be handled. There are many other experiments that can be done with thermistors to analyze their time responses, lower the non-linear hindrances, and investigate the self-heating effects. This project was meant to introduce thermistors and their theory, while also increasing the understanding of why they are a great choice over other temperature-sensing methods.
Thanks to PCBGOGO for PCB prototyping help and for sponsoring me in this project.
If you enjoyed the experiment, please share the project and go to pcbgogo.com to purchase a custom PCB board for your own electronics projects.
See More in Arduino and Sensors:
|
https://makersportal.com/blog/2019/1/15/arduino-thermistor-theory-calibration-and-experiment
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Symfony 3.3.5 released
Symfony 3.3.5 has just been released. Here is a list of the most important changes:
- bug #23549 [PropertyInfo] conflict for phpdocumentor/reflection-docblock 3.2 (@xabbuh)
- bug #23513 [FrameworkBundle] Set default public directory on install assets (@yceruto)
- security #23507 [Security] validate empty passwords again (@xabbuh)
- bug #23526 [HttpFoundation] Set meta refresh time to 0 in RedirectResponse content (@jnvsor)
- bug #23535 Make server: commands work out of the box with the public/ root dir (@fabpot)
- bug #23540 Disable inlining deprecated services (@alekitto)
- bug #23498 [Process] Fixed issue between process builder and exec (@lyrixx)
- bug #23490 [DependencyInjection] non-conflicting anonymous service ids across files (@xabbuh)
- bug #23468 [DI] Handle root namespace in service definitions (@ro0NL)
- bug #23477 [Process] Fix parsing args on Windows (@nicolas-grekas)
- bug #23256 [Security] Fix authentication.failure event not dispatched on AccountStatusException (@chalasr)
- bug #23461 Use rawurlencode() to transform the Cookie into a string (@javiereguiluz)
- bug #23465 [HttpKernel][VarDumper] Truncate profiler data & optim perf (@nicolas-grekas)
- bug #23457 [FrameworkBundle] check _controller attribute is a string before parsing it (@alekitto)
- bug #23459 [TwigBundle] allow to configure custom formats in XML configs (@xabbuh)
- bug #23460 Don't display the Symfony debug toolbar when printing the page (@javiereguiluz)
- bug #23469 [FrameworkBundle] do not wire namespaces for the ArrayAdapter (@xabbuh)
- bug #23434 [DotEnv] Fix variable substitution (@brieucthomas)
- bug #23426 Fixed HttpOnly flag when using Cookie::fromString() (@Toflar)
- bug #22439 [DX] [TwigBundle] Enhance the new exception page design (@sustmi)
- bug #23417 [DI][Security] Prevent unwanted deprecation notices when using Expression Languages (@dunglas)
- bug #23261 Fixed absolute url generation for query strings and hash urls (@alexander-schranz)
- bug #23398 [Filesystem] Dont copy perms when origin is remote
@Do Ngoc Tu: You can better redirect your question through one of the Symfony support channels, such as Slack, where you can get a better direct live answer.
check composer.json and search "autoload" line and change it to
load all src bundle "psr-4": { "": "src/" },
@Do Ngoc Tu : I have the same problem. I can't solve it. Have you some idea to solve this problem?
Thanks.
To ensure that comments stay relevant, they are closed for old posts.
Do Ngoc Tu said on Jul 18, 2017 at 09:23 #1
I updated to symfony 3.3.5 but when generate new bundle, it can't autoload
Checking that the bundle is autoloaded
FAILED
any ideas for me?
Thanks
Tu
|
https://symfony.com/blog/symfony-3-3-5-released
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
A dart package for continuous color transition.
A simple usage example:
import 'package:color_transition/color_transition.dart'; colorTransition = new ColorTransition(callback: (currentColor){ print(currentColor); }).init();
- currentColor: RGB color as a List<int>, defaults to [255, 255, 255] (optional)
- callback: Function called after every transition (optional)
- duration: defaults to int 3 (optional)
- fps: defaults to int 30 (optional)
- init: starts the color transition
- cancel: ends/cancels the color transition
- generateRGB: returns a List<int> representing a random RGB value
Please file feature requests and bugs at the issue tracker.
Add this to your package's pubspec.yaml file:
dependencies:
  color_transition:

import 'package:color_transition/color_transition.dart';
We analyzed this package on Apr 12, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed.

Detected platforms: Flutter, web, other

No platform restriction found in primary library package:color_transition/color_transition.dart.

Fix lib/color_transition.dart. (-1 points)

Analysis of lib/color_transition.dart reported 2 hints:

- line 7 col 47: Use = to separate a named parameter from its default value.
- line 7 col 66: Use = to separate a named parameter from its default value.

The package description is too short.
|
https://pub.dartlang.org/packages/color_transition
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Snowplow Python Tracker 0.4.0 released
We are happy to announce the release of the Snowplow Python Tracker version 0.4.0.
This version introduces the Subject class, which lets you keep track of multiple users at once, and several Emitter classes, which let you send events asynchronously, pass them to a Celery worker, or even send them to a Redis database. We have added support for sending batches of events in POST requests, although the Snowplow collectors do not yet support POST requests.
We have also made changes to the format of unstructured events and custom contexts, to support our new work around self-describing JSON Schemas.
In the rest of the post we will cover:
- The Subject class
- The Emitter classes
- Tracker method return values
- Logging
- Pycontracts
- The RedisWorker class
- Self-describing JSONs
- Upgrading
- Support
1. The Subject class
An instance of the Subject class represents a user who is performing an event in the Subject-Verb-Direct Object model proposed in our Snowplow event grammar. Although you can create a Tracker instance without a Subject, you won’t be able to add information such as user ID and timezone to your events without one.
If you are tracking more than one user at once, create a separate Subject instance for each. An example:
from snowplow_tracker import Subject, Emitter, Tracker

# Create a simple Emitter which will log events to
e = Emitter("d3rkrsqld9gmqf.cloudfront.net")

# Create a Tracker instance
t = Tracker(emitter=e, namespace="cf", app_id="CF63A")

# Create a Subject corresponding to a pc user
s1 = Subject()

# Set some data for that user
s1.set_platform("pc")
s1.set_user_id("0a78f2867de")

# Set s1 as the Tracker's subject
# All events fired will have the information we set about s1 attached
t.set_subject(s1)

# Track user s1 viewing a page
t.track_page_view("")

# Create another Subject instance corresponding to a mobile user
s2 = Subject()

# All methods of the Subject class return the Subject instance so methods can be chained:
s2.set_platform("mob").set_user_id("0b08f8be3f1")

# Change the tracker subject from s1 to s2
# All events fired will instead have information we set about s2 attached
t.set_subject(s2)

# Track user s2 viewing a page
t.track_page_view("")
It is also possible to set the subject during Tracker initialization:
t = Tracker(emitter=e, subject=s1, namespace="cf", app_id="CF63A")
2. The Emitter classes
Trackers must be initialized with an Emitter.
This is the signature of the constructor for the base Emitter class:
def __init__(self, endpoint, protocol="http", port=None, method="get", buffer_size=None, on_success=None, on_failure=None):
The only field which must be set is the
endpoint, which is the collector to which the emitter logs events.
port is the port to connect to,
protocol is either
"http" or
"https", and
method is either “get” or “post”.
When the emitter receives an event, it adds it to a buffer. When the queue is full, all events in the queue get sent to the collector. The buffer_size argument allows you to customize the queue size. By default, it is 1 for GET requests and 10 for POST requests. If the emitter is configured to send POST requests, then instead of sending one for every event in the buffer, it will send a single request containing all those events in JSON format.
on_success is an optional callback that will execute whenever the queue is flushed successfully, that is, whenever every request sent has status code 200. It will be passed one argument: the number of events that were sent.
on_failure is similar, but executes when the flush is not wholly successful. It will be passed two arguments: the number of events that were successfully sent, and an array of unsent requests.
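To make the options above concrete, a minimal sketch of an emitter that batches events into POST requests of ten and reports on each flush might look like this (the endpoint is the example collector address used elsewhere in this post; the callback names are purely illustrative):

from snowplow_tracker import Emitter

def success(num_sent):
    # Called when every request in a flush returned status 200
    print("Sent %d events" % num_sent)

def failure(num_sent, unsent):
    # Called when a flush was not wholly successful
    print("Sent %d events, %d failed" % (num_sent, len(unsent)))

e = Emitter("d3rkrsqld9gmqf.cloudfront.net",  # example collector endpoint
            method="post",                    # batch events into a single POST request
            buffer_size=10,                   # flush after 10 events
            on_success=success,
            on_failure=failure)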
AsyncEmitter
The AsyncEmitter class works just like the base Emitter class, but uses threads, allowing it to send HTTP requests in a non-blocking way.
CeleryEmitter
The CeleryEmitter class works just like the base Emitter class, but it registers sending requests as a task for a Celery worker. If there is a module named snowplow_celery_config.py on your
PYTHONPATH, it will be used as the Celery configuration file; otherwise, a default configuration will be used. You can run the worker using this command:
celery -A snowplow_tracker.emitters worker --loglevel=debug
Note that
on_success and
on_failure callbacks cannot be supplied to this emitter.
RedisEmitter
Use a RedisEmitter instance to store events in a Redis database for later use. This is the RedisEmitter constructor function:
def __init__(self, rdb=None, key="snowplow"):
rdb should be an instance of either the
Redis or
StrictRedis class, found in the redis module. If it is not supplied, a default will be used.
key is the key used to store events in the database. It defaults to “snowplow”. The format for event storage is a Redis list of JSON strings.
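For illustration, a minimal sketch wiring a RedisEmitter into a tracker might look like the following (it assumes a Redis server is reachable on localhost with the default port):

import redis
from snowplow_tracker import RedisEmitter, Tracker

rdb = redis.StrictRedis()              # assumes a local Redis instance
e = RedisEmitter(rdb, key="snowplow")  # events stored as a Redis list of JSON strings
t = Tracker(e)
t.track_page_view("http://www.example.com")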
Flushing
You can flush the buffer of an emitter associated with a tracker instance
t like this:
t.flush()
This synchronously sends all events in the emitter’s buffer.
Custom emitters
You can create your own custom emitter class, either from scratch or by subclassing one of the existing classes. The only requirement for compatibility is that it must have an
input method which accepts a Python dictionary of name-value pairs.
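For example, a toy emitter that just prints each payload would satisfy that requirement (this is only a sketch, not part of the library):

from snowplow_tracker import Tracker

class StdoutEmitter(object):
    """Toy emitter: any class exposing an input() method that accepts
    a dictionary of name-value pairs can stand in for an Emitter."""
    def input(self, payload):
        print(payload)

t = Tracker(StdoutEmitter())
t.track_screen_view("title screen")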
3. Tracker method return values
If you are using the synchronous Emitter and call a tracker method which causes the emitter to send a request, that tracker method will return the status code for the request:
e = Emitter("d3rkrsqld9gmqf.cloudfront.net") t = Tracker(e) print(t.track_page_view("")) # Prints 200
This is useful for initial testing.
Otherwise, the tracker method will return the tracker instance, allowing tracker methods to be chained:
e = AsyncEmitter("d3rkrsqld9gmqf.cloudfront.net") t = Tracker(e) t.track_page_view("").track_screen_view("title screen")
The
set_subject method will always return the Tracker instance.
4. Logging
The emitters.py module has Python logging turned on. The logger prints messages about what emitters are doing. By default, only messages with priority “INFO” or higher will be logged.
To change this:
from snowplow_tracker import logger

# Log all messages, even DEBUG messages
logger.setLevel(10)

# Log only messages with priority WARN or higher
logger.setLevel(30)

# Turn off all logging
logger.setLevel(60)
5. Pycontracts
The Snowplow Python Tracker uses the Pycontracts module for type checking. The option to turn type checking off has been moved out of Tracker construction:
from snowplow_tracker import disable_contracts
disable_contracts()
Switch off Pycontracts to improve performance in production.
6. The RedisWorker class
The tracker comes with a RedisWorker class which sends Snowplow events from Redis to an emitter. The RedisWorker constructor is similar to the RedisEmitter constructor:
def __init__(self, _consumer, key=None, dbr=None):
This is how it is used:
from snowplow_tracker import AsyncEmitter
from snowplow_tracker.redis_worker import RedisWorker

e = AsyncEmitter("d3rkrsqld9gmqf.cloudfront.net")
r = RedisWorker(e, key="snowplow_redis_key")
r.run()
This will set up a worker which will run indefinitely, taking events from the Redis list with key “snowplow_redis_key” and inputting them to an AsyncEmitter, which will send them to a Collector. If the process receives a SIGINT signal (for example, due to a Ctrl-C keyboard interrupt), cleanup will occur before exiting to ensure no events are lost.
7. Self-describing JSONs
Snowplow unstructured events and custom contexts are now defined using JSON schema, and should be passed to the Tracker using self-describing JSONs. Here is an example of the new format for unstructured events:
t.track_unstruct_event({ "schema": "iglu:com.acme/viewed_product/jsonschema/2-1-0", "data": { "product_id": "ASO01043", "price": 49.95 } })
The
data field contains the actual properties of the event and the
schema field points to the JSON schema against which the contents of the
data field should be validated. The
data field should be flat, rather than nested.
Custom contexts work similarly. Since and event can have multiple contexts attached, the
contexts argument of each
trackXXX method must (if provided) be a non-empty array:
t.track_page_view("localhost", None, "", [{ schema: "iglu:com.example_company/page/jsonschema/1-2-1", data: { pageType: 'test', lastUpdated: new Date(2014,1,26) } }, { schema: "iglu:com.example_company/user/jsonschema/2-0-0", data: { userType: 'tester' } }])
The above example shows a page view event with two custom contexts attached: one describing the page and another describing the user.
As part of this change we have also removed type hint suffixes from unstructured events and custom contexts. Now that JSON schemas are responsible for type checking, there is no need to include types as a part of field names.
8. Upgrading
The release version of this tracker (0.
9. Support
Please get in touch if you need help setting up the Snowplow Python Tracker or want to suggest a new feature. The Snowplow Python Tracker is still young, so of course do raise an issue if you find any bugs.
For more details on this release, please check out the 0.4.0 Release Notes on GitHub.
|
https://snowplowanalytics.com/blog/2014/06/10/snowplow-python-tracker-0.4.0-released/
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Microcontroller Programming » error: unknown type name 'int16_t'
i have started fresh on my projects and duplicated my clean copy of the libnerdkits folder and dropped into my projects folder... i removed the lcd.o, uart.o, delay.o files to force a recompile of the source code but now i either get an error saying
fatal error: lcd.h: No such file or directory
or when i put the lcd.h folder back into the library i get
../libnerdkits/lcd.h:[line number}:[another number]: error: unknown type name 'int16_t'
and
../libnerdkits/lcd.h:[line number}:[another number]: error: unknown type name 'uint8_t'
on three lines in lcd.h
does anybody know what i've done to myself this time?
Did you maintain the directory structure of the code folder?
code +
|
+ libnerdkits
|
+ Yourproject_folder
hi Rick... i maintained the structure that i had before this problem developed... i'm not familiar with your notation above but here's my tree:
Desktop
avrProjects folder
libnerkits folder
project folder
code file
makefile
and my makefile contains the line:
LINKOBJECTS=../libnerdkits/delay.o ../libnerdkits/lcd.o ../libnerdkits/uart.o
i think the double dot (..) tells the compiler to go to the parent folder of the project folder to find the libnerdkits folder
i'm going to do a couple of hours of trouble-shooting this evening... thanks as always... k
LINKOBJECTS aren't used until you get a good compile. It looks like your source code is missing
#include <avr/io.h>
or it isn't placed before the include for lcd.h.
thank you guys... the problem was that i forgot i had switched from my atmega168 kit to my 328p kit...
if i can pick your brains further... when i change from the 168 to the 328p processor i need to
update project code:
add #include "../libnerdkits/io_328p.h"
update project makefile:
line 1: change atmega168 to atmega328p
line 3: change m168 to m328p
line 4: add ../libnerdkits/io_328p.h to end of line
update libnerdkits makefile:
line 1: change atmega168 to atmega328p
am i missing anything important and also any explanations would be appreciated...
later, k
Please log in to post a reply.
|
http://www.nerdkits.com/forum/thread/2609/
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Code Generator (Sergen)
Sergen has some extra options that you may set through its configuration file (Serenity.CodeGenerator.config) in your solution directory.
Here is the full set of options:
public class GeneratorConfig { public List<Connection> Connections { get; set; } public string KDiff3Path { get; set; } public string TFPath { get; set; } public string TSCPath { get; set; } public bool TFSIntegration { get; set; } public string WebProjectFile { get; set; } public string ScriptProjectFile { get; set; } public string RootNamespace { get; set; } public List<BaseRowClass> BaseRowClasses { get; set; } public List<string> RemoveForeignFields { get; set; } public bool GenerateSSImports { get; set; } public bool GenerateTSTypings { get; set; } public bool GenerateTSCode { get; set; } }
Connections, RootNamespace, WebProjectFile, ScriptProjectFile, GenerateSSImports, GenerateSSTypings and GenerateTSCode options are all available in user interface, so we'll focus on other options.
KDiff3 Path
Sergen tries to launch KDiff3 when it needs to merge changes to an existing file. This might happen when you try to generate code for an entity again. Instead of overriding target files, Sergen will execute KDiff3.
Sergen looks for KDiff3 at its default location under C:\Program Files\Kdiff3, but you may override this path with this option, if you installed Kdiff3 to another location.
TFSIntegration and TFPath
For users that work with TFS, Sergen provides this options to make it possible to checkout existing files and add new ones to source control. Set TFSIntegration to true, if your project is versioned in TFS, and set TFPath if tf.exe is not under its default location at C:\Program Files\Visual Studio\x.y\Common7\ide\
{ // ... "TFSIntegration": true, "TFPath": "C:\Program Files\....\tf.exe" }
RemoveForeignFields
By default, Sergen examines your table foreign keys, and when generating a row class, it will bring all fields from all referenced foreign tables.
Sometimes, you might have some fields in foreign tables, e.g. some logging fields like InsertUserId, UpdateDate etc. that wouldn't be useful in another row.
You'd be able to remove them manually after code generation too, but using this option it might be easier. List fields you want to remove from generated rows as an array of string:
{ // ... "RemoveForeignFields": ["InsertUserId", "UpdateUserId", "InsertDate", "UpdateDate"] }
Note that this doesn't remove this fields from table row itself, it only removes these view fields from foreign joins.
BaseRowClasses
If you are using some base row class, e.g. something like LoggingRow in Serene, you might want Sergen to generate your rows deriving from these base classes.
For this to work, list your base classes, and the fields they have.
{ // ... "BaseRowClasses": [{ "ClassName": "Serene.Administration.LoggingRow", "Fields": ["InsertUserId", "UpdateUserId", "InsertDate", "UpdateDate"] }] }
If Sergen determines that a table has all fields listed in "Fields" array, it will set its base class as "ClassName", and will not generate these fields explicity in row, as they are already defined in base row class.
It is possible to define more than one base row class. Sergen will choose the base row class with most matching fields, if a row's fields matches more than one base class.
|
https://volkanceylan.gitbooks.io/serenity-guide/sergen/code_generator_sergen.html
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
This should work for you:
var movies = _db.Movies.OrderBy(c => c.Category).ThenBy(n => n.Name);
Background: Over the next month, I'll be giving three talks about or at least including
LINQ in the context of
C#. I'd like to know which topics are worth giving a fair amount of attention to, based on what people may find hard to understand, or what they may have a mistaken impression of. I won't be specifically talking about
LINQ to
SQL or the Entity Framework except as examples of how queries can be executed remotely using expression trees (and usually
IQueryable).
So, what have you found hard about
LINQ? What have you seen in terms of misunderstandings? Examples might be any of the following, but please don't limit yourself!
C# compiler treats query expressions
IQueryable
Delayed execution

Any ideas how to get something like this working? I'm amazed that LINQ queries are not allowed on DataTables!
I'd like to do the equivalent of the following in LINQ, but I can't figure out how:
IEnumerable<Item> items = GetItems(); items.ForEach(i => i.DoStuff());
What is the real syntax?
There is no ForEach extension for
IEnumerable; only for
List<T>. So you could do
items.ToList().ForEach(i => i.DoStuff());
Alternatively, write your own ForEach extension:
public static void ForEach<T>(this IEnumerable<T> enumeration, Action<T> action) { foreach(T item in enumeration) { action(item); } }
I found an example in the VS2008 Examples for Dynamic LINQ that allows you to use a sql-like string (e.g.
OrderBy("Name, Age DESC")) for ordering. Unfortunately, the method included only works on
IQueryable<T>. Is there any way to get this functionality on
IEnumerable<T>?
Just stumbled into this oldie...
To do this without the dynamic LINQ library, you just need the code as below. This covers most common scenarios including nested properties.
To get it working with
IEnumerable<T> you could add some wrapper methods that go via AsQueryable - but the code below is the core
Expression logic needed.
public static IOrderedQueryable<T> OrderBy<T>(this IQueryable<T> source, string property) { return ApplyOrder<T>(source, property, "OrderBy"); } public static IOrderedQueryable<T> OrderByDescending<T>(this IQueryable<T> source, string property) { return ApplyOrder<T>(source, property, "OrderByDescending"); } public static IOrderedQueryable<T> ThenBy<T>(this IOrderedQueryable<T> source, string property) { return ApplyOrder<T>(source, property, "ThenBy"); } public static IOrderedQueryable<T> ThenByDescending<T>(this IOrderedQueryable<T> source, string property) { return ApplyOrder<T>(source, property, "ThenByDescending"); } static IOrderedQueryable<T> ApplyOrder<T>(IQueryable<T> source, string property, string methodName) {<T>)result; }
Edit: it gets more fun if you want to mix that with
dynamic - although note that
dynamic only applies to LINQ-to-Objects (expression-trees for ORMs etc can't really represent
dynamic queries -
MemberExpression doesn't support it). But here's a way to do it with LINQ-to-Objects. Note that the choice of
Hashtable is due to favorable locking semantics:
using Microsoft.CSharp.RuntimeBinder; using System; using System.Collections; using System.Collections.Generic; using System.Dynamic; using System.Linq; using System.Runtime.CompilerServices; static class Program { private static class AccessorCache { private static readonly Hashtable accessors = new Hashtable(); private static readonly Hashtable callSites = new Hashtable(); private static CallSite<Func<CallSite, object, object>> GetCallSiteLocked(string name) { var callSite = (CallSite<Func<CallSite, object, object>>)callSites[name]; if(callSite == null) { callSites[name] = callSite = CallSite<Func<CallSite, object, object>>.Create( Binder.GetMember(CSharpBinderFlags.None, name, typeof(AccessorCache), new CSharpArgumentInfo[] { CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null) })); } return callSite; } internal static Func<dynamic,object> GetAccessor(string name) { Func<dynamic, object> accessor = (Func<dynamic, object>)accessors[name]; if (accessor == null) { lock (accessors ) { accessor = (Func<dynamic, object>)accessors[name]; if (accessor == null) { if(name.IndexOf('.') >= 0) { string[] props = name.Split('.'); CallSite<Func<CallSite, object, object>>[] arr = Array.ConvertAll(props, GetCallSiteLocked); accessor = target => { object val = (object)target; for (int i = 0; i < arr.Length; i++) { var cs = arr[i]; val = cs.Target(cs, val); } return val; }; } else { var callSite = GetCallSiteLocked(name); accessor = target => { return callSite.Target(callSite, (object)target); }; } accessors[name] = accessor; } } } return accessor; } } public static IOrderedEnumerable<dynamic> OrderBy(this IEnumerable<dynamic> source, string property) { return Enumerable.OrderBy<dynamic, object>(source, AccessorCache.GetAccessor(property), Comparer<object>.Default); } public static IOrderedEnumerable<dynamic> OrderByDescending(this IEnumerable<dynamic> source, string property) { return Enumerable.OrderByDescending<dynamic, object>(source, AccessorCache.GetAccessor(property), Comparer<object>.Default); } public static IOrderedEnumerable<dynamic> ThenBy(this IOrderedEnumerable<dynamic> source, string property) { return Enumerable.ThenBy<dynamic, object>(source, AccessorCache.GetAccessor(property), Comparer<object>.Default); } public static IOrderedEnumerable<dynamic> ThenByDescending(this IOrderedEnumerable<dynamic> source, string property) { return Enumerable.ThenByDescending<dynamic, object>(source, AccessorCache.GetAccessor(property), Comparer<object>.Default); } static void Main() { dynamic a = new ExpandoObject(), b = new ExpandoObject(), c = new ExpandoObject(); a.X = "abc"; b.X = "ghi"; c.X = "def"; dynamic[] data = new[] { new { Y = a },new { Y = b }, new { Y = c } }; var ordered = data.OrderByDescending("Y.X").ToArray(); foreach (var obj in ordered) { Console.WriteLine(obj.Y.X); } } }
What is Java equivalent for LINQ?
There is nothing like LINQ for Java.
How can I do GroupBy Multiple Columns in LINQ
Something similar to this in SQL:
SELECT * FROM <TableName> GROUP BY <Column1>,<Column2>
How can I convert this to LINQ:
QuantityBreakdown ( MaterialID int, ProductID int, Quantity float ) INSERT INTO @QuantityBreakdown (MaterialID, ProductID, Quantity) SELECT MaterialID, ProductID, SUM(Quantity) FROM @Transactions GROUP BY MaterialID, ProductID
Use an anonymous type.
Eg
group x by new { x.Column1, x.Column2 }
What is the difference between IQueryable<T> and IEnumerable<T>?
First of all, IQueryable<T> extends the IEnumerable<T> interface, so anything you can do with a "plain" IEnumerable<T>, you can also do with an IQueryable<T>.
IEnumerable<T> just has a GetEnumerator() method that returns an Enumerator<T> for which you can call its MoveNext() method to iterate through a sequence of T.
What IQueryable<T> has that IEnumerable<T> doesn't are two properties in particular—one that points to a query provider (e.g., a LINQ to SQL provider) and another one pointing to a query expression representing the IQueryable<T> object as a runtime-traversable expression that can be understood by the given query provider (for the most part, you can't give a LINQ to SQL expression to a LINQ to Entities provider without an exception being thrown).
The expression can simply.
what is the difference between returning iqueryable vs ienumerable.
IQueryable<Customer> custs = from c in db.Customers where c.City == "<City>" select c; IEnumerable<Customer> custs = from c in db.Customers where c.City == "<City>" select c;
Will both be deferred execution? When should one be preferred over the other?
One of the things I've asked a lot about on this site is LINQ. The questions I've asked have been wide and varied and often don't have much context behind them. So in an attempt to consolidate the knowledge I've acquired on Linq I'm posting this question with a view to maintaining and updating it with additional information as I continue to learn about LINQ.
I also hope that it will prove to be a useful resource for other people wanting to learn.
What this means is that LINQ provides a standard way to query a variety of datasources using a common syntax.
Currently there are a few different LINQ providers provided by Microsoft:
There are quite a few others, many of which are listed here.
Chook provides a way to output CSV files
Jeff shows how to remove duplicates from an array
Bob gets a distinct ordered list from a datatable
Marxidad shows how to sort an array
Dana gets help implementing a Quick Sort Using Linq
A summary of links from GateKiller's question are below:
Scott Guthrie provides an intro to Linq on his blog
An overview of LINQ on MSDN
ChrisAnnODell suggests checking out:
Linq is currently available in VB.Net 9.0 and C# 3.0 so you'll need Visual Studio 2008 or greater to get the full benefits. (You could always write your code in notepad and compile using MSBuild)
There is also a tool called LinqBridge which will allow you to run Linq like queries in C# 2.0.
This question has some tricky ways to use LINQ
Another good site for Linq is Hooked on Linq and here are 101 Linq samples which are a great reference if you just want a quick syntactical example.
Let's also not forget LinqPad :)
|
http://boso.herokuapp.com/linq
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
Testing demonstration program:
- Opens a connection to the URL
- Reads in the XML
- Formats the data
- Writes out the formatted report
Use readers and writers to encapsulate data.
With that in mind, you can split off
printURL()'s code-parsing and -formatting functions into a new
formatReader(Reader, Writer) method exclusively dedicated to taking a
Reader object with XML data in it, parsing it, and writing the report out to the supplied
Writer.
Testing
formatReader(Reader, Writer) now proves simple:
testFormatReaderGoodData(): String goodRSSData = "<rss><channel>" + "<title>Channel Title</title>" + "<item><title>Item 1</title></item>" + "<item><title>Item 2</title></item>" + "</channel></rss>"; String goodRSSOutput = "Channel Title\n Item 1\n Item 2\n"; Reader r = new StringReader(goodRSSData); Writer w = new StringWriter(); PrintRSS.formatReader(r, w); assertEquals(goodRSSOutput, w.toString());
The example above tests the parser's and formatter's logic without URLs or network connections, just readers and writers. The test example illustrates a useful test technique: creating reader streams with test data contained right in the test code rather than reading the data from a file or the network.
StringReader and
StringWriter (or
ByteArrayInputStream and
ByteArrayOutputStream) prove invaluable for embedding test data in unit-test suites.
The unit test above exercises the logic to see what happens when everything works right, but it's equally important to test error handling code for cases when something goes wrong. Next, here's an example of testing with bad data, using a clever JUnit idiom to check that the proper exception throws:
testFormatReaderBadData(): String badXMLData = "this is not valid xml data"; StringReader r = new StringReader(badXMLData); try { PrintRSS.formatReader(r, new StringWriter()); fail("should have thrown XML error"); } catch (XMLParseException ex) { // No error, we expected an exception }
Again, readers and writers encapsulate data. The main difference: This data causes
formatReader() to throw an exception; if the exception does not throw, then JUnit's
fail() method is called.
Use Java protocol handlers to test network code
While factoring out nonnetworked methods can make programs like PrintRSS easier to test, such efforts prove insufficient for creating a full test suite. We still want to test the network code itself, particularly any network exception handling. And sometimes it proves inconvenient to factor out to a reader interface; the tested code may rely on a library that only understands URLs. In this section I explain how to test the
formatURL(URL, Writer) method that takes a URL, reads the RSS, and writes out the data to a writer.
You can test code that contains URLs in several ways. First, you could use standard
http: URLs and point to a working test server. That approach, however, requires a stable Web server—one more component to complicate the testing setup. Second, you could use
file: URLs that point to local files. That approach shares a problem with
http: URLs: While you can access good test data with
file: or
http: URLs, it proves difficult to simulate network failures that cause I/O (input/output) exceptions.
A better approach to testing URL code creates a new URL namespace,
testurl:, completely under the test program's control—an easy approach with Java protocol handlers.
Implement testurl:
Java protocol handlers allow programmers to implement custom URL protocols with their own code. You'll find a full explanation in Brian Maso's "A New Era for Java Protocol Handlers" (Sun Microsystems, August 2000) and a sample implementation in the source code's
org.monkey.nelson.testurl package.
Protocol handlers are simple. First, satisfy the Java requirements for writing a protocol handler, of which there are three important pieces:
- The TestURLConnection class: a URLConnection implementation that handles the actual methods that return input streams, headers, and so on
- The Handler class: turns a testurl: URL into a TestURLConnection (a minimal sketch of this piece follows the list)
- The java.protocol.handler.pkgs property: a system property that tells Java where to find the new URL namespace implementation
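A minimal sketch of the Handler piece, for orientation only: the wiring shown is the standard java.net protocol-handler mechanism rather than code from the article, and the TestURLConnection constructor is assumed.

import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

// Lives in the protocol package, e.g. org.monkey.nelson.testurl.
// Maps any testurl: URL onto a TestURLConnection.
public class Handler extends URLStreamHandler {
    @Override
    protected URLConnection openConnection(URL url) throws IOException {
        return new TestURLConnection(url); // constructor assumed
    }
}

The test setup then points the runtime at the package prefix, for example System.setProperty("java.protocol.handler.pkgs", "org.monkey.nelson"), so that the testurl protocol resolves to org.monkey.nelson.testurl.Handler.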
Second, provide a useful
TestURLConnection implementation. In this case, the actual
TestURLConnection is hidden from library users; instead, work is done via a helper class,
TestURLRegistry. This class has a single method,
TestURLRegistry.register(String, InputStream), which associates an input stream with a given URL. For example:
InputStream inputIS = ...;
TestURLRegistry.register("data", inputIS);
arranges things so that opening the URL
testurl:data returns the
inputIS input stream. With the registry, you can easily construct URLs whose output the test program controls. (Note that the input stream data is not copied, so in general, you can use each URL only once.)
Test with good data
We first use
testurl: to test the
formatURL() method with good input data:
testFormatURLGoodStream():
    InputStream dataIS = new ByteArrayInputStream(goodRSSData.getBytes());
    TestURLRegistry.register("goodData", dataIS);
    URL testURL = new URL("testurl:goodData");
    Writer w = new StringWriter();
    PrintRSS.formatURL(testURL, w);
    assertEquals(goodRSSOutput, w.toString());
The code above uses the same test data from the earlier test, but in this case, associates it to the
testurl:goodData URL. The
formatURL() method opens and reads that URL, and the test code ensures that the data is correctly read, formatted, and written to the
Writer.
Simulate failures
Writing programs that use the network is easy; writing code that correctly handles network failures proves harder. The
testurl: can simulate network failures—a major advantage to this approach.
Let's first simulate a failure in which a connection to the URL is simply impossible. Typically, that happens when the remote Web server isn't working. Our
testurl: library includes a special URL it understands,
testurl:errorOnConnect, which guarantees an
IOException thrown as soon as you connect to the URL. You can use that URL to test whether the program properly handles cases in which the URL cannot be reached:
testFormatURLNoConnection():
    URL noConnectURL = new URL("testurl:errorOnConnect");
    Writer w = new StringWriter();
    try {
        PrintRSS.formatURL(noConnectURL, w);
        fail("Should have thrown IO exception on connect.");
    } catch (IOException ioex) {
    }
The test above ensures that
formatURL() correctly throws an
IOException if the connection fails. But how do you test for a case where the network connection works at first and then fails during the transaction?
Because you have full control over the input stream associated with the test URL, you can employ any input stream implementation. As an example, you can create a simple
BrokenInputStream class that wraps an underlying stream, with a pass-through
read() method that allows only a certain number of bytes to be read before throwing an
IOException:
testFormatURLBrokenConnection():
    InputStream dataIS = new ByteArrayInputStream(goodRSSData.getBytes());
    InputStream testIS = new BrokenInputStream(dataIS, 99);
    TestURLRegistry.register("brokenStream", testIS);
    URL testURL = new URL("testurl:brokenStream");
    Writer w = new StringWriter();
    try {
        PrintRSS.formatURL(testURL, w);
        fail("Should have thrown IO exception on read.");
    } catch (IOException ioex) {
    }
In the code above, the
testurl:brokenStream URL is associated with an input stream that returns 99 bytes of good data, then throws an exception. In this case, the test only sees if an exception throws. For a more complex program, you could test that something useful was done with the first successfully read 99 bytes.
Expand on a network testing library
In this article, I demonstrated simple techniques for testing networked code. First, factor code so you can test the logic separately from the network. Second, use a few simple utility classes to provide a testing URL namespace and input streams that simulate network failures. With these techniques it becomes much simpler to test programs that use the network.
This approach, however, has some limitations. The
testurl: library is very simple; the
TestURLConnection does not support writing to a URL (as you would need for an HTTP
POST operation), nor does it support headers (such as those needed to read an HTTP MIME type). A more complete
URLConnection implementation with these features could test complex URL interactions such as SOAP (Simple Object Access Protocol) calls to Web services.
Moreover, this approach only simulates a network, but is not a real network connection. Robust Internet clients must cope with numerous bizarre behaviors including slow connections, hung sockets, and so on. To some extent, you can simulate such failures with more complex
InputStream implementations, but testing every network code aspect can only be done with real sockets. However, the techniques described in this article prove sufficient to test most situations that network programs encounter.
|
http://www.javaworld.com/article/2074444/testing-debugging/test-networked-code-the-easy-way.html
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
NAME
Tickit::Widgets - load several Tickit::Widget classes at once
SYNOPSIS
use Tickit::Widgets qw( Static VBox HBox );
Equivalent to
use Tickit::Widget::Static;
use Tickit::Widget::VBox;
use Tickit::Widget::HBox;
DESCRIPTION
This module provides an
import utility to simplify code that uses many different Tickit::Widget subclasses. Instead of a
use line per module, you can simply
use this module and pass it the base name of each class. It will
require each of the modules.
Note that because each Widget module should be a pure object class with no exports, this utility does not run the
import method of the used classes.
AUTHOR
Paul Evans <leonerd@leonerd.org.uk>
|
https://metacpan.org/pod/Tickit::Widgets
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
JavaFX applications are based on JavaFX's
Application class. Perhaps you are unfamiliar with this class and have questions about using
Application and on what this class offers your application code. This post attempts to answer these questions while exploring
Application.
Introducing Application
The
javafx.application.Application class provides a framework for managing a JavaFX application. This application must include a class that extends
Application, overriding various methods that the JavaFX runtime calls to execute application-specific code.
An application can call
Application methods to obtain startup parameters, access host services, arrange to launch itself as a standalone application, interact with the preloader (a small application that's started before the main application to customize the startup experience), and access the user agent (Web browser) style sheet.
Application life cycle
One of
Application's tasks is to manage the application's life cycle. The following overridable
Application methods play a role in this life cycle:
void init(): Initialize an application. An application may override this method to perform initialization before the application is started. Application's init() method does nothing.
void start(Stage primaryStage): Start an application. An application must override this abstract method to provide the application's entry point. The primaryStage argument specifies a container for the user interface.
void stop(): Stop an application. An application may override this method to prepare for application exit and to destroy resources. Application's stop() method does nothing.
The JavaFX runtime interacts with an application and invokes these methods in the following order:
- Create an instance of the class that extends Application.
- Invoke init() on the JavaFX Launcher Thread. Because init() isn't invoked on the JavaFX Application Thread, it must not create javafx.scene.Scene or javafx.stage.Stage objects, but may create other JavaFX objects.
- Invoke start() on the JavaFX Application Thread after init() returns and the JavaFX runtime is ready for the JavaFX application to begin running.
- Wait for the application to finish. The application ends when it invokes javafx.application.Platform.exit() or when the last window has been closed and Platform's implicitExit attribute is set to true.
- Invoke stop() on the JavaFX Application Thread. After this method returns, the application exits.
JavaFX creates an application thread, which is known as the JavaFX Application Thread, for running the application's
start() and
stop() methods, for processing input events, and for running animation timelines. Creating JavaFX
Scene and
Stage objects as well as applying scene graph modification operations to live objects (those objects already attached to a scene) must be done on the JavaFX Application Thread.
The
java launcher tool loads and initializes the specified
Application subclass on the JavaFX Application Thread. If there is no
main() method in the
Application class, or if the
main() method calls
Application.launch(), an instance of the
Application subclass is constructed on the JavaFX Application Thread.
The
init() method is called on the JavaFX Launcher Thread, which is the thread that launches the application; it's not called on the JavaFX Application Thread. As a result, an application must not construct a
Scene or
Stage object in
init(). However, an application may construct other JavaFX objects in the
init() method.
Listing 1 presents a simple JavaFX application that demonstrates this life cycle.
Listing 1.
LifeCycle.java
import javafx.application.Application;
import javafx.application.Platform;
import javafx.stage.Stage;

public class LifeCycle extends Application
{
   @Override
   public void init()
   {
      System.out.printf("init() called on thread %s%n", Thread.currentThread());
   }

   @Override
   public void start(Stage primaryStage)
   {
      System.out.printf("start() called on thread %s%n", Thread.currentThread());
      Platform.exit();
   }

   @Override
   public void stop()
   {
      System.out.printf("stop() called on thread %s%n", Thread.currentThread());
   }
}
Compile Listing 1 as follows:
javac LifeCycle.java
Run the resulting
LifeCycle.class as follows:
java LifeCycle
You should observe the following output:
init() called on thread Thread[JavaFX-Launcher,5,main]
start() called on thread Thread[JavaFX Application Thread,5,main]
stop() called on thread Thread[JavaFX Application Thread,5,main]
The output reveals that
init() is called on a different thread than
start() and
stop(), which are called on the same thread. Because different threads are involved, you may need to use synchronization.
If you comment out
Platform.exit(), you won't observe the
stop() called on thread Thread[JavaFX Application Thread,5,main] message because the JavaFX runtime won't invoke
stop() -- the application won't end.
Application parameters
Application provides the
Application.Parameters getParameters() method for returning the application's parameters, which include arguments passed on the command line, unnamed parameters specified in a JNLP (Java Network Launch Protocol) file, and <name,value> pairs specified in a JNLP file.
Application.Parameters encapsulates the parameters and provides the following methods for accessing them:
Map<String,String> getNamed(): Return a read-only map of the named parameters. The map may be empty but is never null. Named parameters include <name,value> pairs explicitly specified in a JNLP file, and any command-line arguments of the form:
--name=value.
List<java.lang.String> getRaw(): Return a read-only list of the raw arguments. This list may be empty but is never null. For a standalone application, it's the ordered list of arguments specified on the command line. For an applet or WebStart application, it includes unnamed parameters as well as named parameters. For named parameters, each <name,value> pair is represented as a single argument of the form
--name=value.
List<String> getUnnamed(): Return a read-only list of the unnamed parameters. This list may be empty but is never null. Named parameters (which are represented as <name,value> pairs) are filtered out.
Listing 2 presents a simple JavaFX application that demonstrates these methods.
Listing 2.
Parameters.java
import java.util.List;
import java.util.Map;
import javafx.application.Application;
import javafx.application.Platform;
import javafx.stage.Stage;

public class Parameters extends Application
{
   @Override
   public void start(Stage primaryStage)
   {
      Application.Parameters parm = getParameters();
      System.out.printf("Named parameters: %s%n", parm.getNamed());
      System.out.printf("Raw parameters: %s%n", parm.getRaw());
      System.out.printf("Unnamed parameters: %s%n", parm.getUnnamed());
      Platform.exit();
   }
}
Compile Listing 2 as follows:
javac Parameters.java
Run the resulting
Parameters.class as follows:
java Parameters a b c --name=w -name2=x --foo=y -foo=z -bar=q
You should observe the following output:
Named parameters: {foo=y, name=w}
Raw parameters: [a, b, c, --name=w, -name2=x, --foo=y, -foo=z, -bar=q]
Unnamed parameters: [a, b, c, -name2=x, -foo=z, -bar=q]
Host services
Application provides the
HostServices getHostServices() method for accessing the host services provider, which lets the application obtain its code and document bases, show a Web page in a browser, and communicate with the enclosing Web page using JavaScript when running in a browser.
The
javafx.application.HostServices class declares the following methods:
String getCodeBase(): Get the code base URI for this application. If the application was launched via a JNLP file, this method returns the codebase parameter specified in the JNLP file. If the application was launched in standalone mode, this method returns the directory containing the application JAR file. If the application is not packaged in a JAR file, this method returns the empty string.
String getDocumentBase(): Get the document base URI for this application. If the application is embedded in a browser, this method returns the URI of the Web page containing the application. If the application was launched in WebStart mode, this method returns the codebase parameter specified in the JNLP file (the document base and the code base are the same in this mode). If the application was launched in standalone mode, this method returns the URI of the current directory.
JSObject getWebContext(): Return the JavaScript handle of the enclosing DOM window of the Web page containing this application. This handle is used to access the Web page by calling from Java into JavaScript. If the application is not embedded in a Web page, this method returns
null.
String resolveURI(String base, String rel): Resolve the specified relative URI against the base URI and return the resolved URI. This method throws java.lang.NullPointerException when either the base or the rel strings are null. It throws java.lang.IllegalArgumentException when there is an error parsing either the base or rel URI strings, or when there is any other error in resolving the URI.
void showDocument(String uri): Open the specified URI in a new browser window or tab. The determination of whether it is a new browser window or a tab in an existing browser window will be made by the browser preferences. Note that this will respect the pop-up blocker settings of the default browser; it will not try to circumvent them.
Listing 3 presents a simple JavaFX application that demonstrates most of these methods.
Listing 3.
HostServ.java
import javafx.application.Application;
import javafx.application.HostServices;
import javafx.application.Platform;
import javafx.stage.Stage;

public class HostServ extends Application
{
   @Override
   public void start(Stage primaryStage)
   {
      HostServices hs = getHostServices();
      System.out.printf("Code base: %s%n", hs.getCodeBase());
      System.out.printf("Document base: %s%n", hs.getDocumentBase());
      System.out.printf("Web context: %s%n", hs.getWebContext());
      Platform.exit();
   }
}
Compile Listing 3 as follows:
javac HostServ.java
Run the resulting
HostServ.class as follows:
java HostServ
You should observe something similar to the following output:
Code base:
Document base: file:/C:/cpw/javaqa/article19/code/HostServ/
Web context: null
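Listing 3 doesn't exercise resolveURI() or showDocument(). Inside start(), a call could look like the following sketch (the URI strings here are only illustrative):

HostServices hs = getHostServices();
String uri = hs.resolveURI("http://www.javaworld.com/", "article/3057072");
hs.showDocument(uri); // opens the resolved URI in the default browser, subject to pop-up settings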
Launching a standalone application
A JavaFX application doesn't require a
main() method. The JavaFX runtime takes care of launching the application and saving command-line arguments. However, if you need to perform various tasks before the application is launched, you can specify a
main() method and have it invoke one of the following
static methods:
void launch(Class<? extends Application> appClass, String... args): Launch a standalone application, where appClass identifies the class that's constructed and executed by the launcher, and args identifies the command-line arguments that are passed to the application. This method doesn't return until the application has exited, either via Platform.exit() or by all of the application windows having been closed. It throws java.lang.IllegalStateException when invoked more than once, and throws IllegalArgumentException when appClass doesn't subclass Application.
void launch(String... args): Launch a standalone application. This method is equivalent to invoking the previous method with the Class object of the immediately enclosing class of the method that called launch().
Listing 4 presents a simple JavaFX application that demonstrates the second
launch() method.
Listing 4.
Launch.java
import javafx.application.Application;
import javafx.application.Platform;
import javafx.stage.Stage;

public class Launch extends Application
{
   @Override
   public void start(Stage primaryStage)
   {
      System.out.printf("start() called on %s%n", Thread.currentThread());
      Platform.exit();
   }

   public static void main(String[] args)
   {
      System.out.printf("main() called on %s%n", Thread.currentThread());
      Application.launch(args);
      System.out.printf("terminating");
   }
}
Compile Listing 4 as follows:
javac Launch.java
Run the resulting
Launch.class as follows:
java Launch
You should observe the following output:
main() called on Thread[main,5,main]
start() called on Thread[JavaFX Application Thread,5,main]
terminating
|
http://www.javaworld.com/article/3057072/learn-java/exploring-javafxs-application-class.html
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
Arduino based Human Interface Device
Part 1
Originally published at siliconrepublic.blogspot.com on August 5, 2010.
This summer I have been tinkering with my Arduino micro-controller and recently bought Wii Nunchuck in a quest to create a Wii Nunchuck based HID for playing games. Since the beginning of summer I have tried many different methods for doing so and here are some advantages and disadvantages of each:
AAC keys ():
This software is excellent and hassle-free for sending keypresses to your computer. You just have to use a command from your Arduino sketch such as Serial.print(“\033left.”); to send a keypress (in this case left arrow button). However, the disadvantage of using AAC keys is that keys are pressed and released. There is no way to press and hold down any key other than Shift, Ctrl and Alt. This isn’t good for playing games where you usually need more than three keys.
Gobetwino ():
Another great software that allows you to do a bunch of useful stuff such as opening/closing programs, sending emails, and sending keypresses to programs from your Arduino. It is also well documented. However, I had problems opening some games and holding down keys.
Java Robot class (java.awt.Robot):
This method really worked well, as I could hold down and release keys whenever I wanted to, and it worked with most games. I simply added this class to my Processing sketch in three lines:
import java.awt.AWTException;
import java.awt.event.InputEvent;
Created an object of the class: Robot ricky
And generated key-downs, key-ups and mouse mouse movements:
ricky.keyPress(KeyEvent.VK_I);
ricky.keyRelease(KeyEvent.VK_I);
ricky.mouseMove(x,y);
And my Arduino program sent the data from the Wii Nunchuck to the Processing sketch.
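Pulled together, a minimal Processing sketch using the Robot approach might look like the following. The key chosen and the timing are illustrative only; the real sketch reads the Nunchuck data arriving from the Arduino's serial port to decide which keys to press.

import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.KeyEvent;

Robot ricky;

void setup() {
  try {
    ricky = new Robot();           // may throw AWTException
  } catch (AWTException e) {
    e.printStackTrace();
  }
}

void draw() {
  ricky.keyPress(KeyEvent.VK_I);   // hold the key down...
  delay(50);
  ricky.keyRelease(KeyEvent.VK_I); // ...then release it
}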
However, this method did not work with some older games in compatibility mode, and it would definitely need some tweaking to work on different computers (like installing the Arduino driver and Processing).
Thus, my final attempt was to make a truly plug-n-play Arduino based "adapter" that would allow me to hook up my Wii Nunchuck and play games with it on any computer, with any game, in any OS. So I used the PS2dev library from the Arduino playground ()
which allows the Arduino to emulate a PS/2 device. I wrote code for emulating a PS/2 keyboard and it works great :D. Now you can play Paratroopers (image below) on your old IBM PC using the Wii Nunchuck over the PS/2 protocol or hook it up to your Altera DE2 for whatever you can think of ;)
As for me, I connected it up to a USB to PS/2 converter to play games. Pins 3 and 4 of the PS/2 port power the Arduino :) Here are some pictures of my build:
The joystick generates W,A,S,D based on position. The accelerometer roll and pitch generates I,J,K,L. The z-button generates Spacebar and the c-button generates Shift. Good enough for most games :) Mouse position can be controlled using numpad keys in most OSes, so that's not a problem either :)
I will be posting my code and a video for you to enjoy in my next blog-post. I hope this article is useful for all Arduino fans interested in making their own custom HIDs over the PS/2 or USB ports.
Originally published at siliconrepublic.blogspot.com on August 5, 2010.
|
https://medium.com/@rajarshir/arduino-based-human-interface-device-701e85a570d4
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
How much precision does a float have? It depends on the float, and it depends on what you mean by precision. Typical reasonable answers range from 6-9 decimal digits, but it turns out that you can make a case for anything from zero to over one hundred digits.
In all cases in this article when I talk about precision, or about how many digits it takes to represent a float, I am talking about mantissa digits. When printing floating-point numbers you often also need a couple of +/- characters, an ‘e’, and a few digits for the exponent, but I’m just going to focus on the mantissa.
Previously on this channel…
If you’re just joining us then you may find it helpful to read some of the earlier posts in this series. The first one is the most important since it gives an overview of the layout and interpretation of floats, which is helpful to understand this post.
- 1 to 3: earlier posts in this series, also about precision
- 4: Comparing Floating Point Numbers, 2012 Edition
- 5: Float Precision—From Zero to 100+ Digits (return *this;)
What precision means
For most of our purposes when we say that a format has n-digit precision we mean that over some range, typically [10^k, 10^(k+1)), where k is an integer, all n-digit numbers can be uniquely identified. For instance, from 1.000000e6 to 9.999999e6 if your number format can represent all numbers with seven digits of mantissa precision then you can say that your number format has seven digit precision over that range, where ‘k’ is 6.
Similarly, from 1.000e-1 to 9.999e-1 if your number format can represent all the numbers with four digits of precision then you can say that your number format has four digit precision over that range, where ‘k’ is -1.
Your number format may not always be able to represent each number in such a range precisely (0.1 being a tired example of a number that cannot be exactly represented as a float) but to have n-digit precision we must have a number that is closer to each number than to either of its n-digit neighbors.
This definition of precision is similar to the concept of significant figures in science. The number 1.03e4 and 9.87e9 are both presumed to have three significant figures, or three digits of precision.
Wasted digits and wobble
The “significant figures” definition of precision is sometimes necessary, but it’s not great for numerical analysis where we are more concerned about relative error. The relative error in, let’s say a three digit decimal number, varies widely. If you add one to a three digit number then, depending on whether the number is 100 or 998, it may increase by 1%, or by barely 0.1%.
If you take an arbitrary real number from 99.500… to 999.500… and assign it to a three digit decimal number then you will be forced to round the number up or down by up to half a unit in the last place, or 0.5 decimal ULPs. That 0.5 ULPs may represent a rounding error of anywhere from 0.5% at around 100 to just 0.05% at around 999. That variation in relative error by a factor of ten (the base) is called the wobble.
Wobble also affects binary numbers, but to a lesser degree. The relative precision available from a fixed number of binary digits varies depending on whether the leading digits are 10000 or 11111. Unlike base ten where the relative precision can be almost ten times lower for numbers that start with 10000, the relative precision for base two only varies by a factor of two (again, the base).
In more concrete terms, the wobble in the float format means that the relative precision of a normalized float is between 0.5/8388608 and 0.5/16777216.
The minimized wobble of binary floating-point numbers and the more consistent accuracy this leads to is one of the significant advantages of binary floating-point numbers over larger bases.
The variation in relative precision due to wobble is important later on and can mean that we ‘waste’ almost an entire digit, binary or decimal, when converting numbers from the other base.
Subnormal precision: 0-5 digits
Float numbers normally have fairly consistent precision, but in some cases their precision is significantly lower – as little as zero digits. This happens with denormalized, or ‘subnormal’, numbers. Most float numbers have an implied leading one that gives them 24 bits of mantissa. However, as discussed in my first post, floats with the exponent set to zero necessarily have no implied leading one. This means that their mantissa has just 23 bits, they are not normalized, and hence they are called subnormals. If enough of the leading bits are zero then we have as little as one bit of precision.
As an example consider the smallest positive non-zero float. This number’s integer representation is 0x00000001 and its value is 2^-149, or approximately 1.401e-45f. This value comes from its exponent (-126) and the fact that its one non-zero bit in its mantissa is 23 bits to the right of the mantissa’s binary point. All subnormal numbers have the same exponent (-126) so they are all multiples of this number.
The binary exponent in a float varies from -126 to 127
Since the floats in the range with decimal exponent -45 (subnormals all of them) are all multiples of this number their mantissas are (roughly) 1.4, 2.8, 4.2, 5.6, 7.0, 8.4, and 9.8. If we print them to one-digit of precision then we get (ignoring the exponent, which is -45) 1, 3, 4, 6, 7, 8, and 10. Since 2, 5, and 9 are missing that means that we don’t even have one digit of precision!
Since all subnormal numbers are multiples of 1.401e-45f, subsequent ranges each have one additional digit of precision. Therefore the ranges with decimal exponents -45, -44, -43, -42, -41, and -40 have 0, 1, 2, 3, 4, and 5 digits of precision.
Normal precision
Normal floats have a 24-bit mantissa and greater precision than subnormals. We can easily calculate how many decimal digits the 24-bit mantissa of a float is equivalent to: 24*LOG(2)/LOG(10) which is equal to about 7.225. But what does 7.225 digits actually mean? It depends whether you are concerned about how many digits you can rely on, or how many digits you need.
Representing decimals: 6-7 digits
Our definition of n-digit precision is being able to represent all n-digit numbers over a range [10^k, 10^(k+1)). There are about 28 million floats in any such (normalized) range, which is more than enough for seven digits of precision, but they are not evenly distributed, with the density being much higher at the bottom of the range. Sometimes there are not enough of them near the top of the range to uniquely identify all seven digit numbers.
In some ranges the exponent lines up such that we may (due to the wobble issues mentioned at the top) waste almost a full bit of precision, which is equivalent to ~0.301 decimal digits (log(2)/log(10)), and therefore we can only represent ~6.924 digits. In these cases we don’t quite have seven digits of precision.
I wrote some quick-and dirty code that scans through various ranges with ‘k’ varying from -37 to 37 to look for these cases.
FLT_MIN (the smallest normalized float) is about 1.175e-38F, FLT_MAX is about 3.402e+38F
My test code calculates the desired 7-digit number using double precision math, assigns it to a float, and then prints the float and the double to 7 digits of precision. The printing is assumed to use correct rounding, and if the results from the float and the double don’t match then we know we have a number that cannot be uniquely identified/represented as a float.
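As a rough sketch of that approach (this is not the author's actual test code; the function name and structure are mine), the inner check for one 7-digit value might look like this:

#include <cmath>
#include <cstdio>
#include <cstring>

// Sketch: does the 7-digit decimal 'digits' (1000000-9999999), placed in the
// range [10^k, 10^(k+1)), survive conversion to float? Format the double and
// the float to seven mantissa digits and compare the strings.
bool Survives7Digits(int digits, int k)
{
    double d = digits * pow(10.0, k - 6);   // the desired 7-digit number
    float f = (float)d;
    char fromDouble[32], fromFloat[32];
    sprintf(fromDouble, "%1.6e", d);
    sprintf(fromFloat, "%1.6e", (double)f);
    return strcmp(fromDouble, fromFloat) == 0;
}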
Across all two billion or so positive floats tested I measured 784,757 seven-digit numbers that could not be uniquely identified, or about 0.04% of the total. For instance, from 1.000000e9 to 8.589972e9 was fine, but from there to 9.999999e9 there were 33,048 7-digit numbers that could not be represented. It’s a bit subtle, but we can see what is happening if we type some adjacent 7-digit numbers into the watch window, cast them to floats, and then cast them to double so that the debugger will print their values more precisely:
One thing to notice (in the Value column) is that none of the numbers can be exactly represented as a float. We would like the last three digits before the decimal point to all be zeroes, but that isn’t possible because at this range all floats are a multiple of 1,024. So, the compiler/debugger/IEEE-float does the best it can. In order to get seven digits of precision at this range we need a new float every 1,000 or better, but the floats are actually spaced out every 1,024. Therefore we end up missing 24 floats for each set of 1,024. In the ‘Value’ column we can see that the third and fourth numbers actually map to the same float, shown circled below:
One was rounded down, and the other was rounded up, but they were both rounded to the closest float available.
At 8.589930e9 a float’s relative precision is 1/16777216 but at 8.589974e9 it is just 1/8388608
This issue doesn’t happen earlier in this range because below 8,589,934,592 (2^33) the float exponent is smaller and therefore the precision is greater – immediately below 2^33 the representable floats are spaced just 512 units apart. Because of this, when decimal precision loss happens it is always late in the range.
My test code showed me that this same sort of thing happens any time that the effective exponent of the last bit of the float (which is the exponent of the float minus 23) is -136, -126, -93, -83, -73, -63, -53, -43, -33, 10, 20, 30, 40, 50, 60, 70, or 103. Calculate two to those powers if you really want to see the pattern. This corresponds to just six digit precision in the ranges with decimal exponents -35, -32, -22, -19, -16, -13, -10, -7, -4, 9, 12, 15, 18, 21, 24, 27, and 37.
Therefore, over most ranges a float has (just barely) seven decimal digits of precision, but over 17 of the 75 ranges tested a float only has six.
Representing floats: 8-9 digits
The flip side of this question is figuring out how many decimal digits it takes to uniquely identify a float. Again, we aren’t concerned here with converting the exact value of the float to a decimal (we’ll get to that), but merely having enough digits to uniquely identify a particular float.
In this case it is the wobble in the decimal representation that can bite us. For some exponent ranges we may waste almost a full decimal digit. That means that instead of requiring ~7.225 digits to represent all floats we would expect that sometimes we would actually need ~8.225. Since we can’t use fractional digits we actually need nine in these cases. As explained in a previous post this happens about 30% of the time, which seems totally reasonable given our calculations. The rest of the time we need eight digits to uniquely identify a particular float. Use 9 to play it safe.
printf(“%1.8e”, f); ensures that a float will round-trip to decimal and back
It appears that a lot of people don’t believe that you can print a float as decimal and then convert it back to a float and get the same value. There are a lot of smart people saying things like “text extraction of floats drifts because of the ASCII conversions”. That is only true if:
- Your function that prints floats (printf?) is broken, or
- Your function that scans floats (scanf?) is broken, or
- You didn’t request enough digits.
In short, if you’re getting drift when you round-trip floats to decimal and back then either your C runtime has a bug, or your code has a bug. I use printf(“%1.8e”, f); with VC++ and I have tested that this round-trips all ~4 billion floats, with zero drift. If you want to test this with your own language and library tools then you can easily modify the sample code in the second post in this series to sprintf each number to a buffer and then sscanf it back in, to make sure you get back a bit-wise identical float. This is one time where a floating-point equality comparison is entirely appropriate.
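A minimal version of that round-trip check, using the printf format recommended above (a sketch only; NaNs would need separate handling since they never compare equal), could be:

#include <cstdio>

// Print one float with nine mantissa digits, scan it back, and verify that
// the identical float is recovered. Exact equality is appropriate here.
bool RoundTripsThroughDecimal(float f)
{
    char buffer[40];
    sprintf(buffer, "%1.8e", f);
    float back = 0.0f;
    sscanf(buffer, "%f", &back);
    return back == f;
}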
Precisely printing floats: 10-112 digits
There is one final possible meaning of precision that we can apply. It turns out that while not all decimal numbers can be exactly represented in binary (0.1 is an infinitely repeating binary number) we can exactly represent all binary numbers in decimal. That’s because 1/2 can be represented easily as 5/10, but 1/10 cannot be represented in binary.
It’s interesting to see what happens to the decimal representation of binary numbers as powers of two get smaller:
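For instance, the exact decimal values of the first few negative powers of two are (these values are exact, not rounded):

2^-1 = 0.5
2^-2 = 0.25
2^-3 = 0.125
2^-4 = 0.0625
2^-5 = 0.03125
2^-6 = 0.015625
2^-7 = 0.0078125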
Each time we decrease the exponent by one we have to add a digit one place farther along. We gradually acquire some leading zeroes, so the explosion in mantissa digits isn’t quite one-for-one, but it’s close. The number of mantissa digits needed to exactly print the value of a negative power of two is about N-floor(N*log(2)/log(10)), or ceil(N*(1-log(2)/log(10))) where N is an integer representing how negative our exponent is. That’s about 0.699 digits each time we decrement the binary exponent. The smallest power-of-two we can represent with a float is 2^-149. That comes from having just the bottom bit set in a subnormal. The exponent of subnormals floats is -126 and the position of the bit means it is 23 additional spots to the right and 126-23 = 149. We should therefore expect it to take about 105 digits to print that smallest possible float. Let’s see:
1.401,298,464,324,817,070,923,729,583,289,916,131,280,261,941,876,515,771,757,068,283,889,
791,082,685,860,601,486,638,188,362,121,582,031,25e-45
For those of you counting at home that is exactly 105 digits. It’s a triumph of theory over practice.
That’s not quite the longest number I could find. A subnormal with a mantissa filled up with ones will have seven fewer leading zeroes leading to a whopping 112 digit decimal mantissa:
1.175,494,210,692,441,075,487,029,444,849,287,348,827,052,428,745,893,333,857,174,530,571,
588,870,475,618,904,265,502,351,336,181,163,787,841,796,875e-38
Pow! Bam!
While working on this I found a bug in the VC++ CRT. pow(2.0, -149) fits perfectly in a float – albeit just barely – it is the smallest float possible. However if I pass 2.0f instead of 2.0 I find that pow(2.0f, -149) gives an answer of zero. So does pow(2.0f, -128). If you go (float)pow(2.0, -149), invoking the double precision version of the function and then casting to float, then it works. So does pow(0.5, 149).
Perversely enough powf(2.0f, -149) works. That’s because it expands out to (float)pow(double(2.0f), double(-149)).
Conveniently enough the version of pow that takes a float and an int is in the math.h header file so it’s easy enough to find the bug. The function calculates pow(float, -N) as 1/powf(float, N). The denominator overflows when N is greater than 127, giving an infinite result whose reciprocal is zero. It’s easy enough to work around, and will be noticed by few, but is still unfortunate. pow() is one of the messier functions to make both fast and accurate.
How do you print that?
The VC++ CRT, regrettably, refuses to print floats or doubles with more than 17 digits of mantissa. 17 digits is enough to uniquely identify any float or double, but it is not enough to tell us precisely what value they contain. It is not enough to give us the 112 digit results shown in the previous section, and it's not enough to truly appreciate the sin(pi) trick explained last time. So, we'll need to roll our own.
Printing binary floating-point numbers efficiently and accurately is hard. In fact, when the IEEE spec was first ratified it was not yet a solved problem. But for expository purposes we don’t care about efficiency, so the problem is greatly simplified.
It turns out that any float can be represented as a fixed-point number with 128 bits in the integer part and 149 bits in the fractional part, which we can summarize as 128.149 format. We can determine this by noting that a float’s mantissa is a 1.23 fixed-point number. The maximum float exponent is 127, which is equivalent to shifting the mantissa left 127 positions. The minimum float exponent is -126, which is equivalent to shifting the mantissa right 126 positions.
shift up to 127 positions this way <— 1.000 000 000 000 000 000 000 00 —> shift up to 126 positions this way
Those shift amounts of our 1.23 mantissa mean that all floats can fit into a 128.149 fixed-point number, for a total of 277 bits.
All we need to do is create this number, by pasting the mantissa (with or without the implied leading one) into the correct location, and then convert the large fixed-point number to decimal.
Converting to decimal is done by two main steps. The integer portion is converted by repeatedly dividing it by ten and accumulating the remainders as digits (which must be reversed before using). The fractional part is converted by repeatedly multiply by ten and accumulating the overflow as digits. Simple. All you need is a simple high-precision math library and we’re sorted. There’s also some special-case checks for infinity, NaNs (print them however you want), negatives, and denormals, but it’s mostly quite straightforward. Here’s the conversion code:
/* See the linked post for the potential portability problems
   with the union and bit-fields below.
*/
#include <stdint.h>   // For int32_t, etc.
#include <string>     // For std::string
#include <algorithm>  // For std::reverse
#include <stdio.h>    // For sprintf_s
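The Float_t helper union itself isn't reproduced above. A minimal reconstruction, written only to match the constructor and accessors that PrintFloat() calls below (the bit-field part is included because the comment above refers to it), would be:

union Float_t
{
    Float_t(float num = 0.0f) : f(num) {}
    bool Negative() const { return (i >> 31) != 0; }
    int32_t RawMantissa() const { return i & ((1 << 23) - 1); }
    int32_t RawExponent() const { return (i >> 23) & 0xFF; }

    int32_t i;
    float f;
    struct
    {   // Bit-field layout is implementation defined, hence the portability caveat.
        uint32_t mantissa : 23;
        uint32_t exponent : 8;
        uint32_t sign : 1;
    } parts;
};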
std::string PrintFloat(float f)
{
// Put the float in our magic union so we can grab the components.
union Float_t num(f);
// Get the character that represents the sign.
const std::string sign = num.Negative() ? "-" : "+";
// Check for NaNs or infinity.
if (num.RawExponent() == 255)
{
// Check for infinity
if (num.RawMantissa() == 0)
return sign + "infinity";
// Otherwise it’s a NaN.
// Print the mantissa field of the NaN.
char buffer[30];
sprintf_s(buffer, "NaN%06X", num.RawMantissa());
return sign + buffer;
}
// Adjust for the exponent bias.
int exponentValue = num.RawExponent() - 127;
// Add the implied one to the mantissa.
int mantissaValue = (1 << 23) + num.RawMantissa();
// Special-case for denormals – no special exponent value and
// no implied one.
if (num.RawExponent() == 0)
{
exponentValue = -126;
mantissaValue = num.RawMantissa();
}
// The first bit of the mantissa has an implied value of one and this can
// be shifted 127 positions to the left, so that is 128 bits to the left
// of the binary point, or four 32-bit words for the integer part.
HighPrec<4> intPart;
// When our exponentValue is zero (a number in the 1.0 to 2.0 range)
// we have a 24-bit mantissa and the implied value of the highest bit
// is 1. We need to shift 9 bits in from the bottom to get that 24th bit
// into the ones spot in the int portion, plus the shift from the exponent.
intPart.InsertLowBits(mantissaValue, 9 + exponentValue);
std::string result;
// Always iterate at least once, to get a leading zero.
do
{
int remainder = intPart.DivReturnRemainder(10);
result += '0' + remainder;
} while (!intPart.IsZero());
// Put the digits in the correct order.
std::reverse(result.begin(), result.end());
// Add on the sign and the decimal point.
result = sign + result + '.';
// We have a 23-bit mantissa to the right of the binary point and this
// can be shifted 126 positions to the right so that’s 149 bits, or
// five 32-bit words.
HighPrec<5> frac;
// When exponentValue is zero we want to shift 23 bits of mantissa into
// the fractional part.
frac.InsertTopBits(mantissaValue, 23 - exponentValue);
while (!frac.IsZero())
{
int overflow = frac.MulReturnOverflow(10);
result += '0' + overflow;
}
return result;
}
Converting to scientific notation and adding digit grouping is left as an exercise for the reader. A Visual C++ project that tests and demonstrates this and includes the missing HighPrec class and code for printing doubles can be obtained at:
Practical Implications
The reduced precision of subnormals is just another reason to avoid doing significant calculations with numbers in that range. Subnormals exist to allow gradual underflow and should only occur rarely.
Printing the full 100+ digit value of a number is rarely needed. It’s interesting to understand how it works, but that’s about it.
It is important to know how many mantissa digits it takes to uniquely identify a float. If you want to round-trip from float to decimal and back to float (saving a float to an XML file for instance) then it is important to understand that nine mantissa digits are required. I recommend printf(“%1.8e”, f), and yes, this will perfectly preserve your floats. This is also important in debugging tools, and VS 2012 fixed the bug where the debugger only displays 8 mantissa digits for floats.
It can also be important to know what decimal numbers can be uniquely represented with a float. If all of your numbers are between 1e-3 and 8.58e9 then you can represent all seven digit numbers, but beyond that there are some ranges where six is all that you can get. If you want to round-trip from decimal to float and then back then you need to keep this limitation in mind.
Until next time
Next time I might cover effective use of Not A Numbers and floating-point exceptions, or general floating-point weirdness, or why float math is faster in 64-bit processes than in 32-bit /arch:SSE2 projects. Let me know what you want. I’m having fun, it’s a big topic, and I see no reason to stop now.
Doubles and other good ideas
%1.8e works neatly for printing a float with nine digits, but it always prints an exponent, whether it is needed or not. Using %g tells printf to use exponents only when needed, thus saving some space, and improving clarity, but reducing consistency. It’s your choice. Here then are the known alternatives for printing round-trippable floats and doubles.
printf("%1.8e\n", d); // Round-trippable float, always with an exponent
printf("%.9g\n", d); // Round-trippable float, shortest possible
printf("%1.16e\n", d); // Round-trippable double, always with an exponent
printf("%.17g\n", d); // Round-trippable double, shortest possible
Aside: a double (53-bit mantissa) requires 17 digits of mantissa to uniquely identify all values.
A long double (64-bit mantissa) as used in the x87 register set requires 21 digits of mantissa to uniquely identify all values. Unfortunately the VC++ registers window only displays 17 digits for the x87 register set, making many values indistinguishable if you intentionally or accidentally generate x87 register values with more than 53 bits of mantissa, as can happen with fild.
Could you please elaborate the following a bit more?
“The number of mantissa digits needed to exactly print the value of a negative power of two is about N-floor(N*log(2)/log(10)), or ceil(N*(1-log(2)/log(10))) where N is an integer representing how negative our exponent is. “
The N*log(2)/log(10) part is the ‘standard’ calculation of how many decimal digits it takes to represent a base-2 number with N digits. For a positive number with N digits you would apply ceil() to that and be done. In this case we are using that formula to estimate how many leading zeroes you get when you convert 2^(-N) to decimal. I don’t have an elaborate proof, just some intuition and experimentation.
Meanwhile, the number of digits to the right of the decimal place (including zeroes) is simply N. So the calculation is DigitsToRightOfDecimal – LeadingZeroes or N-floor(N*log(2)/log(10)).
This article offers some related perspective:
Thanks a lot for the lengthy article, great info.
The only problem I have is: why do we have to measure precision in terms of conversion from binary to decimal?
We work in binary, and convert only to print the numbers... don't we?
Much of the information in this post is just curiosities, but it can be relevant. If floating-point data is stored in text form (handy for readability) then it is important to know how many digits must be used to avoid loss of information. Many products have made the mistake of just storing or displaying eight digits of mantissa, not realizing that sometimes this is insufficient. If my article convinces a few developers (the Visual Studio team!) to display the extra digit then it is worthwhile.
Also, when debugging extremely delicate calculations it can be handy to print the *actual* value of a number, in order to better understand why a calculation is going awry.
Hi
As said in another comment by Fabian people should start using printf(“%a”), as it’s format the exact hexadecimal representation of the double value, which is going to be read back using scanf(“%la”). It will be the same, exact, identical, value regardless of the architecture, endianness, compiler, operating system. By the time, you find the representation usable, readable and even be able to compare them without converting them to base 10.
Regards.
This is exactly what i was looking for, Thanks!
Do you think you’ll be showing an example of converting from a string to a float in another article? I don’t like using my compilers sprintf_s and sscanf_s functions because they don’t work the way i need them too, so i have created my own print and scan functions. The only ones that are left for me to create is ScanFloat and ScanDouble. Please e-mail me, ill be happy to work something out!
Don't try writing your own string to float conversion functions — getting them right is famously difficult. For discussion of the many errors found in various conversion routines, and for links to well tested conversion routines, see:
Or, just call printf/scanf to do the float conversion for you.
Thanks, I'll check that out!
the PrintFullFloats.zip link is broken 😦
The link works for me. I know it doesn’t work from some work networks, so that might be the problem. Or the ISP that hosts that link might have had a blip — they do that occasionally. So, try again and/or try from home.
Bruce, the only reason guys like me can get on with writing awesome code, is because of guys like you. I bow to you sir, for treading these dark paths on behalf of all us lesser coders who merely follow, map in hand.
Thank you for a really interesting article.
I’m working in Flash/ActionScript, building a scientific calculator. Got to experimenting with floating-point numbers (AS only supports Double-precision floating point format, and 32-bit integers). First I was checking for a way to implement your version of nearly equal comparison function with ULPs. I made a class that write the double Number into a ByteArray then reads it back as two 32-bit integers, giving me access to the bits. Then I got curious if I can implement this digit-printing routine, so I ported your code to AS. Since I don’t have access to a 64-bit type you use for “Product_t”, I had to break every 32-bit uint into two 16-bit ones (still using 32-bit uints to store them), and have twice as many steps for division and multiplication. Amazingly, it worked correctly on the first run. Besides these lines which I can’t understand, for the life of me. Is this a typo or something?
result += '0' + remainder;
And
result += '0' + overflow;
What’s the ‘0’ for? If I run my code with “‘0’ +”, my results have an extra 0 next to every expected digit!
I can see how the ‘0’ + overflow paradigm could be confusing. The trick is that in C/C++ single quotes are different from double quotes. A single-quoted character is actually an integer constant whose value is the character code of the character. So, when overflow is added to the character constant for zero we get a character constant for one of the digits from zero to nine. That character constant is then added to the string.
It has to be done that way because automatic conversion from integers to strings is considered a bad thing in C/C++. It can be supported through operator overloading, but std::string does not support it.
So, in languages where you can just add overflow to a string and have it converted to its ASCII representation by the language the ‘0’ + sequence is indeed unnecessary.
Hey Bruce,
Thanks for your reply. Of course, I get it now. I used to program in C/C++ about 10 years, but since then I’ve gotten too much used to thinking about single and double quotes in the same way!
I've done some optimization to your code that I ported to ActionScript. For division, I skip the leading 0 words before doing the main for loop. For multiplication, I skip the trailing 0 words. Since I have to stay within a 32-bit unsigned integer for every operation, I have to operate with 16-bit values, so my multiply and divide can only accept a number under 2^16. So when I get the digits, I divide and multiply by 10000 instead of 10; that way I'm getting 4 digits on each divide and multiply call. Together all of this makes the printing code over 4 times faster, which is important in an interpreted language such as ActionScript.
All of this started kind of like an exercise and exploration, inspired by reading your series of articles on floating point. Thanks for interesting reads! But during reading, I’ve discovered that the native number printing functions in ActionScript perform rounding incorrectly (and inconsistently), so I’m gonna adapt this printing code for use in my upcoming scientific calculator.
I want to ask you a question about a topic really interesting to me, I cant’t find much information on. It’s about rounding for printing purposes. I want my scientific calculator to mimic a Casio FX series scientific calculator. Example:
A user enters a value of 0.0012345 and presses “=” button. Internally, I get a Number (double) of 1.2344999999999999e-3 – the closest representation of the number entered.
If the calculator is in the “normal 2” display mode, it should display 0.0012345 on it’s 10-digit display. I do that by rounding half away from zero to 9 fractional digits. But the problem is, on Casio FX the normal mode should actually truncate the extra digits: e.g. 0.00123455599999 displays 0.001234555, not 0.001234556. But if I truncate the digits, 0.0012345 in my software calculator outputs 0.001234499. So what I’m trying now – I print 17 digits of the number as a string (no decimal point), then see if the last digits is >= 5 or not, and decide whether to round up or down. I perform the rounding by manipulating the digits as the characters of my string, and end up with a string, containing my number rounded to 16 significant digits. I then round second time (truncate for the normal mode) to a number of digits I need to fit on the display. Is that the way to go? Surely double rounding can create incorrect results, in theory. However with my choice of first rounding to 16 digits I’m just trying to get rid of inaccurate digits. Does that make sense? If not, what could be done here? Casio sure figured out some way.
Actually since reading your articles I’ve tried many things on that scientific calculator, and found out there must be many clever “hacks” they use to try and look more true to actual math than the floating point format is used in their hardware.
For example, I’ve really enjoyed your sin(pi)+pi idea. On the Casio, however, there is some hack about that. Of course, they want sin(pi) to return exactly 0. So:
sin(pi)=0
sin(pi+1e-11)=0
sin(pi+1e-10)=-1e-10
sin(32*pi)=1.2e-10
On the other hand:
sin(1e-11)=1e-11
sin(1e-99)=1e-99
sin(1e-5)-1e-5=0 // Actually ~ -1.667e-16 – not that small!
So it looks like there’s a lot of hardcoded conditions in sin() of that calculator, like
if (x is nearly 0)
return x;
else if (x is nearly pi)
return 0;
else
return actual sin(x);
I guess same idea is used for tan(). On many software calculators, calculating tan(pi/2) is notorious for resulting in ~ 1.633e+16 instead of the expected division by zero error. Casio, however, correctly signals a Math Error on tan(pi/2). However, trying tan(21*pi/2) results in a number less than 1e10.
I’m considering whether I should follow the Casio way and use a bunch of clever hacks, nearlyEqual checks and special cases for every math function supported, or just offer the user the actual results produced, with all the limitations of the double precision?
Do you have suggestions on better ways to approach this issue?
Appreciate your input!
Generating many digits at a time (by dividing by 10000) is a great optimization. In Fractal eXtreme () I have to print and convert numbers that may be thousands of digits long so I divide by 1000000000 to get nine digits at a time.
Clever hacks to get the ‘correct’ numbers will probably lead to madness. For instance, the Windows calculator thinks that sqrt(4)-2 is non-zero, which is pretty tragic. I think you have two sane choices:
1) Correctly rounded double precision with wise choices about how many digits to print.
2) Correctly rounded base-10 floating-point. This is a bit more work — especially getting it correctly rounded in all cases — but this will sometimes give more intuitive results because, for instance, 1/5 can be represented. Average users don’t want to know about 1/5th being a repeating fraction, and they shouldn’t have to care.
Just my opinion of course.
Fractal eXtreme looks awesome!
sqrt(4) – 2 being not zero is definitely one of the crazy things I’d like to try to avoid in my calculator. But how…
What exactly do you mean by “correctly rounded”? I can control rounding when I’m printing numbers, and I guess I might have to write my own parseFloat() to parse numbers in the user input, but I have no control on rounding during calculations.
I’ve read about decimal64 format, is that what you mean? Do you think is it possible/feasible to implement in software (in ActionScript)?
Thanks for your opinion!
The basic operations (+, -, *, /, and square-root) in IEEE floating-point math are defined to be correctly rounded. With the default rounding mode of round-to-nearest-even that means that the answer that they return is required to be the closest possible result to the infinitely correct result. In other words it means that their results are perfect (given the finite precision of the formats). When correctly rounded sqrt(4) will give exactly 2. Microsoft’s calculator calculates sqrt(4) to 40 digits, which is impressive, and is more precision than IEEE doubles. But, it rounds incorrectly so sqrt(4) is wrong in the 40th digit. Oops.
Since IEEE sqrt is correctly rounded, sqrt(4) gives exactly 2.0. That means that fsqrt(4.0f), which only has ~7 digits of precision, ends up working better than the 40 digits of Microsoft’s calculator. The importance of getting the correct result (correctly rounded) was recognized decades ago, but Microsoft’s calculator reinvented the wheel and made it square.
decimal64 is one possibility. You can implement the basic math operations of base-10 math in any programming language. Getting correctly rounded +, -, *, / and sqrt isn’t too hard. Getting correctly rounded sin/cos/tan is *extremely* hard so you may need to punt on those and just go for ‘close enough’.
For another take on printing floating-point numbers see this excellent post:
Thanks for the link and for your knowledge. I guess I'll just have to read the IEEE standard. I'm sure I can implement the basic arithmetic operations in base-10; however, since I'm making a scientific calculator it's gonna support a lot more than that, implementing everything is gonna take a while, and I'm not sure the performance in ActionScript will be adequate. It's one thing to use this kind of math just for printing the result of the calculation, another thing to use it for all the calculations. I guess I'll just make the best use of the built-in Number type, and see how it goes. But surely implementing a base-10 math library would be quite fun. is another good (and recent) link. 🙂
The description looks promising, but paying $15 to read it is annoying.
Miguel, thanks for that reference! I’ve found the same article is available for free on the author’s university website:
I’ve written some testing code to check the number of reliable (accurate) digits for doubles.
The results match up with the theory you’ve explained. For normal numbers, a double has 53 bits of mantissa (1 bit implied), equivalent to around 15.955 decimal digits. Accounting for wobble, we can rely on 15 (15.655) digits in an arbitrary double, and require 17 (16.955) digits to round-trip an arbitrary double to decimal and back. My testing code confirmed this.
However, this got me thinking, so I’ve got a question for you.
Given an arbitrary double, is there a practical and easy way to determine (for a normal or subnormal number):
1) How many digits of this number are accurate?
2) How many digits of this number are required to make a successful round-trip to decimal and back?
For example I want a function that prints numbers with as much precision as necessary to uniquely identify them, but no more. So for an arbitrary double argument, my printing function should determine the minimum number of digits to print a round-trippable decimal.
My idea so far is (for normal numbers, haven’t considered subnormals yet):
1) Print the number itself, and the next and prior numbers 1 ULP away, to 17 digits, respectively into variables a, b, c. The numbers will have N matching leading digits, compare the (N+1)th digits of a, b, c. If they are in sequence (e.g. 5, 6, 7), the (N+1)th digit is reliable, so there are (N+1) reliable digits, otherwise (e.g. 4, 6, 7 or 5, 6, 8), there are N reliable digits.
2) Print the number itself, and the next and prior numbers 1 ULP away, to 16 digits, respectively into variables a, b, c. Compare the last digits of a, b, c. If they are all different (e.g. 5, 6, 7), 16 digits is enough precision to make a round-trip, otherwise (e.g. 6, 6, 7 or 5, 6, 6) 17 digits are required to make a round-trip.
However this way is not very fast, so I wonder how to approach it from a math standpoint.
Thanks!
You could try approaching it mathematically but you would then be at increased risk of bugs in various CRT implementations causing failures. I would recommend just using printf(“%.17g\n”, d); which should generally give the shortest round-trippable double, or close enough anyway.
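A sketch of that recommendation in Java (which, like a good CRT, parses and formats correctly rounded); the brute-force way to minimize digits is simply to try 15, 16 and then 17 significant digits and keep the first count that round-trips:

public class RoundTrip {
    // Smallest number of significant digits (15..17) that round-trips a finite double.
    static String shortest(double d) {
        for (int digits = 15; digits <= 17; digits++) {
            String s = String.format("%." + (digits - 1) + "e", d); // 'digits' significant figures
            if (Double.parseDouble(s) == d) {
                return s;
            }
        }
        throw new AssertionError("17 digits always round-trip a finite double");
    }

    public static void main(String[] args) {
        System.out.println(shortest(0.1));       // 15 digits suffice
        System.out.println(shortest(Math.PI));   // needs 16 digits
        System.out.println(shortest(0.1 + 0.2)); // needs all 17 digits
    }
}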
I don't know what you mean by CRT.
That’s what I’m doing at the moment, having implemented my own printf in ActionScript.
However, in some cases a number which (omitting the exponent) should print like 5 (and is printed like this by ActionScript’s built-in toString()), is printed as 5.000…0001e. Checking the numbers around it, I can see that printing it to 16-digit would still be unambiguous.
By CRT I mean C RunTime library and by that I mean whatever language support library implements floating-point print functions.
Let me know if you figure out an elegant solution for minimizing digits. In my case I am often more concerned with consistency so I’m happy with a fixed number of digits.
I see. I'm thinking the question "how many digits are necessary to print to uniquely identify a number?" is related to the implementation of scanf (or parseFloat in the case of ActionScript). How many digits are needed for a particular number to be correctly parsed from a string? After some testing I've found out that ActionScript's parseFloat is even more inaccurate than its number-printing functions. For example, when I use the literal number 9.00000000000005e-235 in my AS code, and print it to 21 digits (maximum that AS allows), it's 9.00000000000004990276e-235. But doing a parseFloat('9.00000000000005e-235'), printed to 21 digits, is 9.00000000000004431595e-235, which is 4 ULPs away from the correct nearest representable value.
That means I can forget about round-trippable doubles in AS! Unless I write my own parseFloat implementation. I've heard advice to never "roll your own" scanf because there are really many issues. I could use the output of the AS parseFloat, print it with high enough precision, go 1 ULP towards the original number, print that one, then subtract the strings to find out roughly how many ULPs away the parsed number is, then add that many ULPs to that number. This might be more effective than trying to make my own decimal-to-binary converter?
In *theory* the question is only dependent on the design of the IEEE floats and doubles. In practice it *may* be dependent on how well scanf/ParseFloat are implemented.
That is *quite* disappointing that parseFloat gives different results from the compiler’s parser. You should file a bug about that.
It’s quite frustrating indeed. I’ve filed a bug about that, but Adobe is not quick at all about checking them. I guess if it’s not concerning mobile game development, which Adobe is focusing on for Flash platform these days, they might not care much about that for a long time. I’ll see what I can do by myself. After all, my scientific calculator wouldn’t care much about parseFloat speed, when evaluating a formula all that is necessary is just parsing a few floats once.
I’ve implemented a parseNumber function which uses parseFloat to get an initial candidate double. I then print the candidate digits (to 24 digits) and calculate the distance from the number in the string to parse, using my very limited high-precision big decimal class which stores numbers as a sign, a normalized decimal digits string (with decimal point implied after the first digit), and an exponent. I advance the candidate 1 ULP in the direction of the goal number, getting a second candidate. I calculate the difference between the first candidate and the new number, which gives me the magnitude of 1 ULP. I then divide the distance from the first candidate to the goal number, to find out how many ULPs away is it. If the number of ULPs can be rounded to the nearest integer with enough certainty, just advance the number by the rounded number of ULPs (-1 since already advanced one ULP), otherwise (number of ULPs is nearly half-way between two integers), get both candidates and choose the one whose absolute distance to the goal number is smaller. If the distances are equal, I chose the number closer to zero.
It's much slower than the original parseFloat, but produces results identical to the compiler (at least for the numbers I've tested it on). For a scientific calculator I guess it's worth using.
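For illustration, here is a rough Java translation of that correction idea, using BigDecimal in place of a hand-rolled big-decimal class (Java's Double.parseDouble is already correctly rounded, so the nudging loop is normally a no-op here; it only sketches the mechanism):

import java.math.BigDecimal;

public class CorrectedParse {
    static double parseNumber(String text) {
        double candidate = Double.parseDouble(text);   // whatever the platform parser returns
        if (!Double.isFinite(candidate)) {
            return candidate;
        }
        BigDecimal goal = new BigDecimal(text);        // exact decimal value of the input string
        while (true) {
            boolean goUp = new BigDecimal(candidate).compareTo(goal) < 0;
            double next = Math.nextAfter(candidate,
                    goUp ? Double.POSITIVE_INFINITY : Double.NEGATIVE_INFINITY);
            if (!Double.isFinite(next)) {
                return candidate;
            }
            BigDecimal errNow  = new BigDecimal(candidate).subtract(goal).abs();
            BigDecimal errNext = new BigDecimal(next).subtract(goal).abs();
            if (errNext.compareTo(errNow) < 0) {
                candidate = next;                      // one ULP closer to the true value
            } else {
                return candidate;                      // no neighbouring double is closer
            }
        }
    }

    public static void main(String[] args) {
        String s = "9.00000000000005e-235";
        // With a correctly rounded parser the loop never fires and this prints true.
        System.out.println(parseNumber(s) == Double.parseDouble(s));
    }
}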
A similar approach could be used to determine how many digits are necessary to uniquely identify a particular number, but the performance will be slow as well. For my application it's not really a necessary feature so I'm gonna forget about it for now. I still think this question could be answered mathematically if we're printing the digits ourselves (like your example high-precision digit printing code above), but I can't wrap my head around it yet.
I think my math routines are now sufficient to continue on to the actual UI of my scientific calculator. Your blog and your comments have been a real help, Bruce. I can honestly say that if I hadn't found your articles, I wouldn't have implemented these features at all, and would still be frustrated that I can't print or round some numbers correctly.
Thanks.
–Gene
Sounds great! I recommend testing with known-hard numbers (see) and with randomly generated 64-bit bit-patterns if you want to look for possible bugs.
Good idea, and thanks for interesting link!
So parseFloat failed on three of the numbers I've got from there (expected result on the left, actual result on the right). At least it didn't freeze (like some old versions of Java and PHP used to do, according to that website)…
assertion failed: 6.92949564460092000000e+15 == 6.92949564460091900000e+15
assertion failed: 2.22507385850720138309e-308 == 2.22507385850719990089e-308
assertion failed: 2.22507385850720088902e-308 == 2.22507385850719990089e-308
My correction function got all these numbers right.
I made a random double generator, for every number I print it to 17 digits using my own printing function, then convert back to number using parseFloat and my correction function.
First I checked the subnormal number range, interestingly parseFloat() worked correctly from the smallest subnormal number to about 3.1e-309. Starting around there, the number of inaccurate results goes up sharply.
I'm really curious what's special about this number; looking at its bit pattern it doesn't look like some interesting edge case!
So I hardcoded 3e-309 as the lower limit for my correction function – below that it just returns parseFloat().
Running the test code with randomly generated doubles I can see how many inaccurate results parseFloat() generates. And it happens at any exponent value: the further the exponent is from 0, the more inaccurate the result; when abs(exponent) is around 300, the results of parseFloat() are off by as much as 6 ULPs (the most I've seen so far). My correction function so far produces accurate results; I will leave the test routine running overnight…
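A Java sketch of that kind of fuzz test, generating random 64-bit patterns and checking the 17-digit round trip (it assumes a correctly rounded parser and formatter, which Java provides, so it should report zero failures):

import java.util.Random;

public class RoundTripFuzz {
    public static void main(String[] args) {
        Random rng = new Random();
        int failures = 0;
        for (long i = 0; i < 1_000_000; i++) {
            double d = Double.longBitsToDouble(rng.nextLong()); // random 64-bit pattern
            if (Double.isNaN(d)) {
                continue;                      // NaN never compares equal to itself
            }
            String printed = String.format("%.16e", d);         // 17 significant digits
            if (Double.parseDouble(printed) != d) {
                failures++;
                System.out.println("round-trip failed for " + printed);
            }
        }
        System.out.println(failures + " failures");
    }
}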
Bruce,
I’ve read this (excellent) article before but was looking at it again while doing my own research…
If you had tested the full decimal exponent range (-38 to 38, instead of just -37 to 37) you would have found failures in the -38 range: 9.403961e-38, for example, converts to 9.40396153281…e-38, which round trips back as 9.403962e-38.
I also found one failure technically in the 28 range: 1e28. It converts to 9.9999994421…e27, which round trips back as 9.999999e27. It is the only “1e+/-n” input bordering a failing range to be so unlucky. (I would imagine in double-precision there would be more such examples.)
Hmmm. That sounds very odd. I assume you are referring to round-tripping of numbers printed with %1.8e? I tested all non-NaN numbers (I looped through the integer representations) and when I tested they all round-tripped in VC++ (from float to text and then back to the same float).
What VC++ version were you using? I probably used VC++ 2010.
I am talking about the other direction: text to float to text.
I had run this on VS 2010 but just repeated it on VS 2015. This code should prove the point:
float f = 1e28f;
printf("%1.6e\n", f); // Prints 9.999999e+27
This is the correct behavior and not a bug.
Sorry I wasn’t thinking — this is just an artifact of my test program. It blindly prints everything to 7 digits. 1e28 is one digit and of course would print OK to 6 digits. But my comment on the -38 range still stands.
Wait. Viewed as 1.000000e28 it is 7 digits, and it does not round trip. So I stand by my original comment, I wonder why your test program did not catch this.
What is the hexadecimal representation of the number, and what compiler version are you seeing a failure to round-trip with? ’cause I just checked and all numbers in that range work fine for me. I go:
sprintf_s(buffer, "%1.8e", start.f);
then:
int count = sscanf_s(buffer, "%f", &fRead);
and verify that I get the same number back. 100% success, with VS 2013.
It seems like we tested this in different ways, somehow getting almost identical results in identifying the less than 7-digit precision decimal exponent ranges: -35, -32, -22, -19, -16, -13, -10, -7, -4, 9, 12, 15, 18, 21, 24, 27, and 37. I detected those plus some in the -38 decimal exponent range (which you did not test) and one in the 28 decimal exponent range (which your test did not discover). I generated all 7 digit decimal strings, converted them to floats, and then converted them back to strings (%1.6e”). I had thought this was the way you’d done it as well. (I also did tests in the other direction — the “greater than 8-digit precision test” — where I loop through all floats, convert to text with “%1.7e”, and then convert back. But I did not comment on that part.) I don’t know how you tested both directions originating from floats and using “%1.8e”.
Sorry, I was just being dense. I never actually tested the round-tripping to seven digits. That was just theory. Interesting that it fails. Is that a breakdown in theory, or (I’m assuming) a bug in the VC++ CRT?
It’s not a bug, and it matches theory (but unexpected at first glance). 1e28 is at a border where the decimal gap size (for 7 digits) exceeds the binary gap size (for 24 bits) again, so it looks like it should have 7 digits of precision. But the closest float to 1e28 is less than 1e28, so that brings it into a “doesn’t have 7 digits” range at exponent 27. (This scenario sets up at several other power of 10 borders, e.g. 1e38, but the resulting conversions happen to be close enough to round back correctly to 7 digits.)
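The gap-size explanation is easy to check directly; a Java sketch mirroring the VC++ snippet above:

public class Float1e28 {
    public static void main(String[] args) {
        float f = 1e28f;                 // the nearest float is just below 1e28
        System.out.println(Math.ulp(f)); // ~1.18e21: the gap between adjacent floats here
                                         // exceeds the 1e21 spacing of 7-digit decimals
        System.out.printf("%1.6e%n", f); // 9.999999e+27, so "1.000000e+28" fails to round-trip
    }
}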
I’m less concerned about that direction. Correctly rounded float-to-text is definitely nice to have, but not as critical as float-to-text-to-float. But, I do believe that perfection is attainable and we should expect it.
Bruce, I’ve written a related article at — please take a look.
Excellent work as always. That’s a great analysis.
Regarding the discussion about wobble, Goldberg’s article defines wobble as being between
(1/2)*(Base)^(-precision) and ((Base)/2) * ((Base)^(-precision)).
For a float with 24 bits of precision, shouldn't the wobble be between 2^(-25) == 1/33554432 and 2^(-24) == 1/16777216 instead of between 2^(-23) == 1/8388608 and 2^(-24) == 1/16777216?
As always, thanks for the great articles.
Yeah, that makes sense. So, between 0.5/8388608 and 0.5/16777216. I fixed the article because I think that is correct.
Thanks. On a related point…
Under the section Representing decimals: 6-7 digits, you mention that “At 8.589930e9 a float’s relative precision is 1/16777216 but at 8.589974e9 it is just 1/8388608”.
Since 2^33 == 8.589934592e9,
Then 8.589930e9 will be near the end of the range [2^32, 2^33) which should give a relative precision of about 1/(2^25) == 1/33554432.
And 8.589974e9 will be near the start of the range [2^33, 2^34) which should give a relative precision of about 1/(2^24) == 1/16777216.
However, isn’t the inability to represent 8.589973e9 not so much related to the different relative precisions at 8.589930e9 and 8.589974e9 (since every power of 2 range has the same range of relative precision), but as you mention in the next paragraph, more to do with the reduction in precision by a factor of 2 as one moves from [2^32, 2^33) to [2^33, 2^34) .
I guess “relative precision” is a slightly vague term – should the numerator be 1.0 or 0.5? That is, should the numerator be the gap between numbers or the maximum error? Goldberg defines wobble, but is that the same as defining relative precision? I’m not sure.
When you cross an exponent boundary the gap between numbers (precision/error in absolute terms) doubles, and the relative precision halves – they both get worse. I think that which one is the “cause” of the inability to represent 8.589973e9 depends on how you phrase your argument. Both?
I think the main factor in the inability to represent 8.589973e9 is due to the precision/gap size in the range [2^33, 2^34) where 8.589973e9 resides.
It is the gap size that results in the relative error, which is calculated as the difference between 8.589973e9 and 8.589973504e9, divided by 8.589973e9.
As Goldberg puts it,
…
Since numbers of the form d.dd…dd × (Base^e) all have the same absolute error, but have values that range between (Base^e) and ((Base^e) × Base), the relative error ranges between … (1/2)*(Base)^(-precision) and ((Base)/2) * ((Base)^(-precision)).
In particular, the relative error corresponding to .5 ulp can vary by a factor of (Base). This factor is called the wobble.”
Thus, from what I understand, relative error is calculated for a single float representation within the range [2^n, 2^(n+1)) whereas wobble refers to the range of the relative errors of the floats within the range [2^n, 2^(n+1)).
Where you mention that “At 8.589930e9 a float’s relative precision is 1/16777216 but at 8.589974e9 it is just 1/8388608”, the relative error/precision at 8.589974e9 can be calculated specifically as
|(8.589973504e9-8.589974e9)| / 8.589974e9 =(approx.) 5.8e-8
which is within the wobble range for floats – 1/16777216 (at start of power of 2 range) to 1/33554432 (at end of power of 2 range).
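Sketched as a quick check (Java, but it is just the arithmetic from the previous lines):

public class WobbleCheck {
    public static void main(String[] args) {
        double err = Math.abs(8.589973504e9 - 8.589974e9) / 8.589974e9;
        System.out.println(err);             // ~5.77e-8
        System.out.println(1.0 / (1 << 24)); // ~5.96e-8, wobble upper bound for floats
        System.out.println(1.0 / (1 << 25)); // ~2.98e-8, wobble lower bound
    }
}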
As to whether the numerator should be 1.0 or 0.5, Goldberg defines the relative error range (for the wobble) as being between ((Base/2) * Base^(-p) * Base^e) / Base^e and ((Base/2) * Base^(-p) * Base^e) / Base^(e+1). So, strictly speaking, the numerator, which is the 1/2 ULP term, should be
(Base/2) * (Base^(-p)) * (Base^e)
After the reduction of the relative error expressions where Base == 2, whether you would like to consider the numerator as 1 or 0.5 is purely algebraic I think.
|
https://randomascii.wordpress.com/2012/03/08/float-precisionfrom-zero-to-100-digits-2/
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
NAME
Text::Xslate::Manual::Cookbook - How to cook Xslate templates
DESCRIPTION
The Xslate cookbook is a set of recipes showing Xslate features.
RECIPES
How to manage HTML forms
Managing HTML forms is an important issue for web applications. You're better off using modules that manage HTML forms rather than managing this yourself in your templates. This section proposes two basic solutions: using FillInForm and HTML form builders.
In both solutions, you should not use the mark_raw filter in templates, which easily creates security holes. Instead, application code should be responsible for calling the mark_raw function that Text::Xslate can export.
Using FillInForm
One solution to manage HTML forms is to use FillInForm modules with the block filter syntax.
Example code using HTML::FillInForm:
#!perl -w
use strict;
use Text::Xslate qw(html_builder);
use HTML::FillInForm; # HTML::FillInForm::Lite is okay

sub fillinform {
    my($q) = @_;
    my $fif = HTML::FillInForm->new();
    return html_builder {
        my($html) = @_;
        return $fif->fill(\$html, $q);
    };
}

my $tx = Text::Xslate->new(
    function => { fillinform => \&fillinform, },
);

my %vars = (
    q => { foo => "<filled value>" },
);
print $tx->render_string(<<'T', \%vars);
FillInForm:
: block form | fillinform($q) -> {
<form>
<input type="text" name="foo" />
</form>
: }
T
Output:
FillInForm:
<form>
<input type="text" name="foo" value="&lt;filled value&gt;" />
</form>
Because HTML::FillInForm::Lite provides a fillinform function, it becomes even simpler:
use HTML::FillInForm::Lite qw(fillinform);

my $tx = Text::Xslate->new(
    function => { fillinform => html_builder(\&fillinform) },
);
From 1.5018 on, html_builder_module is supported for HTML builder modules like HTML::FillInForm. Just import HTML builder functions with the html_builder_module option.
my $tx = Text::Xslate->new(
    html_builder_module => [ 'HTML::FillInForm::Lite' => [qw(fillinform)] ],
);
See also HTML::FillInForm or HTML::FillInForm::Lite for details.
Using HTML form builders
Another solution to manage HTML forms is to use form builders. In such cases, all you have to do is to apply mark_raw() to the HTML parts.
Here is a PSGI application that uses HTML::Shakan:
#!psgi
use strict;
use warnings;
use Text::Xslate qw(mark_raw);
use HTML::Shakan;
use Plack::Request;

my $tx = Text::Xslate->new();

sub app {
    my($env) = @_;
    my $req = Plack::Request->new($env);

    my $shakan = HTML::Shakan->new(
        request => $req,
        fields  => [ TextField(name => 'name', label => 'Your name: ') ],
    );

    my $res = $req->new_response(200);

    # do mark_raw here, not in templates
    my $form = mark_raw($shakan->render());
    $res->body( $tx->render_string(<<'T', { form => $form }) );
<!doctype html>
<html>
<head><title>Building form</title></head>
<body>
<form>
<p>
Form:<br />
<: $form :>
</p>
</body>
</html>
T
    return $res->finalize();
}

return \&app;
Output:
<!doctype html>
<html>
<head><title>Building form</title></head>
<body>
<form>
<p>
Form:<br />
<label for="id_name">Your name</label>
<input id="id_name" name="name" type="text" value="&lt;Xslate&gt;" />
</p>
</body>
</html>
See also HTML::Shakan for details.
How to use Template Toolkit's WRAPPER feature in Kolon
Use template cascading, which is a super-set of the WRAPPER directive.
wrapper.tx:
<div class="wrapper">
: block content -> { }
</div>
content.tx
: cascade wrapper
: override content -> {
Hello, world!
: }
Output:
<div class="wrapper">
Hello, world!
</div>
Template cascading
Xslate supports template cascading, which allows you to extend templates with block modifiers. It is like traditional template inclusion, but is more powerful.
This mechanism is also called template inheritance.
See also "Template cascading" in Text::Xslate.
How to map __DATA__ sections to the include path
Use Data::Section::Simple and the path option of new(), which accepts HASH references that contain a $file_name => $content mapping.
use Text::Xslate;
use Data::Section::Simple;

my $vpath = Data::Section::Simple->new()->get_data_section();

my $tx = Text::Xslate->new(
    path => [$vpath],
);

print $tx->render('child.tx');

__DATA__
@@ base.tx
<html>
<body><: block body -> { :>default body<: } :></body>
</html>
@@ child.tx
: cascade base;
: override body -> {
child body
: } # endblock body
This feature is directly inspired by Text::MicroTemplate::DataSection, and originated from Moj.
The JSON module is not suitable because it doesn't escape some meta characters such as "</script>". It is better to use utilities proven to be secure for JavaScript escaping to avoid XSS. JavaScript::Value::Escape helps you in this regard.
How to manage localization in templates
You can register any functions, including _(), so no specific techniques are required.
For example:
use I18N::Handle;

# I18N::Handle installs the locale function "_" to the global namespace.
# (remember the symbol *_ is global)
I18N::Handle->new( ... )->speak('zh_tw');

my $tx = Text::Xslate->new(
    function => { _ => \&_, },
);
Then in your templates:
<: _('Hello %1', $john ) :>
See also: I18N::Handle, App::I18N.
How to load templates before fork()ing?
It is a good idea to load templates in preforking-model applications. Here is an example that loads all the templates in a given path:
use File::Find;

my $path = ...;
my $tx = Text::Xslate->new(
    path      => [$path],
    cache_dir => $path,
);

# pre-load files
find sub {
    if(/\.tx$/) {
        my $file = $File::Find::name;
        $file =~ s/\Q$path\E .//xsm; # fix path names
        $tx->load_file($file);
    }
}, $path;

# fork and render ...
SEE ALSO
Text::Xslate::Manual::FAQ
|
https://metacpan.org/pod/distribution/Text-Xslate/lib/Text/Xslate/Manual/Cookbook.pod
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
I have to write a code that will print out the sum of all numbers between 1 and 100 that 7 and 5 go into evenly. I can't really think of how to write it. This is what I have but it doesn't work.
import java.io.*;

public class work {
    public static input in = new input();

    public static void main(String[] args) throws IOException {
        int sum;
        sum = 0;
        for (int x=1;x<=1000;x=+1)
            if (x%5==//x%7==0){
                sum=sum+1};
        System.out.println(sum);
    }
}
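For reference, one corrected version of what the poster appears to want (this assumes the range really is 1 to 100 and that a number must be divisible by both 5 and 7; the original assignment text is not shown):

public class Work {
    public static void main(String[] args) {
        int sum = 0;
        for (int x = 1; x <= 100; x++) {     // 1..100, not 1000
            if (x % 5 == 0 && x % 7 == 0) {  // evenly divisible by both 5 and 7
                sum += x;                    // add the number itself, not 1
            }
        }
        System.out.println(sum);             // only 35 and 70 qualify, so this prints 105
    }
}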
|
https://www.daniweb.com/programming/software-development/threads/384057/please-help-with-java-homework
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
ANTLR Section Index | Page 6
What's an elegant way to check whether a node is a PLUS node and then to lift its children to the parent PLUS node?
I'm trying to flatten trees of the form: #(PLUS A #(PLUS B C) D) into #(PLUS A B C D). What's an elegant way to check whether a node is a PLUS node and then to lift its children to the parent P...more
When I inherit rules from another grammar, the generated output parser actually includes the combined rules of my grammar and the supergrammar. Why doesn't the generated parser class just inherit from the class generated from the supergrammar?
In Java, when one class inherits from another, you override methods to change the behavior. The same is true for grammars except that changes a rule in a subgrammar can actually change the lookah...more
Can you explain more about ANTLR's tricky lexical lookahead issues related to seeing past the end of a token definition into the start of another?
Consider the following parser grammar: class MyParser extends Parser; sequence : (r1|r2)+ ; r1 : A (B)? ; r2 : B ; ANTLR reports m.g:3: warning: nondeterminism upon m.g:3: k=...more
How can I include line numbers in automatically generated ASTs?
Tree parsers are often used in type checkers. But useful error messages need the offending line number. So I have written: import antlr.CommonAST; import antlr.Token; public class CommonASTWith...more
How can I handle characters with context-sensitive meanings such as in the case where single-quote is both a postfix operator (complete token) and the string delimiter (piece of a token)?
How can I handle characters with context-sensitive meanings such as in the case where single-quote is both a postfix operator (complete token) and the string delimiter (piece of a token)? For exam...more
Why do these two rules result in an infinite-recursion error from ANTLR?
Why do these two rules result in an infinite-recursion error from ANTLR? a : b ; b : a B | C ;
How do you specify the "beginning-of-line" in the lexer? In lex, it is "^".
Here is a simple DEFINE rule that is only matched if the semantic predicate is true. DEFINE : {getColumn()==1}? "#define" ID ; Semantic predicates on the left-edge of single-altern...more
How can I use ANTLR to generate C++ using only STL (standard templates libraries) in Visual C++ 5?
Apply sp3 to your VC++ 5.0 installation. Well this works for me!
Why do I get a run-time error message "Access violation - no RTTI data" when I run a C++ based parser compiled with MS Visual Studio 6.0? It compiled ok. What about g++?
In Visual Studio (Visual C++), you need to go to "Project|Settings..." on the menu bar and then on the Project Settings dialog, go to the "C/C++" tab. Then choose the ...more
How can you add member variables to the parser/lexer class definitions of ANTLR 2.x?
Member variables and methods can be added by including a bracketed section after the options section that follows the class definition. For example, for the parser: class JavaParser extends ...more
How can I store the filename and line number in each AST node efficiently (esp for filename) in C++.
There are probably a number of ways to do this. One way is to use a string table to store the strings. The AST node has a reference to a type like STRING which is an object that references (poi...more
When will ANTLR support hoisting?
Sometime in the future when Ter gets so fed up with my pestering that he just has to implement hoisting so that I'll shut up. Or for ANTLR v3.0. :-)
Is it possible to compile ANTLR to an executable using Microsoft J++?
See Using Microsoft Java from the Command Line and Building a C++ ANTLR Parser (on Windows NT). more
How do you restrict assignments to semantically valid statements? In other words, how can I ignore syntactically valid, but semantically invalid sentences?
For a complex language like C semantic checking must be done either after the statement is recognized in the parser or in a later pass. Semantic checking will usually involve creation of a symbo...more
|
http://www.jguru.com/faq/java-tools/antlr?page=6
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
Scala.meta
Setup
Getting started with scalameta is quite straightforward. You only need to add a dependency in your
build.sbt:
libraryDependencies += "org.scalameta" %% "scalameta" % "1.7.0"
Then in your code all you have to do is
import scala.meta._
Macro setup
The setup to write a macro is slightly more involved. First you need separate projects, as it's not possible to use macro annotations in the same project where they are defined. The reason is that the macro annotations must be compiled before they can be used.
Once compiled, you don't even need a dependency on scalameta to use your macro annotations; you only need a dependency on the project that declares the annotations.
The setup for the macros definition project is slightly more complex as you need to enable the macroparadise plugin but it’s just a single line to add to your
build.sbt.
addCompilerPlugin("org.scalameta" % "paradise" % "3.0.0-M8" cross CrossVersion.full)
Of course you can use sbt subprojects to create one subproject for the macro definition and one subproject for the application that uses the macros annotations.
lazy val metaMacroSettings: Seq[Def.Setting[_]] = Seq(
  addCompilerPlugin("org.scalameta" % "paradise" % "3.0.0-M8" cross CrossVersion.full),
  scalacOptions += "-Xplugin-require:macroparadise",
  scalacOptions in (Compile, console) := Seq(), // macroparadise plugin doesn't work in repl yet.
  sources in (Compile, doc) := Nil // macroparadise doesn't work with scaladoc yet.
)

lazy val macros = project.settings(
  metaMacroSettings,
  name := "pbmeta",
  libraryDependencies += "org.scalameta" %% "scalameta" % "1.7.0"
)

lazy val app = project.settings(
  metaMacroSettings,
  libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.1" % Test
).dependsOn(macros)
Parsing
At the heart of scalameta is a high-fidelity parser. The scalameta parser is able to parse scala code capturing all the context (comments, word position, …) hence the high-fidelity.
It’s easy to try out:
scala> import scala.meta._

scala> "val number = 3".parse[Stat]
res1: scala.meta.parsers.Parsed[scala.meta.Stat] = val number = 3

scala> "Map[String, Int]".parse[Type]
res2: scala.meta.parsers.Parsed[scala.meta.Type] = Map[String, Int]

scala> "number + 2".parse[Term]
res3: scala.meta.parsers.Parsed[scala.meta.Term] = number + 2

scala> "case class MyInt(i: Int /* it's an Int */)".parse[Stat]
res4: scala.meta.parsers.Parsed[scala.meta.Stat] = case class MyInt(i: Int /* it's an Int */)
Tokens
As you can see the parser captures all the details (including the comments). It’s easy to get the captured tokens:
scala> res4.get.tokens
res5: scala.meta.tokens.Tokens = Tokens(, case, , class, , MyInt, (, i, :, , Int, , /* it's an Int */, ), )
Scalameta also captures the position of each token.
Trees
The structure is captured as a tree.
scala> res4.get.children
res6: scala.collection.immutable.Seq[scala.meta.Tree] = List(case, MyInt, def this(i: Int /* it's an Int */), )

scala> res6(2).children
res7: scala.collection.immutable.Seq[scala.meta.Tree] = List(, i: Int)
Transform
This is nice but it's not getting us anywhere. It's great to capture all these details but we need to transform the tokens in order to generate some code. This is where the transform method comes in.
scala> "val number = 3".parse[Stat].get.transform {
     |   case q"val $name = $expr" =>
     |     val newName = Term.Name(name.syntax + "Renamed")
     |     q"val ${Pat.Var.Term(newName)} = $expr"
     | }
res8: scala.meta.Tree = val numberRenamed = 3
Quasiquotes
Here we have transformed a Tree into another Tree, but instead of manipulating the Tree directly (which is possible as well) we have used quasiquotes to both deconstruct the existing Tree in the pattern match and construct a new Tree as a result.
Quasiquotes make it much more convenient to manipulate Trees. The difficulty (especially at the beginning) is to get familiar with all the scalameta ASTs. Fortunately there is a very useful cheat sheet that summarises them all.
Macros
With all this knowledge we’re now ready to enter the world of metaprogramming and write our first macro. Writing a macros is quite similar to the transformation we did above.
In fact only the declaration changes but the principle remains: we pattern match on the parsed tree using quasiquotes, apply some transformation and return a modified tree.
import scala.collection.immutable.Seq
import scala.meta._

class Hello extends scala.annotation.StaticAnnotation {
  inline def apply(defn: Any): Any = meta {
    defn match {
      case cls@Defn.Class(_, _, _, ctor, template) =>
        val hello = q"""def hello: Unit = println("Hello")"""
        val stats = hello +: template.stats.getOrElse(Nil)
        cls.copy(templ = template.copy(stats = Some(stats)))
    }
  }
}
Here we just create an @Hello annotation to add a method hello (that prints "Hello" to the standard output) to a case class.
We can use it like this:
@Hello
case class Greetings()

val greet = Greetings()
greet.hello // prints "Hello"
Congratulations! If you understand this, you understand scalameta macros. You can head over to the scalameta tutorial for additional examples.
PBMeta
Now that you understand scalameta macros we are reading to discuss the PBMeta implementation as it is built on these concepts.
It defines an annotation @PBSerializable that adds implicit PBReads and PBWrites instances into the companion object of the case class.
The pattern match is used to detect if the companion object already exists or if we have to create it. The third case is for handling Scala enums.
defn match {
  case Term.Block(Seq(cls@Defn.Class(_, _, _, ctor, _), companion: Defn.Object)) =>
    // companion object exists
    ...
  case cls@Defn.Class(_, name, _, ctor, _) =>
    // companion object doesn't exist
    ...
  case obj@Defn.Object(_, name, template) if template.parents.map(_.syntax).contains("Enumeration()") =>
    // Scala enumeration
    ...
}
Note how we check that the object extends Enumeration. We don't have all the type information available at compile time (there is no typer phase run as part of the macro generation – that's why scalameta is quite fast). As we don't have the whole type hierarchy available, the only check we can do is whether the object extends Enumeration directly. (If it does so indirectly we're not going to catch it! – probably something we can do with the semantic API.)
All the remaining code is here to generate the PBReads and PBWrites instances.
PBWrites
The PBWrites trait defines 2 methods:
- write(a: A, to: CodedOutputStream, at: Option[Int]): Unit writes the given object a to the specified output stream to at index at. The index is optional and is used to compute the tag (if any).
- sizeOf(a: A, at: Option[Int]): Int computes the size (number of bytes) needed to encode the object a. If an index at is specified, the associated tag size is also added into the result.
Quasiquotes are used to generate these methods:
q"""
implicit val pbWrites: pbmeta.PBWrites[$name] = new pbmeta.PBWrites[$name] {
  override def write(a: $name, to: com.google.protobuf.CodedOutputStream, at: Option[Int]): Unit = {
    at.foreach { i =>
      to.writeTag(i, com.google.protobuf.WireFormat.WIRETYPE_LENGTH_DELIMITED)
      to.writeUInt32NoTag(sizeOf(a))
    }
    ..${params.zipWithIndex.map(writeField)}
  }
  override def sizeOf(a: $name, at: Option[Int]): Int = {
    val sizes: Seq[Int] = Seq(..${params.zipWithIndex.map(sizeField)})
    sizes.reduceOption(_+_).getOrElse(0) +
      at.map(com.google.protobuf.CodedOutputStream.computeTagSize).getOrElse(0)
  }
}
"""
In case you're wondering what the ..$ syntax is, it's just how to deal with sequences in quasiquotes.
Here I create a collection of Term.Apply to write each field into the CodedOutputStream. The ..$ syntax allows us to directly insert the whole sequence into the quasiquote.
(Similarly there is a ...$ syntax to deal with sequences of sequences).
PBReads
PBReads instances are generated in a similar way. The idea is to generate code that will extract field values from the CodedInputStream and create a new instance of the object with the extracted fields at the end.
val fields: Seq[Defn.Var] = ctor.paramss.head.map(declareField)
val cases: Seq[Case] = ctor.paramss.head.zipWithIndex.map(readField)
val args = ctor.paramss.head.map(extractField)
val constructor = Ctor.Ref.Name(name.value)

q"""
implicit val pbReads: pbmeta.PBReads[$name] = new pbmeta.PBReads[$name] {
  override def read(from: com.google.protobuf.CodedInputStream): $name = {
    var done = false
    ..$fields
    while (!done) {
      from.readTag match {
        case 0 => done = true
        ..case $cases
        case tag => from.skipField(tag)
      }
    }
    new $constructor(..$args)
  }
}
"""
IDE Support and debugging
In theory macro expansion is supported in IntelliJ IDEA. From what I experienced while developing PBMeta, it works great with simple cases (e.g. adding a method to an existing case class), and it allows you to expand the annotated class and see the generated code. Of course that is great for debugging and seeing what code is actually executed.
However it fails in more complex situations (e.g. creating a companion object):
In this case you're left with inserting debug statements (i.e. println) in the generated code. It's simple and powerful but don't forget to clean them up when debugging is over.
Conclusion
Scalameta is an amazing tool, it makes meta-programming easy and enjoyable. However there are some shortcomings you need to be aware of:
- You need to get familiar with all the quasiquote paraphernalia. There are many different terms, but once you start to know them things get much easier. Plus you can try things out in the console.
- IDE support is great … when it works. When it doesn't, debugging isn't easy and you're left with generating println statements in your code. Not ideal!
- Scalameta doesn’t provide all the type analysis performed by the compiler. Yet we can do amazing things with the available information. Plus it’s fast (no heavy type inference needed)!
I used PBMeta as an introduction to Scalameta and without any knowledge I managed to build all the functionality I wanted. I even managed to add custom field position with the @Pos annotation. The only thing I missed is the support for sealed trait mapping to protobuf oneOf structure.
For more details you can head over to PBMeta, try it out and let me know what you think in the comments below.
|
http://www.beyondthelines.net/computing/generating-protobuf-formats-with-scala-meta-macros/
|
CC-MAIN-2017-26
|
en
|
refinedweb
|
import sys
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import, exc:
            # WARNING: do not assign the result of sys.exc_info() to a
            # local var here, doing so will cause a leak
            attrs['exc_info'] = sys.exc_info()
            attrs['exception'] = exc
            # clear old generated request.response, if any; it may
            # have been mutated by the view, and its state is not
            # sane (e.g. caching headers)
            if 'response' in attrs:
                del attrs['response']
            request_iface = attrs['request_iface']
            provides = providedBy(exc)
            for_ = (IExceptionViewClassifier, request_iface.combined, provides)
            view_callable = adapters.lookup(for_, IView, default=None)
            if view_callable is None:
                raise
            response = view_callable(exc, request)
        finally:
            # prevent leakage (wrt exc_info)
            if 'exc_info' in attrs:
                del attrs['exc_info']
            if 'exception' in attrs:
                del attrs['exception']
        return response
    return excview_tween

MAIN = 'MAIN'
INGRESS = 'INGRESS'
EXCVIEW = 'pyramid.tweens.excview_tween_factory'
|
http://docs.pylonsproject.org/projects/pyramid/en/1.2-branch/_modules/pyramid/tweens.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Recently I've been working on a project to move our photo experience to store images on Amazon's S3 service instead of the local server, in hopes of gaining performance and moving away from the maintenance that static files tend to require.
There are many examples online of saving directly to Amazon without using Django's Storage Backend; however, these examples tend not to give the best overall experience, especially when the site is for a customer. Why keep files on Amazon after a customer has deleted the relationship on your site? Paying for the extra space is something I'm not looking to do.
Luckily during this search I learned about Django’s Storage API and loved the concept and found a great package called django-storage that already has implemented S3 support. The documentation could use a bit of work, but isn’t too hard to get a handle of.
Now that I had the capability to save images to S3 I also wanted the possibility to resize said image and create a thumbnail.
The issue with this is that all examples I could find where in regards to either files saved on disk or stored in memory, neither of which was my case.
Below is example code that allows you to resize images using Django’s models.ImageField and PIL library. Hopefully this saves someone some time as it took me a few hours to come up with a solution.
def _resize_image(self):
    try:
        import os
        import urllib2 as urllib
        from PIL import Image
        from cStringIO import StringIO
        from django.core.files.uploadedfile import SimpleUploadedFile

        '''Open original photo which we want to resize using PIL's Image object'''
        img_file = urllib.urlopen(self.image.url)
        im = StringIO(img_file.read())
        resized_image = Image.open(im)

        '''Convert to RGB if necessary'''
        if resized_image.mode not in ('L', 'RGB'):
            resized_image = resized_image.convert('RGB')

        '''We use our PIL Image object to create the resized image, which already
        has a thumbnail() convenience method that constrains proportions.
        Additionally, we use Image.ANTIALIAS to make the image look better.
        Without antialiasing, image pattern artifacts may result.'''
        resized_image.thumbnail(self.image_max_resolution, Image.ANTIALIAS)

        '''Save the resized image'''
        temp_handle = StringIO()
        resized_image.save(temp_handle, 'jpeg')
        temp_handle.seek(0)

        '''Save to the image field'''
        suf = SimpleUploadedFile(os.path.split(self.image.name)[-1].split('.')[0],
                                 temp_handle.read(), content_type='image/jpeg')
        self.image.save('%s.jpg' % suf.name, suf, save=True)
    except ImportError:
        pass
|
http://blog.hardlycode.com/resizing-django-imagefield-with-remote-storage-2011-02/
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
XamlParseException Class
Represents the exception class for parser-specific exceptions from a WPF XAML parser. This exception is used in XAML API or WPF XAML parser operations from .NET Framework 3.0 and .NET Framework 3.5, or for specific use of the WPF XAML parser by calling System.Windows.Markup.XamlReader API.
System.Exception
System.SystemException
System.Windows.Markup.XamlParseException
Namespace: System.Windows.Markup
Assembly: PresentationFramework (in PresentationFramework.dll)
The XamlParseException type exposes the following members.
XamlParseException is used only for the WPF-implemented XAML parser that performs the XAML parsing and loading for WPF applications. Specifically, the exception is only relevant when an application targets .NET Framework 3.0 and .NET Framework 3.5. The exception can also originate from user code in run-time calls to APIs that hook up the WPF-implemented XAML parser to load XAML from within a running WPF application (for example, calls to XamlReader.Load).
For .NET Framework 4, the XamlParseException exception that typically reports XAML processing exceptions is defined in a different namespace (System.Xaml) and a different assembly (System.Xaml).
Unless you are writing an equivalent to the WPF XAML parser or working with .NET Framework 3.0 and .NET Framework 3.5 targeting, you generally will not throw XamlParseException from your own code. However, handling for the exception is sometimes necessary. For application scenarios, where you may want to suppress XAML parse errors, a Dispatcher UnhandledException event handler at the application level is one way to handle a run-time XamlParseException. Whether to suppress exceptions or let them surface to user code depends on how you design your application for purposes of loading XAML, and the trust level that you assign to the XAML your application loads. For more information, see XAML Security Considerations or "XAML Security" section of XAML Overview (WPF).
For pages of an application, when the XamlParseException is thrown, it is usually in the context of the InitializeComponent call made by your page class, which is the entry point for the WPF application model's usage of the WPF XAML parser at the per-page level. Therefore another possible handling strategy is to place try/catch blocks in InitializeComponent. However, this technique does not integrate well with templates, visual design surfaces and other generated sources that hook up InitializeComponent.
|
http://msdn.microsoft.com/en-us/library/system.windows.markup.xamlparseexception.aspx
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
24 May 2010 23:43 [Source: ICIS news]
WASHINGTON (ICIS news)--US officials said on Monday they are not satisfied that BP has adequately examined oil dispersants that are less toxic than those now being used in the Gulf of Mexico spill and have ordered independent testing to find better alternatives.
Environmental Protection Agency (EPA) Administrator Lisa Jackson told a press conference that she also has ordered a reduction in the surface applications of the dispersant now being used, in part because a subsea injection of the dispersant, Corexit, is proving more effective.
She said that because the use of the dispersants at the well leak was proving effective, she has ordered a cut-back in the spraying of dispersants on the surface to break up slicks.
“We should use no more dispersants at the surface than absolutely necessary,” she said.
She added that the ordered cutback in surface use should reduce the overall application of dispersants by 50% or even by 75%.
She said she has ordered the reduction in surface applications because “we still don’t know the environmental impact of dispersants”. She said that no one had ever envisioned the use of so much dispersant, and that neither EPA or other sources can know what impact it will have.
BP said in its response that it believed Corexit was still the best available dispersant.
Jackson said, “We are not satisfied that BP has analysed other dispersant options, so today we are calling on them to continue the search for other dispersants.”
In addition, “As a consequence of BP’s response, EPA will conduct our own tests to determine the least toxic dispersants possible”.
She said that BP’s response to EPA’s demand for more analysis and testing of alternative dispersants appeared to be influenced by the fact that BP has a supply of Corexit “and for that reason did not want to explore other
|
http://www.icis.com/Articles/2010/05/24/9362191/us-not-satisfied-with-bp-work-with-oil-dispersants.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
02 April 2012 12:19 [Source: ICIS news]
LONDON (ICIS)--European polyethylene (PE) and polypropylene (PP) buyers are set to face further price hikes as producers look to cover higher costs and improve their margins in April, sources said on Monday.
PE and PP prices have increased steeply since the beginning of 2012, rising by more than 30% in some cases.
Dow Chemical has already announced a €150/tonne ($200/tonne) hike for April PE, well above the €40/tonne rise in the April ethylene contract, while other producers are aiming for a €30-50/tonne increase above the new monomer contract increases.
The April ethylene monomer contract rose by €40/tonne to settle at €1,345/tonne FD (free delivered) NWE (northwest Europe).
Buyers are less convinced prices will rise significantly in April compared with February and March, when increases seemed inevitable because of the tight availability of product, brought about mainly by cutbacks at the cracker and polymer level because of high upstream costs.
“Pre-buying in quarter one has left buyers with stocks, and April is a short working month,” said one observer.
“Consumption in April will be lower,” said a producer. “Demand for most applications is down, all except for personal care. Even food packaging has not been as strong.”
But for producers, the quest for better margins remains paramount.
“We have seen [PE] prises increasing since January but we have always been behind with our cost situation ... We have simply been playing catch-up,” said one.
Brent crude oil prices were trading at $122.79/bbl on Monday morning, but it euro terms, they have been at a record high, even higher than in 2008, when prices briefly touched $147/bbl. Naphtha has moved up alongside crude oil prices and was trading at $1,045-1,053/tonne CIF (cost insurance freight) NWE (northwest Europe).
Low density polyethylene (LDPE) prices have increased to €1,400-1,450/tonne FD on a net basis, from a low of just above €1,000/tonne FD NWE for spot volumes in December 2011. Contracted volumes were sold at higher prices at the time.
In spite of producers complaining of poor margins, the momentum of the PE and PP markets is now losing pace as there will be fewer working days in April than in March and some buyers have built up inventory. Some buyers feel that they will not be able to get away with anything less than an increase in the monomer contracts, particularly for some grades of PE and PP, which are tighter than others.
LDPE availability is tight, and production problems coupled with good seasonal demand mean that buyers fear they will have to pay more.
“There is a lack of availability,” said one large LDPE buyer. “Converters will put up more resistance, but nobody is knocking on our door trying to sell us more.
“They are not prepared to compromise on the feedstock number.”
Discussions are likely to be more protracted than in February and March when buyers had to pay up quickly if they wanted material.
PE and PP are used extensively in the food packaging sector, while
|
http://www.icis.com/Articles/2012/04/02/9546738/Europe-PE-PP-buyers-set-to-face-further-price-increases.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
13 March 2006 15:49 [Source: ICIS news]
LONDON (ICIS news)--European acrylonitrile butadiene styrene (ABS) producers will be targeting price increases on March contracts of up to Euro50/tonne across all segments in an attempt to regain margins lost in February, suppliers said on Monday.
Strong demand from the beginning of 2006 has helped producers sell more material to the higher-margin injection moulding segment, and to focus less on the lower-priced compounding grade segment, where ABS is mostly sold at discounts.
This has shortened compounding material supply and subsequently raised prices to Euro1,230-1,290/tonne free delivered (FD) Northwest Europe (NWE) through February, up Euro30-40/tonne on January.
This lowered the differential between the lower prices of compounding grade and injection moulding material to around Euro70/tonne.
High NWE prices for Asian injection moulding material at 1,420-1,500/tonne FD NWE for natural ABS, or around Euro120/tonne more expensive than the cheapest European values of Euro1,300-1,450/tonne FD NWE, has supported European producers’ push for increments.
Additionally, Dow announced on 3 March that it would be seeking a Euro100/tonne Q2 ABS price rise from 15 March, a target which was also heard by another supplier with quarterly contracts.
However, buyers of natural ABS have so far had a bargaining advantage in their ability to switch easily between suppliers, which has limited rises in ABS.
Furthermore, distributors of Asian material said they would negotiate for lower prices to help their suppliers' competitiveness. Some hoped to be able to lower their sales prices of natural material to around Euro1,350/tonne FD NWE, as some ground had been lost to European producers over the recent month.
European producers said they were determined to achieve targets, but much of their success would depend on the European price development of Asian
|
http://www.icis.com/Articles/2006/03/13/1048543/abs-suppliers-seek-mar-price-rises-of-up-to-euro50.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
java.lang.Object
org.springframework.batch.core.repository.support.SimpleJobRepository
public class SimpleJobRepository
Implementation of JobRepository that stores JobInstances, JobExecutions, and StepExecutions using the injected DAOs.
JobRepository,
JobInstanceDao,
JobExecutionDao,
StepExecutionDao
public SimpleJobRepository(JobInstanceDao jobInstanceDao, JobExecutionDao jobExecutionDao, StepExecutionDao stepExecutionDao)
public JobExecution createJobExecution(Job job, JobParameters jobParameters) throws JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException
Create a JobExecution based on the passed in Job and JobParameters. However, unique identification of a job can only come from the database, and therefore must come from JobDao by either creating a new job instance or finding an existing one, which will ensure that the id of the job instance is populated with the correct value.
There are two ways in which the method determines if a job should be created or an existing one should be returned. The first is restartability. The Job restartable property will be checked first. If it is false, a new job will be created, regardless of whether or not one exists. If it is true, the JobInstanceDao will be checked to determine if the job already exists; if it does, its steps will be populated (there must be at least 1) and a new JobExecution will be returned. If no job instance is found, a new one will be created.
A check is made to see if any job executions are already running, and an exception will be thrown if one is detected. To detect a running job execution we use the JobExecutionDao:
JobParameters and job name
Job is marked restartable, then we create a new JobInstance
Job is not marked as restartable, it is an error. This could be caused by a job whose restartable flag has changed to be more strict (true not false) after it has been executed at least once.
JobInstance then we check the JobExecution instances for that job, and if any of them tells us it is running (see JobExecution.isRunning()) then it is an error.
Isolation.REPEATABLE_READ or better, then this method should block if another transaction is already executing it (for the same JobParameters and job name). The first transaction to complete in this scenario obtains a valid JobExecution, and others throw JobExecutionAlreadyRunningException (or timeout). There are no such guarantees if the JobInstanceDao and JobExecutionDao do not respect the transaction isolation levels (e.g. if using a non-relational data-store, or if the platform does not support the higher isolation levels).
createJobExecution in interface JobRepository
job - the job the execution should be associated with.
jobParameters - the runtime parameters for the job
JobExecution for the arguments provided
JobExecutionAlreadyRunningException - if there is a JobExecution already running for the job instance with the provided job and parameters.
JobRestartException - if one or more existing JobInstances is found with the same parameters and Job.isRestartable() is false.
JobInstanceAlreadyCompleteException - if a JobInstance is found and was already completed successfully.
JobRepository.createJobExecution(Job, JobParameters)
public void saveOrUpdate(JobExecution jobExecution)
saveOrUpdate in interface JobRepository
jobExecution - to be stored.
IllegalArgumentException - if jobExecution is null.
public void saveOrUpdate(StepExecution stepExecution)
saveOrUpdate in interface JobRepository
stepExecution - to be saved.
IllegalArgumentException - if stepExecution is null.
public void saveOrUpdateExecutionContext(StepExecution stepExecution)
JobRepository
ExecutionContext of the given StepExecution. Implementations are allowed to ensure that the StepExecution is already saved by calling JobRepository.saveOrUpdate(StepExecution) before saving the ExecutionContext.
saveOrUpdateExecutionContext in interface JobRepository
stepExecution - the StepExecution containing the ExecutionContext to be saved.
public StepExecution getLastStepExecution(JobInstance jobInstance, Step step)
getLastStepExecution in interface JobRepository
public int getStepExecutionCount(JobInstance jobInstance, Step step)
getStepExecutionCount in interface JobRepository
|
http://docs.spring.io/spring-batch/1.0.x/apidocs/org/springframework/batch/core/repository/support/SimpleJobRepository.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
31 December 2008 20:50 [Source: ICIS news]
(Adds LyondellBasell spokesperson comments)
HOUSTON (ICIS news)--Filing for Chapter 11 bankruptcy protection is one of the options being considered by LyondellBasell, a company spokesperson said on Wednesday.
“As we said publicly we are looking to restructure our debt and are exploring all of our options,” said LyondellBasell spokeswoman Susan Moore.
“Filing for chapter 11 is one of those options,” she said, adding, “We are working collaboratively with our banks to find the most efficient way to restructure.”
LyondellBasell has held discussions about entering Chapter 11 bankruptcy protection internally and with banks and outside counsel, a source at the Netherlands-based petrochemicals major confirmed earlier on Wednesday.
“There have been internal discussions and also discussions with the banks about the possibility of Chapter 11, but there has been no confirmation either way,” said the source.
On Monday, its subsidiary Lyondell Chemical said in a filing with the US Securities and Exchange Commission (SEC) that it had begun talks with lenders in order to extend payment dates and restructure its debt. On Tuesday two major credit rating services downgraded their ratings for the company.
The source at LyondellBasell Europe said: “In
"Normally the banks would help in this situation, but the banking industry is in a worse state than petrochemicals at the moment and they have cut our overdraft going into the year end.”
The European division of LyondellBasell benefited from soaring oil costs over the summer but has suffered from falling crude prices and a drop-off in downstream demand in the later part of the year.
While 2008 was still expected to show a profit the fourth quarter results, which have not yet been closed, were likely to be negative, according to sources.
The situation on the
“The shutdown of US Gulf refineries following Hurricane Ike had a tremendous effect on earnings for the
The source, however, denied talk of cutbacks in production. “Our major plants in
LyondellBasell mulling bankruptcy is symptomatic of the extreme challenges facing leveraged chemical companies, said
“The combination of soft end markets, inventory destocking, and a sharply higher cost of capital is hurting chemical companies that combine operating and financial leverage,” said Jefferies & Co analyst Laurence Alexander.
“Lyondell is an extreme case, but similar concerns have weighed on a range of companies,” he added.
As a result, stock prices of highly leveraged chemical companies such as Ashland, Huntsman, Nalco and Solutia trade mostly on the prospects for paying down debt, said Alexander.
Shares of
“The Lyondell issue is symptomatic in another way. Chemical shares rarely outperform when debt yields are rising - there’s too much competition from other parts of the capital structure,” said Alexander.
Chemical company debt prices have fallen sharply in recent weeks, creating more competition for chemical stocks. And senior debt-holders are first in line for any payments from the company while shareholders are last.
However, the financial and economic crisis has created opportunities for strong companies in the financial position to make acquisitions, said the analyst.
“There should be opportunities for companies with strong balance sheets to pick up attractive assets over the next 6-12 months, but credit markets need to ease if there’s going to be significant M&A,” said Alexander.
Meanwhile, the Associated Press reported that LyondellBasell planned to add 42 jobs at its office in
Additional reporting by Brian Ford and Joe Chang
|
http://www.icis.com/Articles/2008/12/31/9181183/bankruptcy-protection-is-an-option-lyondellbasell.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
itext pdf
itext pdf i am generating pdf using java
i want to do alignment in pdf using java but i found mostly left and right alignment in a row.
i want to divide single row in 4 parts then how can i do
itext pdf - Java Interview Questions
itext pdf  sample program to modify the dimensions of an image file in iText in Java. Hi, the site's iText PDF tutorial has more information about this.
Adding images in itext pdf
Adding images in itext pdf Hi,
How to add image in pdf file using itext?
Thanks
Hi,
You can use following code... image in the pdf file.
Thanks.
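The code referred to in the answer is not shown on this page. A minimal sketch using the com.lowagie (iText 2.x) API that the other examples here rely on, with placeholder file names:

    import com.lowagie.text.Document;
    import com.lowagie.text.Image;
    import com.lowagie.text.pdf.PdfWriter;
    import java.io.FileOutputStream;

    public class AddImageToPdf {
        public static void main(String[] args) throws Exception {
            Document document = new Document();
            PdfWriter.getInstance(document, new FileOutputStream("image.pdf"));
            document.open();
            Image img = Image.getInstance("logo.png"); // placeholder image path
            img.scaleToFit(200, 200);                  // keep the image within the page
            document.add(img);
            document.close();
        }
    }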
regarding the pdf table using itext
regarding the pdf table using itext if table exceeds the maximum width of the page how to manage
Open Source PDF
Open Source PDF
Open Source PDF Libraries in Java
iText is a library that allows you to generate PDF files on the fly...: The look and feel of HTML is browser dependent; with iText and PDF you can
about pdf file handeling
about pdf file handeling can i apend something in pdf file using java program?
if yes then give short code for it.
You need itext api to handle pdf files. You can find the related examples from the given link:.
Read PDF file
Read PDF file
Java provides itext api to perform read and write operations with pdf file. Here we are going to read a pdf file. For this, we have used PDFReader class. The data is first converted into bytes and then with the use
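As a brief illustration of PdfReader (iText 2.x API; the file name is a placeholder):

    import com.lowagie.text.pdf.PdfReader;

    public class ReadPdfInfo {
        public static void main(String[] args) throws Exception {
            PdfReader reader = new PdfReader("input.pdf"); // placeholder file name
            System.out.println("Pages: " + reader.getNumberOfPages());
            System.out.println("PDF version: 1." + reader.getPdfVersion());
            reader.close();
        }
    }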
reading from pdf to java - Java Beginners
the following link:
Thanks...reading from pdf to java How can i read the pdf file to strings in java.
I need the methods of reading data from file and to place that data
PDF to Image
PDF to Image Java code to convert PDF to Image
Merging multiple PDF files - Framework
Merging multiple PDF files  I am using the iText package to merge pdf files. It is working fine but on each corner of the merged file there is some... the files are having different fonts. Please help.
Java code to convert pdf file to word file How to convert pdf file to word file using Java
How to make a Rectangle type pdf
How to make a Rectangle type pdf
...
make a pdf file in the rectangle shape irrespective of the fact whether it exists or not.
If it exists then its fine, otherwise a pdf file will be created
how to get the image path when inserting the image into pdf file in jsp - JSP-Servlet
Thanks...how to get the image path when inserting the image into pdf file in jsp I am using the below code but i am getting the error at .getInstance. i am
change pdf version
change pdf version
In this program we are going to change the version of
pdf file through java program.
In this example we need iText.jar file, without this jar file
Convert ZIP To PDF
Convert ZIP To PDF
Lets discuss the conversion of a zipped file
into pdf file with the help of an example.
Download iText API
required for the compilation
java - Java Beginners
java Hii
Can any ome help me to Write a programme to merge pdf iles using itext api. Hi friend,
For solving the problem visit to :
Thanks
interview path pdf
interview path pdf Plz send me the paths of java core questions and answers pdfs or interview questions pdfs... the interview for any company for <1 year experience thanks for all of u in advance
Please visit
How to convert a swing form to PDF
How to convert a swing form to PDF Sir,
I want to know about how convert a swing form containing textbox,JTable,JPanel,JLabel, Seperator etc swing menus to a PDF file using java code
Generate unicode malayalam PDF from JSP
PDF reports using IText,but I dont know how to generate unicode malayalam... a simple pdf generator code using itext api.
Try this:
<%@page import="java.io....Generate unicode malayalam PDF from JSP Hi,
I want to generate
Concatenate two pdf files
Concatenate two pdf files
In this program we are going to concatenate two pdf files
into a pdf file through java program. The all data of files
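A minimal sketch of concatenating PDFs with PdfCopy (iText 2.x API; file names are placeholders):

    import com.lowagie.text.Document;
    import com.lowagie.text.pdf.PdfCopy;
    import com.lowagie.text.pdf.PdfReader;
    import java.io.FileOutputStream;

    public class ConcatPdfs {
        public static void main(String[] args) throws Exception {
            Document document = new Document();
            PdfCopy copy = new PdfCopy(document, new FileOutputStream("merged.pdf"));
            document.open();
            for (String name : new String[] {"first.pdf", "second.pdf"}) {
                PdfReader reader = new PdfReader(name);
                for (int i = 1; i <= reader.getNumberOfPages(); i++) {
                    copy.addPage(copy.getImportedPage(reader, i)); // copy each page
                }
                reader.close();
            }
            document.close();
        }
    }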
To convert Html to pdf in java - Java Beginners
To convert Html to pdf in java Hi all,
I want to convert html file to pdf file using java. can any one help me out.
Thanks & Regards
Santhosh Ejanthkar Hi Friend,
Try the following code:
import
pdf Table title
pdf Table title
... to the table of the pdf file. Suppose we have one pdf file in
which we have a table and we..., otherwise a pdf file will be created.
To make a program over this, firstly we
Rotating image in the pdf file
Rotating image in the pdf file
...
insert a image in a pdf file and rotate it irrespective of the fact whether... will help us to make and use pdf
file in our program.
Now create a file named
iText support android? - MobileApplications
iText support android?
would iText support android?
i ve linked the iText.jar file with my android project developed in eclipse...
//code
Document document = new Document(PageSize.A4, 50, 50, 50, 50
convert data from pdf to text file - Java Beginners
convert data from pdf to text file how to read the data from pdf file and put it into text file(.txt
How to read PDF files created from Java thru VB Script
How to read PDF files created from Java thru VB Script We have created the PDF file thru APache FOP but when we are unable to read the
data thru... file?
Java PDF Tutorials
combine two pdf files
combine two pdf files
In this program we are going to tell you how you can
read a pdf file and combine more than one pdf into one.
To make a program over this, firstly
pdf file measurement
pdf file measurement
... through java program. First ad the value in the paragraphs
then add it finally... on a pdf
file,.
To make the program for changing the pdf version firstly we have adjust a size of a pdf file
How to adjust a size of a pdf file
...
adjust the size of a pdf file irrespective of the fact whether it exists or not.
If it exists then its fine, otherwise a pdf file will be created.
Tips and Tricks
in the form of a PDF document using Servlet. This program uses iText, which is a java library containing classes to generate documents in PDF, XML, HTML, and RTF...;
Send
data from database in PDF file as servlet response
Insert pages pdf
Insert pages pdf
In this program we are going to insert a new blank pages
in pdf file through java program...,
com.lowagie.text.pdf.PdfWriter class is used to write the document on a pdf
How to Make a Pdf and inserting data
How to Make a Pdf and inserting data
... make a pdf
file and how we can insert a data into the pdf file. This all...* and com.lowagie.text.pdf.*
package which will help us to make a pdf file.
The logic
Java Code - Java Interview Questions
on PDF using Java visit to : Code Hi,
How to convert word document to PDF using java????
Can you give me a simple code and the libraries to be used?
Thanks
Java convert jtable data to pdf file
Java convert jtable data to pdf file
In this tutorial, you will learn how to convert jtable data to pdf file. Here
is an example where we have created... have fetched the data from the jtable and
save the data to pdf file.
Example
Probem while creating PDF through JSP-Servlet - JSP-Servlet
Probem while creating PDF through JSP-Servlet Hi,
I have a web-app in which I want to convert MS-Office documents to PDF online.
I'm using PDFCreator for this. If I call the PDFCreator through a standalone java app or through
|
http://www.roseindia.net/tutorialhelp/comment/98508
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
I would also add that unless you have a particular need to embed this in
a specific Servlet, it might make more sense to add this thread to a
context listener handler.
For example, if I wanted to send out email to 1000 people (not Spam, of
course! ;-) and I initiated this from within a Servlet, in this case I'd
add a thread to the Servlet and let the Servlet return while the email
was still going out. This assumes I handle any errors in some other way
than displaying the results on the page.
On the other hand, if I wanted to, perhaps, clean-up some temporary data
within a database every so often, and I wanted to include this
functionality within a Web app rather than in a separate application
that I run from the CL via Cron every now and then, I'd probably
implement this within a context listener handler so that the thread
starts up when the Web app starts up and continues until the Web app
shuts down.
Just a thought!
-Fred Whipple
iMagine Internet Services
> -----Original Message-----
> From: Erik Wright [mailto:erik@spectacle.ca]
> Sent: Saturday, November 01, 2003 11:03 AM
> To: Tomcat Users List
> Subject: Re: HOW CAN I USE THREADS IN TOMCAT
>
>
> A servlet is just a Java class. You can do anything you can
> do with the
> java language, including start threads. The following starts a thread
> that runs some task every 10 minutes. The thread is started in the
> servlet init method. I choose to set the thread to daemon
> mode, meaning
> that when the main thread of execution shuts down the mailer
> thread will
> automatically be killed. Otherwise you need to be sure to
> keep track of
> it and be sure to signal it to shutdown in your
> Servlet.destroy method.
>
> public class MyServlet extends HttpServlet
> {
> private static class MailerThread extends Thread
> {
> public void run ()
> {
> while (true)
> {
> // do something
> synchronized (this)
> {
> wait (10*60*1000);
> }
> }
> }
> }
>
> // the servlet init method
> public void init ()
> {
> MailerThread thread = new MailerThread ();
> thread.setDaemon (true);
> thread.start ();
> }
>
> // ... doGet, etc. ...
> }
>
> -Erik
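A compilable variant of the context-listener approach described above (not from the original thread; the class name and interval are illustrative, and it adds the InterruptedException handling that the quoted servlet snippet omits):

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class CleanupListener implements ServletContextListener {

        private Thread worker;

        public void contextInitialized(ServletContextEvent sce) {
            worker = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            // do the periodic clean-up work here
                            Thread.sleep(10 * 60 * 1000); // every 10 minutes
                        }
                    } catch (InterruptedException e) {
                        // interrupted on shutdown - just exit
                    }
                }
            });
            worker.setDaemon(true); // don't keep the JVM alive because of this thread
            worker.start();
        }

        public void contextDestroyed(ServletContextEvent sce) {
            if (worker != null) {
                worker.interrupt(); // signal the thread to stop when the web app shuts down
            }
        }
    }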
---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org
|
http://mail-archives.apache.org/mod_mbox/tomcat-users/200311.mbox/%3C21c801c3a0c3$b909a5f0$0a00a8c0@jadzia%3E
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
01 February 2010 22:35 [Source: ICIS news]
HOUSTON (ICIS news)--The US Chemical Safety Board (CSB) is seeking a 20.5% budget hike for the 2011 fiscal year to support the development of a regional office in or near Houston, the board said on Monday.
The Washington, DC-based group’s budget justification statement for the fiscal year beginning 1 October 2010 includes a request for a $12.7m (€9.1m) budget, up from $10.6m in fiscal 2010 and $1.9m above the number currently allotted by the Obama administration.
The CSB is also requesting an additional three-person investigative team to focus on shorter-term investigations.
“The board believes that these two steps are essential to help close the gap between the number of serious chemical accidents that occur each year and the number the CSB is actually able to investigate,” the board said in its request.
In December, the board was unable to investigate an explosion at an American Acryl plant in
Last month, the CSB said its 17 open investigations was the largest number in its 11-year history.
In the budget request, the CSB is also seeking additional funds for items such as a director of operations, salary and benefits increases for five board members, and information technology (IT) equipment.
“People within the oil industry have told us that we give the best value for taxpayer dollar of any agency in the government,” CSB chairman John Bresland said. “$10m is what we spend, but in terms of accident prevention, that money is returned many times
|
http://www.icis.com/Articles/2010/02/01/9330713/us-chem-safety-board-requests-funds-to-create-houston.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
#include <RDxfPlugin.h>
This is typically used by an about dialog or debugging / developer tools.
Implements RPluginInterface.
Called immediately after the plugin has been loaded, directly after starting the application.
Implementations typically perform plugin initialization, registration of file importers, exporter, etc.
Implements RPluginInterface.
Called whenever a new script engine is instantiated.
Implementations may register their own script extensions by making C / C++ code scriptable.
Non-Scriptable: This function is not available in script environments.
Implements RPluginInterface.
Called after the application has been fully loaded, directly before entering the main event loop.
Implementations typically perform initialization that depends on the application being up and running.
Implements RPluginInterface.
Called before a plugin is removed / unloaded.
Implements RPluginInterface.
|
http://www.qcad.org/doc/qcad/latest/developer/class_r_dxf_plugin.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Grzegorz Kossakowski wrote:
> Kamal pisze:
>> Hi,
>> I am responsible for JIRA issue 2211, including the associated patch.
>> I will be honest, I didn't really understand what I did.
>
> If your patch is a result of random typing then I'm truly impressed. ;-)
>
>> I knew enough to test it and make sure it worked, but there are some
>> points I would like to clarify:
>>
>> Firstly, I was looking at the code for jx:attribute, in particular this:
>>
>> final Attributes EMPTY_ATTRS = new AttributesImpl();
>> String elementName = "attribute";
>>
>> TextSerializer serializer = new TextSerializer();
>> StringWriter writer = new StringWriter();
>> serializer.setOutputCharStream(writer);
>>
>> ContentHandlerWrapper contentHandler = new
>> ContentHandlerWrapper(serializer, serializer);
>> contentHandler.startDocument();
>>
>> contentHandler.startElement(JXTemplateGenerator.NS, elementName,
>> elementName, EMPTY_ATTRS);
>> Invoker.execute(contentHandler, objectModel, executionContext,
>> macroContext, namespaces, this.getNext(), this.getEndInstruction());
>> contentHandler.endElement(JXTemplateGenerator.NS, elementName,
>> elementName);
>> contentHandler.endDocument();
>> valueStr = writer.toString();
>> Am I right in saying that the text serializer is what ensures that XML
>> output is not serialized in the attributes? I looked at the javadoc for
>> TextSerializer and found little useful information.
>
> Yep, I guess that TextSerializer implements text output method described
> for XSLT, see[2]. It means
> that <jx:attribute> will evaluate it's content (descendant elements) and
> will pull only text result
> of this evaluation. This enables one to use for example macros to
> generate the value of attribute.
>
>> I noticed that there is very little validation for jx:attribute. You
>> can put in any old value for an attribute name, including invalid
>> values such as values with spaces and colons (':') in them. I took a
>> very different approach for jx:element and tested that the prefix and
>> name are valid.
>
> Obviously, your approach is much, much better. I appreciate your
> attention to details.
>
>> Is there are reason why jx:attribute does not check that the name is a
>> correct name?
>
> I think the only reason is that original authors forgot about
> implementing these checks. Are you a
> volunteer to fix that? :)
>
>> Also, in xsp:element, apparently[1], you could not specify a namespace
>> without a prefix and vice versa. I chose to relax this to just not
>> allowing a prefix without a namespace. Is this right?
>
> To be honest, I don't remember why such a rule has been established.
> Could anyone comment?
It should be totally ok to declare a namespace without a prefix[1]:
<>
[1]
--
Leszek Gawron
CTO at MobileBox Ltd.
|
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200806.mbox/%3C4856B857.3040607@mobilebox.pl%3E
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Common I/O Tasks
.NET Framework 3.0
The System.IO namespace provides several classes that allow for various actions, such as reading and writing, to be performed on files, directories, and streams. For more information, see File and Stream I/O.
Common File Tasks
Common Directory Tasks
See Also
ConceptsBasic File I/O
Composing Streams
Asynchronous File I/O
Other ResourcesFile and Stream I/O
|
http://msdn.microsoft.com/en-US/library/ms404278(d=printer,v=vs.85).aspx
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
22 February 2012 17:46 [Source: ICIS news]
LONDON (ICIS)--
December sales in housing and construction rose by 24.6% year on year, to €9.4bn ($12.5bn), the Statistisches Bundesamt said.
For the full 12 months of 2011, housing and construction sector sales rose by 12.5% year on year to €93.4bn.
In a separate update report, the country’s central bank warned of a possible bubble in parts of the market.
The Bundesbank said that
In towns with populations of more than 500,000 people, prices for town houses and apartments rose 7% year on year in 2011, compared with 3.25% in 2010 from 2009.
The Bundesbank warned that with
Investors needed to take a close look at this risk, the bank added.
As in the
Analysts have repeatedly noted that because of the eurozone sovereign debt crisis, as well as persistently low interest rates and potential inflation risks, German investors are putting more and more money into housing.
Germany’s housing pricing had been relatively subdued over the past 10 to 15 years, compared with countries such as the US, the UK or Spain, where prices soared and were then followed by a sharp correction in the wake of the 2008/2009 global economic and financial crisis.
|
http://www.icis.com/Articles/2012/02/22/9535031/germany-construction-sector-soars-but-central-bank-warns-investors.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Requests starting with /example will be handled by the DispatcherServlet instance named example. This is only the first step in setting up Spring Web MVC. You now need to configure the various beans used by the Spring Web MVC framework (over and above the DispatcherServlet itself).
As detailed in Section 16.3.3.14, “Method Parameters And Type Conversion” and Section 16.3.2.1, “URI Template Patterns”.
@RequestParam annotated parameters
for access to specific Servlet request parameters. Parameter
values are converted to the declared method argument type. See
Section 16.3.3.4, “Mapping the request body with the @RequestBody
annotation”.
@RequestPart annotated
parameters for access to the content of a "multipart/form-data"
request part. See Section 16.9.5, “Handling a file upload request from programmatic clients” and Section 16.9, “Spring's multipart (file upload) support”.
HttpEntity<?> parameters for
access to the Servlet request HTTP headers and contents. The
request stream will be converted to the entity body using
HttpMessageConverters. See Section 16.3.3.5, “Mapping the response body with the
@ResponseBody annotation”.
A
HttpEntity<?> or
ResponseEntity<?> object to provide
access to the Servlet response HTTP headers and contents. The
entity body will be converted to the response stream using
HttpMessageConverters. When using the MVC namespace, a wider range of message converters are registered by default. See Section 16.13.1, “mvc:annotation-driven” for more information.
If you intend to read and write XML, you will need to configure
the
MarshallingHttpMessageConverter with a
specific
Marshaller and an
Unmarshaller implementation from the
org.springframework.oxm package. For
example:
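The example that belongs here is missing from this copy. A minimal sketch (the Account class and the surrounding wiring are illustrative, not from the original text):

    Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
    marshaller.setClassesToBeBound(Account.class); // hypothetical bound class
    MarshallingHttpMessageConverter converter =
            new MarshallingHttpMessageConverter(marshaller, marshaller);
    // register the converter with the RequestMappingHandlerAdapter's message converters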
An @RequestBody method argument can be annotated with @Valid, in which case it is validated using the configured
Validator
instance. When using the MVC namespace a JSR-303 validator is
configured automatically assuming a JSR-303 implementation is
available on the classpath. If validation fails a
RequestBodyNotValidException is raised. The
exception is handled by the
DefaultHandlerExceptionResolver and results in
a
400 error sent back to the client along with a
message containing the validation errors. For general information on validation see Chapter 6, Validation, Data Binding, and Type Conversion, in particular Section 6.7, “Spring 3 Validation”. Customizing the data binding process at the controller level is covered in Section 16.3.3.15, “Customizing WebDataBinder initialization”, or can be done by registering Formatters with the FormattingConversionService (see Section 6.6, “Spring 3 Field Formatting”). The MVC namespace and the MVC Java config (via @EnableWebMvc) set much of this up automatically, registering a RequestMappingHandlerMapping and a RequestMappingHandlerAdapter and enabling HttpMessageConverter support for @RequestBody method parameters and @ResponseBody method return values, including a converter that converts to/from JSON where available.
|
http://docs.spring.io/spring/docs/3.1.0.RC1/spring-framework-reference/html/mvc.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Vert.x Core Manual
To use Vert.x Core, add the following dependency to your build descriptor:
Maven (in your
pom.xml):
<dependency> <groupId>io.vertx</groupId> <artifactId>vertx-core</artifactId> <version>4.1.6</version> </dependency>
Gradle (in your build.gradle file):
dependencies { compile 'io.vertx:vertx-core:4.1.6' }
Let’s discuss the different concepts and features in core.
In the beginning there was Vert.x:
Vertx vertx = Vertx.vertx();
Specifying options when creating a Vertx object
When creating a Vert.x object you can also specify options if the defaults aren’t right for you:
Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(40));
The
VertxOptions object has many settings and allows you to configure things like clustering, high availability, pool sizes and various other settings.
Creating a clustered Vert.x object.
Are you fluent?
You may have noticed that in the previous examples a fluent API was used.
A fluent API is where multiple method calls can be chained together. For example:
request.response().putHeader("Content-Type", "text/plain").end("some text");
If you prefer, the same code can be written without chaining:
HttpServerResponse response = request.response(); response.putHeader("Content-Type", "text/plain"); response.write("some text"); response.end();
Don’t call us, we’ll call you.
For example, to receive a timer event every second:
vertx.setPeriodic(1000, id -> { System.out.println("timer fired!"); });
Or to receive an HTTP request:
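The snippet that belongs here is missing; a minimal sketch, assuming server is an HttpServer created elsewhere:

    server.requestHandler(request -> {
      // called every time an HTTP request arrives at the server
      request.response().end("hello world!");
    });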
Don’t block me!.
Reactor and Multi-Reactor.
The Golden Rule - Don’t Block the Event Loop.
Future results
Vert.x 4 use futures to represent asynchronous results.
You cannot interact directly with the result of a future, instead you need to set a handler that will be called when the future completes and the result is available, like any other kind of event.
FileSystem fs = vertx.fileSystem(); Future<FileProps> future = fs.props("/my_file.txt"); future.onComplete((AsyncResult<FileProps> ar) -> { if (ar.succeeded()) { FileProps props = ar.result(); System.out.println("File size = " + props.size()); } else { System.out.println("Failure: " + ar.cause().getMessage()); } });
Future composition
- when the current future succeeds, apply the given function, that returns a future. When this returned future completes, the composition succeeds.
- when the current future fails, the composition fails.
FileSystem fs = vertx.fileSystem(); Future<Void> future = fs .createFile("/foo") .compose(v -> { // When the file is created (fut1), execute this: return fs.writeFile("/foo", Buffer.buffer()); }) .compose(v -> { // When the file is written (fut2), execute this: return fs.move("/foo", "/bar"); });
In this example, 3 operations are chained together:
a file is created
data is written in this file
the file is moved
When these 3 steps are successful, the final future (
future) will succeed. However, if one
of the steps fails, the final future will fail.
Future coordination
Coordination of multiple futures can be achieved with Vert.x
futures. It
supports concurrent composition (run several async operations in parallel) and sequential composition
(chain async operations).
CompositeFuture.all takes several futures arguments (up to 6) and returns a future that is
succeeded when all the futures are succeeded and failed when at least one of the futures is failed:
Future<HttpServer> httpServerFuture = httpServer.listen(); Future<NetServer> netServerFuture = netServer.listen(); CompositeFuture.all(httpServerFuture, netServerFuture).onComplete(ar -> { if (ar.succeeded()) { // All succeeded } else { // At least one failed } });
A list of futures can be used also:
CompositeFuture.all(Arrays.asList(future1, future2, future3));
CompositeFuture.any succeeds as soon as one of the given futures succeeds:
CompositeFuture.any(Arrays.asList(future1, future2, future3)).onComplete(ar -> { if (ar.succeeded()) { // At least one is succeeded } else { // All failed } });
CompositeFuture.join waits until all of the given futures are completed, either with a success or a failure:
CompositeFuture.join(Arrays.asList(future1, future2, future3));
CompletionStage interoperability
The Vert.x
Future API offers compatibility from and to
CompletionStage which is the JDK interface for composable
asynchronous operations.
We can go from a Vert.x
Future to a
CompletionStage using the
toCompletionStage method, as in:
Future<String> future = vertx.createDnsClient().lookup("vertx.io"); future.toCompletionStage().whenComplete((ip, err) -> { if (err != null) { System.err.println("Could not resolve vertx.io"); err.printStackTrace(); } else { System.out.println("vertx.io => " + ip); } });
We can conversely go from a
CompletionStage to Vert.x
Future using
Future.fromCompletionStage.
There are 2 variants:
Here is an example of going from a
CompletionStage to a Vert.x
Future and dispatching on a context:
Future.fromCompletionStage(completionStage, vertx.getOrCreateContext()) .flatMap(str -> { String key = UUID.randomUUID().toString(); return storeInDb(key, str); }) .onSuccess(str -> { System.out.println("We have a result: " + str); }) .onFailure(err -> { System.err.println("We have a problem"); err.printStackTrace(); });
Verticles.
Writing Verticles
They can implement it directly if you like but usually it’s simpler to extend
the abstract class
AbstractVerticle.
Here’s an example verticle:
public class MyVerticle extends AbstractVerticle { // Called when verticle is deployed public void start() { } // Optional - called when verticle is undeployed public void stop() { } }
Normally you would override the start method like in the example above.
Asynchronous Verticle start and stop
Sometimes you want to do something in your verticle start-up which takes some time and you don’t want the verticle to
be considered deployed until that happens. For example you might want to start an HTTP server in the start method and
propagate the asynchronous result of the server
listen method.
You can’t block waiting for the HTTP server to bind, so instead you implement the asynchronous start method, which takes a Promise; when the HTTP server has bound (or failed to bind), you can call complete on the Promise (or fail) to signal that you’re done.
Here’s an example:
public class MyVerticle extends AbstractVerticle { private HttpServer server; public void start(Promise<Void> startPromise) { server = vertx.createHttpServer().requestHandler(req -> { req.response() .putHeader("content-type", "text/plain") .end("Hello from Vert.x!"); }); // Now bind the server: server.listen(8080, res -> { if (res.succeeded()) { startPromise.complete(); } else { startPromise.fail(res.cause()); } }); } }
Similarly, there is an asynchronous version of the stop method too. You use this if you want to do some verticle cleanup that takes some time.
public class MyVerticle extends AbstractVerticle { public void start() { // Do something } public void stop(Promise<Void> stopPromise) { obj.doSomethingThatTakesTime(res -> { if (res.succeeded()) { stopPromise.complete(); } else { stopPromise.fail(res.cause()); } }); } }
INFO: You don’t need to manually stop the HTTP server started by a verticle, in the verticle’s stop method. Vert.x will automatically stop any running server when the verticle is undeployed.
Verticle Types
There are two different types of verticles:
- Standard Verticles
These are the most common and useful type - they are always executed using an event loop thread. We’ll discuss this more in the next section.
- Worker Verticles
These run using a thread from the worker pool. An instance is never executed concurrently by more than one thread.
Standard verticles.
Worker verticles.
DeploymentOptions options = new DeploymentOptions().setWorker(true); vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
Worker verticle instances are never executed concurrently by Vert.x by more than one thread, but can be executed by different threads at different times.
Deploying verticles programmatically
You can deploy a verticle using one of the
deployVerticle method, specifying a verticle
name or you can pass in a verticle instance you have already created yourself.
Verticle myVerticle = new MyVerticle(); vertx.deployVerticle(myVerticle);
You can also deploy a verticle by specifying its name:
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle"); // Deploy a JavaScript verticle vertx.deployVerticle("verticles/myverticle.js"); // Deploy a Ruby verticle verticle vertx.deployVerticle("verticles/my_verticle.rb");
Rules for mapping a verticle name to a verticle factory.
How are Verticle Factories located?
Most Verticle factories are loaded from the classpath and registered at Vert.x startup.
You can also programmatically register and unregister verticle factories using
registerVerticleFactory
and
unregisterVerticleFactory if you wish.
Waiting for deployment to complete
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", res -> { if (res.succeeded()) { System.out.println("Deployment id is: " + res.result()); } else { System.out.println("Deployment failed!"); } });
The completion handler will be passed a result containing the deployment ID string, if deployment succeeded.
This deployment ID can be used later if you want to undeploy the deployment.
Undeploying verticle deployments
Un-deployment is itself asynchronous so if you want to be notified when un-deployment is complete you can deploy specifying a completion handler:
vertx.undeploy(deploymentID, res -> { if (res.succeeded()) { System.out.println("Undeployed ok"); } else { System.out.println("Undeploy failed!"); } });
Specifying number of verticle instances
When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:
DeploymentOptions options = new DeploymentOptions().setInstances(16); vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
Passing configuration to a verticle
Configuration in the form of JSON can be passed to a verticle at deployment time:
JsonObject config = new JsonObject().put("name", "tim").put("directory", "/blah"); DeploymentOptions options = new DeploymentOptions().setConfig(config); vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
This configuration is then available via the
Context object or directly using the
config method. The configuration is returned as a JSON object so you
can retrieve data as follows:
System.out.println("Configuration: " + config().getString("name"));
Accessing environment variables in a Verticle
Environment variables and system properties are accessible using the Java API:
System.getProperty("prop"); System.getenv("HOME");
High Availability.
Running Verticles from the command line.
Causing Vert.x to exit
Threads maintained by Vert.x instances are not daemon threads so they will prevent the JVM from exiting.
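The sentence introducing the call is missing from this copy; closing the Vertx instance is what the next sentence refers to. A minimal sketch:

    vertx.close(ar -> {
      if (ar.succeeded()) {
        System.out.println("Vert.x instance closed");
      }
    });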
This will shut-down all internal thread pools and close other resources, and will allow the JVM to exit.
The Context object:
Context context = vertx.getOrCreateContext();
If the current thread has a context associated with it, it reuses the context object. If not a new instance of context is created. You can test the type of context you have retrieved:
Context context = vertx.getOrCreateContext(); if (context.isEventLoopContext()) { System.out.println("Context attached to Event Loop"); } else if (context.isWorkerContext()) { System.out.println("Context attached to Worker Thread"); } else if (! Context.isOnVertxThread()) { System.out.println("Context not attached to a thread managed by vert.x"); }
The context object also lets you store and retrieve data that is shared between handlers running on it, and run code on the context asynchronously:
final Context context = vertx.getOrCreateContext(); context.put("data", "hello"); context.runOnContext((v) -> { String hello = context.get("data"); });
The context object also let you access verticle configuration using the
config
method. Check the Passing configuration to a verticle section for more details about this configuration.
Executing periodic and delayed actions
One-shot Timers
A one shot timer calls an event handler after a certain delay, expressed in milliseconds.
long timerID = vertx.setTimer(1000, id -> { System.out.println("And one second later this is printed"); }); System.out.println("First this is printed");
The return value is a unique timer id which can later be used to cancel the timer. The handler is also passed the timer id.
Periodic Timers.
long timerID = vertx.setPeriodic(1000, id -> { System.out.println("And every second this is printed"); }); System.out.println("First this is printed");
Cancelling timers
To cancel a periodic timer, call
cancelTimer specifying the timer id. For example:
vertx.cancelTimer(timerID);
Verticle worker pool
Verticles use the Vert.x worker pool for executing blocking actions, i.e. executeBlocking or worker verticles.
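An executeBlocking example is not shown here; a minimal sketch (someBlockingCall is a hypothetical blocking method):

    vertx.executeBlocking(promise -> {
      // this code runs on a worker thread, so it is safe to block here
      String result = someBlockingCall();
      promise.complete(result);
    }, res -> {
      // this handler runs back on the event loop with the result
      System.out.println("The result is: " + res.result());
    });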
A different worker pool can be specified in deployment options:
vertx.deployVerticle("the-verticle", new DeploymentOptions().setWorkerPoolName("the-specific-pool"));
The Event Bus:
The Theory
Addressing.
Handlers
Messages are received by handlers. You register a handler at an address.
Many different handlers can be registered at the same address.
A single handler can be registered at many different addresses.
Publish / subscribe messaging
The event bus supports publishing messages.
Messages are published to an address. Publishing means delivering the message to all handlers that are registered at that address.
This is the familiar publish/subscribe messaging pattern.
Point-to-point and Request-Response messaging.
Best-effort delivery.
Types of messages
JSON is very easy to create, read and parse in all the languages that Vert.x supports, so it has become a kind of lingua franca for Vert.x.
However, you are not forced to use JSON if you don’t want to.
The Event Bus API
Let’s jump into the API.
Getting the event bus
You get a reference to the event bus as follows:
EventBus eb = vertx.eventBus();
There is a single instance of the event bus per Vert.x instance.
Registering Handlers
EventBus eb = vertx.eventBus(); eb.consumer("news.uk.sport", message -> { System.out.println("I have received a message: " + message.body()); });
EventBus eb = vertx.eventBus(); MessageConsumer<String> consumer = eb.consumer("news.uk.sport"); consumer.handler(message -> { System.out.println("I have received a message: " + message.body()); });
When registering on a clustered event bus, you can be notified once the registration has propagated to all nodes by setting a completion handler:
consumer.completionHandler(res -> { if (res.succeeded()) { System.out.println("The handler registration has reached all nodes"); } else { System.out.println("Registration failed!"); } });
Un-registering Handlers
To unregister a handler, call
unregister.
If you are on a clustered event bus, un-registering can take some time to propagate across the nodes. If you want to
be notified when this is complete, use
unregister.
consumer.unregister(res -> { if (res.succeeded()) { System.out.println("The handler un-registration has reached all nodes"); } else { System.out.println("Un-registration failed!"); } });
Publishing messages
eventBus.publish("news.uk.sport", "Yay! Someone kicked a ball");
That message will then be delivered to all handlers registered against the address news.uk.sport.
Sending messages
Sending a message will result in only one handler registered at the address receiving the message. This is the point-to-point messaging pattern. The handler is chosen in a non-strict round-robin fashion.
eventBus.send("news.uk.sport", "Yay! Someone kicked a ball");
Setting headers on messages
Messages sent over the event bus can also contain headers. This can be specified by providing a
DeliveryOptions when sending or publishing:
DeliveryOptions options = new DeliveryOptions(); options.addHeader("some-header", "some-value"); eventBus.send("news.uk.sport", "Yay! Someone kicked a ball", options);
Message ordering
Vert.x will deliver messages to any particular handler in the same order they were sent from any particular sender.
The Message object
Acknowledging messages / sending replies
When using
send the event bus attempts to deliver the message to a
MessageConsumer registered with the event bus.
In some cases it’s useful for the sender to know when the consumer has received the message and "processed" it using request-response pattern.
To acknowledge that the message has been processed, the consumer can reply to the message by calling
reply.
When this happens it causes a reply to be sent back to the sender and the reply handler is invoked with the reply.
An example will make this clear:
The receiver:
MessageConsumer<String> consumer = eventBus.consumer("news.uk.sport"); consumer.handler(message -> { System.out.println("I have received a message: " + message.body()); message.reply("how interesting!"); });
The sender:
eventBus.request("news.uk.sport", "Yay! Someone kicked a ball across a patch of grass", ar -> { if (ar.succeeded()) { System.outif the message was successfully persisted in storage, or
falseif not.
A message consumer which processes an order might acknowledge with
truewhen the order has been successfully processed so it can be deleted from the database
Sending with timeouts
When sending a message with a reply handler, you can specify a timeout in the
DeliveryOptions.
If a reply is not received within that time, the reply handler will be called with a failure.
The default timeout is 30 seconds.
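A short sketch of specifying the timeout (the address and values are illustrative):

    DeliveryOptions options = new DeliveryOptions().setSendTimeout(5000); // 5 second timeout
    eventBus.request("news.uk.sport", "ping", options, ar -> {
      if (ar.failed()) {
        System.out.println("No reply within the timeout: " + ar.cause().getMessage());
      }
    });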
Send Failures
Message sends can fail for other reasons, including:
In all cases, the reply handler will be called with the specific failure.
Message Codecs
You can send any object you like across the event bus if you register a message codec for it and then select that codec in the delivery options:
eventBus.registerCodec(myCodec); DeliveryOptions options = new DeliveryOptions().setCodecName(myCodec.name());
Clustered Event Bus
The event bus doesn’t just exist in a single Vert.x instance. By clustering different Vert.x instances together on your network they can form a single, distributed event bus.
Clustering programmatically
If you’re creating your Vert.x instance programmatically you get a clustered event bus by configuring the Vert.x instance as clustered;
VertxOptions options = new VertxOptions(); Vertx.clusteredVertx(options, res -> { if (res.succeeded()) { Vertx vertx = res.result(); EventBus eventBus = vertx.eventBus(); System.out.println("We now have a clustered event bus: " + eventBus); } else { System.out.println("Failed: " + res.cause()); } });
You should also make sure you have a
ClusterManager implementation on your classpath, for example the Hazelcast cluster manager.
Configuring the event bus
The event bus can be configured. It is particularly useful when the event bus is clustered.
Under the hood the event bus uses TCP connections to send and receive messages, so the
EventBusOptions let you configure all aspects of these TCP connections.
As the event bus acts as a server and client, the configuration is close to
NetClientOptions and
NetServerOptions.
VertxOptions options = new VertxOptions() .setEventBusOptions(new EventBusOptions() .setSsl(true) .setKeyStoreOptions(new JksOptions().setPath("keystore.jks").setPassword("wibble")) .setTrustStoreOptions(new JksOptions().setPath("keystore.jks").setPassword("wibble")) .setClientAuth(ClientAuth.REQUIRED) ); Vertx.clusteredVertx(options, res -> { if (res.succeeded()) { Vertx vertx = res.result(); EventBus eventBus = vertx.eventBus(); System.out.println("We now have a clustered event bus: " + eventBus); } else { System.out.println("Failed: " + res.cause()); } });
When used in containers, you can also configure the public host and port:
VertxOptions options = new VertxOptions() .setEventBusOptions(new EventBusOptions() .setClusterPublicHost("whatever") .setClusterPublicPort(1234) ); Vertx.clusteredVertx(options, res -> { if (res.succeeded()) { Vertx vertx = res.result(); EventBus eventBus = vertx.eventBus(); System.out.println("We now have a clustered event bus: " + eventBus); } else { System.out.println("Failed: " + res.cause()); } });
JSON
Unlike some other languages, Java does not have first class support for JSON so we provide two classes to make handling JSON in your Vert.x applications a bit easier.
JSON objects
The
JsonObject class represents JSON objects.
A JSON object is basically just a map which has string keys and values can be of one of the JSON supported types (string, number, boolean).
JSON objects also support null values.
Creating JSON objects
Empty JSON objects can be created with the default constructor.
You can create a JSON object from a string JSON representation as follows:
String jsonString = "{\"foo\":\"bar\"}"; JsonObject object = new JsonObject(jsonString);
You can create a JSON object from a map as follows:
Map<String, Object> map = new HashMap<>(); map.put("foo", "bar"); map.put("xyz", 3); JsonObject object = new JsonObject(map);
Putting entries into a JSON object
The method invocations can be chained because of the fluent API:
JsonObject object = new JsonObject(); object.put("foo", "bar").put("num", 123).put("mybool", true);
Getting values from a JSON object
You get values from a JSON object using the
getXXX methods, for example:
String val = jsonObject.getString("some-key"); int intVal = jsonObject.getInteger("some-other-key");
Mapping between JSON objects and Java objects
You can create a JSON object from the fields of a Java object as follows:
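The example is missing from this copy; a minimal sketch (User is the same kind of plain POJO used in the following example, and the setter is hypothetical):

    User javaObject = new User();
    javaObject.setName("tim"); // hypothetical setter on the POJO
    JsonObject jsonObject = JsonObject.mapFrom(javaObject);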
You can instantiate a Java object and populate its fields from a JSON object as follows:
request.bodyHandler(buff -> { JsonObject jsonObject = buff.toJsonObject(); User javaObject = jsonObject.mapTo(User.class); });
Note that both of the above mapping directions use Jackson’s
ObjectMapper#convertValue() to perform the
mapping. See the Jackson documentation for information on the impact of field and constructor visibility, caveats
on serialization and deserialization across object references, etc.
However, in the simplest case, both
mapFrom and
mapTo should succeed if all fields of the Java class are
public (or have public getters/setters), and if there is a public default constructor (or no defined constructors).
Referenced objects will be transitively serialized/deserialized to/from nested JSON objects as long as the object graph is acyclic.
JSON arrays
A JSON array is a sequence of values (string, number, boolean).
JSON arrays can also contain null values.
Creating JSON arrays
Empty JSON arrays can be created with the default constructor.
You can create a JSON array from a string JSON representation as follows:
String jsonString = "[\"foo\",\"bar\"]"; JsonArray array = new JsonArray(jsonString);
Adding entries into a JSON array
JsonArray array = new JsonArray(); array.add("foo").add(123).add(false);
Getting values from a JSON array
You get values from a JSON array using the
getXXX methods, for example:
String val = array.getString(0); Integer intVal = array.getInteger(1); Boolean boolVal = array.getBoolean(2);
Creating arbitrary JSON
Creating JSON object and array assumes you are using valid string representation.
When you are unsure of the string validity then you should use instead
Json.decodeValue
Object object = Json.decodeValue(arbitraryJson); if (object instanceof JsonObject) { // That's a valid json object } else if (object instanceof JsonArray) { // That's a valid json array } else if (object instanceof String) { // That's valid string } else { // etc... }
Json Pointers
Vert.x provides an implementation of Json Pointers from RFC6901.
You can use pointers both for querying and for writing. You can build your
JsonPointer using
a string, a URI or manually appending paths:
JsonPointer pointer1 = JsonPointer.from("/hello/world"); // Build a pointer manually JsonPointer pointer2 = JsonPointer.create() .append("hello") .append("world");
After instantiating your pointer, use
queryJson to query
a JSON value. You can update a Json Value using
writeJson:
Object result1 = objectPointer.queryJson(jsonObject); // Query a JsonArray Object result2 = arrayPointer.queryJson(jsonArray); // Write starting from a JsonObject objectPointer.writeJson(jsonObject, "new element"); // Write starting from a JsonObject arrayPointer.writeJson(jsonArray, "new element");
You can use Vert.x Json Pointer with any object model by providing a custom implementation of
JsonPointerIterator
Buffers.
Creating buffers
Buffers can create by using one of the static
Buffer.buffer methods.
Buffers can be initialised from strings or byte arrays, or empty buffers can be created.
Here are some examples of creating buffers:
Create a new empty buffer:
Buffer buff = Buffer.buffer();
Create a buffer from a String. The String will be encoded in the buffer using UTF-8.
Buffer buff = Buffer.buffer("some string");
Create a buffer from a String: The String will be encoded using the specified encoding, e.g:
Buffer buff = Buffer.buffer("some string", "UTF-16");
Create a buffer from a byte[]
byte[] bytes = new byte[] {1, 3, 5}; Buffer buff = Buffer.buffer(bytes);
Create a buffer with an initial size hint:
Buffer buff = Buffer.buffer(10000);
Writing to a Buffer
There are two ways to write to a buffer: appending, and random access.
In either case buffers will always expand automatically to encompass the bytes. It’s not possible to get
an
IndexOutOfBoundsException with a buffer.
Appending to a Buffer
To append to a buffer, you use the
appendXXX methods.
Append methods exist for appending various different types.
The return value of the
appendXXX methods is the buffer itself, so these can be chained:
Buffer buff = Buffer.buffer(); buff.appendInt(123).appendString("hello\n"); socket.write(buff);
Random access buffer writes.
Buffer buff = Buffer.buffer(); buff.setInt(1000, 123); buff.setString(0, "hello");
Reading from a Buffer
Data is read from a buffer using the
getXXX methods. Get methods exist for various datatypes.
The first argument to these methods is an index in the buffer from where to get the data.
Buffer buff = Buffer.buffer(); for (int i = 0; i < buff.length(); i += 4) { System.out.println("int value at " + i + " is " + buff.getInt(i)); }
Working with unsigned numbers:
Buffer buff = Buffer.buffer(128); int pos = 15; buff.setUnsignedByte(pos, (short) 200); System.out.println(buff.getUnsignedByte(pos));
The console shows '200'.
Buffer length
Slicing buffers
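The bodies of these two sections are missing from this copy. As a brief sketch, length returns the number of bytes in the buffer and slice creates a view that shares the underlying data:

    int len = buff.length();                 // number of bytes in the buffer
    Buffer sliced = buff.slice(0, len / 2);  // shares the underlying data with buff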
Writing TCP servers and clients
Vert.x allows you to easily write non blocking TCP clients and servers.
Creating a TCP server
The simplest way to create a TCP server, using all default options is as follows:
NetServer server = vertx.createNetServer();
Configuring a TCP server
If you don’t want the default, a server can be configured by passing in a
NetServerOptions
instance when creating it:
NetServerOptions options = new NetServerOptions().setPort(4321); NetServer server = vertx.createNetServer(options);
Start the Server Listening
To tell the server to listen at the host and port as specified in the options:
NetServer server = vertx.createNetServer(); server.listen();
Or to specify the host and port in the call to listen, ignoring what is configured in the options:
NetServer server = vertx.createNetServer(); server.listen(1234, "localhost");
To be notified when the server is actually listening (or failed to bind), provide a handler to the listen call:
NetServer server = vertx.createNetServer(); server.listen(1234, "localhost", res -> { if (res.succeeded()) { System.out.println("Server is now listening!"); } else { System.out.println("Failed to bind!"); } });
Listening on a random port
If
0 is used as the listening port, the server will find an unused random port to listen on.
To find out the real port the server is listening on you can call
actualPort.
NetServer server = vertx.createNetServer(); server.listen(0, "localhost", res -> { if (res.succeeded()) { System.out.println("Server is now listening on actual port: " + server.actualPort()); } else { System.out.println("Failed to bind!"); } });
Getting notified of incoming connections
To be notified when a connection is made you need to set a
connectHandler:
NetServer server = vertx.createNetServer(); server.connectHandler(socket -> { // Handle the connection in here });
This is a socket-like interface to the actual connection, and allows you to read and write data as well as do various other things like close the socket.
Reading data from the socket
NetServer server = vertx.createNetServer(); server.connectHandler(socket -> { socket.handler(buffer -> { System.out.println("I received some bytes: " + buffer.length()); }); });
Writing data to a socket
You write data to a socket using one of the write methods, for example with a Buffer or a String.
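A short sketch of the write variants (the data values are illustrative):

    Buffer buffer = Buffer.buffer().appendFloat(12.34f).appendInt(123);
    socket.write(buffer);                 // write a Buffer
    socket.write("some data");            // write a String, encoded in UTF-8
    socket.write("some data", "UTF-16");  // write a String in the specified encoding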
Closed handler
If you want to be notified when a socket is closed, you can set a
closeHandler
on it:
socket.closeHandler(v -> { System.out.println("The socket has been closed"); });
Handling exceptions
You can set an
exceptionHandler to receive any
exceptions that happen on the socket.
You can set an
exceptionHandler to receive any
exceptions that happens before the connection is passed to the
connectHandler
, e.g during the TLS handshake.
Event bus write handler
Every socket automatically registers a handler on the event bus, and when any buffers are received in this handler, it writes them to itself. Those are local subscriptions not routed on the cluster.
This enables you to write data to a socket which is potentially in a completely different verticle by sending the buffer to the address of that handler.
The address of the handler is given by
writeHandlerID
Local and remote addresses
The local address of a
NetSocket can be retrieved using
localAddress.
The remote address, (i.e. the address of the other end of the connection) of a
NetSocket
can be retrieved using
remoteAddress.
Sending files or resources from the classpath
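The example for this section is missing; a minimal sketch (the file name is a placeholder):

    socket.sendFile("myfile.dat");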
Streaming sockets
Instances of
NetSocket are also
ReadStream and
WriteStream instances, so they can be used to pipe data to or from other
read and write streams.
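As an illustration (not from the original text), a connected socket can be piped back to itself to build a simple echo server:

    server.connectHandler(socket -> {
      // pipe everything read from the socket straight back to it
      socket.pipeTo(socket);
    });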
Upgrading connections to SSL/TLS.
Closing a TCP Server
Call close to close the server. Closing is asynchronous; to be notified when it is complete, supply a handler:
server.close(res -> { if (res.succeeded()) { System.out.println("Server is now closed"); } else { System.out.println("close failed"); } });
Automatic clean-up in verticles
If you’re creating TCP servers and clients from inside verticles, those servers and clients will be automatically closed when the verticle is undeployed.
Scaling - sharing TCP servers:
for (int i = 0; i < 10; i++) { NetServer server = vertx.createNetServer(); server.connectHandler(socket -> { // Handle the connection in here }); server.listen(1234, "localhost"); }
DeploymentOptions options = new DeploymentOptions().setInstances(10);
Creating a TCP client
The simplest way to create a TCP client, using all default options is as follows:
NetClient client = vertx.createNetClient();
Configuring a TCP client
If you don’t want the default, a client can be configured by passing in a
NetClientOptions
instance when creating it:
NetClientOptions options = new NetClientOptions().setConnectTimeout(10000); NetClient client = vertx.createNetClient(options);
Making connections
To make a connection to a server you use
connect,
specifying the port and host of the server and a handler that will be called with a result containing the
NetSocket when connection is successful or with a failure if connection failed.
NetClientOptions options = new NetClientOptions().setConnectTimeout(10000); NetClient client = vertx.createNetClient(options); client.connect(4321, "localhost", res -> { if (res.succeeded()) { System.out.println("Connected!"); NetSocket socket = res.result(); } else { System.out.println("Failed to connect: " + res.cause().getMessage()); } });
Configuring connection attempts
A client can be configured to automatically retry connecting to the server in the event that it cannot connect.
This is configured with
setReconnectInterval and
setReconnectAttempts.
NetClientOptions options = new NetClientOptions(). setReconnectAttempts(10). setReconnectInterval(500); NetClient client = vertx.createNetClient(options);
By default, multiple connection attempts are disabled.
Logging network activity
For debugging purposes, network activity can be logged:
NetServerOptions options = new NetServerOptions().setLogActivity(true); NetServer server = vertx.createNetServer(options);
for the client
NetClientOptions options = new NetClientOptions().setLogActivity(true); NetClient client = vertx.createNetClient(options);
Configuring servers and clients to work with SSL/TLS.
Specifying key/certificate for the server:
NetServerOptions options = new NetServerOptions().setSsl(true).setKeyStoreOptions( new JksOptions(). setPath("/path/to/your/server-keystore.jks"). setPassword("password-of-your-keystore") ); NetServer server = vertx.createNetServer(options);
Alternatively you can read the key store yourself as a buffer and provide that directly:
Buffer myKeyStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-keystore.jks"); JksOptions jksOptions = new JksOptions(). setValue(myKeyStoreAsABuffer). setPassword("password-of-your-keystore"); NetServerOptions options = new NetServerOptions(). setSsl(true). setKeyStoreOptions(jksOptions); NetServer server = vertx.createNetServer(options);
Key/certificate in PKCS#12 format, usually with the
.pfx or the
.p12
extension can also be loaded in a similar fashion than JKS key stores:
NetServerOptions options = new NetServerOptions().setSsl(true).setPfxKeyCertOptions( new PfxOptions(). setPath("/path/to/your/server-keystore.pfx"). setPassword("password-of-your-keystore") ); NetServer server = vertx.createNetServer(options);
Buffer configuration is also supported:
Buffer myKeyStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-keystore.pfx"); PfxOptions pfxOptions = new PfxOptions(). setValue(myKeyStoreAsABuffer). setPassword("password-of-your-keystore"); NetServerOptions options = new NetServerOptions(). setSsl(true). setPfxKeyCertOptions(pfxOptions); NetServer server = vertx.createNetServer(options);
Another way of providing server private key and certificate separately using
.pem files.
NetServerOptions options = new NetServerOptions().setSsl(true).setPemKeyCertOptions( new PemKeyCertOptions(). setKeyPath("/path/to/your/server-key.pem"). setCertPath("/path/to/your/server-cert.pem") ); NetServer server = vertx.createNetServer(options);
Buffer configuration is also supported:
Buffer myKeyAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-key.pem"); Buffer myCertAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-cert.pem"); PemKeyCertOptions pemOptions = new PemKeyCertOptions(). setKeyValue(myKeyAsABuffer). setCertValue(myCertAsABuffer); NetServerOptions options = new NetServerOptions(). setSsl(true). setPemKeyCertOptions(pemOptions); NetServer server = vertx.createNetServer(options);
Vert.x supports reading of unencrypted RSA and/or ECC based private keys from PKCS8 PEM files. RSA based private keys can also be read from PKCS1 PEM files. X.509 certificates can be read from PEM files containing a textual encoding of the certificate as defined by RFC 7468, Section 5.
Finally, you can also load generic Java keystore, it is useful for using other KeyStore implementations like Bouncy Castle:
NetServerOptions options = new NetServerOptions().setSsl(true).setKeyCertOptions( new KeyStoreOptions(). setType("BKS"). setPath("/path/to/your/server-keystore.bks"). setPassword("password-of-your-keystore") ); NetServer server = vertx.createNetServer(options);
Specifying trust for the server
SSL/TLS servers can use a certificate authority in order to verify the identity of the clients.
Certificate authorities can be configured for servers in several ways:
For example, a Java trust store in JKS format can be used; the password for the trust store should also be provided:
NetServerOptions options = new NetServerOptions(). setSsl(true). setClientAuth(ClientAuth.REQUIRED). setTrustStoreOptions( new JksOptions(). setPath("/path/to/your/truststore.jks"). setPassword("password-of-your-truststore") ); NetServer server = vertx.createNetServer(options);
Alternatively you can read the trust store yourself as a buffer and provide that directly:
Buffer myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/truststore.jks"); NetServerOptions options = new NetServerOptions(). setSsl(true). setClientAuth(ClientAuth.REQUIRED). setTrustStoreOptions( new JksOptions(). setValue(myTrustStoreAsABuffer). setPassword("password-of-your-truststore") ); NetServer server = vertx.createNetServer(options);
Certificate authorities in PKCS#12 format, usually with the .pfx or .p12 extension, can also be loaded in a similar fashion to JKS trust stores:
NetServerOptions options = new NetServerOptions(). setSsl(true). setClientAuth(ClientAuth.REQUIRED). setPfxTrustOptions( new PfxOptions(). setPath("/path/to/your/truststore.pfx"). setPassword("password-of-your-truststore") ); NetServer server = vertx.createNetServer(options);
Buffer configuration is also supported:
Buffer myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/truststore.pfx"); NetServerOptions options = new NetServerOptions(). setSsl(true). setClientAuth(ClientAuth.REQUIRED). setPfxTrustOptions( new PfxOptions(). setValue(myTrustStoreAsABuffer). setPassword("password-of-your-truststore") ); NetServer server = vertx.createNetServer(options);
Another way is to provide the server certificate authority using a list of .pem files:
NetServerOptions options = new NetServerOptions(). setSsl(true). setClientAuth(ClientAuth.REQUIRED). setPemTrustOptions( new PemTrustOptions(). addCertPath("/path/to/your/server-ca.pem") ); NetServer server = vertx.createNetServer(options);
Buffer configuration is also supported:
Buffer myCaAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/server-ca.pem"); NetServerOptions options = new NetServerOptions(). setSsl(true). setClientAuth(ClientAuth.REQUIRED). setPemTrustOptions( new PemTrustOptions(). addCertValue(myCaAsABuffer) ); NetServer server = vertx.createNetServer(options);
Enabling SSL/TLS on the client
Net Clients can also be easily configured to use SSL. They have the exact same API when using SSL as when using standard sockets.
To enable SSL on a NetClient, call setSsl(true) on the client options.
Client trust configuration
If setTrustAll(true) is set on the client, it will trust all server certificates. The connection will still be encrypted, but this mode is vulnerable to man-in-the-middle attacks (you cannot be sure who you are connecting to), so use it with caution:
NetClientOptions options = new NetClientOptions(). setSsl(true). setTrustAll(true); NetClient client = vertx.createNetClient(options);
Hostname verification checks that the server identity matches its certificate; it is configured with setHostnameVerificationAlgorithm (for example "HTTPS"):
NetClientOptions options = new NetClientOptions(). setSsl(true). setHostnameVerificationAlgorithm("HTTPS"); NetClient client = vertx.createNetClient(options);
Otherwise, a client trust store can be configured, for example in JKS format:
NetClientOptions options = new NetClientOptions(). setSsl(true). setTrustStoreOptions( new JksOptions(). setPath("/path/to/your/truststore.jks"). setPassword("password-of-your-truststore") ); NetClient client = vertx.createNetClient(options);
Buffer configuration is also supported:
Buffer myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/truststore.jks"); NetClientOptions options = new NetClientOptions(). setSsl(true). setTrustStoreOptions( new JksOptions(). setValue(myTrustStoreAsABuffer). setPassword("password-of-your-truststore") ); NetClient client = vertx.createNetClient(options);
Certificate authorities in PKCS#12 format, usually with the .pfx or .p12 extension, can also be loaded in a similar fashion to JKS trust stores:
NetClientOptions options = new NetClientOptions(). setSsl(true). setPfxTrustOptions( new PfxOptions(). setPath("/path/to/your/truststore.pfx"). setPassword("password-of-your-truststore") ); NetClient client = vertx.createNetClient(options);
Buffer configuration is also supported:
Buffer myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/truststore.pfx"); NetClientOptions options = new NetClientOptions(). setSsl(true). setPfxTrustOptions( new PfxOptions(). setValue(myTrustStoreAsABuffer). setPassword("password-of-your-truststore") ); NetClient client = vertx.createNetClient(options);
Another way is to provide the server certificate authority using a list of .pem files:
NetClientOptions options = new NetClientOptions(). setSsl(true). setPemTrustOptions( new PemTrustOptions(). addCertPath("/path/to/your/ca-cert.pem") ); NetClient client = vertx.createNetClient(options);
Buffer configuration is also supported:
Buffer myTrustStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/ca-cert.pem"); NetClientOptions options = new NetClientOptions(). setSsl(true). setPemTrustOptions( new PemTrustOptions(). addCertValue(myTrustStoreAsABuffer) ); NetClient client = vertx.createNetClient(options);
Specifying key/certificate for the client.
NetClientOptions options = new NetClientOptions().setSsl(true).setKeyStoreOptions( new JksOptions(). setPath("/path/to/your/client-keystore.jks"). setPassword("password-of-your-keystore") ); NetClient client = vertx.createNetClient(options);
Buffer configuration is also supported:
Buffer myKeyStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/client-keystore.jks"); JksOptions jksOptions = new JksOptions(). setValue(myKeyStoreAsABuffer). setPassword("password-of-your-keystore"); NetClientOptions options = new NetClientOptions(). setSsl(true). setKeyStoreOptions(jksOptions); NetClient client = vertx.createNetClient(options);
Keys/certificates in PKCS#12 format, usually with the .pfx or .p12 extension, can also be loaded in a similar fashion to JKS key stores:
NetClientOptions options = new NetClientOptions().setSsl(true).setPfxKeyCertOptions( new PfxOptions(). setPath("/path/to/your/client-keystore.pfx"). setPassword("password-of-your-keystore") ); NetClient client = vertx.createNetClient(options);
Buffer configuration is also supported:
Buffer myKeyStoreAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/client-keystore.pfx"); PfxOptions pfxOptions = new PfxOptions(). setValue(myKeyStoreAsABuffer). setPassword("password-of-your-keystore"); NetClientOptions options = new NetClientOptions(). setSsl(true). setPfxKeyCertOptions(pfxOptions); NetClient client = vertx.createNetClient(options);
Another way is to provide the client private key and certificate separately, using .pem files:
NetClientOptions options = new NetClientOptions().setSsl(true).setPemKeyCertOptions( new PemKeyCertOptions(). setKeyPath("/path/to/your/client-key.pem"). setCertPath("/path/to/your/client-cert.pem") ); NetClient client = vertx.createNetClient(options);
Buffer configuration is also supported:
Buffer myKeyAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/client-key.pem"); Buffer myCertAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/client-cert.pem"); PemKeyCertOptions pemOptions = new PemKeyCertOptions(). setKeyValue(myKeyAsABuffer). setCertValue(myCertAsABuffer); NetClientOptions options = new NetClientOptions(). setSsl(true). setPemKeyCertOptions(pemOptions); NetClient client = vertx.createNetClient(options);
Keep in mind that with PEM configuration the private key is not encrypted.
Self-signed certificates can be generated on the fly for testing and development purposes; do not use them in production:
SelfSignedCertificate certificate = SelfSignedCertificate.create(); NetServerOptions serverOptions = new NetServerOptions() .setSsl(true) .setKeyCertOptions(certificate.keyCertOptions()) .setTrustOptions(certificate.trustOptions()); vertx.createNetServer(serverOptions) .connectHandler(socket -> socket.end(Buffer.buffer("Hello!"))) .listen(1234, "localhost"); NetClientOptions clientOptions = new NetClientOptions() .setSsl(true) .setKeyCertOptions(certificate.keyCertOptions()) .setTrustOptions(certificate.trustOptions()); NetClient client = vertx.createNetClient(clientOptions); client.connect(1234, "localhost", ar -> { if (ar.succeeded()) { ar.result().handler(buffer -> System.out.println(buffer)); } else { System.err.println("Woops: " + ar.cause().getMessage()); } });
The client can also be configured to trust all certificates:
NetClientOptions clientOptions = new NetClientOptions() .setSsl(true) .setTrustAll(true);
Note that self-signed certificates also work for other TCP protocols like HTTPS:
SelfSignedCertificate certificate = SelfSignedCertificate.create(); vertx.createHttpServer(new HttpServerOptions() .setSsl(true) .setKeyCertOptions(certificate.keyCertOptions()) .setTrustOptions(certificate.trustOptions())) .requestHandler(req -> req.response().end("Hello!")) .listen(8080);
Revoking certificate authorities
Trust can be configured to use a certificate revocation list (CRL) for revoked certificates that should no longer be trusted. The crlPath configures the CRL to use:
NetClientOptions options = new NetClientOptions(). setSsl(true). setTrustStoreOptions(trustOptions). addCrlPath("/path/to/your/crl.pem"); NetClient client = vertx.createNetClient(options);
Buffer configuration is also supported:
Buffer myCrlAsABuffer = vertx.fileSystem().readFileBlocking("/path/to/your/crl.pem"); NetClientOptions options = new NetClientOptions(). setSsl(true). setTrustStoreOptions(trustOptions). addCrlValue(myCrlAsABuffer); NetClient client = vertx.createNetClient(options);
Configuring the Cipher suite
By default, the TLS configuration will use the list of Cipher suites of the SSL engine:
JDK SSL engine when
JdkSSLEngineOptions is used
OpenSSL engine when
OpenSSLEngineOptions is used
This Cipher suite can be configured with a suite of enabled ciphers:
NetServerOptions options = new NetServerOptions(). setSsl(true). setKeyStoreOptions(keyStoreOptions). addEnabledCipherSuite("ECDHE-RSA-AES128-GCM-SHA256"). addEnabledCipherSuite("ECDHE-ECDSA-AES128-GCM-SHA256"). addEnabledCipherSuite("ECDHE-RSA-AES256-GCM-SHA384"). addEnabledCipherSuite("ECDHE-ECDSA-AES256-GCM-SHA384"); NetServer server = vertx.createNetServer(options);
When the enabled cipher suites are defined (i.e. not empty), they take precedence over the default cipher suites of the SSL engine.
Cipher suite can be specified on the
NetServerOptions or
NetClientOptions configuration.
Configuring TLS protocol versions
By default, the TLS configuration will use the following protocol versions: SSLv2Hello, TLSv1, TLSv1.1 and TLSv1.2. Protocol versions can be configured by explicitly adding enabled protocols:
NetServerOptions options = new NetServerOptions(). setSsl(true). setKeyStoreOptions(keyStoreOptions). removeEnabledSecureTransportProtocol("TLSv1"). addEnabledSecureTransportProtocol("TLSv1.3"); NetServer server = vertx.createNetServer(options);
Protocol versions can be specified on the
NetServerOptions or
NetClientOptions configuration.
SSL engine
The engine implementation can be configured to use OpenSSL instead of the JDK implementation. OpenSSL provides better performance and lower CPU usage than the JDK engine, as well as JDK version independence.
The engine options to use are:
the getSslEngineOptions options when it is set
otherwise JdkSSLEngineOptions
NetServerOptions options = new NetServerOptions(). setSsl(true). setKeyStoreOptions(keyStoreOptions); // Use JDK SSL engine explicitly options = new NetServerOptions(). setSsl(true). setKeyStoreOptions(keyStoreOptions). setJdkSslEngineOptions(new JdkSSLEngineOptions()); // Use OpenSSL engine options = new NetServerOptions(). setSsl(true). setKeyStoreOptions(keyStoreOptions). setOpenSslEngineOptions(new OpenSSLEngineOptions());
Server Name Indication (SNI)
TCP servers can be configured to use Server Name Indication: the client indicates a server name during the TLS handshake and the server can use it to select the certificate to present. SNI is enabled with setSni(true); when a JKS key store holds multiple entries, the certificate matching the indicated server name is selected:
JksOptions keyCertOptions = new JksOptions().setPath("keystore.jks").setPassword("wibble"); NetServer netServer = vertx.createNetServer(new NetServerOptions() .setKeyStoreOptions(keyCertOptions) .setSsl(true) .setSni(true) );
PemKeyCertOptions can be configured to hold multiple entries:
PemKeyCertOptions keyCertOptions = new PemKeyCertOptions() .setKeyPaths(Arrays.asList("default-key.pem", "host1-key.pem", "etc...")) .setCertPaths(Arrays.asList("default-cert.pem", "host1-cert.pem", "etc...") ); NetServer netServer = vertx.createNetServer(new NetServerOptions() .setPemKeyCertOptions(keyCertOptions) .setSsl(true) .setSni(true) );
The client implicitly sends the connecting host as an SNI server name for Fully Qualified Domain Name (FQDN).
You can provide an explicit server name when connecting a socket
NetClient client = vertx.createNetClient(new NetClientOptions() .setTrustStoreOptions(trustOptions) .setSsl(true) ); // Connect to 'localhost' and present 'server.name' server name client.connect(1234, "localhost", "server.name", res -> { if (res.succeeded()) { System.out.println("Connected!"); NetSocket socket = res.result(); } else { System.out.println("Failed to connect: " + res.cause().getMessage()); } });
It can be used for different purposes:
present a server name different than the server host
present a server name while connecting to an IP
force presenting a server name when using a short name
Application-Layer Protocol Negotiation (ALPN)
ALPN is a TLS extension used, among other things, by HTTP/2 to negotiate the protocol during the TLS handshake. The engine used for ALPN is:
the getSslEngineOptions options when it is set
JdkSSLEngineOptions when ALPN is available for the JDK
OpenSSLEngineOptions when ALPN is available for OpenSSL
otherwise it fails
OpenSSL ALPN support
OpenSSL provides native ALPN support.
Using a proxy for client connections
The proxy can be configured in the
NetClientOptions by setting a
ProxyOptions object containing proxy type, hostname, port and optionally username and password.
Here’s an example:
NetClientOptions options = new NetClientOptions() .setProxyOptions(new ProxyOptions().setType(ProxyType.SOCKS5) .setHost("localhost").setPort(1080) .setUsername("username").setPassword("secret")); NetClient client = vertx.createNetClient(options);
The DNS resolution is always done on the proxy server. To achieve the functionality of a SOCKS4 client, it is necessary to resolve the DNS address locally.
You can use
setNonProxyHosts to configure a list of hosts bypassing the proxy. The list accepts the
* wildcard for matching domains:
NetClientOptions options = new NetClientOptions() .setProxyOptions(new ProxyOptions().setType(ProxyType.SOCKS5) .setHost("localhost").setPort(1080) .setUsername("username").setPassword("secret")) .addNonProxyHost("*.foo.com") .addNonProxyHost("localhost"); NetClient client = vertx.createNetClient(options);
Using the HA PROXY protocol
The HA PROXY protocol provides a convenient way to safely transport connection information, such as a client's address, across multiple layers of NAT or TCP proxies. It can be enabled on the server with setUseProxyProtocol(true):
NetServerOptions options = new NetServerOptions().setUseProxyProtocol(true); NetServer server = vertx.createNetServer(options); server.connectHandler(so -> { // Print the actual client address provided by the HA proxy protocol instead of the proxy address System.out.println(so.remoteAddress()); // Print the address of the proxy System.out.println(so.localAddress()); });
Writing HTTP servers and clients.
Creating an HTTP Server
The simplest way to create an HTTP server, using all default options is as follows:
HttpServer server = vertx.createHttpServer();
Configuring an HTTP server
If you don’t want the default, a server can be configured by passing in a
HttpServerOptions
instance when creating it:
HttpServerOptions options = new HttpServerOptions().setMaxWebSocketFrameSize(1000000); HttpServer server = vertx.createHttpServer(options);
Configuring an HTTP/2 server
Vert.x supports HTTP/2 over TLS
h2 and over TCP
h2c.
h2 identifies the HTTP/2 protocol when used over TLS, negotiated by Application-Layer Protocol Negotiation (ALPN)
h2c identifies the HTTP/2 protocol when used in clear text over TCP; such connections are established either with an HTTP/1.1 upgrade request or directly
To handle
h2 requests, TLS must be enabled along with
setUseAlpn:
HttpServerOptions options = new HttpServerOptions() .setUseAlpn(true) .setSsl(true) .setKeyStoreOptions(new JksOptions().setPath("/path/to/my/keystore")); HttpServer server = vertx.createHttpServer(options);
When a server accepts an HTTP/2 connection, it sends its initial settings to the client. The default initial settings for a server are:
100 for maxConcurrentStreams, as recommended by the HTTP/2 RFC
the default HTTP/2 settings values for the others
Logging network server activity
For debugging purposes, network activity can be logged.
HttpServerOptions options = new HttpServerOptions().setLogActivity(true); HttpServer server = vertx.createHttpServer(options);
See the chapter on logging network activity for a detailed explanation.
Start the Server Listening
To tell the server to listen at the host and port as specified in the options:
HttpServer server = vertx.createHttpServer(); server.listen();
Or to specify the host and port in the call to listen, ignoring what is configured in the options:
HttpServer server = vertx.createHttpServer(); server.listen(8080, "myhost.com");
The actual bind is asynchronous, so the server might not actually be listening until some time after the call to listen has returned. If you want to be notified when the server is actually listening, you can provide a handler to the listen call. For example:
HttpServer server = vertx.createHttpServer(); server.listen(8080, "myhost.com", res -> { if (res.succeeded()) { System.out.println("Server is now listening!"); } else { System.out.println("Failed to bind!"); } });
Getting notified of incoming requests
To be notified when a request arrives you need to set a
requestHandler:
HttpServer server = vertx.createHttpServer(); server.requestHandler(request -> { // Handle the request in here });
Handling requests
When a request arrives, the request handler is called with an instance of HttpServerRequest, which represents the server-side HTTP request; the handler is called when the headers of the request have been fully read.
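As a minimal sketch of such a handler (the response text is illustrative), a server can simply reply to every request:
vertx.createHttpServer().requestHandler(request -> {
  // Reply to every request with a short plain-text body
  request.response().end("Hello world");
}).listen(8080);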
Request method
Request URI
Note that this is the actual URI as passed in the HTTP request, and it’s almost always a relative URI.
The URI is as defined in Section 5.1.2 of the HTTP specification - Request-URI
Request path
For example, if the request URI was:
a/b/c/page.html?param1=abc¶m2=xyz
Then the path would be
/a/b/c/page.html
Request query
For example, if the request URI was:
a/b/c/page.html?param1=abc¶m2=xyz
Then the query would be
param1=abc¶m2=xyz
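As an illustration of the method, URI, path and query accessors described above, a request handler might log them as follows (a minimal sketch; the log messages are illustrative):
server.requestHandler(request -> {
  // Inspect the basic properties of the incoming request
  System.out.println("Method: " + request.method());
  System.out.println("URI:    " + request.uri());
  System.out.println("Path:   " + request.path());
  System.out.println("Query:  " + request.query());
});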
Request headers
This returns an instance of
MultiMap - which is like a normal Map or Hash but allows multiple
values for the same key - this is because HTTP allows multiple header values with the same key.
It also has case-insensitive keys, that means you can do the following:
MultiMap headers = request.headers(); // Get the User-Agent: System.out.println("User agent is " + headers.get("user-agent")); // You can also do this and get the same result: System.out.println("User agent is " + headers.get("User-Agent"));
Request host
For HTTP/1.x requests the
host header is returned, for HTTP/2 requests the
:authority pseudo header is returned.
Request parameters.
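For instance, query parameters can be read from the params multi-map of the request (a minimal sketch; the parameter name param1 is illustrative):
MultiMap params = request.params();
// For a URI like /page.html?param1=abc&param2=xyz
String param1 = params.get("param1"); // "abc"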
Remote address
The address of the sender of the request can be retrieved with
remoteAddress.
Absolute URI
The URI passed in an HTTP request is usually relative. If you wish to retrieve the absolute URI corresponding
to the request, you can get it with
absoluteURI
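A short sketch of both accessors (the log output is illustrative):
SocketAddress remote = request.remoteAddress();
String absolute = request.absoluteURI();
System.out.println("Request from " + remote + " for " + absolute);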
End handler
The
endHandler of the request is invoked when the entire request,
including any body has been fully read.
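For example (a minimal sketch):
request.endHandler(v -> {
  // The headers and the entire body (if any) have now been read
  System.out.println("Request fully read");
});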
Reading Data from the Request Body
Often an HTTP request contains a body that you want to read. The request handler is called when just the headers of the request have been read, so the request object does not have a body at that point; the body may be very large and is not buffered in memory by default. To receive the body, you set a handler on the request, which is called every time a chunk of the body arrives. If you want to aggregate the whole body in memory, you can do so yourself:
Buffer totalBuffer = Buffer.buffer(); request.handler(buffer -> { System.out.println("I have received a chunk of the body of length " + buffer.length()); totalBuffer.appendBuffer(buffer); }); request.endHandler(v -> { System.out.println("Full body received, length = " + totalBuffer.length()); });
This is such a common case, that Vert.x provides a
bodyHandler to do this
for you. The body handler is called once when all the body has been received:
request.bodyHandler(totalBuffer -> { System.out.println("Full body received, length = " + totalBuffer.length()); });
Streaming requests
The request object is a
ReadStream so you can pipe the request body to any
WriteStream instance.
Handling HTML forms
HTML forms can be submitted with a content type of application/x-www-form-urlencoded or multipart/form-data. For multipart forms, the attributes are encoded in the request body and are not available until the entire body has been read. To retrieve the attributes of a multipart form, call setExpectMultipart(true) before the body is read, then use formAttributes once the entire body has been read:
server.requestHandler(request -> { request.setExpectMultipart(true); request.endHandler(v -> { MultiMap formAttributes = request.formAttributes(); }); });
Form attributes have a maximum size of
8192 bytes. When the client submits a form with an attribute
size greater than this value, the file upload triggers an exception on
HttpServerRequest exception handler. You
can set a different maximum size with
setMaxFormAttributeSize.
Handling form file uploads
Vert.x can also handle file uploads encoded in a multipart request body. To receive file uploads, tell Vert.x to expect a multipart form and set an uploadHandler on the request; the handler is called once for every upload that arrives on the server:
server.requestHandler(request -> { request.setExpectMultipart(true); request.uploadHandler(upload -> { System.out.println("Got a file upload " + upload.name()); }); });
File uploads can be large, so we don't provide the entire upload in a single buffer, as that might result in memory exhaustion; instead, the upload data is received in chunks:
request.uploadHandler(upload -> { upload.handler(chunk -> { System.out.println("Received a chunk of the upload of length " + chunk.length()); }); });
The upload object is a
ReadStream so you can pipe the request body to any
WriteStream instance. See the chapter on streams for a
detailed explanation.
If you just want to upload the file to disk somewhere you can use
streamToFileSystem:
request.uploadHandler(upload -> { upload.streamToFileSystem("myuploads_directory/" + upload.filename()); });
Handling cookies
You can use getCookie to retrieve a cookie by name, or cookieMap to retrieve all the cookies; cookies added to the response are written back automatically when the response headers are written.
Same Site Cookies let servers require that a cookie shouldn’t be sent with cross-site (where Site is defined by the
registrable domain) requests, which provides some protection against cross-site request forgery attacks. This kind
of cookie is enabled using the setter:
setSameSite.
Same site cookies can have one of 3 values:
None - The browser will send cookies with both cross-site and same-site requests.
Strict - The browser will only send cookies for same-site requests (requests originating from the site that set the cookie). If the request originated from a different URL than the current location, none of the cookies tagged with the Strict attribute will be included.
Lax - Same-site cookies are withheld on cross-site subrequests, such as calls to load images or frames, but will be sent when a user navigates to the URL from an external site; for example, by following a link.
Here’s an example of querying and adding cookies:
Cookie someCookie = request.getCookie("mycookie"); String cookieValue = someCookie.getValue(); // Do something with cookie... // Add a cookie - this will get written back in the response automatically request.response().addCookie(Cookie.cookie("othercookie", "somevalue"));
Handling compressed body
Vert.x can handle compressed body payloads which are encoded by the client with the deflate or gzip algorithms.
To enable decompression set
setDecompressionSupported on the
options when creating the server.
By default, decompression is disabled.
Receiving custom HTTP/2 frames
HTTP/2 is a framed protocol; besides the frames of the HTTP request/response model, custom frames can be received by setting a customFrameHandler on the request, which is called every time a custom frame arrives:
request.customFrameHandler(frame -> { System.out.println("Received a frame type=" + frame.type() + " payload" + frame.payload().toString()); });
HTTP/2 frames are not subject to flow control - the frame handler will be called immediately when a custom frame is received, whether or not the request is paused
Sending back responses
The server response object is an instance of
HttpServerResponse and is obtained from the
request with
response.
You use the response object to write a response back to the HTTP client.
Setting status code and message
The default HTTP status code for a response is 200 (OK). You can set a different code with setStatusCode and a custom status message with setStatusMessage; if you don't set a message, the default one corresponding to the status code is used.
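A minimal sketch of setting both (the values are illustrative):
HttpServerResponse response = request.response();
response.setStatusCode(404).setStatusMessage("Resource not found");
response.end();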
Writing HTTP responses
To write data to an HTTP response you use write. It can be invoked multiple times before the response is ended, in a few ways:
With a single buffer:
HttpServerResponse response = request.response(); response.write(buffer);
With a string. In this case the string will be encoded using UTF-8 and the result written to the wire.
HttpServerResponse response = request.response(); response.write("hello world!");
With a string and an encoding. In this case the string will be encoded using the specified encoding and the result written to the wire.
HttpServerResponse response = request.response(); response.write("hello world!", "UTF-16");
Writing to a response is asynchronous and always returns immediately after the write has been queued.
Ending HTTP responses
Once you have finished with the HTTP response you should end it. This can be done in several ways:
With no arguments, the response is simply ended.
HttpServerResponse response = request.response(); response.end();
It can also be called with a string or buffer, which is the same as calling write with that string or buffer followed by end with no arguments:
HttpServerResponse response = request.response(); response.end("hello world!");
Closing the underlying connection
You can close the underlying TCP connection with close. Non keep-alive connections are automatically closed by Vert.x when the response is ended; keep-alive connections are not.
Setting response headers
HttpServerResponse response = request.response(); MultiMap headers = response.headers(); headers.set("content-type", "text/html"); headers.set("other-header", "wibble");
HttpServerResponse response = request.response(); response.putHeader("content-type", "text/html").putHeader("other-header", "wibble");
Headers must all be added before any parts of the response body are written.
Chunked HTTP responses and trailers
Vert.x supports HTTP chunked transfer encoding, which allows the response body to be written in chunks; this is normally used when a large response body is streamed to a client and its total size is not known in advance. You put the response into chunked mode with setChunked(true). In chunked mode you can also write HTTP response trailers, which are sent in the last chunk of the response. They can be added directly to the response trailers:
HttpServerResponse response = request.response(); response.setChunked(true); MultiMap trailers = response.trailers(); trailers.set("X-wibble", "woobble").set("X-quux", "flooble");
Or use
putTrailer.
HttpServerResponse response = request.response(); response.setChunked(true); response.putTrailer("X-wibble", "woobble").putTrailer("X-quux", "flooble");
Serving files directly from disk or the classpath
If you were writing a web server, one way to serve a file from disk would be to open it as an
AsyncFile
and pipe it to the HTTP response. Alternatively, Vert.x provides sendFile, which serves a file from disk or the classpath to the HTTP response in one operation:
vertx.createHttpServer().requestHandler(request -> { String file = ""; if (request.path().equals("/")) { file = "index.html"; } else if (!request.path().contains("..")) { file = request.path(); } request.response().sendFile("web/" + file); }).listen(8080);
Sending a file is asynchronous and may not complete until some time after the call has returned. If you want to
be notified when the file has been completely written, you can pass a handler to sendFile. You can also serve only a portion of a file by passing an offset and, optionally, a length to sendFile:
vertx.createHttpServer().requestHandler(request -> { long offset = 0; try { offset = Long.parseLong(request.getParam("start")); } catch (NumberFormatException e) { // error handling... } long end = Long.MAX_VALUE; try { end = Long.parseLong(request.getParam("end")); } catch (NumberFormatException e) { // error handling... } request.response().sendFile("web/mybigfile.txt", offset, end); }).listen(8080);
If the length is omitted, the file is sent from the offset until its end:
vertx.createHttpServer().requestHandler(request -> { long offset = 0; try { offset = Long.parseLong(request.getParam("start")); } catch (NumberFormatException e) { // error handling... } request.response().sendFile("web/mybigfile.txt", offset); }).listen(8080);
Piping responses
The server response is a
WriteStream so you can pipe to it from any
ReadStream, e.g.
AsyncFile,
NetSocket,
WebSocket or
HttpServerRequest.
Here’s an example which echoes the request body back in the response for any PUT methods. It uses a pipe for the body, so it will work even if the HTTP request body is much larger than can fit in memory at any one time:
vertx.createHttpServer().requestHandler(request -> { HttpServerResponse response = request.response(); if (request.method() == HttpMethod.PUT) { response.setChunked(true); request.pipeTo(response); } else { response.setStatusCode(400).end(); } }).listen(8080);
You can also use the
send method to send a
ReadStream.
Sending a stream is a pipe operation, however as this is a method of
HttpServerResponse, it
will also take care of chunking the response when the
content-length is not set.
vertx.createHttpServer().requestHandler(request -> { HttpServerResponse response = request.response(); if (request.method() == HttpMethod.PUT) { response.send(request); } else { response.setStatusCode(400).end(); } }).listen(8080);
Writing HTTP/2 frames
HTTP/2 is a framed protocol with various frames for the HTTP request/response model. The protocol allows other kind of frames to be sent and received.
To send such frames, you can use the
writeCustomFrame on the response.
Here’s an example:
int frameType = 40; int frameStatus = 10; Buffer payload = Buffer.buffer("some data"); // Sending a frame to the client response.writeCustomFrame(frameType, frameStatus, payload);
These frames are sent immediately and are not subject to flow control - when such a frame is sent, it may be sent before other DATA frames.
Stream reset
HTTP/1.x does not allow a clean reset of a request or a response stream. HTTP/2 supports stream reset at any time during the request/response:
request.response().reset();
By default, the
NO_ERROR (0) error code is sent, but another code can be sent instead:
request.response().reset(8);
The HTTP/2 specification defines the list of error codes one can use.
The request and response are notified of stream reset events through their respective exception handlers:
request.response().exceptionHandler(err -> { if (err instanceof StreamResetException) { StreamResetException reset = (StreamResetException) err; System.out.println("Stream reset " + reset.getCode()); } });
Server push
Server push is a new feature of HTTP/2 that enables sending multiple responses in parallel for a single client request.
When a server processes a request, it can push a request/response to the client:
HttpServerResponse response = request.response(); // Push main.js to the client response.push(HttpMethod.GET, "/main.js", ar -> { if (ar.succeeded()) { // The server is ready to push the response HttpServerResponse pushedResponse = ar.result(); // Send main.js response pushedResponse. putHeader("content-type", "application/json"). end("alert(\"Push response hello\")"); } else { System.out.println("Could not push client resource " + ar.cause()); } });
Handling exceptions
You can set an
exceptionHandler to receive any
exceptions that happens before the connection is passed to the
requestHandler
or to the
webSocketHandler, e.g. during the TLS handshake.
Handling invalid requests
Vert.x will handle invalid HTTP requests and provides a default handler that will handle the common case
appropriately, e.g. it responds with
REQUEST_HEADER_FIELDS_TOO_LARGE when a request header is too long.
You can set your own
invalidRequestHandler to process
invalid requests. Your implementation can handle specific cases and delegate other cases to
HttpServerRequest.DEFAULT_INVALID_REQUEST_HANDLER.
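A minimal sketch of a custom handler that deals with one case itself and delegates the rest to the default handler (the condition and status code are illustrative):
server.invalidRequestHandler(invalidRequest -> {
  if (invalidRequest.uri() != null && invalidRequest.uri().length() > 4096) {
    // Handle this specific case ourselves
    invalidRequest.response().setStatusCode(414).end();
  } else {
    // Delegate everything else to the default handler
    HttpServerRequest.DEFAULT_INVALID_REQUEST_HANDLER.handle(invalidRequest);
  }
});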
HTTP Compression
Vert.x supports HTTP compression out of the box: when enabled with setCompressionSupported, response bodies are compressed before being sent back to clients that advertise support via the Accept-Encoding header. A response can opt out of compression by setting its content encoding header to identity:
request.response() .putHeader(HttpHeaders.CONTENT_ENCODING, HttpHeaders.IDENTITY) .sendFile("/path/to/image.jpg");
Creating an HTTP client
You create an
HttpClient instance with default options as follows:
HttpClient client = vertx.createHttpClient();
If you want to configure options for the client, you create it as follows:
HttpClientOptions options = new HttpClientOptions().setKeepAlive(false); HttpClient client = vertx.createHttpClient(options);
Configuring an HTTP/2 client
To configure the client for HTTP/2 over TLS (h2), TLS must be enabled along with ALPN:
HttpClientOptions options = new HttpClientOptions(). setProtocolVersion(HttpVersion.HTTP_2). setSsl(true). setUseAlpn(true). setTrustAll(true); HttpClient client = vertx.createHttpClient(options);
For
h2c requests, TLS must be disabled, the client will do an HTTP/1.1 requests and try an upgrade to HTTP/2:
HttpClientOptions options = new HttpClientOptions().setProtocolVersion(HttpVersion.HTTP_2); HttpClient client = vertx.createHttpClient(options);
Logging network client activity
For debugging purposes, network activity can be logged.
HttpClientOptions options = new HttpClientOptions().setLogActivity(true); HttpClient client = vertx.createHttpClient(options);
See the chapter on logging network activity for a detailed explanation.
Making requests
The http client is very flexible and there are various ways you can make requests with it.
The first step when making a request is obtaining an HTTP connection to the remote server:
client.request(HttpMethod.GET,8080, "myserver.mycompany.com", "/some-uri", ar1 -> { if (ar1.succeeded()) { // Connected to the server } });
The client will connect to the remote server or reuse an available connection from the client connection pool.
Default host and port
Often you want to make many requests to the same host/port with an http client. To avoid you repeating the host/port every time you make a request you can configure the client with a default host/port:
HttpClientOptions options = new HttpClientOptions().setDefaultHost("wibble.com"); // Can also set default port if you want... HttpClient client = vertx.createHttpClient(options); client.request(HttpMethod.GET, "/some-uri", ar1 -> { if (ar1.succeeded()) { HttpClientRequest request = ar1.result(); request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse response = ar2.result(); System.out.println("Received response with status code " + response.statusCode()); } }); } });
Writing request headers
You can write headers to a request using the
HttpHeaders as follows:
HttpClient client = vertx.createHttpClient(); // Write some headers using the headers multi-map MultiMap headers = HttpHeaders.set("content-type", "application/json").set("other-header", "foo"); client.request(HttpMethod.GET, "some-uri", ar1 -> { if (ar1.succeeded()) { HttpClientRequest request = ar1.result(); request.headers().addAll(headers); request.send(ar2 -> { HttpClientResponse response = ar2.result(); System.out.println("Received response with status code " + response.statusCode()); }); } });
The headers are an instance of
MultiMap which provides operations for adding, setting and removing
entries. HTTP headers allow more than one value for a specific key. You can also write headers directly on the request using putHeader:
request.putHeader("content-type", "application/json") .putHeader("other-header", "foo");
If you wish to write headers to the request you must do so before any part of the request body is written.
Writing request and processing response
The
HttpClientRequest
request methods connect to the remote server or reuse an existing connection. The request instance obtained is pre-populated with some data, such as the host or the request URI, but you still need to send this request to the server.
You can call
send to send a request such as an HTTP
GET and process the asynchronous
HttpClientResponse.
client.request(HttpMethod.GET,8080, "myserver.mycompany.com", "/some-uri", ar1 -> { if (ar1.succeeded()) { HttpClientRequest request = ar1.result(); // Send the request and process the response request.send(ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); System.out.println("Received response with status code " + response.statusCode()); } else { System.out.println("Something went wrong " + ar.cause().getMessage()); } }); } });
You can also send the request with a body.
client.request(HttpMethod.GET,8080, "myserver.mycompany.com", "/some-uri", ar1 -> { if (ar1.succeeded()) { HttpClientRequest request = ar1.result(); // Send the request and process the response request.send("Hello World", ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); System.out.println("Received response with status code " + response.statusCode()); } else { System.out.println("Something went wrong " + ar.cause().getMessage()); } }); } });
Or with a buffer body:
request.send(Buffer.buffer("Hello World"), ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); System.out.println("Received response with status code " + response.statusCode()); } else { System.out.println("Something went wrong " + ar.cause().getMessage()); } });
You can also send the request with a stream body; if the Content-Length header was not previously set, the request is sent with a chunked transfer encoding:
request .putHeader(HttpHeaders.CONTENT_LENGTH, "1000") .send(stream, ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); System.out.println("Received response with status code " + response.statusCode()); } else { System.out.println("Something went wrong " + ar.cause().getMessage()); } });
Streaming Request body
The
send method sends the request at once. Sometimes you'll want low-level control over how you write the request body.
The
HttpClientRequest can be used to write the request body.
Here are some examples of writing a POST request with a body:
HttpClient client = vertx.createHttpClient(); client.request(HttpMethod.POST, "some-uri") .onSuccess(request -> { request.response().onSuccess(response -> { System.out.println("Received response with status code " + response.statusCode()); }); // Now do stuff with the request request.putHeader("content-length", "1000"); request.putHeader("content-type", "text/plain"); request.write(body); // Make sure the request is ended when you're done with it request.end(); }); // Or fluently: client.request(HttpMethod.POST, "some-uri") .onSuccess(request -> { request .response(ar -> { if (ar.succeeded()) { HttpClientResponse response = ar.result(); System.out.println("Received response with status code " + response.statusCode()); } }) .putHeader("content-length", "1000") .putHeader("content-type", "text/plain") .end(body); });
Methods exist to write strings in UTF-8 encoding and in any specific encoding and to write buffers:
request.write("some data"); // Write string encoded in specific encoding request.write("some other data", "UTF-16"); // Write a buffer Buffer buffer = Buffer.buffer(); buffer.appendInt(123).appendLong(245l); request.write(buffer);
If you are just writing a single string or buffer to the HTTP request you can write it and end the request in a
single call to the
end function.
request.end("some simple data"); // Write buffer and end the request (send it) in a single call Buffer
If you are using HTTP chunking, a Content-Length header is not required, so you do not have to calculate the size up-front.
Ending streamed HTTP requests
request.end("some-data"); // End it with a buffer Buffer buffer = Buffer.buffer().appendFloat(12.3f).appendInt(321); request.end(buffer);
Using the request as a stream
An
HttpClientRequest instance is also a
WriteStream instance.
You can pipe to it from any
ReadStream instance.
For, example, you could pipe a file on disk to a http request body as follows:
request.setChunked(true); file.pipeTo(request);
Chunked HTTP requests
Vert.x supports HTTP chunked transfer encoding for requests, normally used when a large request body is streamed to the server and its size is not known in advance. Put the request into chunked mode with setChunked(true); each call to write then produces a new chunk on the wire and there is no need to set the Content-Length up-front:
request.setChunked(true); // Write some chunks for (int i = 0; i < 10; i++) { request.write("this-is-chunk-" + i); } request.end();
Request timeouts
You can set a timeout for a specific HTTP request using setTimeout on the request, or on the RequestOptions when creating the request.
If the request does not return any data within the timeout period an exception will be passed to the exception handler (if provided) and the request will be closed.
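A minimal sketch (the timeout value and log message are illustrative):
request.setTimeout(5000); // fail the request if no response data arrives within 5 seconds
request.exceptionHandler(err -> {
  System.out.println("Request failed or timed out: " + err.getMessage());
});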
Writing HTTP/2 frames
HTTP/2 is a framed protocol with various frames for the HTTP request/response model. The protocol allows other kind of frames to be sent and received.
int frameType = 40; int frameStatus = 10; Buffer payload = Buffer.buffer("some data"); // Sending a frame to the server request.writeCustomFrame(frameType, frameStatus, payload);
Stream reset events are reported to the request exception handler:
request.exceptionHandler(err -> { if (err instanceof StreamResetException) { StreamResetException reset = (StreamResetException) err; System.out.println("Stream reset " + reset.getCode()); } });
Handling HTTP responses
You receive an instance of HttpClientResponse in the handler you specify when sending the request. You can query the status code and the status message of the response with statusCode and statusMessage:
request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse response = ar2.result(); // the status code - e.g. 200 or 404 System.out.println("Status code is " + response.statusCode()); // the status message e.g. "OK" or "Not Found". System.out.println("Status message is " + response.statusMessage()); } }); // Similar to above, set a completion handler and end the request request .response(ar2 -> { if (ar2.succeeded()) { HttpClientResponse response = ar2.result(); // the status code - e.g. 200 or 404 System.out.println("Status code is " + response.statusCode()); // the status message e.g. "OK" or "Not Found". System.out.println("Status message is " + response.statusMessage()); } }) .end();
Using the response as a stream
The
HttpClientResponse instance is also a
ReadStream which means
you can pipe it to any
WriteStream instance.
Response headers and trailers
String contentType = response.headers().get("content-type"); String contentLength = response.headers().get("content-length");
Chunked HTTP responses can also contain trailers - these are sent in the last chunk of the response body.
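Since trailers arrive with the last chunk, they are typically read once the response has been fully received, for example in the end handler (a minimal sketch; the trailer name is illustrative):
response.endHandler(v -> {
  // Trailers are available once the whole response body has been read
  String trailer = response.trailers().get("X-wibble");
  System.out.println("Trailer X-wibble: " + trailer);
});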
Reading the response body
The response handler is called when the headers of the response have been read from the wire. If the response has a body, it might arrive some time after the headers. To receive the body, set a handler on the response which is called as each chunk arrives:
client.request(HttpMethod.GET, "some-uri", ar1 -> { if (ar1.succeeded()) { HttpClientRequest request = ar1.result(); request.send(ar2 -> { HttpClientResponse response = ar2.result(); response.handler(buffer -> { System.out.println("Received a part of the response body: " + buffer); }); }); } });
If you know the response body is not very large and want to aggregate it all in memory before handling it, you can either aggregate it yourself:
request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse response = ar2.result(); // Create an empty buffer Buffer totalBuffer = Buffer.buffer(); response.handler(buffer -> { System.out.println("Received a part of the response body: " + buffer.length()); totalBuffer.appendBuffer(buffer); }); response.endHandler(v -> { // Now all the body has been read System.out.println("Total response body length is " + totalBuffer.length()); }); } });
Or you can use the convenience
body which
is called with the entire body when the response has been fully read:
request.send(ar1 -> { if (ar1.succeeded()) { HttpClientResponse response = ar1.result(); response.body(ar2 -> { if (ar2.succeeded()) { Buffer body = ar2.result(); // Now all the body has been read System.out.println("Total response body length is " + body.length()); } }); } });
Response end handler
The response
endHandler is called when the entire response body has been read
or immediately after the headers have been read and the response handler has been called if there is no body.
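For example (a minimal sketch):
response.endHandler(v -> {
  System.out.println("Response fully received");
});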
Request and response composition
The client interface is very simple and follows this pattern:
request a connection
send or write/end the request to the server
handle the beginning of the
HttpClientResponse
process the response events
You can use Vert.x future composition methods to make your code simpler; however, the API is event driven and you need to understand it, otherwise you might experience data races (i.e. losing events, leading to corrupted data).
The client API intentionally does not return a
Future<HttpClientResponse> because setting a completion
handler on the future can be racy when this is set outside of the event-loop.
Future<HttpClientResponse> get = client.get("some-uri"); // Assuming we have a client that returns a future response // assuming this is *not* on the event-loop // introduce a potential data race for the sake of this example Thread.sleep(100); get.onSuccess(response -> { // Response events might have happened already response.body(ar -> { }); });
Confining the
HttpClientRequest usage within a verticle is the easiest solution as the Verticle
will ensure that events are processed sequentially avoiding races.
vertx.deployVerticle(() -> new AbstractVerticle() { public void start() { HttpClient client = vertx.createHttpClient(); Future<HttpClientRequest> future = client.request(HttpMethod.GET, "some-uri"); } }, new DeploymentOptions());
When you are interacting with the client possibly outside a verticle then you can safely perform composition as long as you do not delay the response events, e.g processing directly the response on the event-loop.
Future<JsonObject> future = client .request(HttpMethod.GET, "some-uri") .compose(request -> request .send() .compose(response -> { // Process the response on the event-loop which guarantees no races if (response.statusCode() == 200 && response.getHeader(HttpHeaders.CONTENT_TYPE).equals("application/json")) { return response .body() .map(buffer -> buffer.toJsonObject()); } else { return Future.failedFuture("Incorrect HTTP response"); } })); // Listen to the composed final json result future.onSuccess(json -> { System.out.println("Received json result " + json); }).onFailure(err -> { System.out.println("Something went wrong " + err.getMessage()); });
If you need to delay the response processing then you need to
pause the response or use a
pipe; this
might be necessary when another asynchronous operation is involved.
Future<Void> future = client .request(HttpMethod.GET, "some-uri") .compose(request -> request .send() .compose(response -> { // Process the response on the event-loop which guarantees no races if (response.statusCode() == 200) { // Create a pipe, this pauses the response Pipe<Buffer> pipe = response.pipe(); // Write the file on the disk return fileSystem .open("/some/large/file", new OpenOptions().setWrite(true)) .onFailure(err -> pipe.close()) .compose(file -> pipe.to(file)); } else { return Future.failedFuture("Incorrect HTTP response"); } }));
Reading cookies from the response
You can retrieve the list of cookies from a response using cookies. Alternatively, you can parse the Set-Cookie headers yourself in the response.
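A minimal sketch of reading the Set-Cookie values returned by cookies():
List<String> cookies = response.cookies();
for (String cookie : cookies) {
  // Each entry is the raw value of a Set-Cookie header
  System.out.println("Set-Cookie: " + cookie);
}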
30x redirection handling
The client can be configured to follow HTTP redirections provided by the
Location response header when the client receives:
a
301,
302,
307 or
308 status code along with an HTTP GET or HEAD method
a
303 status code; in addition, the redirected request performs an HTTP GET method
Here's an example:
client.request(HttpMethod.GET, "some-uri", ar1 -> { if (ar1.succeeded()) { HttpClientRequest request = ar1.result(); request.setFollowRedirects(true); request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse response = ar2.result(); System.out.println("Received response with status code " + response.statusCode()); } }); } });
The maximum redirects is
16 by default and can be changed with
setMaxRedirects.
HttpClient client = vertx.createHttpClient( new HttpClientOptions() .setMaxRedirects(32));
The default redirection policy can be changed with a custom implementation:
client.redirectHandler(response -> { // Only follow 303 code if (response.statusCode() == 303) { // Compute the redirect URI String absoluteURI = resolveURI(response.request().absoluteURI(), response.getHeader("Location")); // Create a new ready to use request that the client will use return Future.succeededFuture(new RequestOptions().setAbsoluteURI(absoluteURI)); } // We don't redirect return null; });
The policy handles the original
HttpClientResponse received and returns either
null
or a
Future<HttpClientRequest>.
When null is returned, the original response is processed; when a future is returned, the request is sent upon its successful completion. Most of the original request settings will be propagated to the new request:
request headers, unless you have set some headers
request body unless the returned request uses a
GET method
response handler
request exception handler
request timeout
100-Continue handling
According to the HTTP 1.1 specification, a client can set an Expect: 100-Continue header and send the request headers before sending the rest of the request body; the server can then respond with an interim 100 Continue response to tell the client to send the body. The continueHandler on the client request is called when the server sends back this interim response:
client.request(HttpMethod.PUT, "some-uri") .onSuccess(request -> { request.response().onSuccess(response -> { System.out.println("Received response with status code " + response.statusCode()); }); request.putHeader("Expect", "100-Continue"); request.continueHandler(v -> { // OK to send rest of body request.write("Some data"); request.write("Some more data"); request.end(); }); request.sendHead(); });
On the server side, a Vert.x HTTP server can be configured to automatically send back 100 Continue interim responses with setHandle100ContinueAutomatically, or the application can check the Expect header itself and call writeContinue on the response before reading the body.
Creating HTTP tunnels
HTTP tunnels can be created with connect; the handler will be called after the HTTP response header is received, and the socket will then be ready for tunneling and will send and receive buffers.
connect works like
send, but it reconfigures the transport to exchange
raw buffers.
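A minimal sketch of establishing a tunnel with the connect method described above (the URI and payload are illustrative):
client.request(HttpMethod.CONNECT, "some-uri")
  .onSuccess(request -> {
    request.connect(ar -> {
      if (ar.succeeded()) {
        HttpClientResponse response = ar.result();
        if (response.statusCode() == 200) {
          // The tunnel is established, raw buffers are exchanged on the socket
          NetSocket socket = response.netSocket();
          socket.write(Buffer.buffer("some raw bytes"));
        }
      }
    });
  });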
Client push
Server push is a new feature of HTTP/2 that enables sending multiple responses in parallel for a single client request.
A push handler can be set on a request to receive the request/response pushed by the server:
client.request(HttpMethod.GET, "/index.html") .onSuccess(request -> { request .response().onComplete(response -> { // Process index.html response }); // Set a push handler to be aware of any resource pushed by the server request.pushHandler(pushedRequest -> { // A resource is pushed for this request System.out.println("Server pushed " + pushedRequest.path()); // Set an handler for the response pushedRequest.response().onComplete(pushedResponse -> { System.out.println("The response for the pushed request"); }); }); // End the request request.end(); });
If the client does not want to receive a pushed request, it can reset the stream:
request.pushHandler(pushedRequest -> { if (pushedRequest.path().equals("/main.js")) { pushedRequest.reset(); } else { // Handle it } });
When no handler is set, any stream pushed will be automatically cancelled by the client with
a stream reset (
8 error code).
Receiving custom HTTP/2 frames
Custom HTTP/2 frames sent by the server can be received with the response customFrameHandler, which is called every time a custom frame arrives:
response.customFrameHandler(frame -> { System.out.println("Received a frame type=" + frame.type() + " payload" + frame.payload().toString()); });
Enabling compression on the client
The HTTP client also supports compression out of the box: when enabled, the client advertises the supported encodings (deflate and gzip) in the Accept-Encoding header and transparently decompresses compressed response bodies.
HTTP/1.x pooling and keep alive
HTTP keep alive allows connections to be reused for more than one request; the client maintains a pool of connections per host/port to make efficient use of keep alive.
HTTP/1.1 pipelining and HTTP/2 multiplexing
For HTTP/2 the client uses a single connection per server by default and multiplexes streams over it; the multiplexing limit and the connection pool size can be configured:
HttpClientOptions clientOptions = new HttpClientOptions(). setHttp2MultiplexingLimit(10). setHttp2MaxPoolSize(3); // Uses up to 3 connections and up to 10 streams per connection HttpClient.
HTTP connections.
Server connections
The
connection method returns the request connection on the server:
HttpConnection connection = request.connection();
A connection handler can be set on the server to be notified of any incoming connection:
HttpServer server = vertx.createHttpServer(http2Options); server.connectionHandler(connection -> { System.out.println("A client connected"); });
Client connections
The
connection method returns the request connection on the client:
HttpConnection connection = request.connection();
A connection handler can be set on the client to be notified when a connection has been established:
client.connectionHandler(connection -> { System.out.println("Connected to the server"); });
Connection settings
The HTTP/2 settings for a connection can be updated at any time with updateSettings:
connection.updateSettings(new Http2Settings().setMaxConcurrentStreams(100));
As the remote side should acknowledge on reception of the settings update, it’s possible to give a callback to be notified of the acknowledgment:
connection.updateSettings(new Http2Settings().setMaxConcurrentStreams(100), ar -> { if (ar.succeeded()) { System.out.println("The settings update has been acknowledged "); } });
Conversely the
remoteSettingsHandler is notified
when the new remote settings are received:
connection.remoteSettingsHandler(settings -> { System.out.println("Received new settings"); });
Connection ping
HTTP/2 connection ping is useful for determining the connection round-trip time or checking the connection
validity:
ping sends a PING frame to the remote endpoint:
Buffer data = Buffer.buffer(); for (byte i = 0;i < 8;i++) { data.appendByte(i); } connection.ping(data, pong -> { System.out.println("Remote side replied"); });
Vert.x will automatically send an acknowledgement when a PING frame is received; a handler can be set to be notified of each ping received:
connection.pingHandler(ping -> { System.out.println("Got pinged by remote side"); });
The handler is only notified; the acknowledgement is sent regardless. This feature is aimed at implementing protocols on top of HTTP/2.
Connection shutdown and go away
Calling shutdown sends a GOAWAY frame to the remote side of the connection, asking it to stop creating streams: a client will stop making new requests and a server will stop pushing responses. Once all current streams are closed, the connection is closed. The goAwayHandler is called when a GOAWAY frame is received:
connection.goAwayHandler(ga -> { System.out.println("Received a go away frame"); });
Connection close
Calling close closes the connection:
it closes the socket for HTTP/1.x
for HTTP/2 it performs a shutdown with no delay; the GOAWAY frame will still be sent before the connection is closed.
The
closeHandler notifies when a connection is closed.
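For example (a minimal sketch):
connection.closeHandler(v -> {
  System.out.println("Connection closed");
});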
HttpClient usage
The HttpClient can be used in a verticle or embedded. When used in a verticle, the verticle should use its own client instance.
Server sharing
When several HTTP servers listen on the same port, Vert.x orchestrates request handling using a round-robin strategy between the server instances deployed on that port. A client issuing periodic requests sees the responses distributed across those instances:
vertx.createHttpClient().request(HttpMethod.GET, 8080, "localhost", "/", ar1 -> { if (ar1.succeeded()) { HttpClientRequest request = ar1.result(); request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse resp = ar2.result(); resp.bodyHandler(body -> { System.out.println(body.toString()); }); } }); } });
Using HTTPS with Vert.x
Vert.x HTTP servers and clients can be configured to use HTTPS in exactly the same way as net servers. SSL can also be enabled or disabled per request with RequestOptions, or when specifying a scheme with the setAbsoluteURI method.
client.request(new RequestOptions() .setHost("localhost") .setPort(8080) .setURI("/") .setSsl(true), ar1 -> { if (ar1.succeeded()) { HttpClientRequest request = ar1.result(); request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse response = ar2.result(); System.out.println("Received response with status code " + response.statusCode()); } }); } });
setting the value to
falsewill disable SSL/TLS even if the client is configured to use SSL/TLS
setting the value to
truewill enable SSL/TLS even if the client is configured to not use SSL/TLS, the actual client SSL/TLS (such as trust, key/certificate, ciphers, ALPN, …) will be reused
Likewise
setAbsoluteURI scheme
also overrides the default client setting.
WebSockets
WebSockets are a web technology that allows a full duplex socket-like connection between HTTP servers and HTTP clients (typically browsers).
Vert.x supports WebSockets on both the client and server-side.
WebSockets on the server
There are two ways of handling WebSockets on the server side.
WebSocket handler
The first way involves providing a
webSocketHandler
on the server instance.
When a WebSocket connection is made to the server, the handler will be called, passing in an instance of
ServerWebSocket.
server.webSocketHandler(webSocket -> { System.out.println("Connected!"); });
You can choose to reject a WebSocket, for example depending on its path:
server.webSocketHandler(webSocket -> { if (webSocket.path().equals("/myapi")) { webSocket.reject(); } else { // Do something } });
You can perform an asynchronous handshake by calling
setHandshake with a
Future:
server.webSocketHandler(webSocket -> { Promise<Integer> promise = Promise.promise(); webSocket.setHandshake(promise.future()); authenticate(webSocket.headers(), ar -> { if (ar.succeeded()) { // Terminate the handshake with the status code 101 (Switching Protocol) // Reject the handshake with 401 (Unauthorized) promise.complete(ar.succeeded() ? 101 : 401); } else { // Will send a 500 error promise.fail(ar.cause()); } }); });
Upgrading to WebSocket
The second way of handling WebSockets is to handle the HTTP Upgrade request that was sent from the client, and
call
toWebSocket on the server request.
server.requestHandler(request -> { if (request.path().equals("/myapi")) { Future<ServerWebSocket> fut = request.toWebSocket(); fut.onSuccess(ws -> { // Do something }); } else { // Reject request.response().setStatusCode(400).end(); } });
The server WebSocket
The
ServerWebSocket instance enables you to retrieve the
headers,
path,
query and
URI of the HTTP request of the WebSocket handshake.
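A minimal sketch of inspecting those properties when a WebSocket connects (the log messages are illustrative):
server.webSocketHandler(webSocket -> {
  System.out.println("Path:  " + webSocket.path());
  System.out.println("Query: " + webSocket.query());
  MultiMap headers = webSocket.headers();
  System.out.println("Handshake headers: " + headers.size());
});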
WebSockets on the client
The Vert.x
HttpClient supports WebSockets.
You can connect a WebSocket to a server using one of the
webSocket operations and
providing a handler.
client.webSocket("/some-uri", res -> { if (res.succeeded()) { WebSocket ws = res.result(); System.out.println("Connected!"); } });
By default, the client sets the origin header to the server host. Some servers will refuse such a request; you can configure the client to not set this header:
WebSocketConnectOptions options = new WebSocketConnectOptions() .setHost(host) .setPort(port) .setURI(requestUri) .setAllowOriginHeader(false); client.webSocket(options, res -> { if (res.succeeded()) { WebSocket ws = res.result(); System.out.println("Connected!"); } });
You can also set a different header:
WebSocketConnectOptions options = new WebSocketConnectOptions() .setHost(host) .setPort(port) .setURI(requestUri) .addHeader(HttpHeaders.ORIGIN, origin); client.webSocket(options, res -> { if (res.succeeded()) { WebSocket ws = res.result(); System.out.println("Connected!"); } });
Writing messages to WebSockets
If you wish to write a single WebSocket message to the WebSocket you can do this with
writeBinaryMessage or
writeTextMessage :
Buffer buffer = Buffer.buffer().appendInt(123).appendFloat(1.23f); webSocket.writeBinaryMessage(buffer); // Write a simple text message String message = "hello"; webSocket.writeTextMessage(message);
Writing frames to WebSockets
A WebSocket message can be composed of multiple frames. In this case the first frame is either a binary or text frame, followed by zero or more continuation frames; the last frame in the message must be marked as final:
WebSocketFrame frame1 = WebSocketFrame.binaryFrame(buffer1, false); webSocket.writeFrame(frame1); WebSocketFrame frame2 = WebSocketFrame.continuationFrame(buffer2, false); webSocket.writeFrame(frame2); // Write the final frame WebSocketFrame frame3 = WebSocketFrame.continuationFrame(buffer3, true); webSocket.writeFrame(frame3);
In many cases you just want to send a WebSocket message that consists of a single final frame, so the shortcut methods writeFinalBinaryFrame and writeFinalTextFrame are provided:
webSocket.writeFinalTextFrame("Geronimo!"); // Send a WebSocket message consisting of a single final binary frame: Buffer buff = Buffer.buffer().appendInt(12).appendString("foo"); webSocket.writeFinalBinaryFrame(buff);
Reading frames from WebSockets
To read frames from a WebSocket you use the
frameHandler.
The frame handler will be called with instances of
WebSocketFrame when a frame arrives,
for example:
webSocket.frameHandler(frame -> { System.out.println("Received a frame of size!"); });
Piping WebSockets
The
WebSocket instance is also a
ReadStream and a
WriteStream so it can be used with pipes.
When using a WebSocket as a write stream or a read stream, it can only be used with WebSocket connections that use binary frames that are not split over multiple frames.
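As a minimal sketch within those constraints, a server can echo every binary frame back to the client by piping the WebSocket to itself:
server.webSocketHandler(webSocket -> {
  // Echo every frame back to the sender
  webSocket.pipeTo(webSocket);
});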
Event bus handlers
Every WebSocket automatically registers two handlers on the event bus; when any data is received by one of these handlers, it is written to the WebSocket. These are local subscriptions, not routed on the cluster.
This enables you to write data to a WebSocket which is potentially in a completely different verticle sending data to the address of that handler.
The addresses of the handlers are given by
binaryHandlerID and
textHandlerID.
Using a proxy for HTTP/HTTPS connections
The HTTP client supports accessing HTTP/HTTPS URLs via an HTTP proxy (e.g. Squid) or a SOCKS4a/SOCKS5 proxy. The proxy can be configured in the HttpClientOptions by setting a ProxyOptions object containing proxy type, hostname, port and optionally username and password:
HttpClientOptions options = new HttpClientOptions() .setProxyOptions(new ProxyOptions().setType(ProxyType.HTTP) .setHost("localhost").setPort(3128) .setUsername("username").setPassword("secret")); HttpClient client = vertx.createHttpClient(options);
For a SOCKS5 proxy:
HttpClientOptions options = new HttpClientOptions() .setProxyOptions(new ProxyOptions().setType(ProxyType.SOCKS5) .setHost("localhost").setPort(1080) .setUsername("username").setPassword("secret")); HttpClient client = vertx.createHttpClient(options);
The DNS resolution is always done on the proxy server. To achieve the functionality of a SOCKS4 client, it is necessary to resolve the DNS address locally.
Proxy options can also be set per request:
client.request(new RequestOptions() .setHost("example.com") .setProxyOptions(proxyOptions)) .compose(request -> request .send() .compose(HttpClientResponse::body)) .onSuccess(body -> { System.out.println("Received response"); });
You can use
setNonProxyHosts to configure a list of hosts bypassing the proxy. The list accepts the
* wildcard for matching domains:
HttpClientOptions options = new HttpClientOptions() .setProxyOptions(new ProxyOptions().setType(ProxyType.SOCKS5) .setHost("localhost").setPort(1080) .setUsername("username").setPassword("secret")) .addNonProxyHost("*.foo.com") .addNonProxyHost("localhost"); HttpClient client = vertx.createHttpClient(options);
Handling of other protocols
The HTTP proxy implementation supports getting ftp:// urls if the proxy supports that.
When the HTTP request URI contains the full URL then the client will not compute a full HTTP url and instead use the full URL specified in the request URI:
HttpClientOptions options = new HttpClientOptions() .setProxyOptions(new ProxyOptions().setType(ProxyType.HTTP)); HttpClient client = vertx.createHttpClient(options); client.request(HttpMethod.GET, "", ar -> { if (ar.succeeded()) { HttpClientRequest request = ar.result(); request.send(ar2 -> { if (ar2.succeeded()) { HttpClientResponse response = ar2.result(); System.out.println("Received response with status code " + response.statusCode()); } }); } });
Using the HA PROXY protocol
The HA PROXY protocol provides a convenient way to safely transport connection information, such as a client's address, across multiple layers of NAT or TCP proxies. It can be enabled on the server with setUseProxyProtocol(true):
HttpServerOptions options = new HttpServerOptions() .setUseProxyProtocol(true); HttpServer server = vertx.createHttpServer(options); server.requestHandler(request -> { // Print the actual client address provided by the HA proxy protocol instead of the proxy address System.out.println(request.remoteAddress()); // Print the address of the proxy System.out.println(request.localAddress()); });
Using the SharedData API
As its name suggests, the
SharedData API allows you to safely share data between:
different parts of your application, or
different applications in the same Vert.x instance, or
different applications across a cluster of Vert.x instances.
In practice, it provides:
synchronous maps (local-only)
asynchronous maps
asynchronous locks
asynchronous counters
Local maps
Local maps allow you to share data safely between different event loops (e.g. different verticles) in the same Vert.x instance.
They only allow certain data types to be used as keys and values: immutable types (such as strings, booleans and other primitives), or types implementing the Shareable interface (buffers, JSON arrays, JSON objects, or your own shareable objects). In the latter case the key/value will be copied before putting it into the map.
This way we can ensure there is no shared access to mutable state between different threads in your Vert.x application. And you won’t have to worry about protecting that state by synchronising access to it.
Here’s an example of using a shared local map:
SharedData sharedData = vertx.sharedData(); LocalMap<String, String> map1 = sharedData.getLocalMap("mymap1"); map1.put("foo", "bar"); // Strings are immutable so no need to copy LocalMap<String, Buffer> map2 = sharedData.getLocalMap("mymap2"); map2.put("eek", Buffer.buffer().appendInt(123)); // This buffer will be copied before adding to map // Then... in another part of your application: map1 = sharedData.getLocalMap("mymap1"); String val = map1.get("foo"); map2 = sharedData.getLocalMap("mymap2"); Buffer buff = map2.get("eek");
Asynchronous shared maps
Asynchronous shared maps allow data to be put in the map and retrieved locally or from any other node.
This makes them really useful for things like storing session state in a farm of servers hosting a Vert.x Web application.
Getting the map is asynchronous and the result is returned to you in the handler that you specify. Here’s an example:
SharedData sharedData = vertx.sharedData(); sharedData.<String, String>getAsyncMap("mymap", res -> { if (res.succeeded()) { AsyncMap<String, String> map = res.result(); } else { // Something went wrong! } });
When Vert.x is clustered, data that you put into the map is accessible locally as well as on any of the other cluster members.
If your application doesn’t need data to be shared with every other node, you can retrieve a local-only map:
SharedData sharedData = vertx.sharedData(); sharedData.<String, String>getLocalAsyncMap("mymap", res -> { if (res.succeeded()) { // Local-only async map AsyncMap<String, String> map = res.result(); } else { // Something went wrong! } });
Putting data in a map
The actual put is asynchronous and the handler is notified once it is complete:
map.put("foo", "bar", resPut -> { if (resPut.succeeded()) { // Successfully put the value } else { // Something went wrong! } });
Getting data from a map
The actual get is asynchronous and the handler is notified with the result some time later:
map.get("foo", resGet -> { if (resGet.succeeded()) { // Successfully got the value Object val = resGet.result(); } else { // Something went wrong! } });
Asynchronous locks.
To obtain a lock use
getLock.
This won’t block, but when the lock is available, the handler will be called with an instance of
Lock, signalling that you now own the lock.
While you own the lock, no other caller, locally or on the cluster, will be able to obtain the lock.
When you’ve finished with the lock, you call
release to release it, so another caller can obtain it:
SharedData sharedData = vertx.sharedData(); sharedData.getLock("mylock", res -> { if (res.succeeded()) { // Got the lock! Lock:
SharedData sharedData = vertx.sharedData(); sharedData.getLockWithTimeout("mylock", 10000, res -> { if (res.succeeded()) { // Got the lock! Lock lock = res.result(); } else { // Failed to get lock } });
If your application doesn’t need the lock to be shared with every other node, you can retrieve a local-only lock:
SharedData sharedData = vertx.sharedData(); sharedData.getLocalLock("mylock", res -> { if (res.succeeded()) { // Local-only lock Lock lock = res.result(); // 5 seconds later we release the lock so someone else can get it vertx.setTimer(5000, tid -> lock.release()); } else { // Something went wrong } });
Asynchronous counters
It’s often useful to maintain an atomic counter locally or across the different nodes of your application.
You obtain an instance with
getCounter:
SharedData sharedData = vertx.sharedData(); sharedData.getCounter("mycounter", res -> { if (res.succeeded()) { Counter counter = res.result(); } else { // Something went wrong! } });
Once you have an instance you can retrieve the current count, atomically increment it, decrement and add a value to it using the various methods.
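As a minimal sketch (the counter name is just an illustration), incrementing and adding to a counter obtained as above might look like this:

sharedData.getCounter("mycounter", res -> {
  if (res.succeeded()) {
    Counter counter = res.result();
    // Atomically increment the counter and receive the new value
    counter.incrementAndGet(incr -> {
      if (incr.succeeded()) {
        System.out.println("New value: " + incr.result());
      }
    });
    // Add an arbitrary delta
    counter.addAndGet(10, add -> {
      if (add.succeeded()) {
        System.out.println("Value after adding 10: " + add.result());
      }
    });
  }
});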
If your application doesn’t need the counter to be shared with every other node, you can retrieve a local-only counter:
SharedData sharedData = vertx.sharedData(); sharedData.getLocalCounter("mycounter", res -> { if (res.succeeded()) { // Local-only counter Counter counter = res.result(); } else { // Something went wrong! } });
Using the file system with Vert.x:
FileSystem:
FileSystem:
vertx.fileSystem().readFile("target/classes/readme.txt", result -> { if (result.succeeded()) { System.out.println(result.result()); } else { System.err.println("Oh oh ..." + result.cause()); } }); // Copy a file vertx.fileSystem().copy("target/classes/readme.txt", "target/classes/readme2.txt", result -> { if (result.succeeded()) { System.out.println("File copied"); } else { System.err.println("Oh oh ..." + result.cause()); } }); // Write a file vertx.fileSystem().writeFile("target/classes/hello.txt", Buffer.buffer("Hello"), result -> { if (result.succeeded()) { System.out.println("File written"); } else { System.err.println("Oh oh ..." + result.cause()); } }); // Check existence and delete vertx.fileSystem().exists("target/classes/junk.txt", result -> { if (result.succeeded() && result.result()) { vertx.fileSystem().delete("target/classes/junk.txt", r -> { System.out.println("File deleted"); }); } else { System.err.println("Oh oh ... - cannot delete the file: " + result.cause()); } });
Asynchronous files
Vert.x provides an asynchronous file abstraction that allows you to manipulate a file on the file system.
OpenOptions options = new OpenOptions(); fileSystem.open("myfile.txt", options, res -> { if (res.succeeded()) { AsyncFile file = res.result(); } else { // Something went wrong! } });
AsyncFile implements
ReadStream and
WriteStream so you can pipe
files to and from other stream objects such as net sockets, http requests and responses, and WebSockets.
They also allow you to read and write directly to them.
Random access writes", new OpenOptions(), result -> { if (result.succeeded()) { AsyncFile file = result.result(); Buffer buff = Buffer.buffer("foo"); for (int i = 0; i < 5; i++) { file.write(buff, buff.length() * i, ar -> { if (ar.succeeded()) { System.out.println("Written ok!"); // etc } else { System.err.println("Failed to write: " + ar.cause()); } }); } } else { System.err.println("Cannot open file " + result.cause()); } });
Random access reads", new OpenOptions(), result -> { if (result.succeeded()) { AsyncFile file = result.result(); Buffer buff = Buffer.buffer(1000); for (int i = 0; i < 10; i++) { file.read(buff, i * 100, i * 100, 100, ar -> { if (ar.succeeded()) { System.out.println("Read ok!"); } else { System.err.println("Failed to write: " + ar.cause()); } }); } } else { System.err.println("Cannot open file " + result.cause()); } });
Opening Options.
Flushing data to underlying storage.
In the
OpenOptions, you can enable/disable the automatic synchronisation of the content on every write using
setDsync. When automatic syncing is disabled, you can manually flush any writes from the OS
cache by calling the
flush method.
This method can also be called with a handler which will be called when the flush is complete.
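For instance, a sketch of an explicit flush (the file name is an assumption) could look like this:

fileSystem.open("myfile.txt", new OpenOptions().setDsync(false), res -> {
  if (res.succeeded()) {
    AsyncFile file = res.result();
    file.write(Buffer.buffer("some data"), w -> {
      // Explicitly push the written data from the OS cache to the underlying storage
      file.flush(f -> {
        if (f.succeeded()) {
          System.out.println("Flushed to storage");
        }
      });
    });
  }
});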
Using AsyncFile as ReadStream and WriteStream
AsyncFile implements
ReadStream and
WriteStream. You can then
use them with a pipe to pipe data to and from other read and write streams. For example, this would
copy the content to another
AsyncFile:
final AsyncFile output = vertx.fileSystem().openBlocking("target/classes/plagiary.txt", new OpenOptions()); vertx.fileSystem().open("target/classes/les_miserables.txt", new OpenOptions(), result -> { if (result.succeeded()) { AsyncFile file = result.result(); file.pipeTo(output) .onComplete(v -> { System.out.println("Copy done"); }); } else { System.err.println("Cannot open file " + result.cause()); } });
You can also use the pipe to write file content into HTTP responses, or more generally in any
WriteStream.
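For example, a hypothetical handler that streams a file into an HTTP response might look like this (the path and port are assumptions):

vertx.createHttpServer()
  .requestHandler(request -> {
    HttpServerResponse response = request.response();
    response.setChunked(true);
    vertx.fileSystem().open("web/somefile.txt", new OpenOptions(), ar -> {
      if (ar.succeeded()) {
        // The response is ended automatically when the pipe completes
        ar.result().pipeTo(response);
      } else {
        response.setStatusCode(500).end();
      }
    });
  })
  .listen(8080);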
Accessing files from the classpath.
Datagram sockets (UDP)).
Creating a DatagramSocket
To use UDP you first need to create a
DatagramSocket. It does not matter here if you only want to send data or send
and receive.
DatagramSocket socket = vertx.createDatagramSocket(new DatagramSocketOptions());
The returned
DatagramSocket will not be bound to a specific port. This is not a
problem if you only want to send data (like a client), but more on this in the next section.
Sending Datagram packets
As mentioned before, User Datagram Protocol (UDP) sends data in packets to remote peers but is not connected to them in a persistent fashion.
This means each packet can be sent to a different remote peer.
Sending packets is as easy as shown here:
DatagramSocket socket = vertx.createDatagramSocket(new DatagramSocketOptions()); Buffer buffer = Buffer.buffer("content"); // Send a Buffer socket.send(buffer, 1234, "10.0.0.1", asyncResult -> { System.out.println("Send succeeded? " + asyncResult.succeeded()); }); // Send a String socket.send("A string used as content", 1234, "10.0.0.1", asyncResult -> { System.out.println("Send succeeded? " + asyncResult.succeeded()); });
Receiving Datagram packets:
So to listen on a specific address and port you would do something like shown here:
DatagramSocket socket = vertx.createDatagramSocket(new DatagramSocketOptions()); socket.listen(1234, "0.0.0.0", asyncResult -> { if (asyncResult.succeeded()) { socket.handler(packet -> { // Do something with the packet }); } else { System.out
Sending Multicast packets from sending normal Datagram packets. The difference is that you pass in a multicast group address to the send method.
This is shown here:
DatagramSocket socket = vertx.createDatagramSocket(new DatagramSocketOptions()); Buffer buffer = Buffer.buffer("content"); // Send a Buffer to a multicast address socket.send(buffer, 1234, "230.0.0.1", asyncResult -> { System.out.println("Send succeeded? " + asyncResult.succeeded()); });
All sockets that have joined the multicast group 230.0.0.1 will receive the packet.
Receiving Multicast packets
DatagramSocket socket = vertx.createDatagramSocket(new DatagramSocketOptions()); socket.listen(1234, "0.0.0.0", asyncResult -> { if (asyncResult.succeeded()) { socket.handler(packet -> { // Do something with the packet }); // Join the multicast group socket.listenMulticastGroup("230.0.0.1", asyncResult2 -> { System.out.println("Listen succeeded? " + asyncResult2.succeeded()); }); } else { System.out.println("Listen failed" + asyncResult.cause()); } });
Unlisten / leave a Multicast group
There are sometimes situations where you want to receive packets for a Multicast group for a limited time.
In these situations you can first start to listen for them and then later unlisten.
This is shown here:
DatagramSocket socket = vertx.createDatagramSocket(new DatagramSocketOptions()); socket.listen(1234, "0.0.0.0", asyncResult -> { if (asyncResult.succeeded()) { socket.handler(packet -> { // Do something with the packet }); // Join the multicast group socket.listenMulticastGroup("230.0.0.1", asyncResult2 -> { if (asyncResult2.succeeded()) { // Will now receive packets for the group; later, stop listening again socket.unlistenMulticastGroup("230.0.0.1", asyncResult3 -> { System.out.println("Unlisten succeeded? " + asyncResult3.succeeded()); }); } else { System.out.println("Listen failed" + asyncResult2.cause()); } }); } else { System.out.println("Listen failed" + asyncResult.cause()); } });
Blocking multicast:
DatagramSocket socket = vertx.createDatagramSocket(new DatagramSocketOptions()); // Some code // This would block packets which are send from 10.0.0.2 socket.blockMulticastGroup("230.0.0.1", "10.0.0.2", asyncResult -> { System.out.println("block succeeded? " + asyncResult.succeeded()); });
DatagramSocket properties
When creating a
DatagramSocket there are multiple properties you can set to
change it’s behaviour with the
DatagramSocketOptions object. Those are listed here:
setSendBufferSize Sets the send buffer size in bytes.
setReceiveBufferSize Sets the receive buffer size in bytes.
setReuseAddress If true then addresses in TIME_WAIT state can be reused after they have been closed.
setBroadcast Sets or clears the SO_BROADCAST socket option. When this option is set, Datagram (UDP) packets may be sent to a local interface's broadcast address.
setMulticastNetworkInterface Sets the network interface used for multicast packets.
setMulticastLoopbackMode Sets or clears the IP_MULTICAST_LOOP socket option. When this option is set, multicast packets will also be received on the local interface.
setMulticastTimeToLive Sets the IP_MULTICAST_TTL socket option, i.e. the number of hops multicast packets are allowed to traverse.
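For example (the values are purely illustrative), several of these properties can be chained on the options object before creating the socket:

DatagramSocketOptions options = new DatagramSocketOptions()
  .setBroadcast(true)            // allow sending to broadcast addresses
  .setReuseAddress(true)
  .setMulticastTimeToLive(2);    // limit how many hops multicast packets may travel
DatagramSocket socket = vertx.createDatagramSocket(options);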
DatagramSocket Local Address.
DNS client.
DnsClient client = vertx.createDnsClient(53, "10.0.0.1");
You can also create the client with options and configure the query timeout.
DnsClient client = vertx.createDnsClient(new DnsClientOptions() .setPort(53) .setHost("10.0.0.1") .setQueryTimeout(10000) );
Creating the client with no arguments or omitting the server address will use the address of the server used internally for non blocking address resolution.
DnsClient client1 = vertx.createDnsClient(); // Just the same but with a different query timeout DnsClient client2 = vertx.createDnsClient(new DnsClientOptions().setQueryTimeout(10000));
lookup:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.lookup("vertx.io", ar -> { if (ar.succeeded()) { System.out.println(ar.result()); } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
lookup44("vertx.io", ar -> { if (ar.succeeded()) { System.out.println(ar.result()); } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
lookup66("vertx.io", ar -> { if (ar.succeeded()) { System.out.println(ar.result()); } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
resolveA
Try to resolve all A (ipv4) records for a given name. This is quite similar to using "dig" on unix-like operating systems.
To lookup all the A records for "vertx.io" you would typically do:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.resolveA("vertx.io", ar -> { if (ar.succeeded()) { List<String> records = ar.result(); for (String record : records) { System.out.println(record); } } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
resolveAAAA
Try to resolve all AAAA (ipv6) records for a given name. This is quite similar to using "dig" on unix-like operating systems.
To lookup all the AAAA records for "vertx.io" you would typically do:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.resolveAAAA("vertx.io", ar -> { if (ar.succeeded()) { List<String> records = ar.result(); for (String record : records) { System.out.println(record); } } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
resolveCNAME
Try to resolve all CNAME records for a given name. This is quite similar to using "dig" on unix-like operating systems.
To lookup all the CNAME records for "vertx.io" you would typically do:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.resolveCNAME("vertx.io", ar -> { if (ar.succeeded()) { List<String> records = ar.result(); for (String record : records) { System.out.println(record); } } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
resolveMX
Try to resolve all MX records for a given name. The MX records are used to define which Mail-Server accepts emails for a given domain.
To lookup all the MX records for "vertx.io" you would typically do:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.resolveMX("vertx.io", ar -> { if (ar.succeeded()) { List<MxRecord> records = ar.result(); for (MxRecord record: records) { System.out.println(record); } } else { System.out();
resolveTXT
Try to resolve all TXT records for a given name. TXT records are often used to define extra information for a domain.
To resolve all the TXT records for "vertx.io" you could use something along these lines:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.resolveTXT("vertx.io", ar -> { if (ar.succeeded()) { List<String> records = ar.result(); for (String record: records) { System.out.println(record); } } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
resolveNS
Try to resolve all NS records for a given name. The NS records specify which DNS server hosts the DNS information for a given domain.
To resolve all the NS records for "vertx.io" you could use something along these lines:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.resolveNS("vertx.io", ar -> { if (ar.succeeded()) { List<String> records = ar.result(); for (String record: records) { System.out.println(record); } } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
resolveSRV
Try to resolve all SRV records for a given name. The SRV records are used to define extra information like port and hostname of services. Some protocols need this extra information.
To lookup all the SRV records for "vertx.io" you would typically do:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.resolveSRV("vertx.io", ar -> { if (ar.succeeded()) { List<SrvRecord> records = ar.result(); for (SrvRecord record: records) { System.out.println(record); } } else { System.out.
resolvePTR
Try to resolve the PTR record for a given name. The PTR record maps an IP address to a name.
To resolve the PTR record for the IP address 10.0.0.1 you would use the PTR notation "1.0.0.10.in-addr.arpa"
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.resolvePTR("1.0.0.10.in-addr.arpa", ar -> { if (ar.succeeded()) { String record = ar.result(); System.out.println(record); } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
reverseLookup:
DnsClient client = vertx.createDnsClient(53, "9.9.9.9"); client.reverseLookup("10.0.0.1", ar -> { if (ar.succeeded()) { String record = ar.result(); System.out.println(record); } else { System.out.println("Failed to resolve entry" + ar.cause()); } });
Error handling:
All of those errors are "generated" by the DNS Server itself.
You can obtain the DnsResponseCode from the DnsException like:
DnsClient client = vertx.createDnsClient(53, "10.0.0.1"); client.lookup("nonexisting.vert.xio", ar -> { if (ar.succeeded()) { String record = ar.result(); System.out.println(record); } else { Throwable cause = ar.cause(); if (cause instanceof DnsException) { DnsException exception = (DnsException) cause; DnsResponseCode code = exception.code(); // ... } else { System.out.println("Failed to resolve entry" + ar.cause()); } } });
Streams
There are several objects in Vert.x that allow items to be read from and written to. Rather than making you wire up the flow control yourself, Vert.x added the
pipeTo method that does all of this hard work for you.
You just feed it the
WriteStream and use it:
NetServer server = vertx.createNetServer( new NetServerOptions().setPort(1234).setHost("localhost") ); server.connectHandler(sock -> { sock.pipeTo(sock); }).listen();
This does exactly the same thing as the more verbose example, plus it handles stream failures and termination: the
destination
WriteStream is ended when the pipe completes with success or a failure.
You can be notified when the operation completes:
server.connectHandler(sock -> { // Pipe the socket providing an handler to be notified of the result sock.pipeTo(sock, ar -> { if (ar.succeeded()) { System.out.println("Pipe succeeded"); } else { System.out.println("Pipe failed"); } }); }).listen();
When you deal with an asynchronous destination, you can create a
Pipe instance that
pauses the source and resumes it when the source is piped to the destination:
server.connectHandler(sock -> { // Create a pipe to use asynchronously Pipe<Buffer> pipe = sock.pipe(); // Open a destination file fs.open("/path/to/file", new OpenOptions(), ar -> { if (ar.succeeded()) { AsyncFile file = ar.result(); // Pipe the socket to the file and close the file at the end pipe.to(file); } else { sock.close(); } }); }).listen();
When you need to abort the transfer, you need to close it:
vertx.createHttpServer() .requestHandler(request -> { // Create a pipe that to use asynchronously Pipe<Buffer> pipe = request.pipe(); // Open a destination file fs.open("/path/to/file", new OpenOptions(), ar -> { if (ar.succeeded()) { AsyncFile file = ar.result(); // Pipe the socket to the file and close the file at the end pipe.to(file); } else { // Close the pipe and resume the request, the body buffers will be discarded pipe.close(); // Send an error response request.response().setStatusCode(500).end(); } }); }).listen(8080);
When the pipe is closed, the streams handlers are unset and the
ReadStream resumed.
As seen above, by default the destination is always ended when the stream completes, you can control this behavior on the pipe object:
endOnFailurecontrols the behavior when a failure happens
endOnSuccesscontrols the behavior when the read stream ends
endOnCompletecontrols the behavior in all cases
Here is a short example:
src.pipe() .endOnSuccess(false) .to(dst, rs -> { // Append some text and close the file dst.end(Buffer.buffer("done")); });
Let’s now look at the methods on
ReadStream and
WriteStream in more detail:
ReadStream
ReadStream is implemented by
HttpClientResponse,
DatagramSocket,
HttpClientRequest,
HttpServerFileUpload,
HttpServerRequest,
MessageConsumer,
NetSocket,
WebSocket,
TimeoutStream,
AsyncFile.
handler: set a handler which will receive items from the ReadStream.
pause: pause the stream. When paused no items will be received in the handler.
fetch: fetch a specified number of items from the stream. The handler will be called if any item arrives. Fetches are cumulative.
resume: resume the stream. The handler will be called if any item arrives. Resuming is equivalent to fetching
Long.MAX_VALUE items.
exceptionHandler: called when an exception occurs on the ReadStream.
endHandler: called when the end of stream is reached. This might be when EOF is reached if the ReadStream represents a file, or when end of request is reached if it’s an HTTP request, or when the connection is closed if it’s a TCP socket.
A read stream is either in flowing or fetch mode (a short sketch follows this list):
initially the stream is in flowing mode
when the stream is in flowing mode, elements are delivered to the handler
when the stream is in fetch mode, only the number of requested elements will be delivered to the handler
resume() sets the flowing mode
pause() sets the fetch mode and resets the demand to 0
fetch(long) requests a specific number of elements and adds it to the actual demand
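A minimal sketch of fetch mode, reusing an already opened AsyncFile as the source (the processing step is hypothetical):

file.pause();                  // switch to fetch mode, the demand is reset to 0
file.handler(buffer -> {
  // Process one element, then ask for the next one
  System.out.println("Got " + buffer.length() + " bytes");
  file.fetch(1);
});
file.endHandler(v -> System.out.println("Done"));
file.fetch(1);                 // request the first element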
WriteStream
WriteStream is implemented by
HttpClientRequest,
HttpServerResponse
WebSocket,
NetSocket and
AsyncFile.
write: write an object to the WriteStream.
setWriteQueueMaxSize: set the number of objects at which the write queue is considered full (the method writeQueueFull then returns
true). Note that, when the write queue is considered full, if write is called the data will still be accepted and queued. The actual number depends on the stream implementation; for
Buffer the size represents the actual number of bytes written and not the number of buffers.
writeQueueFull: returns
true if the write queue is considered full.
exceptionHandler: Will be called if an exception occurs on the
WriteStream.
drainHandler: The handler will be called if the
WriteStream is considered no longer full (a hand-rolled back-pressure sketch follows this list).
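Here is such a sketch using these methods directly; in practice pipeTo does this for you. The src and dst names are placeholders for any ReadStream<Buffer> / WriteStream<Buffer> pair:

src.handler(buffer -> {
  dst.write(buffer);
  if (dst.writeQueueFull()) {
    // The destination cannot keep up: stop reading until it has drained
    src.pause();
    dst.drainHandler(v -> src.resume());
  }
});
src.endHandler(v -> dst.end());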
Record Parser:
final RecordParser parser = RecordParser.newDelimited("\n", h -> { System.out.println(h.toString()); }); parser.handle(Buffer.buffer("HELLO\nHOW ARE Y")); parser.handle(Buffer.buffer("OU?\nI AM")); parser.handle(Buffer.buffer("DOING OK")); parser.handle(Buffer.buffer("\n"));
You can also produce fixed sized chunks as follows:
RecordParser.newFixed(4, h -> { System.out.println(h.toString()); });
For more details, check out the
RecordParser class.
Json Parser.
JsonParser parser = JsonParser.newParser(); // Set handlers for various events parser.handler(event -> { switch (event.type()) { case START_OBJECT: // Start an objet break; case END_OBJECT: // End an objet break; case START_ARRAY: // Start an array break; case END_ARRAY: // End an array break; case VALUE: // Handle a value String field = event.fieldName(); if (field != null) { // In an object } else { // In an array or top level if (event.isString()) { } else { // ... } } break; } });
The parser is non-blocking and emitted events are driven by the input buffers.
JsonParser:
JsonParser parser = JsonParser.newParser(); parser.objectValueMode(); parser.handler(event -> { switch (event.type()) { case START_ARRAY: // Start the array break; case END_ARRAY: // End the array break; case VALUE: // Handle each object break; } }); parser.handle(Buffer.buffer("[{\"firstName\":\"Bob\"},\"lastName\":\"Morane\"),...]")); parser.end();
The value mode can be set and unset during the parsing allowing you to switch between fine grained events or JSON object value events.
JsonParser parser = JsonParser.newParser(); parser.handler(event -> { // Start the object switch (event.type()) { case START_OBJECT: // Set object value mode to handle each entry, from now on the parser won't emit start object events parser.objectValueMode(); break; case VALUE: // Handle each object // Get the field in which this object was parsed String id = event.fieldName(); System.out.println("User with id " + id + " : " + event.value()); break; case END_OBJECT: // Set the object event mode so the parser emits start/end object events again parser.objectEventMode(); break; } }); parser.handle(Buffer.buffer("{\"39877483847\":{\"firstName\":\"Bob\"},\"lastName\":\"Morane\"),...}")); parser.end();
You can do the same with arrays as well
JsonParser parser = JsonParser.newParser(); parser.handler(event -> { // Start the object switch (event.type()) { case START_OBJECT: // Set array value mode to handle each entry, from now on the parser won't emit start array events parser.arrayValueMode(); break; case VALUE: // Handle each array // Get the field in which this object was parsed System.out.println("Value : " + event.value()); break; case END_OBJECT: // Set the array event mode so the parser emits start/end object events again parser.arrayEventMode(); break; } }); parser.handle(Buffer.buffer("[0,1,2,3,4,...]")); parser.end();
You can also decode POJOs
parser.handler(event -> { // Handle each object // Get the field in which this object was parsed String id = event.fieldName(); User user = event.mapTo(User.class); System.out.println("User with id " + id + " : " + user.firstName + " " + user.lastName); });
Whenever the parser fails to process a buffer, an exception will be thrown unless you set an exception handler:
JsonParser.
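A minimal sketch of registering such a handler (the malformed JSON fed to the parser is just an illustration):

JsonParser parser = JsonParser.newParser();
parser.exceptionHandler(err -> {
  // Called when invalid JSON is fed to the parser
  System.err.println("Parse failure: " + err.getMessage());
});
parser.handle(Buffer.buffer("{\"not\":\"closed\""));
parser.end();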
Thread safety.
Running blocking code
vertx.executeBlocking(promise -> { // Call some blocking API that takes a significant amount of time to return String result = someAPI.blockingMethod("hello"); promise.complete(result); }, res -> { System.out.println("The result is: " + res.result()); });
WorkerExecutor executor = vertx.createSharedWorkerExecutor("my-worker-pool"); executor.executeBlocking(promise -> { // Call some blocking API that takes a significant amount of time to return String result = someAPI.blockingMethod("hello"); promise.complete(result); }, res -> { System.out.println("The result is: " + res.result()); });
int poolSize = 10; // 2 minutes long maxExecuteTime = 2; TimeUnit maxExecuteTimeUnit = TimeUnit.MINUTES; WorkerExecutor executor = vertx.createSharedWorkerExecutor("my-worker-pool", poolSize, maxExecuteTime, maxExecuteTimeUnit);
Metrics SPI.
The 'vertx' command line.
Run verticlesis the name of a JSON file that represents the Vert.x options, or a JSON string. This is optional.
-conf <config>- Provides some configuration to the verticle.
-cluster-host - If the cluster option has also been specified then this determines which host address will be bound for cluster communication with other Vert.x instances. If not set, the clustered event bus tries to bind to the same host as the underlying cluster manager. As a last resort, an address will be picked among the available network interfaces.
-cluster-public-port- If the
clusteroption has also been specified then this determines which port will be advertised for cluster communication with other Vert.x instances. Default is
-1, which means same as
cluster-port.
-cluster-public-host- If the
clusteroption.
Executing a Vert.x application packaged as a fat jar
A fat jar can be executed directly when its manifest has Main-Class set to
io.vertx.core.Launcher and a
Main-Verticle entry specifying the main verticle to deploy.
Other commands
Live Redeploy
Cluster Managers.
Logging
Vert.x logs using its internal logging API and supports various logging backends.
The logging backend is selected as follows:
the backend denoted by the
vertx.logger-delegate-factory-class-name system property if present or,
JDK logging when a
vertx-default-jul-logging.properties file is in the classpath or,
a backend present in the classpath, in the following order of preference:
SLF4J
Log4J
Log4J2
Otherwise Vert.x defaults to JDK logging.
Configuring with the system property
Set the
vertx.logger-delegate-factory-class-name system property to:
io.vertx.core.logging.SLF4JLogDelegateFactory for SLF4J or,
io.vertx.core.logging.Log4j2LogDelegateFactory for Log4J2 or,
io.vertx.core.logging.JULLogDelegateFactory for JDK logging
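As an illustration, the property can be supplied on the JVM command line with -D, or set programmatically, assuming this runs before any Vert.x logging takes place; a sketch forcing SLF4J:

// Must run before any Vert.x class performs logging
System.setProperty("vertx.logger-delegate-factory-class-name",
    "io.vertx.core.logging.SLF4JLogDelegateFactory");
Vertx vertx = Vertx.vertx();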
Automatic configuration
When no
vertx.logger-delegate-factory-class-name system property is set, Vert.x will try to find
the most appropriate logger:
use SLF4J when available on the classpath with an actual implementation (i.e.
LoggerFactory.getILoggerFactory() is not an instance of
NOPLoggerFactory)
otherwise use Log4j2 when available on the classpath
otherwise use JUL
Configuring JUL logging
A JUL logging configuration file can be specified in the normal JUL way, by providing a system property named
java.util.logging.config.file with the value being your configuration file.
For more information on this and the structure of a JUL config file please consult the JDK.
Netty logging
Netty does not rely on external logging configuration (e.g system properties). Instead, it implements a logging configuration based on the logging libraries visible from the Netty classes:
use
SLF4Jlibrary if it is visible
otherwise use
Log4jif it is visible
otherwise use
Log4j2if it is visible
otherwise fallback to
java.util.logging
The logger implementation can be forced to a specific implementation by setting Netty’s internal logger implementation directly on
io.netty.util.internal.logging.InternalLoggerFactory:
// Force logging to Log4j 2 InternalLoggerFactory.setDefaultFactory(Log4J2LoggerFactory.INSTANCE);
Troubleshooting
SLF4J warning at startup.
Connection reset by peer).
Host name resolution".
Vertx vertx = Vertx.vertx(new VertxOptions(). setAddressResolverOptions( new AddressResolverOptions(). addServer("192.168.0.1"). addServer("192.168.0.2:40000")) );
The default port of a DNS server is
53, when a server uses a different port, this port can be set
using a colon delimiter:
192.168.0.2:40000.
Fail).
Server list rotation.
Hosts mapping
The hosts file of the operating system is used to perform a hostname lookup for an IP address.
An alternative hosts file can be used instead:
Vertx vertx = Vertx.vertx(new VertxOptions(). setAddressResolverOptions( new AddressResolverOptions(). setHostsPath("/path/to/hosts")) );
Search domains
By default the resolver will use the system DNS search domains from the environment. Alternatively an explicit search domain list can be provided:
Vertx vertx = Vertx.vertx(new VertxOptions(). setAddressResolverOptions( new AddressResolverOptions().addSearchDomain("foo.com").addSearchDomain("bar.com")) );
MacOS configuration
MacOS has a specific native extension to get the name server configuration of the system based on Apple's open source mDNSResponder. When this extension is not present, Netty logs the following warning.
[main] WARN io.netty.resolver.dns.DnsServerAddressStreamProviders - Can not find io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider in the classpath, fallback to system defaults. This may result in incorrect DNS resolutions on MacOS.
This extension is not required, as its absence does not prevent Vert.x from executing, but it is recommended.
You can add it to your classpath to improve the integration and remove the warning.
<profile> <id>mac</id> <activation> <os> <family>mac</family> </os> </activation> <dependencies> <dependency> <groupId>io.netty</groupId> <artifactId>netty-resolver-dns-native-macos</artifactId> <classifier>osx-x86_64</classifier> <!--<version>Should align with netty version that Vert.x uses</version>--> </dependency> </dependencies> </profile>
High Availability and Fail-Over
Vert.x allows you to run your verticles with high availability (HA) support. In that case, when a vert.x instance running a verticle dies abruptly, the verticle is migrated to another vertx instance. The vert.x instances must be in the same cluster.
Automatic failover.
HA groups.
Dealing with network partitions.
Native transports
Vert.x can run with native transports (when available) on BSD (OSX) and Linux:
Vertx vertx = Vertx.vertx(new VertxOptions(). setPreferNativeTransport(true) ); // True when native is available boolean usingNative = vertx.isNativeTransportEnabled(); System.out.println("Running with native: " + usingNative);
Native Linux Transport
You need to add the following dependency in your classpath:
<dependency> <groupId>io.netty</groupId> <artifactId>netty-transport-native-epoll</artifactId> <classifier>linux-x86_64</classifier> <!--<version>Should align with netty version that Vert.x uses</version>--> </dependency>
Native on Linux gives you extra networking options:
SO_REUSEPORT
TCP_QUICKACK
TCP_CORK
TCP_FASTOPEN
vertx.createHttpServer(new HttpServerOptions() .setTcpFastOpen(fastOpen) .setTcpCork(cork) .setTcpQuickAck(quickAck) .setReusePort(reusePort) );
Native BSD Transport
You need to add the following dependency in your classpath:
<dependency> <groupId>io.netty</groupId> <artifactId>netty-transport-native-kqueue</artifactId> <classifier>osx-x86_64</classifier> <!--<version>Should align with netty version that Vert.x uses</version>--> </dependency>
MacOS Sierra and above are supported.
Native on BSD gives you extra networking options:
SO_REUSEPORT
vertx.createHttpServer(new HttpServerOptions().setReusePort(reusePort));
Domain sockets
Native transports provide domain socket support for servers and for clients:
NetClient netClient = vertx.createNetClient(); // Only available on BSD and Linux SocketAddress addr = SocketAddress.domainSocketAddress("/var/tmp/myservice.sock"); // Connect to the server netClient.connect(addr, ar -> { if (ar.succeeded()) { // Connected } else { ar.cause().printStackTrace(); } });
or for http:
HttpClient httpClient = vertx.createHttpClient(); // Only available on BSD and Linux SocketAddress addr = SocketAddress.domainSocketAddress("/var/tmp/myservice.sock"); // Send request to the server httpClient.request(new RequestOptions() .setServer(addr) .setHost("localhost") .setPort(8080) .setURI("/")) .onSuccess(request -> { request.send().onComplete(response -> { // Process response }); });
Security notes).
Web applications.
Clustered event bus traffic
When clustering the event bus between different Vert.x nodes on a network, the traffic is sent un-encrypted across the wire, so do not use this if you have confidential data to send and your Vert.x nodes are not on a trusted network.
Standard security best practices
Command Line Interface API
Definition Stage
Each command line interface must define the set of options and arguments that will be used. It also requires a
name. The CLI API uses the
Option and
Argument classes to
describe options and arguments:(1) .setDescription("The destination") .setArg.
Options
An
Option is a command line parameter identified by a key present in the user command
line. Options must have at least a long name or a short name. Long names are generally used with a
-- prefix,
while short names are used with a single
-. Names are case-sensitive; however, case-insensitive name matching
will be used during the Query / Interrogation Stage if no exact match is found.:
CLI cli = CLI.create("some-name") .setSummary("A command line interface illustrating the options valuation.") .addOption(new Option() .setLongName("flag").setShortName("f").setFlag(true).setDescription("a flag")) .addOption(new Option() .setLongName("single").setShortName("s").setDescription("a single-valued option")) .addOption(new Option() .setLongName("multiple").setShortName("m").setMultiValued(true) .setDescription("a multi-valued option"));
Options can be marked as mandatory. A mandatory option not set in the user command line throws an exception during the parsing:
CLI cli = CLI.create("some-name") .addOption(new Option() .setLongName("mandatory") .setRequired(true) .setDescription("a mandatory option"));
Non-mandatory options can have a default value. This value would be used if the user does not set the option in the command line:
CLI cli = CLI.create("some-name") .addOption(new Option() .setLongName("optional") .setDefaultValue("hello") .setDescription("an optional option with a default value"));
An option can be hidden using the
setHidden method. Hidden options are
not listed in the usage, but can still be used in the user command line (for power-users).
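For example (the option name is just an illustration):

CLI cli = CLI.create("some-name")
  .addOption(new Option()
    .setLongName("verbose")
    .setHidden(true)
    .setDescription("enables verbose output; not shown in the usage"));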
If the option value is constrained to a fixed set, you can set the different acceptable choices:
CLI cli = CLI.create("some-name") .addOption(new Option() .setLongName("color") .setDefaultValue("green") .addChoice("blue").addChoice("red").addChoice("green") .setDescription("a color"));
Options can also be instantiated from their JSON form.
Arguments:
CLI cli = CLI.create("some-name") .addArgument(new Argument() .setIndex(0) .setDescription("the first argument") .setArgName("arg1")) .addArgument(new Argument() .setIndex(1) .setDescription("the second argument") .setArgName("arg2"));
If you don't set the argument indexes, they are computed automatically using the declaration order.
CLI cli = CLI.create("some-name") // will have the index 0 .addArgument(new Argument() .setDescription("the first argument") .setArgName("arg1")) // will have the index 1 .addArgument(new Argument() .setDescription("the second argument") .setArg.
Usage generation(0) .setDescription("The destination") .setArgName("target")); StringBuilder builder = new StringBuilder(); cli.usage(builder);
It generates a usage message like this one:
Usage: copy [-R] source target A command line interface to copy files. -R,--directory enables directory support
If you need to tune the usage message, check the
UsageMessageFormatter class.
Parsing Stage
Once your
CLI instance is configured, you can parse the user command line to evaluate
each option and argument:
CommandLine.
Query / Interrogation Stage
Once parsed, you can retrieve the values of the options and arguments from the
CommandLine object returned by the
parse
method:
CommandLine commandLine = cli.parse(userCommandLineArguments); String opt = commandLine.getOptionValue("my-option"); boolean flag = commandLine.isFlagEnabled("my-flag"); String arg0 = commandLine.getArgumentValue(0);
One of your options can be marked as "help". If a user command line enabled a "help" option, the validation won’t fail, but you have the opportunity to check if the user asks for help:
CLI cli = CLI.create("test") .addOption( new Option().setLongName("help").setShortName("h").setFlag(true).setHelp(true)) .addOption( new Option().setLongName("mandatory").setRequired(true)); CommandLine line = cli.parse(Collections.singletonList("-h")); // The parsing does not fail and let you do: if (!line.isValid() && line.isAskingForHelp()) { StringBuilder builder = new StringBuilder(); cli.usage(builder); stream.print(builder.toString()); }
Typed options and arguments
The described
Option and
Argument classes are untyped,
meaning that they only get String values.
TypedOption and
TypedArgument let you specify a type, so the
(String) raw value is converted to the specified type.
Instead of
Option and
Argument, use
TypedOption
and
TypedArgument in the
CLI definition:
CLI cli = CLI.create("copy") .setSummary("A command line interface to copy files.") .addOption(new TypedOption<Boolean>() .setType(Boolean.class) .setLongName("directory") .setShortName("R") .setDescription("enables directory support") .setFlag(true)) .addArgument(new TypedArgument<File>() .setType(File.class) .setIndex(0) .setDescription("The source") .setArgName("source")) .addArgument(new TypedArgument<File>() .setType(File.class) .setIndex(0) .setDescription("The destination") .setArgName("target"));
Then you can retrieve the converted values as follows:
CommandLine commandLine = cli.parse(userCommandLineArguments); boolean flag = commandLine.getOptionValue("R"); File source = commandLine.getArgumentValue("source"); File target = commandLine.getArgumentValue("target");
The vert.x CLI is able to convert to classes:
having a constructor with a single
Stringargument, such as
Fileor
JsonObject
with a static
fromor
fromStringmethod
with a static
valueOfmethod, such as primitive types and enumeration
CLI cli = CLI.create("some-name") .addOption(new TypedOption<Person>() .setType(Person.class) .setConverter(new PersonConverter()) .setLongName("person"));
For booleans, the boolean values are evaluated to
true:
on,
yes,
1,
true.
If one of your options has an
enum as its type, the set of choices is computed automatically.
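A sketch, assuming a hypothetical Color enum:

enum Color { RED, GREEN, BLUE }

CLI cli = CLI.create("some-name")
  .addOption(new TypedOption<Color>()
    .setType(Color.class)
    .setLongName("color")
    .setDefaultValue("GREEN"));
// The acceptable choices (RED, GREEN, BLUE) are derived from the enum values.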
Using annotations
You can also define your CLI using annotations. The definition is done using annotations on the class and on setter methods:
@Name("some-name") @Summary("A command line interface defined with annotations.") public class AnnotatedCli { private boolean flag; private String name; private String arg; @Option(shortName = "f", flag = true) public void setFlag(boolean flag) { this.flag = flag; } @Option(longName = "name") public void setName(String name) { this.name = name; } @Argument(index = 0) public void setArg(String arg) { this.arg = arg; } }
CLI cli = CLI.create(AnnotatedCli.class); CommandLine commandLine = cli.parse(userCommandLineArguments); AnnotatedCli instance = new AnnotatedCli(); CLIConfigurator.inject(commandLine, instance);
The vert.x Launcher
The vert.x
Launcher is used in fat jar as main class, and by the
vertx command line
utility. It executes a set of commands such as run, bare, start…
Extending the vert.x Launcher
@Name("my-command") @Summary("A command with a name option.") public class MyCommand extends DefaultCommand { private String name; @Option(longName = "name", required = true) public void setName(String n) { this.name = n; } @Override public void run() throws CLIException { /* the command behaviour goes here */ } }
Using the Launcher in fat jars.
Sub-classing the Launcherand
getDefaultCommand
add / remove commands using
registerand
unregister
Launcher and exit code
0 if the process ends smoothly, or if an uncaught error is thrown
1 for general purpose error
11 if Vert.x cannot be initialized
12 if a spawn process cannot be started, found or stopped. This error code is used by the
start and
stop commands
14 if the system configuration does not meet the system requirements (such as java not found)
15 if the main verticle cannot be deployed
Configuring Vert.x cache.
Change Color of selected segmentcontrol
Hey guys,
I'm quite new to Pythonista and I already love it. Everything is well documented and there is a thread for almost everything you need in this forum, except for this special requirement. For my own app I'm trying to figure out how I can set the color of the selected segment.
During my search I found this topic and was able to get it to work in my code. But unfortunately it only covers the font type, color and size. If we can access these attributes, wouldn't we be able to change the color of the selection as well?
I tried with the attribute NSForegroundColorAttributeName, but without success. And I have to admit, I have no experience with Objective-C.
So does anyone have a clue how I can get this to work? I would really appreciate it :)
Best wishes
@Killerburns read this topic
Hi cvp,
Thanks for your reply. I have already seen this thread, and as I mentioned I was able to change the color of the font as you describe in the thread you posted. If it's about the "get back iOS 12 appearance" part, I really have no idea how to use that :( I'm really a beginner.
I thought about building my own segmented control with buttons and ui.animate if it is too complicated to get this working with the built-in segmented control.
@Killerburns Try this, almost nothing in Objective C 😅
import ui from objc_util import * v = ui.View() v.frame = (0,0,400,200) v.background_color = 'white' d = 64 items = ['aaa','bbb'] sc = ui.SegmentedControl() sc.frame = (10,10,d*len(items),d) sc.segments = items def sc_action(sender): idx = sender.selected_index o = ObjCInstance(sender).segmentedControl() #print(dir(o)) idx = o.selectedSegmentIndex() for i in range(len(items)): if i == idx: with ui.ImageContext(d,d) as ctx: path = ui.Path.rounded_rect(0,0,d,d,5) ui.set_color('red') path.fill() s = 12 ui.draw_string(items[idx], rect=(0,(d-s)/2) sc.action = sc_action v.add_subview(sc) v.present('sheet')
Thanks
Public API for tf.initializers namespace.
Classes
class constant: Initializer that generates tensors with constant values.
class glorot_normal: The Glorot normal initializer, also called Xavier normal initializer.
class glorot_uniform: The Glorot uniform initializer, also called Xavier uniform initializer.
class identity: Initializer that generates the identity matrix.
class ones: Initializer that generates tensors initialized to 1.
class orthogonal: Initializer that generates an orthogonal matrix.
class random_normal: Initializer that generates tensors with a normal distribution.
class random_uniform: Initializer that generates tensors with a uniform distribution.
class truncated_normal: Initializer that generates a truncated normal distribution.
class uniform_unit_scaling: Initializer that generates tensors without scaling variance.
class variance_scaling: Initializer capable of adapting its scale to the shape of weights tensors.
class zeros: Initializer that generates tensors initialized to 0.
Functions
global_variables(...): Returns an Op that initializes global variables.
he_normal(...): He normal initializer.
he_uniform(...): He uniform variance scaling initializer.
lecun_normal(...): LeCun normal initializer.
lecun_uniform(...): LeCun uniform initializer.
local_variables(...): Returns an Op that initializes all local variables.
tables_initializer(...): Returns an Op that initializes all tables of the default graph.
variables(...): Returns an Op that initializes a list of variables.
ECharts_study
There are many libraries for rendering charts. Because the project needed data visualization, I studied ECharts; its full name is Apache ECharts.
The common preparation steps are not repeated for every chart type, so beginners should read the basic preparation section first.
Besides that, there are a lot of chart types.
What is Echarts
ECharts, an open-source visual chart library implemented with JavaScript, can run smoothly on PCs and mobile devices, is compatible with most current browsers (IE9/10/11, Chrome, Firefox, Safari, etc.), and relies on the underlying vector graphics library ZRender to provide intuitive, interactive and highly personalized data visualization charts.
ECharts provides the usual line chart, bar chart, scatter chart, pie chart and candlestick (K-line) chart, a boxplot for statistics, map, heatmap and lines charts for geographic data visualization, graph, treemap and sunburst charts for relational data visualization, parallel coordinates for multidimensional data, a funnel chart and gauge for BI, and it supports mashups between charts.
It is awesome and it is cool. Without further ado, let's see how to use it.
First experience
It can be copied directly.
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http- <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>ECharts-First experience</title> <script src=""></script> </head> <body> <div id="main" style="width: 600px;height:400px;margin:100px auto;"></div> <script> // Initialize the ecarts instance based on the prepared dom var myChart = echarts.init(document.getElementById('main')); // Specify configuration items and data for the chart var option = { // Icon title title: { text: 'ECharts First experience' }, // The legend tag of type HTML is also the title of the icon. But this label can be a specific icon. The title above can be the total title showing all icons legend: { data: ['achievement'] }, // Column, discount and other titles with horizontal and vertical coordinates will be easier to understand in combination with the figure xAxis: { data: ['language', 'mathematics', 'English', 'Physics', 'biology', 'Chemistry'] }, yAxis: {}, // Figure list. series: [{ name: 'achievement', type: 'bar', data: [120, 130, 119, 90, 90, 88] }] }; // Display the chart using the configuration item and data you just specified. myChart.setOption(option); </script> </body> </html>
install
There are many installation methods, such as downloading code directly, npm acquisition, and CDN. You can even select only what you need to install.
npm
npm install echarts --save
CDN
<script src=""></script>
Or, in the repository, select dist/echarts.js, click it and save it as an echarts.js file.
Then use it in the file
<script src="echarts.js" />
Or use the special customization build provided by the official site.
Import project
The previous section describes how to install. For those using CDN, all modules will be loaded. Those that use npm or yarn to install or only install some functions. We can import only some required modules.
Import all
import * as echarts from 'echarts';
This will import all components and renderers provided by echarts and attach them to the echarts object, so APIs are accessed with dot notation, for example:
echarts.init(....)
Partial import
For partial import, we can only import the modules we need to reduce the final volume of the code.
The core module and renderer are necessary. Other icon components can be introduced as needed. Note that you must register before using it.
// Import the echarts core module, which provides the functionality needed by echarts import * as echarts from 'echarts/core'; // Import the bar chart; chart suffixes are all Chart import { BarChart } from 'echarts/charts'; // Import the prompt box, title, rectangular coordinate system, data set and built-in data converter components; component suffixes are all Component import { TitleComponent, TooltipComponent, GridComponent, DatasetComponent, DatasetComponentOption, TransformComponent } from 'echarts/components'; // Automatic label layout, global transition animation, etc import { LabelLayout, UniversalTransition } from 'echarts/features'; // Introduce the Canvas renderer; note that introducing the CanvasRenderer or SVGRenderer is a necessary step import { CanvasRenderer } from 'echarts/renderers'; // Register required components echarts.use([ TitleComponent, TooltipComponent, GridComponent, DatasetComponent, TransformComponent, BarChart, LabelLayout, UniversalTransition, CanvasRenderer ]); // The next use is the same as before, initializing the chart and setting configuration items var myChart = echarts.init(document.getElementById('main')); myChart.setOption({ // ... });
Basic preparation (also a common step for other charts)
Preparation steps
Create a box. Used to display charts. Remember to give it an initial width and height.
The init method provided by echart is introduced, and other API s or components can be introduced as needed
Prepare an initialization chart function, which can be called when rendering for the first time or re rendering is required. (not necessary, if data need not be modified)
Call the init method in the initial function and pass in the box node prepared in advance. The init method returns an instance of a chart (echartsInstance). A box can only initialize one chart instance
Use the setOption method of the chart instance to pass in an option object, which is used to configure the generated icon type, data and style.
<template> <div> <!-- Set up a box to display the chart --> <div id="con" :style="{ width: '600px', height: '400px' }"></div> </div> </template> <script> // Import init import { init } from "echarts"; export default { name: "Echarts_bar", // Invoked after the component is mounted; initialize the chart mounted() { this.initChart(); }, methods: { initChart() { // Call init to get an ECharts instance let myChart = init(document.getElementById("con")); // Configure the chart. option is an object used to configure the chart myChart.setOption(option); }, }, }; </script>
option object
The configuration of option object is the difficulty of learning charts. The basic preparation is general, simple and easy.
Therefore, to learn echarts is to learn how to configure the option object and how to configure the required charts.
From the official website, the option object has many configurations. The key value under option may be an object, and the object has many key values.
But these are summaries. Only part of a chart is used, and some can be used but not necessary.
option:{ title:{...} // Configuration title xAxis:{...} yAxis:{...} // Configure x and y axes series:[{}] // Configuration chart //.... }
The option set in the same chart instance has another feature. The old and new options are merged rather than replaced.
That is, the first call to setOption configures everything; later calls only need to contain the options that are new or changed, and options that stay the same do not have to be configured again.
Bar chart
Bar chart is a common chart in junior high school. The advantage of using bar chart for data visualization in the project is that we can know the approximate quantity and change trend at a glance.
So let's learn how to use echart to make a bar chart. (the following code is written based on vue's scaffolding project)
Chart configuration
Basic preparations are common. The difficulty lies in that different charts need different configurations (but some configurations are also common). The functions are complex and there are more configuration items. This section only describes some common configurations of bar charts. Other configurations can be found on the official website as needed.
The setOption method is passed an object, so what we need to write is a set of key-value pairs. (The following keys are in no particular order.)
- Title: the title of the configuration chart. The value is an object.
title: { // Title content, support \ nline break text: "echarts_bar" // Other configurations will be described in the following general configuration section },
- xAxis / yAxis: the x-axis in the rectangular coordinate system grid. Generally, a single grid component can only place an upper and a lower x-axis at most. If there are more than two x-axes, you need to configure the offset attribute to prevent multiple x-axes from overlapping at the same position. (If a chart requires a coordinate axis, you can't omit xAxis, but its value can be an empty object.)
xAxis: { // Axis type: 'value', 'category', 'time' or 'log' (a log axis applies to logarithmic data) type: 'category', // Category data, valid on a category axis (type: 'category') data : [], // Position of the x-axis position : 'top'/'bottom', // The offset of the x-axis from the default position, useful when there are multiple x-axes at the same position offset : '20', (Default unit px) // Axis scale minimum and maximum values min / max : // If you don't write them, they are computed automatically. There are many ways to write the values, with different behaviour. See the documentation for details. }
- Series: series list. Each series determines its own chart type by type. The value is an array. The element of an array is an object. An object corresponds to a chart. (However, these charts all work in the same container. It can also be the superposition of multiple charts of the same kind.)
series:[ { // Series name, used for displaying tooltip, legend filtering of legend, and specifying corresponding series when setOption updates data and configuration items name:'System name', // Series type, that is, the type of table. Bar: bar table. Pie: pie chart. Line: line chart. etc. type:'bar', // data data: [], // If we have set the data of a coordinate axis (its type is category), the data here can be a one-dimensional array as the value of the category. It corresponds to another prepared category one by one. // It can also be a multidimensional array, which is used to set x-axis data and y-axis data. (you need to specify which axis is category in xAxis or yAxis). // data:[ // ['Apple', 4], // ['milk', 2], // ['watermelon', 6], // ['Banana', 3] // ] } ]
Simple case
<template> <div> <div id="con" :style="{ width: '600px', height: '400px' }"></div> </div> </template> <script> import { init } from "echarts"; export default { name: "Echarts_bar", data() { return { xdata: ["watermelon", "Banana", "Apple", "Peach"], ydata: [2, 3.5, 6, 4], }; }, mounted() { this.initChart(); }, methods: { initChart() { // Initialize the echarts instance based on the prepared dom let myChart = init(document.getElementById("con")); // Configuration chart myChart.setOption({ title: { // Title Component text: "Fruit unit price table", }, xAxis: { data: this.xdata, // type will default this axis to category, otherwise data will not take effect }, yAxis: { name:'unit: yuan' }, // type defaults to value. Data is configured in series series: [ // List of series. Each series determines its own chart type by type { name: "Unit Price", // Name of this chart type: "bar", // The type of the chart data: this.ydata, // Series data } ], }); }, }, }; </script>
Other effects
barWidth
barWidth sets the width of the bar columns. A percentage value is relative to the slot each category occupies on the x-axis (the axis width divided by the number of categories, as in this figure). If it is not set, the width adapts automatically. It is set in a series entry of series and applies to bar charts.
// Width of bar column barWidth : '20', // If the width of the column is 100%, it is divided into proportions according to the number of categories on the x-axis (in this figure). Each is one quarter of the box width, and this percentage is relative to one quarter. If it is not set, it will be adaptive.
markPoint
Mark point, which can mark some special data. It is used in the series
// Mark points to mark some special data markPoint: { symbol: "pin", // Marker shape: pin, circle, rect, etc. data: [ // Set marker points. It is an array; each element is an object, and each object corresponds to one marker point. { type: "max", // Maximum }, { type: "min", // Minimum name: "The name of the tag", // Optional }, ], },
markLine
Marker line, used in series
markLine: { data: [ { type: "average", name: "Marker line name", // Mean line }, ], },
label
The text label on the graph can be used to describe some data information of the graph, such as value, name, etc. it is used in the series
As far as the histogram is concerned, if only the column is used, the specific value represented cannot be seen directly, but it can be marked with label
label: { show: true, // Whether to display. The default is false position : 'Mark position' // Support strings: left, inside, etc. it also supports array representation of position: [20,30] / [10%, 20%] rotate : 'Rotation of text' // There are other styles, such as color, that can be set },
Multi effect use cases
<template> <div> <div id="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "Echarts", data() { return { xdata: ["watermelon", "Banana", "Apple", "Peach"], ydata: [5, 8, 4, 9], ydata2: [10, 20, 40, 30], }; }, mounted() { this.initChart(); }, methods: { initChart() { let myChart = init(document.getElementById("con")); myChart.setOption({ title: { text: "Fruits", link:'', left:'30' }, xAxis: {}, yAxis: { type: "category", data: this.xdata, }, series: [ { name: "Unit Price", type: "bar", markLine: { // Marker line data: [ { type: "average", }, ], }, label: { show: true, // Whether to display. The default is false }, barWidth: "25%", data: this.ydata, // Chart data }, { name: "Storage capacity", type: "bar", data:this.ydata2, markPoint: { symbol: "pin", data: [ { type: "max", // Maximum }, { type: "min", }, ], }, }, ], }); }, }, }; </script>
General configuration
Here are some general configurations, though not all of them; some options are only used in special scenarios or are rarely used, so I won't introduce them. Consult the documentation if you need them. This note is meant for introductory learning and provides some examples.
title
Configure the title of the chart. The value is an object.
title: { // Title content, supports \n line breaks text: "echarts_bar", // Title hyperlink, click the title to jump link :'url' // Title style object textStyle : { /* color fontStyle / fontWeight and other font styles width / height There are also border, shadow and other styles */ } // Subtitle text, it also has independent styles, links, etc. subtext :'string' // Horizontal alignment of the whole (including text and subtext) textAlign :'' // 'auto','left','right','center' // Position adjustment left / top / right / bottom // The value can be 20 (default unit px), a percentage like '20%' relative to the width and height of the parent box, or 'top', 'middle', 'bottom'. },
tooltip
Prompt box component. It can be set in many locations, such as global, local, a series, a data, etc
The prompt box component has different effects on different types of diagrams.
For example, in the bar chart, the effect is like this: (a prompt box appears when the mouse hovers over the bar column)
The value of tooltip is also an object. You can configure whether to display, display trigger conditions, displayed text, style, etc.
// Prompt box component. tooltip: { show: true, trigger: "item", // Trigger type. // Item data item graph trigger is mainly used in scatter chart, pie chart and other charts without category axis. (that is, it will be triggered on the graph) // Axis coordinate axis trigger is mainly used in histogram, line chart and other charts that will use category axis (it can be on the coordinate axis, not necessarily on the graph) // none triggers nothing. triggerOn: "mousemove", // Triggered when the mouse moves. Or click / none // formatter:'{b}: {c}' / / you can customize the content of the prompt box. If not, there will be the default content. // Some variables have been assigned values. You can refer to documents according to different types, such as b and c above // It supports the writing of {variable}. It also supports HTML tags, // In addition to a string, the value can also be a function. A function is a callback function. The return value of the function is a string, and the value of this string is displayed as the value of formatter. // The function receives a parameter, which is an object with relevant data in it. formatter: (res) => { return `${res.name}:${res.value}`; // Category name: value }, // Text style of the floating layer of the prompt box textStyle :{} // Color, fontsize, etc },
toolbox
In order, the five built-in tools are: export chart as picture, data view, dynamic type switching, data area zooming, and reset. They can be added as needed.
toolbox: { // Tool size itemSize: 15, // Spacing between tools itemGap: 10, // Configure tools feature: { saveAsImage: {}, // Export picture dataView: {}, // Direct data display restore: {}, // Reset dataZoom: {}, // zoom magicType: { // switch type: ["bar", "line"], }, }, },
legend
The legend component displays symbols, colors and names of different series. You can click the legend to control which series are not displayed
legend: { data: ["Unit Price", "Storage capacity"], // The element values are the names of the series // There are also some styling options }, series:[ { name:'Unit Price', ... },{ name: "Storage capacity" .... } ]
Line chart
Line chart is used to show the change of data in a continuous time interval or time span. Its characteristic is to reflect the trend of things changing with time or ordered categories.
It shows trends clearly, and when several groups of data are displayed in the same chart they are easy to compare; it is also simple to use.
Basic use
The use of line chart is almost the same as that of bar chart. In most cases, you only need to replace the type bar with the type line.
Set up the box -> import the echarts related APIs -> create an echarts instance -> set the option
<template> <div> <div id="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "Echarts_line", mounted() { this.initChart(); }, methods: { initChart() { let myChart = init(document.getElementById("con")); let option = { title:{ text:'New user statistics', left:'center' }, xAxis:{ name:'month', data:['1','2','3','4','5','6'] }, yAxis:{ name:'New user / individual' }, series:[ { name:'New user statistics', type:'line', data:[3000,2400,6300,2800,7300,4600,2200] } ] }; myChart.setOption(option); }, }, }; </script>
Other effects
xAxis , yAxis
The configuration of the x and y axes is roughly the same and has already been described in the bar chart section. Here are some necessary supplements.
- boundaryGap
The setting and performance of category axis and non category axis are different.
The value can be Boolean or array ['value1','value2']. Set the blank size on both sides.
xAxis:{ type:'', data:[], boundaryGap:false }
- scale

Only valid on a value axis (type: 'value'). Controls whether the scale must include the zero value; when set to true, the axis scale is not forced to contain zero.
It is suitable for data with large data and small fluctuation range, or in the scatter diagram of double value axis.
yAxis:{ // type defaults to value scale:true }
lineStyle
The style of polyline is set in the series.
series: [ { // .... lineStyle: { color: "red", type: "dashed", // dashed: dotted line, dotted: dotted line, solid: solid line (default) }, }, ]
- smooth
Sets the style of the break point of the line chart, whether it is smooth or angular. The value is Boolean, and the default is false. Set in the series
series: [ { // .... smooth:true }, ]
- markArea
Marking area, in the bar chart, we learned to mark points and lines. A marker area is added here. The marked area will have a shadow. We can set the color of the shadow.
It is also set in the series
// In the element object of sereis, markArea:{ data:[ // data is a two-dimensional array. [ // Set marker area { xAxis:'1' }, { xAxis:'2' } ], [ // Set second marker area { xAxis:'4', yAxis:'3100' }, { xAxis:'6', yAxis:'3300' } ] ] }
- areaStyle
Area fill pattern. After setting, it will be displayed as an area map.
series: [ { // ... areaStyle:{ color:'pink', origin:'end' // Auto (default), start, end } }, ],
- stack
Stack graph, when there is more than one group of data, and we want the second group of data to be superimposed on the first group of data.
series: [ { name: "New user statistics", type: "line", data: [700, 1000, 400, 500, 300, 800, 200], // The value of stack is used for matching, and the superimposed value will match it stack:'mystack', areaStyle:{} // Configure the color. If it is blank, ecahrts will automatically configure the color },{ name:'Overlay data', type:'line', data: [700, 300, 400, 200, 500, 100, 300], stack:'mystack', // Superimposed with the data of the stack configured first. areaStyle:{} } ],
Scatter diagram
Basic use
Scatter plot can help us infer the correlation between variables.
Its type is scatter. Compared with bar and line, its feature is that both coordinates are data (value). It displays points one by one.
The basic operation is the same as that of line chart and bar chart. The type of coordinate axis is value. The type of series is scatter. The data is a two-dimensional array.
<template> <div> <div id="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "Echarts_scatter", data() { return {}; }, mounted() { this.initChart(); }, methods: { initChart() { let myChart = init(document.getElementById("con")); let option = { title: { text: "Scatter diagram", left: "center", }, xAxis:{ type:"value", scale:true }, yAxis:{ type:"value", scale:true }, series:[ { type:'scatter', data:[ // x , y [10,11], [9,10], [11,11], [10,10], [10,12], [13,11], [12,15], [12,13], [14,13], [13,13], [13,12], [15,14], [14,13], [14,14], [16,15], [16,17], [17,16], [17,17], [17,18], [18,16], [18,17], [18,20], ], } ] } myChart.setOption(option); }, }, }; </script>
Other effects
symbolSize
Configuration item that controls the size of scatter points. Used in series.
A single value controls the size of all scatter points uniformly; a callback function can control the size of each point individually.
series:[ { // ... // Controls the size of the scatter. // symbolSize:10, / / the value can be a numerical value to control the size of all scatter points in this series symbolSize:function(e){ // The value can also be a function console.log(e); // The value is an array of numeric values one point at a time. if(e[1]/e[0]>1){ // When the value of the y-axis is greater than the value of the x-axis, the size is 12, otherwise it is 6 return 12 } return 6 }, } ]
itemStyle
Control the color of scatter points; it is used in the series.
It can control the color of all points or the color of some points.
itemStyle:{ // color:'red' / / direct assignment, color:function(e){ // Function values are also supported console.log(e.data); // E is the scattered data, and e.data takes out the numerical array let [a,b] = e.data if(b/a>1){ return 'red' } return 'green' } }
Figure of the combination of the above two effects
effectScatter
It is the value of type of series series. It is also a scatter chart. But it has ripple effect. For example:
We set the value of type to effectScatter. Then all scatter points of this series have the ripple effect.
It can be used with showEffectOn trigger timing and rippleEffect ripple effect.
series:[{ type:'effectScatter', // Ripple animation effect. After use, turn on the ripple effect for all scatter points in this series. After the default icon is loaded, turn on the effect. // Trigger timing showEffectOn:'emphasize', // The default value is render. It means that the scatter is enabled after loading. emphasize means that the scatter is enabled only when the mouse hovers over the scatter // Ripple style rippleEffect:{ color:'', // The color of ripples, which defaults to the color of scattered points number : 2 // Number of ripples period : 3 // Animation period in seconds scale:10 // Control the size of the expansion of ripples. scale is based on how many times the original. }, }]
General configuration of xoy axis
The above three figures are all on the X and Y axes. This section describes some general configurations of the xoy axis.
grid
For the drawing grid in the rectangular coordinate system, the upper and lower X axes and the left and right Y axes can be placed in a single grid.
grid:{ // Show chart border show:true, // border width borderWidth:10, // Border color borderColor:'red', // left / top / etc. control the offset between the chart and the box. It will affect the size of the chart left:120 // left, top, bottom, right // Controls the size of the chart // width: / height: },
axis
x and y axes. This knowledge point is actually mentioned in the bar chart.
xAxis: {   // or yAxis
  // type: the axis type: 'value' (numeric), 'category', 'time', 'log' (the log axis applies to logarithmic data)
  // Category data, valid on a category axis (type: 'category').
  data : []
  // Position of the x- or y-axis.
  position : 'top'/'bottom'/'left'/'right'
  // The offset of the axis from the default position; useful when there are multiple x-axes at the same position
  offset : '20' (Default units px)
  // Axis scale minimum and maximum values
  min / max : // If you don't write them, they are calculated automatically. There are several ways to write the values, with different effects. Please refer to the documentation for details.
}
Emphasize a few points
- If data is set, type defaults to category. If it is empty, it defaults to value. The value is obtained from series.
- If the axis is a category axis but data is not set on it, the data in the series must be a two-dimensional array (see the sketch below).
- If an axis gets its data from the series and that data is category data, the axis must still be configured with type: 'category'.
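A minimal sketch of that situation (the category names and values are only for illustration, echoing the commented example in the series section earlier): the category axis carries no data of its own, so each series data item is a [category, value] pair.

option = {
  xAxis: { type: 'category' },   // category axis, but no data configured on the axis itself
  yAxis: {},                     // value axis by default
  series: [{
    type: 'bar',
    // Two-dimensional array: each element is [category, value]
    data: [
      ['Apple', 4],
      ['Milk', 2],
      ['Watermelon', 6]
    ]
  }]
};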
dataZoom
Previously, we knew that there was a dataZoom tool in the toolbox. You can select a part of the graph to zoom in and out.
This dataZoom is not configured in the toolbox, but directly under option.
The value is an array, and each element of the array is one zoom component. For example, you can configure one for the x-axis data and one for the y-axis data. When there is only one, it controls the x-axis by default.
let option = { dataZoom:[ { // Zoom type: controls the zoom position and method. type:'slider' // A sliding axis appears at the bottom of the x-axis. 'inside' // zoom with the mouse scroll wheel inside the chart } ] // ... }
**If it is inside**, no UI is added; instead, scrolling the mouse wheel inside the chart zooms around the mouse position.
Multiple scalers
When there are multiple zoom components corresponding to different axes, you need to configure the index of the axis each one controls.
Generally, if there are not multiple x or y axes, the index is 0.
dataZoom:[ { type:'slider', xAxisIndex:0 // x axis },{ type:'slider', yAxisIndex:0 } ],
Pie chart
The pie chart is also a kind of chart with a simple structure. Its characteristic is that you can intuitively see the approximate proportion of each part.
It doesn't need the x and y axes. But the basic preparation remains the same.
The main things to note are that the type of the series is pie, and the format of its data.
The data value of series can be written in a variety of ways. One dimensional array, two-dimensional array, or object array. Object arrays are commonly used
[ { name:'Data item name.', value:'Data value' } ]
Simple case
<template> <div> <div class="con" :</div> </div> </template> <script> import {init} from 'echarts' export default { name:'ecahrts_pie', mounted(){ this.initChart() }, methods:{ initChart(){ let chart = init(document.querySelector('.con')) let option={ title:{ text:'pie', left:'center' }, series:[ { type:'pie', data:[ { name:'sleep', value:'7' },{ name:'Code', value:'10' } ] } ] } chart.setOption(option) } } } </script>
Common effects
label
The text label on the pie chart graph can be used to describe some data information of the graph, such as value, name, etc. The default is displayed. That is, the coding and sleeping in the figure above.
// If the data is just a simple one-dimensional array data = [7,10] // Then there are only values and no names. Therefore, you can set label not to display label:{ show:false } // The picture shows a bare circle.
In addition to controlling whether the label is displayed, we can also control the content and style of the displayed data text.
label:{ show : true, formatter:'character string', // Also fill in the variable {variable} // {a} : series name. // {b} : data name. // {c} : data value. // {d} : percentage. // The value of formatter can also be a function. The function can be connected to a parameter. If there is data required on the parameter, integrate the data and return it. // There are also position, color and other control styles }
radius
The radius of the pie chart. By setting it, we can change the radius of the pie chart or set it as a ring.
series:[{ // .... radius:'' // Value: absolute radius in PX // Percent: take half of the minimum value of width and high school, and then take the value of percent. // Array: ['inner radius',' outer radius'] }]
label:{ show:true, formatter:'{b}:{d}%' }, radius:['60','80']
roseType
Whether to display as Nightingale chart, and distinguish the size of data by radius. Two modes can be selected:
- The 'radius' sector center angle shows the percentage of data, and the radius shows the size of data.
- 'area' the center angle of all sectors is the same, and the data size is displayed only by radius.
series:[{ // ... roseType:'radius' }]
selectedMode
Click to select the mode. The pie chart has its own animation when the mouse hovers. But clicking does not respond.
Set selectedMode to add the click selection effect for the sector. It can be used with selectedOffset
series:[{ // ... selectedMode:'single' // Another value is multiple. // The effect is shown in the figure below. If it is single, only one sector can have this effect at the same time. While multipe allows multiple sectors to be in the selected state. // It can be used with selectedOffset to set the offset size selectedOffset: 30 }]
Map
Basic use
Brief steps
The basic preparation remains unchanged.
- Introduce echarts related API s, prepare containers, initialize objects, and set option s.
But it is also very different from the previous icons.
- The json data of vector map needs to be prepared first. It can be loaded through ajax or downloaded locally in advance
- Register map data globally using registerMap exposed by echarts.
import {registerMap} from 'echarts' registerMap('chinaMap',china) // The first parameter is the name required for later use, and the second is JSON data
- In option, you can configure either series or geo.
geo:{ // type is map type:'map', map:'chinaMap' // It needs to be the same as the name set during registration } // perhaps series:[{ type:'map', map:'chinaMap' }]
Simple case
Download address of JSON file in this case
<template> <div> <div id="con" :</div> </div> </template> <script> // Import init and registerMap import {init,registerMap} from 'echarts' // Import map JSON data downloaded locally in advance import china from '../../jsondata/china.json' export default { name:'echarts_map', mounted(){ this.initChart() }, methods: { async initChart(){ let Chart = init(document.querySelector('#con')) // Register maps with register await registerMap('chinaMap',china) let option = { geo:{ type:'map', map:'chinaMap' // It needs to be the same as the name set during registration } } Chart.setOption(option) } }, } </script>
Common configuration
roam
Set the effects that allow zooming and dragging. When the mouse is placed on the map, you can zoom through the scroll wheel, and click the map to drag.
The value is Boolean
geo:{ // ... roam:true }
label
In the rendering of the above example, we can see that each province has no name. label can control the display and style of province names.
geo:{ // ... label:{ show:true, // The displayed content is not set. The default is the province name. The value can be a string or function. It is the same as the above. formatter: '' // There are many configurations of styles. Please refer to the document } }
zoom
Set the zoom ratio of the map when it is first loaded. The value is a number (1 by default). Configured under geo, at the same level as label.
center
Set the center point. The values are array. ['longitude coordinate', 'latitude coordinate'] set the point corresponding to longitude and latitude to the center of the chart
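A minimal sketch combining zoom and center (the values are arbitrary illustration values; the map name assumes the registration shown above):

geo: {
  type: 'map',
  map: 'chinaMap',
  roam: true,              // allow wheel zoom and drag
  zoom: 1.5,               // initial zoom ratio of the map
  center: [116.40, 39.90]  // [longitude, latitude] placed at the center of the chart
}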
Common effects
Set color
In the above maps, they are all gray. Except that the mouse hovers, it will change color. It looks very monotonous.
Through the combination of series and visual map, we can set the color for the map, but its color will not be too dazzling. Generally, the color is used to display the population or temperature.
**Configuration steps** (based on the configured basic map)
- Prepare a data, which saves the names of provinces in the map and the corresponding data, or the names of cities (according to the national map, provincial map, or others)
let temdata = [ { name:'Guangdong Province', // Suppose it is a national map, and name matches Guangdong Province value:'32' // Set the corresponding data. For example, here we set the temperature. },{ name:'Fujian Province', value:'29' },{ // ... },//... ]
- Associate a series with geo. (Alternatively, geo can be replaced by the series itself; either way, this step adds a series.)
series:[{ // Configuration data is the data prepared above data:temdata, // The index of the associated map is required. The index can be configured in geo. If there is only one, the default is 0 geoIndex:0, // The configuration type needs to be map. type:'map' }],
- Configure visual map. This is a required step, otherwise the color will not be displayed.
It is a visual mapping component used for "visual coding", that is, mapping data to visual elements (visual channels).
visualMap:{ // After configuring visualMap, show defaults to true. If it is set to false, this component will not be displayed. show:true, // Set minimum value min:0, // Set maximum max:100, // Configure the color change from minimum to maximum. If not configured, it will be set automatically inRange:{ color:['white','pink','red'] // Multiple colors can be configured }, calculable:true // Show the drag handles; dragging them filters the mapped data range }
case
<template> <div> <div id="con" :</div> </div> </template> <script> import { init, registerMap } from "echarts"; import china from "../../jsondata/china.json"; // import axios from 'axios' export default { name: "echarts_map", data(){ return { temdata:[] } }, mounted() { // get data china.features.forEach(v=>{ let obj = { name:v.properties.name, value:(Math.random()*(100)).toFixed(2) } this.temdata.push(obj) }) // Initialize chart this.initChart(); }, methods: { async initChart() { let Chart = init(document.querySelector("#con")); await registerMap("china", china); let option = { geo:{ type:'map', map:'china', label:{ show:true } }, series:[{ data:this.temdata, geoIndex:0, type:'map' }], visualMap:{ show:true, min:0, max:100, inRange:{ color:['white','red'] } } }; Chart.setOption(option); }, }, }; </script>
Combined with other figures
Maps can be displayed in combination with other maps.
Combined scatter diagram
A container can have multiple graphs. Here we add a scatter graph to the map, in fact, we add a series of objects with the type of scatter graph.
But our previous scatter map is on the coordinate axis, and the map has no coordinate axis. Therefore, we need to use longitude and latitude to mark the position of points.
Steps (based on map already configured)
- Prepare data. The data uses longitude and latitude to mark the position of points. Longitude and latitude can be obtained in the json data of the map.
let opintData = [ [116.405285, 39.904989], [117.190182, 39.125596], //.... ]
- Add an object in the series whose type is scatter or effectScatter, and set its data.
- At the same time, configure coordinateSystem as 'geo', i.e. use longitude and latitude as the positioning reference; the default is the xoy coordinate axes.
series:[ { type:'effectScatter', data:opintData, coordinateSystem:'geo', } ]
Combine lines
Path map
It is used for drawing line data with start and end information, mainly for visualizing routes and flight lines on the map.
Its coordinateSystem is geo by default. Here we cooperate with the map to present the effect shown in the figure below.
The type of this series is lines. No x/y axes are required, so pay attention to how the data is organized.
series:[ { type:'lines', data: [ // The value is an array of objects, and each object is a path. { coords: [ [116.405285, 39.904989], // starting point [129.190182, 38.125596], // End, // Support multiple points, if necessary ], // Configure path style lineStyle:{ color:'red', width:1, opacity:1 }, // Adds a text label to the end of the path. label:{ show:true, formatter:'Beijing XXX company', fontSize:16, borderRadius:4, padding:[10,15], // verticalAlign :'middle', backgroundColor:'orange' } }, ], } ]
Radar chart
Radar chart is generally a visual processing of data after testing the performance of an object. Can clearly see the advantages and disadvantages.
Basic use
step
- Prepare the box, import the echarts related APIs, etc. (steps already covered above are not described in detail)
- Configure radar.indicator: it is used to configure a framework. As shown in the figure above, the Pentagon and text.
// Its value is an array, and the elements of the array are objects. An object corresponds to a corner. There are name and max attributes in the object. Are the displayed text and the maximum value, respectively option = { radar:{ indicator:[ { name:'hearing', max:'100' },{ // .... } ] } }
- Configure series. The type is radar. data is an array of objects.
series: [ { type: "radar", data: [ { name: "English", value: ["80", "90", "77", "65", "100"], // Match the object configuration sequence in the indicator one by one. }, { // Can there be more than one } ], }, ],
Simple case
<template> <div> <div class="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "ecahrts_radar", mounted() { this.initChart(); }, methods: { initChart() { let chart = init(document.querySelector(".con")); let option = { radar: { indicator: [ { name: "hearing", max: "100", }, { name: "fluent", max: "100", }, { name: "pronunciation", max: "100", }, { name: "vocabulary", max: "100", }, { name: "Aggregate quantity", max: "100", }, ], }, series: [ { type: "radar", data: [ { name: "English", value: ["80", "90", "77", "65", "100"], }, ], }, ], }; chart.setOption(option); }, }, }; </script>
Common configuration
label
Text label. Set show to true to display the data values (or custom content via formatter).
series:[{ // ... label:{ show:true formatter:'show contents' // The default is the corresponding value position // Equal style settings } }]
areaStyle
Area fill pattern, which can fill the part surrounded by some data with other colors. You can set color, transparency, etc.
series:[{ // ... areaStyle:{ // Even empty objects have a default color fill color:'orange' , opacity:0.6 } }]
shape
The radar chart is not necessarily polygonal; it can also be round. We can modify the shape by adding the shape configuration to the radar object.
radar: { shape: "circle", // It can be a circle or a polygon (default) // indicator: ... },
Instrument diagram
The gauge chart (instrument diagram) is one of the simplest charts to configure.
Basic use
step
- Basic box preparation, introduction of the echarts API, etc.
- Set the type of the series object element to gauge; data configures the pointer(s).
series: [ { name:'Unit or data name' // For example, weight, or km/h; can also be omitted type: "gauge", data: [ // The value is an array, the elements of the array are objects, and an object is a pointer { value: "92", // Set the current value pointed to by the pointer; the maximum and scale values of the gauge are adjusted automatically }, ], }, ],
Common configuration
itemStyle
Configure the color of the pointer in the object that configures the pointer. Each pointer can be configured with different colors, border colors, shadows and other styles.
data:[{ value:'92', itemStyle:{ color:'pink' } }]
min/max
The scale value of the instrument will be set automatically or manually.
series: [ { // ..... min: "50", max: "150", }, ],
style
Color theme
The easiest way to change the global style is to directly adopt the color theme.
Built in theme color
In addition to the consistent default theme (light), ECharts5 also has a built-in 'dark' theme.
The init API can also pass in the second parameter to change the color theme. The default is light. If you need to change to dark, you can pass in dark.
let myChart = init(document.querySelector('#con'),'dark')
Custom theme colors
In the theme editor, we can edit the theme and then download it as a local file. Then use it in the project.
The specific steps are clearly written in the official documents.
- After configuring your own font, set a theme name. Of course, you can also use the default font
- Then click the download topic. There are two versions that can be downloaded and used. The use steps and official documents are also very clear
Introduction method
// es6 import '../assets/mytheme' // use let chart = init(document.querySelector("#con"),'mytheme'); // CDN <script src='url/mytheme.js'></script>
If you change the file name and forget the topic name, you can open the downloaded topic file and find the following code
echarts.registerTheme('mytheme',...) // The first parameter is the subject name.
Palette
Whether it's a built-in theme or a theme we set ourselves. Each has a palette.
For example:
If there are no special settings, the chart will take colors from the palette.
For example, we often do not set color, but the bar column has color, and the pie chart also has color.
Global palette
In addition to the theme's own color palette, we can also set a global color palette, which will overwrite the theme color palette.
The global palette is directly set under option, the key is color and the value is array
option:{ color:['#ff0000','blue','green'] // Supports hexadecimal and other color formats }
If there are fewer kinds of colors, the same color may appear in parts that should have different colors, such as color:['red','blue ']
Local palette
The partial palette is set in the series. Dedicated to a graph.
series:[{ type:'pie', color:['red','yellow'] }]
Direct style settings (xxxStyle)
The color palette provides color, but when rendering, the color of a part is uncertain, or may change.
Therefore, we can use the relevant configuration to set its color directly.
Direct style setting is a common setting method. In option, itemStyle,textStyle,lineStyle,areaStyle,label, etc. can be set in many places. In these places, the color, line width, point size, label text, label style, etc. of graphic elements can be set directly.
itemStyle
This can be used in many charts. It can set the color of the bar column of the bar chart, control the color of each part of the pie chart, and use it in the scatter chart, etc.
series:[{ //... itemStyle:{ color:'red' // Controls all items; a callback function is also supported: color: (res) => { ... } } }]
There are also graphs that support writing in data to control the color of some data, such as pie charts
series:[{ type:'pie', data:[{ name: "sleep", value: "7", // Set color itemStyle: { color: "green", }, }] }]
In addition to controlling the color, itemStyle has other styles to control
// Partial list of other itemStyle options: borderColor: '#000', // Border color borderWidth: 0 , // Border width borderType: 'solid' , // Border type borderDashOffset: 0 , // Used to set the offset of the dashed line borderCap: 'butt' , // Specifies how segment ends are drawn opacity: 1 , // transparency ......
textStyle
Text Style is configured in the title. We usually use to configure the color, size and position. Therefore, the use method is basically the same as that of other styles. There are some general descriptions in the general configuration. We won't repeat it.
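For reference, a typical use might look like this sketch (the values are illustrative only):

title: {
  text: 'Fruit unit price table',
  textStyle: {
    color: '#333',        // title color
    fontSize: 18,         // title size
    fontWeight: 'bold'    // title weight
  }
}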
lineStyle
It can be used in line related graphs such as line graph and path graph.
areaStyle
We have used this configuration in the line chart and radar chart, but more than just those two kinds of charts can use it. Its configuration is relatively simple.
label
Text tags, which we have used many times in the above examples, will not be repeated.
Constant light and highlight
Normally on, that is, its color in the general state, which can be configured by direct style or color palette.
This section focuses on highlighting.
Highlight is the color (such as mouse hovering or clicking) that appears when the graph corresponding to some data is selected. It is generally hovering.
The highlighted style is configured through emphasis.
Take the pie chart as an example, others are similar.
series:[{ //... emphasis:{ scale:true, // Whether to zoom in scaleSize:2, // Magnification focus:'none', // Fade out other graphics // lineStyle, itemStyle, label, etc. can also be placed inside emphasis. The writing stays the same, only the position changes: they configure the style when highlighted. itemStyle: { color:'red' } } }]
Color gradient and texture fill
The value of color supports the use of RGB to represent pure color, such as' rgb(128, 128, 128) ', or RGBA, such as' rgba(128, 128, 128, 0.5)', or hexadecimal format, such as' #ccc '.
The value of color can also be a callback function.
In addition, the value of color can be an object. The gradient used to configure the color
// Linear gradient color:{ type:'linear' // Linear gradient x: , y: , x2:, y2:, // (x,y) represents one point, (x2,y2) represents another point, and x,y points to x2,y2. Indicates the direction colorStops:[{ offset:0, // Color at 0% color:'' },{ offset: 0.3 // Color at 30% color:'' },{ // .... }] }
// Radial Gradient { type: 'radial', x: 0.5, y: 0.5, r: 0.5, // (x,y) is the origin and r is the radius colorStops: [{ offset: 0, color: 'red' // Color at 0% }, { offset: 1, color: 'blue' // Color at 100% }], }
What's more amazing about color is that it supports texture filling, which is equivalent to backgroundImage.
// Texture fill { image: imageDom, // Htmlimageelement and htmlcanvas element are supported, but path string is not supported repeat: 'repeat' // Whether to tile. It can be 'repeat-x', 'repeat-y', 'no repeat' }
case
<template> <div> <div class="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "ecahrts_pie", mounted() { this.initChart(); }, methods: { initChart() { // Load picture resources as textures let imgsrc = ""; let piePatternImg = new Image(); piePatternImg.src = imgsrc; let chart = init(document.querySelector(".con")); let option = { color: ["pink", "lightblue"], title: { text: "pie", left: "center", }, series: [ { type: "pie", data: [ { name: "sleep", value: "7", itemStyle: { color: { type: "linear", x: 0, y: 0, x2: 0, y2: 1, colorStops: [ { offset: 0, color: "red", // Color at 0% }, { offset: 1, color: "blue", // Color at 100% }, ], global: false, // The default is false }, }, }, { name: "Code", value: "9", itemStyle: { color: { type: "radial", x: 0.5, y: 0.5, r: 0.5, colorStops: [ { offset: 0, color: "#46bbf2 ", / / color at 0% }, { offset: 1, color: "#Aa47d7 ", / / color at 100% }, ], global: false, // The default is false }, }, }, { name: "Daydreaming", value: "5", itemStyle: { color: { image: piePatternImg, repeat: "repeat", }, }, }, ], }, ], }; chart.setOption(option); }, }, }; </script>
Container and container size
Containers are used to store charts. In the above use, we only used one method to set the width and height.
Define a parent container with width and height in HTML (recommended)
Generally speaking, you need to define a < div > node in HTML first, and make the node have width and height through CSS. Using this method, you must ensure that the container has width and height at init.
<!-- Specify width and height --> <div class="con" :</div> <!-- Specify the height and width as the width of the viewer visual interface --> <div class="con" :</div>
Specifies the size of the chart
If the chart container does not have width and height, or if you want the chart width and height not to be equal to the container size, you can also specify the size during initialization.
<div id="con"></div> var myChart = init(document.getElementById('con'), null, { width: 600, height: 400 });
This takes advantage of the third parameter of init, which is an object that specifies the width and height of the chart.
Respond to changes in container size
When the width and height are specified, the size of the chart is fixed. If we change the size of the viewer window, we will find that the size of the chart will not change.
Even if the width is not hard-coded and the container width is adaptive, the chart itself will not adapt, and part of it may be cut off. As shown in the figure:
If we want the chart to be adaptive, we can use the resize() method on the chart instance returned by init.
var myChart = init(document.getElementById('con')); window.onresize = function() { // Listen for window size events myChart.resize(); // Call the resize method }; // You can do the same window.onresize = myChart.resize // Or use the event listener window.addEventListener("resize", myChart.resize);
resize supports an object parameter, which is equivalent to the third parameter of init, so that the actual width and height of the chart does not depend on the container.
myChart.resize({ width: 800, height: 400 });
Note, however, that the size of the chart is limited by the container. Once the chart's width and height are hard-coded, they cannot adapt to changes.
In the following cases, calling resize directly cannot change the width and height of the chart
// 1. Container width and height limit <div id="con" :</div> // 2. Chart width and height limit var myChart = init(document.getElementById('con'), null, { width: 600, height: 400 });
But it is not without solution
We can get the width and height of the viewport dynamically and then either adjust the container's size before calling resize, or, if the container does not set overflow to hidden, pass the dynamically obtained width and height directly as the resize parameters.
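One possible sketch of that idea, assuming the chart was created with fixed dimensions and the container is not clipped (the 0.6 height factor is an arbitrary choice for illustration):

window.addEventListener('resize', function () {
  // Read the current viewport size and pass it to resize,
  // so the chart's real width/height no longer depend on a fixed container size
  myChart.resize({
    width: document.documentElement.clientWidth,
    height: document.documentElement.clientHeight * 0.6
  });
});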
animation
Load animation
A lot of chart data is requested from the server. If the network is not very good, or there is a lot of data, there will be blank.
Therefore, echarts has a built-in loading animation.
myChart.showLoading() // Show animation myChart.hideLoading() // hide animation // myChart is the chart instance returned by init
We can turn on the animation when loading data, and then hide the loaded animation when the data comes back.
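A typical pattern might look like the following sketch (the endpoint URL and the shape of the returned data are hypothetical; init is assumed to be imported from echarts as in the earlier examples):

let myChart = init(document.getElementById('con'));
myChart.showLoading();                      // show the built-in loading animation while waiting
fetch('/api/fruit-prices')                  // hypothetical endpoint
  .then(res => res.json())
  .then(data => {
    myChart.hideLoading();                  // hide the animation once the data is back
    myChart.setOption({
      xAxis: { data: data.categories },
      yAxis: {},
      series: [{ type: 'bar', data: data.values }]
    });
  });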
animation
Animation configuration in option.
option turns on the animation effect by default. For example, after loading the data, or when changing the data.
option = { animation: true // Whether to enable animation effects. The default value is true animationDuration : 100 // Animation duration in milliseconds // The value of animationDuration can also be a function; its parameter is the index of each graphic element, so the duration can vary per element animationEasing:'linear' // Easing method used by the animation // See the easing reference for more effects animationThreshold:8 // Threshold of animated graphic elements // Take the histogram for example: each column, marker point or marker line has an independent animation. If these independent animations exceed the number set here, all animation effects are cancelled, which is equivalent to animation being false. // There are some further animation options; refer to the official documents. }
API
The APIs provided by echarts can be divided into four categories.
- echarts: the global echarts object
- echartsInstance: an echarts instance, that is, the object returned by init
- Action: Chart behavior supported in echarts
- Event: event in ECharts
echarts
The global echarts object is obtained after the script tag imports the echarts.js file, or through module imports.
init
Creates an echarts instance and returns it. You cannot initialize multiple echarts instances on a single container.
init(Container node, theme, chart width and height) // Of the three parameters, only the first is required
connect
connect is an API exposed by echarts that can associate multiple charts.
Because a container registers a chart instance. If multiple charts need to be displayed and cannot be merged, they will become separate pieces.
If we want to click to download as an image, or refresh the chart at the same time, it will be a little troublesome. connect can solve this problem.
import {connect} from 'echarts' // Set the group id of each instance separately. You need to set the same id for the associated charts chart1.group = 'group1'; chart2.group = 'group1'; connect('group1'); // Or you can directly pass in the instance array that needs linkage connect([chart1, chart2]);
disconnect
Disassociate connect. The parameter is the associated group id.
import {disconnect} from 'echarts' disconnect(groupid)
dispose
dispose is to destroy the instance. It destroys the chart and destroys the instance. At this time, resetting the option is invalid. It is different from the API of another clear.
import {dispose} from 'echarts' dispose(myEchart) // The parameter is a chart instance or its container node
use
Use components to cooperate with new on-demand interfaces. It needs to be used before init.
// Import the echarts core module import * as echarts from 'echarts/core'; // Bar charts are introduced, and the suffix of charts is Chart import { BarChart } from 'echarts/charts'; // Rectangular coordinate system components are introduced, and the suffix of components is Component import { GridComponent } from 'echarts/components'; // When introducing the renderer, note that introducing the CanvasRenderer or SVGRenderer is a necessary step import { CanvasRenderer } from 'echarts/renderers'; // Register required components echarts.use( [GridComponent, BarChart, CanvasRenderer] );
registerMap
Register the map. After obtaining the JSON data of the map, register the map globally.
import {registerMap} from 'echarts' registerMap('mapName',mapJsonValue)
echartsInstance
APIs on the chart instance
group
Grouping of charts for connection
let myChart = init(document.querySelector(".con")) myChart.group = 'String' // group id. The value is a string used for linkage
setOption
chart.setOption(option, notMerge, lazyUpdate);
- option: Chart configuration
- notMerge: the value is Boolean, and the default is false. That is, the new option is merged with the old option. If set to true, the new option will completely replace the old option
- lazyUpdate: Select. Whether to not update the chart immediately after setting the option. The default value is false, that is, synchronize and update immediately. If true, the chart will be updated in the next animation frame
myChart.setOption(option)
The setOption parameter has a lot to write. But the commonly used one is actually option
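For example, a small sketch of the difference (newOption here is a placeholder for any complete option object):

// Default: the new option is merged into the option already set
myChart.setOption({ series: [{ type: 'line', data: [1, 2, 3] }] });

// notMerge = true: the new option completely replaces the old one
myChart.setOption(newOption, true);

// lazyUpdate = true: the chart is updated in the next animation frame
myChart.setOption(newOption, false, true);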
resize
Recorded in the container and container size section.
clear
Clear the current chart, but the instance still exists after clearing. You can also reset the option.
Chart instances also have a dispose method, which directly destroys the instance without parameters.
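A small sketch of the difference described above:

myChart.clear();            // the drawing is cleared, but the instance is still alive
myChart.setOption(option);  // so setting an option again redraws the chart

myChart.dispose();          // the instance itself is destroyed
// Calling setOption after dispose has no effect (echarts warns), so a new init would be needed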
action
Chart behaviors supported in ECharts are triggered through dispatchAction. Some events triggered manually can be simulated.
dispatchAction is a method on a chart instance. It is here because it mainly serves action.
Chart behaviors include clicking a part of the chart to highlight it, hovering over it, switching the chart type, zooming, and so on.
The following is an example. You can view the document for more examples. The method of use is similar.
// To highlight a series: dispatchAction({ // Specify type type: 'highlight', // Use index or id or name to specify the series seriesIndex:0 // Represents the first chart in the series // You can also use arrays to specify multiple series. // seriesId?: string | string[], // seriesName?: string | string[], // If you do not specify the index of the data item, you can also specify the data item by name through the name attribute dataIndex?: number | number[], // Optional, data item name, ignored when there is dataIndex name?: string | string[], });
event
event events are functions that execute when certain events are triggered. action is the corresponding style change of the chart after some chart behaviors occur.
event includes some DOM events, such as clicking, the mouse entering (mouseover), and other mouse events.
There are also some custom events of ecahrts, such as highlight event, event triggered when the data selection status changes, and so on.
These events are monitored through the on method of the instance, which works like an event listener. on is also a method of the chart instance.
myChart.on('Event type',Callback function) // Event type, click, highlight, selectchange, etc // The callback function will receive a parameter with some data of the event
The off method can cancel listening to events.
myChart.off('Event type',Callback function) // The event type is to cancel the event listening on the instance // The callback function does not have to be passed. If it is not passed, the event bound on this type will be cancelled.
|
https://programmer.help/blogs/619eae5ac6be3.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Installed Folsom services on a single node.
nova-compute/
I have turned off use_namespaces and Overlapping IPs to make the above configuration work like the traditional nova-network setup. I have configured metadata_host and metadata_port correctly in l3-agent.ini file.
In the console log, VM fails to access the metadata service.
wget: can't connect to remote host (169.254.169.254): No route to host
However, I am able to launch a VM. I am able to ping and ssh into the VM using private ip address. I am also able to ping/ssh from another VM using the private ip address.
Usually, in a nova-network setup, if there is an error in accessing the metadata service, it would result in ping/ssh FAILURE. With quantum, ping/ssh seem to work regardless.
Any clue to fix this issue is appreciated.
Thanks,
VJ
Question information
- Language: English
- Status: Answered
- For: neutron
- Assignee: No assignee
|
https://answers.launchpad.net/neutron/+question/218237
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Accessing Ozone from Spark
In CML, you can connect Spark to the Ozone object store with a script. The following example demonstrates how to do this.
This script, in Scala, counts the number of word occurrences in a text file. The key point in
this example is to use the following string to refer to the text file:
o3fs://hivetest.s3v.o3service1/spark/jedi_wisdom.txt
Word counting example in Scala
import sys.process._ // Put the input file into Ozone //"hdfs dfs -put data/jedi_wisdom.txt o3fs://hivetest.s3v.o3service1/spark" ! // Set the following spark setting in the file "spark-defaults.conf" on the CML session using terminal //spark.yarn.access.hadoopFileSystems=o3fs://hivetest.s3v.o3service1.neptunee01.olympus.cloudera.com:9862 //count lower bound val threshold = 2 // this file must already exist in hdfs, add a // local version by dropping into the terminal. val tokenized = sc.textFile("o3fs://hivetest.s3v.o3service1/spark/jedi_wisdom.txt").flatMap(_.split(" ")) // count the occurrence of each word val wordCounts = tokenized.map((_ , 1)).reduceByKey(_ + _) // filter out words with fewer than threshold occurrences val filtered = wordCounts.filter(_._2 >= threshold) System.out.println(filtered.collect().mkString(","))
|
https://docs.cloudera.com/machine-learning/1.3.2/import-data/topics/ml-accessing-ozone-from-spark.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
The ASP.NET MVC Pivot Table (Pivot Grid) can be connected to an OLAP cube, and its result can be visualized in both tabular and graphical formats.
Bind data to the ASP.NET MVC Pivot Table, and export the Pivot Table data to Excel, PDF, and CSV formats. You can also customize the exported document by adding header, footer, and cell properties like type, style, and position programmatically.
Ships with a set of four stunning, built-in themes: Material, Fabric, Bootstrap, and high contrast.
You can customize the appearance of the control as little or as much as you like, and the control can be displayed from right to left.
You can localize all the control’s strings in the user interface as needed and use the localization (l10n) library to do so.
For a great developer experience, flexible built-in APIs are available to define and customize the ASP.NET MVC Pivot Table control. Developers can optimize the data bound to the control and customize the user interface completely using code with ease.
Create an ASP.NET MVC Pivot Table using a few simple lines of C# code, as demonstrated below. Also explore our ASP.NET MVC Pivot Table example that shows you how to render and configure the ASP.NET MVC Pivot Grid.
@Html.EJS().PivotView("PivotView").Width("100%").Height("350").DataSourceSettings(dataSource => dataSource.DataSource((IEnumerable<object>)ViewBag.DataSource) .Rows(rows => { rows.Name("Year").Add(); }) .Columns(columns => { columns.Name("Products").Add(); }) .Values(values => { values.Name("Sold").Caption("Units Sold").Add(); values.Name("Amount").Add(); }) ).Render()
public ActionResult Index() { var data = GetPivotData(); ViewBag.DataSource = data; return View(); } public List<PivotData> GetPivotData() { List<PivotData> pivotData = new List<PivotData>(); pivotData.Add(new PivotData { Sold = 31, Amount = 52824, Country = "France", Products = "Mountain Bikes", Year = "FY 2016", Quarter = "Q1" }); pivotData.Add(new PivotData { Sold = 51, Amount = 86904, Country = "France", Products = "Mountain Bikes", Year = "FY 2015", Quarter = "Q2" }); pivotData.Add(new PivotData { Sold = 51, Amount = 92824, Country = "Germany", Products = "Mountain Bikes", Year = "FY 2016", Quarter = "Q1" }); pivotData.Add(new PivotData { Sold = 61, Amount = 76904, Country = "Germany", Products = "Mountain Bikes", Year = "FY 2015", Quarter = "Q2" }); pivotData.Add(new PivotData { Sold = 91, Amount = 67824, Country = "United States", Products = "Mountain Bikes", Year = "FY 2015", Quarter = "Q1" }); pivotData.Add(new PivotData { Sold = 81, Amount = 99904, Country = "United States", Products = "Mountain Bikes", Year = "FY 2015", Quarter = "Q2" }); return pivotData; } public class PivotData { public int Sold { get; set; } public double Amount { get; set; } public string Country { get; set; } public string Products { get; set; } public string Year { get; set; } public string Quarter { get; set; } }
Pivot Table is also available in Blazor, React, Angular, JavaScript and Vue frameworks. Check out the different Pivot Table platforms from the links below,
We do not sell the ASP.NET MVC Pivot Table separately. It is only available for purchase as part of the Syncfusion ASP.NET MVC suite, which contains over 70 ASP.NET MVC components, including the Pivot Table. Components are not sold individually, only as a single package. However, we have competitively priced the product so it only costs a little bit more than what some other vendors charge for their Pivot Table.
|
https://www.syncfusion.com/aspnet-mvc-ui-controls/pivot-table
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
We host a few applications that are used by other divisions of our business. Each division is a separate Active Directory forest/domain, thus different DNS namespaces. We are all connected via a private MAN. Each division uses a different method for sending DNS queries to our authoritative DNS servers. One uses a server-level forwarder, one uses a Conditional Forwarder, one uses a Stub Zone, and one uses a Secondary Zone. I want to have all of them use a Conditional Forwarder.
It should be pretty straightforward for them to change from their server-level forwarder/Stub Zone/Secondary Zone to a Conditional Forwarder, correct? They would just delete the current forwarder/zone and then create a Conditional Forwarder?
I'm just looking to the SW Community for feedback on anything I should be aware of before proposing the change. (Edited Apr 29, 2015 at 16:33 UTC)
11 Replies
Did you get rid of your stub zones?
I have the same issue, we have a Child/Parent AD Domain structure that I have inherited. The Child Domain DCs host a Secondary Zone transferred from the Root Domain. (I'm not familiar with why this was done)
I would like to replace all these secondary Zones (hosted on each Child DC) with a simple AD-integrated Conditional Forwarder.
Can I just configure a Forwarder then remove the stub zones, or is it more complicated than that?
Mine isn't between child/parent domains, rather separate non-trusted domains so it's a little different. You mention Secondary Zones and then Stub Zones but which is it? If they're Stubs then I think the actual purpose of them is to resolve across parent/child domains. If they're Secondary then it may not be ideal but if there aren't any issues then I'd probably leave it alone.
|
https://community.spiceworks.com/topic/924070-changing-dns-forwarder-forward-lookup-method
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Problem Statement
In the “Minimum Characters to be Added at Front to Make String Palindrome” problem, we are given a string “s”. Write a program to find the minimum number of characters to be added at the front to make the string a palindrome.
Input Format
The first and only line contains a string “s”.
Output Format
The first and only line contains an integer value n, where n denotes the minimum number of characters to be added at the front to make the string a palindrome.
Constraints
- 1<=|s|<=10^6
- s[i] must be a lower case English alphabet
Example
edef
1
Explanation: If we add “f” at the front of string “s”, the string becomes “fedef”, which satisfies the palindrome condition. So only 1 character needs to be added at the front.
Algorithm
1. Concatenate the string with a special symbol and reverse of the given string ie, c = s + $ + rs
2. Build an LPS array in which each index represents the longest proper prefix which is also a suffix.
3. The last value in the LPS array is the length of the longest suffix of the concatenated string that matches a prefix of the original string, so that many characters at the front already satisfy the palindrome property.
4. Now, find the difference between the length of the string and the last value in the LPS array, which is the minimum number of characters needed to make a string palindrome.
Working of above example
s = “edef”
Example for proper prefixes and suffix(Knowledge required to form LPS)
Proper prefixes of “abc” are ” “, “a”, “ab” and Suffixes of “abc” are “c”, “bc”, “abc”.
s (after concatenation)= “edef$fede”
LPS = {0, 0, 1, 0, 0, 0, 1, 2, 3} // Each entry is the length of the longest proper prefix that is also a suffix ending at that index.
The last value of LPS = 3, so there are 3 characters that satisfy palindrome property.
Now, find the difference between the length of a given string(ie, 4) and the Last value(3)
Therefore, we need 1 character to make it a palindrome.
Implementation
C++ program for Minimum Characters to be Added at Front to Make String Palindrome
#include <bits/stdc++.h> using namespace std; vector<int> computeLPSArray(string s) { int n = s.length(); vector<int> LPS(n); int len = 0; LPS[0] = 0; int i = 1; while (i < n) { if (s[i] == s[len]) { len++; LPS[i] = len; i++; } else { if (len != 0) { len = LPS[len-1]; } else { LPS[i] = 0; i++; } } } return LPS; } int solve(string s) { string rs = s; reverse(rs.begin(), rs.end()); string c = s + "$" + rs; vector<int> LPS = computeLPSArray(c); return (s.length() - LPS.back()); } int main() { string s; cin>>s; cout<<solve(s)<<endl; return 0; }
Java program for Minimum Characters to be Added at Front to Make String Palindrome
import static java.lang.Math.abs; import java.util.Scanner; class sum { public static int[] computeLPSArray(String str) { int n = str.length(); int lps[] = new int[n]; int i = 1, len = 0; lps[0] = 0; while (i < n) { if (str.charAt(i) == str.charAt(len)) { len++; lps[i] = len; i++; } else { if (len != 0) { len = lps[len - 1]; } else { lps[i] = 0; i++; } } } return lps; } static int solve(String str) { StringBuilder s = new StringBuilder(); s.append(str); String rev = s.reverse().toString(); s.reverse().append("$").append(rev); int lps[] = computeLPSArray(s.toString()); return str.length() - lps[s.length() - 1]; } public static void main(String[] args) { Scanner sr = new Scanner(System.in); String s = sr.next(); System.out.println(solve(s)); } }
qsjhbqd
6
Complexity Analysis
Time Complexity
O(n) where n is the size of the given string “s”. Here we find the “lps array of KMP algorithm” which takes linear time to compute.
Space Complexity
O(n) because we create an LPS array to compute our answer. And here the maximum size of the LPS array is n.
|
https://www.tutorialcup.com/interview/string/minimum-characters-to-be-added-at-front-to-make-string-palindrome.htm
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Executing JOINs
Implicit Joining
Mapped classes of related TaxJar objects are joined implicitly if there is a singular foreign key relationship. After importing the necessary objects, a relationship is established between your two mapped classes, as in the example below:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, String, Integer, DateTime, ForeignKey
from sqlalchemy.orm import sessionmaker, relationship

Base = declarative_base()

class Contact(Base):
    __tablename__ = "Contact"
    Id = Column(Integer, primary_key=True)
    Name = Column(String)
    Email = Column(String)
    BirthDate = Column(DateTime)
    AccountId = Column(String, ForeignKey("Account.Id"))
    Account_Link = relationship("Account", back_populates="Contact_Link")

class Account(Base):
    __tablename__ = "Account"
    Id = Column(String, primary_key=True)
    Name = Column(String)
    BillingCity = Column(String)
    NumberOfEmployees = Column(Integer)
    Contact_Link = relationship("Contact", order_by=Contact.Id, back_populates="Account_Link")
Once the relationship is established, the tables are queried simultaneously using the session's "query()" method, as below:
rs = session.query(Account, Contact).filter(Account.Id == Contact.AccountId)
for Ac, Ct in rs:
    print("AccountId: ", Ac.Id)
    print("AccountName: ", Ac.Name)
    print("ContactId: ", Ct.Id)
    print("ContactName: ", Ct.Name)
Other Join Forms
There may come situations where mapped classes have either no foreign keys or multiple foreign keys. In such situations, different forms of the JOIN query may be needed to accommodate them. Using the earlier classes as examples, the following JOIN queries are possible as well:
- Explicit condition (necessary if there are no foreign keys in your mapped classes):
rs = session.query(Account, Contact).join(Contact, Account.Id == Contact.AccountId)
for Ac, Ct in rs:
- Left-to-Right Relationship:
rs = session.query(Account, Contact).join(Account.Contact_Link)
for Ac, Ct in rs:
- Left-to-Right Relationship with explicit target:
rs = session.query(Account, Contact).join(Contact, Account.Contact_Link)
for Ac, Ct in rs:
- String form of a Left-to-Right Relationship:
rs = session.query(Account, Contact).join("Contact_Link")
for Ac, Ct in rs:
|
https://cdn.cdata.com/help/JTG/py/pg_usageORMjoinspy.htm
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
I’m pleased to announce that "Dash pages/" preview is now available in Dash Labs. This new feature simplifies creating multi-page Dash apps. See the initial announcement and forum discussion. We moved this to Dash Labs to make it easier for you to try it out (pip install dash-labs==1.0.0) and give feedback.
Background
Dash Labs is used to develop new features for future releases of Dash. You can see the community discussion on the progress prior to Dash 2.0 in the initial announcement and in subsequent releases.
Here are some features that started in Dash Labs and were added to Dash >= 2.0.
- New in Dash 2.0: Flexible Callback Signatures
- New in Dash 2.0: All-in-One Components
- New in Dash 2.0: Long Callbacks
- Coming in Dash 2.1: low-code shorthands for Dash Core Components and the dash DataTable.
- Coming in Dash 2.1, The Input, State, and Output accepts components as an alternative to ID strings. Dash will auto-generate the component’s ID under-the-hood if not supplied.
- Available in the dash-bootstrap-templates library: Bootstrap- themed figures.
The documentation for these initial projects in dash-labs is still available in the current version of dash-labs, but the code is not. The code for the older versions is available in dash-labs v0.4.0 (pip install dash-labs==0.4.0).
Dash Labs V1.0.0
Dash Labs is now set up to develop new features starting with dash>=2.0 and dash-bootstrap-components>=1.0. The first new project we’ve added is the Dash pages/ feature. We’ll be adding more new projects in the coming months.
We received a lot of great feedback from the community in our previous announcement (thank you!) and opened up issues for those requests.
Give it a try! See links to the new documentation below. We encourage you take the new features for a spin and to join the discussion, raise issues, make pull requests, and take an active role in shaping the future of Dash.
Dash pages/ Documentation
Quickstart:
pip install dash-labs==1.0.0
app.py
import dash
import dash_labs as dl
import dash_bootstrap_components as dbc

app = dash.Dash(
    __name__, plugins=[dl.plugins.pages], external_stylesheets=[dbc.themes.BOOTSTRAP]
)

navbar = dbc.NavbarSimple([
    dbc.NavItem(dbc.NavLink(page['name'], href=page['path']))
    for page in dash.page_registry.values()
    if page["module"] != "pages.not_found_404"
], brand='Dash App')

app.layout = dbc.Container(
    [navbar, dl.plugins.page_container],
)

if __name__ == "__main__":
    app.run_server(debug=True)
pages/home.py
import dash
from dash import dcc, html, Input, Output, callback

dash.register_page(__name__, path="/")

layout = html.Div([
    html.H1('Home Page')
])
pages/historical_analysis.py
import dash
from dash import dcc, html, Input, Output, callback

dash.register_page(__name__, path="/historical-analysis")

layout = html.Div([
    html.H1('Historical Analysis Page')
])
In-Depth Guides
- 08-MultiPageDashApp.md
- 09-MultiPageDashAppExamples.md
- 📣 Introducing Dash `/pages` - A Dash 2.1 Feature Preview
We welcome all pull requests to help improve the dash-labs documentation, even as minor as fixing a typo or writing a better sentence. If you would like to contribute to Dash, but are not sure how, this is a great place to start.
Our goal is to create high quality documentation that can be added directly to the official Dash documentation when new features are added to Dash. You can help – even if you’re new to Dash. Try following the instructions to make a multi-page app. Did we miss any steps? Is anything unclear? If so, let us know either here or in the GitHub issue tracker.
|
https://community.plotly.com/t/dash-labs-1-0-0-dash-pages-an-easier-way-to-make-multi-page-apps/58249
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Just discovered this and thought I would share
Every site (web) within a site collection has a property called Author which is set to the account that created the site (whether it be via the SharePoint UI, or the OM). To the best of my knowledge, this is not exposed in the UI. So for example, if I’m logged in as “John Doe” to a site, and create a subsite, the Author property of the new subsite will be set to “John Doe”.
The problem, is that if “John Doe” is ever removed from the site collection, accessing this property will result in a User not found exception such as the following:
Unhandled Exception: Microsoft.SharePoint.SPException: User cannot be found. at Microsoft.SharePoint.SPUserCollection.GetByID(Int32 id) at Microsoft.SharePoint.SPWeb.get_Author()
This doesn’t present itself as a problem when just using the SharePoint UI, but if you have custom code that inspects this property, be sure to add some additional exception handling.
To see an example of this problem, do the following:
- Create a new site collection
- Add a test account with Full Control to the root site (Web) of this site collection
- Login to that site as the test account, and create a subsite (Subweb in OM talk)
- Run the following sample code that dumps out the fields of the Author property
- As a site collection admin, or another account with rights to manage permissions, remove the test account from the site collection via “People and Groups: All People”
- Run the following sample code again, and you’ll see it throw the exception when you try to access any of the fields of the property.
Sample Code
using System;
using System.Text;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

namespace WhereforeArtThou
{
    class Program
    {
        static void Main(string[] args)
        {
            // Open the site collection
            Console.WriteLine("Opening site");
            SPSite site = new SPSite(args[0]);

            Console.WriteLine("Access properties from the web");
            // Open the web via the URL passed in
            SPWeb web = site.OpenWeb();
            Console.WriteLine("web title: " + web.Title);
            Console.WriteLine("Author ID: " + web.Author.ID.ToString());
            Console.WriteLine("Author Name: " + web.Author.Name);
        }
    }
}
Is it a bug? No, not really IMHO, just something to watch out for. Besides, if the user is removed from the site collection, who is the author supposed to be replaced with? Regardless of who it was replaced with, the property will no longer hold a valid value of the original author of the site.
Hope this helps!
– Keith
5 Replies to “Wherefore Art Thou Author?”
I wish Microsoft Reporting services authors would read this. The issue becomes pretty common when a document library, or site that contains reports for SSRS in Sharepoint integrated mode is accessed.
I have not found a MS support solution. I have needed to modify the database directly, I am glad this is only in test (wink) since Microsoft does not support directly modifying data in the database.
I just thought I would pass the information on.
Eric VanRoy
Ouch Eric, don’t do that. Modifying the databases is so unsupported. I’ll put you something together that will update the author in a supported fashion. Stand by for a followup to this Bat Post on this same Bat Blog.
I really like what you guys are up too. This kind of clever work and coverage!
Keep up the very good works guys I’ve included you guys to our blogroll.
|
https://blog.krichie.com/2008/07/24/wherefore-art-thou-author/?shared=email&msg=fail
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
The GAVO STC Library¶
A library to process VO STC specifications¶
This library aims to ease processing specifications of space-time coordindates (STC) according to the IVOA STC data model with the XML and string serializations. Note that it is at this point an early beta at best. To change this, I welcome feedback, even if it’s just “I’d need X and Y”. Honestly.
More specifically, the library is intended to help in:
- supporting ADQL region specifications and conforming them
- generating registry coverage specifications from simple STC-S
- generating utypes for VOTable embedding of STC information and parsing from them
The implementation should conform to STC-S 1.33; what STC-X is supported conforms to STC-X 1.00 (but see Limitations).
Installation¶
If you are running a Debian-derived distribution, see Adding the GAVO repository. When you follow that recipe,
aptitude install python-gavostc
is enough.
Otherwise, you will have to install the source distribution. Unpack the .tar.gz and run:
python setup.py install
You will normally need to do this as root for a system-wide installation. There are, however, alternatives, first and foremost a virtual python that will keep your managed directories clean.
This library’s setup is based on setuptools. Thus, it will generally obtain all necessary dependencies from the net. For this to be successful, you will have to have net access.
If all this bothers you, contact the authors.
Usage¶
Command Line¶
For experiments, we provide a simple command line tool. Try:
gavostc help
to see what operations it exposes. Here are some examples:
$ gavostc help
Usage: gavostc [options] <command> {<command-args}

Use command 'help' to see commands available.

Options:
  -h, --help            show this help message and exit
  -e, --dump-exception  Dump exceptions.

Commands include:
conform <srcSTCS>. <dstSTCS> -- prints srcSTCS in the system of dstSTCS.
help -- outputs help to stdout.
parseUtypes --- reads the output of utypes and prints quoted STC for it.
parseX <srcFile> -- read STC-X from srcFile and output it as STC-S, - for stdin
resprof <srcSTCS> -- make a resource profile for srcSTCS.
utypes <QSTCS> -- prints the utypes for the quoted STC string <QSTCS>.

$ gavostc resprof "Polygon ICRS 20 20 21 19 18 17" | xmlstarlet fo
<?xml version="1.0"?>
<STCResourceProfile xmlns="" xmlns:
  <AstroCoordSystem id="thgloml">
    <SpaceFrame id="thdbgwl">
      <ICRS/>
      <UNKNOWNRefPos/>
      <SPHERICAL coord_naxes="2"/>
    </SpaceFrame>
  </AstroCoordSystem>
  <AstroCoordArea coord_system_id="thgloml">
    <Polygon frame_id="thdbgwl" unit="deg">
      <Vertex>
        <Position>
          <C1>20.0</C1>
          <C2>20.0</C2>
        </Position>
      </Vertex>
      <Vertex>
        <Position>
          <C1>21.0</C1>
          <C2>19.0</C2>
        </Position>
      </Vertex>
      <Vertex>
        <Position>
          <C1>18.0</C1>
          <C2>17.0</C2>
        </Position>
      </Vertex>
    </Polygon>
  </AstroCoordArea>
</STCResourceProfile>

$ gavostc resprof "Circle FK5 -10 340 3" | gavostc parseX -
Circle FK5 -10.0 340.0 3.0

$ gavostc conform "Position GALACTIC 3 4 VelocityInterval Velocity 0.01 -0.002 unit deg/cy" "Position FK5"
Position FK5 264.371974024 -24.2795040403 VelocityInterval Velocity 0.00768930497899 0.00737459624525 unit deg/cy

$ gavostc utypes 'Redshift TOPOCENTER VELOCITY "z" Error "e_z" PixSize "p_z"'
AstroCoordSystem.RedshiftFrame.value_type = VELOCITY
AstroCoordSystem.RedshiftFrame.DopplerDefinition = OPTICAL
AstroCoordSystem.RedshiftFrame.ReferencePosition = TOPOCENTER
AstroCoords.Redshift.Error -> e_z
AstroCoords.Redshift.Value -> z
AstroCoords.Redshift.PixSize -> p_z

$ gavostc utypes 'Redshift TOPOCENTER VELOCITY "z" Error "e_z" PixSize "p_z"'\
  | gavostc parseUtypes
Redshift TOPOCENTER VELOCITY "z" Error "e_z" PixSize "p_z"
Limitations¶
- Internally, all dates and times are represented as datetimes, and all information whether they were JDs or MJDs before is discarded. Thus, you cannot generate STC with M?JDTime.
- All stc:DataModel utypes are ignored. On output and request, only stc:DataModel.URI is generated, fixed to uri stc.STCNamespace.
- “Library” coordinate systems for ECLIPTIC coordinates are not supported since it is unclear to me how the equinox of those is expressed.
- On system transformations, ellipses are not rotated, just moved. No “wiggles” (errors, etc) are touched at all.
- There currently is not real API for “bulk” transforms, i.e., computing a transformation once and then apply it to many coordinates. The code is organized to make it easy to add such a thing, though.
- Serialization of floats and friends is with a fixed format that may lose precision for very accurate values. The solution will probably be a floatFormat attribute on the frame/metadata object, but I’m open to other suggestions.
- Reference positions are not supported in any meaningful way. In particular, when transforming STCs, transformations between all reference positions are identities. This won’t hurt much for galactic or extragalactic objects but of course makes the whole thing useless for solar-system work. If someone points me to a concise collection of pertinent formulae, adding real reference positions transformations should not be hard.
- The behaviour of some transforms (in particular FK5<->FK4) close to the poles need some attention.
- Empty coordinate values (e.g., 2D data with just one coordinate) are not really supported. Processing them will, in general, work, but will, in general, not yield the expected result. This is fixable, but may require changes in the data model.
- No generic coordinates. Those can probably be added relatively easily, but it would definitely help if someone had a clear use case for them.
- Spectral errors and their “wiggles” (error, size, etc) must be in the same “flavor”, i.e., either frequency, wavelength, or energy. If they are not, the library will silently fail. This is easily fixable, but there’s too much special casing in the code as is, and I consider this a crazy corner case no one will encounter.
- No reference on posAngles, always assumed to be ‘X’.
- Spatial intervals are system-conformed analogously to geometries, so any distance information is disregarded. This will be fixed on request.
- No support for Area.
- Frame handling currently is a big mess; in particular, the system changing functions assume that the frames on positions, velocities and geometries are identical. I’ll probably move towards requiring astroCoords being in astroSystem.
Extensions to STC-S¶
- After ECLIPTIC, FK4, or FK5, an equinox specification is allowed. This is either J<number> or B<number>.
- For velocities, arbitrary combinations of spaceUnit/timeUnit are allowed.
- To allow the notation of STC Library coordinate systems, you can give a System keyword with an STC Library tag at the end of a phrase (e.g., System TT-ICRS-TOPO). This overwrites all previous system information (e.g., Time ET Position FK4 System TT-ICRS-TOPO will result in TT time scale and ICRS spatial frame). We admit it’s not nice, and are open to suggestions for better solutions.
Other Deviations from the Standard¶
- Units on geometries default to deg, deg when parsing from STC-X.
- The equinox to be used for ECLIPTIC isn’t quite clear from the specs. The library will use a specified equinox if given, else the time value if given, else the equinox will be None (which probably is not terribly useful).
Bugs¶
- Conversions between TT and TCB are performed using the rough approximation of the explanatory supplement rather than the more exact expression.
- TT should be extended to ET prior to 1973, but this is not done yet.
- STC-S parse errors are frequently not very helpful.
- Invalid STC-X documents may be accepted and yield nonsensical ASTs (this will probably not be fixed since it would require running a validating parser, which with XSD is not funny, but I’m open to suggestions).
API¶
The public API to the STC library is obtained by:
from gavo import stc
This is assumed for all examples below.
The Data Model¶
The STC library turns all input into a tree called AST (“Abstract Syntax Tree”, since it abstracts away the details for parsing from whatever serialisation you employ).
The ASTs are following the STC data model quite closely. However, it turned out that – even with the changes already in place – this is quite inconvenient to work with, so we will probably change it after we’ve gathered some experience. It is quite likely that we will enforce a much stricter separation between data and metadata, i.e., unit, error and such will go from the positions to what is now the frame object.
Thus, we don’t document the data model fully yet. The gory details are in dm.py. Meanwhile, we will try to maintain the following properties:
- All objects in ASTs are considered immutable, i.e., nobody is supposed to change them once they are constructed.
- An AST object has attributes time, place, freq, redshift, velocity referring to objects describing quantities, or None if not given. These are called “positions” in the following.
- An AST object has attributes timeAs, areas, freqAs, redshiftAs, velocityAs containing sequences of intervals or geometries of the respective quantities. These sequences are empty if nothing is specified. They are called areas in the following.
- Both positions and areas have a frame attribute giving the frame (for spatial coordinates, these have flavor, nDim, refFrame, equinox, and refPos attributes, quite like in STC).
- Positions have a values attribute containing either a python float or a tuple of floats (for spatial and velocity coordinates). For time coordinates, a datetime.datetime object is used instead of a float
- Positions have a unit attribute. We will keep this even if all other metadata move to the frame object. The unit attribute follows the coordinate values, i.e., they are tuple-valued when the values are tuples. For velocities and redshifts, there is a velTimeUnit as well.
- ASTs have a cooSystem attribute with, in turn, spaceFrame, timeFrame, spectralFrame, and redshiftFrame attributes.
- NULL is consistently represented as None, except when the values would be sequences, in which case NULL is an empty tuple.
Parsing STC-X¶
To parse an STC-X document, use
stc.parseSTCX(literal) -> AST.
Thus, you pass in a string containing STC-X and receive a AST structure.
Since STC documents should in general be rather small, there should be no necessity for a streaming API. If you want to read directly from a file, you could use something like:
def parseFromFile(fName):
    f = open(fName)
    stcxLiteral = f.read()
    f.close()
    return stc.parseSTCX(stcxLiteral)
The return value is a sequence of pairs of
(tagName, ast), where
tagName is the namespace qualified name of the root element of the STC
element. The tagName is present since multiple STC trees may be present
in one STC-X document. The qualification is in standard W3C form, i.e.,
{<namespace URI>}<element name>. If you do not care about
versioning (and you should not need to with this library), you could
find a specific element using a construct like:
def getSTCElement(literal, elementName):
    for rootName, ast in stc.parseSTCX(literal):
        if rootName.endswith('}'+elementName):
            return ast

getSTCElement(open("M81.xml").read(), "ObservationLocation")
Note that the STC library does not contain a validating parser. Invalid STC-X documents will at best give you rather incomprehensible error messages, at worst an AST that has little to do with what was in the document. If you are not sure whether the STC-X you receive is valid, run a schema validator before parsing.
We currently understand a subset of STC-X that matches the expressiveness of STC-S. Most STC-X features that cannot be mapped in STC-X are silently ignored.
Generating STC-X¶
To generate STC-X, use the
stc.getSTCX(ast, rootElmement) -> str
function. Since there are quite a few root elements possible, you have
to explicitely pass one. You can find root elements in
stc.STC. It
is probably a good idea to only use
ObservatoryLocation,
ObservationLocation, and
STCResourceProfile right now. Ask the
authors if you need something else.
There is the shortcut
stc.getSTCXProfile(ast) -> str that is
equivalent to
stc.getSTCX(ast, stc.STC.STCResourceProfile).
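For instance, a minimal round trip from STC-S to a resource profile could look like the following sketch (the position values are just made-up sample numbers):

from gavo import stc

ast = stc.parseSTCS("Position ICRS 12.3 45.6")
# the shortcut below is equivalent to stc.getSTCX(ast, stc.STC.STCResourceProfile)
xmlLiteral = stc.getSTCXProfile(ast)
print(xmlLiteral)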
Parsing STC-S¶
To parse an STC-S string into an AST, use
stc.parseSTCS(str) -> ast.
The most common exception this may raise is stc.STCSParseError, though
others are conceivable.
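A guarded call could thus look like the sketch below; only the exception type mentioned above is used, and what you do in the error case of course depends on your application:

from gavo import stc

try:
    ast = stc.parseSTCS("Position ICRS 10 20")
except stc.STCSParseError:
    # react to malformed STC-S input here
    ast = None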
Generating STC-S¶
To turn an AST into STC-S, use
stc.getSTCS(ast) -> str. If you pass
in ASTs that use features not supported by STC-S, you should get an
STCNotImplementedError or an STCValueError.
Generating Utypes¶
For embedding STC into VOTables, utypes are used. To turn an AST object
into utypes, use
stc.getUtypes(ast) -> dict, dict. The function
returns a pair of dictionaries:
- the first dictionary, the “system dict”, maps utypes to values. All utypes belong to AstroCoordSystem and into this group.
- the second dictionary, the “columns dict”, maps values to utypes.
Of course, the columns dict doesn’t make much sense with ASTs
actually containing values. To sensibly use it it a way useful for
VOTables, you can define your columns’ STC using “quoted STC-S”. In
this format, you have identifiers in double quotes instead of normal
STC-S values. Despite the double quotes, only python-compatible
identifiers are allowed, i.e., these are not quoted identifiers in the
SQL sense. The
stc.parseQSTCS(str) -> ast function parses such
strings.
Consider:
In [5]:from gavo import stc
In [6]:stc.getUtypes(stc.parseQSTCS(
   ...:'Position ICRS "ra" "dec" Error "e_p" "e_p"'))
Out[6]:
({'AstroCoordSystem.SpaceFrame.CoordFlavor': 'SPHERICAL',
  'AstroCoordSystem.SpaceFrame.CoordRefFrame': 'ICRS',
  'AstroCoordSystem.SpaceFrame.ReferencePosition': 'UNKNOWNRefPos'},
 {'dec': 'AstroCoords.Position2D.Value2.C2',
  'e_p': 'AstroCoords.Position2D.Error2Radius',
  'ra': 'AstroCoords.Position2D.Value2.C1'})
Note that there is no silly “namespace prefix” here. Nobody really knows what those prefixes really mean with utypes. When sticking these things into VOTables, you will currently need to stick an “stc:” in front of those.
Parsing Utypes¶
When parsing a VOTable, you can gather the utypes encountered to
dictionaries as returned by
getUtypes. You can then pass these to
parseFromUtypes(sysDict, colDict) -> ast. The function does not
expect any namespace prefixes on the utypes.
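As a rough sketch (assuming parseFromUtypes is exposed on the stc module just like the other public functions), a round trip through the utype dictionaries could look like this:

from gavo import stc

sysDict, colDict = stc.getUtypes(stc.parseQSTCS('Position ICRS "ra" "dec"'))
# feed the dictionaries back to obtain an AST again
ast = stc.parseFromUtypes(sysDict, colDict)
print(stc.getSTCS(ast))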
Conforming¶
You can force two ASTs to be expressed in the same frames, which we call “conforming”. As mentioned above, currently only reference frames and equinoxes are conformed right now, i.e., the conversion from Galactic to FK5 1980.0 coordinates should work correctly. Reference positions are ignored, i.e. conforming ICRS TOPOCENTER to ICRS BARYCENTER will not change values.
To convert coordinates in ast1 to the frame defined by ast2, use the
stc.conformTo(ast1, ast2) -> ast function. This could look like
this:
>>> p = stc.parseSTCS("Circle ICRS 12 12 1")
>>> stc.conformTo(p, stc.parseSTCS("Position GALACTIC"))
>>> stc.conformTo(p, stc.parseSTCS("Position GALACTIC")).areas[0].center
(121.59990883115164, -50.862855782323962)
Conforming also works for units:
>>> stc.conformTo(p, stc.parseSTCS("Position GALACTIC unit rad")).areas[0].center
(2.1223187792285256, -0.8877243003685894)
Transformation¶
For simple transformations, you can ask DaCHS to give you a function just turning simple positions into positions. For instance,
from gavo import stc

toICRS = stc.getSimple2Converter(
    stc.parseSTCS("Position FK4 B1900.0"),
    stc.parseSTCS("Position ICRS"))
print(toICRS(30, 40))
shows how to turn positions given in the B1900 equinox (don’t sweat the reference system for data that old) to ICRS.
Equivalence¶
For some applications it is necessary to decide if two STC specifications are equivalent. Python’s built-in equivalence operator requires all values in two ASTs to be identical except of the values of id attributes.
Frequently, you want to be more lenient:
- you might decide that unspecified values match anything
- you may ignore certain keys entirely (e.g., the reference position when you’re doing extragalactic work or when a parallax error doesn’t matter)
- you may want to view certain combinatinons as equivalent (e.g., ICRS and J2000 are quite close)
To support this, the STC library lets you define
EquivalencePolicy
objects. There is a default equivalence policy ignoring the reference
position, defining ICRS and FK5 J2000 as equivalent, and matching Nones
to anything. This default policy is available as
stc.defaultPolicy.
It has a single method,
match(sys1, sys2) -> boolean with the
obvious semantics. Note, however, that you pass in systems, i.e.,
ast.cooSystem rather than ASTs themselves.
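As a small sketch (using the cooSystem attribute described above; check your version of the library for the exact attribute name):

from gavo import stc

ast1 = stc.parseSTCS("Position ICRS 10 20")
ast2 = stc.parseSTCS("Position FK5 J2000 10 20")
# True with the default policy, which declares ICRS and FK5 J2000 equivalent
print(stc.defaultPolicy.match(ast1.cooSystem, ast2.cooSystem))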
You can define your own equivalence policies. Tell us if you want that
and we’ll document it. In the mean time, check
stc/eq.py.
Hacking¶
For those considering to contribute code, here is a short map of the source code:
- cli – the command line interface
- common – exceptions, some constants, definition of the AST node base class
- conform – high-level code for transformations between reference systems, units, etc.
- spherc.py, sphermath.py – low-level transformations for spherical coordinate systems used by conform
- times – helpers for converting time formats, plus transformations between time scales used by conform.
- dm – the core data model, i.e. definitions of the classes of the objects making up the ASTs
- stcsast.py, stcxast.py – tree transformers from STC-S and STC-X concrete syntax trees to ASTs.
- scsgen.py, stcxast.py – serializers from ASTs to STC-S and STC-X
- utypegen.py, utypeast.py – code generating and parsing utype dictionaries. These are thin wrappers around the STC-X code.
- stcs.py, stcsdefaults.py – a grammar for STC-S and a definition of the defaults used during parsing and generation of STC-S.
- units.py – units defined by STC, and transformations between them
Since the STC serializations and the sheer size of STC are not really amenable to a straightforward implementation, the stc*[gen|ast] code is not exactly easy to read. There’s quite a bit of half-assed metaprogramming going on, and thus these probably are not modules you’d want to touch if you don’t want to invest substantial amounts of time.
The conform, spherc, sphermath, units and time combo though shouldn’t be too opaque. Start in conform.py contains “master” code for the transformations (which may need some reorganization when we transform spectral and redshift coordinates as well).
Then, things get fanned out; in the probably most interesting case of
spherical coordinates, this leads to spherc.py. That module defines lots
of transformations and
getTrafoFunction. All the spherical
coordinate stuff uses an internal representation of STC, six vectors and
frame triples; see conform.conformSystems on how to obtain these.
To introduce a new transformation, write a function or a matrix implementing it and enter it into the list in the construction of _findTransformsPath.
Either way: If you’re planning to hack on the library, please let us know at gavo@ari.uni-heidelberg.de. We’ll be delighted to help out with further hints.
Extending STC-S¶
Here’s an example for an extension to STC-S: Let’s handle the planetary ephemeris element.
Checking the schema, you’ll see only two literals are allowed for the
ephemeris:
JPL-DE200 and
JPL-DE405. So, in
stcs._getSTCSGrammar, near the definition of refpos, add:
plEphemeris = Keyword("JPL-DE200") | Keyword("JPL-DE405")
The plan is to allow the optional specification of the ephemeris used after refpos. Now grep for the occurrences of refpos and notice that there are quite a number of them. So, rather than fixing all those rules, we change the refpos rule from:
refpos = (Regex(_reFromKeys(stcRefPositions)))("refpos")
to:
refpos = ((Regex(_reFromKeys(stcRefPositions)))("refpos") + Optional( plEphemeris("plEphemeris") ))
We can test this. In stcstest.STCSSpaceParsesTest, let’s add the sample:
("position", "Position ICRS TOPOCENTER JPL-DE200"),
Now, the refpos nodes are handled in the _makeRefpos function, looking like this:
def _makeRefpos(node):
    refposName = node.get("refpos")
    if refposName=="UNKNOWNRefPos":
        refposName = None
    return dm.RefPos(standardOrigin=refposName)
The node passed in here is a pyparsing node. Since in our data model, None is always null/ignored, we can just take the planetary ephemeris if it’s present, and the system will do the right thing if it’s not there:
def _makeRefpos(node):
    refposName = node.get("refpos")
    if refposName=="UNKNOWNRefPos":
        refposName = None
    return dm.RefPos(standardOrigin=refposName,
        planetaryEphemeris=node.get("plEphemeris"))
Let’s test this; testing STC-S to AST parsing takes place in stctest.py,
so let’s add a method to
CoordSysTest:
def testPlanetaryEphemeris(self):
    ast = stcsast.parseSTCS("Time TT TOPOCENTER JPL-DE200")
    self.assertEqual(ast.astroSystem.timeFrame.refPos.planetaryEphemeris,
        "JPL-DE200")
Thus, we can parse the ephemeris spec from STC-S. To generate it, two things need to be done: The DM item must be transformed into the CST the STC-S is built from, and the part of the CST must be flattened out. Both things happen in stcsgen.py. The CST is just nested dictionaries. Refpos handline happens in refPosToCST, so replace:
def refPosToCST(node):
    return {"refpos": node.standardOrigin}
with:
def refPosToCST(node):
    return {
        "refpos": node.standardOrigin,
        "planetaryEphemeris": node.planetaryEphemeris,}
To flatten that out to the finished string, the flatteners need to be told that you want that key noticed. Grepping for repos shows that it’s used in several places. So, let’s define a “common flattener”, which is a function taking a value and the CST node (i.e., a dictionary) the value was taken from and returns a string ready for inclusion into the STC-S. The flattener here would look like this:
def _flattenRefPos(val, node):
    return _joinWithNull([node["refpos"], node["planetaryEphemeris"]])
The _joinWithNull call makes sure that empty specifications do not show up in the result.
This “global” flattener is now entered into
_commonFlatteners, a
dictionary mapping specific CST keys to flatten functions:
_commonFlatteners = {
    ...
    "refpos": _flattenRefPos,
}
The most convenient way to test this is to define a round-trip test.
These again reside in stcstest. Use BaseGenerationTest and add a sample pair like this:
("Redshift BARYCENTER JPL-DE405 3.5", "Redshift BARYCENTER JPL-DE405 3.5")
With this, you should be done.
|
https://dachs-doc.readthedocs.io/stc.html
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
.NET Core is a general purpose development platform maintained by Microsoft and the .NET community on GitHub. It is cross-platform, supporting Windows, macOS and Linux, and can be used in device, cloud, and embedded/IoT scenarios.
When you think of .NET Core the following should come to mind (flexible deployment, cross-platform, command-line tools, open source).
Another great thing is that even if it's open source Microsoft is actively supporting it.
By itself, .NET Core includes a single application model -- console apps -- which is useful for tools, local services and text-based games. Additional application models have been built on top of .NET Core to extend its functionality, such as:
Also, .NET Core implements the .NET Standard Library, and therefore supports .NET Standard Libraries.
public class Program
{
    public static void Main(string[] args)
    {
        Console.WriteLine("\nWhat is your name? ");
        var name = Console.ReadLine();
        var date = DateTime.Now;
        Console.WriteLine("\nHello, {0}, on {1:d} at {1:t}", name, date);
        Console.Write("\nPress any key to exit...");
        Console.ReadKey(true);
    }
}
|
https://sodocumentation.net/dot-net/topic/9059/-net-core
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
I wrote some articles about AWS Cloud Developer Kit earlier this year. I was attracted to CDK immediately upon hearing of it due to the ability to write infrastructure as code in TypeScript. I really like writing code in TypeScript and CDK seemed almost too good to be true.
Table of Contents
- A Missed Opportunity?
- Lamba in CDK
- aws-lambda-nodejs
- Parcel
- Refactoring #1
- Refactoring #2
- Next Steps
A Missed Opportunity?
CDK is a new technology and that means that it doesn't necessarily cover every use case yet. What I found as I worked through official examples was that somebody had written CDK code in TypeScript but the accompanying Lambda code was written in JavaScript! This struck me as a missed opportunity. It turns out it wasn't a missed opportunity but one that just hadn't landed yet.
Lambda in CDK
To explain a bit better for those who aren't really in the transpilation game, TypeScript code is usually transpiled into JavaScript before being shipped into a runtime, be that runtime a web server, NodeJS or Lambda. That's because (leaving deno aside for now), there's no TypeScript execution environment. I say usually because there is actually a pretty cool project called ts-node that lets you execute TypeScript code in NodeJS without transpiling the code ahead of time. ts-node is a great tool to save developers a step in development flows. It's debatable whether you should use it in production or not (I don't). That said, it's totally appropriate to use ts-node with CDK. This lets you shorten the code=>build=>deploy cycle to code=>deploy. That's great!
But this doesn't work with Lambda Functions. CDK turns my TypeScript infrastructure constructs into CloudFormation. It doesn't do anything special with my Lambda code - or at least it didn't until the aws-lambda-nodejs module landed in CDK.
aws-lambda-nodejs
The aws-lambda-nodejs module is an extension of aws-lambda. Really the only thing it adds is an automatic transpilation step using Parcel. Whenever you run a cdk deploy or cdk synth, this module will bundle your Lambda functions and stick the result in special .cache and .build directories (which you will probably want to gitignore). Then the deploy process will stage the bundles in S3 and provide them to Lambda - all with no extra config required. It's quite impressive!
An interesting thing about this module is it actually does your Parcel build in Docker, which will let you build for a different runtime (NodeJS version) than you are running locally. You could even have multiple functions with different runtimes if you needed to for some reason. This does mean you need to have Docker installed to use the module which might give you grief if you're running CDK in some CD pipeline that doesn't have Docker available.
Parcel
I actually haven't used Parcel before. I remember it arriving on the scene a couple of years back, but I have already paid the "Webpack tax" (meaning I have spent enough time with Webpack that I can be productive without creating a complete mess) so I never got around to looking at Parcel. This is pretty cool. I my have to rethink my approach to SAM.
Refactoring #1
Okay, so let's update my projects to use aws-lambda-nodejs! I'll start with my Step Functions example:. This should be pretty simple since the functions are incredibly basic with no dependencies.
First I update all my dependencies. No reason to be working with old versions. In fact when I wrote this code back in December 2019, the latest version of CDK was 1.19.0 and aws-lambda-nodejs didn't exist yet. Today, May 29, 2020, the latest version of CDK is 1.41.0. You get a lot of advantages by staying current which is why my repo is set up with dependabot and github actions. Anyway, I'm current so now I can npm i @aws-cdk/aws-lambda-nodejs and then modify some code!
My old code using the aws-lambda function looked like this:
import { AssetCode, Function, Runtime } from '@aws-cdk/aws-lambda';

const lambdaPath = `${__dirname}/lambda`;

const assignCaseLambda = new Function(this, 'assignCaseFunction', {
  code: new AssetCode(lambdaPath),
  handler: 'assign-case.handler',
  runtime: Runtime.NODEJS_12_X,
});
The assumption here is that some other process (in my case just simple tsc) will drop the transpiled lambda code (assign-case.js) in the right place. Here's my refactor:
import { Runtime } from '@aws-cdk/aws-lambda';
import { NodejsFunction } from '@aws-cdk/aws-lambda-nodejs';

const lambdaPath = `${__dirname}/lambda`;

const assignCaseLambda = new NodejsFunction(this, 'assignCaseFunction', {
  entry: `${lambdaPath}/assign-case.ts`,
  handler: 'handler',
  runtime: Runtime.NODEJS_12_X,
});
I'm now using the entry key to specify the TypeScript file that has my function handler. I'm still specifying the runtime, but maybe I don't have to. I kind of like being really explicit about my runtime. Will maybe play around with having that derived later on.
That's basically it! Everything else in my PR is either a dependency update or removing the unneeded build system. I deployed this and it works just fine. I checked out the code in the Lambda console and it looks good too. Check out my PR diff for this refactor.
Refactoring #2
My other CDK example has dependencies in aws-cli (DynamoDB) and the faker library. Let's see how Parcel handles those. The code change required is trivial. Here's my PR diff.
Now let's see how Parcel handled bundling my function. It produced an index.js file weighing in at 11.6 MB. That seems kind of big, considering this is a sample project. Inspecting the file, it seems that Parcel brought in all of aws-sdk. It doesn't look like proper tree-shaking is happening here and there's no way to declare aws-sdk as an external module in Parcel 1. Well, that's a weakness for sure.
Fortunately there is a config option for minify. Let's try that and see if it helps.
const initDBLambda = new NodejsFunction(this, 'initDBFunction', {
  entry: `${lambdaPath}/init-db.ts`,
  handler: 'handler',
  memorySize: 3000,
  minify: true,
  runtime: Runtime.NODEJS_12_X,
  timeout: Duration.minutes(15),
});
The minified build is now 6.3 MB. That's a good reduction in size, but if we could remove the non-DynamoDB dependencies from aws-sdk, it would be a heck of a lot smaller. It doesn't look like Parcel 1 allows that unfortunately. Parcel 2 should add some more of these quality of life issues and will definitely be worth a look. I recommend watching this issue to see that unfold.
To be clear, if your million dollar app has a 6 MB Lambda in it, you are probably quite happy. This isn't a fatal flaw by any means, but it's certainly an area for improvement.
Next Steps
I should mention that at the time of this writing, this module is marked experimental, which means the API could change at some point in the near future. I'm sure the CDK team will want to switch to Parcel 2 when that's available and this module will improve. Whether or not to use this for production will depend on the workload. Given the rate CDK is moving, I would consider using this module for an active development situation where the tooling can evolve, but it's maybe not ideal for a case where we want to ship something and expect stability.
Cover: The Beagle Laid Ashore drawn by Conrad Martens (1834) and engraved by Thomas Landseer (1838)
Posted on May 12 by:
Matt Morgan
TypeScript, Lambda, Serverless, IoC, Cloud Native, make it faster!
|
https://dev.to/elthrasher/aws-cdk-aws-lambda-nodejs-module-9ic
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
Welcome to the quickstart guide for Optimizely's Full Stack SDK for C#. The C# SDK is distributed through NuGet.
For Windows, to install, run the command Install-Package Optimizely.SDK in the Package Manager Console:
Install-Package Optimizely.SDK
or with .net cli:
dotnet add package Optimizely.SDK
The package is on NuGet at. The full source code is at.
Next import OptimizelySDK package that installed earlier and create Optimizely instance using OptimizelyFactory.
using OptimizelySDK;

/// inside method body
// datafile will be downloaded async mode
// for sync mode, define ConfigManager explicitly, see the example below
var optimizelyInstance = OptimizelyFactory.NewDefaultInstance("<Your_SDK_Key>");
Or ConfigManager can be defined explicitly.
using OptimizelySDK;
using OptimizelySDK.Config;

/// inside method body
var configManager = new HttpProjectConfigManager
    .Builder()
    .WithSdkKey("<SDK_KEY>")
    .Build(false); // sync mode

var optimizelyInstance = OptimizelyFactory.NewDefaultInstance(configManager);

(this makes use of the dynamic serving capability of Optimizely). Once you learn which discount value works best to increase sales, roll out the discount feature to all traffic with the amount set to the optimum value.
/// After initializing Optimizely Instance
var userId = "user_123";
var featureEnabled = optimizely.
// Continued from above example
if (featureEnabled)
{
    var discountAmount = optimizelyInstance.GetFeatureVariableInteger("discount", "amount", userId);
    Console.WriteLine($"{userId} got a discount of $$ {discountAmount}");
}
else
{
    Console.WriteLine($"
|
https://docs.developers.optimizely.com/full-stack/docs/c-sharp
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
Welcome to the quickstart guide for Optimizely's Full Stack SDK for Java.
repositories {
    jcenter()
}

dependencies {
    compile 'com.optimizely.ab:core-api:3.3.0'
    compile 'com.optimizely.ab:core-httpclient-impl:3.3.
The following code example shows basic Java ADM (Automatic datafile management) usage:
import com.optimizely.ab.Optimizely;
import com.optimizely.ab.OptimizelyFactory;

public class App {
    public static void main(String[] args) {
        String sdkKey = "[SDK_KEY_HERE]";
        Optimizely optimizelyClient = OptimizelyFactory.newDefaultInstance(sdkKey);
    }
}

) {
    Double discountAmount = optimizelyClient.getFeatureVariableDouble("discount", "amount", userId);
    System.out.println(userId + "got a discount of " + discountAmount);
} else {
    System.out.println
|
https://docs.developers.optimizely.com/full-stack/docs/java
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
When making a design decision based on principles, it is necessary to find those principles which fit to the given design problem. This means the designer has to figure out which aspects need consideration. Seasoned designers will already know that by experience but there is also some guidance for that task. Principle languages interconnect principles in a way that the consideration of one principle automatically leads to other principles which are likely to be relevant in the same design situations. They point to other aspects to consider (complementary principles), to possibly downsides (contrary principles), and to principles of different granularity which might fit better to the given problem (generalizations and specializations).
The following approach is how you find a characterizing set for a given design problem.
Remarks:
The following example shows the usage of the OOD Principle Language. It details the assessment of a solution found in the CoCoME system1). The details of the system are irrelevant here but it resembles an information system which can be found in supermarkets or other stores. There are several components which are grouped into the typical layers of an information system: The presentation layer (GUI), the application or business logic layer and the data layer.
In CoCoME there is a mechanism for getting access to other components. In a nutshell it works like this:
DataImpl which aggregates three subcomponents Enterprise, Persistence, and Store and gives access to them.
public class DataImpl implements DataIf {
    public EnterpriseQueryIf getEnterpriseQueryIf() {
        return new EnterpriseQueryImpl();
    }

    public PersistenceIf getPersistenceManager() {
        return new PersistenceImpl();
    }

    public StoreQueryIf getStoreQueryIf() {
        return new StoreQueryImpl();
    }
}
public class DataIfFactory {
    private static DataIf dataaccess = null;

    private DataIfFactory() {}

    public static DataIf getInstance() {
        if (dataaccess == null) {
            dataaccess = new DataImpl();
        }
        return dataaccess;
    }
}
Essentially DataIfFactory resembles a mixture between the design patterns factory and singleton. The latter one is important here. The purpose of a singleton is to make a single instance of a class globally accessible. Here DataImpl is not ensured to be only instantiated once as it still has a public constructor. Nevertheless the “factory” class makes it globally accessible. In every part of the software DataIfFactory.getInstance() can be used to get hold of the data component. And since DataIf makes the three subcomponents accessible, also these are accessible from everywhere. There is no need to pass a reference around.
Is this a good solution?
We will examine this question using the OOD principle language. First we have to find suitable starting principles. This is one of the rather sophisticated cases where finding a starting principle is at least not completely obvious. If we don't have a clue where to start, we'll have a look at the different categories of principles in the language. Essentially the “factory” enables modules to access and communicate with each other. So we are looking for principles about module communication. There are three of them in the principle language: TdA/IE, LC, and DIP. TdA/IE does not seem to fit, but LC seems to help. Coupling should be low and the mechanism couples modules in a certain way. So we'll choose LC as a starting principle and our characterizing set looks like this: {LC}.
Now we'll have a look at the relationships. LC lists KISS, HC, and RoE as contrary, TdA/IE, MP, and IH/E as complementary, and DIP as a specialization. Let's examine them:
DataIfFactory.getInstance().getStore().doSomething()
So up until now the characterizing set is {LC, KISS, RoE, TdA/IE}. Now let's examine the relationships of the newly added principles. KISS lists GP, ML and MP as contrary principles and MIMC as a specialization.
Characterizing set up until now: {LC, KISS, RoE, TdA/IE, ML}. ML was newly added. Maybe on this point we might decide to abort the process because we already have a good idea of the aspects. But for the sake of the example, we'll continue with the relationships of ML. The wiki page lists KISS as a contrary principle and DRY, EUHM, UP, and IAP as specializations.
As a result we get {LC, KISS, RoE, TdA/IE, ML} as the characterizing set.
Note that although in this example the principles are examined in a certain order, the method does not prescribe any.
In order to answer the above question, we have to informally rate the solution based on the principles of the characterizing set:
Data component possibly using another way of storing the data. Every change in the arrangement of the classes needs a change in the code. LC is rather against this solution.
Store subcomponent requires asking DataIfFactory for the Data component and asking that one for the store. There is no way to tell the "factory" to do something. TdA/IE is against the solution.
So LC, RoE and TdA/IE are against the solution, KISS thinks it's good and ML has nothing against it. As it is not the number of principles which is important, the designer still has to make a sound judgment based on these results. What is more important: Coupling, testability, and clarity or a simple and fast implementation. In this case we'd rather decide that the former are more important, so we should rather think about a better solution.
In the next step we would think about better alternatives and might come up with dependency injection and service locators. So there are three alternatives (with several variations): The current solution and the two new ideas.
We already constructed a characterizing set. So the only thing to do is to rate the ideas according to the principles:
The current “factory” approach is abbreviated “F”, dependency injection is DI and SL stands for service locator. In the following a rough, informal rating is described, where “A > B” means that the respective principle rates A higher/better than B. “=” stands for equal ratings.
|
http://www.principles-wiki.net/about:navigating_principle_languages?rev=1379277599
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
Reader for the coverage mapping data that is emitted by the frontend and stored in an object file. More...
#include "llvm/ProfileData/Coverage/CoverageMappingReader.h"
Reader for the coverage mapping data that is emitted by the frontend and stored in an object file.
Definition at line 159 of file CoverageMappingReader.h.
Definition at line 177 of file CoverageMappingReader.h.
Definition at line 954 of file CoverageMappingReader.cpp.
References llvm::consumeError(), llvm::object::createBinary(), llvm::MemoryBufferRef::getBuffer(), loadTestingFormat(), and llvm::StringRef::startswith().
Referenced by llvm::coverage::CoverageMapping::load().
Definition at line 778 of file CoverageMappingReader.cpp.
References llvm::support::big, E, llvm::support::little, and llvm::coverage::malformed.
Referenced by loadBinaryFormat(), and loadTestingFormat().
Implements llvm::coverage::CoverageMappingReader.
Definition at line 1029 of file CoverageMappingReader.cpp.
References llvm::coverage::eof, Expressions, and llvm::makeArrayRef().
|
https://llvm.org/doxygen/classllvm_1_1coverage_1_1BinaryCoverageReader.html
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
Assaf,
I'm not following all of this. My main goal here is not to break the client
when a process is redeployed.
Lance
On 8/8/06, Assaf Arkin <arkin@intalio.com> wrote:
>
> The client breaks when the endpoint changes, or the messages/operations
> accepted by the endpoint change.
>
> Whenever you deploy a new version -- same or different name, version
> number,
> tagged or not -- that accepts the same messages on the same endpoints, the
> client does not perceive any difference. It still invokes the process the
> same way, regardless of how the server chooses to refer to that process
> definition.
>
> However, changing the signature and changing the process name, breaks the
> client. Because the client does not talk to the process, the client talks
> to
> the service, and so changing the signature breaks the client. Changing the
> process name is immaterial.
>
> A restriction that "if you change the signature you must change the
> process
> name" does not in any way protect the client from breaking, but makes life
> incredibly hard for developers. It's like asking you to change the Java
> class name every time you change its signature. When you're writing code,
> how often do you change signatures?
>
> Assaf
>
> On 8/8/06, Lance Waterman <lance.waterman@gmail.com> wrote:
> >
> > Assaf,
> >
> > From a client application's perspective which of the three options
> > requires
> > a change in the way I send a message into the BPEL engine?
> >
> > Lance
> >
> >
> > On 8/8/06, Assaf Arkin <arkin@intalio.com> wrote:
> > >
> > > Reading through the BPEL spec, I get the impression that however you
> > > decide
> > > to name a process is meaningless. If you send a message to its
> > initiating
> > > activity it will start. If you send a message to the wrong endpoint,
> it
> > > won't.
> > >
> > > So clearly people who want to version processes need to take into
> > account
> > > that Bar replacing Foo on same instantiating activity, means Bar is
> the
> > > version you now use, not Foo. Which means you can get really creative
> > with
> > > process names, like Order, OrderV1, Order_V2_With_25%_More_Activities.
> > >
> > >
> > > But there are two requirements you can't solve with overriding and
> > naming.
> > >
> > > One, only very few people can actual design, deploy and forget. Most
> > > people
> > > go through some iterative process, so you end up deploying different
> > > iterations of the same process as you're working to get it to done.
> And
> > > naming each deployment, that's like saving every draft you write under
> a
> > > different name.
> > >
> > > The source control approach is much better, it gives each version a
> > serial
> > > number and datetime stamp, so you can easily track changes and
> rollback.
> > > If
> > > you have some instance running, you know which process definition it
> > > belongs
> > > to: not the name, but the actual definition you pushed to the server
> > > before
> > > it was instantiated.
> > >
> > > (In some other development environments, deployment happens strictly
> > > through
> > > SVN and people in fact use the SVN version number to mark each
> release)
> > >
> > > Two, numbers and timestamps are fine but a burden when you do want to
> > > track
> > > milestone releases, especially in production. So you want to associate
> > > some
> > > meaningful name, usually related to that milestone, like "Release 1",
> > > "Release 1.1", whatever. A tagging mechanism separate from the process
> > > name
> > > has the benefit that you can clearly see its timeline, searching by
> > name,
> > > ordering by sequential version number, and displaying those tags.
> > >
> > > If tags sound familiar, source control does that as well.
> > >
> > >
> > > So I personally prefer a system whereby:
> > > 1. I can replace Foo with Bar because I decide Foo is a better name,
> and
> > > it's taking over Bar's role (same instantiation).
> > > 2. Or replace Foo with another Foo, and be able to see sequence of
> > > deployment using serial number/datetime I don't have to worry about.
> > > 3. Or affix a specific version label/tag.
> > >
> > > #1, I don't see that happening often, and you can always retire Foo
> and
> > > activate Bar.
> > >
> > > #2 is something the server already has to do in order to maintain
> > > instances
> > > using the old version, so just give me access to the sequence
> > > number/deployment timestamp.
> > >
> > > #3 is a really nice feature to have.
> > >
> > > Assaf
> > >
> > >
> > > On 8/8/06, Alex Boisvert <boisvert@intalio.com> wrote:
> > > >
> > > > Lance Waterman wrote:
> > > > > On 8/8/06, Alex Boisvert <boisvert@intalio.com> wrote:
> > > > >>
> > > > >> Lance,
> > > > >>
> > > > >> For consideration, I would like to briefly review the design
that
> I
> > > had
> > > > >> in mind for versioning in PXE. I think it's similar in spirit
to
> > > what
> > > > >> you describe in your deployment spec.
> > > > >>
> > > > >> First, each process definition would be identified by its fully
> > > > >> qualified name (/name/ and /namespace/ attributes) and a version
> > > > >> number. The process engine would manage the version number in
a
> > > > >> monotonically increasing fashion, meaning that each time a
> process
> > is
> > > > >> redeployed, the version number increases.
> > > > >
> > > > >
> > > > > I don't understand the need for a version number that is managed
> by
> > > the
> > > > > engine. I think a client may use whatever version scheme they use.
> > We
> > > > > just
> > > > > need to validate that the version identifier is unique at
> deployment
> > > > > time.
> > > > There is no strict need for a version number managed by the
> engine. I
> > > > think this idea came up when we wanted to simplify the management
> > > > interfaces and wanted to avoid the need for an extra user-provided
> > > > identifier if you already encoded version information in the
> > > > name+namespace. It made it easier to define and communicate the
> > > > "latest" process version.
> > > >
> > > > > I agree with Maciej's comments on this and would like to add from
> > the
> > > > > deployment spec under sec 1.2.5:
> > > > >
> > > > > *CONSTRAINT: Any change in the service interface ( i.e. a new
> > > <receive>
> > > > > element ) for a process definition will require a new identifier
(
> > i.e
> > > .
> > > > > name/namespace ) within the definition repository. Versioning is
> not
> > > > > supported across changes in the service interface and shall be
> > > > > enforced by
> > > > > the deployment component.*
> > > > >
> > > > > I would like to make sure folks are okay with this as well.
> > > > Personally, I would be against this because it would mean that I
> > cannot
> > > > deploy a new process definition that implements additional
> interfaces
> > > > (among other things).
> > > >
> > > > I don't see the reason to bind together the notions of service
> > interface
> > > > and versioning.
> > > >
> > > >
> > > > > In general I would like to define the concept of a "current"
> process
> > > > > definition. The "current" process definition is the definition
> used
> > by
> > > > > the
> > > > > engine on an instantiating event. There could be instances running
> > in
> > > > the
> > > > > engine that are using other versions of a process definition,
> > however
> > > > its
> > > > > not possible to have multiple versions that are used for
> > instantiating
> > > > > new
> > > > > processes ( see Maciej's reply on endpoints ). Through management
> > > > > tooling a
> > > > > client will identify the "current" process.
> > > > I don't think we need to define the notion of a "current"
> process. I
> > > > think we only need to define which (unique) process provides an
> > > > instantiating (non-correlated) operation on a specific endpoint.
> > > >
> > > > alex
> > > >
> > > >
> > >
> > >
> > > --
> > > CTO, Intalio
> > >
> > >
> > >
> >
> >
>
>
> --
> CTO, Intalio
>
>
>
|
http://mail-archives.us.apache.org/mod_mbox/ode-dev/200608.mbox/%3Ccbb700270608081619k67404c8cqba3a77b21e3469b3@mail.gmail.com%3E
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
SumBasic algorithm for text summarization
Reading time: 30 minutes | Coding time: 10 minutes
SumBasic is an algorithm for generating multi-document text summaries. The basic idea is to give more weight to frequently occurring words in a document than to less frequent words, so the generated summary contains the words most likely to appear in human-written abstracts.
It generates summaries of length n, where n is a user-specified number of sentences.
SumBasic has the following advantages :
- Helps a reader quickly understand the purpose of a document.
- Provides greater convenience and flexibility to the reader.
- Generates shorter and concise form from multiple documents.
The above figure shows the working of SumBasic on a document.
Algorithm
SumBasic follows this algorithm:
1. Calculate the probability distribution over the words w appearing in the input: for every word wi, P(wi) = n / N,
where
n = number of times the word appeared in the input,
N = total number of content word tokens in the input.
2. For each sentence Sj in the input, assign a weight equal to the average probability of the words in the sentence.
3. Pick the best-scoring sentence that contains the highest-probability word.
4. For each word wi in the sentence chosen at step 3, update its probability: pnew(wi) = pold(wi) * pold(wi).
5. If the desired summary length has not been reached, repeat from step 2.
Steps 2 and 3 give the summarizer its desired properties, and step 3 ensures that the highest-probability word is included in the summary every time a sentence is picked.
Step 4 serves several purposes: updating probabilities on the basis of already-selected sentences lets lower-probability words take part in later picks, and it also deals with redundancy.
In simple words, SumBasic first computes the probability of each content-word (i.e., verbs, nouns, adjectives and numbers) by simply counting its frequency in the document set. Each sentence is scored as the average of the probabilities of the words in it. The summary is then generated through a simple greedy search algorithm: it iteratively selects the sentence with the highest-scoring content-word, breaking ties by using the average score of the sentences. This continues until the maximum summary length has been reached. In order not to select the same or similar sentence multiple times, SumBasic updates probabilities of the words in the selected sentence by squaring them, modeling the likelihood of a word occurring twice in a summary.
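To make the scoring loop concrete, here is a minimal sketch (not part of the original article) of the idea on a toy input; the variable names are made up, and for simplicity it picks the sentence with the highest average score rather than first locating the highest-probability word:

from collections import Counter

sentences = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]

# Step 1: word probabilities P(w) = count(w) / total number of tokens
tokens = [w for s in sentences for w in s.split()]
probs = {w: c / len(tokens) for w, c in Counter(tokens).items()}

def score(sentence):
    # Step 2: average word probability of the sentence
    words = sentence.split()
    return sum(probs[w] for w in words) / len(words)

summary = []
for _ in range(2):  # build a 2-sentence summary
    best = max((s for s in sentences if s not in summary), key=score)
    summary.append(best)
    for w in best.split():
        probs[w] = probs[w] ** 2  # Step 4: square the chosen words' probabilities

print(summary)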
Complexity analysis
Worst case: O(2*n + n * (n^3 + n*log(n) + n^2))
given by: O(step 1 complexity + n * (complexity of steps 2, 3, and 4))
Implementation
SumBasic can be implemented using the sumy or nltk libraries in Python.
Sumy installation command :
pip install sumy
nltk installation command:
pip install nltk
sumy is used to extract summaries from HTML pages or plain text.
The data is processed through several steps before summarization:
- Tokenization - A sentence is split into word tokens, which are then processed to find the distinct words.
- Stemming - Stemming reduces a word to its stem. For example, "having" is converted into "have" by stripping the "ing" suffix.
- Lemmatization - Lemmatization maps a word to its dictionary base form (its lemma); a short sketch of these three steps with nltk follows this list.
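As a rough illustration (not from the original article, and assuming nltk plus its punkt and wordnet data are installed), the three preprocessing steps might look like this:

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

sentence = "The children are having fun with the cats"

tokens = nltk.word_tokenize(sentence)                         # tokenization
stems = [PorterStemmer().stem(t) for t in tokens]             # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]   # lemmatization

print(tokens)
print(stems)
print(lemmas)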
There are three ways to implement the algorithm, namely:
- orig: The original version, including the non-redundancy update of the word scores.
- simplified: A simplified version of the system that holds the word scores constant and does not incorporate the non-redundancy update. It produces better results than the orig version in terms of simplification.
- leading: Takes the leading sentences of one of the articles, up until the word length limit is reached. It is the most concise technique.
NOTE- The code implemented below does not use sumy.
Sample Code
import nltk, sys, glob

reload(sys)
sys.setdefaultencoding('utf8')

lemmatize = True
rm_stopwords = True
num_sentences = 10
stopwords = nltk.corpus.stopwords.words('english')
lemmatizer = nltk.stem.WordNetLemmatizer()

# Breaking a sentence into tokens
def clean_sentence(tokens):
    tokens = [t.lower() for t in tokens]
    if lemmatize:
        tokens = [lemmatizer.lemmatize(t) for t in tokens]
    if rm_stopwords:
        tokens = [t for t in tokens if t not in stopwords]
    return tokens

def get_probabilities(cluster, lemmatize, rm_stopwords):
    # Store word probabilities for this cluster
    word_ps = {}
    # Keep track of the number of tokens to calculate probabilities later
    token_count = 0.0
    # Gather counts for all words in all documents
    for path in cluster:
        with open(path) as f:
            tokens = clean_sentence(nltk.word_tokenize(f.read()))
            token_count += len(tokens)
            for token in tokens:
                if token not in word_ps:
                    word_ps[token] = 1.0
                else:
                    word_ps[token] += 1.0
    # Divide word counts by the number of tokens across all files
    for word_p in word_ps:
        word_ps[word_p] = word_ps[word_p]/float(token_count)
    return word_ps

def get_sentences(cluster):
    sentences = []
    for path in cluster:
        with open(path) as f:
            sentences += nltk.sent_tokenize(f.read())
    return sentences

def clean_sentence(tokens):
    tokens = [t.lower() for t in tokens]
    if lemmatize:
        tokens = [lemmatizer.lemmatize(t) for t in tokens]
    if rm_stopwords:
        tokens = [t for t in tokens if t not in stopwords]
    return tokens

def score_sentence(sentence, word_ps):
    score = 0.0
    num_tokens = 0.0
    sentence = nltk.word_tokenize(sentence)
    tokens = clean_sentence(sentence)
    for token in tokens:
        if token in word_ps:
            score += word_ps[token]
            num_tokens += 1.0
    return float(score)/float(num_tokens)

def max_sentence(sentences, word_ps, simplified):
    max_sentence = None
    max_score = None
    for sentence in sentences:
        score = score_sentence(sentence, word_ps)
        if score > max_score or max_score == None:
            max_sentence = sentence
            max_score = score
    if not simplified:
        update_ps(max_sentence, word_ps)
    return max_sentence

# Updating the sentences, step 4 of algo
def update_ps(max_sentence, word_ps):
    sentence = nltk.word_tokenize(max_sentence)
    sentence = clean_sentence(sentence)
    for word in sentence:
        word_ps[word] = word_ps[word]**2
    return True

def orig(cluster):
    cluster = glob.glob(cluster)
    word_ps = get_probabilities(cluster, lemmatize, rm_stopwords)
    sentences = get_sentences(cluster)
    summary = []
    for i in range(num_sentences):
        summary.append(max_sentence(sentences, word_ps, False))
    return " ".join(summary)

def simplified(cluster):
    cluster = glob.glob(cluster)
    word_ps = get_probabilities(cluster, lemmatize, rm_stopwords)
    sentences = get_sentences(cluster)
    summary = []
    for i in range(num_sentences):
        summary.append(max_sentence(sentences, word_ps, True))
    return " ".join(summary)

def leading(cluster):
    cluster = glob.glob(cluster)
    sentences = get_sentences(cluster)
    summary = []
    for i in range(num_sentences):
        summary.append(sentences[i])
    return " ".join(summary)

def main():
    method = sys.argv[1]
    cluster = sys.argv[2]
    summary = eval(method + "('" + cluster + "')")
    print summary

if __name__ == '__main__':
    main()
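Assuming the script above is saved as, say, sumbasic.py, it would be invoked with a method name and a file glob, for example: python sumbasic.py orig "./docs/*.txt". The chosen function (orig, simplified, or leading) is then applied to all files matching the pattern.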
The following picture represents how the above code is executed:
Applications
Data summarization has a wide range of applications, from extracting relevant information to dealing with redundancy. Some of the major applications are as follows:
- Information retrieval by Google, Yahoo, Bing, and so on. Whenever a query is made, thousands of pages are returned, and it becomes difficult to extract the relevant and significant information from them. Summarization addresses this.
- It is used to tackle the problem of data overload.
- A shorter summary of the source text is provided to the user that retains all the main and relevant features of the content.
- Easily understandable
Other Summarization techniques can be as follows :
- LexRank
- TextRank
- Latent Semantic Analysis(LSA) and so on.
Questions
- Are NLP and SumBasic different?
SumBasic is a technique used within NLP. It is useful because it investigates how much the frequency of words in the cluster of input documents influences their selection in the summary.
- How are tf-idf and SumBasic different?
Tf-idf stands for "term frequency-inverse document frequency". It considers both the frequency of a term in a document and its inverse document frequency, that is, how many documents in the collection contain the term, whereas SumBasic works through words from most frequent to least frequent to determine the context of a document or text (a small tf-idf sketch follows below). Tf-idf is used more often in chatbots and in settings where the machine has to understand meaning and communicate with the user, whereas summarization is used to provide the user with concise information.
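For comparison, here is a minimal sketch (not from the original article) of computing tf-idf weights with scikit-learn, assuming it is installed:

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)   # sparse matrix: documents x vocabulary

# Print each term with its tf-idf weight in the first document
for term, col in vectorizer.vocabulary_.items():
    print(term, tfidf[0, col])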
References
- Lucy Vanderwend, Hisami Suzuki et al, Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion
- Chintan Shah & Anjali Jivani, (2016). LITERATURE STUDY ON MULTI-DOCUMENT TEXT SUMMARIZATION TECHNIQUES. In SmartCom., September-2016. Jaipur: Springer.
- A SURVEY OF TEXT SUMMARIZATION TECHNIQUES :Ani Nenkova,University of Pennsylvania;Kathleen McKeown,Columbia University.
|
https://iq.opengenus.org/sumbasic-algorithm-for-text-summarization/
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
I've got a model UserGroup with many-to-many fields for managers and members:
class UserGroup(models.Model):
    managers = models.ManyToManyField(User)
    members = models.ManyToManyField(User)
    # Note: some stuff stripped out for brevity
I wanted every manager to be a member, too, automatically. So I added a custom
save() method:
class UserGroup(models.Model):
    managers = models.ManyToManyField(User)
    members = models.ManyToManyField(User)
    # Note: some stuff stripped out for brevity

    def save(self, *args, **kwargs):
        if self.id:
            members = self.members.all()
            for manager in self.managers.all():
                if manager not in members:
                    self.members.add(manager)
        super(UserGroup, self).save(*args, **kwargs)
Which worked fine in my unittest, but not in the actual admin interface.
I found the reason on stackoverflow:
the Django admin clears the many-to-many fields after saving the model and
sets them anew with the data it knows about. So my
save() method worked
fine, but saw its work zapped by Django’s admin…
Django’s development docs say that 1.4 will have
a
save_related() model admin method. Which sounds like it could help work
around this issue.
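For reference (this is not part of the original post), a hedged sketch of that save_related() approach on a Django version that has the hook might look roughly like this; the exact behavior should be checked against the release notes:

from django.contrib import admin
from .models import UserGroup

class UserGroupAdmin(admin.ModelAdmin):
    model = UserGroup

    def save_related(self, request, form, formsets, change):
        # Let the admin save the many-to-many data first...
        super(UserGroupAdmin, self).save_related(request, form, formsets, change)
        # ...then make sure every manager is also a member.
        group = form.instance
        for manager in group.managers.all():
            group.members.add(manager)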
The solution I ended up with was to add a custom model admin form and to
use the
clean() method to just modify the form data. I got the idea also
from stackoverflow. Here’s
the relevant part of my
admin.py:
class UserGroupAdminForm(ModelForm):
    class Meta:
        model = UserGroup

    def clean(self):
        """Make sure all managers are also members."""
        for manager in self.cleaned_data['managers']:
            if manager not in self.cleaned_data['members']:
                self.cleaned_data['members'].append(manager)
        return self.cleaned_data


class UserGroupAdmin(admin.ModelAdmin):
    model = UserGroup
    form = UserGroupAdminForm
It works fine now.
Addition 2011-12-02. It didn't work so fine after all. There's one problem,
which you can also find on stackoverflow. Django
converts the raw form data to python objects. In the case of these user object
you get a queryset in the latest Django version instead of a list of user
objects.
.append() doesn’t work on a queryset. So I took one of the
suggestions on stackoverflow and converted the queryset to a list, first:
def clean(self):
    """Make sure all managers are also members."""
    members = list(self.cleaned_data['members'])
    for manager in self.cleaned_data['managers']:
        if manager not in members:
            members.append(manager)
    self.cleaned_data['members'] = members
    return self.cleaned_data
|
https://reinout.vanrees.org/weblog/2011/11/29/many-to-many-field-save-method.html
|
CC-MAIN-2020-29
|
en
|
refinedweb
|
I'll preface this question with I'm new to C# and Xamarin, and working on my first Xamarin Forms app, targeting iOS and Android.
The app is using HttpClient to pass requests to an api, and the headers in the response return session cookies that are used to identify the current user. So once I've received an initial response and those cookies have been stored in the CookieContainer, I want to store that CookieContainer in some global scope so it can be reused and passed with all subsequent requests.
I've read that attempting to serialize the cookies data can be problematic, and HttpOnly cookies are included in the response, which I apparently can't access in the CookieContainer. Because of this, I've also tried enumerating through the values returned in the Set-Cookie header and storing it as a comma delimited string, then using SetCookies on the CookieContainer with that string for subsequent requests. But that seems overly complex and trying to do so results in a consistent, vague error "Operation is not valid due to the current state of the object." when requests are made. So I'm hoping to simply reuse the entire CookieContainer object.
So more pointedly, my questions are:
Where is an appropriate place to store a CookieContainer object so that it will persist throughout the app's lifecycle, and preferably still be available when the app goes into the background and is resumed. Is simply declaring it as a static variable in my WebServices class good enough?
If I do reuse the CookieContainer in this way, will the individual cookies be automatically updated on subsequent requests if more are added or the values of existing ones sent by the server change?
Here's a snippet from the method we're currently using (excluding my attempts at parsing the cookies into a string, which hopefully is unnecessary):
HttpResponseMessage httpResponse;
var cookieContainer = new CookieContainer(); // want to set/store this globally
var baseAddress = new Uri("");

using (var handler = new HttpClientHandler() { CookieContainer = cookieContainer })
using (HttpClient client = new HttpClient(handler) { BaseAddress = baseAddress })
{
    // has a timeout been set on this Web Service instance?
    if (TimeoutSeconds > 0)
    {
        client.Timeout = new System.TimeSpan(0, 0, TimeoutSeconds);
    }

    // Make request (the "uri" var is passed with the method call)
    httpResponse = await client.GetAsync(uri);
}
Answers
Can't seem to edit my initial post any more, but want to make a couple of corrections:
Serializing seems to be problematic because of not being able to read/access the HttpOnly cookies. Please correct me if I'm wrong here. I've been having difficulty implementing serialization since the concept is new to me, so any direction on that would be super helpful. I've been having trouble with examples I've found, in part because of namespace references not being recognized in Xamarin studio. For example, I get an unknown resolve error trying to include System.Runtime.Serialization.Formatters.Binary. I was able to serialize into a json string, but I did not see the HttpOnly cookies there so didn't go any further.
It is actually a requirement, not just a preference, that the cookies persist in the app even if it is shut down and restarted, so it seems I need to store in something like Application.Current.Properties (as opposed to just a static member of the class). But it seems you can't store an object that way.
I do something similar. This is all off memory so I am sorry for the generalized answer. Anyway, I extract the cookie from the CookieStore object when I initially authenticate the user. I store it in my app settings (I use the Settings Plugin by James Montemagno). So whenever I do an API request, I plug in the cookie I saved and use that. If the request fails, I check why and if it is due to the cookie being timed out, I reauth the user and store a new cookie. Isn't the best solution but it is something that works.
Thanks for the suggestion, Travis. I'll look into that plugin - if that allows us to store an object it may be what I'm looking for, since Application.Current.Properties only seems to allow primitive types.
I think it is just the basic types. You will probably have to disassemble it and save its individual pieces, then reassemble them when you try to get the data back. Although, I only save the cookie string. I don't save everything within the CookieStore object.
@TravyDale how do you put the cookies into your API Request? I can save them and iOS accesses them, but I can't get Android to recognize them. Thanks!
@ChristineBlanda did you find a way to do that? (on all platforms)
|
https://forums.xamarin.com/discussion/comment/195587/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Internet of Things
Programming Projects
Build modern IoT solutions with the Raspberry Pi 3
and Python
Colin Dow
BIRMINGHAM - MUMBAI
Internet of Things Programming: Prachi Bisht
Content Development Editor: Deepti Thore
Technical Editor: Varsha Shivhare
Copy Editor: Safis Editing
Project Coordinator: Kinjal Bari
Proofreader: Safis Editing
Indexer: Mariammal Chettiyar
Graphics: Jisha Chirayil
Production Coordinator: Aparna Bhagat
First published: October 2018
Production reference: 1301018
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-78913-480.
I would like to thank my wife Constance for her encouragement, support and assistance;
and my sons Maximillian and Jackson for their inspiration and optimism. I am forever
grateful to them for this unique opportunity.
I would also like to thank Deepti Thore and Varsha Shivhare at Packt for their guidance
and expertise throughout the whole process. Without their assistance and patience this
book would not have been possible.
About the reviewer
Arvind Ravulavaru is a platform architect at Ubiconn IoT Solutions, with over 9 years of
experience of software development and 2 years experience of hardware and product
development. For the last 5 years, he has been working extensively on JavaScript, both on
the server side and the client side. Over the past couple of years, his focus has been on IoT,
building a platform for rapidly developing IoT solutions named The IoT Suitcase. Prior to
that, Arvind worked on big data, cloud computing, and orchestration.

Table of Contents

Chapter 1: Installing Raspbian on the Raspberry Pi 8
A brief history of the Raspberry Pi 8
A look at operating systems for the Raspberry Pi 11
Project overview 12
Getting started 12
Installing the Raspbian OS 12
Formatting a microSD card for Raspbian 13
Copying the NOOBS files to the microSD RAM 13
Running the installer 14
A quick overview of the Raspbian OS 21
The Chromium web browser 21
The home folder 22
The Terminal 23
Mathematica 25
Sonic Pi 26
Scratch and Scratch 2.0 27
LibreOffice 28
Summary 29
Questions 29
Further reading 30
Chapter 2: Writing Python Programs Using Raspberry Pi 31
Project overview 31
Technical requirements 32
Python tools for Raspberry Pi 32
The Terminal 32
Integrated Development and Learning Environment 33
Thonny 33
Using the Python command line 35
Writing a simple Python program 39
Creating the class 39
Creating the object 40
Using the object inspector 41
Testing your class 42
Making the code flexible 43
Example one 43
Example two 43
Summary 44
Questions 44
Further reading 45
Chapter 3: Using the GPIO to Connect to the Outside World 46
Project overview 46
Technical requirements 47
Python libraries for the Raspberry Pi 47
picamera 49
Pillow 50
sense-hat and sense-emu 50
Accessing Raspberry Pi's GPIO 53
Pibrella 54
RPi.GPIO 57
GPIO zero 58
Setting up the circuit 58
Fritzing 59
Building our circuit 61
Hello LED 63
Blink LED using gpiozero 63
Morse code weather data 63
Summary 67
Questions 68
Further reading 69
Chapter 4: Subscribing to Web Services 70
Prerequisites 70
Project overview 71
Getting started 71
Cloud services for IoT 71
Amazon Web Services IoT 71
IBM Watson platform 73
Google Cloud platform 74
Microsoft Azure 75
Weather Underground 75
A basic Python program to pull data from the cloud 76
Accessing the web service 76
Using the Sense HAT Emulator 79
Summary 81
Questions 82
Further reading 82
Chapter 5: Controlling a Servo with Python 83
Knowledge required to complete this chapter 83
Project overview 83
Getting started 84
Wiring up a servo motor to the Raspberry Pi 84
Stepper motors 84
DC motors 86
Servo motors 87
Connecting the servo motor to our Raspberry Pi 89
Control the servo through the command line 91
Write a Python program to control the servo 93
Summary 96
Questions 96
Further reading 96
Chapter 6: Working with the Servo Control Code to Control an Analog
Device 97
Knowledge required to complete this chapter 97
Project overview 98
Getting started 99
Accessing weather data from the cloud 99
Controlling the servo using weather data 102
Correcting for servo range 102
Changing the position of the servo based on weather data 104
Enhancing our project 106
Printing out the main graphic 107
Adding the needle and LED 108
Summary 110
Questions 111
Further reading 111
Chapter 7: Setting Up a Raspberry Pi Web Server 112
Knowledge required to complete this chapter 112
Project overview 112
Getting started 113
Introducing CherryPy – a minimalist Python web framework 113
What is CherryPy? 113
Who uses CherryPy? 113
Installing CherryPy 114
Creating a simple web page using CherryPy 115
Hello Raspberry Pi! 115
Say hello to myFriend 117
What about static pages? 119
HTML weather dashboard 120
Summary 127
Questions 128
Further reading 128
Chapter 8: Reading Raspberry Pi GPIO Sensor Data Using Python 129
Project overview 129
Getting started 130
Reading the state of a button 130
Using GPIO Zero with a button 130
Using the Sense HAT emulator and GPIO Zero button together 132
Toggling an LED with a long button press 135
Reading the state from an infrared motion sensor 137
What is a PIR sensor? 138
Using the GPIO Zero buzzer class 141
Building a basic alarm system 144
Modifying Hello LED using infrared sensor 146
Configuring a distance sensor 147
Taking Hello LED to another level 149
Summary 151
Questions 152
Further reading 152
Chapter 9: Building a Home Security Dashboard 153
Knowledge required to complete this chapter 153
Project overview 153
Getting started 154
Creating our dashboard using CherryPy 154
Using the DHT11 to find temperature and humidity 154
Using the Pi camera to take a photo 159
Creating our dashboard using CherryPy 160
Displaying sensory data on our dashboard 165
Home security dashboard with a temperature sensor 166
Home security dashboard with quick response 175
Summary 183
Questions 183
Further reading 184
Chapter 10: Publishing to Web Services 185
Project overview 185
Getting started 185
Publishing sensory data to cloud-based services 186
Install the MQTT library 186
Set up an account and create a device 186
Reading sensory data and publishing to ThingsBoard 189
Creating a dashboard in ThingsBoard 192
Sharing your dashboard with a friend 195
Setting up an account for text message transmission 196
Setting up a Twilio account 197
Installing Twilio on our Raspberry Pi 201
Sending a text through Twilio 201
Creating a new home security dashboard 202
Summary 213
Questions 213
Further reading 214
Chapter 11: Creating a Doorbell Button Using Bluetooth 215
Project overview 215
Getting started 216
Introducing Blue Dot 216
Installing the bluedot library on the Raspberry Pi 218
Pairing Blue Dot with your Raspberry Pi 218
Wiring up our circuit 219
What is an RGB LED? 220
Testing our RGB LED 220
Completing our doorbell circuit 223
Reading our button state using Bluetooth and Python 226
Reading button information using Python 226
Creating a Bluetooth doorbell 228
Creating a secret Bluetooth doorbell 231
Summary 232
Questions 232
Further reading 233
Chapter 12: Enhancing Our IoT Doorbell 234
Project overview 235
Getting started 236
Sending a text message when someone is at the door 236
Creating a simple doorbell application with text messaging 237
Creating a secret doorbell application with text messaging 242
Summary 248
Questions 248
Further reading 248
Chapter 13: Introducing the Raspberry Pi Robot Car 249
The parts of the robot car 250
Building the robot car 252
Step 1 – Adafruit 16-Channel PWM/Servo HAT for Raspberry Pi 252
Step 2 – Wiring up the motors 254
Step 3 – Assembling the servo camera mount 257
Step 4 – Attaching the head 262
Step 5 – Assembling the DC motor plate 266
Step 6 – Attaching the motors and wheels 274
Step 7 – Wiring up the motors 276
Step 8 – Attaching the camera mount, Raspberry Pi, and Adafruit servo
board 277
Step 9 – Attaching the buzzer and voltage divider 281
Step 10 – Wiring up T.A.R.A.S 284
Learning how to control the robot car 287
Configuring our Raspberry Pi 287
Python library for Adafruit Servo HAT 288
Summary 289
Questions 290
Chapter 14: Controlling the Robot Car Using Python 291
Knowledge required to complete this chapter 291
Project overview 292
Getting started 292
Taking a look at the Python code 293
Controlling the drive wheels of the robot car 293
Moving the servos on the robot car 294
Taking a picture 295
Making a beep noise 296
Making the LEDs blink 296
Modifying the robot car Python code 299
Move the wheels 299
Move the head 300
Make sounds 302
Enhancing the code 304
Stitching our code together 304
Summary 306
Questions 306
Further reading 307
Chapter 15: Connecting Sensory Inputs from the Robot Car to the Web 308
Knowledge required to complete this chapter 308
Project overview 309
Getting started 309
Identifying the sensor on the robot car 309
Taking a closer look at the HC-SR04 310
Reading robot car sensory data with Python 313
Publishing robot car sensory data to the cloud 314
Create a ThingsBoard device 315
Summary 321
Questions 321
Further reading 321
Chapter 16: Controlling the Robot Car with Web Service Calls 322
Knowledge required to complete this chapter 322
Project overview 323
Technical requirements 323
Reading the robot car's data from the cloud 323
Changing the look of the distance gauge 323
Changing the range on the distance gauge 326
Viewing the dashboard outside of your account 328
Using a Python program to control a robot car through the cloud 329
Adding a switch to our dashboard 331
Controlling the green LED on T.A.R.A.S 333
Using the internet to make T.A.R.A.S dance 336
Summary 338
Questions 338
Further reading 339
Chapter 17: Building the JavaScript Client 340
Project overview 340
Getting started 341
Introducing JavaScript cloud libraries 341
Google Cloud 341
AWS SDK for JavaScript 342
Eclipse Paho JavaScript client 342
Connecting to cloud services using JavaScript 342
Setting up a CloudMQTT account 343
Setting up an MQTT Broker instance 345
Writing the JavaScript client code 347
Running the code 350
Understanding the JavaScript code 353
Publishing MQTT messages from our Raspberry Pi 356
Summary 357
Questions 358
Further reading 358
Chapter 18: Putting It All Together 359
Project overview 360
Getting started 361
Building a JavaScript client to connect to our Raspberry Pi 361
Writing the HTML code 362
Writing the JavaScript code to communicate with our MQTT Broker 366
Creating a JavaScript client to access our robot car's sensory data 372
Writing the code for T.A.R.A.S 373
Livestreaming videos from T.A.R.A.S 377
Enhancing our JavaScript client to control our robot car 379
Nipple.js 380
HTML5 Gamepad API 380
Johnny-Five 380
Summary 381
Questions 381
Further reading 382
Assessments 383
Other Books You May Enjoy 403
Index 406
Preface
The Internet of Things (IoT) promises to unlock the real world the way that the internet
unlocked millions of computers just a few decades ago. First released in 2012, the
Raspberry Pi computer has taken the world by storm. Originally designed to give newer
generations the same excitement to programming that personal computers from the 1980s
did, the Raspberry Pi has gone on to be a staple of millions of makers everywhere.
In 1991, Guido van Rossum introduced the world to the Python programming language.
Python is a terse language and was designed for code readability. Python programs tend to
require fewer lines of code than other programming languages. Python is a scalable
language that can be used for anything from the simplest programs to massive large-scale
projects.
In this book, we will unleash the power of Raspberry Pi and Python to create exciting IoT
projects.
The first part of the book introduces the reader to the amazing Raspberry Pi. We will learn
how to set it up and jump right into Python programming. We will start our foray into real-
world computing by creating the "Hello World" app for physical computing, the flashing
LED.
Our first project takes us back to an age when analog needle meters ruled the world of data
display. Think back to those old analog multimeters and endless old sci-fi movies where
information was controlled and displayed with buttons and big flashing lights. In our
project, we will retrieve weather data from a web service and display it on an analog needle
meter. We will accomplish this using a servo motor connected to our Raspberry Pi through
the GPIO.
Home security systems are pretty much ubiquitous in modern life. Entire industries and
careers are based on the installation and monitoring of them. Did you know that you could
easily create your own home security system? In our second project, we do just that, as we
build a home security system using Raspberry Pi as a web server to display it.
The humble doorbell has been with us since 1831. In our third project, we will give it a 21st
century twist and have our Raspberry Pi send a signal to a web service that will text us
when someone is at the door.
In our final project, we take what we've learned from our previous two projects and create
an IoT robot car we call T.A.R.A.S (This Amazing Raspberry-Pi Automated Security Agent).
In years to come, driverless cars will become the rule instead of the exception, and ways of
controlling these cars will be needed. This final project gives the reader insight and
knowledge into how someone would go about controlling cars devoid of a human driver.
Who this book is for
This book is geared toward those who have had some sort of exposure to programming
and are interested in learning about the IoT. Knowledge of the Python programming
language would be a definite asset. An understanding of, or a keen interest in, object-
oriented programming will serve the reader well with the coding examples used in the
book.
What this book covers
Chapter 1, Installing Raspbian on the Raspberry Pi, sets us off on our Raspberry Pi IoT
journey by installing the Raspbian OS on our Raspberry Pi. We will then take a look at
some of the programs that come pre-installed with Raspbian.
Chapter 2, Writing Python Programs Using Raspberry Pi, covers how Windows, macOS, and
Linux are operating systems that are familiar to developers. Many a book on developing
the Raspberry Pi involves using one of these operating systems and accessing the
Raspberry Pi remotely. We will take a different approach in this book. We will use our
Raspberry Pi as a development machine. In this chapter, we will get our feet wet with using
the Raspberry Pi as a development machine.
Chapter 3, Using the GPIO to Connect to the Outside World, explains how, if the Raspberry Pi
was just a $35 computer, that would be enough for many of us. However, the real power
behind the Raspberry Pi is the ability of the developer to access the outside world through
the use of the General Purpose Input Output (GPIO) pins. In this chapter, we will delve
into the GPIO and start to connect the Raspberry Pi to the real world. We will create a
Morse code generator for our project using an outside LED and then use this generator to
blink out simulated weather information.
Chapter 4, Subscribing to Web Services, explores a few web services offered by some of the
biggest companies in the world. Our project will use the virtual version of the Raspberry Pi
Sense HAT as a ticker to display current weather information from the Yahoo! Weather
web service.
Chapter 5, Controlling a Servo with Python, introduces the concept of creating an analog
meter needle using a servo motor connected to the Raspberry Pi.
Chapter 6, Working with the Servo Control Code to Control an Analog Device, continues the
theme of working with servo motors as we build our first real IoT device, a weather
dashboard. Not only will this weather dashboard feature an analog needle; it will use the
needle to point to a picture of a suggested wardrobe based on the weather conditions.
Chapter 7, Setting Up a Raspberry Pi Web Server, goes into how to install and configure the
web framework CherryPy. We will conclude the chapter by building a local website that
displays weather information.
Chapter 8, Reading Raspberry Pi GPIO Sensor Data Using Python, covers how to read the
state of a button before moving on to a PIR sensor and distance sensor. We will conclude
the chapter by building simple alarm systems.
Chapter 9, Building a Home Security Dashboard, explains how to build a home security
dashboard using the Raspberry Pi as a web server serving up HTML content containing
sensory data collected from the GPIO.
Chapter 10, Publishing to Web Services, goes into how to measure room temperature and
humidity and publish these values to the web through the use of an IoT dashboard. We will
also set up and run a text messaging alert using the service Twilio.
Chapter 11, Creating a Doorbell Button Using Bluetooth, turns our focus to using Bluetooth in
this chapter. Bluetooth is a wireless technology that allows for transmission of data over
short distances. For our project we will explore the BlueDot app from the Android Play
Store. We will use this app to build a simple Bluetooth connected doorbell.
Chapter 12, Enhancing Our IoT Doorbell, will take the simple doorbell we created in Chapter
11, Creating a Doorbell Button Using Bluetooth, and turn it into an IoT doorbell using the
knowledge we learned in Chapter 10, Publishing to Web Services.
Chapter 13, Introducing the Raspberry Pi Robot Car, starts us off on our journey into the IoT
robot car by introducing This Amazing Raspberry-Pi Automated Security Agent
(T.A.R.A.S). This chapter will begin by outlining the components we need to build
T.A.R.A.S and then we will proceed to putting it all together.
Chapter 14, Controlling the Robot Car Using Python, goes into how to write Python code for
our robot car. We will utilize the GPIO Zero library to make the car wheels move forward,
move the servo motors holding the camera, and light up the LEDs at the back of the robot
car.
Chapter 15, Connecting Sensory Inputs from the Robot Car to the Web, helps us understand
that in order to turn our robot car into a true IoT device we have to connect it to the
internet. In this chapter we will connect the distance sensor from our robot car to the
internet.
Chapter 16, Controlling the Robot Car with Web Service Calls, continues to turn our robot car
into an Internet of Things device by taking a deeper look at the internet dashboard we
created for the robot car.
Chapter 17, Building the JavaScript Client, moves our attention away from Python, switching
our focus to JavaScript instead. We will use JavaScript to build a web-based client that
communicates over the internet using the MQTT protocol.
Chapter 18, Putting It All Together, covers how we will connect our robot car, T.A.R.A.S, to
a JavaScript client, and control it over the internet using the MQTT protocol.
To get the most out of this book
To get the most out of this book, I will assume the following:
You have purchased, or will purchase, a Raspberry Pi Computer, preferably a
2015 model or newer.
You have had some exposure to the Python programming language, or are eager
to learn it.
You have a basic familiarity with electronic components and how to use a
breadboard.
You have purchased, or are willing to purchase, basic electronic components.
In terms of hardware requirements, you will need at least the following:
A Raspberry Pi Model 3 (2015 model or newer)
A USB power supply
A computer monitor
A USB keyboard
A USB mouse
A microSD RAM card
A breadboard and breadboard jumpers
Additional pieces of hardware will be introduced at the beginning of every chapter.
In terms of software requirements, you will require the Raspberry Pi NOOBS image
(https://www.raspberrypi.org/downloads/noobs/). Additional software, accounts, and
Python packages will be presented along the way. Any piece of software, web service, or
Python package we use in this book is free of charge.

The code bundle for the book is hosted on GitHub at
https://github.com/PacktPublishing/Internet-of-Things-Programming-Projects. In case there's an update
to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available
at https://github.com/PacktPublishing/. Check them out!
Download the color images
We also provide a PDF file that has color images of the screenshots/diagrams used in this
book. You can download it here: https://www.packtpub.com/sites/default/files/downloads/9781789134803_ColorImages.pdf.
Conventions used
There are a number of text conventions used throughout this book.
CodeInText: Indicates code words in text, database table names, folder names, filenames,
file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an
example: "In order to access Python 3, we type the python3 command in a Terminal
window."
A block of code is set as follows:
wind_dir_str_len = 2
if currentWeather.getWindSpeed()[-2:-1] == ' ':
wind_dir_str_len = 1
Any command-line input or output is written as follows:
pip3 install weather-api
Bold: Indicates a new term, an important word, or words that you see on screen. For
example, words in menus or dialog boxes appear in the text like this. Here is an example:
"From the View menu, select Object inspector and Variables."
Installing Raspbian on the
Raspberry Pi
The Raspberry Pi is marketed as a small and affordable computer that you can use to learn
programming. At least that was its initial goal. As we will see in this book, it is much more
than that.
The following topics will be covered in this chapter:
A brief history of the Raspberry Pi
A look at operating systems for the Raspberry Pi
Installing the Raspbian OS
A quick overview of the Raspbian OS
A brief history of the Raspberry Pi
First released in 2012, the first Raspberry Pi featured a 700 MHz single core processor and
256 MB of RAM. The Raspberry Pi 2 was released in February of 2015 with a 900 MHz quad
core processor and 1 GB of RAM. Released in February of 2016, the Raspberry Pi 3
increased the processor speed to 1.2 GHz. This model was also the first one to include
wireless LAN and Bluetooth.
Here is an image of a Raspberry Pi 3 B (2015):
This version of the Raspberry Pi features the following parts:
Four USB 2 ports
A LAN port
A 3.5 mm composite video and audio jack
An HDMI port for video and audio
An OTG USB port (which we will use to connect the power)
A microSD slot (to hold our operating system)
A DSI display port for the Raspberry Pi touchscreen
General Purpose Input Output (GPIO) pins
A camera port for a special Raspberry Pi camera
The Raspberry Pi Zero was released in November of 2015. Here is an image of it:
Although not as powerful as the previous Raspberry Pis, the Zero featured a smaller size
(65 mm X 30 mm), perfect for projects with limited physical space (namely, wearable
projects). Plus, the Raspberry Pi zero was priced at $5 USD, making it very affordable. The
Raspberry Pi zero W was released on February 28, 2017 at double the price ($10 USD) with
built-in Wi-Fi and Bluetooth capabilities.
The latest model, as of the time of writing, is the Raspberry Pi 3 B+, which was released on
March 14, 2018. The processor speed has been upgraded to 1.4 GHz as well as the wireless
LAN now supporting both 2.4 GHz and 5 GHz bands. Another upgrade is the addition of
Bluetooth low energy, a technology built for applications that do not require large amounts
of data to be exchanged but are required to have a long battery life.
Creators of the Raspberry Pi initially believed that they would sell at most 1,000 units. Little
did they know that their invention would explode in popularity. As of March 2018, sales of
Raspberry Pi computers has passed the 19 million mark.
A look at operating systems for the
Raspberry Pi
There are various operating systems (or system images) that may be installed on the
Raspberry Pi. These range from application-specific operating systems, such as audio
players, to various general purpose operating systems. The power behind Raspberry Pi is
the way it can be used for various applications and projects.
The following is a list of just a few of the operating systems (system images) available for
the Raspberry Pi:
Volumio: Do you have a desire to set up a networked audio system where you
access your music list using a computer or cell phone? Volumio may be what you
are looking for. Installing it on a Raspberry Pi creates a headless audio player (a
system that does not require a keyboard and mouse) that connects to your audio
files either over USB or a network. A special audio Hardware Attached on
Top (HAT) may be added to your Pi to provide a pristine audio connection to an
amplifier and speakers. There is even a plugin to add Spotify so that you can set
up your Raspberry Pi to access this service and play music over your sound
system.
PiFM radio transmitter: The PiFM radio transmitter turns your Raspberry Pi into
an FM transmitter, which you can use to send audio files over the air to a
standard FM radio receiver. Using a simple wire connected to one of the GPIO
pins (we will learn more about GPIO later), you can create an antenna for the
transmitted FM signal, which is surprisingly strong.
Stratux: ADS-B is the new standard in aviation where geo-location and weather
information are shared with ground controllers and pilots. The Stratux image
with additional hardware turns the Raspberry Pi into an ADS-B receiver of this
information.
RetroPie: RetroPie turns your Raspberry Pi into a retro game console by
emulating gaming consoles and computers from the past. Some of the emulations
include Amiga, Apple II, Atari 2600, and the Nintendo Entertainment System of
the early 1980s.
OctoPi: OctoPi turns your Raspberry Pi into a server for your 3D
printer. Through OctoPi, you may control your 3D printer over the network,
including viewing the status of your 3D printer using a webcam.
NOOBS: This is arguably the easiest way to install an operating system on the
Raspberry Pi. NOOBS stands for New Out-Of-the Box Software, and we will be
using NOOBS to install Raspbian.
Project overview
In this project, we will install the Raspbian operating system onto our Raspberry Pi. After
installation, we will take a quick tour of the operating system to familiarize ourselves with
it. We will start by formatting a microSD card to store our installation files. We will then
run the installation from the microSD card. After Raspbian has been installed, we will take
a quick look at it in order to familiarize ourselves with it.
This project should take about two hours to complete, as we install the Raspbian operating
system and take a quick look at it.
Getting started
The following is required to complete this project:
A Raspberry Pi Model 3 (2015 model or newer)
A USB power supply
A computer monitor
A USB keyboard
A USB mouse
A microSD RAM card
A Raspberry Pi NOOBS image (https://www.raspberrypi.org/downloads/noobs/)
Installing the Raspbian OS
The Raspbian OS is considered the default or go-to operating system for the Raspberry Pi.
In this section, we will install Raspbian using the NOOBS image.
Formatting a microSD card for Raspbian
Raspberry Pi uses a microSD card to store the operating system. This allows you to easily
switch between different operating systems (system images) for your Raspberry Pi. We will
be installing the default Raspbian OS for our projects using the NOOBS image.
Start by inserting the microSD card into a USB adapter and plug it into your computer:
You may need to format the microSD card. If so, use the utilities appropriate for your
computer's operating system to format the card to FAT32. It is recommended that you use a
card with a capacity of 8 GB or greater. For Windows OS and cards with 64 GB of capacity
or greater, a third-party tool such as FAT32 format should be used for formatting.
Copying the NOOBS files to the microSD RAM
Unzip the NOOBS image that you downloaded. Open up the unzipped directory and drag
the files over to the microSD card.
The files should look the same as in the following screenshot:
Running the installer
We will now install Raspbian on our Raspberry Pi. This step should be familiar to those
that have previous experience installing operating systems such as Windows or macOS.
The Raspbian operating system will be installed and will run off of our microSD card.
To install Raspbian onto our microSD card, do the following:
1. Start by inserting the microSD card into the appropriate slot on the Raspberry Pi. Be sure to install it so that the label side (opposite side of the exposed contacts) is facing up. Insert it with the metal contacts facing the board. The microSD card should have a slight ridge at the top of the label side, which is good for easy removal using a fingernail.
2. Insert a keyboard and mouse into the USB slots on the side, a monitor into the HDMI port, and lastly, a USB power cable into the power port. The Raspberry Pi does not have an on/off switch and will power up as soon as the power cable is connected:
3. After an initial black screen with rolling white text, you should see the following dialog:
4. In the previous screenshot, we clicked on the Language option. For our purposes, we will keep the default of English (UK). We will also keep the keyboard at the standard gb.
5. As the Raspberry Pi 3 has wireless LAN, we can set up our Wi-Fi (for older boards, please plug a Wi-Fi dongle into a USB port or use the wired LAN port and skip the next step):
6. Click on the Wifi networks (w) button. Choose the Authentication method using the radio buttons. Some routers are equipped with a WPS button that allows you to connect directly to the router. To use the password method, choose the Password authentication radio button and enter the password for your network. After connecting to your network, you will notice that there are now more operating system options to select from:
7. We will go with the top option, Raspbian. Check the box beside Raspbian [RECOMMENDED] and then click on the Install (i) button at the top-left corner of the dialog. Raspbian will start installing on your Raspberry Pi. You will see a progress bar with graphics describing various features of the Raspbian operating system:
8. After the progress bar hits 100%, the computer will reboot and you will see a screen with text before the default desktop loads up:
A quick overview of the Raspbian OS
The Raspbian desktop is similar to the desktops of other operating systems such as
Windows and macOS. Clicking the top-left button drops down the application menu where
you may access the various pre-installed programs. We may also shut down the Raspberry
Pi from this menu:
The Chromium web browser
The second button from the left loads the Google Chromium web browser for the
Raspberry Pi:
The Chromium browser is a lightweight browser that runs remarkably well on the
Raspberry Pi:
The home folder
The two-folders button opens up a window showing the home folder:
The home folder is a great place to start when looking for files on your Raspberry Pi. In fact,
when you take screenshots using either the scrot command or the Print Screen button, the
file is automatically stored in this folder:
The Terminal
The third button from the left opens up the Terminal. The Terminal permits command-line
access to Raspberry Pi's files and programs:
It is from the command line that you may update the Raspberry Pi using the sudo apt-get update and sudo apt-get dist-upgrade commands.
apt-get updates the packages list, and apt-get dist-upgrade updates the packages:
It's a good idea to run both of these commands right after installing Raspbian using the
sudo command. The default user for Raspbian on the Raspberry Pi is pi, which is part of
the Super Users group in Raspbian, and thus must use the sudo command (the default
password for the pi user is raspberry):
Mastering the command line is a virtue that many a programmer aspires
to acquire. Being able to rapidly type command after command looks so
cool that even movie makers have picked up on it (when was the last time
you saw the computer wiz in a movie clicking around the screen with a
mouse?). To assist you in becoming this uber cool computer wiz, here are
some basic Raspbian commands for you to master using the Terminal:
ls: Command to see the contents of the current directory
cd: Command to change directories. For example, use cd .. to move up a directory from where you currently are
pwd: Command to display the directory you are currently in
sudo: Allows the user to perform a task as the super user
shutdown: Command that allows the user to shut down the computer
from the Terminal command line
Mathematica
The third and fourth buttons are for Mathematica, and a terminal to access the Wolfram
language, respectively:
Mathematica spans all areas of technical computing and uses the Wolfram language as the
programming language. The areas in which Mathematica is used include machine learning,
image processing, neural networks and data science:
Mathematica, a proprietary software first released in 1988, can be used free for individuals
on the Raspberry Pi through a partnership that was announced in late 2013.
Now let's take a look at some of the programs that are accessed from the main drop-down menu.
Sonic Pi
Sonic Pi is a live coding environment for creating electronic music. It is accessed from the
Programming menu option. Sonic Pi is a creative way to create music as the user programs
loops, arpeggios, and soundscapes in real time by cutting and pasting code from one part of
the app to another. Synthesizers in Sonic Pi may be configured on a deep level, providing
a customized experience for the music coder:
Geared toward an EDM style of music, Sonic Pi may also be used to compose classical and
jazz styles of music.
Scratch and Scratch 2.0
Scratch and Scratch 2.0 are visual programming environments designed for teaching
programming to children. Using Scratch, the programmer creates their own animations
with looping and conditional statements.
Games may be created within the program. The first version of Scratch was released in 2003
by the Lifelong Kindergarten group at the MIT media lab. Scratch 2.0 was released in 2013,
and development is currently underway with Scratch 3.0:
Scratch and Scratch 2.0 may be accessed under the Programming menu option.
LibreOffice
LibreOffice is a free and open source office suite that forked over from OpenOffice in
2010. The LibreOffice suite consists of a word processor, a spreadsheet program, a
presentation program, a vector graphics editor, a program for creating and editing
mathematical formulae, and a database management program. The LibreOffice suite of
programs may be accessed through the LibreOffice menu option:
Summary
We started this chapter with a look at the history of the Raspberry Pi. What started as an
initiative to promote programming to a new generation has grown into a global
phenomenon. We then downloaded the NOOBS image and installed the Raspbian OS, the
default operating system for the Raspberry Pi. This involved formatting and preparing a
microSD card for the NOOBS files.
It's easy to think that a computer as inexpensive and small as the Raspberry Pi is not all
that powerful. We demonstrated that the Raspberry Pi is indeed a very capable computer,
as we took a look at some of the applications that come pre-installed with the Raspbian OS.
In Chapter 2, Writing Python Programs Using Raspberry Pi, we will begin Python coding
using the Raspberry Pi and some of the development tools available in Raspbian.
Questions
1. What year did the first Raspberry Pi come out?
2. What upgrades did the Raspberry Pi 3 Model B+ have over the previous version?
3. What does NOOBS stand for?
4. What is the name of the pre-installed application that allows for creating music with Python code?
5. Where is the operating system stored for the Raspberry Pi?
6. What is the name of the visual programming environment designed for children that comes pre-installed with Raspbian?
7. What is the name of the language used in Mathematica?
8. What is the default username and password for Raspbian?
9. What does GPIO stand for?
10. What is RetroPie?
11. True or false? Clicking on the two-folders icon on the main bar loads the home folder.
12. True or false? The microSD card slot is located at the bottom of the Raspberry Pi.
13. True or false? To shut down the Raspberry Pi, select Shutdown from the Application menu.
14. True or false? You may only install the Raspbian OS with NOOBS.
15. True or false? Bluetooth low energy refers to people that eat too many blueberries and have a hard time waking up in the morning.
Further reading
For more information on the Raspberry Pi, please consult the main Raspberry Pi website at https://www.raspberrypi.org/.
2
Writing Python Programs Using
Raspberry Pi
In this chapter, we will start writing Python programs with the Raspberry Pi. Python is the
official programming language for Raspberry Pi and is represented by the Pi in the name.
The following topics will be covered in this chapter:
Python tools for Raspberry Pi
Using the Python command line
Writing a simple Python program
Python comes pre-installed on Raspbian in two versions, 2.7.14 and 3.6.5 (as of this
writing) representing Python 2 and Python 3, respectively. The differences between the two
versions are beyond the scope of this book. We will use Python 3 in this book unless
otherwise stated.
Project overview
In this project, we will become comfortable with Python development on Raspberry Pi. You
may be used to development tools or Integrated Development Environments (IDEs) on
other systems such as Windows, macOS, and Linux. In this chapter, we will get our feet wet
in terms of using Raspberry Pi as a development machine. We will start off slowly with
Python as we get our development juices flowing.
Technical requirements
The following is required to complete this project:
Raspberry Pi Model 3 (2015 model or newer)
USB power supply
Computer monitor
USB keyboard
USB mouse
Python tools for Raspberry Pi
The following are pre-installed tools that we may use for Python development on
Raspberry Pi using Raspbian. This list is by no means exhaustive; other tools may be
installed as well.
The Terminal
As Python comes pre-installed with Raspbian, an easy way to launch it is to use the
Terminal. As we can see in the following screenshot, the Python interpreter can be accessed
by simply typing python at the command prompt in a Terminal window:
We may test it out by running the simplest of programs:
print 'hello'
Notice the Python version in the line after the command, 2.7.13. The python command in
Raspbian is tied to Python 2. In order to access Python 3, we must type the python3
command in a Terminal window:
Integrated Development and Learning
Environment
The Integrated Development and Learning Environment (IDLE) has been the default IDE
for Python since version 1.5.2. It is written in Python itself using the Tkinter GUI toolkit and
is intended to be a simple IDE for beginners:
IDLE features a multi-window text editor with auto-completion, syntax highlighting, and
smart indent. IDLE should be familiar to anyone that has used Python. There are two
versions of IDLE in Raspbian, one for Python 2 and the other for Python 3. Both programs
are accessed from Application Menu | Programming.
Thonny
Thonny is an IDE that comes packaged with Raspbian. With Thonny, we may evaluate
expressions by using the debug function. Thonny is also available for macOS and
Windows.
To load Thonny, go to Application Menu | Programming | Thonny:
Above is the default screen for Thonny. Panels to view variables in your program, as well
as a panel to view the filesystem, are toggled on and off from the View menu. Thonny's
compact structure makes it ideal for our projects.
We will be learning a bit more about Thonny as we go through the rest of this book.
Using the Python command line
Let's start doing some coding. Whenever I start using a new operating system for
development, I like to go through some basics just to get my mind back into it (I'm speaking
particularly to those of us who are all too familiar with coding into the wee hours of the
morning).
The simplest way to access Python is from the Terminal. We will run a simple program to
get started. Load the Terminal from the main toolbar and type python3 at the prompt.
Type the following line and hit Enter:
from datetime import datetime
This line loads the datetime object from the datetime module into our instance of
Python. Next type the following and hit Enter:
print(datetime.now())
You should see the current date and time printed to the screen:
Let's try another example. Type the following into the shell:
import pyjokes
This is a library that's used to tell programming jokes. To have a joke printed out, type the
following and hit Enter:
pyjokes.get_joke()
You should see the following output:
OK, so this may not be your cup of tea (or coffee, for the Java programmers out there).
However, this example demonstrates how easy it is to import a Python module and utilize
it.
If you receive an ImportError, it is because pyjokes did not come pre-
installed with your version of the OS. As with the weather-api example
that follows, typing sudo pip3 install pyjokes in a Terminal window
will install pyjokes onto your Raspberry Pi.
What these Python modules have in common is their availability for our use. We simply
need to import them directly into the shell in order to use them, as they are pre-installed
with our Raspbian operating system. However, what about libraries that are not installed?
Let's try an example. In the Python shell, type the following and hit Enter:
import weather
You should see the following:
Since the weather package is not installed on our Raspberry Pi, we get an error when trying
to import. In order to install the package, we use the Python command-line utility pip, or
in our case, pip3 for Python 3:
Open up a new Terminal (make sure that you're in a Terminal session and not a1.
Python shell). Type the following:
pip3 install weather-api
Hit Enter. You will see the following:2.
After the process is finished, we will have the weather-api package installed on3.
our Raspberry Pi. This package will allow us to access weather information from
Yahoo! Weather.
Now let's try out a few examples:
Type python3 and hit Enter. You should now be back in the Python shell. 1.
Type the following and hit Enter:2.
from weather import Weather
from weather import Unit
What we have done is imported Weather and Unit from weather. Type the3.
following and hit Enter:
weather = Weather(unit=Unit.CELSIUS)
This instantiates a weather object called weather. Now, let's make use of this4.
object. Type the following and hit Enter:
lookup = weather.lookup(4118)
We now have a variable named lookup that's been created with the code 4118,5.
which corresponds to the city of Toronto, Canada. Type the following and hit Enter:
condition = lookup.condition
We now have a variable called condition that contains the current weather6.
information for the city of Toronto, Canada via the lookup variable. To view this
information, type the following and hit Enter:
print(condition.text)
You should get a description of the weather conditions in Toronto, Canada.7.
When I ran it, the following was returned:
Partly Cloudy
Now that we've seen that writing Python code on the Raspberry Pi is just as easy as writing
it on other operating systems, let's take it a step further and write a simple program. We
will use Thonny for this.
A Python module is a single Python file containing code that may be
imported for use. A Python package is a collection of Python modules.
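As a quick, hypothetical illustration (the file and folder names below are made up for this example), a module is a single .py file, while a package is a folder of modules:

# weather_utils.py -- a single module
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# A package, by contrast, is a directory of modules with an __init__.py file:
#
#   weather_tools/
#       __init__.py
#       conversions.py
#       lookups.py
#
# Both are imported the same way:
#   import weather_utils
#   from weather_tools import conversions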
Writing a simple Python program
We will write a simple Python program that contains a class. To facilitate this, we will use
Thonny, a Python IDE that comes pre-installed with Raspbian and has excellent debug and
variable introspection functionalities. You will find that its ease of use makes it ideal for the
development of our projects.
Creating the class
We will begin our program by creating a class. A class may be seen as a template for
creating objects. A class contains methods and variables. To create a class in Python with
Thonny, do the following:
Load Thonny through Application Menu | Programming | Thonny. Select New1.
from the top left and type the following code:
class CurrentWeather:
    weather_data = {'Toronto':['13','partly sunny','8 km/h NW'],
                    'Montreal':['16','mostly sunny','22 km/h W'],
                    'Vancouver':['18','thunder showers','10 km/h NE'],
                    'New York':['17','mostly cloudy','5 km/h SE'],
                    'Los Angeles':['28','sunny','4 km/h SW'],
                    'London':['12','mostly cloudy','8 km/h NW'],
                    'Mumbai':['33','humid and foggy','2 km/h S']
                    }

    def __init__(self, city):
        self.city = city

    def getTemperature(self):
        return self.weather_data[self.city][0]

    def getWeatherConditions(self):
        return self.weather_data[self.city][1]

    def getWindSpeed(self):
        return self.weather_data[self.city][2]
As you can see, we've created a class called CurrentWeather that will hold weather
conditions for whichever city we instantiated the class for. We are using a class as it will
allow us to keep our code clean and prepare us for using outside classes later on.
Creating the object
We will now create an object from our CurrentWeather class. We will use London as our
city:
Click on the Run Current Script button (a green circle with a white arrow) in the1.
top menu to load our code into the Python interpreter.
At the command line of the Thonny shell, type the following and hit Enter:2.
londonWeather = CurrentWeather('London')
We have just created an object in our code called londonWeather from
our CurrentWeather class. By passing 'London' to the constructor (__init__), we
set our new object to only send weather information for the city of London. This is
done through the class attribute city (self.city).
Type the following at the shell command line:3.
londonWeather.getTemperature()
You should get the answer '12' on the next line.
To view the weather conditions for London, type the following:4.
londonWeather.getWeatherConditions()
You should see 'mostly cloudy' on the next line.
To get the wind speed, type the following and hit Enter:5.
londonWeather.getWindSpeed()
You should get 8 km/h NW on the next line.
Our CurrentWeather class simulates data coming from a web service for weather data.
The actual data in our class is stored in the weather_data variable.
In future code, whenever possible, we will wrap calls to web services in
classes in order to keep things organized and make the code more
readable.
Using the object inspector
Let's do a little analysis of our code:
From the View menu, select Object inspector and Variables. You should see the1.
following:
Highlight the londonWeather variable under the Variables tab. We can see that2.
londonWeather is an object of type CurrentWeather. In the Object
inspector, we can also see that the attribute city is set to 'London'. This type of
variable inspection is invaluable in troubleshooting code.
Testing your class
It is very important to test your code as you write it so that you can catch errors early on:
Add the following function to the CurrentWeather class:1.
    def getCity(self):
        return self.city
Add the following to the bottom of CurrentWeather.py. The first line should2.
have the same indentation as the class definition as this function is not part of the
class:
if __name__ == "__main__":
    currentWeather = CurrentWeather('Toronto')
    wind_dir_str_len = 2
    if currentWeather.getWindSpeed()[-2:-1] == ' ':
        wind_dir_str_len = 1
    print("The current temperature in",
          currentWeather.getCity(), "is",
          currentWeather.getTemperature(),
          "degrees Celsius,",
          "the weather conditions are",
          currentWeather.getWeatherConditions(),
          "and the wind is coming out of the",
          currentWeather.getWindSpeed()[-(wind_dir_str_len):],
          "direction with a speed of",
          currentWeather.getWindSpeed()
          [0:len(currentWeather.getWindSpeed())
          - (wind_dir_str_len)]
          )
Run the code by clicking on the Run current script button. You should see the3.
following:
The current temperature in Toronto is 13 degrees Celsius, the
weather conditions are partly sunny and the wind is coming out of
the NW direction with a speed of 8 km/h
The if __name__ == "__main__": check allows us to test the class
in the file directly as the if statement will only be true if the file is run
directly. In other words, imports of CurrentWeather.py will not execute
the code following the if statement. We will explore this method more as
we work our way through this book.
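As a quick, hedged check of this behaviour (assuming CurrentWeather.py is in the current directory), importing the module from the Python 3 shell does not run the test block, while running the file directly does:

# Run directly from the Terminal -- the test block executes:
#   python3 CurrentWeather.py

# Imported from the Python 3 shell -- the test block is skipped:
import CurrentWeather
weather = CurrentWeather.CurrentWeather('London')
print(weather.getTemperature())   # prints 12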
Making the code flexible
Code that is more generic is more flexible. The following are two examples of how we can
make the code less specific.
Example one
The wind_dir_str_len variable is used to determine the length of the string for wind
direction. For example, a direction of S would only use one character, whereas NW would
use two. This is done so that an extra space is not included in our output when the direction
is represented by only one character:
wind_dir_str_len = 2
if currentWeather.getWindSpeed()[-2:-1] == ' ':
    wind_dir_str_len = 1
By looking for a space using [-2:-1], we can determine the length of this string and
change it to 1 if there is a space (as we are parsing back two characters from the end of the
string).
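A minimal illustration of this slicing, using two of the wind strings from our weather_data dictionary, can be run in the Python shell:

wind = '8 km/h NW'
print(wind[-2:-1])   # 'N' -- not a space, so the direction is two characters long
print(wind[-2:])     # 'NW'

wind = '22 km/h W'
print(wind[-2:-1])   # ' ' -- a space, so the direction is only one character long
print(wind[-1:])     # 'W'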
Example two
By adding the getCity method to our class, we are able to create classes with more generic
names like currentWeather as opposed to torontoWeather. This makes it easy to reuse
our code. We can demonstrate this by changing the following line:
currentWeather = CurrentWeather('Toronto')
We will change it to this:
currentWeather = CurrentWeather('Mumbai')
If we run the code again by clicking on the Run button, we get different values for all the
conditions in the sentence:
The current temperature in Mumbai is 33 degrees Celsius, the weather
conditions are humid and foggy and the wind is coming out of the S
direction with a speed of 2 km/h
Summary
We began this chapter by discussing the various tools that are available for Python
development in Raspbian. The quickest and easiest way to run Python is from the Terminal
window. Since Python comes pre-installed in Raspbian, the python command in the
Terminal prompt loads Python (Python 2, in this case). There is no need to set environment
variables in order to have the command find the program. Python 3 is run from the
Terminal by typing python3.
We also took a brief look at IDLE, the default IDE for Python development. IDLE stands
for Integrated Development and Learning Environment and is an excellent tool for
beginners to use when learning Python.
Thonny is another Python IDE that comes pre-installed with Raspbian. Thonny has
excellent debug and variable introspection functionalities. It too is designed for beginning
Python developers; however, its ease of use and object inspector make it ideal for the
development of our projects. We will be using Thonny more as we progress through the
book.
We then jumped right into programming in order to get our development juices flowing.
We started out with simple expressions using the Terminal and concluded with a weather
data example designed to emulate objects that are used to call web services.
In Chapter 3, Using the GPIO to Connect to the Outside World, we will jump right into the
most powerful feature of programming on Raspberry Pi, the GPIO. The GPIO allows us to
interact with the real world through the use of devices connected to this port on Raspberry
Pi. GPIO programming will take our Python skills to a whole new level.
Questions
Which operating systems is Thonny available for?1.
How do we enter Python 2 from the Terminal command line?2.
Which tool in Thonny do we use to view what is inside an object?3.
Give two reasons as to why we are using an object in our weather example code.4.
What is the advantage of adding a method called getCity to5.
the CurrentWeather class?
What language is IDLE written in?6.
What are the two steps taken in order to print the current date and time?7.
In our code, how did we compensate for wind speed directions that are8.
represented by only one letter?
What does the if __name__ =="__main__" statement do?9.
What does IDLE stand for?10.
Further reading
Python 3 - Object Oriented Programming by Dusty Phillips, Packt Publishing.
3
Using the GPIO to Connect to
the Outside World
In this chapter we will start unlocking the real power behind the Raspberry Pi—the GPIO,
or General Purpose Input Output. The GPIO allows you to connect your Raspberry Pi to
the outside world through the use of pins that may be set to input or output, and are
controlled through code.
The following topics will be covered in this chapter:
Python libraries for the Raspberry Pi
Accessing Raspberry Pi’s GPIO
Setting up the circuit
Hello LED
Project overview
In this chapter, we start by exploring Raspberry Pi-specific libraries for Python. We will
demonstrate these with a few examples by using the Raspberry Pi camera module and
Pibrella HAT. We will try a few coding examples with the Sense Hat emulator before
moving on to designing a physical circuit using the Fritzing program. Using a breadboard,
we will set up this circuit and connect it to our Raspberry Pi.
We will finish off this chapter by building a Morse code generator that transmits weather
data in Morse code from the class we created in Chapter 2, Writing Python Programs Using
Raspberry Pi. This chapter should take an afternoon to complete.
Technical requirements
The following is required to complete this project:
A Raspberry Pi Model 3 (2015 model or newer)
A USB power supply
Computer monitor
A USB keyboard
A USB mouse
A Raspberry Pi camera module (optional)—https://www.raspberrypi.org/products/camera-module-v2/
A Pibrella HAT (optional)
A Sense HAT (optional, as we will be using the emulator in this
chapter)—https://www.raspberrypi.org/products/sense-hat/
A breadboard
Male-to-female jumper wires
An LED
Python libraries for the Raspberry Pi
We will turn our attention to the Python libraries or packages that come pre-installed with
Raspbian. To view these packages from Thonny, click on Tools | Manage Packages. After a
short delay, you should see many packages listed in the dialog:
Let's explore a few of these packages.
picamera
The camera port, or CSI, on the Raspberry Pi allows you to connect the specially designed
Raspberry Pi camera module to your Pi. This camera can take both photos and videos, and
has functionality to do time-lapse photography and slow-motion video recording. The
picamera package gives us access to the camera through Python. The following is a picture
of a Raspberry Pi camera module connected to a Raspberry Pi 3 Model B through the
camera port:
Connect your Raspberry Pi camera module to your Pi, open up Thonny, and type in the
following code:
import picamera
import time
picam = picamera.PiCamera()
picam.start_preview()
time.sleep(10)
picam.stop_preview()
picam.close()
This code imports the picamera and time packages, and then creates a picamera object
called picam. From there, we start the preview and then sleep for 10 seconds, before
stopping the preview and then closing the camera. After running the program, you should
see a 10 second preview from the camera on your screen.
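picamera can do far more than display a preview. The following is a small sketch (the file name is arbitrary) that captures a still image to the current directory:

import picamera
import time

picam = picamera.PiCamera()
picam.start_preview()
time.sleep(2)                     # give the sensor a moment to adjust to the light
picam.capture('test-photo.jpg')   # save a still image
picam.stop_preview()
picam.close()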
Pillow
The Pillow package is used for image processing with Python. To test this out, download an
image to the same directory as your project files. Create a new file in Thonny and type in
the following:
from PIL import Image
img = Image.open('image.png')
print(img.format, img.size)
You should see the format and size of the image (in brackets) printed at the command line
that follows.
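Pillow can also transform images. A short sketch, assuming the same image.png is in your working directory, rotates a copy and scales it down:

from PIL import Image

img = Image.open('image.png')
thumbnail = img.rotate(90).resize((128, 128))   # returns a new, transformed copy
thumbnail.save('image-thumbnail.png')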
sense-hat and sense-emu
The Sense HAT is a sophisticated add-on board for the Raspberry Pi. The Sense HAT is the
main component in the Astro Pi kit, part of a program to have young students program a
Raspberry Pi for the International Space Station.
The Astro Pi competition was officially opened in January of 2015 to all
primary and secondary school-aged children in the United Kingdom.
During a mission to the International Space Station, British astronaut Tim
Peake deployed Astro Pi computers on board the station.
The winning Astro Pi competition code was loaded onto an Astro Pi while
in orbit. The data generated was collected and sent back to Earth.
The Sense HAT contains an array of LEDs that can be used as a display. The Sense HAT
also has the following sensors onboard:
Accelerometer
Temperature sensor
Magnetometer
Barometric pressure sensor
Humidity sensor
Gyroscope
We can access the sensors and LEDs on the Sense HAT through the sense-hat package.
For those that do not have a Sense HAT, the Sense HAT emulator in Raspbian may be used
instead. We use the sense-emu package to access the emulated sensors and LED display on
the Sense HAT emulator.
To demonstrate this, perform the following steps:
Create a new file in Thonny and name it sense-hat-test.py, or something1.
similar.
Type in the following code:2.
from sense_emu import SenseHat
sense_emulator = SenseHat()
sense_emulator.show_message('Hello World')
Load the Sense HAT Emulator program from Application Menu | Programming3.
| Sense HAT Emulator.
Arrange your screen so that you can see the LED display of the Sense HAT4.
emulator and the full window of Thonny (see the following screenshot):
Click on the Run current script button.5.
You should see the Hello World! message scroll across the LED display of the6.
Sense HAT emulator one letter at a time (see the previous screenshot).
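The emulator exposes the same sensor methods as the physical Sense HAT. The following sketch reads the emulated environmental sensors; the values returned simply reflect wherever the emulator's sliders happen to be set:

from sense_emu import SenseHat

sense_emulator = SenseHat()

print('Temperature:', round(sense_emulator.get_temperature(), 1))   # degrees Celsius
print('Humidity:', round(sense_emulator.get_humidity(), 1))         # % relative humidity
print('Pressure:', round(sense_emulator.get_pressure(), 1))         # millibars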
Accessing Raspberry Pi's GPIO
Through the GPIO, we are able to connect to the outside world. Here is a diagram of the
Raspberry Pi GPIO pins:
The following is an explanation of these pins:
Red pins represent power coming out of the GPIO. The GPIO provides 3.3 Volts
and 5 Volts.
Black pins represent pins used for electrical ground. As you can see, there are 8
ground pins on the GPIO.
Blue pins are used for Raspberry Pi Hardware Attached on Top (HAT) boards. They
allow communication between the Raspberry Pi and the HAT's Electrically
Erasable Programmable Read-Only Memory (EEPROM).
Green pins represent the input and output pins that we may program for. Please
note that some of the green GPIO pins double up with additional functionality.
We will not be covering the additional functionality for this project.
The GPIO is what lies at the heart of the Raspberry Pi. We can connect LEDs, buttons,
buzzers, and so on to the Raspberry Pi through the GPIO. We can also access the GPIO
through HATs designed for the Raspberry Pi. One of those, called Pibrella, is what we
will use next to explore connecting to the GPIO through Python code.
Raspberry Pi 1 Models A and B only have the first 26 pins (as shown by
the dotted line). Models since then, including Raspberry Pi 1 Models A+
and B+, Raspberry Pi 2, Raspberry Pi Zero and Zero W, and Raspberry Pi
3 Model B and B+, have 40 GPIO pins.
Pibrella
Pibrella is a relatively inexpensive Raspberry Pi HAT that makes connecting to the GPIO
easy. The following are the components on-board of Pibrella:
1 red LED
1 yellow LED
1 green LED
Small speaker
Push button
4 inputs
4 outputs
Micro USB power connector for delivering more power to the outputs
Pibrella was designed for early Raspberry Pi models and thus only has a 26-pin header. It
can, however, be connected to later models through the first 26 pins.
To install the Pibrella Hat, line up the pin connectors on the Pibrella with the first 26 pins
on the Raspberry Pi, and push down. In the following picture, we are installing Pibrella on
a Raspberry Pi 3 Model B:
Pibrella should fit snugly when installed:
The libraries needed to connect to Pibrella do not come pre-installed with Raspbian (as of
the time of writing), so we have to install them ourselves. To do that, we will use the pip3
command from the Terminal:
Load the Terminal by clicking on it on the top tool bar (fourth icon from the left).1.
At the Command Prompt, type the following:
sudo pip3 install pibrella
You should see the package load from the Terminal:2.
With the Pibrella library, there is no need to know the GPIO pin numbers in3.
order to access the GPIO. The functionality is wrapped up in the Pibrella
object we import into our code. We will do a short demonstration.
Create a new file in Thonny called pibrella-test.py, or name it something4.
similar. Type in the following code:
import pibrella
import time
pibrella.light.red.on()
time.sleep(5)
pibrella.light.red.off()
pibrella.buzzer.success()
Run the code by clicking on the Run current script button. If you typed5.
everything in correctly, you should see the red light on the Pibrella board turn on
for 5 seconds before a short melody is played over the speaker.
Congratulations, you have now crossed the threshold into the world of physical computing.
RPi.GPIO
The standard Python package for accessing the GPIO is called RPi.GPIO. The best way to
describe how it works is with some code (this is for demonstration purposes only; we will
be running code to access the GPIO in the upcoming section):
import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, GPIO.HIGH)
time.sleep(5)
GPIO.output(18, GPIO.LOW)
As you can see, this code seems a little bit confusing. We will step through it:
First, we import the RPi.GPIO and time libraries:1.
import RPi.GPIO as GPIO
import time
Then, we set the mode to BCM:2.
GPIO.setmode(GPIO.BCM)
In BCM mode, we access the pin through GPIO numbers (the ones shown in our3.
Raspberry Pi GPIO graphic). The alternative is to access the pins through their
physical location (GPIO.BOARD).
To set GPIO pin 18 to an output, we use the following line:4.
GPIO.setup(18, GPIO.OUT)
We then set GPIO 18 to HIGH for 5 seconds before setting it to LOW:5.
GPIO.output(18, GPIO.HIGH)
time.sleep(5)
GPIO.output(18, GPIO.LOW)
If we had set up the circuit and run the code, we would see our LED light for 5 seconds
before turning off, similar to the Pibrella example.
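For comparison, and again only as a demonstration, the same sketch in GPIO.BOARD mode addresses the pin by its physical position on the header; GPIO 18 sits at physical pin 12 on the 40-pin header:

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)     # address pins by their physical position
GPIO.setup(12, GPIO.OUT)     # physical pin 12 is the same pin as GPIO 18 in BCM mode
GPIO.output(12, GPIO.HIGH)
time.sleep(5)
GPIO.output(12, GPIO.LOW)
GPIO.cleanup()               # release the pin when finished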
GPIO Zero
An alternative to RPi.GPIO is the GPIO Zero package. As with RPi.GPIO, this package
comes pre-installed with Raspbian. The zero in the name refers to zero boilerplate or setup
code (code that we are forced to enter every time).
To accomplish the same task of turning an LED on and off for 5 seconds, we use the
following code:
from gpiozero import LED
import time
led = LED(18)
led.on()
time.sleep(5)
led.off()
As with our RPi.GPIO example, this code is for demonstration purposes only as we haven't
set up a circuit yet. It's obvious that the GPIO Zero code is far simpler than the RPi.GPIO
example. This code is pretty self-explanatory.
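GPIO Zero has equally compact classes for inputs. As a hedged sketch (the button pin number here is just an example, and we have not wired a button yet), an LED on GPIO 18 could follow a push button wired between GPIO 23 and ground:

from gpiozero import LED, Button
from signal import pause

led = LED(18)
button = Button(23)            # example pin; any free GPIO pin would do

button.when_pressed = led.on   # turn the LED on while the button is held
button.when_released = led.off

pause()                        # keep the script running to listen for events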
In the following sections, we will start building a physical circuit on a breadboard with an
LED, and use our code to turn it on and off.
Setting up the circuit
The Pibrella HAT gave us a simple way of programming the GPIO, however, the ultimate
goal of Raspberry Pi projects is to create a customized working circuit. We will now take
the steps to design our circuit, and then create the circuit using a breadboard.
The first step is to design our circuit on the computer.
Fritzing
Fritzing is a free circuit design software available for Windows, macOS, and Linux. There is
a version in the Raspberry Pi store that we will install on our Raspberry Pi:
From the Application Menu, choose Preferences | Add / Remove Software. In1.
the Search box, type in Fritzing:
Select all three boxes and click on Apply, and then OK. After installation, you2.
should be able to load Fritzing from Application Menu | Programming |
Fritzing.
Click on the Breadboard tab to access the breadboard design screen. A full size3.
breadboard dominates the middle of the screen. We will make it smaller as our
circuit is small and simple.
Click on the breadboard. In the Inspector box, you will see a heading called4.
Properties.
Click on the Size dropdown and select Mini.5.
To add a Raspberry Pi to our circuit, type in Raspberry Pi in the search box.6.
Drag a Raspberry Pi 3 under our breadboard.
From here, we may drag and drop components onto our breadboard.7.
Add an LED and 330 Ohm resistor to our breadboard, shown in the following8.
diagram. We use the resistor to protect both the LED and Raspberry Pi from
excessive currents that may cause damage:
You will notice that as we hover our mouse over each pin on our Raspberry Pi9.
component, a yellow tip will pop up with the pin's BCM name. Click on GPIO 18
and drag a line over to the positive leg of our LED (the longer one).
Do the same to drag a GND connection to the left-hand side of the resistor.10.
This is the circuit we will build for our Raspberry Pi.
Building our circuit
To build our physical circuit, start by inserting components into our breadboard. Referring
to our diagram from before, we can see that some of the holes are green. This indicates
continuity in the circuit. For example, we connect the negative leg of the LED to the 330
Ohm resistor through the same vertical column. Thus, the two component legs are
connected together through the breadboard.
We take this into account as we start to place our components on the breadboard:
Insert the LED into our breadboard, as shown in the preceding picture. We are1.
following our Fritzing diagram and have the positive leg in the lower hole.
Follow our Fritzing diagram and wire up the 330 Ohm resistor. Using female-to-2.
male jumper wires, connect the Raspberry Pi to our breadboard.
Refer to our Raspberry Pi GPIO diagram to find GPIO 18 and GND on the3.
Raspberry Pi board.
It is a good practice to have the Raspberry Pi powered off when
connecting jumpers to the GPIO.
As you can see in the following image, the complete circuit resembles our Fritzing
diagram (only our breadboard and Raspberry Pi are turned sideways):
Connect the Raspberry Pi back up to the monitor, power supply, keyboard, and4.
mouse.
We are now ready to program our first real GPIO circuit.
Hello LED
We will jump right into the code:
Create a new file in Thonny, and call it Hello LED.py or something similar.1.
Type in the following code and run it:2.
from gpiozero import LED
led = LED(18)
led.blink(1,1,10)
Blink LED using gpiozero
If we wired up our circuit and typed in our code correctly, we should see our LED blink for
10 seconds in 1 second intervals. The blink function in the gpiozero LED object allows us
to set on_time (the length of time in seconds that the LED stays on), off_time (the length
of time in seconds that the LED is turned off for), n or the number of times the LED blinks,
and background (set to True to allow other code to run while the LED is flashing).
The blink function call with its default parameters looks like this:
blink(on_time=1, off_time=1, n=None, background=True)
Without parameters passed into the function, the LED will blink non-stop at 1 second
intervals. Notice how we do not need to import the time library like we did when we used
the RPi.GPIO package for accessing the GPIO. We simply pass a number into the blink
function to represent the time in seconds we want the LED on or off.
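As a small sketch of the background parameter, leaving it at its default of True lets the LED keep blinking while the rest of the script carries on; the pause() call simply keeps the program alive:

from gpiozero import LED
from signal import pause

led = LED(18)
led.blink()    # defaults: 1 second on, 1 second off, forever, in the background
print('The LED keeps blinking while this line prints.')
pause()        # keep the script running so the background thread can drive the LED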
Morse code weather data
In Chapter 2, Writing Python Programs Using Raspberry Pi, we wrote code that simulates
calls to a web service that supplies weather information. Taking what we learned in this
chapter, let's revisit that code and give it a physical computing upgrade. We will use our
LED to flash a Morse code representation of our weather data.
Many of us believe that the world only started to become connected in the
1990s with the World Wide Web. Little do we realize that we already had
such a world beginning in the 19th century with the introduction of the
telegraph and trans-world telegraph cables. The language of this so-called
Victorian Internet was Morse code, with the Morse code operator as its
gate keeper.
The following are the steps for flashing Morse code representation of our weather data:
We will first start by creating a MorseCodeGenerator class:1.
from gpiozero import LED
from time import sleep

class MorseCodeGenerator:

    led = LED(18)
    dot_duration = 0.3
    dash_duration = dot_duration * 3
    word_spacing_duration = dot_duration * 7

    MORSE_CODE = {
        'A': '.-', 'B': '-...', 'C': '-.-.',
        'D': '-..', 'E': '.', 'F': '..-.',
        'G': '--.', 'H': '....', 'I': '..',
        'J': '.---', 'K': '-.-', 'L': '.-..',
        'M': '--', 'N': '-.', 'O': '---',
        'P': '.--.', 'Q': '--.-', 'R': '.-.',
        'S': '...', 'T': '-', 'U': '..-',
        'V': '...-', 'W': '.--', 'X': '-..-',
        'Y': '-.--', 'Z': '--..', '0': '-----',
        '1': '.----', '2': '..---', '3': '...--',
        '4': '....-', '5': '.....', '6': '-....',
        '7': '--...', '8': '---..', '9': '----.',
        ' ': ' '
    }

    def transmit_message(self, message):
        for letter in message:
            morse_code_letter = self.MORSE_CODE[letter.upper()]
            for dash_dot in morse_code_letter:
                if dash_dot == '.':
                    self.dot()
                elif dash_dot == '-':
                    self.dash()
                elif dash_dot == ' ':
                    self.word_spacing()
            self.letter_spacing()

    def dot(self):
        self.led.blink(self.dot_duration, self.dot_duration, 1, False)

    def dash(self):
        self.led.blink(self.dash_duration, self.dot_duration, 1, False)

    def letter_spacing(self):
        sleep(self.dot_duration)

    def word_spacing(self):
        sleep(self.word_spacing_duration - self.dot_duration)

if __name__ == "__main__":
    morse_code_generator = MorseCodeGenerator()
    morse_code_generator.transmit_message('SOS')
After importing the gpiozero and time libraries into our2.
MorseCodeGenerator class, we define GPIO 18 as our LED with the line
led=LED(18)
We set the duration of how long a dot lasts with the line dot_duration = 0.33.
We then define the duration of the dash and spacing between words based on4.
the dot_duration
To speed up or slow down our Morse code transmission, we may adjust5.
dot_duration accordingly
We use a Python dictionary with the name MORSE_CODE. We use this dictionary6.
to translate letters to Morse code
Our transmit_message function steps through each letter of the message, and7.
then through each character of that letter's Morse code equivalent, using the
dash_dot variable
The magic of our class happens in the dot and dash methods by using the blink8.
function from the gpiozero library:
    def dot(self):
        self.led.blink(self.dot_duration,
                       self.dot_duration, 1, False)
In the dot method, we can see that we turn the LED on for the duration set in
dot_duration, and then we turn it off for the same amount of time. We only blink it once
as set by the number 1 in the blink method call. We also set the background parameter
to False.
This last parameter is very important: if we leave it at the default of True, the code will
continue to run before the LED has a chance to blink on and off. Basically, the code won't
work unless the background parameter is set to False.
We forgo the usual Hello World for our test message and instead use the standard SOS,
which is familiar to the most casual of Morse code enthusiasts. We may test our class by
clicking on the Run button and, if all is set up correctly, we will see the LED blink SOS in
Morse code.
Now, let's revisit our CurrentWeather class from Chapter 2, Writing Python Programs
Using Raspberry Pi. We will make a few minor modifications:
from MorseCodeGenerator import MorseCodeGenerator

class CurrentWeather:

    weather_data = {
        'Toronto':['13','partly sunny','8 NW'],
        'Montreal':['16','mostly sunny','22 W'],
        'Vancouver':['18','thunder showers','10 NE'],
        'New York':['17','mostly cloudy','5 SE'],
        'Los Angeles':['28','sunny','4 SW'],
        'London':['12','mostly cloudy','8 NW'],
        'Mumbai':['33','humid and foggy','2 S']
    }

    def __init__(self, city):
        self.city = city

    def getTemperature(self):
        return self.weather_data[self.city][0]

    def getWeatherConditions(self):
        return self.weather_data[self.city][1]

    def getWindSpeed(self):
        return self.weather_data[self.city][2]

    def getCity(self):
        return self.city

if __name__ == "__main__":
    current_weather = CurrentWeather('Toronto')
    morse_code_generator = MorseCodeGenerator()
    morse_code_generator.transmit_message(current_weather.getWeatherConditions())
We start by importing our MorseCodeGenerator class (make sure that both files are in the
same directory). As we do not have a Morse code equivalent of /, we take out the km/h in
the weather_data data set. The rest of the class remains the same as it did in Chapter 2,
Writing Python Programs Using Raspberry Pi. In our test section, we instantiate both a
CurrentWeather class and a MorseCodeGenerator class. Using the CurrentWeather
class, we pass the weather conditions for Toronto into the MorseCodeGenerator class.
If there aren't any mistakes made in entering the code, we should see our LED blink
partly sunny in Morse code.
Summary
A lot was covered in this chapter. By the end of it, you should be feeling pretty good about
developing applications on the Raspberry Pi.
The picamera, Pillow, and sense-hat libraries make it easy to communicate with the
outside world with your Raspberry Pi. Using the Raspberry Pi camera module and
picamera, we open up a whole new world of possibilities with our Pi. We only touched on
a small part of what picamera can do. Additionally, we only scratched the surface of
image processing with the Pillow library. The Sense HAT emulator allowed us to avoid
spending money on the actual HAT while still testing our code. With sense-hat and the
Raspberry Pi Sense HAT, we truly expand our reach into the physical world.
The inexpensive Pibrella HAT provided an easy way to jump into the physical computing
world. By installing the pibrella library, we are giving our Python code access to an
assortment of LEDs, a speaker, and a button, all neatly packaged into a Raspberry Pi HAT.
However, the true ultimate goal with physical computing is to build electronic circuits that
bridge the gap between our Raspberry Pi and the outside world. We started our journey of
building electronic circuits with the Fritzing circuit builder, available from the Raspberry Pi
store. From there, we built our first circuit on a breadboard with an LED and resistor.
We concluded this chapter by creating a Morse code generator with our Raspberry Pi and
LED circuit. In a twist of old meets new, we were able to transmit weather data in Morse
code via a blinking LED.
In Chapter 4, Subscribing to Web Services, we will incorporate web services into our code,
thereby connecting the internet world with the real world in a concept called the Internet of
Things.
Questions
What is the name of the Python package that allows you access to the Raspberry1.
Pi camera module?
True or false? A Raspberry Pi with code written by students was deployed on the2.
international space station.
What are the sensors included with Sense HAT?3.
True or false? We do not need to buy a Raspberry Pi Sense HAT for4.
development, as an emulator of this HAT exists in Raspbian.
How many ground pins are there on the GPIO?5.
True or false? Raspberry Pi's GPIO has pins that supply both 5V and 3.3V.6.
What is a Pibrella?7.
True or false? You may only use a Pibrella on early Raspberry Pi computers.8.
What does BCM mode mean?9.
True or false? BOARD is the alternative to BCM.10.
What does the Zero in gpiozero refer to?11.
True or false? Using Fritzing, we are able to design a GPIO circuit for our12.
Raspberry Pi.
What is the default background parameter in the gpiozero LED blink function13.
set to?
True or false? It is far easier to use the gpiozero library to access the GPIO than14.
it is to use the RPi.GPIO library.
What is the Victorian Internet?15.
Further reading
A lot of concepts were covered in this chapter, with the assumption that the skills needed
were not beyond the average developer and tinkerer. To further solidify understanding of
these concepts, please Google the following:
How to install the Raspberry Pi camera module
How to use a breadboard
An introduction to the Fritzing circuit design software
Python dictionaries
For those of you that are as fascinated about technology of the past as I am, the following is
a great book to read on the age of the Victorian Internet: The Victorian Internet, by Tom
Standage.
4
Subscribing to Web Services
Many of us take the technologies that the internet is built on top of for granted. When we
visit our favorite websites, we care little that the web pages we are viewing are crafted for
our eyes. However, lying underneath is the internet protocol suite of communication
protocols. Machines can also take advantage of these protocols and communicate machine
to machine through web services.
In this chapter, we will continue our journey toward connecting devices through the
Internet of Things (IoT). We will explore web services and the various technologies behind
them. We will conclude our chapter with some Python code where we call a live weather
service and extract information in real time.
The following topics will be covered in this chapter:
Cloud services for IoT
Writing a Python program to extract live weather data
Prerequisites
The reader should have a working knowledge of the Python programming language to
complete this chapter as well as an understanding of basic object-oriented programming.
This will serve the reader well, as we will be separating our code into objects.
Project overview
In this project, we will explore the various web services that are available and touch on
their core strengths. We will then write code that calls the Yahoo! Weather web service. We
will conclude by having a "ticker" display of real-time weather data using the Raspberry Pi
Sense HAT emulator.
This chapter should take a morning or afternoon to complete.
Getting started
To complete this project, the following will be required:
A Raspberry Pi Model 3 (2015 model or newer)
A USB power supply
A computer monitor (with HDMI support)
A USB keyboard
A USB mouse
Internet access
Cloud services for IoT
There are many cloud services that we may use for IoT development. Some of the biggest
companies in technology have thrown their weight behind IoT and in particular IoT with
artificial intelligence.
The following are the details of some of these services.
Amazon Web Services IoT
The Amazon Web Services IoT is a cloud platform that allows connected devices to securely
interact with other devices or cloud applications. These are offered as pay-as-you-go
services without the need for a server, thereby simplifying deployment and scalability.
Amazon Web Services (AWS) services that may be used by the AWS IoT Core are as
follows:
AWS Lambda
Amazon Kinesis
Amazon S3
Amazon Machine Learning
Amazon DynamoDB
Amazon CloudWatch
AWS CloudTrail
Amazon Elasticsearch Service
AWS IoT Core applications allow for the gathering, processing, and analysis of data
generated by connected devices without the need to manage infrastructure. Pricing is per
message sent and received.
The following is a diagram of how AWS IoT may be used. In this scenario, road conditions
data from a car is sent to the cloud and stored within an S3 Cloud Storage service. The AWS
service broadcasts this data to other cars, warning them of potential hazardous road
conditions:
IBM Watson platform
IBM Watson is a system capable of answering questions posted in natural language.
Originally designed to compete on the TV game show Jeopardy!, Watson was named after
IBM's first CEO, Thomas J. Watson. In 2011, Watson took on Jeopardy! champions Brad
Rutter and Ken Jennings and won.
Applications using the IBM Watson Developer Cloud may be created with API calls. The
potential for processing IoT information with Watson is immense.
To put it bluntly, Watson is a supercomputer from IBM that may be accessed over the web
through API calls.
One such use of Watson with IoT is the IBM Watson Assistant for Automotive, an
integrated solution provided to manufacturers for use in cars. Through this technology, the
driver and passengers may interact with the outside world for such things as booking
reservations at restaurants and checking on appointments in their calendars. Sensors in the
car may be integrated, providing IBM Watson Assistant with information on the state of the
car such as tire pressure. The following is a diagram illustrating a scenario where Watson
warns the driver of low tire pressure, suggests having it fixed, and then books an
appointment at the garage:
IBM Watson Assistant for Automotive is sold as a white-label service so that manufacturers
may label it to suit their needs. The success of IBM Watson Assistant for Automotive will
depend on how well it competes with other AI assistant services such as Amazon's Alexa
and Google's AI assistant. Integration with popular services such as Spotify for music and
Amazon for shopping will also play a role in future success.
Google Cloud platform
Although not as extensive and well-documented as AWS IoT, Google is taking on IoT with
a lot of interest. A developer may take advantage of Google's processing, analytics, and
machine intelligence technologies through the use of Google Cloud Services.
The following is a list of some of the services offered through Google Cloud Services:
App engine: Application hosting service
BigQuery: Large-scale database analytics service
Bigtable: Scalable database service
Cloud AutoML: Machine learning services that allow developers access to
Google's Neural Architecture Search technology
Cloud machine learning engine: Machine learning service for TensorFlow
models
Google video intelligence: Service to analyze videos and create metadata
Cloud Vision API: Service to return data on images through the use of machine
learning
The following is a diagram of how the Google Cloud Vision API may be used. An image of
a dog standing next to an upside-down flowerpot is passed to the service through the API.
The image is scanned and, using machine learning, objects are identified in the photo. The
returning JSON file contains the results in percentages:
Google's focus on keeping things easy and fast gives developers access to Google's own
private global network. Pricing for the Google Cloud Platform is lower than AWS IoT.
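As a rough, hypothetical sketch of what such a call can look like from Python (the API key and image file below are placeholders, and the request body is only an approximation of Google's documented REST format):

import base64
import requests

API_KEY = 'your-api-key-here'                       # placeholder key
url = 'https://vision.googleapis.com/v1/images:annotate?key=' + API_KEY

with open('dog.jpg', 'rb') as image_file:
    image_content = base64.b64encode(image_file.read()).decode('utf-8')

body = {
    'requests': [{
        'image': {'content': image_content},
        'features': [{'type': 'LABEL_DETECTION', 'maxResults': 5}]
    }]
}

response = requests.post(url, json=body)
print(response.json())                              # JSON describing the labels found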
Microsoft Azure
Microsoft Azure (known formerly as Windows Azure) is a cloud-based service from
Microsoft that allows developers to build, test, deploy, and manage applications using
Microsoft's vast array of data centers. It supports many different programming languages,
which are both Microsoft-specific and from outside third parties.
Azure Sphere, part of the Microsoft Azure framework, was launched in April of 2018 and is
Azure's IoT solution. The following is a scenario where Azure Sphere (or Azure IoT, as
shown in the diagram) may be used. In this scenario, a robot arm located in a remote
factory is monitored and controlled by a cellphone app somewhere else:
You may have noticed that the previous examples could be set up with any of the
competing cloud services, and that really is the point. By competing with each other, the
services become better and cheaper, and as a result, more accessible.
With these large companies such as IBM, Amazon, Google, and Microsoft taking on the
processing of IoT data, the future of IoT is boundless.
Weather Underground
Although not heavyweight like the Googles and IBMs of the world, Weather Underground
offers a web service of weather information that developers may tie their applications into.
Through the use of a developer account, IoT applications utilizing current weather
conditions may be built.
At the time of writing this chapter, the Weather Underground network
offered APIs for developers to use to access weather information. An end-
of-service notice has been posted to the Weather Underground API site
since. To keep up to date on the state of this service, visit
https://www.wunderground.com/weather/api/.
A basic Python program to pull data from
the cloud
In Chapter 2, Writing Python Programs Using Raspberry Pi, we introduced a package called
weather-api that allows us to access the Yahoo! Weather web service. In this section, we
will wrap up the Weather object from the weather-api package in our own class. We will
reuse the name CurrentWeather for our class. After testing out our CurrentWeather
class, we will utilize the Sense Hat Emulator in Raspbian and build a weather information
ticker.
Accessing the web service
We will start out by modifying our CurrentWeather class to make web service calls to
Yahoo! Weather through the weather-api package:
Open up Thonny from Application Menu | Programming | Thonny
*/
#include "util_time.h"
#include <apr_atomic.h>
/* Number of characters needed to format the microsecond part of a timestamp.
struct exploded_time_cache_element {
apr_int64_t t;
apr_time_exp_t xt; /* First for alignment of copies */
apr_time_exp_t xt;
apr_uint32_t key;
apr_int64_t t_validate; /* please see comments in cached_explode() */
};
/* the "+ 1" is for the current second: */
static apr_status_t cached_explode(apr_time_exp_t *xt, apr_time_t t,
struct exploded_time_cache_element *cache,
struct exploded_time_cache_element *cache,
int use_gmt)
apr_status_t (*explode)(apr_time_exp_t *, apr_time_t))
{
apr_int64_t seconds = apr_time_sec(t);
#define SECONDS_MASK 0x7FFFFFFF
struct exploded_time_cache_element *cache_element =
/* High bit is used to indicate invalid cache_element */
const apr_uint32_t seconds = apr_time_sec(t) & SECONDS_MASK;
volatile struct exploded_time_cache_element * const cache_element =
&(cache[seconds & TIME_CACHE_MASK]);
struct exploded_time_cache_element cache_element_snapshot;
/* The cache is implemented as a ring buffer. Each second,
* it uses a different element in the buffer. The timestamp
* in the element indicates whether the element contains the
* exploded time for the current second (vs the time
* 'now - AP_TIME_RECENT_THRESHOLD' seconds ago). If the
* cached value is for the current time, we use it. Otherwise,
* cached value is for the current time, we copy the exploded time.
* we compute the apr_time_exp_t and store it in this
* After copying, we check the cache_element to see if it still has the
* cache element. Note that the timestamp in the cache
* same second. If so, the copy is valid, because we always set the key
* element is updated only after the exploded time. Thus
* after copying the exploded time into the cache, and we're using
* if two threads hit this cache element simultaneously
* memory barriers (implemented with Compare-And-Swap)
* at the start of a new second, they'll both explode the
* that guarantee total memory ordering.
* time and store it. I.e., the writers will collide, but
* they'll be writing the same value.
*/
if (cache_element->t >= seconds) {
const apr_uint32_t key = cache_element->key;
/* There is an intentional race condition in this design:
/* Above is done speculatively, no memory barrier used.
* in a multithreaded app, one thread might be reading
* It's doing the same thing as apr_atomic_read32, a read of
* from this cache_element to resolve a timestamp from
* memory marked volatile, but without doing the function call. */
* TIME_CACHE_SIZE seconds ago at the same time that
if (seconds == key && seconds != 0) {
* another thread is copying the exploded form of the
/* seconds == 0 may mean cache is uninitialized, so don't use cache */
* current time into the same cache_element. (I.e., the
*xt = cache_element->xt;
* first thread might hit this element of the ring buffer
/* After copying xt, make sure cache_element was not marked invalid
* just as the element is being recycled.) This can
* by another thread beginning an update, and that cache_element
* also happen at the start of a new second, if a
* really contained data for our second.
* reader accesses the cache_element after a writer
* Requires memory barrier, so use CAS. */
* has updated cache_element.t but before the writer
if (apr_atomic_cas32(&cache_element->key, seconds, seconds)==seconds) {
* has finished updating the whole cache_element.
xt->tm_usec = (int)apr_time_usec(t);
*
return APR_SUCCESS;
* Rather than trying to prevent this race condition
* with locks, we allow it to happen and then detect
* and correct it. The detection works like this:
* Step 1: Take a "snapshot" of the cache element by
* copying it into a temporary buffer.
* Step 2: Check whether the snapshot contains consistent
* data: the timestamps at the start and end of
* the cache_element should both match the 'seconds'
* value that we computed from the input time.
* If these three don't match, then the snapshot
* shows the cache_element in the middle of an
* update, and its contents are invalid.
* Step 3: If the snapshot is valid, use it. Otherwise,
* just give up on the cache and explode the
* input time.
*/
memcpy(&cache_element_snapshot, cache_element,
sizeof(struct exploded_time_cache_element));
if ((seconds != cache_element_snapshot.t) ||
(seconds != cache_element_snapshot.t_validate)) {
/* Invalid snapshot */
if (use_gmt) {
return apr_time_exp_gmt(xt, t);
}
else {
return apr_time_exp_lt(xt, t);
}
else {
/* Valid snapshot */
memcpy(xt, &(cache_element_snapshot.xt),
sizeof(apr_time_exp_t));
}
else {
/* Invalid cache element, so calculate the exploded time value.
apr_status_t r;
This is a wait-free algorithm, and we purposely don't spin and
if (use_gmt) {
retry to get from the cache, we just continue and calculate it
r = apr_time_exp_gmt(xt, t);
and do useful work, instead of spinning. */
do {
const apr_status_t r = explode(xt, t);
r = apr_time_exp_lt(xt, t);
if (r != APR_SUCCESS) {
return r;
cache_element->t = seconds;
} while (0);
memcpy(&(cache_element->xt), xt, sizeof(apr_time_exp_t));
cache_element->t_validate = seconds;
/* Attempt to update the cache */
/* To prevent ABA problem, don't update the cache unless we have a
* newer time value (so that we never go from B->A).
* Handle cases where seconds overflows (e.g. year 2038),
* and cases where cache is uninitialized.
* Handle overflow, otherwise it will stop caching after overflow,
* until server process is restarted, which may be months later.
#define OVERFLOW (((SECONDS_MASK)>>1) + 1)
if (key <= SECONDS_MASK /* another thread not updating cache_element */
&& seconds != 0 /* new key distinguishable from uninitialized */
&& (
(seconds > key && seconds - key < OVERFLOW) || /* normal */
(seconds < key && key - seconds > OVERFLOW) || /* overflow */
(key == 0 && seconds < SECONDS_MASK - 0x100)))
/* cache is perhaps uninitialized, and not recent overflow */
{
if (key == apr_atomic_cas32(&cache_element->key, ~seconds, key))
{ /* We won the race to update this cache_element.
* Above marks cache_element as invalid by using ~seconds,
* because we are starting an update: it's the start of a
* transaction. */
cache_element->xt = *xt;
/* Finished copying, now update key with our key,
* ending the transaction. Need to use CAS for the
* memory barrier.
*/
apr_atomic_cas32(&cache_element->key, seconds, ~seconds);
xt->tm_usec = (int)apr_time_usec(t);
return APR_SUCCESS;
}
AP_DECLARE(apr_status_t) ap_explode_recent_localtime(apr_time_exp_t * tm,
apr_time_t t)
return cached_explode(tm, t, exploded_cache_localtime, 0);
return cached_explode(tm, t, exploded_cache_localtime, &apr_time_exp_lt);
AP_DECLARE(apr_status_t) ap_explode_recent_gmt(apr_time_exp_t * tm,
apr_time_t t)
return cached_explode(tm, t, exploded_cache_gmt, 1);
return cached_explode(tm, t, exploded_cache_gmt, &apr_time_exp_gmt);
AP_DECLARE(apr_status_t) ap_recent_ctime(char *date_str, apr_time_t t)
fread()
Read elements of a given size from a stream
Synopsis:
#include <stdio.h>

size_t fread( void* buf,
              size_t size,
              size_t num,
              FILE* fp );
Since:
BlackBerry 10.0.0
Arguments:
- buf
- A pointer to a buffer where the function can store the elements that it reads.
- size
- The size of each element to read.
- num
- The number of elements to read.
- fp
- The stream from which to read the elements.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The fread() function reads num elements of size bytes each from the stream specified by fp into the buffer specified by buf.
Returns:
The number of complete elements successfully read; this value may be less than the requested number of elements.
Use the feof() and ferror() functions to determine whether the end of the file was encountered or if an input/output error has occurred.
Examples:
The following example reads a simple student record containing binary data. The student record is described by the struct student_data declaration.
#include <stdio.h>
#include <stdlib.h>

struct student_data {
    int student_id;
    unsigned char marks[10];
};

size_t read_data( FILE *fp, struct student_data *p )
{
    return( fread( p, sizeof( struct student_data ), 1, fp ) );
}

int main( void )
{
    FILE *fp;
    struct student_data std;
    int i;

    fp = fopen( "file", "r" );
    if( fp != NULL ) {
        while( read_data( fp, &std ) != 0 ) {
            printf( "id=%d ", std.student_id );
            for( i = 0; i < 10; i++ ) {
                printf( "%3d ", std.marks[ i ] );
            }
            printf( "\n" );
        }
        fclose( fp );
        return EXIT_SUCCESS;
    }
    return EXIT_FAILURE;
}
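A shorter sketch of the end-of-file / error check described under "Returns" above. The helper name and the choice of double elements are illustrative only, not part of the library:

    #include <stdio.h>

    /* Read "count" doubles from fp into buf and report why a short read happened. */
    size_t read_doubles( FILE *fp, double *buf, size_t count )
    {
        size_t got = fread( buf, sizeof( double ), count, fp );

        if( got < count ) {
            if( feof( fp ) ) {
                printf( "short read: end of file after %zu elements\n", got );
            } else if( ferror( fp ) ) {
                perror( "fread" );
            }
        }
        return got;
    }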
Classification:
Last modified: 2014-06-24
|
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/f/fread.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
I am trying to access a rest API hosted on one of my company's servers. However, as it is a dev server the SSL certificate is self signed and so Unity refuses to accept the connection. I've tried to follow this answer in particular. Which gave me this code :
void GetRequest(string uri)
{
ServicePointManager.ServerCertificateValidationCallback = TrustCertificate;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream dataStream = response.GetResponseStream();
StreamReader reader = new StreamReader(dataStream);
string responseFromServer = reader.ReadToEnd();
Debug.Log("responseFromServer=" + responseFromServer);
}
However when trying to connect to the server I got this error : A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied.
I must add that I only have access to the .crt certificate Firefox got me when navigating to the server and I do not know how to add it to Mono properly (tried with certmgr but I'm not sure it worked with only the .crt) I'm on Windows 10. So how can I make Unity to accept my certificate ?
It seems that the error I got is due to the response stream not having been received yet at the time the code tries to access it with the StreamReader.
Thus I tried with this solution, using BeginGetResponse instead of GetResponse :
void GetRequest(string uri)
{
ServicePointManager.ServerCertificateValidationCallback = TrustCertificate;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
request.BeginGetResponse(ResponseCallback, request);
}
private void ResponseCallback(IAsyncResult result)
{
Debug.Log("Response CB");
HttpWebResponse response = (result.AsyncState as HttpWebRequest).EndGetResponse(result) as HttpWebResponse;
Debug.Log("1");
Stream dataStream = response.GetResponseStream();
Debug.Log("2");
StreamReader reader = new StreamReader(dataStream);
Debug.Log("3");
string responseFromServer = reader.ReadToEnd();
Debug.Log("4");
Debug.Log("responseFromServer=" + responseFromServer);
}
This time the code stalls after Debug.Log("Response CB") and I have no idea why it does
Answer by goddatr
·
Mar 22, 2018 at 12:45 PM
Ok, I finally solved it :
The first thing to do was to switch to Unity beta 2018.1. There I have UnityWebRequest.certificateHandler, which allows setting up custom certificate validation. The last thing to do is to create an object extending CertificateHandler to manage certificate validation. (See here in the Unity beta documentation)
Here is the code :
MyMonoBehaviour :
IEnumerator GetRequest(string uri){
UnityWebRequest request = UnityWebRequest.Get(uri);
request.certificateHandler = new AcceptAllCertificatesSignedWithASpecificKeyPublicKey();
yield return request.SendWebRequest ();
if (request.isNetworkError)
{
Debug.Log("Something went wrong, and returned error: " + request.error);
}
else
{
// Show results as text
Debug.Log(request.downloadHandler.text);
}
}
AcceptAllCertificatesSignedWithASpecificKeyPublicKey :
using UnityEngine.Networking;
using System.Security.Cryptography.X509Certificates;
using UnityEngine;
// Based on
class AcceptAllCertificatesSignedWithASpecificKeyPublicKey : CertificateHandler
{
// Encoded RSAPublicKey
private static string PUB_KEY = "mypublickey";
protected override bool ValidateCertificate(byte[] certificateData)
{
X509Certificate2 certificate = new X509Certificate2(certificateData);
string pk = certificate.GetPublicKeyString();
if (pk.ToLower().Equals(PUB_KEY.ToLower()))
return true;
return false;
}
}
If I can't switch to Unity beta 2018.1, how can I solve this? My project depends on Unity 2017.2.1.
Well, I didn't succeed with 2017.2 but you maybe can use the .NET HttpWebRequest method as it seems to work for some people.
If anyone finds this and is having trouble with it, I changed the final 'return false' to 'return true' and it works. Probably not the most secure way? But in my case it's running on a private network so no issues there.
That change makes it trust all certificates, so your connection is no longer secure
Well, self-signed certificates are not really secure either when you want to secure connections to unknown / untrusted peers. Man-in-the-middle attacks are perfectly possible with self-signed certificates since you can not verify you actually talk to the right person. Self-signed certificates only make sense if the two peers know each other and know the certificate of the other party. For usual web traffic most web browsers would reject any connections to servers with self-signed certificates. Anybody can create a self-signed certificate. Validating that certificate only tells you that the certificate is valid, nothing more. It's generally better to acquire an actual signed certificate. LetsEncrypt lets you create a signed certificate for free.
|
https://answers.unity.com/questions/1482409/how-to-accept-self-signed-certificate.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
The.
Installing PyPubSub
You can install PyPubSub using pip.
Here’s how to do it:
pip install pypubsub
PyPubSub should install quite quickly. Once it’s done, let’s find out how to use it!
Using PyPubSub
Let’s take an example from my previous article on this topic and update it for using PyPubSub instead:
import wx
from pubsub import pub


class OtherFrame(wx.Frame):
    """"""

    def __init__(self):
        """Constructor"""
        super().__init__(None, title="Secondary Frame")
        panel = wx.Panel(self)

        msg = "Enter a Message to send to the main frame"
        instructions = wx.StaticText(panel, label=msg)
        self.msg_txt = wx.TextCtrl(panel, value="")

        close_btn = wx.Button(panel, label="Send and Close")
        close_btn.Bind(wx.EVT_BUTTON, self.on_send_and_close)

        sizer = wx.BoxSizer(wx.VERTICAL)
        flags = wx.ALL | wx.CENTER
        sizer.Add(instructions, 0, flags, 5)
        sizer.Add(self.msg_txt, 0, flags, 5)
        sizer.Add(close_btn, 0, flags, 5)
        panel.SetSizer(sizer)

    def on_send_and_close(self, event):
        """
        Send a message and close frame
        """
        msg = self.msg_txt.GetValue()
        pub.sendMessage("panel_listener", message=msg)
        pub.sendMessage("panel_listener", message="test2",
                        arg2="2nd argument!")
        self.Close()


class MyPanel(wx.Panel):
    """"""

    def __init__(self, parent):
        """Constructor"""
        super().__init__(parent)
        pub.subscribe(self.my_listener, "panel_listener")

        btn = wx.Button(self, label="Open Frame")
        btn.Bind(wx.EVT_BUTTON, self.on_open_frame)

    def my_listener(self, message, arg2=None):
        """
        Listener function
        """
        print(f"Received the following message: {message}")
        if arg2:
            print(f"Received another argument: {arg2}")

    def on_open_frame(self, event):
        """
        Opens secondary frame
        """
        frame = OtherFrame()
        frame.Show()


class MyFrame(wx.Frame):
    """"""

    def __init__(self):
        """Constructor"""
        wx.Frame.__init__(self, None, title="New PubSub API Tutorial")
        panel = MyPanel(self)
        self.Show()


if __name__ == "__main__":
    app = wx.App(False)
    frame = MyFrame()
    app.MainLoop()
The main difference here between using the built-in PubSub is the import.
All you need to do is replace this:
from wx.lib.pubsub import pub
with this:
from pubsub import pub
As long as you are using wxPython 2.9 or greater that is. If you were stuck using wxPython 2.8, then you will probably want to check out one of my previous articles on this topic to see how the PubSub API changed.
If you are using wxPython 2.9 or greater, then the change is super easy and almost painless.
As usual, you subscribe to a topic:
pub.subscribe(self.myListener, "panelListener")
And then you publish to that topic:
pub.sendMessage("panelListener", message=msg)
Give it a try and see how easy it is to add to your own code!
Wrapping Up
I personally really liked using wx.lib.pubsub, so I will probably keep using it with PyPubSub. However if you’ve ever wanted to try another package, like PyDispatcher, this would be as good a time as any to do so.
Related Reading
- wxPython 2.9 and the Newer Pubsub API: A Simple Tutorial
- wxPython and PubSub: A Simple Tutorial
- wxPython: Using PyDispatcher instead of Pubsub
|
http://www.blog.pythonlibrary.org/2019/03/28/wxpython-4-and-pubsub/
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
ACS I think we didn’t get far from the mark
I would also like to take this chance to thank Southworks, our long-time partner on this kind of activities, for their great work on the ACS Exensions for Umbraco.
Once again, I’ll apply the technique I used yesterday for the ACS+WP7+OAuth2+OData lab post; I will paste here the documentation as is. I am going to break this in 3 parts, following the structure we used in the documentation as well.
Access Control Service (ACS) Extensions for Umbraco
'Click here for a video walkthrough of this tutorial'
Setting up the ACS Extensions in Umbraco is very simple. You can use the Add Library Package Reference from Visual Studio to install the ACS Extensions NuGet package to your existing Umbraco 4.7.0 instance. Once you have done that, you just need to go to the Umbraco installation pages, where you will find a new setup step: there you will fill in few data describing the ACS namespace you want to use, and presto! You’ll be ready to take advantage of your new authentication capabilities.
Alternatively, if you don’t want the NuGet Package to update Umbraco’s source code for you, you can perform the required changes manually by following the steps included in the manual installation document found in the ACS Extensions package. Once you finished all the install steps, you can go to the Umbraco install pages and configure the extension as described above. You should consider the manual installation procedure only in the case in which you really need fine control on the details of how the integration takes place, as the procedure is significantly less straightforward than the NuGet route.
In this section we will walk you through the setup process. For your convenience we are adding one initial section on installing Umbraco itself. If you already have one instance, or if you want to follow a different installation route than the one we describe here, feel free to skip to the first section below and go straight to the Umbraco.ACSExtensions NuGet install section.
Install Umbraco using the Web Platform Installer and Configure It
Launch the Microsoft Web Platform Installer from
Figure 1 - Windows Web App Gallery | Umbraco CMS
Click on Install button. You will get to a screen like the one below:
Figure 2 - Installing Umbraco via WebPI
Choose Options. From there you’ll have to select IIS as the web server (the ACS Extensions won’t work on IIS7.5).
Figure 3 - Web Platform Installer | Umbraco CMS setup options
Click on OK, and back on the Umbraco CMS dialog click on Install.
Select SQL Server as database type. Please note that later in the setup you will need to provide the credentials for a SQL administrator user, hence your SQL Server needs to be configured to support mixed authentication.
Figure 4 - Choose database type
Accept the license terms to start downloading and installing Umbraco.
Configure the web server settings with the following values and click on Continue.
Figure 5 - Site Information
Complete the database settings as shown below.
Figure 6 - Database settings
When the installation finishes, click on Finish button and close the Web Platform Installer.
Open Internet Information Services Manager and select the web site created in step 7.
In order to properly support the authentication operations that the ACS Extensions will enable, your web site needs to be capable of protecting communications. On the Actions pane on the right, click Bindings… and add one https binding as shown below.
Figure 7 - Add Site Binding
Open the hosts file located in C:\Windows\System32\drivers\etc, and add a new entry pointing to the Umbraco instance you’ve created so that you will be able to use the web site name on the local machine.
Figure 8 - Hosts file entry
At this point you have all the bits you need to run your Umbraco instance. All that’s left to do is make some initial configuration: Umbraco provides you with one setup portal which enables you to do just that directly from a browser. Browse to http://{yourUmbracoSite}/install/; you will get to a screen like the one below.
Figure 9 - The Umbraco installation wizard
Please refer to the Umbraco documentation for a detailed explanation of all the options: here we will do the bare minimum to get the instance running. Click on “Let’s get started!” button to start the wizard.
Accept the Umbraco license.
Hit Install in the Database configuration step and click on Continue once done.
Set a name and password for the administrator user in the Create User step.
Pick a Starter Kit and a Skin (in this tutorial we use Simple and Sweet@s).
Click on Preview your new website: your Umbraco instance is ready.
Figure 10 - Your new Umbraco instance is ready!
Install the Umbraco.ACSExtensions via NuGet Package
Installing the ACS Extensions via NuGet package is very easy.
Open the Umbraco website from Visual Studio 2010 (File -> Open -> Web Site… )
Open the web.config file and set the umbracoUseSSL setting with true.
Figure 11 - umbracoUseSSL setting
Click on Save All to save the solution file.
Right-click on the website project and select “Add Library Package Reference…” as shown below. If you don’t see the entry in the menu, please make sure that NuGet 1.2 is correctly installed on your system.
Figure 12 -
umbracoUseSSL setting
Select the Umbraco.ACSExtensions package form the appropriate feed and click install.
At the time in which you will read this tutorial, the ACS Extensions NuGet will be available on the NuGet official package source: please select Umbraco.ACSExtensions from there. At the time of writing the ACS Extensions are not published on the official feed yet, hence in the figure here we are selecting it from a local repository. (If you want to host your own feed, see Create and use a NuGet local repository )
Figure 13 - Installing theUmbraco. ACSExtensions NuGet package
If the installation takes place correctly, a green checkmark will appear in place of the install button in the Add Library Package Reference dialog. You can close Visual Studio, from now on you’ll do everything directly from the Umbraco management UI.
Configure the ACS Extensions
Now that the extension is installed, the new identity and access features are available directly in the Umbraco management console. You didn’t configure the extensions yet: the administrative UI will sense that and direct you accordingly.
Navigate to the management console of your Umbraco instance, at http://{yourUmbracoSite}/umbraco/. If you used an untrusted certificate when setting up the SSL binding of the web site, the browser will display a warning: dismiss it and continue to the web site.
The management console will prompt you for a username and a password, use the credentials you defined in the Umbraco setup steps.
Navigate to the Members section as shown below.
Figure 14 - The admin console home page
The ACS Extensions added some new panels here. In the Access Control Service Extensions for Umbraco panel you’ll notice a warning indicating that the ACS Extensions for Umbraco are not configured yet. Click on the ACS Extensions setup page link in the warning box to navigate to the setup pages.
Figure 15 - The initial ACS Extensions configuration warning.
Figure 16 - The ACS Extensions setup step.
The ACS Extensions setup page extends the existing setup sequence, and lives at the address https://{yourUmbracoSite}/install/?installStep=ACSExtensions. It can be used both for the initial setup, as shown here, and for managing subsequent changes (for example when you deploy the Umbraco site from your development environment to its production hosting, in which case the URL of the web site changes). Click Yes to begin the setup.
Access Control Service Settings
Enter your ACS namespace and the URL at which your Umbraco instance is deployed. Those two fields are mandatory, as the ACS Extensions cannot setup ACS and your instance without those.
The management key field is optional, but if you don't enter it, most of the extensions' features will not be available.
Figure 17 - Access Control Service Setttings
The management key can be obtained through the ACS Management Portal. The setup UI provides you a link to the right page in the ACS portal, but you'll need to substitute the string {namespace} with the actual namespace you want to use.
Social Identity Providers
Decide from which social identity providers you want to accept users from. This feature requires you to have entered your ACS namespace management key: if you didn’t, the ACS Extensions will use whatever identity providers are already set up in the ACS namespace.
Note that in order to integrate with Facebook you’ll need to have a Facebook application properly configured to work with your ACS namespace. The ACS Extensions gather from you the Application Id and Application Secret that are necessary for configuring ACS to use the corresponding Facebook application.
Figure 18 - Social Identity Providers
SMTP Settings
Users from social identity providers are invited to gain access to your web site via email. In order to use the social provider integration feature you need to configure an SMTP server.
Figure 19 - SMTP Settings
Click on Install to configure the ACS Extensions.
Figure 20 - ACS Extension Configured
If everything goes as expected, you will see a confirmation message like the one above. If you navigate back to the admin console and to the member section, you will notice that the warning is gone. You are now ready to take advantage of the ACS Extensions.
Figure 21 - The Member section after the successful configuration of the ACS Extensions
|
https://docs.microsoft.com/en-us/archive/blogs/vbertocci/acs-extensions-for-umbraco-part-i-setup
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Class ToolBar
- java.lang.Object
- org.eclipse.swt.widgets.Widget
- org.eclipse.swt.widgets.Control
- org.eclipse.swt.widgets.Scrollable
- org.eclipse.swt.widgets.Composite
- org.eclipse.swt.widgets.ToolBar
public class ToolBar extends Composite
Instances of this class support the layout of selectable tool bar items.
The item children that may be added to instances of this class must be of type
ToolItem.
Note that although this class is a subclass of
Composite, it does not make sense to add
Controlchildren to it, or set a layout on it.
- Styles:
- FLAT, WRAP, RIGHT, HORIZONTAL, VERTICAL, SHADOW_OUT
- Events:
- (none)
Note: Only one of the styles HORIZONTAL and VERTICAL may be specified.
IMPORTANT: This class is not intended to be subclassed.
- See Also:
- ToolBar, ToolItem snippets, requestLayout
ToolBar
public ToolBar(Composite parent, int style)
- See Also:
- SWT.FLAT, SWT.WRAP, SWT.RIGHT, SWT.HORIZONTAL, SWT.SHADOW_OUT, SWT.VERTICAL
getItem
public ToolItem getItem(int index)Returns the item at the given, zero-relative index in the receiver. Throws an exception if the index is out of range.
- Parameters:
index- the index of the item to return
- Returns:
- the item at the given index
- Throws:
IllegalArgumentException-
- ERROR_INVALID_RANGE - if the index is not between 0 and the number of elements in the list minus 1 (inclusive)
SWTException-
- ERROR_WIDGET_DISPOSED - if the receiver has been disposed
- ERROR_THREAD_INVALID_ACCESS - if not called from the thread that created the receiver
getItem
public ToolItem getItem(Point point)Returns the item at the given point in the receiver or null if no such item exists. The point is in the coordinate system of the receiver.
- Parameters:
point- the point used to locate the item
- Returns:
- the item at the given point
- Throws:
IllegalArgumentException-
- ERROR_NULL_ARGUMENT - if the point is null
SWTException-
- ERROR_WIDGET_DISPOSED - if the receiver has been disposed
- ERROR_THREAD_INVALID_ACCESS - if not called from the thread that created the receiver
getItemCount
public int getItemCount()Returns the number of items contained in the receiver.
- Returns:
- the number of items
- Throws:
SWTException-
- ERROR_WIDGET_DISPOSED - if the receiver has been disposed
- ERROR_THREAD_INVALID_ACCESS - if not called from the thread that created the receiver
getItems
public ToolItem[] getItems()Returns an array of
ToolItems which are the items in the receiver.
Note: This is not the actual structure used by the receiver to maintain its list of items, so modifying the array will not affect the receiver.
- Returns:
- the items in the receiver
- Throws:
SWTException-
- ERROR_WIDGET_DISPOSED - if the receiver has been disposed
- ERROR_THREAD_INVALID_ACCESS - if not called from the thread that created the receiver
getRowCount
public int getRowCount()Returns the number of rows in the receiver. When the receiver has the
WRAP style, the number of rows can be greater than one. Otherwise, the number of rows is always one.
- Returns:
- the number of items
- Throws:
SWTException-
- ERROR_WIDGET_DISPOSED - if the receiver has been disposed
- ERROR_THREAD_INVALID_ACCESS - if not called from the thread that created the receiver
indexOf
public int indexOf(ToolItem item)Searches the receiver's list starting at the first item (index 0) until an item is found that is equal to the argument, and returns the index of that item. If no item is found, returns -1.
- Parameters:
item- the search item
- Returns:
- the index of the item
- Throws:
IllegalArgumentException-
- ERROR_NULL_ARGUMENT - if the tool item is null
- ERROR_INVALID_ARGUMENT - if the tool item has been disposed
setParent
public boolean setParent(Composite parent)Changes the parent of the widget to be the one provided. Returns
true if the parent is successfully changed.
setRedraw
public void setRedraw(boolean redraw)If the argument is.
- Overrides:
setRedraw in class
Control
- Parameters:
redraw- the new redraw state
- See Also:
Control.redraw(int, int, int, int, boolean),
Control.update()
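A minimal usage sketch, not taken from the Eclipse documentation (it assumes an existing Shell named shell and the usual org.eclipse.swt imports):

    ToolBar bar = new ToolBar(shell, SWT.HORIZONTAL | SWT.FLAT);
    ToolItem open = new ToolItem(bar, SWT.PUSH);
    open.setText("Open");
    ToolItem save = new ToolItem(bar, SWT.PUSH);
    save.setText("Save");
    // Size the bar to fit its items.
    bar.pack();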
|
https://help.eclipse.org/2019-12/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/swt/widgets/ToolBar.html
|
CC-MAIN-2020-10
|
en
|
refinedweb
|
Hi
I’m trying to move my project from Fmod Ex to Unity. The code was worked fine in Ex, but not in Unity. The code looks like:
using UnityEngine;
using System.Collections;
using FMOD;

public class _Oscillators : MonoBehaviour
{
    FMOD.RESULT Fresult;
    FMOD.System Fsystem;
    FMOD.DSP Fdsp;
    FMOD.Channel Fchannel;
    FMOD.ChannelGroup Fchannelgroup;
    System.IntPtr ptr;

    // Use this for initialization
    void Start()
    {
        ptr = new System.IntPtr();
        Fsystem = new FMOD.System(ptr);
        Fresult = new FMOD.RESULT();
        FMOD_Initialize();
        Fchannel = new FMOD.Channel(ptr);
        Fdsp = new FMOD.DSP(ptr);
        Fresult = Fsystem.createDSPByType(DSP_TYPE.OSCILLATOR, out Fdsp);
        Fresult = Fdsp.setParameterInt((int)DSP_OSCILLATOR.TYPE, 0);
        Fresult = Fdsp.setParameterInt((int)DSP_OSCILLATOR.RATE, 220);
        Fresult = Fsystem.playDSP(Fdsp, null, false, out Fchannel);
        Fresult = Fchannel.setVolume(1.0f);
        Fsystem.update();
    }

    private void FMOD_Initialize()
    {
        Fresult = FMOD.Factory.System_Create(out Fsystem);
        ERRCHECK(Fresult, "create system");
    }

    private void FMOD_DeInitialize()
    {
        Fresult = Fsystem.close();
        ERRCHECK(Fresult, "close");
    }

    private void ERRCHECK(FMOD.RESULT Fresult, string comment)
    {
        if (Fresult != FMOD.RESULT.OK)
        {
            UnityEngine.Debug.Log("FMOD error in " + comment + ": " + Fresult + "-" + FMOD.Error.String(Fresult));
        }
    }
}
Unfortunately, at FMOD.Factory.System_Create(out Fsystem) it gives an ‘ERR_MEMORY-not enough memory or resources’ message. What am I doing wrong?
- S asked 3 years ago
There is a maximum of 8 system objects that can be created. After you’ve hit that limit FMOD.Factory.System_Create() will return ERR_MEMORY.
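A hedged sketch of freeing the slot again when the component goes away, assuming the FMOD Ex C# wrapper's close() and release() methods on FMOD.System:

    void OnDestroy()
    {
        if (Fsystem != null)
        {
            // Close and release the system object so that one of the eight
            // available slots is freed for later System_Create() calls.
            Fsystem.close();
            Fsystem.release();
        }
    }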
- Nicholas Wilcox answered 3 years ago
The problem is that even if I just add FMOD_Listener (FMOD native script), the error message still appears. It seems that I can’t use FMOD in Unity (mine is 4.5.4 Pro) at all? By the way, when I use a SquareTangle implementation, everything works fine. Unfortunately, I need some features from the original asset. Any suggestions?
- S answered 3 years ago
|
https://www.fmod.org/questions/question/err_memory-not-enough-memory-or-resources/
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
ne_request_create, ne_request_dispatch, ne_request_destroy - low-level HTTP request handling
#include <ne_request.h>

ne_request *ne_request_create(ne_session *session, const char *method, const char *path);
int ne_request_dispatch(ne_request *req);
void ne_request_destroy(ne_request *req);
The ne_request_create function returns a pointer to a request object (and never NULL). The ne_request_dispatch function returns zero if the request was dispatched successfully, and a non-zero error code otherwise.
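A minimal sketch of the call sequence, assuming a session created with ne_session_create() from ne_session.h (the host name and path are placeholders):

    #include <ne_session.h>
    #include <ne_request.h>
    #include <stdio.h>

    ne_session *sess = ne_session_create("http", "www.example.com", 80);
    ne_request *req = ne_request_create(sess, "GET", "/index.html");

    if (ne_request_dispatch(req) != 0) {
        /* the session holds the error string for the failed request */
        printf("request failed: %s\n", ne_get_error(sess));
    }

    ne_request_destroy(req);
    ne_session_destroy(sess);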
ne_get_error, ne_set_error, ne_get_status, ne_add_request_header, ne_set_request_body_buffer.
Joe Orton <neon@webdav.org> Author.
|
http://huge-man-linux.net/man3/ne_request_create.html
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
The EvtGenInterface class is the main class for the use of the EvtGen decay package with Herwig. More...
#include <EvtGenInterface.h>
The EvtGenInterface class is the main class for the use of the EvtGen decay package with Herwig.
Definition at line 40 of file EvtGenInterface.h.
Make a simple clone of this object.
Implements ThePEG::InterfacedBase.
Construct the DecayVertex for Herwig using the information from EvtGen.
Use EvtGen to perform a decay.
Return the decay products of an EvtGen particle as ThePEG particles.
Convert a PDG code from ThePEG into an EvtGen particle id.
Convert a Lorentz5Momentum to a real EvtGen 4-vector.
Definition at line 119 of file EvtGenInterface.h.
Convert a particle to an EvtGen particle.
Convert a LorentzPolarizationVector to a complex EvtGen 4-vector.
Definition at line 149 of file EvtGenInterface.h.
Convert our Rarita-Schwinger spinor to the EvtGen one.
Definition at line 158 of file EvtGenInterface.h.
Convert a spin density matrix to an EvtGen spin density matrix.
Definition at line 193 of file EvtGenInterface.h.
References ThePEG::RhoDMatrix::iSpin().
Convert a LorentzSpinor to an EvtGen one.
The spinor is converted to the EvtGen Dirac representation/
Definition at line 135 of file EvtGenInterface.h.
References ThePEG::Helicity::LorentzSpinor< Value >::s1(), ThePEG::Helicity::LorentzSpinor< Value >::s2(), ThePEG::Helicity::LorentzSpinor< Value >::s3(), ThePEG::Helicity::LorentzSpinor< Value >::s4(), and sqrt().
Convert our tensor to the EvtGen one.
Definition at line 180 of file EvtGenInterface.h.
Make a clone of this object, possibly modifying the cloned object to make it sane.
Reimplemented from ThePEG::InterfacedBase.
Check the particle has SpinInfo and if not create it.
Definition at line 350 of file EvtGenInterface.h.
References ThePEG::Particle::dataPtr(), ThePEG::Exception::eventerror, ThePEG::Particle::momentum(), ThePEG::PDT::Spin0, ThePEG::PDT::Spin1, ThePEG::PDT::Spin1Half, ThePEG::PDT::Spin2, ThePEG::PDT::Spin3Half, and ThePEG::Particle::spin.
Output the EvtGen decay modes for a given particle.
Function used to read in object persistently.
Function used to write out object persistently.
Convert an EvtGen EvtId to a PDG code in our conventions.
Convert from EvtGen momentum to Lorentz5Momentum.
Definition at line 251 of file EvtGenInterface.h.
Functions to convert between EvtGen and Herwig classes.
Convert a particle from an EvtGen one to ThePEG one.
Definition at line 222 of file EvtGenInterface.h.
Convert an EvtGen complex 4-vector to a LorentzPolarizationVector.
Definition at line 286 of file EvtGenInterface.h.
Convert an EvtGen Rarita-Schwinger spinor to ours.
Definition at line 297 of file EvtGenInterface.h.
Set the SpinInfo for a ThePEG particle using an EvtGen particle.
Convert a spin density to a ThePEG one from an EvtGen one.
Convert an EvtDiracSpinor a LorentzSpinor.
This spinor is converted to the default Dirac matrix representation used by ThePEG.
Definition at line 274 of file EvtGenInterface.h.
Convert an EvtGen tensor to ThePEG.
Definition at line 319 of file EvtGenInterface.h.
Names of the various EvtGen parameter files.
The name of the file containing the decays
Definition at line 432 of file EvtGenInterface.h.
|
http://herwig.hepforge.org/doxygen/classHerwig_1_1EvtGenInterface.html
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
/*
 * Copyright (c) 2002,2003, Dennis M. Sosnoski.
 */

package org.jibx.runtime.impl;

import java.util.ArrayList;

/**
 * Holder used to collect forward references to a particular object. The
 * references are processed when the object is defined.
 *
 * @author Dennis M. Sosnoski
 * @version 1.0
 */
public class BackFillHolder
{
    /** Expected class name of tracked object. */
    private String m_class;

    /** List of references to this object. */
    private ArrayList m_list;

    /**
     * Constructor. Just creates the backing list.
     *
     * @param name expected class name of tracked object
     */
    public BackFillHolder(String name) {
        m_class = name;
        m_list = new ArrayList();
    }

    /**
     * Add forward reference to tracked object. This method is called by
     * the framework when a reference item is created for the object
     * associated with this holder.
     *
     * @param ref backfill reference item
     */
    public void addBackFill(BackFillReference ref) {
        m_list.add(ref);
    }

    /**
     * Define referenced object. This method is called by the framework
     * when the forward-referenced object is defined, and in turn calls each
     * reference to fill in the reference.
     *
     * @param obj referenced object
     */
    public void defineValue(Object obj) {
        for (int i = 0; i < m_list.size(); i++) {
            BackFillReference ref = (BackFillReference)m_list.get(i);
            ref.backfill(obj);
        }
    }

    /**
     * Get expected class name of referenced object.
     *
     * @return expected class name of referenced object
     */
    public String getExpectedClass() {
        return m_class;
    }
}
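A hypothetical usage sketch (the target object and the anonymous listener are illustrative; BackFillReference is the callback interface whose backfill(Object) method defineValue() invokes):

    BackFillHolder holder = new BackFillHolder("org.example.Target");

    // Register a forward reference; it will be filled in later.
    holder.addBackFill(new BackFillReference() {
        public void backfill(Object value) {
            System.out.println("resolved forward reference to " + value);
        }
    });

    // Once the referenced object is finally available:
    Object target = new Object();  // placeholder for the real object
    holder.defineValue(target);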
|
http://kickjava.com/src/org/jibx/runtime/impl/BackFillHolder.java.htm
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
This page is intended to be a repository of "gotchas", and other little quick fixes, that aren't rare but are just uncommon enough that you forget how you did it last time:) This page is NOT part of the official packaging guidelines.
Contents
- 1 Module version dependencies too much specific
- 2 Version contains underscore
- 3 Makefile.PL vs Build.PL
- 4 Tests
- 5 Autogenerated dependencies
- 6 rpmlint errors
- 7 Compatibility issues after upgrading Perl
- 8 New Perl specific spec file macros
Module version dependencies too much specific
Rounding approach
When writing (Build)Requires, you can find that the package requires or uses module Foo in a very specific version (e.g. use Foo 0.2001;). If you just copy the version to the spec file (Requires: perl(Foo) >= 0.2001) you can get unresolved dependencies, because packaged perl-Foo's provide shorter version numbers (e.g. perl(Foo) = 0.16, perl(Foo) = 0.20, perl(Foo) = 0.30).
This is caused by the fact that Perl processes version strings as fractional numbers, but RPM as integers (e.g. RPM compares 0 to 0, and 2001 to 30).
There is no right solution. Current practice is to round the dependency version up onto the same number of digits as the package version (e.g. Requires: perl(Foo) >= 0.2001 becomes Requires: perl(Foo) >= 0.21). Of course this approach is meaningful only if the current perl-Foo package has at least version 0.21.

If the required package does not exist in the requested version, the dependency package must be upgraded first (if upstream provides a newer version, e.g. 0.30). If the highest upstream version is only 0.2001, the dependency package can be upgraded to provide perl(Foo) = 0.2001, however its maintainer must keep using the augmented precision in future versions (e.g. instead of Provides: perl(Foo) = 0.30 she must write Provides: perl(Foo) = 0.3000) not to break RPM version comparison (each newer package must have an EVR string greater than the previous one).
Dot approach
In the feature, one could consider different less error-prone approach: Instead of version rounding, one could transform each fraction version digit to next level version integer.
E.g. CPAN 12.34 became RPM 12.3.4. This method preserves ordering of fraction numbers, allows extending to more specific numbers and does not request package maintainer to remember number of augmented digits he needs to support in his SPEC file.
One must note that transition to this method can happen only at major version number change (the part before decimal dot) or at cost of new RPM epocha number.
See perl-Geo-IPfree.spec for live example.
Version contains underscore
Sometimes it is needed update to development release marked with something like _01. To be 100% sure, you should use in spec file:
%real_version 1.32_01 Version: 1.32.1
Beware underscores in code if not evaluated can produce warnings and program aborts in strict mode (examplary bug):
$VERSION = "1.32_01";
This must be solved by adding eval and reported to upstream as a bug (see Version Numbering in perlmodstyle(1)):
$VERSION = "1.32_01"; $VERSION = eval $VERSION;
Makefile.PL vs Build.PL
Perl modules typically utilize one of two different build systems:
- ExtUtils::MakeMaker
- Module::Build
The two different styles are easily recognizable: ExtUtils::MakeMaker employs the Makefile.PL file, and it's the 'classical' approach; Module::Build is the (relatively) new kid on the block, with support for things ExtUtils::MakeMaker cannot do. While Module::Build was designed as a long-term replacement for ExtUtils::MakeMaker, it turned out that Module::Build lacks proper upstream maintenance thus favoring Module::Build does not look like a good idea now.
There are also other build systems you may encounter:
Module::Build::Tiny
Simplified reimplementation of Module::Build. Beware it does not support destdir=foo syntax like Module::Build, one has to use --destdir=foo (CPAN RT #85006).
inc::Module::Install
Bundled ExtUtils::MakeMaker guts. Upstream ships ExtUtils::MakeMaker modules in ./inc directory. While bundling configure-time dependencies is allowed in Fedora, one has to declare all used ./inc modules dependencies which is painful and error prone. Easier way is to prune ./inc and build-require relevant modules.
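For example, a spec file fragment along these lines (the Foo-Bar module name is hypothetical) drops the bundled copy and build-requires the packaged module instead:

    BuildRequires:  perl(inc::Module::Install)

    %prep
    %setup -q -n Foo-Bar-%{version}
    # use the packaged Module::Install rather than the bundled ./inc copy
    rm -rf inc/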
Tests
Tests / build steps requiring network access
This happens from time to time. Some package's tests (or other steps, e.g. signature validation) require network access to return success, but their actual execution isn't essential to the proper building of the package. In these cases, it's often nice to have a simple, transparent, clean way of enabling these steps on your local system (for, e.g., maximum testing), but to have them disabled when actually run through the buildsys/mock.
One easy way to do this is with the "--with" system of conditionals rpmbuild can handle. Running, e.g., "rpmbuild --with network_tests foo.src.rpm" is analagous to including a "--define '_with_network_tests 1'" on the command line. We can test for the existance of that conditional, and take (or not take!) certain actions based on it.
See, e.g., the perl-POE-Component-Client-HTTP spec file for an example.
One way to indicate this inside your spec is to prepend a notice along the lines of:
# some text # about the change
Then, at the point of the operation, e.g. "make test", that needs to be disabled silently under mock to enable the package build to succeed:
%check %{?!_with_network_tests:rm t/01* t/02* t/09* t/11* t/50* t/54*} make test
Now to execute local builds with the network bits enabled, either call rpmbuild with "--with network_tests" or add the line "%_with_network_tests 1" to your ~/.rpmmacros file. Remember to test with _with_network_tests undefined before submitting to the buildsys, to check for syntax errors!
Tests require X11 server
Some Perl bindings to graphical toolkits deliver tests that require access to an X11 server. Such tests can be run under Xvfb:
# It looks like xorg-x11-server-Xvfb-1.17.2-1.fc23.x86_64 does not need xorg-x11-xinit anymore
BuildRequires: font(:lang=en)
%endif

%check
%if %{use_x11_tests}
xvfb-run -a make test
%else
make test
%endif
Here the X11 tests are conditionalized by use_x11_tests boolean macro. Real example can be found in perl-Padre or perl-Tk-Pod spec files.
If xvfb-run script provided with xorg-x11-server-Xvfb package became unavailable (there were attempts to remove it), you could run Xvfb manually like this:
%check
%if %{use_x11_tests}
xinit /bin/sh -c 'rm -f ok; make test && touch ok' -- /usr/bin/Xvfb :666
test -e ok
%else
make test
%endif
Note xinit returns exit code of X server, not code of X client.
Autogenerated dependencies
Filtering autogenerated dependencies does not work
Unfortunately rpm offers (and removes) filtering mechanism on each new release and not all supported mechanisms are compatible.
If you package for Fedora 16 and higher, use %__*_exclude macros. If you package for F15 and older, use %filter_* macros.
If you want to have unified spec file for all Fedoras, you can use both of them, but remember %__*_exclude style inhibits %filter_* style and %__*_exclude style is supported since rpm 4.9. If you use %perl_default_filter, you need to put it in between the two styles to take effect properly (see some early Fedora 16 packages for examples).
Since Fedora 16, %perl_default_filter uses %__*_exclude style, so if you use %perl_default_filter, you need to define filtering in the %__*_exclude style too.
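A sketch of a unified spec fragment combining both styles, with %perl_default_filter placed between them as described above (the filtered names are placeholders, not taken from a real package):

    # Fedora 16 and later (rpm >= 4.9)
    %global __provides_exclude ^perl\\(Foo::Private\\)$

    %{?perl_default_filter}

    # Fedora 15 and older
    %filter_provides_in %{perl_vendorarch}/.*\.so$
    %filter_setup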
Some CPAN packages contain native code and the native code can provide Perl module. E.g. XS file contains:
MODULE=Wx PACKAGE=Wx::Process
and it's compiled into /usr/lib64/perl5/vendor_perl/auto/Wx/Wx.so. When including the main package Wx, the Wx::Process becomes accessible and other module can use it:
perl -MWx -e 'use base qw(Wx::Process);'
Problem is rpmbuild discovers Requires for perl(Wx::Process), but does not discover the Provides. This leads to unresolvable dependencies in RPM repository.
Until rpmbuild will be fixed, you need to provide modules declared only in shared objects manually:
%package Wx
Provides: perl(Wx::Process)
You can use a script to do it for you:
cd Wx-*
for i in `grep -r "PACKAGE=" * | cut -d " " -f 2 | sed 's|PACKAGE=|perl(|g' | grep "Wx::" | sort -n | uniq`; do
    printf "Provides: $i)\n"
done
This problem is described in bug #675992.
Autoprovides are missing version specifier
Not all submodules define their version. This is not good for current RPM.
Then rpmbuild generates following Provides:
$ rpm -q --provides -p perl-Statistics-Basic-1.6602-1.fc14.noarch.rpm
perl(Statistics::Basic) = 1.6602
perl(Statistics::Basic::ComputedVector)
perl(Statistics::Basic::Correlation)
[…]
perl-Statistics-Basic = 1.6602-1.fc14
According to Paul Howarth, an unversioned Provides satisfies a versioned Requires. E.g. if perl-Foo requires perl(Statistics::Basic::ComputedVector) >= 2.000 it will be satisfied by perl-Statistics-Basic-1.6602 because it provides perl(Statistics::Basic::ComputedVector).
IMHO, it sounds like a bug in RPM; you can work around it with the following Provides filtering:
%filter_from_provides s/^\(perl(Statistics::Basic\>[^=]*\)$/\1 = %{version}/
It will add a version specifier equal to the package version to each sub-module that is missing one.
Bugs in RPM dependency generator
If you find a bug or shortcoming in the dependency generator, report it in Bugzilla against the rpm component, give it a proper subject (e.g. use the prefix Perl dependency generator:), and make the bug block bug 694496 ((requirements, rpm) rpm requirements - provides/requires (tracker)).
rpmlint errors
file-not-utf8
Problem
W: perl-Class-MakeMethods file-not-utf8 /usr/share/man/man3/Class::MakeMethods::Docs::ReadMe.3pm.gz W: perl-Class-MakeMethods file-not-utf8 /usr/share/man/man3/Class::MakeMethods::Attribute.3pm.gz
Solution
Convert the errant file to UTF-8. Assuming the codepage the file is currently in is ISO-8859-1, this will do the trick (reviewers often want this done in the %prep section, or in %build for generated man pages):
cd blib/man3
for i in Docs::ReadMe.3pm Attribute.3pm ; do
    iconv --from=ISO-8859-1 --to=UTF-8 Class::MakeMethods::$i > new
    mv new Class::MakeMethods::$i
done
If you are using iconv, you should be BR'ing it, but it's in glibc-common, which is installed anyway...
Problem
W: private-shared-object-provides /usr/lib64/perl5/vendor_perl/auto/File/Map/Map.so Map.so()(64bit)
The Map.so is a private shared library dynamically loaded by the XS loader when a Perl binding to a C library is called. These files are not intended for public use and they must be filtered from Provides. In addition, the files have names similar to the original C libraries, which can clash while resolving RPM dependencies when installing packages.
Solution
Filter the unneeded Provides by %{?perl_default_filter} macro.
script-without-shellbang
Problem
Rpmlint returns something to the effect of:
E: perl-WWW-Myspace script-without-shellbang /usr/lib/perl5/vendor_perl/5.8.8/WWW/Myspace/Comment.pm E: perl-WWW-Myspace script-without-shellbang /usr/lib/perl5/vendor_perl/5.8.8/WWW/Myspace/MyBase.pm E: perl-WWW-Myspace script-without-shellbang /usr/lib/perl5/vendor_perl/5.8.8/WWW/Myspace/FriendAdder.pm
Solution
This error is caused by the exec bit being set on one or more .pm files. The solution is to strip the exec bit, for example, in the %install section:
find %{buildroot} -type f -name '*.pm' -exec chmod -x {} 2>/dev/null ';'
wrong-script-interpreter
Problem
E: wrong-script-interpreter /usr/share/perl5/vendor_perl/ExtUtils/xsubpp perl
Solution
Replace incorrect shebang at the end of %install section:
sed -i 's|#!perl|#!/usr/bin/perl|' %{buildroot}%{perl_vendorlib}/ExtUtils/xsubpp
Compatibility issues after upgrading Perl
ExtUtils::MakeMaker overrides CCFLAGS with Perl 5.14
Problem
When compiling package driven by ExtUtils::MakeMaker that utilizes CCFLAGS argument, you get message:
Not a CODE reference at /usr/lib/perl5/DynaLoader.pm
Solution
This is bug in ExtUtils::MakeMaker probably. It replaces CCFLAGS instead of adding them to Perl default ccflags. See message perl 5.14 ExtUtils::MakeMaker overriding CCFLAGS in perl-devel mailing list and Debian bug report.
New Perl specific spec file macros
If you find out that some code snippets repeat in your spec files, you could say: Hey, there should be a macro for that! Then propose the macro for inclusion into /etc/rpm/macros.perl. The file is owned by perl package and is automatically included by rpmbuild tool. Ask perl package maintainer for adding the macro.
|
https://fedoraproject.org/wiki/Perl/Tips
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
This document lays out the feature and API set for the eighth annual release of the Eclipse Object Constraint Language (Eclipse OCL) Project, version 4.1.0.
The plan for 4.1 was to complete the prototyping of models and solutions to the UML alignment and many other issues in the OMG OCL 2.3.1 specification.
Considerable progress was made, but not enough to permit the examples prototypes to be promoted as replacements for the legacy functionality.
The OCL to Java code generator was rewritten twice so that it now has a sound underlying model making the code simpler and more efficient and permitting optimizations to be realized easily.
The support for UML was improved so that the new functionality is now exploited within Papyrus for profiles and validation.
The project should now be in good shape to complete UML alignment and then support accurate auto-generation of Java for the many new and changed OCL constraints in the UML 2.5 specification.
Note that, since the OMG OCL 2.3.1 standard suffers from significant ambiguities and conflicts making a compliant implementation impossible, Eclipse (MDT) OCL 4.1.
Eclipse OCL 4.1 will use GIT for source control.
Eclipse OCL 4.1 will primarily target Eclipse 4.3 rather than Eclipse 3.9.
Eclipse OCL 4.1.0 source code will be available as versions tagged "Kepler" in the project's GIT repository.
In order to remain current, each Eclipse release targets reasonably current versions of the underlying operating environments. The Eclipse Object Constraint Language (OCL). Eclipse OCL will target the same Java version as EMF and UML2, which currently require Java 5. Eclipse Platform SDK 4.3 will be tested and validated on a number of reference platforms. Eclipse OCL will be tested and validated against a subset of those listed for the platform.
Indirect dependence on version 6 of the JRE has arisen through use of third-party components such as Google Guava. This may justify raising the lower bound explicitly for Luna.
A direct dependence on version 6 of the JRE exists only when dynamic compilation of auto-generated Java is exploited.
As described above, the Eclipse OCL 4.1.0 release should address usability of the editors. The main OCL plugins should be unaffected, but the associated examples plugins may be revised significantly.
Again as described above, the Eclipse OCL 4.1.0 release for Kepler will introduce significant new APIs in a new namespace that replaces the old. The old namespace will be deprecated once all Simultaneous Release projects have migrated to the new namespace.
|
http://www.eclipse.org/modeling/mdt/ocl/project-info/plan_kepler.xml
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
utility environment for writing addons in C
This layer offers a way to write at least simple Node.js addons in C without all the horrible C++ goop you'd otherwise be expected to use. That goop still exists, but you don't have to write it. More importantly, you can write your module in a sane programming environment, avoiding the confusing and error-prone C++ semantics.
Unlike most Node.js modules, v8+ does nothing by itself. It is intended to be used as a build-time dependency of your native addon, providing you with an alternate programming environment.
For full docs, read the source code.
v8+ works with, and has been tested to some extent with, Node.js 0.6.18, 0.8.1, 0.8.26, 0.10.24, and 0.11.10. It most likely works with other micro versions in the 0.6 and 0.8 series as well; if you are using 0.10, you will need 0.10.24 or later so that you have headers to build against. Note that this does not mean you can necessarily expect an addon built against a particular minor release of Node.js to work with any other minor release of Node.js.
Node 0.11.10 and later are also supported, and contain a new module API that v8plus can leverage to provide an entirely new model for building and using C modules.
The v8+ source code is compiled into your module directly along with your code. There is no separate v8+ library or node module, so the v8+ source, tools, and makefiles are required to be present at the time your module is built. They are not required at runtime.
Normally, your addon module will depend on the v8plus package and install it
using npm. The v8+ makefiles are set up to accommodate the installation of
v8+ anywhere
node(1) would be able to find it using
require() if it were
a normal JavaScript module, so simply including it as a dependency in your
package.json will work correctly. In addition, you will need to create a
(normally trivial) makefile for your module that includes the makefiles
distributed as part of v8+. Once you have done so, it is sufficient to run
gmake to generate the native loadable module used by Node.js.
The overall outline for creating a v8+ module looks something like this:
Write the C code that does whatever your module does. Be sure to #include "v8plus_glue.h". Do not include any other v8+ headers.
Create an appropriate
package.json file. See below for details.
Create a skeleton makefile. See below for details.
You should not (and need not) modify either of the delivered makefiles; override the definitions in Makefile.v8plus.defs in your makefile as appropriate.
There are two essential properties your
package.json must contain in order
to use v8+ with npm:
A dependency on
v8plus.
An appropriate script entry for building your module. It is strongly recommended that you use something like the following:
"postinstall": "gmake $(eval echo ${MAKE_OVERRIDES})"
This will allow someone building your module to set make variables by adding
them to the
MAKE_OVERRIDES environment variable; e.g.,
$ MAKE_OVERRIDES="CTFCONVERT=/bin/true CTFMERGE=/bin/true" npm install
The makefiles shipped with v8+ do the great majority of the heavy lifting for you. A minimally functional makefile for your addon must contain four things:
Variable definitions for V8PLUS and PREFIX_NODE. Alternately, you may choose to provide these on the command line or via the environment. It is recommended that these assignments be made exactly as follows, which will cause the addon to be built against the node that is found first in your path:

    PREFIX_NODE := $(shell dirname `bash -c 'hash node; hash -t node'`)/..
    V8PLUS := $(shell $(PREFIX_NODE)/bin/node -e 'require("v8plus");')
Note that the mechanism for finding
node will not work correctly if
yours is a symlink. This invocation of node(1) uses a v8+ mechanism to
locate v8+ sources anywhere that node(1) can find them and should not be
modified unless you want to test an alternate v8+.
The exact line:
include $(V8PLUS)/Makefile.v8plus.defs
Variable assignments specific to your module. In particular, you must define SRCS and MODULE. Note that ERRNO_JSON is no longer required nor used in v8plus 0.3 and later. Additional customisation is optional.
The exact line:
include $(V8PLUS)/Makefile.v8plus.targ
Additional arbitrary customisation is possible using standard makefile syntax; most things that are useful to change already have variables defined in Makefile.v8plus.defs whose values you may append to or override. For example, you may cause additional system libraries to be linked in by appending -lname to the LIBS variable. By default, the makefiles assume that your sources are located in the src subdirectory of your module, and that you want the sole output of the build process to be called $(MODULE).node and located in the lib subdirectory. This can be changed by overriding the MODULE_DIR variable.

A simple example makefile may be found in the examples/ subdirectory, and additional examples may be found in existing consumers; see Consumers below. The GNU people also provide a good manual for make if you get really stuck. In general, writing the necessary makefile fragment is expected to be as easy as or easier than the equivalent task using node-waf or node-gyp, so if you're finding it unexpectedly difficult or complicated there's probably an easier way.
The makefiles follow GNU make syntax; other makes may not work but patches that correct this are generally welcome (in particular, Sun make and GNU make have different and incompatible ways to set a variable from the output of a shell command, and there is no way I know to accommodate both).
By default, the resulting object is linked with the -zdefs option, which will cause the build to fail if any unresolved symbols remain. In order to accommodate this, a mapfile specifying the available global symbols in your node binary is automatically generated as part of the build process. This makes it much easier to debug missing libraries; otherwise, a module with unresolved symbols will fail to load at runtime with no useful explanation. Mapfile generation probably works only on illumos-derived systems. Patches that add support for other linkers are welcome.

Your module will have all symbols (other than init, which is used directly by Node.js) reduced to local visibility, which is strongly recommended. If for some reason you want your module's symbols to be visible to Node.js or to other modules, you will have to modify the script that generates the mapfile. See the $(MAPFILE) target in Makefile.v8plus.targ.
Your module is an object factory that instantiates and returns native objects, to which a fixed set of methods is attached as properties. The constructor, destructor, and methods all correspond 1-1 with C functions. In addition, you may create additional class methods associated with the native module itself, each of which will also have a 1-1 relationship to a set of C functions.
This functionality is generally sufficient to interface with the system in useful ways, but it is by no means exhaustive. Architectural limitations are noted throughout the documentation.
Subsequent sections describe the API in greater detail, along with most of
the C functions that v8+ provides. Some utility functions may not be listed
here; see
v8plus_glue.h for additional commentary and functions that are
available to you.
The interface between your module and v8+ consists of a handful of objects with fixed types and names. These are:
const v8plus_c_ctor_f v8plus_ctor = my_ctor; const v8plus_c_dtor_f v8plus_dtor = my_dtor; const char *v8plus_js_factory_name = "_new"; const char *v8plus_js_class_name = "MyObjectBinding"; const v8plus_method_descr_t v8plus_methods[] = { { md_name: "_my_method", md_c_func: my_method }, ... }; const uint_t v8plus_method_count = sizeof (v8plus_methods) / sizeof (v8plus_methods[0]); const v8plus_static_descr_t v8plus_static_methods[] = { { sd_name: "_my_function", sd_c_func: my_function }, ... }; const uint_t v8plus_static_method_count = sizeof (v8plus_static_methods) / sizeof (v8plus_static_methods[0]);
All of these must be present even if they have zero length or are NULL. The prototypes and semantics of each function type are as follows:
The constructor is responsible for creating the C object corresponding to the native JavaScript object being created. It is not a true constructor in that you are actually an object factory; the C++ function associated with the JavaScript constructor is called for you. Your encoded arguments are in ap. Allocate and populate a C object, stuff it into *opp, and return v8plus_void(). If you need to throw an exception you can do so by calling v8plus_throw_exception() or any of its wrappers. As of v8plus 0.3, you may no longer return an nvlist with an err member to throw an exception, and the _v8plus_errno global variable is no longer available.

Free the C object op and anything else associated with it. Your object is going away. This function may be empty if the constructor did not allocate any memory (i.e., op is not a pointer to dynamically allocated memory).
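A minimal sketch of such a constructor/destructor pair (the structure and the single string argument are hypothetical, and stdlib.h/string.h are assumed along with v8plus_glue.h):

    typedef struct my_obj {
        char *mo_name;
    } my_obj_t;

    static nvlist_t *
    my_ctor(const nvlist_t *ap, void **opp)
    {
        char *name;
        my_obj_t *mop;

        if (v8plus_args(ap, V8PLUS_ARG_F_NOEXTRA,
            V8PLUS_TYPE_STRING, &name, V8PLUS_TYPE_NONE) != 0)
                return (NULL);

        if ((mop = calloc(1, sizeof (my_obj_t))) == NULL ||
            (mop->mo_name = strdup(name)) == NULL) {
                /* a real module would report this via v8plus_throw_exception()
                 * or one of its wrappers, as described above */
                return (NULL);
        }

        *opp = mop;

        return (v8plus_void());
    }

    static void
    my_dtor(void *op)
    {
        my_obj_t *mop = op;

        free(mop->mo_name);
        free(mop);
    }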
When the JavaScript method is called in the context of your object, the corresponding C function is invoked. op is the C object associated with the JavaScript object, and ap is the encoded list of arguments to the function. Return an encoded object with a res member, or use one of the error/exception patterns.
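A hedged sketch of a method that takes one number argument and returns { res: <argument + 1> }; v8plus_obj() from v8plus_glue.h is assumed here as the convenience constructor for the returned object:

    static nvlist_t *
    my_method(void *op, const nvlist_t *ap)
    {
        double d;

        if (v8plus_args(ap, V8PLUS_ARG_F_NOEXTRA,
            V8PLUS_TYPE_NUMBER, &d, V8PLUS_TYPE_NONE) != 0)
                return (NULL);

        /* build the return object: { res: d + 1 } */
        return (v8plus_obj(
            V8PLUS_TYPE_NUMBER, "res", d + 1.0,
            V8PLUS_TYPE_NONE));
    }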
In addition to methods on the native objects returned by your constructor,
you can also provide a set of functions on the native binding object itself.
This may be useful for providing bindings to libraries for which no object
representation makes sense, or that have functions that operate outside the
context of any particular object. Your arguments are once again encoded in
ap, and your return values are an object containing
res or an error.
When JavaScript objects cross the boundary from C++ to C, they are converted from v8 C++ objects into C nvlists. This means that they are effectively passed by value, unlike in JavaScript or in native addons written in C++. The arguments to the JavaScript function are treated as an array and marshalled into a single nvlist whose properties are named "0", "1", and so on. Each such property is encoded as follows:
JavaScript functions are encoded as a special type that can be retrieved with nvlist_lookup_jsfunc() and matched by v8plus_args() with the V8PLUS_TYPE_JSFUNC token. This type is restricted; see below.
Because JavaScript arrays may be sparse, we cannot use the libnvpair array types. Consider them reserved for internal use. JavaScript Arrays are represented as they really are in JavaScript: objects with properties whose names happen to be integers.
Other data types cannot be represented and will result in a TypeError being thrown. If your object has methods that need other argument types, you cannot use v8+.
Side effects within the VM, including modification of the arguments, are not supported. If you need them, you cannot use v8+.
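Because the encoded arguments are an ordinary nvlist, they can be examined with plain libnvpair calls. A sketch that fetches the first argument as a number (the exception type chosen here is just an example):

/* Look up argument "0"; JavaScript numbers arrive as doubles. */
double d;

if (nvlist_lookup_double((nvlist_t *)ap, "0", &d) != 0) {
	v8plus_throw_exception("TypeError", "argument 0 must be a number",
	    V8PLUS_TYPE_NONE);
	return (NULL);
}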
While the standard libnvpair functions may be used to inspect the arguments
to a method or function, v8+ also provides the
v8plus_args() and
v8plus_typeof() convenience functions, which simplify checking the types
and obtaining the values of arguments.
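The declaration of v8plus_args(), approximately as implied by the example below (see v8plus_glue.h for the authoritative prototype):

/*
 * The variable argument list consists of type tags, each usually
 * followed by a pointer that receives the value, and is terminated by
 * V8PLUS_TYPE_NONE.  Returns 0 on success, or -1 with an exception
 * pending on mismatch.
 */
int v8plus_args(const nvlist_t *lp, uint_t flags, v8plus_type_t t, ...);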
This function checks
lp for the exact sequence of arguments specified by
the list of types provided in the parameter list. If
V8PLUS_ARG_F_NOEXTRA
is set in
flags, the list of arguments must match exactly, with no
additional arguments. The parameter list must be terminated by
V8PLUS_TYPE_NONE.
Following
flags is a list of argument data types and, for most data types,
pointers to locations at which the native C value of that argument should be
stored. The following JavaScript argument data types are supported; for
each, the parameter immediately following the data type parameter must be of
the indicated C type. This parameter may be
NULL, in which case the value
will not be stored anywhere.
In most cases, the behaviour is straightforward: the value pointer parameter provides a location into which the C value of the specified argument should be stored. If the entire argument list matches the template, each argument's C value is stored in its respective location. If not, no values are stored in the return value locations, an exception is set pending, and -1 is returned.
Three data types warrant further explanation: an argument of type
V8PLUS_TYPE_INVALID is any argument that may or may not match one of the
acceptable types. Its nvpair data type tag is stored and the argument
treated as matching. The value is ignored.
V8PLUS_TYPE_STRNUMBER64 is
used with strings that should be interpreted as 64-bit unsigned integers.
If the argument is not a string, or is not parseable as a 64-bit unsigned
integer, the argument will be treated as a mismatch. Finally,
V8PLUS_TYPE_INL_OBJECT is not supported with
v8plus_args(); JavaScript
objects in the argument list must be individually inspected as nvlists.
A simple example:
double_t d;
boolean_t b;
char *s;
v8plus_jsfunc_t f;

/*
 * This function requires exactly four arguments: a number, a
 * boolean, a string, and a callback function.  It is not acceptable
 * to pass superfluous arguments to it.
 */
if (v8plus_args(ap, V8PLUS_ARG_F_NOEXTRA,
    V8PLUS_TYPE_NUMBER, &d,
    V8PLUS_TYPE_BOOLEAN, &b,
    V8PLUS_TYPE_STRING, &s,
    V8PLUS_TYPE_JSFUNC, &f,
    V8PLUS_TYPE_NONE) != 0)
	return (NULL);
This function simply returns the v8+ data type corresponding to the
name/value pair
pp. If the value's type does not match the v8+ encoding
rules,
V8PLUS_TYPE_INVALID is returned. This function cannot fail and
does not set pending any exceptions.
Similarly, when returning data across the boundary from C to C++, a pointer to an nvlist must be returned. This object will be decoded in the same manner as described above and returned to the JavaScript caller of your method. Note that booleans, strings, and numbers will be encoded as their primitive types, not objects. If you need to return something containing these object types, you cannot use v8+. Other data types cannot be represented. If you need to return them, you cannot use v8+.
The nvlist being returned must have a single member named: "res", an nvpair containing the result of the call to be returned. The use of "err" to decorate an exception is no longer supported as of v8plus 0.3. You may return a value of any decodable type.
For convenience, you may return v8plus_void() instead of an nvlist, which indicates successful execution of a function that returns nothing.
In addition, the
v8plus_obj() routine is available for instantiating
JavaScript objects to return.
This function clears any pending exception and returns NULL. This is used to indicate to internal v8+ code that the method or function should not return a value.
This function creates and populates an nvlist conforming to the encoding
rules of v8+ for returning a value or creating an exception. It can be used
to create anything from a single encoded value to arbitrarily nested
objects. It is essentially the inverse of
v8plus_args() above, with a few
differences:
Unlike v8plus_args(), it accepts V8PLUS_TYPE_INL_OBJECT, which is followed by type, name, value triples describing the inline object's members, terminated with V8PLUS_TYPE_NONE.
This function can fail due to out-of-memory conditions, invalid or unsupported data types, or, most commonly, programmer error in casting the arguments to the correct type. It is extremely important that data values, particularly integers, be cast to the appropriate type (double) when passed into this function!
Following is a list of types and the C data types corresponding to their values:
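As a partial illustration of those pairings, the sketch below shows the C types used for the most common tags in this document's examples (illustrative only, not the complete list; note the explicit casts):

nvlist_t *lp = v8plus_obj(
    V8PLUS_TYPE_STRING, "name", (const char *)"a string",
    V8PLUS_TYPE_NUMBER, "num", (double)42,		/* always a double */
    V8PLUS_TYPE_BOOLEAN, "flag", (boolean_t)B_TRUE,
    V8PLUS_TYPE_STRNUMBER64, "big", (const char *)"18446744073709551615",
    V8PLUS_TYPE_NONE);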
A simple example, in which we return a JavaScript object with two members,
one number and one embedded object with a 64-bit integer property. Note
that if this function fails, we will return
NULL with an exception
pending.
int x;
const char *s;
...
return (v8plus_obj(
    V8PLUS_TYPE_INL_OBJECT, "res",
	V8PLUS_TYPE_NUMBER, "value", (double)x,
	V8PLUS_TYPE_INL_OBJECT, "detail",
	    V8PLUS_TYPE_STRNUMBER64, "value64", s,
	V8PLUS_TYPE_NONE,
	V8PLUS_TYPE_NONE,
    V8PLUS_TYPE_NONE));
The JSON representation of this object would be:
{ "res": { "value": <x>, "detail": { "value64": "<s>" } } }
You can also add or replace the values of properties in an existing nvlist,
whether created using
nvlist_alloc() directly or via
v8plus_obj(). The
effect is very similar to
nvlist_merge(), where the second list is created
on the fly from your argument list. The interpretation of the argument list
is the same as for
v8plus_obj(), and the two functions are implemented
using the same logic.
Prior to v8plus 0.3.0, the v8plus_errno_t enumerated type was controlled by
a consumer-supplied JSON file, allowing the consumer to specify the set of
error values. In v8plus 0.3.0 and newer, this type is fixed and contains
only a small set of basic errors that can be used with the
v8plus_error()
routine for compatibility with previous versions. In v8plus 0.3.0 and
later, consumers should explicitly throw exceptions instead.
In v8plus 0.3.0 and later, the
_v8plus_errno global no longer exists. If
your code examined this variable, there are two alternatives:
v8plus_exception_pending() will tell you whether an exception is currently pending.
v8plus_pending_exception() will provide an nvlist-encoded representation of the pending exception object.
A survey of consumers indicated that custom error codes,
_v8plus_errno,
and nontrivial uses of
v8plus_error() did not exist in consumers;
therefore this functionality has been removed.
All exceptions are generated and made pending by
v8plus_throw_exception()
or its wrappers, identified below. Only one exception may be pending at one
time, and a call to
v8plus_throw_exception() or its wrappers with an
exception already pending has no effect. Functions are provided for
clearing any pending exceptions, testing for the existence of a pending
exception, and obtaining (to inspect or modify) the current pending
exception; see API details below.
A pending exception will be ignored and will not be thrown if any of the following occurs prior to your C function (method, static method, or constructor) returning:
v8plus_clear_exception() is invoked, or
v8plus_void() is returned.
It is programmer error for a constructor to set its object pointer to NULL (or to not set it at all) and return without an exception pending.
Because a common source of exceptions is out-of-memory conditions, the space used by exceptions is obtained statically and is limited in size. This allows for exceptions to be thrown into V8 reliably, with enough information to debug the original failure even if that failure was, or was caused by, an out of memory condition. V8 may or may not provide a similar mechanism for ensuring that the C++ representation of exceptions is reliable.
Exceptions may be raised in any context; however, raising an exception in a context other than the V8 event thread will not by itself cause any JavaScript exception to be thrown; it is the consumer's responsibility to provide for an exception to be set pending in the event thread if it is to be made visible from JavaScript. Functions used to inspect or alter the state of the pending exception, if any, also work in any context.
v8plus_error() generates and makes pending a default exception based on the
value of
e and a message based on the formatted string
fmt using the
argument list that follows. The format string and arguments are interpreted
as by
vsnprintf(3c). NULL is returned, suitable for returning directly
from a C function that provides a method if no exception decoration is
required.
If
fmt is NULL, a generic default message is used.
This function is a wrapper for
v8plus_throw_exception().
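A sketch of typical use; buf and len are hypothetical, and V8PLUSERR_NOMEM is assumed to be among the basic codes retained in the fixed v8plus_errno_t enumeration:

/*
 * Fail the method with a formatted message; returning NULL propagates
 * the pending exception back to JavaScript.
 */
if ((buf = malloc(len)) == NULL)
	return (v8plus_error(V8PLUSERR_NOMEM,
	    "unable to allocate %zu bytes for the copy buffer", len));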
This function generates and makes pending an exception based on the system
error code
err and sets the error message to a non-localised explanation
of the problem. The string
propname, if non-NULL, is indicated in the
message as the name of the nvlist property being manipulated when the error
occurred. NULL is returned.
This function is a wrapper for
v8plus_throw_exception().
Analogous to
v8plus_error(), this function instead generates and sets
pending an exception derived from the system error code
err. Not all
error codes can be mapped; those that are not known are mapped onto an
unknown error string. The generated exception will contain additional
properties similar to those provided by node.js's
ErrnoException()
routine. See also
v8plus_throw_errno_exception().
This function is a wrapper for
v8plus_throw_exception().
v8plus_throw_exception() generates and sets pending an exception whose JavaScript type is
type, with
msg (or the empty string, if
msg is NULL), and optionally
additional properties as specified by a series type, name, value triples.
These triples have the same syntax as the arguments to
v8plus_obj() and
are likewise terminated by V8PLUS_TYPE_NONE.
The generated JavaScript exception will be thrown upon return from the
current constructor or method, unless
v8plus_clear_exception() is invoked
first, or
v8plus_void() is returned. The exception may be obtained via
v8plus_pending_exception() and its presence or absence tested via
v8plus_exception_pending().
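For example, a method rejecting a bad argument might throw a decorated TypeError as in this sketch (the mymod_argument property name is invented for illustration):

v8plus_throw_exception("TypeError",
    "first argument must be a non-empty string",
    V8PLUS_TYPE_STRING, "mymod_argument", (const char *)"name",
    V8PLUS_TYPE_NONE);
return (NULL);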
v8plus_throw_errno_exception() generates and sets pending an exception with type
Error and message
msg
(if
msg is NULL, the message will be automatically generated from your
system's
strerror() value for this error number). The exception will
further be decorated with properties indicating the relevant system call and
path, if the
syscall and
path arguments, respectively, are non-NULL, and
any additional properties specified as in
v8plus_throw_exception().
This function is a wrapper for
v8plus_throw_exception().
v8plus_exception_pending() returns B_TRUE if and only if an exception is pending.
v8plus_pending_exception() returns a pointer to the nvlist-encoded pending exception, if any exists; NULL otherwise. This object may be inspected and properties added to or removed from it.
v8plus_clear_exception() clears the pending exception, if any.
Immediately throw the pending exception. This is appropriate only in
the context of an asynchronous callback, in which there is no return
value; in all other cases, return the exception as the
err member of
the function's return value. Note that this is slightly different from
node::FatalException() in that it is still possible for a JavaScript
caller to catch and handle it. If it is absolutely essential that the
process terminate immediately, use
v8plus_panic() instead.
The main purpose of this facility is to allow re-throwing an exception generated by a JavaScript callback invoked from an asynchronous completion routine. The completion routine has no way to return a value, so this is the only way to propagate the exception out of the native completion routine.
This function may be called only on the main event loop thread.
This function indicates a fatal runtime error. The format string
fmt and
subsequent arguments are interpreted as by
vsnprintf(3c) and written to
standard error, which is then flushed.
abort(3c) or similar is then
invoked to terminate the Node.js process in which the addon is running. Use
of this function should be limited to those circumstances in which an
internal inconsistency has been detected that renders further progress
hazardous to user data or impossible.
There are two main types of asynchrony supported by v8+. The first is the
deferred work model (using
uv_queue_work() or the deprecated
eio_custom() mechanisms) frequently written about and demonstrated by
various practitioners around the world. In this model, your method or
function takes a callback argument and returns immediately after enqueuing a
task to run on one of the threads in the Node.js worker thread pool. That
task consists of a C function to be run on the worker thread, which may not
use any V8 (or v8+) state, and a function to be run in the main event loop
thread when that task has completed. The latter function is normally
expected to invoke the caller's original callback. In v8+, this takes the
following form:
void *
async_worker(void *cop, void *ctxp)
{
	my_object_t *op = cop;
	my_context_t *cp = ctxp;
	my_result_t *rp = ...;

	/*
	 * In thread pool context -- do not call any of the
	 * following functions:
	 * v8plus_obj_hold()
	 * v8plus_obj_rele_direct()
	 * v8plus_jsfunc_hold()
	 * v8plus_jsfunc_rele_direct()
	 * v8plus_call_direct()
	 * v8plus_method_call_direct()
	 * v8plus_defer()
	 *
	 * If you touch anything inside op, you may need locking to
	 * protect against functions called in the main thread.
	 */
	...

	return (rp);
}

void
async_completion(void *cop, void *ctxp, void *resp)
{
	my_object_t *op = cop;
	my_context_t *cp = ctxp;
	my_result_t *rp = resp;
	nvlist_t *cbap;
	nvlist_t *cbrp;

	...

	cbap = v8plus_obj(
	    V8PLUS_TYPE_WHATEVER, "0", rp->mr_value,
	    V8PLUS_TYPE_NONE);

	if (cbap != NULL) {
		cbrp = v8plus_call(cp->mc_callback, cbap);
		nvlist_free(cbap);
		nvlist_free(cbrp);
	}

	v8plus_jsfunc_rele(cp->mc_callback);
	free(cp);
	free(rp);
}

nvlist_t *
async(void *cop, const nvlist_t *ap)
{
	my_object_t *op = cop;
	v8plus_jsfunc_t cb;
	my_context_t *cp = malloc(sizeof (my_context_t));

	...

	if (v8plus_args(ap, 0, V8PLUS_TYPE_JSFUNC, &cb,
	    V8PLUS_TYPE_NONE) != 0) {
		free(cp);
		return (NULL);
	}

	v8plus_jsfunc_hold(cb);
	cp->mc_callback = cb;

	v8plus_defer(op, cp, async_worker, async_completion);

	return (v8plus_void());
}
This mechanism uses
uv_queue_work() and as such will tie up one of the
worker threads in the pool for as long as
async_worker is running.
The other asynchronous mechanism is the Node.js
EventEmitter model. This
model requires some assistance from JavaScript code, because v8+ native
objects do not inherit from
EventEmitter. To make this work, you will
need to create a JavaScript object (the object your consumers actually use)
that inherits from
EventEmitter, hang your native object off this object,
and populate the native object with an appropriate method that will cause
the JavaScript object to emit events when the native object invokes that
method. A simple example might look like this:
var util = require('util');
var binding = require('./native_binding');
var events = require('events');

function MyObjectWrapper() {
	var self = this;

	events.EventEmitter.call(this);
	this._native = binding._create.apply(this,
	    Array.prototype.slice.call(arguments));
	this._native._emit = function () {
		var args = Array.prototype.slice.call(arguments);
		self.emit.apply(self, args);
	};
}
util.inherits(MyObjectWrapper, events.EventEmitter);
Then, in C code, you must arrange for libuv to call a C function in the
context of the main event loop. The function
v8plus_method_call() is safe
to call from any thread: depending on the context in which it is invoked, it
will either make the call directly or queue the call in the main event loop
and block on a reply. Simply arrange to call back into your JavaScript
object when you wish to post an event:
nvlist_t *eap;
nvlist_t *erp;
my_object_t *op = ...;
...
eap = v8plus_obj(
    V8PLUS_TYPE_STRING, "0", "my_event",
    ...,
    V8PLUS_TYPE_NONE);

if (eap != NULL) {
	erp = v8plus_method_call(op, "_emit", eap);
	nvlist_free(eap);
	nvlist_free(erp);
}
This example will generate an event named "my_event" and propagate it to
listeners registered with the
MyObjectWrapper instance. If additional
arguments are associated with the event, they may be added to
eap and will
also be passed along to listeners as arguments to their callbacks.
Places a hold on the V8 representation of the specified C object. This is
rarely necessary;
v8plus_defer() performs this action for you, but other
asynchronous mechanisms may require it. If you are returning from a method
call but have stashed a reference to the object somewhere and are not
calling
v8plus_defer(), you must call this first. Holds and releases must
be balanced. Use of the object within a thread after releasing is a bug.
This hold includes an implicit event loop hold, as if
v8plus_eventloop_hold()
was called.
Releases a hold placed by
v8plus_obj_hold(). This function may be called
safely from any thread; releases from threads other than the main event loop
are non-blocking and will occur some time in the future. Releases the
implicit event loop hold obtained by
v8plus_obj_hold().
Places a hold on the V8 representation of the specified JavaScript function.
This is required when returning from a C function that has stashed a
reference to the function, typically to use it asynchronously as a callback.
All holds must be balanced with a release. Because a single hold is placed
on such objects when passed to you in an argument list (and released for you
when you return), it is legal to reference and even to invoke such a
function without first placing an additional hold on it. This hold includes
an implicit event loop hold, as if
v8plus_eventloop_hold() was called.
Releases a hold placed by
v8plus_jsfunc_hold(). This function may be called
safely from any thread; releases from threads other than the main event loop
thread are non-blocking and will occur some time in the future. Releases
the implicit event loop hold obtained by
v8plus_jsfunc_hold().
Enqueues work to be performed in the Node.js shared thread pool. The object
op and context
ctx are passed as arguments to
worker executing in a
thread from that pool. The same two arguments, along with the worker's
return value, are passed to
completion executing in the main event loop
thread. See example above.
Places a hold on the V8 event loop. V8 will terminate when it detects that
there is no more work to do. This liveliness check includes things like open
sockets or file descriptors, but only if they are tracked by the event loop
itself. If you are using multiple threads, some of which may block waiting
for input (e.g. a message subscription thread), then you will need to prevent V8
from terminating prematurely. This function must be called from within the
main event loop thread. Each hold must be balanced with a release. Note that
holds on objects or functions obtained via
v8plus_obj_hold() or
v8plus_jsfunc_hold() will implicitly hold the event loop for you.
Release a hold on the V8 event loop. If there are no more pending events or input sources, then V8 will generally terminate the process shortly afterward. This function may be called safely from any thread; releases from threads other than the main event loop thread are non-blocking and will occur some time in the future.
Calls the JavaScript function referred to by
f with encoded arguments
ap. The return value is the encoded return value of the function. The
argument and return value encoding match the encodings that are used by C
functions that provide methods.
As JavaScript functions must be called from the event loop thread,
v8plus_call() contains logic to determine whether we are in the
correct context or not. If we are running on some other thread we will
queue the request and sleep, waiting for the event loop thread to make the
call. In the simple case, where we are already in the correct thread, we
make the call directly.
Note that when passing JavaScript functions around as callbacks, you must
first use
v8plus_jsfunc_hold() from within the main event loop thread. Once
finished with the function, you may pass it to
v8plus_jsfunc_rele() from any
thread to clean up.
Calls the method named by
name in the native object
op with encoded
argument list
ap. The method must exist and must be a JavaScript
function. Such functions may be attached by JavaScript code as in the event
emitter example above. The effects of using this function to call a native
method are undefined.
When called from threads other than the main event loop thread,
v8plus_method_call() uses the same queue-and-block logic as described above
in
v8plus_call().
Because C++ is garbage. Writing good software is challenging enough without trying to understand a bunch of implicit side effects or typing templated identifiers that can't fit in 80 columns without falling afoul of the language's ambiguous grammar. Don't get me started.
FFI is really cool; it offers us the ability to use C libraries without writing bindings at all. However, it also exposes a lot of C nastiness to JavaScript code, essentially placing the interface boundary in consuming code itself. This pretty much breaks the JavaScript interface model -- for example, you can't really have a function that inspects the types of its arguments -- and requires you to write an additional C library anyway if you want or need to do something natively that's not quite what the C library already does. Of course, one could use it to write "bindings" in JavaScript that actually look like a JavaScript interface, which may end up being the best answer, especially if those are autogenerated from CTF! In short, v8+ and FFI are different approaches to the problem. Use whichever fits your need, and note that they're not mutually exclusive, either.
illumos distributions, or possibly other platforms with a working libnvpair. I'm sorry if your system doesn't have it; it's open source and pretty easy to port.
There is an OSX port; see the ZFS port's implementation. Unfortunately this port lacks the requisite support for floating-point data (DATA_TYPE_DOUBLE) but you could easily add that from the illumos sources.
Fuck python, fuck WAF, and fuck all the hipster douchebags for whom make is too hard, too old, or "too Unixy". Make is simple, easy to use, and extremely reliable. It was building big, important pieces of software when your parents were young, and it Just Works. If you don't like using make here, you probably don't want to use v8+ either, so just go away. Write your CoffeeScript VM in something else, and gyp-scons-waf-rake your way to an Instagram retirement in Bali with all your hipster douchebag friends. Just don't bother me about it, because I don't care.
Most likely, your module has a typo or needs to be linked with a library. Normally, shared objects like Node addons should be linked with -zdefs so that these problems are found at build time, but Node doesn't deliver a mapfile specifying its API so you're left with a bunch of undefined symbols you just have to hope are defined somewhere in your node process's address space. If they aren't, you're boned. LD_DEBUG=all will help you find the missing symbol(s).
As of 0.0.2, v8+ builds a mapfile for your node binary at the time you build your addon. It does not attempt to restrict the visibility of any symbols, so you will not be warned if your addon is using private or deprecated functionality in V8 or Node.js. Your build will, however, fail if you've neglected to link in any required libraries, typo'd a symbol name, etc.
Be careful when decorating exceptions. There are several built-in hidden properties; if you decorate the exception with a property with the same name, you will change the hidden property's value but it will still be hidden. This almost certainly is not what you want, so you should prefix the decorative property names with something unique to your module to avoid stepping on V8's (or JavaScript's) property namespace.
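For example, as a sketch (the mymod_ prefix and the unit value are invented for illustration):

/*
 * A module-specific prefix keeps the decoration from shadowing a
 * hidden built-in property such as "message" or "stack".
 */
v8plus_throw_exception("Error", "device not ready",
    V8PLUS_TYPE_NUMBER, "mymod_unit", (double)unit,
    V8PLUS_TYPE_NONE);
return (NULL);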
See "License" below. Note also that one can export plain functions as well.
You are passing an object with the wrong C type to
v8plus_obj(). Like
all varargs functions, it cannot tell the correct size or type of the
objects you have passed it; they must match the preceding type argument or
it will not work correctly. In this particular case, you've most likely
done something like:
int foo = 0xdead;
v8plus_obj(V8PLUS_TYPE_NUMBER, "foo", foo, V8PLUS_TYPE_NONE);
An 'int' is 4 bytes in size, and the compiler reserves 4 bytes on the stack
and sticks the value of foo there. When
v8plus_obj goes to read it, it
sees that the type is V8PLUS_TYPE_NUMBER, casts the address of the next
argument slot to a
double *, and dereferences it, then moves the argument
list pointer ahead by the size of a double. Unfortunately, a double
is usually 8 bytes long, meaning that (a) the value of the property is going
to be comprised of the integer-encoded foo appended to the next data type,
and (b) the next data type is going to be read from either undefined memory
or from part of the address of the name of the next property. To cure this,
always make sure that you cast your integral arguments properly when using
V8PLUS_TYPE_NUMBER:
v8plus_obj(V8PLUS_TYPE_NUMBER, "foo", (double)foo, V8PLUS_TYPE_NONE);
MIT.
See.
This is an incomplete list of native addons known to be using v8+. If your addon uses v8+, please let me know and I will include it here.
|
https://www.npmjs.com/package/v8plus
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Hello People of the java programming forums!
I am a big noob with Java and programming, but nevertheless have got myself in the deep end with this.
I intended to translate a Java mobile game into English. I have managed to translate all but some images of menu icons. The files which contain the menu icons are in a strange format; it took me a long time just to open them, but I have opened them now and edited them. However, I do not know how I can make them back into the format they were in. They seem to be PNG files: from what I can see after opening them in a text editor and reading the file header, and after researching, I have come to the conclusion they are BufferedImages with a transparent colour in RGB PNG. If anyone can help me create these images so that the game understands them I would be so grateful. I've been trying to learn Java, but I think I need to learn a lot more just to even approach this problem, and I could be totally wrong.
I can upload the whole game and any files which may help determine the problem.
Here is the code which I can't understand.
Code :
import java.awt.*;
import java.awt.image.BufferedImage;
import java.awt.image.PixelGrabber;
import javax.swing.ImageIcon;

public class Pictures {

    public static BufferedImage toBufferedImage(Image image) {
        if (image instanceof BufferedImage) {
            return (BufferedImage) image;
        }

        // This code ensures that all the pixels in the image are loaded
        image = new ImageIcon(image).getImage();

        // Determine if the image has transparent pixels
        boolean hasAlpha = hasAlpha(image);

        // Create a buffered image with a format that's compatible with the
        // screen
        BufferedImage bimage = null;
        GraphicsEnvironment ge = GraphicsEnvironment
                .getLocalGraphicsEnvironment();
        try {
            // Determine the type of transparency of the new buffered image
            int transparency = Transparency.OPAQUE;
            if (hasAlpha == true) {
                transparency = Transparency.BITMASK;
            }

            // Create the buffered image
            GraphicsDevice gs = ge.getDefaultScreenDevice();
            GraphicsConfiguration gc = gs.getDefaultConfiguration();
            bimage = gc.createCompatibleImage(image.getWidth(null), image
                    .getHeight(null), transparency);
        } catch (HeadlessException e) {
        } // No screen

        if (bimage == null) {
            // Create a buffered image using the default color model
            int type = BufferedImage.TYPE_INT_RGB;
            if (hasAlpha == true) {
                type = BufferedImage.TYPE_INT_ARGB;
            }
            bimage = new BufferedImage(image.getWidth(null), image
                    .getHeight(null), type);
        }

        // Copy image to buffered image
        Graphics g = bimage.createGraphics();

        // Paint the image onto the buffered image
        g.drawImage(image, 0, 0, null);
        g.dispose();

        return bimage;
    }

    public static boolean hasAlpha(Image image) {
        // If buffered image, the color model is readily available
        if (image instanceof BufferedImage) {
            return ((BufferedImage) image).getColorModel().hasAlpha();
        }

        // Use a pixel grabber to retrieve the image's color model;
        // grabbing a single pixel is usually sufficient
        PixelGrabber pg = new PixelGrabber(image, 0, 0, 1, 1, false);
        try {
            pg.grabPixels();
        } catch (InterruptedException e) {
        }

        // Get the image's color model
        return pg.getColorModel().hasAlpha();
    }

    public static void main(String[] args) {
        String myImage = "smiley-icon-5.jpg";
        Image image = java.awt.Toolkit.getDefaultToolkit().createImage(myImage);
        Pictures p = new Pictures();
        p.toBufferedImage(image);
    }
}
Here is an example of the files:
one is the original, with the extension .p and saved in a zip file,
and one is edited and saved in PNG format.
I need to convert the PNG file into the .p file.
The .p files will not open in any paint program I have found except IrfanView.
Thank you in advance. Leo aka Vexst
|
http://www.javaprogrammingforums.com/%20java-me-mobile-edition/4598-creating-bufferedimage-printingthethread.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Overview).
[ cos(theta)   -sin(theta)   0   x-x*cos+y*sin ]
[ sin(theta)    cos(theta)   0   y-x*sin-y*cos ]
[ 0             0            1   z             ]
the code:
import javafx.scene.text.*;
import javafx.scene.transform.*;

Text {
    transforms: Rotate { angle: 30 }
    x: 10
    y: 50
    font: Font { size: 20 }
    content: "This is a test"
}
produces:
Profile: common
|
http://docs.oracle.com/cd/E17802_01/javafx/javafx/1.3/docs/api/javafx.scene.transform/javafx.scene.transform.Rotate.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
The WG Charter has been extended through July 31 (only the date has been changed in the revised Charter document). The IP clause of the Charter will probably be changed again (through an AC review) to reference the Current Patent Policy.
No changes requested - the posted version of the minutes is approved.
- Issue 205
  DF: Last telcon (May 1) we ran out of time before revisiting issue 205, and this issue was inadvertently omitted from this week's (May 8) telcon agenda. HFN has pointed out that MarkB is satisfied with our resolution of 205 when coupled with an additional friendly amendment (). Is anyone not comfortable incorporating HFN's amendment?
  MH: can you restate this?
  HFN restates amendment, see [100].
  No-one objects to incorporating amendment.
- Primer: Nothing to report
- Spec: Editors: the editor's to-do list is pretty much empty now, will work on readability checks.
  OH: conformance document will be linked
  CF: The spec diffs are OK but indicating changes directly in the text is much better
  MH: will work on this.
  DF: The next diffs should be small, and it may not be worth working on this.
- TBTF: DF: Most of the items from the last TBTF meeting are in the agenda of this meeting. HFN volunteered to draft the text for the HTTP binding as requested in the agenda.
- Conformance: Anish: New document posted and linked from the team page. Working for integration for May 14th. It will be up to date for the latest spec. Some editorial issues may still be there but the core will be there for the WG to review.
  PC: How many tests come from implementors, and how many from reading the spec?
  OH: We do not have an answer to that.
- Usage scenarios: Nothing to report.
- Requirements: Nothing to report
- Email binding: HMM: Is working to bring the document up to date with the changes in the HTTP MEP.
None reported.
Conformance document is pretty much up to date and it will be available next week. We also have outstanding issues and we have to resolve them before going to LC. If we can finish issues by May 14th, we can start review of the whole thing and plan to go to LC. Assuming the major issues are finished on this telcon, the editors indicate they can have a new snapshot by May 14th, possibly by the end of the previous week. DF: when the new spec snapshot becomes available, the WG should do a detailed review. Any other small details that are completed after May 14th can be reviewed separately. WG agrees.
- Issue 195
  HFN: the proposal has 2 sides, we have qname or we don't say anything about return values.
  Noah: we need to clarify our goals in resolving this issue. RPC is a wire representation, used by many languages. Troubles arise when you have optional return values.
  HFN: we got the same pb with SOAP/1.1 it might be a good idea to have it explicit in the message.
  DF: Do we care to provide that explicit token?
  HFN: +1 to Noah, it would be good to have something explicit there.
  MB: +1
  RW: not sure it is necessary, especially in the proposed way.
  OH: potentially useful, from implementors point of view it is clearly a plus.
  DF: sounds like the majority wants an explicit marker, so what kind of marker should we have?
  HFN: the choices are:
    - local names (has sets of pb, conflicts...)
    - qnames (can't be easily described easily in a schema)
    - global unique name (wrapper)
  CF: use the ID of the return value and identify in a header
  RW: the Wrapper helps a little but not as much as I would like. Another alternative is to identify outbound edge, a clearly identify type that identify the wname of the return value.
  OH: is that like Chris' reference option?
  DF: wrapper seems to be the least unliked option
  MH: Ray talked against wrapper, did someone say anything against qnames?
  DF: qnames seems to be the best option then. Does anyone object to resolving issue 195 using QName to identify the result?
  No-one objected.
  DF: Issue 195 is closed with this resolution. RayW to write resolution text.
- dealing with root
  MH has provided a summary of the root issue [101]
  NM: we allow refs from the header to the body and the reverse.
  MH: we identify rpc struct with tagging.
  HFN: the body is one struct, you can't have multiple top level using RPC convention, as it is not allowed.
  HFN: agreed we don't need a root attribute.
  MH: should we add that in the RPC convention there should be only one child?
  HFN: if you are not using RPC you can have as many top levels as you want. RPC is more restrictive.
  DF: Option a in [101] (explicitly disallow top-level multi-refs) seems to resolve the root issue. Does anyone object to resolving the root issue by adopting this option?
  No one objects
  DF: we will adopt this formulation.
- issue 194
  DF: Proposal is that the encoding style attribute cannot appear in any elements defined in the envelope namespace.
  MH: I think "cannot" is "MUST NOT".
  DF: Is there any objection to closing issue 194 by saying the encoding style attribute must not appear in any elements defined in the envelope namespace?
  No one objects.
  DF: issue 194 is closed with this resolution.
- proposal by TBTF to update return code 204
  HFN: not supporting 204 will make our integration with HTTP more problematic (e.g. for some MEPs).
  NM: if 204 would be mapped to a non-response fixed SOAP reply. Then it would be treated at the SOAP level. HTTP 204 says "it's a success"
  YL: 202 doesn't make any assumption wrt processing as 204 do, is the same model valid there?
  (Scribe missed the reply.)
  CF: having no body back is valid.
  HFN: the SOAP processor can see that the 204' SOAP reply is generated
  MH: it would contain no header, so if we have a "echo this header", it will fail.
  MH: I really don't think this proposal would work.
  DF: discussion will continue on email and hope to have it for next telcon.

[100]
[101]
|
http://www.w3.org/2000/xp/Group/2/05/8-minutes.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Pipes.Text.IO
Synopsis
- fromHandle :: MonadIO m => Handle -> Producer Text m ()
- stdin :: MonadIO m => Producer Text m ()
- readFile :: MonadSafe m => FilePath -> Producer Text m ()
- toHandle :: MonadIO m => Handle -> Consumer' Text m r
- stdout :: MonadIO m => Consumer' Text m ()
- writeFile :: MonadSafe m => FilePath -> Consumer' Text m ()
Text IO
Where pipes
IO replaces lazy
IO,
Producer Text IO r replaces lazy
Text.
This module exports some convenient functions for producing and consuming
pipes
Text in
IO, namely,
readFile,
writeFile,
fromHandle,
toHandle,
stdin and
stdout. Some caveats described below.
The main points are as in Pipes.ByteString:
A
Handle can be associated with a
Producer or
Consumer according
as it is read or written to.
import Pipes import qualified Pipes.Text as Text import qualified Pipes.Text.IO as Text import System.IO main = withFile "inFile.txt" ReadMode $ \hIn -> withFile "outFile.txt" WriteMode $ \hOut -> runEffect $ Text.fromHandle hIn >-> Text.toHandle hOut
To stream from files, the following is perhaps more Prelude-like (note that it uses Pipes.Safe):
import Pipes import qualified Pipes.Text as Text import qualified Pipes.Text.IO as Text import Pipes.Safe main = runSafeT $ runEffect $ Text.readFile "inFile.txt" >-> Text.writeFile "outFile.txt"
Finally, you can stream to and from
stdin and
stdout using the predefined
stdin
and
stdout pipes, as with the following "echo" program:
main = runEffect $ Text.stdin >-> Text.stdout
Caveats
The operations exported here are a convenience, like the similar operations in
Data.Text.IO (or rather,
Data.Text.Lazy.IO, since, again,
Producer Text m r is
'effectful text' and something like the pipes equivalent of lazy Text.)
- Like the functions in
Data.Text.IO, they attempt to work with the system encoding.
- Like the functions in
Data.Text.IO, they are significantly slower than ByteString operations. Where you know what encoding you are working with, use
Pipes.ByteStringand
Pipes.Text.Encodinginstead, e.g.
view utf8 Bytes.stdininstead of
Text.stdin
- Like the functions in
Data.Text.IO, they use Text exceptions, not the standard Pipes protocols.
Something like
view utf8 . Bytes.fromHandle :: Handle -> Producer Text IO (Producer ByteString m ())
yields a stream of Text, and follows standard pipes protocols by reverting to (i.e. returning) the underlying byte stream upon reaching any decoding error. (See especially the pipes-binary package.)
By contrast, something like
Text.fromHandle :: Handle -> Producer Text IO ()
supplies a stream of text returning '()', which is convenient for many tasks,
but violates the pipes
pipes-binary approach to decoding errors and
throws an exception of the kind characteristic of the
text library instead.
Producers
fromHandle :: MonadIO m => Handle -> Producer Text m ()
readFile :: MonadSafe m => FilePath -> Producer Text m ()
Stream text from a file in the simple fashion of
Data.Text.IO
>>> runSafeT $ runEffect $ Text.readFile "hello.hs" >-> Text.map toUpper >-> hoist lift Text.stdout
MAIN = PUTSTRLN "HELLO WORLD"
Consumers
toHandle :: MonadIO m => Handle -> Consumer' Text m r
Convert a text stream into a
Handle
Note: again, for best performance, where possible use
(for source (liftIO . hPutStr handle)) instead of
(source >-> toHandle handle).
stdout :: MonadIO m => Consumer' Text m ()
|
https://hackage.haskell.org/package/pipes-text-0.0.0.12/docs/Pipes-Text-IO.html
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
23 Nov 04:27 2012
How to design matrix on edgeR to study genotype x environmental interaction
Dear Daniela, I think you would be very well advised to seek out a statistical bioinformatician with whom you can collaborate on an ongoing basis. A GxE anova analysis would be statistically sophisticated even if you were analysing a simple univariate phenotypic trait. Attempting to do that sort of analysis in the context of an RNA-Seq experiment on miRNAs is far more difficult again. The design matrices you have created may be correct, but that's just the start of the analysis, and there are many layers of possible complexity. The BCV in your experiment is so large that I feel there must be quality issues with your data that you have not successfully dealt with. It seems very likely, for example, that there are batch effects that you have not yet described. To answer some specific questions: You might be better off with prior.df=10 instead the default, but this has little to do with the size of the BCV. You ask why one variety and one stage are disappearing from your design matrix. If you omit the "0+" in the first formula (and you should), you will find that one vineyard will disappear as well. This is because the number of contrasts for any factor must be one less than the number of leveles. This is a very fundamental feature of factors and model formula that you need to become familiar with before you can make sense of any model formula. Your email makes no mention of library sizes or sequencing depths, but obviously that has a fundamental effect on what is significantly different from what. I think you know now how to use edgeR in principle. However, as you probably already appreciate, deciding what is the right analysis for your data is beyond the scope of the mailing list. Best wishes Gordon On Thu, 22 Nov 2012, bioconductor-request@... wrote: > Date: Thu, 22 Nov 2012 10:07:19 +0100 > From: Daniela Lopes Paim Pinto <d.lopespaimpinto@...> > To: bioconductor@... > Subject: Re: [BioC] How to design matrix on edgeR to study genotype x > environmental interaction > Message-ID: > > Dear Gordon, > > Thank you so much for your valuable input. I took sometime to study a bit > more and be able to consider all the aspects you pointed out. At this time > I reconsider the analysis and started again, with the data exploration of > all 48 samples. > > First I filtered out the low reads, considering just the ones with more > than 1 cpm in at least 2 libraries (I have two replicates of each library); > the MDS plot clearly separate one of the locations from the other two > (dimension 1) and with less distinction the two varieties (dimension 2). > The stages also seems to be separated in two groups (the first two ones > together and separate of the two last ones) but as the varieties, not so > distinct. The two replicates are also consistent. > > With the BCV plot I could observe that reads with lower logCPM have bigger > BCV (the BCV value was equal to 0.5941), and then comes my first question: > > Should I choose *prior.df* different from the default, due to this > behavior, when estimating genewise dispersion? > > To proceed with the DE analysis, I tried two approaches, this time with all > the 48 samples, as suggested. 
> For both approaches, I have the following data frame: > >> target > Sample Vineyard Variety Stage > 1 1 mont CS ps > 2 2 mont CS ps > 3 4 mont CS bc > 4 5 mont CS bc > 5 7 mont CS 19b > 6 8 mont CS 19b > 7 10 mont CS hv > 8 11 mont CS hv > 9 13 mont SG ps > 10 14 mont SG ps > 11 16 mont SG bc > 12 17 mont SG bc > 13 19 mont SG 19b > 14 20 mont SG 19b > 15 22 mont SG hv > 16 23 mont SG hv > 17 25 Bol CS ps > 18 26 Bol CS ps > 19 28 Bol CS bc > 20 29 Bol CS bc > 21 31 Bol CS 19b > 22 32 Bol CS 19b > 23 34 Bol CS hv > 24 35 Bol CS hv > 25 37 Bol SG ps > 26 38 Bol SG ps > 27 40 Bol SG bc > 28 41 Bol SG bc > 29 43 Bol SG 19b > 30 44 Bol SG 19b > 31 46 Bol SG hv > 32 47 Bol SG hv > 33 49 Ric CS ps > 34 50 Ric CS ps > 35 52 Ric CS bc > 36 53 Ric CS bc > 37 55 Ric CS 19b > 38 56 Ric CS 19b > 39 58 Ric CS hv > 40 59 Ric CS hv > 41 61 Ric SG ps > 42 62 Ric SG ps > 43 64 Ric SG bc > 44 65 Ric SG bc > 45 67 Ric SG 19b > 46 68 Ric SG 19b > 47 70 Ric SG hv > 48 71 Ric SG hv > > At the first instance, I used the full interaction formula as the following > code: > >> d <- DGEList(counts=file) >> keep <- rowSums(cpm(DGElist) > 1) >= 2 >> DGElist <- DGElist[keep,] >> DGElist$samples$lib.size <- colSums(DGElist$counts) >> DGElist_norm <- calcNormFactors(DGElist) > *> design <- model.matrix(~0 + Vineyard + Variety + Stage + > Vineyard:Variety + Vineyard:Stage + Variety:Stage + Vineyard:Variety:Stage, > data=target)* > > [or even (*> design <- model.matrix(~0 + Vineyard*Variety*Stage, > data=target)*) which gives the same result] > >> rownames(design) <- colnames(DGEList_norm) > > However, when I call the *design* I see that one Variety (i.e., CS) and one > Stage (i.e., 19b) are not present in the design matrix, as individual > effect or even in the interactions. > > Then I passed to the second approach, in which, I create groups: > >> group <- > factor(paste(target$Vineyard,target$Variety,target$Stage, from the design matrix when using the full interaction formula? > > Sorry for the long email and thank you for all the advises, > > Best wishes > > Daniela Lopes Paim Pinto > PhD student - Agrobiosciences > Scuola Superiore Sant'Anna, Italy > >> sessionInfo() > R version 2.15.2 (2012-10-26) >] edgeR_3.0.3 limma_3.14.1 > > loaded via a namespace (and not attached): > [1] tools_2.15.2 > > > > > > > > > > > 2012/11/11 Gordon K Smyth <smyth@...> > >> Dear Daniela, >> >> What version of the edgeR are you using? The posting guide asks you to >> give sessionInfo() output so we can see package versions. >> >> Your codes looks correct for testing an interaction, although you could >> estimate the same interaction more directly using an interaction formula as >> in Section 3.3.4 of the edgeR User's Guide. >> >> However the model you have used is correct only if all 12 samples >> correspond to the same physiological stage. I wonder why you are not >> analysing all the 48 samples together. I would start with data exploration >> of all 48 samples, including exploration measures like transcript >> filtering, library sizes, normalization factors, an MDS plot, a BCV plot, >> and so on. The first step is to check the data quality before going on to >> test for differential expression. >> >> edgeR has very high statistical power, even giving p-values smaller than I >> would like in some cases. So if you're not getting any differential >> expression, it is because there is none or because you have data quality >> problems. 
>> >> Best wishes >> Gordon >> >> Date: Fri, 9 Nov 2012 14:44:28 +0100 >>> From: Daniela Lopes Paim Pinto <d.lopespaimpinto@...> >>> To: bioconductor@... >>> Subject: Re: [BioC] How to design matrix on edgeR to study genotype x >>> environmental interaction >>> >>> Dear Gordon, >>> >>> Thank you so much for the reference. I read all the chapter regarding to >>> the models and I tried to set up the following code considering a data >>> frame like this: >>> >>> target >>>> >>> Sample Variety Location >>> 1 1 CS Mont >>> 2 2 CS Mont >>> 3 25 CS Bol >>> 4 26 CS Bol >>> 5 49 CS Ric >>> 6 50 CS Ric >>> 7 13 SG Mont >>> 8 14 SG Mont >>> 9 37 SG Bol >>> 10 38 SG Bol >>> 11 61 SG Ric >>> 12 62 SG Ric >>> >>> group <- factor(paste(target$Variety,**target$Location,>> >>> And then I estimated the trended and tag wise dispersion and fit the model >>> doing: >>> >>> disp.tren <- estimateGLMTrendedDisp(**DGEnorm,design) >>>> disp.tag <- estimateGLMTagwiseDisp(disp.**tren,design) >>>> fit <- glmFit(disp.tag,design) >>>> >>> >>> When I made some contrasts to find DE miRNAs, for example: >>> >>> my.constrasts <- makeContrasts(CS_BolvsMont = CS_Bol-CS_Mont, >>>> >>> CSvsSG_BolvsMont = (CS_Bol-CS_Mont)-(SG_Bol-SG_**Mont), levels=design) >>> >>>> lrt <- glmLRT(fit, contrast=my.constrasts[,"CS_**BolvsMont"]) >>>> >>> >>> I expected to find DE miRNAs due the environment effect (CS_BolvsMont) and >>> for example DE miRNAs due the interaction genotypeXenvironment ( >>> CSvsSG_BolvsMont). >>> >>> However the results do not seems to reflect it, since I did not get even a >>> single DE miRNA with significant FDR (even less than 20%!!!!) and going >>> back to the counts in the raw data I find reasonable differences in their >>> expression, which was expected. I forgot to mention that I decided to >>> consider stage by stage separately and not add one more factor on the >>> model, since I am not interested, for the moment, on the time course (as I >>> wrote in the previous email - see below). >>> >>> Could you (or any body else from the list) give me some advise regarding >>> the code? Is this matrix appropriate for the kind of comparisons I am >>> interested on? >>> >>> Thank you in advance for any input. >>> >>> Daniela >>> >>> >>> >>> >>> 2012/10/30 Gordon K Smyth <smyth@...> >>> >>> Dear Daniela, >>>> >>>> edgeR can work with any design matrix. Just setup your interaction >>>> model using standard R model formula. See for example Chapter 11 of: >>>> >>>> >>>>****manuals/R-intro.pdf<**manuals/R-intro.pdf> >> <http://**cran.r-project.org/doc/**manuals/R-intro.pdf<> >>> >> >>> >>>> Best wishes >>>> Gordon >>>> >>>> Date: Mon, 29 Oct 2012 16:24:31 +0100 >>>> >>>>> From: Daniela Lopes Paim Pinto <d.lopespaimpinto@...> >>>>> To: bioconductor@... >>>>> Subject: [BioC] How to design matrix on edgeR to study genotype x >>>>> environmental interaction >>>>> >>>>> Dear all, >>>>> >>>>> I'm currently working with data coming from deep sequencing of 48 small >>>>> RNAs libraries and using edgeR to identify DE miRNAs. I could not figure >>>>> out how to design my matrix for the following experimental design: >>>>> >>>>> I have 2 varieties (genotypes), cultivated in 3 different locations >>>>> (environments) and collected in 4 physiological stages. None of them >>>>> represent a control treatment. I'm particulary interested on identifying >>>>> those miRNAs which modulate their expression dependent on genotypes (G), >>>>> environments (E) and G x E interaction. 
For instance the same variety in >>>>> the 3 different locations, both varieties in the same location and both >>>>> varieties in the 3 different locations. >>>>> >>>>> I was wondering if I could use the section 3.3 of edgeR user guide as >>>>> reference or if someone could suggest me any other alternative method. >>>>> >>>>> Thanks in advance >>>>> >>>>> Daniela >>>>> >>>>> ______________________________________________________________________ The information in this email is confidential and intend...{{dropped:4}} _______________________________________________ Bioconductor mailing list Bioconductor@... Search the archives:
|
http://permalink.gmane.org/gmane.science.biology.informatics.conductor/44917
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
Pushed to marcuspope/verbotenjs
Atlassian SourceTree is a free Git and Mercurial client for Windows.
Atlassian SourceTree is a free Git and Mercurial client for Mac.
_ __ __ __ _______ | | / /__ _____/ /_ ____ / /____ ____ / / ___/ | | / / _ \/ ___/ __ \/ __ \/ __/ _ \/ __ \__ / /\__ \ | |/ / __/ / / /_/ / /_/ / /_/ __/ / / / /_/ /___/ / |___/\___/_/ /_.___/\____/\__/\___/_/ /_/\____//____/ zip archive: hg source: npm package: Introduction ================================================== Because everybody will think it is verboten anyway, I present to you VerbotenJS. Maintainable JavaScript: Don't modify objects you don't own. - Nicholas C. Zakas I, Marcus Pope, by imperial proclamation hereby declare ownership of Object.prototype. As sole owner I am the only authorized person to modify the prototype definition. Nope, too late I already called it and double stamped it. You can't triple stamp a double stamp! What Is VerbotenJS? ------------------- VerbotenJS is a general application development framework designed for NodeJS & Browser JavaScript hosts. In addition to a bunch of base prototype enhancements and terse programming patterns, the framework also includes extension libraries for various domains such as databases, filesystems, shell scripts etc. Who Should Use VerbotenJS? -------------------------- Nobody, it's verboten remember?!? No I'm kidding, you can use it for any personal project you want. I don't recommend using it for production systems since it is version 1.0 and I'm just one man trying to support his own personal JS framework. I'll probably change framework logic on a whim too, because that too is considered verboten right? Why Is VerbotenJS 750k Uncompressed, and do you really expect me to download that to a client? -------------------------------------------------------------- No, I don't, it's verboten to do so remember? Actually here's the deal. jQuery is not compatible with Object.prototype extensions. I had intended on releasing my own DOM, Event and Ajax wrapper but I ran out of time. So I decided to inject a forked version of jQuery which added almost 350k to the footprint. My DOM wrapper will probably put the library in the 500k realm for browser hosts. For NodeJS hosts modules are dynamically included on access so the run-time memory usage requirements are dynamic based on the modules you use. But by today's standards 750k is not really that bad except for mobile devices and cellular networks. If you must you can compress and gzip it down to 75k but with modern bandwidth and caching infrastructures it really isn't that bad. But I Heard/Read/Know That Object Prototype Extensions Are Bad, mkay? -------------------------------------------------------------- Good for you. Play around a little, find out how good it feels to be bad for once in your life :D Well, If Object Prototype Extensions Are Good, Then Why Does VerbotenJS Break My Application/Library/3rd Party Code? -------------------------------------------------------------- Some people write bad JavaScript code, including myself. jQuery authors, for instance, recognize that their codebase has a bug that makes jQuery incompatible with object prototype extensions, however they choose to ignore the issue for various artificial reasons. ExpressJS authors just don't understand how JavaScript reflection works, and they think OPE's are code smell. And yet in other cases VerbotenJS may conflict with existing namespaces in your application. In the latter case I recommend either opening a bug if the issue is caused by a non-conforming standard on my part, or refactoring your application if you want to use VerbotenJS. Where Do I Start? 
----------------- First grab a copy of verbotenjs from npm. npm install verbotenjs In node hosts use: require('verbotenjs'); For browser hosts include a reference to this file [verbotenjs]/web/verboten.js There are no init functions that you have to worry about, and there is no real global entry point to the verboten framework. Everything you need is either declared globally, or attached to the prototypes of existing data types including Objects. I Kinda Like Where This Is Going, How Do I Help? ------------------------------------------------ Well, I'm not really in a position to coordinate a team of developers at the moment. I have a newborn daughter and a full time job, so to avoid the frustration of not hearing from me for a few weeks at a time, which totally happens, send me small patches, <500 lines, and I'll review them. Since I doubt this will be much of an issue either way, I'll leave it at that for now. What If I Wanted To Create A Commercial Product? ------------------------------------------------ Well, let's talk licensing and target market. But really you should probably wait until 2.0 at least before considering something for production. What's In Store For The Future Of VerbotenJS? --------------------------------------------- Jesus, what isn't in a state of almost complete! Documentation, Bugs, Dom.js, Cross Platform Compatibility, JSDom, Unit Tests, etc. You know, all the stuff you'd expect from a professional open source project. The same stuff you should expect to be missing from a project named VerbotenJS. Documentation ================================================= Ha! No, but I'm working on it. You'll notice some documentation tags in the source code, that's about as far as I've made it on the documentation front. Much of the codebase is pretty self explanatory, much of it is not. I do have a couple of other projects and a bunch of scripts that I've written over the years that will be used as examples, but otherwise I have no documentation other than comments in the source. Examples -------- Here are some before and after code samples to show how VerbotenJS can make your coding efforts easier. Grepping The File System Without VerbotenJS: function grep(dir, query, filemask, recurse, hidden, cb) { //grep for 'query' in 'dir'. //'recurse' if necessary, //ignore 'hidden' files if necessary. //'cb' required. 

        if (typeof query == "string") {
            query = new RegExp(query, "i");
        }

        //optionally filter by filemask
        if (typeof filemask == "string") {
            filemask = new RegExp(filemask);
        }

        var count = 1,  //file count
            async = 0,  //async callback count
            out = [],
            list = [],
            fs = require('fs'),
            fp = require('path');

        dir = dir || process.cwd();

        function search(path) {
            fs.stat(path, function(err, stat) {
                count--;
                if (err) throw err;

                if (stat.isDirectory()) {
                    async++;
                    fs.readdir(path, function(err, files) {
                        async--;
                        if (err) throw err;
                        for (var i = 0; i < files.length; i++) {
                            //skip hidden files unless requested
                            if (!hidden && files[i][0] == '.') continue;
                            count++;
                            search(fp.join(path, files[i]));
                        }
                    });
                }
                else if (stat.isFile()) {
                    //ignore unmatched file masks
                    if (filemask && !filemask.test(path)) return;

                    async++;
                    fs.readFile(path, 'utf8', function(err, str) {
                        async--;
                        if (err) throw err;

                        var lines = str.split('\n'),
                            matched = [];

                        for (var i = 0; i < lines.length; i++) {
                            var line = lines[i].trim();
                            //collect matching lines & line numbers
                            if (query.test(line)) matched.push([i, line]);
                        }

                        if (matched.length) {
                            out.push(path);
                            for (var i = 0; i < matched.length; i++) {
                                out.push((matched[i][0] + 1) + ": " + matched[i][1]);
                                list.push({
                                    path : path,
                                    line : matched[i][0],
                                    text : matched[i][1]
                                });
                            }
                            out.push('');
                        }

                        if (count == 0 && async == 0) {
                            cb(out.join("\n") || "", list);
                        }
                    });
                }
            });
        }

        search(dir);
    }

About 50 lines of code (minus empty lines, comments and closing brackets) to implement a grep-like file system search in NodeJS. Here's the same function implemented with VerbotenJS conventions.

    function grep(dir, query, filemask, recurse, hidden, cb) {
        //grep for 'query' in 'dir'.
        //'recurse' if necessary,
        //ignore 'hidden' files if necessary.
        //'cb' required.

        dir = dir || process.cwd();
        query = query.toRegex('i');
        filemask = filemask.toRegex('i');

        //recursively search the filesystem
        q.fs.ls(dir, recurse, function(list) {

            //filter files we don't need
            var files = list.ea(function(f) {
                if (!hidden && f[0] == ".") return; //things like .hg/.git
                if (!filemask.test(f)) return;
                return f;
            });

            var matches = [];

            //read each file and collect matching line info
            files.sort().ea(function(next, f) {
                q.f.utf8(f, function(txt) {
                    txt.split('\n').trim().ea(function(line, i) {
                        //if line matches query, return info
                        if (query.test(line)) {
                            matches.push({
                                path : f,
                                line : i,
                                text : line
                            });
                        }
                    });
                    //process next file
                    next();
                });
            }, function() {
                //replicate grep stdout
                var stdout = matches.ea(function(o) {
                    return [o.path, o.line + 1, ": ", o.text, ''].join('\n');
                });
                //return stdout and matches obj
                cb(stdout, matches);
            });
        });
    }

Here it only took 22 lines of code to implement the same logic, and in reality the VerbotenJS version is more robust than the raw JavaScript version thanks to the flexibility of functions like .toRegex() and .test(). My .ea() function operates like Array.forEach, except that it allows Object key iteration, asynchronous or synchronous iteration depending on the presence of a callback function, and enhanced iteration workflows with helpers like ea.exit(), ea.merge() and ea.join(). Basically .ea() precludes the need to ever use for loops, for-in loops, or any of the new ES5 array extension functions (Array.forEach|map|filter|some|every|reduce).
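To make those iteration styles concrete, here is a rough sketch of how .ea() might be used. The calling patterns below are inferred from the description above and from the grep example, not from documentation, so treat the exact signatures and return-value semantics as assumptions.

    // Hypothetical usage of .ea(), inferred from the grep example above.

    // Synchronous iteration over an array: returned values appear to be
    // collected, and undefined results dropped (a map + filter in one pass).
    var evens = [1, 2, 3, 4].ea(function(n) {
        if (n % 2 == 0) return n;
    });

    // Object key iteration (argument order assumed to be value, key):
    ({ host: 'localhost', port: 8080 }).ea(function(value, key) {
        console.log(key + ' = ' + value);
    });

    // Asynchronous iteration, signaled by accepting a `next` callback as the
    // first argument; the second function runs once every item has finished.
    ['a.txt', 'b.txt'].ea(function(next, file) {
        q.f.utf8(file, function(txt) {
            console.log(file + ': ' + txt.length + ' characters');
            next(); //advance to the next item
        });
    }, function() {
        console.log('all files read');
    });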
That's all I have time to report on now, but I'll put up some more examples as I find well isolated ones.

Conclusion
==================================================

VerbotenJS has been a career-long project of mine. I've renamed and rewritten the project multiple times over, for various JavaScript hosts like WScript/JScript, HTAs, Rhino, J#, Jaxer and even a custom C# host I wrote for fun. With the growing popularity of NodeJS I think VerbotenJS has finally found a good home, and the architecture is finally at a point that I consider stable and worthy of peer review. So get reviewing, peers!

Thanks for reading,

Marcus Pope
|
https://bitbucket.org/marcuspope/verbotenjs
|
CC-MAIN-2015-40
|
en
|
refinedweb
|
ObjectSharp Blog
When working with Claims Based Authentication a lot of things are similar between the two different models, Active and Passive. However, there are a few cases where things differ… a lot. The biggest, of course, is how a Request for Security Token (RST) is authenticated. In a passive model the user is given a web page where they can essentially have free rein over how credentials are handled. Once the credentials have been received and authenticated by the web server, the server generates an identity and passes it off to SecurityTokenService.Issue(…), which does its thing by gathering claims, packaging them up into a token, and POST'ing the token back to the Relying Party.
Basically we are handling authentication the same way any other ASP.NET application would: by using the Membership provider, funnelling all anonymous users to the login page, and then redirecting back to the STS. To hand off to the STS, we can just call:
FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
    HttpContext.Current.Request,
    HttpContext.Current.User,
    MyTokenServiceConfiguration.Current.CreateSecurityTokenService(),
    HttpContext.Current.Response);
However, it’s a little different with the active model.
Web services manage identity via tokens too, but they differ from the passive model because everything, including credentials, is passed via tokens. The client takes the credentials and packages them into a SecurityToken object, which is serialized and passed to the STS. The STS deserializes the token and passes it off to a SecurityTokenHandler. This security token handler validates the credentials, generates an identity, and pushes it up the call stack to the STS.
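On the client side that packaging is normally done by the WCF plumbing rather than by hand. Here is a rough sketch, assuming a message-security binding with UserName credentials; the service contract, address, and credentials below are placeholders, not part of the original post:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string message);
}

public static class EchoClient
{
    public static void Call()
    {
        // Binding configured so the caller's credentials travel as a UserName token.
        var binding = new WSHttpBinding(SecurityMode.TransportWithMessageCredential);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

        var factory = new ChannelFactory<IEchoService>(
            binding, new EndpointAddress("https://services.example.com/echo"));

        // WCF packages these values into a UserNameSecurityToken on the outgoing request.
        factory.Credentials.UserName.UserName = "jsmith";
        factory.Credentials.UserName.Password = "p@ssw0rd";

        IEchoService proxy = factory.CreateChannel();
        Console.WriteLine(proxy.Echo("hello"));
    }
}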
Much like with ASP.NET, there is a built in Membership Provider for username/password combinations, but you are limited to the basic functionality of the provider. 90% of the time, this is probably just fine. Other times you may need to create your own SecurityTokenHandler. It’s actually not that hard to do.
First you need to know what sort of token is being passed across the wire. The big three are:
Each is pretty self explanatory.
Some others out of the box are:
Reflector is an awesome tool. Just sayin’.
Now that we know what type of token we are expecting we can build the token handler. For the sake of simplicity let’s create one for the UserNameSecurityToken.
To do that we create a new class derived from Microsoft.IdentityModel.Tokens.UserNameSecurityTokenHandler. We could start at SecurityTokenHandler, but it’s an abstract class and requires a lot to get it working. Suffice to say it’s mostly boilerplate code.
We now need to override a method and property: ValidateToken(SecurityToken token) and TokenType.
TokenType is used later on to tell what kind of token the handler can actually validate. More on that in a minute.
Overriding ValidateToken is fairly trivial*. This is where we actually handle the authentication. However, it returns a ClaimsIdentityCollection instead of a bool, so if the credentials are invalid we need to throw an exception; I would recommend the SecurityTokenValidationException. Once the authentication is done we get the identity for the credentials and bundle it up into a ClaimsIdentityCollection. We can do that by creating an IClaimsIdentity and passing it into the constructor of a ClaimsIdentityCollection.
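As a rough sketch of what that override can look like (assuming the ASP.NET Membership provider does the actual credential check, and that a single Name claim is enough; the claim type used here is just one reasonable choice):

public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
{
    var userToken = token as UserNameSecurityToken;

    if (userToken == null)
        throw new ArgumentNullException("token");

    // Validate the credentials; invalid credentials mean an exception, not a return value.
    if (!Membership.ValidateUser(userToken.UserName, userToken.Password))
        throw new SecurityTokenValidationException("Invalid username or password.");

    // Build an identity for the authenticated user and wrap it in a collection.
    IClaimsIdentity identity = new ClaimsIdentity();
    identity.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, userToken.UserName));

    return new ClaimsIdentityCollection(new IClaimsIdentity[] { identity });
}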
Next we need to set the TokenType:

public override Type TokenType
{
    get
    {
        return typeof(UserNameSecurityToken);
    }
}
This property is used as a way to tell its calling parent that it can validate/authenticate any tokens of the type it returns. The web service that acts as the STS loads a collection of SecurityTokenHandlers as part of its initialization, and when it receives a token it iterates through the collection looking for one that can handle it.
To add the handler to the collection, you can add it via configuration or, if you are crazy and doing a lot of low-level work, you can add it to the SecurityTokenServiceConfiguration in the HostFactory for the service:

securityTokenServiceConfiguration.SecurityTokenHandlers.Add(
    new MyAwesomeUserNameSecurityTokenHandler());
To add it via configuration you first need to remove any other handlers that can validate the same type of token:
<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.WindowsUserNameSecurityTokenHandler, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <remove type="Microsoft.IdentityModel.Tokens.MembershipUserNameSecurityTokenHandler, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add type="Syfuhs.IdentityModel.Tokens.MyAwesomeUserNameSecurityTokenHandler, Syfuhs.IdentityModel" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>
That’s pretty much all there is to it. Here is the class for the sake of completeness:
using System;
using System.IdentityModel.Tokens;
using System.Web.Security;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSIdentity;
using Microsoft.IdentityModel.Tokens;

namespace Syfuhs.IdentityModel.Tokens
{
    public class MyAwesomeUserNameSecurityTokenHandler : UserNameSecurityTokenHandler
    {
        public override bool CanValidateToken
        {
            get { return true; }
        }

        public override Type TokenType
        {
            get { return typeof(UserNameSecurityToken); }
        }

        public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
        {
            var userToken = token as UserNameSecurityToken;

            if (userToken == null)
                throw new ArgumentNullException("token");

            //validate the credentials
            if (!Membership.ValidateUser(userToken.UserName, userToken.Password))
                throw new SecurityTokenValidationException("Invalid username or password.");

            //bundle the identity into a ClaimsIdentityCollection
            IClaimsIdentity identity = new ClaimsIdentity();
            identity.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, userToken.UserName));

            return new ClaimsIdentityCollection(new IClaimsIdentity[] { identity });
        }
    }
}
* Trivial in the development sense, not trivial in the security sense....
|
http://blogs.objectsharp.com/?tag=/WCF
|
CC-MAIN-2015-40
|
en
|
refinedweb
|