I've just started working with Plugins, and I just can't get past this one issue. I can call functions that return strings just fine, I've been able to create a plugin to download files to the SD card (needed to use SSL), but I just can't seem to create a plugin that provides a native view or intent in any way. This is my first foray into writing anything native for the droid, so hopefully I'm just missing something obvious.
I started with the Unity Plugin example using a prebuilt JNI Library that can be found here. I'm using the build file (and thus Apache ANT) to compile my java.
So, now I've got a C# file with the following code pertaining to this:
...
loadWebView = AndroidJNI.GetMethodID(cls_JavaClass, "loadWebView", "(Ljava/lang/String;)Ljava/lang/String;");
...
private string openWebView() {
    JavaVM.AttachCurrentThread();

    // get the Java String object from the JavaClass object
    jvalue[] setTypeParameters = new jvalue[2];
    setTypeParameters[0] = new jvalue();
    setTypeParameters[0].l = AndroidJNI.NewStringUTF("");

    //IntPtr str_fileInfo = JNI.CallObjectMethod(JavaClass, downloadFile, setTypeParameters);
    IntPtr str_fileInfo = AndroidJNI.CallObjectMethod(JavaClass, loadWebView, setTypeParameters);
    Debug.Log("str_fileInfo = " + str_fileInfo);
This calls the function - as I can remove all the java code and just have it return a string, and it works properly.
My java code, though, seems to just freeze up the application. The file it calls is as follows:
public class JavaClass {
    private Activity mActivity;

    public JavaClass(Activity currentActivity) {
        Log.i("JavaClass", "Constructor called with currentActivity = " + currentActivity);
        mActivity = currentActivity;
    }
    ...
    public String loadWebView(String fURL) {
        webActivity showWeb = new webActivity();
        showWeb.showWebView(fURL);
        return "loaded";
    }
This then calls into another file which contains this code:
...
public class webActivity extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    }

    public void showWebView(String fURL) {
        setContentView(R.layout.main);
        WebView wv = (WebView) findViewById(R.id.webview1);
        WebSettings webSettings = wv.getSettings();
        webSettings.setBuiltInZoomControls(true);
        wv.loadUrl(fURL);
    }
}
At some point, this Java all compiled. But I've gone through so many alterations trying to get things to work that it no longer compiles, due to two "cannot find symbol" errors on the line "webActivity showWeb = new webActivity();" (the symbol webActivity can't be found). Even when it did compile, though, when I hit the button I had linked to call the Java code, it just froze.
In the end, I'm not sure what I'm missing here. I'm not completely shocked it's not working, but I just can't figure out for what reason (or reasons) it isn't working due to my unfamiliarity with the native android code. Any help would be very, very much appreciated, as I've attempted about 5-6 completely different routes to this point, and this is the closest I've come.
Answer by maskedMan
·
Nov 23, 2011 at 09:48 AM
Bear in mind that my knowledge of Unity and its JNI calls is about 1 hour old now. Not knowing anything more than what you've typed here, there are a few things I can think of.
WebView wv = (WebView) findViewById(R.id.webview1);
Where did you define R.id.webview1? In a normal Android application, using Eclipse as the IDE, R is an automatically generated Resource bundle that is parsed out based on the contents of layout.xml files. I don't see any such layout files in the linked sample project. I'm guessing that the build file created an Android project for you?
webActivity showWeb = new webActivity();
New activities in Android are launched via Intents, not with the 'new' keyword. You'll need to look at the documentation on how Intent objects are used to instantiate Activities. This leads into: what do you have in your AndroidManifest.xml file? Do you have one? You'll need to create one and specifically let the main application know that it's allowed to launch Intents. You also would need to let your webActivity have permission to access the Internet.
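To make that concrete, here is a hedged sketch of how loadWebView could launch the Activity through an Intent instead of `new`; the "url" extra key is an arbitrary name chosen for illustration, and the manifest would still need an `<activity>` entry for webActivity plus the INTERNET permission:

```java
// In JavaClass: launch webActivity via an Intent rather than instantiating it.
public String loadWebView(String fURL) {
    Intent intent = new Intent(mActivity, webActivity.class);
    intent.putExtra("url", fURL);        // "url" is an illustrative key name
    mActivity.startActivity(intent);
    return "loaded";
}

// In webActivity: read the extra and load the page during onCreate(),
// since the Activity's lifecycle (not the caller) drives setContentView().
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    WebView wv = (WebView) findViewById(R.id.webview1);
    wv.getSettings().setBuiltInZoomControls(true);
    wv.loadUrl(getIntent().getStringExtra("url"));
}
```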
I have no familiarity yet with using Apache ANT to build the project, so I'm afraid I don't know exactly what you're dealing with in terms of your project setup. If I have some time this evening I will check into the process and update my answer.
Answer by geminids
·
Nov 15, 2012 at 08:47 PM
I think the following articles/tutorials (in Chinese) show how to implement a WebView in Unity:
Unity and Web.
Android webview on Unity
| https://answers.unity.com/questions/127503/android-plugin-that-provides-a-webview.html | CC-MAIN-2019-43 | en | refinedweb |
from django.apps import apps


def get_current_site(request):
    """
    Check if contrib.sites is installed and return either the current
    ``Site`` object or a ``RequestSite`` object based on the request.
    """
    # Imports are inside the function because its point is to avoid importing
    # the Site models when django.contrib.sites isn't installed.
    if apps.is_installed('django.contrib.sites'):
        from .models import Site
        return Site.objects.get_current(request)
    else:
        from .requests import RequestSite
        return RequestSite(request)
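The RequestSite fallback in the branch above simply derives its identity from the request's host. A minimal stand-alone sketch of that behavior, with FakeRequest as a stand-in for Django's HttpRequest so it runs without Django installed:

```python
class RequestSite:
    """Mimics django.contrib.sites.requests.RequestSite: a Site-like object
    whose domain and name come from the request, not the database."""

    def __init__(self, request):
        self.domain = self.name = request.get_host()

    def __str__(self):
        return self.domain


class FakeRequest:
    """Stand-in for HttpRequest, which exposes get_host()."""

    def get_host(self):
        return "example.com"


site = RequestSite(FakeRequest())
print(site.domain)  # -> example.com
```

This mirrors why the real function prefers the database-backed Site when django.contrib.sites is installed: the RequestSite is cheap but only knows what the Host header tells it.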
| https://django.readthedocs.io/en/latest/_modules/django/contrib/sites/shortcuts.html | CC-MAIN-2019-43 | en | refinedweb |
This error is detected when a class extends org.apache.struts.action.ActionForm and does not implement the method 'reset'.
The reset method is called before filling out a form with request parameters. If some fields are not reset, they can retain old values which can be used by an attacker. Since a client request can be easily manipulated by an attacker, do not depend on the presence of certain fields in a request.
Klocwork security vulnerability (SV) checkers identify calls that create potentially dangerous data; these calls are considered unsafe sources. An unsafe source can be any data provided by the user, since the user could be an attacker or has the potential for introducing human error.
Reset all fields that a client can populate. If a field cannot be populated by the client, that usually means the form tries to implement some business logic or is used as an output form, which normally should not be part of the ActionForm and should be implemented by a different class.
10 public class SV_STRUTS_RESETMET_Sample_1 extends ActionForm {
11     private String name;
12     private String[] cars = new String[10];
13     public String getName() {
14         return name;
15     }
16     public void setName(String name) {
17         this.name = name;
18     }
19     public String getCar(int i) {
20         return cars[i];
21     }
22     public void setCar(int i, String car) {
23         this.cars[i] = car;
24     }
25 }
SV.STRUTS.RESETMET is reported for form declaration on line 10: Struts: Form 'SV_STRUTS_RESETMET_Sample_1' does not have method 'reset' defined
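A hedged sketch of the fix, assuming the standard Struts 1.x reset(ActionMapping, HttpServletRequest) signature; the class name is invented for illustration:

```java
import javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionMapping;

public class SV_STRUTS_RESETMET_Sample_1_Fixed extends ActionForm {
    private String name;
    private String[] cars = new String[10];

    // Reset every client-populated field so stale values from a previous
    // request cannot leak into (or survive) the current one.
    @Override
    public void reset(ActionMapping mapping, HttpServletRequest request) {
        this.name = null;
        this.cars = new String[10];
    }

    // getters and setters as in the sample above...
}
```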
| https://docs.roguewave.com/en/klocwork/current/sv.struts.resetmet | CC-MAIN-2019-43 | en | refinedweb |
Package: pyneighborhood
Version: 0.5.1-2
Severity: normal

Hi. Trying to browse a remote server through its IP, I get:

$ /usr/bin/pyNeighborhood
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/pyneighborhood/addwindow.py", line 111, in response_handler
    query_workgroup(ip, True)
  File "/usr/lib/python2.7/dist-packages/pyneighborhood/nmblookup.py", line 94, in query_workgroup
    return (workgroup, cursor.fetchone()[0])
TypeError: 'NoneType' object has no attribute '__getitem__'

Thanks in advance. Best regards,

-- System Information:
Debian Release: jessie/sid
  APT prefers testing
  APT policy: (900, 'testing')
Architecture: i386 (i686)
Kernel: Linux 3.2.0-4-686-pae (SMP w/2 CPU cores)
Locale: LANG=fr_FR.utf8, LC_CTYPE=fr_FR.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash

Versions of packages pyneighborhood depends on:
ii  cifs-utils     2:6.0-1
ii  python         2.7.3-5
ii  python-glade2  2.24.0-3+b1
ii  python-gtk2    2.24.0-3+b1
ii  python2.6      2.6.8-2
ii  python2.7      2.7.5-4
ii  smbclient      2:3.6.15-1

pyneighborhood recommends no packages.
pyneighborhood suggests no packages.

-- debconf-show failed
| https://lists.debian.org/debian-qa-packages/2013/06/msg00040.html | CC-MAIN-2017-30 | en | refinedweb |
Geoff Webber-Cross - .Net, Windows 8, Silverlight and WP developer. Software problem solver. My Website Obelisk - WP7 MVVM Tombstone Library
Sunday, 11 May 2014
Implementing Azure AD Group Authorization
In my last article we talked about implementing AD single sign-on authentication for an MVC5 website; now we're going to look at adding AD group authorization to a controller with a customised AuthorizeAttribute implementation. Azure AD doesn't currently allow additional or custom roles (there are a number of built-in administrator roles); however, we have full control over groups, so we can use these for authorization.
Unfortunately authorization isn’t as simple as just using the Authorize attribute with a role as you would with ASP.Net roles, we need to query the Azure AD Graph API to check if a user is a member of the group. We’ll add a Sales group to the AzureBakery AD we created in the last article and then implement a custom AuthorizeAttribute to query the Azure AD Graph API using the Azure AD Graph client.
This was written for an MVC controller but can be used for a Web API controller, and could be used with Azure Mobile Services too.
We’re going to use the Azure AD PowerShell module to modify the AD Application service principal later in the procedure, so install this first.
You can download the module from here:
- 32-bit version:
- 64-bit version:
I needed to install Microsoft Online Services Sign-In Assistant for IT Professionals BETA (not RTW):
Creating an AD Group
1. First go the AD GROUP workspace in the portal for our Active Directory and click ADD GROUP on the toolbar:
2. Enter a name and description of the group and click the tick button to create it:
3. Next click on the newly created group and then ADD MEMBERS on the toolbar:
4. In the Add members dialog, click on the AD user we created to add it to the SELECTED list and click the tick button to confirm:
5. Now go to the GROUP CONFIGURE tab and make a note of the OBJECT ID, we’ll need this later.
6. Next we need to create a key for our application to allow us to access the graph API, so create a new key in the APPLICATION workspace CONFIGURE tab:
7. Make a note of this and the CLIENT ID.
8. We need to create keys for the local and Azure AD applications.
Modifying the Application Service Principal
We need to modify our application's Service Principal so that it has permission to access the Graph API. In theory this should be done by adjusting the "permissions to other applications" section of the APPLICATION CONFIGURATION tab, but at the time of writing this doesn't work. Please try it yourself; if it doesn't work for you (you will get an unauthorized exception in the AD Graph API client), use the following procedure to manually add the service principal to an administrator role:
1. Launch the Azure AD PowerShell Console (from the desktop shortcut if you chose to use it).
2. First we need to obtain our AD credentials, so type the following command and enter your AD user credentials when prompted:
$msolcred = get-credential
This stores the credentials in a variable called $msolcred.
3. Next we need to connect the console by typing the following command:
connect-msolservice -credential $msolcred
For a quick test, we can use the get-msoluser command to list AD users. We should see something like this:
PS C:\WINDOWS\system32> get-msoluser
UserPrincipalName DisplayName isLicensed
----------------- ----------- ----------
gwebbercross_outlook.co... Geoff Webber-Cross False
geoff@azurebakery.onmic... Geoff False
4. Now we need to get the service principal for our application using the following command (get the CLIENT ID from CONFIGURE tab of the AD APPLICATION workspace for the application associated with the website):
$msolServicePrincipal = Get-MsolServicePrincipal -AppPrincipalId YourClientId
5. We can see the properties of the service principal object by outputting it like this:
write-output $msolServicePrincipal
6. Next we need to add the service principal to an administrator role like this:
Add-MsolRoleMember -RoleName "Company Administrator" -RoleMemberObjectId $msolServicePrincipal.ObjectId -RoleMemberType ServicePrincipal
Implementing AzureAdAuthorizeAttribute
We’re going to create a class called AzureAdAuthorizeAttribute which can be added to a controller with either a group name or group ObjectId specified. The ObjectId implementation is more efficient as it doesn’t require an additional query to look up the id from the name.
We need to install the Microsoft.Azure.ActiveDirectory.GraphClient and Microsoft.IdentityModel.Clients.ActiveDirectory NuGet packages by entering the following commands:
- Install-Package Microsoft.Azure.ActiveDirectory.GraphClient
- Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
This is the complete code for the attribute, the comments in the code explain what’s going on:
using Microsoft.Azure.ActiveDirectory.GraphClient;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Security.Claims;
using System.Web;
using System.Web.Mvc;
namespace AdminWebsite.Auth
{
    [AttributeUsageAttribute(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
    public class AzureAdAuthorizeAttribute : AuthorizeAttribute
    {
        private readonly string _clientId = null;
        private readonly string _appKey = null;
        private readonly string _graphResourceID = "";

        public string AdGroup { get; set; }
        public string AdGroupObjectId { get; set; }

        public AzureAdAuthorizeAttribute()
        {
            this._clientId = ConfigurationManager.AppSettings["ida:ClientID"];
            this._appKey = ConfigurationManager.AppSettings["ida:Password"];
        }

        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            // First check if user is authenticated
            if (!ClaimsPrincipal.Current.Identity.IsAuthenticated)
                return false;
            else if (this.AdGroup == null && this.AdGroupObjectId == null) // If there are no groups return here
                return base.AuthorizeCore(httpContext);

            // Now check if user is in group by querying Azure AD Graph API using client
            bool inGroup = false;
            try
            {
                // Get information from user claim
                string signedInUserId = ClaimsPrincipal.Current.FindFirst(ClaimTypes.NameIdentifier).Value;
                string tenantId = ClaimsPrincipal.Current.FindFirst("").Value;
                string userObjectId = ClaimsPrincipal.Current.FindFirst("").Value;

                // Get AuthenticationResult for access token
                var clientCred = new ClientCredential(_clientId, _appKey);
                var authContext = new AuthenticationContext(string.Format("{0}", tenantId));
                var authResult = authContext.AcquireToken(_graphResourceID, clientCred);

                // Create graph connection with our access token and API version
                var currentCallContext = new CallContext(authResult.AccessToken, Guid.NewGuid(), "2013-11-08");
                var graphConnection = new GraphConnection(currentCallContext);

                // If we don't have a group id, we can query the graph API to find it
                if (this.AdGroupObjectId == null)
                {
                    // Get all groups
                    var groups = graphConnection.List<Group>(null, null);
                    if (groups != null && groups.Results != null)
                    {
                        // Find group object
                        var group = groups.Results.SingleOrDefault(r => (r as Group).DisplayName == this.AdGroup);

                        // Check if user is in group
                        if (group != null)
                            this.AdGroupObjectId = group.ObjectId;
                    }
                }
                if (this.AdGroupObjectId != null)
                    inGroup = graphConnection.IsMemberOf(this.AdGroupObjectId, userObjectId);
            }
            catch (Exception ex)
            {
                string message = string.Format("Unable to authorize AD user: {0} against group: {1}", ClaimsPrincipal.Current.Identity.Name, this.AdGroup);
                throw new Exception(message, ex);
            }
            return inGroup;
        }
    }
}
Once we’ve created this class, we need to add the ida:ClientID and ida:Password settings to the web.config like this:
<appSettings>
<add key="ida:ClientID" value="d30553b1-21f3-4ee5-bda5-63cf9b2d9861" />
<add key="ida:Password" value="60VVjSMWB5IHNtfIBym9eIv7XXXXXXXXXXXXXXXXXXXXXXXXXX=" />
</appSettings>
Once we’ve done this, we can simply add the attribute to our controllers to implement Azure AD Group authorization like this:
namespace AdminWebsite.Controllers
{
    [AzureAdAuthorize(AdGroup = "Sales", AdGroupObjectId = "f8a96bf1-c152-41a8-abcdefabcdef")]
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
    }
}
So that we can automatically switch the web.config settings for the Azure web application, we can simply add the following transform to web.Release.config which will be run during publishing:
<appSettings>
<add key="ida:ClientID" value="123456-58a2-4549-95fc-AABBCCDDee" xdt:
<add key="ida:Password" value="dXqblNwq1y//
</appSettings>
<system.web>
Wednesday, 7 May 2014
Adding Azure AD Single Sign-On to a MVC5 Website
Azure Active Directory Single Sign-On can be used with MVC websites to allow us to create websites with single sign-on authentication for Azure AD users which can be centrally managed in Azure AD.
Visual Studio 2013 makes implementing this really easy and we don't need to touch AD Applications, or web.config in our website.
This article is an extract from a new book I'm writing titled "Learning Microsoft Azure" for Packt Publishing. This website is an admin website for the Sales Business domain of a fictional industrial bakery called Azure Bakery.
In this article, we're going to create an Azure Active Directory for the company, then add a user and configure a new MVC5 administrator website to implement Azure AD Single Sign-On.
Configuring Active Directory
First we need to create an Active Directory and an initial user account to sign in with
1. From the NEW Services menu, select ACTIVE DIRECTORY | DIRECTORY | CUSTOM CREATE:
2. Fill out the NAME of the directory, its DOMAIN NAME and the COUNTRY OR REGION:
3. Now from the Active Directory USERS workspace, click ADD USER from the bottom toolbar to add a user:
4. Fill in the USER NAME. I’ve left TYPE OF USER as New user in your organization, although you can add an existing Microsoft account or Windows Azure AD directory:
5. Next, fill in the user details, select Global Administrator for the ROLE and click the next arrow:
6. Click create on the next tab to get the temporary password for the user. Make a note of it and also enter an email to send it to, then click the tick button to finish:
Configuring an MVC Website for AD Single Sign-On
Next we'll create a new MVC website and use the wizard to help us set up AD single sign-on. In Visual Studio 2012 this was quite tricky to do, with a fair amount of manual configuration in the portal and the website's web.config, but it's quite straightforward in Visual Studio 2013.
1. In Visual Studio, add a new ASP.Net Web Application. From the template dialog, select the MVC template, check Create remote resources under the Windows Azure section and then click Change Authentication:
2. Select Organizational Accounts and enter the AD domain name for the Active Directory we just created and click OK:
3. Sign in using the new Active Directory user, then click OK in the previous dialog (be careful to change the user to your Azure portal account when prompted to sign into Azure).
4. Enter the Site name, choose Subscription, Region, Database Server (select No database because we’re using the existing one):
5. Click OK, this will now provision the website, setup an AD application and create our MVC project for us.
6. We can test this locally by simply running the website from Visual Studio. You will get a security warning due to the implementation of a temporary SSL certificate on your local web server like this (in IE):
7. Accept the warning (Continue to this website (not recommended)) and you will then see the AD login page:
8. Login with your new user and the website should load.
Publishing the Website with AD Single Sign-on
When Visual Studio provisioned our website for us it created an application entry in the AD APPLICATIONS tab for our local debug configuration:
Rather than changing the APPLICATION CONFIGURATION for our production website, when we publish the application, there is an option to Enable Organizational Authentication which will add a new application entry in AD and rewrite the federation section of the web.config for us on publish:
<system.identityModel.services>
<federationConfiguration>
<cookieHandler requireSsl="true" />
<wsFederation passiveRedirectEnabled="true" issuer="" realm="" requireHttps="true" />
</federationConfiguration>
</system.identityModel.services>
In the Publish Web dialog, check Enable Organizational Authentication and enter the AD Domain name. You will need to include a connection string for your database as the website will update the database with entries in new IssuingAuthorityKeys and Tenants tables:
Once published, we will see a new entry in the AD APPLICATIONS workspace:
This is great as we don’t need to reconfigure the applications between running locally and publishing to Azure.
| http://geoffwebbercross.blogspot.co.uk/2014/05/ | CC-MAIN-2017-30 | en | refinedweb |
CodePlex: Project Hosting for Open Source Software
I am creating a KML file, and it seems to load into Google Earth fine, but I noticed that the file is missing the </kml> tag. When I export the same back out of Google Earth, it writes it correctly.
Id #2640 | Release:
None
| Updated: Feb 16 at 3:46 PM by samcragg | Created: Feb 15 at 10:48 PM by shiften
KML2.3 includes some additional elements, some of which were previously in the Google extension namespace (such as gx:Track).
Add support for the new elements in the new namespace whilst still all...
Id #2550 | Release:
SharpKml 2.X
| Updated: Dec 5, 2016 at 2:49 AM by lpieraut | Created: Mar 3, 2016 at 7:30 PM by samcragg
To support better writing/reading from streams, it would be nice to support the XXXAsync overload version of the methods to allow callers not to block when reading/writing to files.
In order to do...
Id #2545 | Release:
None
| Updated: Jan 3, 2016 at 8:28 PM by samcragg | Created: Jan 3, 2016 at 8:28 PM by samcragg
In the DateTime unit test it first tries on {10/11/2012 09:08:07} local time (I am in Jerusalem Standard Time at the moment), and the ValueConverter.TryGetValue returns a value of {10/11/2012 07:0...
Id #1792 | Release:
SharpKml 1.06
| Updated: Feb 15, 2016 at 9:31 PM by jcteague | Created: Oct 6, 2012 at 10:34 AM by samcragg
If it's possible, it would be nice to have the option for lower-resolution time values
gYear (YYYY)
<TimeStamp>
<when>1997</wh...
Id #1556 | Release:
None
| Updated: Feb 21, 2013 at 11:00 PM by norcis | Created: Mar 23, 2012 at 3:07 AM by BerwynMck
| http://sharpkml.codeplex.com/workitem/list/basic | CC-MAIN-2017-30 | en | refinedweb |
Summary
Pojomatic provides configurable implementations of the equals(Object), hashCode() and toString() methods inherited from java.lang.Object.
Version 1.0 of Pojomatic has been released. Pojomatic provides an easy way to implement the equals(Object), hashCode() and toString() methods inherited from java.lang.Object. The typical work needed for most classes is:

import org.pojomatic.Pojomatic;
import org.pojomatic.annotations.AutoProperty;

@AutoProperty // tells Pojomatic to use all non-static fields.
public class Pojo {
    // Fields, getters and setters...

    @Override public boolean equals(Object o) {
        return Pojomatic.equals(this, o);
    }

    @Override public int hashCode() {
        return Pojomatic.hashCode(this);
    }

    @Override public String toString() {
        return Pojomatic.toString(this);
    }
}
While this is adequate for most cases, there are numerous ways to customize how Pojomatic will build up the implementations of the equals, toString and hashCode methods. The @AutoProperty annotation can instruct Pojomatic to use fields or getters to access properties. Alternatively, one can annotate individual fields and/or accessor methods with the @Property annotation to include them explicitly, or to exclude certain properties if @AutoProperty is being used. For any property, a @PojomaticPolicy can be set to indicate which methods the property should be included in. By default, a property is used for each of the equals, hashCode and toString methods, but any combination is possible, subject to the restriction that if a property is used for hashCode, then it must be used for equals as well.
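As a hedged sketch of that per-property configuration, assuming the annotation and policy names from the Pojomatic javadocs (the Account class and its fields are invented for illustration):

```java
import org.pojomatic.Pojomatic;
import org.pojomatic.annotations.AutoProperty;
import org.pojomatic.annotations.Property;
import org.pojomatic.annotations.PojomaticPolicy;

@AutoProperty // all non-static fields participate unless overridden below
public class Account {
    private String username;

    // Keep the secret out of toString() but still compare it in equals();
    // the EQUALS constant name here is assumed from the Pojomatic docs.
    @Property(policy = PojomaticPolicy.EQUALS)
    private String secret;

    @Override public boolean equals(Object o) { return Pojomatic.equals(this, o); }
    @Override public int hashCode() { return Pojomatic.hashCode(this); }
    @Override public String toString() { return Pojomatic.toString(this); }
}
```

Note that the reverse of this choice would be rejected: a policy that used the secret for hashCode but not for equals would violate the restriction described above.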
As discussed in How to Write an Equality Method in Java, challenges arise in satisfying the contract for equals when class hierarchies come into play. The solution suggested in that article is to introduce an additional method, canEqual, which the implementation of equals will use to ensure that instances of a parent class do not accidentally declare themselves to be equal to an instance of a subclass which has redefined equals. If all the classes in a hierarchy use Pojomatic, this step is not necessary; Pojomatic keeps track of whether instances of two related classes can be equal to each other or not via the areCompatibleForEquals method. Two classes are compatible for equality if they each have the same set of properties designated for use with the equals method. If a subclass has a need to implement the equals method without using Pojomatic, it can be annotated with @OverridesEquals to indicate that it is not compatible for equality with its parent class.
A common use of the equals method is to facilitate the use of the assertEquals methods of JUnit or TestNG. When assertEquals fails, the exception message includes the toString representation of each instance. One pain point which no doubt will be familiar to many is that of trying to determine why two instances are not equal when they have a large number of properties. Often, the only option is to copy the failure message into an editor which allows comparing the toString representations to look for differences. Pojomatic helps address this by adding a method, Pojomatic.diff, which can reveal the differences between two instances of a Pojomated class. The PojomaticTestUtils library leverages this capability to provide an assertEqualsWithDiff method which will call out the differences between two instances in the failure message in the event that they are not equal.
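A hedged sketch of how such a diff check might look; the Differences result type and its areEqual() accessor are assumed from the Pojomatic docs, and Pojo is the annotated class from the first example:

```java
import org.pojomatic.Pojomatic;
import org.pojomatic.diff.Differences;

public class DiffExample {
    public static void main(String[] args) {
        Pojo expected = new Pojo();
        Pojo actual = new Pojo();
        // diff() walks the properties designated for equals and reports
        // which ones differ, rather than two opaque toString() dumps.
        Differences diffs = Pojomatic.diff(expected, actual);
        if (!diffs.areEqual()) {
            System.out.println("Instances differ: " + diffs);
        }
    }
}
```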
Pojomatic is available on the central maven repository; if you use maven, it is just a dependency away:
<dependency>
  <groupId>org.pojomatic</groupId>
  <artifactId>pojomatic</artifactId>
  <version>1.0</version>
</dependency>

Others can download Pojomatic from the Sourceforge project page.
| http://www.artima.com/weblogs/viewpost.jsp?thread=290325 | CC-MAIN-2017-30 | en | refinedweb |
On the second day of the Embedded Linux Conference Europe (ELCE), Iisko Lappalainen from MontaVista Software presented a method for running secondary Linux environments inside a "host" Linux OS with strict sandboxing and security requirements. The example use-case was running Android inside a GENIVI-based Linux in-vehicle infotainment (IVI) system, though other combinations are possible. The setup would permit a car-maker to ship a system with full access to an Android application ecosystem, while maintaining isolation from the underlying OS.
As GENIVI's Matt Jones explained in an earlier session, the GENIVI middleware stack is isolated from the vehicle's safety-critical systems like engine control and anti-lock braking; running on separate hardware on electrically isolated circuits. But there are still important functions in an IVI system that, if interrupted, would greatly inconvenience the user. Navigation on the head unit, or proximity sensors on the bumpers, for example — neither one should hang or crash just because a child in the back is playing a buggy video game on a rear-seat entertainment screen. Buggy games aside, there is also always the prospect of intentionally malicious code.
That provides one use-case for running applications inside some sort of sandboxed environment. Lappalainen listed several others. First, it would provide a way to run applications written for multiple UI frameworks, in particular, frameworks not natively supported by the base IVI system. The example he presented was an HTML5-based web runtime, which was not a component planned for the MeeGo IVI user experience (UX), which GENIVI designated in 2010 as its reference platform. Canonical has subsequently announced its own GENIVI-compliant IVI remix, which also does not address a web runtime; MeeGo's successor Tizen, however, does have a web runtime.
The Android environment in particular offers its own advantages as the sandboxed OS, Lappalainen said. The existing ecosystem is enormous, both in terms of applications and trained developers. Android's "app store" model also explicitly supports multiple, branded app stores, which would allow OEMs to provide their own software product channel directly in the IVI system. Finally, if done right, sandboxing should allow the OEM to enforce a tight security model on the applications inside — perhaps providing a more isolated environment for untrusted, user-installed applications, while factory-installed applications are allowed to run natively.
The sandboxing approach taken by MontaVista utilizes Linux Containers (LXC) to isolate the sandboxed environment, and SELinux to supply a security layer. LXC containers provide a form of virtualization by isolating the sandboxed processes in a separate control group — thus allowing the host OS to limit resource usage and isolate file access — and by maintaining separate process ID and network namespaces. Separate namespaces not only hide the host OS from each container, but isolate each container from the others.
Lappalainen referred to this approach as "virtualization," but that term can mean different things to different people. Specifically, LXC containers provide OS-level virtualization akin to OpenVZ or Virtuozzo. A system running inside an LXC container can have its own view of the filesystem and a separate group of processes — with entirely different user-space code — but it still uses the same kernel as the "host" (for lack of a better term) OS. This is a distinct difference from hardware-level virtualization, which supports running any flavor of guest OS on top of the host OS. On the other hand, OS-level virtualization is generally faster because there is no overhead associated with running a virtual machine layer.
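One way to see this same-kernel sharing concretely: every Linux process exposes its namespace memberships as inodes under /proc/<pid>/ns, and two processes are in the same namespace exactly when those inode numbers match. A small Linux-specific sketch (the namespace list is illustrative, not exhaustive):

```python
import os

def namespace_ids(pid="self"):
    """Return {namespace_name: inode} for a process. Two processes share a
    namespace exactly when the inode numbers match (Linux-specific)."""
    ids = {}
    for ns in ("pid", "net", "uts", "ipc", "mnt"):
        path = f"/proc/{pid}/ns/{ns}"
        if os.path.exists(path):  # absent on non-Linux or old kernels
            ids[ns] = os.stat(path).st_ino
    return ids

print(namespace_ids())
```

Comparing the output for a host process and a containerized one would show different pid/net/uts inodes but the same running kernel, which is exactly the distinction between OS-level and hardware-level virtualization drawn above.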
But OS-level virtualization also introduces a hurdle to running one Linux-based OS inside of another if the two OSes differ significantly in kernel features, not just userspace. That is certainly the case with Android, which replaces several stock kernel features and adds several other features. In MontaVista's Android-on-GENIVI project, the host kernel is patched with Android-specific features.
Lappalainen listed the Android kernel's IPC binder, low-memory killer, logger device, and asynchronous shared-memory system (ashmem) in particular on his slides; in the talk however he simply described the kernel as including the "Android patches." He also mentioned that these kernel functions needed to be adapted to work only within the context of the Android container. In particular, Android replaces the standard Linux out-of-memory (OOM) killer with its own variety. One would only want the low-memory killer to watch for low memory conditions within the Android container, and then to only kill one of the Android container's processes.
The guest-OS containers are configured so that their processes run at lower priority than the host OS's. There are also various mechanisms used to process IO, graphics, and other resources for the collection of containers. The "event dispatcher" tracks the window coordinates of each application, for example, so it can route input events to the proper container or to the host OS. Graphics output is handled by capturing the Android container's frame buffer, and sending it to a "layer manager" that overlays it on the display together with video output from the other applications. Audio is less tricky to coordinate, he said, because it can be down-mixed into one output by the audio server. This is already what ALSA and PulseAudio do when multiple applications play sound simultaneously.
Power management is handled entirely by the host OS, which Lappalainen said required changes to the Android wakelock code. On multi-core systems, he added, the container-management code can also be used to bind containers to specific processors, which provides another method of ensuring that they cannot bring down the host OS even in the event of a serious fault.
Lappalainen did not go into much detail on the role that SELinux plays in providing further isolation for the LXC containers. It is certainly possible that SELinux could simply be set up to duplicate the filesystem isolation and other sandboxing mechanisms provided by LXC, acting as a separate, back-up "wrapper" around the containers. But SELinux might also plug security holes in LXC. For example, LXC does not provide user namespaces, which means that a malicious root user could escape from its container and execute code as the root user on the host OS.
Lappalainen outlined various use-cases for the LXC/SELinux containerization approach, noting that it could also be beneficial in other embedded Linux projects because it can isolate untrusted applications, but without the performance hit of running them in an emulator. MontaVista's implementation of this configuration is its Automotive Technology Platform (ATP), a commercial IVI product.
The company announced ATP's Android-and-HTML5 support feature in an October 10 press release, which positions ATP as a competitor to open projects like MeeGo/Tizen and Ubuntu IVI — in particular, one that has a leg up on the competition thanks to the vast array of already-written Android and HTML5 applications. IVI was not a major topic at ELCE; Jones' talk was the only other session dedicated to IVI, and it dealt as much with the plans and in-house experiments of his employer Jaguar Land Rover (JLR) as it did with GENIVI.
An illuminating snippet from that talk, however, was that it will be 2014 at the earliest before any Linux-based IVI systems are available in JLR vehicles. That is an exceptionally long time in kernel and distribution time-scales. A few other car-makers are reported to be closer, notably BMW, but have not announced a deployment schedule.
In fact, ever since the announcement of the MeeGo IVI platform, it seems that the IVI software industry has changed drastically faster than the car industry with which the rival platforms are vying to go into business. There were rumors that GM would adopt Android as the next-generation base for its OnStar system, only to have the company join GENIVI instead. MeeGo brought on several major car-makers as partners (including Toyota) in early 2011, then MeeGo morphed into Tizen without warning.
That much change can make it difficult to handicap the players. However, the big obstacle for ATP is likely to be asking car-makers to undertake supporting Android and a GENIVI Linux distribution. Even apart from the handset-and-tablet-centric stance that Google takes with the product, it sounds like a challenging customer support undertaking. In 2011, GENIVI quietly began shifting its language away from talk of a MeeGo "reference implementation" and towards "GENIVI compliance," which blesses multiple distributions. That could be because GENIVI had early warning of the migration from MeeGo to Tizen; regardless of whether or not GENIVI formally adopts Tizen as its reference platform, Tizen will match ATP's HTML5 support, which could make the web-runtime selling-point moot.
In short, though the work that has gone into virtualizing "guest" Linux OSes in MontaVista's ATP is interesting, it seems odd to position it as an IVI-specific technology. There are certainly plenty of users of other form factors that would love the chance to run thousands of Android applications inside a secure sandbox — starting with smartphone and desktop Linux distributions. Whether ATP, Android itself, or some other solution entirely is adopted by GENIVI or the car-makers remains to be seen, but it does seem likely that a hybrid like ATP will be fighting an uphill battle.
[The author would like to thank the Linux Foundation for assisting with his travel to the Embedded Linux Conference Europe 2011.]
ELCE11: Sandboxing for automotive Linux
Posted Nov 3, 2011 7:23 UTC (Thu) by simlo (guest, #10866) [Link]
I once saw Windriver talking about the same thing: Using time slices for virtual machines (running VxWorks) within a simplified VxWorks. This was for aerospace, not automotive.
ELCE11: Sandboxing for automotive Linux
Posted Nov 3, 2011 16:23 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]
That's why I'd just separate critical functionality into a completely separate CPU, maybe even with a separate network.
ELCE11: Sandboxing for automotive Linux
Posted Nov 3, 2011 14:16 UTC (Thu) by davecb (subscriber, #1574) [Link]
I speculate that there is likely to be quite an overlap between SEL and LXC, something that might trigger some interesting cross-pollination.
--dave (my uncle the farmer really liked cross-pollination) c-b
ELCE11: Sandboxing for automotive Linux
Posted Nov 3, 2011 18:15 UTC (Thu) by jimparis (subscriber, #38647) [Link]
Power management seems like a minor concern. Cars have orders of magnitude more energy available -- a typical car battery holds something like 500 W-h, whereas the Nexus One battery is about 5 W-h. Your overhead dome lamp draws more power than your phone.
Is the assumption that all car IVI systems will have some form of always-on network connection? Android certainly doesn't seem designed for an offline use case. I don't know how things like a store would even work in that case, and many applications and games are supported by ads.
ELCE11: Sandboxing for automotive Linux
Posted Nov 3, 2011 23:00 UTC (Thu) by martinfick (subscriber, #4455) [Link]
I think that is dreaming. OS level virtualization can handle 1000s of guests, do you think KVM "done right" could even handle 100?
There are many reasons that you will want to upgrade your Windows NT domains to Active Directory, not least of which is to make use of Active Directory's new features. It's possible to have significantly fewer domains in Active Directory because each domain can now store millions of objects. Fewer domains means less administration, and the added benefit of using organizational units to segregate objects makes delegation within a domain much easier. An upgrade also lets you:
Replace single-master PDC/BDC replication with multimaster replication and a significantly more efficient set of replication algorithms.
Reduce the number of PDCs/BDCs to a smaller number of DCs through a more efficient use of DCs by clients.
Eliminate the need for reliance on WINS servers and move to the Internet-standard DNS for name resolution.
There are three important steps in preparing for a domain upgrade:
Test the upgrade on an isolated network segment set aside for testing.
Do a full backup of the SAM and all core data prior to the actual upgrade.
Set up a fallback position in case of problems.
We cannot stress strongly enough how enlightening doing initial testing on a separate network segment can be. It can expose a wide variety of upgrade problems, show you areas that you never considered, and, in cases where you have considered everything, give you the confidence that your trial run did exactly what you expected. In the world of today's complex systems, some organizations still try to roll out operating system upgrades and patches without full testing; this is just plain daft. The first part of your plan should be a test of the upgrade itself.
When you do the domain upgrade itself, it goes without saying that you should have full backups of the Windows NT SAM and the data on the servers. You would think this is obvious, but again we have seen a number of organizations attempt this without doing backups first.
The best fallback position is to have an ace up your sleeve, and in Windows NT upgrade terms, that means you need a copy of the SAM somewhere safe. While backup tapes are good for this, there are better solutions for rapid recovery of a domain. These recipes for success require keeping a PDC or a BDC of your domain safely somewhere. In this context, by safely we mean off the main network. Your first option is to take the PDC off the network. This effectively stores it safely in case anything serious goes wrong. Next, as your domain now has no PDC, you need to promote a BDC to be the PDC for the domain. Once that has been done successfully, and you've manipulated any other services that statically pointed at the old PDC, you can upgrade that new PDC with the knowledge that your old PDC is safe in case of problems. The second option is to make sure that an existing BDC is fully replicated, then take it offline and store it. Both solutions give you a fallback PDC in case of problems.
Remember that the first domain in a forest is a significant domain and cannot be deleted. That means you cannot create a test domain tree called testdom.mycorp.com, add a completely different noncontiguous tree called mycorp.com to the same forest, and subsequently remove testdom.mycorp.com. You have to make sure that the first domain that you ever upgrade is the major or root domain for the company. In Windows NT domain model terms, that means upgrading the master domains prior to the resource domains. The resource domains may end up being Organizational Units instead anyway now, unless political, cultural, or bandwidth reasons force you to want to keep them as domains.
Single Windows NT domains and complete trust domains can be upgraded with few problems. With a single domain, you have just one to convert, and with complete trust domains, every domain that you convert will still maintain a complete trust with all the others. However, when you upgrade master domains or multimaster domains, there are account and resource domains that need to be considered. No matter how many master domains you have, the upgrade of these domains has to be done in a certain manner to preserve the trust relationships and functionality of the organization as a whole. We'll now explain the three broad ways to upgrade your master domain structure.
Let's assume that you have one or more single-master or multimaster domains that you wish to convert. Your first task will be to create the forest root domain. This domain will act as your placeholder and allow you to join the rest of the domains to it. The forest root domain can be an entirely new domain that you set up, or you can make the first domain that you migrate the forest root domain.
Take a look at Figure 15-1, which shows a Windows NT multimaster domain. Each domain that holds resources trusts the domains that hold user accounts, allowing the users to log on to any of the resource domains and use the respective resources.
There are three main ways to upgrade this domain. None of them is necessarily better than the others, as each design would be based on choices you made in your namespace design notes from Chapter 8.
First, the domains could all be joined as one tree under an entirely new root. Each master domain would represent a branch under the root with each resource domain joined to one of the masters. This is shown in Figure 15-2.
The second option is to aim toward making one of the master domains the root of the new tree. All resource domains could then join to the root, one of the other master domains, or one of the resource domains. Figure 15-3 shows this in more detail. Two resource domains have been joined to one of the master domains, but the third resource domain can still go to one of three parents, as indicated by the dashed lines.
Finally, you could make each domain a separate tree. While the first master domain that you migrate will be the forest root domain, the rest of the master domains will simply be tree roots in their own right.
Let's now consider the process for migrating these domains. We must migrate the master account domains first, since they are the ones that the resource domains depend on. To start the process, convert any one of the master account domains over to Active Directory by upgrading the PDC of that master domain. If any of the trust relationships have been broken between this domain and the other master and resource domains during migration, reestablish them. Once the PDC is upgraded, proceed to upgrade the other BDCs of that domain (or you can leave the domain running with Windows NT BDCs; it doesn't really matter to the rest of the migration).
The next step is to migrate the other master domains. You continue in the same manner as you did with the first domain until all master domains have been converted. Once each domain is converted, you need to reestablish only trust relationships with the existing Windows NT domains; the Active Directory domains in the forest will each have hierarchical and transitive trusts automatically anyway. So now you end up with a series of Active Directory master domains in a tree/forest and a series of Windows NT resource domains with manual trusts in place.
Once all the master domains are converted, you can start consolidating them (as discussed in the next section), or you can immediately convert the resource domains. Either way, once all domains are converted, you are likely to start a consolidation process to reduce the number of domains that you have in existence. Part of that consolidation will be to convert existing resource domains to Organizational Units. This is because resource domains by their very nature tend to fit in well as Organizational Units.[1] For that to happen, these future Organizational Units will need to be children of one of the migrated master or resource domains. It doesn't matter which master or resource domain acts as the parent, since there are consolidation tools available that allow you to move entire branches of the tree between domains. The process is simple: you take each resource domain in turn and convert it to a child domain of one of the existing Active Directory master or resource domains. Once they are all converted, you can really begin consolidation.
[1] Resource domains were created because of Windows NT's inability to allow delegation of authority within a domain. Now Organizational Units provide that functionality, so separate resource domains are no longer required. Thus, old resource domains can become Organizational Units under Windows 2000 and still maintain all their functionality.
Upgrading your domains is not the end of the story. Many administrators implemented multiple Windows NT domains to cope with the size constraints inherent in Windows NT domains. With Active Directory, those constraints are lifted, and each domain in a forest can easily share resources with any other domain. This allows administrators to begin removing from the directory information that has become unnecessary in an Active Directory environment.
When your final Windows NT 4.0 BDC for a domain has been taken out of service or upgraded, you are ready to convert the domain to Windows 2003 functional level. After the conversion, you have some decisions to make about the groups you have in this domain. You can leave all groups as they are or start converting some or all groups to universal groups. With multiple domains in a single forest, you can consolidate groups from more than one domain together into one universal group. This allows you to combine resources and accounts from many domains into single groups.
There are two methods for bringing these groups online:
Setting up parallel groups
Moving existing groups
In a parallel group setup, the idea is that the administrator sets up groups that hold the same members as existing groups. In this way, users become members of both groups at the same time, and the old group and a user's membership can be removed in a calculated manner over time. The arguably easier solution is to move existing groups, but to do that you need to follow a series of steps. Take the following example, which leads you through what's involved.
Three global groups (part_time_staff in finance.mycorp.com, part_time_staff in mktg.mycorp.com, and part_time_staff in sales.mycorp.com) need merging into one universal group, to be called part_time_staff in mycorp.com. The following is the step-by-step procedure:
1. All part_time_staff global groups are converted to universal groups in their current domains.
2. To make the part_time_staff universal group names unique so that they can all exist in one domain, each group needs to be renamed with a domain element. That means finance\part_time_staff, mktg\part_time_staff, and sales\part_time_staff become finance\finance_part_time_staff, mktg\mktg_part_time_staff, and sales\sales_part_time_staff.
3. Make use of the Windows 2003 functional level ability to move groups, and move the three groups to the mycorp.com domain. This leaves you with mycorp\finance_part_time_staff, mycorp\mktg_part_time_staff, and mycorp\sales_part_time_staff.
4. Create a new universal group called part_time_staff in the mycorp.com domain.
5. Make the mycorp\finance_part_time_staff, mycorp\mktg_part_time_staff, and mycorp\sales_part_time_staff groups members of the new mycorp\part_time_staff universal group.
You can then remove the three old groups as soon as it is convenient. Remember that, while this is an easy series of steps, there may be an entire infrastructure of scripts, servers, and applications relying on these groups. If that is the case, you will need to either perform the steps completely and modify the infrastructure to look at the new single universal group after Step 5, or modify the infrastructure immediately after you complete Step 2 and then again after you complete Steps 3 to 5 in quick succession. We favor the former, since it requires that the work be done once, not twice.
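The bookkeeping in the five steps above is easy to get wrong at scale. The following Python sketch models the procedure abstractly — groups as simple records, "moving" as a change of domain field — purely to show why renaming before the move (Step 2) avoids name collisions in the target domain. It is an illustration of the logic only, not a Windows administration tool; all names come from the example above.

```python
# Abstract model of the universal-group consolidation steps above.
# Groups are dicts; "moving" a group just changes its domain field.

def consolidate(groups, target_domain, merged_name):
    # Step 1: convert each global group to a universal group in place.
    for g in groups:
        g["scope"] = "universal"
    # Step 2: rename with a domain prefix so names stay unique after the move.
    for g in groups:
        g["name"] = "{}_{}".format(g["domain"].split(".")[0], g["name"])
    # Step 3: move the renamed groups to the target domain.
    for g in groups:
        g["domain"] = target_domain
    # Steps 4 and 5: create the new universal group in the target domain and
    # nest the renamed groups inside it (represented here by the members list).
    return {"name": merged_name, "domain": target_domain,
            "scope": "universal", "members": [g["name"] for g in groups]}

groups = [
    {"name": "part_time_staff", "domain": "finance.mycorp.com", "scope": "global"},
    {"name": "part_time_staff", "domain": "mktg.mycorp.com", "scope": "global"},
    {"name": "part_time_staff", "domain": "sales.mycorp.com", "scope": "global"},
]
merged = consolidate(groups, "mycorp.com", "part_time_staff")
```

Without the rename in Step 2, all three records would arrive in mycorp.com carrying the same name, which is exactly the collision the real procedure must avoid.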
When it comes to considering computer accounts, things are relatively straightforward. Under Windows NT, a computer could exist in only one domain at a time, since that computer and domain required a trust relationship to be established to allow domain users to log on to the domain at that client. You could set up bidirectional trust relationships manually between domains, allowing a client in Domain A to authenticate Domain B users to Domain B, but this was not common. With Active Directory, all domains in a forest implicitly trust one another automatically. As long as the computer has a trust relationship with one domain, users from any other domain can log on to their domain via the client by default. The following is a rough list of items to consider:
Moving computer accounts between domains to gain better control over delegation
Joining computers to the domain
Creating computer groups
Defining system policies
In all of these, it is important to understand that the source domain does not have to be at the Windows 2003 functional level to move computers to a new domain. In addition, administrators can use the NETDOM utility in the Windows Support Tools to add and remove domain computer objects/accounts; join a client to a domain; move a client between domains; verify, reset, and manage the trust relationships between domains; and so on.
While you may have had computer accounts in a series of domains before, you now can move these accounts anywhere you wish in the forest to aid your delegation of control. Group Policy Object processing also has a significant impact on where your computer accounts should reside. However, you now can work out what sort of Organizational Unit hierarchy you would ideally wish for your computer accounts and attempt to bring this about. Moving computers between domains is as simple as the following NETDOM command.
Here we want to move a workstation or member server, called mycomputerorserver, from the domain sales.mycorp.com to the location LDAP://ou=computers,ou=finance,dc=mycorp,dc=com. We specifically want to use the myDC domain controller and the MYCORP\JOINTODOMAIN account to do the move. Connection to the client will be done with the SALES\Administrator account, which uses an asterisk (*) in the password field to indicate to prompt for the password. We could just as easily have used an account on the client itself. We also include a 60-second grace period before the client is automatically rebooted:
NETDOM MOVE mycomputerorserver /DOMAIN:mycorp.com ^
    /OU:ou=computers,ou=finance,dc=mycorp,dc=com ^
    /UserD:jointodomain /PasswordD:thepassword ^
    /Server:myDC ^
    /UserO:SALES\Administrator /PasswordO:* ^
    /REBOOT:60
This is actually the long-winded version, split up onto multiple lines for visibility; here's the short form:
NETDOM MOVE mycomputerorserver /D:mycorp.com /OU:ou=computers,ou=finance,dc=mycorp,dc=com /UD:jointodomain /PD:thepassword /S:myDC /UO:SALES\Administrator /PO:* /REB:60
Note that moving a Windows NT computer doesn't delete the original account, and moving a Windows 2000 computer just disables it in the source domain.
You also need to consider who will be able to add workstations to the domain. You can set up an account with join-domain privileges only, i.e., an account with the ability to make and break trust relationships for clients. We've used this approach with a lot of success, and it means that an administrator-equivalent user is no longer required for joining clients to a domain. Let's take the previous example, but this time we wish to both create an account and join a new computer to the domain with that account. This is the code to do that using NETDOM:
NETDOM JOIN mycomputerorserver /D:mycorp.com /OU:ou=computers,ou=finance,dc=mycorp,dc=com /UD:jointodomain /PD:thepassword /S:myDC /UO:SALES\Administrator /PO:* /REB:60
In all these NETDOM examples, we're using a specially constructed account that only has privileges to add computer objects to this specific Organizational Unit. At Leicester we precreated all the computer accounts, and the jointodomain account was used only to establish trusts between existing accounts; it had no privilege to create accounts in any way.
You also need to be aware that workstation accounts under Windows NT could not go into groups. Under Active Directory, that has all changed, and you can now add computers to groups. So when moving computers between domains for whatever purposes, you now can use hierarchical Organizational Unit structures to delegate administrative/join-domain control, as well as using groups to facilitate Group Policy Object (GPO) upgrades from system policies.
System policies themselves are not upgradeable. However, as explained in Chapter 7 and Chapter 10, you can use system policies with Active Directory clients and bring GPOs online slowly. In other words, you can keep your system policies going and then incrementally introduce the same functionality into GPOs. As each part of a system policy is reproduced in a GPO, you can remove that functionality from the system policy while still maintaining the policy's application. Ultimately, you will have replaced all the functionality incrementally, and the system policies, having nothing left in them, can be deleted.
When consolidating domains, you'll need at some point to move users around to better represent the organization's structure, to gain better control over delegation of administration, or for group policy reasons. Whichever of these it is, there are useful tools to help you move users between domains.
To be able to transfer users between domains, you need to have gone to Windows 2000 functional level, and this will have ditched all your Windows NT BDCs. This allows a seamless transfer of the user object, including the password. A good method for transferring users and groups so that no problems occur is as follows:
1. The first stage is to transfer all the required domain global groups to the destination domain. This maintains the links to all users within the source domain, even though the groups themselves have moved.
2. Now the users themselves are transferred to the destination domain. The domain global group memberships are updated to reflect the fact that the users have now joined the same domain.
3. You can then consolidate the domain global groups or move them back out to the original domain again. This latter option is similar to Step 1, in that you move the groups and preserve the existing links during the move.
4. Clean up the users' Access Control Lists to resources on local computers and servers, since they will need to be modified after the change.
If you do it this way, you may have fewer problems with group memberships during the transition. As for moving users, while you can use the Active Directory Users and Computers MMC to move containers of objects from one domain to another, there are also two utilities, called MOVETREE and SIDWALK, in the Resource Kit that can come in very handy.
MOVETREE allows you to move containers from one part of a tree in one domain to a tree in a completely different domain. For example, suppose we wish to move the branch of the tree under an Organizational Unit called Managers from the sales.mycorp.com domain to the Organizational Unit called Sales-Managers on the mycorp.com domain. The command we would use to start the move is something like the following, preceded by a full check:
MOVETREE /start /s sales.mycorp.com /d mycorp.com /sdn OU=Managers,DC=sales /ddn OU=Sales-Managers /u SALES\Administrator /p thepassword
The SIDWALK utility is designed to support a three-stage approach to modifying ACLs. Each stage of changing ACLs can take a while to complete and verify, sometimes a day or more. It thus requires some amount of system resources and administrator time. The stages are:
The administrator needs to determine what users have been granted access to resources (file shares, print shares, NTFS files, registry keys, and local group membership) on a particular computer.
Based on who has access to what resources on the system, the administrator can choose to delete old, unused security identities or replace them with corresponding new identities, such as new security groups.
Using the information from the planning and mapping phases, the third stage is the conversion of security identities found anywhere on a system to corresponding new identities.
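SIDWALK's three stages amount to: enumerate the identities referenced by each resource's ACL, build a mapping from old identities to new ones (or to deletion), then rewrite the ACLs from that mapping. The Python sketch below models those stages on toy data; it is purely illustrative, and the SID strings and resource names are made up — it has nothing to do with SIDWALK's actual file formats or switches.

```python
# Toy model of the three SIDWALK stages: enumerate, map, convert.

acls = {  # resource -> list of security identities granted access
    r"D:\payroll": ["S-1-5-21-OLD-513", "S-1-5-21-OLD-1104"],
    r"HKLM\Software\App": ["S-1-5-21-OLD-513"],
}

# Stage 1: determine which identities are in use anywhere on the system.
in_use = {sid for entries in acls.values() for sid in entries}

# Stage 2: decide, per identity, whether to replace it with a corresponding
# new identity (e.g. a group in the new domain) or delete it outright (None).
mapping = {"S-1-5-21-OLD-513": "S-1-5-21-NEW-513",   # replace
           "S-1-5-21-OLD-1104": None}                # delete (unused account)

# Stage 3: convert every ACL according to the mapping; unmapped SIDs pass
# through unchanged, and SIDs mapped to None are dropped.
converted = {
    res: [mapping.get(sid, sid) for sid in entries if mapping.get(sid, sid)]
    for res, entries in acls.items()
}
```

The point of splitting the work into three passes, as the text says, is that each stage can be inspected and verified on its own before any ACL is actually rewritten.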
After you've migrated, you may want to get rid of some old domains entirely, move member servers between domains, consolidate multiple servers together, or possibly even convert a member server to become a DC. Whatever you're considering, moving member servers and their data while maintaining group memberships and ACLs to resources can be done. Again, as with users and computers, taking the process in stages helps ensure that there is less chance of a problem.
If you're considering moving member servers between domains or removing domains in general, these are the steps that you need to consider:
Make sure that the source domain and the destination domain are at the Windows 2000 or higher functional level.
Move all groups from the source domain to the target domain to preserve memberships.
Move the member servers to the destination domain.
Demote the DCs to member servers, removing the domain in the process.
Clean up the Access Control Lists to resources on local computers and servers, since they will need to be modified after the change.
Hello, Everyone!
Today we have some great news to share with you. To make a long story short, our C/C++ IDE goes public as CLion!
Since the early days, JetBrains has been focused on making software development more productive and enjoyable. Having started with a simple refactoring tool for Java, we have provided support for an amazing line-up of languages and platforms: Java, .NET, Python, Ruby & Ruby on Rails, PHP, JavaScript, HTML, Objective-C and many others. Our intelligent tools are widely known for their promotion of code quality, refactorings and smart editing features.
The C and C++ languages have a history going back to the early days of programming itself. They are two of the most successful survivors from the ‘primordial soup’ of programming languages, while most of the others now lie forgotten. So here at JetBrains we were driven by the belief that we could make C/C++ developers’ lives easier with a new IDE targeting these specific languages.
We are very much looking forward to your feedback to help us create a tool that you will enjoy using on an everyday basis. That is why this early access program exists in the first place. Please note that this build is not even a Beta yet, and we have lots of things to do before we release v1.0.
Let us introduce the main features included in this build:
CMake
CLion uses CMake as a project model. It takes all the project’s information (source files, compiler settings, targets description, etc.) and handles all your changes in CMake files automatically.
If you already have a CMake-based project, just open the main CMakeLists.txt file in the IDE. If not, then our simple wizard will help you create a new project by initializing CMakeLists.txt with all the necessary definitions. Every change you make in CMakeLists.txt is automatically handled by CLion (but you can also call Reload CMake Project manually). Naturally, the IDE will invoke CMake automatically while building your project, so you don’t need to do it yourself.
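For readers who have not used CMake before, the CMakeLists.txt that a new-project wizard of this kind produces is quite small; a minimal file along the following lines (the project and source-file names are placeholders) is enough to describe a one-executable project:

```cmake
cmake_minimum_required(VERSION 2.8.4)
project(my_project)

# Enable C++11 for the compilers CLion supports (GCC and Clang).
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")

set(SOURCE_FILES main.cpp)
add_executable(my_project ${SOURCE_FILES})
```

Adding a source file to SOURCE_FILES is the only change needed to grow the project; the IDE picks the edit up automatically, as described above.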
All CMakeCache variables and CMake errors are available within the CMake tool window inside the IDE:
Compiler and Debugger
CLion supports the GCC and Clang compilers. For debugging, CLion currently supports GDB 7.8. The debugging experience is just as you would expect: you can run your program step by step, set breakpoints, evaluate expressions, add watches, and set variable values manually during execution:
Cross-Platform Compatibility
CLion is a cross-platform IDE, so you can use it on OS X, Linux or Windows. On Windows, the MinGW and Cygwin tool sets can be used. Our Quick Start Guide lists the tools you need on each platform to get started with CLion.
Note: If you are using Visual Studio for C++ development (and the Visual C++ Compiler), try our ReSharper for C++.
Languages and Standards
CLion supports various languages:
- C (C99 version)
- C++ (C++03 and C++11, including lambda functions, raw string literals, variadic templates, decltype, auto and more)
- HTML (including HTML5), CSS, JavaScript, XML
- Some other languages are also available via plugins (for example, Lua)
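As a quick taste of the C++11 support listed above, here is a small self-contained snippet exercising several of those features — auto, a lambda, a raw string literal, and a variadic template. It is our own illustrative example, not code shipped with CLion:

```cpp
#include <string>

// Variadic template: sums any number of arguments (C++11).
template <typename T>
T sum(T value) { return value; }

template <typename T, typename... Rest>
T sum(T first, Rest... rest) { return first + sum(rest...); }

// A lambda stored in an auto variable (C++11).
inline int apply_twice(int x) {
    auto twice = [](int n) { return 2 * n; };
    return twice(x);
}

// Raw string literal: backslashes need no escaping (C++11).
inline std::string windows_path() {
    return R"(C:\Users\clion)";
}
```

All of these constructs are resolved by CLion's code insight, so completion and navigation work through the template and lambda just as they do for plain functions.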
Intelligent Features
Knowing your code through and through, CLion takes care of the routine while you focus on the important things. The intelligent features of CLion are expressly designed to boost your productivity and improve the quality of your code.
The smart editor saves your time with code completion and highlighting (including smart completion that filters the list of types, methods, and variables to match the expected type of an expression).
Efficient project navigation will help you find your way through the code:
- Full-scale search via Find Usages:
- File structure navigation
- Navigating to class/file/symbol
- Navigating to declaration/definition/super definition/subclass:
- Navigating through the timeline (recent files, recent changes, last edit location, etc.)
- Search everywhere, and other types of search
CLion monitors your code and tries to keep it accurate and clean. It detects potential errors and problems, and suggests quick-fixes for them:
CLion offers a wide variety of reliable code refactorings, which track down and correct the affected code references automatically:
- Rename (works also for CMakeLists.txt usages):
- Extract method/variable/typedef/define/etc.
- Change signature:
- Safe delete
- Inline
Writing code can be a lot easier and quicker when you use the code generation options available in CLion:
- Generate constructor/destructor
- Generate getters/setters
- Override/implement
- Live templates
- Surround with if-else, while, #ifdef, etc.:
Watch CLion in action:
Are you interested? Give it a try!
What’s Next?
We are planning to publish v1.0 in a couple of months. While continuing work on the current features (and especially on CMake support), we hope to add LLDB and/or Google Test support. If there is a particular feature you’d love to see in CLion, please post it to our issue tracker, or vote for it if it is already there. We will review these requests soon after the 1.0 release as we prepare the roadmap for future releases.
If you require even more information, please visit our website. If you have any questions, feel free to ask them on the CLion Discussion Forum, on Twitter, or here on our blog, where you can find news and updates as well as tips and tricks on how to use the IDE efficiently. And don’t hesitate to report any problems to our support team (clion-support at jetbrains.com) or on our issue tracker.
Develop with pleasure!
The CLion Team
Great! Can’t wait to try it.
Thanks for the great news. I’ve tried the private preview already and liked it a lot.
Question: Will you provide converters for Makefile-based projects?
And I’m wondering if there will be an open-source version?
We will support other project models in the future. Priority depends on the votes in the tracker:
Great news! Note: your link to Resharper for C++ appears to point to an internal URL…
Thanks. Fixed. Sorry for the inconvenience.
Great news!
Just built the Arch Linux package for convenient installation
The package source is managed at
Will CLion also be available as a plugin for IntelliJ IDEA, like other languages?
Congrats!
We’ll consider the plugin option in the future. Definitely not before the 1.0 release.
A plugin for IntelliJ along with some supporting features for Java JNI/Android NDK development would be great!
+1 Seeing how JetBrains is really making a play for Android development, adding CLion as an Ultimate plugin with NDK support would be amazing. I just tried the EAP release on Linux and it is very impressive.
+1 for Android NDK / iOS plugin for intellij. could make the ide perfect for connected mobile apps with client and server code.
+1
+1
Please
+1 Amazing C++ IDE in its early days! Very impressive.
And world badly needs a CLion for NDK.
+1 for an IntelliJ IDEA plugin. I am a big fan of IntelliJ IDEA and use this single IDE for almost all my development in different languages. A CLion plugin for IDEA would be great.
Hi,
Does an issue exist that we can track for updates on this?
I would also hope that an Idea Ultimate plugin would be a priority for proper native programming support which is lacking.
Cheers!
Note: If necessary I would pay for an Idea plugin if it were sold separately.
Not yet.
I hope you make it an intelliJ plugin… Keep the flagship flying the colors!
+1 . I work on multilanguage projects and would love C/C++ support in IDEA as well !
Really excited about the new C/C++ IDE !
Great news!
Is there any way to open a directory as a project?
If you have a CMakeLists.txt file there – yes.
It’s so great!!
I created a project from existing sources; however, search isn’t working properly.
I always get: “No occurrences of ‘foo_bar’ found in Project. 34 usages are out of scope ‘Project and Frameworks’. Find Options… (Ctrl+Shift+F)”
What’s wrong with the search?
Do you have correct CMakeLists.txt files that list all the files in the project?
What if we want to search in files that are not related to the project?
I created a custom scope as well but I still cannot search properly.
Looks like it’s a bug. Could you please file a report here, describing the case in detail?
What would be the best way to get CLion working with a cross-compiler for an embedded device, say the MSP430? There is a port of gcc and gdb; however, a tool known as mspdebug is needed for programming and initiating debug. Is there any way to integrate this into CLion, or where should I start looking?
Currently we support neither remote debugging nor cross-compilation. We plan to implement this functionality in the future:
What languages will you support in the future, once version 1.0 is released? I would like it if CLion supported Python, a nice “glue” language for C/C++ projects.
CLion will probably have a Python plugin in the future. We are considering this now.
What else are you interested in? There are C, C++, HTML, CSS, JS, XML and Lua for now.
Oh I’d so love to see ruby support <3
The D language is a very nice language. It can be used as a replacement for C and C++, and there is a growing community around it. D has all the features needed to produce fast concurrent systems, and all the benefits you would expect of a modern language. Unfortunately, there are no good IDEs for D. Are there any plans to develop a plugin for the D language?
This would definitely get me interested in purchasing CLion.
We are unaware of this.
+1 for D Language support, D would be amazing with a proper IDE
D support would be really great.
Would definitely love a D version of this. Just spent an entire day trying to use various command line and IDE options to get a D environment working to no success. A good IDE from you guys would be extremely welcome
+1 for Dlang support!
+1, I’d definitely like to see D support
There is a third-party plugin () you can try with CLion.
Thanks for the answer!
But what are your (business) plans for CLion? If you are going to support some (or all?) major languages, why purchase IntelliJ IDEA? Will CLion be your “C/C++ alternative” to the “Java alternative” (IntelliJ IDEA)? Or are you placing CLion in between IntelliJ and the other IDEs for specific languages?
CLion will support C/C++ and some complementary languages (Lua, Python, etc.), mostly via plugins. Web technologies (CSS, HTML, XSLT, JS) are already bundled with the IDE.
Is there/will there be compatibility with PyCharm? I’ve been trying to write a C++ file using but PyCharm won’t locate the headers. Xcode does however. Is this an incompatibility in PyCharm or do I need to edit the CMake file?
PyCharm is an IDE for Python. If you need a C/C++ IDE use CLion.
+1 for Python – c+python – a marriage material:)
File templates don’t seem to be working. All I see is:
File
Directory
—-
HTML File
JavaScript File
XSLT Stylesheet
—-
Edit File Templates…
When I edit file templates, I see there are ones for C, C headers, and so on. They don’t show in the project window context menu, though.
If I create a new C file, I don’t get the template inserted, just an empty file.
Yes, there are no such templates for now. But we have this in our plans:
I would really love to import a Visual Studio project into CLion!
You know what would be just fantastic??? If you would add a C# support plugin to CLion…
I would pay big bucks for it
I’m not really sure we are planning this at all.
Would love to see that too, along with support for the Mono/Xamarin platform.
Will the Qt framework be supported in CLion?
Libraries are resolved already at some point. As for the qmake –
thanks for reply.
What about UI forms? Will some designer tool be provided for that, like in QtCreator?
Not in the 1.0 release. We’ll consider it later. Add a feature request to our tracker:
Guys! It seems very promising. I don’t really like C++, but with CLion it may change. Thanks for making programming easier and more fun.
Great news!!! But I, like many others, am still waiting for the C# IDE :P!
CLion sounds cool! Question: When I buy IntelliJ Ultimate, will I get access to CLion or its features (as a plugin), or will I have to buy it separately then?
At first CLion will be a separate IDE. After 1.0 release we’ll consider the plugin option.
+1 for making clion plugin available to ultimate users
Hi,
even if it is still a different IDE, will the IntelliJ Ultimate license be sufficient to use CLion?
No, it’s a separate product with a separate license.
How can I build all the targets in one click? My CMakeLists.txt contains a lot of targets, Ctrl+F9 only builds the first one, and I can’t find a “Build all” option.
Unfortunately, not right now. Vote for the issue () to increase its priority.
Hi there
will version 1 support the C11 standard?
We’re not sure about this for the 1.0 release. But we have C11 in our plans in the tracker –. Vote it up to increase the priority.
Will it be available as a plugin for Intellij?
I would love to have only one IDE that I can use for almost all languages/platforms
With 1.0 release – no. Later we’ll consider this as an option.
Support for cross-compilation/debugging would be great. I’d like to be able to configure custom (i.e. per project) toolkit (including GDB), custom GDB scripts, etc.
Currently embedded programming is somehow dominated by Eclipse CDT, time to change it! 😉
Vote for this to increase the priority of the issue:
This is cool! Will you guys support Arduino? Please Please Please!
(Or does this fall in remote run/debug request?)
Did I say please?
I think you are talking about this:
Yep. That’s it! Thanks!
Very cool, I learned C++ using Borland C and I recall the experience as “less than resharperly” in terms of joy. I’ll give this a whirl for fun, but I have a question for the team…
What is the purpose of producing a C/C++ editor, given that there are numerous editors favored by many? This is an honest question; I’m not being cheeky. What was the motivation, and what does the future look like for CLion?
We believe we can make C/C++ developers’ lives easier by providing an IDE that enhances their productivity. We see a lack of proper C/C++ IDEs, and we want to apply our extensive experience in this area.
How does this compare to AppCode? And if it’s better, is there a way to import my current projects from AppCode into CLion?
C/C++ languages support is the same in both IDEs. The difference here is that CLion is cross-platform and based on CMake project model. Supported tools will also differ in future.
I’m loving it; the only downside for me is that there are no pretty-printers for things like std::vector/map etc. in the debugging view. I don’t like getting entangled in implementation details just to get the information I need.
This is it:. Vote to increase the priority
I do most of my dev in AppCode but often need to port to other platforms, so this is great!! I’d love it if there were some integration between the two so that I could easily maintain an Xcode and CMake project, but even doing this manually I’m happy to not need Visual Studio anymore! It would be great if Xcode shortcut keys could be added as an option, though.
AppCode has the same C/C++ language support, but the build systems are different in AppCode and CLion. Not sure if we are going to support Xcode build system in CLion, but feel free to add a feature request to our tracker.
About the Xcode keymap – it’s there. Press Ctrl+` and switch the keymap.
Thanks Anastasia, sorry for the slow reply and sorry I missed the key mapping – that’s great! I have added a feature request here:. Hopefully some other people will vote on it too.
Looks great. I have a multi-project cmake/nodejs based build system for C across
Linux/BSD/Mac/Windows/ARM (Android).
Will this support Android and iOS cross compiling? Will it integrate CPACK (NSIS/Debian/RPM)?
Valgrind support?
What is the likely cost? Will there be multiple editions? Will there be a community edition?
Thanks
Cross-compilation feature –
Valgrind feature –
Feel free to vote to increase the priority.
The cost will be around the same as AppCode’s. There will be the typical licenses: Commercial, Personal, Classroom, Academic. There are no thoughts about a community edition for now. Maybe later.
Link for the valgrind youtrack issue gets me a “You have no permissions to view this page” page.
Sorry for that. Please, try now –
Works now. Thanks.
CPP-548 doesn’t seem to exist. (Memory inspection) discusses Valgrind. I’m not sure how to interpret the History. Seems to imply it was implemented (?).
Heh… That will impair my desire to take it on. The IDEA licensing was winning enough for someone to actually take you up on it. I don’t give a damn HOW approachable the licensing is- if you can, at a moment’s notice yank all rights to the thing like BitKeeper did with Linus and company…you get the idea.
Yes, a company has the right to make money. I’ve been burned by Lord only knows how damn many companies that had the best thing yet only to disappear on me or not support the new platform I’d moved to. Never again.
I’ve used this tip: . But I’m still waiting for official Valgrind support in CLion.
Hi dudes. After some googling I found the following helpful alternative in StackOverflow answers: .
Hello,
I was hopeful about this until I read:
“Note: If you are using Visual Studio for C++ development (and the Visual C++ Compiler), try our ReSharper for C++.”
What I need is one project file and one editor, so I can build the same project on multiple platforms. This limitation means I cannot do that. I assume this limitation is because you don’t want to upset Microsoft, but that doesn’t help me as a customer.
What I meant to say is I request Microsoft compiler support.
Right now we have no plans for supporting the MS Visual C++ Compiler. We may consider it in the future.
Why not extend QtCreator?
On Windows, Visual Studio is the reference IDE, so why add another IDE to the list of “other” choices?
Doesn’t QtCreator already have many features that could be extended easily?
We have great experience building IDEs on our IntelliJ platform. There are a lot of smart features ready to be reused, so from our point of view it’s more logical to create a separate IDE.
That is simple. They can not make money on QtCreator.
lol, IDEs…
Real men use ed.
Real men use cat
Super men use vi/vim.
I tried to create a text file, include it in the CMake file, and then read from it in the project, but with no luck: I can’t read from the file!?
What do you mean by ‘read from it in the project’?
I want to read from this file in my C++ code.
I’ve tried all the file-reading methods in C++, but with no luck.
So add the file to the project by pointing to it in CMakeLists.txt. For example, add it to the SOURCE_FILES variable, which is used somewhere like add_executable(exec_name ${SOURCE_FILES}), and then you can use the file in the project.
Why were the comments removed?
They are not, just hidden for pre-moderation.
That’s OK, they’ve appeared again. Please, can you tell me what the issue is with reading from the file?
Just answered, if I got you right.
That’s what I have done so far, and I still can’t read from the file:
set(SOURCE_FILES main.cpp in.txt)
add_executable(untitled ${SOURCE_FILES})
Could you please send us a sample project to clion-support at jetbrains.com?
I have emailed clion-support@jetbrains.com; is that what you meant, or did something go wrong? No one has replied so far.
Please, can you give me the link or something? I searched the site but can’t find anything.
clion-support at jetbrains.com
I have emailed clion-support@jetbrains.com; is that what you meant, or did something go wrong? No one has replied so far.
We’ve just replied. Sorry for a small delay – there is a lot of feedback messages coming to us these days.
Besides what we’ve answered via e-mail – you can go to Run | Edit Configuration and set Working directory there to use relative paths in your code.
thank you very much.
And finally, this is the best IDE ever. I’ve been waiting so long!
Just a bug I found:
Code:
Output:
It seems to me that anything before the scanf statement is printed after it scans an input, rather than before, as it should be according to the code.
Another example of the scanf bug:
Code:
Output using Clion:
Output using VS:
I am loving the IDE so far; I just thought I would point out bugs.
Yes, unfortunately there is a problem with out/in-out buffers flushing in the IDE:
Ah, OK. Thank you for your response, Anastasia. I was specifically waiting for this IDE to be released so that I could immerse myself fully in C. I realize it’s an EAP, but it does a wonderful job anyway. I will be sure to point out any more bugs that come up! I was using the MinGW environment with the integrated CMake and GDB packages, in case that information is needed.
How can I add a file from another directory to my current project directory?
What do you mean by ‘add’? Include?
No, I mean adding a file to my project directory inside the IDE.
Just add it to CMakeLists.txt as a source file. You’ll be warned that it’s located outside the root directory: “Some source files are located outside of CMakeLists.txt directory.” You can change the project root, or ignore this in future.
Will it be free, or have a community version?
It will be paid with the 1.0 release. Maybe later we’ll consider a community version, but there are no such plans now.
Can you please explain why you refuse to make a community edition? It is very useful for opensource projects.
Open source projects can request a special free license from us. So this is not the case.
Why do you need a separate community edition for the other IDEs, then?
Each product at JetBrains is run independently, and not every decision made for one product affects the others. Our IDEs, though many of them are built on the same platform, are still vastly different in terms of the tools included, the set of languages supported, and the built-in options.
In addition to supporting Open Source projects with a full edition of CLion at no cost, which in essence provides more functionality to users than a hypothetical Community Edition, we also provide students with free licenses and give startups a valuable discount.
It’s great! Good job. I wonder whether CLion has an option to set custom CMake arguments. For example, when I have a custom variable called CUSTOM_TEST, I would do the following on the command line:
cmake -DCUSTOM_TEST=TRUE ..
How can I do this (or something similar) with CLion?
Unfortunately, you can’t currently pass this to cmake itself in CLion:. But you can do it through the CMakeCache. You can’t add a custom variable from the IDE’s tool window (), but here is a workaround to achieve this:.
We do hope to improve this experience in future.
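As a sketch of the cache-based workaround (CUSTOM_TEST is the variable name from the question; the rest is illustrative): declaring the variable with option() creates a cache entry that can then be edited, while a command-line build can still override it with -DCUSTOM_TEST=TRUE.

```cmake
# Creates a cached boolean with a default of OFF; the cached value
# (or -DCUSTOM_TEST=TRUE on the command line) overrides the default.
option(CUSTOM_TEST "Enable the custom test configuration" OFF)

if(CUSTOM_TEST)
    add_definitions(-DCUSTOM_TEST)
endif()
```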
Good to know you guys are working on it. There’s another feature that I would like to propose, I’ve only seen it in KDevelop.
template <typename T>
class A {
public:
    T i;
    A(T x) : i(x) {}
};
It’s nice to know what template arguments are taken by class A; in KDevelop it shows A’s template parameters when I hit Ctrl+Space at A<, which imo gives much more information.
Thanks, this sounds useful. Parameter info action should show this:
I also do this, so support would be good. However, I could work around this using environment vars.
I’m really looking forward to version 1.0 and the integration of Google Test.
CLion is just great.
We do hope to have it there but can’t promise. It will definitely come in some 1.x version, though.
Thank you
A CLion IDE introduction for Turkish users. I love this IDE, but my computer is too old :/
Also visit my blog for a CLion introduction post.
Is it possible to import an existing project with a GNU make model into CLion? Or are only CMake projects supported?
For now – CMake only.
We plan to support other build systems in the future, definitely after the 1.0 release –
What about slightly less-known build systems? As an example, I’m working on the Meson build system, and one of its design goals is to be as easily embeddable into IDEs as possible. Everything is introspectable with straightforward JSON, as described on this wiki page. Enabling basic Meson support in an IDE should not take more than a few hours, and it can be used to get deep integration, such as right-clicking on a failed unit test to launch it in the debugger.
You can add them to the tracker (). We can consider them later.
Having trouble figuring out how to get one project to reference another.
I have one module which compiles to a shared library, has a test suite, and needs to publish certain headers for consumption (but not others). I have managed to figure out enough CMake (which I’ve never used before) to compile the shared library, though it’s hard for me to tell where it’s finding Boost (magic!).
I have another module which needs to consume the first module’s shared library and headers, compile its own code and link everything into an executable.
Are there any established patterns for such a thing? Ideally, I don’t want to turn them into a single “project” as the shared library changes infrequently, and I want for cleanliness to consider it a black box as much as possible.
Just point CMake at the libraries to link your target with. Something like:
target_link_libraries(exec_name lib_name)
And set up the include_directories if you are going to use any headers from there.
You’ll find a lot of useful CMake variables here:.
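Putting those two pieces together, a sketch of the two-module setup from the question might look like this (all target and path names are hypothetical):

```cmake
# Module 1: the shared library, with its published headers.
include_directories(mylib/include)
add_library(mylib SHARED mylib/src/engine.cpp)

# Module 2: an executable consuming the library's headers and binary.
add_executable(my_app app/main.cpp)
target_link_libraries(my_app mylib)
```

Keeping only the public headers under mylib/include preserves the “black box” separation described above.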
I can’t seem to log in to the issue tracker, so I’m going to post my bug here. The syntax highlighting doesn’t take “-include” flags into account. It compiles and runs just fine, though. My sample case uses the std namespace from the included file, but the syntax highlighter can’t figure out anything in main.cpp.
Simple sample project here:
Thanks!
Thanks for the report. Our tracker was under maintenance at the end of last week, but now it’s OK. However, I’ve filed the issue with your sample:. Feel free to comment and follow it. Actually, the problem is that CLion doesn’t treat your header as a file from the project.
You’re absolutely right! Hah, I hadn’t realized that in my original project I hadn’t included the file. It doesn’t actually end in “.h”, so my “generic” project CMake file did not pick it up.
Wow, thank you so much!
Oops, I jumped the gun on that reply. Apparently CLion hadn’t finished highlighting the file.
Adding the header to the project does not fix the highlighting errors in the main.cpp file. I’m updating my github issue to include the header.
But still, thanks for the reply!
Why is it so ugly on Linux distributions? O_o
I tried with JRE 6, 7, 8, 9 and OpenJRE too, but the look and feel is ugly as hell.
Why?
There are some font-rendering problems in the Java distribution. Maybe that’s the cause. We are trying to improve it, but haven’t finished yet.
Looks like the issue tracker doesn’t allow new registrations, so I’m posting it here.
I would like to see support for CppUTest (). It is a popular C/C++ unit-test framework, IMHO even better than Google Test. It is also well covered in the book “Test Driven Development for Embedded C” ().
I also happen to be the author for the CppUTest test runner for Eclipse. () Please feel free to use the code if you find it useful.
Well… or if you really don’t have any plans to support CppUTest, can I find out how to make a plugin for CLion? Thanks.
There are some plans for supporting various unit-testing frameworks in our tracker: (there was some maintenance work there, but now it’s working; feel free to use it). You can add your request there as well.
Or, of course, write a plugin yourself:. Check the link for additional information.
“New user registration is disabled. Please contact your system administrator”
I wanted to vote on the Qt project support issue to express my wish for QBS support. Unfortunately, I seem unable to create a new account, and guests are not allowed to vote. Is this intended?
Regards and thanks for the IDE. It looks really promising.
Regards,
Erik
The tracker was under maintenance but is working now; feel free to use it.
Guys, you are AWESOME!
Being IDE-addicted, and having been charmed by IDEA for Java and Groovy, I feel the same way using CLion. It means C++ will be back in my life, and will not be as painful as before!
A million thanks, keep going!
thanks!
IntelliJ IDEA is the best Java IDE I’ve ever used. I’m looking forward to version 1.0 too; I hope it comes soon.
Could we have multiple projects (or modules) in the same CLion window?
Not now.
Is there any plan to support multiple projects/modules? I think it is a very common use case in software development.
What case do you mean exactly? Could you please show an example?
Here is one of the cases. We have a system in which some different types of server processes communicate with each other. Each server code is stored in a separate folder. They will have their own build configuration (Cmake or Makefile). And they can share some common libraries.
so, we will have:
servers/
server1/
server2/
lib/
commonLib1/
commonLib2/
So, can Clion support this kind of system model?
You can make a top-level directory with a main CMakeLists.txt file listing the add_subdirectory entries (optionally, the root folder can differ from the folder with the main CMakeLists.txt file). Library paths should be pointed to in the appropriate server’s CMakeLists.txt files to link correctly.
The Makefile build system is not supported yet, only CMake.
+servers/
|—server1/
|—server2/
+lib/
|—commonLib1/
|—commonLib2/
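For a layout like the one above, the top-level CMakeLists.txt could look roughly as follows (a sketch; each subdirectory needs its own CMakeLists.txt defining its target, and each server links the common libraries via target_link_libraries):

```cmake
cmake_minimum_required(VERSION 2.8.4)
project(servers)

# Libraries first, so the server targets can link against them.
add_subdirectory(lib/commonLib1)
add_subdirectory(lib/commonLib2)

add_subdirectory(servers/server1)
add_subdirectory(servers/server2)
```

Inside servers/server1/CMakeLists.txt, for example, target_link_libraries(server1 commonLib1) wires up the shared code.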
I’d really love to be able to import vcproj files into CLion. The IDE is great, but we can’t yet import our huge Visual Studio projects into CLion, which means we can’t switch to your product.
With the second EAP build you can import a project with existing sources, and the IDE will help you get started with CMake. Check it here:
Hello,
First, I want to thank you for the amazing job you are doing (with both CLion and your other IDEs). I’m really looking forward to the final CLion version.
I have a question. I heard (read somewhere) that you are using libclang behind the scenes for code parsing and analysis. If so, may I ask which version you are using? And is there a way to pass options to parseTranslationUnit (if you actually use libclang)? Actually, what I want to know is whether there is any possibility of C++14 support (which seems quite good in recent versions of libclang, even the not-quite-latest 3.5).
The Clang annotator is not bundled yet –. We have this in our plans.
And for the C++14 follow/vote this issue –. It’s also not ready yet.
Thank you,
Is there an estimated timeframe for when we can expect the Clang annotator to be built in? (2 months, a year, or so?)
No estimations for now, unfortunately.
CLion is a great C/C++ IDE; I really enjoy using it. But would you please remove this annoying message telling me that the currently installed version of CLion has expired and that I should install a new one? It’s fine for the software to inform me that there is an update, but why does it prevent me from using the old version? I installed CLion from the Arch Linux User Repositories because I don’t want to mess around with a manual install. The repositories are updated frequently, but it can take a few days or weeks until new releases are accepted or links are updated. So please don’t prevent your users from using a slightly older version of your great IDE.
Maybe this issue is related?
No, this is not about this.
This is a free Early Access Program build with a limited validity period (to try and evaluate the IDE). After release there will be a normal version.
Ahhh, OK. Thanks. I got confused by the different entries in the Arch Linux User Repositories, as there are two packages, clion and clion-eap. Furthermore, clion is listed on the student program page (), so I thought it was an official release.
AWESOME!!!
By far the best C++ IDE 😉
Regards
Is a plugin for compatibility with C++Builder projects possible?
As the project grows in files, RAD Studio becomes slow at method access/code completion.
CLion has several options that make programming easier compared with the RAD IDE, and it would be great if you supported it.
You can just import your project to CLion and it will create a basic CMake file for you.
I am looking forward to creating Chrome native apps using NaCl. C++ and HTML (and web technologies generally) are already supported; if support were extended to specifically target NaCl, it would be awesome.
If I got it right from here (), this is only a question of cross-compilation (which is planned; feel free to vote for it).
I am not able to print the statement before the scanf in the CLion IDE (Linux-based).
There are some issues with the input/output streams that will probably be fixed in the next EAP. Sorry for inconvenience.
Very nice.
2 questions:
– How much will it cost?
– Will there be an option to run a build from a shell, something like clion -build “Debug”, or whatever?
1. When 1.0 is released there will be a set of license options available. The set and prices will be the same as for AppCode:, which includes paid, free and discounted options.
2. Why do you need this? Of course, you can always call the build command manually, but what’s the use case? You can pass all the necessary options in the CMake files, and you’ll soon be able to pass environment variables and parameters to the cmake command that CLion runs.
I have been using CLion for school, and I love it! (They try to make us use Visual Studio, but I’ll take CLion EAP over VS2013 any day!)
Tried CLion on CentOS 6.5. The IDE works correctly, but it can’t compile projects.
Please, review screenshot
Regards,
Eugene.
C++11 has been supported since gcc 4.7, and gcc 4.4 has C++0x support. So simply remove the C++11 option in CMakeLists.txt, or change it to -std=c++0x.
Thank you. I’ll try updating gcc to a recent version manually.
And why don’t Ctrl-C and Ctrl-V work?
It should be working. What keymap are you using in CLion? Check or switch in Settings | Keymap.
I changed the keymap in settings from Vim to IDE, and it’s working now… A bit strange, after Visual Studio, to have to tune something myself )))
It’s a Linux!
You’ve probably installed the Vim plugin, and thus a non-default keymap was selected.
This has a lot of potential. I’d be interested in an open source community-edition like IntelliJ has, I’d rather not use proprietary software.
CLion won’t have a community edition, at least not in the near future. Maybe later we’ll come back to this.
Hello.
I would use CLion as well. I would buy it to work on my C++/Lua project, but why can’t I install the Lua plugin? Everywhere I see information that Lua is supported, but there is no Lua plugin in the plugin list, and I get “Plugin ‘Lua’ is incompatible with this installation” when I try to install it from disk.
I encountered this issue with IDEA too, but there I was able to use a previous version of the IDE that works with the Lua plugin. I can’t find where to get a previous version of CLion to work on my project. Or can I solve this some other way?
Unfortunately, the Lua plugin is not developed by JetBrains; it’s a third-party plugin. So we can only suggest writing a message to the plugin developers. We’ve already contacted them about tickets like this:. Hopefully they will fix it soon.
Thank you for fast answer.
Will see.
Looking forward to a GUI builder within CLion. An amazing IDE for C++. Thanks a lot for developing this. I will purchase a license once the GUI builder is available.
What exact GUI builder do you want?
Hi Ana,
Thanks for the quick reply.
A UI forms or dialog-based one, like the one IntelliJ IDEA has.
Is it possible to use Cinder C++ with CLion? I am not sure whether CMake can generate a correct file for such a framework.
Best,
Ron
Currently this is out of product scope. But probably we can come back to it later. Now we have quite a lot of things with upper priority.
I have a small game project that uses SFML, box2d and other small libraries.
I already managed to make my project work on both MSVC 2012 and Xcode 5. All I have to do is commit code, add the source files I would have added in the other project, and I’m good to go.
It’s pretty smooth except from weird stdafx differences, and the fact MSVC2012 doesn’t support variadic templates yet.
I’m really wondering if I would really benefit from CLion. I mean, obviously I would have to recompile SFML, as SFML does not provide binaries for CLion? That’s a first problem I guess.
I also remember giving up on compiling Ogre3D on Xcode; there were many dependencies and versions of the engine (repo or stable). It was very annoying, and I’m not into tweaking CMakeLists.txt files. I guess JetBrains would not solve this.
I have not given this IDE a try, but I’m skeptical. C++ is designed as a cross-platform language, but the fact that build systems differ is a huge pain, and an IDE won’t solve this unless library maintainers provide solid CMake scripts.
Thank you for the comment and your case description.
CLion currently supports only CMake. But since you are using VS projects, maybe it’s worth trying ReSharper C++, an extension to VS.
However, if you need a cross-platform setup, CLion is more for you. You can find a quick CMake tutorial in our webhelp: and try to import your project into CMake via the import project functionality, then tune the dependencies manually in the CMake files.
Hi,
is there something planned to support ClearCase in CLion? I’m using the ClearCase plugin for IntelliJ, so it would be very helpful to have something similar in CLion.
Thanks in advance,
Michael
We are currently collecting votes:. If there is demand, we’ll bundle the plugin. Feel free to leave comments and follow the ticket as well.
https://blog.jetbrains.com/clion/2014/09/clion-brand-new-ide-for-c-and-c-developers/
While the API comprising the members of the openvrml namespace provides everything necessary to implement nodes, node implementation can be very verbose. The members of the openvrml::node_impl_util namespace can make node implementation more concise by abstracting and providing code that many node implementations are likely to have in common.
In particular, node_type_impl centralizes the logic for generalized field access. By using an instance of this class template for your openvrml::node_type implementation, you can avoid a lot of tedious and repetitive code to implement openvrml::node::do_field, openvrml::node::do_event_listener, and openvrml::node::do_event_emitter.
http://openvrml.org/doc/namespaceopenvrml_1_1node__impl__util.html
Date: March 1998. Last edited: $Date: 2009/08/27 21:38:07 $
Status: . Editing status: incomplete first draft. This explains the rationale for XML namespaces and RDF schemas, and derives requirement on them from a discussion of the process by which we arrive at standards.
Up to Design Issues
(These ideas were mentioned in a keynote on "Evolvability" at WWW7 and this text follows closely enough for you to give yourself the talk below using those slides. More or less. If and when we get a video from WWW7 of the talk, maybe we'll be able to serve that up in parallel.)
The World Wide Web Consortium was founded in 1994 on the mandate to lead the Evolution of the Web while maintaining its Interoperability as a universal space. "Interoperability" and "Evolvability" were two goals for all W3C technology, and whilst there was a good understanding of what the first meant, it was difficult to define the second in terms of technology.
Since then W3C has had first-hand experience of the tension between these two goals, and has seen the process by which specifications have been advanced, fragmented and later reconverged. This has led to a desire for a technological solution which will allow specifications to evolve with the speed and freedom of many parallel developments, but also such that any message, whether "standard" or not, at least has a well defined meaning.
There have been technologies dubbed "futureproof" for years and years, whether they are languages or backplane busses. I expect you the reader to share my cynicism when encountering any such claim. We must work through exactly what we mean: what we expect to be able to do which we could not do before, and how that will make evolution more possible and less painful.
A rule, explicit or implicit, in all the email-like Internet protocols has always been that if you found a mail header (or something) which you did not understand, you should ignore it. This obviously allows people to add all sorts of records to things in a very free way, and so we can call it the rule of free extension. It has the advantage of rapid prototyping and incremental deployment, and the disadvantage of ambiguity, confusion, and an inability to add a mandatory feature to an existing protocol. I adopted the rule for HTML when initially designing it - and used it myself all the time, adding elements one by one. This is one way in which HTML was unlike a conventional SGML application, but it allowed the dramatic development of HTML.
The development of HTML between 1994 and 1998 took place in a cycle, fuelled by the tension between the competitive urge of companies to outdo each other and the common need for standards for moving forward. The cycle starts simply because the HTML standard is open and usable by anyone: this means that any engineer, in any company or waiting for a bus, can think of new ways to extend HTML, and try them out.
The next phase is that some of these many ideas are tried out in prototypes or products, using the free extension rule that any unrecognized extensions will be ignored by everything which does not understand them. The result is a dramatic growth in features. Some of these become product differentiators, during which time their originators are loth to discuss the technology with the competition. Some features die in the market and disappear from the products. The successful features have a fairly short lifetime as product differentiators, as they are soon emulated in some equivalent (though different) feature in competing products.
After this phase of the cycle, there are three or four ways of doing the same thing, and engineers in each company are forced to spend their time writing three or four different versions of the same thing, and coping with the software architectural problems which arise from the mix of different models. This wastes program size, and confuses users. In the case, for example, of the TABLE tag, a browser meeting one in a document had no idea which table extension it was, so the situation could become ambiguous. If the interpretation of the table was important for the safe interpretation of the document, the server would never know whether it had been done, as an unaware client would blithely ignore it in any case. This internal software mess resulting from having to implement multiple models also threatens future development. It turns the stable consistent base for future development into something fragmented and inconsistent: it is difficult to design new features in such an environment.
Now the marketing pressure which prevented discussions is off, and there is a strong call for the engineers to get around the W3C table and iron out a common way of doing things. As this happens, a system is designed which puts together the best aspects of each system, plus a few weeks' experience, so everyone is in the end happier with the result. The companies all go away making public promises to implement it, even though the engineering staff will be under pressure to add the next feature and start the next cycle. The result is published as a common specification open to anyone to implement. And so the cycle starts again.
This is not the way all W3C activities have worked, but it was the particular case with HTML, and it illustrates some of the advantages and disadvantages of the free extension rule.
The HTML cycle as a method of arriving at consensus on a document has its drawbacks. By 1998, there were reasons to change the cycle. The work in the W3C, which had started off in 1994 with several years' backlog of work, had more or less caught up, and was beginning to lead, rather than trail, developments. The work was seen less as fire fighting and more as consolidation.
In the future it was clear that we needed somehow to set up a modular system which would allow one to add new standard modules to HTML. At the same time, it was clear that with XML available as a manageable version of SGML as a base for anyone to define their own tag sets, there was likely to be a deluge of application-specific and industry-specific XML-based languages. The idea of all this happening under the free extension rule was frightening. Most applications would simply add new tags to HTML. If we continued the process of retrospectively roping extensions into a new bigger standard, the document would grow without limit and become totally unmanageable. The rule of free extension was no longer appropriate.
Now let us compare this situation with the way development occurs in the world of distributed computing, specifically remote procedure call (RPC) and distributed object-oriented systems. In these systems, the distributed system (equivalent to the server plus the client for the web) is viewed as a single software system which happens to be spread over several physical machines. [nelson - courier, etc]
The network protocols are defined automatically as a function of the software interfaces which happen to end up being between modules on different machines. Each interface, local or remote, has a well documented structure, and the list of functions (procedures, methods or whatever) and parameters is defined in machine-processable form. As the system is built, the compiler checks that the interface required by one module is exactly provided by another module. The interface, in each version of its development, typically has an identifying (typically very long) unique number.
The interface defines the parameters of a remote call, and therefore defines exactly what can occur in a message from one module to another. There is no free extension. If the interface is changed, and a new module made, any module on the other side of the interface will have to be changed too, or you can't build the system.
The great advantage of this is that when the system has been built, you expect it to work. There is no wondering whether a table is being displayed - if you have called the table module, you know exactly what the module is supposed to do, and there is no way the system could be without that module. Given the chaos of the HTML development world, you can imagine that many people were hankering after the well defined interfaces of the distributed computing technology.
With well-defined interfaces, either everything works, or nothing. This was in fact at least formally the case with SGML documents. Each had a document type definition (DTD) referred to at the top, which defined in principle exactly what could and could not be in the document. PICS labels were similar in that they are self-describing: they actually have a URI at the top which points to a machine-readable description of what can and can't be in that PICS label. When you see one of these documents, as when you get an RPC message with an interface number on it, you can check whether you understand the interface or not. Another interesting thing you can do, if you don't have a way of processing it, is to look it up in some index and dynamically download the code to process it.
The existence of the Web makes all this much smoother: instead of inventing arbitrary names for interfaces, you can use a real URI which can be dereferenced to return the master definition of the interface in real time. The Web can become a decentralised registry of interfaces (languages) and code modules.
The need was clearly for the best of both worlds. One must be able to freely extend a language, but do so with an extension language which is itself well defined. If, for example, documents which were HTML 2.0 plus Netscape's version of tables version 2.01 were identified as such, much of the problem of ambiguity would have been resolved, but the rest of the world left free to make their own table extensions. This was the goal of the namespaces work in XML.
To be able to use the namespaces work in the extension of HTML, HTML has to transition from being an SGML application (with certain constraints) to being an XML-based language. This will not only give it a certain ease of parsing, but allow it to build on the modularity introduced by namespaces.
In fact, already in April of 1998 there was a W3C Recommendation for "MathML", defined as an XML language and obviously aimed at being usable in the context of an HTML document, but for which there was no defined way to write a combined HTML+MathML document. MathML was already waiting for XML namespaces.
XML namespaces will allow an author (or authoring tool, hopefully) to declare exactly what set of tags he or she is using in a document. Later, schemas should allow a browser to decide what to do as a fall back when finding vocabulary which it does not understand.
It is expected that new extensions to HTML be introduced as namespaces, possibly languages in their own right. The intent is that the new languages, where appropriate, will be able to use the existing work on style sheets, such as CSS, and the existing DOM work which defines a programming interface.
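The declaration described above can be demonstrated with any namespace-aware XML parser. As a minimal sketch (using the namespace URIs later adopted for XHTML and MathML; the element content is illustrative), each tag carries the URI of the language it belongs to, so software can tell exactly which vocabulary each element comes from:

```python
import xml.etree.ElementTree as ET

# A document mixing two vocabularies. Each prefix is bound to a URI that
# names the language, so a processor knows which schema governs each tag.
doc = """
<h:p xmlns:h="http://www.w3.org/1999/xhtml"
     xmlns:m="http://www.w3.org/1998/Math/MathML">
  The area is
  <m:math><m:mi>x</m:mi><m:mo>+</m:mo><m:mn>1</m:mn></m:math>
</h:p>
"""

root = ET.fromstring(doc)
# ElementTree expands each tag into {namespace-URI}localname form:
tags = [el.tag for el in root.iter()]
print(tags[0])  # {http://www.w3.org/1999/xhtml}p
```

A browser that understands only the XHTML namespace could render the paragraph and apply its fall-back rule to the elements in the MathML namespace.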
Language mixing is an important facility, for HTML and for the evolution of all other Web and application technology. It must allow, in a mixed-language document, for both languages to be well defined. A mixed-language document is quite analogous to a program which makes calls to two runtime libraries, so it is not rocket science. It is not like an RPC message, which in most systems is very strongly typed from a single rigid definition. (An RPC message can be represented as a structured document but not, in general, vice-versa.)
Language mixing is a reality. Real HTML pages are often HTML with Javascript, or HTML plus CSS, or both. They just aren't declared as such. In real life, many documents are made from multiple vocabularies, only some of which one understands. I don't understand half the information in the tax form - but I know enough to know what applies to me. The invoice is a good example. Many different coloured copies of the same document used to serve as a packing list, restocking sheet, invoice, and delivery note. Different parts of a company would understand different bits: the financial division would check amounts and signatures, the store would understand the part numbers, and sales and marketing would define the relationship between the part numbers and prices.
No longer can the Web tolerate the laxness with which HTML and HTTP have been extended. However, it cannot constrain itself to a system as rigid as a classical distributed object-oriented system.
The note on namespaces defines some requirements of a language framework which allows new schemata to be developed quite independently, and mixed within one document. This note elaborates on the sorts of things which have to be possible when the evolution occurs.
You may notice that nowhere in the architecture do XML or RDF specify what language the schema should be written in. This is because much of the future power of the system will lie in the power of the schema and related documents, so it is important to leave that open as a path for the future. In the short term, you can think of a schema being written in HTML and English. Indeed, this is enough to tie the significance of documents written in the schema to the law of the land and make the document an effective part of serious commercial or other social interaction. You can imagine a schema being in a sort of SGML DTD language which tells a computer program what constraints there are on the structure of documents, but nothing about their meaning. This allows a certain crude validity check to be made on a document but little else.
Now let us imagine further power which we could put into a schema language.
A crucial first milestone for the system is partial understanding. Let's use the scenario of an invoice, like the scenario in the "Extensible languages" note. An invoice refers to two schemata: one is a well-known invoice schema and the other a proprietary part number schema. The requirement is that an invoice processing program can process the invoice without needing to understand the part description.
Somehow the program must find out that the invoice is, from its point of view, just as valid as an invoice with the details of the part description stripped out.
One possibility is to mark the part description as "optional" on the text. We could imagine a well-known way of doing this. It could be done in the document itself [as usual, using an arbitrary syntax:]
<item> <partnumber>8137498237</> <optional> <xml:using>
<a:partdesc> ... </a:partdesc> </xml:using>
</optional> </item>
There are problems with this. One is that we are relying on the invoice schema to define what an invoice is and isn't and what it means. It would be nice if the designer of the invoice could say whether the item should contain a part description or not, or whether it is possible to add things into the item description or not. But in general, if there is something to be said, we like to allow it to be said anywhere (like metadata). And for the optionalness to be expressed elsewhere would save the writer of every invoice the bother of having to express it explicitly.
The other more fundamental problem is that the notion of "optional" is subjective. We can be more precise about "partial understanding" by saying that the invoice processing system needs to convert the document which contains things it doesn't understand into a document which it does completely understand: a valid invoice. However, another agent may wish to convert the same detailed invoice into, say, a delivery note: in this case, quite different information would be "optional".
To be more specific, then, we need to be able to describe a transformation from one document to another which preserves "validity" in some sense. A simple form of transformation is the removal of sections, but obviously there can be all kinds of levels of transformation language, ranging from the crudest to the Turing complete. Whatever the language, it is a statement that given a document x, some f(x) can be deduced.
In practice, this suggests that one should leave the actual choice of the transformation language as a flexibility point. However, as with most choices of computer language, the general "principle of least power" applies:
(@@justify in greater depth in footnote)
While being able to express a very complex function may feel good, the result will in general be less useful. As Lao-Tse puts it, "Usefulness from what is not there". From the point of view of translation algorithms, one usefulness is for them to be reversible. In the case in which you are trying to prove something (such as access to a web site or financial credibility), you need to be able to derive a document of a given form. The rules you use are the pieces of the web of trust, and you are looking for a path through the web of trust. Clearly, one approach is to enumerate all the things which can be deduced from a given document, but it is faster to have an idea of which algorithms to apply. Simple ones have input and output patterns. A deletion rule is a very simple case:
s/\(.*\)foo\(.*\)/\1\2/
This is stream editor language for "Remove 'foo' from any string, leaving what was on either side". If this rule is allowed, it means that "foo" is optional. @@@ to be continued
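The same deletion rule can be sketched in Python's re module, which uses the same back-reference notation as sed but without the escaped parentheses (the function name is mine, for illustration):

```python
import re

def drop_foo(s):
    """Remove "foo" from a string, leaving what was on either side.

    The rule only rewrites strings in which "foo" actually occurs,
    which is precisely what makes the "foo" element optional: documents
    with and without it map to the same reduced form.
    """
    return re.sub(r"(.*)foo(.*)", r"\1\2", s)

print(drop_foo("barfoobaz"))  # barbaz
print(drop_foo("barbaz"))     # unchanged: barbaz
```

An agent holding this rule can treat any document containing "foo" as equivalent, for its purposes, to the reduced document it already understands.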
Optional features and Partial Understanding
The test of independent invention is a thought experiment which tests one aspect of the quality of a design. When you design something, you make a number of important architectural decisions, such as how many wheels a car has, and that an arch will be used between the pillars of the vault. You make other arbitrary decisions, such as the color of the car, the side of the road everyone will drive on, or whether to open the egg at the big end or the little end.
Suppose it just happens that another group is designing the same sort of thing, tackling the same problem, somewhere else. They are quite unknown to you and you to them, but just suppose that, being just as smart as you, they make all the same important architectural decisions. This you can expect if you believe that these decisions make logical sense. Imagine that they have the same philosophy: it is largely the philosophy which we are testing. However, imagine that they make all the arbitrary decisions differently. They complement bit 7. They drive on the other side of the road. They use red buoys on the starboard side, and use 575 lines per screen on their televisions.
Now imagine that the two systems both work (locally), and being successful, grow and grow. After a while, they meet. Suddenly you discover each other. Suddenly, people want to work across both systems. They want to connect two road systems, two telephone systems, two networks, two webs. What happens?
I tried originally to make WWW?
(see also WWW and Unitarian Universalism). We could add MMML as a MIME type. And so on. However, if we had required all Web servers to synchronise through one and only one master lock server in Waltdorf, we would have found the Mesh required synchronisation through a master server in Melbourne. It would have failed.
No system completely passes the ToII - it is always some trouble to convert.
As the Web becomes the basis for many, many applications to be built on top of it, the phenomenon of independent invention will recur again and again. We have to build technology so as to make it easy for systems to pass the test, and so survive real life in an evolving world.
If systems cannot pass the ToII, then we can only achieve worldwide interoperability when one original design has beaten the others. This can happen if we all sit down together as a worldwide committee and do a "top down" design of the whole thing before we start. This works for a new idea but not for the automation of something which, like pharmacy or trade, has been going on for centuries and is just being represented in the Semantic Web. For example, the library community has had endless trouble trying to agree on a single library card format (MARC record) worldwide.
Another way it can happen is if one system is dropped completely, leading to a complete loss of the effort put into it. When in the late 1980s Europe eventually abandoned its suite of ISO protocols for networking because they just could not interwork with the Internet, a huge amount of work was lost. Many problems, solved in Europe but not in the US (including network addresses of more than 32 bits), had to be solved again on the Internet at great cost. Sweden actually changed from driving on the left to driving on the right. All over the world, people have changed word processor formats again and again, but only at the cost of losing access to huge amounts of legacy information. The test of independent invention is not just a thought experiment; it is happening all the time.
So now let us get more specific about what we really need in the underlying technology of the Semantic Web to allow systems in the future to pass the test of independent invention.
Our first assumption is that we will be smarter in the future. This means that we will produce better systems. We will want to move on from version 1 to version 2, from version n to version n+1.
What happens now? A group of people use version 4 of a word processor and share some documents. One touches a document using a new version 5 of the same program. One of the other people tries to load it using version 4 of the software. The version 4 program reads the file, and finds it is a version 5 file. It declares that there is no way it can read the file, as it was produced in the future, and there is no way it can predict the future to know how to read a version 5 file. A flag day occurs: everyone in the group has to upgrade immediately - and often they never even planned to.
So the first requirement is for a version 4 program to be able to read a version 5 file. Of course there will be some features in version 5 that the version 4 program will not be able to understand. But most of the time, we actually find that what we want to achieve can be done by partial understanding - understanding those parts of the document which correspond to functions which exist in version 4. But even though we know partial understanding would be acceptable, with most systems we don't know how to do even that.
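A hedged sketch of what such partial understanding might look like: a "version 4" reader keeps the parts of a document it knows about and explicitly ignores the rest, instead of rejecting the whole file. The field names here are invented purely for illustration:

```python
# Fields a hypothetical version-4 reader was programmed to understand:
KNOWN_V4 = {"title", "body", "author"}

def read_partial(document):
    """Keep the parts we understand; ignore, rather than reject, the rest."""
    understood = {k: v for k, v in document.items() if k in KNOWN_V4}
    ignored = sorted(set(document) - KNOWN_V4)
    return understood, ignored

# A "version 5" document carrying a feature version 4 has never seen:
v5_doc = {"title": "Report", "body": "...", "change_tracking": ["edit1"]}
understood, ignored = read_partial(v5_doc)
print(ignored)  # ['change_tracking']
```

Whether ignoring a given field is safe is exactly the question a schema would have to answer, by marking extensions as ignorable or mandatory.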
The philosophical assumption that we may not be smarter than everyone else (a huge step for some!) leads us to realise that others will have great ideas too, and will independently invent the same things. It forces us to consider the test of independent invention.
The requirement for the system to pass the ToII is for one program which we write to be able to read somehow (partially if not totally) data written by the program written by the other folks. This simple operation is the key to decentralised evolution of our technology, and to the whole future of the Web.
So we have deduced two requirements for the system from our simple philosophical assumptions:
Where are we with the requirements for evolvability so far? We are looking for a technology which has free but well defined extension. We want to do it by allowing documents to use mixed vocabularies. We have already found out (from PICS work, for example) that we need to be able to know whether extension vocabulary is mandatory or can be ignored. We want to use the Web for any registry, rather than any central point. The technology has to allow an application to convert the output of a future version of itself, or the output of an equivalent program written independently, into something it can process, just by looking up schema information.
Now let us look at the world of data on the Web, the Semantic Web, which we expect to become a new force in the next few years. By "data" as opposed to "documents", I am talking about information on the Web in a form specifically to aid automated processing rather than human browsing. "Data" is characterised by information with a well defined structure, where the atomic parts have well defined types, such as numbers and choices from finite sets. "Data", as in a relational database, normally has well defined meaning which has rarely been written down. When someone creates a new database, they have to give the data type of each column, but don't have to explain what the field name actually means in any way. So there is a well defined semantics, but not one which can be accessed. In fact, the only time you tell the machine anything about the semantics is when you define which two columns of different tables are equivalent in some way, so that they can be used, for example, as the basis for joining the two databases. (That the meaning of data is only defined relative to the meaning of other data is of course quite normal - we don't expect machines to have any built-in understanding of what "zip code" might mean, apart from where you can read it and write it and what you can compare it with.) Notice that what happens with real databases is that they are defined by users one day, and they evolve. They are rarely the result of a committee sitting down and deciding on a set of concepts to use across a company or an industry, and then designing the data schema. The schema is created on the fly by the user.
We can distinguish two ways in which the word "schema" has been used:
I will use it for the first only. In fact, a syntactic schema defines a class of document, and is often accompanied by human documentation which provides some rough semantics.
There is a huge amount ("legacy" would unfairly suggest obsolescence) of data in relational databases. A certain amount of it is being exported onto the web as virtual hypertext. There are many applications which allow one to make hypertext views of different aspects of a database, so that each server request is met by performing a database query, and then formatting the result as a report in HTML, with appropriate style and decoration.
Information about information is interesting in two ways. Firstly, it is interesting because the Web society desperately needs it to be able to manage social aspects of information such as endorsement (PICS labels, etc), ownership and access rights to information, privacy policies (P3P, etc), structuring and cataloguing information, and a hundred other uses which I will not try to enumerate. This first aspect is discussed elsewhere. (See Metadata architecture for a general treatment of metadata and labels, and the Technology and Society domain for an overview of many of the social drivers and related projects and technology.)
The second interest in metadata is that it is data. If we are looking for a language for putting data onto the Web in a machine-understandable way, then metadata happens to be a first application area. Also, because metadata is fundamental to most data on the web, it is the focus of W3C effort, while many other forms of data are regarded as applications rather than core Web architecture, and so are not.
Suppose for example that you run a server which provides online stock prices. Your application which today provides fancy web pages with a company's data in text and graphs (as GIFs) could tomorrow produce the same page as XML data, in tabular form, for machine access. The same page could even be produced at the same URL in two formats using content negotiation, or you could have a typed link between the machine-understandable and person-understandable versions.
The XML version contains at the top (or somewhere) a pointer to a schema document. This pointer makes the document "self-describing". It is this pointer which is the key to any machine "understanding" of the page. By making the schema a first class object, in other words by giving its URL and nothing else, we are leaving the door open to many possibilities. Now it is time to look at the various sorts of schema document which it could point to.
Computer languages can be classified into various types, with various capabilities, and the sort we choose for the schema document, and the information we allow in the schema, fundamentally affects not just what the Semantic Web can be but, more importantly, how it can grow.
The schema document can, broadly, be one of the following:
We'll go over the pros and cons of each, because none of these should be overlooked, but some are often way better than others.
This may sound like a silly trivial example, but like many trivial examples, it is not silly. If you just name your schema somewhere in URI space, then you have identified it. This doesn't offer a lot of help to anyone to find any documentation online, but one fundamental function is possible. Anyone can check compatibility: they can compare the schema against a list of schemata they do understand, and return yes or no.
In fact, they can also use an index to look up information about the schema, including information about suitable software to download to add understanding of the document. In fact this level is the level which many RPC systems use: the interface is given a unique but otherwise random number which cannot be dereferenced directly.
So this is the level of machine-understanding typical of distributed computing systems, and it should not be underestimated. There are lots of parts of URI space you can use for this: you might own some http: space (but never actually serve the document at that point), but if you don't, you can always generate a URI in a mid: or cid: space, or if desperate in one of the hash spaces.
The next step up from just using the schema identifier as a document type identifier is to make that URI one which will dereference to a human-readable document. If you're a computer, big deal. But as well as allowing a strict compatibility test (a test for equality of the schema URI), this also allows human beings to get involved if there is any argument as to what a document means. This can be significant! For example, the schema could point to a complete technical spec which is crammed with legalese about what the document does and does not imply and commit to. At the end of the day, machine-understandable descriptions of documents are all very well, but until the day that they bootstrap themselves into legality, they must all in the end be defined in terms of human-readable legalese to have social effect. Human legalese is the schema language of our society. This is level 2.
Now we move into the meat of the schema system when we start to discuss schema documents which are machine readable. Now we are starting to enable some machine understanding and automatic processing of document types which have not been pre-programmed by people. It begins.
The next level we consider is when your browser (agent, whatever) dereferences the namespace URI and finds a schema which defines the structure of the document. This is a bit like an SGML Document Type Definition (DTD). It allows you to do everything which levels 1 and 2 allowed, if it has sufficient comments in it to allow human arguments to be settled.
In addition, a system which has a way of defining structure allows everyone to have one and only one parser to handle all manner of documents. Any document coming across the threshold can be parsed into a tree.
More than that, it allows a document to be validated against allowed structures. If a memo contains two subject fields, it is not valid. This is one of the principal uses of DTDs in SGML.
In some cases, there may be another spin-off. You can imagine that if the schema document lists the allowed structure of the document, and the types (and maybe names) of each element, then this would allow an agent to construct on the fly a graphical user interface for editing such a document. This was the intent with PICS rating systems: at least, a parent coming across a new rating system would be given a human-readable description of the various parameters and would be able to select the settings they wanted.
The "optional" flag is a term I use here for a common crucial step which can make the difference between chaos and smooth evolution. All you need to do is to mark in the schema of a new version of the language which elements of the language can be ignored if you don't understand them. This simple step allows a processor which handled the old language, given the schema of the new language, to filter it so as to produce a document it can legitimately understand.
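A sketch of that filtering step, with the schema reduced to two invented sets of element names (a real schema would of course carry the "optional" flag itself; this is illustration only):

```python
import xml.etree.ElementTree as ET

# Version-1 vocabulary, and the elements version 2 added and
# marked as safe to ignore. Both sets are invented for this sketch.
V1_ELEMENTS = {"memo", "subject", "body"}
V2_OPTIONAL = {"priority"}

def filter_to_v1(xml_text: str) -> str:
    """Strip v2-only optional elements so a v1 processor can
    legitimately understand the result."""
    root = ET.fromstring(xml_text)
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag not in V1_ELEMENTS and child.tag in V2_OPTIONAL:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")

doc_v2 = "<memo><subject>Hi</subject><priority>high</priority><body>...</body></memo>"
print(filter_to_v1(doc_v2))  # <memo><subject>Hi</subject><body>...</body></memo>
```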
Now we have a technology which has all the benefits to date, plus it can handle that elusive version 2 to version 1 conversion problem!
Always in languages there is the balance between the declarative limited language, whose formulae can be easily manipulated, and the powerful programming language, whose programs cannot be analyzed in general but have to be left to run to see what they do. Each end of the spectrum has its benefits. In describing one language in terms of another, one way is to provide a black box program, say in Java or JavaScript, which will convert from one to the other.
Filters written in Turing-complete languages generally have to be trusted, as you can't see what rules they are based on by looking at them. But they can do weird and wonderful things. (They can also crash and loop forever, of course!)
A good language for conversion from one XML-based language to another is XSL. It started off as a template-like system for building one document from another (and can be very simple) but is in fact Turing-complete.
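As a small sketch of the simple end of that spectrum (the element names memo and note are invented), an XSLT stylesheet that converts one language to another by renaming a single element while copying everything else through:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Rename language A's <memo> to language B's <note>. -->
  <xsl:template match="memo">
    <note><xsl:apply-templates select="@*|node()"/></note>
  </xsl:template>
  <!-- Identity transform: copy everything else unchanged. -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```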
When you do publish a program to convert language A to language B, then anyone who trusts it has that capability. A disadvantage is that they never know how it works. You can't deduce things about the individual components of the languages, and you can't therefore infer much indirectly about relationships to other languages. The only way such a filter can be used is to get whatever you have into language A and then put it through the filter. This might be useful. But it isn't as fascinating as the option of blowing language A open.
What is fundamentally more exciting is to write down as explicitly as possible what the new language means. Sorry, let me take that back, in case you think that I am talking about some absolute meaning of meaning. If you know me, I am not. All I mean is that we write, in a machine-processable logical way, the equivalences and conversions which are possible between language A and other languages.
A specific case, of course, is when we document the relationship between version 2 and version 1. The schema document for version 2 could explain that all the terms are synonyms, except for some new terms which can be converted to nothing (i.e., are optional) and some which affect the meaning of the document completely, so that if you don't understand them you are stuck.
In a more general case, take a language like iCalendar in RDF (were it in RDF), which is for describing events as would be in a personal organizer. A schema for the language might declare equivalences between a calendar's concept of group MEMBERship and an access control system's concept of group membership; it might declare the equivalence of the concept of LOCATION to the text description of a Geographical Information Systems standard's location; and it may declare an INDIVIDUAL to be a superset of the HR department's concept of employee. These bits of information are the stuff of the semantic web, as they allow inference to stretch across the globe and conclude things which we as a whole knew but no one person knew. This is what RDF and the Semantic Web logic built on top of it are all about.
So, what will semantic web engines be able to do? They will not all have the same inference abilities or algorithms. They will share a core concept of an RDF statement - an assertion that a given resource has a property with a given value. They will use this as a common way of exchanging data even when their inference rules are not compatible. An agent will be able to read a document in a new version of a language by looking up on the web its relationship with the old version that it can natively read. It will be able to combine many documents into a single graph of knowledge, and draw deductions from the combination. And even though it might not be able to find a proof of a given hypothesis, when faced with an elaborated proof it will be able to check its veracity.
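The core mechanics can be sketched with plain tuples: a statement is a (resource, property, value) triple, combining documents is set union, and one declared synonym lets old software read new vocabulary. All URIs here are invented for illustration:

```python
# Invented vocabularies for a calendar and an HR system.
CAL = "http://example.com/calendar#"
HR = "http://example.com/hr#"

# Two independent documents, each a set of (resource, property, value) triples.
doc_a = {(HR + "alice", HR + "employeeOf", HR + "acme")}
doc_b = {(HR + "alice", CAL + "attends", CAL + "meeting42")}

# Schema-level assertion: hr:employeeOf is a synonym of cal:member.
synonyms = {HR + "employeeOf": CAL + "member"}

graph = doc_a | doc_b  # combine many documents into one graph
# Draw a deduction from the declared equivalence.
inferred = {(s, synonyms[p], o) for (s, p, o) in graph if p in synonyms}
graph |= inferred

assert (HR + "alice", CAL + "member", HR + "acme") in graph
```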
At this stage (1998) we need relational database experts in the XML and RDF groups, [2000 -- include ontology and conceptual graph and knowledge representation experts].
Examples abound of language mixing and evolution in the real world which make the need for these capabilities clear. There is a great and unused overlap in the concepts used by, for example, personal information managers, email systems, and so on. These capabilities would allow information to flow between these applications.
You just have to look at the history of a standard such as the MARC record for library information to see that the tension between agreeing on a standard (difficult, and only possible for a common subset) and allowing variations (quick, but not interoperable) would be eased by allowing language mixing. A card could be written out in a mixture of standard and local terms.
The real world is full of times when conventions have been developed separately and the relationships have been deduced afterward: hence the market for third party converters of disk formats, scheduler files, and so on.
I have left open the discussion as to what inference power and algorithms will be useful on the semantic web precisely because it will always be an open question. When a language is sufficiently expressive to be able to express the state of the real world and real problems, then there will be no one query engine which will be able to solve real problems.
We can, however, guess at how systems might evolve. No one at the beginning of the Web foresaw the search engines which could index almost all the web, so these guesses may be very inaccurate!
In fact, I think we will see a huge market for interesting new algorithms, each taking advantage of particular characteristics of particular parts of the Web. New algorithms around electronic commerce may have directly beneficial business models, so there will be an incentive for their development.
Imagine some questions we might want to ask an engine of the future:
All these involve bridging barriers between domains of knowledge, but they do not involve very complex logic -- except for the tax form, that is. And who knows, perhaps in the future the tax code will have to be presented as a formula on the semantic web, just as it is now expected that one makes such a public human-readable document available on the Web.
There are some requirements on the Semantic Web design which must be upheld if the technology is to be able to evolve smoothly. They involve both the introduction of new versions of one language, and also the merging of two originally independent languages. XML Namespaces and RDF are designed to meet these requirements, but a lot more thought and careful design will be needed before the system is complete.
Lao-Tse
The Space Within.
(UU-STLT#600)
...
Imagine that the EU and the US independently define RDF schemata for an invoice. Invoices are traded around Europe with a schema pointer at the top which identifies the schema. Indeed, the schema may be found on the web.
Tim BL
Request for Comments: 6045
Category: Informational
ISSN: 2070-1721
K. Moriarty
EMC
November 2010
Real-time Inter-network Defense (RID)

Table of Contents (as recoverable):

1. Introduction
   1.1. Normative and Informative
   1.2. Terminology
...
   4.3.1. RID Data Types
        4.3.1.1. Boolean
   4.3.2. RID Messages and Transport
   4.3.3. IODEF-RID Schema
        4.3.3.1. RequestStatus Class
        4.3.3.2. IncidentSource Class
        4.3.3.3. RIDPolicy Class
   4.3.4. RID Namespace
   4.4. RID Messages
        4.4.1. TraceRequest
        4.4.2. RequestAuthorization
        4.4.3. Result
        4.4.4. Investigation Request
        4.4.5. Report
        4.4.6. IncidentQuery
   4.5. RID Communication Exchanges
        4.5.1. Upstream Trace Communication Flow
             4.5.1.1. RID TraceRequest Example
             4.5.1.2. RequestAuthorization Message Example
             4.5.1.3. Result Message Example
        4.5.2. Investigation Request Communication Flow
             4.5.2.1. Investigation Request Example
             4.5.2.2. RequestAuthorization Message Example
        4.5.3. Report Communication
             4.5.3.1. Report Example
        4.5.4. IncidentQuery Communication Flow
             4.5.4.1. IncidentQuery Example
5. RID Schema Definition
6. Security Considerations
   6.1. Message Transport
   6.2. Message Delivery Protocol - Integrity and Authentication
   6.3. Transport Communication
   6.4. Authentication of RID Protocol
        6.4.1. Multi-Hop TraceRequest Authentication
   6.5. Consortiums and Public Key Infrastructures
   6.6. Privacy Concerns and System Use Guidelines
7. IANA Considerations
8. Summary
9. References
   9.1. Normative References
   9.2. Informative References
Acknowledgements
Sponsor Information
1. Introduction
Incident handling involves the detection, reporting, identification, and mitigation of an attack, whether it be a system compromise, a socially engineered phishing attack, or a denial-of-service (DoS) attack. When an attack is detected, the response may include simply filing a report, notification to the source of the attack, a request for mitigation, or a determination of whether a request should be permitted to continue. The data in RID messages is represented in an Extensible Markup Language (XML) [XML1.0].
RID messages communicate information regarding security incidents. RID messages are encapsulated for transport, which is defined in a separate document [RFC6046]. The authentication, integrity, and authorization features each layer has to offer are used to achieve a necessary level of security.
1.1. Normative and Informative
The XML schema [XMLschema] and transport requirements contained in this document are normative; all other information provided is intended as informative. More specifically, the following sections of this document are intended as informative: Sections 1, 2, and 3; and the sub-sections of 4 including the introduction to 4, 4.1, and 4.2. The following sections of this document are normative: The sub-sections of 4 including 4.3, 4.4, and 4.5; Section 5; and Section 6.
1.2. Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
1.3.
Denial-of-service (DoS) attacks are characterized by large amounts of traffic destined for particular Internet locations and can originate from a single source or multiple sources. An attack from multiple sources is known as a distributed denial-of-service (DDoS) attack. Because DDoS attacks can originate from multiple sources, tracing such an attack can be extremely difficult or nearly impossible. Many TraceRequests may be required to accomplish the task and may require the use of dedicated network resources to communicate incident handling information. Intrusion detection systems (IDSs) and RID systems communicate over HTTP/TLS (Transport Layer Security) or an appropriate protocol defined in the transport specification. An NP may mitigate the effects of the attack through methods such as filtering or rate-limiting the traffic close to the source, or by using methods such as taking the host or network offline. Care must also be taken to ensure that the system is not abused and to use proper analysis in determining if attack traffic is, in fact, attack traffic at each NP along the path of a trace.
Tracing security incidents, including DDoS attacks, can be difficult or nearly impossible because of the nature of the attack. Some of the difficulties in tracing attacks include the following:
- the attack originates from multiple sources;
- NP or HTTP requests sent to an organization connected to the Internet;
- the attack may utilize varying types of packets including TCP, UDP, ICMP, or other IP protocols;
- the attack may be from "zombies", which then require additional searches to locate a controlling server as the true origin of the attack;
- tracing may require research solutions, such as storing packet digests as described in "Hash-Based IP Traceback" [HASH-IPtrace]. Other research solutions involve marking packets, as explained in "ICMP Traceback Messages" [ICMPtrace], "Practical network support for IP traceback" [NTWK-IPtrace], the IP Flow Information eXport (IPFIX) protocol [RFC3917], and IP marking [IPtrace].
The research presented in "Hash-Based IP Traceback" [HASH-IPtrace] provides guidelines to establish a minimum requirement for distinguishing packets. The full packet and content SHOULD be provided, but the minimum requirement MUST be provided. The research from [HASH-IPtrace] found that the first 28 invariant bytes of a packet (masked IP header plus the first 8 bytes of the payload) are sufficient to differentiate almost all non-identical IPv4 packets. RID requires the first 28 invariant bytes of an IPv4 packet in order to perform a trace, and the first 48 invariant bytes for an IPv6 packet in order to distinguish the packet in a trace. See [HASH-IPtrace] for additional details.
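As an illustrative sketch of extracting those 28 invariant bytes from an IPv4 packet (the choice of which header fields to mask, namely TOS, TTL, and header checksum, is an assumption drawn from the hash-based traceback literature, not from this document; [HASH-IPtrace] is the authority on the details):

```python
def invariant_prefix_ipv4(packet: bytes) -> bytes:
    """Return the first 28 bytes of an IPv4 packet (20-byte header
    plus 8 payload bytes) with assumed-mutable header fields zeroed."""
    if len(packet) < 28:
        raise ValueError("need at least 28 bytes")
    buf = bytearray(packet[:28])
    buf[1] = 0                 # TOS (DSCP/ECN), may be rewritten in flight
    buf[8] = 0                 # TTL, decremented at every hop
    buf[10:12] = b"\x00\x00"   # header checksum, recomputed per hop
    return bytes(buf)
```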
The input mechanism for packets to be traced should be flexible.
Each NP should dedicate a phone number to reach a member of their respective CSIRT. The phone number could be dedicated to inter-NP incident communications and must be a hotline that provides a 24x7 live response. The phone line should reach someone who would have the authority and expertise to act on the NP's network. An outside resource should be able to mitigate or alleviate the financial limitations and any lack of experienced resource personnel.
A technical solution to trace traffic across a single NP may include homegrown or commercial systems for which RID messaging must accommodate the input requirements. The IHS used on the NP's backbone by the CSIRT to coordinate the trace across the single network requires a method to accept and process RID messages and relay TraceRequests, using the transport defined in the [RFC6046] document.
One goal of RID is to prevent the need to permit access to other networks' systems; that access remains the responsibility of the network peers hosting the RID systems. The cost of providing such a service by the NP may be negotiated in a contract as part of a value-added service or through a service level agreement (SLA). Further discussion is beyond the scope of this document and may be more appropriately handled in peering agreements, or with the FBI or other assisting government organization in the country of the investigation.
4.1. Inter-Network Provider RID Messaging
In order to implement a messaging mechanism between RID communication systems or IHSs, several message types are needed. A TraceRequest will issue a trace across the network to determine the upstream source of the traffic. The RequestAuthorization and Result messages are used to communicate the status and result of a TraceRequest or Investigation request. The Investigation request message would only involve the RID communication systems along the path to the source of the traffic and not the use of network trace systems. The Investigation request leverages the bilateral relationships or a consortium's interconnections to mitigate or stop problematic traffic close to the source. Routes could determine the fastest path to a known source IP address in the case of an Investigation request. A message sent between RID systems for a TraceRequest, or for an Investigation request to stop traffic at the source through a bordering network, would require policies: one established for this system and one established through the peering agreements for each bilateral peer or agreed-upon consortium guidelines. The purpose of such policies is to avoid abuse of the system; the policies shall be developed by a consortium or between peers, taking the Security Considerations into account. An example is illustrated below.
(Figure: RID systems at NP borders relaying a TraceRequest from one network to the next.)
4.3. Message Formats
Section 4.3.2 describes the RID message types; transport for these messages is specified in [RFC6046]. Each RID message type, along with an example, is described in the following sections. The IODEF-RID schema is introduced in Section 4.3.3 to support the RID message types in Section 4.3.2, using "XML Schema" [XMLschema] in the schema definition.
4.3.2. RID Messages and Transport
The six RID message types follow:
- TraceRequest. This message is sent to the RID system next in the upstream trace. It is used to initiate a TraceRequest or to continue a TraceRequest to an upstream network closer to the source address of the origin of the security incident. The TraceRequest would trigger a traceback on the network to locate the source of the attack traffic.
- RequestAuthorization. This message is sent to the initiating RID system from each of the upstream NPs' RID systems to provide information on the request status in the current network.
- Result. This message is sent to the initiating RID system through the network of RID systems in the path of the trace as notification that the source of the attack was located. The Result message is also used to provide the notification of actions taken for an Investigation request.
- Investigation. This message type is used when the source of the traffic is believed not to be spoofed. The purpose of the Investigation request message is to leverage the existing peer relationships in order to notify the network provider closest to the source of the valid traffic of a security-related incident for any necessary actions to be taken.
- Report. This message is used to report a security incident, for which no action is requested. This may be used for the purpose of correlating attack information by CSIRTs, statistics and trending information, etc.
- IncidentQuery. This message is used to request information about an incident or incident type from a trusted RID system. The response is provided through the Report message.
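The six types above can be sketched as a simple enumeration (the Python modelling is illustrative only; the RFC defines these as XML enumerated values):

```python
from enum import Enum

class MsgType(Enum):
    """The six RID message types of Section 4.3.2."""
    TRACE_REQUEST = "TraceRequest"
    REQUEST_AUTHORIZATION = "RequestAuthorization"
    RESULT = "Result"
    INVESTIGATION = "Investigation"
    REPORT = "Report"
    INCIDENT_QUERY = "IncidentQuery"

# Message types that trigger a RequestAuthorization in response;
# this grouping is an assumption drawn from the communication flows
# described later, not a normative statement.
EXPECTS_AUTHORIZATION = {MsgType.TRACE_REQUEST, MsgType.INVESTIGATION}
```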
The RIDPolicy information is not required to be encrypted, so separating out this data from the IODEF extension removes the need for decrypting and parsing the entire IODEF document. RID policy information and guidelines are discussed in Section 6.6. The policy is defined between RID peers and within or between consortiums.
(Figure: RID class structure, showing the optional RIDPolicy, RequestStatus, and IncidentSource aggregates.)
RequestAuthorization messages report back to the originating RID system whether the trace will be continued by each RID system that received a TraceRequest in the path to the source of the traffic. The RIDPolicy information is carried per [RFC6046], as that information should be as small as possible and may not be encrypted. The RequestStatus message MUST be able to stand alone, without the need for an IODEF document, to facilitate the communication, limiting the data transported to the required elements per [RFC6046].
4.3.3.1. RequestStatus Class
The RequestStatus class is an aggregate class in the RID class.
+--------------------------------+ | RequestStatus | +--------------------------------+ | | | ENUM restriction | | ENUM AuthorizationStatus | | ENUM Justification | | STRING ext-AuthorizationStatus | | STRING ext-Justification | | | +--------------------------------+
Figure: The RequestStatus Class
AuthorizationStatus
REQUIRED. ENUM. The listed values are used to provide a response to the requesting CSIRT on the status of a TraceRequest in the current network.
- Approved. The trace was approved and will begin in the current NP.
- Denied. The trace was denied in the current NP.
- Pending. Awaiting approval; a timeout period has been reached, which resulted in this Pending status and RequestAuthorization message being generated.
Justification
OPTIONAL. ENUM. Provides a reason for a Denied or Pending message.
- SystemResource. A resource issue exists on the systems that would be involved in the request.
- Authentication. The enveloped digital signature [RFC3275] failed to validate.
- AuthenticationOrigin. The detached digital signature for the original requestor on the IP packet failed to validate.
- Encryption. Unable to decrypt the request.
- Other. There were other reasons this request could not be processed.
AuthorizationStatus-ext
OPTIONAL. STRING. A means by which to extend the AuthorizationStatus attribute. See IODEF [RFC5070], Section 5.1.
Justification-ext
OPTIONAL. STRING. A means by which to extend the Justification attribute. See IODEF [RFC5070], Section 5.1.
4.3.3.3. RIDPolicy Class
The RIDPolicy class facilitates the delivery of RID messages and is also referenced for transport in the transport document [RFC6046].
(Figure: The RIDPolicy Class.)
The Node class is reused from the IODEF specification [RFC5070], Section 3.16.
region
REQUIRED. ENUM. The region in which the request applies:
- ClientToNP. An enterprise network initiated the request.
- NPToClient. An NP passed a RID request to a client or an enterprise attached network to the NP based on the service level agreements.
- IntraConsortium. A trace that should have no restrictions within the boundaries of a consortium with the agreed-upon use and abuse guidelines.
- PeerToPeer. A trace that should have no restrictions between two peers but may require further evaluation before continuance beyond that point with the agreed-upon use and abuse guidelines.
- BetweenConsortiums. A trace that should have no restrictions between consortiums that have established agreed-upon use and abuse guidelines.
- AcrossNationalBoundaries. This option takes precedence over the "BetweenConsortiums" setting, since it may be possible to have multiple nations as members of the same consortium, and this option must be selected if the traffic is of a type that may have different restrictions in other nations.
TrafficType
REQUIRED. ENUM. One of the following values, selecting the one that would most accurately describe the traffic type:
- Attack. This option should only be selected if the traffic is related to a network-based attack.
- Network. This option is used for NP network traffic or routing issues.
-.
- OfficialBusiness. This option MUST be used if the traffic being traced is requested or is affiliated with any government or other official business request. This would be used during an investigation by government authorities or other government traces to track suspected criminal or other activities.
-.
MsgType
REQUIRED. ENUM. The type of RID message sent. The six types of messages are described in Section 4.3.2 and can be noted as one of the six selections below.
- TraceRequest. This message may be used to initiate a TraceRequest or to continue a TraceRequest to an upstream network closer to the source address of the origin of the security incident.
- RequestAuthorization. This message provides information on the request status in the current network.
- Result. This message provides notification that the source of the attack was located, or of actions taken for an Investigation request.
- Investigation. This message type is used when the source of the traffic is believed to be valid. The purpose of the Investigation request is to leverage the existing peer or consortium relationships in order to notify the NP closest to the source.
- Report. This message is used to report a security incident for which no action is requested; it may be used for correlating attack information by CSIRTs, statistics and trending information, etc.
- IncidentQuery. This message is used to request information from a trusted RID system about an incident or incident type.
Additionally, there is an extension attribute to add new enumerated values:
MsgDestination
REQUIRED. ENUM. The destination required at this level may either be the RID messaging system intended to receive the request or, in the case of an Investigation request, the source of the incident.
- RIDSystem. The address listed in the Node element of the RIDPolicy class is the next upstream RID system that will receive the RID message.
- SourceOfIncident. The address used by the source is believed to be valid, and an Investigation request message is used. This is not to be confused with the IncidentSource class, as the value defined here is from an initial trace or Investigation request, not the source reported in a Result message.
MsgType-ext
OPTIONAL. STRING. A means by which to extend the MsgType attribute. See IODEF [RFC5070], Section 5.1.
MsgDestination-ext
OPTIONAL. STRING. A means by which to extend the MsgDestination attribute. See IODEF [RFC5070], Section 5.1.
4.3.4. RID Namespace
The RID schema declares a namespace of "iodef-rid-1.0" and registers it per [XMLnames]. Each IODEF-RID document MUST use the "iodef- rid-1.0" namespace in the top-level element RID-Document. It can be referenced as follows:
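A minimal sketch of such a reference, assuming the namespace string "iodef-rid-1.0" as registered above (the attribute set shown in the RFC's own example did not survive extraction, so the version and lang attributes here are assumptions):

```xml
<iodef-rid:RID-Document version="1.0" lang="en-US"
    xmlns:iodef-rid="iodef-rid-1.0">
  <!-- RID message content -->
</iodef-rid:RID-Document>
```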
4.4.1. TraceRequest
The incident number of the IODEF document can be used to correlate related incident data that is part of a larger incident.
4.4.2. RequestAuthorization
This message is sent in response to a TraceRequest or Investigation request to provide the status of the request in the current RID system.
4.4.3. Result
This message provides notification that the source of the attack was located (or of actions taken for an Investigation request). The NP listed is the NP that located the source of the traffic (the NP sending the Result message).
- Event, Record, and RecordItem classes to include example packets and other information related to the incident (optional).
Note: Event information included here requires a second instance of EventData in addition to that used to convey NP path contact information.
Standards for encryption and digital signatures [RFC3275]:
Digital signature of source NP for authenticity of Result Message, from the NP creating this message using XML digital signature.
When a message is sent, the security considerations of Section 6 MUST be taken into account.
Note: The History class has been expanded in IODEF to accommodate all of the possible actions taken as a result of a RID TraceRequest or Investigation request.
4.4.4. Investigation Request
Description: This message type is used when the source of the traffic is believed not to be spoofed. The purpose of the Investigation request message is to leverage the existing bilateral peer relationships in order to notify the network provider closest to the source of the valid traffic that some event occurred, which may be a security-related incident.
The following information is required for Investigation request messages and is provided through:
RID Information:
RID Policy
-.
Standards for encryption and digital signatures [RFC3275]:
Digital signature from initiating RID system, passed to all systems in upstream trace using XML digital signature.
Security considerations would include the ability to encrypt [XMLencrypt] the contents of the Investigation request message using the public key of the destination RID system. The incident number would increase as if it were a TraceRequest message in order to ensure uniqueness within the system. The relaying peers would also append their Autonomous System (AS) or RID system information as the request message was relayed along the web of network providers so that the Result message could utilize the same path as the set of trust relationships for the return message, thus indicating any actions taken. The request would also be recorded in the state tables of both the initiating and destination NP RID systems. The destination NP is responsible for any actions taken as a result of the request in adherence to any service level agreements or internal policies. The NP should:
RID Information:
RID Policy: RID message type, IncidentID, and destination
- policy information
The following data is recommended if available, and can be provided through the Report, signed by the generating NP using an XML digital signature.
Security considerations are the same as for the other message types. A RequestAuthorization message provides the reason why a message could not be processed. Assuming that the trace continued, additional TraceRequests follow; in response to a request, a RequestAuthorization message is sent, and otherwise no return message is sent.
Note: For each example listed below, [RFC5735] addresses were used. Assume that each IP address listed is actually a separate network range held by different NPs. Addresses were used from /27 network ranges.
If a TraceRequest is denied, the downstream peer has the option to take an action and respond with a Result message. The originator of the request may follow up with the downstream peer of the NP involved using an Investigation request to ensure that an action is taken if no response is received. Nothing precludes the originator of the request from initiating a new TraceRequest bypassing the NP that denied the request, if a trace is needed beyond that point. Another option may be for the initiator to send an Investigation request to an NP upstream of the NP that denied the request, if enough information was gathered to discern the true source of the attack traffic from the incident handling.
In the following example, use of [XMLsig] to generate digital signatures does not currently provide digest algorithm agility, as [XMLsig] only supports SHA-1. A future version of [XMLsig] may support additional digest algorithms to support digest algorithm agility.
<!-- Digital signature accompanied by above RID and IODEF -->
Investigation Request Example
4.5.3. Report Communication
The diagram below outlines the RID Report communication flow between RID systems on different networks.
NP-1 NP-2
- Generate incident information and prepare Report message
2. o-------Report-------> 3. File report in database
Figure 9. Report Communication Flow
The Report communication flow is used to provide information on specific incidents detected on the network. Incident information may be shared between CSIRTs or participating RID hosts independently of any TraceRequest information.
4.5.4. IncidentQuery Communication Flow
The diagram below outlines the RID IncidentQuery communication flow between RID systems on different networks.
NP-1 NP-2
- Generate a request for information on a specific incident number or incident type
2. o---IncidentQuery---> 3. Verify policy information and determine if matches exist for requested information
4. <-------Report------o
5. RID Schema Definition
6. Security Considerations
Communication between NPs' RID systems must be protected. RID has many security considerations built into the design of the protocol, several of which are described in the following sub-sections. For a complete view of security, considerations extend to the public key infrastructure (PKI) and cross-certifications of consortiums. By using RIDPolicy information, TLS, and the XML security features of encryption [XMLencrypt] and digital signatures [RFC3275], [XMLsig], RID takes advantage of existing security standards. The standards provide clear methods to ensure that messages are secure, authenticated, and authorized, and that the messages meet policy and privacy guidelines and maintain integrity.
As specified in the relevant sections of this document, the XML digital signature [RFC3275] and XML encryption [XMLencrypt] are used in the following cases:
XML Digital Signature
- The originator of a TraceRequest MUST use a detached signature to sign one of the original elements of the RecordItem class, providing authentication of the request origin along the entire path of the trace.
- For all message types, the full IODEF/RID document MUST be signed using an enveloped signature by the sending peer to provide authentication and integrity to the receiving RID system.
XML Encryption
- The IODEF/RID document may be encrypted to provide an extra layer of security between peers so that the message is not only encrypted for the transport, but also while stored.
- An Investigation request, or any other message type that may be relayed through RID systems other than the intended destination as a result of trust relationships, may be encrypted for the intended destination. Intermediate systems can still relay the message, leveraging the information listed in RIDPolicy. The existence of the Result message for an incident would tell any intermediate parties used in the path of the incident investigation that the incident handling has been completed.
6.1. Message Transport
The transport specifications are fully defined in a separate document [RFC6046]. The security requirements for transport are discussed at the beginning of Section 6 of this document.
XML security functions such as the digital signature [RFC3275] and encryption [XMLencrypt] provide a standards-based method to encrypt and digitally sign RID messages. RID messages specify system use and privacy guidelines through the RIDPolicy class. A public key infrastructure (PKI) provides the base for authentication and authorization, encryption, and digital signatures to establish trust relationships between members of a RID consortium or a peering consortium.
RID messages may require protection from viewing not only by their neighbor network, but also by the initiating RID system, as discussed in Section 6.4. Several methods can be used to ensure integrity and privacy of the communication.
The transport mechanism selected MUST follow the defined transport protocol [RFC6046] when using RID messaging to ensure consistency among the peers. Consortiums may vary their selected transport mechanisms and thus must decide upon a mutual protocol to use for transport when communicating with peers in a neighboring consortium using RID. RID systems MUST implement and deploy HTTPS as defined in the transport document [RFC6046] and optionally support other protocols such as the Blocks Extensible Exchange Protocol (BEEP). A consortium may leverage an existing PKI, such as the Regional Internet Registry's (RIR's) PKI hierarchy, to establish the necessary trust relationships. Security and privacy considerations related to consortiums are discussed in Sections 6.5 and 6.6.
6.3..
6.4. Authentication of RID Protocol
In order to ensure the authenticity of the RID messages, a message authentication scheme is used to secure the protocol between network providers. The PKI used for authentication would also provide the necessary certificates needed for the encryption used by the RID transport protocol [RFC6046].
The use of pre-shared keys may be considered for authentication. If this option is selected, the specifications set forth in "Pre-Shared Key Ciphersuites for Transport Layer Security (TLS)" [RFC4279] MUST be followed. Encryption [XMLencrypt] or a digital signature [XMLsig], [RFC3275] is used within a document; transport specifications are detailed in a separate document [RFC6046].
6.5. Consortiums and Public Key Infrastructures
Each NP maintains its own trust relationships as well as the previous trust relationships in the downstream path. For practical reasons, the NPs may want to prioritize incident handling events based upon the immediate peer for a TraceRequest. Each party MUST be able to perform full path validation on the digital signature. Full path validation verifies the chaining relationship to a trusted root, whether the PKI is organized as a bridge, a hierarchy, or a single cross-certification relationship.
Consortiums also need to establish guidelines for each participating NP to adhere to. The RECOMMENDED guidelines include:
- Physical and logical practices to protect RID systems;
- Network and application layer protection for RID systems and communications;
- Proper use guidelines for RID systems, messages, and requests; and
- A PKI to provide authentication, integrity, and privacy.
The functions described for a consortium's role would parallel those of an NP. The PKI may be a subordinate CA of, or otherwise part of, the CA hierarchy from the NP's consortium to establish the trust relationships necessary as the request is made to other connected networks.
6.6. Privacy Concerns
Privacy concerns that MUST be addressed in the RID system and other integrated components include the following:
Network Provider Concerns:
- Privacy of data monitored and/or stored on IDSs for attack detection.
- Privacy of data monitored and stored on systems used to trace traffic across a single network.
Customer Attached Networks Participating in RID with NP:
- Customer networks may include an enterprise, educational, government, or other attached networks to an NP participating in RID and MUST be made fully aware of the security and privacy considerations for using RID.
- Customers MUST know the security and privacy considerations in place by their NP and the consortium of which the NP is a member.
- Customers MUST understand that their data can and will be sent to other NPs in order to complete a trace unless an agreement stating otherwise is made in the service level agreements between the customer and NP.
Parties Involved in the Attack:
- Privacy of the identity of a host involved in an attack.
- Privacy of information such as the source and destination used for communication purposes over the monitored or RID connected network(s).
- Protection of data from being viewed by intermediate parties in the path of an Investigation request MUST be considered.
Consortium Considerations:
- System use restricted to security incident handling within the local region's definitions of appropriate traffic for the network monitored and linked via RID in a single consortium also abiding by the consortium's use guidelines.
- System use prohibiting the consortium's participating NPs from inappropriately tracing non-attack traffic to locate sources or mitigate traffic unlawfully within the jurisdiction or region.
Inter-Consortium Considerations:
- System use between peering consortiums MUST also adhere to any government communication regulations that apply between those two regions, such as encryption export and import restrictions. This may include consortiums that are categorized as "BetweenConsortiums" or "AcrossNationalBoundaries".
- System use between consortiums MUST NOT request traffic traces and actions beyond the scope intended and permitted by law or inter-consortium agreements.
- System use between consortiums classified as "AcrossNationalBoundaries" MUST adhere to the considerations discussed at the beginning of Section 6; any further agreements are negotiated at the inter-NP level and are out of scope for RID messaging.
The identity of the true source of an attack packet being traced through RID could be sensitive. The true identity listed in a Result message can be protected through the use of encryption [XMLencrypt] so that the identity of the attack traffic is known only to the involved parties. In an Investigation request, where the originating NP is aware of the NP that will receive the request for processing, the free-form text areas of the document could be encrypted [XMLencrypt] using the public key of the destination NP to ensure that no other NP in the path can read the contents. The encryption would be accomplished through the W3C [XMLencrypt] specification, with transport protection via TLS over the transport protocol [RFC6046]. The RID messages may be decrypted at each RID system in order to properly process the request or relay the information. Today's processing power is more than sufficient to handle the minimal burden of encrypting and decrypting relatively small typical RID messages.
7. IANA Considerations
Registration request for the iodef-rid namespace:
URI: urn:ietf:params:xml:ns:iodef-rid-1.0
Registrant Contact: See the "Author's Address" section of this document.
XML:
-
None. Namespace URIs do not represent an XML specification.
Registration request for the iodef-rid XML schema:
URI: urn:ietf:params:xml:schema:iodef-rid-1.0
Registrant Contact: See the "Author's Address" section of this document.
XML:
-
See Section 5, "RID Schema Definition", of this document.
8. Summary
RID provides a method for communicating incident-handling information among IHSs or RID systems. A TraceRequest or Investigation request is communicated to upstream NPs, which is essential for incident handling.
9. References
9.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC3275] Eastlake 3rd, D., Reagle, J., and D. Solo, "(Extensible Markup Language) XML-Signature Syntax and Processing", RFC 3275, March 2002.
[RFC3688] Mealling, M., "The IETF XML Registry", BCP 81, RFC 3688, January 2004.
[RFC4279] Eronen, P., Ed., and H. Tschofenig, Ed., "Pre-Shared Key Ciphersuites for Transport Layer Security (TLS)", RFC 4279, December 2005.
[RFC6046] Moriarty, K. and B. Trammell, "Transport of Real-Time Inter-Network Defense (RID) Messages", RFC 6046, November 2010.
[XML1.0] "Extensible Markup Language (XML) 1.0 (Second Edition)". W3C Recommendation. T. Bray, E. Maler, J. Paoli, and C.M. Sperberg-McQueen. October 2000.
[XMLnames] "Namespaces in XML 1.0 (Third Edition)". W3C Recommendation. T. Bray, D. Hollander, A. Layman, R. Tobin, and H. Thompson. December 2009.
[XMLencrypt] "XML Encryption Syntax and Processing". W3C Recommendation. T. Imamura, B. Dillaway, and E. Simon. December 2002.
[XMLschema] "XML Schema". E. Van der Vlist. O'Reilly. 2002.
[XMLsig] "XML-Signature Syntax and Processing (Second Edition)". W3C Recommendation. M. Bartel, J. Boyer, B. Fox, B. LaMacchia, and E. Simon. June 2008.
9.2. Informative References
[RFC1930] Hawkinson, J. and T. Bates, "Guidelines for creation, selection, and registration of an Autonomous System (AS)", BCP 6, RFC 1930, March 1996.
[RFC2827] Ferguson, P. and D. Senie, "Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing", BCP 38, RFC 2827, May 2000.
[RFC3647] Chokhani, S., Ford, W., Sabett, R., Merrill, C., and S. Wu, "Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework", RFC 3647, November 2003.
[RFC3917] Quittek, J., Zseby, T., Claise, B., and S. Zander, "Requirements for IP Flow Information Export (IPFIX)", RFC 3917, October 2004.
[RFC5735] Cotton, M. and L. Vegoda, "Special Use IPv4 Addresses", BCP 153, RFC 5735, January 2010.
[IPtrace] "Advanced and Authenticated Marking Schemes for IP Traceback". D. Song and A. Perrig. IEEE INFOCOM 2001.
[HASH-IPtrace] "Hash-Based IP Traceback". A. Snoeren, C. Partridge, L. Sanchez, C. Jones, F. Tchakountio, S. Kent, and W. Strayer. SIGCOMM'01. August 2001.
[ICMPtrace] Bellovin, S., Leech, M., and T. Taylor, "ICMP Traceback Messages", Work in Progress, February 2003.
[NTWK-IPtrace] "Practical network support for IP traceback". S. Savage, D. Wetherall, A. Karlin, and T. Anderson. SIGCOMM'00. August 2000.
[DoS] "Trends in Denial of Service Attack Technology". K. Houle, G. Weaver, N. Long, and R. Thomas. CERT Coordination Center. October 2001.
Acknowledgements
Many thanks to coworkers and the Internet community for reviewing and commenting on the document.
Sponsor Information
This work was sponsored by the Air Force under Air Force Contract FA8721-05-C-0002, while working at MIT Lincoln Laboratory.
"Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government".
Author's Address
Kathleen M. Moriarty RSA, The Security Division of EMC 174 Middlesex Turnpike Bedford, MA 01730 US
-
Moriarty_Kathleen@EMC.com
|
http://pike.lysator.liu.se/docs/ietf/rfc/60/rfc6045.xml
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Description
Class for generating spheres from different families, each with given probability.
It 'mixes' different ChRandomShapeGenerator sources (among these you can also put other ChRandomShapeCreatorFromFamilies to create tree-like families). This can be used to make bi-modal or multi-modal distributions, for example suppose that you need a mixture of 30% spheres, with their own size distribution, and 70% cubes, with their own distribution.
#include <ChRandomShapeCreator.h>
Member Function Documentation
◆ AddFamily()
◆ GetObtainedPercentuals()
For debugging.
For various reasons (ex. very odd masses in generated particles), especially with a small number of particles, the desired percentuals of masses or n.of particles for the families are only approximated. Use this vector to get the actual percentuals obtained so far.
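This drift is easy to see with a generic sketch (plain Python rather than Chrono's C++ API; the family names and helper function below are made up for illustration): with few particles the obtained fractions can be far from a requested 30%/70% split, while large counts converge toward it.

```python
import random

def sample_families(probabilities, n, seed=0):
    """Draw n shapes from weighted families and report the obtained fractions."""
    rng = random.Random(seed)
    names = list(probabilities)
    weights = [probabilities[k] for k in names]
    draws = rng.choices(names, weights=weights, k=n)
    return {name: draws.count(name) / n for name in names}

# Target mixture from the class description: 30% spheres, 70% cubes.
target = {"sphere": 0.30, "cube": 0.70}
few = sample_families(target, n=10)       # small batch: fractions can be far off
many = sample_families(target, n=100000)  # large batch: fractions approach targets
print(few, many)
```

The same effect explains why GetObtainedPercentuals() is useful for debugging: it reports the realized split, not the requested one.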
The documentation for this class was generated from the following file:
- /builds/uwsbel/chrono/src/chrono/particlefactory/ChRandomShapeCreator.h
|
https://api.projectchrono.org/development/classchrono_1_1particlefactory_1_1_ch_random_shape_creator_from_families.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
PresentID Face detection API can detect the face(s) in your image and retrieve some features such as Age, Gender, Landmarks, etc.
Input:
Output:
Features
Use Cases:
Rules & Restrictions:
import requests

api_url = '
api_key = 'Your API Key'
image_path = 'Image directory path'
image_name = 'Image name'

files = {'Photo': (image_name, open(image_path + image_name, 'rb'), 'multipart/form-data')}
header = {
    "x-rapidapi-host": "face-recognition4.p.rapidapi.com",
    "x-rapidapi-key": api_key
}
response = requests.post(api_url, files=files, headers=header)
|
https://rapidapi.com/de/PresentID/api/face-detection11/details
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
ec_cache_create
Name
ec_cache_create — Create a cache with max_elts
Synopsis
#include "ec_cache.h"
ec_cache_t *ec_cache_create(unsigned int max_elts,
                            unsigned int max_lifetime,
                            ec_cache_elt_dtor_func dtor);
Description
Create a cache with max_elts.
Note
This is equivalent to calling ec_cache_create2 with a NULL name parameter.
- max_elts
The maximum number of elements that can be kept in the cache. If that number is exceeded, then the least recently used (LRU) element will be removed from the cache.
- max_lifetime
Specifies a time-to-live (TTL) in seconds for the cache element. If max_lifetime is not given the value EC_CACHE_LIFETIME_INFINITE, then it specifies a time-to-live in seconds after which the entry will be removed from the cache. If using the cache in per-item-ttl mode, then max_lifetime is actually a number of additional seconds beyond the TTL for which an element will not be removed.
- dtor
Specifies a function that will be called when the refcount of an item becomes zero. The following typedef applies to this data type:
typedef void (*ec_cache_elt_dtor_func)(void *value);.
Returns the address of an ec_cache_t type. The following typedef applies to this data type: typedef struct ec_cache_head ec_cache_t;
While it is legal to call this function in any thread, it should only be called in the Scheduler thread.
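The eviction semantics described above (least-recently-used removal once max_elts is exceeded, removal after max_lifetime seconds, and a destructor callback when an entry is dropped) can be sketched generically. The following is a toy Python model of those rules only, not the Momentum C implementation; the class name and method shapes are invented for illustration.

```python
import time
from collections import OrderedDict

class LruTtlCache:
    """Toy model of the documented semantics: at most max_elts entries,
    LRU eviction on overflow, optional per-cache TTL, dtor on drop."""
    def __init__(self, max_elts, max_lifetime=None, dtor=None):
        self.max_elts = max_elts
        self.max_lifetime = max_lifetime  # None models EC_CACHE_LIFETIME_INFINITE
        self.dtor = dtor                  # called when an entry is dropped
        self._data = OrderedDict()        # key -> (value, insert_time)

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self._data[key] = (value, now)
        self._data.move_to_end(key)
        if len(self._data) > self.max_elts:
            _, (old, _) = self._data.popitem(last=False)  # evict the LRU entry
            if self.dtor:
                self.dtor(old)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        if key not in self._data:
            return None
        value, born = self._data[key]
        if self.max_lifetime is not None and now - born > self.max_lifetime:
            del self._data[key]           # TTL expired: remove from the cache
            if self.dtor:
                self.dtor(value)
            return None
        self._data.move_to_end(key)       # mark as recently used
        return value
```

In the real API the dtor fires when an item's refcount reaches zero; the sketch simplifies this to "whenever an entry leaves the cache".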
|
https://support.sparkpost.com/momentum/3/3-api/apis-ec-cache-create
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Bare Guidelines For Setting Up Auth With Revisions Based on Comments using Ruby on Rails And Vanilla JS
- make a frontend directory (be sure to link your index.html, app.js, app.css)
rails new --api -d=postgresql backend_auth_1st_attempt
- fix cors errors
uncomment bcrypt
bundle add jwt to add jwt to your back end directory
run bundle install
create user with rails g resource, get password digest, username, password
password:digest
rails g resource user username email password:digest (rails d resource to undo). password:digest should generate has_secure_password in your user model, Rails magic, and show t.string :password_digest in your migration table; you better not set a password_digest directly when you create your seeds
rails db:create db:migrate
test - seed and, in the console: User.create(username: "", password: "")
setup index: rails g controller
check your rails routes
you should have a authentication#login for your login route to POST a login action within your authentication controller
you should also have your users controller updated in your routes to the profile action (users#profile) as a GET. You want to create an authentication controller
require 'jwt'
you'll also want to save the token created by the authentication controller to local storage.
whenever the page loads you'll want to be checking if local storage has a token in it.
hmm...if a token does exist you'll want to pass it over to the user controller.... you'll want to be passing over to the backend to for it to be decoded and return the...(payload of the decoded token) ...which was probably the user that was stored within that token...
02–18–20 Really setting up auth
Make a frontend directory
mkdir frontendAuth
cd frontendAuth
touch index.html app.js app.css
Make a rails project/ a backend directory
rails new --api -d=postgresql backendAuth
Fix cors error
command + p gemfile
command + p cors.rb
uncomment bcrypt and bundle install jwt so you can require ‘jwt’ in the authentication controller that you will create as well as in the users controller or any other controllers that require the jwt as a dependency?
create your user model
rails g resource user username email password:digest
password digest will make it so the user model 'has_secure_password'
it will also generate the password as password_digest in the migration table and schema to assist with the login action and profile action created in the authentication controller and users controller respectively.
create your authentication controller
rails g controller authentication
allow a post to be created in your routes.rb that allows you to create a login method under the authentication controller for a login action
also allow a get to be created in your routes.rb that allows you to create a profile method under the users controller for a profile action
in your routes:
post "login", to: 'authentication#login'get 'profile', to: 'users#profile'
check that these two additional routes have been created
rails routes
create a login method so you have a login action within your authentication controller that is able to create a token and encode the attribute values that were created in your user migration table.
def login
  username = params[:user][:username]
  password = params[:user][:password]
  @user = User.find_by(username: username)
  if !@user
    render status: :unauthorized
  else
    if !@user.authenticate password
      render status: :unauthorized
    else
      secret_key = Rails.application.secrets.secret_key_base
      token = JWT.encode({
        user_id: @user.id,
        user_name: @user.username,
      }, secret_key)
      render json: {
        token: token
      }
    end
  end
end
create a login form on your frontend with at least a username password and submit input
Grab your login form from within your app.js file and create an event listener for the login form to link to the login action/endpoint you created on your backend.
You will want to store the token created for your user in localStorage.
If localStorage contains a token, you will want to create a logout button for that user with that specific token.
Decode the token on the backend from your profile action/endpoint. Fetch from the profile endpoint to get the payload information back decoded for your user, or whatever the token happens to be encoding.
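Conceptually, the token that JWT.encode produces is three base64url parts (header.payload.signature) signed with the secret key, and decoding verifies the signature before trusting the payload. Here is a stdlib-only Python sketch of that shape; the helper names are hypothetical, and a real app should use a maintained JWT library (like the jwt gem above) rather than rolling its own.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # base64url without padding, as JWTs use
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwt_encode(payload: dict, secret: str) -> str:
    # header.payload.signature, each part base64url-encoded (HS256)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def jwt_decode(token: str, secret: str) -> dict:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = body + "=" * (-len(body) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Mirrors the Rails payload above: user_id and user_name signed with the secret.
token = jwt_encode({"user_id": 1, "user_name": "josh"}, "secret_key_base")
print(jwt_decode(token, "secret_key_base"))
```

This is why the profile endpoint can trust the user_id it decodes: a tampered token fails the signature check before the payload is ever read.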
If you’re looking for a deeper dive into this you can watch this tutorial below:
|
https://joshycsm.medium.com/bare-guidelines-for-setting-up-auth-with-revisions-based-on-comments-using-ruby-on-rails-and-b303b30dff4b?source=post_internal_links---------4-------------------------------
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
ec_spool_in2
Last updated March 2020
Name
ec_spool_in2 — spool in message meta data from disk
Synopsis
#include "spool.h"
int ec_spool_in2(id, msg);
spool in message meta data from disk.
Performs initial spool-in of the message identified by id. Populates msg with the message and returns IO_DONE on success. Returns one of IO_FAIL or IO_TRANS_FAIL on failure.
|
https://support.sparkpost.com/momentum/3/3-api/apis-ec-spool-in-2
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Easy Hyperparameter Tuning in Neural Networks using Keras Tuner
This article was published as a part of the Data Science Blogathon
In the last few articles, we discussed Neural Networks, how they work, and their practical implementation in Python on the MNIST dataset. Continuing in the same vein, in this article we will look at how to tune the hyperparameters of a Neural Network to find the values that provide the highest training and testing accuracy; we don't want overfitting in our data, right?
I would highly suggest going through the Implementation of ANN on MNIST data blog to understand this one better.
What are Hyperparameters?
Hyperparameters are the values we provide to the model and are used to improve the performance of the model. They are not automatically learned during the training phase but have to be provided explicitly.
Hyperparameters play a major role in the performance of the model and should be chosen and set such that the model accuracy improves. In Neural Network some hyperparameters are the Number of Hidden layers, Number of neurons in each hidden layer, Activation functions, Learning rate, Drop out ratio, Number of epochs, and many more. In this article, We are going to use the simplest possible way for tuning hyperparameters using Keras Tuner.
Using the Fashion MNIST Clothing Classification problem which is one of the most common datasets to learn about Neural Networks. But before moving on to the Implementation there are some prerequisites to use Keras tuner. The following is required:
- Python 3.6+
- Tensorflow 2.0+ (I had Tensorflow 2.1.0 in my system but still it didn’t work so had to upgrade it to 2.6.0)
Some Frequently asked questions(FAQs):
1. How to check the Tensorflow version:
#use this command
import tensorflow
print(tensorflow.__version__)
2. How to upgrade Tensorflow?
#Use the following command
pip install --upgrade tensorflow --user
3. What to do if it still does not work?
–> Use Google colab
Let’s move on to the problem statement now. In the Fashion MNIST dataset, we have images of clothing such as T-shirts, trousers, pullovers, dresses, coats, and sandals, with a total of 10 labels.
#importing necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.datasets import fashion_mnist

#loading the dataset
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
#visualizing the dataset
for i in range(25):
    # define subplot
    plt.subplot(5, 5, i+1)
    # plot raw pixel data
    plt.imshow(X_train[i], cmap=plt.get_cmap('gray'))
# show the figure
plt.show()
#normalizing the images
X_train = X_train/255
X_test = X_test/255
In the last MNIST digit classification example, we flattened the dataset before building the model, but here we will do it in the model building code itself. I have explained the model building code in detail in the last article, kindly refer to that for an explanation.
Model Building
model = Sequential([
    #flattening the images
    Flatten(input_shape=(28,28)),
    #adding first hidden layer
    Dense(256, activation='relu'),
    #adding second hidden layer
    Dense(128, activation='relu'),
    #adding third hidden layer
    Dense(64, activation='relu'),
    #adding output layer
    Dense(10, activation='softmax')
])
#compiling the model
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#fitting the model
model.fit(X_train, y_train, epochs=10)
#evaluating the model
model.evaluate(X_test, y_test)
We have built the basic ANN model and got the training and testing accuracy as shown in the above figures. We can see the difference in accuracies and losses of the training and test sets. The loss in the training data is less but increases for the test data which can lead to wrong predictions on the unseen data.
Now let’s tune the Hyperparameters to get the values that can help in improving the model. We will be optimizing the following Hyperparameters in the model:
- Number of hidden layers
- Number of neurons in each hidden layer
- Learning rate
- Activation Function
But first, we need to install the Keras Tuner.
#use this command to install Keras Tuner
pip install keras-tuner
#importing the required libraries
from tensorflow import keras
from keras_tuner import RandomSearch
Defining the function to build an ANN model where the hyperparameters will be the Number of neurons in the hidden layer and Learning rate.
def build_model(hp):  #hp means hyperparameters
    model = Sequential()
    model.add(Flatten(input_shape=(28,28)))
    #providing range for number of neurons in a hidden layer
    model.add(Dense(units=hp.Int('num_of_neurons', min_value=32, max_value=512, step=32),
                    activation='relu'))
    #output layer
    model.add(Dense(10, activation='softmax'))
    #compiling the model
    model.compile(optimizer=keras.optimizers.Adam(hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])),
                  loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model
In the above code, we have defined the function by the name build_model(hp), where hp stands for hyperparameter. While adding the hidden layer we use the hp.Int( ) function, which takes an integer range and tests values within it for tuning. We have provided the range for neurons from 32 to 512 with a step size of 32, so the model will test on 32, 64, 96, 128, ..., 512 neurons.
Then we have added the output layer. While compiling the model Adam optimizer is used with different values of learning rate which is the next hyperparameter for tuning. hp.Choice( ) function is used which will test on any one of the three values provided for the learning rate.
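As a quick sanity check of the search space (plain Python, no Keras Tuner required), the discrete values these two hyperparameters range over can be enumerated; the variable names below are just for illustration:

```python
# Values hp.Int('num_of_neurons', min_value=32, max_value=512, step=32) can
# draw from (the bounds are inclusive), and the hp.Choice learning rates.
neuron_grid = list(range(32, 512 + 1, 32))   # 32, 64, 96, ..., 512
learning_rates = [1e-2, 1e-3, 1e-4]          # hp.Choice picks one of these
print(len(neuron_grid), neuron_grid[0], neuron_grid[-1], len(learning_rates))
```

So each trial samples one of 16 neuron counts and one of 3 learning rates.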
#feeding the model and parameters to Random Search
tuner = RandomSearch(build_model,
                     objective='val_accuracy',
                     max_trials=5,
                     executions_per_trial=3,
                     directory='tuner1',
                     project_name='Clothing')
The code above uses the Random Search hyperparameter optimizer. The following arguments are provided to RandomSearch. The first is the model-building function, build_model; next, the objective is val_accuracy, meaning the goal of the search is good validation accuracy. Next, the values of max_trials and executions_per_trial are provided, which are 5 and 3 respectively in our case, meaning 15 (5*3) training runs will be done by the tuner to find the best parameters. Directory and project name are provided to save the values of every trial.
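The trial arithmetic is worth spelling out, since it determines how long the search takes:

```python
max_trials = 5            # distinct hyperparameter combinations tried
executions_per_trial = 3  # model fits per combination, averaged for stability
total_fits = max_trials * executions_per_trial
print(total_fits)
```

Each of the 15 fits trains for the full number of epochs passed to tuner.search, so search cost scales with both settings.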
#this tells us how many hyperparameters we are tuning
#in our case it's 2: neurons, learning rate
tuner.search_space_summary()
#fitting the tuner on train dataset
tuner.search(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
The above code will run 5 trials with 3 executions each and will print the details of the trial that provides the highest validation accuracy. In the below figure, we can see the best validation accuracy achieved by the model.
We can also check the summary of all the trials done and the hyperparameters chosen for the best accuracy using the below code. The best accuracy is achieved using 416 neurons in the hidden layer and 0.0001 as the learning rate.
tuner.results_summary()
That’s how we perform tuning for Neural Networks using Keras Tuner.
Let’s tune some more parameters in the next code. Here we are also providing the range for the number of hidden layers to be used in the model, which is between 2 and 20.
def build_model(hp):  #hp means hyperparameters
    model = Sequential()
    model.add(Flatten(input_shape=(28,28)))
    #providing the range for hidden layers
    for i in range(hp.Int('num_of_layers', 2, 20)):
        #providing range for number of neurons in hidden layers
        model.add(Dense(units=hp.Int('num_of_neurons' + str(i), min_value=32, max_value=512, step=32),
                        activation='relu'))
    model.add(Dense(10, activation='softmax'))  #output layer
    #compiling the model
    model.compile(optimizer=keras.optimizers.Adam(hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])),  #tuning learning rate
                  loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model
#feeding the model and parameters to Random Search
tuner = RandomSearch(build_model,
                     objective='val_accuracy',
                     max_trials=5,
                     executions_per_trial=3,
                     directory='project',
                     project_name='Clothing')
#tells us how many hyperparameters we are tuning
#in our case it's 3: layers, neurons, learning rate
tuner.search_space_summary()
#fitting the tuner
tuner.search(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
The summary and the best accuracy of the model are shown by the code below. This time we got 0.89 as the validation accuracy.
tuner.results_summary()
Endnotes:
This was the simplest possible way to tune the parameters in a Neural Network. Please refer to the official documentation of Keras Tuner for more details.
|
https://www.analyticsvidhya.com/blog/2021/08/easy-hyperparameter-tuning-in-neural-networks-using-keras-tuner/?utm_source=feed&utm_medium=feed-articles
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
Hi, I want to develop a new binding for my XBee devices on OH2. I have followed the wiki and I have created a new binding in Eclipse.
The Java library that I want to integrate into OH2 is here:
Following the guide, I have in the handler:
@Override
public void initialize() {
    // TODO: Initialize the thing. If done set status to ONLINE to indicate proper working.
    // Long running initialization should be done asynchronously in background.
    updateStatus(ThingStatus.ONLINE);

    String PORT = "COM1";
    int BAUD_RATE = 9600;

    XBeeDevice myDevice = new XBeeDevice(PORT, BAUD_RATE);
}

public class MyDataReceiveListener implements IDataReceiveListener {
    /*
     * (non-Javadoc)
     * @see com.digi.xbee.api.listeners.IDataReceiveListener#dataReceived(com.digi.xbee.api.models.XBeeMessage)
     */
    @Override
    public void dataReceived(XBeeMessage xbeeMessage) {
        System.out.format("From %s >> %s | %s%n", xbeeMessage.getDevice().get64BitAddress(),
                HexUtils.prettyHexString(HexUtils.byteArrayToHexString(xbeeMessage.getData())),
                new String(xbeeMessage.getData()));
    }
}
Now, where i can put the code to receive the command from XBee device? In the initialize function?
try {
    myDevice.open();
    myDevice.addDataListener(new MyDataReceiveListener());
    System.out.println("\n>> Waiting for data...");
} catch (XBeeException e) {
    e.printStackTrace();
}
Every Thing created should have at least two parameters: the address of the XBee from which to receive commands, and the pin number (the value to assign to the Thing). The address and pin number/value are in the received data packet. When data is received, I have to assign the value to the correct item.
Can you help me in the development?
Thanks!
Roberto
https://community.openhab.org/t/new-binding-for-xbee-device/20026
Note:
The current version does not support the AES Encryption feature.
Include "yalgaar_api.h" in your code.
#include "yalgaar_api.h"
To connect a Yalgaar client, use the following method. It must have a valid client key.
yalgaar_connect(char *clientKey,bool isSecure,char *uuid,char *AES_SECURITY_KEY,unsigned int AESTYPE,void(*connectionCallback) (yalgaar_error_t error_msg));
To publish a message, use the following method.
yalgaar_publish(const unsigned char *channel,char *message);
To subscribe to messages, use the following method.
yalgaar_subscribe(char *channel,void(*subscribeMessageCallback)(void *),void(*presenceMessageCallback)(struct presence_t *),void(*errorMessageCallback)(void *));
yalgaar_subscribes(char **channel,void(*subscribeMessageCallback)(void *),void(*presenceMessageCallback)(struct presence_t *),void(*errorMessageCallback)(void *));
To unsubscribe from messages, use the following method.
yalgaar_unsubscribe(channel);
To get a list of all users subscribed to the specified channel name, use the following method.
yalgaar_GetUserList(char *channel,void(*userListCallback)(Sll *),void(*errorMessageCallback)(void *));
To get a list of all channels subscribed to by the specified user, use the following method.
yalgaar_GetChannelList(char *UUID,void(*channelListCallback)(Sll *),void(*errorMessageCallbacks)(void *));
To get the message history for the specified channel name, use the following method.
yalgaar_GetHistoryMessage(char *channelName,char *messageCount,void(* historymessagecallback)(char *),void(*errorMessageCallback)(char *));
To disconnect the connection with the Yalgaar client, use the following method.
yalgaar_disconnect();
#include "yalgaar_api.h"

void connection_Callback(int result, char *error_msg);

void subscribe_message_callback(void *payload)
{
    fprintf(stderr, "\n\nsubscribe_message_callback=%s\n", (char *)payload);
}

void error_message_callback(void *error_msg)
{
    printf("\n Error :%s \n", (char *)error_msg);
}

void connection_Callback(int result, char *error_msg)
{
    yalgaar_error_t rc = FAILURE;
    char err_string[YALGAAR_ERROR_MESSAGE_LENGTH];
    if (!result) {
        printf("\n Connected..\n");
        rc = yalgaar_subscribe("YourChannel", &subscribe_message_callback, NULL, &error_message_callback);
        if (SUCCESS != rc) {
            enum_to_message(rc, err_string);
            printf("\nError subscribing : %s ", err_string);
            fflush(stdout);
        }
        yalgaar_publish("YourChannel", "This is Yalgaar Yocto Project SDK Example");
    } else {
        printf("\nError:%s", error_msg);
        fflush(stdout);
    }
}

int main(int argc, char **argv)
{
    yalgaar_error_t rc = FAILURE;

    /* Yalgaar connect method called */
    rc = yalgaar_connect("YourClientKey", 1, "UUID", 0, 0, &connection_Callback);
    if (SUCCESS != rc) {
        printf("\nError yalgaar_connect : %d ", rc);
        return -1;
    }

    while (1) {
        /* this function is called to wait for data received from Yalgaar */
        yalgaar_wait_to_read();
    }

    rc = yalgaar_disconnect();
    return rc;
}
https://www.yalgaar.io/documentation/yocto-api
Header:
#include <CUnit/TestDB.h> (included automatically by <CUnit/CUnit.h>)
typedef struct CU_TestRegistry
typedef CU_TestRegistry* CU_pTestRegistry
CU_ErrorCode CU_initialize_registry(void)
void CU_cleanup_registry(void)
CU_BOOL CU_registry_initialized(void)
CU_pTestRegistry CU_get_registry(void)
CU_pTestRegistry CU_set_registry(CU_pTestRegistry pTestRegistry)
CU_pTestRegistry CU_create_new_registry(void)
void CU_destroy_existing_registry(CU_pTestRegistry* ppRegistry)
The test registry is the repository for suites and associated tests. CUnit maintains an active test registry which is updated when the user adds a suite or test. The suites in this active registry are the ones run when the user chooses to run all tests.
The CUnit test registry is a data structure CU_TestRegistry declared in <CUnit/TestDB.h>. It includes fields for the total numbers of suites and tests stored in the registry, as well as a pointer to the head of the linked list of registered suites.
typedef struct CU_TestRegistry
{
unsigned int uiNumberOfSuites;
unsigned int uiNumberOfTests;
CU_pSuite pSuite;
} CU_TestRegistry;
typedef CU_TestRegistry* CU_pTestRegistry;
The user normally only needs to initialize the registry before use and clean up afterwards. However, other functions are provided to manipulate the registry when necessary.
The active CUnit test registry must be initialized before use. The user should call CU_initialize_registry() before calling any other CUnit functions. Failure to do so will likely result in a crash.
An error status code is returned:
This function can be used to check whether the registry has been initialized. This may be useful if the registry setup is distributed over multiple files that need to make sure the registry is ready for test registration.
When testing is complete, the user should call this function to clean up and release memory used by the framework. This should be the last CUnit function called (except for restoring the test registry using CU_initialize_registry() or CU_set_registry()).
Failure to call CU_cleanup_registry() will result in memory leaks. It may be called more than once without creating an error condition. Note that this function will destroy all suites (and associated tests) in the registry. Pointers to registered suites and tests should not be dereferenced after cleaning up the registry.
Calling CU_cleanup_registry() will only affect the internal CU_TestRegistry maintained by the CUnit framework. Destruction of any other test registries owned by the user is the responsibility of the user. This can be done explicitly by calling CU_destroy_existing_registry(), or implicitly by making the registry active using CU_set_registry() and calling CU_cleanup_registry() again.
Other registry functions are provided primarily for internal and testing purposes. However, general users may find use for them and should be aware of them.
These include:
Returns a pointer to the active test registry. The registry is a variable of data type CU_TestRegistry. Direct manipulation of the internal test registry is not recommended - API functions should be used instead. The framework maintains ownership of the registry, so the returned pointer will be invalidated by a call to CU_cleanup_registry() or CU_initialize_registry().
Replaces the active registry with the specified one. A pointer to the previous registry is returned. It is the caller's responsibility to destroy the old registry. This can be done explicitly by calling CU_destroy_existing_registry() for the returned pointer. Alternatively, the registry can be made active using CU_set_registry() and destroyed implicitly when CU_cleanup_registry() is called. Care should be taken not to explicitly destroy a registry that is set as the active one. This can result in multiple frees of the same memory and a likely crash.
Creates a new registry and returns a pointer to it. The new registry will not contain any suites or tests. It is the caller's responsibility to destroy the new registry by one of the mechanisms described previously.
Destroys and frees all memory for the specified test registry, including any registered suites and tests. This function should not be called for a registry which is set as the active test registry (e.g. a CU_pTestRegistry pointer retrieved using CU_get_registry()). This will result in a multiple free of the same memory when CU_cleanup_registry() is called. ppRegistry may not be NULL, but the pointer it points to may be. In that case, the function has no effect. Note that *ppRegistry will be set to NULL upon return.
The following data types and functions are deprecated as of version 2. To use these deprecated names, user code must be compiled with USE_DEPRECATED_CUNIT_NAMES defined.
#include <CUnit/TestDB.h> (included automatically by <CUnit/CUnit.h>).
http://code.google.com/p/c-unit/wiki/test_registry
Microsoft MVP BizTalk Server
Oracle ACE
Declarative Services is one of the exciting new features of Windows Communication Foundation (WCF) 4.0. By declarative, we are referring to services that are completely modeled by using the Extensible Application Markup Language (XAML). As you might expect, this capability will open the door to a whole new set of scenarios in Service Oriented systems which are really hard to implement with the current technologies. The specific capabilities of declarative services will be the subject of a future post. Today, I would like to explore one of the features that enable the implementation of declarative services: XAML serialization.
You don’t really need to know a lot about declarative services to figure out that there must be a component that handles the translation between XAML and CLR types. This is precisely the role of the XamlServices class included in the System.Runtime.Xaml namespace. This class implements the serialization and deserialization process between XAML and .NET types, and consequently it’s a key component of the declarative services infrastructure. Although we can currently use different mechanisms for implementing XAML serialization, XamlServices exposes a programming model considerably simpler than other alternatives.
Let’s take the following class definition.
public class Contact
{
    private string firstName;
    private string lastName;
    private DateTime birthDay;

    public Contact()
    { }

    public string FirstName
    {
        get { return firstName; }
        set { firstName = value; }
    }

    public string LastName
    {
        get { return lastName; }
        set { lastName = value; }
    }

    public DateTime BirthDay
    {
        get { return birthDay; }
        set { birthDay = value; }
    }
}
In order to serialize an instance of the Contact class into XAML we can use the XamlServices class as illustrated in the following code.
private static void XamlSerializationSample()
{
    Contact contact = new Contact();
    contact.FirstName = "fn";
    contact.LastName = "ln";
    contact.BirthDay = DateTime.Now;
    XmlWriter writer = XmlWriter.Create(file path...);
    XamlServices.Save(writer, contact);
}
The output of the serialization looks like the following.
<?xml version="1.0" encoding="utf-8"?>
<Contact BirthDay="2008-12-16T19:58:14.1173568-08:00" FirstName="fn" LastName="ln"
         xmlns="clr-namespace:XamlSerialization;assembly=XamlSerialization" />
Additionally, we can use a similar algorithm to deserialize the XAML representation into an instance of the contact class.
private static void XamlDeserializationSample()
{
    XmlReader reader = XmlReader.Create(file path...);
    Contact contact = (Contact)XamlServices.Load(reader);
}
Has WCF been updated to support data shaping on the wire based on the XAML serializer? Your sample alludes to this.
I have a few gripes with WCF contracts, and the biggest one is the lack of data shaping with the DCS and the ability to use attributes like the XmlSerializer supported. If the XAML serializer were supported on the wire, we would be closer to this.
Sounds like WCF4 will then have support for serializing Directed Acyclic Graph data structures then?
http://weblogs.asp.net/gsusx/archive/2008/12/17/using-xaml-serialization-in-wcf-4-0.aspx
DynamicProxy2, from the Castle Project, provides a method interception framework.
The AutofacContrib DynamicProxy2 integration enables method calls on Autofac components to be intercepted by other components. Common use-cases are transaction handling, logging, and declarative security.
Add the interception module itself to Autofac:
builder.RegisterModule(new StandardInterceptionModule());
Interceptors implement the DynamicProxy IInterceptor interface:

public class CallLogger : IInterceptor
{
    TextWriter _output;

    public CallLogger(TextWriter output)
    {
        _output = output;
    }

    public void Intercept(IInvocation invocation)
    {
        _output.WriteLine("Calling method {0}.", invocation.Method.Name);
        invocation.Proceed();
    }
}
Interceptors must be registered with the container:
builder.Register(c => new CallLogger(Console.Out)).Named("log-calls");
Interceptors can be registered as a particular typed service or with a name as required.
To attach a service to an interceptor, either attribute its implementation type with the InterceptAttribute:
[Intercept("log-calls")]
public class MyCallsAreLogged
{
public virtual int Method();
}
Or use the InterceptedBy() extension:
builder.Register<MyCallsAreLogged>().InterceptedBy("log-calls");
Or add an IEnumerable<Service> to its extended properties with the ExtendedPropertyInterceptorProvider.InterceptorsPropertyName key:
var interceptors = new Service[] { new NamedService("log-calls") };
builder.Register<MyCallsAreLogged>()
.WithExtendedProperty(ExtendedPropertyInterceptorProvider.InterceptorsPropertyName, interceptors);
Components created using expressions, or those registered as instances, cannot be subclassed by the DynamicProxy2 engine. In these cases, it is necessary to use interface-based proxies.
The simplest way to enable this behaviour is to use the FlexibleInterceptionModule instead of the standard one:
builder.RegisterModule(new FlexibleInterceptionModule());
For tighter control of how proxies are attached, see the InterceptionModule base class.
To enable proxying via interfaces, the component must provide its services through interfaces only. For best performance, all such service interfaces should be part of the registration, i.e. included in As<X>() clauses.
http://code.google.com/p/autofac-contrib/wiki/DynamicProxy2
W3C is pleased to receive the XAdES Submission from Cisco Systems.
XAdES extends the XML Signature Specification with additional syntax and processing necessary to satisfy the European Commission Directive on a Community Framework for Electronic Signatures as well as other use-cases requiring long-term validity. XAdES itself contains several modules that permit varying levels of security such as non-repudiation with time-stamps, certification data and certification archives.
W3C Team members have commented on the ETSI specifications, attended meetings of ETSI-ESI and continue to maintain an informal liaison. The following issues were part of these discussions that might be of interest for a larger community.
During the elaboration of XAdES Denis Pinkas argued that the persistence of DNS and the domain will be crucial for the security of the Signatures. If the domain could be re-attributed or if the underlying schema could be changed, the URI indicating the meaning of the elements could change. Thus the semantics described would change and the signed document or information would change meaning: the signature could be compromised.
In this context it is worthwhile to note that ETSI has adopted a policy with respect to persistence of the URIs for XML Namespaces also affecting those used in XAdES, which was a major step. The policy is similar to the persistence policy of the W3C. This allowed the XAdES specification to keep the initial ETSI-namespace used while being conformant to W3C's expectations about persistence. It means that the issue has been resolved satisfactorily and might give some hints for others using namespaces in security-related XML applications.
In section 5.1.2 (The ObjectIdentifierType data type) the identifier can be specified using URIs or OIDs. XAdES requires the OIDs to be represented in the form URN-OID as specified by RFC 3061, which provides an namespace for OIDs and allows them to be expressed in a format conforming to RFC 2396 [URI]. This helps bridge the gap between the ASN.1 and XML worlds.
It might be confusing that the terms SignedProperties and UnSignedProperties are used to qualify XAdES information that applies to a particular ds:Signature, and do not actually indicate whether the information has or has not been signed by some other signature. A different name for these elements might have helped to avoid misunderstandings.
XML Signature specifies that the ds:SignatureProperties can only include information about the ds:Signature. But XAdES includes other information such as time-stamps, countersignatures, and certificates that are not exclusively about the signature. Consequently, XAdES encapsulates all of its information (even that information about the signature) in XML-Signature's ds:Object element. However, this rationale is not stated in the specification and readers might find this choice confusing otherwise.
The DataObjectFormat element

The DataObjectFormat element provides information that describes the format of the signed data object as it is rendered to the user. This element, which is intended to describe how the document being signed is to be rendered to the signing user, seems misguided and dangerous. First, the information being provided (e.g., Encoding) may differ or disagree with information found elsewhere in the processing (e.g., the transport protocol). Second, it is not clear what this information applies to within the signature processing pipe-line. For example, does this describe the data before or after a signature transform, such as canonicalization? Third, who is to say that the textual description actually corresponds to what is being signed by a user? It does not appear that the XAdES specification is defining application behavior with respect to this element, so we have no expectation about how the correspondence between this description and the actual rendering can be drawn or enforced.
Furthermore, explicit specification of rendering conditions can be counter to W3C's efforts to make the Web device independent, accessible to all users, and to permit multiple modes of interaction. Constraints on the validity of a signature because of the media device or rendering may prevent the users of devices such as mobile phones, voice browsers, and screen readers from participating fully.
XAdES uses and extends the XML-Signature Recommendation. It specifies how to add qualifying information to an XML-Signature such that it satisfies the requirements of long-term validity and the Advanced Electronic Signature according to the "Directive 1999/93/EC of the European Parliament and of the Council of 13 December 1999 on a Community framework for electronic signatures" (hereafter named Directive). Ultimately, a signature according to XAdES incorporates a commitment that has been explicitly endorsed under a signature policy, at a given time, by a signer under an identifier, e.g. a name or a pseudonym, and optionally a role.
The Directive provides a legal framework for the recognition of digital/electronic signatures. It distinguishes between normal electronic signatures and advanced electronic signatures. It is worth noting that the Directive does not distinguish between other levels of security. In fact, Article 5.2 rules that Member States shall ensure that an electronic signature is not denied legal effectiveness and admissibility as evidence in legal proceedings solely on the grounds that it is:
Advanced Signatures have the following requirements, stated by Article 2.2 of the Directive:
Article 5.1 gives the incentive for advanced electronic signatures:
Member States shall ensure that advanced electronic signatures [...]
(a) satisfy the legal requirements of a signature in relation to data in electronic form in the same manner as a handwritten signature satisfies those requirements in relation to paper-based data; and
(b) are admissible as evidence in legal proceedings.
The technical work of standardization of electronic signatures was supported by the European Commission and mandated to the Information and Communication Technologies Standards Board (ICTSB), a round table of most European IT standards bodies and some international standards bodies such as the W3C.
For the purpose of standardizing electronic signatures, ICTSB created the EESSI Steering Group under the auspecies of ICTSB. The EESSI Steering Group delegated the development of appropriate document formats for electronic signatures that fulfill the requirements of the Directive to the ETSI ESI Activity. (This submission is a specification delivered by ETSI in the framework of the ESI Activity.)
The original format for Advanced Electronic Signatures was specified in ETSI TS 101 733 as a ASN.1 representation. XAdES is based on the contents of the ETSI Technical Specification TS 101 903: "XML Advanced Electronic Signatures (XAdES)". Given the importance of electronic signatures for commerce and the growing importance of XML within that context, (e.g., ebXML), the EESSI Steering Group invited the W3C to help enable advanced electronic signatures for XML applications. Funding from EESSI and the European Commission and the informal liaison with the W3C contributed to the development of XAdES under the auspices of the ETSI STF 178 task force. Some of the European Commission supported experts from this XAdES task force were also participating actively in W3C's XML-Signature Working Group. The result of this collaboration was a preference for a translation of ETSI TS 101 733 that uses the full benefit of XML instead of a narrow transliteration of the ASN.1 format. In ETSI, the XAdES-Specification carries the number TS 101 903.
XAdES provides additional syntax and semantics to XML-Signature for such things as non-repudiation and long term validity. To be able to address the requirements of advanced electronic signatures in the sense of the Directive there are also additional measures needed such as the use of secure signature creation devices. These additional specifications are provided on the EESSI Web site.
The specifications aiming to fulfill the requirements of the Directive will need approval from the so called Article 9 Committee. The committee has not yet made a decision and consequently these specifications have not yet been published in the European Official Journal. However, the chair of ETSI ESI has reported that the negotiations with the Article 9 Committee on the original (ASN.1) ETSI TS 101 733 specification has not yielded major changes in syntax or semantics. Consequently, while XAdES will be updated to track any adjustments in ETSI TS 101 733, few changes are expected.
The submission will be brought to the attention of the XML Signature Activity's mailing-list w3c-ietf-xmldsig@w3.org. The work on XML Signature closed on 31 December 2002. The Activity Statement of the XML Signature still encourages the use of w3c-ietf-xmldsig@w3.org for the discussion of future work. XAdES is intended as a contribution to future work in this area and might be taken up by future Working Groups developing further features in the area of XML Signature.
XAdES will also be brought to the attention of Web Services Architecture Group and the XML Protocol Working Group. As ebXML may rely on Web Services, as ebXML has chosen XML Protocol/Soap as it's underlying Protocol, the securing of such commercial exchanges -especially those carrying high value- can make use of XML Signature. The developers of XAdES believe that it can provide the necessary formats to enable certain exchanges to meet various levels of legal certainty specified by the European Commission Directive on Electronic Signatures.
Rigo Wenning
Privacy Activity Lead
with contributions from
Joseph Reagle XML Signature Co-Chair
Last update $Date: 2008/04/23 14:58:55 $
http://www.w3.org/Submission/2003/01/Comment
I've been using Velocity (Microsoft's new caching product) on a project, and while lurking on the lists I spotted a post about testability around the Velocity APIs. There may be reasons for doing this, but it is something I avoid. I tend to use the SRP principle when it comes to anything that is associated with cross-cutting concerns in my code (caching, logging, session etc.). As such I have one class that just deals with caching; a simple example would be.
public interface ICache
{
    void Add(Basket res, string key);
    Basket Get(string key);
    void Remove(string key);
}
public class Velocity : ICache
{
    private const string CACHE = "MyCache";

    public void Add(Basket res, string key)
    {
        var CacheCluster1 = new CacheFactory();
        var Cache1 = CacheCluster1.GetCache(CACHE);
        Cache1.Add(key, res);
    }

    public Basket Get(string key)
    {
        var CacheCluster1 = new CacheFactory();
        var Cache1 = CacheCluster1.GetCache(CACHE);

        return (Basket) Cache1.Get(key);
    }

    public void Remove(string key)
    {
        var CacheCluster1 = new CacheFactory();
        var Cache1 = CacheCluster1.GetCache(CACHE);

        Cache1.Remove(key);
    }
}
So simple actions for Add, Remove, and Get wrap up what Velocity is doing. The rest of my code just needs to know about this class, with the rest of the details abstracted away. If I wanted, I could use memcached rather than Velocity; as long as my memcached class implements ICache my code is not concerned. All it is concerned with is a class that implements ICache.
var cacheValue = cache.Get(basketid);
if (cacheValue == null)
{
    var res = GetBasket(basketid);
    if (res != null)
    {
        cache.Add(res, cachekey);
    }
    return res;
}
return cacheValue;
Now you can use DI to inject your caching class; using the IoC this would simply be
IOC.Register<ICache, Velocity>();
Neat.
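For illustration, the same single-responsibility pattern can be sketched in Python, with an in-memory stand-in playing the role of the Velocity wrapper (the names here are made up for the example):

```python
from abc import ABC, abstractmethod

# Small cache abstraction: calling code depends only on this interface,
# so the concrete backend (Velocity, memcached, in-memory) is swappable.
class Cache(ABC):
    @abstractmethod
    def add(self, value, key): ...
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def remove(self, key): ...

class InMemoryCache(Cache):
    """Stand-in backend; a Velocity or memcached wrapper would implement the same API."""
    def __init__(self):
        self._store = {}
    def add(self, value, key):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)
    def remove(self, key):
        self._store.pop(key, None)

# Same cache-aside logic as the C# snippet above.
def get_basket_cached(cache, basket_id, load_basket):
    cached = cache.get(basket_id)
    if cached is None:
        basket = load_basket(basket_id)
        if basket is not None:
            cache.add(basket, basket_id)
        return basket
    return cached

cache = InMemoryCache()
first = get_basket_cached(cache, "42", lambda _id: {"id": _id})
second = get_basket_cached(cache, "42", lambda _id: None)  # served from cache
```

Because the calling code depends only on the Cache abstraction, swapping the backend is a one-line change at registration time, which is exactly what the IoC registration above buys you.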
Hi.
I don't like the interface, because it is tightly coupled to the Basket class. Probably an interface like this is more appropriate:
public interface ICache<T>
{
void Add(T item, string key);
void Remove(string key);
T Get(string key);
}
The implementation is :
public class Velocity<T> : ICache<T>
{
    private const string CACHE = "MyCache";

    #region ICache<T> Members

    public void Add(T item, string key)
    {
        var CacheCluster1 = new CacheFactory();
        var Cache1 = CacheCluster1.GetCache(CACHE);
        Cache1.Add(key, item);
    }

    public void Remove(string key)
    {
        var CacheCluster1 = new CacheFactory();
        var Cache1 = CacheCluster1.GetCache(CACHE);
        Cache1.Remove(key);
    }

    public T Get(string key)
    {
        var CacheCluster1 = new CacheFactory();
        var Cache1 = CacheCluster1.GetCache(CACHE);
        return (T)Cache1.Get(key);
    }

    #endregion
}
And finally
public class BasketRepository : Velocity<Basket>
I don't know how this will integrate with an IOC container but it's more pleasant to me.
Any thoughts?
http://weblogs.asp.net/astopford/archive/2008/09/16/caching-and-the-srp.aspx
Note this schema is NOT a normative schema. It contains types derived from all the builtin simple type definitions with the same local name but in a distinct namespace, for use by applications which do not wish to import the full XMLSchema schema. Since derivation is not symmetric, unexpected results may follow from mixing references to these definitions with references to the definitions in the XMLSchema namespace. For example, although dt:positiveInteger is derived from xs:integer, the converse does not hold.
http://www.w3.org/2000/10/XMLSchema-datatypes.xsd
Sorry for asking a topic that was asked before... but I still don't know how to use it..
Below is an example created by me.. I hope that someone can tell me which part I have to fix...
This is the header file:
int check_SkipCard (string input)
{
    int SkipCard = 0;
    if (input == "skip")
        SkipCard = 1;
    return SkipCard;
}
Now is the main .cpp file:
#include <iostream>
#include <cstring>
#include "SkipCard.h"
using namespace std;

main()
{
    int skip;
    int i, j;
    for (i = 0; i < 2; i++) {
        for (j = 0; j < 2; j++) {
            cout << "enter" << i << j << ": ";
            cin >> skip;
            if (skip == 1)
                j = 2;
            else
                cout << "no skip dou\n";
        }
    }
    system("pause");
    return EXIT_SUCCESS;
}
The error i get is :
`string' was not declared in this scope, and
expected `,' or `;' before '{' token
https://www.daniweb.com/programming/software-development/threads/148502/about-header-file
namespace TextureTools
Texture tools.
Tools for generating, compressing and optimizing textures.
This library is built if WITH_TEXTURETOOLS is enabled when building Magnum. To use this library with CMake, you need to request the TextureTools component of the Magnum package in CMake and link to the Magnum::TextureTools target:
find_package(Magnum REQUIRED TextureTools) # ... target_link_libraries(your-app Magnum::TextureTools)
Note that functionality depending on GL APIs is available only if Magnum is built with both WITH_GL and TARGET_GL enabled (which is done by default).
See Downloading and building and Usage with CMake for more information.
Functions
- auto atlas(const Vector2i& atlasSize, const std::vector<Vector2i>& sizes, const Vector2i& padding = Vector2i()) -> std::vector<Range2Di>
  Pack textures into texture atlas.
- void distanceField(GL::Texture2D& input, GL::Texture2D& output, const Range2Di& rectangle, Int radius, const Vector2i& imageSize = Vector2i())
  Create signed distance field.
- void flip()
  Flip an image upside down.
Function documentation
std::vector<Range2Di> Magnum::TextureTools::atlas(const Vector2i& atlasSize, const std::vector<Vector2i>& sizes, const Vector2i& padding = Vector2i())

Pack textures into texture atlas.

Packs many small textures into one larger one. If the textures cannot be packed into the required size, an empty vector is returned.

Padding is added twice to each size and the atlas is laid out so the paddings don't overlap. Returned sizes are the same as the original sizes, i.e. without the padding.
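As a rough illustration of what a packer like this does, here is a naive shelf-packing sketch in Python. This is not Magnum's algorithm; it just returns placement offsets, and an empty list when the rectangles don't fit:

```python
# Naive "shelf" packer: place rectangles left-to-right in rows, starting a
# new row when the current one is full. Padding is applied on every side,
# so padded footprints never overlap; returned offsets point at the
# unpadded rectangle, mirroring the behaviour described above.
def shelf_atlas(atlas_size, sizes, padding=(0, 0)):
    aw, ah = atlas_size
    px, py = padding
    placements = []           # (x, y) offsets, same order as sizes
    x = y = row_h = 0
    for w, h in sizes:
        pw, ph = w + 2 * px, h + 2 * py   # padded footprint
        if x + pw > aw:                   # wrap to a new shelf
            x, y = 0, y + row_h
            row_h = 0
        if y + ph > ah or pw > aw:
            return []                     # doesn't fit: empty result
        placements.append((x + px, y + py))
        x += pw
        row_h = max(row_h, ph)
    return placements

spots = shelf_atlas((64, 64), [(32, 32), (32, 32), (32, 16)])
```

A real packer would sort by size and track free space more cleverly, but the contract (placements on success, empty result on failure) is the same.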
void Magnum::TextureTools::distanceField(GL::Texture2D& input, GL::Texture2D& output, const Range2Di& rectangle, Int radius, const Vector2i& imageSize = Vector2i())
Create signed distance field.
Converts a binary image (stored in the red channel of input) to a signed distance field (stored in the red channel in rectangle of output). The purpose of this function is to convert a high-resolution binary image (such as vector artwork or font glyphs) to a low-resolution grayscale image. The image will then occupy much less memory and can be scaled without aliasing issues. Additionally it provides a foundation for features like outlining, glow or drop shadow essentially for free.
You can also use the magnum-distancefieldconverter utility to do distance field conversion on the command line. By extension, this functionality is also provided through the magnum-fontconverter utility.
The algorithm
For each pixel inside rectangle the algorithm looks at the corresponding pixel in input and tries to find the nearest pixel of the opposite color in the area given by radius. The signed distance between the points is then saved as the value of the given pixel in output. A value of 1.0 means that the pixel was originally colored white and the nearest black pixel is farther than radius; a value of 0.0 means that the pixel was originally black and the nearest white pixel is farther than radius. Values around 0.5 are around edges.
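The lookup described above can be sketched as a brute-force reference in Python. The real implementation runs on the GPU and is far more efficient, but the mapping of saturated distances to the [0, 1] range is the same idea:

```python
import math

def distance_field(binary, radius):
    """Brute-force signed distance field of a binary image (list of rows
    of 0/1). For each pixel, find the nearest pixel of the opposite
    value within `radius`; white pixels map to (0.5, 1.0], black pixels
    to [0.0, 0.5), saturating at `radius`."""
    h, w = len(binary), len(binary[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            v = binary[y][x]
            nearest = float(radius)  # farther than radius -> saturate
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] != v:
                        nearest = min(nearest, math.hypot(dx, dy))
            d = nearest / radius  # in [0, 1]
            # signed: white pixels end up above 0.5, black below
            out[y][x] = 0.5 + 0.5 * d if v else 0.5 - 0.5 * d
    return out
```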
The resulting texture can be used with bilinear filtering. It can be converted back to binary form in a shader using e.g. the GLSL smoothstep() function with a step around 0.5 to create antialiased edges. Or you can exploit the distance field features to create many other effects. See also Shaders::
Based on: Chris Green - Improved Alpha-Tested Magnification for Vector Textures and Special Effects, SIGGRAPH 2007.
Source: http://doc.magnum.graphics/magnum/namespaceMagnum_1_1TextureTools.html
The two primary Active Directory components are its logical and physical structures, which respectively involve the organization and communication of objects. A third component, known as the schema, defines objects that make up the Active Directory. The discussion of the schema is included in the logical structure section for the sake of convenience.
Logical Structure
The base logical components of the Active Directory are objects and their associated attributes. Object classes are merely definitions of the object types that can be created in the Active Directory. The schema is the Active Directory mechanism for storing object classes. It also permits the addition of other object classes and associated attributes.
Active Directory objects are organized around a hierarchical domain model. This model is a design facility that permits the logical arrangement of objects within administrative, security, and organizational boundaries. Each domain has its own security permissions and unique security relationships with other domains. The Active Directory utilizes multi-master replication to communicate information and changes between domains.
The following sections provide an overview of domain model building blocks: domains, domain trees, forests, organizational units, and the schema.
Domains
The Active Directory manages a hierarchical infrastructure of networked computers with the domain as the foundation. A domain comprises computer systems and network resources that share a common logical security boundary. It can store more than 17 terabytes within the Active Directory database store. Although a domain can cross physical locations, all domains maintain their own security policies and security relationships with other domains. They are sometimes created to define functional boundaries such as an administrative unit (for example, marketing versus engineering). They are also viewed as groupings of resources or servers that utilize a common domain name, known as a namespace. For example, all servers or resources in the EntCert.com namespace belong to a single domain.
In very simple terms, every domain controller has the following information as part of its Active Directory:
Data on every object and container object within the particular domain
Metadata about other domains in the tree (or forest) to provide directory service location
Listing of all domains in the tree and forest
Location of the server with the Global Catalog
Domain Trees
When multiple domains share a common schema, security trust relationships, and a Global Catalog, a domain tree is created, defined by a common and contiguous namespace. Thus, for example, all domains with the ending namespace of EntCert.com belong to the EntCert domain tree. A domain tree is formed through the expansion of child domains such as Sales.EntCert.com or Research.EntCert.com. In this example, the root domain is EntCert.com.
The first created domain, known as the root domain, contains the configuration and schema data for the tree and (as we shall see) the forest. A tree structure is formed by adding child domains. There are a number of reasons for creating multiple domains in a tree; for example:
Discretely managing different organizations or providing unit identities
Enforcing different security boundaries and password policies
Requiring a better method of controlling Active Directory replication
Better handling of a very large number of managed objects
Decentralizing administration
A single domain contains a complete Active Directory partition for all of its objects. It is also, by definition, a complete domain tree. As child domains are added to the domain tree, Active Directory partitions are replicated to one or more domain controllers within each of the domains.
Domain Forests
Trust relationships can be formed between domain trees with different namespaces. When this occurs, a domain forest is created, which allows the enterprise to have different domain names, such as "entcert.com" and "unint.com."
All trees within the forest share a number of common attributes, including a Global Catalog, configuration, and schema. A forest is simply a reference point between trees, and does not have its own name. Forests utilize the Kerberos security technology to create transitive trust relationships between trees.
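Transitivity means that if tree A trusts tree B and B trusts C, then A implicitly trusts C. The idea can be pictured as graph reachability — a minimal sketch, purely illustrative and unrelated to the actual Kerberos protocol machinery:

```python
def trusts(direct, a, b):
    """True if tree `a` can reach tree `b` through the transitive
    closure of the direct trust edges (a dict: tree -> set of
    directly trusted trees)."""
    seen, stack = set(), [a]
    while stack:
        t = stack.pop()
        if t == b:
            return True
        if t in seen:
            continue
        seen.add(t)
        stack.extend(direct.get(t, ()))
    return False
```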
Organizational Units
Domains and child domains can be internally divided into administrative substructures known as organizational units (OUs), each of which can compartmentalize more than 10 million objects. As container objects, OUs can be nested within other OUs. For example, the marketing division may be defined as an organizational unit, and product groups within this division may be defined as suborganizational units. A domain usually comprises one or more organizational units arranged hierarchically. Objects can be organized within organizational units for administrative purposes. The OU acts as a container that lists the object contents, including users, computer systems, and devices such as printers.
An organizational unit is a logical subset defined by security or administrative parameters. In this administrative arrangement, the specific functions of the system administrator can also be easily segmented or delegated at the organizational unit level. From a system administrator's vantage point, this is very important to understand. It is possible to delegate system management responsibility solely to certain activities within a domain, OU, or child subsidiary OU. For example, a person within an organizational unit can be granted authority to manage print and file server functions, but be denied authority to add, modify, or delete user accounts.
Organizational units are subunits within a domain or child domain.
Trees and Forest Scaling and Extensibility
The Active Directory scales across environments ranging from a single server to a domain of one million users or more. The basis of this scaling is the peer-to-peer directory service relationship that is established between domains. Every domain server (known as a domain controller) is provided updated information on Active Directory objects. Consistency across domains is ensured through the automatic replication services. To avoid extremely large and unwieldy directories, the Active Directory creates tree partitions that comprise small portions of the entire enterprise directory. However, every directory tree has sufficient information to locate other objects in the enterprise. In addition to greater efficiency, objects that are more frequently used are placed in the local store for more rapid access. A trust relationship is automatically established between the domains in the same tree, so it is possible to transparently locate resources anywhere in the tree or forest enterprise where the Active Directory resides.
Schema
The schema is simply a framework of definitions that establishes the type of objects available to the Active Directory. These definitions are divided into object classes, and the information that describes the object is known as its attributes. There are two types of attributes: those that must exist and those that may exist. For example, the schema defines a user object class as having the user's name as a required attribute; the user's physical location or job description is optional. Attributes are used to further help distinguish one object from another. They include Object Name, Object Identifier (OID), Syntax, and Optional Information.
The schema is stored within the Active Directory database file Ntds.dit. Object definitions are stored as individual objects, so the Directory can treat schema definitions in the same way it treats other objects. The default schema is created with the first installation of the Active Directory. It contains common objects and properties for items such as users, groups, computers, printers, and network devices. It also establishes the default Active Directory structure that is used internally.
As an extensible component, new object classes may be dynamically added to the current schema, and old object classes can be modified. It is not possible to modify or deactivate system classes and attributes.
NOTE
Schema data should not be confused with configuration data, which provides the structural information for the Active Directory. The schema provides information about what objects and attributes are available to the Directory. Configuration information maintains the Directory structure that represents the relationship between the actual objects, and indicates how to replicate this structure between domain controllers.
The schema is accessible only to the Administrators and Schema Admins user groups by default. It is managed through the Active Directory Schema snap-in tool. (The Active Directory Schema snap-in becomes available only after the adminpak application is installed from the Windows 2000 Server CD within the I386 folder.) Active Directory schema elements are dynamic and available to applications after the initial startup of the system. New attributes and classes can be added to the schema to provide dynamic extensibility to the Active Directory.
The schema object container attaches definitions to the directory tree. It is typically represented as a child of the directory root. In turn, each instance has its own individual schema.
The schema operations master domain controller manages the structure and content of the schema. It replicates schema information to the other domain controllers in the forest. Every domain controller loads a copy of the schema in a RAM-based cache, ensuring rapid access to currently used object and attribute definitions. If changes occur in the schema, the cache is refreshed.
In the third article of this series, we examine the physical structure of Active Directory.
Source: http://www.informit.com/articles/article.aspx?p=26675
On 9/2/07, Matt Holmes <tegan at gnometank.net> wrote:
> I am trying to expose one of the classes in my engine as an abstract
> class that is used as the base for a series of Python classes.
>
> That class is called InputCommand and is defined as such:
>
>     namespace Stasis {
>         class InputCommand {
>         public:
>             virtual bool execute() = 0;
>         };
>     }
>
> To expose that class, I created the following wrapper:
>
>     class InputCommandWrapper : public Stasis::InputCommand,
>                                 public wrapper<Stasis::InputCommand> {
>     public:
>         bool execute() {
>             return this->get_override("execute")();
>         }
>     };
>
> And added it to my Boost.Python module like so:
>
>     class_<InputCommandWrapper, boost::noncopyable>("InputCommand")
>         .def("execute", pure_virtual(&InputCommand::execute));
>
> Everything seems okay so far, but then I have another class called
> InputManager. This class exposes a function, registerCommand, that takes
> a const std::string& and an InputCommand*. That class's partial
> definition is:
>
>     class InputManager : public Singleton<InputManager> {
>     public:
>         static InputManager& getSingleton() { return *ms_Singleton; }
>         static InputManager* getSingletonPtr() { return ms_Singleton; }
>
>         InputManager();
>         ~InputManager();
>
>         void executeCommand(const string& cmdName);
>         void registerCommand(const string& cmdName, InputCommand* cmd);
>     };

Who is responsible for the cmd lifetime, InputManager or somebody else? The answer to that question is key to the correct solution.

I attached two files that contain a solution for your problem.

P.S. The code is not minimal, because I extracted it from Py++ unittests.

--
Roman Yakovenko
C++ Python language binding

[Attachments: transfer_ownership.cpp (4836 bytes), transfer_ownership_to_be_exported.hpp (690 bytes)]
Source: https://mail.python.org/pipermail/cplusplus-sig/2007-September/012479.html
player_ptz_req_control_mode Struct Reference
Request/reply: Control mode. More...
#include <player_interfaces.h>
Detailed Description
Request/reply: Control mode.
To switch between position and velocity control (for those drivers that support it), send a PLAYER_PTZ_REQ_CONTROL_MODE request. Note that this request changes how the driver interprets forthcoming commands from all clients. Null response.
Member Data Documentation
Mode to use: must be either PLAYER_PTZ_VELOCITY_CONTROL or PLAYER_PTZ_POSITION_CONTROL.
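A hypothetical sketch of building such a request payload in Python. The numeric values of the mode constants and the little-endian uint32 wire layout are assumptions here, not taken from Player's headers, which define the real constants:

```python
import struct

# Assumed values; Player's player_interfaces.h defines the real constants.
PLAYER_PTZ_VELOCITY_CONTROL = 0
PLAYER_PTZ_POSITION_CONTROL = 1

def make_control_mode_request(mode):
    """Pack the single 'mode' field of the request payload as a
    little-endian 32-bit unsigned integer (layout assumed)."""
    if mode not in (PLAYER_PTZ_VELOCITY_CONTROL, PLAYER_PTZ_POSITION_CONTROL):
        raise ValueError("mode must be velocity or position control")
    return struct.pack("<I", mode)
```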
The documentation for this struct was generated from the file player_interfaces.h.
Source: http://playerstage.sourceforge.net/doc/Player-cvs/player/structplayer__ptz__req__control__mode.html
"...one of the most highly regarded and expertly designed C++ library projects in the world." — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
View on a range, either closing it or leaving it as it is.
The closeable_view is used internally by the library to handle all rings, whether closed or open, the same way. The default method is closed: all algorithms process rings as if they were closed. Therefore, if they are open, a view is created which closes them. The closeable_view might also be used by library users, but its main purpose is internal.
template<typename Range, closure_selector Close>
struct closeable_view
{
    // ...
};
#include <boost/geometry/views/closeable_view.hpp>
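The same idea in a Python sketch: a view over a ring's points that repeats the first point at the end when the ring is open, so that consuming algorithms can always assume closure. This illustrates the concept only, not Boost.Geometry's API:

```python
def closeable_view(ring, closed):
    """Yield a ring's points as if the ring were closed: if `closed` is
    False and the ring does not already end on its first point, yield
    the first point again at the end."""
    for pt in ring:
        yield pt
    if not closed and ring and ring[0] != ring[-1]:
        yield ring[0]
```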
Source: http://www.boost.org/doc/libs/1_50_0/libs/geometry/doc/html/geometry/reference/views/closeable_view.html
Today we begin an exciting look at how the Manufacturing Industry uses WPF in their Enterprise Applications. In this post, we are going to first take a look at what manufacturing industries consist of, why they are choosing WPF and how they typically represent data in their app. We will then switch gears and write a sample application using Telerik RadControls for WPF that can help accelerate your development efforts in this industry and others.
The manufacturing industry refers to those industries that involve the production of goods from raw materials.
Examples of manufacturing industries include chemical industries, textile industries, plastic industries and many more.
Now that we have a grasp on what type of industries are included in manufacturing we need to examine how WPF can help bridge the gap between what the industry needs and what WPF can provide.
The manufacturing industry chooses WPF for a number of reasons:
Of course, these are not all of the reasons, but let’s dig deeper into some of them.
It is easy to get caught up in all the new and shiny technology that is released every couple of months. But the fact remains, that most enterprises are still using older operating systems such as Windows XP, 7 and even Windows Server 2003. While interacting with several of our enterprise customers, we have found that they have no plans at the moment to upgrade, yet they still need a mature platform to build applications on. This again is where WPF comes in, not only can it run on these operating systems and more, but it has been out since 2006. This means that we have seven years’ worth of documentation on the platform available to us on the web.
Another key point, is that WPF has full access to the file system and hardware of the operating system. You can read or write a file anywhere, make a synchronous call to a web service, or interact directly with the hardware if need be. You are also in charge of the deployment model, whether it be standalone or XAML Browser Applications. You are not dependent on anyone or anything, unlike some of the newer technology which has to go through a review process by people outside of your organization.
Rich UI that includes Vector Graphics Support
It is important to note that users will only interact with the Graphical User Interface (GUI) of your application. They will never see the code running in the background no matter how well structured it is. Therefore, it is important that the application focus on a stunning GUI using proven design concepts that look great and prevent your users from becoming frustrated while using your application. You will notice that WPF applications step away from the standard colors of white/gray (often used in WinForms) and uses a rich color palette. It is also important to design and implement a consistent UI strategy throughout your application.
But what about vector graphics support? Why is that so important? Again, we will have to first define what vector graphics actually means: Vector graphics is the use of geometrical primitives such as points, lines, curves, and shapes or polygons, which are all based on mathematical expressions, to represent images in computer graphics. [2]
In many manufacturing industries they are dependent upon charts and graphs that use the power of vector graphics to display information with detailed precision.
Besides the layout of precise data points, vector graphics is also important as it allows most controls and elements to be scaled without loss in quality or pixelization, thus increasing accessibility with a wide range of monitors common in this industry.
While looks are very important in the manufacturing industry, we still have to solve the business problem, after all that is what the overall goal is.
Occasionally Connected Applications
A problem that exists for many manufacturing industries, as well as others, is clients located in places with no or limited internet connectivity. For these shops, it is too risky to have an application depend on the cloud or on isolated storage, which can become corrupt. Thankfully, WPF has access to the full .NET networking stack and can make any type of call (asynchronous or synchronous). It also has full access to the file system: it can read and write anywhere on the local file system, and files can then be moved to the appropriate backup location on the network with the use of Windows services, etc.
After reviewing several applications in this industry, I have built a small demo application with the automotive industry in mind. I have built this application using WPF 4.5 and Telerik’s RadControls for WPF. Of course, no application can illustrate every scenario, but this demo will give you a chance to see what Telerik has to offer to assist you in your next enterprise application.
The scenario we will build against is an automotive executive who wants a quick glimpse of how all of the various automotive plants across the globe are doing at a given moment. He would like to see the amount of units produced, units in-progress, production growth since yesterday and the overall system health of the plant. He has assigned you the application and has given you the following requirements:
We can accomplish this by using RadControls for WPF in a variety of ways:
Let’s go ahead and build this demo and show the first page of the dashboard.
Before we get started, let’s take a look at what the completed application will look like as shown in Figure 1.
Figure 1: The International Motor Company Dashboard
We have already described in detail which controls we are using, so now it is time to build the application.
One of the first requirements was that he wanted our application to look modern. Since very few developers have a full-time designer on staff that can produce the required styles, we went straight to Telerik’s built-in themes. Currently Telerik has a variety of themes that can be applied across your application. The full list can be found here. In our case, we are using the Expression_DarkTheme and it is as simple as adding in the assemblies and setting one line of code in our MainWindow constructor as shown below.
public MainWindow()
{
    StyleManager.ApplicationTheme = new Expression_DarkTheme();
    InitializeComponent();
    Loaded += MainWindow_Loaded;
}
This automatically applies styles to all of our RadControls used in the application. Now that we have a theme in place, we need to add in a way to search various plant locations and display them on the screen.
Since many manufacturing industries can have a variety of plant locations that begin with a similar name, RadAutoCompleteBox will filter the data as shown in Figure 2.
Figure 2: RadAutoCompleteBox Filtering Data.
When the user hits the B key, all of the items that start with B will be displayed.
This can be easily created in XAML with the following code snippet:
<telerik:RadAutoCompleteBox x:
Examining the XAML, we will see that this control has the ability to allow your end-user to select more than one item, if your application requirement calls for it, with the SelectionMode property. We can also change the way the text is filtered with the AutoCompleteMode property. This will be helpful in instances where you might want to search only the first letter or text contained somewhere in the ItemSource. We can also add a watermark to the control to give instructions to your users of what this input control is used for.
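The StartsWith/Contains distinction described above can be illustrated with a plain Python sketch. This is only the filtering concept, not Telerik's implementation:

```python
def filter_items(items, text, mode="StartsWith"):
    """Case-insensitive suggestion filter, mirroring the two common
    auto-complete behaviors: match at the start of each item, or
    anywhere inside it."""
    text = text.lower()
    if mode == "StartsWith":
        return [i for i in items if i.lower().startswith(text)]
    if mode == "Contains":
        return [i for i in items if text in i.lower()]
    raise ValueError("unknown mode")
```

Typing "b" with StartsWith keeps only the B-names, while Contains would also match items with "b" anywhere in them.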
If the user does not want to filter the data and can see which plant location they want to use, then they simply select it from the RadListBox as shown in Figure 3.
Figure 3: RadListBox showing the available locations for the automotive industry.
Thankfully, Telerik was thinking about you: the ItemsSource property of both controls works with an ObservableCollection. In this case, I've created a class that lists the available locations and added each of these locations to that source. The only thing needed to display the data properly is to set the DisplayMemberPath to Name, as shown below for both RadAutoCompleteBox (shown in Figure 2) and RadListBox (shown in Figure 3).
<telerik:RadListBox x:
I’ve added the class below as well as a sample of the GetCountries() method. I’ve hard coded this data, but it could be coming from a web service as well.
public class Country
{
    public string Name { get; set; }

    public Country(string name)
    {
        this.Name = name;
    }
}

public void GetCountries()
{
    this.Countries = new ObservableCollection<Country>()
    {
        new Country("Australia"),
        new Country("Austria"),
        new Country("Azerbaijan"),
        new Country("Bahamas"),
        new Country("Bahrain"),
        new Country("Bangladesh"),
        new Country("Barbados"),
        new Country("Belarus"),
        new Country("Belgium"),
        new Country("Belize"),
        new Country("Finland"),
        new Country("France"),
        new Country("Gabon")
    };

    radListBox.ItemsSource = Countries;
    radAutoCompleteBox.ItemsSource = Countries;
}
As you can see from the XAML listed above, several other properties/events have been set. The one that we want to pay special attention to is the SelectionChanged event as that leads us into populating the data for RadGridView and RadChartView.
When the SelectionChanged event fires, we run a method called GetData() that returns a Name, Description and a random Total. We can see this data populated in RadGridView, as shown in Figure 4.
Figure 4: RadGridView displaying data from the currently selected location.
The XAML is completely straightforward: I'm simply instantiating the control and only setting the ColumnWidth property to *, so that each column's width will adjust to the space available on the screen. Everything else will be auto-generated at run-time.
<telerik:RadGridView x:
One of the most interesting aspects of RadGridView is its Excel-like capability to filter data, as shown in Figure 5. We are only displaying a few rows; imagine hundreds or thousands. We can easily group columns by dragging and dropping them onto the header bar, or filter by clicking the filter button on a column.
Figure 5: Excel-like filtering built into RadGridView.
One of our last requirements was a chart that lets him quickly visualize the data located inside RadGridView. By using RadChartView, as shown in Figure 6, we were able to easily bind our existing "Total" data from RadGridView into a BarSeries.
Figure 6: RadChartView representing the data visually from RadGridView.
Again, we were able to accomplish this with only a few lines of XAML and by creating classes called LocationDetails and LocationDetailsService.
<chart:RadCartesianChart x:
<chart:RadCartesianChart.HorizontalAxis>
<chartView:CategoricalAxis
</chart:RadCartesianChart.HorizontalAxis>
<chart:RadCartesianChart.VerticalAxis>
<chartView:LinearAxis/>
</chart:RadCartesianChart.VerticalAxis>
<chartView:BarSeries
</chart:RadCartesianChart>
The Location Details class only contains a few properties and the LocationDetailService uses an ObservableCollection to notify our UI that the underlying data has changed. A code snippet is provided below:
public class LocationDetails
{
    public string Name { get; set; }
    public string Description { get; set; }
    public int Total { get; set; }
}

public static ObservableCollection<LocationDetails> GetLocations()
{
    ObservableCollection<LocationDetails> locations = new ObservableCollection<LocationDetails>();
    Random rnd = new Random();

    // For Production
    LocationDetails ld = new LocationDetails();
    ld.Name = "Units Produced";
    ld.Description = "Units Currently Produced";
    ld.Total = rnd.Next(75, 100);
    locations.Add(ld);

    return locations;
}
In Figure 1, in the upper right-hand corner, you can see RadButton in action. The button looks very sleek, as it has the theme that we applied earlier. As you hover over it, you will notice a subtle animation. If you click the button, RadWindow will appear as shown in Figure 7.
Figure 7: The “About” dialog box.
Contrary to what you might believe, this window is not a UserControl or a WPF Window; it was declared entirely in C# by using the RadWindow control. Again, the theming was automatic. The complete code for RadWindow is shown below:
RadWindow radWindow = new RadWindow();
radWindow.Width = 400;
radWindow.Height = 300;
radWindow.ResizeMode = System.Windows.ResizeMode.NoResize;
radWindow.WindowStartupLocation = System.Windows.WindowStartupLocation.CenterOwner;
radWindow.Header = "International Motor Company";

TextBlock tb = new TextBlock();
tb.Text = "Created by : International Motor Company \r\nVersion 1.0 \r\nAll Rights Reserved.";
tb.VerticalAlignment = System.Windows.VerticalAlignment.Center;
tb.HorizontalAlignment = System.Windows.HorizontalAlignment.Center;

radWindow.Content = tb;
radWindow.ShowDialog();
We first created an instance of RadWindow, then set a few properties relating to position and then created a TextBlock to display the text shown in the window. Finally, we wrap up by showing the dialog to the user.
We have taken a look at what the Manufacturing Industry is, along with the requirements found in that industry. We built a small sample application using RadControls for WPF and it helped us complete the project in a timely manner. I would encourage you to download RadControls for WPF and check out the source code of the sample application to get a better understanding of how the controls were used. If you have any questions or comments, then please leave them in the comment field below. But before you go:
Also check out: Jesse Liberty’s post on the BFSI Industry using WPF.
Source: http://www.telerik.com/blogs/wpf-in-manufacturing-industries
Hello,

On Thu, 19 Apr 2012 16:18:21 -0400 "Eric V. Smith" <eric at trueblade.com> wrote:
> This reflects (I hope!) the discussions at PyCon. My plan is to produce
> an implementation based on the importlib code, and then flush out pieces
> of the PEP.

I don't understand why PEP 382 was rejected. There doesn't seem to be any obvious argument against it. The mechanism is simple, explicit and unambiguous. As PEP 382 points out, the "directory.pyp" scheme is highly unlikely to conflict with unrelated uses of a ".pyp" directory extension. It's also easy to use, and avoids oddities in the lookup algorithm such as "if the scan completes without returning a module or package, and at least one directory was recorded, then a namespace package is created".

On the other hand, PEP 420 provides potential for confusion (for example, if the standard "test" package is not installed, trying to import it could end up importing some other arbitrary "test" directory on the path as a namespace package), without seeming to have any obvious advantage over PEP 382.

Unless there are clear advantages over PEP 382, I'm -1 on this PEP, and would like to see PEP 382 revived.

Regards

Antoine.
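Under PEP 382, a portion of a namespace package is simply a directory named "<package>.pyp" on the path. A rough sketch of that detection, greatly simplified compared to what the real import machinery would do:

```python
import os

def find_pyp_portions(package, path_entries):
    """Collect all '<package>.pyp' directories along the given path
    entries, in order; together these would form the namespace
    package under PEP 382's scheme."""
    portions = []
    for entry in path_entries:
        candidate = os.path.join(entry, package + ".pyp")
        if os.path.isdir(candidate):
            portions.append(candidate)
    return portions
```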
Source: https://mail.python.org/pipermail/import-sig/2012-May/000538.html
SCJP
Flashcards created by user cquezadav on FreezingBlue Flashcards; updated 2011-11-05.
What is a Class?
A template that describes the kinds of state and behavior that objects of its type support.
What is an Object?
At runtime, when the Java Virtual Machine (JVM) encounters the new keyword, it will use the appropriate class to make an object which is an instance of that class. That object will have its own state, and access to all of the behaviors defined by its class.
What is Inheritance?
Object oriented concept which allows code defined in one class to be reused in other classes.
What are Interfaces?
Important in inheritance. Interfaces are like a 100% abstract superclass that defines the methods a subclass must support (just the signatures), but the subclass has to implement the methods.
An interface is a contract.
All interface methods are implicitly public and abstract. They must not be static, final, strictfp or native.
All variables must be public, static, and final (implicit constants).
An interface can extend one or more other interfaces, and only interfaces.
What means Cohesive?
Means that every class should have a focused set of responsibilities.
Java Identifiers
Identifiers are the names for methods, classes and variables. Legal identifiers:
Must start with a letter, currency character ($), or underscore ( _ ). Cannot start with a number.
There is no limit to the number of characters.
Cannot use Java keywords as identifiers.
Identifiers are case sensitive.
Java Keywords
Examples of JavaBean method signature
public void setMyValue(int v)
public int getMyValue()
public boolean isMyStatus()
public void addMyListener(MyListener m)
public void removeMyListener(MyListener m)
which are the sorce file declaration rules?
One public class per source code file
Comments can appear at the beginning or end of any line in the source code.
The name of the file must match the name of the public class.
The package statement must be the first line.
The import statements must go between the package statement and the class declaration.
A file can have more than one nonpublic class.
Files with no public classes can have a name that does not match any of the classes in the file
Default Class Access
No modifier preceding the declaration.
Package level access. A class in a different package cannot have access.
Public Class Access
Class can be accessed from all class in any package.
strictfp Class modifier
For the exam it is just necessary to know that strictfp is a keyword and can be used to modify a class or a method, but never a variable.
Final class modifier
A Final class cannot be subclassed.
No other class can ever extend a final class.
String class is final.
A final class destroys the OO benefit of extensibility.
Abstract class modifier
An abstract class can never be instantiated.
Its only mission is to be extended (subclassed).
An abstract method declaration ends with a semicolon ( ; ):
public abstract void goFast();
If one method is abstract the whole class must be abstract.
Public Members
Public method or variable members can be accessed by all classes, regardless of the package they belong to.
Effective access still depends on the class's own access level.
Private Members
Can't be accessed by code in any class other than the class in which the private member was declared.
A subclass can't inherit private members.
The rules of overriding do not apply, so a method with the same name is a totally different method.
Default Members
A default member can be accessed only if the class accessing the member belongs to the same package.
Protected Members
A protected member can be accessed by classes that belong to the same package.
A protected member can be accessed by a subclass even if the subclass is in a different package (unlike default access).
Protected members in the subclass become private to any code outside the subclass, with the exception of subclasses of the subclass.
Local variables and access modifiers
The only modifier allowed in local variables is final.
Final Methods
The final keyword prevents a method from being overridden in a subclass.
The final keyword is contrary to many OO benefits, like extensibility.
Final Arguments
A final argument can't be modified within the method.
public Record getRecord(int fileNumber, final int recordNumber)
Abstracts Methods
An abstract method does not contain functional code.
An abstract method declaration ends with a semicolon.
It has no method body.
Subclasses have to provide the implementation.
A method can never be abstract and final or abstract and private or abstract and static.
public abstract void showSample();
If a class has an abstract method, the class must be declared abstract.
It is legal to have an abstract class with no abstract methods.
The first concrete (nonabstract) subclass of an abstract class must implement all abstract methods.
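A short illustration of these rules, with illustrative names:

```java
abstract class Shape {
    // Abstract method: no body, declaration ends with a semicolon.
    public abstract double area();

    // An abstract class may still contain concrete code.
    public String describe() { return "area=" + area(); }
}

// The first concrete subclass must implement every abstract method.
class Square extends Shape {
    private final double side;

    Square(double side) { this.side = side; }

    public double area() { return side * side; }
}
```

`new Shape()` would not compile: an abstract class can never be instantiated.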
Synchronized Methods
Synchronized keyword means that a method can be accessed by only one thread at a time.
It can be applied only to methods (not classes, not variables)
public synchronized Record retrieveUserInfo(int id) { }
Native Methods
The native modifier indicates that a method is implemented in platform-dependent code, often C.
Can be applied only to methods.
A native method's body must be a semicolon, as in abstract methods.
Strictfp Methods
Strictfp forces floating points to adhere to the IEEE 754 standard.
Variable Argument List (var-args)
Methods that can take a variable number of arguments.
The var-arg must be the last argument in the method's signature.
A method can have just one var-arg.
void doStuff2(char c, int... x)
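A small sketch of these rules (names are illustrative):

```java
class VarArgsDemo {
    // The var-arg must be the last argument, and there can be only one.
    static int sum(int first, int... rest) {
        int total = first;
        for (int n : rest) {
            total += n; // rest behaves like an int[] inside the method
        }
        return total;
    }
}
```

Both `sum(1)` and `sum(1, 2, 3)` are legal calls: the var-arg accepts zero or more values.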
Constructor Declarations
A constructor does not have a return type.
Have the same name as the class.
Constructors can't be marked as static, final or abstract
Primitive Variables
char, boolean, byte, short, int, long, float, and double.
Once a primitive has been declared, its primitive type can never change.
Can be declared as class variables (static), instance variables, method parameters or local variables.
Reference Variables
Used to refer or access an object.
Can refer any object of the declared type or a subtype of the declared type.
Numeric Types
byte -> short -> int -> long (integers)
float -> double (floating point)
All number types are signed (+ and -)
The leftmost bit is used to represent the sign, 1 means negative and 0 means positive.
Range of Numeric Primitives
Instance Variables
Defined inside the class but outside any method.
Initialized when the class is instantiated.
Can use any of the four access levels (default, protected, private, and public).
Can be marked as final and transient.
Cannot be marked as abstract, synchronized, strictfp, native, static.
Comparison of modifiers on variables vs. methods
Local Variables
Declared within a method.
Variables live in the scope of the method.
They are destroyed when the method has completed.
Local variables are always on the stack, not in the heap.
Local variables must be initialized before use.
Local variables do not get default values.
It is possible to shadow an instance variable (a local variable with the same name as an instance variable).
Array Declaration
Arrays can store multiple variables of the same type (or subclasses).
Array is an object on the heap.
int[] key;
Thread threads [];
String [] [] [] name; (array of arrays of arrays)
String [] lastName []; (array of arrays)
Final Variables
A final variable is impossible to reinitialize once it has been initialized.
For a primitive final variable, the value can't be altered.
A reference final variable can never be reassigned to refer to a different object. The object's data can be changed, but the reference variable cannot.
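The reference rule in one method (FinalDemo is an illustrative name):

```java
class FinalDemo {
    static String demo() {
        final StringBuilder sb = new StringBuilder("a");
        sb.append("b");               // legal: the object's data can change
        // sb = new StringBuilder();  // illegal: the reference is final
        return sb.toString();
    }
}
```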
Transient Variables
A transient variable will be ignored by the JVM when it attempts to serialize the object containing it.
Volatile Variables
The volatile modifier tells the JVM that a thread accessing the variable must always reconcile its own private copy of the variable with the master copy in memory.
It can be applied only to instance variables.
Static Variables and Methods
The static members will exist independently of any instance of the class.
Static members exist before any new instance of the class.
There will be only one copy of the static members.
All instances of the class share the same value for any static variable.
Declaring Enums
Lets you restrict a variable to one of only a few pre-defined values.
Can be declared as their own separate class, or as a class member.
Cannot be declared within a method.
An enum declared outside a class cannot be private or protected (it can be public or default).
enum CoffeeSize{BIG,HUGE}
drink.size = CoffeeSize.BIG; // enum outside class
drink.size = Coffee2.CoffeeSize.BIG; // enum inside class
It is optional to put a semicolon after the enum if it is the last declaration.
Enum constructor and instance variable declaration
Use Enum Class
Enum Constructor Overriding
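The three cards above carry no answer text here, so a compact example covering an enum constructor, an instance variable, and a constant-specific override is sketched below (the ounce values and names are made up):

```java
enum CoffeeSize {
    BIG(8),
    HUGE(12) {
        // Constant-specific class body: HUGE overrides label().
        @Override
        public String label() { return "huge!"; }
    }; // semicolon required here because more declarations follow

    private final int ounces; // each constant gets its own copy

    // Enum constructors are never invoked directly;
    // each constant above invokes this with its argument.
    CoffeeSize(int ounces) { this.ounces = ounces; }

    public int getOunces() { return ounces; }

    public String label() { return name().toLowerCase(); }
}
```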
Encapsulation
One benefit of encapsulation is the ability to make changes to the code without breaking the code of others who use the same code.
Hide implementation details behind an interface. That interface is a set of accessible methods (API).
Encapsulation benefits are maintainability, flexibility, and extensibility.
Rules
: keep instance variables protected (private), make public accessor methods, and use the JavaBean naming convention (get and set).
Inheritance
Inheritance promote code reuse and polymorphism.
Every class is a subclass of class Object.
All classes inherit methods
: equals, clone, notify, wait, and others.
References
A reference variable can be of only one type, although the object it references can change.
A reference is a variable, so it can be reassigned to other objects (excluding final references).
A reference variable determines the methods that can be invoked on the object the variable refers to.
A reference variable can refer to any subtype of the declared type.
A reference variable can be declared as a class or an interface type.
At runtime the JVM knows the object, so if the object has an overridden method, the JVM will invoke the overridden version, and not the one of the declared reference variable type.
Overridden Methods
A class that inherits a method from a superclass can override the method.
The benefit is to define behavior that is specific to a subclass.
Abstract methods have to be implemented (overridden) by the concrete class.
The overriding method has to have the same or better access (public, protected, ...).
The return type must be the same or a subtype of the superclass method.
The overriding method can throw any unchecked (runtime) exception, even if the overridden method does not.
The overriding method must not throw checked exception that are new or broader.
Overriding method can throw narrower or fewer exceptions.
It is not possible to override a method marked final or marked static.
When a method declares a checked exception but the overriding method does not, the compiler assumes you are calling the method that declares the exception: if a superclass reference refers to the subtype, the compiler reports an error unless the exception is handled or declared, even though the exception could never occur at runtime.
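A legal override under these rules, showing both a covariant (narrower) return type and runtime dispatch (Animal/Dog are illustrative names):

```java
class Animal {
    public Object getFood() { return "generic food"; }
}

class Dog extends Animal {
    // Legal override: same signature, same access,
    // and a return type that is a subtype of Object.
    @Override
    public String getFood() { return "bone"; }
}
```

At runtime the JVM dispatches on the actual object, so `Animal a = new Dog(); a.getFood();` invokes Dog's version.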
Overloaded Methods
Overloaded methods let reuse the same method name in a class, but with different arguments (optionally different return type).
Overloaded methods must change the argument list.
Overloaded methods can change the return type.
Overloaded methods can change access modifiers.
Overloaded methods can declare new or broader checked exceptions.
A method can be overloaded in the same class or in a subclass.
Overloaded methods are chosen at compile time, not at runtime, so which overloaded version of the method to call is based on the reference type of the argument passed, as seen at compile time.
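Compile-time selection in a sketch (illustrative names):

```java
class Overloads {
    static String of(Object o) { return "Object"; }
    static String of(String s) { return "String"; }
}
```

Given `Object o = "hello";`, the call `Overloads.of(o)` returns "Object": the overload is picked from the reference type of the argument, even though the object is actually a String.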
Differences between overloaded and overridden methods.
Implementing Interfaces
Provide concrete implementation for all methods.
Follow all the rules for legal overrides.
Declare no checked exceptions on implementation methods other than those declared by the interface method, or subclasses of those declared by the interface method.
Maintain the signature of the interface method, and maintain the same return type or a subtype.
Implementing an interface defines a role the class can play, but not who or what the class is.
A class can implement more than one interface.
An interface can extend another interface.
Return Types on Overloaded Methods
An overloaded method can have a different return type, but it is obligatory to change the argument list.
Overriding and Return Types
A subclass must define a method that matches the inherited version. However, the return type can also be a subtype of the originally declared return type.
Return Value Rules (1)
You can return null in a method with an object reference return type.
An array is perfectly legal return type.
In a method with a primitive return type, you can return any value or variable that can be implicitly converted to the declared return type. public int foo() {char c='c'; return c;}
In a method with a primitive return type, you can return any value or variable that can be explicitly converted to the declared return type. public int foo() {float f=32.5f; return (int)f;}
Return Value Rules (2)
In a method with an object reference return type, you can return any type that can be implicitly cast to the declared return type.
Return Value Type (3)
Constructor Basic
Every class, including abstract classes, must have a constructor.
Constructors have no return type, and their name matches the class name.
Constructors are invoked at runtime with the new keyword.
Constructors Rules
Can use any access modifier, including private (a private constructor means that only code within the class can instantiate an object of that type).
If you do not type a constructor a default constructor will be automatically generated by the compiler.
If the class has a constructor with arguments the compiler will not create one with no arguments.
Every constructor has, as its first statement, either a call to an overloaded constructor (this()) or a call to the superclass constructor (super()).
You cannot call an instance method or access an instance variable until after the super constructor runs.
Only static variables and methods can be accessed as part of the call to super or this. ex
: super(Animal.NAME).
Abstract classes have constructors.
Interfaces do not have constructors.
A constructor can be invoked only from within another constructor.
When a constructor has a this() call to another constructor, sooner or later the super() constructor gets called.
The first line of a constructor must be super() or this(). If the constructor has neither call, the compiler inserts a no-arg call to super().
A constructor can never have both a call to super() and a call to this(), because each of those calls must be the first statement.
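The chaining rules in miniature (Box is an illustrative class):

```java
class Box {
    private final int width, height;

    Box() {
        this(1, 1); // this() must be the first statement
    }

    Box(int width, int height) {
        // the implicit super() call to Object's constructor runs first
        this.width = width;
        this.height = height;
    }

    int area() { return width * height; }
}
```

`new Box()` delegates to `Box(1, 1)`, which in turn (implicitly) calls super(), so the superclass constructor always runs eventually.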
Compiler-Generated Constructor Code
Static Variables and Methods
Static variables are set when the JVM loads the class, before any instances are created.
Static variables and methods belong to the class, rather than to any particular instance.
It is possible to use static members without having an instance of the class.
There is only one copy of the static members.
A static method cannot access a nonstatic (instance) variable or method. An instance of the class is necessary to access instance members.
Accessing Static Methods and Variables
An instance is not necessary to access static members.
To access a static method or variable you need the name of the class and the dot operator.
It is also possible to access the static members using an instance of the class.
Static methods cannot be overridden, but they can be redefined in a subclass.
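Redefinition (also called hiding) in a sketch, with illustrative names:

```java
class Parent {
    static String who() { return "Parent"; }
}

class Child extends Parent {
    // This hides Parent.who(); it does not override it,
    // so calls are resolved by reference type at compile time.
    static String who() { return "Child"; }
}
```

Even through a Parent reference holding a Child object, `p.who()` resolves to Parent's version: there is no runtime dispatch for static methods.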
Coupling
Coupling is the degree to which one class knows about another class.
If a class knows about another class only through its interface, then the two classes are loosely coupled.
If a class relies on parts of another class that are not part of the class interface, then the coupling is tighter.
Cohesion
Cohesion is used to indicate the degree to which a class has a single, well-focused purpose.
Cohesion allows much easier maintenance and reusability.
https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=112824
Read operations can be implemented with SQL queries or LINQ to SQLite expressions. With LINQ to SQLite, two syntactic forms can be used for specifying queries: query syntax and method syntax. The methods that implement read operations from the database are described below.
Synchronous execution of a SQL query with parameters according to the SQLite SQL syntax.
public List<PersistentType> Get<PersistentType>(string queryStatement, params object[] queryParameter)
Asynchronous execution of a SQL query.
public async Task<List<PersistentType>> GetAsync<PersistentType>(string query)
Synchronous execution of a SQL query that results in a scalar value.
public ScalarType GetScalar<ScalarType>(string query)
Asynchronous execution of a SQL query that results in a scalar value.
public async Task<ScalarType> GetScalarAsync<ScalarType>(string query)
Get all the persisted data from the database via LINQ to SQLite provider. This is the base method for switching to LINQ to SQLite use.
public IQueryable<PersistentType> GetAll<PersistentType>()
Examples: All examples use the database created in the Getting Started section.
With SQL query:
context.Get<Cars>("Select * from Cars"); // gets all records from Cars table
context.Get<Cars>("Select * from Cars where OwnerFK = 1"); // gets Cars for owner with ID=1
context.Get<Cars>("Select * from Cars where OwnerFK = @p1", 1); // gets Cars for owner with ID=1 using a parameterized query
context.GetAsync<Cars>("Select * from Cars where OwnerFK = 1"); // asynchronously gets Cars for owner with ID=1
LINQ to SQLite - the GetAll method is essential for usage of LINQ to SQLite provider.
With query syntax:
var t1 = from item in context.GetAll<Cars>()
select item;
var t2 = from item in context.GetAll<Cars>()
where item.OwnerFK == 1
select item;
With method syntax:
var f1 = context.GetAll<Cars>();
var f2 = context.GetAll<Cars>().Where<Cars>(car => car.OwnerFK == 1);
var f3 = context.GetAll<Cars>().Where<Cars>(car=>car.OwnerFK==1).OrderBy(car=>car.CarID);
Join operations with LINQ:
The names of the fields should be identical to the names of the columns in the result of the join operation.
With an anonymous type for read-only operations, using the d object:
var d = from car in context.GetAll<Cars>()
join owner in context.GetAll<CarOwners>()
on car.OwnerFK equals owner.OwnerID
select new { owner.Name, owner.Age, car.Model, car.YearOfModel };
With custom entity class for resulted set of fields using SQL query:
string query = @"select owner.Name as OwnerName,
car.Model ,
car.YearOfModel ,
car.RegistrationNumber ,
car.CarID
from Cars as car
join CarOwners as owner
on car.OwnerFK = owner.OwnerID";
var queryResult = context.Get<CarCustomInfo>(query);
Where:
[SuppressSchemaGeneration]
public class CarCustomInfo
{
[Key]
public long CarID { get; set; }
public string Model {get; set;}
public DateTime YearOfModel { get; set; }
public string RegistrationNumber { get; set; }
public string OwnerName { get; set; }
public int OwnerAge { get; set; }
}
http://www.telerik.com/help/windows-8-xaml/datastorage-readoperation.html
Outline
Java lacks a built-in way to define default arguments for methods and constructors. Over the years, several approaches have been proposed, each with its pros and cons. The most widely-known one uses method overloading, though varargs, null values, the builder pattern, and even maps have been used as well. Here we propose a new approach based on functional constructs.
Functional Default Arguments
Background
Many languages support default arguments for methods and constructors out of the box, i.e. Scala:
def sum(x: Int = 6, y: Int = 7): Int = x + y
The sum method can be invoked as follows:
sum(1, 2)         // 3  -> x = 1, y = 2 (no defaults)
sum(3)            // 10 -> x = 3, default y = 7
sum(y = 5)        // 11 -> default x = 6, y = 5
sum()             // 13 -> default x = 6, default y = 7
sum(y = 3, x = 4) // 7  -> x = 4, y = 3 (no defaults)
This is very handy, but Java doesn't support it. There are a few different ways to accomplish something similar, however all of them have some drawback.
Method Overloading
The most widely-known one uses method overloading to simulate default arguments:
public int sum(int x, int y) {
    return x + y; // actual implementation
}

public int sum(int x) { // default y = 7
    return this.sum(x, 7);
}

public int sum() { // default x = 6
    return this.sum(6); // and y = 7 (implicitly)
}
Although this is a very common pattern (or anti-pattern) in Java, it has some drawbacks:
- The number of overloads for the method increases exponentially with the number of arguments, since all possible, meaningful argument combinations must be considered.
- Some argument combinations are not possible because overloaded methods are differentiated by the number and the type of the arguments passed into the method. In the example above, there's no way to define an overload sum(int y) that defaults the value of x, because we have chosen to specify x as an explicit argument (defaulting y to 7).
- Definition of default argument values is implemented inside the overloaded methods, thus making defaults global to every caller. In other words, this does not take the method invocation's context into account.
Varargs
Another approach to default arguments in Java is using varargs:
public int sum(int... arguments) {
    // Define default values
    int x = 6;
    int y = 7;
    // Extract explicit argument values, checking bounds
    if (arguments != null && arguments.length >= 1) {
        x = arguments[0];
        if (arguments.length >= 2) {
            y = arguments[1];
        }
    }
    return x + y;
}
Here we define the default values and then extract the explicit arguments from the varargs parameter. Each default value is preserved only if its corresponding explicit value is not present in the varargs parameter.
Drawbacks of this approach are:
- As per the Java Language Specification, varargs parameters must be specified at the end of the argument list.
- If default arguments were of different types, the varargs parameter definition should be changed to Object... arguments. But if we do this, we lose static type checking, so we would have to check each argument's type at runtime, cast it and handle all possible errors...
- Both definition of default argument values and handling of the varargs parameter are to be implemented inside the method. As with the method overloading approach, defaults remain global to every caller, so the method invocation's context is not taken into account.
Null values
This approach is quite simple: if you invoke the method with a null argument, then its corresponding default value is used instead:
public int sum(Integer x, Integer y) {
    // Define default values
    x = x == null ? 6 : x;
    y = y == null ? 7 : y;
    return x + y;
}
Using null values is much simpler than previous approaches. However it still has some drawbacks:
- As null is to be used to specify a default value, it cannot be used as an argument's valid explicit value.
- Primitives are not allowed, since only references can be null. This is why we've used wrapper types in the example above.
- Checking for null arguments and assigning default values has to be implemented inside the method. Again, defaults remain global to every caller, so the method invocation's context is not taken into account.
Other Approaches: Maps, Optional, and the Builder Pattern
In this StackOverflow answer given by user Vitalii Fedorenko, all common approaches to default arguments are visited. I won't analyze them here, since there are already a lot of articles that explore their pros and cons. Instead, I would like to introduce a new way to work with default arguments in Java that takes functional programming into consideration. But that will have to wait until the second part of this article...
Did you enjoy this post? If so, stay tuned for part two tomorrow, and please consider signing up to The Bounds of Java newsletter.
https://dzone.com/articles/functional-default-arguments
Hi Guys.
So here is the thing. I have a win32 / opengl application. I've created a class* that opens a console with AllocConsole and ties the stdin, stderr and stdout of the application to it.
So this is awesome, 'cos I can go (from anywhere in the program)
#include <iostream>
...
std::cout << "\nSomething broke!";
Anyways, the point is I now have a console window that I can potentially get custom commands out of, such as "quit" for example.
I'm dubious about hacking something together before I've thought about this, so that's why I'm here.
If, on every frame (opengl) I go..
bool Frame() {
    std::string command;
    std::cin >> command;
    if (command == "quit")
        return false;
    else
        return gfx->frame(); // do rendering
}
Then I'm never going to get to type "quit" because the program will always be recreating the string every seconds-per-frame.
So how do I solve that? I need some way of like holding a buffer of input characters and send/emptying it on a newline / null character? then have a vector of these ? How do I test it?
How do I turn AllocConsole() and freopen("CONIN$", "r", stdin) into an interactive console?
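One common answer to this kind of question is to do the blocking reads on a dedicated thread and push complete lines into a thread-safe queue that the frame loop polls without blocking. The sketch below uses Java for brevity and made-up names; the same shape works in C++ with a std::thread feeding a mutex-guarded std::queue<std::string>:

```java
import java.util.Scanner;
import java.util.concurrent.ConcurrentLinkedQueue;

class ConsoleCommands {
    private final ConcurrentLinkedQueue<String> commands =
            new ConcurrentLinkedQueue<>();

    // Blocking line reads happen on a background thread...
    void start() {
        Thread reader = new Thread(() -> {
            Scanner in = new Scanner(System.in);
            while (in.hasNextLine()) {
                commands.add(in.nextLine().trim());
            }
        });
        reader.setDaemon(true); // don't keep the app alive just for stdin
        reader.start();
    }

    // ...while the frame loop polls without ever blocking.
    String poll() {
        return commands.poll(); // null if no complete command yet
    }
}
```

The frame loop then becomes `String cmd = console.poll(); if ("quit".equals(cmd)) return false;`, so rendering never stalls waiting for input.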
https://www.gamedev.net/topic/638568-win32-opengl-get-commands-from-console/
2016-7-11 : Version 3.0: After reading the book About Face (still not finished), I started the project of (a) getting rid of my multitude of windows, (b) modeless validation and (c) more intuitive interfaces where you simply do your work and don't have to manage the work flow and save all the time.

Removing the multitude of windows presented some issues. I briefly thought about keeping the programs as external executables and simply embedding their windows into another frame. This idea was abandoned as it was more of a hack than a solution.

Instead I decided to put each program's window into a notebook tab in the main program. The first step was placing each external program into its own namespace. Then when an external program is needed, it is source'd. I have a little source manager module that tracks whether a program has been sourced already and if so, simply calls its main procedure. These external programs that are now in their own namespace still feel like separate entities, even though they are sourced and are really part of the same program.

The prior version used sockets for all the inter-process communication and that hasn't changed. It may be a bit unusual to use sockets to communicate with what is essentially the same process, but that was a huge rewrite I didn't want to do. It has the advantage that the event-driven code still works the same way, and the external programs can still be used as separate executables if needed.

The one major socket problem was when a program sent a message and needed to get a response. Since the program is now monolithic, it can't both wait for a response and process the sent message at the same time. There is now a 'sendget' routine that will call the receiver's socket handler proc directly with a special socket name. The receiver's socket handler calls the socket send routine as usual to send the response, and the socket module checks for the special socket name and saves the response data in a variable. After the receiver's socket handler exits, the 'sendget' routine can continue and pulls the response data out of the variable.

Rather than have each executable destroy all of its UI widgets and restart from scratch each time (since destroying and creating widgets is slow), the widgets are only built once when the program is first sourced. Where I had context-sensitive UI elements, I had to always create the widgets and let the startup code determine whether the widget is gridded or removed.

Many of the variables that were initialized by default caused a lot of problems, and I had to search through the code and make sure those variables were explicitly initialized or unset; otherwise restarting the executable (which was external and is now sourced) picked up old data. I had fallen out of the good habit of always explicitly initializing every variable, and that made the rewrite more difficult. Where it was reasonable, I tried to have the executable keep the data that was last used, so when it was restarted, it was essentially where the user left off.

Since the program is now essentially monolithic, I was able to make the database a singleton object. This saves a lot of database loads and will speed up the user's work.

Menus turned out to be a problem. My program uses context-sensitive menus everywhere. Destroying and replacing the menu caused horrible flicker on all platforms, and Linux has a bug where the replaced menu will not appear in a maximized window. Instead, I have only one menu; it destroys all of its current entries and rebuilds the new entries from the menu associated with the current notebook tab.

Almost every dialog is now contained in a new notebook tab or, if there was room and it made sense, simply added to the existing notebook tab. My program has very few toplevel windows now. It is a lot cleaner and easier to use. When a notebook tab is closed, it calls a registered "close handler" for the tab. When the user switches to a different notebook tab, an optional "save handler" is called.

Since the main window is now maximized by default, I added an internal window with a size grab bar in some places so that the view size could be changed by the user. This view size is automatically saved, but it turned out to be a little tricky to restore the same view size.
http://wiki.tcl.tk/39320
This message contains a list of some regressions from 2.6.30, for which there
are no fixes in the mainline I know of. If any of them have been fixed already,
please let me know.
If you know of any other unresolved regressions from 2.6.30, please let me know.

2009-08-10 89 27 24
2009-08-02 76 36 28
2009-07-27 70 51 43
2009-07-07 35 25 21
2009-06-29 22 22 15
Unresolved regressions
----------------------
Bug-Entry :
Subject : Oops when USB Serial disconnected while in use
Submitter : Bruno Prémont <bonbons@...>
Date : 2009-08-08 17:47 (2 days old)
References :
Bug-Entry :
Subject : Libertas: Association request to the driver failed
Submitter : Daniel Mack <daniel@...>
Date : 2009-08-07 19:11 (3 days old)
First-Bad-Commit : 57921c312e8cef72ba35a4cfe870b376da0b1b87
References :
Handled-By : Roel Kluin <roel.kluin@...>
Bug-Entry :
Subject : WARNING: at net/mac80211/mlme.c:2292 with ath5k
Submitter : Fabio Comolli <fabio.comolli@...>
Date : 2009-08-06 20:15 (4 days old)
References :
Bug-Entry :
Subject : Troubles with AoE and uninitialized object
Submitter : Bruno Prémont <bonbons@...>
Date : 2009-08-04 10:12 (6 days old)
References :
Bug-Entry :
Subject : x86 Geode issue
Submitter : Martin-Éric Racine <q-funk@...>
Date : 2009-08-03 12:58 (7 days old)
References :
Bug-Entry :
Subject : iwlagn and sky2 stopped working, ACPI-related
Submitter : Ricardo Jorge da Fonseca Marques Ferreira <storm@...>
Date : 2009-08-07 22:33 (3 days old)
References :
Bug-Entry :
Subject : 2.6.31-rcX breaks Apple MightyMouse (Bluetooth version)
Submitter : Adrian Ulrich <kernel@...>
Date : 2009-08-08 22:08 (2 days old)
First-Bad-Commit : fa047e4f6fa63a6e9d0ae4d7749538830d14a343
Bug-Entry :
Subject : e1000e reports invalid NVM Checksum on 82566DM-2 (bisected)
Submitter : <jsbronder@...>
Date : 2009-08-04 18:06 (6 days old)
Bug-Entry :
Subject : Huawei E169 GPRS connection causes Ooops
Submitter : Clemens Eisserer <linuxhippy@...>
Date : 2009-08-04 09:02 (6 days old)
Bug-Entry :
Subject : Oops from tar, 2.6.31-rc5, 32 bit on quad core phenom.
Submitter : Gene Heskett <gene.heskett@...>
Date : 2009-08-01 13:04 (9 days old)
References :
Bug-Entry :
Subject : 2.6.31-rc4 - slab entry tak_delay_info leaking ???
Submitter : Paul Rolland <rol@...>
Date : 2009-07-29 08:20 (12 days old)
References :
Bug-Entry :
Subject : Radeon framebuffer (w/o KMS) corruption at boot.
Submitter : Duncan <1i5t5.duncan@...>
Date : 2009-07-29 16:44 (12 days old)
Bug-Entry :
Subject : iwlwifi (4965) regression since 2.6.30
Submitter : Lukas Hejtmanek <xhejtman@...>
Date : 2009-07-26 7:57 (15 days old)
References :
Bug-Entry :
Subject : LEDs switched off permanently by power saving with rt61pci driver
Submitter : Chris Clayton <chris2553@...>
Date : 2009-07-13 8:27 (28 days old)
References :
Bug-Entry :
Subject : Input : regression - touchpad not detected
Submitter : Dave Young <hidave.darkstar@...>
Date : 2009-07-17 07:13 (24 days old)
References :
Bug-Entry :
Subject : suspend script fails, related to stdout?
Submitter : Tomas M. <tmezzadra@...>
Date : 2009-07-17 21:24 (24 days old)
References :
Bug-Entry :
Subject : Kernel Oops when trying to suspend with ubifs mounted on block2mtd mtd device
Submitter : Tobias Diedrich <ranma@...>
Date : 2009-07-15 14:20 (26 days old)
First-Bad-Commit : 15bce40cb3133bcc07d548013df97e4653d363c1
References :
Bug-Entry :
Subject : system freeze when switching to console
Submitter : Reinette Chatre <reinette.chatre@...>
Date : 2009-07-23 17:57 (18 days old)
Bug-Entry :
Subject : oprofile: possible circular locking dependency detected
Submitter : Jerome Marchand <jmarchan@...>
Date : 2009-07-22 13:35 (19 days old)
Bug-Entry :
Subject : X server crashes with 2.6.31-rc2 when options are changed
Submitter : Michael S. Tsirkin <m.s.tsirkin@...>
Date : 2009-07-07 15:19 (34 days old)
Bug-Entry :
Subject : 2.6.31-rc2: irq 16: nobody cared
Submitter : Niel Lambrechts <niel.lambrechts@...>
Date : 2009-07-06 18:32 (35 days old)
References :
Bug-Entry :
Subject : The AIC-7892P controller does not work any more
Submitter : Andrej Podzimek <andrej@...>
Date : 2009-07-05 19:23 (36 days old)
Bug-Entry :
Subject : [drm/i915] Possible regression due to commit "Change GEM throttling to be 20ms (...)"
Submitter : <kazikcz@...>
Date : 2009-07-05 10:49 (36 days old)
First-Bad-Commit:;a=commit;h=b962442e46a9340bdbc6711982c59ff0cc2b5afb
Bug-Entry :
Subject : NULL pointer dereference at (null) (level2_spare_pgt)
Submitter : poornima nayak <mpnayak@...>
Date : 2009-06-17 17:56 (54 days old)
References :
Regressions with patches
------------------------
Bug-Entry :
Subject : ath5k broken after suspend-to-ram
Submitter : Johannes Stezenbach <js@...>
Date : 2009-08-07 21:51 (3 days old)
References :
Handled-By : Nick Kossifidis <mickflemm@...>
Patch :
Bug-Entry :
Subject : x86 MCE malfunction on Thinkpad T42p
Submitter : Johannes Stezenbach <js@...>
Date : 2009-08-07 17:09 (3 days old)
References :
Handled-By : Bartlomiej Zolnierkiewicz <bzolnier@...>
Patch :
Bug-Entry :
Subject : MD raid regression
Submitter : Mike Snitzer <snitzer@...>
Date : 2009-08-05 15:06 (5 days old)
First-Bad-Commit:;a=commit;h=449aad3e25358812c43afc60918c5ad3819488e7
References :
Handled-By : NeilBrown <neilb@...>
Patch :
For details, please visit the bug entries and follow the links given in
references.
As you can see, there is a Bugzilla entry for each of the listed regressions.
There also is a Bugzilla entry used for tracking the regressions from 2.6.30,
unresolved as well as resolved, at:
Please let me know if there are any Bugzilla entries that should be added to
the list in there.
Thanks,
Rafael
Rafael J. Wysocki <rjw@...> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RESOLVED |CLOSED
--
Configure bugmail:
------- You are receiving this mail because: -------
You are watching the assignee of the bug.
Rafael J. Wysocki <rjw@...> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |RESOLVED
Resolution| |CODE_FIX
--- Comment #1 from Rafael J. Wysocki <rjw@...> 2009-08-09 23:15:04 ---
Fixed by commit dff33cfcefa31c30b72c57f44586754ea9e8f3e2 .
--
Configure bugmail:
------- You are receiving this mail because: -------
You are watching the assignee of the bug.
Rafael J. Wysocki <rjw@...> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RESOLVED |CLOSED
Resolution|PATCH_ALREADY_AVAILABLE |CODE_FIX
--- Comment #6 from Rafael J. Wysocki <rjw@...> 2009-08-09 23:02:25 ---
Fixed by commit 0924d942256ac470c5f7b4ebaf7fe0415fc6fa59 .
--
Configure bugmail:
------- You are receiving this mail because: -------
You are watching the assignee of the bug.
Igor <karabaja4@...> changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |karabaja4@...
--- Comment #6 from Igor <karabaja4@...> 2009-08-09 21:54:16 ---
--
Configure bugmail:
------- You are receiving this mail because: -------
You are watching the assignee of the bug.
Drivers sometimes don't call drm_vblank_init() (e.g. radeon [1],
vboxvideo [2], mga driver on out of memory condition, etc.), therefore
flip_list is left uninitialized. I did not test the patch yet.
[1]
[2]
This should have been a reply to "[PATCH] Add modesetting pageflip
ioctl and corresponding drm event", but I couldn't find a Message-ID I
could feed to git-send-email, so that it would appear as reply, sorry
for that.
---
drivers/gpu/drm/drm_fops.c | 7 ++++---
1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
index dcd9c66..a40d36b 100644
--- a/drivers/gpu/drm/drm_fops.c
+++ b/drivers/gpu/drm/drm_fops.c
@@ -459,9 +459,10 @@ int drm_release(struct inode *inode, struct file *filp)
mutex_lock(&dev->struct_mutex);
/* Remove pending flips */
- list_for_each_entry_safe(f, ft, &dev->flip_list, link)
- if (f->pending_event.file_priv == file_priv)
- drm_finish_pending_flip(dev, f, 0);
+ if (dev->num_crtcs != 0)
+ list_for_each_entry_safe(f, ft, &dev->flip_list, link)
+ if (f->pending_event.file_priv == file_priv)
+ drm_finish_pending_flip(dev, f, 0);
/* Remove unconsumed events */
list_for_each_entry_safe(e, et, &file_priv->event_list, link)
--
1.6.2.5
fix the following 'make includecheck' warning:
include/drm/drm_memory.h: linux/vmalloc.h is included more than once.
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@...>
---
include/drm/drm_memory.h | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/include/drm/drm_memory.h b/include/drm/drm_memory.h
index 63e425b..15af9b3 100644
--- a/include/drm/drm_memory.h
+++ b/include/drm/drm_memory.h
@@ -44,8 +44,6 @@
#if __OS_HAS_AGP
-#include <linux/vmalloc.h>
-
#ifdef HAVE_PAGE_AGP
#include <asm/agp.h>
#else
--
1.6.0.6
bit SDVO_OUTPUT_SVID0 is tested twice
Signed-off-by: Roel Kluin <roel.kluin@...>
---
diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c
index 5371d93..95ca0ac 100644
--- a/drivers/gpu/drm/i915/intel_sdvo.c
+++ b/drivers/gpu/drm/i915/intel_sdvo.c
@@ -1458,7 +1458,7 @@ intel_sdvo_multifunc_encoder(struct intel_output *intel_output)
(SDVO_OUTPUT_RGB0 | SDVO_OUTPUT_RGB1))
caps++;
if (sdvo_priv->caps.output_flags &
- (SDVO_OUTPUT_SVID0 | SDVO_OUTPUT_SVID0))
+ (SDVO_OUTPUT_SVID0 | SDVO_OUTPUT_SVID1))
caps++;
if (sdvo_priv->caps.output_flags &
(SDVO_OUTPUT_CVBS0 | SDVO_OUTPUT_CVBS1))
Rafael Antonio Porras Samaniego <SpOeK@...> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |RESOLVED
Resolution| |FIXED
--- Comment #8 from Rafael Antonio Porras Samaniego <SpOeK@...> 2009-08-09 03:05:35 PST ---
I've been using the KMS branch [1] for a while and the panic is gone.
[1]
--
Configure bugmail:
------- You are receiving this mail because: -------
You are the assignee for the bug.
Hi Linus,
Please pull the 'drm-fixes' branch from
ssh://master.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git drm-fixes
One real bug fix, and two quite stupid errors that people think are bugs,
because we report them.
Dave.
drivers/gpu/drm/drm_irq.c | 2 +-
drivers/gpu/drm/drm_modes.c | 2 ++
drivers/gpu/drm/i915/i915_irq.c | 4 ++--
3 files changed, 5 insertions(+), 3 deletions(-)
commit 6cb504c29b1338925c83e4430e42a51eaa43781e
Author: Frans Pop <elendil@...>
Date: Sun Aug 9 12:25:29 2009 +1000
drm/i915: silence vblank warnings
these errors are pretty pointless
Reviewed-by: Jesse Barnes <jbarnes@...>
Signed-off-by: Dave Airlie <airlied@...>
commit 8d3457ec3198a569dd14dc9e3ae8b6163bcaa0b5
Author: Paul Rolland <rol@...>
Date: Sun Aug 9 12:24:01 2009 +1000
drm: silence pointless vblank warning.
Some applications/hardware combinations are triggering the message "failed to
acquire vblank counter" to be issued up to 20 times a second, which makes it
both useless and dangerous, as this may hide other important messages.
This changes makes it only appear when people are debugging.
Signed-off-by: Paul Rolland <rol@...>
Reviewed-by: Jesse Barnes <jbarnes@...>
Lost-twice-by: Dave Airlie <airlied@...>
Signed-off-by: Dave Airlie <airlied@...>
commit 38d5487db7f289be1d56ac7df704ee49ed3213b9
Author: Keith Packard <keithp@...>
Date: Mon Jul 20 14:49:17 2009 -0700
drm: When adding probed modes, preserve duplicate mode types
The code which takes probed modes and adds them to a connector eliminates
duplicate modes by comparing them using drm_mode_equal. That function
doesn't consider the type bits, which means that any modes which differ only
in the type field will be lost.
One of the bits in the mode->type field is the DRM_MODE_TYPE_PREFERRED bit.
If the mode with that bit is lost, then higher level code will not know
which mode to select, causing a random mode to be used instead.
This patch simply merges the two mode type bits together; that seems
reasonable to me, but perhaps only a subset of the bits should be used? None
of these can be user defined as they all come from looking at just the
hardware.
Signed-off-by: Keith Packard <keithp@...>
Signed-off-by: Dave Airlie <airlied@...>
https://sourceforge.net/p/dri/mailman/dri-devel/?viewmonth=200908&viewday=9
#include <PIC16F877A.h>

void MSDelay (unsigned int);

#define Psensor PORTBbits.RB1
#define buzzer  PORTCbits.RC7

void main (void)
{
    TRISBbits.TRISB1 = 1;       // PORTB.1 as an input
    TRISCbits.TRISC7 = 0;       // make PORTC.7 an output
    while (Psensor == 1)
    {
        buzzer = 0;
        MSDelay (200);
        buzzer = 1;
        MSDelay (200);
    }
    while (1);                  // stay here forever
}

void MSDelay (unsigned int itime)
{
    unsigned int i;
    unsigned char j;
    for (i = 0; i < itime; i++)
        for (j = 0; j < 165; j++);
}
#include <PIC16F877A.h>

void MSDelay (unsigned int);

#define Psensorlow  PORTBbits.RB1   // attach low sensor to pin b1
#define Psensorhigh PORTBbits.RB2   // attach high sensor to pin b2
#define led         PORTCbits.RC6   // attach led to pin rc6
#define buzzer      PORTCbits.RC7   // attach buzzer to pin c7

void main (void)
{
    TRISBbits.TRISB1 = 1;           // PORTB.1 as an input
    TRISBbits.TRISB2 = 1;           // PORTB.2 as an input (was "= 2" in the post; TRIS bits are 0/1)
    TRISCbits.TRISC7 = 0;           // make PORTC.7 an output
    TRISCbits.TRISC6 = 0;           // make PORTC.6 an output
    while (1)                       // do this loop forever
    {
        if (Psensorlow == 1)        // if the low level sensor detects water
        {
            if (Psensorhigh == 1)   // if the high sensor is active
            {
                buzzer = 1;         // turn on the buzzer
                led = 0;            // make sure the led is off
                MSDelay (200);      // wait for 200 ms
                buzzer = 0;         // turn the buzzer off (this makes the buzzer intermittent)
                MSDelay (200);      // wait for 200 ms
            }                       // end of: high sensor is active
            if (Psensorhigh == 0)   // if high sensor is not activated
            {
                buzzer = 0;         // make sure the buzzer is off
                led = 1;            // switch the led on
            }                       // end of: high sensor is not active
        }                           // end of: low level active
    }                               // end of while loop
}                                   // end of main

void MSDelay (unsigned int itime)
{
    unsigned int i;
    unsigned char j;
    for (i = 0; i < itime; i++)
    {
        for (j = 0; j < 165; j++);
    }
}
The code will only work if water is detected at the top level; otherwise it will halt and do nothing forever.
#include <16f877A.h>
#use delay (clock = 20000000)
#fuses hs, noprotect, nowdt, nolvp

#byte PORTA = 5
#byte PORTB = 6
#byte PORTC = 7

main()
{
    set_tris_a(1);
    set_tris_b(0);
    set_tris_c(0);
    do
    {
        if (input(PIN_A1) == 0)     // condition when switch 1 is pressed
        {
            output_high(PIN_B1);    // buzzer on
            portc = 0x1;            // led at pin C0 on
        }
        else
        {
            portc = 0xaa;           // leds at port C on alternately
            output_low(PIN_B1);     // buzzer off
            output_low(PIN_C0);     // led at pin C0 off
        }
    } while (1);
}
http://www.societyofrobots.com/robotforum/index.php?topic=1954.0
a = 10

def f():
    print(1)
    print(a)  # UnboundLocalError raised here
    a = 20

f()
UnboundLocalError: local variable 'a' referenced before assignment
According to Python's documentation, the interpreter will first notice an assignment to a variable named a somewhere in the scope of f() (no matter where in the function the assignment appears) and, as a consequence, will treat a as a local variable throughout that scope. This behavior effectively shadows the global variable a.
The exception is then raised "early" because the interpreter, which executes the code line by line, encounters the print statement referencing a local variable that is not yet bound at that point (remember, Python is looking for a local variable here).
As you mentioned in your question, one has to use the global keyword to explicitly tell the compiler that the assignment in this scope is done to the global variable. The correct code would be:

a = 10

def f():
    global a
    print(1)
    print(a)  # Prints 10 as expected
    a = 20

f()
As @2rs2ts said in a now-deleted answer, this is easily explained by the fact that "Python is not merely interpreted, it is compiled into a bytecode and not just interpreted line by line".
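Because the scope decision is made at compile time, it can be observed directly. A small self-contained sketch both triggers the error and uses the dis module to show that the read of a inside f compiles to a fast-local opcode rather than a global lookup:

```python
import dis

a = 10

def f():
    print(a)   # reads the *local* a, which is still unbound here
    a = 20

# Calling f() raises UnboundLocalError before the assignment ever runs.
try:
    f()
except UnboundLocalError as exc:
    print(type(exc).__name__)   # UnboundLocalError

# The bytecode confirms the compiler's decision: the load of `a` uses a
# fast-local opcode (LOAD_FAST, or LOAD_FAST_CHECK on newer Pythons),
# not LOAD_GLOBAL, even though the assignment comes later in the body.
ops = {i.opname for i in dis.get_instructions(f)}
print(any(op.startswith("LOAD_FAST") for op in ops))   # True
```

Running dis.dis(f) shows the full listing if you want to inspect every instruction.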
https://codedump.io/share/GzphD8Lwqxo9/1/why-does-incorrect-assignment-to-a-global-variable-raise-exception-early
Popup in OpenErp??
hello everybody!!!
I work with Popups in OpenERP.
So, when i click on a button i open a new window which is of an other created model.
The problem here is that i want:
-When i click on save or not close the window,
-Put default values in the popup
-Create a report for the open window.
Here is my code:
def edit_solde(self, cr, uid, ids, employee_id, context=None):
    result = []
    mod_obj = self.pool.get('ir.model.data')
    res = mod_obj.get_object_reference(cr, uid, 'hr_payroll', 'view_hr_payslip_form')
    momo_id = self.read(cr, uid, ids, ['id', 'employee_id', 'date_from', 'date_to'])
    obj = self.pool.get('hr.payslip')
    obj_ids = obj.search(cr, uid, [('employee_id', '=', momo_id[0]['employee_id'][0])])
    result = obj.read(cr, uid, obj_ids, ['id'], context)
    ref_id = False
    for r in result:
        ref_id = r['id']
    return {
        'name': 'Solde De Tout Compte',
        'view_type': 'form',
        'view_mode': 'form',
        'view_id': [res and res[1] or False],
        'res_model': 'hr.payslip',
        'context': {'default_employee_id': momo_id[0]['employee_id'][0],
                    'default_date_from': momo_id[0]['date_from'],
                    'default_date_to_id': momo_id[0]['date_to']},
        'type': 'ir.actions.act_window',
        'nodestroy': True,
        'target': 'new',
        'flags': {'action_buttons': True},
        'res_id': ref_id,
    }
Who can help please.
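For reference, the default-value mechanism used in the snippet works by prefixing field names with default_ in the action's context. A minimal, framework-free sketch of that pattern (the helper name and arguments are hypothetical, not Odoo API):

```python
def make_popup_action(res_model, view_id, res_id, defaults):
    """Build an ir.actions.act_window-style dict that opens `res_model`
    in a modal popup ('target': 'new'). Each key in `defaults` is
    prefixed with 'default_' so the form pre-fills that field when
    the popup opens."""
    context = {'default_%s' % field: value for field, value in defaults.items()}
    return {
        'type': 'ir.actions.act_window',
        'res_model': res_model,
        'view_type': 'form',
        'view_mode': 'form',
        'view_id': [view_id],
        'res_id': res_id,
        'target': 'new',                      # open as a popup, not a full page
        'nodestroy': True,                    # keep the parent view alive
        'flags': {'action_buttons': True},    # show the form's action buttons
        'context': context,
    }

action = make_popup_action('hr.payslip', 42, 7,
                           {'employee_id': 5, 'date_from': '2014-01-01'})
print(action['context'])
# {'default_employee_id': 5, 'default_date_from': '2014-01-01'}
```

The view_id (42) and res_id (7) above are placeholders; in real code they come from get_object_reference and a search, as in the question.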
https://www.odoo.com/forum/help-1/question/popup-in-openerp-94025
package org.tigris.scarab.actions;

/* ================================================================
 * Copyright (c) 2003 CollabNet.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * ...
 * the documentation and/or other materials provided with the distribution.
 *
 * 3. The end-user documentation included with the redistribution, if
 * any, must include the following acknowlegement: "This product includes
 * software developed by CollabNet <>."
 * Alternately, this acknowlegement may appear in the software itself, if
 * and wherever such third-party acknowlegements normally appear.
 *
 * 4. The hosted project names must not be used to endorse or promote
 * products derived from this software without prior written
 * permission. For written permission, please contact info@collab.net.
 *
 * 5. Products derived from this software may not use the "Tigris" or
 * "Scarab" names nor may "Tigris" or "Scarab" appear in their names without
 * prior written permission of CollabNet.
 *
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED
 * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL COLLABNET OR ITS CONTRIBUTORS BE LIABLE FOR ANY
 * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
 * GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
 * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
 * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ====================================================================
 *
 * This software consists of voluntary contributions made by many
 * individuals on behalf of CollabNet.
 */

import junit.framework.TestCase;

/**
 * Tests {@link org.tigris.scarab.actions.Register}.
 *
 * @since Scarab 0.16.15
 */
public class RegisterTest extends TestCase
{
    private Register register;

    protected void setUp()
    {
        register = new Register();
    }

    public void testParseDomain()
    {
        assertEquals("bar.com", register.parseDomain("jon@foo.bar.com"));
    }
}
http://kickjava.com/src/org/tigris/scarab/actions/RegisterTest.java.htm
Published by Joel Penner.
IDS and .NET
Rajesh Nair & Chris Golledge, Kansas City, MO, January 30, 2004

Kansas City Informix User's Group

Agenda
– Overview of .NET
– .NET Framework Architecture
– ADO.NET Architecture
– Features of the IDS .NET Provider
– Roadmap of IDS .NET Support

Overview of .NET
.NET is a new framework for developing enterprise applications:
– Actively in use on Microsoft Windows platforms
– No single language required for development
.NET is a response to industry trends:
– Distributed Computing
– Componentization
– Enterprise Services
– Web paradigm shifts
– Maturity

Overview of .NET
– Requires a new API (provider) for access to data sources; the API is exposed as a set of .NET interfaces
– Object-oriented architecture
– "Namespaces" used to organize classes into related groups
– Managed vs. unmanaged code

.NET Framework Architecture
(diagram: Visual Studio .NET spanning the layers Web Services / ASP.NET / Win Forms, ADO.NET / XML Classes, Framework Base Classes, and the CLR)

.NET Framework Architecture
– Common Language Runtime (CLR): the runtime environment that manages memory, garbage collection, and code access security; runs managed code
– Framework Base Classes: provide I/O, collections, and threading functionality
– ADO.NET: provides the data access API
– ASP.NET, Web Services & Forms: focus on specific aspects of application development

ADO.NET Architecture
The .NET framework's data management capabilities are encapsulated into a single component: ADO.NET. ADO.NET contains:
– the Data Provider API, which interacts with the actual data source
– the content class, DataSet, which provides an application-program-level abstraction for the data to and from the data source

What is a .NET Provider?
– A runtime class library that encapsulates a data access API for use by Microsoft .NET applications
– A set of specialized classes that implement standard ADO.NET interfaces and serve as a bridge between a data source and .NET applications

ADO.NET Architecture
(diagram: a client application using the DataSet content classes (DataRelation, DataTable, DataColumn) and the .NET provider classes (DataReader, DataAdapter, Command, Transaction, Connection))

ADO.NET Architecture
– System.Data namespace: the Microsoft class library for general data access functionality
– Default "bridge" providers for use with the System.Data classes and either an OLEDB provider or an ODBC driver; these provide IDS connectivity via the Informix OLEDB or ODBC driver
– IBM.Data.Informix: the Informix namespace, which defines a class library implementing the .NET interfaces for IBM Informix data access

ADO.NET Data Retrieval
Two models:
– Connected: similar to the typical client/server model; obtains and maintains a connection with which to work
– Disconnected: by default, data is disconnected in .NET. The DataSet supports this model by presenting an in-memory view of a data source; after data is retrieved, the connection can be discarded, and changes performed on the DataSet can later be applied to the database with a new connection

Classes defined by the IDS .NET Provider
– IfxConnection: manages a connection to an Informix database
– IfxTransaction: represents a database transaction
– IfxCommand: manages execution of SQL commands
– IfxCommandBuilder: automatically generates commands, given a SELECT statement
– IfxParameter: represents a parameter to a Command object
– IfxParameterCollection: represents a collection of all parameters relevant to a Command object

Classes defined by the IDS .NET Provider
– IfxDataReader: provides forward-only, read-only access to data
– IfxDataAdapter: the boundary between the .NET provider and the non-provider content classes; builds the DataSet objects (DataTable, DataColumn, DataRow, etc.), given a query and a DataSet instance
– IfxError, IfxException, IfxErrorCollection: for error and exception processing

IfxConnection Details
A connection can be established:
– programmatically, using the ConnectionString property of the IfxConnection class
– from the user environment
– using SetNet to update the system registry database
The order of precedence is ConnectionString, user environment, and then SetNet.

Connection Attributes
User and Password, INFORMIXSERVER, Database Name, Service, CLIENT_LOCALE, DB_LOCALE, Fetch Buffer Size, Enlist (in a distributed transaction), Connection Pooling, Connection Time Out, Connection Lifetime, Minimum Pool Size, Maximum Pool Size, Persist Security Info

Data Types in the IDS .NET Provider
(data type mapping tables)

Classes for Informix Data Types
IfxDecimal, IfxDateTime, IfxTimeSpan, IfxMonthSpan, IfxRow, IfxComplexLiteral, IfxBlob, IfxClob

Using MTS with the IDS .NET Provider
– Using the CLR to implement COM+ configured classes is easier than implementing them with COM
– System.EnterpriseServices namespace: provides the .NET classes, interfaces, structures, delegates, and enumerations with access to COM+ services
– ServicedComponent class: the base class for all COM+ services
– ContextUtil class: wraps CoGetObjectContext, the COM+ API for retrieving object context

Using MTS with the IDS .NET Provider
A connection can be enlisted in a distributed transaction:
– manually: set the IfxConnection Enlist property to false; the application calls the EnlistDistributedTransaction() method to enlist in the distributed transaction
– automatically: set the IfxConnection Enlist property to true; the application sets the TransactionOption attribute to Required or RequiresNew
MS DTC is still unmanaged code:
– new "plumbing" minimizes some data type conversions
– explicit resource deallocation may be required

API Tracing with the IDS .NET Provider
– IFXDOTNETTRACE sets the level of tracing to be reported: 0 = no tracing; 1 = reports API call entry and exit; 2 = level 1 plus function parameter values
– IFXDOTNETTRACEFILE sets the log file where trace information is to be written

Integration with Microsoft Visual Studio .NET
– Emphasis on improving the Visual Studio .NET development experience when using IBM (DB2 and IDS) databases
– IBM connections and database projects are depicted within Visual Studio: the Data Connections folder, the Server Explorer tab, and IBM database projects in the Solution Explorer
– A future release will add support for an IDS SQL Editor, an IDS Query Builder, and content and dynamic help

Installation and Deployment
– The IDS .NET Provider requires .NET Framework version 1.1
– The IDS .NET Provider will be included with the CSDK (development) and I-Connect (runtime) distributions
– Requires a stored procedure to be installed in the sysmaster database of each server instance

Roadmap of IDS .NET Support
– The IDS .NET Provider completed a technology preview in Q4 '03
– The IDS .NET Provider is currently available in the CSDK 2.81.TC3 release
– Subsequent releases of the provider will add additional functionality; the release mechanism will remain the CSDK and I-Connect products

Questions?

Where to Get More Information
– IBM developerWorks Informix zone page: (www7b.boulder.ibm.com/dmdd/zones/informix/)
– Microsoft .NET Framework home page
– IBM Informix .NET Provider product documentation
– Contact us!
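The connected-versus-disconnected distinction described in the data-retrieval slide is not specific to ADO.NET. A minimal Python sqlite3 sketch (with a streamed cursor standing in for IfxDataReader and a materialized result list standing in for a DataAdapter-filled DataSet) illustrates the two connection lifetimes:

```python
import sqlite3

# Connected model: the connection must stay open while a cursor is
# iterated (analogous to IfxDataReader's forward-only stream).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])

cursor = conn.execute("SELECT id, name FROM t ORDER BY id")
streamed = [row for row in cursor]   # connection required during iteration

# Disconnected model: materialize the whole result set in memory
# (analogous to filling a DataSet), then discard the connection and
# keep working on the in-memory snapshot.
snapshot = conn.execute("SELECT id, name FROM t ORDER BY id").fetchall()
conn.close()

print(streamed == snapshot)   # True: same data, different lifetimes
```

In the disconnected model, changes made to the snapshot would be written back later over a fresh connection, which is exactly the DataSet/DataAdapter workflow the slides describe.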
http://slideplayer.com/slide/4036105/
With our latest Unity Core Assets release, we're excited to unveil full support for the Unity 5.4 beta, which features native support for the HTC Vive. This is the fourth release since the previous installment in this series, when we shared some background on the ground-up re-architecting of our hand data pipeline. Today we're going to look under the surface and into the future.
As our Orion beta tracking evolves, we’ve continued behind the scenes to hammer on every aspect of our client-side data handling for performance and efficiency. We’re also actively refining developer-side workflows and expanding the features of our toolset for building hand-enabled VR experiences. And in some cases, we’re adding features to the Core Assets to support upcoming Unity Modules.
For this release, we focused on our HandPool class. This Unity MonoBehavior component not only manages the collection of hand models for the rest of the system, it defines much of the workflow for the way you add hands to your scene. This release brings some refinements but also a significant new feature – the ability to have multiple pairs of hand models and to easily enable or disable those pairs at runtime.
While working on demonstration projects here at Leap Motion, we’ve found ourselves wanting to use different sets of hands for a variety of reasons. For complex graphical representations, it might be helpful to have hand models that only provide shadows, or only provide glows in addition to the main hands. A superhero game could benefit from the flexibility of completely different iHandModel implementations for fire hands versus ice hands. And some experiences might benefit from different hands for different UI situations.
The HandPool component, located on the same GameObject as the LeapHandController and LeapServiceProvider components, now has an exposed Size value for our ModelPool’s list of ModelGroups. Setting this value allows you to define how many pairs of hands you’d like in your scene. If you change this number from 2 to 3, slots for another pair of models will appear. You can assign a name for your new model pair so you can refer to it at runtime.
As in previous versions of Core Assets, you can drag iHandModel prefabs from the Project window to be children of the LeapHandController GameObject in the Hierarchy window. When you do this, the iHandModel component in the model prefab receives a default Leap hand pose. Since all Leap hand models inherit from the iHandModel class, this means that each pair of hands will align with the others.
You can test this by adding two DebugHand prefabs to your LeapHandController. In the Inspector for each DebugHand, you can set the Handedness to Left and Right. Then drag these children into their new slots. For the iHandModels to align, just be sure that the local translations and rotations of your iHandModel’s transform are zeroed out.
We’ve also improved the DebugHand script to show the rotation basis for each Leap Bone. These are immediately visible in the Scene window, but there’s a trick that allows you to view them in the game window as well. If the Gizmos button at the top right of the Game window is enabled and you select the LeapHandController in the Hierarchy window, you can view collider-based physics hands as well as the new Debug hands.
Using the Debug hands in this way can be helpful for – wait for it – debugging your other hands, to verify they're lining up with Leap Motion data! We hope this will be a helpful workflow when you're building your own iHandModel implementations.
The new multiple hands feature becomes even more powerful with the added ability to enable and disable pairs at runtime. In the Inspector, you can set the IsEnabled boolean value for each model pair. This will control whether those models are used when you Start your scene. But more importantly, you can enable and disable these pairs at runtime with HandPool's EnableGroup() and DisableGroup() methods.
Here’s a simple script you can attach to the LeapHandController. It will allow you to use the keyboard to enable and disable groups:
using UnityEngine;
using System.Collections;
using Leap.Unity;

public class ToggleModelGroups : MonoBehaviour {
  private HandPool handPool;

  void Start() {
    handPool = GetComponent<HandPool>();
  }

  void Update () {
    if (Input.GetKeyDown(KeyCode.U)) {
      handPool.DisableGroup("Graphics_Hands");
    }
    if (Input.GetKeyDown(KeyCode.I)) {
      handPool.EnableGroup("Graphics_Hands");
    }
  }
}
Refactoring the HandPool class to support these new features while maintaining and improving encapsulation required some scrutiny and iteration. This work also allowed us to simplify the developer-facing UI we exposed in the Inspector. Where previous versions had the notion of a ModelCollection which populated our ModelPool at runtime, the new workflow is to add iHandModels directly to the HandPool simplifying the code and UI simultaneously.
To watch the ModelPool system at work and get a solid understanding of the system (like we did in the previous blog post), you can comment out the [HideInInspector] tag above the modelList variable on line 39 of HandPool.cs. Each pair of iHandModels is part of a ModelGroup class, whose modelList gets populated at runtime. When a new Leap Hand starts tracking, an iHandModel is removed from the modelList; when that Leap Hand stops tracking, it is returned to the modelList – and therefore the ModelPool.
Each model pair has a CanDuplicate boolean value that works in tandem with iHandModel’s public Handedness enum. When CanDuplicate is set to True, this provides some flexibility to the Leap Motion tracking by allowing more than one copy of a Right or Left iHandModel. This can allow hands to initialize slightly faster in some cases, but also lets you create scenes where you’d like other users to put their hands in as well. Setting this to False allows you to ensure that only one Right and Left hand will be used at any time, which is useful if you’re going to drive the hands of a character or avatar.
Finally, our further refactoring has allowed us to relax the requirement that HandPool receive only instances from the scene Hierarchy. Prefabs can once again be dragged directly from the Project window directly into HandPool’s iHandModel slots. While this removes our ability to visualize the hand in the Scene view during edit time, we’re striving to allow the most flexibility for all sorts of workflows.
These new features are already allowing us to experiment with and demonstrate new use cases. But more importantly, they’re immediately providing the basis for new Unity Modules currently under construction. These will unlock new features and workflows like hand models, the ability to create your own hand models, user interface assets to streamline the creation of wearable menus in VR, and more.
Barrett Fox / An interaction engineer at Leap Motion, Barrett has been working at the intersection of game design, information visualization and animation craft for 20 years as a producer, game designer, and animator.
http://www.shellsec.com/news/17384.html
refinedweb
|
Red Hat Bugzilla – Bug 125979
/usr/include/X11/extensions/dpms.h cannot be used in C++ code
Last modified: 2007-11-30 17:10:44 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6)
Gecko/20040612 Firefox/0.8
Description of problem:
When using dpms.h in a C++ program, the code compiles and links
correctly, but when the program is run the DPMS symbols have been
mangled by C++, the DPMS* functions can't be located, and the
program crashes. Suggest putting an #ifdef __cplusplus / extern "C"
guard in the dpms.h header to correct this problem.
Example from xpm.h, which works correctly with C++
#ifdef __cplusplus
extern "C" {
#endif
Version-Release number of selected component (if applicable):
xorg-x11-devel-6.7.0-2
How reproducible:
Always
Steps to Reproduce:
1. create a c++ program that uses DPMSQueryExtension
2. compile and link program
3. execute program
Actual Results: relocation error:
undefined symbol: _Z18DPMSQueryExtensionP9_XDisplayPiS1_
Expected Results: Program should execute correctly.
Additional info:
When compiling against C code the program works correctly.
In my program I did this and it solved the problem
#ifdef DPMSExtension
#include <X11/Xlib.h>
#ifndef DPMS_SERVER
#include <X11/X.h>
#include <X11/Xmd.h>
extern "C" Bool DPMSQueryExtension(Display *, int *, int *);
extern "C" Bool DPMSCapable(Display *);
extern "C" Status DPMSInfo(Display *, CARD16 *, BOOL *);
extern "C" Status DPMSEnable(Display *);
#endif
#endif
I don't want to make such a change to our X header files unless
it's been approved by upstream first, as there are risks involved
within a stable release cycle, however if you file a bug report
in the X.Org bugzilla and
carbon copy me on the bug report, I'll track the issue in their
bugzilla instead, and will consider backporting any changes they
make in CVS.
Thanks for the report!
Opened bug #830 at freedesktop
Above link points to 830 in Red Hat bugzilla.
Upstream URL link is:
Setting bug status to "UPSTREAM" and tracking in fd.o bugzilla.
Thanks.
Status update: The upstream bug report has not yet received any
comments from developers. You may wish to discuss the issue with
them on xorg mailing lists, or update the upstream report with
additional comments.
I have submitted a patch to the bugzilla report. Hopefully it will
be accepted.
Was committed to upstream CVS for a while now. Resolving as rawhide...
|
https://bugzilla.redhat.com/show_bug.cgi?id=125979
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Finding memory leaks in a non-MFC app can be difficult
because the debugger doesn't display path and line number
info in the debug window. But, it's quite easy to
enable that info. Here's how:
Copy this to a file called LeakWatcher.h
#ifndef IMWATCHINGYOULEAK
#define IMWATCHINGYOULEAK

#include <crtdbg.h>

#ifdef _DEBUG
// inline, so the header can be included from several source files
// without violating the one-definition rule
inline void* operator new(size_t nSize, const char* lpszFileName, int nLine)
{
    // forward to the debug-CRT operator new (_NORMAL_BLOCK == 1)
    return ::operator new(nSize, _NORMAL_BLOCK, lpszFileName, nLine);
}

#define DEBUG_NEW new(THIS_FILE, __LINE__)
// no trailing semicolon here, or the macro breaks inside expressions
#define MALLOC_DBG(x) _malloc_dbg(x, _NORMAL_BLOCK, THIS_FILE, __LINE__)
#define malloc(x) MALLOC_DBG(x)
#endif // _DEBUG

#endif // #include guard
Call _CrtDumpMemoryLeaks(); at the end of your program
(or wherever you want to see outstanding memory allocations).
_CrtDumpMemoryLeaks();
Add this to each of your source files (after your last #include) :
#include "LeakWatcher.h"
#ifdef _DEBUG
#define new DEBUG_NEW
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
#endif
This does the same thing an MFC app does -
it provides the path and line number of each allocation
to the C-runtime allocation routines. Those routines store
the path and line number for each allocation, and spit
them out when you call _CrtDumpMemoryLeaks();. You should
recognize that little chunk of code in step 3 from every source
file that the AppWizard (and ClassWizard, mostly) has ever created.
#define _CRTDBG_MAP_ALLOC will provide file/line info for leaks caused by
malloc, but for leaks with new it ends up telling you that the leak occurred
in crtdbg.h (because that's where MS defined their new) - not really useful.
This represents about ten minutes of searching
through Microsoft's C runtime and MFC source files.
So, all of the credit goes to MS. But, all of the
blame goes to MS, too, for not making this part of the non-MFC headers.
Anyway, have fun and be excellent to each other.
As an alternative to calling _CrtDumpMemoryLeaks() yourself, you can call
_CrtSetDbgFlag(_crtDbgFlag | _CRTDBG_LEAK_CHECK_DF); early in the program,
and the CRT will dump any outstanding allocations automatically at exit.
|
https://www.codeproject.com/articles/2319/detailed-memory-leak-dumps-from-a-console-app?fid=3947&df=90&mpp=10&sort=position&spc=relaxed&tid=4340768
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Hello
As a rather recent bike enthusiast, I have recently come across Kawasaki's ZX-6R available for sale at approx Rs 850K. While the mileage is almost 8,000 km, the bike is still unregistered. Other than that, the bike looks to be in pristine condition.
I would appreciate some guidance on buying the bike, like the things to look for in a bike before buying one etc. As I'm in Lahore, some referrals will be much appreciated.
Regards
hello...im foosa...nice to see another biker here...
now for the bike...i think the asking price is wayyyyyyyyyy too much for the bike...the bike should be around 500k or even 550k depending upon the condition...other factors come in when buying a bike but since you mentioned ur new to all this i'll state only the basic one...BUY A BIKE WHICH YOU CAN CONTROL
here are a couple of good bikes for beginners
a suzuki gs-500f price=525000 (including all taxes)
an Aprilia RS-125 (a very, very fast 125 - price 250k including all taxes)
another good bike is the 2 stroke aprilia RS-250...but to my surprise they stopped making new ones :|...my bad...
and this info is for you all of you...suzuki launched the new GSX650F...not gsxr...but gsx650f...and it costs 700k for a brand new 2008 model including all taxes
here is a pic
Thank you foosa for the feedback.
As you have mentioned that aprilia has discontinued making 250CC bikes, what other alternative do you suggest?
Since aprilia is no more available, I'll try negotiating with the seller of Kawasaki but I doubt it that he will accept an offer of Rs 500K. However, if you know him personally, then please put in a word for me, which may help in the negotiation.
Welcome to the zoo.. Since how long have you been driving bikes? No matter for how long, I wouldn't suggest starting from a 600cc directly. As I mentioned on some other thread, jumping from a normal bike to a 600cc directly is like shifting from a glider to an F-16, you know the results. Control is everything and you need some time to get on it aptly. That price for a non-registered 600 is way too much, however a person like me would pay a bit more if the condition is superb. About Aprilias, I don't think there is even a single bike by Aprilia in Pakistan, plus the Italian bikes can be costly in the long run. That being said, good luck with the purchase, do share the pics when the purchase has been made.
Well then, thanks for pointing me in the right direction. Now it seems I have to hunt for a 250cc. Any fellow member here who has a 250cc bike to spare?
You are not looking for brand new ones? Kawasaki is coming with a brilliant 250 ZXR model this year. Should be a great choice.
Yes I read about it on the internet. I agree its an option worthwhile exploring.
Its retailed at USD 3,500 (PKR 217,000) and with duties and other charges, I figure it'll cost me around PKR 400,000? Is there any way to get a precise estimate of its landed price?
rats...how could i forget my favrite zx-2r :|..bad bad foosa...
hamie a brand new 2008 zx-2r would cost u 325K including all taxes ...and its one kick ass bike i assure ya (Y)...also try to persuade 7thgear here into selling his bike :P...
@7th..
thanx for reminding about the zx2r ninja...coz i was totally into getting a yamaha fzr-400EXUP....i have 250k in bank...so need another 100k and i'll surely have the zx-2r
hey hamie i have a kawasaki zx6r and its a very kool bike i bought it for 600000 and i am selling it now if interested do email me at sfjafri27@yahoo.com will give you a gud price
@foosa
I was rather surprised as to how you could forget the 2R being a Kawi fan, as its such a nice machine.
@Hamie
Its always upto the purchaser what he wants for a beginner's bike. But this is why I wouldn't suggest a 600 to start with. Learning to control is everything.
Aprilia 125 is looking great. is it 2 Stroke or 4 Stroke???
Just checkout this Video
@7th gear... I agree so thats why i'm trying to find a dealer who'll help me in importing a Kawasaki ZX-2R.
@foosa... hey man, the aftab guy i told you about gave me an estimate of approx Rs 475K for the above bike.
@saya88... sure i'll be interested. Can you pls post some pictures of your bike here and share some details like mileage, accidental history (try being honest), year of make etc. Much appreciate it.
@foosa again... i'll call you.
^Lets hope you do import one soon..
ha ha ha... yeah!
import we shall...and import we must...break we shall the monopoly of the local sports bikes dealers :P...ALL HAIL MEGATRON (Y)
my personal view, any bike below 600cc looks ugly to me. If you want a real sports bike, which also looks and sounds like a real sports bike, go for at least a 600cc bike. And regarding controlling a 600cc bike, I have known at least 2 ppl who had yamaha yzf-r6 and cbr-600f4 as their first bike. Both of these guys never drove even a normal cd70 in their life. And if you ride these bikes at normal cruising speeds you wont have any problem. Just dont try to race.
I would still say get a 600cc R6 preferably. It will look and sound good. You will have real fun on it and it would surely be a head turner.
O! maverick! you spoke rather the very words that resonate with how I feel right now. Who knows, I might end up buying a 600cc at the end of the day. Being a tall, heavy built (180 lbs) man, I am a bit apprehensive about buying a 250cc.
I am also considering Suzuki's GS-500F (recommended by foosa above) as a viable option. Lets see... working on it.
I would like to add my views in addition to support Maverickk... control is in your hands and mind ..... just dun't let ur craziness drive ur bike while you are on it ....... in my personal experience even a heavier bike should not be a matter for u to control.... i moved from 100cc yamaha to 750 katana which weighs 480 lbs (dry weight) but was not very difficult to control.... and my self only 150 lbs.... so gud luck watever u get.....
hamir, dont even think of buying a 250cc bike. It's a waste of money. Believe you me. Buy the real thing. At least a 600cc bike by yamaha, honda, suzuki or kawasaki - all will look and sound good and will be a head turner. No one looks at you on a 250cc bike bhai. Anything below 600cc is a waste of money.
And as attiq said control is in your hands. Doesnt matter what your age is. Belive you me, i have seen a 18 year old riding a yamaha R6 in lahore at fairly good speed, i was at above 100km/h on car following him and he must be well above that speed. As long as you are careful, wont matter at all. YOu can even handle hayabusa 1300cc. Makes no difference.
My personal choice would be yamaha R6. Rest is up to you. But plz dont waste your money on anything below 600cc.Good luck with your purchase. Do post pictures of your bike when you get it.
|
https://www.pakwheels.com/forums/t/looking-to-buy-kawasaki-zx-6r/57986
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
The first thing that one must know about functions is that they are not essential. One can still write a
program without any functions; however, when debugging, all that unsorted code can be very confusing,
especially for beginners.
One should use functions in their programs because they allow one to break each part of one's program into
manageable chunks.
The syntax for a function is as follows:
Code:
def NAME(PARAMETERS):  # Don't forget that colon, a lot of people do. :)
    """Purpose of Function"""  # Indent 4 spaces like everything else.
    STATEMENTS
The first thing that we can see is the 'def'. This tells Python that this is a function, and it will be skipped over
until it is called.
The next thing here is the name of the function. This should be clear, and memorable. I highly recommend that
one never has a function name that is the same as the name of a global variable. One can, but it is considered
to be bad form, and is very confusing for one as the programmer.
Next we have the parameters of the function. It is customary for parameters to be single letters that
represent what needs to go into your function. The values of these parameters are specified when the function
is called.
This is optional, but it is a very good idea to state the purpose of one's function in triple quotation marks,
as when programming you can see what the function is supposed to do. You may be wondering why we did not simply
leave a comment with '#'. We use the quotes because the purpose of the function can be retrieved later (for
example with help()), whereas a standard comment has no effect at all.
Next we have the statements. These will manipulate the values and change them. You know, just print commands
and stuff.
Lets move on to some examples, and some common mistakes.
Code:
def squarenum(n):
    """Return the square of any number"""
    return n ** 2

print(squarenum(5))
This type of function is called a fruitful function, because the function returns a value. Now a lot of people
make this mistake in functions like these, so read carefully.
A lot of novices will do this:
Code:
def squarenum(n):
    """Return the square of any number"""
    result = n ** 2

print(result)
This will raise an error. Why? Because 'result' is what is called a local variable. It is removed when the
function ends, so when this person tries to print 'result' outside the function, they get an error saying it
was not defined. The way that works is shown first, where I print what the function hands back via the return command.
Another common error is when people do not use the names of the parameters in the body of their functions, but
instead use the name of a global variable, so the value passed in as a parameter is ignored.
Code:
def squarenum(n):
    """Return the square of any number"""
    return number ** 2  # Another dumb mistake. :P  (uses the global, ignores n)

number = int(input("Please put a number"))
print(squarenum(5))
So that was my tutorial on functions. I hope this will help you make and improve your own programs!
|
http://python-forum.org/viewtopic.php?p=607
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
@Target(value=TYPE) @Retention(value=RUNTIME) @Documented @Import(value=TransactionManagementConfigurationSelector.class) public @interface EnableTransactionManagement
Enables Spring's annotation-driven transaction management capability, similar to the support found in the
<tx:*> XML namespace. To be used on @Configuration classes. In both of the scenarios shown in the Javadoc
examples, a PlatformTransactionManager is configured for the transaction infrastructure to use.
|
http://docs.spring.io/spring-framework/docs/3.2.0.RC2/api/org/springframework/transaction/annotation/EnableTransactionManagement.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Difference between revisions of "ECE597 Lab 1 Wiring and Running the Beagle"
Latest revision as of 20:48, 27 October 2011
Here's what you need to do to get your Beagle running for the first time.
- Wire your Beagle as shown
- Insert the SD card
- Plug in the 5V power
- Login and look around. Left-click on the background to get a menu.
- Try starting an XTerm.
- Try some basic Linux commands
ls, cd, less, mplayer, ls /proc, ls /sys
/proc and
/sys contain files that map to the hardware.
- Try
cd /sys/class/leds
ls
cd beagleboard::usr0
ls
cat trigger
echo none > trigger
echo 1 > brightness
cat trigger tells you what options you can set trigger to. Try some of them. Explore
/sys and
/proc to see what else you can find.
Homework
There are many interesting programs already compiled for the Beagle that aren't on the SD card we gave you. New applications can be easily installed using the
opkg package manager. When you are back home and connected to the network try:
opkg list | less
opkg install gcc
The first command lists the packages you can install. The second command installs the gcc C compiler. With a compiler installed you can take your favorite C program and compile and run it on the Beagle. Here is the embedded version of Hello World as presented in The Embedded Linux Primer.
#include <stdio.h>

int bss_var;          /* Uninitialized global variable */
int data_var = 1;     /* Initialized global variable */

int main(int argc, char **argv)
{
    void *stack_var;               /* Local variable on the stack */

    stack_var = (void *)main;      /* Don't let the compiler */
                                   /* optimize it out */

    printf("Hello, World! Main is executing at %p\n", stack_var);
    printf("This address (%p) is in our stack frame\n", &stack_var);

    /* bss section contains uninitialized data */
    printf("This address (%p) is in our bss section\n", &bss_var);

    /* data section contains initialized data */
    printf("This address (%p) is in our data section\n", &data_var);

    return 0;
}

Compile and run it:

yoder@beagleboard:~$ gcc helloBeagle.c
yoder@beagleboard:~$ ./a.out
Hello, World! Main is executing at 0x8380
This address (0xbefa1cf4) is in our stack frame
This address (0x10670) is in our bss section
This address (0x10668) is in our data section
This is a program I use when talking about the address spaces on the Beagle and virtual memory.
Try some of your own C programs. See how well they run. If you come up with something interesting, add them to the wiki.
|
http://elinux.org/index.php?title=ECE597_Lab_1_Wiring_and_Running_the_Beagle&diff=71497&oldid=18360
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Hi Kathleen,
> >And the UI code has little to do with window
> > procedures and messages; there no real advantage from using the WinForms
> > designer. If
> > there had been classes like those in the WinForms namespace all along,
> > nobody would have
> > taken VB's UI capabilities seriously. We're just used to it, that's all.
>
> "No real advantage from using the WinForms designer"?? Are you exaggerating
> or do you really see no advantage? I think that drag and drop visual
> development is still the best thing going in the programming world (have
> created enough forms manually in my life. Add to that the resizing and
> inheritance capabilities of VB.net and you have a far more powerful
> mechanism for Windows development than we have had before.
Well, I exaggerated; maybe I don't like the designer because it's ... v e r y s l o w ...
in Beta 2. Maybe it's because I touch-type and hate to use the mouse ...
Mostly, form creation comes down to setting a few properties and hooking up handlers. It
depends on the type of form (data form or document like form), the number of controls,
whether layout can be "automatic" (using the Dock property). Often, using the designer
will be faster. But mainly, when I have to talk to the controls in the code anyway, I
might as well get used to it right away. IntelliSense does most of the typing, and the
"specifics" (like a button caption) have to be typed anyway.
I think the inheritance issue is important. The designer creates one big "kitchen sink"
sub, called "InitializeComponent". If I design a form that allows for inheritance, I'd
like to have more fine-grained control over the form creation/initialization process. For
example, a form base class might create some default context menu, but inheritors might
want to - not only extend the context menu - but maybe replace its items altogether. So,
in this case, protected overridable subs that break the whole thing into smaller pieces
can be useful.
Regards,
Gregor
|
http://forums.devx.com/showthread.php?53180-Re-A-moderate-view-Adressing-Control-Arrays&p=180391
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
#include <Conditional.h>
Inheritance diagram for makefile::ConditionalDEF:
[inline]
Definition at line 114 of file Conditional.h.
[inline, virtual]
Definition at line 117 of file Conditional.h.
[inline, private, virtual]
Output the header line, like ifeq (arg1, arg2).
Implements makefile::Conditional.
Definition at line 109 of file Conditional.h.
References cstr, and makefile::string.
[private]
Definition at line 108 of file Conditional.h.
|
http://hugin.sourceforge.net/docs/html/classmakefile_1_1ConditionalDEF.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Other posts in this series:
And now for a bit more controversial shift. While most folks doing DI in ASP.NET MVC see the benefit of the ability to provide injection around the controller side of things (filters, action results, controllers etc.), I've also seen a lot of benefit from injection on the view side. But before we delve into the how, let's first look at the why.
The responsibility of a view is to render a model. Pretty cut and dry, until you start to try and do more interesting things in the view. Up until now, the CODE extensibility points of a view included:
- XyzHelper extensions (UrlHelper, AjaxHelper, HtmlHelper)
- Custom base view class with custom services
I’m leaving out markup extensibility points such as MVC 2 templated helpers, master pages, partials, render action and so on. If we want our custom helper extension method to use a service, such as a custom Url resolver, localization services, complex HTML builders and so on, we’re left with service location, looking something like this:
public static MvcHtmlString Text<TModel>(this HtmlHelper<TModel> helper, string key)
{
    var provider = ObjectFactory.GetInstance<ILocalizationProvider>();
    var text = provider.GetValue(key);

    return MvcHtmlString.Create(text);
}
We started seeing this sort of cruft all over the place. It became clear quite quickly that HtmlHelper extensions are only appropriate for small, procedural bits of code. But as we started building customized input builders (this was before MVC 2’s templated helpers and MVC Contrib’s input builders), the view started becoming much, much more intelligent about building HTML. Its responsibilities were still just “build HTML from the model”, but we took advantage of modern OO programming and conventions to drastically reduce the amount of duplication in our views.
But all of this was only possible if we could inject services into the view. Since MVC isn’t really designed with DI everywhere in mind, we have to use quite a bit of elbow grease to squeeze out the powerful designs we wanted.
Building an injecting view engine
Our overall strategy for injecting services into the view was:
- Create a new base view class layer supertype
- Expose services as read/write properties
- Utilize property injection to build up the view
Since we’re using the WebFormsViewEngine, we don’t really have any control over view instantiation. We have to use property injection instead. That’s not a big hurdle for us here as it was in other places. We’re not instantiating views in unit tests, which is where property injection usually causes confusion.
First, we need to subclass the existing view engine and plug in to its existing behavior:
public class NestedContainerViewEngine : WebFormViewEngine
{
    public override ViewEngineResult FindView(
        ControllerContext controllerContext,
        string viewName,
        string masterName,
        bool useCache)
    {
        var result = base.FindView(controllerContext, viewName, masterName, useCache);

        return CreateNestedView(result, controllerContext);
    }

    public override ViewEngineResult FindPartialView(
        ControllerContext controllerContext,
        string partialViewName,
        bool useCache)
    {
        var result = base.FindPartialView(controllerContext, partialViewName, useCache);

        return CreateNestedView(result, controllerContext);
    }
We’re going to use the base view engine to do all of the heavy lifting for locating views. When it finds a view, we’ll create our ViewEngineResult based on that. The CreateNestedView method becomes:
private ViewEngineResult CreateNestedView(
    ViewEngineResult result,
    ControllerContext controllerContext)
{
    if (result.View == null)
        return result;

    var parentContainer = controllerContext.HttpContext.GetContainer();
    var nestedContainer = parentContainer.GetNestedContainer();
    var webFormView = (WebFormView)result.View;
    var wrappedView = new WrappedView(webFormView, nestedContainer);
    var newResult = new ViewEngineResult(wrappedView, this);

    return newResult;
}
We want to create a nested container based on the calling controller's nested container. Our previous controller factory used a static gateway to store the outermost nested container in HttpContext.Items. To make it visible to our view engine (which is a SINGLETON), we have no choice but to build a little GetNestedContainer extension method on HttpContextBase to retrieve our nested container.
Once we have the outermost nested container, we create a new, child nested container from it. Containers can nest as many times as we like, inheriting the parent container configuration.
From there, we then need to build up our own IView instance, a WrappedView object. Unfortunately, the extension points in the WebFormView class do not exist for us to seamlessly extend it to provide injection. However, since MVC is open source, we have a great starting point.
After we build our WrappedView, we create our ViewEngineResult and our custom view engine is complete. Before we look at the WrappedView class, let’s look at how our views will be built.
Layer supertype to provide injection
To provide injection of services, we’ll need a layer supertype between our actual views and the normal MVC ViewPage and ViewPage<T>:
public abstract class ViewPageBase<TModel> : ViewPage<TModel>
{
    public IHtmlBuilder<TModel> HtmlBuilder { get; set; }
}

public abstract class ViewPageBase : ViewPageBase<object>
{
}
Here, we include our custom IHtmlBuilder service that will do all sorts of complex HTML building. We can include any other service we please, but we just need to make sure that it’s a mutable property on our base view layer supertype. The HtmlBuilder implementation does nothing interesting, but includes a set of services we want to have injected:
public class HtmlBuilder<TModel> : IHtmlBuilder<TModel>
{
    private readonly HtmlHelper<TModel> _htmlHelper;
    private readonly AjaxHelper<TModel> _ajaxHelper;
    private readonly UrlHelper _urlHelper;

    public HtmlBuilder(
        HtmlHelper<TModel> htmlHelper,
        AjaxHelper<TModel> ajaxHelper,
        UrlHelper urlHelper)
    {
        _htmlHelper = htmlHelper;
        _ajaxHelper = ajaxHelper;
        _urlHelper = urlHelper;
    }
When we configure our container, we want any service used to be available. By configuring the various helpers, we allow our helpers to be injected instead of passed around everywhere in method arguments. This is MUCH MUCH cleaner than passing context objects around everywhere we go, regardless of whether they're actually used or not.
Wrapping WebFormView to provide injection
As I mentioned before, we can’t subclass WebFormView directly, but we can instead wrap its behavior with our own. Composition over inheritance, but we still have to duplicate more behavior than I would have liked. But, it’s about the cleanest and lowest-impact implementation I’ve come up with, and gets around any kind of sub-optimal poor-man’s dependency injection.
First, our WrappedView definition:
public class WrappedView : IView, IDisposable
{
    private bool _disposed;

    public WrappedView(WebFormView baseView, IContainer container)
    {
        BaseView = baseView;
        Container = container;
    }

    public WebFormView BaseView { get; private set; }
    public IContainer Container { get; private set; }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;

        if (Container != null)
            Container.Dispose();

        _disposed = true;
    }
We accept the base view (a WebFormView created from the original WebFormsViewEngine), as well as our nested container. We need to dispose of our container properly, so we implement IDisposable properly.
Now, the next part is large, but only because I had to duplicate the existing MVC code to add in what I needed:
public void Render(ViewContext viewContext, TextWriter writer)
{
    if (viewContext == null)
    {
        throw new ArgumentNullException("viewContext");
    }

    object viewInstance = BuildManager.CreateInstanceFromVirtualPath(BaseView.ViewPath, typeof(object));

    if (viewInstance == null)
    {
        throw new InvalidOperationException(
            String.Format(
                CultureInfo.CurrentUICulture,
                "The view found at '{0}' was not created.",
                BaseView.ViewPath));
    }

    ////////////////////////////////
    // This is where our code starts
    ////////////////////////////////
    var viewType = viewInstance.GetType();
    var isBaseViewPage = viewType.Closes(typeof(ViewPageBase<>));

    Container.Configure(cfg =>
    {
        cfg.For<ViewContext>().Use(viewContext);
        cfg.For<IViewDataContainer>().Use((IViewDataContainer)viewInstance);

        if (isBaseViewPage)
        {
            var modelType = GetModelType(viewType);
            var builderType = typeof(IHtmlBuilder<>).MakeGenericType(modelType);
            var concreteBuilderType = typeof(HtmlBuilder<>).MakeGenericType(modelType);
            cfg.For(builderType).Use(concreteBuilderType);
        }
    });

    Container.BuildUp(viewInstance);
    ////////////////////////////////
    // This is where our code ends
    ////////////////////////////////

    var viewPage = viewInstance as ViewPage;

    if (viewPage != null)
    {
        RenderViewPage(viewContext, viewPage);
        return;
    }

    ViewUserControl viewUserControl = viewInstance as ViewUserControl;

    if (viewUserControl != null)
    {
        RenderViewUserControl(viewContext, viewUserControl);
        return;
    }

    throw new InvalidOperationException(
        String.Format(
            CultureInfo.CurrentUICulture,
            "The view at '{0}' must derive from ViewPage, ViewPage<TViewData>, ViewUserControl, or ViewUserControl<TViewData>.",
            BaseView.ViewPath));
}
I’m going to ignore the other pieces except what’s between those comment blocks. We have our ViewPageBase<TModel>, and we need to configure various services for our views, including:
- ViewContext
- Anything needed by the helpers (IViewDataContainer)
- Custom services, like IHtmlBuilder<TModel>
Just like our previous nested container usage, we take advantage of StructureMap’s ability to configure a container AFTER it’s been created. We first configure ViewContext and IViewDataContainer (needed for HtmlHelper). Finally, we determine the TModel model type of our view, and configure IHtmlBuilder<TModel> against HtmlBuilder<TModel>. If TModel is of type Foo, we configure IHtmlBuilder<Foo> to use HtmlBuilder<Foo>.
Finally, we use the BuildUp method to perform property injection. Just as we explicitly configured property injection for our filter’s services, we need to do the same for the view’s services:
SetAllProperties(c =>
{
    c.OfType<IActionInvoker>();
    c.OfType<ITempDataProvider>();
    c.WithAnyTypeFromNamespaceContainingType<ViewPageBase>();
    c.WithAnyTypeFromNamespaceContainingType<LogErrorAttribute>();
});
The view services are all contained in the namespace of the ViewPageBase class. With that in place, we just have one more piece to deal with: services in master pages.
Dealing with master pages
In the RenderViewPage method, we add a piece to deal with master pages and enable injection for them as well:
private void RenderViewPage(ViewContext context, ViewPage page)
{
    if (!String.IsNullOrEmpty(BaseView.MasterPath))
    {
        page.MasterLocation = BaseView.MasterPath;
    }

    page.ViewData = context.ViewData;
    page.PreLoad += (sender, e) => BuildUpMasterPage(page.Master);
    page.RenderView(context);
}
Because master pages do not flow through the normal view engine, we have to hook in to their PreLoad event to do our property injection in a BuildUpMasterPage method:
private void BuildUpMasterPage(MasterPage master)
{
    if (master == null)
        return;

    var masterContainer = Container.GetNestedContainer();
    masterContainer.BuildUp(master);

    BuildUpMasterPage(master.Master);
}
If we needed any custom configuration for master pages, this is where we could do it. In my example, I don’t, so I just create a new default nested container from the parent container. Also, master pages can have nesting, so we recursively build up all of the master pages in the hierarchy until we run out of parent master pages.
Finally, we need to hook up our custom view engine in the global.asax:
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
    StructureMapConfiguration.Initialize();

    var controllerFactory = new StructureMapControllerFactory(ObjectFactory.Container);
    ControllerBuilder.Current.SetControllerFactory(controllerFactory);

    ViewEngines.Engines.Clear();
    ViewEngines.Engines.Add(new NestedContainerViewEngine());
}
And that's it! With our nested container view engine in place, we can now inject complex UI services into our views, allowing us to create powerful UI content builders without resorting to gateways or service location.
Conclusion
It was a bit of work, but we were able to inject services into not only views, but partials, master pages, even MVC 2 templated helpers! By using nested containers, we were able to configure all of the contextual pieces so that services built for each view got the correct contextual item (the right HtmlHelper, ViewContext, IViewDataContainer, etc.)
This is a quite powerful tool now: we don't need to resort to ugly usage of static gateways or service location. We can build UI services that depend on an HtmlHelper or ViewContext and feel confident that our services get the correct instance. In the past, we'd need to pass our ViewContext around EVERYWHERE in order to get back at these values. Not very fun, especially when you start to see interfaces that accept everything under the sun "just in case".
For those folks who don't want to inject services into their views, it's all about responsibilities. I can create encapsulated, cohesive UI services that still only create HTML from a model, but I'm now able to use actual OO programming, without resorting to less-powerful static gateways or service location.
So looking back, we were able to inject services into our controllers, filters, action results and views. Using nested containers, we were able to provide contextual injection of all those context objects that MVC loves to use everywhere. But now we can let our services use them only when needed through dependency injection, providing a much cleaner API throughout.
You can find code for this example on my github:
http://lostechies.com/jimmybogard/2010/05/19/dependency-injection-in-asp-net-mvc-views/
Basic class for grid elements.
#include <Galeri_grid_Element.h>
Basic class for grid elements.
Class Galeri::grid::Element specifies the interface for a generic 1D, 2D or 3D grid element. This class is a semi-virtual class.
In Galeri/pfem, an element is defined as a geometric object, composed by vertices and components. The former are easy to define; the latter, instead, define the sub-entities of the element, and they are either segments (for 2D elements) or faces (for 3D elements).
The idea is that you can define an object by deriving this class, and set the correct number of vertices and components. You also have to decide the local ID of each of the components. Mixed components are perfectly legal. Since a component is nothing but an Element-derived class, you can recursively assemble components and create the object you need for your discretization.
This class is subclassed by Galeri::grid::Point, Galeri::grid::Segment, Galeri::grid::Triangle, Galeri::grid::Quad, Galeri::grid::Tet and Galeri::grid::Hex. New elements can easily be added if required.
http://trilinos.sandia.gov/packages/docs/r10.10/packages/galeri/doc/html/classGaleri_1_1grid_1_1Element.html
EclipseLink/UserGuide/JPA/Basic JPA Development/Entities/MappedSuperclass
From Eclipsepedia
@AttributeOverride
You can use the @AttributeOverride and @AttributeOverrides annotations, or the <attribute-override> XML element, to override the column for a basic attribute in a mapped superclass. This allows the column name to be different in each subclass. Similarly, you can use the @AssociationOverride and @AssociationOverrides annotations, or the <association-override> XML element, to override a relationship mapping defined in a mapped superclass. This allows the join column name or join table to be different in each subclass.
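For overriding relationship mappings inherited from a mapped superclass, JPA's mapping XML also provides <association-override>. The fragment below is an illustrative sketch only; the entity, attribute, and column names are invented for the example, not taken from the EclipseLink documentation.

```xml
<!-- Illustrative sketch: a subclass overriding a basic column and a
     join column that were mapped on its mapped superclass. -->
<entity class="SavingAccount">
    <attribute-override name="balance">
        <column name="SAVINGS_BALANCE"/>
    </attribute-override>
    <association-override name="owner">
        <join-column name="SAVINGS_OWNER_ID"/>
    </association-override>
</entity>
```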
For more information, see Section 11.1.2 "AssociationOverride Annotation" in the JPA Specification.
The following examples show the SINGLE_TABLE inheritance strategy for mapping an Account hierarchy, first with annotations and then with XML.
Example: Using SINGLE_TABLE with the @Inheritance annotation

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public class Account {
    @Id
    private long id;
    @Basic
    private BigDecimal balance;
}

@Entity
@DiscriminatorValue("SAVINGS")
public class SavingAccount extends Account {
    @Basic
    private BigDecimal interestRate;
}

@Entity
@DiscriminatorValue("CHECKING")
public class CheckingAccount extends Account {
    @Basic
    private boolean returnChecks;
}
Example: Using SINGLE_TABLE with <inheritance> XML
<entity class="Account">
    <table name="ACCOUNT"/>
    <inheritance strategy="SINGLE_TABLE"/>
    <discriminator-column/>
    <attributes>
        <id name="id"/>
        <basic name="balance"/>
    </attributes>
</entity>

<entity class="SavingAccount">
    <discriminator-value>SAVINGS</discriminator-value>
    <attributes>
        <basic name="interestRate"/>
    </attributes>
</entity>

<entity class="CheckingAccount">
    <discriminator-value>CHECKING</discriminator-value>
    <attributes>
        <basic name="returnChecks"/>
    </attributes>
</entity>
http://wiki.eclipse.org/index.php?title=EclipseLink/UserGuide/JPA/Basic_JPA_Development/Entities/MappedSuperclass&oldid=259546
Well, I think this is mainly a syntax problem, but I just can't figure out how to get this thing to do what I want it to.
I have a program that needs to run with an open command prompt window so that when the user closes the window the program goes away. To do that, I created this short program (it's one line of code; it's more like a script than a program) to access cmd.exe, so that I could have an executable JAR file to make running the program simple for the end user (rather than having them type at the command prompt).
Here is my code. If I run it, it seems like it only gets as far as the cd C:\HearingAid bit, completely ignoring the fact that it goes FURTHER into the ProgramData folder, and of course not even coming within a mile of making a call to Java.exe:
Code :
import java.io.IOException;

public class HearingAid {
    public static void main(String[] args) throws IOException {
        Runtime.getRuntime().exec(new String[] {
                "cmd.exe", "/C", "start;",
                "cd C:/HearingAid/ProgramData", "&", "java AudioTransfer" });
    }
}
What have I written wrong, folks?
-summit45
--- Update ---
I feel kind of stupid now, but I just saw someone on StackOverflow suggest to someone else for a very slightly similar issue to just use a shell script for this. The irony is I was using those earlier to simplify compiling and didn't think of that. Lol.
Either way, someone feel free to answer the question if you want to. Can always learn something.
-summit45
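For future readers: the array form of Runtime.exec() passes each element as a separate argv token, so "cd C:/HearingAid/ProgramData" and "&" arrive as literal arguments rather than being interpreted as shell syntax (and "start;" isn't a valid command name either). One fix is to stop chaining cd through the shell and set the working directory with ProcessBuilder instead. This is a sketch, untested against the asker's setup; the path and class name come from the question, HearingAidFixed is a hypothetical name.

```java
import java.io.File;
import java.io.IOException;

public class HearingAidFixed {

    // Build the launcher: "start" opens the visible console window the
    // asker wants; the working directory replaces the "cd" step.
    static ProcessBuilder build() {
        ProcessBuilder pb = new ProcessBuilder(
                "cmd.exe", "/C", "start", "java", "AudioTransfer");
        pb.directory(new File("C:\\HearingAid\\ProgramData"));
        return pb;
    }

    public static void main(String[] args) throws IOException {
        build().start(); // only meaningful on Windows, of course
    }
}
```

With ProcessBuilder there is no need for "&" chaining at all; each token stays a clean argv entry.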
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/31129-how-get-command-line-function-java-code-printingthethread.html
http://www.roseindia.net/tutorialhelp/comment/82882
Using a PaaS in the cloud just got a whole lot easier for Microsoft Windows Users. OpenShift is happy to announce that we now have a streamlined install and configuration process for users of Windows that does not require Cygwin.
OpenShift is Red Hat's free, Cloud Application Platform as a Service (PaaS). As an application platform in the cloud, OpenShift manages the stack so you can focus on your code. And best of all, OpenShift is free to use and try out. On the free tier, each user is able to create three applications with 512mb of RAM and 1GB of disk space. You can also add a database to your application such as MongoDB, MySQL, or PostgreSQL. If you are ready to try out OpenShift, head on over to sign up for free.
What are you waiting for? Let’s get started with using OpenShift on Windows.
Note: If you want to watch a screencast of the install and configuration, check out this video.
Step 2: Install the Ruby environment on your system
The OpenShift client tools are written in the Ruby programming language. In order to execute and use the commands, you must have the appropriate runtime environment for your operating system. We suggest that you use the RubyInstaller from rubyinstaller.org in order to ensure you have the correct packages to interact with OpenShift.
Point your browser to rubyinstaller.org and click the red download button on the left side of the screen.
Download the latest version and select Run from the dialog choices.
Once the installation has started, you will be requested to accept the license agreement that is presented to you. Review the license and click accept if you agree to the stated terms.
We will be interacting with OpenShift via the command line during this blog so ensure that you select ‘Add Ruby executables to your PATH’. This will ensure that you can access the ruby and gem commands from your windows command prompt.
Step 3: Install the Git revision control software on your system
In order to deploy and push your application code up to your OpenShift servers, you will need to have the Git revision control system installed and accessible on your system. To install this software, download the latest package from and follow the instructions.
As we did with Ruby above, we want Git to be available to us on the command line. Select ‘Run Git from the Windows Command Prompt’ and click next.
On the next screen, make sure that ‘Checkout Windows-style, commit Unix-style line endings’ is checked and click next.
Step 4: Install the RHC command line tools
Now that we have Ruby and Git installed, we are ready to start the installation of the OpenShift command line tools. In order to do this, open up a command prompt by clicking the windows logo and type in cmd.
You should be sitting at a command prompt at this point. All we need to do now is issue the following command:
C:> gem install rhc
This will download and install the client tools as well as any required dependencies for the package.
Step 5: Use the RHC command line tools
Now that you have the client tools installed, you can begin using them by using the following command to list all of your existing applications:
C:> rhc domain show
If this is the first time you have used the RHC tools on your machine, it will take you through a guided setup to create your SSH key and then upload it to the server. Make sure that you have already signed up for an OpenShift account and created a namespace before using the rhc domain show command.
There's a handy cheatsheet at the bottom of this post that you can print out; it explains what the most common rhc commands do.
That’s all there is to it to get up and running using the OpenShift client tools on Microsoft Windows. See you on the Cloud!
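As a recap, the command-line portion of the setup boils down to a couple of commands. The sketch below only prints the steps (including the GUI ones) rather than executing anything, so it is safe to run anywhere; the command names are the ones used in this post.

```shell
# Print a recap of the OpenShift-on-Windows setup steps; nothing is executed.
recap() {
  cat <<'EOF'
1. Install Ruby with RubyInstaller (rubyinstaller.org); tick "Add Ruby executables to your PATH"
2. Install Git for Windows; select "Run Git from the Windows Command Prompt"
3. gem install rhc
4. rhc domain show   (the first run creates and uploads your SSH key)
EOF
}
recap
```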
https://www.openshift.com/blogs/how-to-install-the-new-and-improved-openshift-client-tools-for-windows
#include <BelosRCGIter.hpp>
Inheritance diagram for Belos::RCGIterInitFailure:
This std::exception is thrown from the RCGIter::initialize() method, which is called by the user or from the RCGIter::iterate() method if isInitialized() == false.
In the case that this std::exception is thrown, RCGIter::isInitialized() will be false and the user will need to provide a new initial iterate to the iteration.
Definition at line 145 of file BelosRCGIter.hpp.
Definition at line 146 of file BelosRCGIter.hpp.
http://trilinos.sandia.gov/packages/docs/r10.0/packages/belos/doc/html/classBelos_1_1RCGIterInitFailure.html
Deployer update - Keith Babo, Feb 16, 2011 7:58 AM
I pushed a fresh copy of the deployer work from the F2F into my core repository in the SWITCHYARD-38 branch. This has been rebased against current upstream. The core deployer interfaces and impl have been split off from CDI, with the CDI-based deployer in its own module now.
Tom C - can you pull it down and integrate your AS deployer work as a separate module inside the deploy directory?
Tom F - the current deploy API is still kinda yucky, but it's slimmed down from the last attempt, so that's something. I have pushed an updated Bean and SOAP deployer to my components tomtomdeploy branch. The only piece that's missing is the creation of invocation proxies in the bean component activator. Assuming the current API satisfies your needs, we can push the core and component pieces into upstream.
1. Re: Deployer update - Tom Cunningham, Feb 16, 2011 9:45 AM (in response to Keith Babo)
Took a look at the SWITCHYARD-38 branch - it looks like the rebase against the current upstream broke things - it doesn't matter for the work I'm doing - I'll just comment things out for now and get my stuff working around it.
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile (default-compile) on project deploy: Compilation failure: Compilation failure:
[ERROR] /home/tcunning/src/switchyard/kbabo/core/deploy/base/src/main/java/org/switchyard/deploy/internal/Deployer.java:[38,44] cannot find symbol
[ERROR] symbol : class ExternalServiceModel
[ERROR] location: package org.switchyard.config.model.composite
[ERROR] /home/tcunning/src/switchyard/kbabo/core/deploy/base/src/main/java/org/switchyard/deploy/internal/Deployer.java:[39,44] cannot find symbol
[ERROR] symbol : class InternalServiceModel
[ERROR] location: package org.switchyard.config.model.composite
[ERROR] /home/tcunning/src/switchyard/kbabo/core/deploy/base/src/main/java/org/switchyard/deploy/internal/Deployer.java:[40,44] cannot find symbol
[ERROR] symbol : class ReferenceModel
[ERROR] location: package org.switchyard.config.model.composite
[ERROR] /home/tcunning/src/switchyard/kbabo/core/deploy/base/src/main/java/org/switchyard/deploy/internal/Deployer.java:[154,17] cannot find symbol
[ERROR] symbol : class InternalServiceModel
[ERROR] location: class org.switchyard.deploy.internal.Deployer
[ERROR] /home/tcunning/src/switchyard/kbabo/core/deploy/base/src/main/java/org/switchyard/deploy/internal/Deployer.java:[171,17] cannot find symbol
[ERROR] symbol : class ReferenceModel
[ERROR] location: class org.switchyard.deploy.internal.Deployer
[ERROR] /home/tcunning/src/switchyard/kbabo/core/deploy/base/src/main/java/org/switchyard/deploy/internal/Deployer.java:[184,13] cannot find symbol
[ERROR] symbol : class ExternalServiceModel
[ERROR] location: class org.switchyard.deploy.internal.Deployer
[ERROR] -> [Help 1]
[ERROR]
2. Deployer update - Keith Babo, Feb 16, 2011 10:23 AM (in response to Tom Cunningham)
My bad. I foolishly did not compile after the rebase ... whoops. Nothing big, just some name changes. Fixed the problem and pushed an update. Thanks for catching it.
3. Deployer update - David Ward, Feb 16, 2011 1:48 PM (in response to Keith Babo)
Keith, I looked here:
, and line 89:
switchyardConfig = (CompositeModel)new ModelResource().pull(switchyardConfig);
is wrong.
switchyard.xml now has a root element of <switchyard>, not <composite>.
It should read:
switchyardConfig = ((SwitchYardModel)new ModelResource().pull(switchyardConfig)).getComposite();
4. Deployer update - David Ward, Feb 16, 2011 1:52 PM (in response to David Ward)
or rather:
compositeModel = ...
5. Deployer update - Keith Babo, Feb 16, 2011 7:57 PM (in response to David Ward)
Yeah, that's Tom's bit there. :-) You warned me about this yesterday and I forgot to change it. Now that we are on the subject though ... is it really "wrong" to pull the config file into a child of the root model? I agree with you that we will want to parse into the root SwitchYardModel inside Deployer, so take my question as mere curiosity on the possible uses of the config API. As it stands now, the current code works, so I'm guessing this is a planned or accidental feature of the implementation.
6. Deployer update - David Ward, Feb 17, 2011 10:35 AM (in response to Keith Babo)
The only reason I mentioned it is because I saw elsewhere in the code (where you were pulling it into an InputStream) that you were reading in a switchyard xml file. Because of this, your cast above would break. If your xml file had a root of <composite>, it would work fine.
That being said, there is nothing wrong with whoever is pulling the switchyard xml file to next access the CompositeModel, then pass that to other code: code that only cares about the CompositeModel. Another example might be a gateway who only cares about a BindingModel, so it gets passed that. My statement way-back-when when I said we should be passing around a SwitchYardModel instead of a CompositeModel was more of a suggestion for generic code where the requirements of the receiver are not specifically known.
7. Deployer update - David Ward, Feb 17, 2011 10:40 AM (in response to Keith Babo)
As it stands now, the current code works, so I'm guessing this is a planned or accidental feature of the implementation.
The only way the code above could work is if the switchyard.xml you're pulling in is old-school (ie: it has a root of <composite>), or maybe you haven't pulled updates into your local git repo for a while.
8. Deployer update - Keith Babo, Feb 17, 2011 10:50 AM (in response to David Ward)
Crap. You're right on the money with the old switchyard.xml. I was using the end-to-end example and I thought I had updated the config.
9. Re: Deployer update - Tom Fennelly, Feb 18, 2011 6:49 AM (in response to Keith Babo)
OK... I've wired in the cdi client proxies now and have updated the unit tests to "deploy" the test services (see the JUnitCDIDeployer class).
Located at...
- git@github.com:tfennelly/jboss-switchyard-core.git kcbabo-SWITCHYARD-38
- git@github.com:tfennelly/jboss-switchyard-components.git cdi-deployer-evolve-v5
Main changes:
- Created an AbstractDeployer class, which both Deployer and JUnitCDIDeployer extend.
- I had to modify the Activator interface to add a method for getting the ServiceInterface for a service... Deployer and JUnitCDIDeployer use this when registering services.
- Added a createExchange(ExchangeContract) method to the Service interface. I think we have another JIRA for this. I wanted to get rid of the code level dependency on ServiceDomain(s) wherever I could.
Notes:
- There are 2 copies of the JUnitCDIDeployer class (and some related classes) at the moment (in the bean and soap component tests). That should get fixed as part of the "test" component we're in the process of pulling together (see here). However, to fix it, we might need to move some of the bean component classes (they depend on them) somewhere else (the CDI deployer perhaps?).
- The Activator interface... would it make sense to have Activator and ServiceActivator interfaces? ServiceActivator would extend Activator and add the new describe(QName) method I added as part of the new set of changes. Activator could be used to implement activators that only activate reference or service binding components, while ServiceActivator could be used to implement activators that activate Service instances, e.g. the SOAPActivator would just implement the Activator interface, while the BeanComponentActivator would implement the ServiceActivator interface.
I probably forgot to mention something.
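To make the Activator/ServiceActivator idea in the post above concrete, here is a minimal Java sketch. The interface names and the describe(QName) method come from the post; every signature and method body is an assumption for illustration, since the real SwitchYard types aren't shown here.

```java
import javax.xml.namespace.QName;

// Base contract for all activators, including binding-only ones.
interface Activator {
    void activate(QName name);
}

// Richer contract for components that activate Service instances and
// therefore need to expose the service's interface.
interface ServiceActivator extends Activator {
    String describe(QName serviceName);
}

// A gateway activator only needs the base contract:
class SOAPActivator implements Activator {
    public void activate(QName name) { /* wire up the SOAP gateway */ }
}

// A component activator implements the richer contract:
class BeanComponentActivator implements ServiceActivator {
    public void activate(QName name) { /* register the bean service */ }
    public String describe(QName serviceName) {
        return "interface-for:" + serviceName.getLocalPart();
    }
}
```

The deployer can then check at registration time whether an activator is a plain Activator or a ServiceActivator and only call describe() on the latter.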
10. Deployer update - Keith Babo, Feb 18, 2011 8:06 PM (in response to Tom Fennelly)
I just submitted a pull request based on Tom C's AS6 deployer work. The good news is that it's working nicely on its own. When I added the bean component to the mix, our JNDI handshake between the Deployer and Bean Activator got messed up. I have reworked the ApplicationDescriptorSet implementation and posted a new version to the tomtomdeploy branch in my repository. The resulting implementation appears to work in AS6* and actually improves the stability with the CDI Deployer quite a bit. Before, a redeploy would munge everything up with the CDI deployer, whereas now it seems to work pretty smoothly.
* Unfortunately, CDI stuff is still not working quite right with AS6 deployer. From stepping through in the debugger, it looks to me like the old problem of @Service beans not getting discovered in the CDI extension. Tom F - since you have been through this a bunch of times before, I'll let you take a look and let me know what's what. Let me know if you have any questions on how to reproduce.
11. Deployer updateTom Fennelly Feb 19, 2011 7:04 AM (in response to Keith Babo)
I sorted out the commit squashing issues with the deployer changes I was making and have pushed them:
-
-
Keith... I'll have a look at the CDI issues you are seeing. I'll look at the mods you've made on that branch but can you also send me whatever you're using to test this + instructions... thanks.
12. Re: Deployer updateKeith Babo Feb 19, 2011 7:13 AM (in response to Tom Fennelly)
Instructions to reproduce AS6 deployer issue
tfennelly: while we are in here … I'll give you a quick rundown
tfennelly: all you have to do is build from upstream and you'll get an AS6 deployer zip in your deployer/build/target directory
tfennelly: unzip that into as6/server/default/deployers
tfennelly: then take the bean component and soap component from my tomtomdeploy branch
tfennelly: drop them in server/default/deploy
tfennelly: then deploy the app
Update app attached.
- TestDeployApp.jar 9.6 K
13. Re: Deployer updateKeith Babo Feb 19, 2011 7:42 AM (in response to Keith Babo)
Missed some files in my last commit and push to tomtomdeploy. To make things easier, I have pushed everything into a new branch SWITCHYARD-102 and rebased against the current upstream. Use this branch if you are going to build and run a test following the instructions above.
14. Re: Deployer updateTom Fennelly Feb 19, 2011 1:00 PM (in response to Keith Babo)
Hmmm... perhaps there are more steps to the deployment, but what I'm seeing is that it's failing before the CDI Discovery ever happens....
Caused by: java.lang.NullPointerException
    at org.switchyard.deploy.internal.Deployment.init(Deployment.java:109) [:1.0-SNAPSHOT]
    at org.switchyard.deployment.SwitchYardDeployer.deploy(SwitchYardDeployer.java:49) [:1.0-SNAPSHOT]
    ... 42 more
I hooked up a debugger and the issue in my case is that when the SwitchYardConfigParser attempts to create the config model, it gets null back from the call to (SwitchYardModel) new ModelResource().pull(is). I stepped into that and the reason for the failure there is that it fails to discover the Marshaller for the "" namespace.
Are there any other steps to deploying SwitchYard to AS6 that I am missing (I followed the ones above exactly)?
https://community.jboss.org/message/587831
This document describes how to successfully develop applications targeting both the Windows Phone 7 and Windows Phone 8 platforms with minimal maintenance effort and without compromising quality or features. Instead of one-way porting, it is worthwhile to support both platforms, since the cost of doing so is low compared to the benefit of a wider device base. We refer to this approach as co-development.
The Windows Phone 8 platform, along with the new devices, offers a great number of new features and enhancements that provide the means to develop stunning applications. Whether you are adding Windows Phone 8 support to your existing application or developing a new one, do not pass up the exciting new possibilities provided by Windows Phone 8 and the new Lumia devices. So instead of going by "the least common denominator" approach, do utilise the new feature set and push the boundaries of your application without losing compatibility with Windows Phone 7.
In short, co-development means sharing the same code base for both platforms and having platform specific parts of implementation that are cross-platform compatible. The amount of code that can be shared varies, depending on the application type and technologies used, but fortunately with common abstraction techniques and proper amount of time spent in the design of the project architecture you can increase the size of the shared code base.
The techniques covered in this documentation apply regardless of whether you are about to port your existing Windows Phone 7 application to Windows Phone 8 or creating a new Windows Phone 8 application that also supports the existing Windows Phone 7 device base.
* XNA on Windows Phone 8 is deprecated in the sense that it will not be future proof. For instance, you can no longer create new XNA based applications for Windows Phone 8, but Windows Phone 7 XNA applications are fully supported and continue to run on Windows Phone 8.
** The HTML5 support has been significantly improved for Windows Phone 8, and this is something you should consider when creating new applications for Windows Phone 8 and planning to back-port them to Windows Phone 7.
Also note that XAML used in Silverlight does not mix with C++. However, you can mix C# and C++, which is especially great in case you need to port your C++ application logic from another platform. Furthermore, DirectX is the framework with which you would most benefit from using C++. However, since DirectX is not supported on Windows Phone 7, it is not covered in this documentation.
Nokia Lumia devices running on Windows Phone 8 introduce a great set of new APIs. The most notable of these are:
With Nokia Maps the map experience on Windows Phone devices has been greatly improved, and developers get to use the very same features that are provided with the Nokia Maps application. APIs for Bing Maps are still supported, so all Windows Phone 7 applications using them will also work on Windows Phone 8 devices. Note that the Bing Maps API, like XNA, is deprecated and thus not future proof. Learn more about Nokia Maps in the article Guide to the Maps.
With the Proximity API you can create apps that send data between devices using NFC, interact with NFC tags, and establish a Wi-Fi, Bluetooth, or Wi-Fi Direct connection between your app and an instance of your app on a near-by device. Learn more of the Proximity API usage from the article Using NFC to establish a persistent connection.
There is also a separate article on the advanced capture APIs, Advanced Photo Capturing, describing the usage of these APIs with an extensive example application.
With Nokia MixRadio API you can build applications that help users to find music and provide detailed information about the music they are interested in. To learn more about Nokia MixRadio API, see the article Nokia MixRadio API.
Windows Phone 8 supports phones that have WVGA, WXGA, and 720p resolutions, while Windows Phone 7 only supports WVGA. The following table describes the resolutions and aspect ratios that are supported in Windows Phone 7 and Windows Phone 8:
As graphic assets of an application may take a lot of space, it is recommended to use only graphic assets optimised for WXGA resolution. They have the highest quality and automatically scale down to other resolutions. Because of the ratio difference between 720p and other resolutions, there might be some cases when a customised image for 720p resolution is wanted, for example for a page background. An example of how to dynamically select the best resolution for a specific graphic asset is given in section "Runtime adaptation" of this topic.
For splash screen, a single WXGA 768 x 1280 SplashScreenImage.jpg image is recommended. It is automatically scaled to the phone resolution. If pixel-perfect splash screens for all resolutions are wanted, you should additionally have the following three files in the root folder of the application:
For Tile and App icons it is similarly recommended to use only WXGA image, as it is automatically scaled down to WVGA and 720p screens. Creating a separate image for every applicable tile size enables tailoring the contents of the tile (graphics, captions, details) to make the most of the available space. The following table summarises the correct size of the tile images for the different tile templates supported in Windows Phone 8:
There are several techniques that help you to develop and maintain your cross-platform application. See the following table for techniques and suitable application types. These techniques can be combined, and in many cases you will end up using more than just one. Do note that the Windows Phone 7 apps will work out-of-the-box on Windows Phone 8, and sometimes it is not necessary to do anything, especially if the app does not require any additional features. It is strongly recommended, however, to at the very least compile the application for Windows Phone 8 target to achieve the best performance.
The following sections describe each of these co-development techniques in more detail.
This technique is the preferred approach in developing applications targeting both Windows Phone OS 7.1 and Windows Phone 8. With this approach you can use all the other techniques to isolate version-specific sections and share code to minimise duplication, ensure consistency, and make maintenance easier. The technique can be applied to a new application targeting both Windows Phone 7 and Windows Phone 8 platforms as well as to upgrading an already existing Windows Phone 7 application to take advantage of the new Windows Phone 8 features. The basic idea is to have a single Visual Studio solution with two top-level projects targeting different platforms, as shown in the minimal example screenshot below. Both projects have their own copies of source files that may use platform specific features and images independently.
For new applications targeting both Windows Phone 7 and Windows Phone 8 platforms, just create a new project – <project>_WP75 – targeting Windows Phone OS 7.1, and then add a new project – <project>_WP8 – targeting Windows Phone 8 for the same solution and start developing the application for both platforms in parallel.
For an already existing Windows Phone 7 application, you should first make a backup copy of the existing project and then follow the steps below to start developing with two top-level projects.
Prepare the files and folders using File Explorer:
Create a solution with platform specific projects for Windows Phone 7 and Windows Phone 8:
With this technique you can share code and resource files between several projects. Continuing with the same minimal example, let's assume that we have noticed that instead of having the source code duplicated in two projects, we can share the same code between the projects. First we delete the duplicated files (MainPage.xaml & MainPage.xaml.cs) from the Windows Phone 8 project by right-clicking the MainPage.xaml file and selecting delete.
You can add shared files as links in two ways:
You can later choose to replace the linked files with Windows Phone 8 specific files whenever the need arises. Just remove the linked file from the Windows Phone 8 project and add a new Windows Phone 8 specific file – for example, an image with greater resolution or xaml file with Windows Phone 8 specific features.
With multiple resolutions to support, relative sizes and layouts should be used so that application pages render correctly on all phone models. Instead of hard-coding the size and position of controls on a page, place the controls in a grid and use * or Auto to size and position the controls on the pages. The following code example and screenshots demonstrate how to adapt to multiple resolutions in Silverlight.
<phone:PhoneApplicationPage ...>
    <!--LayoutRoot is the root grid where all page content is placed-->
    <Grid x:Name="LayoutRoot">
        ...
        <!--ContentPanel - place additional content here-->
        <Grid x:Name="ContentPanel" ...>
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"/>
                <RowDefinition Height="*"/>
                <RowDefinition Height="Auto"/>
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*"/>
                <ColumnDefinition Width="*"/>
                <ColumnDefinition Width="*"/>
                <ColumnDefinition Width="*"/>
            </Grid.ColumnDefinitions>
            <TextBox x:Name="..." ... />
            <Button ... />
            <maps:Map x:Name="..." ... />
            <TextBlock Text="+" Foreground="Red" FontSize="50" ... />
            <TextBlock x:Name="..." ... />
            <TextBlock x:Name="..." ... />
        </Grid>
    </Grid>
</phone:PhoneApplicationPage>
The following code example demonstrates how to dynamically load images depending on the screen resolution. The example relies on the fact that all the images have been named as <image_name>_<resolution_type>.png.
public Uri GetScaledImageUri(String imageName)
{
    int scaleFactor = (int)Application.Current.Host.Content.ScaleFactor;
    switch (scaleFactor)
    {
        case 100:
            return new Uri(imageName + "_wvga.png", UriKind.RelativeOrAbsolute);
        case 150:
            return new Uri(imageName + "_720p.png", UriKind.RelativeOrAbsolute);
        case 160:
            return new Uri(imageName + "_wxga.png", UriKind.RelativeOrAbsolute);
        default:
            throw new InvalidOperationException("Unknown resolution type");
    }
}
...
// The next line will load the correct image depending on the resolution of the device
MyImage.Source = new BitmapImage(GetScaledImageUri("myImage"));
The following methods can help you to find out the current screen resolution of the device in runtime:
public bool IsWvga { get { return Application.Current.Host.Content.ScaleFactor == 100; } }
public bool IsWxga { get { return Application.Current.Host.Content.ScaleFactor == 160; } }
public bool Is720p { get { return Application.Current.Host.Content.ScaleFactor == 150; } }
The following code block demonstrates how to get runtime information on the platform the code is running on:
public bool IsRunningOnWP8 { get { return Environment.OSVersion.Version.Major >= 8; } }
You can use conditional compilation symbols to isolate platform specific code in your application. This comes in handy when you have separate projects for the Windows Phone 7 and Windows Phone 8 applications. First you define a conditional compilation symbol, which you then use in C# code to isolate platform specific code.
Defining conditional compilation symbol:
Isolating version specific sections of code:
// Separate implementations for different OS versions
#if WP8
    // code using enhancements introduced in Windows Phone 8
#else
    // code using Windows Phone OS 7.1 features
#endif

// A new Windows Phone 8 feature
#if WP8
    // code using new Windows Phone 8 feature
#endif
Portable class libraries are a form of code sharing where you compile the cross-platform parts of your application into a library that you then reference from the platform specific code; e.g., user interface (UI) layer. This approach is especially useful for applications with generalised business logic that can be easily ported. Despite the isolation of the shared code, interaction with the platform specific implementation can be achieved using abstractions. For instance, you can create an abstract interface for the platform specific business logic, include that in your library, and implement the interface in the platform specific layer.
Learn more at.
For a complete hands-on tutorial on how to upgrade an application to Windows Phone 8 to make use of the platform's new features without losing compatibility with Windows Phone 7, see:
Last updated 19 November 2013
http://developer.nokia.com/resources/library/Lumia/co-development-and-porting-guide.html
#include "RTOp.h"
Go to the source code of this file.
Function that implements the guts of an apply_op() method for dense serial vectors. This function takes care of all of the (not so ugly) details that go on under the hood of using RTOp operators in a serial environment. This first set of arguments defines the serial vector arguments. Definition at line 38 of file RTOp_apply_op_serial.c.
http://trilinos.sandia.gov/packages/docs/r10.4/packages/moocho/browser/doc/html/RTOp__apply__op__serial_8h.html
Hello again, I am asked by my professor to create a program using selection sort in a linked list. Selection sort with sorting only the data is pretty easy, but I'm having a hard time because he made us sort the nodes themselves. Can anyone help by giving atleast the algorithm in sorting the nodes. Thank you. Here is my code so far, it's only about sorting the data:
#include <iostream>
#include <iomanip>
using namespace std;

typedef struct node {
    int DATA;
    node *NEXT;
};

node *HEAD = NULL;

void Create(int data);
void Display();
void Sort();

int main()
{
    int num, numOfEl;
    cout << "Enter number of elements: ";
    cin >> numOfEl;
    cout << "Enter " << numOfEl << " numbers: ";
    for(int i = 0; i < numOfEl; i++)
    {
        cin >> num;
        Create(num);
    }
    cout << "\nDisplay before sorting.\n";
    Display();
    Sort();
    cout << "\nDisplay after sorting.\n";
    Display();
}

void Sort()
{
    node *h = HEAD, *i, *j, *next_i;
    for(i = h; i != NULL && i->NEXT != NULL; i = i->NEXT)
    {
        node *min;
        min = i;
        for(j = i->NEXT; j != NULL; j = j->NEXT)
        {
            if(j->DATA < min->DATA)
                min = j;
        }
        if(min != i)
        {
            int temp;
            temp = min->DATA;
            min->DATA = i->DATA;
            i->DATA = temp;
        }
    }
    HEAD = h;
}

void Display()
{
    node *current;
    current = HEAD;
    cout << setw(20) << "LAST" << setw(20) << "NUMBER" << setw(20) << "NEXT" << endl;
    while(current != NULL)
    {
        cout << setw(20) << current << setw(20) << current->DATA << setw(20) << current->NEXT << endl;
        current = current->NEXT;
    }
    system("pause>0");
}

void Create(int data)
{
    node *front, *tail;
    tail = new node;
    tail->DATA = data;
    tail->NEXT = NULL;
    if(HEAD == NULL)
        HEAD = tail;
    else
    {
        front = HEAD;
        while(front->NEXT != NULL)
            front = front->NEXT;
        front->NEXT = tail;
    }
}
Your program worked perfectly for me using VC++ 2010 Express on Windows 7. Since this is C++ code, you don't need the typedef keyword.
Yes, it works. But it only swaps the data. I have to make it swap the nodes themselves.
Forgive AD, he's getting a bit senile ;)
From your last thread, you do know how to swap two nodes that are next to each other, right? Well, you should, because I told you how.
The problem with swapping two adjacent nodes is that you also needed to keep track of the node before that pair of nodes such that you could update its NEXT pointer. Here, you have the same problem, but twice. If you want to swap two nodes that are possibly far from each other in the list, you need to keep track of the nodes that precede both nodes that you are about to swap. In other words, you would need node* before_i; and node* before_min;. To keep track of those nodes is not hard; this would do the trick for before_min:
node* min = i;
node* before_min = NULL;
for(node* j = i; j->NEXT != NULL; j = j->NEXT)
{
    if( j->NEXT->DATA < min->DATA )
    {
        before_min = j;
        min = j->NEXT;
    };
};
and you can do something similar for the outer loop.
After the inner-loop, if before_min is still NULL, then you don't have to swap anything.
Then, for the actual swap, you have to check if i is the HEAD node in which case it has to predecessor to update, but the value of HEAD has to be changed to the new node that will become the head. If i is not the head and i != min, then you swap them by updating the NEXT pointer in each other's predecessors and by swapping their own NEXT pointers. The only other sticky point to consider is when before_min == i, but this means that i and min are adjacent nodes, so you can use the same swapping method as in the bubble-sort you had before.
BTW, let me guess that insertion-sort will be your next assignment!
Thank you again! Hah, I really hope not.
I'm sorry, Mike. But I guess I just don't quite get it. Think of me as a 17-year-old kid trying to figure this out, which is true. So anyway, I got lost at the part about checking if i is the HEAD node "in which case it has to predecessor to update". What do you mean by that? Thank you! Here's my disaster of a code of my sort so far:
void Sort()
{
    node *h = HEAD, *i, *next_i;
    for(i = h; i != NULL && i->NEXT != NULL; i = i->NEXT)
    {
        node *min;
        min = i;
        node *before_min = NULL;
        for(node *j = i->NEXT; j->NEXT != NULL; j = j->NEXT)
        {
            if(j->DATA < min->DATA)
            {
                before_min = j;
                min = j;
            }
        }
        if(before_min == NULL)
            return;
        else if(i == HEAD)
        {
        }
        else if(before_min == i)
        {
            i = min->NEXT;
            min->NEXT = i->NEXT;
            i->NEXT = min;
            if(min == HEAD)
            {
                HEAD = i;
                min = i;
            }
            else
            {
                min = i;
            }
        }
    }
    HEAD = h;
}
You should use exactly the inner loop that I posted before, yours is incorrect.
About this: "in which case it has to predecessor to update", it was a typo, I meant to write "in which case it has NO predecessor to update". I think that makes more sense, doesn't it?
Your outer loop should probably start in a fashion like this:
node* before_i = NULL;
for(node* i = HEAD; i->NEXT != NULL; i = i->NEXT)
{
    //.. the code..
    //.. at the end of the iteration:
    before_i = i;
};
This will make sure that you keep track of the node that comes before i.
Then, the swapping (after the inner-loop), should have this kind of structure of if-statements:
if( before_min != NULL ) {  // if a swap is required.
    if( i == before_min ) {  // if i comes just before the min node.
        if( i == HEAD ) {
            // if i is at the head, it has no predecessor, but HEAD needs to be updated.
            // put min at the head, and order the next pointers such that min --> i --> (min->NEXT)
        } else {
            // otherwise, there is at least one node before i (i.e. before_i != NULL).
            // swap the nodes as you did before in the bubble-sort algorithm
            // (swap nodes i and min, with before_i being their predecessor).
        };
    } else {  // otherwise, i and min are separated by at least one node.
        if( i == HEAD ) {
            // if i is at the head, it has no predecessor, but HEAD needs to be updated.
            // update the HEAD pointer to point to min, and place i next to before_min,
            // and then swap the NEXT pointer in i and min.
        } else {
            // otherwise, there is at least one node before i (i.e. before_i != NULL).
            // swap the NEXT pointers in before_i and before_min, and swap the NEXT pointers in i and min.
        };
    };
};
Now, all you need to do is implement those instructions. I cannot make it any clearer than that without doing it all for you.
http://www.daniweb.com/software-development/cpp/threads/419177/selection-sort-linked-list
Teleconference.2008.04.16/Agenda
From OWL
Revision as of 09:18, 17 April 2008 by IanHorrocks (Talk | contribs)
Contents
Call in details
If joining late please don't identify yourself verbally; instead, Identify Yourself to ZAKIM on IRC
- Date of Call: Wednesday 16 April 2008; Scribe: Uli Sattler (Scribe List)
- Link to Agenda:
Agenda
- ADMIN (20 min)
- Roll call
- Agenda amendments
- Mention OWL-RDF compatibility draft
- F2F3 - where and when? (remember 8 weeks advance notice requirement)
- Minutes of Monday's UFDTF meeting, where are they?
- PROPOSED: Accept F2F2 Minutes
- PROPOSED: Accept Previous Minutes (9 April)
- Action items status
- Pending Review Actions
- Action 76 Arrange HP review of OWL Prime page / Jeremy Carroll
- Action 86 Send proposal for issue-91 ontology property / Jeremy Carroll
- Action 90 Summarize problem with bnodes in ISSUE-3 vs bnodes in OWL list / Jeremy Carroll
- Action 100 Put n3 version of rules on wiki with pointer to documentation. All to review and discuss via email / James Hendler
- Action 102 Work with Michael to clarify semantics of deprecatedclass so that Peter becomes happy / James Hendler
- Action 115 Update the RDF mapping with the accepted resolution of ISSUE-12 as per Peter's suggestion / Boris Motik
- Action 116 Add a note to the fragments document regarding possible later extension of DL Lite / Boris Motik
- Action 117 RAISE issue on relationship between OWL-R non-entailments and OWL-Full entailments, and link to it from Fragments as EDITORIAL NOTE / Jeremy Carroll
- Action 125 Draft the "humble" editor's note / SOTD request for comments for Primer / Bijan Parsia
- Action 126 Add a from-community section for OWL 1.0 users, to Primer / Bijan Parsia
- Action 130 Propose a way to reintroduce annotations into the structural specification and to provide RDF mappings / Boris Motik
- Due and overdue Actions
- Action 43 Develop scripts to extract test cases from wiki, coordinating with Bijan, Jeremy, Alan / Sandro Hawke
- Action 112 Review syntax document as reference - what is needed? / Evan Wallace
- Action 119 Review Carsten's charge / Bernardo Cuenca Grau
- Action 124 Make sure that namespaces work right in the hello world example, and that the "separate document" link goes to the schema rather than the wiki page / Sandro Hawke
- Action 127 Review primer+editorial changes after Bijan is done making them / Deborah McGuinness
- Action 133 Update the structural spec to add anonymous individuals; no mention about semantics so far / Boris Motik
- Action 134 Set up F2F calendar poll / Sandro Hawke
- Issues (65 minutes)
- Raised Issues
- Issue 110 Use of CURIEs in Structural Specification
- Issue 111 There's no way to signal the intended semantics of an OWL document
- Issue 112 Universal property (a.k.a. universal role) missing in current OWL2 documents
- Issue 113 Some OWL-R nonentailments are OWL-Full entailments
- Issue 114 Which combinations of punning should be allowed?
- Issue 115 Icon needed for the WG pages
- Issue 116 Should Axiomatic Triples added to OWL-R Full?
- Resolved Editorial Issues
- Proposals to Resolve Issues
- Issue 76
- Issue 77
- Issue 80
- Issue 67
- Issue 81 (revised)
- Issue 9
- Issue 60
- Other Issue Discussions
- Issue 71 create datarange of literals matching given language range
- Issue 16
- General Discussions (0 min)
- Additional other business (5 min)
Next Week(s)
- General Discussions (not necessarily in this order)
- Fragments and Conformance
- Imports
- OWL 1.1 Full (see Mapping to RDF, OWL DL/Full compatibility)
- Versioning
- Easy Keys
- Rich Annotations
- Test Cases
Regrets
JeffPan
http://www.w3.org/2007/OWL/wiki/index.php?title=Teleconference.2008.04.16/Agenda&oldid=6064
Announcement - "No Closures" prototype.
Here is a method declared to be castable as a Runnable
import net.java.dev.rapt.anon.As;

class MyClass {
    @As(Runnable.class)
    void slowStuff() {
        ... // whatever
    }
}
In order to cast that method to a Runnable, you do this
new Thread(Runnables.slowStuff(this)).start();
That's right, you get the Runnable by calling a static method on the Runnables class, and pass in the object that owns the method.
So how does this work?
Example).
What's Hot and What's Not.
I don't think this is as good as any of the closures proposals, except in one respect: you don't have to wait for JDK 7 to use it.
by hlovatt - 2008-03-17 20:41
by rdjackson - 2008-03-20 18:37: Thanks for the link to the Netbeans wiki on how to get this set up so that things just work. I can't overstate how much I really like this idea and solution to the "No Closures" needed approach.
by rdjackson - 2008-03-19 19:30
by rdjackson - 2008-05-21 18:45: Any updates on when you will get around to releasing the code for this? I'm still very interested in taking a look at it.
by tobega - 2008-03-11 03:42: Cool!
by aberrant - 2008-03-10 09:25
by opinali - 2008-03-10 08:26: :)
by mcnepp - 2008-03-10 08:03: Hi
by brokenshard - 2008-03-11 22:02: I agree with fabriziogiudici, just the extra 's' may cause some problems...
by stefan_schulz - 2008-03-08 05:50
by zero - 2008-03-08 02:42: Really cool, let's hope the news spreads! Looking forward to the next feature release :-)
by fabriziogiudici - 2008-03-08 01:58
https://weblogs.java.net/blog/brucechapman/archive/2008/03/anouncement_no.html
NAME
sched_setscheduler, sched_getscheduler - set and get scheduling algorithm/parameters
SYNOPSIS
#include <sched.h>

int sched_setscheduler(pid_t pid, int policy, const struct sched_param *p);
int sched_getscheduler(pid_t pid);

Privileges and resource limits
In Linux kernels before 2.6.12, only privileged (CAP_SYS_NICE) processes can set a non-zero static priority.

RETURN VALUE
On success, sched_setscheduler() returns zero and sched_getscheduler() returns the current policy. On error, -1 is returned and errno is set appropriately.
ERRORS
EINVAL The scheduling policy is not one of the recognized policies, or the parameter p does not make sense for the policy. EPERM The calling process does not have appropriate privileges. ESRCH The process whose ID is pid could not be found.
CONFORMING TO
POSIX.1b (formerly POSIX.4)
http://manpages.ubuntu.com/manpages/dapper/man2/sched_setscheduler.2.html
RapPlan
From Eclipsepedia
Revision as of 03:59, 6 September 2007 by Rherrmann.innoopract.com (Talk | contribs)
RAP development plan
This document is a draft and is subject to change, we welcome all feedback.
To ensure the planning process is transparent and open to the entire Eclipse community, we (the RAP project team) post plans in an embryonic form and revise them throughout the release cycle.
Draft plan for RAP 1.0
- 2006-06 - 2006-09 initial code contribution: Java component library for UI development (done)
- 2006-10: Moving widget toolkit to org.eclipse packages, re(de)fine widget toolkit api (done / without milestone release)
- 2007-02 M1: Basic WebWorkbench implementation running on OSGi (functionality implemented - release targeted for first week of february) (done)
- 2007-03 M2: extending workbench functionality, initial work on Perspective switcher, extending jface functionality (dialog framework) (done)
- 2007-04-27 M3: move to org.eclipse.swt, org.eclipse.jface, org.eclipse.ui namespaces, .war deployment, ViewActions (basic implementation), Taborder (done)
- 2007-06-08 M4: move to qooxdoo 0.7, finalize move to a RCP API subset, implement untyped listeners, adapt codebase to RCP 3.3, theme management, expand JFace, Workbench implementation (open/close workbench parts, ...) (done)
- 2007-07-13 M5: background process management (ProgressMonitor, Display.syncExec ...), finalize new table component,
branding functionality, performance optimization (client side widget cache), expand Workbench implementation (perspective extensions, perspective switcher, ...), client side font size calculation, data binding (done)
- 2007-08-17 M6
- Provide all API for Release 1.0
- check for UnsupportedOperationException (done)
- move required classes from w4t to rwt bundle, refactor package names (done)
- revise request parameter names (w4t_startup, w4t_custom_service_handler) (done)
- introduce org.eclipse.ui plug-in (done)
- resolve api differences for Dialogs, Color, Font, Image (done)
- mark Adaptable non-api (done)
- Provide facade for TextSizeDetermination (done)
- Define API for custom widget developers
- performance optimization
- finalize TableViewer, TreeViewer
- branding functionality (done)
- servlet name
- favicon
- title
- theming
- make index page body configurable
- 2007-09-14 RC1: Code freeze for 1.0 (IN PROGRESS)
- final performance optimization
- error handling
- deliver exceptions as HTML page with HTTP status 500 (done)
- change client-side response processing to react on HTTP status != 200 (done)
- robustness
- verify JavaScript response (e.g. detect initial comment) and react accordingly (warning, session restart) (done)
- 190762 "confirm exit" message in branding (displayed when pressing F5, location change) (done)
- handle requests from different browser tabs/windows: deliver HTML page that informs the user that there is already a session running and provide a link to start a new session
- request versioning (request counter): in case of an invalid request, send above mentioned HTML page
- handle deep links when there is already a session running: inform user that request has been processed by existing session, optionally send request when activating browser tab/window
- Message to the user when connection is lost (see bug 183213): let the user retry to send the request, terminate/clean up client application when problem persists. (done)
- move initialization code from EngineConfigWrapper to bundle startup
- fixing of major bugs
- 186804 Can't set a textfield value if text is too long, verify that this also solves 200910 (done)
- 187258 Support doit on ShellEvent for shellClosing (done)
- 187540 Dynamically generated context menus do not lay out correctly (done)
- 191964 SWT.MULTI for Tree broken
- 196911 Invalid value Text#getText with opened shell (done)
- Implement 'empty' API: VerifyEvents, 200394, 200396, 200397, 187252
- Fix bugs introduced with API cleanup: 201403, 201225, 201286, 201528 (done)
- 190762 Ask before browser window/tab is closed
- 188045 Problem with character-encoding of request data
- 200390 URLImageDescriptor cannot resolve image URLs with platform:/ schema
- 199965 Activate event on shell open
- 201080 [Text] disposing of focused text widget causes JavaScript error (done)
- 2007-09-28 v1.0: Release 1.0
- fixing of critical bugs
The work will be conducted in the following components
Component: org.eclipse.rap.ui
Workbench
- WorkbenchAdvisor, WorkbenchWindowAdvisor, ActionBarAdvisor
- extension of menubars / toolbars by extension points
- WorkbenchWindow SelectionService
- WorkbenchPage PartListener
- IWorkbenchPart, IViewPart
- IWorkbenchPart: maximize, minimize, restore (done)
- IWorkbenchPart: Workbench Actions for opening workbench parts
- IWorkbenchPart: Close-Button (configurable)
- Toolbar, Menu for ViewParts
- ActionSets: IWorkbenchWindowActionDelegate
Perspectives
- definition of perspectives by IPerspectiveFactory (partly done - finalizing implementation e.g. StandaloneView)
- Perspective extensions
- Perspective Switcher
- Perspective Actions (open, close, etc.)
- Perspective Name in WindowTitle
Component: org.eclipse.rap.jface
- Actions
- MenuManager
- CoolBarManager
- Structured Viewers: TreeViewer, TableViewer
- WindowManager, Window, ApplicationWindow
- ImageDescriptor, ImageRegistry
- Dialog Framework (done for the most frequently used classes)
Component: org.eclipse.rap.rwt
- Font: base implementation (done)
- Font: font size calculation
- Image: ImageLoader with size calculation
- Table: Extending the current implementation to match SWT (e.g. inline-editing, bounds, font, and color for TableItem, images, etc.)
- Tree: Extending the functionality to match SWT Tree (e.g. selection should not cause expand event, Images etc.)
- Browser-Widget (mostly done)
- Z-Index (done)
- Modify-Event (done for existing widgets as there are Text and Spinner)
- Menu: Extending the functionality to match SWT (e.g. enable and visible properties, ArmEvent for MenuItem, MenuEvent for Menu, etc.)
Community involvement desirable
The implementation of the following functionality will largely depend on community involvement:
- StatusBar (implemented by RAP team)
- TraverseEvents? (deferred)
- Short-Cuts (Accelerator) (deferred)
- Preferences (deferred)
- Drag&Drop of Workbenchparts (deferred)
- Columns for the tree widget (used e.g. by the PropertySheet) (implemented by RAP team)
Out of scope for version 1.0
- IEditorPart, IEditorInput (done)
- Workspace, Resources
- Help System, context sensitive help, HelpEvents
- KeyEvents
- Low-Level-MouseEvents
- Paint-Events
- Undocked ViewParts
- Navigation
- Accessibility
- SWT-AWT - Bridge
- StyledText
- Cursor
http://wiki.eclipse.org/index.php?title=RapPlan&oldid=49361
csFontCache::KnownFont Struct Reference
A font known to the cache. More...
#include <csplugincommon/canvas/fontcache.h>
Detailed Description
A font known to the cache.
Definition at line 144 of file fontcache.h.
Member Data Documentation
The font size this font was cached for.
Definition at line 148 of file fontcache.h.
The documentation for this struct was generated from the following file:
- csplugincommon/canvas/fontcache.h
Generated for Crystal Space 2.0 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api-2.0/structcsFontCache_1_1KnownFont.html
a clear step by step to publish on the stores
Model associations and stores
Cannot sync store with hasmany association - Started by Golden.Vulture, 14 Jan 2014 11:15 PM
Working with stores - Started by silveralecs, 19 Dec 2013 3:53 AM
- Last Post By:
- Last Post: 20 Dec 2013 3:20 AM
- by silveralecs
Best Practice for Waiting n stores to load before showing view
MVC - associating views to controllers - Started by paromitadey, 21 Nov 2013 6:06 AM
- Last Post By:
- Last Post: 21 Nov 2013 10:32 AM
- by troseberry
Store performance
Memory Issues: Properly clear and remove Store records and unused References
- Last Post By:
- Last Post: 17 Dec 2013 5:31 AM
- by kidmanmatch
- Last Post By:
- Last Post: 20 Jun 2013 5:50 AM
- by davysuisse
trying to detect store update before displaying a grid - Started by mrbobpowell, 6 Jun 2013 11:01 AM
- Last Post By:
- Last Post: 7 Jun 2013 7:22 AM
- by mrbobpowell
Stores and ids conflict - Started by sencha-dev1, 30 May 2013 1:48 AM
- Last Post By:
- Last Post: 31 May 2013 5:23 AM
- by sencha-dev1
[FIXED] PullRefresh ignores sorterparams
- Last Post By:
- Last Post: 11 Mar 2013 6:45 AM
- by mitchellsimoens
Problem with stores and loading lists.
- Last Post By:
- Last Post: 4 Feb 2013 1:59 PM
- by mitchellsimoens
Creating one grid view with multiple stores?
How to deal with stores in another namespace
IE 8 and behind error - Started by drumforhim, 1 Nov 2012 4:40 AM
- Last Post By:
- Last Post: 1 Nov 2012 5:54 AM
- by drumforhim
Reloading a Nested List problem - Started by mediademon, 19 Oct 2012 5:10 AM
- Last Post By:
- Last Post: 22 Oct 2012 2:41 AM
- by mediademon
Can't Access Any Dynamically Created Stores
Multiple Store Load Problem - Started by bsmithinghamisd, 8 Sep 2012 1:00 PM
Dynamically changing the proxy url on a store is wiping my records each time
How to listen to the load event of a store in Architect?
- Last Post By:
- Last Post: 15 Aug 2012 3:56 PM
- by billtricarico
http://www.sencha.com/forum/tags.php?tag=stores
I felt I was a bit grey in this aspect, and I found this:
He says,
"'static' can also be defined within a function. If this is done, the variable is initialised at compile time and retains its value between calls. Because it is initialised at compile time, the initialisation value must be a constant."
I just tried out a simple code where I didn't initialise the static variable. I was expecting some warning to be displayed when I compiled it with the -Wall option. Instead, the static variable was initialised to 0.
Can anyone shed some light on this?
Here's the code:

#include <stdio.h>

int func()
{
    static int i;
    i++;
    return (i);
}

int main()
{
    int count = 0;
    while (count < 5)
    {
        printf(" count :%d, func = %u\n", count, func());
        count++;
    }
    return 0;
}

And the output:

count :0, func = 1
count :1, func = 2
count :2, func = 3
count :3, func = 4
count :4, func = 5
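The behaviour the poster observed is in fact guaranteed: both C and C++ zero-initialize objects with static storage duration when no explicit initializer is written, which is why -Wall has nothing to warn about. A minimal sketch (shown as C++; the C rule is the same):

```cpp
#include <cassert>

// Objects with static storage duration are zero-initialized when no
// initializer is written (C99 6.7.8p10; C++ gives the same guarantee),
// so `calls` reliably starts at 0 and the compiler stays silent.
int next_call_number()
{
    static int calls;   // implicitly initialized to 0 before first use
    return ++calls;     // retains its value between calls
}
```

Each call returns 1, 2, 3, ..., exactly like the func() in the forum post.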
http://cboard.cprogramming.com/c-programming/91555-static-variables-plus-initialisation.html
java.lang.Object
  org.apache.commons.resources.impl.ResourcesBase
    org.apache.commons.resources.impl.CollectionResourcesBase
      org.apache.commons.resources.impl.WebappXMLResources
public class WebappXMLResources
Concrete implementation of Resources that wraps a family (one per Locale) of XML documents that share a base context-relative path for servlet context resources, and have name suffixes reflecting the Locale for which the document's messages apply. Resources are looked up in a hierarchy of properties files in a manner identical to that performed by java.util.ResourceBundle.getBundle().
The base resource path passed to our constructor must contain the
context-relative base name of the properties file family.
For example, if the base path is passed as, the resources for the
en_US Locale would be stored under URL, and the default
resources would be stored in.
public WebappXMLResources(String name, String base, javax.servlet.ServletContext servletContext)
Create a new Resources instance with the specified logical name and base resource URL.
name - Logical name of the new instance
base - Base URL of the family of properties files that contain the resource keys and values
servletContext - the ServletContext instance to use for resolving resource references
http://commons.apache.org/dormant/resources/apidocs/org/apache/commons/resources/impl/WebappXMLResources.html
How Groovy Helps JavaFX: Farewell Pure Java Code?
One of the many cool sample applications known to those trying out JavaFX is the JavaFX Weather application, which is now bundled with the NetBeans IDE 6.5.1/JavaFX 1.2 bundle. In short, it connects to a weather service and then displays the results for selected cities in an impressive JavaFX GUI:
In a technical session at JavaOne on Wednesday, entitled "JavaFX Programming Language + Groovy = Beauty + Productivity", Dierk König showed a number of powerful ways in which Groovy and JavaFX can interact. One of those ways is outlined below, with all the code and the results. It is really quite impressive and earned Dierk a round of applause from his audience when he demoed it in his session.
The JavaFX Weather application creates the above GUI, while using this massive Java class to connect to Yahoo's weather service. The RSS feed that the Java class connects to is displayed in full below:
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<rss version="2.0" xmlns:
<channel>
<title>Yahoo! Weather - Prague, EZ</title>
<link>*</link>
<description>Yahoo! Weather for Prague, EZ</description>
<language>en-us</language>
<lastBuildDate>Fri, 05 Jun 2009 8:00 pm CEST</lastBuildDate>
<ttl>60</ttl>
<yweather:location
<yweather:units
<yweather:wind
<yweather:atmosphere
<yweather:astronomy
<image>
<title>Yahoo! Weather</title>
<width>142</width>
<height>18</height>
<link></link>
<url></url>
</image>
<item>
<title>Conditions for Prague, EZ at 8:00 pm CEST</title>
<geo:lat>50.1</geo:lat>
<geo:long>14.28</geo:long>
<link>*</link>
<pubDate>Fri, 05 Jun 2009 8:00 pm CEST</pubDate>
<yweather:condition
<description><![CDATA[
<img src=""/>
<b>Current Conditions:</b>
Partly Cloudy, 54 F
<b>Forecast:</b>
Fri - Partly Cloudy. High: 58 Low: 42
Sat - PM Rain. High: 58 Low: 49
<a href="*">Full Forecast at Yahoo! Weather</a>(provided by The Weather Channel)<br/>
]]>
</description>
<yweather:forecast
<yweather:forecast
<guid isPermaLink="false">EZXX0012_2009_06_05_20_00_CEST</guid>
</item>
</channel>
</rss>
Now, whenever you hear "Groovy", you should think "grunt work". That's what Groovy is particularly good at. A very strong case in point is that of web services. Also, that of parsing HTML and XML. Therefore, when you need to interact with web services in your Java application, the most obvious helper language you should think of is Groovy.
Look again at the RSS above and then look at line 11 in the Groovy code below. Here's that line:
def channel = new XmlParser().parse(url).channel

That line gets you the "channel" element in the RSS feed above! Awesome, right? And from there on, the Groovy script below parses the RSS feed, identifying exactly those pieces that are of interest to the JavaFX GUI, producing EXACTLY the same result, in approximately 20 lines, as the original does in approximately 250.
Take a look at the Groovy snippet below to see how it works, by comparing it to the RSS feed above. Note that this isn't even a snippet! It is the WHOLE Groovy web service class. Now that's just plain cool, especially after comparing it again to the original monstrosity.
package weatherfx.service
class YahooWeatherServiceG {
static YW = new groovy.xml.Namespace("")
def forecasts
YahooWeatherServiceG(String code, boolean celsius) {
def url = ""
println url.toURL().text
def channel = new XmlParser().parse(url).channel
cityName = channel[YW.location].@city
def wind = channel[YW.wind].first()
windSpeed = wind.@speed.toInteger()
windDirection = wind.@direction.toInteger()
def cond = channel.item[YW.condition].first()
temp = cond.@temp.toInteger()
forecasts = channel.item[YW.forecast]
}
String cityName
int temp
int windSpeed
int windDirection
int getConditionCode(int day=0) { forecasts[day].@code.toInteger() }
int getLowsTemp (int day=0) { forecasts[day].@low.toInteger() }
int getHighsTemp (int day=0) { forecasts[day].@high.toInteger() }
}
Now, take a moment to imagine how much simpler, effective, and less error-prone it will be to (a) test and (b) maintain the above code, compared to its original pure Java version.
However, there isn't a JavaFX/Groovy cross-compiler yet. So, how to replace your Java web service code with the above in Groovy? Create a separate project in which you create your Groovy web service. Then add that project to your JavaFX application's classpath. Next, replace the two or three references in your JavaFX application to refer to the Groovy class (which, after compilation, is now a Java class) instead of the original Java class.
In a single picture, the above paragraph gets you the following:
Then run the JavaFX application and you'll have the same result as before, with the difference that the web service code is now handled in Groovy. And... there's no pure Java code in your application at all anymore. Oh, dear. JavaFX creates the GUI, while Groovy does the grunt work in the background. So, farewell, pure Java code?
Andres Almiray replied on Sun, 2009/06/07 - 11:57pm
in response to:
James Jamesson
That depends on your definition of "Java": is it the language or the platform? or both. Frankly I don't care if a JavaFX (or any other alternate JVM language for that matter) post appears from time to time on my Javalobby feed, I can decide to skip it if I want to. Disclaimer: I love Groovy, I don't like JavaFX Script that much, I'm partial to Scala and I'm a total ignorant on Clojure. Tag me as a madman, I prefer polyglot programmer ;-)
Peace.
Michal Hlavac replied on Mon, 2009/06/08 - 1:11am
in response to:
James Jamesson
It's interesting how intolerant readers are here. This article is strongly about Java. Why didn't you simply skip this article? (rhetorical question)
Geertjan Wielenga replied on Mon, 2009/06/08 - 2:56am
in response to:
James Jamesson
Hmmm. Not in one single place has that argument been made in this article. Not even once. If anything, it's asking you to favor Groovy over Java.
James Jamesson replied on Mon, 2009/06/08 - 3:03am
in response to:
Michal Hlavac
Andrew replied on Mon, 2009/06/08 - 3:04am
in response to:
Michal Hlavac
Hi Michal,
I agree with you that when you wrote an article about several technologies it is difficult not to rush to publish it on any board possible.
However, I share some concerns with jamesjames that JavaLobby board is being used for preaching JavaFX / Groovy over Java.
Just look at the title "… Farewell Pure Java Code?".
And further the author writes
Quote 1:
"Now that's just plain cool (Groovy), especially after comparing it again to the original monstrosity (Java)."
Quote 2:
"Now, take a moment to imagine how much simpler, effective, and less error-prone it will be to (a) test and (b) maintain the above code, compared to its original pure Java version."
I think to write articles about JavaFX/ Groovy code being "much simpler, effective, and less error-prone" than Java code is more appropriate on Groovy board.
And for articles about JavaFX there is a nice RIA Zone.
Geertjan Wielenga replied on Mon, 2009/06/08 - 3:23am
So all you want on Javalobby are articles that are uncritical of Java? What qualifies this article for inclusion on Javalobby is the fact that it is about Java. The fact that it turns out that it doesn't FAVOR Java is completely and utterly irrelevant.
James Jamesson replied on Mon, 2009/06/08 - 3:49am
in response to:
Michal Hlavac
You have got to be kidding me, Michal. Let me rephrase it for you. I won't use JavaFX because I do not need it. And, like JavaScript, JavaFX is not Java. The point here is that a scripting language having the word "Java" in its name does not qualify it as Java related.
I apologize for being harsh to you. With peace!
Jose Maria Arranz replied on Mon, 2009/06/08 - 3:50am
Groovy is a higher-level language on top of "Java as platform"; JavaFX is higher still. Groovy, JavaFX, and Java as a language are different flavours of "Java as platform", so Groovy and JavaFX news makes sense on JavaLobby. Some day, if overwhelming success of Groovy and JavaFX occurs, a javafx.dzone.com and a groovy.dzone.com would be necessary to avoid too many news items per day, and java.dzone.com would specialize in Java "as language".
That said, I can understand that JavaFX is the new cornerstone of the Sun (Oracle) offering, and that is fine and needed; JavaFX can compete with Flash/Flex/AIR, and Groovy has its place. But many people (myself included) think scripting languages are problematic for managing complex applications because of the lack of a compiler (yes, I know you can compile Groovy). Promoting the mantra "no pure Java code" can be appealing but can drive bad decisions about the right technology to use.
Scripting has its place, really, in small applications (or applications repeating the same pattern again and again) and in highly dynamic applications which need to read source code from external sources.
I've read again and again examples like:
def channel = new XmlParser().parse(url).channel
"Now that's just plain cool, especially after comparing it again to the original monstrosity (in Java)."
Geertjan, it is not the language, it is the API. The original code in Java is using a SAX parser; a SAX parser is a low-level API for markup, and using DOM would be shorter (I'm sure Groovy uses DOM behind the scenes): the method Document.getDocumentElement() returns the channel node. Using XPath and/or TreeWalker can help to traverse the DOM in a similar fashion to what you are doing in Groovy. Yes, the Java code would be slightly more verbose, but who cares? Even more, if you wrap some typical DOM code in a custom self-made mini-API, you have the same productivity as Groovy. There is one problem: you need to compile your Java code, and you don't in Groovy and JavaFX. Yes, definitely, Groovy, JavaFX, etc. are nice if the compilation phase is a problem.
By the way, jamesjames, Geertjan works for Sun/Oracle and I'm sure he does not want to kill Java... the language :)
Fabrizio Giudici replied on Mon, 2009/06/08 - 11:01am
I think that it would make sense to *discuss* the creation of a JavaFX zone, but it would mostly for ... promoting JavaFX on its own.
For the rest, JavaFX is the only non-Java language that I'm using on the Java platform. I don't have any particular interest in Groovy, JRuby, Scala; nevertheless I regularly read articles about that stuff, generally because looking at things from different perspectives is inspiring; because who knows, sooner or later I could change my mind (unlikely); or because I would gain a better awareness in motivating my decision to stick with Java (likely).
My point is that it's important that the title (or the first lines) describes the article content. In this case it does and if I'm not in the mood of reading about Groovy I don't find it hard to skip it.
Osvaldo Doederlein replied on Mon, 2009/06/08 - 11:22am
Groovy is Java. JavaFX is Java. Scala is Java. JRuby is Java. Repeat after me 100 times, "Java is a Platform".
Yes we have a Groovy Zone, and we should have a JavaFX Zone, at least if JavaFX content and interest are already big enough. But this particular article belongs here, exactly because it's a "language mashup" article. Notice: We don't have separated "Java Language Zone" and "Java Platform Zone", so the "Java Zone" must obviously accept any material that is relevant to Java-as-a-Platform. Perhaps we should have that separation, to avoid such stupid discussions.
What's next, people complaining about Java Zone articles that cover any API/framework that's not included in the ME/SE/EE platforms and not an official JCP API? "Move this crap to the Hibernate Zone", I can already hear...
Otengi Miloskov replied on Mon, 2009/06/08 - 4:01pm
Calm down, people. This article is about Java and Groovy. It happens that JavaFX is the front end; it could be JSF or JSP or GWT or even Flex. The point here is the relationship between Java and Groovy, and as I said, if I were to replace Java it would be with Scala. But I think the article is fine and critiques Java's flaws so we Java developers can learn from Groovy and apply those lessons to Java. I don't see anything bad about this article, because Groovy is also Java behind the scenes.
For JavaFX and Flex and even GWT there is a RIA zone; that zone is for those topics, but sometimes we need a front end to explain the exercise of the article, and in this article it happens to be JavaFX.
Personally, I don't like JavaFX much, I prefer Flex, but it is not bad to learn something new sometimes.
Leonel Gayard replied on Tue, 2009/06/09 - 9:34am
Guys, doesn't it strike you that users will also have to download Groovy's 1.6 MB jar file in order to get this simple app working?
Also, this simple line of code:
def channel = new XmlParser().parse(url).channel
This is possible not because Groovy the language is cool (it is), but because its accompanying classes (the GDK) are awesome.
There are a lot of classes in the JDK for XML, but they aren't as simple to use.
This is a class I could easily see implemented in Java.
Instead of promoting language changes, we could wish for easier to use classes in the JDK.
Andres Almiray replied on Tue, 2009/06/09 - 12:10pm
in response to:
Leonel Gayard
In plain Java you would go as far as
calling .channel requires metaprogramming techniques that are simply not available in Java, so this is really a combination of language features and libraries. The thing is that both are so tighly integrated that a library feature can be mistaken for a language feature, like the following example demonstrates ;-)
http://java.dzone.com/news/draft-how-groovy-helps-javafx
A few people on the list have expressed a desire to add the shortcut
'-l' for the '--limit' switch, currently used only with 'svn log'.
Running 'svn log --limit' seems to be a pretty common operation, and a
shortcut for it would be useful. The implementation is trivial, but I
know how protective we generally are about our shortcut namespace, so I
wanted to run this by the list before doing anything.
Are there any objections to adding '-l' as a shortcut for '--limit'?
-Hyrum
This is an archived mail posted to the Subversion Dev mailing list.
http://svn.haxx.se/dev/archive-2007-03/0476.shtml
Vector-based std::map functionality. More...
#include <VecMap.hpp>
Vector-based std::map functionality.
This template class mimics the 'std::map' associative container interface; however, its semantics are significantly different. Storage for the map-lite class is provided by the std::vector class where the entries are sorted by key value.
Light weight associative container functionality for small keys and values, e.g. key = integer and value = pointer.
Modifications to the vecmap contents are linear complexity in violation of the associative container requirement for logarithmic complexity. Furthermore, modification operations are guaranteed to invalidate all iterators after the insert/erase point. Insertion operations may also invalidate all iterators if the storage is reallocated.
All non-modifying query operations conform to either the constant or logarithmic complexity.
Definition at line 60 of file VecMap.hpp.
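The storage scheme described above is easy to sketch: keep entries in a vector sorted by key, binary-search for lookups (logarithmic), and shift the tail on insert (linear, invalidating iterators). The class and method names below are illustrative only, not the actual VecMap API:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Sorted-vector associative container: lookups are O(log n) via binary
// search, while insert/erase are O(n) because the tail must be shifted.
template <class Key, class Value>
class FlatMap {
    using Entry = std::pair<Key, Value>;
    std::vector<Entry> data_;  // kept sorted by key at all times

    typename std::vector<Entry>::iterator lower(Key const& k) {
        return std::lower_bound(
            data_.begin(), data_.end(), k,
            [](Entry const& e, Key const& key) { return e.first < key; });
    }

public:
    void insert(Key const& k, Value const& v) {
        auto it = lower(k);
        if (it != data_.end() && it->first == k)
            it->second = v;            // key already present: overwrite
        else
            data_.insert(it, Entry{k, v});  // linear: shifts the tail
    }

    Value* find(Key const& k) {
        auto it = lower(k);
        return (it != data_.end() && it->first == k) ? &it->second : nullptr;
    }

    std::size_t size() const { return data_.size(); }
};
```

The contiguous storage is exactly why this layout wins for small keys and values: lookups touch one cache-friendly array instead of chasing tree nodes.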
http://trilinos.sandia.gov/packages/docs/r11.2/packages/stk/doc/html/classsierra_1_1vecmap.html
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
#include <boost/exception/enable_current_exception.hpp>
namespace boost {
    template <class T>
    ---unspecified--- enable_current_exception( T const & e );
}
An object of unspecified type which derives publicly from T. That is, the returned object can be intercepted by a catch(T &).
This function is designed to be used directly in a throw-expression to enable the exception_ptr support in Boost Exception. For example:
class my_exception: public std::exception { };
....
throw boost::enable_current_exception(my_exception());
Unless enable_current_exception is called at the time an exception object is used in a throw-expression, an attempt to copy it using current_exception may return an exception_ptr which refers to an instance of unknown_exception. See current_exception for details.
Instead of using the throw keyword directly, it is preferable to call boost::throw_exception. This is guaranteed to throw an exception that derives from boost::exception and supports the exception_ptr functionality.
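Trying the Boost facility requires a Boost build; as a rough illustration of the same capture-and-rethrow pattern, here is the C++11 standard-library equivalent (std::exception_ptr with std::current_exception and std::rethrow_exception), which standardized the idea Boost Exception pioneered. This is a sketch of the pattern, not Boost's implementation:

```cpp
#include <exception>
#include <stdexcept>
#include <string>

// Capture the exception active inside a catch block as an exception_ptr,
// which can then be stored or moved across threads.
std::exception_ptr capture_failure()
{
    try {
        throw std::runtime_error("worker failed");
    } catch (...) {
        return std::current_exception();  // snapshot of the in-flight exception
    }
}

// Later (possibly on another thread), rethrow the captured exception
// and handle it normally.
std::string describe(std::exception_ptr p)
{
    try {
        std::rethrow_exception(p);
    } catch (std::exception const& e) {
        return e.what();
    }
    return "no exception";
}
```

With Boost on pre-C++11 compilers, enable_current_exception plays the role of making the thrown object copyable into such a pointer in the first place.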
http://www.boost.org/doc/libs/1_44_0/libs/exception/doc/enable_current_exception.html
Multiplies a point by another.
The coordinates of the first point will be multiplied by those of the second point. The second point is not modified.
template<typename Point1, typename Point2> void multiply_point(Point1 & p1, Point2 const & p2)
Either
#include <boost/geometry/geometry.hpp>
Or
#include <boost/geometry/arithmetic/arithmetic.hpp>
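The documented semantics can be restated without Boost at hand: each coordinate of p1 is multiplied by the matching coordinate of p2, and p2 is left untouched. A minimal sketch with a hypothetical 2-D point type (not the real Boost.Geometry point concept, which is generic over dimension and coordinate type):

```cpp
// Hypothetical 2-D point standing in for a Boost.Geometry point type.
struct Point {
    double x;
    double y;
};

// Mirrors the documented behavior of multiply_point: p1's coordinates
// are scaled component-wise by p2's; p2 is not modified.
void multiply_point(Point& p1, Point const& p2)
{
    p1.x *= p2.x;
    p1.y *= p2.y;
}
```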
http://www.boost.org/doc/libs/1_47_0/libs/geometry/doc/html/geometry/reference/arithmetic/multiply_point.html
Summary
This article will help the users to examine the use of HTTP Modules and their use in extending the pipeline. The HTTP Pipeline is a series of extensible objects that are initiated by the ASP.NET runtime in order to process a request. Http Handlers and Http Modules are .NET components that serve as the main points of extensibility in the pipeline. This article, will demonstrate how to create an HTTP Module. Http modules are filters that can pre and post-process requests as they pass through the pipeline. As a matter of fact, many of the services provided by ASP.NET are implemented as HTTP Modules. Examples are modules which are used to implement ASP.NET security as well as ASP.NET session state.
At the most fundamental level, ASP.NET HTTP Modules are classes which implement the System.Web.IHttpModule interface, illustrated below:
public interface IHttpModule
{
    void Dispose();
    void Init(HttpApplication context);
}
Hooking an HTTP handler into the pipeline requires an entry in the application's web.config or the machine.config file, and the same holds true for HTTP modules. When an HTTP module is added to the pipeline, the ASP.NET runtime will call the methods of the IHttpModule interface at the appropriate times. When the module is first created by the runtime, the Init method is called. As illustrated above, the parameter passed into the Init method allows your code to tie into the HttpApplication object.
The following table lists the key events exposed by the HttpApplication object. Note that all these events are implemented as multicast delegates so that numerous modules can register for each one.
In addition to the above events, we can also respond to the events exposed in the global.asax file for the HttpApplication object. These events include the session start and end events, as well as the start and end events for the application.
Let's create a simple HTTP module and hook it up to the pipeline. In the following example, the SampleModule class implements the IHttpModule interface. When the Init method is called by the runtime, the module "hooks up" to the events of the HttpApplication object. In this example, we have registered for the application's BeginRequest event as well as its EndRequest event. In our implementation of these methods, we simply record the time the request was sent in, and then we record the difference between times when the request is finished. In order to send the output to the response stream in some fashion, we add the result to the header of the HTTP response.
using System;
using System.Web;

namespace MikeModules
{
    public class SampleModule : IHttpModule
    {
        DateTime beginrequest;

        public void Init(HttpApplication app)
        {
            // register for events created by the pipeline
            app.BeginRequest += new EventHandler(this.OnBeginRequest);
            app.EndRequest += new EventHandler(this.OnEndRequest);
        }

        public void Dispose() {}

        public void OnBeginRequest(object o, EventArgs args)
        {
            // obtain the time of the current request
            beginrequest = DateTime.Now;
        }

        public void OnEndRequest(object o, EventArgs args)
        {
            // get the time elapsed for the request
            TimeSpan elapsedtime = DateTime.Now - beginrequest;
            // get access to the application object and the context object
            HttpApplication app = (HttpApplication) o;
            HttpContext ctx = app.Context;
            // add header to HTTP response
            ctx.Response.AppendHeader("ElapsedTime", elapsedtime.ToString());
        }
    }
}
Similar to ASP.NET HTTP handlers, we need to notify the ASP.NET runtime that we wish to hook our module into the pipeline. Once the module has been built and deployed to the application's bin directory or the machine's GAC, it must be registered in either the web.config file or the machine.config file. The following entry registers a module:
<httpModules>
    <add type="classname, assemblyname" name="modulename"/>
</httpModules>
In order to add our module, we must implement the following code in the web.config file:
<configuration>
    <system.web>
        <httpModules>
            <add name="TimeElapsedModule" type="MikeModules.SampleModule, MikeModules"/>
        </httpModules>
    </system.web>
</configuration>
In this example, the web.config file tells the pipeline to attach an instance of the MikeModules.SampleModule class to every HttpApplication object instantiated to service requests that target this application.
One of the most common operations when intercepting an HTTP request is to terminate the request. This is very common in custom authentication modules added to ASP.NET. In this case, the HttpApplication class has a method called CompleteRequest that is called for finishing the request. Calling CompleteRequest with the appropriate status codes can terminate the request.
ASP.NET HTTP Modules can be used to further extend the HTTP pipeline. Many of the services provided by the runtime are indeed implemented as HTTP modules. Common uses for creating custom modules include custom authentication and authorization schemes, in addition to any other filtering services that your application needs.
http://www.c-sharpcorner.com/UploadFile/hemantkathuria/ASPNetHttpModules11262005004251AM/ASPNetHttpModules.aspx
Cool Controls Library (v1.1)
I started to write CoolControls in October last year. The inspiration for me was Corel 8.0 and its great, well-designed UI, especially the dialog controls. If you have already seen Corel then you know exactly what I mean. If not, download the demo project and take a brief look at it now! One picture is often worth more than a thousand words, isn't it? Although the idea is borrowed from Corel, I wrote the entire code myself and I think that my implementation is faster and more accurate. Initially, I wrote support only for drop-down combo boxes and edit controls, and that early version required subclassing each control individually. In fact this took me only two days, but I wasn't satisfied with that solution. So I hit upon the idea of making a hook that could subclass all controls automatically. I wrote the code quickly because I already had some experience with Windows hooks. It was working quite well, but I still had support only for basic controls, nothing more. Well, I realized that I had to handle a variety of controls and their styles. It seemed to be horrible work, but I didn't get scared and kept writing support for the rest of the controls. At last, I had to test the code under Windows 95/98 and NT, including different system metrics and color sets. It took me a month to complete the code, a pretty long time, but I hope that the code is good and doesn't contain too many bugs. That's the whole story.
What's new in version 1.1?
- Fixed bug with LVS_EX_HEADERDRAGDROP list controls (thanks to Vlad Bychkoff for pointing this out)
- UNICODE support added
- WH_CALLWNDPROCRET is no longer supported due to some weird problems with that type of hook
- Added support for multiple UI threads - (thanks for Mike Walter for the code)
- Class name has been changed to CCoolControlsManager (my own idea)
- Added support for SysTabControl32
Yeah, this is a very good question, but the answer isn't easy. Writing the code is usually easier than writing good documentation for it :-). Nevertheless, I'll try to explain...
Generally speaking, the effect is achieved by subclassing a control and painting on the non-client area (in most cases) when the control needs to be drawn. The state of the control depends on the keyboard focus and the mouse cursor position. The control is drawn with lighter borders (never totally flat) when it has no focus or the mouse is outside of the window. Otherwise, the control is drawn in the normal way (without any changes). In more detail, the library consists of two parts. The first is a single, global CControlsManager object. The most important part of this class is a map of all subclassed controls, implemented as a CMapPtrToPtr. The ControlsManager also provides a way to add a control manually by calling its AddControl() member function.
The second part is a set of classes (not CWnd-derived) which represent each control individually. All classes derive from CCMControl, which holds important control information and is responsible for drawing the control border. CCMControl derives from CCMCore, a virtual class that provides a skeleton for all of the rest. Each CCMControl-derived class typically implements its own DrawControl() function, which is the main drawing routine. It was necessary to respect all possible control styles, and hence it took a relatively long time to write this code and check all possible situations in different system configurations.
The first thing we have to do is install an app-wide hook of the WH_CALLWNDPROCRET type. Further processing depends on the m_bDialogOnly flag. If this flag is set to TRUE, we intercept WM_INITDIALOG and make a call to the ControlsManager's Install() method, which gets a handle to the dialog as a parameter. Next, this function iterates through all dialog controls and calls the AddControl() member for each of them. This approach allows us to subclass only controls that are inserted into some dialog. If m_bDialogOnly is set to FALSE, WM_CREATE is intercepted instead of WM_INITDIALOG, so we are able to subclass all controls, including those on toolbars and other non-dialog windows.
The AddControl() member gets a handle to the control, retrieves its window class name and then tries to classify the control into one of the currently supported groups.
Currently supported controls are:
- Pushbuttons (except those with BS_OWNERDRAW style)
- Checkboxes
- Radiobuttons
- Scrollbar controls
- Edit boxes
- List boxes
- List views
- Tree views
- Spin buttons
- Slider controls
- Date/time pickers
- Combo boxes (all styles)
- Header controls
- Hotkey controls
- IPAddress controls
- Toolbars (without TBSTYLE_FLAT)
- Month calendars
- Extended combo boxes
- Rich edit controls
- Tab controls
When the window class name matches one of the supported items, an object of the appropriate type is created, the control is subclassed and the object is added to the map. The ControlsManager checks periodically (using a timer with a 100 ms period) whether the mouse cursor is over any control in the map. If so, the state of that control is changed accordingly. In addition, we have to intercept some of the messages that may cause redrawing of the control, e.g. WM_KILLFOCUS, WM_SETFOCUS, WM_ENABLE etc., and the control's border is redrawn after calling the original window procedure. The control is removed from the map when it receives WM_NCDESTROY, the last message that the system sends to a window.
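The classification step inside AddControl() can be sketched in Python, purely for illustration. The real code is C++, and the wrapper names in the lookup table below are invented stand-ins for the library's CCMControl-derived classes:

```python
# Illustrative sketch of AddControl(): classify a control by its window
# class name and, if it is supported, record it in the control map.
# (The wrapper class names here are hypothetical.)
WRAPPER_FOR_CLASS = {
    "Button": "CCMButton",          # pushbuttons, checkboxes, radiobuttons
    "Edit": "CCMEdit",
    "ComboBox": "CCMComboBox",
    "SysTreeView32": "CCMTreeView",
    "SysTabControl32": "CCMTabControl",
}

controls = {}  # hwnd -> wrapper, mirrors the CMapPtrToPtr map

def add_control(hwnd, window_class):
    wrapper = WRAPPER_FOR_CLASS.get(window_class)
    if wrapper is None:
        return False       # unsupported control type: leave it untouched
    controls[hwnd] = wrapper  # the real code also subclasses the window here
    return True
```

Unsupported class names simply fall through, which is why owner-drawn buttons and other unlisted controls are left alone.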
My code is not strongly MFC-based, because I've used only the CMapPtrToPtr class from MFC. Initially, I tried to use the 'map' class from the STL, but the resulting executable was bigger and slower than the one built using CMapPtrToPtr.
For further information look at the CoolControlsManager.cpp and .h files
How to use it?
Single-threaded applications:
This module is extremely easy to use; you only have to add two lines of code to your CWinApp-derived class implementation file. The first is a typical #include "CoolControlsManager.h" statement; the second is a call to the ControlsManager's InstallHook() method. The best place for this is the InitInstance() method of your CWinApp-derived class.
...
#include "CoolControlsManager.h"
...
BOOL CCoolControlsApp::InitInstance()
{
// Install the CoolControls
GetCtrlManager().InstallHook();
// Remaining stuff
}
Multithreaded applications:
Steps are the same as for single-threaded case, but you must add a call to InstallHook() for any additional thread you're going to create. You can place this code in InitInstance() of your CWinThread-derived class.
...
#include "CoolControlsManager.h"
...
BOOL CNewThread::InitInstance()
{
// Install the CoolControls for this thread
GetCtrlManager().InstallHook();
// Remaining stuff
}
BOOL CNewThread::ExitInstance()
{
// Uninstall the CoolControls for this thread
GetCtrlManager().UninstallHook();
// Remaining stuff
}
Of course don't forget to add CoolControlsManager.cpp to your project! That's all. The code can be compiled using VC5 as well as VC6 and has been tested under Win98 and WinNT 4.0.
Standard Disclaimer
These files may be redistributed unmodified by any means providing they are not sold for profit without the author's written consent, and providing that this notice and the author's name and all copyright notices remain intact. This code may be used in compiled form in any way you wish, with the following condition:
If the source code is used in any commercial product, then a statement along the lines of "Portions Copyright (C) 1999 Bogdan Ledwig" must be included.
Download demo project - 62 KB
Date Last Updated: May 17, 1999
Amazing! Posted by Legacy on 11/17/2003 12:00am
Originally posted by: Victor N
Looking for this thing for a long time...
Made in 1999, still useful in 2003.
Error, GetWindowText returned empty string in richedit on Win2000 Posted by Legacy on 04/15/2002 12:00am
Originally posted by: Paradoxx
How do I handle my own owner-drawn button that conflicts with yours? Posted by Legacy on 12/02/2001 12:00am
Originally posted by: devilsword
I derived my own owner-drawn button from CButton. When I used GetCtrlManager().InstallHook(), my owner-drawn button no longer worked as it did before. How do I disable your handling for just this button without affecting the other controls?
How can I add support for owner-drawn push buttons? Posted by Legacy on 10/23/2001 12:00am
Originally posted by: Joerg Hoffmann
Yes, you did really good work.
The only missing thing is support for push buttons with the BS_OWNERDRAW style.
Any suggestions how or where I could add this?
THX
Does it need a virtual destructor for CCMControl? Posted by Legacy on 07/30/2001 12:00am
Originally posted by: Hyung-Wook Kim
It's a really cool job!
But studying this work, I wonder why there isn't a virtual destructor for CCMControl... The classes inherited from CCMControl are deleted in RemoveControl(), but through a pointer of type CCMControl. So they are destructed as objects of the CCMControl type, right? Isn't that a memory leak risk?
Truly amazing Posted by Legacy on 06/05/2001 12:00am
Originally posted by: Diarrhio
Great work, man. Your work really made a diff in my app! Thanks so much for your diligence!
D
Flat controls Posted by Legacy on 05/21/2001 12:00am
Originally posted by: Iulian Costache
I want to make an application that has flat controls all the time (like they are before you move the mouse inside one). I want them to be flat even when you push a scrollbar or a thumb.
Can you help me?
Thnx
Possible Bug in Horizontal Scroll Bar Posted by Legacy on 07/05/1999 12:00am
Originally posted by: Cris Tagle
Yes, indeed it is an excellent piece of code. Thank you very much for sharing such knowledge. However, it seems to have a minor glitch in displaying horizontal scroll bars. When the control is not active, it displays a combination of both the original and the new control. The thumb doesn't seem to appear in a single location, thus producing a duplicate image. I'll try to trace the problem, but since this is your work, you may be able to track it faster than I could. And if you want a snapshot of the said control, please tell me and I'll email you the image.
Thanks again.
Error processing Tab Controls Posted by Legacy on 05/20/1999 12:00am
Originally posted by: Simon Brown
More than just thanks Posted by Legacy on 05/13/1999 12:00am
Originally posted by: Ian Duff
As a programmer of more years than I care to remember, I would just like to add my congratulations to the author. He has done an enormous amount of work, and quite generously offers his source code to us in the programming community gratis. Well done Bogdan! and thank you very much for such a beautiful piece of work. I personally think that this is worthy of Code Guru status.
|
http://www.codeguru.com/cpp/controls/controls/coolcontrols/article.php/c2157/Cool-Controls-Library-v11.htm
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
01 December 2006 18:45 [Source: ICIS news]
TORONTO (ICIS news)--A planned sharp increase in the tariffs for feedstock shipments on Canada's Cochin pipeline is set to add to the woes of Ontario’s petrochemicals hub in Sarnia, a top Canadian chemicals analyst said on Friday.
The 1,900-mile
BP, which operates
Tariffs on the line’s western leg, from
"The proposed increase in the tariff is substantial," John Cummings, a Toronto-based independent petrochemicals analyst, said in a research note to clients, adding that it would hurt the profitability of Sarnia-based ethylene and derivatives production.
The increases do not cover expenditures required for upgrading the line to resume ethylene shipments, which BP suspended earlier this year citing safety reasons.
Cummings said BP is taking a cautious approach in testing the whole line because of the recent problems it had at its
A spokesman for ExxonMobil’s Canadian affiliate Imperial Oil told ICIS news that the rate hike may impact the company’s petrochemicals business in Sarnia. However, he stressed that Imperial is able to obtain alternative feedstock and ethylene through the ExxonMobil network to support its
Nova Chemicals expects any direct impact on it to be minimal, as it hardly uses the line any longer, it said in a brief statement. “We will be interested to see any market impacts that may result from this tariff increase,” the Calgary-based petrochemicals major added.
Whether BP can successfully realise the rate hike remains to be seen, Cummings said.
A key factor will be the long-term availability of surplus ethane
|
http://www.icis.com/Articles/2006/12/01/1111148/canada-pipeline-tariff-hike-hurting-chems-analyst.html
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
My Haskell solution (see for a version with comments):
[...] today’s Programming Praxis exercise, our goal is to convert a decimal length value to the fractions used by [...]
Here’s a Python solution:
Also here: and here:
My solution in Python:
My try in REXX:
I just used ‘ and ” to indicate feet and inches. Until seeing other’s solutions it hadn’t occured to me to spell them out.
Here’s my version modified to print out “feet” and “inches”.
Here’s one in C++
namespace Carpenter
{
    class Program
    {
        static void Carpenter(double measureInches, out int feet, out int inches, out int thirtyseconds)
        {
            feet = (int)(measureInches / 12);
            inches = (int)(measureInches % 12);
            thirtyseconds = (int)((measureInches - (feet * 12) - inches) * 32.0);
        }

        static void Main()
        {
            double measureInches = 26.375;
            int feet, inches, thirtyseconds;
            Carpenter(measureInches, out feet, out inches, out thirtyseconds);
            System.Console.WriteLine("Feet {0}, Inches {1}, 1/32 {2}", feet, inches, thirtyseconds);
        }
    }
}
This is written in Excel VBA; it is called using =CarpenterNotation(A1) for the defaults, or by supplying the various optional parameters.
It can optionally round to various fractions; 16ths are the default. Typical values would be 8 for 8ths or 16 for 16ths of an inch.
It can optionally round up or down, otherwise standard rounding occurs.
It can optionally display a dash on the left or right side for negative numbers, or parenthesis is the default.
It can optionally display a tilde to represent when the number is now an approximation due to rounding, or not display any notation, or a double-tilde is the default approximation notation.
I made some changes. First I fixed a bug if zero appears in the denominator. I forgot to test for that, so I fixed it. And I made an option to decide if you want to display zero inches when there is feet; otherwise inches will always display if feet is zero.
Also, I decided to go with no approximation indication (FALSE value) or TRUE value to display approximation by using a tilde to show that the actual value is LESS than the rounded value displayed and I show the double tilde if the actual value is GREATER than what is displayed. This lets you know if the display is just a hair light or heavy simply by looking at the tilde’s.
So now TRUE/FALSE is used for the last 2 options:
Sorry, I wrote my notes backwards; I wish that I could edit it. For the tilde indication for my code above, here are the rules:
if actual cell value > rounded display value then a tilde
if actual cell value < rounded display value then a double-tilde
saying the same thing in the reverse way:
if rounded display value < actual cell value then a tilde
if rounded display value > actual cell value then a double-tilde
This is my final version if you’d like to have it–it’s public domain.
I added some error checking and found a bug if the value was just barely under 1 foot, 2 feet, etc, then it would round up but show 0/16″ Ooops, sorry. It’s fixed now.
I have tested this really well and it works solidly, I hope that you like it, sorry for the different versions, but the final product is nice.
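For comparison, the same idea can be sketched in a few lines of Python. This is my own minimal version, not the VBA code: it rounds to the nearest 1/16 inch by default and appends a tilde (or double-tilde) when rounding made the displayed value inexact, following the rules above.

```python
from fractions import Fraction

def carpenter_notation(value_inches, denom=16):
    """Round a decimal inch value to feet, whole inches and a fraction of
    an inch (nearest 1/denom), marking inexact results with a tilde."""
    exact = Fraction(value_inches)
    rounded = exact.limit_denominator(denom)  # nearest fraction with denominator <= denom
    feet = int(rounded // 12)
    rem = rounded - 12 * feet
    whole = int(rem)
    part = rem - whole
    text = ""
    if feet:
        text += f"{feet}' "
    text += str(whole)
    if part:
        text += f" {part.numerator}/{part.denominator}"
    text += '"'
    if rounded != exact:
        # actual value heavier than displayed -> tilde, lighter -> double-tilde
        text += " ~" if exact > rounded else " ~~"
    return text

print(carpenter_notation(26.375))  # 2' 2 3/8"
```

Fraction.limit_denominator does the rounding, so the fraction is always reported in lowest terms (3/8 rather than 6/16).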
In FORTH, still rounds down sometimes but mostly correct. Also I just print “inch(es)” as shortcut…
Execution:
|
http://programmingpraxis.com/2011/07/01/feet-and-inches/?like=1&_wpnonce=274c59a080
|
CC-MAIN-2014-10
|
en
|
refinedweb
|
Force A Component To Only Have One Child
A component can normally have an arbitrary number of elements nested directly inside it. React's `Children.only` function can be used to enforce that it receives a single direct child.
```javascript
import React, { Children, Component } from "react";
class App extends Component {
  render() {
    // Children.only throws at render time unless exactly one child was given
    return (
      <div>{Children.only(this.props.children)}</div>
    );
  }
}
```
|
https://til.hashrocket.com/posts/i0shbgh0me-force-a-component-to-only-have-one-child.md
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
- HOWTI make a real toolbar
- Scoping problem with xtype
- In Border Layout West region is not taking all the space
- afterRender, this.mainBody is undefined after upgrade to Ext2.2
- TabPanel/Nested BorderLayout rendering problem
- [solved] cannot add checkbox to toolbar on ext 2.2
- about checkboxgroup
- Form Element Events : fired on label also
- TabPanel.add open jsp pages,but render items in one page?
- Problem to use Ext.tree.ColumnTree
- How to search and highlight String in page with DomQuery?
- Adding dynamic tab contains a formPanel gives error
- SelectionModel error
- Problem with Ext.Viewport + <form>
- [SOLVED] numberfield content align
- Actions.success is never called
- Showing alert while clicking tree node
- cannot collapse node with checkbox after expanding the node
- Window autoheight
- Selected row in EditorGridPanel doesn't update data in other panel
- How to get the Element using autoLoad
- Dynamic creation of ext objects from json data
- Combo doesn't get populated
- Issue in displaying the value in LOV used with Grid
- Combobox selection
- How I can change text inside div?
- How to put icons inside a panel?
- PagingToolbar Ext2.2 refresh button
- calling two store synchronously and getting records when both of them loaded
- Inter frame drag and drop using Ext ?
- How to sort groups in GroupingStore by different column?!
- in layout column to put button to buttom
- [2.x] Reducing horizontal viewport size issue in IE6
- Show paging grid in new window without paging for print
- Dynamic form - second render problem
- Pass custom params on grid sort
- 100% anchor or 100% width
- Grid row selection needs two clicks
- Row Expander + How to format output?
- Syntax element? - // }}}
- Check an Ext.form.ComboBox
- panel elements & panel Layout remove problem
- Check an Ext.form.ComboBox
- Using Record.set to update an array property
- open window from window
- Sorting Async Tree
- (Solved)why i got a exception : object doesn`t support this propertys or method
- Mozila Bug:input text box inside the treenode does not get focus on first click.
- Incorrect Height of panel inside tabpanel in IE
- Adding Events
- Mozila Bug:input text box inside the treenode does not get focus on first click.
- GridPanel w/dialog TO Window Containing PropertyGrid > tabPanel > formPanel Example
- BorderLayout and Submit
- pagingToolbar, google data api and start-index
- Ext.form.FileUploadField - file types
- open tabs error
- Remote Combobox Filter
- Paging toolbar bug?
- Creating form
- RadioGroup Issues
- Page not found error after page gets loaded
- RowExpander with dynamic content
- ComboBox doesn't show records retrieved with Ext.data.Store
- Grid column line sperator & header cell height
- loadRecord() not calling CheckBox.setValue()
- Why are methods not being called?
- numberRenderer?
- DateField behavior
- Checkbox labels in Checkboxgroup wrapping
- Not uploading file - how to debug ?
- Key press, editor in EditorGridPanel!
- Adding labels above Combo Box and input boxes
- changing url in Updater
- buttons in forms
- setTitle pushing bottom toolbar
- Grid displays "Page 5 of 4" when removing all rows from last page
- Ext.form.DateField change Event question
- numberRenderer
- Complex table drag and drop
- AsyncTreeNode not requesting data
- Store into array
- drag and drop grids using the viewport?
- ajax from another server
- defaultButton not being set
- Ext 2.0 xhtml viewport namespace error
- [SOLVED] Removing component from container
- [2.2] Ext.form.NumberField listener render getValue()
- Ext 2.2
- Why the tree has only root to show?
- help : get value of grid is null
- How to close an Ext window when save in "Open Save Cancel" dialog box is clicked
- [help]About get values in form with multi fieldset
- issue of ArrayReaer totalProperty
- get store w/o create(new) it
- root node with checkbox is checked or unchecked?
- Using extjs lib from ext site ?
- print on Ext.Window
- JSONStore and a Spring JSON View...
- Combo is not displaying the items completely even though the combo has enough width
- an issue of use livegrid to load data
- javascript cache
- How to get a value of a field when the 'rowdblclick' event appear?
- store.load error!
- Date filters
- How do I: Change Indicator on Property Grid like EditorGrid
- Grid, C is Undefined
- Form and Table Layout cannot have a 'fit' panel?
- [CLOSED] [2.0] Extending data.Store - manage 'loadexception'
- Paging Problem in XML Grid
- Setting Date Filters Refreshes grid twice
- Ext.data.store is not a constructor
- TextField and MaskRe
- [SOLVED] CheckboxGroup/Checkbox Listenner
- Need to expand tree nodes by default
- Order of loading things in FF3: Type not defined?
- Rendering requires mouse event
- window dragging
- [Solved] Scroll Bar issue in IE
- Stop Event or build "function stack"
- [Solved]Find all edited rows in a grid
- Get reference on base/super class
- List of hidden column
- Ext.TabPanel items with "autoLoad" and "scripts: true" issue
- Many ComboBox with the same store
- x scripts that need Ext.onReady
- Ext 2.0.1 - Ext.form.Label is not a constructor
- this.targetXY is undefined
- Page scrolling, how to begin?
- Simple Datastore with multiple filters
- OverWrite Issue in combo box
- Floating Ext.Panel always in front (FF3 and Safari)
- Animated Drawer
- Can't make resizeable div.
- Column Numberer ?!
- [CLOSED] Ext.override issuing p has no properties message
- Button Rendering
- Grid with metaData
- Search Textfield in API
- FieldSet checkboxToggle
- Tab Title Disappear in IE6 with non latin characters
- render questions
- TabPanel Remove TabItem
- enableKeyEvents with 2.0.1 ??
- [SOLVED] Scope problem in Ext.Window config
- Grid Header not appearing in a window on IE 7
- Multiple Comboboxes
- Set max length of text in MessageBox
- Form submit on enter key but only when my all mandatory fields fillup
- Regd : EditorGridPanel
- textfiled for file
- access particular items from regions
- Issues with submitting a form with parameters
- How to add a row to editorgrid field in form
- [solved] Panel TBar and it's hiding
- Ext.Layer
- Problems with afteredit
- Setting default focus on form field
- [SOLVED] cookie expire
- [SOLVED] FormPanel loadRecord
- [solved] How I can get currently selected tab index number
- FormPanel layout problem
- Making a EditorGridPanel working like a Field
- Drag'N'Drop between two grids with AutoHeight=true
- Need advise to propagate disable event to fieldLabel
- GRID DON
- Attributes for preconfigured classes
- Will there be a 2.2 Conversion Checklist?
- Calling a method
- Help - grid not render to tab
- checkboxgroup question
- Dynamic Items
- Java Applet Viewport GridPanel
- Grid not being resized, vertical scrollbar missing
- isFocus()
- ComboBox Rendering Slow
- [solved] Problem Render Date Format Grid
- Treepanel - Remove all nodes
- Basic FileUpload (Button-only) - WITH UPLOAD
- Add loadmask to treepanel when loading
- New CheckboxGroup appears with "x-hidden" class.
- local paging with XMLReader
- Updating data in LiveGrid component
- Creating "Addtab" Tab in a TabPanel
- Combo Box Can't Select By Clicking on Value
- tabs tbar dissapears when tbar button clicked
- year only picker?
- how to hide panel items when panel renders?
- XMLRader for this XML
- Combobox expanded list displays underneath form menus in IE6
- [2.2] Nested Panels\TabPanels in Window; layout/resize problem (IE Only)
- TreePanel in Accordian will not collapse
- Add button to a tab
- Problem with PagingToolbar and commitChanges
- Manually compressiong JS using GZip utility
- Can I add a Alt-x hotkey to an extjs TabPanel?
- change the parent of a panel
- Problem is Tree display
- enable/disable
- Json Store and Reader - simple problem...
- Destroy panel from button inside panel
- [solved] Modal Window Memory Leak?
- is there an example for grid/table with adding rows?
- FormPanel with Border Layout in it
- About Scroll Bar ?!
- [2.2] Border layout with incorrect layout
- Application.ThreadGrid is not a constructor
- Fading in/out for Panel?
- Question About EditorGridPanel
- point of sale app - need your feedback
- Text from Combobox
- XType form values
- Can SearchField be AutoComplete...
- XML Tree Loader (POST) error?
- New to ExtJs: How to comuicate whith a parent object...
- PagingToolBar Problem w/ JSON w/ POST w/ ExtJS 2.2
- Problem with ext.tabpanel
- [SOLVED] Problem with upgrading to Ext 2.2
- **** READ ME FIRST **** - Guidelines, FAQs, Resources etc
- Update grid/xml with different record
- Adobe Air Application...
- [2.2] Combo / auto suggest does not work any more
- Adding event handlers to ext objects
- Performance advice: how to cache or activate?
- FormPanel question
- Using two border layouts
- Binding custom validator functions
- TabPanel only working in sequence
- [SOLVED] Autofill
- tree - click a node and get its text and isLeaf() value
- How can I dynamically change the editor in an editorgridpanel/propertygrid?
- radiogroup can't show select value form.loadRecord(rec)
- [SOLVED] success doesn't trigger
- something wrong with my paging toolbar..
- Problem in combo
- Strange error "URL could not be retrieved"
- Help: Problem with Draggable Grids in Borderlayout
- Grouping grids - can they have multiple grouping options
- [SOLVED] ComboBox doesn't show
- ext.htmleditor capture enter key
- Absolute positioning on a window hides controls
- problem in Ext.form.NumberField(?)
- [SOLVED][2.2] Problem with ASP.NET Forms
- how reload store in PagingToolbar and initialezed param start ?
- [SOLVED] How to read value of textbox item what is in Form?
- Problem scritptagproxy ?!
- Form input data sent to server via json
- ComboBox, local vs. remote
- TextField - autocomplete "on"
- xmltreeloader
|
https://www.sencha.com/forum/archive/index.php/f-9-p-74.html?s=23a022d5ee2034f2873d1da8f03a142d
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Wing Tips:
Debugging Flask in Wing
To debug Flask in Wing you need to turn off Flask's built-in debugger, so that Wing's debugger can take over reporting exceptions. This is done by setting the debug attribute on the Flask application to False:
app.debug = False
Then use Set Current as Main Entry Point in the Debug menu to set your main entry point, so you can start debugging from the IDE even if the main entry point file is not visible in the editor.
Once debug is started, you can load pages from a browser to reach breakpoints or exceptions in your code. Output from the Flask process is shown in Wing's Debug I/O tool.
Example
Here's an example of a complete "Hello World" Flask application that can be debugged with Wing:
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "<h3>Hello World!</h3><p>Your app is working.</p>"

if __name__ == "__main__":
    if 'WINGDB_ACTIVE' in os.environ:
        app.debug = False
    app.run()
To try it, start debugging it in Wing and use the URL printed to the Debug I/O tool to load the page in a web browser. Setting a breakpoint on the return statement will stop there whenever the page is reloaded in the browser.
Setting up Auto-Reload with Wing Pro
With the above configuration, you will need to restart Flask whenever you make a change to your code, either with Restart Debugging in the Debug menu or with the toolbar icon.
If you have Wing Pro, you can avoid the need to restart Flask by telling it to auto-restart when code changes on disk, and configuring Wing to automatically debug the restarted process.
Flask is configured by adding a keyword argument to your app.run() line:
app.run(use_reloader=True)
Wing is configured by enabling Debug Child Processes under the Debug/Execute tab in Project Properties, from the Project menu. This tells Wing Pro to also debug child processes created by Flask, including the reloader process.
Now Flask will automatically restart on its own whenever you save an already-loaded source file to disk, and Wing will debug the restarted process. You can add additional files for Flask to watch as follows:
watch_files = ['/path/to/file1', '/path/to/file2']
app.run(use_reloader=True, extra_files=watch_files)
That's it for now! We'll be back soon with more Wing Tips for Wing Python IDE.
As always, please don't hesitate to email support@wingware.com if you run into problems or have any questions.
Share this article:
|
http://wingware.com/hints/flask
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
This blog post is a rehash of an earlier blog post about using Apache Spark ML pipeline models for real-time prediction. It aims to demonstrate how things have evolved over the past 3.5 years, so that the proposed approach should now be intelligible to and executable by anyone with basic Apache Spark ML (PySpark flavour) experience.
The workflow has four steps:
- Importing JPMML-SparkML library into Apache Spark.
- Assembling and fitting a pipeline model, converting it to the PMML representation.
- Starting Openscoring REST web service.
- Using Python client library to work with Openscoring REST web service.
Importing JPMML-SparkML into Apache Spark
The JPMML-SparkML library converts Apache Spark ML pipeline models to the standardized Predictive Model Markup Language (PMML) representation.
This library can be bundled statically with the application, or imported dynamically into the application driver program using the --jars or --packages command-line options.
Users of Apache Spark 2.0, 2.1 and 2.2 are advised to download a suitable version of the JPMML-SparkML executable uber-JAR file from the GitHub releases page, and include it into their environment using the --jars /path/to/jpmml-sparkml-executable-${version}.jar command-line option.
For example, including JPMML-SparkML 1.3.15 into Apache Spark 2.2:
$ export SPARK_HOME=/opt/apache-spark-2.2.X
$ wget
$ $SPARK_HOME/bin/pyspark --jars jpmml-sparkml-executable-1.3.15.jar
Users of Apache Spark 2.3, 2.4 and newer are advised to fetch the JPMML-SparkML library (plus its transitive dependencies) straight from the Maven Central repository using the --packages org.jpmml:jpmml-sparkml:${version} command-line option:
For example, including JPMML-SparkML 1.5.7 into Apache Spark 2.4:
$ export SPARK_HOME=/opt/apache-spark-2.4.X
$ $SPARK_HOME/bin/pyspark --packages org.jpmml:jpmml-sparkml:1.5.7
The JPMML-SparkML library is written in the Java language.
PySpark users should additionally install the pyspark2pmml package, which provides Python language wrappers for JPMML-SparkML public API classes and methods:
$ pip install --upgrade pyspark2pmml
Assembling, fitting and converting pipeline models
The JPMML-SparkML library supports most common Apache Spark ML model and transformer types.
Selected highlights:
- Pipeline assembly via feature.RFormula.
- Feature engineering via feature.SQLTransformer.
- Hyperparameter selection and tuning via tuning.CrossValidator and tuning.TrainValidationSplit.
- Third-party ML framework model types such as XGBoost and LightGBM (MMLSpark).
- Custom model and transformer types.
The exercise starts with training two separate classification-type decision tree models for the "red" and "white" subsets of the "wine quality" dataset.
For demonstration purposes, the original dataset is enriched with a "ratio of free sulfur dioxide" column by dividing the "free sulfur dioxide" column with the "total sulfur dioxide" column using Apache Spark SQL (by convention, column names must be surrounded with backticks if they contain whitespace):
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import RFormula, SQLTransformer

df = spark.read.option("delimiter", ";").csv("winequality-red.csv", header = True, inferSchema = True)

statement = """
SELECT *, (`free sulfur dioxide` / `total sulfur dioxide`) AS `ratio of free sulfur dioxide`
FROM __THIS__
"""
sqlTransformer = SQLTransformer(statement = statement)

formula = "quality ~ ."
rFormula = RFormula(formula = formula)

classifier = DecisionTreeClassifier(minInstancesPerNode = 20)

pipeline = Pipeline(stages = [sqlTransformer, rFormula, classifier])
pipelineModel = pipeline.fit(df)
The conversion of pipeline models is essentially a one-liner:
from pyspark2pmml import PMMLBuilder

PMMLBuilder(sc, df, pipelineModel) \
	.buildFile("RedWineQuality.pmml")
The pyspark2pmml.PMMLBuilder Python class is a thin wrapper around the org.jpmml.sparkml.PMMLBuilder Java class, and "inherits" the majority of its public API methods unchanged.
It is possible to use the PMMLBuilder.putOption(stage: ml.PipelineStage, name, value) and PMMLBuilder.verify(df: sql.DataSet) methods to configure the look and feel of the PMML markup and embed model verification data, respectively, as described in an earlier blog post about converting Apache Spark ML pipeline models to PMML documents.
For demonstration purposes, disabling decision tree compaction (replaces binary splits with multi-way splits), and embedding five randomly chosen data records as model verification data:
from pyspark2pmml import PMMLBuilder

PMMLBuilder(sc, df, pipelineModel) \
	.putOption(classifier, "compact", False) \
	.putOption(classifier, "keep_predictionCol", False) \
	.verify(df.sample(False, 0.005).limit(5)) \
	.buildFile("RedWineQuality.pmml")
Unlike any other ML persistence or serialization data format, the PMML data format is text based and designed to be human-readable.
It is possible to open the resulting
RedWineQuality.pmml and
WhiteWineQuality.pmml files in a text editor and follow the splitting logic of the learned decision tree models in terms of the original feature space.
Starting Openscoring REST web service
The quickest way to have something happening is to download the latest Openscoring server executable uber-JAR file from the GitHub releases page, and run it.
For example, running Openscoring standalone server 2.0.1:
$ wget
$ java -jar openscoring-server-executable-2.0.1.jar
There should be a Model REST API endpoint ready at now.
The default user authorization logic is implemented by the
org.openscoring.service.filters.NetworkSecurityContextFilter JAX-RS filter class, which grants "user" role (read-only) to any address and "admin" role (read and write) to local host addresses.
When looking to upgrade to a more production-like setup, then Openscoring-Docker and Openscoring-Elastic-Beanstalk projects provide good starting points.
Using Python client library to work with Openscoring REST web service
The Openscoring REST API is simple and straightforward.
Nevertheless, Python users should install the
openscoring package that provides an even simpler high-level API.
$ pip install --upgrade openscoring
The
openscoring.Openscoring class holds common information such as the REST API base URL, credentials etc.
The base URL is the part of the URL that is shared between all endpoints.
It typically follows the pattern
http://<server>:<port>/<context path>.
The Openscoring standalone server uses a non-empty context path
openscoring for disambiguation purposes, so the default base URL is.
from openscoring import Openscoring

os = Openscoring("")
A single Openscoring application instance can host multiple models. Individual models are directly addressable in the REST API by appending a slash and their alphanumeric identifier to the URL of the Model REST API endpoint.
# Shall be available at
os.deployFile("RedWineQuality", "RedWineQuality.pmml")

# Shall be available at
os.deployFile("WhiteWineQuality", "WhiteWineQuality.pmml")
It is recommended to open model URLs in a browser and examine the model schema description part (names, data types and value spaces of all input, target and output fields) of the response object.
For example, the model schema for "RedWineQuality" lists seven input fields, one target field and eight output fields.
It follows that this model does not care about four input fields (ie. "fixed acidity", "citric acid", "chlorides" and "density" columns) that were present in the
winequality-red.csv dataset.
The mappings for these input fields may be safely omitted when making evaluation requests:
dictRequest = {
	#"fixed acidity" : 7.4,
	"volatile acidity" : 0.7,
	#"citric acid" : 0,
	"residual sugar" : 1.9,
	#"chlorides" : 0.076,
	"free sulfur dioxide" : 11,
	"total sulfur dioxide" : 34,
	#"density" : 0.9978,
	"pH" : 3.51,
	"sulphates" : 0.56,
	"alcohol" : 9.4,
}

dictResponse = os.evaluate("RedWineQuality", dictRequest)
print(dictResponse)
The "single prediction" mode is intended for real-time application scenarios. Openscoring uses the JPMML-Evaluator library as its PMML engine, and should be able to deliver sub-millisecond turnaround times for arbitrary complexity PMML documents.
The "batch prediction" mode is intended for application scenarios, where new data becomes available at regular intervals, or where the cost of transporting data over the computer network (eg. calling a service from remote locations) is the limiting factor:
import pandas

dfRequest = pandas.read_csv("winequality-white.csv", sep = ";")

dfResponse = os.evaluateCsv("WhiteWineQuality", dfRequest)
print(dfResponse.head(5))
When a model is no longer needed, then it should be undeployed to free up server resources:
os.undeploy("RedWineQuality")
os.undeploy("WhiteWineQuality")
Resources
- "Wine quality" dataset:
winequality-red.csv and
winequality-white.csv
- Python scripts:
train.py and
deploy.py
https://openscoring.io/blog/2020/02/16/deploying_sparkml_pipeline_openscoring_rest/
I'm having a bit of a hard time understanding how I could possibly perform this operation.
float squared( float num )
{
    __asm
    {
        push ebp
        mov ebp, esp
        sub esp, num
        xorps xmm0, xmm0
        movss dword ptr 4[ebp], xmm0
        movss xmm0, dword ptr num[ebp]
        mulss xmm0, dword ptr num[ebp]
        movss dword ptr 8[ebp], xmm0
        fld dword ptr 4[ebp]
        sqrtss xmm0, ebp
        movss ebp, xmm0
        mov esp, ebp
        pop ebp
        ret 0
    }
}
I've worked in C / C++ for a while now, and it's always been a task of mine to really dig into how inline assembly works, but I'm having some problems when executing the code.
When I run this in my main function to print the root and insert a value, I'm given an error:
Exception thrown at 0x00000000 in Test.exe: 0xC0000005: Access violation executing location 0x00000000.
Any ideas?
The most fundamental issue with this code is that you wrote your own function prologue and epilogue. You have to do that when you are writing .ASM files entirely by hand, but you have to not do that when you write "inline" assembly embedded in C. You have to let the compiler handle the stack. This is the most likely reason why the program is crashing. It also means that all of your attempts to access the
num argument will instead access some unrelated stack slot, so even if your code didn't crash, it would operate on a garbage input.
As pointed out in comments on the question, you also have a bunch of nonsensical instructions in there, e.g.
sqrtss xmm0, ebp (
sqrtss cannot take integer register arguments). This should have caused the compiler to reject the program, but if it instead produced nonsensical machine code, that could also cause a crash.
And (also as pointed out in comments on the question) I'm not sure what mathematical function this code would compute in the hypothetical scenario where each machine instruction does something like what you meant it to do, but it definitely isn't the square root.
Correct MSVC-style inline assembly to implement single-precision floating point square root, using the SSEn
sqrtss instruction, would look something like this, I think. (Not tested. Since this is Win32 rather than Win64, an implementation using
fsqrt instead might be more appropriate, but I don't know how to do that off the top of my head.)
float square_root(float radicand)
{
    __asm
    {
        sqrtss xmm0, radicand
    }
}
... Or you could just
#include <math.h> and use
sqrtf and save yourself the trouble.
I think using
fsqrt from scratch will work.
fld dword ptr [num]
fsqrt
User contributions licensed under CC BY-SA 3.0
https://windows-hexerror.linestarve.com/q/so59740121-Square-root-ASM
Make your points expressive
Being able to show individual data points is a powerful way to communicate. Being able to change their appearance can make the story they tell much richer.
A note on terminology: In Matplotlib, plotted points are called "markers". In plotting, "points" already refers to a unit of measure, so calling data points "markers" disambiguates them. Also, as we'll see, markers can be far richer than a dot, which earns them a more expressive name.
We'll get all set up and create a few data points to work with. If any part of this is confusing, take a quick look at why it's here.
import matplotlib
matplotlib.use("agg")
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-1, 1)
y = x + np.random.normal(size=x.size)
fig = plt.figure()
ax = fig.gca()
Change the size
ax.scatter(x, y, s=80)
Using the
s argument, you can set the size of
your markers, in points squared. If you want a marker 10 points
high, choose
s=100.
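Because s scales with area rather than diameter, doubling a marker's on-screen width means quadrupling its s value. A quick sketch of the relationship (the diameters here are made up for illustration):

```python
import numpy as np

# s is an area in points squared, so a marker meant to be
# d points across needs s = d ** 2
diameters = np.array([5, 10, 20])
sizes = diameters ** 2
# sizes == [25, 100, 400]: doubling the diameter quadruples s
```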
Make every marker a different size
The real power of the
scatter() function comes out when
we want to modify markers individually.
sizes = (np.random.sample(size=x.size) * 10) ** 2
ax.scatter(x, y, s=sizes)
Here we created an array of sizes, one for each marker.
Change the marker style
Sometimes a circle just doesn't set the right tone. Luckily, Matplotlib has you covered. There are dozens of options, plus the ability to create custom shapes of any type.
ax.scatter(x, y, marker="v")
Using the
marker argument and the right character code,
you can choose whichever style
you like. Here are a few of the common ones.
- ".": point
- "o": circle
- "s": square
- "^": triangle
- "v": upside down triangle
- "+": plus
- "x": X
Make multiple marker types
Having differently shaped markers is a great way to distinguish between different groups of data points. If your control group is all circles and your experimental group is all X's the difference pops out, even to colorblind viewers.
N = x.size // 3
ax.scatter(x[:N], y[:N], marker="o")
ax.scatter(x[N: 2 * N], y[N: 2 * N], marker="x")
ax.scatter(x[2 * N:], y[2 * N:], marker="s")
There's no way to specify multiple marker styles in a single
scatter() call, but we can separate our data out
into groups and plot each marker style separately. Here we chopped
our data up into three equal groups.
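When the groups come from labels rather than positions, the chopping can be automated by masking per group and calling scatter() once per marker style. A minimal sketch, with invented group labels:

```python
import numpy as np

# hypothetical group labels, one per data point
labels = np.array([0, 1, 2, 0, 1, 2, 0])
markers = {0: "o", 1: "x", 2: "s"}

counts = {}
for group, marker in markers.items():
    mask = labels == group
    counts[group] = int(mask.sum())
    # in a real figure: ax.scatter(x[mask], y[mask], marker=marker)
# counts == {0: 3, 1: 2, 2: 2}
```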
Change the color
Another great way to make your markers express your data story is by changing their color.
ax.scatter(x, y, c="orange")
The
c argument, together with any of the color names
(
as described in the post on lines) lets you change your
markers to whatever shade of the rainbow you like.
Change the color of each marker
If you want to get extra fancy, you can control the color of
each point individually. This is what makes
scatter()
special.
ax.scatter(x, y, c=x-y)
One way to go about this is to specify a set of numerical values for the color, one for each data point. Matplotlib automatically takes them and translates them to a nice color scale.
Make markers transparent
This is particularly useful when you have lots of overlapping markers and you would like to get a sense of their density.
x = np.linspace(-1, 1, num=100000)
y = x + np.random.normal(size=x.size)
To illustrate this, we first create a lot of data points.
ax.scatter(x, y, marker=".", alpha=.05, edgecolors="none")
Then by setting the
alpha argument to something small,
each individual point only contributes a small amount of digital ink
to the picture. Only in places where lots of points overlap is the
result a solid color.
alpha=1 represents no transparency
and is the default.
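The intuition for why a small alpha reveals density: opacities compound, so the darkness at a pixel grows with the number of markers stacked there. A back-of-the-envelope sketch (the counts are made up):

```python
# stacking k markers drawn at opacity a yields an effective
# opacity of 1 - (1 - a) ** k (repeated alpha blending)
alpha = 0.05
stacked = 60
effective = 1 - (1 - alpha) ** stacked
# roughly 0.95, so sixty overlapping points look nearly solid
```

A lone point at alpha=.05 is barely visible, while dense clusters saturate toward a solid color, which is exactly the density effect described above.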
The
edgecolors="none" is necessary to remove the marker
outlines. For some marker types at least, the
alpha
argument doesn't apply to the outlines, only the solid fill.
and even more...
If you are curious and want to explore all the other crazy things you can do with markers and scatterplots, check out the API.
We've only scratched the surface. Want to see what else you can change in your plot? Come take a look at the full set of tutorials.
https://e2eml.school/matplotlib_points.html
Answered
When a method's return value has an unspecified nullability, it seems to treat it as always null.
Here's a simple console application to repro [Visual Studio 2015; no ReSharper Extensions].
using System;

namespace ConsoleApplication2
{
    static class Program
    {
        static void Main()
        {
            var foo = MakeAString();
            Console.WriteLine(foo?.Length);
            Console.WriteLine(foo.Length);
        }

        private static string MakeAString()
        {
            return new string(new[] { 'f', 'o', 'o' });
        }
    }
}
Seems the same problem:
Is it the same as Issue RSRP-458210?
I got the same issues. Here two good examples of the strange behavior:
1. In this example I'm debugging into the code from which R# says it cannot happen.
2. Perhaps the same cause but here it says I can get a NullRefException while casting my object to a custom enum. Also the code after that line would be unreachable... And getting back to line 421, this warning says that my object will always be null.
Any feedback would be appreciated.
I have used installation from Issue RSRP-458210 and it helps.
Thanks @Yevhen, that helped! I installed the fixed version and my issues were gone. :-)
https://resharper-support.jetbrains.com/hc/en-us/community/posts/207366695-ReSharper-2016-1-Nullability-and-unreachable-code-analysis-seems-to-have-issues-
Provided by: libbobcat-dev_5.02.00-1build1_amd64
NAME
FBB::DiffieHellman - Diffie-Hellman PKI, computing shared keys
SYNOPSIS
#include <bobcat/diffiehellman>
Linking option: -lbobcat -lcrypto
DESCRIPTION
The class FBB::DiffieHellman computes shared keys (shared secrets) using the Diffie-Hellman (1976) algorithm. The Diffie-Hellman algorithm uses public and private information. The public information consists of a prime (e.g., a prime number consisting of 1024 bits), a generator (for which the value 5 is commonly used), and (using ** to represent the power operator on integral values) the value generator ** private mod prime, where private is a randomly selected large number, which is the private information. The Diffie-Hellman algorithm is commonly used to compute a shared key which can be used to encrypt information sent between two parties.

One party, which in this man-page is called the initiator, computes the prime and defines the generator. The prime is computed by FBB::DiffieHellman's first constructor, while the generator is passed to this constructor as one of its arguments. For the generator the value 5 is often used. Next the initiator passes its public information, consisting of the prime, the generator, and the value generator ** private mod prime, to the other party, which in this man-page is called the peer. The public information is written in binary, big-endian form to file using the member save. The initiator may optionally save the private information to a separate file as well.

The peer thereupon receives the initiator's public information. The initiator's public information is read by a FBB::DiffieHellman constructor expecting either the name of a file or a std::istream containing the initiator's public information. Having obtained the prime and generator, the peer's public (and, optionally, private) information is saved by also calling save. This results, among other things, in the value generator ** private mod prime, but now using the peer's private information. At this point the peer is already able to compute the shared key.
The key is returned by calling the key member, which returns the shared key as a series of bytes stored in a std::string. Before the initiator can compute the shared key, the peer's generator ** private mod prime value must be available. The peer sends the saved public data to the initiator. The initiator then passes the peer's public data either by file name or by std::istream to the key member, returning the shared key.

Perfect Forward Secrecy and Ephemeral Diffie-Hellman
If the initiator and peer decide not to save their private information, Perfect Forward Secrecy and Ephemeral Diffie-Hellman may be obtained. Here, the procedure is applied as follows:
o Initiator and peer have agreed upon and securely exchanged a long-lasting common secret, which may be used in combination with, e.g., symmetric encryption methods.
o Applying the abovementioned procedure, the private information is never saved on file. Consequently, the shared key, once computed, cannot be reconstructed anymore.
o The value generator ** private mod prime is not sent to either peer or initiator 'in the clear', but encrypted using the long-lasting common secret. As the current implementation saves all public information on file, it's probably easiest to encrypt the file containing the public information.
o The recipients, having received the other party's encrypted public information, decrypt it using the long-lasting shared secret and compute the shared key.
o As the secret information is not kept, the shared key cannot be reconstructed, while a Man-In-The-Middle attack is prevented by only exchanging encrypted public information.
o The shared key can now be used to encrypt a communication session.
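The modular arithmetic described above is easy to sanity-check outside the library. The following toy sketch is not the Bobcat API; it uses a deliberately tiny prime and hard-coded private values, whereas a real exchange needs the 1024-bit primes this class generates:

```python
# toy Diffie-Hellman exchange with an unrealistically small prime
prime, generator = 23, 5
init_private = 6        # initiator's randomly chosen secret
peer_private = 15       # peer's randomly chosen secret

# each side publishes generator ** private mod prime
init_public = pow(generator, init_private, prime)
peer_public = pow(generator, peer_private, prime)

# each side raises the other's public value to its own secret
key_initiator = pow(peer_public, init_private, prime)
key_peer = pow(init_public, peer_private, prime)
assert key_initiator == key_peer
```

Both sides arrive at the same number without the private values ever crossing the wire, which is the property the save/key member workflow below packages up.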
NAMESPACE
FBB All constructors, members, operators and manipulators, mentioned in this man-page, are defined in the namespace FBB.
INHERITS FROM
-
PUBLIC ENUMERATION
The enumeration FBB::DiffieHellman::SecretKey has two values:
o DONT_SAVE_SECRET_KEY, indicating that the secret information should not be saved on file;
o SAVE_SECRET_KEY, indicating that the secret information should be saved on file.
CONSTRUCTORS
o DiffieHellman(size_t primeLength = 1024, size_t generator = 5, bool progress = false):
This constructor computes a prime of the specified length, and initializes the public information with the indicated generator. If progress is true, the progress of the prime construction process is shown to std::cout by a series of dots, minuses and plusses. Generating a suitable prime may fail, resulting in an FBB::Exception being thrown. Unless the generator is specified as 2 or 5 the warning 'cannot check the validity of generator ...' is inserted into the mstream(3bobcat)'s wmsg object. A warning is also inserted if the provided generator is not a generator for the computed prime. This constructor should be called by the initiator to start the Diffie-Hellman shared key computation procedure.
o DiffieHellman(std::string const &initiatorPublicFileName):
This constructor should be called by the peer, after having received the initiator's public info. It makes the initiator's public information available to the peer, after which the peer's public and private information can be computed.
o DiffieHellman(std::istream &initiatorPublicStream):
This constructor acts like the previous constructor, expecting a std::istream rather than a file name. It should be called by the peer, after having received the initiator's public info. It makes the initiator's public information available to the peer, after which the peer's public and private information can be computed.
o DiffieHellman(std::string const &initiatorPublicFileName, std::string const &initiatorPrivateFileName):
Unless the initiator's DiffieHellman object is still available, this constructor should again be called by the initiator, to load the initiator's public and private data.
o DiffieHellman(std::istream &initiatorPublicStream, std::istream &initiatorPrivateStream):
This constructor acts like the previous constructor, expecting std::istreams rather than file names.
It should be called by the initiator, to load the initiator’s public and private info. Copy and move constructors (and assignment operators) are available.
MEMBER FUNCTIONS
o std::string key() const:
This member should be called by the peer. It returns the shared key. If the key cannot be computed, or if the key is not resistant to the small group attack (i.e., if the key equals 1, or is at least equal to the public prime value, or if key ** ((prime - 1) / 2) mod prime != 1), then an FBB::Exception is thrown.
o std::string key(std::string const &peerPublicFileName) const:
This member should be called by the initiator. It skips the data referring to the prime and generator found in peerPublicFileName and then reads the peer's generator ** private mod prime value. If this value cannot be read or if the key is not resistant to the small group attack (cf. the description of the previous key member) then an FBB::Exception is thrown. It returns the shared key.
o std::string key(std::istream const &peerPublicStream) const:
This member should be called by the initiator. It acts like the previous key member, reading the peer's generator ** private mod prime value from peerPublicStream. It returns the shared key.
o void save(std::string const &basename, SecretKey action = DONT_SAVE_SECRET_KEY):
This member should be called by the initiator. It saves the public information on the file 'basename'.pub. The information is written in binary, big-endian format, using the following organization:
- the size of the prime in bytes;
- the prime's bytes;
- the size of the generator in bytes;
- the generator's bytes;
- the size of the public info (generator ** private mod prime) in bytes;
- the public info's bytes.
If action is specified as SAVE_SECRET_KEY then the private information is written in binary, big-endian format, using the following organization:
- the size of the private information in bytes;
- the private information bytes.
EXAMPLE
When called without arguments, the example program generates Diffie-Hellman parameters, writing the initiator's public and private information to, respectively, init.pub and init.sec. When called with one argument, init.pub is read, and the peer's public and private information is written to, respectively, peer.pub and peer.sec. Next, the (peer's) shared key is written to peerkey. When called with two arguments, init.pub and init.sec are read, as well as the peer's public information (on the file peer.pub). Next, the (initiator's) shared key is written to initkey. The files peerkey and initkey should be identical.

#include <fstream>
#include <iostream>
#include <bobcat/diffiehellman>

using namespace FBB;
using namespace std;

int main(int argc, char **argv)
try
{
    if (argc == 1)      // initiator: create DH parameters
    {
        DiffieHellman dh(1024, 5, true);
        dh.save("init", DiffieHellman::SAVE_SECRET_KEY);
    }

    if (argc == 2)      // peer: save peer's secret key
    {
        DiffieHellman dh("init.pub");
        dh.save("peer", DiffieHellman::SAVE_SECRET_KEY);

        string key = dh.key();
        cout << "Key length: " << key.length() << '\n';

        ofstream outkey("peerkey");
        outkey.write(key.data(), key.length());
    }

    if (argc == 3)
    {
        DiffieHellman dh("init.pub", "init.sec");

        string key = dh.key("peer.pub");
        cout << "Key length: " << key.length() << '\n';

        ofstream outkey("initkey");
        outkey.write(key.data(), key.length());
    }
}
catch (std::exception const &exc)
{
    std::cout << exc.what() << '\n';
}
FILES
bobcat/diffiehellman - defines the class interface
SEE ALSO
bobcat(7), bigint(3bobcat)
BUGS
None Reported.
DISTRIBUTION FILES
o bobcat_5.02.00-x.dsc: detached signature;
o bobcat_5.02.00-x.tar.gz: source archive;
o bobcat_5.02.00-x_i386.changes: change log;
o libbobcat1_5.02.00-x_*.deb: debian package holding the libraries;
o libbobcat1-dev_5.02.00).
http://manpages.ubuntu.com/manpages/focal/man3/diffiehellman.3bobcat.html
$ cnpm install @brandingbrand/react-native-fast-image
Performant React Native image component.
FastImage example app.
React Native's
Image component handles image caching like browsers
for the most part.
If the server is returning proper cache control
headers for images you'll generally get the sort of built in
caching behavior you'd have in a browser.
Even so many people have noticed:
FastImage is an
Image replacement that solves these issues.
FastImage is a wrapper around
SDWebImage (iOS)
and
Glide (Android).
Run react-native link to link the native code.
If you use Proguard you will need to add these lines to
android/app/proguard-rules.pro:
-keep public class com.dylanvann.fastimage.* {*;}
-keep public class com.dylanvann.fastimage.** {*;}
FastImage.preload: (source[]) => void
Preload images to display later. e.g.
FastImage.preload([
  {
    uri: '',
    headers: { Authorization: 'someAuthToken' },
  },
  {
    uri: '',
    headers: { Authorization: 'someAuthToken' },
  },
])
If you have any problems using this library try the steps in troubleshooting and see if they fix it.
Follow these instructions to get the example app running.
This project only aims to support the latest version of React Native.
This simplifies the development and the testing of the project.
If you require new features or bug fixes for older versions you can fork this project.
The idea for this module came from vovkasm's react-native-web-image package. It also uses Glide and SDWebImage, but didn't have some features I needed (priority, headers).
MIT
MIT
Apache-2.0
https://npm.taobao.org/package/@brandingbrand/react-native-fast-image
For the Technical folks out there, this one is for you.
If you have a Access Control Service (ACS) namespace you will periodically see email indicating your certificate, key or password is about to expire. This video will guide you through what to check for to ensure all you certificates, keys and passwords are up to date.
The code referenced in this presentation can be downloaded at
Links
https://blogs.technet.microsoft.com/ptsblog/2013/12/06/windows-azure-identifying-and-updating-expiring-certificates-symmetric-keys-and-passwords/
Algorithm, Code, Quiz, Math, Simple, Programming, Easy Questions
Budget $30-250 USD
Hello,
I am looking for someone who can code excellent algorithms, has clear concepts in programming, is good with data structures and can code with good design.
Who has good understanding about Big O notations and time/space complexity. If you can answer below questions, and provide support (online) for an hour. You will be awarded this project and good money, excellent feedback and bonus on good work. I am 5.0/5.0 employer. I will create 100% Milestone Money. It will be fun, exciting to work together.
****** Please Refer to the attached File.
BASIC ALGORITHMS
1. What is the best data structure to implement priority queue?
2. What are the worst case time complexity of the following algorithms, performed on containers of size N:
(a) Locating a number in an unsorted array.
(b) Locating a number in a sorted array.
(c) Inserting a number in a balanced binary tree
(f) Deleting a number from an unbalanced binary tree
(g) Building a heap
(h) Adding a number to a hash table
(i) Sorting an array
3. What is the relation (less, greater, equal) between O(n) and O(2n)? O(log2 N) and O(log10 N)?
CODING
EXPECTED TIME TO COMPLETE: 20-30 minutes
1. In a two dimensional array of integers of size m x n, for each element which value is zero, set to zero the entire row and
the column where this element is located, and leave the rest of the elements untouched.
2. Write a function that takes an integer argument and returns the corresponding Excel column name.
For instance 1 would return 'A', 2 would return 'B', ...., 27 would return 'AA' and so on
3. Write code to merge 3 sorted arrays of integers (all different sizes) into a single sorted array.
DEBUGGING
1. Find the bug in the following code:
// Find the first break in a monotonically increasing sequence. Returns c if there is no break.
int r(int *p, size_t c)
{
int i = 0;
while(p[i + 1] >= p[i] && i < c - 1)
++i;
return i + 1;
}
MATH, PROBABILITY, COMPLEXITY
1. A rare disease afflicts 1% of the population. A medical test for this disease has 1% false positive rate (if a person is healthy, there is a 1% probability that the test will show that the person is ill), and 1% false negative rate (if a person is ill, there is a 1% probability that the test will that the person is healthy). A person tests as having the disease. What is the probability that the person actually has the disease?
2. An evil dictator captured you and made you play a game. You are in front of three glasses of wine. Two of them are poisoned; one is not. You must pick one glass and drink it. If you survive, the evil dictator will release you. When you pick one of the glasses, the dictator reveals which one of the other two is poisoned, and offers you to stay with your original choice, or switch. Should you switch?
3. In front of you there is a black box. The box can perform two operations: push(N) adds a number to its internal storage; pop-min() extracts the current minimum of all numbers that are currently stored, and makes the box forget it. The numbers are mathematical objects: there is no upper bound. Both push(N) and pop-min() execute in O(1) time. Design an algorithm that could be used to implement such a box.
4. You are asked to design a plotter. A plotter is a computer-controlled device that picks a pen, carries it to a point on paper using a mechanical manipulator, lowers it so that it touches the paper, and drags it to the next point drawing a line. In your plotter there will be 3 pens, red, green, and blue. The computer uploads a picture to the plotter which consists of a list of segments and the colors in which these segments must be drawn. You are asked to reorder the segments such that the work performed by the mechanical manipulator is optimal.
Can you design an algorithm that would do so?
5. What is the time complexity of the following algorithm:
unsigned int Rabbits(unsigned int r)
{
return (r < 2) ? r : Rabbits(r - 1) + Rabbits(r - 2);
}
Awarded to:
14 freelancers are bidding on average $225 for this job
Hi, I am Algorithm expert and can surely help you with this project. Please let me know if you are interested. Thank you
plz discuss.
Hi, I already finished few such projects. I am expert in algorithms. I can do it. If you have question you can ask me
https://www.freelancer.com/projects/Mathematics-Algorithm/Algorithm-Code-Quiz-Math-Simple/
You may follow this code @ serializableex.codeplex.com.
This is an updated article and code base from my original publication [Serialization for Rapid Application Development: A Better Approach] here at CodeProject. After several years of growth in ability and advances in technology, I decided it was time to update that code base and article for use in .NET 4. The result of this is a much smaller code base, unit tests to back them up, and fixes to some issues concerning discovery within a web environment.
In short, the problems with the XmlSerializer that existed when I wrote the first article still exist today. They are classic issues that there is still no unified approach in dealing with. The major problem is still how do you resolve unknown classes when serializing and deserializing.
XmlSerializer
There have been several attempts that I've followed over the years to get around this, anywhere from the creation of entire frameworks like YAXLib to instructions on how to make use of the IXmlSerializable interface at various levels of sophistication.
IXmlSerializable
With all of these solutions, the major problem from a developer perspective that has always crept in is cumbersomeness of the implementation. IXmlSerializable asks the developer to write some form of a customized implementation per class. YAXLib and others ask the developer to use different attributes, or handle outputs that don't look like the clean(ish) XML generated by XmlSerializer. The [XmlInclude(Type type)] attribute demands that all types to be serialized be within the same library. None of them (that I know of) in my opinion relieves the developer from the tedium of working with these solutions.
[XmlInclude(Type type)]
Serializable Extra Types is designed to be as thoughtless as possible with an absolute minimum of development consideration. What it does is make use of the standard XmlSerializer and a slightly un-hyped overloaded constructor it has, to incorporate extra type definitions for use in type resolution during serialization and deserialization.
It is actually a very simple idea. Keep a list of all the possible types that the XmlSerializer may have need of during a serialization/deserialization process. Register those types using attribute adornments and provide some Extension Methods to make incorporating those lists thoughtless to the developer.
It works under a parent child relationship. I can best describe it as saying, it's the reverse of the XmlInclude attribute. The XmlInclude attribute is placed on a class to give the serializer knowledge of other classes when serializing the class that is adorned. SerializableExtraType is placed on a class to give the other class knowledge of the adorned class when the other class is serialized.
XmlInclude
SerializableExtraType
So...
XmlInclude = Parent => Child
SerializableExtraType = Child => Parent
This allows the SerializableExtraTypes code to integrate related classes across libraries. Additionally, I have exposed a method by which you may register additional relationships at runtime. This solves any situation that you may come across with libraries and applications having complex implied relationships.
The code is available in the download and at the CodePlex site. Both have a series of test libraries and a consuming test project that shows the usage very well. Here I will outline the quick and dirty of how to make use of it.
First, adorn a class with the required attribute like this:
// example 1 of registering class and all derived classes
[SerializableExtraType(typeof(Foo))]
// example 2 of registering class with an unrelated class
[SerializableExtraType(typeof(SomethingElse))]
// example 3 of registering with multiple classes in same attribute
[SerializableExtraType(typeof(ClassOne), typeof(ClassTwo))]
public class Foo { public Foo() {} }
The extension methods make use of the registered SerializableExtraTypes and are defined under System.Xml.Serialization.
Now make use of the Extension Methods to serialize and deserialize objects.
// example 1 assuming that ClassOne and ClassTwo inherit from Foo
string xml = new Foo { ClassList = {new ClassOne(), new ClassTwo(), }, }.SerializeEx();
Foo obj = "<Foo><ClassList><ClassOne /><ClassTwo /></ClassList></Foo>".DeserializeEx<Foo>();
// example 2 assuming that Foo bears no relationship with SomethingElse
string xml = new SomethingElse { ObjectList = { new Foo(), new Foo(), }, }.SerializeEx();
SomethingElse obj = "<SomethingElse><ArrayOfObject><Foo />" +
"<Foo /></ArrayOfObject></SomethingElse>".DeserializeEx<SomethingElse>();
There is more functionality built into the code base but this is a quick and dirty sample. Please take a look at the download for further examples.
This is the second publication of this method. Please send any comments or suggestions on how to improve it. You can email me at danatcofo@gmail.
https://www.codeproject.com/Articles/201272/Serializable-Extra-Types-for-NET-4?fid=1629031&df=90&mpp=10&sort=Position&spc=None&select=4030390&tid=3909244
Hi,
As stated, I'm trying to get this to work. This is a project that is being compiled into a static library. I'm working within Code::Blocks, not Visual Express. I had a look online and found that the following header might be needed:
#include <excpt.h>
I included it, but to no avail. I get the following error message:
|59|error: '__try' was not declared in this scope|
Now, as far as I understand it, this construct might be bound to Windows specifically, but I'm not sure. I'm trying to compile it into a plain static library, so maybe that is already a mistake. Is there some #include I can use to get it to compile, or is something more complicated and OS-dependent going on? I'm mainly a game programmer, not a software engineer, so this is a bit beyond me.
I'd appreciate any help anyone could offer, thanks.
https://cboard.cprogramming.com/cplusplus-programming/135256-error-when-trying-use-__try-function.html
Is the condition check really redundant in the following sample?:
public class MyClass {
public bool MyProperty { get; set; }
public void DoSomething(bool newValue) {
// R# says: redundant condition check before assignment
// on the following line:
if (MyProperty != newValue) { // <======
MyProperty = newValue;
}
}
}
There are only two situations where I've seen this type of check.
The first is when an additional line of code sets another property on the object (a dirty flag) to true to indicate that the object has been modified. This is typically used when deciding whether to persist the object's state to something like a database.
The second situation is when the types in question are immutable. You might want to avoid setting the value and therefore creating a new string, for example, when the values are the same. Even then, I've only seen it in certain apps where memory usage is critical.
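The dirty-flag variant of this pattern is language-agnostic; here is a minimal Java sketch of it (the class and member names are hypothetical, not from ReSharper or the post):

```java
// Sketch of the "dirty flag" pattern: the guard that ReSharper would
// call redundant is exactly what keeps the flag accurate.
public class TrackedModel {
    private boolean myProperty;
    private boolean dirty;

    public boolean isDirty() { return dirty; }
    public boolean getMyProperty() { return myProperty; }

    public void setMyProperty(boolean newValue) {
        // Without this check, every call would mark the object modified,
        // even when the value is unchanged.
        if (myProperty != newValue) {
            myProperty = newValue;
            dirty = true; // drives the "persist to database?" decision
        }
    }

    public static void main(String[] args) {
        TrackedModel m = new TrackedModel();
        m.setMyProperty(false); // same as the default: stays clean
        System.out.println("dirty after no-op set: " + m.isDirty());
        m.setMyProperty(true);
        System.out.println("dirty after real change: " + m.isDirty());
    }
}
```

Here the assignment guard carries a side effect, so it is not redundant; in the plain sample above, where nothing else happens, ReSharper's suggestion is correct.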
https://codedump.io/share/BQrkdwZTkOUw/1/redundant-condition-check-before-assignment-suggestion-for-c-in-resharper-5
I have a question about array input. I have to create a program that reads a number n (the number of students) and then, for each student, an index and a score. The scores have to be output from highest to lowest. I have made that, but the problem is that my index won't "follow" my score. Example:
package danl;
import java.util.Scanner;
public class Nizovi6 {
public static void main(String[] args) {
Scanner scanner = new Scanner(System.in);
int n = scanner.nextInt();
int[] index = new int[n];
int[] score = new int[n];
for (int i = 0; i < n; i++) {
index[i] = scanner.nextInt();
score[i] = scanner.nextInt();
}
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
if (score[i] > score[j]) {
int t = score[j];
score[j] = score[i];
score[i] = t;
}
}
}
for (int i = 0; i < n; i++) {
System.out.println(index[i] + " " + score[i]);
System.out.println();
}
}
}
Because you didn't swap the index[] array as well:
t = index[j]; index[j] = index[i]; index[i] = t;
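Spelling the answer out: both arrays must be swapped inside the same if, so each index stays paired with its score. A minimal runnable sketch (the class and method names are mine):

```java
import java.util.Arrays;

public class PairedSort {
    // Sort score[] from highest to lowest, keeping index[] aligned by
    // swapping both arrays together whenever score elements are swapped.
    public static void sortByScoreDesc(int[] index, int[] score) {
        int n = score.length;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (score[i] > score[j]) {
                    int t = score[j];
                    score[j] = score[i];
                    score[i] = t;
                    t = index[j];      // the missing swap from the question
                    index[j] = index[i];
                    index[i] = t;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] index = {101, 102, 103};
        int[] score = {55, 90, 70};
        sortByScoreDesc(index, score);
        for (int i = 0; i < index.length; i++) {
            System.out.println(index[i] + " " + score[i]);
        }
    }
}
```

The sort itself is the one from the question, left untouched; only the paired swap is added.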
https://codedump.io/share/RFBsr2FRVaXF/1/java-double-input-in-array
Full version, Basic version
Specification link: 21.3 An example
masking-path-04-b ← index → painting-fill-05-b
Check that metadata in a variety of namespaces, inside a metadata element, does not affect rendering in any way. The file is not valid against the DTD, but it is well formed.
The diagram on the table is a visualization of the RDF metadata in the graphic.
The rendered result should match the reference image, and there should be no error messages or warnings.
http://www.w3.org/Graphics/SVG/Test/20061213/htmlObjectHarness/full-metadata-example-01-b.html
Hi guys,
I've been trying for a while to solve this problem, but it just gives me a headache. I really don't understand why it's not working.
I got this message each time:
Have a coupon? (Y/N) Exception in thread "main" java.lang.NullPointerException
at DescuentoPeli.main(DescuentoPeli.java:14)
Here you have the whole code. Please be kind enough to help me.
import java.util.Scanner;
class DescuentoPeli {
public static void main(String args[]) {
Scanner myScanner = new Scanner(System.in);
int age;
double price = 0.00;
char reply;
System.out.print("How old are you? ");
age = myScanner.nextInt();
System.out.print("Have a coupon? (Y/N) ");
reply = myScanner.findInLine(".").charAt(0);
if (age >= 12 && age < 65) {
price = 9.25;
}
if (age < 12 || age >= 65) {
price = 5.25;
}
if (reply == 'Y' || reply == 'y') {
price -= 2.00;
}
if (reply != 'Y' && reply != 'y' && reply!= 'N' && reply!= 'n') {
System.out.println("Huh?");
}
System.out.print("Please pay $");
System.out.print(price);
System.out.print(". ");
System.out.println("Enjoy the show!");
}
}
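For what it's worth, the line the stack trace points at is almost certainly the findInLine(".") call: after nextInt() the scanner is still positioned on the first input line, so findInLine(".") finds no character before the line separator and returns null, and null.charAt(0) throws the NullPointerException. A minimal sketch of one common fix, reading the reply with next() instead (the simulated input and the helper method are mine, for demonstration):

```java
import java.util.Scanner;

public class CouponPrice {
    // Pricing rule from the original post, factored out so it can be tested.
    static double computePrice(int age, char reply) {
        double price = (age >= 12 && age < 65) ? 9.25 : 5.25;
        if (reply == 'Y' || reply == 'y') {
            price -= 2.00;
        }
        return price;
    }

    public static void main(String[] args) {
        // Simulated input; in the original this would be new Scanner(System.in).
        // next() skips whitespace (including the newline left behind by
        // nextInt()) and waits for a real token, so it never returns null.
        Scanner myScanner = new Scanner("25\nY\n");
        int age = myScanner.nextInt();
        char reply = myScanner.next().charAt(0);
        System.out.println("Please pay $" + computePrice(age, reply)
                + ". Enjoy the show!");
    }
}
```

This keeps the rest of the program's logic intact; only the way the single character is read changes.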
http://www.javaprogrammingforums.com/whats-wrong-my-code/14850-nullpointerexception-trouble.html