This chapter discusses the UI Shell page template used to build web pages, and the components used to implement user interface features in JDeveloper, such as menus, task flows, security, search, navigation, and the home page UI.
This chapter includes the following sections:
Section 14.1, "Introduction to Implementing the UI Shell"
Section 14.2, "Populating a UI Shell"
Section 14.3, "Implementing Application Menu Security"
Section 14.4, "Controlling the State of Main and Regional Area Task Flows"
Section 14.5, "Working with the Global Menu Model"
Section 14.6, "Using the Personalization Menu"
Section 14.7, "Implementing End User Preferences"
Section 14.8, "Using the Administration Menu"
Section 14.9, "Using the Help Menu"
Section 14.10, "Implementing Tagging Integration"
Section 14.11, "Implementing Recent Items"
Section 14.12, "Implementing the Watchlist"
Section 14.13, "Implementing Group Spaces"
Section 14.14, "Implementing Activity Streams and Business Events"
Section 14.15, "Implementing the Oracle Fusion Applications Search Results UI"
Section 14.16, "Introducing the Navigate API"
Section 14.17, "Warning of Pending Changes in the UI Shell"
Section 14.18, "Implementing the Oracle Fusion Home Page UI"
Section 14.19, "Using the Single Object Context Workarea"
Section 14.20, "Implementing the Third Party Component Area"
Section 14.21, "Developing an Activity Guide Client Application with the UI Shell"
For basic and detailed information about these features, see:
Chapter 13, "Getting Started with Your Web Interface"
Chapter 15, "Implementing UIs in JDeveloper with Application Tables, Trees and Tree Tables"
Chapter 16
Content shown in Figure 14-1 cannot be a part of the page itself, because it is loaded dynamically.
Before you begin:
You should be familiar with JDeveloper and be able to create and run web pages. Figure 14-1, "UI Shell Areas," shows the areas of the UI Shell.
The shell is optimized for a screen resolution of 1280x1024 pixels.
The four areas are:
Global Area: The global area, across the full width at the top of the UI shell, is stable, consistent, and persistent for an individual user. It contains controls that, in general, drive the contents of the other three areas. See Section 14.1.2.1, "Global Area Standard Links."
Regional Area: The regional area is in the left pane of the UI shell. It has controls and content that, in general, drive the contents of the contextual area.
Main Area: This term designates the combination of the Local Area and the Contextual Area.
Application designers, customers, and administrators can bind contextual area content to the local area, so that each invocation of a local area automatically causes the relevant contextual area, in the right state, to appear alongside the local area.
The Global Area incorporates a number of built-in indicators and links.
Home
Click this link to return to the defined Home page. See Section 14.18, "Implementing the Oracle Fusion Home Page UI."
Navigator
The Navigator menu, shown in Figure 14-18, is rendered when the Navigator link is clicked on the UI Shell. See Section 14.5.1.2, "Displaying the Navigator Menu."
Recent Items
Recent Items tracks a list of the last 20 task flows visited by a user. See Section 14.11, "Implementing Recent Items."
Favorites
Add to Favorites takes the most recent Recent Item (see Section 14.11, "Implementing Recent Items") and persists it into the Favorites list.
Tags
Tagging is a service that allows users to add tags to arbitrary resources in Oracle Fusion Applications so they may collectively contribute to the overall taxonomy and discovery of resources others have visited. See Section 14.10, "Implementing Tagging Integration."
Watchlist
Watchlist is a user-accessible UI that provides a summary of items the user can track with drilldown shortcuts. See Section 14.12, "Implementing the Watchlist."
Group Spaces
Group Spaces bundle all the collaboration tools and provide an easy way for users to create their own ad hoc collaborative groups around a project or business artifact. See Section 14.13, "Implementing Group Spaces."
Personalization
The Personalization menu options let you set your preferences, edit the current page, and reset the content and layout. See Section 14.6, "Using the Personalization Menu."
Accessibility
The Accessibility link appears on pages that can be accessed without logging in. It allows users to set their accessibility preferences, because the Personalization menu, which includes preferences, is hidden for anonymous users.
Administration
The Administration menu options allow you to customize the current page at a multi-user level, manage sandboxes, and access the setup applications. See Section 14.8, "Using the Administration Menu."
Help
The Help menu options let you control trace levels, run diagnostics, and provide an About page that lists information about the application. See Section 14.9, "Using the Help Menu."
Note: When signing in, users always are directed to the application's home page.
There are two possible scenarios during run-time. When the user store is Oracle Internet Directory (OID) LDAP, the user name shown is the display name of the authenticated user principal. The display name is controlled by a general end user preference.
The UI Shell builds each page from menu metadata. This informs it of which task flows to load and where. The UI Shell also can create a list of tasks, from the same metadata, that, when clicked, can load into the Main Area. All task flows and the page built by the UIShell template follow the normal ADF Security framework.
When you create a new JSF page for the application, the Create JSF Page dialog, shown in Figure 14-2, is displayed.
In the dialog:
Enter a file name and directory path.
The filename should follow these patterns:
[<Product Code><LBA Prefix>]<Role>Dashboard.jspx
[<Product Code><LBA Prefix>]<Object>Workarea.jspx
From the Use Page Template list, select UIShell.
Note: The af:skipLinkTarget tag has been incorporated in the UIShell.jspx template, so developers do not need to code this required accessibility feature in each page.
Specifically,
<af:skipLinkTarget/> has been inserted before the SingleObjectContextArea facet in UIShell.jspx:
<af:panelGroupLayout ...>
  <af:skipLinkTarget/>
  <af:facetRef facetName="SingleObjectContextArea"/>
</af:panelGroupLayout>
The UIShell template supports dynamic loading of task flows at runtime. It also creates the Navigator Menu and Task List Menu. See Section 14.5, "Working with the Global Menu Model" and Section 14.2.1.1, "Working with the Applications Menu Model."
Now you can add components to the page. Table 14-1 lists the
itemNode properties that can be used for a JSF page. The task types used to populate a page are tasks list, defaultMain, and defaultRegional. A menu is created for each J2EE application.
To create an ADF menu to access page elements through the Navigator menu on JSF pages or task flows that are based on the UI Shell template:
Select the JSPX page in the Application Navigator, then right-click and select the Create Application Menu option.
This step creates the menu file with one itemNode. The menu file will be named
<view id>_taskmenu.xml. For example, if there is a
PageA.jspx, its view id in
adfc-config.xml is PageA, and the menu file name is PageA_taskmenu.xml. This step also should add the ApplicationsMenuModel managed bean entry into
adfc-config.xml. The managed bean entry should not have the topRootModel managed bean property set. Example 14-1 shows a sample of the generated content in
PageA_taskmenu.xml.
Example 14-1 Sample Generated Content in PageA_taskmenu.xml
Open the menu file to edit it in the JDeveloper editor.
The
adfc-config.xml file is located in the following location: <project_name> > WEB-INF.
Drag pages from the Application Navigator panel onto the menu. Figure 14-3 shows the result.
Click Find Pages to populate the display with all the JSPX pages that have been added to
adfc-config.xml, as shown in Figure 14-4.
The generated menu file follows the naming standard. For example, the menu file for
Example.jspx will be Example_taskmenu.xml.
The Create New JSF Page Fragment dialog, shown in Figure 14-5, is displayed.
In the Create New JSF Page Fragment dialog:
Enter a page-fragment name. For example, you might enter
def_main.jsff.
The filename should follow the naming standards. Figure 14-1, "UI Shell Areas," shows 10 task flows.
The Create ADF Task Flow dialog shown in Figure 14-6 is displayed.
Make sure that the JSF page fragment file is selected in the Application Navigator and is displayed in the Edit view.
In the Edit view, click the Source tab.
Locate the line that resembles
<f:facet.
In the Application Navigator pane, select an applicable task flow and drag and drop it onto the highlighted entry in Source view. From the Create menu that displays, select Region.
The
<f:facet name="localArea"/> tag changes to
<f:facet name="localArea"> and code resembling that shown in Example 14-2 will be inserted after it.
Example 14-2 Creating Region localArea Facet Added Code
  <af:region value="#{bindings.<task_flow_id>.regionModel}" id="lar1"/>
</f:facet>
In the Structure window, an
af:region entry is added following the
f:facet - localArea entry.
Note: This step is optional. If you do not need a
contextualArea, skip this step.
In the Application Navigator pane, select an applicable task flow, such as the one you created in Step 4, and drag and drop it onto the highlighted entry in Source view.
From the Create menu that displays, select Region.
Click OK on the Edit Task Flow Binding dialog that displays.
Code resembling that shown in Example 14-3 will be inserted after
<af:showDetailItem ...> and the page fragment in the editor will resemble Figure 14-7.
Now that you have created the Main Area page fragment, you must wrap it in an ADF task flow.
In the Create ADF Task Flow dialog:
Enter a descriptive name for the task flow.
For example, enter
def_main_task-flow-definition.xml.
Make sure that the Create as Bounded Task Flow and the Create With Page Fragments boxes are checked, as shown in Figure 14-8.
To load the menu metadata:
Note: The menu data accomplishes several important jobs for you:
It defines properties of the page, such as whether it will be displayed in no-tab mode or with dynamic tabs, and the width of the Regional Area.
It can create a task list menu for each page.
It can create labels for groups of tasks.
In the Application Navigator, select the
test_menu_taskmenu.xml file that you created using the ADF Menu Model dialog. For details about creating the menu, see Section 14.2.1.1.1, "How to Create an Applications Menu."
In the
test_menu_taskmenu.xml structure view menu tree, shown in Figure 14-9, right-click the itemNode item and choose Insert inside itemNode <task_flow_name> > itemNode.
The Insert itemNode - Common Properties dialog, shown in Figure 14-10, displays. Set the taskType to defaultMain (the task flow is displayed by default whenever the page is rendered).
The Data Control Scope should have been set to
isolated, inside the task flow definition for any taskflow in the menu (
defaultMain or
dynamicMain) or any call from
openMainTask. See
dataControlScope in Table 14-1.
Task Flow Id: ID of the task flow to be loaded.
To enter the ID, click the ellipsis to display the Select Task Flow Id dialog, shown in Figure 14-11, "Task Flow Property Inspector".
To run the JSPX page, select the page in the Application Navigator, right-click the page file, and choose Run.
The new page, shown in Figure 14-13, is displayed in a web browser.
Check that the newly rendered page contains one tab whose content is the task flow that you defined in this procedure.
The available itemNode properties for Main and Regional task flows for application menus are shown in Table 14-2.
Unless otherwise noted, follow the procedure outlined in Section 14.2.2, "How to Add Default Main Area Task Flows to a Page," to insert the appropriate itemNode properties listed in Table 14-2.
You need to add a tasks list entry to the page's menu, where:
Id needs to be unique.
The disclosed attribute is usually set to true, although it can be set to false if you do not want to disclose the Tasks List by default.
parametersList should set
fndPageParams (see the sketch after these properties) so that this object is available in the pageFlowScope of the tasks list task flow. This context is necessary for the Single Object Workarea. For more information, see Section 14.19, "Using the Single Object Context Workarea."
Note: The label should be defined in a resource bundle so it can be translated more readily.
Task Type:
taskCategory
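As a rough sketch of such a tasks list entry: the node nesting, flow paths, IDs, and labels below are hypothetical placeholders rather than values from this guide, but the attribute names are the itemNode properties described above.
<itemNode id="MyWorkarea" focusViewId="/MyWorkarea" label="#{myBundle.MY_WORKAREA}">
  <!-- Tasks list shown in the Regional Area -->
  <itemNode id="myTasksList" taskType="defaultRegional" disclosed="true"
            label="#{myBundle.TASKS}"
            taskFlowId="/WEB-INF/oracle/apps/xyz/ui/flow/MyTasksListFlow.xml#MyTasksListFlow"
            parametersList="fndPageParams=#{pageFlowScope.fndPageParams}"/>
  <!-- Group label for the tasks that follow -->
  <itemNode id="invoiceTasks" taskType="taskCategory" label="#{myBundle.INVOICE_TASKS}"/>
  <!-- Task link that loads its task flow into the Main Area when clicked -->
  <itemNode id="createInvoice" taskType="dynamicMain" label="#{myBundle.CREATE_INVOICE}"
            taskFlowId="/WEB-INF/oracle/apps/xyz/ui/flow/CreateInvoiceFlow.xml#CreateInvoiceFlow"/>
</itemNode>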
Run the page.
To run the page, right-click the JSPX page file in the Projects tree view and choose Run.
Check that the tasks list renders as expected.
A tasks list link can also open another page, and it can pass page-level and task-level parameters.
The
navigateViewId attribute supports this feature.
Example 14-4 shows a sample of the metadata for a link in a Task list that links to another page:
Example 14-4 Metadata for a Task List Link That Links to Another Page
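A minimal sketch of such a link, with hypothetical view IDs, flow path, and parameter names:
<itemNode id="viewPayments" taskType="dynamicMain"
          label="#{myBundle.VIEW_PAYMENTS}"
          navigateViewId="/PaymentsWorkarea"
          taskFlowId="/WEB-INF/oracle/apps/xyz/ui/flow/ViewPaymentsFlow.xml#ViewPaymentsFlow"
          pageParametersList="Org=US1"
          parametersList="paymentId=#{pageFlowScope.selectedPaymentId}"/>
Clicking the link navigates to the page named by navigateViewId and opens the specified task flow there.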
Product teams can suppress dynamic tab navigation and just display one main area at a time. To do this, add
isDynamicTabNavigation="false" to the itemNode that represents your JSPX page, as shown in Example 14-5.
Example 14-5 Implementing No-Tab Workarea
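A rough sketch of this setting on a hypothetical page node (page name and label are placeholders):
<itemNode id="MyWorkarea" focusViewId="/MyWorkarea"
          label="#{myBundle.MY_WORKAREA}"
          isDynamicTabNavigation="false">
  <!-- defaultMain and defaultRegional child nodes for the page go here -->
</itemNode>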
The Task Popup feature allows product teams to:
Load their task flow into a popup when the user clicks one of the Tasks List links.
Cancel from the popup or open a new Main Area Task, passing in parameter values from the popup.
Implementation Notes
The UI Shell provides an
af:popup component with a modal
af:panelWindow as its immediate child, which would again contain a dynamic region defined in it. On user click, the UI Shell will load the product team's task flow into the dynamic region, and show the modal
af:popup panelWindow without any buttons. Therefore, the product team's task flow must include the OK and Cancel buttons that are used to launch a dynamic tab and dismiss the popup, respectively. The dialog title will be set according to the label mentioned in the menu meta data of the dynamic task link. There is a refresh condition set on the dynamic region that refreshes the task flow and reloads it each time the popup is launched.
Developer Implementation
There are several considerations to keep in mind when you implement the Task Popup:
For a dynamicMain Task item that you would like to load into the popup, specify the
loadPopup property as true. For example, as shown in Example 14-6, the ChooseSR Task Flow would be loaded in a popup when the user clicks its link in the Tasks List. The label that is mentioned will be displayed as the dialog title of the popup that launches the task flow.
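As a rough sketch of such a Task item (the flow path and label shown here are placeholders, not the guide's original ChooseSR example):
<itemNode id="chooseSRTask" taskType="dynamicMain"
          loadPopup="true"
          label="#{srBundle.CHOOSE_SR}"
          taskFlowId="/WEB-INF/oracle/apps/xyz/ui/flow/ChooseSRFlow.xml#ChooseSRFlow"/>
The label value is what appears as the popup's dialog title.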
Developers can define any components within this task flow, except the
af:popup and its child components, such as
af:dialog,
af:panelWindow, and a
f:menu.
Teams cannot have a UIShellMainArea page template or any other templates inside the popup task flow.
Developers must add the necessary buttons as part of the task flow. For example, if the task flow has a simple .jsff file, it should contain OK and Cancel buttons, along with other components.
Create a managed bean to set the action listener for the Cancel button. See Section 16.4.1.3, "Implementing OK and Cancel Buttons in a Popup."
Create another method for the OK button that calls the method in Example 16-5, and any additional processing logic. The common use case would be opening a new task in the Main Area by using the openMainTask API. For example, you can bind the OK button to a managed bean and add your own action listeners. See Section 16.4.1.3, "Implementing OK and Cancel Buttons in a Popup."
Developers then can pass the parameters directly from the managed bean to the openMainTask API bindings for the popup task flow page to launch.
Nodes with taskType of dynamicMain, defaultMain, and defaultRegional have parameter support. In addition to specifying the taskFlowId to load when the user clicks a Task link, developers can specify the parameters to pass to the task flow, as shown in Example 14-7.
Example 14-7 Passing Parameters to a Task Flow
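As a rough sketch (hypothetical flow path and parameter names), parameters are supplied through the parametersList property using the semicolon-separated key=value format described later in this chapter:
<itemNode id="editInvoice" taskType="dynamicMain"
          label="#{apBundle.EDIT_INVOICE}"
          taskFlowId="/WEB-INF/oracle/apps/xyz/ui/flow/EditInvoiceFlow.xml#EditInvoiceFlow"
          parametersList="invoiceId=#{pageFlowScope.invoiceId};invoiceType=Standard"/>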
methodParameters can be used for passing a Java object into the task flow that is specified in the taskFlowId parameter. Use the setCustomObject() API in FndMethodParameters for setting the Java object.
Example of Passing a Java Object Using openMainTask
Bind the methodParameters parameter value to a managed bean property. Example 14-8 shows the method action binding in the page definition of the page fragment that calls openMainTask. Also see Table 14-2, "itemNode Properties for Main and Regional Task Flows for Application Menus".
Example 14-8 Method Action Binding to Call openMainTask
Code in the managed bean for passing a hashmap to the task flow would resemble Example 14-9.
Example 14-9 Example Code for Passing a Hashmap to the Task Flow
Code for reading the Java object inside the task flow would resemble Example 14-10.
Example 14-10 Reading the Java Object in the Task Flow
The Tasks List can launch data files that are internal to the current web application.
Therefore, the only input that the task list link will need is the internal path (within the webApp) of the file. Once the path is provided to UI Shell, it will determine the current contextual root of the application and append it to the internal path of the file. Once the URI for the file is generated, this is set on the URLView activity and an action expression is set on the task link to launch the URL view. UI Shell also needs to call an actionEvent javascript method on the client side that will not allow the page to lose its current state upon redirection from the URLView activity.
Limitations
Product teams can only launch data files that are part of their webApp, such as
/oracle/apps/fin/acc/file1.xls. This feature only supports the launch of data files through the task list. Any other URI paths, such as a JSPX or a JSF page, are not supported.
Developer Implementation
For a
dynamicMain task item that you would like to use to launch a data file, specify the file path, as shown in Example 14-11.
Example 14-11 Launching a Data File from a Tasks List
A page or task flow will only run for a user if that user has access to run the page or task flow. Directions for setting this up are in the "Adding Security to an Oracle Fusion Web Application" chapter in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
Application menus and taskList menus will automatically have their page security checked by the menu utilities. If the user does not have access, the menu entry will not be rendered. If these three conditions are true, security checks if a logged-in user has view privilege for a given task flow.
The application has enabled authorization
The
taskType is
dynamicMain for the itemNode
The
taskFlowId attribute is defined in the itemNode
If any of these conditions is not true, the check is not performed. You also can use an Expression Language expression to control rendering; if it evaluates to false, the menu will not appear. For more information on all the security expressions, see the "Adding Security to an Oracle Fusion Web Application" chapter in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
If your UI Shell pages are secured by ADF Security, you must add a policy, similar to Example 14-12, to the jazn-data.xml file and the system-jazn-data.xml file.
Example 14-12 and Example 14-13 show sample policy entries.
What if you want to perform a pre-check of the same permission? That's where an Expression Language expression comes in.
Example 14-14 shows the generic Expression Language expression being used to perform a pre-check of the Task Flow Permission. Note that this is only needed for an itemNode with taskType="defaultMain" or "defaultRegional". The security check is performed automatically for an itemNode with taskType="dynamicMain" (that is, what is in the tasks list).
Example 14-14 Generic Expression Language Expression Used for Task Flow Permission Pre-check
rendered = "#{securityContext.userGrantedPermission['permissionClass=oracle.adf.controller.security.TaskFlowPermission; target=/WEB-INF/audit-expense-report.xml#audit-expense-report; action=view']}"
Example 14-15 shows the task flow-specific Expression Language.
Example 14-15 Task Flow-specific Expression Language Expression Used for Task Flow Permission Pre-check
rendered="#{securityContext.taskflowViewable['/WEB-INF/audit-expense-report.xml#audit-expense-report']}"
Note that both of these checks actually go directly against the policy store; that is, they don't interrogate the task flow definition. This avoids the overhead of loading a large number of ADF artifacts to render links and menus.
UI Shell tasks to open up or close a Main Area tab are exposed as data control methods so that you easily can create such UI artifacts through drag and drop. You do not need to create your own data control methods and manually raise Contextual Events.
Data control APIs are:
FndUIShellController.openMainTask.
FndUIShellController.closeMainTask. See Section 14.4.1.1, "closeMainTask History" for more information.
For example, to open or close a Main Area tab, drag and drop the appropriate data control method to create the UI affordance. Once you have specified the parameter values for these methods, user clicks will prompt the UI Shell to react accordingly.
To use the openMainTask data control method:
Expand the Data Controls and select the openMainTask item, as shown in Figure 14-14.
Drag openMainTask and drop it onto the page fragment. When you do, the Applications Context menu shown in Figure 14-15 displays so you can choose one of the three options.
To use the closeMainTask data control method:
Expand the Data Controls and select the closeMainTask item, as shown in Figure 14-16.
Drag closeMainTask and drop it onto the page fragment. When you do, the Applications Context menu shown in Figure 14-17 displays so you can choose one of the three options.
Two APIs, shown in Example 14-16, are exposed to open and close a Main Area tab.
Example 14-16 APIs Exposed to Open and Close a Main Area Tab
/**
 * Opens a Main Area task.
 *
 * @param taskFlowId Task flow to open
 * @param reuseInstance Default true. If true, refocus an existing instance
 *        of the task flow, if such a one exists, without opening a new
 *        instance of the task flow. If false, always open a new instance
 * @param methodParameters Used to pass a Java object into the task flow
 *        that's specified in the taskFlowId parameter. Use the
 *        setCustomObject() API in FndMethodParameters
 * @return For internal Contextual Event processing
 */
public FndMethodParameters openMainTask(...)

/**
 * Closes the current focused task flow.
 *
 * @param methodParameters For future implementation. No-op for now.
 * @return For internal Contextual Event processing
 */
public FndMethodParameters closeMainTask(FndMethodParameters methodParameters)
Bind the
methodParameters parameter value to a managed bean property. Example 14-17 shows the method action binding in the page definition of the page fragment that calls
openMainTask.
Example 14-17 Method Action Binding in Page Definition of Page Fragment That Calls openMainTask
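A sketch of what such a page definition binding might look like; the managed bean name and task flow path are hypothetical, and the exact attributes JDeveloper generates for the methodAction may differ:
<methodAction id="openMainTask" RequiresUpdateModel="true" Action="invokeMethod"
              MethodName="openMainTask" IsViewObjectMethod="false"
              DataControl="FndUIShellController"
              InstanceName="FndUIShellController.dataProvider">
  <NamedData NDName="taskFlowId"
             NDValue="/WEB-INF/oracle/apps/xyz/ui/flow/EditInvoiceFlow.xml#EditInvoiceFlow"
             NDType="java.lang.String"/>
  <NamedData NDName="methodParameters"
             NDValue="#{pageFlowScope.MyOpenTaskBean.methodParameters}"
             NDType="oracle.apps.fnd.applcore.patterns.uishell.ui.bean.FndMethodParameters"/>
</methodAction>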
Example 14-18 shows code in a managed bean for passing a hashmap to the task flow.
Example 14-18 Sample Code in a Managed Bean for Passing a Hashmap to the Task Flow
Code for reading the Java object inside the task flow would resemble Example 14-19.
Example 14-19 Reading the Java Object in the Task Flow
MainAreaHandler.handleOpenMainTaskEvent has a mechanism to handle the new tab. The managed bean for the tab adds an additional property for the previous tab. When a new tab is configured to be launched, the current tab is set as the previous tab for the managed bean for the new tab.
A disclosure listener,
MainAreaBackingBean.setLastDisclosedItem, handles user clicks in the tab UI. When the user clicks a tab, disclosure events fire that track which tab the user ends up in. When a new task flow is opened, the task flow ID and its associated parameter values are pushed onto the stack.
Having this information, the call to
closeMainTask pops the stack to get the last task flow ID and its parameter values that were displayed, and reinitializes the Main Area with that task flow and parameter information.
See also Section 14.2.3.4, "Supporting No-Tab Workareas."
The UI Shell exposes the means to control the disclosure state of the Regional Area as a whole, and the disclosure state of individual panels within the Regional Area panelAccordian.
Declarative support: (to allow the developer to specify the initial state of the following on loading a Work Area JSPX page)
Within the Regional Area, whether or not a Regional Area Task Panel is collapsed or disclosed.
A given Regional Area panel that is disclosed on initial rendering of the page should honor its assigned pixel height to determine how much screen real estate it occupies.
Programmatic support: (to allow the developer to control the initial or subsequent state of the following within a Work Area JSPX page)
By default, the disclosure state is driven by what is specified declaratively. However, after initial page load, the developer can change the state programmatically in response to a gesture by the user, such as a button click or menu selection.
Developer Implementation
Specify the Regional Area width and collapsed state in the menu itemNode that represents the page, as shown in Example 14-20.
Example 14-20 Sample Menu Entry for Regional Area Settings
If these properties are not specified, the default values used are:
regionalAreaWidth="256" isRegionalAreaCollapsed="false"
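A sketch of the declarative settings on a hypothetical page node (page name, label, and values are placeholders):
<itemNode id="MyWorkarea" focusViewId="/MyWorkarea"
          label="#{myBundle.MY_WORKAREA}"
          regionalAreaWidth="300"
          isRegionalAreaCollapsed="false"/>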
For programmatic control, drag and drop the corresponding method from the FndUIShellController data control.
discloseRegionalArea
collapseRegionalArea
setRegionalAreaWidth
Two APIs, shown in Example 14-21, are exposed as data control methods under FndUIShellController.
Example 14-21 APIs Exposed as Data Control Methods Under FndUIShellController
/**
 * Discloses a Regional Area task.
 *
 * @param taskFlowId Task flow to disclose
 * @param keyList Key list of key-value pairs. For example,
 *        "key1=value1;key2=value2".
 * @param methodParameters For future implementation. No-op for now.
 * @return For internal Contextual Event processing
 */
public FndMethodParameters discloseRegionalTask(String taskFlowId, String keyList, FndMethodParameters methodParameters)

/**
 * Collapses a Regional Area task.
 *
 * @param taskFlowId Task flow to collapse
 * @param keyList Key list of key-value pairs. For example,
 *        "key1=value1;key2=value2".
 * @param methodParameters For future implementation. No-op for now.
 * @return For internal Contextual Event processing
 */
public FndMethodParameters collapseRegionalTask(String taskFlowId, String keyList, FndMethodParameters methodParameters)
Limitations
Declarative support allows the
inflexibleHeight property to control the pixel height of the Regional panel. Programmatic support does not have this allowance.
Programmatic support allows for
forceRefresh property to make it possible to refresh a Task without passing in any parameters. Declarative support does not have this allowance.
Refreshing a Regional Task without disclosing the task is not supported.
Multiple Regional Tasks are allowed to be disclosed at the same time. A switch to force showing only one task at a time is not provided.
Support for persisting any of these settings explicitly altered by the user during a session, across sessions, is not a part of this feature.
This section discusses the declarative and programmatic means of controlling the state of the Contextual Area splitter.
The UI Shell must expose the means to control the disclosure state of the Contextual Area.
Declarative support lets the developer specify the initial state when loading a Work Area JSPX page. It determines whether or not the Contextual Area (as a whole) is collapsed or disclosed.
Programmatic support lets the developer control the initial or subsequent state of the Contextual Area within a Work Area JSPX page.
By default, the disclosure state is driven by what is specified declaratively. However, after the initial page load, the developer can change the state programmatically in response to a gesture by the user, such as a button click or menu selection.
Samples of Expected Behavior
A Work Area page (JSPX) loads with the Contextual Area collapsed or disclosed when the page renders, based on the declarative setting. If the Work Area is loaded as a result of a Main Menu invocation, declarative options always are used for the disclosure state.
If a Work Area loads as a result of a page navigation from another Work Area, programmatically set options may override declarative settings.
Extend the contextual-area-task-flow-template Task Flow Template into the page task flow, as shown in Example 14-22.
Specify values for the contextual area splitter position and the collapsed state in the menu for the item node that represents the page using
contextualAreaWidth and
contextualAreaCollapsed properties. A sample entry in the menu file will resemble Example 14-23.
Example 14-23 Sample Menu Entry for Contextual Area Settings
If these properties are not specified, these default values are used:
contextualAreaWidth="256" contextualAreaCollapsed="false"
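Similarly, a sketch of the contextual area settings on a hypothetical page node:
<itemNode id="MyWorkarea" focusViewId="/MyWorkarea"
          label="#{myBundle.MY_WORKAREA}"
          contextualAreaWidth="300"
          contextualAreaCollapsed="false"/>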
For programmatic control, drag and drop the corresponding method from the data control named FndUIShellController:
collapseContextualArea
contextualAreaWidthSelection
To set these values when opening a new task, drag and drop the
openMainTask method from FndUIShellController and pass in the
contextualAreaWidth and
contextualAreaCollapsed parameters through "methodParameters > NamedData" as shown in Example 14-24.
Set the method in the page managed bean to set the
contextualAreaWidth and
contextualAreaCollapsed values, as shown in Example 14-24.
Example 14-24 Setting the contextualAreaWidth and contextualAreaCollapsed Values for openMainTask
For setting these values when navigating with the navigate method, see Example 14-25.
Set the method in the page managed bean to set the
contextualAreaWidth and
contextualAreaCollapsed values, as shown in Example 14-25.
Example 14-25 Setting the contextualAreaWidth and contextualAreaCollapsed Values for navigate
<methodAction id="navigate" ...>
  <NamedData NDName="methodParameters"
             NDValue="#{<ManagedBean.Method>}"
             NDType="oracle.apps.fnd.applcore.patterns.uishell.ui.bean.FndMethodParameters"/>
</methodAction>
Multiple Regional Area panels will be open at the same time, instead of showing only one panel at a time.
Because the desired size of each panel will be different for each panel, developers can set the pixel height for each of the panels by specifying the
inflexibleHeight property in the itemNode that represents a Regional Area panel, as shown in Example 14-31.
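A sketch of such a panel entry (the flow path, label, and pixel value are placeholders):
<itemNode id="searchPanel" taskType="defaultRegional" disclosed="true"
          label="#{myBundle.SEARCH}"
          taskFlowId="/WEB-INF/oracle/apps/xyz/ui/flow/SearchFlow.xml#SearchFlow"
          inflexibleHeight="200"/>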
Global Menu Behavior
Items to which the user does not have access will not be displayed.
A category is hidden if there is no child to display.
If a menu entry length is greater than 27 characters, ellipses (...) display. The entire entry will display in a tool tip when the pointer hovers over the entry.
Parent and children will not be split in different columns.
Note: Before you create menus, you first must create JSF pages using the UI Shell template.
These Global Menus span J2EE applications.
Navigator Menu: This is the global menu that displays in the UI Shell global area.
Home Page Menu: The home page tabs are actually each a JSPX page assembled using menu metadata.
Preferences Menu: The User Preferences page has a tasklist to all other preference pages within Oracle Fusion Middleware. This is assembled using menu metadata.
Table 14-3, Table 14-4, and Table 14-5 list the menu attributes added by Applications Core to the menu XML above what is provided by Oracle ADF.
The Navigator menu, shown in Figure 14-18, provides links to workarea pages in various applications. To simplify the runtime behavior, one XML file contains all the menu entries. An Applications Core application will deploy these menus to MDS. Each application will read these directly from MDS.
Each application must be configured so that the shared library can read the menus from MDS.
To implement a Global Menu:
Verify that the
web.xml of the application has the correct Java Authentication and Authorization Service (JAAS) filter to enable checking menu security against Oracle Platform Security Services (OPSS), as shown in Example 14-27.
Example 14-27 Sample JAAS Filter
<filter>
  <filter-name>JpsFilter</filter-name>
  <filter-class>oracle.security.jps.ee.http.JpsFilter</filter-class>
  <init-param>
    <param-name>enable.anonymous</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>remove.anonymous.role</param-name>
    <param-value>false</param-value>
  </init-param>
  <init-param>
    <param-name>application.name</param-name>
    <param-value>crm</param-value>
  </init-param>
  <init-param>
    <param-name>oracle.security.jps.jaas.mode</param-name>
    <param-value>subjectOnly</param-value>
  </init-param>
</filter>
application.name, as shown in the example, in
web.xml is the application family value. The choices are crm, fscm, and hcm. This value is used to create the stripe in LDAP.
Update
weblogic-application.xml. As shown in Example 14-28, set the application-param that has the param-name jps.policystore.migration to OFF.
In
weblogic-application.xml, make sure the application-param that has the param-name jps.policystore.applicationid is set to the correct stripe, as shown in Example 14-29. This is the same as the application.name property of
web.xml.
Add the required entry to web.xml.
Then launch WebLogic Server after setting the JAVA_OPTIONS environment variable in the
setDomainEnv.sh file:
JAVA_OPTIONS = -DAPPLCORE_TEST_SECURED_MENU=N
Before you enforce user actions, you should already have defined roles, principals, and actions in the database.
Functional security will always prevent a user from accessing a page or task flow that the user does not have access to. You might, however, want to show an entry under one category but not show an entry under the Employee Self-Service category that also leads to the same page. See Example 14-30.
Example 14-30 Expression Language Expression to Evaluate a User's Access Rights
rendered="#{securityContext.userInRole['EMPLOYEE_ROLE']}"
The Expression Language expression should never check the pageDef. However, you can use the Expression Language expression to check security of a person's role since that is in LDAP.
The
applicationStripe attribute determines which LDAP stripe is checked for the
securedResourceName.
Menu files will be referenced through MDS. This means they can be located in a table or in a file system directory. Determine now where this directory will be. This is where your
root_menu.xml and other menu files will be located. For Global Menu attributes, see Section 14.5.1.1, "Menu Attributes Added by Oracle Fusion Middleware Extensions for Applications (Applications Core)."
Create the root menu.
Example 14-31 shows a sample root menu.
Example 14-31 Sample Root Menu
The root menu contains itemNodes that launch a page, or references to more menu files. If itemNodes were included that were not deployed, they will not appear, since a check is done against the deployment tables of what was deployed. Applications Core requires that if a groupNode has no children, which could happen through security enforcement, the groupNode itself will not be rendered.
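A minimal sketch of such a root menu, assuming the standard ADF menu model XML and hypothetical page names (references to other product menu files are omitted here):
<?xml version="1.0" encoding="UTF-8" ?>
<menu xmlns="http://myfaces.apache.org/trinidad/menu">
  <!-- Category; not rendered if security leaves it with no visible children -->
  <groupNode id="payablesGroup" idref="invoicesWorkarea" label="#{apBundle.PAYABLES}">
    <!-- Entry that launches a workarea page -->
    <itemNode id="invoicesWorkarea" focusViewId="/InvoicesWorkarea" label="#{apBundle.INVOICES}"/>
  </groupNode>
</menu>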
There are situations, particularly with more simple applications, when the default enterprise-level menu structure is not suitable. In these cases, you may want to display the Navigator menu as a series of pull-down buttons.
To switch the UI Shell rendering so the Navigator menu renders as pull-down buttons in a horizontal row, set the
isSelfService attribute to "true" on the .jspx page that extends the UI Shell template. That is, inside the
<af:pageTemplate> tag, add the following:
<f:attribute name="isSelfService" value="true"/>
The Personalization menu options, shown in Figure 14-19, let you set your preferences, edit the current page, and reset the content and layout. The menu is supplied automatically by the UI Shell and requires no developer work.
The Preference Menu only appears if you have the ApplSession filter and mapping set up. See Section 47.2, "Configuring Your Project to Use Application User Sessions."
Set Preferences
The actual Preferences dialog, such as shown in Figure 14-22, is created by developers. See Section 14.7 for the details of how to implement the menu.
Edit Current Page
This option displays only if the displayed page has been marked as able to be user-edited (if the
isPersonalizableInComposer attribute in
af:pageTemplate is set to true). Selecting this option will start the editing feature and the page will resemble Figure 14-20. Click Close to return to the page. Click Customization Manager to change the displayed page in Composer.
Reset Content and Layout
This option resets the page content and layout. If task flows are personalized on that page via Composer, they are not reset by this menu item.
Set Preferences, shown in Figure 14-21, is a link in the global area for easy access to setting preferences for the current application, general user preferences, or for any other application preference in Oracle Fusion Middleware. For more information about this global menu, see Section 14.6, "Using the Personalization Menu." Preference values can be stored in LDAP.
The Preferences link from the global area will launch a Workarea that shows the preferences related to the currently-displayed page.
Links in the left hand side will allow navigation to any Preferences Workarea page within the entire Oracle Fusion product. This menu will be rendered using Applications Core menu federation abilities. Development teams will own the menu files.
If an application is not installed, or the user does not have access, the entry in the tasklist menu should not appear. If a user does not have access to a particular setting within a page, application teams need to use the
rendered property with a security expression behind it.
For each application, there should be a preferences page. The preferences page will be found by looking for a page using the same path of the current application, but with a page name of preferences.jspx.
If no associated preferences page exists, a default General Preferences page will be shown. This page shows global Applications Core most-used preferences.
If there are several preference pages associated to an application, such as a Common Setting page and more specific pages, only one preferences page as a target from the global preferences link can be defined per application. (There will be a default name for the target
focusViewID of the preferences page for an application.) Once in the preferences workarea, other links are available from the tasklist to more specific pages or task flows. (Links in the tasklist can contain other
focusViewIDs that belong to the same application as the default preference page.)
When the first application is deployed, it should only be navigated to from the task list. Therefore, from the Preferences link, the user always displays the more specific preferences page of that application.
A Preferences page will be like any other Workarea page. Preference values are not supported in integrated WLS LDAP, only an external LDAP is supported. The Tasks list will be loaded as a defaultRegional flow and the main area will be a defaultMain flow.
Workarea Title
Each Preferences page should display a title similar to {Category_name:Page_name}. This can be done though Expression Language and will not be created automatically from the framework.
The name that appears in the tasklist can be different from the page title. This is allowed since the tasklist name is generated from the tasklist preference distributed menu metadata, while the page title will be from the local page level menu metadata.
Tasklist / Navigation Pane
Each page needs its Application Menu metadata to specify that it wants the Preferences tasklist menu in the defaultRegional area as well as the defaultMain flow.
This menu can be a two-level menu having categories with links under each category.
Tasklist Federation
The tasklist will be a task flow that will contain links to all Preference pages throughout Oracle Fusion Middleware.
Each application will provide the preference menu files that contain tasklist links to preference pages delivered by that application. The preferences tasklist should follow the Navigator Menu architecture recommendations where it uses sharedNode references to bring in menus from each application so they can be patched independently. Applications Core will automatically federate the menu metadata so the tasklist that renders will contain all the entries from all applications (filtered by security).
Individual menu files will be versioned like other distributed menu files, so any application can apply a patch and the new menu will take precedence over an older version when federated.
Tasklist Only Can Link to Full Pages (not specific task flows)
The tasklist will not launch task flows dynamically, but will load a Preferences workarea page. This is because the tasklist menu needs to be federated and only page-level entries are allowed in a federated menu.
No-tabs Mode
The Preferences page should use a no-tabs mode. This is a standard, not controlled through any code. Teams could use tabs if all flows are defaultMain if desired. See also Section 14.2.3.4, "Supporting No-Tab Workareas."
Tasklist Security
The tasklist will be filtered by functional page level security for that user. If all entries in a category are restricted then the category should not appear either.
Preference Settings
Settings will be a view activity in a task flow. It will follow other UX standards so it should be built using an Applications Panel. This means the action buttons will appear at the top.
Different Preference pages can change the same back end setting. This is up to Applications design. If this is needed, it should be stored in a common area, such as LDAP, or be in the General Preferences page.
Application preferences pages are deployed with the corresponding product pages.
The design should be similar to that shown in Figure 14-22.
This section discusses how to set up and configure Preferences.
Once the WebLogic Server console is configured, create an Oracle Fusion web application that uses UI Shell pages.
Create a UI Shell page that is used solely for the user preferences, such as PreferencesUI.jspx.
Set the
isDynamicTabNavigation to false for the PreferencesUI page entry in the menu.
Add the following task flow as default regional under the preferences page entry:
"/WEB-INF/oracle/apps/fnd/applcore/pref/ui/mainflow/GeneralPreferencesFlow.xml#GeneralPreferencesFlow"
The final menu entries for the page will appear similar to those shown in Example 14-32.
Example 14-32 Sample Menu Entries for the Preferences Page
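A sketch of such entries; the page name, IDs, and labels are placeholders, while the GeneralPreferencesFlow path is the one given above:
<itemNode id="PreferencesUI" focusViewId="/PreferencesUI"
          label="#{prefBundle.PREFERENCES}"
          isDynamicTabNavigation="false">
  <itemNode id="generalPrefs" taskType="defaultRegional" disclosed="true"
            label="#{prefBundle.GENERAL_PREFERENCES}"
            taskFlowId="/WEB-INF/oracle/apps/fnd/applcore/pref/ui/mainflow/GeneralPreferencesFlow.xml#GeneralPreferencesFlow"/>
</itemNode>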
This should display the basic Preferences in the default regional area that can be launched to display sub-flows for each preference sub task (for instance, Accessibility and Appearance).
The General Preferences flow that is exposed also renders the Preferences Menu Model links by using a call to the Menu Service API.
The Preferences menu will be part of a central Utility application (Menu web service) that will be deployed in the server. The Preferences menu will be maintained by the development team.
On any UI Shell page, the Global Area contains a Personalization menu that contains a Set Preferences link. This Preference link will redirect the user to a webApp-specific Preference page, depending upon the entry in Menu data.
Example 14-33 shows sample preferences menu data.
Example 14-33 Sample Preferences Menu Data
Each itemNode refers to a Preference page and the webApp-specific flow in which the Preference page exists.
For example, the first itemNode refers to the preferencesA page that is part of the FND webApp. The ServiceFlow child node is a task flow that belongs to FND webApp.
For each parent itemNode, there is an attribute called
prefForApps that contains a list of webApp names. This means that the itemNode is a common preference page for those listed webApps.
For example, the Preferences page is common for two webApps -- gl and hr. This essentially means that all the DashBoards and Workarea UI Shell pages in gl and hr webApps will be redirected to this preferencesA page, which is in webApp FND, when the Set Preferences link is clicked.
All the task flows under each preference page itemNode will display in the General Preferences Flow as navigate links. Therefore, all preference pages will have access to these flows.
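As a rough sketch of this structure, mirroring the preferencesA description above (the IDs, flow path, label, and the list separator used in prefForApps are assumptions for illustration):
<itemNode id="preferencesA" focusViewId="/preferencesA"
          label="#{prefBundle.PREFERENCES_A}"
          prefForApps="gl,hr">
  <!-- Task flow that belongs to the same (FND) webApp as the page -->
  <itemNode id="ServiceFlow"
            taskFlowId="/WEB-INF/oracle/apps/fnd/pref/flow/ServiceFlow.xml#ServiceFlow"/>
</itemNode>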
To test the general preferences flows, you need to configure user session and ADF Security for the test application. See Chapter 47, "Implementing Application User Sessions."
When configuring ADF Security, there is no need to define users, because you already are using an OID store that will authenticate the users existing in the store.
See Chapter 20, "Working with Localization Formatting."
A use case exists where the UI needs to provide access to password management.
The Password link on the General Preferences page will point to the Password Management page from the Oracle Identity Management administration application. This page is maintained by the OIM team. For the Password link to redirect to the
pwdmgmt.jspx page, the deployment information of the current application and the OIM administration application must be populated correctly in the ASK tables.
The Administration Menu, shown in Figure 14-23, is displayed only if the logged-in user has the appropriate privileges. See Section 14.8.1, "How to Secure the Administration Menu". The menu is supplied automatically by the UI Shell and requires no developer work.
Customize Case List Table Pages ...
Select this option to customize the current page for multiple users using the customization layer picker dialog. For information about customization, see Chapter 61, "Creating Customizable Applications" and the "Customizing Applications with MDS" chapter in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
Customization Manager ...
Select this option to launch the Customization Manager.
For information about customization, see Chapter 61, "Creating Customizable Applications" and the "Customizing Applications with MDS" chapter in the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
For information about defining and configuring namespaces when promoting a page fragment to a label, see "Updating Your Application's adf-config.xml File" in the "Performing Oracle Composer-Specific MDS Configurations" chapter of Oracle Fusion Middleware Developer's Guide for Oracle WebCenter.
Manage Sandboxes ...
Select this option to manage sandboxes on your system.
The Sandbox is built on top of the standard Sandbox feature from Oracle Metadata Services. See "Using the Sandbox Manager" in the "Understanding the Customization Development Lifecycle" chapter of the Oracle Fusion Applications Extensibility Guide.
Setup and Maintenance ...
Select this option to launch the Oracle Fusion Functional Setup Manager application. See the Oracle Fusion Applications Common Implementation Guide.
All teams that need the Administration link need to include the privilege and the permission in their JAZN file as defined in Example 14-34. All Administrator Roles must inherit the Applications Core "Administration Link View Duty" duty role. This duty role gives access to the "View Administration Link" privilege.
Example 14-34 Privilege and Permission Entries for the Administration Link
The User Productivity Kit menu item in the Help menu, shown in Figure 14-24, needs to be provisioned by the System Administrator. That is, an entry with
DEPLOYED_MODULE_NAME = "upk" needs to be added in the ASK deployment tables. These tables are populated at deployment time through Oracle Fusion Functional Setup Manager tasks. Without the entry, the menu item "User Productivity Kit ..." would not be shown in the Help menu.
Applications Help
Select this option to launch the help system in a separate window.
Troubleshooting
When you select the Troubleshooting option, an additional menu, similar to Figure 14-25, displays.
Troubleshooting Options
Select this option to display the Options dialog, as shown in Figure 14-26. When a user changes these options, user-specific profile values are inserted into the profiles. If users decide to revert the setting to the default site profile, they will need to use the Functional Setup Manager to remove their own profile.
After making changes to any one of the options for applications logging, severity level, or modules, the user needs to log out of the Oracle Fusion application, close the browser session, and log back in for the new options to take effect. These logging profiles are cached in user session and initialized when a user logs into an Oracle Fusion application.
Database trace:
This option enables SQL trace for the database connections used by the current user session.
Capture bind variables:
Select this option to also enable the SQL trace option to capture bind variables.
Capture wait events:
Select this option to also enable the SQL trace option to capture wait events.
PL/SQL profiler:
This option enables the PL/SQL hierarchical profiler for all the connections used by the current user session. See "Using the PL/SQL Hierarchical Profiler" in the Oracle Database Advanced Application Developer's Guide.
For PL/SQL profiler, the output will be in the directory defined by
APPLLOG_DIR. The exact path for
APPLLOG_DIR can be found on the database host by querying the database directory definitions. To analyze the profiler output, use the
plshprof utility under
$ORACLE_HOME/bin. For more information, see "Troubleshooting for Oracle Fusion Applications Using Logs and Diagnostic Tests" in the Oracle Fusion Applications Administrator's Guide.
Modules:
Module filter for logging. This is a comma-separated list of modules to be logged. The percent sign (%) is used as a wild card. For example, % or %financial%. The percent sign (%) is the default value; if no other value is specified, all modules are logged. See "Troubleshooting for Oracle Fusion Applications Using Logs and Diagnostic Tests" in the Oracle Fusion Applications Administrator's Guide.
Tagging is a service that allows users to add tags to arbitrary resources in the Oracle WebCenter so they may collectively contribute to the overall taxonomy and discovery of resources others have visited.
Tagging is a component of Oracle WebCenter. For complete information, see the Oracle Fusion Middleware Developer's Guide for Oracle WebCenter. Specific information about tagging that you will need is in these chapters:
"Configuring Your Application for Oracle WebCenter Web 2.0 Services"
"Integrating the Tags Service"
This section assumes that:
Tagging is being enabled on a business object. The Model/View already exists for the business object.
Your pages are using the UI Shell template.
You have identified the business objects that you want to Tag.
You have database connections to ApplicationDB and WebCenter.
You have enabled data security.
Important Considerations
Tags are attached to objects, not taskflows. When you click a link belonging to a tagged object, you will navigate to a page and taskflow for viewing that particular object.
You can enable tagging at the page level if it is clear that the page represents a specific object.
If you have several pages in a flow, all representing the same object, you could enable tagging on every page.
You can enable tagging in a table if each row represents a specific object.
You can tag an object from several different places. You can give several target navigation paths from TagCenter. This allows different users to access the same object from different workareas.
Oracle Fusion Applications Search only allows a single navigation path, but global search does allow alternate links as well as multiple Service view objects for alternate navigation paths. That is, you may tag an object from one workarea, but on navigation from the tagged object link in TagCenter or Oracle Fusion Applications Search, the workarea that it navigates to could be different.
Security will hide a tagged object in both TagCenter and Oracle Fusion Applications Search based on the object's data security, not taskflow security, although page and taskflow security are enforced after clicking the tag.
Preliminary Setup
The following steps are condensed from the Oracle Fusion Middleware Developer's Guide for Oracle WebCenter.
Open your current application in JDeveloper.
Make sure that your database connection has access to Oracle WebCenter Schema. If it does not, create another connection to access Oracle WebCenter Schema. Name the connection
WebCenter. Tagging looks for this connection to access the data.
Note: You should now see at least two connections in your
connections.xml: WebCenter and ApplicationDB.
In the Resource Palette, open My Catalogs > WebCenter Services Catalog > Task Flows. Make sure data security is enabled; though not mandatory, it is highly recommended, and it is required for implementing security for Tagging.
You now can enable Tagging for your business objects.
Three pieces of information are needed to define Tagging for a business object:
RESOURCE_ID (VARCHAR2(200)): The unique key of the object. If the key is a composite key, concatenate the values; make sure any date value that you use in the String concatenation is formatted in Database format.
Teams cannot implement their own resource parser.
SERVICE_ID (VARCHAR2(200)): This is used to identify the object. The Applications standard is to use the logical business object name.
NAME
(VARCHAR2(200)): This is what will be displayed as the resource that is tagged in the Tag Center, and that will be visible to the end users when they search for a tagged item. Give it a meaningful name, such as <PO Number>+<PO Title> or Invoice Description or Customer Name.
Note: All fields are varchar2(200). Make sure that you are not violating the constraint. Also note that if, for instance, your product has three business objects that you are planning to tag, you will build three different services with the proper business object name as the service id.
Follow these steps to tag a resource:
From the Component Palette, select WebCenter Tagging Service. Drag and drop the Tagging Button onto the page. (If you do not find the button, make sure you added the WebCenter Tagging JSP Tag Library to your user interface project.)
Open the Property Inspector for the Tagging Button. Enter the bound values for ResourceId, ResourceName and ServiceId, similar to Example 14-35.
Example 14-35 Entering Property Information for Tagging Button
<tag:taggingButton resourceId="#{<resource_id_value>}"
                   resourceName="#{<resource_name_value>}"
                   serviceId="#{<service_id_value>}"/>
Note: Make sure all the values are of type String.
From the Resource Palette, under My Catalogs > WebCenter Services Catalog > Task Flows, drag and drop the Tagging Dialog (as a Region). If you do not find the Tagging Dialog, make sure the WebCenter Services Catalog is available in the Resource Palette. If you are placing tags within rows of a table, this region must be dropped outside of the table. Otherwise, it is instantiated for every row, which will not work.
The ability to tag an object is now enabled on the page. The code will look similar to that shown in Example 14-36.
If you have multiple objects to tag, there will be multiple tagging buttons.
resourceParamList: The list of parameter names which you want to pass to the task flow. For example, your task flow takes invoiceId and invoiceType, and the RESOURCE_ID for the tagged item is 123.C45, where 123 is the invoice ID.
These are parameters that can be passed to the task flow in addition to the resourceParamList:
navTaskKeyList: Do not specify this if it is not used. Otherwise, task flows launched from TagCenter will always go to a specific tab. The value can be an Expression Language expression that the landing page can resolve.
pageResourceParamList: If the RESOURCE_ID for the tagged item is, for example, "EastCoast.C45", EastCoast will be a page-level parameter called
Region and C45 a taskflow parameter called
InvoiceType. It is assumed an object will always need both composite keys to be identified as a unique entity, so the resourceParamList will always contain the same number of parameter names as the composite keys.
pageParametersList: Additional page-level parameters for the page that hosts the task flow. Example:
pageParametersList = "Campaign=Sales"
customDelimiter: This is optional. The default is a ".". If you want something different, add this parameter.
For ease of deployment, service definition files will be stored in Oracle Metadata Services (MDS). Add the
service-definition.xml file to the Metadata Archive (MAR) file definition.
A sample
service-definition.xml is provided in Example 14-37.
Example 14-37 Sample service-definition.xml
You want to put the
service-definition.xml file in a standardized location so there are no conflicts. Create or copy the file, which by default is located at
.adf/META-INF/service-definition.xml, to the new standardized location. There are two goals for where to put the
service-definition.xml:
Make it unique so two teams are not trying to push files to the exact same location in MDS.
Standardize it to be under
meta/oracle/apps/meta. Register the
/oracle/apps/meta namespace in
adf-config.xml, as shown in Example 14-38.
Example 14-38 Registering the Namespace in adf-config.xml
Add the entries shown in Example 14-39 to your
adf-config.xml to enable the default resource action handler from Applications Core:
Example 14-39 Enabling the Default Resource Action Handler
With this in place, clicking a tagged item in TagCenter will navigate you to the task flow defined in your service definition, passing the parameters you specified so you can view the object.
Searchable and taggable objects are defined at the view object/logical business object level. The same view object/business object can be viewed and tagged in more than one workarea and taskflow. The requirement is that all users must be able to navigate from TagCenter to a detail taskflow for which they have privileges. If this taskflow is available in the current workarea, use this target before any other.
This is implemented by supplying multiple navigation targets in the current
service-definition.xml. Each target will have its parameter separated by a caret (^).
Example 14-40 shows how to define a list of three targets:
Example 14-40 Defining a List of Three Targets
If the task flow is available in the current workarea, launch it in the current page if the user has view access to that task flow; this overrides the order of the list. Otherwise, check the list of link targets for the first target to which the current user has access. Application teams are responsible for making sure that all users who can access this object have at least one match to a target taskflow defined in the
service-definition.xml.
When the target is in a different webApp, the security check is performed by calling
checkBulkAuthorization API. Therefore, using a standalone WebLogic Server and LDAP policy store are required. This requirement is optional when the lists of targets are within the same webApp.
To add row-level tagging to a table, add a new empty column to hold the tag button/link.
All other steps remain the same as described in Section 14.10.1.1, "Tagging a Resource (Business Object).".
The Applications Standard recommends that you do not add the Tag Search in your page. The standard way is to launch the Tag Center from the Tags link in the global area.
Each development team will build a task flow where they can take the user for that resource and show the desired additional information.
Select or create a new task flow that will act as your resource viewer for a service (business object).
In the task flow definition, define an input parameter that is called resourceId (in this example), class equal to java.lang.String, value equal to #{pageFlowScope.resourceId}, and required enabled.
Note that the value for resourceId is set automatically when clicking a tagged item link in TagCenter.
Add a setCurrentRow method call to the task flow and create its method parameter binding (right-click on the method in the task flow and go to the page definition) with the parameter value of #{pageFlowScope.resourceId}, type java.lang.String, and name equal to the parameter variable name in the setCurrentRow method signature.
Add to the new task flow the bounded task flow for the details/information page of the tagged item.
Add a control flow case from the setCurrentRow method to the details/information page task flow, as shown in Figure 14-27.
Register the new task flow in the
service-definition.xml file. You will register the resource viewer for a particular service (business object) to its section in the service definition file. See Example 14-37 for samples of the service-definition and resource-view entries.
Basically, you have defined a task flow that takes the resource id as input. Use the resource id to uniquely identify the business object and display any desired extra detail. Note that clicking a tagged item in TagCenter displays the details/information page for the tagged item in the local area of the workarea page for that task flow.
By default, tagging does not provide any security. To avoid this problem, the development teams will implement security for each service (business object) for which tagging is enabled.
For a business object, first implement Oracle Fusion Data Security.
Add the
authorizerClass and the
dataSecurityObjectName parameter to the
service-definition.xml file, as shown in Example 14-41.
Example 14-41 Adding the authorizerClass and dataSecurityObjectName Parameters
This section presents examples of how tagging appears in the UI Shell.
You can tag an object by placing the tagging button and tagging dialog in the task flow. On clicking the tagging button, a tagging dialog displays and prompts the user to tag the object with a name, as shown in Figure 14-28.
To see the tagged object in the tag center flow, click the Tags link in the UI Shell global area and the tag center flow displays the list of tags available, as shown in Figure 14-29.
Click one of the tags in the tag cloud region of the tag center to view the tagged items, as shown in Figure 14-30.
Click the tagged item to view the object in the task flow mentioned in the service definition file, as shown in Figure 14-31.
To check whether or not the object is already tagged, hover over the tagging button to see the tags. On clicking the tag link on the hover dialog, the tag center flow will be launched with the selected tag, as shown in Figure 14-32.
Recent Items tracks a list of the last 20 task flows visited by a user. The Recent Items list is persistent across user sessions and a task can be relaunched from the Recent Items list. The feature is automatically turned on and will be available automatically in pages using the UI Shell template. Security must be disabled to turn Recent items off.
Before you begin:
For the Recent Items feature to work, product teams must configure the user session and ADF Security. See Chapter 47, "Implementing Application User Sessions." Without security enabled, recent items will not be captured, because the data is recorded for each authenticated user.
Recent Items records the task flow labels for a launched task flow. Therefore, product teams must carefully choose the labels for task flows, and must provide task flow labels for all task flows, even if they are meant to be used in no-tab mode (see Section 14.2.3.4, "Supporting No-Tab Workareas").
openMainTask is used to open a new task in the Main Area of Oracle Fusion web applications that use the UI Shell template. Task flows opened this way, for example by launching a task from a tasks list, are recorded automatically; product teams need to explicitly notify the UI Shell for sub-flow calls.
When
openSubTask is called before a sub-flow is launched, the sub-flow ID and its parameter values are pushed onto the stack. Applications Core also notifies the Recent Items implementation with recorded task flow information. This essentially makes a sub-flow able to be bookmarked by Recent Items, and can be launched directly from the selection of menu items on Recent Items.
Note that registering sub-flows to Recent Items is optional. The decision is up to a product team's product manager.
Implementation
This API is exposed as the Data Control methods
FndUIShellController.openSubTask and
FndUIShellController.closeSubTask that developers will drag and drop to their page fragments to create links to notify UI Shell. The FndUIShellController Data Control is automatically available to all Oracle Fusion applications that reference Applications Core libraries.
Example 14-42 shows the signature and Javadoc of the method.
Example 14-42 Recent Items API
/**
 * Notify UI Shell to record the sub-flow that is about to be launched.
 * The parameters are the same as those of the Navigate API; see
 * Section 14.16, "Introducing the Navigate API."
 */
The
openSubTask API accepts the same set of parameters as the Navigate API (see Section 14.16, "Introducing the Navigate API"). When users select a recorded sub-flow from the Recent Items list, Recent Items takes care of launching the task flow in the right work area and web application. Product teams do not need to do anything for that. Because the
openSubTask API supports
parametersList, product teams can pass some requirement-specific values to it while registering their task flow to Recent Items. On launching, those passed values are available in the
pageFlowScope. So, product teams can analyze these values and make decisions, such as if they need to first initialize the parent flow, or if they need to set
Visible to
False on some of the actions on the page.
To record sub-flows into the Recent Items list, applications need to call the
openSubTask API right before sub-flows are launched.
openSubTask takes parameters similar to the
Navigate API. One of these is task flow ID. For this, you need to specify the parent flow's ID (or main task's ID). In other words, sub-flows need to be executed via parent flow, even though they are launched from the Recent Items menu.
If your sub-flow does not need to be bookmarked by Recent Items, you do not need to change anything. Otherwise, you need to modify your parent flow and sub-flow as described in this section. After the changes, sub-flows can be launched in two ways:
From original flows
From Recent Items menu items using recorded information
Both will start the execution in the parent flow. Because the sub-flow needs to be piggybacked on the parent flow when it is launched from the Recent Items menu, the parent flow and the sub-flow must be modified as shown in Figure 14-33 and Figure 14-34.
If users would like to add this Employee Complete Detail page of a specific employee to their Recent Items, product teams need to set up something extra to make this happen. If this page (actually a bounded task flow whose default page is displayed) has been bookmarked, the next time users can click it on the Recent Items list and launch it directly by skipping the search step.
The parent task flow named
ToParentSFFlow is shown in Figure 14-35.
decideFlow is the router activity that decides whether the control flow should go to the original parent flow path (
initParent) or to the sub-flow path (
toChild). The condition used is whether the Empno value in the parent flow's pageFlowScope is null.
#{pageFlowScope.Empno} is set using its input parameter
Empno when the parent flow is called. The input parameter on the parent flow (that is,
ToParentSFFlow) is defined as:
<input-parameter-definition>
  <name>Empno</name>
  <value>#{pageFlowScope.Empno}</value>
  <class>java.lang.String</class>
</input-parameter-definition>
When the parent flow is launched from the task List, the
Empno parameter is not set (that is, it is not defined in the application menu's itemNode). Therefore, it is null and the router will route it to the
initParent path.
When the sub-flow is recorded through the
openSubTask API,
Empno is set on the
parametersList as:
<methodAction id="openSubTask" RequiresUpdateModel="true" Action="invokeMethod"
              MethodName="openSubTask" IsViewObjectMethod="false"
              DataControl="FndUIShellController"
              InstanceName="FndUIShellController.dataProvider"
              ReturnName="FndUIShellController.methodResults.openSubTask_FndUIShellController_dataProvider_openSubTask_result">
  <NamedData NDName="taskFlowId" NDType="java.lang.String"
             NDValue="/WEB-INF/oracle/apps/xteam/demo/ui/flow/ToParentSFContainerFlow.xml#ToParentSFContainerFlow"/>
  <NamedData NDName="parametersList" NDType="java.lang.String" NDValue="Empno=#{row.Empno}"/>
  <NamedData NDName="label" NDType="java.lang.String" NDValue="#{row.Ename} complete details"/>
  <NamedData NDName="keyList" NDType="java.lang.String"/>
  <NamedData NDName="taskParametersList" NDType="java.lang.String"/>
  <NamedData NDName="viewId" NDType="java.lang.String" NDValue="/DemoWorkArea"/>
  <NamedData NDName="webApp" NDType="java.lang.String" NDValue="DemoAppSource"/>
  <NamedData NDName="methodParameters"
             NDType="oracle.apps.fnd.applcore.patterns.uishell.ui.bean.FndMethodParameters"/>
</methodAction>
You also set up:
taskFlowId to be the parent flow's, not the sub-flow's
label to be the sub-flow's
When end users click the link (the Ename) to which the
openSubTask method is bound,
openSubTask will be called. This link component is defined as:
<af:column <af:commandLink <af:setActionListener </af:commandLink> </af:column>
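Filled out with the values described in the surrounding text, the link definition might look like the following sketch (component ids, the column header, and the action outcome toEmployeeDetails are illustrative assumptions):
<af:column headerText="Name" id="c1">
  <af:commandLink text="#{row.Ename}" id="cl1"
                  actionListener="#{bindings.openSubTask.execute}"
                  action="toEmployeeDetails">
    <af:setActionListener from="#{row.Empno}" to="#{pageFlowScope.Empno}"/>
  </af:commandLink>
</af:column>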
Note that when the link is clicked:
The actionListener and the action specified on the link are executed, in that order.
openSubTask needs to be called only from the original parent flow path (that is,
initParent), not from the sub-flow path (that is,
toChild).
EmployeeDetails activity in Figure 14-35 is a Task Flow Call activity that invokes the
ToChildSFFlow sub-flow. Before the sub-flow is executed, you need to add initialization steps. These initialization steps could include, but are not limited to:
Set up parent states. For this example, you need to set the selected employee's row to be the current row.
Set up the contextual area state.
Set up states to allow the sub-flow to navigate back to the parent flow.
There are two approaches to set up the initialization steps:
In the parent flow
In the sub-flow
For the first approach, you can add logic to initialize both paths before the task flow call activity in the parent flow. For the second approach, you initialize states in the sub-flow by using input parameters of the sub-flow. For example, the sub-flow will take an input parameter named
Empno. In effect, the second approach just postpones the initialization to the sub-flow.
The definition of input parameters in the Task Flow Call activity is:
<task-flow-call id="EmployeeDetails">
  <task-flow-reference>
    <document>/WEB-INF/oracle/apps/xteam/demo/ui/flow/ToChildSFFlow.xml</document>
    <id>ToChildSFFlow</id>
  </task-flow-reference>
  <input-parameter>
    <name>Empno</name>
    <value>#{pageFlowScope.Empno}</value>
  </input-parameter>
</task-flow-call>
Note that this means that the calling task flow needs to store the value of
Empno in
#{pageFlowScope.Empno}. For example, from the original parent flow path, it is set to be
#{row.Empno} using the
setActionListener tag. For the sub-flow path, it is set using the parent flow's input parameter Empno. On the sub-flow, define an input parameter whose name needs to be the same as the parameter name defined on the Task Flow Call activity. When the parameter is available, ADF will place it in
#{pageFlowScope.Empno} to be used within the sub-flow. However, this
pageFlowScope is different from the one defined in the Task Flow Call activity because they have a different owning task flow (that is, parent task flow versus sub-flow).
The definition of the sub-flow is shown in Figure 14-36:
In the sample implementation, you chose to implement the initialization step in the sub-flow.
Empno is passed as a parameter to the sub-flow and used to initialize the parent state. When the sub-flow is launched, the default view activity (
ToChildSFPF) displays. Before it renders, the
initPage method on the ChildSFBean will be executed. The page definition of the default page is defined as:
<pageDefinition xmlns="">
  <parameters/>
  <executables>
    ...
    <invokeAction id="initPageId" Binds="initPage" Refresh="always"/>
  </executables>
  <bindings>
    ...
    <methodAction id="initPage" InstanceName="ChildSFBean.dataProvider"
                  DataControl="ChildSFBean" RequiresUpdateModel="true"
                  Action="invokeMethod" MethodName="initPage" IsViewObjectMethod="false"
                  ReturnName="ChildSFBean.methodResults.initPage_ChildSFBean_dataProvider_initPage_result"/>
    ...
  </bindings>
</pageDefinition>
initPage is specified in the executables tag and will be invoked when the page is refreshed. The
initPage method itself is defined as:
public void initPage()
{
    FacesContext facesContext = FacesContext.getCurrentInstance();
    ExpressionFactory exp = facesContext.getApplication().getExpressionFactory();
    DCBindingContainer bindingContainer =
        (DCBindingContainer) exp.createValueExpression(
            facesContext.getELContext(), "#{bindings}",
            DCBindingContainer.class).getValue(facesContext.getELContext());
    ApplicationModule am = bindingContainer.getDataControl().getApplicationModule();
    ViewObject vo = am.findViewObject("ComplexSFEmpVO");
    vo.executeQuery();
    Map map = AdfFacesContext.getCurrentInstance().getPageFlowScope();
    if (map != null)
    {
        Object empObj = map.get("Empno");
        if (empObj instanceof Integer)
        {
            Integer empno = (Integer) map.get("Empno"); // new Integer(empnoStr);
            Object[] obj = {empno};
            Key key = new Key(obj);
            Row row = vo.getRow(key);
            vo.setCurrentRow(row);
        }
        else
        {
            String empnoStr = (String) map.get("Empno");
            Integer empno = new Integer(empnoStr);
            Object[] obj = {empno};
            Key key = new Key(obj);
            Row row = vo.getRow(key);
            vo.setCurrentRow(row);
        }
    }
}
Running a particular search with particular parameters is a case that a user may have to perform often. You can use the
openSubTask API to register a search page with search parameters. The next time the user can see the search results by just launching it from Recent Items. This is similar to using
parametersList to specify search parameters while registering the search flow. While launching, a little programming can be done to retrieve the search parameters and execute the query with the parameter values.
Once tasks are recorded on the Recent Items list, they are eligible for Favorites. The Favorites menu is implemented on top of Recent Items. Any current task on the Recent Items list can be bookmarked and placed in Favorites' folders. Currently, only a one-level folder is supported. Similar to Recent Items, tasks on the Favorites list can be launched directly from the menu. So, the description in this section for Recent items applies similarly to the Favorites implementation. For example, sub-flows based on the design pattern described in this section can be registered on the Favorites list as well as the Recent Items list.
The Watchlist is a portlet that displays a list of items that the user needs to track. Each item is comprised of descriptive text followed by a count. Each item also is linked to a page in a workarea where the individual items of interest are listed.
The Watchlist is available both as a dashboard region in the Welcome tab of the Home dashboard, and as a global menu. These are two views of the same content. The dashboard region is available to the users as soon as they login, while the global menu is accessible as they navigate through the suite.
The Watchlist will be refreshed to fetch new counts and items whenever the user navigates to the Home page. The Watchlist can refresh the entire watchlist or individual categories as needed. Users will be able to personalize the Watchlist to hide or show items.
Figure 14-37 shows an example of the Watchlist portlet and menu.
Implementing teams provide the Watchlist category and item meaning.
They also seed information to tell the Watchlist what counts to track and how to display them; developers will query it for testing verification.
The only other data model effect will be in the creation of summary tables. These summary tables help with retrieving the count of Watchlist items with data security. See Section 14.12.4.3.1, "Summary Tables."
The Watchlist data model is supported by ATK. The tables are:
ATK_WATCHLIST_CATEGORIES: Represents the functional categories in which each Watchlist item will fit. See Table 14-5, "ATK_WATCHLIST_CATEGORIES".
ATK_WATCHLIST_SETUP: Represents a type of count that a Watchlist item can track. The primary key is a Watchlist item code. See Table 14-6, "ATK_WATCHLIST_SETUP". An asynchronous Watchlist item's count will only be updated upon request, and events that change the count will not simultaneously be updating the Watchlist. In this case, Watchlist code is responsible for querying the count of an asynchronous Watchlist item on demand. Figure 14-38 shows the flow of an asynchronous Watchlist item.
For the Expense report saved search panel example:
The developer tasks are:
(Not needed for Human Task) Determine/set up view objects to execute the query for Watchlist count. You may want to include view criteria with bind variables and specify default values for bind variables.
For example, most view objects seeded for the Watchlist would need to define such bind variables. Example 14-43 shows a code snippet from the view object XML for a bind variable with a default value.
(Optional) Set up a Summary view object to facilitate retrieving the Watchlist count.
Determine/code task flows for the drill-down query panel and work with the Watchlist API. For promotion and unpromotion of user-saved searches, code will have to invoke a method in the provided Watchlist JAR, which will then handle interaction back to the Watchlist. Add promotion and unpromotion of user-saved searches as shown in Figure 14-39.
Create FND_LOOKUPS for displayed Watchlist category and item meaning (FND_STANDARD_LOOKUP_TYPES.LOOKUP_TYPE). Product teams will need to seed the lookup type with meaning for category (VIEW_APPLICATION_ID = 0 and SET_ID = 0). The translated lookup type meaning is shown in the Watchlist UI for category, while the corresponding lookup value meanings are shown in the Watchlist UI for items (user saved search item meanings come from saved search directly). Product teams will need to create seed data for this lookup.
Example 14-44 presents sample code to create or update the lookup.
Example 14-44 Sample Code to Create or Update the Lookup
Developers should follow these procedures to use the Watchlist.
To ensure the Watchlist link works in your pages, complete these steps.
Add this ADF library JAR file to the SuperWeb user interface project for your application (the JAR file must be part of your WAR in WEB-INF/lib):
fusionapps/jlib/AdfAtkWatchListPublicUi.jar
Add this dependent model ADF library JAR file in your application (the JAR file must be part of your EAR in APP-INF/lib):
fusionapps/jlib/AdfAtkWatchListProtectedModel.jar
Add this dependent resource bundle ADF library JAR file in your application (the JAR file must be part of your EAR in APP-INF/lib):
fusionapps/jlib/AdfAtkWatchListPublicResource.jar
Add these resource-ref entries to
web.xml:
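The exact entries to add come from the Watchlist setup for your release; as a general shape, a resource-ref entry for a data source looks like the following sketch (the JNDI name jdbc/ApplicationDBDS shown here is only an illustrative assumption):
<resource-ref>
  <res-ref-name>jdbc/ApplicationDBDS</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>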
Refer to Section 14.12.2, "Watchlist Physical Data Model Entities" for details of the entire Watchlist data model. Seed only ATK_WATCHLIST_CATEGORIES and ATK_WATCHLIST_SETUP.
By default, Watchlist code will access the application module/view object that is specified in the setup table, and rerun the query to refresh the Watchlist count. The summary view object is a way for a product team to get the count in its own way; the Watchlist code will just take the first row and get the specified attribute.
Since you are only interested in the count, a more efficient way would be to create a summary table for this table. The summary table keeps track of the count for each BUID. For the example table, the summary table would resemble Table 14.
Developers can use summary tables to populate the Watchlist counts. Currently, saved searches in MDS are stored in the file system as an XML file for each view object. The developer steps to create this file are:
Make sure the saved search XML file is created under the user's MDS directory. See Section 14.12, "Implementing the Watchlist."
For the same Watchlist items, the Watchlist portlet needs to be able to invoke a refresh. Each product team will set up and expose a service that includes local JAR files for this purpose. The nested Watchlist JAR file can use those local JAR files in its refresh code.
The service will expose the refreshCategory method, which will, in turn, delegate the call to the same method in the nested Watchlist application module. This method will be provided in the Watchlist JAR file and will contain code to perform the category-wide refresh.
Your Service project will need to import the other JAR files from your product that the refresh code will need.
public void refreshWatchlistCategory(String categoryCode)
This method, shown in Example 14-45, refreshes all Watchlist items in the corresponding category.
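The body of such a method is a thin delegation; a minimal sketch, assuming the nested Watchlist application module is reachable through an accessor named getWatchlistAM (both the accessor and the application module type name here are illustrative assumptions), might be:
public void refreshWatchlistCategory(String categoryCode)
{
    // Delegate to the same method on the nested Watchlist application module,
    // which is provided in the Watchlist JAR file and performs the category-wide refresh.
    WatchlistAMImpl watchlistAM = (WatchlistAMImpl) getWatchlistAM(); // names are illustrative
    watchlistAM.refreshWatchlistCategory(categoryCode);
}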
For subsequent steps that require running Watchlist APIs from your code, you will need to import the Watchlist JAR files. These also contain an AppMasterDB connection that will need to be configured. The search page will need to include a component to let the user control which of the saved searches to promote. The pre-seeded saved searches will be shown as static, while the user can use checkboxes to determine which of his or her own saved searches to promote.
Make sure the application is MDS enabled so users can save their searches and that the saved searches exist across sessions.
From the Watchlist UI, users can drill down to the transactional UI flows. Make sure the setup resembles Example 14-46:
Example 14-46
Use the team-provided task flow to promote saved searches to the Watchlist.
Note:If you already have implemented promoting the user-saved search to the Watchlist, see Additional Steps for Existing Consumers.
Make sure the AdfAtkWatchListPublicUi.jar file is available (usually in
fusionapps/jlib).
Open the page, as shown in Figure 14-40.
In the toolbar facet of the query region, drag and drop an ADF Toolbar component, such as
Toolbar (ADF Faces > Common Components), as shown in Figure 14-41 and Figure 14-42.
Expand the connection node and the ADF library node in the Resource Palette as shown in Figure 14-43.
Once the ADF Task Flows node is expanded, you should see two task flows. The task flow AtkWatchlistUserSavedSearchPromotionTF is the one to be used by product teams. Drag and drop it onto the toolbar facet as a Region; a dialog displays, as shown in Figure 14-44.
Click OK. This creates a region component in the UI page, as shown in Figure 14-45.
Open the page definition file and select the executable associated with the Watchlist-related task flow.
Open the Property Inspector and set the Refresh field to ifNeeded, as shown in Figure 14-46. To secure the AtkWatchlistUserSavedSearchPromotionTF, grant it to an appropriate role, and select appropriate actions, as shown in Figure 14-47.
Run the UI page. In the toolbar facet of the query region, there will be a Watchlist Options button, as shown in Figure 14-48.
When you click the button, a popup with the list of all saved searches displays, as shown in Figure 14-49.
Additional Steps for Existing Consumers
For product teams that have already implemented promoting the user-saved search to the Watchlist using the code in Example 14-46, add the two methods shown in Example 14-47.
Example 14-47
The Watchlist also supports drill-down.
The developer will be concerned with implementing the two steps for the destination task flow. First, he or she can retrieve the ViewCriteriaName from the PageFlowScope with the code shown in Example 14-48.
Example 14-48 Retrieving the ViewCriteriaName from the PageFlowScope
Map pfs = RequestContext.getCurrentInstance().getPageFlowScope(); String vcName = (String) pfs.get("vcName");
Second, the developer can use the code shown in Example 14-49 to apply the ViewCriteria to the search panel. If no ViewCriteria was passed, the code loads the default ViewCriteria.
Example 14-49 Applying the ViewCriteria to the Search Panel
Guard this code with a flag, such as a static variable, to ensure that it only runs once.
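A sketch of how such code might look, following the same binding-access pattern as the initPage method shown earlier (the iterator binding name and the default view criteria name are illustrative assumptions):
DCBindingContainer bc =
    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
ViewObject vo = bc.findIteratorBinding("ExpenseReportVOIterator").getViewObject(); // name is illustrative
ViewCriteria vc = (vcName != null)
    ? vo.getViewCriteria(vcName)
    : vo.getViewCriteria("DefaultSearchCriteria"); // default name is illustrative
if (vc != null)
{
    vo.applyViewCriteria(vc);
    vo.executeQuery();
}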
There is a link in the UI Shell for Watchlist in the global area. To make it work, the user interface project that you will ultimately run will need additional configuration. Add this entry to the ejb-jar.xml in your service project that contains the Watchlist service:
Add this entry to web.xml in your SuperWeb project:
The connection is defined in connections.xml. GroupSpaces functionality attempts to retrieve the group spaces for the logged-on user. Without a secure application, this functionality would fail.
Follow these steps to implement GroupSpaces.
Make sure the Oracle WebCenter Spaces Client library,
spaces-webservice-client.jar, has been added to the Project.
Define an application connection to point to the URL of the Oracle WebCenter GroupSpaces application. This chrome level suppresses the WebCenter chrome but will still render all the tabs within that Group Space.
When the user clicks View All GroupSpaces, the same UI as the My Group Spaces in the spaces application is rendered. This is also rendered as an iFrame within the WebCenter HomePage tab where it suppresses the chrome as well as the top level WebCenter tabs.
The WebCenter home page can be reached by navigating from a Tag Center, Global Search, or an Activity Stream link.
Use a question mark (?) instead of an ampersand (&) if this is the only request parameter for that URL.
Activity Streams is a feature provided by WebCenter. Product teams define an Activity for each business event, but for Business Events, only the event source is used as an object.
WebCenter Activities are defined in the
service-definition.xml file. The scope of
service_definition.xml is per business object. The service id attribute should match the name of the entity object. The
service-definition.xml file contains ActivityTypes, Object Types and resource-view definitions. An ActivityType needs to be defined for every business event on the Entity Object. The type name should match the business event's name. The
messageFormatKey attribute in the ActivityType element points to a message key in a ResourceBundle. It defines the format of the message displayed in the Activity Stream UI. These tokens are supported in a message.
Because the ADF Business Components Business Events are based on Entity Objects, it is not possible to use these events for publishing Activities for non Entity Object-based model changes. For such scenarios, product teams could use the Applications Core API shown in Example 14-50 to programmatically publish Activities to the ActivityStream service.
BusinessActivityPublisher
This class provides the publishActivity API that can be used to publish Activities asynchronously. This is a singleton per J2EE application, as shown in Example 14-50.
Example 14-50 BusinessActivityPublisher.java
/**
 * This class is responsible for publishing business events as WebCenter Activities to
 * the ActivityStreaming service. This is a singleton per J2EE application.
 * An instance of this object is obtained using the getInstance() method. This class
 * transforms business events into Activities and publishes them to Activity
 * Service asynchronously. Resources held by the class are released by using the
 * release() method. In a J2EE container, this method is called by the Applications Core
 * ServletContextListener.
 */
public void release()
BusinessActivity Class
This class looks up the ActivityType defined in service-definition.xml and publishes the Activity to the ActivityStreaming Service asynchronously. Details of the API on this class are shown in Example 14-51.
Example 14-51
A BusinessActivity instance uses the
service-definition.xml file and publishes the Activity to the ActivityStreaming Service asynchronously. Details of the API on this class are shown in Example 14-52.
Example 14-52
Product teams should override the
isActivityPublishingEnabled() method to enable Activity publishing for an Entity Object. Table 14-8 shows the details about the APIs exposed in
OAEntityImpl that product teams can override.
Note that, except for the
isActivityPublishingEnabled() method, other methods mentioned in Table 14-8 should be avoided in favor of transient attributes specified in Section 14.14.4.3, "Defining Activity Attributes Declaratively."
Some of the attributes, such as Actor, Service Ids and Additional Service Ids, can be passed as a part of the payload. The basic process steps are:
Define a transient attribute in the Entity Object.
Give a default value to the transient attribute.
Include the transient attribute as a part of the payload.
The different transient attributes that can be passed with the payload are shown in Table 14-9.
Defining Activities requires:
Adding the ActivityStream UI Task Flow
Defining Activities in
service-definition.xml
To add the ActivityStream task flow:
Make sure your user interface project includes the WebCenter ActivityStream task flow libraries. For details, see the "Integrating the People Connections Service" chapter in the Oracle Fusion Middleware Developer's Guide for Oracle WebCenter.
To define Activities in service-definition.xml, follow these steps.
If necessary, add the directory to the application's MAR profile. To add a MAR, select Application > Application Properties > Deployment. In the dialog that displays, select the MAR file and click Edit.
In the Edit dialog, select User Metadata and click Add. Browse to the meta directory and make sure it is included. The Activity message format strings can be stored in either a Java ResourceBundle or an XLIFF bundle. Oracle Fusion Applications use XLIFF bundles to store the Activity message format strings. The activity-type element in the
service-definition.xml file has
messageFormatKey attributes that are used to refer to the format strings in the XLIFF bundle.
Activity Stream supports only Java ResourceBundles. The Common String Repository is used for the message format strings.
These attributes are supported on the activity-type element:
messageFormatKey - Used on Activity Stream full view task flow.
summaryByListMessageFormatKey - Used in summary view Activity Stream task flow.
summaryByCountMessageFormatKey - Used in summary view task flow.
messageFormatKey
The value of this attribute points to the key defined in ResourceBundle. These tokens are supported in the message format string.
Sample using Java ResourceBundle:
Summarize/aggregate the Actors either by listing them if there are 3 or fewer, or by counting them if there are more than 3.
For remaining activities, summarize by finding a common Actor. Summarize/aggregate the objects either by listing them if there are 3 or fewer, or by counting them if there are more than 3.
Example 14-53 shows sample format strings for the above scenario.
Example 14-53 Sample Format Strings for a Summarized View
<resource-bundle-class>oracle.apps.crm.OpportunityResourceBundle</resource-bundle-class> <activity-types> <generic-activity-category <activity-type </generic-activity-category> </activity-types> WebCenter>
ObjectType custom attributes can be used to provide additional metadata for handling business object references in Activity Stream messages. The custom attributes shown in Table 14-10 are configured in adf-config.xml. The Comments feature lets users discuss an Activity via replies to comments and comments upon comments. This feature in Activity Stream allows users to comment on a specific Activity related to an object.
The Likes feature allows users to express their liking for any object in the system to which they have access. This feature is exposed in message board, activity stream, doclib and replies on topics discussion forum. In ActivityStream, this feature allows users to indicate if they like a particular Activity.
To enable Comments and Likes for a service, add these ActivityTypes to the
service-definition.xml:
<activity-type <activity-type
Make sure the activity-type names are as shown. The messageFormatKey values refer to the ResourceBundle keys that provide strings displayed for "comments" and "likes" links displayed in the ActivityMessage.
Users will be able to see ActivityMessages belonging to the Business Objects they are following. Users should either explicitly follow a business Object, or product teams should provide a way for users to follow certain Business Objects implicitly. A Business Object can be followed for a user by using the WebCenter Follow API. A sample implementation of Follow is shown in Example 14-54.
Example 14-54 Sample Implementation of Follow
The service category definitions are shown in Example 14-55 and Example 14-56.
Note that the id in the service-category-definition file matches the category-id in
service-definition.xml and it contains "business".
Example 14-55 Sample service-category-definition.xml
The ActivityTypes shown in Example 14-57 should be added to the service-definitions of all services that use the Follow model. These ActivityTypes are used to construct the message published when an object belonging to the service is Followed or Unfollowed.
Example 14-57 Adding ActivityTypes for Follow and Unfollow
<activity-type </activity-type> <activity-type </activity-type>
Contextual Actions are rendered for Business Objects or other resources referenced in ActivityStream messages when contextInfoPopupId is configured in the service-definition.xml file of the Business Object or resource. ActivityStream launches an ADF popup using the popup id from the
service-definition.xml file. The contextInfoPopupId should provide the absolute id of the popup used for the Contextual Action. A popup with the specified id should exist in the pages where ActivityStream is used. This is already a requirement for all pages where Contextual Actions-enabled objects are rendered. Activity Stream will make the serviceId, resourceId, and resourceType properties available to the launched pop-up. The pop-up should process these parameters and convert them to Contextual Actions-specific parameters and make them available to the Contextual Actions task flow or another component.
This element, which is the direct child of service-definition element, is used to configure the Contextual Actions popup id in service-definition.xml.
<contextInfoPopupId>:pt1:r1:casePopup</contextInfoPopupId>
The popup sample shown in Example 14-58 uses the serviceId, resourceId, and resourceType properties from ActivityStream that are made available through the launch variable, and makes them available to the popup.
Example 14-58 provides a front-end, launched from a page using the UI Shell template, that is used to query the Oracle Enterprise Crawl and Search Framework (ECSF).
The minimum requirement is to implement and run a UI Shell Template page. A page using the UI Shell template will automatically contain the search components in the Global Area and can be activated when running the page.
Where you have implemented ECSF for your product, you will need to make sure that you have followed all of the instructions in Chapter 2, "Setting Up Your Development Environment," Chapter 26, "Getting Started with Oracle Enterprise Crawl and Search Framework," and Chapter 27, "Creating Searchable Objects" to make your view objects searchable. ECSF uses these search-enabled objects in the construction of the result set.
If you have implemented ECSF and defined the SearchDB connection, the saved searches go to the Oracle database and are persisted across sessions.
Data Security Integration
Data security, that is, navigates to a web application that does have the ECSF libraries available.
If you customize the search fields out of the current page, you are not disabling search; the Expression Language bindings on the fields are still evaluated and, if the user navigates to a non-customized page, Oracle Fusion Applications Search will be available. To disable search, set the profile option Fusion Apps Search Enabled to 'N', either at Site or User level.
From the main page of the project in the global area, the Categories and Search terms fields can be seen, as shown in Figure 14-50.
If you expand the Categories field, a list, similar to that shown in Figure 14-51, displays:
Categories
The end-user can select from the list of Categories and enter a search string. Unchecking the All category unchecks all of the categories. The subset of selected categories will be displayed in the entry area of the drop list as a concatenated list separated by a semi-colon (;). The search term is matched in any of the crawled data, which includes the title, fixed and variable content, attached documents, and tags.
Saved Searches
Click the icon to open a list of saved searches. The list, shown in Figure 14-52, includes a Personalize… action item that will display the Personalize Saved Searches dialog so that saved searches can be deleted or renamed.
Show Results of Last Search: Displays the output of the last search.
Personalize: This becomes active if there is a saved search. Click this link to rename or delete a saved search, as shown in Figure 14-53.
To rename a saved search, select it, enter a new name in the Name field, and click OK.
To delete a saved search, select it and click Delete.
After clicking Search, a modal dialog will display the results of the search. Hovering over the main link will show the last crawled date. Figure 14-54 shows typical results.
Note: If a search application, such as Finance or HCM, is down and does not respond to the search request within a pre-determined period of time, the search results will display WebCenter objects such as wikis and blogs. In Figure 14-54, the first Category is displayed expanded. The other Categories will need to be expanded manually. To show exact result counts, launch SES and select Global Settings > Query Configuration, and click the Exact count radio button, as shown in Figure 14-55. Note that SES warns against this for performance reasons. See the Oracle Secure Enterprise Search Administration Online Help.
This API, shown in Example 14-59, is available from the
oracle.apps.fnd.applcore.globalSearch.ui package in the
jdev/oaext/adflib/UIComponents-Viewcontroller.jar and is the only public API supported by Oracle Fusion Applications Search.
Example 14-59 Oracle Fusion Applications Search API
Create the component and have an
actionListener to a backing bean, as shown in Example 14-60. Note that GlobalSearchUtilBean is just an example, not a real bean.
Example 14-60 Creating a Component with actionlistener to Backing Bean
<af:commandButton </af:commandButton>
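Filled out, the button might look like the following sketch (the component id and the listener method name runGlobalSearch on the example bean are illustrative assumptions):
<af:commandButton text="Search" id="cb1"
                  actionListener="#{GlobalSearchUtilBean.runGlobalSearch}"/>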
From that backing bean, you can call the Oracle Fusion Applications Search API to run the search, as shown in Example 14-61.
Example 14-61 Calling the Oracle Fusion Applications Search API from a Backing Bean
For more information, see Chapter 27, "Creating Searchable Objects."
To run the UI Shell and Oracle Fusion Applications Search, follow the setup instructions for running Applications Core under WebLogic Server in Chapter 2, "Setting Up Your Development Environment" and the instructions on how to set up a UI Shell page, menu entries and task flows from Section 14.1, "Introduction to Implementing the UI Shell". This should give you a running UI Shell project.
Add the SearchDB database connection to the project. For more information about creating the SearchDB connection, see Section 31.6.1, "How to Create the SearchDB Connection on Oracle WebLogic Server Instance".
The Crawled Objects Project lets you crawl your Search view objects. Example 14-62 shows the UNIX version; the DOS version will be similar.
Example 14-62
Applications Core provides a view object for Oracle WebCenter Tags. This view object is available in the ORACLE_HOME/jdeveloper/jdev/oaext/adflib/Tags-Model.jar library jar.
You may use this view object using a view-link and a pre-defined Search Plugin to enable the crawling of Oracle WebCenter Tags, both in initial and incremental (someone has updated the tags) crawls.
Steps:
Create your Searchable view object as normal. Example 14-63 uses a Searchable view object over FND_LOOKUPS_VL in the query.
Example 14-63 Searchable View Object Over FND_LOOKUPS_VL
The resource id is essentially a dot-separated primary key of the entity.
These two values will be used when setting up tags in your regular UI and will need to match.
For example, if you have a page with a form and tag button, the Web Center tag would be set up as shown in Example 14-64.
Example 14-64 Setting up the WebCenter Tag
<af:panelFormLayout <tag:taggingButton <af:region <af:panelLabelAndMessage <af:outputText </af:panelLabelAndMessage>
See Section 14.10, "Implementing Tagging Integration" for setting up Tags in your UI. Figure 14-56 shows the attributes for lookup types.
Note: Do not forget to mark your key columns and make sure the order is consistent between the view object and the
tag:taggingButton.resourceId attribute.
Figure 14-57 shows the attributes for Searchable view object lookup types.
Add a view link to the TagSVO (Service view object) linking the Search view object and Applications Core Tag view object.
The view link should look similar to Figure 14-58.
Update the Body field to include the Tags of the child view object in the relevant position in the String as defined by your product management.
This will be an expression of the form <accessor Name>.Tag, such as tagSVO.Tag.
How to Do an Incremental Crawl
To do an incremental crawl:
Update the Searchable View Object Search Plugin field (see Figure 14-58, "View Link Example") to "oracle.apps.fnd.applcore.search.TagSearchPlugin", or as shown in Example 14-65, create a subclass so that you can incorporate your security rules.
Example 14-65 Creating a Subclass
package oracle.apps.fnd;

import oracle.apps.fnd.applcore.search.TagSearchPlugin;

public class WlsTestTagSearchPlugin extends TagSearchPlugin
{
    // All implementation through the super class, or override methods important to you.
    // Be careful if implementing
    //   public Iterator getChangeList(SearchContext ctx, String changeType)
    // to call super(ctx, changeType) to get the applcore functionality.
}
Make sure you add a parameter passing the service Id of the Search view object. This may be done by clicking the LOV symbol next to the Search Plugin Field. See Figure 14-59.
There are two parameters, shown in Table 14-11, that may be passed to the plugin. For more information, see Chapter 26, "Getting Started with Oracle Enterprise Crawl and Search Framework."
As shown in Figure 14-60 and Figure 14-61, ECSF searchable objects support two distinct Action Types: URL and Task. See also Figure 14-62, Figure 14-63, and Table 14-12.
This will be shown in the search results with the title given in the Title field. When clicked, a new browser tab or window will open with this URL.
Figure 14-63 shows the Task action type; its parameters are described in Table 14-13. Note that, although this table resembles Table 14-14, it presents the use case that the majority of users will use. The information in Table 14-14 is for a very small use case.
Do not use double quotes around the groovy expressions; use single quotes instead.
Caution:If you have a searchable view object with a task search action, the parameters passed to the task flow from
FndUIShellController.navigate(...)will be Strings, not the native type of the view object attributes. You must ensure these values are converted from their native type to a String (in the
navTaskParametersList) and back correctly (in your task flow).
For Integer types, this is largely automatic (as long as you reference the parameter as a String in the task flow), but for dates and decimals, care should be taken. For preferred navigation, multiple values may be supplied for each parameter, and they are delimited by the caret "^" character. In this case, all parameters must have the same number of delimited parts.
This additional configuration allows the values to be split on the caret to produce an ordered list of navigation targets. The navigation logic is:
If there is only one target defined, use it with no permission check.
For each target, determine if the user can navigate to the page and task flow.
If a navigable target is in the current view, use it.
Take the first navigable target.
Whatever the outcome of the permissions check, the developer must ensure that at least one target is navigable, otherwise users will be presented with a blank page when they click the search result.
The parameters and descriptions for Preferred Navigation are shown in Table 14-14. Note that, although this table resembles Table 14-13, it presents a more complicated use case in which teams want to do a security check, and navigate the user to the most secure endpoint (the first allowed one in the list). The meaning of these columns changes with the caret delimitation; that is, a caret-delimited list of the old values, as well as two new parameters. Most users only need to use the information in Table 14-13.
Normally,
taskFlowID uses the format <path><name>.xml#<name>; for instance taskFlowID="/WEB-INF/CaseDetails.xml#CaseDetails". However, Oracle Fusion Applications Search has
taskFile and
taskName attributes as shown in Figure 14-63. The Oracle Fusion Applications Search code will merge them, adding the "#," so they become <taskFile>#<taskName>.
Parameters are always passed as Parameter Name=value. Often, it is either a literal value or an expression such as #{pageFlowScope.val}. Example 14-66 shows how it is done, passing four parameters.
Example 14-66
The following state is not restored to the user:
Exact expanded groups in the result at the time of save
Scroll positions within a group
Full LOV expansion state of attribute filters
When ECSF returns facet information, it returns only facet entries that contain results. The search results display in a multi-tab format.
The tabs are named Applications, where all Search view objects and Oracle WebCenter results will reside, plus one tab for each external category whose lower-case category_name matches a known prefix, such as bi_%.
Although the formatting of Oracle Business Intelligence results will be slightly different, no developer uptake is required.
Oracle Secure Enterprise Search (SES) Setup: Admin123
Authorization Tab:
HTTP endpoint for authorization:
User ID: Administrator
Password: Admin123
Business Component: oracle.biee.search.BISearchableTreeObject
Display URL Prefix:
where the IP address is your Oracle Business Intelligence server installation, and the username/password are for a sufficiently authorized Oracle Business Intelligence user. Leave all other values at default.
Source Group
Create a source group (SES Searchtab > Source groups) and name it bi_<some code name>.
It must start with bi_ so Oracle Fusion Applications Search can recognize it as an Oracle Business Intelligence category.
You may go into Global Settings and translate the group name so the users see a more friendly name.
Import the group as an external category into ECSF. See "Importing Source Group into ECSF".
Searching
When Searching, the Oracle Business Intelligence results will display in a separate tab, as shown in Figure 14-64.
If there are only Oracle Business Intelligence, or only Oracle Fusion Applications/Oracle WebCenter categories selected, only the one tab appropriate to those categories displays. Otherwise, you can search both and tab between the results.
When you click a link, you will be redirected to Oracle Business Intelligence. If you do not have a consolidated OID setup, you will be asked to log in again.
To set up and crawl an Oracle WebCenter environment, see "Managing the Search Service" in the Oracle Fusion Middleware Administrator's Guide for Oracle WebCenter. The source group needs to be imported via the cmdLineAdmin tool or Oracle Enterprise Manager Fusion Middleware Control. Follow these steps if you use the tool:
Launch the cmdLineAdmin tool. Its prompt will display.
Issue this command at the prompt:
> manage instance 124 (where 124 differs for each developer).
The prompt will change to show that an instance is being managed.
Enter this command:
Instance: 124> list external categories
The external category information displays. Make sure you have a user defined that is common across both Oracle Fusion Applications and Oracle WebCenter.
For instance, you can create an fmwadmin user on the Oracle Fusion Applications side to do this, and Oracle WebCenter would be set up with the same OID.
Using two different authentication stores will mean you get multiple logins when clicking results.
Product teams can create a link in the task flow to navigate to a different UI Shell page. Because navigation can occur across different web applications, a single consistent API performs browser redirect. See Table 14-15. This API is exposed as the FndUIShellController.navigate Data Control method.
Note:When launching.
Developers will drag and drop the Data Control method on page fragments to create links to invoke navigation.
Expand the Data Controls and select the
navigate item, as shown in Figure 14-14.
Drag
navigate and drop it onto the page fragment. When you do, the Applications Context menu shown in Figure 14-15 displays so you can choose one of the three options.
Developers can specify a task flow to load on the target page. Page level parameters can also be specified.
Certain parameters, summarized in Table 14-16, can be passed using the Navigate API's argument list.
The signature and Javadoc of the method are shown in Example 14-67.
Example 14-67 Navigate Method Signature and Javadoc
webApp determines the web application that the Navigate API needs to navigate to.
viewId determines the view activity within the target web application.
pageParametersList defines custom URL parameters that product teams can specify. Example 14-68 shows how to set up these two parameters.
Example 14-68
Navigation across web applications requires the ASK deployment tables. These tables are populated at deployment time through Functional Setup. To navigate across web applications, product teams must specify the
webApp parameter correctly. There are no design or compile time checks that can catch an invalid value for the
webApp parameter. It will throw null pointer exceptions at run time only.
Note that you need to pass the DEPLOYED_MODULE_NAME in the ASK_DEPLOYED_MODULES table as the webApp parameter. The DEPLOYED_MODULE_NAME, by standard, should be the same as the context root of the application to which you are trying to navigate.
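A methodAction binding for navigate follows the same drag-and-drop pattern as the openSubTask binding shown earlier. The sketch below only illustrates the shape; the parameter values, the exact set of NamedData entries, and the semicolon delimiter in pageParametersList are assumptions to adapt to your own flows:
<methodAction id="navigate" RequiresUpdateModel="true" Action="invokeMethod"
              MethodName="navigate" IsViewObjectMethod="false"
              DataControl="FndUIShellController"
              InstanceName="FndUIShellController.dataProvider">
  <NamedData NDName="viewId" NDType="java.lang.String" NDValue="/DemoWorkArea"/>
  <NamedData NDName="webApp" NDType="java.lang.String" NDValue="DemoAppSource"/>
  <NamedData NDName="pageParametersList" NDType="java.lang.String"
             NDValue="Param1=value1;Param2=value2"/>
</methodAction>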
When there are pending changes in the UI Shell Main Area, and the user is navigating out of the page, or is refreshing the tab or taskflow, a warning of pending changes is shown. The following actions trigger the warning:
Launching a taskflow in MainArea (with
reuseInstance=true and either
forceRefresh=true or with different parameters).
Closing a tab by clicking the close icon on the tab.
Closing the currently focused tab by using
closeMainTask.
Relaunching a taskflow by using
openMainTask (with
reuseInstance=true and either
forceRefresh=true or with different parameters).
Relaunching a taskflow using
navigate (navigating within the same web application and
viewId).
Navigating to a different work area or web application using
navigate.
Search Panel and Warning of Pending Changes
Search is treated as a special case and no warning for pending changes is shown when a user enters some data in a query panel provided by the Application Development Framework. But, from the search results page, drilling down to a subflow and making the flow dirty marks the subflow as dirty. When a command link/button performs navigation declaratively, developers should add a clientListener on it, as shown in Example 14-69.
Example 14-69 Adding the clientListener on a commandButton
<af:commandButton <af:clientListener </af:commandButton>
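Filled out, the pattern looks roughly like the following sketch; the JavaScript function name passed in the method attribute is a placeholder, and the actual function to use is the one specified by the UI Shell for pending-changes checks:
<af:commandButton text="Done" id="cb1" action="done">
  <af:clientListener type="action" method="handlePendingChanges"/>
</af:commandButton>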
When a command link/button invokes Data Control APIs programmatically, developers should add a client listener on the command link/button, as shown in Example 14-70.
Example 14-70
Product teams should also add the entry shown in Example 14-71 to their main page fragment's pageDef whose taskflow is attached to the tab in MainArea.
Example 14-71
To suppress the warning of pending changes for a flow, use the entry shown in Example 14-72.
Example 14-72 Suppressing warning of pending changes for a flow
The Oracle Fusion Applications home pages span multiple J2EE web applications. A given home page can be hosted on any one of these distinct J2EE web applications. A tab click on a given home page therefore needs to be able to navigate across web applications; see Section 14.5, "Working with the Global Menu Model."
When running a home page JSPX, the page should look similar to Figure 14-67. See Example 14-73.
Example 14-73
A managed bean, shown in Example 14-75, has been added to adfc-config.xml for the viewScope Hashmap.
Example 14-75
For more information, see Chapter 15, "Working with Task Flow Activities," of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
Example 14-76 shows a sample entry in adfc-config.xml for such a view activity.
Example 14-76 14-77 shows a sample entry in the page definition for the JSPX file for passing the viewScope values into the Context Area task flow as input parameter.
Example 14-77
Task flows in the regional and main areas can access this in pageFlowScope (of the main/regional area container task flow owned by Applications Core) for the appropriate input parameters through the menu model and openMainTask API.
Example 14-78 shows a sample entry for a child item node for defaultMain task in the menu.xml to pass the appropriate deptno in the single object context to a task flow that is initialized based on this input value.
Example 14-78 14-69, ref displays, displays, 14-79.
Example 14-79 14-80.
Example 14-80
For more information, see the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
#include <FXPacker.h>
Inheritance diagram for FX::FXPacker:
Each time a child is placed, the remaining space is decreased by the amount of space taken by the child window. The side against which a child is placed is determined by the LAYOUT_SIDE_TOP, LAYOUT_SIDE_BOTTOM, LAYOUT_SIDE_LEFT, and LAYOUT_SIDE_RIGHT hints given by the child window. Other layout hints from the child are observed as far as sensible. So for example, a child placed against the right edge can still have LAYOUT_FILL_Y or LAYOUT_TOP, and so on. The last child may have both LAYOUT_FILL_X and LAYOUT_FILL_Y, in which case it will be placed to take all remaining space.
See also:
I know it’s been a little while since my last post, and I apologize. I’ll try and keep the posts a little more frequent moving forward.
In the last post, we briefly encountered barycentric coordinates and loosely defined them as the coefficients of an affine combination. While that’s true, we can do better. We can define a more precise definition, and we can take a closer look at what they really mean, both numerically and geometrically. That’s the topic of this post, as well as taking a brief look at how we could use them in a real world game programming context.
First, let’s take a look at the formal definition of the coordinates, then we’ll consider a slightly refactored version that works better for our situations. Consider a triangle ABC (see Figure 1), and then imagine a little weight at each vertex. We can assign each weight as having 100% weight contribution from that vertex, and 0 contribution from the other vertices. So, for point A that’s (1, 0, 0), for point B it’s (0, 1, 0), and for point C it’s (0, 0, 1). This means that if you’re at the vertex, you only get the weight from that 1 vertex. Anywhere else within the triangle would be a combination of these weights. The barycenter of the triangle is the point inside the triangle where you could balance the weights. In other words, the weights would be contributing evenly to that point. Assuming that the weights at each vertex are equal (1), this would be the mean of the weights (or vertices):
The barycentric coordinates for the barycenter then become (1/3, 1/3, 1/3) given our equation above. Notice that this is exactly the equation of an affine combination, which is why we stated that the coefficients of an affine combination are also the barycentric coordinates. The dashed lines in the image represent the barycentric axes. A barycentric axis starts on a triangle edge, where the weight for the opposite vertex is 0. They then extend through the barycenter of the triangle to the opposite vertex, where the weight for that vertex is 1. Notice that the other coordinates in the base of each axis are 1/2. This is just a coincidence since we happen to be looking at an equilateral triangle. This won’t hold true for other kinds of triangles.
Now, what else can we observe? Well, for each axis, our values only extend from 0 to 1, since they are weights of that vertex. In other words, a value less than 0 or greater than 1 would be outside of our simplex (triangle in this case). Furthermore, the sum of the coordinates is necessarily equal to 1. Since these coordinates represent the amounts of each weight that you observe at that point, they are percentages, and therefore must sum up to the total. Otherwise, you’d be missing some of the weight from the system. These two observations will be extremely helpful as we examine uses in game code. The other thing I wanted to mention here is that if the restriction that coefficients of an affine combination must sum up to 1 didn’t make sense after my explanation before, I hope that looking at it from the perspective of barycentric coordinates helps to justify the restriction.
Now that we’ve examined the formal definition and use of barycentric coordinates, let’s take a slightly modified look at them, and see how we can make them more useful to us as game developers. We saw in the last post that any affine combination could be refactored and expressed as a single point, or origin, added to a linear combination. Let’s take our barycentric equation (which is an affine combination) and refactor it in this way now, using s, r, and t as our barycentric coordinates (coefficients):
We’ve substituted u and v as (B-A) and (C-A), respectively. Relating our result to the figure above, we see that we’ve picked A as our local origin, and two of the triangle’s edges as u and v. Our barycentric coordinates have been reduced to just r and t, and are now expressed relative to the local origin A. It’s important to note that mathematically, this ‘reduced’ form of the barycentric coordinates is equivalent to the formal version, but this format is much more usable to us. Because r and t are still barycentric coordinates, they must still fall between 0 and 1, and their sum still cannot exceed 1. Notice that we said exceed this time, and not sum up to 1. This is a subtle difference than before, and it can be explained like this: previously, we had the sum of s, r, and t = 1. This still must hold true. However, we’ve made s implicit in our new reduced form, and therefore it is not directly included in our sum. If r and t sum up to 1, then s is 0. However, r and t can sum to less than 1, and s will be equal to the remainder (see the second and third steps of how we arrived to our reduced form). In summary:
Now let’s see how we can use this. Imagine we’re trying to write a function in our game that determines whether or not a point is contained within a triangle. This is actually quite the common problem to solve in collision detection. In two dimensions, it might be a direct collision detection query. In three dimensions, we normally will first determine if a point is in the plane of a triangle, and if so, then reduce it to a two dimensional problem exactly like the normal 2D case. In either situation, we need a robust way to determine if a point is contained within a triangle. We can use barycentric coordinates in the reduced form to solve this, by taking the following steps:
1. Pick a vertex of the triangle to be our local origin, which we’ll refer to as A
2. Compute u and v using the differences of the other two vertices and our origin as we did above (ex: u = B-A, v = C-A)
3. Compute the r and t barycentric values for our point P with respect to A
4. Check that r and t are both within 0 and 1, and that their sum is less than or equal to 1. If so, return true. Otherwise, the point is outside, return false.
It’s actually quite straightforward, with the exception that we haven’t yet discussed how you’d complete step 3, computing the barycentric coordinates. Let’s take a look at that now, and then with r and t computed, we’ll be able to complete the function. There are a couple of approaches to finding the barycentric coordinates in this step. The simpler of the two uses some identities and properties of vectors we’ve covered up to this point, which is why I’ll choose to show that one now. However, after we take a good look at matrices and solving systems of linear equations with them, we’ll see that there’s a more efficient way to compute them. See Figure 2 to see the problem we’re trying to solve.
Let’s start by refactoring our equation slightly, and move the A over to the other side:
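P - A = ru + tv,  or  w = ru + tv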
We’ve replaced P-A with the vector w. We need to now solve for r and t. If we look at some of the vector rules that we covered earlier, we’ll remember that any vector crossed with itself (or any collinear vector) will result in the 0 vector. So, to eliminate t, let’s take the cross product of both sides with v:
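v × w = v × (ru + tv) = r(v × u) + t(v × v) = r(v × u),  so  r = (v × w) / (v × u)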
We can see that by having the cross product of v with itself go to the 0 vector, we were able to eliminate t from the equation and solve for r. We can repeat this same process for t to obtain:
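u × w = t(u × v),  so  t = (u × w) / (u × v)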
At this point, we can determine whether or not r and t are going to be negative. This is the first requirement of our test. The equations for r and t above represent ratios between two cross products. The two cross products each represent a vector, so in other words we’re taking the ratio of two vectors. The only way r or t can be negative is if these vectors point in opposite directions. So, let’s use the dot product to determine if the numerator and denominator in each case point in the same direction:
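sign(r) = sign( (v × w) · (v × u) ),  sign(t) = sign( (u × w) · (u × v) )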
The sign() of each side will be > 0 if the vectors are pointing the same direction, or < 0 if the vectors are pointing away from each other. At this point, if either is < 0, we can exit the function with a false result.
The next requirement we need to meet is that r and t must each be no greater than 1, and that their sum must also be no greater than 1. To do this, let’s take the norm of each of the equations above. Since we already know that r and t are non-negative, the norm of r and t is just r and t:
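r = ||v × w|| / ||v × u||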
Again, we can repeat this same process to solve for t, and we’ll get:
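t = ||u × w|| / ||u × v||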
Also, since swapping the order of a cross product doesn’t change the magnitude of the resulting vector, only it’s direction, then we can also say that
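||v × u|| = ||u × v||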
Which gives us a common denominator in our r and t formulas, so we only have to compute the value once. And there we have it, we now have r and t computed, which means we can complete our function. A sample implementation, written in C# and using XNA might look like this:
///<summary>
/// Determine whether a point P is inside the triangle ABC. Note, this function
/// assumes that P is coplanar with the triangle.
///</summary>
///<returns>True if the point is inside, false if it is not.</returns>
public static bool PointInTriangle(ref Vector3 A, ref Vector3 B, ref Vector3 C, ref Vector3 P)
{
// Prepare our barycentric variables
Vector3 u = B - A;
Vector3 v = C - A;
Vector3 w = P - A;
Vector3 vCrossW = Vector3.Cross(v, w);
Vector3 vCrossU = Vector3.Cross(v, u);
// Test sign of r
if (Vector3.Dot(vCrossW, vCrossU) < 0)
return false;
Vector3 uCrossW = Vector3.Cross(u, w);
Vector3 uCrossV = Vector3.Cross(u, v);
// Test sign of t
if (Vector3.Dot(uCrossW, uCrossV) < 0)
return false;
// At this point, we know that r and t are both >= 0.
// Therefore, as long as their sum is <= 1, each must also be <= 1
float denom = uCrossV.Length();
float r = vCrossW.Length() / denom;
float t = uCrossW.Length() / denom;
return (r + t <= 1);
}
And that concludes this post on barycentric coordinates, and one of their uses in game code. I hope that this post also served to solidify some of the previous information about affine combinations. Finally, it gave us a chance to use what we’ve covered so far to do something useful. I hope you enjoyed it, and I’ll be moving on to matrices next.
Thanks a lot, your blog is very helpful to me. Please keep it up -- I am looking forward to the physics stuff. Thanks again.
BakerCo
Hi,
Nice post. But I have a question. According to your post, it appears that r and t never go below 0 because you are only working with vector lengths. Ideally, when we test a point outside the triangle, either or both of the values of r and t must be < 0.
Hi Vektor, I think my last reply got lost so I'll reply again.
You're absolutely right. You pointed out a pretty big oversight on my part and I've corrected the blog to reflect the right solution. Since we are taking the norm of the equation, the signs of r and t got lost, so checking for > 0 at the end was meaningless. I've corrected it to determine the signs earlier before taking the norm, with an explanation of how we do that.
Thanks for catching that!
Great post. Finally I understood some math I was missing.
So would this code work in 3D, without projecting triangles to a 2D plane, or should I use something different? For example: use Cramer's rule to solve for 3D space?
Thanks Michael, glad that it helped.
If your goal is to test that the projection of a point onto a 3D triangle is inside that triangle, then you can make it work for 3D with very little modification. The points A, B, and C are all going to be coplanar since they are on a triangle. The values r and t are ratios, which hold up just fine in 3D as well. The only interesting part is the point P that you're testing. You'll want to find the projection of P onto the plane of ABC, say we call that Q, and then you can test that Q is inside of ABC using this method.
While I haven't tried it, it seems reasonable that you might be able to extend this further into 3D by considering a 3D point and a tetrahedron instead of a triangle. You could just pick a vertex A, and then take the 3 edge vectors that extend from that vertex, call them u, v, and w. Then you'd find the ratios a, b, and c along each edge vector (instead of r and t) using a similar derivation to the above.
return (r <= 1 && t <= 1 && r + t <= 1);
can be replaced by
return (r + t <= 1);
because r and t are both >=0.
Nice post!
And correct "piont" to "point" and "> 0" to ">= 0" in comment 🙂
Thanks for catching the spelling error 🙂
Yes, you're right. Since each are positive and we show their sum <= 1, we don't need to verify that r & t are individually <= 1. I've updated the post to reflect that.
Thanks!
I wrote a complete article about point in triangle test. It shows the barycentric, parametric and dot product based methods.
Then it deals with the accuracy problem occurring when a point lies exactly on one edge (with examples). Finally it exposes a complete new method based on point to edge distance.
totologic.blogspot.fr/…/accurate-point-in-triangle-test.html
Enjoy !
Nice article! I like the real world examples of accuracy problems.
Great article, thanks for writing this.
A C programming problem: if you have a triangle ABC where A(1,3), B(5,2), C(4,4), and you have a point D(2,3), write a program that will tell if the point D is inside, outside, or on the edge of the triangle.
Could you help me with this?
https://blogs.msdn.microsoft.com/rezanour/2011/08/07/barycentric-coordinates-and-point-in-triangle-tests/
Deferred Initcalls
For example, many digital cameras have USB buses, which need to be initialized in order for the camera to be used as a mass storage device (to download pictures to a desktop computer). However, this initialization does not have to happen during the kernel boot; it can be deferred until after the main application is up and running. When the system is ready, the deferred initcalls are triggered from user space by reading a proc file:
cat /proc/deferred_initcalls
This will cause the kernel to run all deferred initcalls, and the .init section memory is then freed by the kernel. Reading /proc/deferred_initcalls returns 0 if the deferred initcalls have not yet been run, and 1 on subsequent reads after they have run.
deferred USB initcall example
As a test, on an X86 desktop system, I deferred the initialization of the USB subsystem on a 2.6.27 kernel, by using deferred_module_init on the functions: ehci_hcd_init and uhci_hcd_init
This resulted in a total times savings of 530 milliseconds, during the kernel boot phase. (Of course, this time was used subsequently when the deferred initcalls were triggered later on.)
Specifically, I changed:
module_init(ehci_hcd_init)
to
deferred_module_init(ehci_hcd_init)
and
module_init(uhci_hcd_init)
to
deferred_module_init(uhci_hcd_init)
Patch
Here is the main deferred initcalls patch for 2.6.26, 2.6.27: Media:Deferred_initcalls.patch
For 2.6.28 the forward-ported patch is here: Media:Deferred_initcalls-2.6.28.patch
Here (inline) is a patch showing modification of USB and IDE initcalls
to be deferred initcalls:
(This patch is also available downloadable as: Media:Defer-usb-and-ide-initcalls.patch)
commit e7a5b8bb6a5d04054dec1e85d53bbe115059d0d0
Author: Tim Bird <tim.bird@am.sony.com>
Date: Fri Sep 12 11:35:58 2008 -0700

    Use deferred_module_init on long-probing IDE and USB modules.
    These modules were taking about 700 ms and 400 ms, respectively to initialize.
    On many embedded systems, these initializations can be done after major boot
    activity is completed, with no loss of functionality.

diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
index e9e32ed..cb2ebf3 100644
--- a/drivers/ata/ata_piix.c
+++ b/drivers/ata/ata_piix.c
@@ -1494,5 +1494,5 @@ static void __exit piix_exit(void)
 pci_unregister_driver(&piix_pci_driver);
 }
-module_init(piix_init);
+deferred_module_init(piix_init);
 module_exit(piix_exit);
diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
index 8409e07..44a8340 100644
--- a/drivers/usb/host/ehci-hcd.c
+++ b/drivers/usb/host/ehci-hcd.c
@@ -1107,7 +1107,7 @@ clean0:
 #endif
 return retval;
 }
-module_init(ehci_hcd_init);
+deferred_module_init(ehci_hcd_init);
 static void __exit ehci_hcd_cleanup(void)
 {
diff --git a/drivers/usb/host/uhci-hcd.c b/drivers/usb/host/uhci-hcd.c
index 3a7bfe7..9c27ef0 100644
--- a/drivers/usb/host/uhci-hcd.c
+++ b/drivers/usb/host/uhci-hcd.c
@@ -999,7 +999,7 @@ static void __exit uhci_hcd_cleanup(void)
 kfree(errbuf);
 }
-module_init(uhci_hcd_init);
+deferred_module_init(uhci_hcd_init);
 module_exit(uhci_hcd_cleanup);
 MODULE_AUTHOR(DRIVER_AUTHOR);
http://elinux.org/index.php?title=Deferred_Initcalls&diff=76046&oldid=8163
Object::Meta::Plugin::Host - hosts plugins that work like Object::Meta::Plugin. Can serve plugins of the kind described in Object::Meta::Plugin.
The host is not just simply a merged hash. It is designed to allow various plugins to provide similar capabilities - methods with conflicting namespace. Conflicting namespaces can coexist, and take precedence over one another. A possible scenario is to have various plugins for an image processor, which all define the method "process". They are all installed, ordered as the effect should be taken out, and finally atop them all a plugin which wraps them into a pipeline is set.
When a plugin's method is entered it receives, instead of the host object, a context object, particular to itself. It allows it access to its host, its sibling plugins, and so forth explicitly, while implicitly wrapping around the host, and emulating it with reordered priority - the current plugin is first in the list.
Such a model enables a dumb plugin to work quite happily with others, even those which may take its role. The only rule it needs to keep is that it accesses its data structures using
$self->self, and not
$self, because $self is the context object.
A more complex plugin, aware that it may not be peerless, could explicitly ask for the default (host defined) methods it calls, instead of its own. It can request to call a method on the plugin which succeeds it or precedes it in a certain method's stack. Additionally, by gaining access to the host object a plugin could implement a pipeline of calls quite easily, as described above. All it must do is call
$self->host->stack($method) and iterate that, omitting itself.
The interface aims to be simple enough to be flexible, trying for the minimum it needs to define to be useful, and creating workarounds for the limitations this minimum imposes.
The implementation is by no means optimized. I doubt it's fast, but I don't really care. It's supposed to create a nice framework for a large application, which needs to be modular.
Returns a hash ref, to a hash of methods => an array ref to a stack of plugins, for the method.
Takes a reference to a plugin, and sweeps the method tree clean of any of its occurrences.
Takes an export list, and unmerges it from the currently active one. If it's empty, calls
unplug. If something remains, it cleans out the stacks manually.
Grants access to the actual plugin object which was passed via the export list. Use for internal storage space.
The can method (e.g. UNIVERSAL::can) is depended on. Without it everything will break. If you try to plug something nonstandard into a host, and export something UNIVERSAL::can won't say is there, implement can yourself.
Just you wait. See
TODO for what I have in stock!
$self->self or $self->plugin. It reeks of lameness, and I fear it will be a psychological show stopper.
http://search.cpan.org/~nuffin/Object-Meta-Plugin-0.01/lib/Object/Meta/Plugin/Host.pm
jGuru Forums
Posted By: Anonymous
Posted On: Thursday, July 24, 2003 10:58 PM
I'm on windows 2k, using netbeans 3.5, j2se 1.4, tomcat 4.
in my project, i have a web module and it has a index.jsp and a Alpha.java and Alpha.class files in the WEB-INF/classes directory.
my index.jsp is:
<%@page contentType="text/html" import="Alpha" %>
my Alpha.java is:
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.io.Serializable;
public class Alpha extends Object implements Serializable {
private static final String PROP_SAMPLE_PROPERTY = "SampleProperty";
private String sampleProperty;
private PropertyChangeSupport propertySupport;
public Alpha() {
propertySupport = new PropertyChangeSupport( this );
}
public String getSampleProperty() {
return sampleProperty;
}
public void setSampleProperty(String value) {
String oldValue = sampleProperty;
sampleProperty = value;
propertySupport.firePropertyChange(PROP_SAMPLE_PROPERTY, oldValue, sampleProperty);
}
public void addPropertyChangeListener
(PropertyChangeListener listener) {
propertySupport.addPropertyChangeListener(listener);
}
public void removePropertyChangeListener
(PropertyChangeListener listener) {
propertySupport.removePropertyChangeListener(listener);
}
}
Alpha.java compiles fine. but when i try to compile index.jsp, i get:
work$jsp.java [3] '.' expected
import Alpha;
^
1 error
Errors compiling work.
Last week i was able to do something simple like this where i import the class name, and usebean:... was the same and it compiled fine. could it be netbeans? i mounted the class directory to make sure it's in the classpath.
Re: Netbeans won't let me import a javabean
Posted By: Christopher_Koenigsberg
Posted On: Friday, July 25, 2003 06:44 AM
Add an explicit package prefix e.g. "import mypkg.Alpha", and move the Alpha.java and Alpha.class under "mypkg/". Otherwise you are trying to play with the default implementation-dependent "unnamed package", and I hear this has never really been a good idea, but now newer versions of JDK and tools will explicitly not allow it at all anymore.
http://www.jguru.com/forums/view.jsp?EID=1103870
public class PeriodicTrigger extends java.lang.Object implements Trigger

PeriodicTrigger(long period, java.util.concurrent.TimeUnit timeUnit)
Create a trigger with the given period and time unit; see also setInitialDelay(long).

public void setInitialDelay(long initialDelay)
Specify the initial delay, expressed in the trigger's TimeUnit. If no time unit was explicitly provided upon instantiation, the default is milliseconds.

public void setFixedRate(boolean fixedRate)

public java.util.Date nextExecutionTime(TriggerContext triggerContext)
Specified by: nextExecutionTime in interface Trigger
Parameters: triggerContext - context object encapsulating last execution times and last completion time
Returns: null if the trigger won't fire anymore

public boolean equals(java.lang.Object obj)
Overrides: equals in class java.lang.Object

public int hashCode()
Overrides: hashCode in class java.lang.Object
http://docs.spring.io/spring-framework/docs/3.2.0.RC2/api/org/springframework/scheduling/support/PeriodicTrigger.html
package org.apache.myfaces.orchestra.conversation;

import java.io.Serializable;

/**
 * Provide information about the current conversation.
 * <p>
 * An instance of this type is stored in a thread-local variable to indicate
 * what the "current conversation state" is. The getConversation() method
 * can therefore be used to determine what conversation is currently active,
 * and getBeanName() can be used to determine what the most recently-invoked
 * conversation-scoped-bean was. This thread-local variable is maintained
 * via the CurrentConversationAdvice which wraps every conversation-scoped
 * bean and intercepts all method calls to the bean.
 * <p>
 * This object also records the fact that a specific bean is within a
 * specific conversation. This data is saved during serialization so
 * that on deserialize we know which conversation to reattach which
 * bean to.
 */
public class CurrentConversationInfo implements Serializable
{
    private final String conversationName;
    private final String beanName;

    /**
     * The conversation object itself is not serializable (it is a Spring proxy
     * object with various AOP advices attached to it). Therefore this reference
     * must be transient. After deserialisation, this member is of course null,
     * but is recalculated on demand in the getConversation method.
     * <p>
     * The beans that are <i>in</i> the conversation are hopefully serializable;
     * they are saved directly and then reattached to the new Conversation instance.
     */
    private transient Conversation conversation;

    public CurrentConversationInfo(Conversation conversation, String beanName)
    {
        this.conversation = conversation;
        this.conversationName = conversation.getName();
        this.beanName = beanName;
    }

    /**
     * The conversation the bean is associated with.
     */
    public Conversation getConversation()
    {
        if (conversation == null)
        {
            ConversationManager conversationManager = ConversationManager.getInstance();
            conversation = conversationManager.getConversation(conversationName);
        }
        return conversation;
    }

    /**
     * The bean name.
     */
    public String getBeanName()
    {
        return beanName;
    }
}
http://myfaces.apache.org/orchestra/myfaces-orchestra-core/xref/org/apache/myfaces/orchestra/conversation/CurrentConversationInfo.html
This article outlines an approach for using GUID values as primary keys/clustered indexes that avoids most of the normal disadvantages, adapting the COMB model for sequential GUIDs developed by Jimmy Nilsson in his article The Cost of GUIDs as Primary Keys. While that basic model has been used by a variety of libraries and frameworks (including NHibernate), most implementations seem to be specific to Microsoft SQL Server. This article attempts to adapt the approach into a flexible system that can be used with other common database systems such as Oracle, PostgreSQL, and MySQL, and also addresses some of the eccentricities of the .NET Framework in particular.
Historically, a very common model for database design has used sequential integers to identify a row of data, usually generated by the server itself when the new row is inserted. This is a simple, clean approach that's suitable for many applications.
However, there are also some situations where it's not ideal. With the increasing use of Object-Relational Mapping (ORM) frameworks such as NHibernate and the ADO.NET Entity Framework, relying on the server to generate key values adds a lot of complication that most people would prefer to avoid. Likewise, replication scenarios also make it problematic to rely on a single authoritative source for key value creation -- the entire point is to minimize the role of a single authority.
One tempting alternative is to use GUIDs as key values. A GUID (globally unique identifier), also known as a UUID, is a 128-bit value that carries a reasonable guarantee of being unique across all of space and time. Standards for creating GUIDs are described in RFC 4122, but most GUID-creation algorithms in common use today are either essentially a very long random number, or else combine a random-appearing component with some kind of identifying information for the local system, such as a network MAC address.
GUIDs have the advantage of allowing developers to create new key values on the fly without having to check in with the server, and without having to worry that the value might already be used by someone else. At first glance, they seem to provide a good answer to the problem.
So what's the issue? Well, performance. To get the best performance, most databases store rows in what's known as a clustered index, meaning that the rows in a table are actually stored on disk in a sorted order, usually based on a primary key value. This makes finding a single row as simple as doing a quick lookup in the index, but it can make adding new rows to the table very slow if their primary key doesn't fall at the end of the list. For example, consider the following data:
Pretty simple so far: the rows are stored in order according to the value of the ID column. If we add a new row with an ID of 8, it's no problem: the row just gets tacked on to the end.
But now suppose we want to insert a row with an ID of 5:
Rows 7 and 8 have to be moved down to make room. Not such a big deal here, but when you're talking about inserting something into the middle of a table with millions of rows, it starts becoming an issue. And when you want to do it a hundred times a second, it can really, really add up.
And that's the problem with GUIDs: they may or may not be truly random, but most of them look random, in the sense that they're not usually generated to have any particular kind of order. For that reason, it's generally considered a very bad practice to use a GUID value as part of a primary key in a database of any significant size. Inserts can be very slow and involve a huge amount of unnecessary disk activity.
So, what's the solution? Well, the main problem with GUIDs is their lack of sequence. So, let's add a sequence. The COMB approach (which stands for COMBined GUID/timestamp) replaces a portion of the GUID with a value which is guaranteed to increase, or at least not decrease, with each new value generated. As the name implies, it does this by using a value generated from the current date and time.
To illustrate, consider this list of typical GUID values:
Now consider this hypothetical list of special GUID values:
00000001-a411-491d-969a-77bf40f55175
00000002-d97d-4bb9-a493-cad277999363
00000003-916c-4986-a363-0a9b9c95ca52
00000004-f827-452b-a3be-b77a3a4c95aa
The first block of digits has been replaced with an increasing sequence -- say, the number of milliseconds since the program started. Inserting a million rows of these values wouldn't be so bad, since each row would simply be appended to the end of the list and not require any reshuffling of existing data.
Now that we have our basic concept, we need to get into some of the details of how GUIDs are constructed and how they're handled by different database systems.
128-bit GUIDs are composed of four main blocks, called Data1, Data2, Data3, and Data4, which you can see in the example below:
11111111-2222-3333-4444-444444444444
Data1 is four bytes, Data2 is two bytes, Data3 is two bytes, and Data4 is eight bytes (a few bits of Data3 and the first part of Data4 are reserved for version information, but that's more or less the structure).
Most GUID algorithms in use today, and especially those used by the .NET Framework, are pretty much just fancy random number generators (Microsoft used to include the local machine's MAC address as part of the GUID, but discontinued that practice several years ago due to privacy concerns). This is good news for us, because it means that playing around with different parts of the value is unlikely to damage the value's uniqueness all that much.
But unfortunately for us, different databases handle GUIDs in different ways. Some systems (Microsoft SQL Server, PostgreSQL) have a built-in GUID type which can store and manipulate GUIDs directly. Databases without native GUID support have different conventions on how they can be emulated. MySQL, for example, most commonly stores GUIDs by writing their string representation to a char(36) column. Oracle usually stores the raw bytes of a GUID value in a raw(16) column.
It gets even more complicated, because one eccentricity of Microsoft SQL Server is that it orders GUID values according to the least significant six bytes (i.e. the last six bytes of the Data4 block). So, if we want to create a sequential GUID for use with SQL Server, we have to put the sequential portion at the end. Most other database systems will want it at the beginning.
Looking at the different ways databases handle GUIDs, it's clear that there can be no one-size-fits-all algorithm for sequential GUIDs; we'll have to customize it for our particular application. After doing some experimentation, I've identified three main approaches that cover pretty much all use cases:
(Why aren't GUIDs stored as strings the same as GUIDs stored as bytes? Because of the way .NET handles GUIDs, the string representation may not be what you expect on little-endian systems, which is most machines likely to be running .NET. More on that later.)
I've represented these choices in code as an enumeration:
public enum SequentialGuidType
{
SequentialAsString,
SequentialAsBinary,
SequentialAtEnd
}
Now we can define a method to generate our GUID which accepts one of those enumeration values, and tailor the result accordingly:
public Guid NewSequentialGuid(SequentialGuidType guidType)
{
...
}
But how exactly do we create a sequential GUID? Exactly which part of it do we keep "random," and which part do we replace with a timestamp? Well, the original COMB specification, tailored for SQL Server, replaced the last six bytes of Data4 with a timestamp value. This was partially out of convenience, since those six bytes are what SQL Server uses to order GUID values, but six bytes for a timestamp is a decent enough balance. That leaves ten bytes for the random component.
What makes the most sense to me is to start with a fresh random GUID. Like I just said, we need ten random bytes:
var rng = new System.Security.Cryptography.RNGCryptoServiceProvider();
byte[] randomBytes = new byte[10];
rng.GetBytes(randomBytes);
We use RNGCryptoServiceProvider to generate our random component because System.Random has some deficiencies that make it unsuitable for this purpose (the numbers it generates follow some identifiable patterns, for example, and will cycle after no more than 2^32 iterations). Since we're relying on randomness to give us as much of a guarantee of uniqueness as we can realistically have, it's in our interests to make sure our initial state is as strongly random as it can be, and RNGCryptoServiceProvider provides cryptographically strong random data.
(It's also relatively slow, however, and so if performance is critical you might want to consider another method -- simply initializing a byte array with data from Guid.NewGuid(), for example. I avoided this approach because Guid.NewGuid() itself makes no guarantees of randomness; that's just how the current implementation appears to work. So, I choose to err on the side of caution and stick with a method I know will function reliably.)
Okay, we now have the random portion of our new value, and all that remains is to replace part of it with our timestamp. We decided on a six-byte timestamp, but what should it be based on? One obvious choice would be to use DateTime.Now (or, as Rich Andersen points out, DateTime.UtcNow for better performance) and convert it to a six-byte integer value somehow. The Ticks property is tempting: it returns the number of 100-nanosecond intervals that have elapsed since January 1, 0001 A.D. However, there are a couple hitches.
First, since Ticks returns a 64-bit integer and we only have 48 bits to play with, we'd have to chop off two bytes, and the remaining 48 bits' worth of 100-nanosecond intervals gives us less than a year before it overflows and cycles. This would ruin the sequential ordering we're trying to set up, and destroy the performance gains we're hoping for, and since many applications will be in service longer than a year, we have to use a less precise measure of time.
The other difficulty is that DateTime.UtcNow has a limited resolution. According to the docs, the value might only update every 10 milliseconds. (It seems to update more frequently on some systems, but we can't rely on that.)
The good news is, those two hitches sort of cancel each other out: the limited resolution means there's no point in using the entire Ticks value. So, instead of using ticks directly, we'll divide by 10,000 to give us the number of milliseconds that have elapsed since January 1, 0001, and then the least significant 48 bits of that will become our timestamp. I use milliseconds because, even though DateTime.UtcNow is currently limited to 10-millisecond resolution on some systems, it may improve in the future, and I'd like to leave room for that. Reducing the resolution of our timestamp to milliseconds also gives us until about 5800 A.D. before it overflows and cycles; hopefully this will be sufficient for most applications.
Before we continue, a short footnote about this approach: using a 1-millisecond-resolution timestamp means that GUIDs generated very close together might have the same timestamp value, and so will not be sequential. This might be a common occurrence for some applications, and in fact I experimented with some alternate approaches, such as using a higher-resolution timer such as System.Diagnostics.Stopwatch, or combining the timestamp with a "counter" that would guarantee the sequence continued until the timestamp updated. However, during testing I found that this made no discernible difference at all, even when dozens or even hundreds of GUIDs were being generated within the same one-millisecond window. This is consistent with what Jimmy Nilsson encountered during his testing with COMBs as well. With that in mind, I went with the method outlined here, since it's far simpler.
Here's the code:
long timestamp = DateTime.UtcNow.Ticks / 10000L;
byte[] timestampBytes = BitConverter.GetBytes(timestamp);
Now we have our timestamp. However, since we obtained the bytes from a numeric value using BitConverter, we have to account for byte order.
if (BitConverter.IsLittleEndian)
{
Array.Reverse(timestampBytes);
}
We have the bytes for the random portion of our GUID, and we have the bytes for the timestamp, so all that remains is to combine them. At this point we have to tailor the format according to the SequentialGuidType value passed in to our method. For SequentialAsBinary and SequentialAsString types, we copy the timestamp first, followed by the random component. For SequentialAtEnd types, the opposite.
byte[] guidBytes = new byte[16];
switch (guidType)
{
case SequentialGuidType.SequentialAsString:
case SequentialGuidType.SequentialAsBinary:
Buffer.BlockCopy(timestampBytes, 2, guidBytes, 0, 6);
Buffer.BlockCopy(randomBytes, 0, guidBytes, 6, 10);
break;
case SequentialGuidType.SequentialAtEnd:
Buffer.BlockCopy(randomBytes, 0, guidBytes, 0, 10);
Buffer.BlockCopy(timestampBytes, 2, guidBytes, 10, 6);
break;
}
So far, so good. But now we get to one of the eccentricities of the .NET Framework: it doesn't just treat GUIDs as a sequence of bytes. For some reason, it regards a GUID as a struct containing a 32-bit integer, two 16-bit integers, and eight individual bytes. In other words, it regards the Data1 block as an Int32, the Data2 and Data3 blocks as two Int16s, and the Data4 block as a Byte[8].
What does this mean for us? Well, the main issue has to do with byte ordering again. Since .NET thinks it's dealing with numeric values, we have to compensate on little-endian systems -- BUT! -- only for applications that will be converting the GUID value to a string, and have the timestamp portion at the beginning of the GUID (the ones with the timestamp portion at the end don't have anything important in the "numeric" parts of the GUID, so we don't have to do anything with them).
This is the reason I mentioned above for distinguishing between GUIDs that will be stored as strings and GUIDs that will be stored as binary data. For databases that store them as strings, ORM frameworks and applications will probably want to use the ToString() method to generate SQL INSERT statements, meaning we have to correct for the endianness issue. For databases that store them as binary data, they'll probably use Guid.ToByteArray() to generate the string for INSERTs, meaning no correction is necessary. So, we have one last thing to add:
if (guidType == SequentialGuidType.SequentialAsString &&
BitConverter.IsLittleEndian)
{
Array.Reverse(guidBytes, 0, 4);
Array.Reverse(guidBytes, 4, 2);
}
Now we're done, and we can use our byte array to construct and return a GUID:
return new Guid(guidBytes);
To use our method, we first have to determine which type of GUID is best for our database and any ORM framework we're using. As a quick rule of thumb for some common database types (although these might vary depending on the details of your application): databases that store GUIDs as strings, such as MySQL (and SQLite when GUIDs are stored as text), want SequentialAsString; databases that store GUIDs as raw bytes, such as Oracle and PostgreSQL (or SQLite with its BinaryGUID option), want SequentialAsBinary; and Microsoft SQL Server wants SequentialAtEnd.
Here are a few examples generated by our new method.
First, NewSequentialGuid(SequentialGuidType.SequentialAsString):
39babcb4-e446-4ed5-4012-2e27653a9d13
39babcb4-e447-ae68-4a32-19eb8d91765d
39babcb4-e44a-6c41-0fb4-21edd4697f43
39babcb4-e44d-51d2-c4b0-7d8489691c70
As you can see, the first six bytes (the first two blocks) are in sequential order, and the remainder is random. Inserting these values into a database that stores GUIDs as strings (such as MySQL) should provide a performance gain over non-sequential values.
Next, NewSequentialGuid(SequentialGuidType.SequentialAtEnd):
a47ec5e3-8d62-4cc1-e132-39babcb4e47a
939aa853-5dc9-4542-0064-39babcb4e47c
7c06fdf6-dca2-4a1a-c3d7-39babcb4e47d
c21a4d6f-407e-48cf-656c-39babcb4e480
As we'd expect, the last six bytes are sequential, and the rest is random. I have no idea why SQL Server orders uniqueidentifier indexes this way, but it does, and this should work well.
And finally, NewSequentialGuid(SequentialGuidType.SequentialAsBinary):
b4bcba39-58eb-47ce-8890-71e7867d67a5
b4bcba39-5aeb-42a0-0b11-db83dd3c635b
b4bcba39-6aeb-4129-a9a5-a500aac0c5cd
b4bcba39-6ceb-494d-a978-c29cef95d37f
When viewed here in the format ToString() would output, we can see that something looks wrong. The first two blocks are "jumbled" due to having all their bytes reversed (this is due to the endianness issue discussed earlier). If we were to insert these values into a text field (like they would be under MySQL), the performance would not be ideal.
However, this problem is appearing because the four values in that list were generated using the ToString() method. Suppose instead that the same four GUIDs were converted into a hex string using the array returned by Guid.ToByteArray():
39babcb4eb5847ce889071e7867d67a5
39babcb4eb5a42a00b11db83dd3c635b
39babcb4eb6a4129a9a5a500aac0c5cd
39babcb4eb6c494da978c29cef95d37f
This is how an ORM framework would most likely generate INSERT statements for an Oracle database, for example, and you can see that, when formatted this way, the sequence is again visible.
So, to recap, we now have a method that can generate sequential GUID values for any of three different database types: those that store as strings (MySQL, sometimes SQLite), those that store as binary data (Oracle, PostgreSQL), and Microsoft SQL Server, which has its own bizarre storage scheme.
We could customize our method further, having it auto-detect the database type based on a value in the application settings, or we could create an overload that would accept the DbConnection we're using and determine the correct type from that, but that would depend on the details of the application and any ORM framework being used. Call it homework!
For testing, I focused on four common database systems: Microsoft SQL Server 2008, MySQL 5.5, Oracle XE 11.2, and PostgreSQL 9.1, all running under Windows 7 on my desktop. (If someone would like to run tests on more database types or under other operating systems, I'd be happy to help any way I can!)
Tests were performed by using each database system's command-line tool to insert 2 million rows into a table with a GUID primary key and a 100-character text field. One test was performed using each of the three methods described above, with a fourth test using the Guid.NewGuid() method as a control. For comparison, I also ran a fifth test inserting 2 million rows into a similar table with an integer primary key. The time (in seconds) to complete the inserts was recorded after the first million rows, and then again after the second million rows. Here are the results:
For SQL Server, we would expect the SequentialAtEnd method to work best (since it was added especially for SQL Server), and things are looking good: GUID inserts using that method are only 8.4% slower than an integer primary key -- definitely acceptable. This represents a 75% improvement over the performance of a random GUID. You can also see that the SequentialAsBinary and SequentialAsString methods provide only a small benefit over a random GUID, also as we would expect. Another important indicator is that, for random GUIDs, the second million inserts took longer than the first million, which is consistent with a lot of page-shuffling to maintain the clustered index as more rows get added in the middle, whereas for the SequentialAtEnd method, the second million took nearly the same amount of time as the first, indicating that new rows were simply being appended to the end of the table. So far, so good.
As you can see, MySQL had very poor performance with non-sequential GUIDs -- so poor that I had to cut off the top of the chart to make the other bars readable (the second million rows took over half again as long as the first million). However, performance with the SequentialAsString method was almost identical to an integer primary key, which is what we'd expect since GUIDs are typically stored as char(36) fields in MySQL. Performance with the SequentialAsBinary method was also similar, probably due to the fact that, even with incorrect byte order, the values are "sort of" sequential, as a whole.
Oracle is harder to get a handle on. Storing GUIDs as raw(16) columns, we would expect the SequentialAsBinary method to be the fastest, and it is, but even random GUIDs weren't too much slower than integers. Moreover, sequential GUID inserts were faster than integer inserts, which is hard to accept. While sequential GUIDs did produce measurable improvement in these benchmarks, I have to wonder if the weirdness here is due to my inexperience with writing good batch inserts for Oracle. If anyone else would like to take a stab at it, please let me know!
And finally, PostgreSQL. Like Oracle, performance wasn't horrible even with random GUIDs, but the difference for sequential GUIDs was somewhat more pronounced. As expected, the SequentialAsString method was fastest, taking only 7.8% longer than an integer primary key, and nearly twice as fast as a random GUID.
There are a few other things to take into consideration. A lot of emphasis has been placed on the performance of inserting sequential GUIDs, but what about the performance of creating them? How long does it take to generate a sequential GUID, compared to Guid.NewGuid()? Well, it's definitely slower: on my system, I could generate a million random GUIDs in 140 milliseconds, but sequential GUIDs took 2800 milliseconds -- twenty times slower.
Some quick tests showed that the lion's share of that slowness is due to the use of RNGCryptoServiceProvider to generate our random data; switching to System.Random brought the result down to about 400 milliseconds. I still don't recommend doing this, however, since System.Random remains problematic for these purposes. However, it may be possible to use an alternate algorithm that's both faster and acceptably strong -- I don't know much about random number generators, frankly.
Is the slower creation a concern? Personally, I find it acceptable. Unless your application involves very frequent inserts (in which case a GUID key may not be ideal for other reasons), the cost of occasional GUID creation will pale in comparison to the benefits of faster database operations.
Another concern: replacing six bytes of the GUID with a timestamp means only ten bytes are left for random data. Does this jeopardize uniqueness? Well, it depends on the circumstances. Including a timestamp means that any two GUIDs created more than a few milliseconds apart are guaranteed to be unique -- a promise that a completely random GUID (such as those returned by Guid.NewGuid()) can't make. But what about GUIDs created very close together? Well, ten bytes of cryptographically strong randomness means 2^80, or 1,208,925,819,614,629,174,706,176 possible combinations. The probability of two GUIDs generated within a handful of milliseconds having the same random component is probably insignificant compared to, say, the odds of the database server and all its backups being destroyed by simultaneous wild pig attacks.
One last issue is that the GUIDs generated here aren't technically compliant with the formats specified in RFC 4122 -- they lack the version number that usually occupies bits 48 through 51, for example. I don't personally think it's a big deal; I don't know of any databases that actually care about the internal structure of a GUID, and omitting the version block gives us an extra four bits of randomness. However, we could easily add it back if desired.
Here's the complete version of the method. Some small changes have been made to the code given above (such as abstracting the random number generator out to a static instance, and refactoring the switch() block a bit):
using System;
using System.Security.Cryptography;
public enum SequentialGuidType
{
SequentialAsString,
SequentialAsBinary,
SequentialAtEnd
}
public static class SequentialGuidGenerator
{
private static readonly RNGCryptoServiceProvider _rng = new RNGCryptoServiceProvider();
public static Guid NewSequentialGuid(SequentialGuidType guidType)
{
byte[] randomBytes = new byte[10];
_rng.GetBytes(randomBytes);
long timestamp = DateTime.UtcNow.Ticks / 10000L;
byte[] timestampBytes = BitConverter.GetBytes(timestamp);
if (BitConverter.IsLittleEndian)
{
Array.Reverse(timestampBytes);
}
byte[] guidBytes = new byte[16];
switch (guidType)
{
case SequentialGuidType.SequentialAsString:
case SequentialGuidType.SequentialAsBinary:
Buffer.BlockCopy(timestampBytes, 2, guidBytes, 0, 6);
Buffer.BlockCopy(randomBytes, 0, guidBytes, 6, 10);
// If formatting as a string, we have to reverse the order
// of the Data1 and Data2 blocks on little-endian systems.
if (guidType == SequentialGuidType.SequentialAsString && BitConverter.IsLittleEndian)
{
Array.Reverse(guidBytes, 0, 4);
Array.Reverse(guidBytes, 4, 2);
}
break;
case SequentialGuidType.SequentialAtEnd:
Buffer.BlockCopy(randomBytes, 0, guidBytes, 0, 10);
Buffer.BlockCopy(timestampBytes, 2, guidBytes, 10, 6);
break;
}
return new Guid(guidBytes);
}
}
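To see the generator in action, here's a minimal (hypothetical) console harness -- the class and enum names are the ones defined above, and SequentialAsString is chosen purely for illustration:

using System;

public static class SequentialGuidDemo
{
    public static void Main()
    {
        // Print a few values tailored for a database that stores GUIDs as
        // strings (MySQL, for example). Values generated close together in
        // time share the same leading timestamp bytes.
        for (int i = 0; i < 4; i++)
        {
            Guid g = SequentialGuidGenerator.NewSequentialGuid(SequentialGuidType.SequentialAsString);
            Console.WriteLine(g.ToString());
        }
    }
}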
Final code and a demo project can be found with the original article.
As I mentioned at the beginning, the general COMB approach has been used fairly heavily by various frameworks, and the general concept isn't particularly new, and certainly not original to me. My goal here was to illustrate the ways in which the approach has to be adapted to fit different database types, as well as to provide benchmark information underscoring the need for a tailored approach.
With a little effort and a moderate amount of testing, it's possible to implement a consistent way of generating sequential GUIDs that can easily be used as high-performance primary keys under pretty much any database system.
v1 - Wrote the thing!
v2 - Adopted Rich Andersen's suggestion of using DateTime.UtcNow instead of DateTime.Now for improved performance, and updated code formatting.
http://www.codeproject.com/Articles/388157/GUIDs-as-fast-primary-keys-under-multiple-database?fid=1724727&df=90&mpp=10&sort=Position&spc=None&tid=4290776
02 August 2011 12:32 [Source: ICIS news]
SINGAPORE (ICIS)--Asia isomer-grade xylene (IX) and paraxylene (PX) prices fell by $5–10/tonne (€4–7/tonne) on Tuesday amid weaker crude futures, market sources said, but traders were watching for further developments after Formosa Petrochemical Corp (FPCC) declared a force majeure on all petroleum products from its complex in Mailiao, Taiwan.
IX prices were hovering at $1,345–1,360/tonne FOB (free on board)
Officials from Formosa Chemicals and Fibre Corp (FCFC) told ICIS it remained unclear which Mailiao-based units would be shut as discussions with the Taiwanese government were ongoing.
The local government had ordered a “rotational closure” of the entire Mailiao refinery and petrochemical complex on 1 August for a thorough review of safety standards, following a string of fire accidents at the site.
This had sparked fears that FCFC would be unable to produce sufficient amounts of PX to feed its purified terephthalic acid (PTA) capacities in
FCFC produces a total of 1.72m tonnes/year of PX to feed its PTA-producing facilities, which have a combined nameplate capacity of nearly 2.9m tonnes/year.
The shutdown of FPCC’s 540,000 bbl/day Mailiao refinery had also caused concerns over whether FCFC would receive sufficient amounts of feedstock naphtha and reformate to maintain aromatics production.
“We will know exactly which units will need to be shut over the next few days and we will remedy the issue from there,” said an official from FCFC.
($1 = €0.70)
For more on IX
http://www.icis.com/Articles/2011/08/02/9481694/asia-ix-px-fall-on-weaker-crude-futures-as-formosa-faces-shutdowns.html
I would really advise you to learn about for-loops. They would cut down your code tremendously.
That's what the setters are for.
If you write:
private Rectangle(int width, int length) {
...
}
Then the constructor has been declared "private" and cannot be used by any class other than the Rectangle class.
You can...
Your code is missing closing curly brackets. Whenever you open them you also have to close them later on.
Besides that the error message is pretty much self-explanatory, is it not?
Or instead just a custom class extending JDialog if JOptionPane does not give you the flexibility you want.
We would just be repeating what all the other tutorials and textbooks say because the errors you make are so basic there is not much more to say about them. I don't feel like repeating textbooks.
You should really go over the basics once again before trying this program, there are countless errors in the program. Starting with misplaced semicolon and variables that have no type defined. You...
I don't even know why java allows stupid things like that. Sometimes one has to wonder what the people were thinking when they wrote the ruleset.
Setters and Getters are created for private member variables. You could google them to read more information on how these methods usually look like. A simple example would be:
public class Coord {...
But the error message says something different. The IDE might highlight that because there might be a compile-time problem. However, your application crashed because of a runtime-problem, namely an...
The system cannot find the file specified. Is there anything unclear?
What do you want to use them for?
I am not going to read 50+ lines of code just to find that out. If the key is a string array, why are you not able to save it in a variable?
But what is it? A number? A string? A complex object with several attributes?
What exactly is this "key" you are talking about? Perhaps you should create a class for that or an interface.
If you want to save something use a variable.
As far as I know groovy code is translated into java byte-code before being interpreted, so everything possible with groovy should be possible with java too. It's just a different way to use it.
Read the API for the JTree class perhaps.
The error message is quite clear, it says it can not find a class called StateController.
Do not use the "--" operator when calling methods like that. The results will not be what you expect.
Instead you should just use "n - 1" or "k - 1". The problem with the "--" operator is that it...
This is impossible. If you are dealing with students you have no way of making this happen. You can not control their computers, you can not monitor their houses. These little buggers will always...
Yes exactly.
You have 5 items in your array. You have your loop run from 0 to 3 which is 4 indices. But then you only check for the fourth item in your if-condition. This happens because the...
Because it isnt.
You have a value on the left hand side of the assignment operator, and a value on the right hand side.
It is as if you would write "5 = 2 + 2". It's not an assignment, it's not...
The curly brackets { and } have a meaning in java. The way you use them is the source of your problem. You have misplaced the brackets for your loop and thus your program does not do what you would...
Could you try to explain to us what you are trying to do in the erroneous line?
Try to explain it step by step:
Math.pow(hypotenuse,2)=Math.pow(side_1,2)+Math.pow (side_2,2);
http://www.javaprogrammingforums.com/search.php?s=2c5ba491ec12171dbd4ada630053c3de&searchid=1074050
In today’s Programming Praxis exercise, our goal is to calculate the Mersenne prime exponents up to 256. Let’s get started, shall we?
A quick import:
import Data.Numbers.Primes
The Mersenne primes can be determined with a simple list comprehension. Although the Lucas-Lehmer test officially doesn't work for M2, in practice it works just fine so there is no need to make it a special case.
mersennes :: [Int]
mersennes = [p | p <- primes, iterate (\n -> mod (n^2 - 2) (2^p - 1)) 4 !! p-2 == 0]
A test shows that the algorithm is working correctly. Piece of cake.
main :: IO ()
main = print $ takeWhile (<= 256) mersennes == [2,3,5,7,13,17,19,31,61,89,107,127]
Tags: bonsai, code, Haskell, kata, mersenne, praxis, primes, programming
http://bonsaicode.wordpress.com/2011/06/03/programming-praxis-mersenne-primes/
mgi_start, mgi_stop, mgi_addmac, mgi_remmac, mgi_add_vlanfilter, mgi_rem_vlanfilter, mgi_setmtu, mgi_getsriov_info - MAC group info driver entry points
#include <sys/mac_provider.h>
int prefix_group_start(mac_group_driver_t group_handle);
void prefix_group_stop(mac_group_driver_t group_handle);
int prefix_group_addmac(void *arg, const uint8_t *macaddr, uint64_t mflags);
int prefix_group_remmac(void *arg, const uint8_t *macaddr);
int prefix_group_add_vlanfilter(void *arg, uint16_t vlanid, uint32_t vflags);
int prefix_group_remove_vlanfilter(void *arg, uint16_t vlanid);
int prefix_group_setmtu(void *arg, uint32_t mtu);
int prefix_group_getsriov_info(void *arg, mac_sriov_info_t *sriovinfop);
The private driver handle that identifies the driver ring group.
The MAC address that the MAC layer would like to be programmed into the driver's hardware.
The opaque handle that identifies the driver ring group that is being programmed.
The flags associated with the programming of the specified MAC address. Currently, the flag that can be specified is MAC_GROUP_PRIMARY_ADDRESS. This enables an SR-IOV capable driver to understand that the MAC address being programmed is the primary address for the VF associated with this ring group.
The VLAN to be programmed into the driver's hardware.
The flags associated with the specified VLAN. Currently, the only flag possible is MAC_GROUP_VLAN_TRANSPARENT_ENABLE. This enables VLAN tagging/stripping.
The SR-IOV information structure to be filled in by the PF driver. Currently, the information to be filled in is the VF index for the VF that corresponds to this ring group.
The MTU size to be programmed for the specified ring group.
Solaris architecture specific (Solaris DDI).
The driver entry points described below implement the actions the MAC layer can take on a driver ring group. The entry points are passed to the MAC layer using the mac_group_info(9S) structure in response to a call to the driver entry point mr_gget(9E) by the MAC layer.
The mgi_start() function is the driver entry called by the MAC layer to start a ring group. Drivers that implement dynamic grouping should implement this entry point to properly initialize the ring group before rings are added to the ring group by the MAC layer.
The mgi_stop() function is the driver entry called by the MAC layer to stop a ring group. The MAC layer will call this entry after all rings of the ring group have been stopped.
The mgi_addmac() function is the driver entry point to add a MAC address to the ring group. The mflags argument specifies if the MAC address being added is the primary address for the VF that corresponds to the ring group.
The mgi_remmac() function is the driver entry point to remove a MAC address from the ring group.
The mgi_add_vlanfilter() function is the driver entry point to enable the MAC layer to program a VLAN filter for the specified ring group. The flags will enable tag/strip for the ring group.
The mgi_rem_vlanfilter() function is the driver entry point to remove a previously added VLAN filter.
The mgi_setmtu() function is the driver entry point to set the MTU for the ring group. This entry point is implemented by SR-IOV capable drivers and is only valid when the PF driver is operating in SR-IOV mode.
The mgi_getsriov_info() function is the driver entry for the MAC layer to query the ring group for its SR-IOV mode information.
The mgi_start() function returns 0 on success and either EIO or ENXIO on failure.
The mgi_stop() function returns 0 on success and EIO or ENXIO on failure.
The mgi_setmtu() function returns 0 on success. If the MTU is an invalid size, then it returns EINVAL.
The mgi_getsriov_info() function returns 0 on success and EIO or ENXIO on failure.
The mgi_addmac() function returns 0 on success, ENOSPC if there is no space to add the MAC address, and EIO for other failures.
The mgi_add_vlanfilter() function returns 0 on success, ENOSPC if there is no room to add the filter, and EIO for other failures.
The mgi_rem_vlanfilter() function returns 0 on success and EIO on failure.
See attributes(5) for descriptions of the following attributes:
attributes(5), mr_gget(9E), mac_capab_rings(9S), mac_group_info(9S), mac_register(9S)
http://docs.oracle.com/cd/E26502_01/html/E29045/mgi-remmac-9e.html
Tk_GetFontStruct, Tk_NameOfFontStruct, Tk_FreeFontStruct - maintain database of fonts
#include <tk.h>
XFontStruct * Tk_GetFontStruct(interp, tkwin, nameId)
char * Tk_NameOfFontStruct(fontStructPtr)
Tk_FreeFontStruct(fontStructPtr)
Interpreter to use for error reporting.
Token for window in which font will be used.
Name of desired font.
Font structure to return name for or delete.
Tk_GetFontStruct loads the font indicated by nameId and returns a pointer to information about the font. The pointer returned by Tk_GetFontStruct will remain valid until Tk_FreeFontStruct is called to release it. NameId can be either a font name or pattern; any value that could be passed to XLoadQueryFont may be passed to Tk_GetFontStruct. If Tk_GetFontStruct is unsuccessful (because, for example, there is no font corresponding to nameId) then it returns NULL and stores an error message in interp->result.
Tk_GetFontStruct maintains a database of all fonts it has allocated. If the same nameId is requested multiple times (e.g. by different windows or for different purposes), then additional calls for the same nameId will be handled very quickly, without involving the X server. For this reason, it is generally better to use Tk_GetFontStruct in place of X library procedures like XLoadQueryFont.
The procedure Tk_NameOfFontStruct is roughly the inverse of Tk_GetFontStruct. If its fontStructPtr argument was created by Tk_GetFontStruct, then the return value is the nameId argument that was passed to Tk_GetFontStruct to create the font. If fontStructPtr was not created by a call to Tk_GetFontStruct, then the return value is a hexadecimal string giving the X identifier for the associated font. Note: the string returned by Tk_NameOfFontStruct is only guaranteed to persist until the next call to Tk_NameOfFontStruct. When a font is no longer needed, Tk_FreeFontStruct should be called to release it; once the font has been freed as many times as it has been gotten, Tk_FreeFontStruct releases it to the X server and deletes it from the database.
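For illustration, a minimal C sketch of the intended call pattern (the font name and surrounding function are assumptions): acquire the font once, check for failure, and release it exactly once when done.

#include <tk.h>
#include <string.h>

/* Sketch: measure a string in a named font, releasing the font afterwards. */
static int
MeasureInFixedFont(Tcl_Interp *interp, Tk_Window tkwin, const char *str)
{
    XFontStruct *fontStructPtr;
    int width;

    fontStructPtr = Tk_GetFontStruct(interp, tkwin, Tk_GetUid("fixed"));
    if (fontStructPtr == NULL) {
        return -1;              /* error message is in interp->result */
    }
    width = XTextWidth(fontStructPtr, str, (int) strlen(str));
    Tk_FreeFontStruct(fontStructPtr);
    return width;
}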
http://search.cpan.org/~srezic/Tk-804.029/pod/pTk/GetFontStr.pod
Prelude
From HaskellWiki. This section discusses how to prevent the automatic import of the Prelude, for example when you want to define your own versions of standard names; the problem is also tackled in a FAQ entry.
2.1 Explicit import declaration
By including an explicit import declaration of Prelude as follows
import Prelude ()
The empty import list in the parenthesis causes nothing to be imported while the automatic import is prevented as well.
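As a small illustrative sketch (the module and definitions below are assumptions, not taken from the wiki page), a more common variant hides only the names being redefined rather than the whole Prelude:

module MyList where

-- Hide only the Prelude names we intend to redefine; everything else stays visible.
import Prelude hiding (head)

-- A total replacement for the partial Prelude function.
head :: [a] -> Maybe a
head (x:_) = Just x
head []    = Nothing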
http://www.haskell.org/haskellwiki/Prelude
On Sep 2, 2008, at 8:34 AM, Ramin wrote:

> instance Monad Query where
>     return (initState, someRecord) = Query (initState, someRecord)
>     {- code for (>>=) -}
>
> GHC gives an error, "Expected kind `* -> *', but `Scanlist_ctrl' has kind `* -> * -> *' ".

I believe you understand the problem with the above code, judging from your attempt to fix it below.

> If I try this:
> instance Monad (Query state) where
>     return (initState, someRecord) = Query (initState, someRecord)
>     {- code for (>>=) -}
>
> GHC give an error, "Occurs check: cannot construct the infinite type: a = (s, a) when trying to generalise the type inferred for `return' ".

The problem is your type for the return function. The way you have written it, it would be `return :: (state, rec) -> Query state rec`. Perhaps it would be easier to see the problem if we defined `type M = Query MyState`. Then you have `return :: (MyState, rec) -> M rec`. Compare this to the type it must be unified with: `return :: a -> m a`. The two 'a's don't match!

The type you are after is actually `return :: rec -> M rec` or `return :: rec -> Query state rec`.

I hope this helps lead you in the right direction. I'm not giving you the solution because it sounds like you want to solve this for yourself and learn from it.

- Jake McArthur
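To make the kind and type mismatch concrete, here is a minimal sketch reconstructed from the thread (the Query type and MyState are assumptions inferred from the quoted code):

-- A pair of state and record, as suggested by the constructor in the thread.
newtype Query state rec = Query (state, rec)    -- kind: * -> * -> *

data MyState = MyState

-- Partially applying the state type gives the kind (* -> *) that Monad expects.
type M = Query MyState

-- For 'instance Monad (Query state)', return has to fit this shape:
--   return :: rec -> Query state rec
-- whereas the definition in the thread gives it the incompatible type
--   return :: (state, rec) -> Query state rec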
http://www.haskell.org/pipermail/haskell-cafe/2008-September/046950.html
This section demonstrates the use of the close() method.
Description of code: Streams represent resources that must be cleaned up explicitly. You can do this using the close() method, which also flushes the stream. It is necessary to close the stream after performing any file operation and before exiting the program; otherwise you could lose buffered data.
In the given example, we have used the BufferedWriter class along with the FileWriter class to write some text to the file. The write() method of BufferedWriter writes the text into the file, the newLine() method writes the line separator, and the close() method closes the stream and keeps the data safe.
Here is the code:
import java.io.*;

public class FileClose {
    public static void main(String[] args) throws Exception {
        File file = new File("C:/data.txt");
        if (file.exists()) {
            BufferedWriter bw = new BufferedWriter(new FileWriter(file, true));
            bw.write("Welcome");
            bw.newLine();
            bw.close();
        }
    }
}
In the above code, we used the close() method to flush and close the stream. This is essential; otherwise the program could leak resources.
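As a complementary sketch (not part of the original tutorial), the same write can be expressed with try-with-resources in Java 7 and later, which flushes and closes the writer automatically even if an exception occurs:

import java.io.*;

public class FileCloseTryWithResources {
    public static void main(String[] args) throws Exception {
        File file = new File("C:/data.txt");
        if (file.exists()) {
            // The writer is flushed and closed automatically when the block exits.
            try (BufferedWriter bw = new BufferedWriter(new FileWriter(file, true))) {
                bw.write("Welcome");
                bw.newLine();
            }
        }
    }
}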
http://www.roseindia.net/tutorial/java/core/files/fileclose.html
Agenda
See also: IRC log
fjh: Norm Walsh will give intro to xproc
<fjh> 30 September 2008 Teleconference cancelled
<fjh> Next meeting 7 October. Gerald Edgar is scheduled to scribe.
<fjh> 14 October 2008 Teleconference cancelled,
<fjh> 20-21 October 2008 F2F at TPAC.
RESOLUTION: 16 September minutes approved
<fjh> XForms - 10:30 - noon (tentative) Monday 20 October
fjh: Working on face-to-face
schedule; will meet xforms on Monday ...
... trying to find a way to have a break that day ...
<fjh> EXI - 2-3:30 Monday 20 October (note correction, 1 1/2 hours)
fjh: joint session with EXI ...
<fjh> WebApps - 11-12 Tuesday 21 October
tlr: anything in particular we should review for these meetings?
fjh: xforms is mostly going to be
a listening thing
... EXI, we had a face-to-face last year's TPAC ...
... will include that in the agenda; linked from administrative page ...
fjh: webapps, haven't gotten
anything yet
... but note that widget requirements are in last call ...
... will see that I can put together something for preparation ...
... re WS-Policy, trying to get them to update references to Signature 2nd ed
... webapps has widget requirements in last call
... explicit request for review
<fjh> WebApps Widgets 1.0 Requirements Last Call
<fjh>
<fjh> Resolution to accept Status/Abstract and incorporate into draft
<fjh>
PROPOSED: To accept revised status and abstract for best practices
RESOLUTION: To accept revised status and abstract for best practices
<fjh> Proposed revision for section 2.1, Best Practice 2
fjh: Second item from Scott, revision for 2.1
<fjh>
fjh: Sean had some input on this ...
scott: some tweaking might be needed
fjh: sean, any concerns?
sean: sent updated e-mail this morning
PROPOSED: adopt changes proposed by Sean
<fjh> draft
sean: move BP #2 and following
paragraph to a later point, combine with BP #5 (which should be
#6...)
... basically, move to discussion of RetrievalMethod ...
... and instead of moving stuff, just drop in a sentence
PROPOSED: To accept Scott's wording with Sean's additions.
RESOLUTION: accept Scott's wording with Sean's additions.
<scribe> ACTION: sean to edit best practices to implement Scott's and his own changes; see [recorded in]
<trackbot> Created ACTION-67 - Edit best practices to implement Scott's and his own changes; see [on Sean Mullan - due 2008-09-30].
<fjh> Draft review Section 1, section 2.1.4
<fjh>
<fjh>
bhill: think there's a legitimate
concern here ...
... if you consider using signature in office documents and the like ...
... external references can serve to track who reads documents ...
... there is a privacy concern here ...
... propose to keep this, but flesh out text ...
fjh: I have some ideas about changes
PROPOSED: to accept changes to 2.1.4 proposed by Sean as added to by Frederick
RESOLUTION: to accept changes to 2.1.4 proposed by Sean as added to by Frederick
fjh: There's also changes to section 1
RESOLUTION: to accept changes to 1 proposed by Sean as added to by Frederick
<scribe> ACTION: sean to implement, [recorded in]
<trackbot> Created ACTION-68 - Implement, [on Sean Mullan - due 2008-09-30].
<fjh> Draft review - section 2.1.2 (Best Practice 5)
<fjh>
sean: xpath can have performance issues, need to pay attention, but not every implementation affected
<fjh>
tlr: how about taking this to e-mail? (Also, don't really like "advanced")
<hal> I recommend a general disclaimer something like this
hal: I'd like a general sentence saying "These concerns apply to approaches toward implementing the specification; following the best practices is a good idea whether or not implementation is affected by the concern"
hal, please correct if the above note is wrong
<hal> not really what I said
<hal> but I don't object
hal, is that better?
bal: don't talk about specific
implementations; some things might be sane in certain closed
environments
... ran into that with fetch-over-web type things ...
<fjh> ACTION: Sean to propose text to address concern of non-naive implementations not being vulnerable to attack [recorded in]
<trackbot> Created ACTION-69 - Propose text to address concern of non-naive implementations not being vulnerable to attack [on Sean Mullan - due 2008-09-30].
klanz: there are standard tools
to do bad things, you don't have a lot of influence on
them
... think code execution in XSLT ..
... XSLT and XPath concern ...
hal: to respond to the prior
comment ...
... these still stand as best practices ...
... there might be reasons not to follow them ...
... but whole idea that somebody doesn't know how something works ...
... and something moves into a new environment, and boom ...
... that is one of the worst risks ...
... would like to highlight these as best practices ...
... it's fair to say "if you know what you're doing, you might not want to do this"
bal: Hal, I hear you, I don't
think that concern about poorly documented implementations is
specific
... have to separate out what best practices are that we've learned ...
... "things you really ought not to do, and we don't see use for them" ...
... and things that ought to be better documented ...
... other concern, to keep scope of BP document to standard itself
... not attacking specific implementations ...
... of course just a BP document ...
... but people might very well run with some of what's in here and not understand trade-off ...
... 1. do not talk about bugs in particular implementations
... 2. be careful on wording
... there is tendency for documents like this to be used as prescriptive bans ...
... say "there might be good reasons for doing the things we discourage, but you really ought to know what you're doing"
<hal> +1
<fjh> bal notes concern that someone might take best practices as profile, disallowing things but should be treated as practice
bal: we might want to make a hard
recommendation on xslt
... that's really a change to the specification ...
... not all recommendations have equal weight ...
fjh: bal, can you add something?
bal: I think there ought to be a
disclaimer in here, not sure where that went
... let's not do something general right now, can do that later
tlr: add something general to SOTD?
bal: maybe, with caveat that we
might want to make stronger statement
... any good boilerplate?
tlr: not sure I know of any; note that we can change later
fjh: If we want to do normative stuff, not in BP
+1
<scribe> ACTION: thomas to propose disclaimer for SOTD [recorded in]
<trackbot> Created ACTION-70 - Propose disclaimer for SOTD [on Thomas Roessler - due 2008-09-30].
<fjh> hal notes that best practices should reflect multiple applications, not just one
hal: no objection against "make
sure you know what you're doing", I don't understand problem
with saying that more than one implementation has been observed
to have the problem...
... don't say "if you think your implementation is fine, don't follow them" ...
... haven't seen the "show me your implementation" effect when mentioning that some implementations have problem
smullan: we've made some progress
on this
... early versions of BP document had stronger language on things like RetrievalMethod ...
<fjh> sean notes now say less strong statements, consider this .
smullan: also, most DOS attacks might be less serious when following BP 1 and BP 3
<fjh> sean notes that use of xslt may be appropriate in trusted environment
pdatta: namespace node expansions
...
... @@ expands all namespace nodes ...
... there is some dependence on that feature ...
... interop testcase without expanding namespace nodes?
fjh: sean noted that document shouldn't use relative namespace URIs
<fjh> Change all examples in document to use absolute namespace URIs, not relative
klanz: they're prohibited
<brich> how about some text that says something like "There is an uneasy tension between security on one hand and utility and performance on the other hand. Circumstances may dictate an implementation must do something that is not the most secure. This needs to be a reasoned tradeoff that can be revisited in later versions of the implementation as necessary to address risk."
<fjh>
<klanz2> The use of relative URI references, including same-document references, in namespace declarations is deprecated.
<klanz2> Note:
<klanz2> This deprecation of relative URI references was decided on by a W3C XML Plenary Ballot [Relative URI deprecation]. It also declares that "later specifications such as DOM, XPath, etc. will define no interpretation for them".
fjh: sean, you said these should reject relative namespace URIs
<fjh>
<klanz2>
sean: our implementation rejects all the examples because of relative namespace URIs
fjh: @@
<fjh>
<klanz2> ns0 ... relative
fjh: In the DOS example files, we have relative namespace URIs
<klanz2> let's change ns0 to ... to n
<klanz2> let's change ns0 to ... n
tlr: I guess my question is whether there is any good reason for having relative namespaces
fjh: we could use absolute ones as well
<scribe> ACTION: pratik to update examples with absolute namespace URIs and regenerate signatures [recorded in]
<trackbot> Created ACTION-71 - Update examples with absolute namespace URIs and regenerate signatures [on Pratik Datta - due 2008-09-30].
fjh: now, links to example
files...
... we're keeping these member-visible for the moment ...
tlr: The examples become vastly
easier to understand in full
... could see us waiting a little longer to accommodate implementers.
sean: even if there is an
implementation that has a big problem
... doubt anybody will be able to fix this any time soon ...
... think the examples have the relevant text in there ...
... think that's good enough ...
<esimon2> back in 10 min.
fjh: no decision quite yet, decide next meeting
<fjh> RetrievalMethod attack, section 2.1.3
fjh: long thread whether it can be recursive ...
<fjh>
<fjh>
pdatta: fine with Sean's clarification
fjh: what does that mean for
document?
... does the attack depend on the KeyInfo concern?
pdatta: no longer a denial of service attack
fjh: sean, can you take care of this example?
<fjh>
klanz2: I thought RetrievalMethod *is* recursive?
<fjh>
fjh: deferred
<fjh> Add synopsis for each Best Practice
<klanz2> <attribute name="Type" type="anyURI" use="optional"/>
<scribe> ACTION: fjh to contribute synopsis for each best practice [recorded in]
<trackbot> Created ACTION-72 - Contribute synopsis for each best practice [on Frederick Hirsch - due 2008-09-30].
<fjh> Misc editorial
<fjh> Completion of implementer review actions?
<fjh>
fjh: Has anybody has a chance to
look through the document -- how's review doing?
... Also, is publication at face-to-face a realistic option?
<gedgar> I have to leave unfortunately.
smullan: found that relative
namespace URIs are a problem
... should make sure that examples are kind of accurate ...
fjh: the bar I'm trying to get across is whether publication would cause any harm
bal: as long as we talk about the spec itself, it's fine
fjh: the document doesn't disclose things about a specific implementation
brich: review of document was
what I committed
... timing isn't easy ...
fjh: trying to understand
issue
... want to make sure that BPs are followed? statements about specific implementations?
... oh well, so maybe we can't publish as quickly
... let's at least get pending edits out of the way
<fjh>
fjh: hal, you had useful material
about web services
... how to add that?
hal: also, what happens to
long-lived documents considerations?
... probably the same thing should happen to that and to my material
magnus: also talked about
extending scope to not just be about signature and c14n, but
also a few other things
... keyInfo e.g. is generic
<fjh> proposal - change title to XML Security v.next use cases and requirements
tlr: Serious editing hasn't begun yet; some things in Shivaram's court
fjh: need to pull things from the list
<scribe> ACTION: magnus to provide proposal to adapt Requirements scope [recorded in]
<trackbot> Created ACTION-73 - Provide proposal to adapt Requirements scope [on Magnus Nyström - due 2008-09-30].
hal: maybe do "domain specific requirements"?
fjh: issues list, try to go through and extract requirements
<fjh> issues list requirements extraction
<fjh>
fjh: gerald categorized
requirements here
... volunteers, please!
<esimon2> I'm here!
<esimon2> unmute me
nope
ed, you shouldn't be muted
yes!
esimon2: IRC exceptionally slow
again
... EXI review is relevant here.
<fjh>
EdSimon: They aren't addressing
EXI needs for signature and encryption
... but based on use cases, there might be native need for signature and encryption ...
... please poke them about answering ...
... there is a long conversation to be had about native XML Security for EXI ...
<fjh>
fjh: would like to get use cases
and requirements out in early November
... doesn't give us a whole lot of time to produce something to get us started ...
... focus on requirements soon
<esimon2> Specifically what I said was that I have yet to see any discussion on EXI requirements for EXI-native signature and encryption. EXI has published a document discussing how current XML Signature and XML Encryption can be used with EXI but that, to me, does not seem sufficient. Need to find out if the EXI group sees a potential requirement for EXI-native signatures and encryption.
klanz: recursive retrieval method or not?
<klanz2>
<klanz2>
<klanz2> <attribute name="Type" type="anyURI" use="optional"/>
tlr: eek, I'm confused. Please send to mailing list
<scribe> ACTION: klanz2 to summarize recursive retrievalmethod point [recorded in]
<trackbot> Created ACTION-74 - Summarize recursive retrievalmethod point [on Konrad Lanz - due 2008-09-30].
fjh: propose bulk closure
ACTION-27 closed
<trackbot> ACTION-27 contact crypto hardware and suiteB experts in NSA regarding XML Security WG and possible involvement closed
ACTION-31?
<trackbot> ACTION-31 -- Thomas Roessler to investigate ebXML liaison (see ACTION-6) -- due 2008-08-19 -- PENDINGREVIEW
<trackbot>
ACTION-31 closed
<trackbot> ACTION-31 Investigate ebXML liaison (see ACTION-6) closed
ACTION-39?
<trackbot> ACTION-39 -- Hal Lockhart to contribute web service related scenario -- due 2008-08-25 -- PENDINGREVIEW
<trackbot>
ACTION-39 closed
<trackbot> ACTION-39 Contribute web service related scenario closed
ACTION-42 closed
<trackbot> ACTION-42 Elaborate on "any document" requirement vs canonicalizing xml:base closed
ACTION-47 closed
<trackbot> ACTION-47 Add error noted in to c14n 1.1 errata page closed
ACTION-66 pending
ACTION-66?
<trackbot> ACTION-66 -- Frederick Hirsch to follow up with xsl to get documents related to serialization -- due 2008-09-23 -- PENDINGREVIEW
<trackbot>
fjh: we have a long list of open
actions
... please get the ones done that relate to requirements ...
<fjh>
<bal> yes
ACTION-56?
<trackbot> ACTION-56 -- Scott Cantor to propose text for KeyInfo processing in best practices. -- due 2008-09-16 -- OPEN
<trackbot>
fjh: any other updates?
... concrete proposals and material on the list, please ...
... priority for requirements document ...
tlr: might also be worth talking about the way the process works
<esimon2> IRC died on me; was the EXI action (was it Action-25?) closed?
ACTION-25 closed
<trackbot> ACTION-25 Give feedback on xml schema best practice in xml-cg closed
action-25?
<trackbot> ACTION-25 -- Frederick Hirsch to give feedback on xml schema best practice in xml-cg -- due 2008-08-19 -- CLOSED
<trackbot>
action-19?
<trackbot> ACTION-19 -- Gerald Edgar to evaluate Issues and Actions for appropriate placement -- due 2008-08-19 -- PENDINGREVIEW
<trackbot>
action-22 closed
<trackbot> ACTION-22 Review EXI docs that were published closed
action-56?
<trackbot> ACTION-56 -- Scott Cantor to propose text for KeyInfo processing in best practices. -- due 2008-09-16 -- PENDINGREVIEW
<trackbot>
action-56 closed
<trackbot> ACTION-56 Propose text for KeyInfo processing in best practices. closed
http://www.w3.org/2008/09/23-xmlsec-minutes
Today I stumbled upon Scott Hanselman's post: How to access NuGet when NuGet.org is down (or you're on a plane) in which Scott discusses how he recovered from an issue with the nuget.org site being down during his demo at the Dallas Day of .Net. As it turns out, while NuGet stores packages it downloads in a local Cache folder within your AppData folder, it doesn't actually use this cache by default. Scott was able to remedy the situation by adding his local cache as a source through the Visual Studio Package Manager plugin.
Last year, I wrote about my philosophy for dependency management and how I use NuGet to facilitate dependency management without using the Visual Studio plugin wherein I discuss using the NuGet.exe command line tool to manage .Net dependencies as part of my rake build. After reading Scott’s post, I got to wondering whether the NuGet.exe command line tool also had the same caching issue and after a bit of testing I discovered that it does. Since I, with the help of a former colleague, Josh Bush, have evolved the solution I wrote about previously a bit, I thought I’d provide an update to my approach which includes the caching fix.
As discussed in my previous article, I maintain a packages.rb file which serves as a central manifest of all the dependencies used project wide. Here’s one from a recent project:
packages = [
  [ "Machine.Specifications", "0.5.3.0" ],
  [ "ExpectedObjects", "1.0.0.2" ],
  [ "Moq", "4.0.10827" ],
  [ "RabbitMQ.Client", "2.7.1" ],
  [ "log4net", "1.2.11" ]
]

configatron.packages = packages
This is sourced by a rakefile, which is used by a task that installs any packages not already installed.
The basic template I use for my rakefile is as follows:
require 'rubygems'
require 'configatron'
...
NUGET_CACHE = File.join(ENV['LOCALAPPDATA'], '/NuGet/Cache/')
FEEDS = ["http://[corporate NuGet Server]:8000", "" ]

require './packages.rb'

task :default => ["build:all"]

namespace :build do
  task :all => [:clean, :dependencies, :compile, :specs, :package]
  ...
  task :dependencies do
    feeds = FEEDS.map {|x| "-Source " + x }.join(' ')
    configatron.packages.each do |name, version|
      feeds = "-Source #{NUGET_CACHE} " + feeds unless !version
      packageExists = File.directory?("#{LIB_PATH}/#{name}")
      versionInfo = "#{LIB_PATH}/#{name}/version.info"
      currentVersion = IO.read(versionInfo) if File.exists?(versionInfo)
      if (!packageExists or !version or !versionInfo or currentVersion != version) then
        versionArg = "-Version #{version}" unless !version
        sh "nuget Install #{name} #{versionArg} -o #{LIB_PATH} #{feeds} -ExcludeVersion" do |ok, results|
          File.open(versionInfo, 'w') {|f| f.write(version) } unless !ok
        end
      end
    end
  end
end
This version defines a NUGET_CACHE variable which points to the local cache. In the dependencies task, I join all the feeds into a list of Sources for NuGet to check. I leave out the NUGET_CACHE until I know whether or not a particular package specifies a version number. Otherwise, NuGet would simply check for the latest version which exists within the local cache.
To avoid having to change Visual Studio project references every time I update to a later version of a dependency, I use the –ExcludeVersion option. This means I can’t rely upon the folder name to determine whether the latest version is already installed, so I’ve introduced a version.info file. I imagine this is quite a bit faster than allowing NuGet to determine whether the latest version is installed, but I actually do this for a different reason. If you tell NuGet to install a package into a folder without including the version number as part of the folder and you already have the specified version, it uninstalls and reinstalls the package. Without checking the presence of the correct version beforehand, NuGet would simply reinstall everything every time.
Granted, this rake task is far nastier than it needs to be. It should really only have to be this:
task :dependencies do
  nuget.exe install dependencyManifest.txt -o lib
end
Where the dependencyManifest file might look a little more like this:
Machine.Specifications 0.5.3.0
ExpectedObjects 1.0.0.2
Moq 4.0.10827
RabbitMQ.Client 2.7.1
log4net 1.2.11
Nevertheless, I’ve been able to coerce the tool into doing what I want for the most part and it all works swimmingly once you get it set up.
http://lostechies.com/derekgreer/2012/03/09/dependency-management-in-net-offline-dependencies-with-nuget-command-line-tool/
ThreadPool.GetMinThreads Method
Retrieves the minimum number of threads the thread pool creates on demand, as new requests are made, before switching to an algorithm for managing thread creation and destruction.
Namespace: System.Threading
Assembly: mscorlib (in mscorlib.dll)
Parameters
- workerThreads
- Type: System.Int32
When this method returns, contains the minimum number of worker threads that the thread pool creates on demand.
- completionPortThreads
- Type: System.Int32
When this method returns, contains the minimum number of asynchronous I/O threads that the thread pool creates on demand.
The following example sets the minimum number of worker threads to four, and preserves the original value for the minimum number of asynchronous I/O completion threads.
using System;
using System.Threading;

public class Test
{
    public static void Main()
    {
        int minWorker, minIOC;
        // Get the current settings.
        ThreadPool.GetMinThreads(out minWorker, out minIOC);
        // Change the minimum number of worker threads to four, but
        // keep the old setting for minimum asynchronous I/O
        // completion threads.
        if (ThreadPool.SetMinThreads(4, minIOC))
        {
            // The minimum number of threads was set successfully.
        }
        else
        {
            // The minimum number of threads was not changed.
        }
    }
}
http://msdn.microsoft.com/en-us/library/system.threading.threadpool.getminthreads
import "golang.org/x/build/maintner/maintnerd/gcslog"
Package gcslog is an implementation of maintner.MutationSource and Logger for Google Cloud Storage.
GCSLog logs mutations to GCS.
NewGCSLog creates a GCSLog that logs mutations to a given GCS bucket. If the bucket name contains a "/", the part after the slash will be a prefix for the segments.
func (gl *GCSLog) CopyFrom(src maintner.MutationSource) error
CopyFrom is only used for the one-time migrate from disk-to-GCS code path.
GetMutations returns a channel of mutations or related events. The channel will never be closed. All sends on the returned channel should select on the provided context.
Log writes m to GCS after the buffer is full or after a periodic flush.
RegisterHandlers adds handlers for the default paths (/logs and /logs/).
SetDebug controls whether verbose debugging is enabled on this log.
It must only be called before it's used.
Package gcslog imports 22 packages and is imported by 5 packages. Updated 2020-09-17.
https://godoc.org/golang.org/x/build/maintner/maintnerd/gcslog
import "periph.io/x/periph/conn/spi"
Package spi defines the API to communicate with devices over the SPI protocol.
As described in the conn package documentation, periph.io uses the concepts of Bus, Port and Conn.
In the package spi, 'Bus' is not exposed, as it would be an SPI bus number without a CS line, for example on Linux asking for "/dev/spi0" without the ".0" suffix.
The OS doesn't allow that, so it is counterproductive to express this at the API layer; 'Port' is exposed directly instead.
Port.Connect() converts the uninitialized Port into a Conn.
See the type descriptions below for more information.
const (
    CLK  pin.Func = "SPI_CLK"  // Clock
    CS   pin.Func = "SPI_CS"   // Chip select
    MISO pin.Func = "SPI_MISO" // Master in
    MOSI pin.Func = "SPI_MOSI" // Master out
)
Well known pin functionality.
type Conn interface {
    conn.Conn

    // TxPackets does multiple operations over the SPI connection.
    //
    // The maximum number of bytes can be limited depending on the driver. Query
    // conn.Limits.MaxTxSize() can be used to determine the limit.
    //
    // If the last packet has KeepCS:true, the CS line stays asserted. This
    // enables doing SPI transaction over multiple calls.
    //
    // Conversely, if any packet beside the last one has KeepCS:false, the CS
    // line will blip for a short amount of time to force a new transaction.
    //
    // It was observed on RPi3 hardware to have a one clock delay between each
    // packet.
    TxPackets(p []Packet) error
}
Conn defines the interface a concrete SPI driver must implement.
Implementers can optionally implement io.Writer and io.Reader for unidirectional operation.
Mode determines how communication is done.
The bits can be OR'ed to change the parameters used for communication.
const (
    Mode0 Mode = 0x0 // CPOL=0, CPHA=0
    Mode1 Mode = 0x1 // CPOL=0, CPHA=1
    Mode2 Mode = 0x2 // CPOL=1, CPHA=0
    Mode3 Mode = 0x3 // CPOL=1, CPHA=1

    // HalfDuplex specifies that MOSI and MISO use the same wire, and that only
    // one duplex is used at a time.
    HalfDuplex Mode = 0x4
    // NoCS request the driver to not use the CS line.
    NoCS Mode = 0x8
    // LSBFirst requests the words to be encoded in little endian instead of the
    // default big endian.
    LSBFirst = 0x10
)
Mode determines the SPI communication parameters.
CPOL means the clock polarity. Idle is High when set.
CPHA is the clock phase, sample on trailing edge when set.
type Packet struct {
    // W and R are the output and input data. When HalfDuplex is specified to
    // Connect, only one of the two can be set.
    W, R []byte
    // BitsPerWord overrides the default bits per word value set in Connect.
    BitsPerWord uint8
    // KeepCS tells the driver to keep CS asserted after this packet is
    // completed. This can be leveraged to create long transaction as multiple
    // packets like to use 9 bits commands then 8 bits data.
    //
    // Normally during a spi.Conn.TxPackets() call, KeepCS should be set to true
    // for all packets except the last one. If the last one is set to true, the
    // CS line stays asserted, leaving the transaction hanging on the bus.
    //
    // KeepCS is ignored when NoCS was specified to Connect.
    KeepCS bool
}
Packet represents one packet when sending multiple packets as a transaction.
type Pins interface {
    // CLK returns the SCK (clock) pin.
    CLK() gpio.PinOut
    // MOSI returns the SDO (master out, slave in) pin.
    MOSI() gpio.PinOut
    // MISO returns the SDI (master in, slave out) pin.
    MISO() gpio.PinIn
    // CS returns the CSN (chip select) pin.
    CS() gpio.PinOut
}
Pins defines the pins that a SPI port interconnect is using on the host.
It is expected that an implementer of ConnCloser or Conn also implements Pins, but this is not a requirement.

// Prints out the GPIO pins used.
if p, ok := c.(spi.Pins); ok {
    fmt.Printf("  CLK : %s", p.CLK())
    fmt.Printf("  MOSI: %s", p.MOSI())
    fmt.Printf("  MISO: %s", p.MISO())
    fmt.Printf("  CS  : %s", p.CS())
}
type Port interface {
    String() string
    // Connect sets the communication parameters of the connection for use by a
    // device.
    //
    // The device driver must call this function exactly once.
    //
    // f must specify the maximum rated speed by the device's spec. The lowest
    // speed between the port speed and the device speed is selected. Use 0 for f
    // if there is no known maximum value for this device.
    //
    // mode specifies the clock and signal polarities, if the port is using half
    // duplex (shared MISO and MOSI) or if CS is not needed.
    //
    // bits is the number of bits per word. Generally you should use 8.
    Connect(f physic.Frequency, mode Mode, bits int) (Conn, error)
}
Port is the interface to be provided to device drivers.
The device driver, that is the driver for the peripheral connected over this port, calls Connect() to retrieve a configured connection as Conn.
type PortCloser interface {
    io.Closer
    Port
    // LimitSpeed sets the maximum port speed.
    //
    // It lets an application use a device at a lower speed than the maximum
    // speed as rated by the device driver. This is useful for example when the
    // wires are long or the connection is of poor quality.
    //
    // This function can be called multiple times and resets the previous value.
    // 0 is not a valid value for f. The lowest speed between the port speed and
    // the device speed is selected.
    LimitSpeed(f physic.Frequency) error
}
PortCloser is a SPI port that can be closed.
This interface is meant to be handled by the application.
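For context, a minimal host-side sketch of the flow described above (the spireg and host packages, the device speed, and the mode are assumptions chosen for illustration): open a PortCloser, Connect() it into a Conn, then run a full-duplex transfer.

package main

import (
	"log"

	"periph.io/x/periph/conn/physic"
	"periph.io/x/periph/conn/spi"
	"periph.io/x/periph/conn/spi/spireg"
	"periph.io/x/periph/host"
)

func main() {
	// Load the host drivers so a port registry is available.
	if _, err := host.Init(); err != nil {
		log.Fatal(err)
	}

	// Open the first available SPI port; it is a PortCloser.
	p, err := spireg.Open("")
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	// Connect turns the Port into a Conn: 1 MHz max, Mode3, 8 bits per word.
	c, err := p.Connect(physic.MegaHertz, spi.Mode3, 8)
	if err != nil {
		log.Fatal(err)
	}

	// Full-duplex transfer: send two bytes, read two bytes back.
	write := []byte{0x10, 0x00}
	read := make([]byte, len(write))
	if err := c.Tx(write, read); err != nil {
		log.Fatal(err)
	}
	log.Printf("read back: %#v", read)
}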
Package spi imports 6 packages and is imported by 84 packages. Updated 2020-07-21.
https://godoc.org/periph.io/x/periph/conn/spi
public class DefaultJettyAtJettyHomeHelper extends Object
Creates a default instance of Jetty, based on the values of the System properties "jetty.home" or "jetty.home.bundle", one of which must be specified in order to create the default instance.
Called by the JettyBootstrapActivator during the starting of the bundle.
public static final String JETTY_ETC_FILES
public static final String DEFAULT_JETTY_ETC_FILES
public static final String DEFAULT_JETTYHOME
public DefaultJettyAtJettyHomeHelper()
public static Server startJettyAtJettyHome(org.osgi.framework.BundleContext bundleContext) throws Exception
If the system property jetty.home.bundle is defined and points to a bundle, look for the configuration of jetty inside that bundle.
In both cases reads the system property 'jetty.etc.config.urls' to locate the configuration files for the deployed jetty. It is a comma separated list of URLs or relative paths inside the bundle or folder to the config files.
In both cases the system properties jetty.http.host, jetty.http.port and jetty.ssl.port are passed to the configuration files that might use them as part of their properties.
bundleContext- the bundle context
Exception- if unable to create / configure / or start the server
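Although this helper is normally invoked by the JettyBootstrapActivator itself, a bundle activator could call it directly. The sketch below is illustrative only (class name and wiring are assumptions) and presumes jetty.home or jetty.home.bundle has already been set.

import org.eclipse.jetty.osgi.boot.internal.serverfactory.DefaultJettyAtJettyHomeHelper;
import org.eclipse.jetty.server.Server;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class MyJettyActivator implements BundleActivator {

    private Server server;

    @Override
    public void start(BundleContext context) throws Exception {
        // Reads jetty.home / jetty.home.bundle and jetty.etc.config.urls, then starts Jetty.
        server = DefaultJettyAtJettyHomeHelper.startJettyAtJettyHome(context);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        if (server != null) {
            server.stop();
        }
    }
}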
public static Resource findDir(org.osgi.framework.Bundle bundle, String dir)
bundle- the bundle
dir- the directory
http://www.eclipse.org/jetty/javadoc/9.4.5.v20170502/org/eclipse/jetty/osgi/boot/internal/serverfactory/DefaultJettyAtJettyHomeHelper.html
Find all duplicate numbers in array
Given an array of positive integers in range 0 to N-1, find all duplicate numbers in the array. The array is not sorted. For example:
A = [2,4,3,2,1,5,4] Duplicate numbers are 2,4 whereas in A = [4,1,3,2,1,1,5,5] duplicate numbers are 1,5.
A brute-force solution would be to keep track of every number that has already been visited. The basic idea behind the solution is to keep track of whether we have seen a number before or not. Which data structure is good for quick lookups like this? Of course, a map or hash.
The time complexity of this solution is O(n), but it has an additional space complexity of O(n).
To reduce the space requirement, a bit array can be used, where the ith index is set whenever we encounter number i in the given array. If the bit is already set, it is a duplicate number. This still takes O(n) extra space, although the constant is smaller since only bits are used. The time complexity remains O(n).
Find duplicate numbers in an array without additional space
Can we use the given array itself to keep track of the already visited numbers? How can we change a number in an array while still being able to get the original number back whenever needed? That is where reading the problem statement carefully comes in. Since the array contains only positive numbers, we can negate the number at the index equal to the number visited. If we ever find the number at an index to be negative, that means we have seen that number earlier as well, and hence it is a duplicate.
The idea is to make the number at the ith index of the array negative whenever we see the number i in the array. If the number at the ith index is already negative, it means we have already visited this number and it is a duplicate. A limitation of this method is that it will not work for negative numbers.
Duplicate numbers implementation
package AlgorithmsAndMe;

import java.util.HashSet;
import java.util.Set;

public class DuplicatesInArray {

    public Set<Integer> getAllDuplicates(int[] a) throws IllegalArgumentException {
        Set<Integer> result = new HashSet<>();
        if (a == null) return result;

        for (int i = 0; i < a.length; i++) {
            // In case input is wrong
            if (Math.abs(a[i]) >= a.length) {
                throw new IllegalArgumentException();
            }
            if (a[Math.abs(a[i])] < 0) {
                result.add(Math.abs(a[i]));
            } else {
                a[Math.abs(a[i])] = -a[Math.abs(a[i])];
            }
        }
        return result;
    }
}
Test cases
package Test;

import AlgorithmsAndMe.DuplicatesInArray;
import java.util.Set;

public class DuplicatesInArrayTest {

    DuplicatesInArray duplicatesInArray = new DuplicatesInArray();

    @org.junit.Test
    public void testDuplicatesInArray() {
        int[] a = {1, 2, 3, 4, 2, 5, 4, 3, 3};
        Set<Integer> result = duplicatesInArray.getAllDuplicates(a);
        result.forEach(s -> System.out.println(s));
    }

    @org.junit.Test
    public void testDuplicatesInArrayWithNullArray() {
        Set<Integer> result = duplicatesInArray.getAllDuplicates(null);
        result.forEach(s -> System.out.println(s));
    }

    // This case should generate an exception as 3 is greater than or equal to the size.
    @org.junit.Test
    public void testDuplicatesInArrayWithInvalidInput() {
        int[] a = {1, 2, 3};
        try {
            Set<Integer> result = duplicatesInArray.getAllDuplicates(a);
        } catch (IllegalArgumentException e) {
            System.out.println("invalid input provided");
        }
    }
}
The complexity of the algorithm to find duplicate elements in an array is O(n).
https://algorithmsandme.com/find-duplicate-numbers-in-array/
I was solving this problem and got 100 points.
This submission ran in 0.98s as seen below. I sorted the vector, considered every pair a[i], a[j] with i < j. Then, using their common difference, I checked if the next value exists with the help of binary search (basic arithmetic progression).
Since this is very slow (1 second time limit), I decided to try to use pragmas to decrease runtime. I don’t know much about that so I failed to get 100 points. I decided to submit my code without pragmas (the original submission). It ran in 0.99s.
I don’t know why the exact same submission ran slower. Can someone please explain this?
For the most important part, I was annoyed about my code being slow. I decided to use unordered_map. I began to doubt the test cases. My solution with unordered_map got accepted in 0.36s, which is almost 3x faster.
I blew up unordered_map.
I created an input file with 2500 occurrences of 99733 using this simple python code:
f = open("test.txt", "a") s = "2500 " for i in range(2500): s += "99733 " f.write(s) f.close()
The same solution which ran in 0.36s is running since 10 minutes.
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#define ll long long
using namespace __gnu_pbds;
using namespace std;

template <class T>
using oset = tree <T, null_type, less <T>, rb_tree_tag, tree_order_statistics_node_update>;

void usaco(string name = "") {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    if (name.size()) {
        freopen((name + ".txt").c_str(), "r", stdin);
        freopen((name + ".out").c_str(), "w", stdout);
    }
}

int main() {
    usaco("test");
    int n, ans = 1;
    cin >> n;
    vector <int> a(n);
    unordered_map <int, int> m;
    for (int i = 0; i < n; ++i) cin >> a[i], ++m[a[i]];
    sort(begin(a), end(a));
    for (int i = 0; i < n-1; ++i) {
        for (int j = i+1; j < n; ++j) {
            int d = a[j] - a[i];
            int cur = a[j] + d;
            int t = 2;
            while (m[cur]) {
                ++t;
                cur += d;
            }
            ans = max(ans, t);
        }
    }
    cout << ans << '\n';
}
This shows how bad the test cases are. Anything above O(n^2) should not pass according to the constraints. Is it possible for the solutions to be rejudged?
Also, does anyone know how to do it in O(n^2)?
https://discuss.codechef.com/t/very-weak-test-cases-in-bamboo-art-zco16002/76485
This is my lesson, which solves the problem of interaction between Dynamo and AutoCAD. Lesson in Russian, but it is understandable
Python code:
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import *
clr.AddReferenceToFileAndPath(r'C:\Program Files\Autodesk\AutoCAD 2015\Autodesk.AutoCAD.Interop')
from Autodesk.AutoCAD.Interop import *
from System import *

restart = IN[0]
curves = IN[1]

acadApp = AcadApplicationClass()
acadApp.Visible = True

for i in curves:
    p1 = i.StartPoint
    p2 = i.EndPoint
    p1C = Array[float]([p1.X, p1.Y, p1.Z])
    p2C = Array[float]([p2.X, p2.Y, p2.Z])
    lin = acadApp.ActiveDocument.Database.ModelSpace.AddLine(p1C, p2C)

acadApp.ZoomExtents()
This is really great! Commands are from ObjectARX, right?
yes
FEM Mesh in AutoCAD
Hello Khasan. Mamaev,
Wow, this is cool. Have you uploaded any package for this?
Thanks,
Rtiesh
How can I do the reverse process: pick an entity in AutoCAD and send it to Dynamo?
acadApp.ActiveDocument.Utility.GetEntity(object, object, object)
Excuse me, please, but I do not know how to do that yet. I am currently working on this task and will post the results here.
https://forum.dynamobim.com/t/dynamo-vs-autocad/1769
import "barista.run/modules/shell"
Package shell provides modules to display the output of shell commands. It supports both long-running commands, where the output is the last line, e.g. dmesg or tail -f /var/log/some.log, and repeatedly running commands, e.g. whoami, date +%s.
Module represents a shell module that updates on a timer or on demand.
New constructs a new shell module.
Every sets the refresh interval for the module. The command will be executed repeatedly at the given interval, and the output updated. A zero interval stops automatic repeats (but Refresh will still work).
Output sets the output format. The format func will be passed the entire trimmed output from the command once it's done executing. To process output by lines, see Tail().
Refresh executes the command and updates the output.
Stream starts the module.
TailModule represents a bar.Module that displays the last line of output from a shell command in the bar.
func Tail(cmd string, args ...string) *TailModule
Tail constructs a module that displays the last line of output from a long running command.
func (m *TailModule) Output(format func(string) bar.Output) *TailModule
Output sets the output format for each line of output.
func (m *TailModule) Stream(s bar.Sink)
Stream starts the module.
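A minimal usage sketch (the commands, the outputs helper, and the top-level barista.Run call are assumptions based on typical barista setups): one module that re-runs a command on a timer, and one that tails a long-running command.

package main

import (
	"time"

	barista "barista.run"
	"barista.run/bar"
	"barista.run/modules/shell"
	"barista.run/outputs"
)

func main() {
	// Re-run `date` every 5 seconds and render its trimmed output.
	clock := shell.New("date", "+%H:%M:%S").
		Every(5 * time.Second).
		Output(func(out string) bar.Output {
			return outputs.Text(out)
		})

	// Show the last line of output from a long-running command.
	syslog := shell.Tail("tail", "-f", "/var/log/syslog").
		Output(func(line string) bar.Output {
			return outputs.Text(line)
		})

	panic(barista.Run(clock, syslog))
}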
Package shell imports 10 packages. Updated 2018-11-25.
https://godoc.org/barista.run/modules/shell
After a few months of being away from blogging, I am coming back to write a few blogs about SAPUI5. Mike Howles recently had a great blog about Lumira Designer 2.0 SDK Components which you should read. Even though his blogs are always great and have amazing content that I enjoy reading, this time, besides learning from his experience/expertise, he gave me motivation to get back into blogging as I have also been busy being a dad, so here is mine. Thanks Mike! Secondly, I was also driven by a question I saw recently on SCN so I hope this can help clarify some doubts. This same blog can be found en español aqui
In my past experience (~4 years using SAP HANA and XS), I have been writing XS applications since their inception on SP5 (seems like a loooong time ago), starting with SAPUI5 version 1.26, JavaScript views, etc., all the way to working with SAPUI5 version 1.38 (I know that at the current time the latest version is 1.46), where the recommended UI views are of XML type. Here are some blogs which I have written in the past that explain how to create custom Fiori applications, pain points, etc., so why should I mention all this again now?
Now that I have set up the stage and being motivated by Mike, I have re-visited some of the UI5 applications I did in the past and thought, what would it take to upgrade one of those apps to a newer version of UI5? (the older UI5 versions, sap.ui.commons controls, JavaScript views, etc. to a newer version that uses responsive controls (sap.m), XML views, etc.) The purpose of this exercise is highlight the technical points to look for if you decide to go thru the same exercise.
Starting with the UI: how do I get a handle on the newer responsive controls?
The first thing to figure out is if the UI5 version you want to upgrade to is available on your system, or if you are going to use the one from openSAP. For the purpose of this exercise, I will assume we already have a newer version, perhaps 1.46, installed in our HANA environment. Keep in mind that multiple versions of SAPUI5 can coexist in your environment, or you could use one from a content delivery network (OpenUI5).
Initially, you would need to update the reference on your index.html file to point to the new sapui5 library (bootstrap part). You may need a component file, a manifest, etc.. all explained on my previous blog about How to develop a custom Fiori Application
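For illustration, a minimal sketch of the bootstrap script tag after the upgrade; the resource path, theme, library list, and resource roots below are assumptions that depend on your landscape:

<script id="sap-ui-bootstrap"
        src="/sap/public/bc/ui5_ui5/resources/sap-ui-core.js"
        data-sap-ui-libs="sap.m"
        data-sap-ui-theme="sap_belize"
        data-sap-ui-compatVersion="edge"
        data-sap-ui-preload="async"
        data-sap-ui-resourceroots='{"my.app": "./"}'>
</script>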
Secondly, looking at the UI views: if I were to replace JavaScript views with XML views, the view type should not make much difference, other than XML providing better readability for maintenance and explicit declaration of namespaces. Further, if all the MVC concepts hold, I should be able to decouple my V (in MVC) and replace it without any major issues.*
So why did I say major issues? A view by definition should contain the controls that define the user interface with no behavior (behavior stays in our controller logic).
Quick Side note: if a view contains JavaScript logic that should be in the controller, I would advise to first refactor this code into the controller or other commons js file. Typically, I have seen patterns of development where some developers include some event code in the view, such as a button press, or a drop down list change. For best practices, this segment of the code should be in a controller and only being referenced from the view as a function (or from another function within the controller). Now that I have gotten that taken care of, I will assume that the rest of the views would only have UI controls in it and all the events are defined on the controller file.
Other thing to consider while analyzing the view:
- Data models – (named and unnamed and how were they referenced by path?)
- Component / manifest files (on a custom Fiori application, these should be included)
- etc
I have gathered a few of the most common UI controls that I used in the past for my applications and compared to the responsive controls that we are supposed to use now.
When looking for UI5 information, my starting point is always the SAPUI5 SDK:
The responsive controls can be found on the Explored tab:
And the sap.ui.commons controls (now deprecated) can still be found on the Controls tab
amongst others… so, what changed and / or how can I prepare for the migration? This needs to be a thought out process while analyzing the controls on your (MVC) View. Keep in mind that some controls may have been dynamically created.
Starting with the most common controls in my list and most applications:
Table: (sap.ui.table.Table)
var oTable = new sap.ui.table.Table('myTable', {
    visibleRowCount: 10,
    selectionMode: sap.ui.table.SelectionMode.Single,
    enableCellFilter: true,
    filter: oController.onColumnFiltered
    // other properties omitted for simplicity
});

// for each column on the table, we needed to assign a few properties of the column
// and any additional context needed for the type of control in its template
oTable.addColumn(new sap.ui.table.Column({
    label: new sap.ui.commons.Label({ text: columnName, tooltip: columnName }),
    template: new sap.ui.commons.TextView().bindProperty("text", { path: property.name })
}));
Now on the sap.m.Table (inside a view)
<Table id="myTable" growing="true" growingThreshold="10" visibleRowCount="20" > <columns> <Column> <Text text="Product" /> </Column> <Column> <Text text=”Description” /> </Column> <Column> <Text text=”Some Action” /> </Column> <!— other columns --> </columns> <items> <ColumnListItem> <cells> <ObjectIdentifier title="{Name}" text="{someId}" /> <Text text=”{Description}” /> <Button press=”onTableButtonPressed” /> </cells> </ColumnListItem> </items> </Table>
Next is the button control – most applications will have the press event and some text when using the control. Well, this may be your lucky day because the press event is the same on both libraries.
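As a small illustrative sketch (the control id, handler name, and toast message are assumptions), the same pattern in an XML view and its controller:

<!-- XML view: text and press work just as they did with sap.ui.commons.Button -->
<Button id="saveButton" text="Save" press="onSavePressed" />

// Controller fragment: the handler referenced from the view
onSavePressed: function (oEvent) {
    sap.m.MessageToast.show("Save pressed");
}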
One thing that got me was that I was using a custom data object in one of my controls in my JavaScript view; there I was able to create the custom data object directly in the view as:

var customData = new sap.ui.core.CustomData({
    writeToDom: true,
    key: "someKey",
    value: "someValue" // make sure the value is a string so that you don't get syntax errors
});
anotherControl.addCustomData(customData);

However, in the XML view, I had to add an additional XML namespace in order to be able to use this as an XML attribute. At the top of the view where you will be using a custom data attribute, you need to declare the namespace and then use it as follows (example adapted from the SAPUI5 developer guide):

<mvc:View
    xmlns="sap.m"
    xmlns:mvc="sap.ui.core.mvc"
    xmlns:app="http://schemas.sap.com/sapui5/extension/sap.ui.core.CustomData/1">
    <Button id="myBtn" text="Click to show stored coordinates data"
        app:coordinates="{/coordinates}" />
</mvc:View>
I will not go in detail into each control, but I wanted to share some thoughts about this exercise and point out some best practices of developing a UI, remind UI developers where to find the documentation and also share the way I go about when developing an application. It is very important to know the details and know what is available at the time of development so that it facilitates development itself in the long run. Please share your experiences if you have recently upgraded a UI, or if you happen to be in the process of an upgrade .
Thank you again for reading this blog and hope you can read some of the others!
Happy coding!
great
same blog in spanish :
https://blogs.sap.com/2017/08/25/pointers-while-updating-sapui5-version/
Bean method
Since Camel 1.3
from("activemq:topic:OrdersTopic") .filter().method("myBean", "isGoldCustomer") .to("activemq:BigSpendersQueue");
Using Bean Expressions in Spring XML
<route>
    <from uri="activemq:topic:OrdersTopic"/>
    <filter>
        <method ref="myBean" method="isGoldCustomer"/>
        <to uri="activemq:BigSpendersQueue"/>
    </filter>
</route>
Writing the Expression Bean
The bean in the above examples is just any old Java Bean with a method called isGoldCustomer() that returns some object that is easily converted to a boolean value; in this case it is used as a predicate.
Example:
public class MyBean {
    public boolean isGoldCustomer(Exchange exchange) {
        // ...
    }
}
We can also use the Bean Integration annotations.
Example:
public boolean isGoldCustomer(String body) {...}
or
public boolean isGoldCustomer(@Header(name = "foo") Integer fooHeader) {...}

The bean expression can also be created explicitly through the BeanLanguage in the Java DSL:

from("activemq:topic:OrdersTopic")
    .filter().expression(BeanLanguage(MyBean.class, "isGoldCustomer"))
    .to("activemq:BigSpendersQueue");

It can also reference an existing bean instance:

private MyBean my;

from("activemq:topic:OrdersTopic")
    .filter().expression(BeanLanguage.bean(my, "isGoldCustomer"))
    .to("activemq:BigSpendersQueue");
In Camel 2.2 you can avoid the BeanLanguage and have it just as:

private MyBean my;

from("activemq:topic:OrdersTopic")
    .filter().expression(bean(my, "isGoldCustomer"))
    .to("activemq:BigSpendersQueue");
Which can also be done in a slightly shorter and nicer way:

private MyBean my;

from("activemq:topic:OrdersTopic")
    .filter().method(my, "isGoldCustomer")
    .to("activemq:BigSpendersQueue");

The Bean language supports 12 options; see the component reference for the full list.
https://camel.apache.org/components/latest/languages/bean-language.html
I'm starting with input data like this:

df1 = pd.DataFrame( {
    "Name" : ["Alice", "Bob", "Mallory", "Mallory", "Bob", "Mallory"],
    "City" : ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"] } )
Grouping is simple enough:
g1 = df1.groupby( [ "Name", "City"] ).count()
and printing yields a GroupBy object:

                  City  Name
Name    City
Alice   Seattle      1     1
Bob     Seattle      2     2
Mallory Portland     2     2
        Seattle      1     1
But what I want eventually is another DataFrame object that contains all the rows in the GroupBy object. In other words I want to get the following result:
                  City  Name
Name    City
Alice   Seattle      1     1
Bob     Seattle      2     2
Mallory Portland     2     2
Mallory Seattle      1     1
I can't quite see how to accomplish this in the pandas documentation. Any hints would be welcome.
g1 here is a DataFrame. It has a hierarchical index, though:
In [19]: type(g1)
Out[19]: pandas.core.frame.DataFrame

In [20]: g1.index
Out[20]:
MultiIndex([('Alice', 'Seattle'), ('Bob', 'Seattle'), ('Mallory', 'Portland'),
            ('Mallory', 'Seattle')], dtype=object)
Perhaps you want something like this?
In [21]: g1.add_suffix('_Count').reset_index()
Out[21]:
      Name      City  City_Count  Name_Count
0    Alice   Seattle           1           1
1      Bob   Seattle           2           2
2  Mallory  Portland           2           2
3  Mallory   Seattle           1           1
Or something like:
In [36]: DataFrame({'count' : df1.groupby( [ "Name", "City"] ).size()}).reset_index()
Out[36]:
      Name      City  count
0    Alice   Seattle      1
1      Bob   Seattle      2
2  Mallory  Portland      2
3  Mallory   Seattle      1
I want to slightly change the answer given by Wes, because version 0.16.2 requires as_index=False. If you don't set it, you get an empty dataframe.

Aggregation functions will not return the groups that you are aggregating over if they are named columns, when as_index=True, the default. The grouped columns will be the indices of the returned object.

Passing as_index=False will return the groups that you are aggregating over, if they are named columns.

Aggregating functions are ones that reduce the dimension of the returned objects, for example: mean, sum, size, count, std, var, sem, describe, first, nth, min, max. This is what happens when you do for example DataFrame.sum() and get back a Series.

nth can act as a reducer or a filter, see here.
import pandas as pd

df1 = pd.DataFrame({"Name": ["Alice", "Bob", "Mallory", "Mallory", "Bob", "Mallory"],
                    "City": ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"]})
print df1
#
#        City     Name
# 0   Seattle    Alice
# 1   Seattle      Bob
# 2  Portland  Mallory
# 3   Seattle  Mallory
# 4   Seattle      Bob
# 5  Portland  Mallory
#
g1 = df1.groupby(["Name", "City"], as_index=False).count()
print g1
#
#                   City  Name
# Name    City
# Alice   Seattle      1     1
# Bob     Seattle      2     2
# Mallory Portland     2     2
#         Seattle      1     1
#
EDIT:

In version 0.17.1 and later you can use subset in count and reset_index with parameter name in size:
print df1.groupby(["Name", "City"], as_index=False ).count() #IndexError: list index out of range print df1.groupby(["Name", "City"]).count() #Empty DataFrame #Columns: [] #Index: [(Alice, Seattle), (Bob, Seattle), (Mallory, Portland), (Mallory, Seattle)] print df1.groupby(["Name", "City"])[['Name','City']].count() # Name City #Name City #Alice Seattle 1 1 #Bob Seattle 2 2 #Mallory Portland 2 2 # Seattle 1 1 print df1.groupby(["Name", "City"]).size().reset_index(name='count') # Name City count #0 Alice Seattle 1 #1 Bob Seattle 2 #2 Mallory Portland 2 #3 Mallory Seattle 1
The difference between count and size is that size counts NaN values while count does not.
https://pythonpedia.com/en/knowledge-base/10373660/converting-a-pandas-groupby-output-from-series-to-dataframe
I can't figure out how to rotate the text on the X Axis. Its a time stamp, so as the number of samples increase, they get closer and closer until they overlap. I'd like to rotate the text 90 degrees so as the samples get closer together, they aren't overlapping.
Below is what I have, it works fine with the exception that I can't figure out how to rotate the X axis text.
import sys
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import datetime

font = {'family' : 'normal',
        'weight' : 'bold',
        'size'   : 8}
matplotlib.rc('font', **font)

values = open('stats.csv', 'r').readlines()
time = [datetime.datetime.fromtimestamp(float(i.split(',')[0].strip())) for i in values[1:]]
delay = [float(i.split(',')[1].strip()) for i in values[1:]]

plt.plot(time, delay)
plt.grid(b='on')
plt.savefig('test.png')
This works for me:
plt.xticks(rotation=90)
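For context, a sketch of where that call fits in the original script (same variables as above); the tight_layout call is an optional extra, an assumption added here to keep the rotated labels from being clipped:

plt.plot(time, delay)
plt.xticks(rotation=90)   # rotate the time-stamp labels so they no longer overlap
plt.grid(b='on')
plt.tight_layout()        # optional: make room for the rotated labels
plt.savefig('test.png')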
https://pythonpedia.com/en/knowledge-base/10998621/rotate-axis-text-in-python-matplotlib
Threads and Synchronization
Terminating Threads At Specific Time
Eric Haskins
I have a Java app I am using to get 15 different feeds. PHP was too linear, so I am whipping this up to help speed up my app.
I need to make the 15 requests and, say, I pass a timeout of 500 milliseconds. I want all threads to die at that point, and then I work with whatever data I get.
It just doesn't seem to be interrupting. What would be the best way to stop all the threads and get any data they have? Here is the code I am testing with now.
package http;

import java.util.ArrayList;

/**
 * @author e
 */
public class Main {

    /**
     * @param args[0] long
     */
    public static void main(String[] args) {
        long delayMillis;
        if (args.length == 0) {
            delayMillis = 500; // .5 seconds
        } else {
            delayMillis = Long.parseLong(args[0]);
        }

        String res[];
        res = new String[5];
        // Just Example URL's
        res[0] = "";
        res[1] = "";
        res[2] = "";
        res[3] = "";
        res[4] = "";

        for (int i = 0; i < res.length; i++) {
            String name = "pr" + i;
            HTTPRequest Request = new HTTPRequest();
            Request.requestUrl = res[i];
            Request.timeout = delayMillis;
            Thread th = new Thread(Request);
            th.setName(name);
            th.start();
        }

        Thread[] threads = new Thread[Thread.activeCount()];
        Thread.enumerate(threads);
        for (Thread t : threads) {
            System.out.println(t.getName());
            try {
                t.join(delayMillis);
                if (t.isAlive()) {
                    t.interrupt();
                    System.out.println("Terminating Thread " + t.getName());
                }
            } catch (InterruptedException e) {
            }
        }
        System.out.println("bye\n");
    }
}

package http;

/**
 * @author e
 */
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class HTTPRequest implements Runnable {

    public String requestUrl;
    public Long timeout;

    public static void main(String[] args) {
        String requestUrl = "";
        long timeout = 250;
    }

    public void run() {
        try {
            URL url = new URL(requestUrl.toString());
            BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
            String inputLine;
            System.out.println("-----RESPONSE " + url.getHost() + " START-----");
            while ((inputLine = in.readLine()) != null) {
                System.out.println(inputLine);
            }
            in.close();
            System.out.println("-----RESPONSE " + url.getHost() + " END-----");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Steve Luke
Bartender
Posts: 4179
posted 10 years ago
There are a few problems with your code.
1) Your Thread never responds to the interrupt. Calling the Thread#interrupt() method sets a status flag saying the Thread should be interrupted. Some methods will respond to that interruption (either by throwing an InterruptedException or by doing some other signal specified in the API). Nothing in the code you provided responds to interruption, so you must check it yourself. You do this by periodically checking the interrupted status flag (either using the static method Thread.interrupted() or Thread.currentThread().isInterrupted(); they behave differently, so read the docs on both to see which is most appropriate to use). A good place to put such a check would be in the while() condition.
2) In the main() method you are waiting for progressively longer times for each thread. The first thread will be allowed to run for 500ms, the second will be allowed to run for 1000ms (the 500ms main waited for t1 plus 500 more), the third for 1500ms, and so on. If you want the app to work for just 500ms in total, then you should have a Thread.sleep(500) followed by checking whether each thread is alive and, if it is, interrupting it.
3) When you call the Thread.enumerate() method, you get hold of all the Threads in the current Thread's ThreadGroup. That would include the main thread and any other threads in the same group as the main thread. Are you sure this is what you want? You don't really know all the Threads which are running; do you want to interrupt unknown threads? What happens when you try to join() the main thread? I am not sure. At the very least you will interrupt it, and since you never check/reset the main thread's interrupted status, the next call to join() will throw an InterruptedException (which you don't log) and bring the main thread to an end without informing other Threads to stop. Perhaps you should make sure your custom threads go into their own ThreadGroup, or put them into an array as you generate them.
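To make point 1 concrete, here is a minimal sketch of an interruption-aware read loop for the poster's HTTPRequest.run() method (the StringBuilder is just an illustration of collecting partial data; it is not in the original code). Note that interrupt() will not wake a readLine() that is already blocked on the socket; for that you would also need a read timeout on the connection.

public void run() {
    try {
        URL url = new URL(requestUrl);
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        StringBuilder body = new StringBuilder();
        String inputLine;
        // Stop reading as soon as this thread has been interrupted
        while (!Thread.currentThread().isInterrupted()
                && (inputLine = in.readLine()) != null) {
            body.append(inputLine).append('\n');
        }
        in.close();
        System.out.println("-----RESPONSE " + url.getHost() + " (" + body.length() + " chars)-----");
    } catch (IOException e) {
        e.printStackTrace();
    }
}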
Steve
With a little knowledge, a
cast iron skillet
is non-stick and lasts a lifetime.
https://www.coderanch.com/t/473820/java/Terminating-Threads-Specific-Time
Reflections alternatives and similar libraries
Based on the "Introspection" category
jOOR - jOOR stands for jOOR Object Oriented Reflection. It is a simple wrapper for the java.lang.reflect package.
ClassGraph - ClassGraph (formerly FastClasspathScanner) is an uber-fast, ultra-lightweight, parallelized classpath scanner and module scanner for Java, Scala, Kotlin and other JVM languages.
ReflectASM - ReflectASM is a very small Java library that provides high performance reflection by using code generation.
Objenesis - Allows dynamic instantiation without default constructor, e.g. constructors which have required arguments, side effects or throw exceptions.
Mirror - Mirror was created to bring light to a simple problem, usually named ReflectionUtil, which is on almost all projects that rely on reflection to do advanced tasks.
README
Released
org.reflections:reflections:0.9.12 - with support for Java 8
Reflections library has over 2.5 million downloads per month from Maven Central, and is being used by thousands of projects and libraries. We're looking for maintainers to assist in reviewing pull requests and managing releases, please reach out.
Java runtime metadata analysis, in the spirit of Scannotations
Reflections scans your classpath, indexes the metadata, allows you to query it at runtime, and may save and collect that information for many modules within your project.
Using Reflections you can query your metadata such as:
- get all subtypes of some type
- get all types/members annotated with some annotation
- get all resources matching a regular expression
- get all methods with specific signature including parameters, parameter annotations and return type
Intro
Add Reflections to your project. For Maven projects, just add this dependency:

<dependency>
    <groupId>org.reflections</groupId>
    <artifactId>reflections</artifactId>
    <version>0.9.12</version>
</dependency>

A typical use of Reflections would be:

Reflections reflections = new Reflections("my.project");
Set<Class<? extends SomeType>> subTypes = reflections.getSubTypesOf(SomeType.class);
Set<Class<?>> annotated = reflections.getTypesAnnotatedWith(SomeAnnotation.class);
Usage
Basically, to use Reflections, first instantiate it with URLs and scanners:

//scan urls that contain 'my.package', include inputs starting with 'my.package', use the default scanners
Reflections reflections = new Reflections("my.package");

//or using ConfigurationBuilder
new Reflections(new ConfigurationBuilder()
     .setUrls(ClasspathHelper.forPackage("my.project.prefix"))
     .setScanners(new SubTypesScanner(), new TypeAnnotationsScanner().filterResultsBy(optionalFilter), ...)
     .filterInputsBy(new FilterBuilder().includePackage("my.project.prefix"))
     ...);
Then use the convenient query methods: (depending on the scanners configured)
//SubTypesScanner
Set<Class<? extends Module>> modules = reflections.getSubTypesOf(com.google.inject.Module.class);

//TypeAnnotationsScanner
Set<Class<?>> singletons = reflections.getTypesAnnotatedWith(javax.inject.Singleton.class);

//ResourcesScanner
Set<String> properties = reflections.getResources(Pattern.compile(".*\\.properties"));

//MethodAnnotationsScanner
Set<Method> resources = reflections.getMethodsAnnotatedWith(javax.ws.rs.Path.class);
Set<Constructor> injectables = reflections.getConstructorsAnnotatedWith(javax.inject.Inject.class);

//FieldAnnotationsScanner
Set<Field> ids = reflections.getFieldsAnnotatedWith(javax.persistence.Id.class);

//MethodParameterScanner
Set<Method> someMethods = reflections.getMethodsMatchParams(long.class, int.class);
Set<Method> voidMethods = reflections.getMethodsReturn(void.class);
Set<Method> pathParamMethods = reflections.getMethodsWithAnyParamAnnotated(PathParam.class);

//MethodParameterNamesScanner
List<String> parameterNames = reflections.getMethodParamNames(Method.class)

//MemberUsageScanner
Set<Member> usages = reflections.getMethodUsages(Method.class)
- If no scanners are configured, the default will be used - SubTypesScanner and TypeAnnotationsScanner.
- Classloader can also be configured, which will be used for resolving runtime classes from names.
- Reflections expands super types by default. This solves some problems with transitive URLs that are not scanned.
Also, browse the tests directory to see some more examples.
ReflectionUtils
ReflectionUtils contains some convenient Java reflection helper methods for getting types/constructors/methods/fields/annotations matching some predicates, generally in the form of getAllXXX(type, withYYY),
for example:

import static org.reflections.ReflectionUtils.*;

Set<Method> getters = getAllMethods(someClass,
  withModifier(Modifier.PUBLIC), withPrefix("get"), withParametersCount(0));

//or
Set<Method> listMethodsFromCollectionToBoolean = getAllMethods(List.class,
  withParametersAssignableTo(Collection.class), withReturnType(boolean.class));

Set<Field> fields = getAllFields(SomeClass.class, withAnnotation(annotation), withTypeAssignableTo(type));
See more in the ReflectionUtils javadoc
Integrating into your build lifecycle.
For Maven, see example using gmavenplus in the reflections-maven repository
Other use cases
See the UseCases wiki page
Contribute
Pull requests are welcomed!!
Apologize for not maintaining this repository continuously! We're looking for maintainers to assist in reviewing pull requests and managing releases, please reach out.
The license is WTFPL, just do what the fuck you want to.
This library is published as an act of giving and generosity, from developers to developers.
Please feel free to use it, and to contribute to the developers community in the same manner. Dāna
Cheers
https://java.libhunt.com/reflections-alternatives
Plugin to interact with the Tropo Cloud platform
Dependency:
compile "org.grails.plugins:tropo-webapi-grails:0.2.1"
Summary
Installation
grails install-plugin tropo-webapi-grails
Description
Tropo is an Open Source, massively scalable developer platform that makes it simple to quickly and easily build phone, SMS and Instant Messaging applications - or applications that handle all three - using the web technologies you already know and Tropo's powerful cloud API. This plugin implements the Tropo WebApi, a server-side API that lets developers create, with very few lines of code, applications that can send and receive SMSs and calls, build instant messaging powered applications, build conferences, transfer calls, record conversations, play and send sound files to other people and many other cool things. Apart from the Tropo WebApi, this plugin also implements the Tropo REST API, a provisioning API that lets you configure, launch and manage all your Tropo applications. To run your Tropo applications you will need a Tropo powered application server. Voxeo (Tropo's authors) offers the Tropo Cloud platform, which is totally free for developers and has very competitive rates for production deployment.
Requires Grails 1.0.5 or higher.
Includes
- commons-codec-1.3.jar
- ezmorph-1.0.6.jar
- http-builder-0.5.1.jar
- httpclient-4.0.3.jar
- httpcore-4.0.1.jar
- json-lib-2.4-jdk15.jar
- xml-resolver-1.2.jar
Installation
You can install this plugin by running the following command:
grails install-plugin tropo-webapi-grails
Screencast
Be sure to check out this screencast on how to create your first project with Tropo and Grails:
5 minutes tutorial
In the following sections we will create a sample Grails application that uses Tropo services. It will take you less than 5 minutes.
Creating your Tropo application
Once you have the plugin installed you will find that creating voice and SMS powered applications becomes incredibly easy. But first of all you need to sign up with Tropo. Once you have registered and signed in you can proceed and create your first application. As we are creating an application that uses the WebApi, you need to click on the plus button and then choose WebApi application like in the picture below:
- Calling a phone number that you have set up
- Calling a skype number
- Sending a SMS
- Sending a message through an IM network
- Using the REST API
- From Grails itself using the TropoService object (effectively it runs a GET request to the REST API)
Creating your Controller
Once your application is created, adding voice support to it is really simple. You do not need any specific artifacts, so you can add voice and messaging support to any of your Grails controllers, services or Taglibs. However, you will need to provide some implementation for the Grails controller that you have linked to your application in the previous steps. The Tropo WebApi is based on JSON and defines many different methods. This plugin provides a Groovy builder that makes it really easy to interact with the Tropo platform. Below you can find an example for our TropoController.groovy:

import com.tropo.grails.TropoBuilder

class TropoController {
    def index = {
        def tropo = new TropoBuilder()
        tropo.tropo {
            say("Hello World. Hello Tropo. We are going to play a song.")
            say("")
            hangup()
        }
        tropo.render(response)
    }
}

Note that you can also use a more 'traditional' approach. The following source code is equivalent to the snippet above. Use whatever syntax you feel more comfortable with.

import com.tropo.grails.TropoBuilder

class TropoController {
    def index = {
        def tropo = new TropoBuilder()
        tropo.say("Hello World. Hello Tropo. We are going to play a song.")
        tropo.say("")
        tropo.hangup()
        tropo.render(response)
    }
}

Testing your application
If you have created some landline phone numbers you can test your application just by calling those phone numbers. For example, in this tutorial you could call either +44 1259340253 or (407) 680-0744 and you would hear a welcome message and an mp3 song would be played until either the song stops or you hang up. You can also use Skype to call your application:
Source Code
Although the plugin is available from Grails SVN, the latest source code for this plugin is hosted at GitHub. You can find it here:
Using Tests as a reference
In the source code there are several unit tests that show how to use the Groovy builder. Check them out here.
Configuration
This plugin does not require specific configuration parameters by default. If you plan to use the REST API then you should provide your Tropo username and password, as most of the REST API methods are secured by default. In this case you need to add the following lines to your Config.groovy file, providing your Tropo username and password:
tropo.username = "yourtropousername" tropo.password = "yourtropopassword"
Interacting with the Tropo REST API
The Tropo REST API allows you to interact with the Tropo platform through a REST based service. This API can be used to directly invoke your applications (for example you could provide a phone image link in your webapp GUI that will invoke your application and trigger the Controller that we saw in the 5 minutes tutorial). Another usage for this API is to administer your account. You can use this API to create new applications, create and delete phone numbers, create and delete IM accounts, send signals to your application (like hanging up), etc. In summary, it is an admin and provisioning API. Although you could send the REST requests yourself, this plugin provides a handy Grails service named TropoService that is injected in your applications and that you may use to execute all the different Tropo REST API calls. Here is an example of how simple it is to launch your Tropo application using the TropoService:

class TropoController {
    def tropoService

    def index = {
        tropoService.launchSession(token: '72979191d971e344b46a0e4a3485571844250e689bb13548a75f1cce2ce9a53dde82c3fe944479bcb650500e')
    }
}

As you can see in the code above, some of the TropoService calls will require a token. Each Tropo application has a unique token for handling voice calls and a different token for handling messaging. You can find your application's tokens at your application page in Tropo.com.

IMPORTANT: Transcription issues
The interaction with Tropo is based on the interchange of JSON based documents. There is a known bug where Tropo sends transcription POST requests with a Content-Type header set to 'application/x-www-form-urlencoded', which basically will make the JSON converter fail with a message pretty similar to "org.codehaus.groovy.grails.web.json.JSONException: Missing value. at character 0 of". If you face this issue, the workaround is to disable Grails' automatic content handling and parse the POST body yourself with the JSON converter. This is actually pretty easy. Here is an example:

import grails.converters.JSON

class TestController {
    static allowedMethods = [add: 'POST']

    def index = { }

    def transcription = {
        def raw = request.reader.text
        def json = JSON.parse(raw)
        render "ok"
    }
}

Examples using the Tropo Builder
As you could see in the 5 minutes tutorial above, this plugin provides a Groovy builder that makes it really easy to interact with the Tropo platform. The following examples showcase some of the actions that you could do in your Grails applications.
Saying something to the user
Find more about the 'say' method at the 'say' reference page
def builder = new TropoBuilder()
builder.tropo {
    say('Hello Mr. User')
}

Asking a question to the user
Find more about the 'ask' method at the 'ask' reference page

def builder = new TropoBuilder()
builder.tropo {
    ask(name: 'foo', bargein: true, timeout: 30, required: true, choices: '[5 DIGITS]') {
        say('Please say your account number')
    }
}

Asking a question and redirecting the user to a different controller
Simply asking a question wasn't really a very useful action. Let's ask a question and redirect the user to a new action in our controller:

def builder = new TropoBuilder()
builder.tropo {
    ask(name: 'foo', bargein: true, timeout: 30, required: true) {
        say('Please say your account number')
        choices(value: '[5 DIGITS]')
    }
    on(event: 'continue', next: '/tropo/zipcode')
}

Displaying the outcome of your application actions
In the example above we've seen how we can redirect to a different controller to handle the user's input, for example when entering a number or saying something. How can we get the user input from our controllers? It is really easy. Tropo will send a POST request to our controller containing the result of the action that we have executed. That includes any input from the user:

def zipcode = {
    def tropoRequest = request.JSON
    def zipcode = tropoRequest.result.actions.value

    def builder = new TropoBuilder()
    builder.tropo {
        say(value: "Your zipcode is ${zipcode}. Thank you.")
        hangup()
    }
    builder.render(response)
}
Creating a conference
The conference action allows multiple lines in separate sessions to be conferenced together so that the parties on each line can talk to each other simultaneously.
Find more about the 'conference' method at the 'conference' reference page

def builder = new TropoBuilder()
builder.tropo {
    conference(name: 'foo', id: '1234', mute: false, send_tones: false, exit_tone: '#') {
        on(event: 'join') {
            say(value: 'Welcome to the conference')
        }
        on(event: 'leave') {
            say(value: 'Someone has left the conference')
        }
    }
}

Hanging up
As its name says, the hangup action will simply hang up the call.
Find more about the 'hangup' method at the 'hangup' reference page

def builder = new TropoBuilder()
builder.hangup()

Recording a call
The record action plays a prompt (audio file or text to speech), then optionally waits for a response from the caller and records it.
Find more about the 'record' method at the 'record' reference page

def builder = new TropoBuilder()
builder.record(name: 'foo', url: '', beep: true, sendTones: true, exitTone: '#') {
    transcription(id: 'bling', url: 'mailto:[email protected]', emailFormat: 'encoded')
    say('Please say your account number')
    choices(value: '[5 DIGITS]')
}

Redirecting a call
Redirect is used to deflect the call to a third party SIP address. This function must be called before the call is answered; for active calls, consider using transfer.
Find more about the 'redirect' method at the 'redirect' reference page

def builder = new TropoBuilder()
builder.tropo {
    redirect(to: 'sip:1234', from: '4155551212')
}
Rejecting a call
This action rejects the incoming call. For example, an application could inspect the callerID variable to determine if the user is known, then reject the call accordingly.
Find more about the 'reject' method at the 'reject' reference page

def builder = new TropoBuilder()
builder.reject()

Starting a recording
Allows your application to begin recording the current session. The resulting recording may then be sent via FTP or an HTTP POST/Multipart Form.
Find more about the 'startRecording' method at the 'startRecording' reference page

builder.tropo {
    startRecording(url: '')
}

Stopping a recording
This stops the recording of the current call after startCallRecording has been called.
Find more about the 'stopRecording' method at the 'stopRecording' reference page

def builder = new TropoBuilder()
builder.stopRecording()

Transferring a call
The transfer action will transfer an already answered call to another destination / phone number.
Find more about the 'transfer' method at the 'transfer' reference page

builder.tropo {
    transfer(to: 'tel:+14157044517') {
        on(event: 'unbounded', next: '/error')
        choices(value: '[5 DIGITS]')
    }
}
Calling someone
The call method initiates an outbound call or a text conversation. Note that this verb is only valid when there is no active WebAPI call.
Find more about the 'call' method at the 'call' reference page

builder.call(to: 'foo', from: 'bar', network: 'SMS', channel: 'TEXT', timeout: 10, answerOnMedia: false) {
    headers(foo: 'foo', bar: 'bar')
    startRecording(url: '', method: 'POST', format: 'audio/mp3', username: 'jose', password: 'passwd')
}

Sending messages
The message action creates a call, says something and then hangs up, all in one step. This is particularly useful for sending out a quick SMS or IM.
Find more about the 'message' method at the 'message' reference page

def builder = new TropoBuilder()
builder.message(to: 'foo', from: 'bar', network: 'SMS', channel: 'TEXT', timeout: 10, answerOnMedia: false) {
    headers(foo: 'foo', bar: 'bar')
    startRecording(url: '', method: 'POST', format: 'audio/mp3', username: 'jose', password: 'passwd')
    say('Please say your account number')
}

Event handling
Some actions may respond to different events. You can specify the different events as parameters. Refer to the WebApi reference to get more information.

def builder = new TropoBuilder()

def help_stop_choices = '0(0,help,i do not know, agent, operator, assistance, representative, real person, human), 9(9,quit,stop,shut up)'
def yes_no_choices = 'true(1,yes,sure,affirmative), false(2,no,no thank you,negative),' + help_stop_choices

builder.ask(name: 'donate_to_id', bargein: true, timeout: 10, silenceTimeout: 10, attempts: 4) {
    say([[event: 'timeout', value: 'Sorry, I did not hear anything.'],
         [event: 'nomatch:1 nomatch:2 nomatch:3', value: "Sorry, that wasn't a valid answer. You can press or say 1 for 'yes', or 2 for 'no'."],
         [value: 'You chose organization foobar. Are you ready to donate to them? If you say no, I will tell you a little more about the organization.'],
         [event: 'nomatch:3', value: 'This is your last attempt.']])
    choices(value: yes_no_choices)
}

Embedding builders
Sometimes you may need to create a builder in one method, then run some logic and append extra content to the builder. This plugin lets you append builders within other builders. Here is an example:

def builder1 = new TropoBuilder()
builder1.tropo {
    on(event: 'continue', next: '/result.json')
}

def builder2 = new TropoBuilder()
builder2.tropo {
    ask(name: 'foo', bargein: true, timeout: 30, required: true) {
        say('Please say your account number')
        choices(value: '[5 DIGITS]')
    }
    append(builder1)
}

Author
Martín Pérez ([email protected]). Please report any issues to the guys at Voxeo support. You can also use the Grails User mailing list and/or write up an issue in JIRA under the Tropo-Webapi-Grails component.
History
October 4, 2011
- Fixed: public/private mixed warning from STS2.8.0M1 and grails snapshot
- Released version 0.2.1
- Added a new isEmpty method to TropoBuilder
- Bug fixed. Append could only be used at the end of a closure.
- Bug fixed. Recordings use a choices element not exit_tone attribute.
- Added support for the new interdigitTimeout element in WebApi
- Released version 0.2 of the plugin
- Added the possibility to embed builders within other builders
- released version 0.1.2
- Fixed TropoBuilder's toString
- released patched version 0.1.1
- released initial version 0.1
http://www.grails.org/plugin/tropo-webapi-grails?skipRedirect=true
[Solved] Can't link to Windows library
Hello all. After much reading and hacking, I have a program that compiles, but doesn't link. The program is the client half of a tcp connection. It complains that it can't find a library:
readevalprintloopthread.obj:-1: error: LNK2019: unresolved external symbol __imp__freeaddrinfo@4 referenced in function "public: void __thiscall ReadEvalPrintLoopThread::ConnectToServer(void)" (?ConnectToServer@ReadEvalPrintLoopThread@@QAEXXZ)
I have a pure Microsoft solution somebody gave me, and I can make that solution stop building by hiding a certain library, so I think I've found the library it can't find. (WS2_32.lib) I checked my lib path based on this suggestion on the net:
but the lib path seems to be there.
Does anybody know why it can't find this library?
(I'm sure somebody will ask me, so I'll say now: I'm using Microsoft socket facilities and not Qt socket facilities because the objective is to show somebody that I understand Unix/Windows tcp. I wrote it in Qt Creator so I could use the other facilities.)
Regards, Rick
Are you using qmake? If yes, could you show us the LIBS statements?
If your solution is pure Ms, try to just add WS2_32.lib to the LIBS statement
@win32:LIBS += -lWS2_32@
or
@win32:LIBS += WS2_32.lib@
Hi. I made a similarly named post over in General, but I've realized this is a better place for it, and I've figured out a simpler way to ask my question.
Can anybody compile and link this? :
@
#include <QtCore/QCoreApplication>
#include <ws2tcpip.h>
int main(int argc, char *argv[]) {
QCoreApplication a(argc, argv);
struct addrinfo * servinfo = NULL;
freeaddrinfo(servinfo);
return a.exec();
}
@
When I try, I can't link it. :
LNK2019: unresolved external symbol __imp__freeaddrinfo@4 referenced in function _main
Regards, Rick
EDIT: please use @-tags for code highlight, Gerolf
Hi. I am using Qt Creator. I suspect that if I could find this LIBS of which you speak, my problems would be over.
I'm willing to say "Qt Creator doesn't handle this correctly. I should go one level lower, closer to the metal, and use qmake." Is that what I should say?
I figured out a way to ask my question more simply, and so I did, over in the Tools forum. Same title.
Regards, Rick
- Eddy Moderators
I merged this 2 threads because this is about the same problem.
Could you use @tags for your code please? Or use button with <> on it on top of editor. It makes it more readible for others.
The LIBS directive has to go into your .pro file.
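For anyone following along, a minimal sketch of a .pro file showing where that directive goes (the project and file names here are placeholders, not from the original post):

# client.pro -- hypothetical qmake project file
QT       += core
TARGET    = client
TEMPLATE  = app
SOURCES  += main.cpp readevalprintloopthread.cpp

# Link against the Winsock 2 import library on Windows builds
win32:LIBS += -lWS2_32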
bq. Hi. I made a similarly named post over in General, but I’ve realized this is a better place for it, and I’ve figured out a simpler way to ask my question.
Can anybody compile and link this? :
....
When I try, I can’t link it. :
LNK2019: unresolved external symbol __imp__freeaddrinfo@4 referenced in function _main
I have the same error without changing anything, but when I add @win32:LIBS += -lWS2_32@ to the .pro file it works fine!
I will use @tags the next time I include code. And I guess two threads for the same topic is considered undesirable. I won't do that again.
Adding the line to ...pro worked. I wasn't aware that I had access to qmake through that file.
This environment is very pleasant. There seem to be fewer unnecessary obstacles than in other environments.
Thanks for your help.
Rick
- Eddy Moderators
Glad to hear you like Qt!
Please could you edit your title and add [Solved] in front of it?
https://forum.qt.io/topic/8132/solved-can-t-link-to-windows-library
In current versions of Neo the domain model is defined in an XML file and this file is used to generate the object model as well as the database schema.
<table name="titles" javaName="Title">
  <column name="price" type="DECIMAL" />
  <column name="title" javaName="CoverTitle" required="true" type="VARCHAR" size="80" />
  <column name="title_id" primaryKey="true" required="true" type="CHAR" size="6" />
  <column name="pub_id" hidden="true" type="CHAR" size="4" />
  <foreign-key>
    <reference local="pub_id" foreign="pub_id"/>
  </foreign-key>
</table>
The object model comprises a TitleBase class that contains the generated code for the attributes and relations and a Title class that contains the custom code. Title inherits from TitleBase.
Feedback suggests that many people prefer another approach and would like to start with the objects rather than an XML file. The idea is that one defines the domain model in abstract classes and the Neo code generator uses these to create concrete implementations as subclasses; and the database schema of course. There would be no XML file in this approach.
The above model could be expressed as follows. You'll notice that the idea is that Neo 'guesses' default values, such as type and name, and you only need to specify those if there's an exception. For example, the property is CoverTitle which would be translated to cover_title as a column name. However, we want just title in the database so it needs to be specified.
[NeoEntity(table="titles")]
public abstract class Title
{
    [NeoAttribute]
    public abstract Decimal Price { get; set; }

    [NeoAttribute(column="title", size=80)]
    public abstract String CoverTitle { get; set; }

    [NeoAttribute(primaryKey=true, type="CHAR", size=6)]
    public abstract string TitleId { get; set; }

    [NeoAttribute(type="CHAR", size=4)]
    protected abstract string PubId { get; set; }

    [NeoRelation(localKey="PubId", targetKey="PubId")]
    public abstract Publisher Publisher { get; set; }

    public void SlashPrice()
    {
        Price = Price * 0.9m;
    }
}
From this, the code generator would create a class, say TitleImpl, that contains similar logic to the one found in TitleBase in the old model. A key difference is that developers would not have to be aware of this, as all the typing would be based on Title.
Sep 14, 2004
Ingo Lundberg says: It should probably say public abstract class (not only for correct syntax, but the text also mentions abstract classes).
Jan 25, 2005
Paul Gielens says: If one would decide to make the extra mile, I'd prefer POCO.
http://docs.codehaus.org/display/NEO/Starting+with+objects
Using MBean Notifications
The following sections provide an overview of how to use the various notifications that can be broadcast from WebLogic Server MBeans:
Overview
All WebLogic Server MBeans implement the javax.management.NotificationBroadcaster interface, which means they can emit standard JMX notification types.
To observe MBean notifications, you simply implement the NotificationListener interface in your client application, and register the listener class with the MBeans whose notifications you want to receive. The figure below shows a basic system to monitor notifications using a JSP or Servlet.
A listener class can optionally register a NotificationFilter class, which provides additional control over which notifications the listener receives.
Note: For a complete explanation of JMX notifications and how they work, see the Sun Microsystems J2EE JMX specification.
Making Notifications Available to Outside Clients
The JMX version 1.0 specification does not describe how to make notifications available to clients outside the broadcasting MBean's JVM. WebLogic Server version 6.1 makes notifications available externally via the weblogic.management.RemoteNotificationListener interface.
RemoteNotificationListener extends javax.management.NotificationListener and java.rmi.Remote, making MBean notifications available to external clients via RMI. Remote Java clients simply implement RemoteNotificationListener, rather than NotificationListener, to accept WebLogic MBean notifications, as shown below.
Registration of the remote Java client listener is accomplished through the standard JMX addNotificationListener() method.
MBean Notification Summary
WebLogic Server notifications use the standard notification classes identified in the JMX 1.0 specification. In addition, WebLogic Server provides additional notification classes and notification helper classes for working with WebLogic Server MBean log notifications. The following sections summarize the notification types and classes that JMX applications can use in WebLogic Server.
Basic JMX Notifications
All WebLogic Server MBeans implement the NotificationBroadcaster interface, and can generate the notification types described in the JMX 1.0 specification. These notification types include:
In addition, certain WebLogic Server MBeans support two additional notification types for attributes that have "add" and "remove" methods:
WebLogic Server Log Notifications
WebLogic Server provides the LogBroadcasterRuntime MBean, whose sole responsibility is to broadcast log messages. Client applications that need to listen for log notifications can simply register with the LogBroadcasterRuntime MBean.
A notification that represents a WebLogic Server log message contains many additional pieces of information, such as:
To help JMX applications extract and use this WebLogic Server log information, BEA provides the WebLogicLogNotification wrapper class. WebLogicLogNotification provides simple getter methods to extract parts of the log message, as well as methods to obtain the transaction ID, user ID, and version number associated with the message.
Working with WebLogic Server Log Notifications provides details on using the log notification supporting classes and interfaces.
Using Basic JMX Notifications
To receive WebLogic MBean notifications, an external client application must create a notification listener class and register that listener with the MBeans whose notifications it wants to receive.
The following sections describe these basic steps.
Creating a Notification Listener
The notification listener class is responsible for handling the JMX notifications broadcast by one or more MBeans. If your JMX application resides outside of the broadcasting MBean's JVM, the listener class should implement weblogic.management.RemoteNotificationListener, supplying a handleNotification() method to perform actions when notifications are received. An example implementation follows:
import javax.management.Notification;
import javax.management.NotificationFilter;
import javax.management.NotificationListener;
import javax.management.Notification.*;
...
public class WebLogicLogNotificationListener implements
RemoteNotificationListener {
...
public void handleNotification(Notification notification, Object obj) {
WebLogicLogNotification wln = (WebLogicLogNotification)notification;
System.out.println("WebLogicLogNotification");
System.out.println(" type = " +
wln.getType());
System.out.println(" message id = " +
wln.getMessageId());
System.out.println(" server name = " +
wln.getServername());
System.out.println(" timestamp = " +
wln.getTimeStamp());
System.out.println(" message = " +
wln.getMessage() + "\n");
}
}
Registering Notification Listeners with MBeans
Because all WebLogic Server MBeans are notification broadcasters, it is possible to register a NotificationListener with any available MBean. Registering a NotificationListener can be accomplished by calling the MBean's addNotificationListener() method.
However, in most cases it is preferable to register a listener using the MBean server's addNotificationListener() method. Using the javax.management.MBeanServer interface saves the trouble of looking up a particular MBean simply for registration purposes. For example, the listener defined in Creating a Notification Listener registers itself using:
rmbs = home.getMBeanServer();
oname = new WebLogicObjectName("TheLogBroadcaster",
"LogBroadcasterRuntime",
DOMAIN_NAME,
SERVER_NAME);
rmbs.addNotificationListener(oname,
listener,
null,
null);
Working with WebLogic Server Log Notifications
To receive log messages, client applications can use the standard JMX API to register a notification listener with the WebLogic Server LogBroadcasterRuntimeMBean, as shown in the previous examples. LogBroadcasterRuntimeMBean is responsible for generating notifications for log messages generated by a server.
All notifications broadcast by LogBroadcasterRuntimeMBean are of the type WebLogicLogNotification. There is only one LogBroadcasterRuntimeMBean per server, named TheLogBroadcaster.
The LogBroadcasterRuntimeMBean can be accessed using the mechanisms described in Accessing MBeans from MBeanHome.
Contents of a WebLogicLogNotification
All JMX notifications for a WebLogic Server log message contain the following fields:
weblogic.logMessage.subSystem.messageID
where subSystem indicates the WebLogic Server subsystem that issued the log message, and messageID indicates the internal WebLogic Server message ID.
All log notifications are of the type WebLogicLogNotification. This helper class provides getter methods for all individual fields of a log message. Using the WebLogicLogNotification class, a client application can easily filter log notifications based on their severity, user ID, subsystem, and other fields.
The following NotificationFilter example uses the WebLogicLogNotification class to select only messages of a specific message ID (111000) to be sent as notifications.
import javax.management.Notification;
import javax.management.NotificationFilter;
import javax.management.Notification.*;
....
public class WebLogicLogNotificationFilter implements NotificationFilter,
java.io.Serializable {
private String subsystem;
public WebLogicLogNotificationFilter() {
subsystem = "";
}
public boolean isNotificationEnabled(Notification notification) {
if (!(notification instanceof WebLogicLogNotification)) {
return false;
}
WebLogicLogNotification wln = (WebLogicLogNotification)notification;
if (subsystem == null ||
subsystem.equals("")) {
return true;
}
StringTokenizer tokens = new StringTokenizer(wln.getType(), ".");
tokens.nextToken();
tokens.nextToken();
return (tokens.nextToken().equals(subsystem));
}
public void setSubsystemFilter(String newSubsystem) {
subsystem = newSubsystem;
}
}
Example Notification Listeners for WebLogic Server Error Messages
Client applications can create various custom NotificationListeners that receive log messages as notifications and perform actions such as recording the message in a database or paging an administrator.
The basic form of the notification listener differs little from the example shown in Creating a Notification Listener. Simply replace the printed messages in that example with the necessary JDBC calls or paging operations to perform in response to the notification.
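As a rough sketch of the database case (the table, columns, and the dataSource field are assumptions made for illustration, not part of the WebLogic documentation), the handleNotification() body might record each message like this:

public void handleNotification(Notification notification, Object handback) {
    WebLogicLogNotification wln = (WebLogicLogNotification) notification;
    try {
        // 'dataSource' is an assumed javax.sql.DataSource field on the listener
        java.sql.Connection con = dataSource.getConnection();
        java.sql.PreparedStatement ps = con.prepareStatement(
            "INSERT INTO server_log (msg_id, server_name, msg_text) VALUES (?, ?, ?)");
        ps.setString(1, String.valueOf(wln.getMessageId()));
        ps.setString(2, wln.getServername());
        ps.setString(3, wln.getMessage());
        ps.executeUpdate();
        ps.close();
        con.close();
    } catch (java.sql.SQLException e) {
        e.printStackTrace();
    }
}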
http://edocs.bea.com/wls/docs61/jmx/notifications.html
Making XML a native datatype is something we discussed in the past. This seems like something that's actually going to happen.
Posted to xml by Ehud Lamm on 9/20/02; 2:34:57 PM
This indeed looks interesting and will probably be something normal programmers will use. XDuce for example is another interesting language with XML as (the only) native data type, but I'm afraid a lot of programmers cannot handle this.
var x = <test><a>hello</a><a>there</a><b>world</b></test>;
BeyondJS lets you write:
var x = "<test><a>hello</a><a>there</a><b>world</b></test>".parseFromString();
which assigns the DOM node to x. Not quite the same but pretty close.
Another example: in Native XML you iterate over node lists using the standard ECMAScript for ( in ) syntax:
var total = 0;
for (var item in order.item)
total += item.price * item.quantity;
Using BeyondXML the syntax becomes:
var total = 0;
"order/item".selectNodes(d).foreach(function(item) {
total += "price".value(item) * "quantity".value(item);
});
where 'd' is the reference to the DOM node. Again, not quite the same thing but close IMO. It would have been even closer had IE and MSXML provided prototypes for the DOM objects.
I certainly applaud the efforts E4X group, and ECMAScript's intrinsic extensibility makes it a natural candidate for such extensions. However, BeyondJS lets you achieve similar functionality in the present, using standard JavaScript (js files not jsx) and without requiring extensions to the ECMAScript syntax.
so now let's rephrase the question... What does it mean to make something (XML, for example) a native datatype?
While the latter seems very natural and appealing to most anybody that works with XML, it almost certainly necessitates modifications in the underlying language syntax (and maybe even semantics). This can easily result in all sorts of parsing problems (which ECMAScript already has, BTW).
I think it's more important to be able to work with the Infoset in the context of the language's native facilities. My main problem with the XML DOM as a representation of the Infoset is that it behaves differently than standard JS datatypes for no good reason (other than that it needs to be compatible with other languages). For example: the lack of prototypes and dynamic properties I mentioned before.
I think it is quite possible to serialize the Infoset into some sort of representation that better matches standard ECMAScript facilities. For example, attributes might be accessed as object properties. When represented in this way, it becomes much easier to manipulate the XML from an ECMAScript program.
This perhaps also shows the downside of something like the CLR. When you create a single library for many target languages you almost certainly avoid language specific features. In other words, you go for the (lowest) common denominator.
A problem with representing the Infoset in ECMAScript has to do with the distinction in XML between node attributes, node children and node properties (stuff like name and namespace). This doesn't quite come naturally to ECMAScript, although Sjoerd and I have been throwing around some ideas, e.g. using the property access operator [] to get attributes and the function call operator () to get at child nodes. This is very preliminary however, and not likely to make its way into Beyond in any event (we are not out to write a new XML DOM, at least not now :-)
The fact that ECMAScript is functional, in the sense that it supports functions as first class citizens should make it easier to write filtering and manipulation code. Though for some reason E4X have not gone down that road, e.g. their sample:
var over27inEng = y.employees.employee.(department.@id == 500 && age > 27);
instead of something like:
var over27inEng = y.employees(function(employee) { return employee.department.@id == 500 && employee.age > 27});
One thing that appears missing from Native XML is a succinct quasiquote syntax.
Native XML is a good idea, but I'd like to see a comparison with Lisp's lists and SXML in order to understand how new and expressive it may be.
It's not surprising that they chose ECMAScript for XML scripting. There are several technical reasons that make this a reasonable choice: ECMAScript is highly familiar, both because of its C/C++/Java-like syntax (which may be a bit misleading at times) and because it's arguably the most commonly used script language. Additionally, ECMAScript is supported on most every platform (including the JVM) and its runtime is installed on most computers. Finally, there are several open source implementations available.
There are also several technological reasons why ECMAScript is a good choice. Firstly, ECMAScript objects are completely dynamic allowing addition and removal of properties at run-time. This is compatible with the dynamic nature of XML documents. Secondly, ECMAScript allows property access both using the common notation object.property and through an expression object["property"]. This is also a useful feature in this context. Thirdly you can easily iterate over object properties using the for ( in ) statement.
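A small plain-ECMAScript illustration of those three points (the property names are made up for the example):

var order = {};                // ECMAScript objects are completely dynamic
order.item = "book";           // add a property at run-time (dot notation)
order["quantity"] = 2;         // the same access through an expression (bracket notation)
delete order.item;             // properties can be removed again

for (var p in order) {         // iterate over the remaining properties
    // p is "quantity", order[p] is 2
}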
Having said all this, there are some problems with this implementation, some having to do with differences between the ECMAScript object model and XML others having to do with specific implementation decisions made by this group.
One major problem, as I've already noted, is that XML nodes differentiate between attributes, child nodes and node properties such as namespace and owner document. ECMAScript OTOH doesn't naturally make such distinctions. This implementation introduces the use of @ as a denotation for XML attributes. This solution is not in sync with the rest of the ECMAScript language. I don't know what their solution is for things like namespaces, it doesn't appear in the article.
Another problem has to do with the fact that in ECMAScript object properties are unordered (accept for array elements). This is incompatible with the XML model. This article doesn't demonstrate how a new node can be inserted at a specific index position.
Finally, ECMAScript allows only a single object property with a specific name. XML allows numerous child-nodes with the same name. This implementation handles this difference by automatically converting such properties to arrays (the browser DOM does something similar). The transition from one property value (when it's not an array) to multiple values (when it is) is not clear, however.
With regard to this specific implementation, while the ability to write XML code directly into the script looks cool, I think using strings would have been quite sufficient (see the BeyondXML sample above). After all, serialized XML is just a series of characters. It would undoubtedly have made the syntax of the resulting language a lot simpler (and as I've pointed out, ECMAScript already has some syntactical inconsistencies).
Also, their filtering operator avoids the use of function arguments for some reason. Perhaps they weren't aware of this ECMAScript feature. Finally, they appear to have overloaded the true symbols even more (as if ECMAScript doesn't overload them enough already). Indeed I think their <{annotation}>{value}</{annotation}> is a bad idea if only for having to repeat {annotation} twice. I think something along the Beyond style of "annotation".value(value) makes more sense.
Having said all this, I do like seeing ECMAScript being used in new and innovative ways.
It's not surprising that they chose ECMAScript for XML scripting.
Sure. Still, I wonder how XML processing would be embedded in other languages. Since I wrote quite a lot of Rexx code, back when I was an MVS systems programmer, I tend to think about Rexx when discussing glue languages.
I haven't been up to date with what IBM does with Rexx, but I gess they are thinking about this. It would be interesting to see what they come up with.
Rexx stem variables (essentially multi-level associative arrays) can be used to represent XML trees. A clever implementation can use this to model a large part of XML processing. The rest is going to be a bit harder.
I lost contact with Rexx when I moved back to DOS and then Windows. Also I got immersed in OOP, first with Object Pascal and then C++, so my PL itinerary was full for a while.
I do sort of remember Rexx associative arrays, and the word counting program I wrote to test them out (comparing to an AWK sample). I don't remember them enough to compare to ECMAScript objects, which are also implemented as associative arrays. This type of data structure does appear to match XML processing requirements, however as I've pointed out there are some problems such as the distinction XML makes between attributes and child nodes and the fact that XML allows multiple children with the same name.
BTW, a nice feature I like from Native XML scripting is the use of the .. operator to get a list of all subnodes. I think that this feature can be extended to retrieve a list of all sub-properties under any type of ECMAScript object hierarchy. While I can't say when I would use it (perhaps to model XML processing :-) it is cool.
In any event, thanks for the chance to walk down memory lane.
http://lambda-the-ultimate.org/classic/message4374.html
To show how to properly deal with versioning, I’ve created a versioning decision tree. It shows you what you will need to do to handle every case, and it also shows you the consequences of each versioning choice you might make.
The white diamonds are versioning decisions, decisions about how you might version a service. The squares represent actions you will need to take as a result of your versioning choices. Squares that are green represent non-breaking changes. Squares that are red represent breaking changes. Naturally, and this is one of the first versioning practices you can derive from the decision tree, you want to avoid decisions that lead to breaking changes, that lead to red squares.
So let us follow the decision tree. You have a service, and now you need to modify it.
Can you meet your objectives in modifying the service by adding a new operation to the service? If so, then that is readily accomplished through service contract inheritance. Simply define a new service contract that derives from the service contract of the existing service, add the new operations to the new service contract, and then have your service type, the class that implements the existing service contract, implement the new service contract instead of the old one. Update the definition of the service endpoint so that it refers to the new, derived service contract. Now existing clients, which will only know about the old service contract, can have the operations that they already knew executed at the original endpoint, and new clients, which know about the enhanced service contract, can have the additional operations executed at the same endpoint.
[ServiceContract]
public interface IMyServiceContract
{
    [OperationContract(IsOneWay=true)]
    public void MyMethod(MyDataContract input);
}

[ServiceContract]
public interface IMyAugmentedServiceContract: IMyServiceContract
{
    [OperationContract(IsOneWay=true)]
    public void MyNewMethod(MyOtherDataContract input);
}

public class MyOriginalServiceType: IMyAugmentedServiceContract
If your requirements for modifying the service cannot be met by adding new operations to the service, do you then have to change an existing operation? If so, then how exactly do you need to change it?
Do you need to change the structure of the data passed to the service by its clients? If so, then you need to version a data contract.
How do you need to modify the data contract? If it is simply a matter of incorporating more data into the data contract, then that is easy to do, simply by adding new, optional members to the data contract.
It is a good practice to implement the IExtensibleDataObject interface on all data contracts, so that if you have to process an instance of an enhanced version of the data contract, that includes members of which you are not aware, then the data for those additional members can pass through your code without being lost. Implementing IExtensibleDataObject provides the data contract serializer with a place to store the additional data on the way into your code, from which the data can be retrieved again on the way out.
[DataContract]
public class MyDataContract: IExtensibleDataObject
{
    [DataMember(IsRequired=true)]
    public XType MyOriginalMember;

    [DataMember(IsRequired=false)]
    public XType MyNewMember;

    private ExtensionDataObject extensionData;
    public ExtensionDataObject ExtensionData
    {
        get { return this.extensionData; }
        set { this.extensionData = value; }
    }
}
When you are forced to make other changes to a data contract besides simply adding members, then you must take the steps shown in the decision tree.
If the requirements for the new version of your service require you to make any other kinds of changes to an operation defined in a service contract, or delete any operations from a service contract, or change the bindings of a service, then these are the steps you must take:
Only the decision to retire an existing endpoint constitutes a breaking change. To make it easier to retire endpoints, you should include a default operation for every service contract that you ever define, and you should specify that operation can throw a fault indicating that the endpoint has been retired. The information provided with the fault can direct the client to the metadata for the replacement endpoints.
[DataContract]
public class RetiredEndpointFault
{
    [DataMember]
    public string NewEndpointMetadata;
}

[ServiceContract]
public interface IMyServiceContract
{
    [OperationContract(IsOneWay=true)]
    public void MyMethod(MyDataContract input);

    [FaultContract(typeof(RetiredEndpointFault))]
    [OperationContract(Action="*", ReplyAction="*")]
    public Message MyDefaultMethod(Message input);
}
Leveraging the default operation and fault contract facilities of the Windows Communication Foundation in this way yields the same benefits with far less effort than the facade service approach that is recommended here and here.
http://blogs.msdn.com/craigmcmurtry/archive/2006/07/23/676104.aspx
Scheme has procedures that are first-class values. Java does not. However, we can simulate procedure values by overriding virtual methods.

class Procedure
{
  ...;
  public abstract Object applyN (Object[] args);
  public abstract Object apply0 ();
  ...;
  public abstract Object apply4 (Object arg1, ..., Object arg4);
}
We represent Scheme procedures using sub-classes of the abstract class Procedure. To call (apply) a procedure with no arguments, you invoke its apply0 method; to invoke a procedure, passing it a single argument, you use its apply1 method; and so on, using apply4 if you have 4 arguments. Alternatively, you can bundle up all the arguments into an array, and use the applyN method. If you have more than 4 arguments, you have to use applyN.
Notice that all Procedure sub-classes have to implement all 6 methods, at least to the extent of throwing an exception if passed the wrong number of arguments. However, there are utility classes Procedure0 to Procedure4 and ProcedureN:
class Procedure1 extends Procedure
{
  public Object applyN (Object[] args)
  {
    if (args.length != 1)
      throw new WrongArguments();
    return apply1 (args[0]);
  }
  public Object apply0 () { throw new WrongArguments(); }
  public abstract Object apply1 (Object arg1);
  public Object apply2 (Object arg1, Object arg2) { throw new WrongArguments(); }
  ...;
}
Primitive procedures can be written in Java as sub-classes of these helper classes. For example:
public class force extends Procedure1 {
    public Object apply1(Object arg1) throws Throwable {
        if (arg1 instanceof Promise)
            return ((Promise) arg1).force();
        return arg1;
    }
}
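As a rough illustration (this is not from the Kawa sources), such a primitive can be invoked from Java like any other procedure value; somePromise here is just an assumed local variable, and real code would also handle the declared Throwable:

// Hedged usage sketch: calling a primitive Procedure directly from Java.
Procedure f = new force();
Object forced = f.apply1(somePromise);                 // like (force some-promise)
Object same = f.applyN(new Object[] { somePromise });  // the generic entry point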
The Kawa compiler used to compile each user-defined procedure into a separate class, just like the force function above. Thus a one-argument function would be compiled to a class that extends Procedure1, with the body of the function compiled to the body of an apply1 method. This has the problem that compiling a Scheme file generates a lot of classes. This is wasteful both at run-time and in terms of the size of compiled files, since each class has some overhead (including its own constant pool).
Early versions of Kawa were written before Sun added reflection to Java in JDK 1.1. Now, we can use reflection to call methods (and thus functions) not known at compile-time. However, invoking a function using reflection is a lot slower than normal method calls, so that is not a good solution. The next sections will discuss what Kawa does instead.
Each Scheme function defined in a module is compiled to one or more Java methods. If it's a named top-level function, the name of the method will match the name of the Scheme function. (If the Scheme name is not a valid Java name, it has to be mangled.) An anonymous lambda expression, or a non-toplevel named function, gets a generated method name. A function with a fixed number of parameters is compiled to a method with the same number of parameters:
(define (fun1 x y) (list x y))
Assuming the above is in mod1.scm, the generated bytecode is equivalent to this method:
public static Object fun1 (Object x, Object y) { return MakeList(x, y); }
The method can be an instance method or a static method, depending on compilation options. Here we'll assume it is static.
To compile a call to a known function in the same module, it is easy for Kawa to generate static method invocation. In certain cases, Kawa can search for a method whose name matches the function, and invoke that method.
If the function has parameter type specifiers, they get mapped to the corresponding Java argument and return types:
(define (safe-cons x (y :: <list>)) :: <pair> (cons x y))
The above compiles to:
public static gnu.lists.Pair safeCons (Object x, gnu.lists.LList y) { return Cons(x, y); }
A function with optional parameters is compiled to a set of overloaded methods. Consider:
(define (pr x #!optional (y (random))) (+ x y))
This gets compiled to two overloaded methods, one for each length of the actual argument list.
public static Object pr (Object x) { return pr(x, random()); }
public static Object pr (Object x, Object y) { return Plus(x, y); }
If there is a rest-parameter, it gets compiled to either an Object[] or an LList parameter. The method name gets an extra $V to indicate that the function takes a variable number of parameters, and that extra parameters should be passed as a list or array to the last method parameter. For example, this Scheme function:
(define (rapply fun . args) (apply fun args))
This gets compiled to:
public static Object rapply$V(Object fun, LList args) { return apply(fun, args); }
You can declare in Scheme that the rest parameter has type <Object[]>, in which case the method rest parameter is Object[].
Kawa compiles a Scheme module (a source file, or a stand-alone expression) to a Java class, usually one that extends ModuleBody.
class ModuleBody { ... }
Top-level forms (including top-level definitions) are treated as if they were nested inside a dummy procedure. For example, assume a Scheme module mod1.scm:
(define (f x) ...) (define (g x) ...) (do-some-stuff)
This gets compiled to
class mod1 extends ModuleBody implements Runnable {
    public Object f(Object x) { ... }
    public Object g(Object x) { ... }
    public Procedure f = ???; /* explained later */
    public Procedure g = ???;
    public void run() {
        define_global("f", f);
        define_global("g", g);
        do_some_stuff();
    }
}
When a file is loaded, an instance of the compiled class is created, and its run method is invoked. This adds the top-level definitions to the global environment and runs any top-level expressions. Alternatively, using the --module-static command-line flag generates a static module:
class mod1 extends ModuleBody {
    public static Object f(Object x) { ... }
    public static Object g(Object x) { ... }
    public static Procedure f = ???;
    public static Procedure g = ???;
    static {
        define_global("f", f);
        define_global("g", g);
        do_some_stuff();
    }
}
In this case the top-level actions (including definitions) are performed during class initialization.
A Java method represents the actions of a Scheme function, and calling a known Scheme function is easily implemented by invoking the method. However, Scheme has “first-class” functions, so we need to be able to “wrap” the Java method as an Object that can be passed around, and called from code where the compiler doesn't know which function will get called at run-time.
One solution is to use Java reflection, but that has high overhead. Another solution (used in older versions of Kawa) is to compile each Scheme function to its own class that extends Procedure, with an applyN method that evaluates the function body; this incurs the overhead of a class for each function.

The solution (as with all other problems in Computer Science [David Wheeler]) is to add an extra level of indirection. Every function in a module gets a unique integer selector. The utility ModuleMethod class is a Procedure that has a method selector code plus a reference to the ModuleBody context:
class ModuleMethod extends Procedure {
    ModuleBody module;
    int selector;
    String name;
    public Object apply1(Object arg1) {
        return module.apply1(this, arg1);
    }
    public ModuleMethod(ModuleBody body, int selector, String name) { ... }
}

class ModuleBody {
    public Object apply1(ModuleMethod proc, Object arg1) { throw Error(); }
}
The compiler generates a switch statement to map selector indexes to actual methods. Thus the previous example generates (in static mode):
class mod1 extends ModuleBody {
    public static Object f(Object x) { ... }
    public static Object g(Object x) { ... }
    public static Procedure f = new ModuleMethod(this, 1, "f");
    public static Procedure g = new ModuleMethod(this, 2, "g");
    static {
        define_global("f", f);
        define_global("g", g);
        do_some_stuff();
    }
    public Object apply1(ModuleMethod proc, Object x) {
        switch (proc.selector) {
            case 1: return f(x);
            case 2: return g(x);
            default: return super.apply1(proc, this);
        }
    }
}
The symbol g resolves to the Procedure value of mod1.g. Invoking its apply1 method calls the method in ModuleMethod, which calls the 2-argument apply1 method in mod1. This switches on the selector 2, so we end up calling the g method. This is more expensive than calling g directly, but far less expensive than using reflection.
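As a concrete (and hypothetical) illustration, evaluating a call such as (g 42) at run time then amounts to roughly the following; lookup_global and the boxed argument are assumed names, not the actual Kawa API:

// Hedged sketch of dynamic dispatch through a ModuleMethod.
Procedure p = (Procedure) lookup_global("g");   // finds the ModuleMethod stored in mod1.g
Object result = p.apply1(boxedFortyTwo);        // ModuleMethod.apply1 -> mod1.apply1 -> g(x)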
When a language combines first-class nested functions with lexical scoping (as Scheme does), we have the problem that an inner function can reference a variable from an outer scope, even when that outer scope has exited. In this simple example we say that the inner function f2 “captures” the variable a from the outer function f1:
(define (f1 a) (define (f2 b) (list a b)) (cons a f2))
The standard solution uses a “closure” to represent a function together with the environment of captured variables. Kawa does this by using the same ModuleBody mechanism used above for first-class functions.
class foo extends ModuleBody {
    public Procedure f1 = new ModuleMethod(this, 1, "f1");
    public Object f1(Object a) {
        foo$frame1 frame = new foo$frame1();
        frame.a = a;
        return cons(frame.a, frame.f2);
    }
    public Object apply1(ModuleMethod proc, Object x) {
        switch (proc.selector) {
            case 1: return f1(x);
            default: return super.apply1(proc, this);
        }
    }
}
This is as discussed earlier, except for the body of the f1 function. It creates a new “inner module” or “frame”. The parameter a is copied to a field in the frame, and any references to the parameter are replaced by a reference to the field. The inner module is implemented by this class:
public class foo$frame1 extends ModuleBody {
    Object a;
    public Procedure f2 = new ModuleMethod(this, 1, "f2");
    public Object f2(Object b) {
        return list(this.a, b);
    }
    public Object apply1(ModuleMethod proc, Object x) {
        switch (proc.selector) {
            case 1: return f2(x);
            default: return super.apply1(proc, this);
        }
    }
}
This mechanism again requires an extra indirection when an inner function is called. We also require a distinct frame class for each scope that has one or more variables captured by some inner scopes. At run-time, we need to allocate the frame instance, plus ModuleMethod instances for each inner function (that does capture an outer variable), when we enter the scope for the frame.

It should be possible to use general-purpose (sharable) frame classes for the common case that only a few variables are captured; however, I have not yet investigated that optimization.
Aside: The original Java language definition did not support nested functions. However, it did have objects and classes, and it turns out that objects and first-class functions are similar in power, since a closure can be represented using an object and vice versa. The “inner classes” added to Java in JDK 1.1 are an object-oriented form of first-class functions. The Java compiler translates the nested classes into plain objects and non-nested classes, very much like Kawa represents nested Scheme functions.
This section documents how Kawa implemented closures years ago. It is included for historical interest.
Kawa used to implement a closure as a Procedure object with a “static link” field that points to the inherited environment. Older versions of Kawa represented the environment as an array. The most recent version uses the Procedure instance itself as the environment. Let us look at how this works, starting with a very simple example:
(define (f1 a) (define (f2 b) (list a b)) (cons a f2))
This gets compiled to the bytecode equivalent of:
class F1 extends Procedure1 {
    public Object apply1(Object a) { // body of f1
        F2 heapFrame = new F2();
        heapFrame.a = a;
        return Cons.apply2(heapFrame.a, heapFrame);
    }
}

class F2 extends Procedure1 { // F2 closureEnv = this;
    Object a;
    public Object apply1(Object b) { // body of f2
        return List.apply2(this.a, b);
    }
}
Note that the instance of F2 that represents the f2 procedure contains both the code (the apply1 method) and the captured instance variable a as a Java field. Note also that the parent function f1 must in general use the same field instance when accessing a, in case one or the other function assigns to a using a set!.
Next, a slightly more complex problem:
(define (f3 a) (define (f4 b) (cons a b)) (define (f5 c) (cons a c)) (cons a f5))
In this case all three functions refer to a. However, they must all agree on a single location, in case one of the functions does a set! on the variable. We pick f4 as the home of a (for the simple but arbitrary reason that the compiler sees it first).
class F3 extends Procedure1 {
    public Object apply1(Object a) { // body of f3
        F4 heapFrame = new F4();
        heapFrame.a = a;
        return Cons.apply2(heapFrame.a, new F5(heapFrame));
    }
}

class F4 extends Procedure1 { // F4 closureEnv = this;
    Object a;
    public Object apply1(Object b) { // body of f4
        return Cons.apply2(this.a, b);
    }
}

class F5 extends Procedure1 {
    F4 closureEnv;
    public F5(F4 closureEnv) { this.closureEnv = closureEnv; }
    public Object apply1(Object c) { // body of f5
        return Cons.apply2(closureEnv.a, c);
    }
}
If a variable is captured through multiple levels of nested functions, the generated code needs to follow a chain of static links, as shown by the following function.
(define (f6 a) (define (f7 b) (define (f8 c) (define (f9 d) (list a b c d)) (list a b c f9)) (list a b f8)) (list a f7))
That gets compiled into bytecodes equivalent to the following.
class F6 extends Procedure1 {
    public Object apply1(Object a) { // body of f6
        F7 heapFrame = new F7();
        heapFrame.a = a;
        return List.apply2(heapFrame.a, heapFrame);
    }
}

class F7 extends Procedure1 {
    Object a;
    public Object apply1(Object b) { // body of f7
        F8 heapFrame = new F8(this);
        heapFrame.b = b;
        return List.apply3(this.a, heapFrame.b, heapFrame);
    }
}

class F8 extends Procedure1 {
    Object b;
    F7 staticLink;
    public F8(F7 staticLink) { this.staticLink = staticLink; }
    public Object apply1(Object c) { // body of f8
        F9 heapFrame = new F9(this);
        heapFrame.c = c;
        return List.apply4(staticLink.a, this.b, heapFrame.c, heapFrame);
    }
}

class F9 extends Procedure1 {
    Object c;
    F8 staticLink;
    public F9(F8 staticLink) { this.staticLink = staticLink; }
    public Object apply1(Object d) { // body of f9
        return List.apply4(staticLink.staticLink.a, staticLink.b, this.c, d);
    }
}
Handling tail-recursion is another complication. The basic idea is to divide the procedure prologue into the actions before the loop head label, and those after. (Note that allocating a heapFrame has to be done after the head label.)
Handling inlining also requires care.
Kawa has various hooks for inlining procedures. This can allow substantial speedups, at the cost of some generality and strict standards-compliance, since it prevents re-assigning the inlined procedures. Most of these hooks work by having the compiler notice that a name in function call position is not lexically bound, yet it is declared in the (compile-time) global scope.
The most powerful and low-level mechanism works by having the compiler note that the procedure implements the Inlineable interface. That means it implements the special compile method, which the compiler calls at code generation time; it can generate whatever bytecode it wants. This is a way for special procedures to generate exotic bytecode instructions. This hook is only available for builtin procedures written in Java.
Another mechanism uses the Java reflective facilities. If the compiler notices that the class of the procedure provides a static method with the right name (apply), and the right number of parameters, then it generates a direct call to that static method. This is not inlining per se, but it does by-pass the (currently significant) overhead of looking up the name in the global symbol-table, casting the value to a procedure, and then making a virtual function call. Also, because the procedure is replaced by a call to a statically known method, that call could actually be inlined by a Java bytecode optimizer. Another advantage of calling a known static method is that the parameter and return types can be more specific than plain Object, or even be unboxed primitive types. This can avoid many type conversions.
The Kawa compiler generates a suitable apply method for all fixed-arity procedures that do not require a closure, so this optimization is applicable to a great many procedures.
Finally, Kawa has preliminary support for true inlining, where a procedure that is only called in one place except for tail-calls is inlined at the call-site. I plan to add an analysis pass to detect when this optimization is applicable. For now, there is a special case to handle the do special looping form, and these are now always implemented in the natural way (as inlined loops). The “named let” cannot always be implemented as an inlined loop, so implementing that equally efficiently will need the planned analysis phase.
This is describing a work in progress.
To handle general tail-calls, and to be able to select between overloaded methods, we split a function call into two separate operations:
The match operation is given the actual parameters, and matches them against the formal parameters. If the right number and types of arguments were given, a non-negative integer return code specifies success; otherwise a negative return code specifies a mis-match. On success the arguments are saved in the argument save area of the CallContext.
The apply operation performs the actual work (function body) of the called function. It gets the actual parameters from the CallContext, where match previously saved them.
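In rough pseudo-Java (the real Kawa signatures may differ, so treat these names as assumptions), a call under this two-phase protocol looks something like:

// Hedged sketch of the match/apply calling convention.
int code = proc.match(ctx, args);   // check arity and types, stash arguments in the CallContext
if (code >= 0)
    proc.apply(ctx);                // run the body, reading the arguments back from the CallContext
else
    handleMismatch(code);           // e.g. try another overloaded method, or report an error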
|
http://www.gnu.org/software/kawa/internals/procedures.html
|
crawl-002
|
en
|
refinedweb
|
Jeremy Miller continues his discussion of persistence patterns by reviewing the Unit of Work design pattern and examining the issues around persistence ignorance.
Chris Tavares explains how the ASP.NET MVC Framework's Model View Controller pattern helps you build flexible, easily tested Web applications.
// talker.cs
namespace MsdnMagSamples {
public class Talker {
public string Something = "something";
public void SaySomething() {
System.Console.WriteLine(Something);
}
}
}
c:\> csc /t:library /out:talker.dll talker.cs
'talkercli.vb
Public Module TalkerClient
Sub Main()
Dim t as new MsdnMagSamples.Talker
t.Something = "Hello, World"
t.SaySomething()
System.Console.WriteLine("Goodnight, Moon")
End Sub
End Module
c:\> vbc /t:exe /out:talkercli.exe /r:talker.dll talkercli.vb
C:\> cl /CLR talkcli.cpp /link /subsystem:console
void WriteToLog(const char* psz) {
    MyManagedFile* file = 0; // initialize so the __finally check is safe
    try {
        file = new MyManagedFile("log.txt");
        file->WriteLine(psz);
    }
    __finally {
        if( file ) file->Close();
    }
}
template <class T> struct gcroot {...};
template <typename T>
struct final_ptr : gcroot<T> {
final_ptr() {}
final_ptr(T p) : gcroot<T>(p) {}
~final_ptr() {
T p = operator->();
if( p ) p->Finalize();
}
};
void WriteToLog(const char* psz) {
final_ptr<MyManagedFile*> file = new MyManagedFile("log.txt");
file->WriteLine(psz);
}
cl /LD /CLR talker.cpp /o talker.dll
public __gc class Talker {
private:
String* m_something;
public:
__property String* get_Something() {
return m_something;
}
__property void set_Something(String* something) {
m_something = something;
}
•••
};
t->Something = "Greetings Planet Earth";
public __gc __interface ICanTalk {
void Talk();
};
public __gc class Talker : public ICanTalk {
•••
// ICanTalk
void Talk() { SaySomething(); }
};
void MakeTalk(Object* obj) {
try {
ICanTalk* canTalk = __try_cast<ICanTalk*>(obj);
canTalk->Talk();
}
catch( InvalidCastException* ) {
Console::WriteLine(S"Can't talk right now...");
}
}
// mixed.cpp
...managed code by default...
#pragma unmanaged
...unmanaged code...
#pragma managed
...managed code...
HRESULT __stdcall VarI4FromI2(short sIn, long* plOut);
__gc struct ShortLong {
short n;
long l;
};
void main() {
ShortLong* sl = new ShortLong;
sl->n = 10;
VarI4FromI2(sl->n, &sl->l); // Compile-time error
}
void main() {
ShortLong* sl = new ShortLong;
sl->n = 10;
long __pin* pn = &sl->l;
VarI4FromI2(sl->n, pn);
}
__value struct Point {
Point(long _x, long _y) : x(_x), y(_y) {}
long x;
long y;
};
Point pt1; // (x, y) == (0, 0)
Point pt2(1, 2); // (x, y) == (1, 2)
Console::WriteLine(S"({0}, {1})", pt.x, pt.y);
Console::WriteLine(S"({0}, {1})", __box(pt.x), __box(pt.y));
[System::Reflection::DefaultMemberAttribute(S"Item")]
public __gc class MyCollection {
__property String* get_Item(int index);
};
using namespace Reflection;
[assembly:AssemblyTitle(S"My MSDN Magazine Samples")];
|
http://msdn.microsoft.com/en-us/magazine/cc302048.aspx
|
crawl-002
|
en
|
refinedweb
|
If you have been using Windows Workflow Foundation to build web service business logic, you may be using the publish-as-web-service tool that is part of the Visual Studio extensions for Windows Workflow Foundation. It works like this: if you have a workflow model defined in a Visual Studio project, you can right-click on the project and choose "Publish as Web Service" from the menu. The tool will add a new project to your solution which is a web service. This new web service will execute the workflow. Your workflow model must have at least one WebServiceInput activity in it to do this.
The thing that many people have been finding is that it uses the default namespace of TEMPURI.ORG and it's not configurable. Well, here's how you can change the namespace with a minimum of fuss. We're planning a KB article with a more official description of this.
1) First, the web service is generated code which is compiled, leaving you with the DLL only; the source is deleted by default. You want to change that. There's a registry key that you can set to keep the generated source for the web service. The usual disclaimers apply about changing the registry on your machine. Here's the key to set:
2) Next go ahead and publish your web service normally as described above. This will create the source and compile it, but it will not be deleted. Right click on your web service project and choose ASP.NET Folder -> App_Code. Now you need to go and fetch the generated source from your Temp directory. My temp directory on Windows Vista is at C:\Users\pandrew\AppData\Local\Temp. Find the recently generated .CS file there and copy it. My test one here was called 077px_9s.cs. Put this file into the App_Code folder using explorer. Now edit the .ASMX file in your project and add the CodeBehind="~/Filea.cs". See I renamed the file from 077px_9s.cs to Filea.cs. My test WorkflowLibrary1.Workflow1_WebService.asmx file now looks like this:
<%@ WebService CodeBehind="~/Filea.cs" Class="WorkflowLibrary1.MyClass" %>
3) The final step is to open your newly added code behind file and add the new namespace into the file. Just add the WebService attribute to the class and specify the namespace in that attribute, something like this:
[WebServiceBinding(ConformsTo=WsiProfiles.BasicProfile1_1, EmitConformanceClaims=true)]
[WebService(Namespace="")]
public class MyClass : WorkflowWebService
{
Once you've done that you can recompile the project and you should have a web service that is implemented by a workflow model and your chosen namespace.
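Putting the pieces together, the top of the code-behind class would end up looking something like this (the namespace URI is a placeholder for illustration; substitute your own):

[WebService(Namespace = "http://example.com/workflowservices")]   // placeholder namespace URI
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1, EmitConformanceClaims = true)]
public class MyClass : WorkflowWebService
{
    // the generated workflow invocation methods remain unchanged
}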
PS: This is my first blog entry written in Word 2007.
Useful, but horrible :-)
I can't see fishing around in the Temp folder as a part of anyone's development process for anything more than a one-off - are there plans to add a namespace override option for RTM?
Hi Kev,
Unfortunately we passed the deadline for making changes like that quite some time ago.
Regards,
Paul
Understood, I wasn't making a feature request at this late stage.
I was just hoping someone might have thought of it in time.
Please create a VS.NET macro or shortcut that does this. This is horrible; I can't imagine doing this each time my WF project's web service receives changes...
Thanks for the article, at least its possible now.
Best regards,
Christof
By default the web service wrapper generated around a workflow uses the "tempuri.org" namespace. Not very
Some of my colleagues are working on a much better solution than this workaround. Hope to see this posted shortly.
Cheers,
Trademarks |
Privacy Statement
|
http://blogs.msdn.com/pandrew/archive/2006/10/25/extending-the-wf-publish-as-web-service-or-get-rid-of-tempura-org.aspx
|
crawl-002
|
en
|
refinedweb
|
So we left off on the previous post with the question of why we were using Java to work with our new token visitors. Can't visitors be used in C# as well? Well, yes. However, not necessarily as conveniently as with Java. How so? Well, let's take a look at what the code would look like in C#. First off, the visitor interfaces and DefaultTokenVisitor will be the same as with the java code (albeit with slight syntactic differences). However, in order to write the parser we'd have to do the following:
public class Parser {
Token CurrentToken { get { ... } }
void parseType() {
parseModifiers();
CurrentToken.AcceptVisitor(new DetermineTypeToParse(this));
//Parse rest of type
}
void parseModifiers() { ... }
void parseClass() { ... }
void parseInterface() { ... }
class DetermineTypeToParse : DefaultTokenVisitor {
readonly Parser parser;
public DetermineTypeToParse(Parser parser) {
this.parser = parser;
}
public override void VisitClassToken(ClassToken token) {
parser.parseClass();
}
public override void VisitInterfaceToken(InterfaceToken token) {
parser.parseInterface();
}
/* include other cases */
public override void Default(Token token) {
/* handle error */
}
}
}
Functionally, this is equivalent to the java code above (and in actuality is basically what the java compiler generates when you type in the anonymous inner class); however, we've lost quite a lot in the translation. Specifically, we've now had to separate and make disjoint the parser's logic and the visitor's logic. However, both sets of logic are closely related and benefit highly from tight locality in the code. Depending on how the code is structured, and how many visitors are needed, you might end up having this logic hundreds of lines apart. Verifying that the parser code works as it should is then far too difficult and unclear. You also end up creating a type that exists solely to be instantiated in one and only one location. But you've now cluttered your namespace with this class, which you then need to police to ensure that it's used properly.
So while the visitor pattern is fully functional from C#, it lacks the usability that one would want in order to use it as a core designing principle in your APIs. Is there a way that we can get the power of visitors without this drawback? Wait and see!
|
http://blogs.msdn.com/cyrusn/archive/2005/09/13/464637.aspx
|
crawl-002
|
en
|
refinedweb
|
Hello! My name is Andrew Jenner and I'm a Software Design Engineer (SDE) in the Visual Studio Devices team. I work on IDE functionality for managed projects (though some of the components I have written are also used by the native project system).
However, for the most part this blog isn't going to be about programming for Smart Devices, programming for the .NET Compact Framework or even .NET programming in general. Instead I'm going to be writing about what I know best - C++ programming techniques, and general programming techniques that can be applied in most languages.
To start off, here's a programming principle that seems to me to be so fundamental that it is often forgotten: Say what you mean.
What do I mean by this? Well, let's start off with a simple example. Suppose you have some code like this:
#include <iostream>
#include <iomanip>
typedef long long int64;
typedef unsigned long long uint64;
struct int128
{
int64 high;
uint64 low;
};
int main()
{
int128 x={10,20}, y={30,40}, z;
z.low = x.low + y.low;
z.high = x.high + y.high;
if (z.low < x.low)
++z.high;
std::cout << std::hex << std::setw(16) << std::setfill('0') << z.high;
std::cout << std::setw(16) << std::setfill('0') << z.low << std::endl;
}
What does this code actually do? Well, it may not be obvious at first sight because it doesn't actually say what it does. But compare this slightly modified version:
int128 sum_of_128bit_numbers(int128 x,int128 y)
{
int128 z;
z.low = x.low + y.low;
z.high = x.high + y.high;
if (z.low < x.low)
++z.high;
return z;
}
z = sum_of_128bit_numbers(x,y);
Now we can immediately see that this code implements addition of high precision integers. The simple act of naming the 4 mysterious lines of code (by putting them in a function with a descriptive name) has made the program much easier to understand (more so, I would argue, than an equivalent comment that just explains what those 4 lines of code do).
This change doesn't make any difference to the compiler (especially if it implements Named Return Value Optimization or chooses to inline the sum_of_128bit_numbers() function) but it greatly improves the readability of the program for humans. It's easy to write a program that a computer can understand (just change things until it compiles) but writing programs that are easy for people to understand is much more difficult. Since any non-trivial program will eventually need to be maintained, we should strive to make all our code as easy to read by humans as possible.
|
http://blogs.msdn.com/ajenner/archive/2004/07/29/201165.aspx
|
crawl-002
|
en
|
refinedweb
|
Increase your revenue with In-App Purchase on Windows Phone 8
In-App Purchase is one of the key ways to monetize your application: you can distribute your product for free to build a user base and then earn revenue afterwards. This article will show you how to enable In-App Purchase in your Windows Phone 8 app in a few easy steps.
Windows Phone 8
Preparation
You need an active Windows Phone Dev Center account before implementing In-App Purchase, even for testing. If you don't have one yet, go to the Dev Center and register (it costs $99 per year).
Submit the Windows Phone App as Beta version
First of all, you need to create a new simple project, for example Hello World. Compile the project to make the XAP file. Place it somewhere on your computer.
Now go to the Windows Phone Dev Center portal and submit a new app.
Click 1 App Info and fill in the application's info.
Once done, click Submit and go to the next step, 2 Upload and describe your XAP(s).
Browse for the XAP file you compiled in the first step and upload it to Store.
Fill in every mandatory field (there are not many) and upload the App Icon (300x300 px), Background Image (1000x800 px) and screenshots for every supported screen resolution. Since this is a beta app submission, you can just submit blank images with the required resolutions, so don't worry about it.
Save and Submit
Go to App Details page and note down the App ID.
Change Product ID in WMAppManifest.xml file of your Windows Phone project to the App ID provided above.
Add In-App Product(s)
Now it's time to add In-App product associated with the application. To do that, click on Products link inside application info page.
Follow the instructions and fill in all of the required fields. Please note that there are two types of In-App product, Consumable and Durable.
Choose the one that suits your product best and ... done! =D Your In-App product is now ready. However, please note that it might take up to 24 hours for the item to appear in the system.
Now move to coding part.
Coding Part 1: In-App product listing
Add a namespace using like this.
using Windows.ApplicationModel.Store;
using Store = Windows.ApplicationModel.Store;
To list the products associated with this application, just simply call the command.
ListingInformation li = await Store.CurrentApp.LoadListingInformationAsync();
foreach (string key in li.ProductListings.Keys)
{
ProductListing pListing = li.ProductListings[key];
System.Diagnostics.Debug.WriteLine(key);
}
You can get the product information from the variable pListing; for example, pListing.Name gives the product name entered in the Windows Phone Dev Center portal, pListing.FormattedPrice gives the price, and so on.
To check whether the user has already bought the product, simply call:
Store.CurrentApp.LicenseInformation.ProductLicenses[key].IsActive
Now integrate with the UI
<ScrollViewer HorizontalAlignment="Left" Margin="12,0,12,0" Grid.Row="1">
    <ItemsControl x:Name="pics">
        <ItemsControl.ItemTemplate>
            <DataTemplate>
                <StackPanel>
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="*"/>
                            <ColumnDefinition Width="*"/>
                        </Grid.ColumnDefinitions>
                        <Image Margin="4" Source="{Binding imgLink}"/>
                        <StackPanel Grid.Column="1">
                            <TextBlock Foreground="white" FontWeight="ExtraBold" Text="{Binding Name}" />
                            <TextBlock Foreground="white" FontWeight="Normal" Text="{Binding Status}" />
                            <Button Content="Buy Now" Visibility="{Binding BuyNowButtonVisible}" Click="ButtonBuyNow_Clicked" Tag="{Binding key}" />
                        </StackPanel>
                    </Grid>
                </StackPanel>
            </DataTemplate>
        </ItemsControl.ItemTemplate>
    </ItemsControl>
</ScrollViewer>
public class ProductItem
{
public string imgLink { get; set; }
public string Status { get; set; }
public string Name { get; set; }
public string key { get; set; }
public System.Windows.Visibility BuyNowButtonVisible { get; set; }
}
public ObservableCollection<ProductItem> picItems = new ObservableCollection<ProductItem>();
protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
RenderStoreItems();
base.OnNavigatedTo(e);
}
private async void RenderStoreItems()
{
picItems.Clear();
try
{
//StoreManager mySM = new StoreManager();
ListingInformation li = await Store.CurrentApp.LoadListingInformationAsync();
foreach (string key in li.ProductListings.Keys)
{
ProductListing pListing = li.ProductListings[key];
System.Diagnostics.Debug.WriteLine(key);
string status = Store.CurrentApp.LicenseInformation.ProductLicenses[key].IsActive ? "Purchased" : pListing.FormattedPrice;
string imageLink = string.Empty;
picItems.Add(
new ProductItem {
imgLink = key.Equals("molostickerdummy") ? "/Res/41.png" : "/Res/18.png",
Name = pListing.Name,
Status = status,
key = key,
BuyNowButtonVisible = Store.CurrentApp.LicenseInformation.ProductLicenses[key].IsActive ? System.Windows.Visibility.Collapsed : System.Windows.Visibility.Visible
}
);
}
pics.ItemsSource = picItems;
}
catch (Exception e)
{
System.Diagnostics.Debug.WriteLine(e.ToString());
}
}
And this is what you get as a result
Easy, right? Huh ;)
Coding Part 2: Buy an item
To make a buy request, you could simply call this command
string receipt = await Store.CurrentApp.RequestProductPurchaseAsync(pID, false);
where pID represents the ProductId. And this is the complete way to call it.
private async void ButtonBuyNow_Clicked(object sender, RoutedEventArgs e)
{
Button btn = sender as Button;
string key = btn.Tag.ToString();
if (!Store.CurrentApp.LicenseInformation.ProductLicenses[key].IsActive)
{
ListingInformation li = await Store.CurrentApp.LoadListingInformationAsync();
string pID = li.ProductListings[key].ProductId;
string receipt = await Store.CurrentApp.RequestProductPurchaseAsync(pID, false);
RenderStoreItems();
}
}
Once the user presses the Buy Now button, the application will navigate to the Store like this.
The Store will process the payment and redirect back to the application.
And as you can see, the item is now marked as Purchased!
Congratulations. Your application is now able to work with IAP ^ ^. You can apply these basic steps to your own product to increase the revenue gained from your Windows Phone 8 application. For more information about the IAP API, please scroll down to the Reference part. Valuable resources are right there.
Remark: Don't forget to change the Distribution Channels setting to Public Store once you finish the testing process, to publish the application to the public.
Code Snippets
You can download the code for this example from File:BuyItemIAP.zip
Reference
For more information about IAP, see the links below.
|
http://developer.nokia.com/community/wiki/index.php?title=Increase_your_revenue_with_In-App_Purchase_on_Windows_Phone_8&oldid=182823
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit Oak 0.5. The release is available for download at:
See the full release notes below for details about this release.
Release Notes -- Apache Jackrabbit Oak -- Version 0.5

Apache Jackrabbit Oak 0.5 is to be considered alpha-level software. Use at your own risk with no stability or compatibility guarantees.
Changes in Oak 0.5
------------------
Improvements
[OAK-239] - MicroKernel.getRevisionHistory: maxEntries behavior should be
documented
[OAK-255] - Implement Node#getReferences() both for REFERENCE and
WEAKREFERENCE
[OAK-258] - Dummy implementation for session scoped locks
[OAK-263] - Type of bindings should be covariant in
SessionQueryEngine.executeQuery()
[OAK-264] - MicroKernel.diff for depth limited, unspecified changes
[OAK-274] - Split NodeFilter into its own class
[OAK-275] - Introduce TreeLocation interface
[OAK-282] - Use random port in oak-run tests
[OAK-284] - Reduce memory usage of KernelNodeState
[OAK-285] - Split CommitEditor into CommitEditor and Validator interfaces
[OAK-289] - Remove TreeImpl.Children
[OAK-290] - Move Query related interfaces in oak.spi.query
[OAK-292] - Use Guava preconditions instead of asserts to enforce contract
[OAK-315] - Separate built-in node types from ReadWriteNodeTypeManager
Bug fixes
[OAK-136] - NodeDelegate leakage from NodeImpl
[OAK-221] - Clarify nature of 'path' parameter in oak-api
[OAK-228] - inconsistent paths used in oak tests
[OAK-229] - Review root-node shortcut in NamePathMapperImpl
[OAK-230] - Review and fix inconsistent usage of oak-path in oak-jcr
[OAK-238] - ValueFactory: Missing identifier validation when creating
(weak)reference value from String
[OAK-240] - mix:mergeConflict violates naming convention
[OAK-242] - Mixin rep:MergeConflict is not a registered node type
[OAK-243] - NodeImpl.getParent() not fully encapsulated in a
SessionOperation
[OAK-245] - Add import for org.h2 in oak-mk bundle
[OAK-248] - Review path constants in the oak source code
[OAK-252] - Stop sending observation events on shutdown
[OAK-254] - waitForCommit returns null in certain situations
[OAK-256] - JAAS Authentication failing in OSGi env due to classloading
issue
[OAK-257] - NPE in o.a.j.oak.security.privilege.PrivilegeDefinitionImpl
constructor
[OAK-265] - waitForCommit gets wrongly triggered on private branch commits
[OAK-268] - XPathQueryEvaluator generates incorrect XPath query
[OAK-272] - every session login causes a mk.branch operation
[OAK-278] - Tree.getStatus() and Tree.getPropertyStatus() fail for items
whose parent has been removed
[OAK-279] - ChangeProcessor getting stuck while shutdown
[OAK-286] - Possible NPE in LuceneIndex
[OAK-287] - PrivilegeManagerImplTest.testJcrAll assumes that there are no
custom privileges
[OAK-291] - Clarify paths in Root and Tree
[OAK-294] - nt:propertyDefinition has incorrect value constraints for
property types
[OAK-296] - PathUtils.isAncestor("/", "/") should return false but returns
true
[OAK-299] - Node Type support: SQL2QueryResultTest fails
[OAK-311] - Remapping a namespace breaks existing content
[OAK-313] - Trailing slash not removed for simple path in JCR to Oak path
conversion
[OAK-316] - CommitFailedException.throwRepositoryException swallows parts
of the stack traces
[OAK-330] - Some MongoMK tests do not use CommitImpl constructor correctly
[OAK-332] - [MongoMK] Node is not visible in head revision
[OAK-334] - Add read-only lucene directory
Changes in Oak 0.4
------------------
New Features
[OAK-182] - Support for "invisible" internal content
[OAK-193] - TODO class for partially implemented features
[OAK-227] - MicroKernel API: add depth parameter to diff method
Improvements
[OAK-153] - Split the CommitHook interface
[OAK-156] - Observation events need Session.refresh
[OAK-158] - Specify fixed memory settings for unit and integration tests
[OAK-161] - Refactor Tree#getChildStatus
[OAK-163] - Move the JCR TCK back to the integrationTesting profile
[OAK-164] - Replace Tree.remove(String) with Tree.remove()
[OAK-165] - NodeDelegate should not use Tree.getChild() but rather
Root.getTree()
[OAK-166] - Add Tree.isRoot() method instead of relying on
Tree.getParent() == null
[OAK-171] - Add NodeState.compareAgainstBaseState()
[OAK-172] - Optimize KernelNodeState equality checks
[OAK-174] - Refactor RootImpl and TreeImpl to take advantage of the child
node state builder introduced with OAK-170
[OAK-176] - Reduce CoreValueFactoryImpl footprint
[OAK-183] - Remove duplicate fields from NodeImpl and PropertyImpl which
are already in the ItemImpl super class
[OAK-184] - Allow PropertyState.getValues() to work on single-valued
properties
[OAK-186] - Avoid unnecessary rebase operations
[OAK-192] - Define behavior of Tree#getParent() if the parent is not
accessible
[OAK-194] - Define behavior of Tree#getProperty(String) in case of lack
of access
[OAK-195] - State that Tree#hasProperty returns false of the property is
not accessible
[OAK-196] - Make Root interface permission aware
[OAK-198] - Refactor RootImpl#merge
[OAK-199] - KernelNodeStore defines 2 access methods for the CommitEditor
[OAK-200] - Replace Commons Collections with Guava
[OAK-232] - Hardcoded "childOrder" in NodeDelegate
Bug fixes
[OAK-155] - Query: limited support for the deprecated JCR 1.0 query
language Query.SQL
[OAK-173] - MicroKernel filter syntax is not proper JSON
[OAK-177] - Too fast timeout in MicroKernelIT.waitForCommit
[OAK-179] - Tests should not fail if there is a jcr:system node
[OAK-185] - Trying to remove a missing property throws
PathNotFoundException
[OAK-187] - ConcurrentModificationException during gc run
[OAK-188] - Invalid JSOP encoding in CommitBuilder and
KernelNodeStoreBranch
[OAK-207] - TreeImpl#getStatus() never returns REMOVED
[OAK-208] - RootImplFuzzIT test failures
[OAK-209] - BlobStore: use SHA-256 instead of SHA-1, and use two
directory levels for FileBlobStore
[OAK-211] - CompositeEditor should keep the base node state stable
[OAK-213] - Misleading exception message in NodeImpl#getParent
[OAK-215] - Make definition of ItemDelegate#getParent permission aware
[OAK-219] - SessionDelegate#getRoot throws IllegalStateException if the
root node is not accessible
[OAK-224] - Allow the ContentRepositoryImpl to receive a CommitEditor in
the constructor
Changes in Oak 0.3
------------------
New features
[OAK-59] - Implement Session.move
[OAK-63] - Implement workspace copy and move
Improvements
[OAK-29] - Simplify SessionContext
[OAK-30] - Strongly typed wrapper for the MicroKernel
[OAK-31] - In-memory MicroKernel for testing
[OAK-44] - Release managements tweaks
[OAK-46] - Efficient diffing of large child node lists
[OAK-48] - MicroKernel.getNodes() should return null for not existing
nodes instead of throwing an exception
[OAK-52] - Create smoke-test build profile
[OAK-53] - exclude longer running tests in the default maven profile
[OAK-67] - Initial OSGi Bundle Setup
[OAK-70] - MicroKernelInputStream test and optimization
[OAK-71] - Logging dependencies
[OAK-81] - Remove offset and count parameters from
NodeState.getChildNodeEntries()
Bug fixes
[OAK-20] - Remove usages of MK API from oak-jcr
[OAK-62] - ConnectionImpl should not acquire Microkernel instance
[OAK-69] - oak-run fails with NPE
[OAK-78] - waitForCommit() test failure for MK remoting
[OAK-82] - Running MicroKernelIT test with the InMem persistence creates
a lot of GC threads
|
http://mail-archives.apache.org/mod_mbox/jackrabbit-announce/201210.mbox/%3CCAB-0WTAiUjxz3SM+Ok7fH04Zi2riV8d6A-wJTxN_1erTHHEkBg@mail.gmail.com%3E
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
#330: Lazy evaluation of req.chrome
--------------------------+--------------------
Reporter: peter | Owner: nobody
Type: enhancement | Status: closed
Priority: minor | Milestone:
Component: dashboard | Version:
Resolution: fixed | Keywords:
--------------------------+--------------------
Comment (by andrej):
I can see similar behavior when rendering an error page on a local environment (rev 1439061). The error occurs when an exception is triggered during request dispatching by a handler. For example, use the built-in search and set a page parameter that is out of range, e.g.
You get a plain-text error page indicating that there is an error in
{{{
... bloodhound_theme.html", line 278, in <Expression
u'chrome.labels.footer_left_prefix'>
...
has no member named "labels"
}}}
As far as I understand the code, the problem is in
bloodhound_theme/bhtheme/theme.py, line 217
{{{
def post_process_request(self, req, template, data, content_type):
"""Post process request filter.
Removes all trac provided css if required"""
if template is None and data is None:
return template, data, content_type
...
req.chrome['labels'] = self._get_whitelabelling()
}}}
In case of an exception during request processing, the post_process_request method is called with all None parameters, so the method returns before {{{req.chrome['labels']}}} is filled.
Should we move line
{{{ req.chrome['labels'] = self._get_whitelabelling()}}}
before the if/return statement?
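A minimal sketch of that proposed change (assuming nothing else in the method depends on the early return happening first) would be:
{{{
def post_process_request(self, req, template, data, content_type):
    """Post process request filter.
    Removes all trac provided css if required"""
    # Set the whitelabelling data before the early return, so that error pages
    # rendered with template/data of None still see req.chrome['labels'].
    req.chrome['labels'] = self._get_whitelabelling()
    if template is None and data is None:
        return template, data, content_type
    ...
}}}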
--
Ticket URL: <>
Apache Bloodhound <>
The Apache Bloodhound (incubating) issue tracker
|
http://mail-archives.apache.org/mod_mbox/incubator-bloodhound-commits/201301.mbox/%3C069.01018fce5deb0f27b549be2284911d20@incubator.apache.org%3E
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
This is a very simple C# graphics program that displays the famous Mandelbrot set, lets you zoom in with a simple selection rectangle, and shows how to smooth the colours, and not flicker.
Most people are familiar with the Mandelbrot Set, an easy-to-compute but very pretty fractal. Typical source code looks like this. The linked code works fine, but the C# programmer will generally run into two problems: banded colours, and horrible flickering when you implement a selection rectangle.
I'm assuming you can create a project, drag tools onto a form, and program simple event handlers.
This is C# code, .NET Framework 4.0, using Visual Studio 2010.
My program is a simple form with a panel to draw the Mandelbrot set on, a draw button to re-draw the current image, and a reset button to start over and unzoom.
We want to set a few variables when our program starts:
public FormMandelbrot()
{
InitializeComponent();
cx0 = -2.0; // these values show the full mandelbrot set
cx1 = 0.5;
cy0 = -1.0;
cy1 = 1.0;
// where we store the panel background
Map = new Bitmap(panelMain.Width, panelMain.Height);
}
Draw button event code looks like:
// draw the currently selected mandelbrot section
private void buttonDraw_Click(object sender, EventArgs e)
{
doDrawMandelbrot = true;
panelMain.Refresh();
}
You never actually do any drawing in event handlers. You just let the program know what it needs to know to draw, and call Refresh to trigger the actual drawing in the Paint event.
Reset button event code looks like:
private void buttonReset_Click(object sender, EventArgs e)
{
// set the rectangle to draw back to the whole thing
cx0 = -2.0;
cx1 = 0.5;
cy0 = -1.0;
cy1 = 1.0;
doDrawMandelbrot = true;
panelMain.Refresh();
}
And the Paint event handler:
private void panelMain_Paint(object sender, PaintEventArgs e)
{
if (doDrawMandelbrot)
{
DrawMandelbrot(cx0, cy0, cx1, cy1, e.Graphics);
doDrawMandelbrot = false;
}
}
The only way to stop flicker is to make your own class derived from Panel that only draws when you draw in Paint. This is done by setting the "style", which can't be done in a normal Panel class.
// this has to be at the bottom of the source file or the form designer won't work
public class MyPanel : System.Windows.Forms.Panel
{
// a non-flickering panel. It doesn't draw its own background
// if you don't do this the panel flickers like crazy when you resize the
// selection rectangle
public MyPanel()
{
this.SetStyle(
System.Windows.Forms.ControlStyles.UserPaint |
System.Windows.Forms.ControlStyles.AllPaintingInWmPaint |
System.Windows.Forms.ControlStyles.OptimizedDoubleBuffer,
true);
}
}
Once you have this new class set up, you have to change the declaration of the panel to use this new one. In Form1.Designer.cs, change panelMain from a normal Panel to your new panel:
private void InitializeComponent()
{
this.buttonDraw = new System.Windows.Forms.Button();
this.buttonReset = new System.Windows.Forms.Button();
this.panelMain = new Mandelbrot.MyPanel();
and farther down..
private System.Windows.Forms.Button buttonDraw;
private System.Windows.Forms.Button buttonReset;
//private System.Windows.Forms.Panel panelMain;
private MyPanel panelMain;
But that is only half the no-flicker battle. Drawing the Mandelbrot takes seconds, which would cause flicker if it had to be redone each time we moved the mouse for the selection rectangle. So we save it to a bitmap, draw to the bitmap, then copy the bitmap to the visible panel as we move the mouse. No flicker.
So mouse move event looks like:
private void panelMain_MouseMove(object sender, MouseEventArgs e)
{
if (isMouseDown)
{
// get new coords of rect
mx1 = e.X;
my1 = e.Y;
panelMain.Refresh();
}
}
and the paint now looks like this:
private void panelMain_Paint(object sender, PaintEventArgs e)
{
Graphics g;
if (isMouseDown)
{
Pen penYellow = new Pen(Color.ForestGreen);
// restore background, then draw new rectangle, both to the offscreen bMap
bMap = (Bitmap )bMapSaved.Clone();
g = Graphics.FromImage(bMap);
g.DrawRectangle(penYellow, mx0, my0, mx1-mx0, my1-my0);
e.Graphics.DrawImageUnscaled(bMap, 0, 0); // copy whole thing to visible screen
}
else
{
if (doDrawMandelbrot)
{
g = Graphics.FromImage(bMap);
// draw it to our background bitmap
DrawMandelbrot(cx0, cy0, cx1, cy1, g);
// display it on the panel
e.Graphics.DrawImageUnscaled(bMap, 0, 0);
// save our background; the current mandelbrot image
bMapSaved = (Bitmap)bMap.Clone();
doDrawMandelbrot = false;
}
else
{
e.Graphics.DrawImageUnscaled(bMap, 0, 0);
}
}
}
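The DrawMandelbrot routine itself is only in the full download; a minimal sketch of what such an escape-time loop could look like, using the same fields and the MapColor helper described later (the iteration limit and per-pixel FillRectangle here are assumptions, not the article's exact code):

// Hedged sketch, not the article's exact implementation of DrawMandelbrot.
private void DrawMandelbrot(double x0, double y0, double x1, double y1, Graphics g)
{
    const int maxIterations = 1000; // assumption; the real value is in the full source
    for (int px = 0; px < panelMain.Width; px++)
    {
        for (int py = 0; py < panelMain.Height; py++)
        {
            // map the pixel to a point in the complex plane
            double cr = x0 + (x1 - x0) * px / panelMain.Width;
            double ci = y0 + (y1 - y0) * py / panelMain.Height;
            double zr = 0.0, zi = 0.0, zr2 = 0.0, zi2 = 0.0;
            int i = 0;
            while (i < maxIterations && zr2 + zi2 <= 4.0) // escape radius 2
            {
                zi = 2.0 * zr * zi + ci;
                zr = zr2 - zi2 + cr;
                zr2 = zr * zr;
                zi2 = zi * zi;
                i++;
            }
            Color colour = (i == maxIterations) ? Color.Black : MapColor(i, zr2, zi2);
            using (SolidBrush brush = new SolidBrush(colour))
                g.FillRectangle(brush, px, py, 1, 1);
        }
    }
}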
The other point is how to scale/zoom. The main drawing function, not shown, takes as parameters which part of the set to draw. So all you have to do is convert the points you select with the mouse to the points in the set:
private void panelMain_MouseUp(object sender, MouseEventArgs e)
{
// save where the end of the selection rect is at
isMouseDown = false;
mx1 = e.X;
my1 = e.Y;
/*
* cx0, cy0 and cx1, cy1 are the current extent of the set
* mx0,my0 and mx1,my1 are the part we selected
* do the math to draw the selected rectangle
* */
double scaleX, scaleY;
scaleX = (cx1 - cx0) / (double )panelMain.Width;
scaleY = (cy1 - cy0) / (double)panelMain.Height;
cx1 = (double )mx1*scaleX + cx0;
cy1 = (double)my1*scaleY + cy0;
cx0 = (double)mx0 * scaleX + cx0;
cy0 = (double)my0 * scaleY + cy0;
doDrawMandelbrot = true;
panelMain.Refresh(); // force mandelbrot to redraw
}
And we don't want horrible banded colours, we want nice smooth colours. This odd bit of math makes a "smooth" value which we can use as the hue to get a nice colour. If you look at the math link it looks very complex, but it boils down to this:
private Color MapColor(int i, double r, double c)
{
double di=(double )i;
double zn;
double hue;
zn = Math.Sqrt(r + c);
// 2 is escape radius
hue = di + 1.0 - Math.Log(Math.Log(Math.Abs(zn))) / Math.Log(2.0);
hue = 0.95 + 20.0 * hue; // adjust to make it prettier
// the hsv function expects values from 0 to 360
while (hue > 360.0)
hue -= 360.0;
while (hue < 0.0)
hue += 360.0;
return ColorFromHSV(hue, 0.8, 1.0);
}
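ColorFromHSV is in the full download rather than shown here; a minimal sketch of a standard HSV-to-RGB conversion (hue in degrees, saturation and value in 0..1) that would satisfy the call above might look like this. The article's own version may differ:

// Hedged sketch of an HSV-to-RGB helper.
private Color ColorFromHSV(double hue, double saturation, double value)
{
    int hi = (int)(hue / 60.0) % 6;
    double f = hue / 60.0 - Math.Floor(hue / 60.0);
    int v = (int)(value * 255.0);
    int p = (int)(value * (1.0 - saturation) * 255.0);
    int q = (int)(value * (1.0 - f * saturation) * 255.0);
    int t = (int)(value * (1.0 - (1.0 - f) * saturation) * 255.0);
    switch (hi)
    {
        case 0: return Color.FromArgb(255, v, t, p);
        case 1: return Color.FromArgb(255, q, v, p);
        case 2: return Color.FromArgb(255, p, v, t);
        case 3: return Color.FromArgb(255, p, q, v);
        case 4: return Color.FromArgb(255, t, p, v);
        default: return Color.FromArgb(255, v, p, q);
    }
}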
Please see the full source for a working version, and all the details.
|
http://www.codeproject.com/script/Articles/View.aspx?aid=650437
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Scruffy Cropper for Django
A reusable part to let users crop their uploaded images. I'm using jCrop on the front end and PIL on the back end.
Usage
Let's say you have something like this:
class Foo(models.Model):
    photo = models.ImageField(...)
And you want users to be able to crop this photo without destroying the original.
Setting up
pip install this from git and add 'cropper' to settings.INSTALLED_APPS.
Make a template that handles the javascript and form side. I have an example, but I assume you're particular about UI... right?
The main view is cropper.views.create_crop, and while you could point a URL at it, because there are no security checks or logic ensuring what kind of fields you want to crop, it'd be smart to wrap it in your own view.
@login_required
def crop(request):
    obj = Foo.objects.get(owner=request.user)
    next = reverse('foo.dashboard')
    # Other logic, maybe?
    # Args in order: request, app_label, model name, model instance id, field name,
    # template, and the url to go to post save or delete
    return create_crop(request, 'my_app_label', 'foo', obj.id, 'photo',
                       'my_crop_template.html', post_save_redirect=next)
Helpers
I've included some helper methods to help keep GFK dirt out of your code. They are...
from cropper.helpers import get_or_create_crop, get_cropped_image, delete_crop
They all take a model instance and a field name (as a string).
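For example (a hypothetical call, assuming foo is a saved Foo instance with a photo field):

# Hypothetical usage; names other than the helpers themselves are assumptions.
crop = get_or_create_crop(foo, 'photo')    # there should only ever be one crop per field
image = get_cropped_image(foo, 'photo')    # returns None if no cropped version exists
delete_crop(foo, 'photo')                  # removes the crop, leaving the original untouched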
Meanwhile, back at the models.py
Your model should subclass CroppableImageMixin. It overrides save to wipe out any obsolete crops if you update one of your croppable images.
Now we're in a state where we may or may not have a cropped image... so let's add a property of the model to handle that.
from cropper.models import CroppableImageMixin
from cropper.helpers import get_cropped_image

class Foo(CroppableImageMixin):
    photo = models.ImageField(...)

    @property
    def cropped_photo(self):
        """Return the photo, looking for a cropped version first."""
        crop = get_cropped_image(self, 'photo')  # Returns None if there's no cropped version.
        return crop or self.photo
And then in the template - or anywhere else for that matter - foo.cropped_photo will always return what you want.
Other requirements
It also assumes you use easy-thumbnails. But anything (e.g. sorl-thumbnail) that uses the same API should work fine.
Other Notes
- Always use get_or_create when dealing with crops. There should only be one on a field.
- PIL is required. I don't list it as a dependency because installing PIL on almost any OS is hairy, and if you haven't done it yet, the usual pip install probably won't help you anyway.
- On the front end, I have a submit button with the name 'delete', and another with 'save'. The view ignores their values; it just cares whether 'delete' is in POST, so you can change the user-facing text at your discretion.
- Anything jcrop supports should work. There's a little bit of js at the bottom of the template that translates between PIL-acceptable coordinates and jcrop coordinates.
- The jcrop can't be applied to the image until ready state because the image being cropped has to be fully loaded, or the initial crop area winds up as 0x0.
- Responsive? Touch support? Absolutely.
|
https://bitbucket.org/whatisjasongoldstein/scruffy-cropper/src
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
OK, I tried something else and it works a little. So far I have managed to read the numbers and find the average, but I need help with displaying the numbers contained in the file. Can anyone help me?
Code:
#include <iostream>
#include <fstream>
using namespace std;

//class Array
class Array
{
public:
    float findaverage();
private:
    int x;
    int sum;
    float average;
};

float Array::findaverage()
{
    int sum = 0;
    int x;
    float average = 0.0;
    ifstream infile;

    //open the file
    infile.open("array.txt");
    if (!infile)
    {
        cerr << "Cannot open file";
        exit(1);
    }

    //find average
    while (infile >> x)
    {
        sum += x;
        average = sum / x;
    }
    infile.close();

    cout << "The average is " << average << endl;
    return 0;
}

int main()
{
    Array myarray;
    myarray.findaverage();
    return 0;
}
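A minimal sketch of how the read loop could both display each number and compute the average correctly (dividing by a count of the values rather than by the last value read; this is not part of the original post, and count is an assumed int initialised to 0):

while (infile >> x)
{
    cout << x << endl;   // display each number as it is read
    sum += x;
    ++count;             // track how many numbers were read
}
if (count > 0)
    average = static_cast<float>(sum) / count;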
|
http://cboard.cprogramming.com/cplusplus-programming/61782-need-help-please.html
|
CC-MAIN-2014-23
|
en
|
refinedweb
|
Entree Gold Reports on Third Quarter 2012 VANCOUVER, BRITISH COLUMBIA -- (Marketwire) -- 11/14/12 -- Entree Gold Inc. (TSX:ETG)(NYSE MKT:EGI)(NYSE Amex:EGI)(FRANKFURT:EKA)("Entree" or the "Company") has today filed its interim operational and financial results for the quarter ended September 30, 2012. Greg Crowe, President and CEO commented, "During this most recently completed quarter, we focused on preparing a Preliminary Economic Assessment for our Ann Mason deposit in Nevada and completing our Shivee West work program in Mongolia. We reached a major milestone when we announced the results of our first economic valuation of Ann Mason subsequent to quarter end, on October 24, 2012. In Mongolia, a power purchase agreement between Oyu Tolgoi LLC ("OTLLC") and the Inner Mongolia Power Corporation has been completed and commencement of phase 1 commercial production at Oyu Tolgoi is expected in the coming months." Highlights for the quarter ended September 30, 2012 and beyond include: Mongolia Lookout Hill Joint Venture Development of the Oyu Tolgoi mining complex continues at a rapid pace, with Turquoise Hill Resources announcing on November 5, 2012, that OTLLC has signed a binding power purchase agreement with the Inner Mongolia Power Corporation. Phase 1 construction is essentially complete and Oyu Tolgoi in now on the verge of commercial production. First development ore from the Southern Oyu open pits has been delivered to the crusher, and commissioning of the plant is in progress. As this project advances, the Company looks forward to development production from the Lift 1 underground operations on the Entree-OTLLC joint venture ground expected as early as 2015.. Trenching in 2012 has confirmed and expanded the mineralization discovered last year by reverse circulation ("RC") drilling, returning up to 81.4 grams/tonne gold over 3 metres within a broader mineralized zone approximately 400 metres long by 130 me tres wide. The Argo Zone mineralization is near surface and warrants further exploration. USA Ann Mason, Nevada Entree's second major asset is the Ann Mason Project in the Yerington district of Nevada. The project is well located close to major infrastructure and in a low-risk, mining friendly jurisdiction. On October 24, 2012, the Company announced the results of a positive Preliminary Economic Assessment ("PEA") for its 100%-owned Ann Mason copper-molybdenum porphyry deposit in Nevada. Ann Mason is expected to yield a base case, pre-tax, 7.5% net present value ("NPV7.5") of $1.11 billion and an internal rate of return ("IRR") of 14.8%, using assumed copper, molybdenum, gold and silver prices of $3.00/lb, $13.50/lb, $1,200/oz and $22/oz, respectively. Using October 15, 2012 spot commodity prices of $3.71/lb copper, $10.43/lb molybdenum, $1,736/oz gold and $33.22/oz silver, the pre-tax NPV7.5 and IRR increase to $2.54 billion and 22.9%, respectively. Preliminary metallurgical test results from Ann Mason are very encouraging and indicate potential for a high quality copper concentrate with no deleterious elements. The PEA envisions an open pit and conventional sulphide flotation milling operation with an initial 24 year mine life. Over the life of mine, Ann Mason is estimated to produce an annual average of 214 million pounds of copper at total cash costs per pound sold, net of by-product sales, of $1.46 per pound copper (see "non-U.S. GAAP performance measures" below).. 
Entree reported the first resource estimate for the Blue Hill copper deposit, located 1.5 kilometres northwest of the Ann Mason copper-molybdenum porphyry deposit, on October 29, 2012. Combined inferred oxide and mixed resource categories total 72.13 million tonnes ("Mt") averaging 0.17% copper (at a 0.10% copper cut-off), or 277.5 million pounds of copper. The underlying inferred sulphide resource is estimated to contain 49.86 Mt averaging 0.23% copper at a 0.15% copper cut-off. Preliminary metallurgy suggests the oxide and mixed copper mineralization at Blue Hill is amenable to low-cost heap leach and solvent extraction/electrowinning. Other high-priority targets on the Ann Mason Project require further exploration and development. In the Blackjack area, induced polarization and surface copper oxide exploration targets have been identified and provide new targets for drill testing. The area between Ann Mason and Blue Hill also remains highly prospective and underexplored.

Corporate Summary

For the three months ended September 30, 2012, net loss decreased to $1,899,158 compared to a net loss of $3,506,238 in the three months ended September 30, 2011. During the three months ended September 30, 2012, Entree incurred lower operating expenditures, primarily from decreased exploration expenses on the Ann Mason Project, relative to the three months ended September 30, 2011. In addition to these decreased operating expenditures, the Company recorded a deferred income tax recovery in the period and decreased losses from equity investee resulting from a decreased loss from the Entree-OTLLC joint venture.

SELECTED FINANCIAL INFORMATION
                                As at September 30, 2012 (US$)    As at September 30, 2011 (US$)
Working capital(1)                                   6,735,338                        10,321,520
Total assets                                        67,327,578                        64,897,779
Total long term liabilities                         12,939,869                        13,727,938
(1) Working Capital is defined as Current Assets less Current Liabilities.

The Company's Interim Financial Statements and accompanying management's discussion and analysis for the quarter ended September 30, 2012 and its Annual Information Form for the year ended December 31, 2011 are available on the Company website, on SEDAR and on EDGAR. Unless otherwise noted, all figures in this news release are reported in United States dollars.

"Cash Costs" is a non-U.S. GAAP performance measure. It is included because this statistic is widely accepted as the standard for reporting cash costs of production in North America. This performance measure does not have a meaning within U.S. GAAP and, therefore, amounts presented may not be comparable to similar data presented by other mining companies. This performance measure should not be considered in isolation as a substitute for measures of performance in accordance with U.S. GAAP.

QUALIFIED PERSON

Robert Cann, P.Geo., Entree's Vice-President Exploration, and a Qualified Person as defined by NI 43-101, has approved the technical information in this news release.

Contacts: Entree Gold Inc., Mona Forster, Executive Vice President, 604-687-4777 or Toll Free: 866-368-7330, 604-687-4770 (FAX), mforster@entreegold.com
http://www.bloomberg.com/article/2012-11-14/azeqyl3_GKhg.html
Aborting a rake task which uses a subprocess
Discussion in 'Ruby' started by Alex Young, Jan 17.
http://www.thecodingforums.com/threads/aborting-a-rake-task-which-uses-a-subprocess.837275/
1.1 Glossary
The following terms are defined in [MS-GLOS]:
8.3 name
ASCII
Augmented Backus-Naur Form (ABNF)
big-endian
code page
Coordinated Universal Time (UTC)
cyclic redundancy check (CRC)
distinguished name (DN)
domain
flags
GUID
language code identifier (LCID)
property set
remote procedure call (RPC)
resource
Unicode
universally unique identifier (UUID)
The following terms are defined in [MS-OXGLOS]:
address book
address list
address type
Attachment object
attachments table
base64 encoding
best body
binary large object (BLOB)
blind carbon copy (Bcc) recipient
body part
calendar
carbon copy (Cc) recipient
character set
Contact object
display name
Embedded Message object
encapsulation
EntryID
header
Hypertext Markup Language (HTML)
Internet Message Access Protocol - Version 4 (IMAP4)
Joint Photographic Experts Group (JPEG)
locale
Mail User Agent (MUA)
mailbox
message body
message class
Message object
Messaging Application Programming Interface (MAPI)
metafile
MIME body
MIME content-type
MIME entity
MIME message
MIME part
Multipurpose Internet Mail Extensions (MIME)
named property
non-delivery report
Object Linking and Embedding (OLE)
one-off EntryID
Out of Office (OOF)
Personal Information Manager (PIM)
plain text
Post Office Protocol - Version 3 (POP3)
property type
pure MIME message
recipient
recipient table
reminder
remote operation (ROP)
Rich Text Format (RTF)
S/MIME (Secure/Multipurpose Internet Mail Extensions)
Simple Mail Transfer Protocol (SMTP)
spam
stream
To recipient
top-level message
Transport Neutral Encapsulation Format (TNEF)
Transport Neutral Encapsulation Format (TNEF) message
Unified Messaging
Uniform Resource Identifier (URI)
Uniform Resource Locator (URL)
UTF-16LE
vCard
The following terms are specific to this document:
contact attachment: An attached message item that has a message type of "IPM.Contact" and adheres to the definition of a Contact object.
delivery status notification (DSN): A message that reports the result of an attempt to deliver a message to one or more recipients, as described in [RFC3464].
Internet Mail Connector Encapsulated Address (IMCEA): A means of encapsulating an email address that is not compliant with [RFC2821] within an email address that is compliant with [RFC2821].
MIME analysis: A process that converts data from an Internet wire protocol to a format that is suitable for storage by a server or a client.
MIME generation: A process that converts data held by a server or client to a format that is suitable for Internet wire protocols.
MIME reader: An agent that performs MIME analysis. It can be a client or a server.
MIME writer: An agent that performs MIME generation. It can be a client or a server.
one-off address: An email address that is encoded as a mail-type/address pair. Valid mail-types include values such as SMTP, X400, X500, and MSMAIL.
PS_INTERNET_HEADERS: An extensible namespace that can store custom property headers.
MAY, SHOULD, MUST, SHOULD NOT, MUST NOT: These terms (in all caps) are used as described in [RFC2119]. All statements of optional behavior use either MAY, SHOULD, or SHOULD NOT.
http://msdn.microsoft.com/en-us/library/ee202587(d=printer,v=exchg.80).aspx
It seems that some of the goals are not so hard. Here I published my progress. I show how to define a Ring, a mathematical structure, in Haskell, how to instantiate the class Num as a Ring, how to (possibly at another moment of space-time) instantiate a new class as Num, and how to test the axioms for the new class. All of this amounts to something like a sophisticated "assert" mechanism, but, I think, much more flexible and elegant.

2008/10/22 Alberto G. Corona <agocorona at gmail.com>
> I guess that the namespace thing is not necessary. Maybe it can be done in
> Template Haskell. I never used TH, however. It is a matter of inserting code
> here and there and rewrite rules somewhere at compile time. It's a nice
> project. I'll try.
>
> 2008/10/22 Mitchell, Neil <neil.mitchell.2 at credit-suisse.com>
>> Hi Alberto,
>>
>> It's a lot of work, but I wish you luck :-) Many of the underlying tools
>> exist, but there definitely needs to be more integration.
>>
>> Thanks
>> Neil
>>
>> From: Alberto G. Corona [mailto:agocorona at gmail.com]
>> Sent: 22 October 2008 4:23 pm
>> To: Mitchell, Neil
>> Subject: Re: [Haskell-cafe] Fwd: enhancing type classes with properties
>>
>> Hi Neil,
>>
>> I see the contract type mechanism and safety check techniques reflected
>> in the above paper are a good step, but I think that something more
>> general would be better. What I propose is to integrate directly into the
>> language some concepts and developments that are already well known, to
>> solve some common needs that I think cannot be solved without this
>> integration:
>>
>> To make use of:
>> - QuickCheck-style validation. By the way, Don Stewart recommends adding
>>   quickcheck rules close to the class definitions, just for better
>>   documentation
>> - Implicit class properties defined everywhere in the documentation but
>>   impossible to reflect in the code (for example the famous monad rules:
>>   return x >>= f == f x, etc.)
>> - The superb GHC rewrite rule mechanism (perhaps with enhancements)
>> - Object-style namespaces, depending on class names.
>>
>> To solve problems like:
>> - code optimization
>> - code verification: regression tests for free!
>> - the need for safe overloading of methods and operators (making the
>>   namespaces dependent not only on the module name but also on class
>>   names). Why can I not overload the operator + in the context of a DSL
>>   for JavaScript generation if my operator does what + does?
>> - strict and meaningful rules for class instances
>> - making the rewrite rule mechanism visible to everyone
>>
>> 2008/10/22 Mitchell, Neil <neil.mitchell.2 at credit-suisse.com>
>>> Hi Alberto,
>>>
>>> Take a look at ESC/Haskell and Sound Haskell, which provide mechanisms
>>> for doing some of the things you want. I don't think they integrate with
>>> type classes in the way you mention, but I think that is just a question
>>> of syntax.
>>>
>>> Thanks
>>> Neil
>>>
>>> From: haskell-cafe-bounces at haskell.org, On Behalf Of Alberto G. Corona
>>> Sent: 22 October 2008 1:43 pm
>>> To: haskell-cafe at haskell.org
>>> Subject: [Haskell-cafe] Fwd: enhancing type classes with properties
>>>
>>> I'm just thinking aloud, but, because incorporating deeper mathematical
>>> concepts has proven to be the best solution for better and more flexible
>>> programming languages with fewer errors, I wonder if enriching type
>>> classes with axioms can solve additional problems.
>>>
>>> At first sight it does:
>>>
>>> class Abelian a where
>>>   (+) :: a -> a -> a
>>>   property ((+)) = a+b == b+a
>>>
>>> This permits:
>>> 1- Safer polymorphism: I can safely reuse the operator + if the type
>>>    and the property are obeyed. The inability to redefine operators is a
>>>    problem for DSLs that must use weird symbol combinations with unknown
>>>    meanings. Using common operators with fixed properties is very good;
>>>    the same applies to method names.
>>> 2- The compiler can use the axioms as rewrite rules.
>>> 3- In debugging mode, it is possible to verify the axiom for each value
>>>    generated during execution. Thus, a generator is not needed as in
>>>    QuickCheck; the QuickCheck logic can be incorporated in the debugging
>>>    executable.
>>> 3 guarantees that 1 and 2 are safe.
>>>
>>> A type class can express a relation between types, but it is not
>>> possible to define a relation between relations.
http://www.haskell.org/pipermail/haskell-cafe/2008-October/049749.html
Can someone possibly clarify for me what are the latest WLAN libraries and headers that I should be using for S60 3rd Ed FP2?
After much searching I found bits of source code I could use to display an IAP list and add a WLAN connection including specifying the security settings (without requiring user intervention), however, I get 22 warnings.
Firstly, the following includes report that they have been deprecated and will be removed, but I can't see how to do what I am doing without them:
#include <apselect.h>
#include <aplistitem.h>
#include <apdatahandler.h>
#include <apaccesspointitem.h>
Secondly, I need both of the following includes, but they conflict with each other due to a number of re-definitions, and neither is adequate on its own.
#include <wlancdbcols.h>
#include <cdbcols.h>
The re-definitions are:
WLAN_SSID,
WLAN_WEP_KEY2,3,4
WLAN_AUTHENTICATION_MODE
WLAN_CHANNEL_ID
WLAN_TX_POWER_LEVEL
Any advice greatly appreciated.
http://developer.nokia.com/community/discussion/showthread.php/170710-What-are-the-correct-WLAN-headers-and-libraries
On 10/12/06, Anita Kulshreshtha (JIRA) <dev@geronimo.apache.org> wrote:
> All plans should use 1.2 namespace
I've got an idea of using the M2 resource filtering feature to achieve
it, but am not quite sure it would work with all the XML stuff (XSDs,
etc.). Just a thought. If anyone could explain the problems one could
run across while implementing it with filtering, I'd appreciate it. Or
I'll just wait patiently till Anita commits the change...
Jacek
--
Jacek Laskowski
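
A minimal sketch of that filtering idea, in case it helps the discussion. All names here (the plan.namespace property, the src/plan directory, the placeholder URI) are made up for illustration and are not taken from the Geronimo build; the real 1.2 plan namespace URI would be substituted in.

<!-- pom.xml fragment: declare the deployment plans as filtered resources -->
<properties>
  <!-- hypothetical property; set it to the real 1.2 plan namespace URI -->
  <plan.namespace>urn:example:geronimo-plan-1.2</plan.namespace>
</properties>
<build>
  <resources>
    <resource>
      <!-- hypothetical location of the plan files -->
      <directory>src/plan</directory>
      <!-- ${...} tokens in these files are replaced at build time -->
      <filtering>true</filtering>
    </resource>
  </resources>
</build>

Each plan would then open with xmlns="${plan.namespace}", so moving every plan to the 1.2 namespace becomes a single property change. Since filtering is plain token substitution, it should leave XSDs and other XML untouched unless they happen to contain literal ${...} sequences, which is presumably the concern raised above.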
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200610.mbox/%3C1b5bfeb50610112310w76b98603k37dc9bc2cdc431b4@mail.gmail.com%3E
Hi,
I am trying to get an example of OCM mapping with nt:file. What would the
annotation or XML mapping need to be to map a Java class like the one below to nt:file?
public class File
{
private String path;
private String mimeType;
private String encoding;
private InputStream data;
private Calendar lastModified;
// Add getters/setters
}
In addition, how do I ensure that "data" (the InputStream) is mapped to
a jcr:content child node of type nt:resource? I think that's required for the
content to get indexed and become searchable.
Thanks in advance.
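
While waiting for an OCM-specific answer, here is a plain JCR sketch of the node structure any nt:file mapping ultimately has to produce; the FileStore class and its storeFile method are hypothetical names, and only standard javax.jcr calls are used. Whatever the annotations end up looking like, the persisted result should match this shape: an nt:file node whose single jcr:content child is an nt:resource carrying jcr:data, which is the structure Jackrabbit's indexer knows how to extract and search.

import java.io.InputStream;
import java.util.Calendar;

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

// Hypothetical helper showing the nt:file / jcr:content / nt:resource shape.
public final class FileStore {

    private FileStore() { }

    public static Node storeFile(Session session, String parentPath, String name,
                                 String mimeType, String encoding,
                                 InputStream data, Calendar lastModified)
            throws RepositoryException {
        Node parent = (Node) session.getItem(parentPath);

        // nt:file has exactly one child node, which must be named jcr:content.
        Node file = parent.addNode(name, "nt:file");
        Node content = file.addNode("jcr:content", "nt:resource");

        // nt:resource properties; jcr:data holds the binary stream that gets indexed.
        content.setProperty("jcr:mimeType", mimeType);
        content.setProperty("jcr:encoding", encoding);
        content.setProperty("jcr:data", data);
        content.setProperty("jcr:lastModified", lastModified);

        session.save();
        return file;
    }
}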
http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200804.mbox/%3C20080403215524.8D09444C22A@relay1.r1.iad.emailsrvr.com%3E
I have Windows and Linux clients and I want to provide Active Directory authentication for both but keep DHCP and DNS on Linux servers. Is this possible? I have very little experience in administration and I'm kind of lost here on how I should implement this so it all works together.
What is the best way to do this? I'm free to choose Linux distributions and the Windows server version as long as it's 2003 or more recent.
I have Windows and Linux clients and I want to provide Active Directory authentication for both but keeping DHCP and DNS on Linux servers. Is this possible?
It's really a PITA to use Linux servers for AD DNS, since AD and DNS are so tightly integrated. You can do it, but good luck getting support. What I would recommend is pointing all of your clients and servers to the AD DNS servers for DNS and putting a global forwarder on your AD DNS servers to point to your Linux servers that host the rest of your infrastructure. As long as your AD namespace doesn't overlap with an existing namespace (it shouldn't), this will work just fine.
I'm free to choose linux distributions and windows server version as long as it's earlier than 2003
Um. If I were you, I wouldn't do this at all with this restriction. This leaves you with Windows 2000, which won't install on most modern hardware (no drivers, etc.). It's also end-of-life, meaning that there are zero patches of any kind.
"I really have to leave DNS and DHCP on the Linux servers"
The main issue you are going to have is making Active Directory happy with the DNS, as AD uses DNS for its service location protocol (via the SRV record type). However, using Linux DNS and DHCP (i.e. the BIND DNS server and the standard dhcpd daemon) in large-scale Microsoft environments is fairly easy to support, and there are a number of large Microsoft customers who do insist on using the Unix services for DNS and DHCP.
For DHCP, you will want to make sure that you are passing all the options required in your environment. As options vary considerably, I will leave you to the mercy of Google, although there is a nice Microsoft TechNet article that will give you the basics. Just make sure you have dhcpd configured to serve the parameters appropriate to your environment.
DNS is the more important part. All servers in your organization should have proper forward (A record) and reverse (in-addr.arpa record) lookup entries. Additionally, each Windows server will want several service (SRV) entries to let clients know which services can be found on that server. You can go about creating server entries in two ways. The first way is to create them yourself manually; you can find a fairly good discussion of that by Googling "Active Directory BIND DNS" (the top couple of results cover the details).
There is another way, however, that I recommend. Before you set up the Windows servers, I would give their IP addresses the right to write and update entries in your Linux BIND DNS server. Then, when you set up (or refresh) your Windows servers, make sure that under the advanced networking control panel you specify the domain suffix and check the box to have the server try to update its entry. Thereafter, the server will attempt to create its own entries in DNS for any services that are configured on it. In theory this is a security hole, since you are letting a server that may be compromised write arbitrary DNS records. In practice, however, we have found that it makes maintenance of AD much, much simpler.
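If it helps, here is a minimal sketch of that BIND setup; the zone name corp.example.com, the file path, and the 192.0.2.10 address are placeholders, not values from the question. The allow-update clause is what grants the domain controller the right to register its own records (plain, non-secure dynamic update shown here; BIND also supports TSIG/GSS-TSIG if you want it locked down), and the SRV record shows the general shape of the service-location entries the DCs will register for themselves.

// named.conf fragment (illustrative)
zone "corp.example.com" {
    type master;
    file "/etc/bind/db.corp.example.com";
    // let the domain controller create/update its own A and SRV records
    allow-update { 192.0.2.10; };
};

; db.corp.example.com fragment (illustrative)
dc1.corp.example.com.            IN A    192.0.2.10
; service-location record: priority 0, weight 100, port 389 (LDAP) on dc1
_ldap._tcp.corp.example.com.     IN SRV  0 100 389 dc1.corp.example.com.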
You will likely also want to set up Dynamic DNS (DDNS), which allows DHCP servers to pass client hostnames on to the DNS server to be added as forward and reverse entries. A fairly good tutorial on that is not hard to find.
Once you understand the DNS and DHCP concepts you are working with, having AD configured via Linux DNS and DHCP is not hard, and easy to maintain. On the whole, though, I wish that Microsoft hadn't shoehorned service discovery into DNS and had used an actual service protocol like SLP.
http://serverfault.com/questions/428898/how-should-i-integrate-active-directory-with-windows-clients-and-linux-clients-a/428906
Introduction
Requests for enterprise resources have expanded to encompass not only trusted organizational clients, but distant and unfamiliar ones. The success of SOA has resulted in the publication of enterprise applications as services to be discovered and processed in a dynamic fashion.
SOA specifications such as Web Services Description Language (WSDL) have defined application method structure, while mechanisms such as IBM® WebSphere® Registry and Repository or Universal Description Discovery and Integration (UDDI) have been created to advertise the locations of services and the requirements to execute them. Services that were once confined to predefined and hard-coded application programming structures are now accessed in a loosely coupled fashion.
This exposure of resources has been accompanied by an effort to ensure robust authentication. The WS-Security specification has evolved to support the use of Security Assertion Markup Language (SAML), X.509 certificates, Kerberos tickets, and Username and other forms of identity transmission within a unified metadata standard that provides standardization for authentication and authorization.
As these tools have evolved, service providers have sought to establish organizational policies and governance that are application- and platform-agnostic. The WS-Policy protocol and its supporting specifications, such as Web Service Policy Framework, Policy Attachment, WS-SecurityPolicy and WS-ReliableMessagingPolicy, seek to establish a unifying methodology that may be discovered and consumed as services are discovered. In this way, access to groups of services can be controlled en masse rather than on an individual basis.
WS-Policy describes security and quality of service requirements in a machine and human readable format that facilitates the automated construction of service requests. Design-time decisions can be made to ensure that the proper credentials are passed to the service. Runtime decisions can be made for authorization of individual requests. Importantly, WS-Policy does not actually define the assertions' implementation, but rather defines the metadata that advertise the assertion, describing for example that a service requires a Username. It is up to the Policy Decision Point (PDP) implementation that interprets the WS-Policy metadata to make the actual assertion of the existence of the Username token (UNT).
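As a rough illustration of that machine-readable format (a sketch, not taken from the product configuration described later), a policy expression is simply nested policy operators wrapped around assertions; a PDP reads the alternatives and decides which, if any, the message satisfies. The namespace prefixes are conventional and the sp: assertion shown is only a representative example.

<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
            xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <wsp:ExactlyOne>
    <!-- one acceptable alternative: the request carries a WS-Security UsernameToken -->
    <wsp:All>
      <sp:SupportingTokens>
        <sp:UsernameToken>
          <wsp:Policy>
            <sp:WssUsernameToken11/>
          </wsp:Policy>
        </sp:UsernameToken>
      </sp:SupportingTokens>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>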
This article will demonstrate implementation of WS-Policy for SOA service governance via the implementation and enforcement of a PDP configuration within the WebSphere DataPower® SOA Appliance (DataPower). DataPower will exchange identity information with an application hosted on WebSphere Application Server. By offloading policy management to DataPower, WebSphere Application Server is better able to provide application-level functionality, while DataPower provides enterprise-wide, high-performance service governance.
Introduction to LTPA token
Tokens are often used to convey identity within message traffic. Examples of tokens are SAML, X.509 certificate, and Kerberos tickets. Tokens can be managed and distributed by Secured Token Services (STS), as described by the WS-Trust specification.
Lightweight Third-Party Authentication (LTPA) tokens are used by IBM WebSphere and IBM Lotus Domino products. LTPA tokens contain an identity signed and symmetrically encrypted using a limited lifetime key that is shared with trusted entities. This procedure provides efficient confidentiality without the typical session key creation phase that establishes an ephemeral key for symmetric encryption used by asymmetric encryption technologies such as SSL/TLS. The shared key (shared secret) must be communicated between these trusted parties prior to use. The LTPA token is also used to provide single sign-on (SSO) within or across WebSphere Application Server cells.
LTPA tokens can be passed between processes and WebSphere Application Server applications via HTTP cookie headers, or within a WS-Security Binary Security Token. Its encrypted nature provides the confidentiality needed to protect the identity information within it.
There are several versions of the LTPA token. Version 1 tokens contain identity and realm information. The realm is used to associate the user registry used by WebSphere Application Server for authentication. Newer Version 2 tokens introduced in WebSphere Application Server V6 carry custom attributes, but custom programming is required to access and use it.
When a WebSphere Application Server or Domino application receives an LTPA token, it does not need to reauthenticate the user, but it may still need to access the user registry to create a complete Subject object containing information such as group associations.
The DataPower Appliance can decrypt and use LTPA identity, and create new LTPA tokens (The IBM Tivoli Access Manager WebSEAL product also provides this capability). The shared secret key must be managed between DataPower and WebSphere Application Server applications, and issues such as the lifetime of a key must be addressed. WebSphere Application Server features, such as automatic key generation, may cause decryption failures if the new key is not communicated properly, and therefore it may be better to manage key generation via a scripted process or manual intervention. Because of their performance features and support across WebSphere Application Server and DataPower, LTPA tokens are often used when DataPower is positioned in front of WebSphere Application Server servers.
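To make the Binary Security Token option concrete, a hedged sketch of an LTPA token carried in a WS-Security header is shown below. The wsse and wsu namespaces are the standard OASIS WSS 1.0 URIs; the ValueType URI for LTPA is product-specific (WebSphere documents an ibm.com token-type namespace with LTPA and LTPAv2 local names), so treat the value shown as illustrative and confirm it against the product documentation for your release.

<soapenv:Header>
  <wsse:Security soapenv:mustUnderstand="1"
      xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
      xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
    <!-- ValueType shown is illustrative; verify the exact token-type namespace for your release -->
    <wsse:BinarySecurityToken wsu:Id="ltpa20"
        EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
        ValueType="wsst:LTPAv2"
        xmlns:wsst="http://www.ibm.com/websphere/appserver/tokentype">
      base64-encoded-LTPA-token-goes-here...
    </wsse:BinarySecurityToken>
  </wsse:Security>
</soapenv:Header>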
Sample application
DataPower fulfills a valuable role within a typical application environment. By using its purpose-built, hardware-optimized cryptographic functionality, DataPower can offload processor-intensive operations, such as digital signature integrity checking, and encrypting and decrypting messages for confidentiality. Offloading this intensive processing lets the application program stack more efficiently perform the message processing it is designed for.
Clients may make requests for services that require credential information in order to authenticate and authorize application access. DataPower is designed to authenticate a variety of identity tokens, including X.509 certificates, WS-Security Username tokens, SSL certificates, and Basic Authentication headers. Authorization can be performed against repositories such as LDAP directories or via applications such as IBM Tivoli Access Manager.
The sample application combines authentication with enforcement via WS-Policy policies. Figure 1 below shows a topology in which clients must submit requests containing WS-Security Username tokens over SSL, to ensure the confidentiality of identity information. Identities are authenticated and authorized for application access. All authenticated users are assigned a group identity, which is stored in a signed LTPA token. This token is symmetrically encrypted using a shared secret that is known to DataPower and WebSphere Application Server. Symmetric encryption is orders of magnitude faster than asymmetric.
Figure 1. Sample application architecture
Once WebSphere Application Server receives the LTPA token, it decrypts it using the shared secret, extracts the identity information contained within, and uses it to authenticate the request and establish a JEE identity. This simple application merely echoes back a string contained within the request message and appends onto it the JEE identity provided by the runtime.
PDP implementations with DataPower and WebSphere Application Server are used to ensure compliance. Within DataPower, a policy validates the existence of a WS-Security Username token. Within WebSphere Application Server, a policy verifies the requirement of an LTPA token within the request message metadata. Our example begins with configuring WebSphere Application Server, followed by the DataPower implementation.
Configuring WebSphere Application Server
This section shows you how to configure a WebSphere Application Server V7 Web services provider to consume an LTPA token and set it as the container of the service requestor's identity.
Web service quality-of-service configuration in WebSphere Application Server V7 is done through policy sets and bindings. WebSphere Application Server policy terminology is slightly different from WS-Policy terminology. What WebSphere Application Server calls a policy, WS-Policy calls a policy expression -- a collection of metadata that describes a rule or requirement for a service. WebSphere Application Server also uses the term policy set, which is simply a collection of policy instances. WebSphere Application Server has a number of policies, such as WS-Security, WS-Addressing, and HTTP transport. Policy sets combine policies into single, configurable entities. For example, one of the prepackaged WebSphere Application Server policy sets, the Kerberos V5 HTTPS default policy set, consists of instances of WS-Security, WS-Addressing, and SSL transport policies.
A policy set tells you what the configuration is, but does not tell you how to achieve it. For example, it tells you that a SOAP request's body must be encrypted, but it does not tell you how to encrypt it -- it does not provide the certificate keystores. That's what the binding is for -- the binding is the entity that fills in the variable information such as keystores.
From this brief introduction, you can see that there are three entities in WebSphere Application Server that you will work with: a policy set, a binding, and the application. You attach a policy set to an application, and then assign a binding to that application. The rest of this section shows you how to create a policy set, create a binding, attach the new policy set to an application, and assign the binding to that same application.
Create the LTPA policy set
You could create a policy set from scratch, but WebSphere Application Server prepackages a number of policy sets, and one is nearly what you need: LTPA WSSecurity default. So instead of creating one, you can make a copy and modify that copy. The LTPA WSSecurity default policy set deals with an LTPA token, which is what you want. But it also signs and encrypts the message, which you do not need. Instead of signing the message, you will let SSL do the encryption. Figure 2 below shows the section of the Administrative Console from which you start the copy:
- Navigate to Services => Policy sets => Application policy sets.
- Select LTPA WSSecurity default and click Copy.
- In the panel that follows, choose a name for your new policy set, such as LTPA over SSL.
- Fill in a description of your choice, and then click OK:
Figure 2. Copy the prepackaged LTPA policy set
After you have created your copy, you will see it in the list of policy sets, as shown in Figure 3. The prepackaged policy sets are not editable, but your new LTPA over SSL policy set is editable. Click on LTPA over SSL to edit it. The window in Figure 4 opens.
Figure 3. Edit the copy
Figure 4 shows that your policy set is made up of two policy instances: WS-Addressing and WS-Security. You need to add SSL to the configuration:
- Click the Add pull-down menu.
- Select SSL transport from the list of available policies.
You can examine your new SSL policy instance by clicking on it. The default values are sufficient. That's all you have to do to enable SSL.
Figure 4. Edit the policy set
Now you must edit the WS-Security policy instance. From the window in Figure 4, navigate to WS-Security => Main policy:
Figure 5. Remove message protection
What you must do here is turn off message-level protection by unchecking that box. You are not doing WS-Security message protection or signing at all -- instead you are relying on SSL for encryption. That is all you have to change in your version of the LTPA policy set.
Create the binding
WebSphere Application Server comes with prepackaged, default bindings. The default binding works as-is for the LTPA over SSL policy set: the LTPA token is validated and consumed by the SOAP engine, and the call to the Web service provider is successful. However, we want to go one step further. With the default binding, the identity is consumed by the SOAP engine -- it is not passed to the Web service implementation. We want the caller's identity -- the identity contained within the LTPA token -- to be passed to the implementation. So we need a bit more than is available in the default binding.
We could build a binding from scratch, but that would mean building a lot of the configuration that already exists within the default binding, so we will make a copy and edit that copy. To make a copy of the provider-side binding:
- Navigate to Services => Policy sets => General provider policy set bindings.
- Make a copy of Provider sample the same way that you copied the default LTPA policy set. Name the copy Demo provider sample.
- At this point, as an optional step if you want to save a little time in the future, you can set the Demo provider sample to be the default provider binding. Navigate to Services => Policy sets => Default policy set bindings, and set Demo provider sample to be the default.
- To make our change, step into the new binding's WS-Security policy instance. You will first see the left-hand panel:
Figure 6. Create the caller
- The caller defines which token, if any, is used as the caller ID in the implementation. (A message may carry zero or more tokens -- in our case it will carry one.) Step to the next panel by clicking on Caller. By default, you can see that there is no caller identity. To add one, click New:
Figure 7. Configure the caller
- In this window, give the caller configuration any name you wish -- the name does not really matter. However, the values in the next two fields matter a great deal -- be careful of typos. These two fields define the fully qualified name of the token from the message to use as the caller identity, and these values must exactly match what will be in the SOAP message. Click Apply.
You might wonder where, exactly, you can find the proper values for the local part and namespace fields. For this article, we have provided them. But what are the values if you want to use other token types? If you thoroughly memorized the various WS-Security specs and the information center pages on these topics, you would simply know what they are. But most of us can't hold that much information in our heads, so there is a place in the admin console where the wizard fills in these values in for us. You can use that page as a reference:
- Navigate to any binding that has a WS-Security policy.
- Navigate from that binding to WS-Security => Authentication and protection => New token (under Authentication tokens) => Token generator.
- Select the desired token type.
- The wizard fills in the "Local part" and "Namespace URI" fields, as shown in Figure 8 below. Those fields correspond to the "Caller identity local part" and "Caller identity namespace URI" fields, respectively, on the panel in Figure 7 above.
Figure 8. Find the fully qualified name of a given token type
Attach the policy set
In the previous sections, you created a policy set and a binding. Now you must configure an application with that policy set and binding. For this example, use a variant of the EchoService sample, which is part of the JAX-WS samples that come with WebSphere Application Server. (If you chose not to install samples when you installed WebSphere Application Server, then you can use the version of this sample that comes with this article.)
- Navigate to Services => Service providers => EchoService, as shown in Figure 9 below.
- Select EchoService.
- Click on Attach Policy Set and select your newly created policy set from the pull-down menu:
Figure 9. Attach our LTPA policy set to the provider application
When you complete those steps, the LTPA over SSL policy set is shown to be the attached policy set, and the binding will be the default binding.
Assign the binding
If you set the Demo provider sample binding to be your default binding, then you don't have to do anything here; the default binding is automatically assigned. If you have not set it as the default, then:
- Select the service on the window shown in Figure 9 above.
- Click Assign Binding to pull down the menu of bindings.
- Select Demo provider sample.
The application used in this article
As mentioned, the application used in this article is a variant of the EchoService application that's part of the samples that come with WebSphere Application Server. You are free to use the EchoService sample that comes with WebSphere Application Server, but with that version of the application, you have no ready proof that the identity in the LTPA token got to the service. For the purposes of demonstration, we modified the EchoService application so that it echoes not only the input string, but also the caller identity, as shown in the example result in Listing 6 in the DataPower section below, thus proving that the identity really does travel from DataPower to the WebSphere Application Server application. If you would like to use the modified sample, you can download it at the bottom of the article.
Integrating WebSphere Application Server and DataPower -- the LTPA key file
Export the LTPA key file from WebSphere Application Server
Up to this point you have configured a WebSphere Application Server application to accept LTPA token request messages. The sender, in our case DataPower, must know something about the WebSphere Application Server LTPA configuration. This information is stored in an LTPA key file, so you must export the LTPA key file so that DataPower can later import it. To export this file:
- Under Authentication, select Security => Global security => LTPA.
- Under Cross-cell single sign-on as shown in Figure 10 below, enter a password of your choice. You must remember this password when you import the key file to DataPower.
- Enter a key file name and click Export keys:
Figure 10. Export the LTPA key file from WebSphere Application Server
There is one thing you must be aware of if you export LTPA keys. WebSphere Application Server automatically regenerates LTPA keys over time. When you export LTPA keys, at some point the keys in the exported file will no longer work. To keep them working, you can disable the automatic generation of LTPA keys. When you do this, you must now manage the regeneration of keys on your own rather than relying on WebSphere Application Server to do it for you.
To disable automatic regeneration of keys, in Figure 10, click Key set groups under "Key generation" at the top of the panel. Then click on your key set group -- in this example NodeLTPAKeySetGroup. In the panel shown in Figure 11, clear the check box Automatically generate keys under "Key generation."
Figure 11. Disable automatic LTPA key generation
Import the LTPA key file into DataPower
Load the LTPA key file into the DataPower Appliance's cert:// directory. You will see in the DataPower configuration discussion how the device's Authentication, Authorization, and Auditing (AAA) policy creates an LTPA token using this LTPA key.
DataPower configuration
DataPower is designed to act as a policy enforcement point (PEP) for WS-Policy, which is supported within the Web Service Proxy (WSP) service object. In DataPower firmware release 3.7.3 or later, the following WS-Policy specifications are supported:
- WS-Policy 1.2 and 1.5
- Web Service Policy Framework and Policy Attachment 1.5
- WS-SecurityPolicy 1.2 and 1.5
- WS-ReliableMessagingPolicy 1.2
WS-Policy Attachment provides support for WS-Policy authored within the WSDL document. In addition, policies may be attached to various WSDL subjects -- message, service, binding, or operation -- using the WS-Proxy GUI. Support for standard WS-Policy domains such as WS-Security is implemented via templates supplied in the DataPower Appliance's store://policies//templates directory. There you will find predefined templates for standard policy actions, such as enforcing digital signatures, encryption, and the presence of a Username token, and requiring secured protocols, among others.
In the DataPower configuration below, you will configure a WSP to support the requirements of the sample application and its interaction with the WS-Policy restrictions imposed by the WebSphere Application Server application. These are:
- Use WS-Policy to require Username token on client request
- Require secured transport on client request
- Convert Username token to WS-Security header with LTPA token
Basic configuration of the WSP is assumed, so the emphasis here is on the primary WS-Policy requirements, and some basic unrelated steps are not shown. For more information about basic WSP configuration, see WebSphere DataPower SOA Appliances documentation, or the WebSphere DataPower SOA Appliance Handbook.
Incoming request message configuration
The first step in configuring the WSP is creating the WSP and assigning the WSDL. The service on WebSphere Application Server exposes its WSDL at a URL; the WSDL is obtained from there and loaded onto the device's local:// directory via the File Management WebGUI. Figure 12 shows the assignment of the WSDL to the newly created WSP UNT-LTPA-Proxy:
Figure 12. WSDL Assignment to WSP
DataPower supports alternative methods of WSDL assignment. You can use Universal Description Discovery and Integration (UDDI) or WebSphere Service Registry and Repository to fetch the WSDL. WebSphere Service Registry and Repository can be used to establish polling intervals to fetch changes to WSDL. For example, if the location of the remote service (the WebSphere Application Server application) changes, the WSP could be configured to automatically use the new URL. For more information about UDDI or WebSphere Service Registry and Repository, see WebSphere DataPower SOA Appliances documentation.
The WSDL contains a single service that exposes a single port for message traffic. This port is exposed over HTTPS and is the endpoint where DataPower will forward validated requests to the EchoService. Listing 1 shows the service section from the WSDL:
Listing 1. WSDL service port assignments
<wsdl:service ...>
  <wsdl:port ...>
    <soap:address ... />
  </wsdl:port>
</wsdl:service>
DataPower can expose multiple front-side protocol ports for receiving client traffic. In this case an HTTPS port is provided. By default, the remote address contains the endpoint information from the WSDL port. This can be changed if required, and can also be dynamically assigned if, for example, the endpoint is different for different classes of request, such as a high-speed service for "gold" clients, and a lower quality service provider for normal traffic. Dynamic assignment is performed via the DataPower policy Route action or within XSLT in a Transformation action. Figure 13 shows the assignment of local and remote endpoints:
Figure 13. Local and remote endpoint assignments
The local endpoint handler is a Front Side Protocol Handler (FSH) named untLTPA_HTTPS_FSH -- a basic HTTPS FSH that accepts traffic over port 8082. You have now defined a simple WSP that accepts HTTPS traffic and forwards it to the remote endpoint taken from the WSDL.
The next configuration step is to use WS-Policy to ensure that client requests contain the Username token required by our sample application. All WS-Policy assignments are made from the WSP Policy tab. Policy can be attached at each level of the WSDL -- service, proxy, operation -- and to the WSDL itself. When attached, it will be enforced at all subsequent levels; for example, policy attached at the service level is also enforced at the operation level. You need to ensure that all traffic for all services contains the UNT, which you do via the WS-Policy button:
Figure 14. WSP proxy tab
The requirement is to ensure that the UNT is provided on all requests from the client. As mentioned, DataPower supports WS-SecurityPolicy 1.2, which contains Username Token assertion policies that may be used to assert the existence of this token. DataPower, being a PDP (policy decision point), provides the implementation for this and other token assertion policies.
DataPower contains a sample template in the store://policies//templates directory named wsp-sp-1-1-usernametoken.xml that is close to our requirement.
The WS-SecurityPolicy 1.2 specification allows many variations of the UsernameToken assertion. For example, different values of the sp:IncludeToken attribute change whether the UNT is required on request and/or response messages. Not every variation is covered by a pre-existing template on DataPower, so in some cases you may need to modify the supplied templates. There is a template to assert the existence of the UNT on both request and response messages; there is no pre-existing template for asserting the UNT on the request message only, but you can create one.
Listing 2 shows a section of this policy. This policy is looking for a UNT in the 1.1 format. The template also contains a policy that looks for the 1.0 format UNT.
Listing 2. wsp-sp-1-1-usernametoken.xml snippet
<wsp:All>
  <sp:SupportingTokens>
    <sp:UsernameToken sp:IncludeToken="...">
      <wsp:Policy>
        <sp:WssUsernameToken11/>
      </wsp:Policy>
    </sp:UsernameToken>
  </sp:SupportingTokens>
</wsp:All>
Section 5.1.1, Token Inclusion Values, of the WS-SecurityPolicy 1.2 specification gives the following description (Table 1) of token expectations as determined by the sp:IncludeToken attribute:
Table 1. WS-SecurityPolicy sp:IncludeToken values (the keywords abbreviate the full IncludeToken URIs)
- Never: The token MUST NOT be included in any messages sent between the initiator and the recipient; rather, an external reference to the token should be used.
- Once: The token MUST be included in only one message sent from the initiator to the recipient. References to the token MAY use an internal reference mechanism. Subsequent related messages sent between the recipient and the initiator may refer to the token using an external reference mechanism.
- AlwaysToRecipient: The token MUST be included in all messages sent from the initiator to the recipient. The token MUST NOT be included in messages sent from the recipient to the initiator.
- AlwaysToInitiator: The token MUST be included in all messages sent from the recipient to the initiator. The token MUST NOT be included in messages sent from the initiator to the recipient.
- Always: The token MUST be included in all messages sent between the initiator and the recipient. This is the default behavior.
The Always value, when assigned to the sp:IncludeToken attribute, requires a UNT on both the request and the response to the client. The requirement here is to require the UNT only on the request. Therefore you can change the sp:IncludeToken attribute to AlwaysToRecipient, which requires the UNT only on the request message and not on the response.
Rather than modify the policy in store://policies//templates (which you don't have permission to do), copy it to local:// and rename it to wsp-sp-1-1-usernametoken_AlwaysToRecipient.xml, making the aforementioned AlwaysToRecipient change.
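For reference, the resulting change in the copied template is small; a sketch is shown below. The IncludeToken URI prefix shown is the WS-SecurityPolicy 1.1 one, which matches the sp-1-1 template name -- verify the prefix against the sp namespace actually declared in your template before using it.

<wsp:All>
  <sp:SupportingTokens>
    <!-- require the UNT on the request only -->
    <sp:UsernameToken
        sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient">
      <wsp:Policy>
        <sp:WssUsernameToken11/>
      </wsp:Policy>
    </sp:UsernameToken>
  </sp:SupportingTokens>
</wsp:All>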
Now that you've created a customized policy, it needs to be attached to the service level of the WSDL. Figure 15 shows the pop-up window that results from clicking the WS-Policy button. The Sources tab is used to assign the policy. Navigating to the local:// directory lets you select the newly created wsp-sp-1-1-usernametoken_AlwaysToRecipient.xml policy.
Some policy documents contain multiple policies, differentiated by a unique wsu:Id attribute. When encountering such a document, designate which one to use by selecting the appropriate wsu:Id. In this example there is only one. Click Attach Source and then Apply for the WSP to complete the assignment:
Figure 15. Attaching WS-Policy to WSDL
The UNT policy does not require any additional information, but some policies require parameters for completion. For example, a policy that enforces a digital signature requires the name of a DataPower Crypto Certificate to verify the signature. The Processing tab allows the assignment of these properties. Additionally, the Enabled Subject tab lets you fine tune the WSDL elements and message phases to which the policy is enforced. For example, you may wish to only perform policy enforcement on the request phase of a message, and not on the response.
If you submit a simple request to this service via DataPower without the UNT, a violation occurs. Listing 3 shows an EchoRequest without the UNT:
Listing 3. Sample EchoRequest without UNT

<S11:Body>
  <tns:echoStringInput>
    <echoInput>Are you talkin' to me?</echoInput>
  </tns:echoStringInput>
</S11:Body>
</S11:Envelope>
Using cURL, you can submit this request and see the results. Listing 4 shows the response from DataPower:
Listing 4. Error message return without UNT
curl -k -d @echoRequestNoUNT.xml -H "Content-type: text/xml" -H "SOAPAction: echoOperation" --key WSTC-privkey.pem --cert WSTC-sscert.pem

<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:env="...">
  <env:Body>
    <env:Fault>
      <faultcode>env:Client</faultcode>
      <faultstring>Required elements filter setting reject: expression /*[local-name()='Envelope' and (namespace-uri()='' or namespace-uri()='')]/*[local-name()='Header' and (namespace-uri()='' or namespace-uri()='')]//*[local-name()='UsernameToken' and namespace-uri()='http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd'] was not satisfied (from client)</faultstring>
    </env:Fault>
  </env:Body>
</env:Envelope>
Adding a UNT to the request, as seen in Listing 5, should fulfill the requirements of the newly added policy. The UNT has been added to the wsse:Security element in the SOAP header:
Listing 5. EchoRequest with UNT

<S11:Header>
  <wsse:Security>
    <wsse:UsernameToken wsu:Id="...">
      <wsse:Username>fred</wsse:Username>
      <wsse:Password>flintstone</wsse:Password>
    </wsse:UsernameToken>
  </wsse:Security>
</S11:Header>
<S11:Body>
  <tns:echoStringInput>
    <echoInput>Are you talkin' to me?</echoInput>
  </tns:echoStringInput>
</S11:Body>
</S11:Envelope>
Outgoing request message configuration
Before sending a request to the WebSphere Application Server application, there is a little more work to do. WebSphere Application Server is enforcing the identity of the requestor in the LTPA token. In addition, while you may let several clients make requests, WebSphere Application Server is configured for only a single user for this application, so you must map validated requests to a distinguished name accepted by WebSphere Application Server via the AAA action of the processing policy. Figure 16 shows the default request rule with the addition of an AAA policy:
Figure 16. Proxy policy with AAA action
The AAA policy accepts the client's credentials from the UNT as shown in Figure 17, which demonstrates the identity extraction phase of the AAA action:
Figure 17. AAA action identity extraction
While a real-world application would use a mechanism such as LDAP to authenticate the UNT credentials, this example uses an AAA Info XML file stored on the device. Figure 18 shows the authentication phase:
Figure 18. AAA action authentication
The AAA Info file provides a convenient way to assign the new credential for WebSphere Application Server. You can use alternative methods, such as Tivoli Federated Identity Manager, Secured Conversation, or various custom methods, such as xPath inquiry into the request document or customer XSLT. Figure 19 shows the mapping from the AAA Info file:
Figure 19. AAA info credential mapping
The final step in AAA processing is converting the UNT to the LTPA token. In this post-processing phase, Generate LTPA Token has been selected. The default option is to store the LTPA Token in an HTTP Cookie header. Optionally, the token can be stored in the WS-Security header element, and that option has been chosen here. DataPower provides a dropdown box (LTPA Token Version), which is used to designate the LTPA token format. The goals are to select the token version (V1/V2), and whether to use an HTTP Cookie or a Binary Security Token (BST) to contain the token. In addition there are two forms of BST (typically used with Web service traffic) available with different namespace declarations.
The LTPA Token Version options of Domino, WebSphere Version 1, and WebSphere Version 2 are self-evident. WebSphere Version 1 FIPS provides enhanced security compliant with the Federal Information Processing Standard (FIPS) for the Version 1 token (the Version 2 token is inherently FIPS compliant), and WebSphere V7 Version 2 is used to create a BST with the wwst:LTPAv2 namespace. These values were undergoing review when this article was published, so you should check the DataPower product guides for the latest information on your firmware revision. Figure 20 shows the LTPA selection:
Figure 20. AAA post-processing for LTPA token creation
Referring back to the LTPA introduction, there are multiple versions of the LTPA Token. We've selected the WebSphere V7.0 Version 2 format. In addition, the LTPA Key File as obtained from WebSphere Application Server (see discussion under WebSphere Application Server configuration) is loaded onto the device's cert:// directory and the password is entered. LTPA User Attributes could have been entered. These name-value pairs require application programming in the WebSphere Application Server application for consumption and interpretation.
Completing the AAA action finalizes the configuration of the WSP policy. Requests submitted to the application through the DataPower appliance are now validated by the WSP and the WS-Policy. Username tokens are converted to LTPA tokens by the AAA policy and submitted to the WebSphere Application Server application. Listing 6 shows an example of a request and response using cURL. The application responds with the echoed string and the username (wsadmin) as extracted from the LTPA token.
Listing 6. Submission of request through DataPower
curl -k -d @echoRequest.xml -H "Content-type: text/xml" -H "SOAPAction: echoOperation" --key WSTC-privkey.pem --cert WSTC-sscert.pem

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="...">
  <soapenv:Body>
    <ns2:echoStringResponse xmlns:ns2="...">
      <echoResponse>JAX-WS==>> Are you talkin' to me? (user: wsadmin)</echoResponse>
    </ns2:echoStringResponse>
  </soapenv:Body>
</soapenv:Envelope>
Conclusion
WS-Policy has been designed to provide enterprise-wide SOA governance. This article demonstrated its use and implementation within the WebSphere DataPower SOA Appliance for the enforcement of Username Tokens and within WebSphere Application Server for the enforcement of LTPA tokens. LTPA tokens are an effective and efficient method of single sign-on, by providing identity information that does not need to be re-authenticated.
Resources
- IBM Redbook: Web Services Feature Pack for WebSphere Application Server V6.1
Good description of the use of policy sets and bindings in WebSphere Application Server.
- Web Services Policy 1.5 -- Primer
Introductory description of the Web Services Policy language, using numerous examples.
- Web Services Policy 1.5 -- Framework
Definition of two general-purpose mechanisms for associating policies, as defined in Web Services Policy 1.5 -- Framework, with the subjects to which they apply. This specification also defines how to use these general-purpose mechanisms to associate policies with WSDL and UDDI descriptions.
- WS-Security Policy 1.2
Description of version 200512 of the WS-SecurityPolicy namespace, plus links to related resources using the Resource Directory Description Language (RDDL) 2.0.
- IBM Redpaper: WebSphere DataPower SOA Appliances: The XML Management Interface
This IBM Redpaper describes the XML Management Interface, which is the third way to configure and administer the WebSphere DataPower SOA Appliance, in addition to the WebGUI and the CLI.
- WebSphere Application Server developer resources page
Technical resources to help you use WebSphere Application Server.
- WebSphere Application Server product page
Product descriptions, product news, training information, support information, and more.
- WebSphere Application Server information center
A single Web portal to all WebSphere Application Server documentation, with conceptual, task, and reference information on installing, configuring, and using WebSphere Application Server.
http://www.ibm.com/developerworks/websphere/library/techarticles/0911_rasmussen/0911_rasmussen.html
jerenkrantz 2003/12/26 23:41:28
Modified: . Tag: APACHE_2_0_BRANCH STATUS
Log:
Reflect merged backports (those that I casted at least the 3rd +1 for), and
cast some votes on those with less than 3 +1s.
(One or two of them I voted for are now at 3 +1s, but for some stated reason
or another, I'm not backporting them.)
Revision Changes Path
No revision
No revision
1.751.2.609 +28 -53 httpd-2.0/STATUS
Index: STATUS
===================================================================
RCS file: /home/cvs/httpd-2.0/STATUS,v
retrieving revision 1.751.2.608
retrieving revision 1.751.2.609
diff -u -u -r1.751.2.608 -r1.751.2.609
--- STATUS 23 Dec 2003 14:59:25 -0000 1.751.2.608
+++ STATUS 27 Dec 2003 07:41:28 -0000 1.751.2.609
@@ -88,31 +88,6 @@
+1: rederpj, trawick (requires minor MMN bump I believe), stoddard
- * mod_dav: Return a WWW-Auth header for MOVE/COPY requests where
- the destination resource gives a 401.
-
- PR: 15571
- +1: jorton, trawick
-
- * mod_dav_fs: Fix for dropping custom namespaces
-
- PR: 11637
- +1: jorton, trawick
-
- * prefork: Fix slow graceful restarts on some platforms.
-
- +1: jorton, trawick
-
- * parsed_uri.port is only valid iff parsed_uri.port_str != NULL.
- Old code simply checked if it was non-zero, not if it
- was *valid*
-
- +1: bnicholes, jim
-
- * Better descriptions for the configure --help output for some modules
-
- +1: kess, trawick, nd, erikabele
-
* Replace some of the mutex locking in the worker MPM with
atomic operations for higher concurrency.
server/mpm/worker/fdqueue.c 1.24, 1.25
@@ -192,12 +167,13 @@
No updates are available at present for the BeOS or OS/2 MPMs,
but that is not a showstopper for the other changes.
+ include/ap_mpm.h r1.36
server/mpm/prefork/prefork.c r1.284
- server/mpm/worker/worker.c r1.142
+ server/mpm/worker/worker.c r1.142,r1.143
server/mpm/experimental/leader/leader.c r1.35
server/mpm/experimental/threadpool/threadpool.c r1.23
server/mpm/netware/mpm_netware.c r1.78
- +1: trawick, stoddard
+ +1: trawick, stoddard, jerenkrantz
server/mpm/winnt/mpm_winnt.c r1.303
server/mpm/winnt/mpm_winnt.h r1.44
@@ -209,7 +185,8 @@
modules/generators/mod_cgid.c r1.152, r1.161
server/mpm_common.c r1.111
PREREQ: ap_mpm_query(mpm-state) support in Unix MPMs
- +1: trawick, stoddard
+ jerenkrantz asks: What does mpm_common.c r1.111 have to do with this?
+ +1: trawick, stoddard, jerenkrantz
* piped log programs respawning after Apache is stopped. PR 24805
bogus "piped log program '(null)' failed" messages during
@@ -220,7 +197,7 @@
(If an MPM is used that doesn't support the query, a complaint
will be written to the error log when a piped log program
terminates, and it won't get restarted.)
- +1: trawick
+ +1: trawick, jerenkrantz
* ab: catch out of memory (reasoning report ID 29)
support/ab.c: r1.125
@@ -239,6 +216,7 @@
around... :)
Yes, I think, a useful error message is better than
a coredump in this case.
+ jerenkrantz: Oh, bah. Let 'em segfault. Use flood!
* mod_rewrite: more or less cosmetic fix. If a .htaccess in DocumentRoot
configures:
@@ -249,25 +227,25 @@
which is responsible for this behaviour. I'd suggest to backport that
function rewrite at all (it's even much better code :-). (2.0 + 1.3)
modules/mappers/mod_rewrite.c: r1.162
- +1: nd, trawick
+ jerenkrantz: Committed to 2.0. Will leave backport to 1.3 to someone
+ else. BTW, clever use of NULL terms (or lack of). ;-)
+ +1: nd, trawick, jerenkrantz
* mod_rewrite: cause a lookup failure in external rewrite maps if
the key contains a newline. PR 14453. (2.0 + 1.3)
modules/mappers/mod_rewrite.c: r1.199
- +1: nd, trawick
-
- * mod_dav: Use bucket brigades when reading PUT data. This avoids
- problems if the data stream is modified by an input filter. PR 22104.
- modules/dav/main/mod_dav.c: r1.98, r1.99
- +1: nd, trawick
- (gstein likes the concept, but needs to review...)
+ jerenkrantz: Okay by me, but perhaps we should just escape the \n?
+ Wouldn't ignoring the rewrite here do something bad tho?
+ (Am not backporting.)
+ +1: nd, trawick, jerenkrantz
* mod_ssl: fix a link failure when the openssl-engine libraries are
present but the engine headers are missing.
modules/ssl/mod_ssl.c: r1.87
modules/ssl/mod_ssl.h: r1.139
modules/ssl/ssl_engine_config.c: r1.82
- +1: jwoolley, trawick, jim
+ PREREQ: Blow away of SSL_EXPERIMENTAL_ENGINE (see above)
+ +1: jwoolley, trawick, jim, jerenkrantz
* Catch an edge case, where strange subsequent RewriteRules could lead to
a 400 (Bad Request) response. (2.0 + 1.3)
@@ -285,7 +263,7 @@
internal meaning I do not think that the change requires a
bump.
+1: nd, brianp
- +1 (concept): trawick
+ +1 (concept): trawick,
@@ -308,14 +286,14 @@
* mod_ssl: fix check for the plain-HTTP-request error
modules/ssl/ssl_engine_io.c r1.111
- +1: jorton
+ +1: jorton, jerenkrantz
* When UseCanonicalName is set to OFF, allow ap_get_server_port to
check r->connection->local_addr->port before defaulting to
server->port or ap_default_port()
server/core.c r1.247
+1: bnicholes, jim
- 0: nd
+ 0: nd, jerenkrantz
nd: can the local_addr->port ever be 0?
bnicholes response: I couldn't tell you for sure if local_addr->port
could be 0. But it makes sense that if it were then Apache
@@ -326,14 +304,15 @@
* mod_setenvif: Fix optimizer to treat regexps as such even if they
only contain anchors like \b. PR 24219.
modules/metadata/mod_setenvif.c: r1.44
- +1: nd
+ +1: nd, jerenkrantz
* mod_autoindex: Allow switching between XHTML and HTML output.
PR 23747. (minor MMN bump).
modules/generators/mod_autoindex.c: r1.124
include/httpd.h: r1.201
include/ap_mmn.h: r1.60
- +1: nd
+ jerenkrantz suggests: Why don't we just always output XHTML instead?
+ +1: nd, jerenkrantz
* LDAP cache fixes from Matthieu Estrade; see PR 18756
include/util_ldap.h r1.12
@@ -346,11 +325,6 @@
bnicholes (looks good on Netware but then NetWare does not
use shared memory.)
- * Fix htdbm to generate comment fields in the DBM files correctly.
- (This causes DBMAuthGroupFile to break)
- support/htdbm.c r1.11
- +1: jerenkrantz, striker, trawick, ianh
-
* Fix mod_mem_cache removal ordering bug.
modules/experimental/mod_mem_cache.c r1.97,1.99
modules/experimental/cache_cache.c r1.5
@@ -359,14 +333,15 @@
* Fix a long delay with CGI requests and keepalive connections on
AIX.
modules/generators/mod_cgid.c r1.159
- +1: trawick, stoddard
+ jerenkrantz: Could we do this on other platforms, too?
+ +1: trawick, stoddard, jerenkrantz
* mod_status hook
configure.in r1.254
Makefile.in r1.134
- modules/generators/mod_status.c r1.173
- modules/generators/mod_status.h r1.2
- +1: trawick, ianh
+ modules/generators/mod_status.c r1.73
+ modules/generators/mod_status.h r1.1,r1.2
+ +1: trawick, ianh, jerenkrantz
CURRENT RELEASE NOTES:
http://mail-archives.apache.org/mod_mbox/httpd-cvs/200312.mbox/%3C20031227074129.25183.qmail@minotaur.apache.org%3E
> > An example of when an app might need to deal with such details
> > is when an app traces the error code. On a platform like
> > Windows or OS/2, it is probably meaningful to trace the
> > ap_status_t value unless it is an OS-specific error. In that
> > case, the application will likely want to trace the OS-specific
> > error value (and call it that) so that the user knows which
> > documentation to refer to for why that error might have
> > occurred.
> >
> > In other words, a message like
> >
> > "system error 4011 opening config file: permission denied"
> >
> > instead of
> >
> > "error 14011 opening config file: permission denied"
>
> Why would a user EVER see this? Apache does not currently print out the
> error number. If you are writing another program that does print out the
> error number, then fix this one of two ways. Implement the macro you have
> asked for, or modify ap_strerror to print out the error number.
Please look at "(13)" in the log message below.
[Mon Apr 10 13:56:20 2000] [crit] (13)Permission denied: make_sock:
could not bind to address 0.0.0.0 port 80
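For illustration, a minimal C sketch of where that "(13)" comes from (assuming a typical Unix platform where EACCES is 13):
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The "(13)Permission denied" prefix in the log line above is simply
       the errno value followed by its strerror() text. */
    printf("(%d)%s: make_sock: could not bind to address 0.0.0.0 port 80\n",
           EACCES, strerror(EACCES));
    return 0;
}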
> > Your justification of forcing the app to do this seems to be based
> > on your statement that "it takes time to convert error codes to a
> > common code".
...
> The job of APR is to take portability off the programer's shoulders. I
> guess the best justification for doing it this way is that we started
> doing it the other way (returning a common error code), and we lose
> information. It isn't always possible to go through this transformation:
>
> platform err-code -> common error code -> platform error
> code.
I think your statement here is a very powerful argument. My point was
that the only argument listed in APRDesign wasn't very powerful. I
would suggest that you copy this text into APRDesign at a place you
find appropriate.
> > B. trivialities
> >
> > I have a small number of extremely minor editorial changes to your
> > text which, unless you prefer they be sent to you directly or posted
> > here, I will update in your text myself in a few days if nobody beats
> > me to it. (I can assure you that none of it is related to my
> > sometimes-ignorant use of commas :) )
>
> Please send them to the list.
(I just double checked "dependant" and found that it is a legal
variant of dependent, so that change is relatively asinine.)
Index: APRDesign
===================================================================
RCS file: /cvs/apache/apache-2.0/src/lib/apr/APRDesign,v
retrieving revision 1.8
diff -u -r1.8 APRDesign
--- APRDesign 2000/04/07 14:16:28 1.8
+++ APRDesign 2000/04/10 18:16:03
@@ -183,30 +183,30 @@
APR Error reporting
Most APR functions should return an ap_status_t type. The only time an
-APR function does not return an ap_status_t is if it absolutly CAN NOT
+APR function does not return an ap_status_t is if it absolutely CAN NOT
fail. Examples of this would be filling out an array when you know you are
not beyond the array's range. If it cannot fail on your platform, but it
could conceivably fail on another platform, it should return an ap_status_t.
Unless you are sure, return an ap_status_t. :-)
-All platform return errno values unchanged. Each platform can also have
+All platforms return errno values unchanged. Each platform can also have
one system error type, which can be returned after an offset is added.
-There are five types of error values in APR, each with it's own offset.
+There are five types of error values in APR, each with its own offset.
Name Purpose
0) This is 0 for all platforms and isn't really defined
anywhere, but it is the offset for errno values.
(This has no name because it isn't actually defined,
- but completeness we are discussing it here).
-1) APR_OS_START_ERROR This is platform dependant, and is the offset at which
+ but for completeness we are discussing it here).
+1) APR_OS_START_ERROR This is platform dependent, and is the offset at which
APR errors start to be defined. (Canonical error
values are also defined in this section. [Canonical
error values are discussed later]).
-2) APR_OS_START_STATUS This is platform dependant, and is the offset at which
+2) APR_OS_START_STATUS This is platform dependent, and is the offset at which
APR status values start.
-4) APR_OS_START_USEERR This is platform dependant, and is the offset at which
+4) APR_OS_START_USEERR This is platform dependent, and is the offset at which
APR apps can begin to add their own error codes.
-3) APR_OS_START_SYSERR This is platform dependant, and is the offset at which
+3) APR_OS_START_SYSERR This is platform dependent, and is the offset at which
system error values begin.
All of these definitions can be found in apr_errno.h for all platforms. When
@@ -224,13 +224,13 @@
if (CreateFile(fname, oflags, sharemod, NULL,
createflags, attributes,0) == INVALID_HANDLE_VALUE
- return (GetLAstError() + APR_OS_START_SYSERR);
+ return (GetLastError() + APR_OS_START_SYSERR);
These two examples implement the same function for two different platforms.
Obviously even if the underlying problem is the same on both platforms, this
will result in two different error codes being returned. This is OKAY, and
is correct for APR. APR relies on the fact that most of the time an error
-occurs, the program logs the error and continues, it does not try to
+occurs, the program logs the error and continues and does not try to
programatically solve the problem. This does not mean we have not provided
support for programmatically solving the problem, it just isn't the default
case. We'll get to how this problem is solved in a little while.
@@ -241,14 +241,14 @@
and are self explanatory.
No APR code should ever return a code between APR_OS_START_USEERR and
-APR_OS_START_SYSERR, those codes are reserved for APR applications.
+APR_OS_START_SYSERR; those codes are reserved for APR applications.
To programmatically correct an error in a running application, the error codes
need to be consistent across platforms. This should make sense. To get
consistent error codes, APR provides a function ap_canonicalize_error().
This function will take as input any ap_status_t value, and return a small
subset of canonical APR error codes. These codes will be equivalent to
-Unix errno's. Why is it a small subset? Because we don't want to try to
+Unix errnos. Why is it a small subset? Because we don't want to try to
convert everything in the first pass. As more programs require that more
error codes are converted, they will be added to this function.
@@ -288,7 +288,7 @@
make syscall that fails
convert to common error code
return common error code
- decide execution basd on common error code
+ decide execution based on common error code
Using option 2:
@@ -306,9 +306,9 @@
char *ap_strerror(ap_status_t err)
{
if (err < APR_OS_START_ERRNO2)
- return (platform dependant error string generator)
+ return (platform dependent error string generator)
if (err < APR_OS_START_ERROR)
- return (platform dependant error string generator for
+ return (platform dependent error string generator for
supplemental error values)
if (err < APR_OS_SYSERR)
return (APR generated error or status string)
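The pseudo-code above dispatches on the offset ranges described earlier. A minimal, self-contained C sketch of an application-side tracing helper built on the same idea (the numeric offsets, the simplified ap_status_t typedef, and the trace_status helper are illustrative assumptions, not APR's actual definitions):
#include <stdio.h>

/* Offsets as described in APRDesign; the numeric values here are
 * illustrative only -- each platform defines its own. */
#define APR_OS_START_ERROR   20000
#define APR_OS_START_STATUS  (APR_OS_START_ERROR  + 500)
#define APR_OS_START_USEERR  (APR_OS_START_STATUS + 500)
#define APR_OS_START_SYSERR  (APR_OS_START_USEERR + 500)

typedef int ap_status_t;   /* simplified stand-in for the real typedef */

/* Report an error the way the thread discusses: call out OS-specific codes
 * so the user knows which documentation to consult. */
static void trace_status(ap_status_t err)
{
    if (err < APR_OS_START_ERROR)
        printf("(%d) errno-style error\n", err);
    else if (err >= APR_OS_START_SYSERR)
        printf("system error %d\n", err - APR_OS_START_SYSERR);
    else if (err >= APR_OS_START_USEERR)
        printf("application-defined error %d\n", err);
    else
        printf("APR error/status %d\n", err);
}

int main(void)
{
    trace_status(13);                        /* e.g. EACCES: "(13)Permission denied" */
    trace_status(APR_OS_START_SYSERR + 5);   /* an OS-specific code, e.g. a GetLastError() value */
    return 0;
}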
--
Jeff Trawick | trawick@ibm.net | PGP public key at web site:
Born in Roswell... married an alien...
http://mail-archives.apache.org/mod_mbox/httpd-dev/200004.mbox/%3C200004101818.OAA16443@k5.localdomain%3E
It COULD be implemented without changing the runtime by using different
method names or namespaces.
However: to be "mathematically" safe that no mistakes happen they would
need to be named something like
send<reserved-character>String<reserved-character>org<other-reserved-character>myproject<other-reserved-character>MyType(a:String,
b:MyType);
If it was possible to use namespaces in interfaces in the avm then it
could also be implemented using namespaces (which might result in a smaller swf).
However: there are fundamental problems with a non-runtime solution:
*) describeType would create different results with a swf that supports
overloading.
*) untyped access would not work properly anymore: var a:* = new
MyClass(); a["doSomething"]("str") would not work anymore like that
Either would potentially break libraries which is why I was hoping for
overloading support by the avm for a long time.
yours
Martin.
However: describeType is a runtime method that would unravel this hack.
http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201201.mbox/%3C4F145353.8070409@leichtgewicht.at%3E
Zim Standard
By Caiphas Chimhete
MOST new commercial farmers who occupied formerly white-owned farms during
the controversial land reform programme, are reportedly failing to pay their
workers stipulated wages because of low production levels over the past
three years, The Standard has been told.
Apart from that, the gazetted monthly wages for farm workers are
so pathetically low that most of them are living in abject poverty.
Deputy secretary of the General Agricultural Plantation Workers Union of
Zimbabwe (GAPWUZ), Gift Muti, said the union was currently visiting the new
farmers in an effort to make them pay the stipulated wages. Some workers go
for two months without getting their wages.
Muti said most of those failing to pay workers were farmers, who invaded
prime farm land during the government-sponsored land invasions, which
started in 2000.
He said while some new farmers comply, others were hostile to GAPWUZ
officials, forcing the union to seek recourse to the labour court.
"Some comply after we have discussions with them but there are other cases
where we end up at the labour court for arbitration," said Muti, who could
not give the exact number of farmers the union had taken to court.
Muti said the union had, in the past few months, successfully lobbied 120
new farmers in Mashonaland West, 60 in the Midlands and more than 50 in
Masvingo to pay their workers the recommended wages.
The president of the Zimbabwe Commercial Farmers' Union (ZCFU) Davison
Mugabe confirmed the problem, saying the new farmers were having
difficulties paying their workers because they were "starting up".
Mugabe said the situation was compounded by drought that has ravaged
Zimbabwe for the past three years.
"Most of the farmers are new and they do not have enough money to fall back
on when the situation is as bad as it is now unlike established farmers. You
must bear in mind that we have been having drought for the past three
years," Mugabe said.
Farm workers who spoke to The Standard last week said apart from the erratic
payment of wages, their monthly earnings were pathetically low.
The lowest paid farm worker (A1) gets $450 000, a figure that still falls
well below the poverty datum line.
The Consumer Council of Zimbabwe (CCZ) says a family of six now requires
about $9 million a month to live a normal life.
Anna Ndlovu, who works at a farm in Matabeleland North, owned by a senior
official in the President's Office, last week said at times they went for
two months without being paid.
She said most workers were living a deplorable life. "How do they expect us
to survive on this amount, when a two-litre bottle of cooking oil is going
for $167 000? They (government) should see what they can do for us," said
Ndlovu, a single parent.
As at the end of August, she grossed $192 400 while net earnings went down
to $184 128 after NSSA deductions of $5 772 and the $2 500 that was
subtracted after she borrowed maize meal from the farm owner's grocery shop.
But Mugabe, however, tried to justify the low wages saying: "We know the
$450 000 a month is low but you must also take into consideration that the
workers do not incur transport costs and food is cheap at the farm."
Zim Standard
Letters
I WRITE this letter as a former teacher who left the profession a few years
ago because of the lousy salary that I was getting. Teachers, hard-working
and crucial though they are, have been reduced to what one might term
"useless form of human currency".
For their predicament, teachers have no one to blame but themselves. Most
teachers are just too humane and tender-hearted to the point of stupidity if
not idiocy and need to be "radicalised".
During my short stint as a teacher, I noticed that most teachers are
hard-working and dedicated to their work and yet they are ridiculed,
despised and earn salaries that barely constitute a living. Look at teachers
when they retire - most become almost social welfare cases. The very same
people they work so hard and tirelessly to educate are the ones who look
down upon them later on in life. Show me a hard-working teacher and I will
quickly show you a fool of the highest order.
To all the teachers out there I say - your dedication to duty is your
greatest undoing. Teachers can revolutionise their salaries and working
conditions. For society and their employer to respect and reward them
accordingly, there should be a deliberate attempt by teachers to produce
poor results nationally.
If teachers in their wildest imaginations think that their employer will
come up with good salaries on a silver platter, they are living in dreamland
and should come down to mother earth and get to grips with reality.
As long as students continue to pass and proceed to secondary schools,
colleges and universities teachers will always remain a laughing stock. I am
fully cognizant of the fact that most teachers are peasants and they also
have school-going children; this is where the problem lies.
There is a need, however, to take cognizance of the fact that for anyone to
win any war; they must turn a blind eye to whoever might be the victim or
casualty. This is one of the painful principles of war.
If doctors and nurses can go on strike and leave patients dying in order to
be heard and rewarded appropriately, then surely teachers can emulate them
in order to be taken seriously. Why take your work seriously when your
employer is not serious about your well-being and welfare?
Already the medical professional has its own board, in a bid to improve
salaries and working conditions at the expense of other civil servants.
Teachers are taken for granted because they cannot leave for greener
pastures in droves like nurses, doctors and health technicians. Let it be
known that teachers can be like hypertension or high blood pressure. They
have the potential to kill silently. There is a great urgency for the powers
that be to set up a board for the teaching profession, similar to the Health
Services Board.
To the powers that be, I ask: Which is more important and valuable, the
golden eggs or the hen that lays the eggs?
It is high time the teachers showed them where those doctors, nurses and
pharmacists come from. You have been patient and sympathetic for far too
long and to your own detriment.
Finally and to all the teachers once more I say - wake up! Stop watching a
game you are supposed to be playing.
Son of the Soil
Jahunda
Gwanda
Zim Standard
Letters
WHILE the Zimbabwe Tourism Authority (ZTA) is busy trying to persuade us to
believe that tourism is showing signs of recovery, they could do several
things that will make tourists feel most welcome.
One of them is the safety of tourists once they set foot on Zimbabwean soil.
South Africa has just published a report that says it is much safer than it
was before. That can only persuade more tourists to feel they will be safer
there and that can only be at the expense of Zimbabwe.
The security of both domestic and international tourists at the various
resort areas is key to promoting greater inflows of visitors.
The other area needing the attention of the ZTA is the stretch of the road
to the Harare International Airport just past 1 Commando Barracks, where
visitors are greeted by the stench of overflowing septic tanks. This is not
the image Zimbabwe wishes to portray. Where does the effluent go? Is there
no one in the City Council or at the Barracks who sees this health hazard,
year in year out? And why is the matter not resolved once and for all?
In case they do not understand what I am suggesting, I am sure it would not
cost a lot to connect the septic tanks to the sewerage lines either in St
Martin's, Arcadia or Sunningdale compared to the threat to human life and
pollution to tributaries feeding into Lake Chivero, Harare's main water
reservoir.
In any case, who is the councillor/commissioner or MP for the area and just
what are they doing about this problem? It is such a negative point for
tourists to be greeted by something like that after they touch down at the
airport.
Zimbabweans are so submissive. The residents of the three suburbs mentioned
above have endured the stench for years yet they appear not to have
protested demanding action on the overflowing effluent from the Barracks.
They should demand action from whoever claims to represent them and serves
their interests.
Another area with a similar hazard is the T-junction of Sam Nujoma Street
and Norfolk Road in Mount Pleasant opposite Golden Stairs garage. The septic
tanks there are always overflowing and just how people at the garage can put
up with something like that is puzzling. Surely their health is more
important?
The City Council and the Ministry of Health should order the Ministry of
Defence to deal with the problem so that there is regular emptying of the
tank.
Not everyone in Harare can afford mineral water. Let's get things right the
first time.
M Watema
Mount Pleasant
Harare
Zim Standard
Letters
SEVERAL recent events have helped to unmask the undemocratic nature of Zanu
PF. One of these is the apparent absence of any consultations on what status
to bestow on and the venue of Henry Matuku Hamadziripi's final resting
place.
If Zanu PF is as democratic as it misleads people into believing, it would
have allowed/tolerated free and open discussion and in the end accepted the
majority decision. As it is, it looks to me that it is the minority decision
that prevailed over others.
But the fact that others who should know better preferred silence and
retreated into dark corners for fear of being seen, suggests fear instead of
democracy rules Zanu PF.
If Hamadziripi was instrumental in transforming Zanu PF from a
constitutional reformist party to a revolutionary party pursuing the armed
struggle, what was Robert Mugabe, as an individual, instrumental in?
The second example of how undemocratic Zanu PF is comes with the reported expulsion of
war veterans' leader, Jabulani Sibanda.
A democratic organisation would have asked Sibanda to appear before a
hearing. No, in fact, they would have checked with people who are alleged to
have carried out an interview during which Sibanda is alleged to have
uttered things that so offended the ruling party.
It could very well be that they find something different, but at any rate
just suspending someone without giving them a platform to defend or present
their own side of events is as undemocratic as you can get. What this case
only confirms is that a decision to suspend Sibanda was taken long back and
that an excuse to justify the action is being cooked up. Why does Zanu PF
act so frightened of discovering the truth? Zanu PF is its own worst enemy,
not Jabulani Sibanda, not Tony Blair or George W Bush.
Yet another example of the undemocratic nature of Zanu PF is the decision by
Elliot Manyika, the ruling party's political commissar, announcing that
there will be no primaries to select potential candidates to take part in
the race for the Senate.
Manyika said Zanu PF would adopt a "common consensus method and as always
the Central Committee and the Politburo have the final say." What the hell
is that and since when has it subscribed to this view and why? What has
changed and what is suddenly wrong with a process they used only in January
and February this year ahead of the 31 March Parliamentary elections.
Before they even start Zanu PF is preparing to rig the whole exercise
against its members! That is democracy Zanu PF style. No wonder there is so
much disgruntlement in the ruling party. Why have an election if the central
committee and the politburo are going to have a final say. Why doesn't it
just go ahead and handpick those it wants as its candidates instead of this
hoax of an election exercise.
Next, they will extend the "common consensus method" to everything. This is
how a dictatorship operates.
Even in traditional societies, the chiefs consulted their court officials
and once in a while the court jester acted as their conscience, ridiculing
and amplifying their misplaced decisions, forcing them now and then to
evaluate their decisions.
Further examples of Zanu PF's intolerance can be found in Cain Mathema's
attack on non-governmental organisations that have provided boreholes for
rural communities to draw water for their drinking and livestock purposes.
The issue for Mathema to answer is: Why are non-governmental organisations
providing boreholes when the government was supposed to provide rural
communities with tap water? Would he rather the rural communities had no
water at all, because that is what he seems to suggest - so that more people
can die from waterborne diseases!
The other face of Zanu PF's undemocratic and intolerant nature is Edwin
Muguti's attack against the British. Thankfully the local chief and
villagers - the real beneficiaries of the UK aid - were there to speak for
themselves.
Mathema, Muguti and the others on the gravy train do not represent the views
of the majority. Where their own government fails to do anything they would
rather no one did anything at all.
Tirivanhu Mhofu
Emerald Hill
Harare
Zim Standard
Letters
AT the moment there is job freeze in the public sector. The purpose is to
save on government expenditure. But what is really happening is very
puzzling.
Posts which are in the lowest grade where a worker gets less than $3 million
are being frozen. However, at the same time posts that pay millions more of
dollars are being created for favourites.
William Nhara got a new post in the President's Office, specially created
for him since he had no job after losing the 2005 Parliamentary elections.
This post definitely costs the government many millions of dollars.
Is this a job freeze when bigger and more costly posts are opened any time
for Zanu PF yes men?
The other example of job creation is the Senate. New jobs are going to be
created for 66 jobless Zanu PF hero-worshippers as well as the supporting
staff and it will cost the taxpaying public billions of dollars. And yet the
majority of the tax payers are living a hand to mouth existence while the
government spends their taxes extravagantly and without planning.
The Zanu PF government should be ashamed of continuing to cause suffering to
the people of Zimbabwe.
D R Mutungagore
Sakubva
Mutare
Zim Standard
Letters
I am surprised that you raised the issue "RBZ grants Gono farming loan". As
far as our Reserve Bank Governor is concerned, corporate governance relates
to other people and not RBZ employees.
If you recollect when Gono was appointed, the Mashonaland East Governor
asked him to declare his interests.
As far as I can recall, it was widely believed in financial circles that
Gono was the major shareholder in the Financial Gazette. Gono did not, among
a number of things, declare his interest in the Financial Gazette.
If Gono is untruthful about such a matter how can we trust him on other more
important issues?
I would be interested to know how much Productive Sector Support funding was
granted to Gono's farms and the Financial Gazette.
It is interesting to notice that appointments to financial institutions must
be vetted by the central bank. Who vets potential employees for the central
bank?
Totemless
Croydon
South London
Zim Standard
ZIMBABWE'S problems would be fewer if belt-tightening by ordinary
Zimbabweans was matched by similar measures and not just rhetoric from its
leaders.
Ten days ago, President Robert Mugabe thanked Zimbabweans for their
resilience, in particular, in the light of problems facing the country. He
said: "It is that spirit of endurance and commitment to the national cause
which saw us defeat colonialism and win our freedom and independence."
Last month Zimbabwe even surprised the International Monetary Fund by
coughing up US$120 million to reduce its arrears to the Bretton Woods
institution. This was followed by a further US$15 million this month.
The IMF was startled by this sudden show of the spirit of sacrifice because
it was acutely aware that while Zimbabwe was finally doing the right thing
since 2000 other sectors were going to suffer. It was partly this
realisation which led the IMF to question Zimbabwe's ability to find
resources to enable it to reduce its arrears
The same sacrifice made in order to pay off the country's arrears to
international financial institutions is called for when dealing with the
problems confronting the nation. It can be argued that Zimbabwe's problems
emanate from a total lack of will to limit the penchant for lavish
lifestyles or the sense to examine and question whether what is being
considered or done is really necessary and in the greater national interest.
If Zimbabwe's leadership had capacity for such reflection and the will, they
would have, as far back as 2000, paused to question whether unleashing
violence was an appropriate method of redressing historical land imbalances.
They would also have reflected on how the country was going to afford goods
it does not produce when every exporting sector was being run down or forced
to cease operations.
Last month and at a time when the country could scarcely afford fuel for its
critical sectors, a decision was made to host regional air forces, when
common sense would have dictated a postponement as part of belt-tightening.
Counterparts from the region would have understood, given the country's
predicament. But the show went ahead regardless of the effects. It was a
remarkable demonstration of skewed priorities but one that unmasked how the
country is being run.
The national carrier, Air Zimbabwe, had to delay or transfer passengers
paying in foreign currency because it had no fuel for its aircraft.
But that is not all. At a time when the mention of Zimbabwe conjures up
images of extreme hardships, the country boasts probably the largest
population of luxury vehicles outside South Africa on the continent,
especially by its government ministers and officials.
Chad recently discovered oil yet it has placed limits on overseas travel
while Rwanda has impounded 2 500 government vehicles questioning whether
government ministers really need them. Rwanda suggests government ministers
"be with the people". This maybe extreme, but it demonstrates the kind of
belt-tightening alien to their Zimbabwean counterparts.
Overseas trips by Zimbabwe's ministers are numerous, yet they cannot be
justified in terms of real immediate or long-term benefits to the country.
They are like investing in a pyramid scheme.
Yet if the resources gobbled up by foreign trips were channelled to domestic
needs, the fuel crisis would not be so critical. Industries would not be
acutely affected and they would be able to produce scarce basic commodities
while export earning-sectors would be supplying foreign markets and
therefore generating hard currency to, in turn, enable the country to pay
for its imports.
Politics has largely become an avenue for pursuing private business
ventures. MPs are granted constituency travel allowances, yet how many of
them actually use this facility for the purpose of visiting and consulting
with the people who elected them? If the MPs were genuinely engaged in
consultative or report back processes there would be no complaints about
legislators who only remember their constituencies when campaigning starts.
It is clear therefore that the travelling allowances are being used for
purposes other than those for which they are intended. A start could be to
reduce the allowances to half of the present entitlement, as part of
belt-tightening. Such a move may be used to justify underperformance, but
even with all the allowances in the world, they would never deliver.
The same could be extended to government ministers and their officials, as
would a review of the number and performance of our expensive foreign
missions.
The Senate, whose elections are due at the end of November, is an indication
of how the government expects belt-tightening by ordinary Zimbabweans
without attendant responsibilities on its part. Opposition to the Senate is
fuelled by the insensitive timing and because of scarce resources. Sixty-six
new senators, offices and supporting staff will mean more in government
expenditure, placing a greater burden on the taxpayers. Let's cut our suit
according to our cloth. Belt-tightening must be for everyone.
Zim Standard
By our staff
FOUR of Harare's suburbs have been hit by a spate of cable thefts with
telephone services to more than 200 consumers affected during the month of
September alone and service provider TelOne says replacement cables are not
locally available.
In one of the suburbs, which is worst affected by the cable thefts, services
to more than 100 subscribers have been suspended.
According to TelOne the suspension of services to subscribers will be for an
indefinite period, owing to the unavailability of cables to restore
services. Telephone services are critical in emergencies. A delay
in calling emergency services because of vandalised lines could mean the
loss of lives.
"The situation has been worsened by the fact that there are no cables
available on the local market, hence they have to be imported. With current
severe foreign currency shortages it would be difficult to get that
allocation now as there are equally important sectors of the economy that
urgently need foreign currency. "These problems have resulted in a huge
backlog of faults in the said areas and this has resulted in customers
experiencing delays in restoration of service," TelOne's public relations
executive, Phil Chingwaru said.
Zim Standard
By our staff
BULAWAYO - The Bulawayo City Council has been hit by a shortage of
engineers, making it difficult for the authority to attend to essential
services.
Addressing residents at a city hotel recently, executive mayor Japhet
Ndabeni-Ncube revealed that the city was facing a critical shortage of
engineers.
He said 27 engineers had left the council during the past few months.
Residents have in recent weeks endured water blockages and burst sewer
pipes, among other things and the council has failed to rectify the problems
timeously.
The non-availability of fuel has also forced the local authority to confine
refuse removal to the central business district.
Ndabeni-Ncube disclosed that there were 425 water leaks, 150 water blockages
and many burst sewer pipes which could not be rectified due to serious staff
shortages in the city.
On the water situation, the mayor said the council was now pursuing plans to
extend further north the Nyamandhlovu aquifer in a bid to augment the low
water levels trickling into the city.
In Nyamandhlovu, the mayor said, only 8 out of 78 boreholes were working as
most of them were vandalised by war veterans at the peak of farm invasions
in the year 2000.
Zim Standard
By Ndamu Sandu
EMPOWERMENT outfit Nkululeko Rusununguko Mining Company of Zimbabwe (NRMCZ)
has sought the intervention of President Robert Mugabe in its bid to acquire
15% stake in Zimplats Holdings Ltd.
NRMCZ won the right to partner Zimplats last year ahead of National
Investment Trust (NIT) and Needgate.
Standardbusiness heard last week the move comes hard on the heels of efforts
by Mines ministry to scuttle the transaction in favour of a created Special
Purpose Vehicle (SPV) that would accommodate all losing bidders for the
empowerment stake.
Industry sources say while Mines minister Amos Midzi was pushing for an SPV,
Reserve Bank of Zimbabwe (RBZ) Governor Gideon Gono was pushing for NIT to
acquire the empowerment stake including the financing of the transaction.
The letter to President Mugabe, sources say, outlines impediments faced by
the empowerment group to acquire the transaction.
While it could not be established the actual date the communication had been
sent, sources told Standardbusiness last week that the letter had been sent
to President Mugabe last month.
Sources say NRMCZ had raised concern over the reluctance of the ministry of
Mines and Mining Development on policy issues such as the final percentage
on indigenous empowerment in the white metal producer. Zimplats had
indicated that it wanted to know the final percentage as it was worried
about political statements to the effect that indigenous players were
entitled to 50% in mining ventures. Sources say concern was also raised on
attempts by Midzi to scuttle a cabinet decision to award 15% to NRMCZ by
sounding the idea of an SPV as a pre-condition for progress.
Standardbusiness broke the story in August of plans by Midzi to set up an
SPV that would accommodate losing bidders, Needgate and NIT.
Nkululeko requested Midzi to put his proposal in writing. Sources say Midzi
failed to do so.
Sources say the empowerment group had raised concern on the role played by
Gono in stifling the conclusion of the empowerment deal.
Gono has in the past raised concern over the composition of NRMCZ saying it
(NRMCZ) "was gate-crashing into platinum business riding on political
connections".
Sources say the empowerment group had expressed concern over the delay by
Impala Platinum Mines (Implats) to conclude the transaction. Implats are the
largest shareholder in Zimplats.
"Implats are claiming that they cannot proceed with the transaction because
a number of groups were introduced to them by the Mines ministry," a source
said.
Impala, sources say, were shifting the goal posts saying it was unable to
conclude the transaction until the bilateral agreement between Zimbabwe and
South Africa to protect South African investments in the country had been
signed.
Efforts to get comment from the usually reliable Gono were unsuccessful
throughout the week, as were efforts to reach NRMCZ spokesperson Alex Manungo. Under the
arrangement, NRMCZ will buy 13.4 million shares in Zimplats at the ruling
price on the Australian Stock Exchange (ASX).
The empowerment stake has been at the centre of controversy with Needgate at
one time saying, "they were the sole owners of the 15% equity and waiting
for the official announcement".
Zim Standard
By Bertha Shoko
HIV positive Zimbabweans could be living on the edge of death following
revelations that the country's sole manufacturer of Anti-Retroviral drugs
(ARVs) is failing to import raw materials used in the production of the
vital medication.
The Standard understands that Varichem Pharmaceuticals Private Limited, the
country's sole manufacturer of generic ARVs is facing problems due to
shortages of foreign currency needed for importation of raw materials. The
foreign currency crunch is also understood to be affecting the whole
pharmaceutical industry, which faces insurmountable hurdles in sourcing hard
currency from the official market.
Varichem manufactures the ARV combination Stalanev that contains the active
ingredients, staduvine, lamudivine and nevirapine. Stalanev is used as a
first line in the management of HIV. The other combination it manufactures
contains Zidovudine and lamudivine also used as an alternative to Stalanev.
Highly placed sources at Varichem told The Standard that the pharmaceutical
company was last allocated foreign currency by the Reserve Bank of Zimbabwe
through the auction system on 18 July 2005. The sources said the
pharmaceutical company requires more than USD$350 000 a month but could
require more as demand for ARVs increases.
If a foreign currency injection is not made as a matter of urgency, they
warned, the country could run dry of the ARVs on the private market where
the majority of people using the anti-Aids drugs source them.
"The pharmaceutical industry is forex-intensive and therefore lack or
shortage of foreign currency will affect operations in a big way. Right now,
the shortage of foreign currency is only affecting the private sector and if
anything is not done, there will be no ARVs on the private market," the
sources said.
There are fears also that the shortages might affect government's public ARV
programme, carried out at major institutions such as at Harare Central and
Parirenyatwa hospitals in Harare.
About 30 000 HIV positive people against a backdrop of more than 300 000 who
need the drugs, are benefiting from this scheme which has failed to expand
since its inception last year due to limited resources and lack of external
funding.
Contacted for comment, Dr David Parirenyatwa, the Minister of Health and
Child Welfare, referred questions to Varichem saying they would be in a
better position to say exactly what is happening.
"Varichem may have its challenges like any organisation, but they would be
in a better position to tell you. As for our government run ARV programmes
they have been able to keep up with the supplies we need," Parirenyatwa
said.
An official with Varichem told The Standard that the company was working
flat out to reverse the impending drug shortages.
"Our parent ministry, the Ministry of Health and Child Welfare, is aware of
this problem and they are assisting us by seeking dialogue with the central
bank," said the official.
Other players in the drug procurement and distribution industry confirmed
that the pharmaceutical industry faced a foreign currency crisis.
Benson Tamirepi, the managing director of Health Care Resources, told The
Standard that most of the drugs being manufactured by Varichem have a "high
import content" and therefore required foreign currency.
Tamirepi, whose company distributes pharmaceuticals, surgical medication and
equipment from Varichem and other companies, said the government must
prioritise allocation of foreign currency to health institutions.
Other than Varichem, the Medicines Control Authority of Zimbabwe recently
licensed two Indian companies, Ranbaxy and Citla, to supply Zimbabwe with
generic drugs, while Caps Holdings Limited has been awarded a licence to
manufacture ARVs. These other players might help avert the pending drug
crisis.
Zim Standard
By our staff
KARIBA - VIP delegates attending the World Tourism Day celebrations in
Kariba last week came face to face with the reality of the fuel crunch when
a boat ran out of fuel in the middle of Lake Kariba.
Dignitaries in the boat were Environment and Tourism minister Francis Nhema,
Mashonaland West Governor Nelson Samkange and management officials of the
Zimbabwe Tourism Authority (ZTA).
The boat ran out of fuel on its way from Msambakaruma Island, the venue of
the celebrations and delegates had to endure a two-hour anxious wait before
a speed boat came with additional fuel.
At the evening cocktail, Samkange chose to be economical with the truth,
blaming the fuel crisis on sanctions. Tour operators said the biting fuel
shortage was affecting their businesses. Operators also blasted national
carrier Air Zimbabwe for choosing flying days not conducive to travellers.
Air Zimbabwe flies to Kariba thrice a week on Tuesday, Thursday and
Saturday.
Air Zimbabwe was also blasted for delays to its flights and the cancellation
of some flights. Indeed tour operators' attack on Air Zimbabwe was
opportune. The Saturday flight to the resort town was moved from 7.30 am to
10.00am. It was later cancelled.
Nhema was initially booked on the Air Zimbabwe flight but because of the
inconsistency of the national carrier, he cancelled the booking. Nhema and
his entourage were booked on Alliance Aviation that flew the delegates to
and from Kariba.
Alliance Aviation is a new kid on the block in the aviation industry that
commenced operations in August. Meanwhile, the 2005 Travel Expo opens at the
Harare International Conference Centre on Thursday. The 13-15 October exhibition
will be held under the theme "Providing Hospitality for 25 years with a
bright future".
ZTA had set a target of 150. Over 170 international buyers have confirmed
participation at the annual showcase as of Friday.
Zim Standard
By Caiphas Chimhete
THE State will dole out more than $36 billion to former political prisoners,
ex-detainees and restrictees, in what analysts said yesterday is
economically dangerous "vote-buying" ahead of next month's Senate elections.
The analysts warned that payment of gratuities to war collaborators was a
repeat of a disastrous "appeasement policy" and would be a replica of the
1997 economic disaster triggered by the award of $50 000 each to war
veterans, which sent the economy into free fall in what came to be known as
"Black Friday".
Last week, the government sanctioned a one-off payment of $6 million
gratuity and provision of loans to finance commercial projects, education,
medical as well as funeral expenses to ex-prisoners, detainees and
restrictees, numbering about 6 000.
Assistance would also be given to their dependent children wishing to pursue
academic or vocational training, while those attending non-government
schools and institutions will be entitled to an education grant equal in
amount to the education benefit at government institutions.
In addition, funeral grants to the beneficiaries would be availed at the
same rate as those paid to civil servants.
Beneficiaries intending to embark on income-generating projects will be able
to apply for loans under the Ex-Political Prisoners, Detainees and
Restrictees Act.
University of Zimbabwe political scientist, Eldred Masunungure, said it had
become a pattern for Zanu PF to "buy votes" ahead of any major election in
which it expects a major challenge.
He said the vote-buying campaign was a clear indication that the ruling
party had lost the support of people who were suffering because of President
Robert Mugabe's scorched earth economic policies.
Mugabe, under siege from marauding war veterans led by the late Chenjerai
"Hitler" Hunzvi, awarded former freedom fighters $50 000 each in 1997 -
sending the economy into a tailspin from which it has never recovered.
It appears the government has not learnt much from that disaster.
Masunungure said: "That has become a pattern. It has to be understood in the
context of politics of patronage - Zanu PF's political survival tactic. If
Zanu PF fails to do that it will be gone for good and the leaders know that.
It's going to be devastating. It will compound an already bad economic
situation."
Independent economic analyst John Robertson agreed, saying the $36 billion
would have a huge negative impact on the country's shrinking economy because
the government's excessive spending had not been matched by production.
Paul Themba-Nyathi, the opposition Movement for Democratic Change
spokesperson said the payment of gratuities was a campaign gimmick by Zanu
PF, which always bribed gullible members of society towards major national
elections.
Two weeks ago, Mugabe told the same war collaborators that fuel, shortages of which have
dogged the country for the past five years, would be readily available. That
prophecy is still to be realised.
Robertson said the payment would create problems for the Minister of
Finance, Herbert Murerwa, who will need to accommodate the former war
collaborators in next year's budget, considering that the country was
failing to pay for fuel, feed the nation or service its international debts,
but would factor salary increases of civil servants in January next year.
http://www.zimbabwesituation.com/old/oct10a_2005.html
If I have the following code:
static class SceneData
{
static int width=100;
static int score;
static GameObject soundManager;
}
Can I use this to keep score that's available in all my scenes? If so then do I have to make an empty gameobject in each scene and then just attach the script to that or is there more I have to do?
asked Sep 26, 2012 at 06:42 AM by Sophia Isabella
Sure! I am using the same methods in my own project right now.
First let me say that a guy named Petey has a youtube channel called BurgZergArcade that has some really awesome Unity3d C# programming videos!
That channel is here:
Anyway what I usually do that I learned from Petey is make a new empty gameobject and name it GameMaster, then make a C# script named GameMaster.cs.
There's at least two ways to do this. Here's the first way:
using UnityEngine;

public class GameMaster : MonoBehaviour
{
    public static int score;

    void Awake()
    {
        DontDestroyOnLoad(this);
    }
}
This is the quick and easy way to access variables like 'score' from any class. To do this just call GameMaster.score from anywhere. The problem with this method is that it only works well for simple data types like int. Most of the time we want to drag and drop in Assets via the inspector, and this isn't allowed with static variables.
In this case we create what's called a singleton. This is something Leepo from M2H uses in his multiplayer tutorials. We create a regular class and then make one single static instance of the whole class. This is the second way:
using UnityEngine;

public class GameMaster : MonoBehaviour
{
    public static GameMaster GM;
    public GameObject soundManager;
    public int score;

    void Awake()
    {
        // If an instance carried over from a previous scene already exists,
        // replace it; otherwise this becomes the persistent instance.
        if (GM != null && GM != this)
            Destroy(GM.gameObject);
        GM = this;
        DontDestroyOnLoad(gameObject);
    }
}
The beauty of using the singleton is that it can use the inspector to assign public assets and can still be used anywhere like a static class, even the functions will work from anywhere. We'd call the score for example by using GameMaster.GM.score
Of course there are many other ways to accomplish this, but this is how I like to do it. Hope that helps!
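For example, a minimal usage sketch from another script (the ScoreDisplay class here is hypothetical, just to show the access pattern):
using UnityEngine;

// Hypothetical component: drop it in any scene and it reads the persistent singleton.
public class ScoreDisplay : MonoBehaviour
{
    void OnGUI()
    {
        // Works from any scene because the GameMaster object survives scene loads.
        GUILayout.Label("Score: " + GameMaster.GM.score);
    }
}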
answered Sep 26, 2012 at 10:56 AM by dscroggi
You deserve more thumbs and at least one comment saying this is correct. Oh, and it should be marked as correct, obviously.
I learned this technique in an official Unity Tutorial Video: see this link, video @ 13.30...so I'm pretty sure this is the right way to go.
I HAVE ONE THING TO ADD: he should make a prefab of that GameMaster-object.
The way I do it is that I have a normal variable (I call it temp) that I use in the scene and a static variable (static) that I use for the whole game.
The reason is, if you use the static var for your point and the guy loses, when he starts again, he still has the point from previous run.
You could use temp and if the guy wins the level then
static += temp;
If the guy loses
temp= 0;
Temp is lost when the new level is loaded, static remains.
You can attach the static var to your player object. The static var will have a lifetime of the whole game and you can access it anywhere with ClassName.staticVar.
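A minimal sketch of that temp/static pattern (the class and method names here are illustrative):
using UnityEngine;

public class LevelScore : MonoBehaviour
{
    public static int total;   // survives scene loads, lives for the whole game
    public int temp;           // per-level score, lost when the level reloads

    public void OnLevelWon()
    {
        total += temp;          // bank the level score into the game-wide total
    }

    public void OnLevelLost()
    {
        temp = 0;               // discard the level score
    }
}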
In case you do not know much about static yet: it has been the subject of a lot of controversy, as beginners tend to think that its ease of access is a good feature. Beware of that.
answered Sep 26, 2012 at 06:56 AM by fafase
My game is a bit different in that it does not have a player object as such. Could I create a gameObject called StaticData in each scene and attach a static class to that. If I did that would it still have the same values from scene to scene?
The game object would be a new one but the data would be the same. It would be a new object linked to the same data.
@fafase - What about the script thing that dscroggi is suggesting. Have you needed that before?
Throw this utility script on any game object and it will not get destroyed when scenes change. Make sure that game object has your static data on it. Btw as far as I'm concerned, static data is faster to access than instanced data, so you're on the right track.
using UnityEngine;

public class KeepOnLoad : MonoBehaviour
{
    void Awake()
    {
        DontDestroyOnLoad(this);
    }
}
answered
Sep 26, 2012 at 07:34 AM
@dscroggi - Can you explain a bit more? It's a method called DontDestroyOnLoad(this)?
When you start a new scene, all objects from the previous scene are destroyed. This function tells Unity to keep the object when loading a new scene. Could be a solution. My issue is that if for some reason the object gets destroyed within the game, you would lose the data. The function preserves it across loads, but I do not know if it protects against an explicit Destroy.
@dscroggi - Can you show me how the class would look that I would need to hold the static data. All I need to do is to hold one integer for score? Thanks
Yes you can do this, but you need to specify everything as public.
using UnityEngine;   // needed for the GameObject field below

public static class SceneData
{
    public static int width = 100;
    public static int score;
    public static GameObject soundManager;
}
Otherwise they default to private and your class will be useless.
Now you'll be able to use SceneData.width anywhere.
The other answers offer ways of having a class that inherits from MonoBehaviour which you can access from anywhere, but it doesn't look like you actually want any MonoBehaviour functionality, so a simple static class will work better.
answered Jul 27 at 02:03 AM by SilentSin (edited Jul 27)
asked: Sep 26, 2012 at 06:42 AM
Seen: 5981 times
Last Updated: Jul 27 at 02:04 AM
http://answers.unity3d.com/questions/323195/how-can-i-have-a-static-class-i-can-access-from-an.html
FreeMarker Grails Plugin
Dependency:
compile ":freemarker:0.4"
Summary
The Grails FreeMarker plugin provides support for rendering FreeMarker templates as views.
Description
Introduction
The Grails FreeMarker plugin provides support for rendering FreeMarker templates as views.
Getting Started
Installing The FreeMarker Plugin
Install the FreeMarker plugin with the install-plugin command:
grails install-plugin freemarker
Rendering FreeMarker Templates
The FreeMarker plugin supports rendering FreeMarker templates as views. FreeMarker templates should be defined below the views/ directory in the same places where you might define GSP views. For example, if you have a controller named DemoController that looks like this:
class DemoController { def index = { [name: 'Jeff Beck', instrument: 'Guitar'] }}
Then you could define a FreeMarker template in grails-app/views/demo/index.ftl that looks like this:
<html> <body> Name: ${name} <br/> Instrument: ${instrument}<br/> </body> </html>
FreeMarker Tag Library
The FreeMarker plugin contributes a tag called render which is defined in the fm namespace. The tag works much like the render tag that is bundled with Grails except that it renders FreeMarker templates instead of GSPs. The tag supports the template and model attributes and may be used from any GSP. The following example shows the tag being used from a GSP (the attribute values here are placeholders):
<html> <body> <fm:render template="..." model="..." /> </body> </html>
Troubleshooting
In applications with more than one view resolver, UrlMappings.groovy may need to be updated. When adding this plugin to an existing Grails project that also uses .gsp views, the default "/" mapping will be indeterminate. Fix this by setting "/"(view:"/index.gsp") or "/"(view:"/index.ftl"), whichever you want to use.
More Information
For more information on FreeMarker see the FreeMarker site.
Release Notes
Version 0.3
- Upgrade to FreeMarker 2.3.16
- Improve access to flash scope in a FreeMarker template
Version 0.4
- GPFREEMARKER-11 fix
- Adding FreemarkerViewService and FreemarkerTemplateService
- Packages renamed
|
http://grails.org/plugin/freemarker
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
06 April 2009 18:38 [Source: ICIS news]
WASHINGTON (ICIS news)--A top US oil and gas industry official on Monday charged that the Obama administration is undermining the nation’s energy future with proposed tax increases on production and delays in opening offshore resources.
Jack Gerard, president of the American Petroleum Institute (API), warned in a letter to members of Congress that higher taxes on energy firms and “a pattern of delay” in opening offshore areas to drilling and research in oil shale will worsen the nation’s energy crisis.
Gerard cited Energy Secretary Steven Chu, the International Energy Agency (IEA) and private sector studies saying that the anticipated end of the recession in 2010 will trigger a sharp increase in demand for oil and natural gas and bring a supply crunch.
At a time when
“Unfortunately, the administration has put forth budget proposals that call for billions of dollars in new taxes and fees on the oil and natural gas industry,” he said.
“If imposed, these taxes and fees could have a debilitating effect on our economy when our nation can least afford it,” he added. “They would reduce investment in new energy supplies, meaning less energy produced for American consumers.”
“We cannot tax our way out of our energy problems,” Gerard said.
The
Gerard noted that in October last year Congress “did the right thing” in allowing its 27-year-long moratorium on offshore drilling to expire.
Now, however, “a pattern of delay is emerging when it comes to developing
Gerard cited a recent decision by the Interior Department to postpone the five-year offshore development proposal initiated by the Bush administration and putting off development leases for oil shale projects in western states.
He cautioned that tax disincentives and development delays will not only reduce the country’s energy options but also cost the country jobs and royalty revenue and weaken national
|
http://www.icis.com/Articles/2009/04/06/9206410/Obama-seeks-higher-energy-taxes-offshore-delays-API.html
|
CC-MAIN-2014-49
|
en
|
refinedweb
|
ren wrote:
>
> There is no magical "slurp Foo's code into my application" solution for
> handling arbitrary schemas, regardless of using YAML, XML, or whatever.
> Short of downloading a whole application, there's no way to transfer "the
> semantics" required.
>
Well said.
As such, I think the YAML community should recognize these principles:
a) YAML implementations should be written in a way that application developers can easily extend its type handling, and users should be able to extend YAML on a per-document basis.
b) YAML users should realize that public types don't guarantee any level of implementation support; they only guarantee a centralized starting point for finding documentation and implementations.
c) HTTP and YAML are the expected transport and format, respectively, for type documentation. Departures from HTTP and YAML should be considered interoperability risks.
d) Human intelligence does not go out the window.
e) YAML documents should use !! to denote data structures that would be mapped into objects conforming to the semantics documented at.
showell@... [mailto:showell@...] wrote:
> Let me be the first to admit that I haven't read every
> paragraph of the recent HTTP/URN duck/duck/goose debate. I
> suspect I am in good company. The thread mode has made the
> issue seem too complex, and complexity turns off any good
> simple-minded YAML soul.
Point taken. However, you don't have to bother about it; the whole thing is
optional (I personally don't see exactly what it is good for in practice,
but I'll take Clark's word it *is* useful for some applications).
> ... If you see this type indicator:
>
> !
>
> ...then where is the most natural place to look for
> documentation on that type?
>
> a)
> b) email foo_barson@... and submit a public YAML type
> documentation request, form 10358(b)
> c) do a reverse URN lookup as documented in RFC 2975 using
> the semantics proposed at the July 1999 UDDI conference in
> Provo, UT (link provided upon request)
> d) google on <!xml
> and
> then refer to appendix C for the
> XML-to-YAML namespace transformations
> e) simply ignore like the YAML type just like you used to do
> with the XML types
Nice :-)
> IMPLEMENTATIONS
>
> I will leave one open question. Suppose Foo Barson mails you
> back this document describing the public YAML type:
>
> owner: Foo Barson
> last updated: April 2002
> purpose: Describes tax data
> supporting documents:
> implementations:
> python:
>
> If you were writing a Python app that had to deal with Foo
> Barson's tax data domain type, how would you support it?
If you are going to use Foo Barson's tax data domain type, presumably your
application is/should be geared towards processing it in deeper ways than
just YAML syntax handling.
So, looking at the documentation and Python code, you'd design your
interfaces to his implementation of the relevant types. Also, presumably the
YAML library will allow you to easily register the loader components from
Foo Barson's library... and then you'll be all set.
Now, if Foo Barson was writing in, say, Perl, you'd have a harder time.
First you'll need to re-implement the functionality in Python yourself.
Or, you could treat the relevant data types as "encoded generic types"
(you'll want to preserve the type family, but otherwise treat values as
generic map/seq/scalar types - using "special keys" may be handy here). This
is trivial in Perl (just bless a hash/array/scalar), and is probably not too
complex in Python (and presumably would be part of the YAML library there).
There is no magical "slurp Foo's code into my application" solution for
handling arbitrary schemas, regardless of using YAML, XML, or whatever.
Short of downloading a whole application, there's no way to transfer "the
semantics" required.
Have fun,
Oren Ben-Kiki
|
https://sourceforge.net/p/yaml/mailman/message/8985962/
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Wiki
dotnetrdf / User Guide
dotNetRDF User Guide
Welcome to the dotNetRDF user guide. This provides an introduction to dotNetRDF and aims to cover how to carry out a variety of common tasks in dotNetRDF. Using this guide you can learn the basics of working with the library in order to enable you, the user, to build applications using dotNetRDF.
You may also be interested in our FAQs or our quick How To guides.
If you are already an experienced dotNetRDF user you may wish to look at the Developer Guide instead which covers project architecture and advanced topics.
Basic Tutorial
This series of pages aims to introduce you to the core concepts of dotNetRDF and get you up and running with the library, reading in order is suggested for new users:
- Getting Started
- Library Overview
- Hello World
- Reading RDF
- Writing RDF
- Working with Graphs
- Typed Values and Lists
- Working with Triple Stores
- Querying with SPARQL
- Updating with SPARQL
General Topics
If you want to look up a specific namespace, class or method in the API please see the Formal API documentation which is MSDN style documentation of our libraries.
The following pages cover some general topics about the library:
- Exceptions
- Equality and Comparison
- Event Model
- Utility Methods
- Extension Methods
- Using the Namespace Mapper
3rd Party Triple Store Integration
We provide integration with a variety of 3rd party triple stores, see the following topics:
SPARQL Features
The basic tutorial covers simple SPARQL queries and updates; we have a selection of topics on advanced SPARQL features available:
- Advanced SPARQL
The SPARQL Engine section of the Developer Guide may also be relevant to advanced users.
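As a rough illustration of the kind of querying those pages cover, the sketch below loads a file into an in-memory TripleStore and runs a simple SELECT. It assumes the ExecuteQuery convenience method and the SparqlResultSet type described in the Querying with SPARQL tutorial, and "example.ttl" is a hypothetical local file; the exact entry points have changed between dotNetRDF releases, so treat this as a sketch rather than copy-paste code.
using System;
using VDS.RDF;
using VDS.RDF.Parsing;
using VDS.RDF.Query;
class QuerySketch
{
    static void Main()
    {
        // Load a local Turtle file into a graph and add it to an in-memory store.
        IGraph g = new Graph();
        FileLoader.Load(g, "example.ttl");
        TripleStore store = new TripleStore();
        store.Add(g);
        // Run a simple SELECT over the store; the result is a SparqlResultSet for SELECT queries.
        Object results = store.ExecuteQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10");
        if (results is SparqlResultSet)
        {
            foreach (SparqlResult r in (SparqlResultSet)results)
            {
                Console.WriteLine(r.ToString());
            }
        }
    }
}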
Ontologies, Inference and Reasoning
Please see the following for documentation of ontology, inference and reasoning support:
ASP.Net Integration
Please see the ASP.Net Integration page for an overview of how we integrate into ASP.Net applications, or you can jump to specific topics below:
Advanced APIs
The following documentation covers what are considered advanced topics but which may still be of value to everyday users of dotNetRDF.
Tools
See the Tools page for documentation pertaining to our GUI and command line tools.
- rdfConvert
- rdfEditor
- rdfOptStats
- rdfQuery
- rdfServer
- rdfWebDeploy
- soh
- SparqlGui
- Store Manager
Notes
Note that where we refer to the user in this guide we are referring to you the developer who is using the API.
All examples in the User Guide are given using C#.Net
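For instance, a minimal "Hello World"-style program in that spirit might look like the following. This is only a sketch written against the core VDS.RDF types (Graph, UriFactory, NTriplesWriter) and is not lifted verbatim from the guide; it assumes a reasonably recent dotNetRDF release where UriFactory.Create is available.
using System;
using VDS.RDF;
using VDS.RDF.Writing;
class HelloDotNetRdf
{
    static void Main()
    {
        IGraph g = new Graph();
        // Build a single triple: <http://www.dotnetrdf.org> <http://example.org/says> "Hello World"
        IUriNode subject = g.CreateUriNode(UriFactory.Create("http://www.dotnetrdf.org"));
        IUriNode predicate = g.CreateUriNode(UriFactory.Create("http://example.org/says"));
        ILiteralNode obj = g.CreateLiteralNode("Hello World");
        g.Assert(new Triple(subject, predicate, obj));
        // Write the graph to the console as NTriples.
        NTriplesWriter writer = new NTriplesWriter();
        writer.Save(g, Console.Out);
    }
}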
Updated
|
https://bitbucket.org/dotnetrdf/dotnetrdf/wiki/User%20Guide
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
In my previous blog entry about notifications I mentioned that we had not covered the ResponseSubmitted event. Subscribing to this event allows you to respond to user input entered within a notification bubble. This blog entry discusses how you can capture and process user input within notifications.
Creating an HTML based form
The contents of a notification bubble is formatted using HTML. To request user input within a notification bubble we can utilise a HTML based form.
A simple form specified within HTML may look like the following example:
<form> Name: <input name="name" /><br /> <input type="submit" /> </form>
Similar to a System.Windows.Forms based application, there is a range of controls which can be used within an HTML based form. These controls are typically specified via an <input /> element, although there are some exceptions as shown in the following table:
A sample form with two textbox controls could be specified within C# source code as follows:
Notification notif = new Notification();
notif.Text = @"<form>
    Field 1: <input type=""text"" name=""fieldname1"" /><br />
    Field 2: <input type=""text"" name=""fieldname2"" />
</form>";
Buttons
Using the controls specified above allows a notification to accept input from the user. However it does not provide the user with a mechanism to submit a completed form to the application for further processing.
A form is typically submitted when the user presses a submit button. A submit button can be specified within HTML via the use of an <input type=”submit” /> element. Whenever the user presses the submit button the ResponseSubmitted event will be triggered, allowing the form to be processed by the application.
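As a rough sketch of how the pieces fit together, the notification and its event handler could be wired up as shown below. The form HTML, the ShowQuestionNotification method name and the handler body are illustrative rather than taken from the sample, and the code assumes the Microsoft.WindowsCE.Forms.Notification class used throughout this post.
using Microsoft.WindowsCE.Forms;
using System.Windows.Forms;
public class MainForm : Form
{
    private Notification notification = new Notification();
    private void ShowQuestionNotification()
    {
        notification.Caption = "Quick question";
        notification.Text = @"<html><body><form>" +
            @"Name: <input type=""text"" name=""name"" /><br />" +
            @"<input type=""submit"" value=""OK"" />" +
            @"</form></body></html>";
        // Fires when the user presses the submit button or a hyperlink.
        notification.ResponseSubmitted += notification_ResponseSubmitted;
        // Depending on the device you may also need to set Icon and InitialDuration
        // before making the notification visible.
        notification.Visible = true;
    }
    private void notification_ResponseSubmitted(object sender, ResponseSubmittedEventArgs e)
    {
        // e.Response holds the form data in application/x-www-form-urlencoded format.
        MessageBox.Show(e.Response);
    }
}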
Buttons can also be utilised within a notification to temporarily hide or permanently close a notification without submitting a form (useful for cancel or postpone style buttons etc). These actions can be specified via the use of button elements within the HTML as demonstrated below:
<!-- This button will minimise the notification -->
<input type="button" name="cmd:2" value="Hide" />
<!-- This button will permanently close the notification -->
<input type="button" name="something" value="Close" />
The value attribute contains the text displayed on the button, while the name (i.e. "cmd:2") controls the action which occurs when the button is pressed. The name "cmd:2" is a special value indicating to the operating system that the notification should be minimised and displayed as an icon that can be clicked to redisplay the notification. Having a button with any other name will cause the notification to be permanently dismissed without the ResponseSubmitted event firing. All other "cmd:n" style button names are reserved by Microsoft for future use.
Hyperlinks
A HTML form can also contain traditional hyperlinks such as the following example:
<a href="help">Display further help</a>
Whenever such a link is pressed within the notification, the ResponseSubmitted event will trigger and the response string will be the string specified as the href attribute (”help” in this example).
Many of the built in operating system notifications utilise hyperlinks to provide access to settings or customisation dialogs.
Processing the response
When a HTML form within a notification is submitted the ResponseSubmitted event will trigger and this is the ideal opportunity for an application to process the contents of a form. The ResponseEventArgs parameter passed to this event handler contains a Response property that includes the current values of all fields within the form encoded in a format known as application/x-www-form-urlencoded.
Section 17.13.4 of the HTML 4.01 standard discusses application/x-www-form-urlencoded form data with the following description of the encoding process.
This is the default content type. Forms submitted with this content type must be encoded as follows:
- `&’.
As an example, if the form specified above contained the strings “Hello World” and “Good Bye” it would be encoded and appear within the Response property as follows:
?fieldname1=Hello+World&fieldname2=Good+Bye
The Microsoft documentation for the Response property contains an example of how to parse such response strings. It does this via some rather brittle string search and replace style operations. The sample code is not very generic, as it will break if you change the structure of your form even a little (such as renaming a field) and it does not deal with decoding hex escaped characters.
Within the sample application mentioned below I have implemented a function called ParseQueryString which performs the application/x-www-form-urlencoded decode process and returns a more easily used Dictionary of field control names to value mappings. This allows you to write a ResponseSubmitted event handler which looks something like the following:
private void notification1_ResponseSubmitted(object sender, ResponseSubmittedEventArgs e)
{
    // This dictionary contains a mapping between
    // field names and field values.
    Dictionary<string, string> controls = ParseQueryString(e.Response);
    // Make use of the field values, in this case pulling the
    // values out of the textboxes and displaying a message box.
    MessageBox.Show(String.Format("first field = {0}, second field = {1}",
        controls["fieldname1"], controls["fieldname2"]));
}
This should make the intention of the ResponseSubmitted event handler easier to determine, and makes for more easily maintained code. The “hard” part of the response parsing logic is now hidden away within the ParseQueryString function, leaving you with an easy to use collection of field values to play with.
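For readers who do not want to download the sample straight away, a helper along these lines would do the job. This is only a sketch, not the sample's actual code: the ParseQueryString name matches the one discussed above, the FormDecoding wrapper class is hypothetical, and it assumes Uri.UnescapeDataString is available on your target framework (otherwise the %HH escapes need to be decoded by hand). It also keeps only the last value if a field name is repeated.
using System;
using System.Collections.Generic;
internal static class FormDecoding
{
    public static Dictionary<string, string> ParseQueryString(string response)
    {
        Dictionary<string, string> values = new Dictionary<string, string>();
        if (response == null || response.Length == 0)
            return values;
        // The response may begin with a leading '?', e.g. "?fieldname1=Hello+World&fieldname2=Good+Bye"
        string query = response.TrimStart('?');
        foreach (string pair in query.Split('&'))
        {
            string[] parts = pair.Split('=');
            string name = DecodeFormValue(parts[0]);
            string value = parts.Length > 1 ? DecodeFormValue(parts[1]) : string.Empty;
            values[name] = value;
        }
        return values;
    }
    private static string DecodeFormValue(string encoded)
    {
        // '+' stands for a space in application/x-www-form-urlencoded data;
        // Uri.UnescapeDataString handles the %HH escapes.
        return Uri.UnescapeDataString(encoded.Replace('+', ' '));
    }
}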
Including images within a notification
Sometimes it is helpful to include a small image within a popup notification. This is possible, but as Keni Barwick found out, the syntax of your HTML has to be fairly precise for the notification balloon to locate the image (a similar situation to images within HTML help files).
You should have good success if you use the file:// protocol, and include a full path to your image using backward slashes to separate directories, i.e. use the format:
<img src="file://\folder\image.bmp" />
For example to include the image \windows\alarm.bmp you would use the following HTML:
<img src="file://\windows\alarm.bmp" />
You could hard-code the path to your image file but this could cause problems if the user decides to install your application in a different location (on an SD card for example). If your images are stored in the same folder as your main executable you can determine the path to your images at runtime by using a function similar to the following code snippet:
using System.Reflection;
using System.IO;
public string GetPathToImage(string fileName)
{
    // Determine the exe filename and path
    Assembly assembly = Assembly.GetExecutingAssembly();
    string executablePath = assembly.GetName().CodeBase;
    // Trim off the exe filename from the path
    executablePath = Path.GetDirectoryName(executablePath);
    // Add the specified filename to the path
    return Path.Combine(executablePath, fileName);
}
string imageURL = GetPathToImage("foo.bmp");
Sample Application
[Download notificationuserresponsesample.zip - 16KB]
A sample application is available for download. The sample demonstrates using a notification to specify the details of a pizza order. It also demonstrates the inclusion of an image within a notification.
When targeting a Windows Mobile 5.0 or above device, you could extend the sample by using the custom soft keys feature mentioned in a previous blog entry to submit the notification by pressing a soft key instead of pressing a button within the body of the notification.
Please feel free to make use of the ParseQueryString method and any associated functionality within your own applications.
Well… what to say?
Thank you!! would be a decent start :)
Your Notification (espacially the spinner feature) is something i missed in the standard Notification class…
Now one question arises… concerning images in the notification…
Is there a way to use images without packing them into the filesystem?
I would rather use a Resourcefile to pack them into one dll instead of having a extra folder holding them…
My own guess would be a plain “no” but i would like you to feel free to prove me wrong… (i would base my opinion on the fact, that the notification uses simple HTML and CSS for displaying the content)
h.a.
Is it possible to embed your images in the DLL and then extract them to save to a file then reference the images within the notification HTML?
Well…!
At least i do not know how to extract the images and save them to a file… but this could work…
And it proves me wrong ;)
But i think the performance would be horrible when doing it every time the notification pops up (happens quite often in my app)
doing it once at the app-start would be the way to go then…
but then i have all the pain to prove whether there is enough space to extract them… and what happens when there is not?
but its an idea… to think about :)
thx
h.a.
Yes, I think this is possibly the only solution to the problem.
Rather than going through all this work, and working out workarounds for the possible error conditions that could occur (such as the file system running out of space), perhaps it would be easier to just accept the extra folder full of images.
If you install your application via a CAB file, the user won’t have any extra difficulty, and checks for enough space will occur as part of the standard installation process.
[...] ‘Send’ button in the PopUp notification bubble I’ve been following tutorials from this site, Windows Mobile Development Blog Archive Capture and respond to user input to a notification bubble The site also shows how to add images and coloured text to a PopUp notification bubble. HTML [...]
|
http://www.christec.co.nz/blog/archives/123
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Books of the New Testament: An Overview
- See Table 11.3, “Approximate Order of Composition of New Testament Books,” p. 357 in Textbook (see also Box 11.1, “Organization of the Hebrew and Christian-Greek Scriptures,” p. 344 in Textbook.
The New Testament may be arranged:
The Synoptic Problem (see Fig. 11.2, p. 351 in Textbook):
How the Written Gospels Came to Be - From Oral Preaching to Written Gospels:
Stage I: The Oral Stage (contd.):
Stage I: The Oral Stage (contd.):
Stage I: The Oral Stage (contd.):
Stage II: Period of Earliest Written Documents (50-70 C.E.):
Stage III: Period of Jewish Revolt against Rome and the appearance of The First Canonical Gospel (66-70 C.E.):
Stage IV: Period of The Production of New, Enlarged Editions of Mark (80-90 C.E.):
Stage V: Period of Production of New Gospels Promoting an Independent (Non-Synoptic) Tradition (90-100 C.E.):
Four Distinctive Portraits Of Jesus:
The Gospel According to Mark:
The Gospel According To Mark (contd.):
The Gospel According to Mark (contd.):
Historical Setting (contd.):
The Leading Characters In Mark’s Account:
Mark’s Attitude Towards Jesus’ Close Associates:
The Geographical Arrangement of Mark’s Account:
Mark presents two different aspects of Jesus’ story:
- 1) the presentation of Jesus in Galilee (a person of authority in word and deed);
- 2) a helpless figure on the cross in Judea.
Five Main Divisions of Mark’s Account:
2) The Galilean Ministry (1.14-8.26):
- Mark’s eschatological urgency;
- “The time has come, the kingdom of God is upon you; repent and believe the Gospel” (1.15);
- The eschaton is about to take place;
- A sense of urgency - the present tense used;
- The author uses the word “immediately” to connect pericopes;
- Jesus’ activity proclaims that history has reached its climactic moment;
2) The Galilean Ministry (1.14-8.26) (contd.):
- Jesus as “Son of Man” (see Box 9.6, p. 375 in textbook);
- Mark’s use of conflict stories;
- Jesus as healer
3) The Journey to Jerusalem: Jesus’ Predestined Suffering (8.27-10.52):
- Ch. 8 as pivotal to Mark’s account;
- Here Mark ties together several themes that deal with his vision of Jesus’ ministry; and
- what Jesus requires of those who follow him;
- Lack of understanding on the part of Jesus’ followers;
- The hidden quality of Jesus’ Messiahship;
- The necessity of suffering on the part of Jesus’ followers;
- Peter’s recognition of Jesus as the Messiah (8.29);
- Jesus tells his disciples to keep this a secret;
- Jesus’ reluctance to have news of his miracles spread abroad - the Messianic Secret;
- the setting is Caesarea Philippi/Banias.
The Messianic Secret and Mark’s Theological Purpose:
- People could not know Jesus’ identity until after his mission was completed;
- Jesus had to be unappreciated in order to be rejected and killed (see 10.45);
- Jesus must suffer an unjust death to confirm and complete his Messiahship;
- This is the heart of Mark's Christology;
- Thus, the relationship between Peter’s confession that Jesus is the Messiah and Jesus’ prediction that he must go to Jerusalem to die (8.29-32).
- A third idea introduced:
- True disciples must expect to suffer as Jesus did (see 8.27-34 and 10.32-45: what is required of a true disciple);
- To reign with Jesus means to imitate his suffering.
The Journey To Jerusalem: Jesus’ Predestined Suffering (8.27-10.52) (contd.):
- Jesus travels to Jerusalem via Transjordan.
4) The Jerusalem Ministry (11.1-15.47):
- For Mark, Jesus makes only one visit to Jerusalem;
- Jesus is welcomed into Jerusalem (11.9-10);
- Jesus accepts a Messianic role;
- Jesus alienates himself from both the Roman and Jewish administrators;
- He arouses hostility;
- His actions in the temple (11.15-19);
- Confrontations and successes against the Pharisees, Herod’s party, and the Sadducees.
Outline of Old City.
4) The Jerusalem Ministry (11.1-15.47) (contd.):
- The first commandment of all (12.28-34);
- Jesus’ foretells the fall of Jerusalem and the destruction of the Temple (Ch. 13 - The Little Apocalypse);
- Mark’s concern with predictions of Jesus’ return (13.5-6, 21-23);
- The tribulations of the disciples will be ended when the Son of Man returns to gather the faithful;
- In the meantime: “keep alert” (13.33); “be awake” (13.37).
4) The Jerusalem Ministry (11.1-15.47) (contd.):
- The Last Supper (14.12-25):
- Actually, a passover meal (see ex 11.1-13.16);
- Jesus gives the passover a new significance (14.22- 25);
- The origin of the Christian celebration of the Eucharist.
4) The Jerusalem Ministry (11.1-15.47) (contd.):
- Jesus’ Passion:
- Mark wishes his readers to see the disparity between Jesus’ appearance of vulnerability and the reality of his spiritual triumph;
- Jesus’ enemies are seemingly ridding the nation of a radical;
- In fact, they are making possible his saving death;
- All this is in accordance with God’s design.
- Gethsemane;
- Mount of Olives;
- Caiaphas, the High Priest;
- Pontius Pilate;
- Barabbas;
- Simon of Cyrene.
- Mary of Magdala as the link between Jesus’ death and burial and the discovery that the tomb is empty (see 15.40-41, 47 and 16.1);
- Joseph of Arimathaea.
5. The Empty Tomb (16.1-8):
- The women flee in terror (16.8);
- They say nothing to anyone for they were afraid (16.8).
- Thus, Mark’s account of the Good News ends abruptly.
By not including resurrection appearances, is Mark expecting a parousia, that is, a second coming or appearance of Christ to judge the world, punish the wicked, and redeem the world?
- Does Mark wish to emphasize that Jesus is absent?:
- He is present neither in the grave; nor as yet triumphal son of man.
- Is Jesus present in memories, and
- In his enduring power over the lives of his disciples?
Added Conclusions (16.9-19):
- Were many Christians unhappy with Mark’s inconclusiveness?
- If so, this could account for the heavy editing of Mark’s account;
- Some editors appended postresurrection appearances of Jesus;
- This made Mark’s account more consistent with Matthew and Luke (Mark 16.8b and 16.9-20).
Amen!
Questions 1, 2, 3, and 5 (do not do the one on parables) on p. 380;
Questions for Discussion and Reflection on p. 380.
|
https://www.slideserve.com/Philip/chapter-11-of-textbook-books-of-the-new-testament-an-overview
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
RDF::Trine::Parser::Turtle - Turtle RDF Parser
This document describes RDF::Trine::Parser::Turtle version 1.019
use RDF::Trine::Parser; my $parser = RDF::Trine::Parser->new( 'turtle' ); $parser->parse_into_model( $base_uri, $data, $model );
This module implements a parser for the Turtle RDF format.
Beyond the methods documented below, this class inherits methods from the RDF::Trine::Parser class.
new ( [ namespaces => $map ] )
Returns a new Turtle parser.
parse ( $base_uri, $rdf, \&handler )
Parses the bytes in $data, using the given $base_uri. Calls the triple method for each RDF triple parsed. This method does nothing by default, but can be set by using one of the default parse_* methods.
parse_file ( $base_uri, $fh, $handler )
Parses all data read from the filehandle or file $fh, using the given $base_uri. If $fh is a filename, this method can guess the associated parser. For each RDF statement parsed, $handler is called.
parse_node ( $string, $base, [ token => \$token ] )
Returns the RDF::Trine::Node object corresponding to the node whose N-Triples serialization is found at the beginning of $string. If a reference to $token is given, it is dereferenced and set to the RDF::Trine::Parser::Turtle::Token tokenizer object, allowing access to information such as the token's position in the input.
|
http://search.cpan.org/dist/RDF-Trine/lib/RDF/Trine/Parser/Turtle.pm
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
The AppD Approach: Java 9 Support
Discover faster, more efficient performance monitoring with an enterprise APM product learning from your apps. Take the AppDynamics APM Guided Tour! Read more about the challenges posed by the Java 9 modularization feature, and the stringent requirements AppDynamics met to remain leaders in this space. We are excited to announce full support for Java 9 as part of our Winter .. » »
What I Don’t Know About Cryptocurrency
Top 10 Highest Paying Technical Jobs for Software Engineers, Programmers
If you are a computer science graduate or someone who is thinking to make a career in software development world or an experienced programmer who is thinking about his next career move but not so sure which field you should go then you have a come to the right place. In this article, I will tell you the top 10 ...Read More »
Service Testing with Docker Containers ...Read More »
Three Ways to Think About Value »
Repeatable Annotations in Java 8
With Java 8 you are able to repeat the same annotation to a declaration or type. For example, to register that one class should only be accessible at runtime by specific roles, you could write something like: @Role("admin") @Role("manager") public class AccountResource { } Notice that now @Role is repeated several times. For compatibility reasons, repeating annotations are stored in a ...Read More »
APIs To Be Removed from Java 10 »
Docker for Java Developers: Deploy on Docker
1. Introduction Many companies have been using container-based virtualization to deploy applications (including JVM based ones) in production way before Docker appearance on the horizon. However, primarily because of Docker, deployment practices using containers turned into the mainstream these days. In this section of the tutorial we are going to glance over some of the most popular orchestration and cluster ...Read More »
|
https://www.javacodegeeks.com/?comment_mail%5Bmanage%5D%5Bsub_new%5D=0&comment_mail%5Bmanage%5D%5Bsub_form%5D%5Bpost_id%5D=1020
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
JDBC - JDBC
JDBC Select Count Example Need an example of count in JDBC
jdbc
logical group of data with a number of columns. JDBC ResultSet Example
Stored
jdbc - JDBC
JDBC statement example in java Can anyone explain me ..what is statement in JDBC with an example
jdbc
CallableStatement Example/
Thanks
JDBC - JDBC
implementing class. Hi friend,
Example of JDBC Connection with Statement... database table!");
Connection con = null;
String url = "jdbc:mysql...){
e.printStackTrace();
}
}
}
For more information on JDBC visit to :
http
CLOB example - JDBC
("com.mysql.jdbc.Driver");
Connection con =DriverManager.getConnection ("jdbc:mysql
jdbc - JDBC
[] args) {
System.out.println("Tabel Deletion Example");
Connection con = null;
String url = "jdbc:mysql://localhost:3306/";
String dbName....
Thanks
jdbc - JDBC
in a database
System.out.println("MySQL Connect Example.");
Connection conn = null;
String url = "jdbc:mysql://localhost:3306/";
String dbName
Example
Example JDBC in Servlet examples.
Hi Friend,
Please visit the following link:
Servlet Tutorials
Here you will get lot of examples including jdbc servlet examples.
Thanks
jdbc code - JDBC
jdbc code are jdbc code for diferent programs are same or different?please provide me simple jdbc code with example? Hi Friend,
Please visit the following link:
Here you
Frameworks and example source for writing a JDBC driver.
Frameworks and example source for writing a JDBC driver. Where can I find info, frameworks and example source for writing a JDBC driver
JDBC : Create Database Example
JDBC : Create Database Example
In this section you will learn how to create database using JDBC with
example.
Create Database :
Database is an organized... the following format:
jdbc:mysql://[host][:port]/[database][?property1][=value1
JDBC: Drop Database Example
JDBC: Drop Database Example
In this section, we are using JDBC API to drop database and describing it
with example.
Drop Database :
Database... connection URL has the following format:
jdbc:mysql://[host][:port]/ question - JDBC
= java.sql.DriverManager.getConnection("jdbc:apache:commons:dbcp:example");
System.err.println...jdbc question
Up to now i am using just connection object for jdbc... a database connection for each user.
In JDBC connection pool, a pool of Connection
JDBC Tutorial, JDBC API Tutorials
backed with and example of simple web
application in JDBC.
Brief Introduction...
UPDATE statement example
DELETE statement example
Understanding
JDBC...
How to control transaction behavior of JDBC connection
Example
JDBC Database MetaData Example
;
}
JDBC DatabaseMetaData Example
JDBC DatabaseMetaData is an interface... in combination with the driver
based JDBC technology. This interface is the tool... objects takes argument
also.
An example given below demonstrate the use Isolation Example
;
}
JDBC Isolation Level Example
JDBC isolation level represents that, how... = "jdbc:mysql://localhost:3306/";
String driverName = "com.mysql.jdbc.Driver...
Delhi
Transaction Isolation level= 4
Download this example code
JDBC Transaction Example
JDBC Transaction Example
JDBC Transaction
JDBC transaction is used.... When you create
a connection using JDBC, by default it is in auto... is the video tutorial which shows you how to run the code example
JDBC Training, Learn JDBC yourself
as a url.
GET DATE in JDBC
This example shows how...
In this Tutorial
we want to explain you an example from JDBC Execute query.
...;
JDBC Execute Update Example
JDBC Execute Update query is
used
JDBC: Alter Table Example
JDBC: Alter Table Example
This tutorial describe how to alter existing table using JDBC API.
Alter Table :
Alter table means editing the existing table..._no varchar(20) "
Example : In this example we are adding new field '
JDBC: MetaData Example
JDBC: MetaData Example
In this section we are discussing how to get information of MetaData using
JDBC API.
MetaData :
DatabaseMetaData interface... the data type maps to the corresponding JDBC SQL type.
Example
error - JDBC
,i got a errors
d:temp> java DBConnect
db Connect Example...(String[] args) {
System.out.println("db Connect Example.");
Connection conn = null;
String url = "jdbc:oracle:thin:@localhost:1521:xe";
String
JDBC: Drop Table Example
JDBC: Drop Table Example
In this section, we will discuss how to drop table using JDBC.
Drop Table :
Dropping table means deleting all the rows... the following format:
jdbc:mysql://[host][:port]/[database][?property1][=value1: Create Table Example
JDBC: Create Table Example
In this section, we are going to create table using JDBC and using database MySql.
Create Table : Database table is collection... Creation Example!");
Connection con = null;
String url = "jdbc:mysql
Update - JDBC
("jdbc:odbc:Biu");
stat = con.prepareStatement("Update Biu SET itemcode...://
Thanks ... in a variable suppose num = 10.
Step2: Execute update statement for example reUpdate Emp
JDBC Select Max Example
JDBC Select Max Example
In this tutorial we will learn how use MAX () in query with mysql
JDBC driver. This tutorial use... SelectMax{
// JDBC driver name and database URL
static String driverName
JDBC Select Count example
JDBC Select Count example
In this tutorial we will learn how work COUNT() in query with mysql
JDBC driver. This tutorial COUNT(*) ...{
// JDBC driver name and database URL
static String driverName
JDBC: Update Records Example
JDBC: Update Records Example
In this section, you will learn how to update records of the table using JDBC
API.
Update Records : Update record is most...:
jdbc:mysql://[host][:port]/[database][?property1][=value1]...
host :The host
ResultSetMetaData - JDBC
; Hi,
JDBC provides four interfaces that deal with database metadata....
Thanks.
Amardeep Hi friend....
For Example :
import java.sql.*;
public class ColumnName{
public static void
creating jdbc sql statements - JDBC
creating jdbc sql statements I had written the following program...=DriverManager.getConnection("jdbc:odbc:second");
stmt=con.createStatement...)
Hi friend,
i think, connection problem. i am sending jdbc
jdbc odbc
jdbc odbc Sir, i want to get the selected value from JCombobox to ms...://...("JDBC All in One");
JComboBox petList = new JComboBox(petStrings
java - JDBC
to that field has to updated to the same page at the client-side.
for example: in filling...(jdbc))....
please......It's very important and urgent.... Hi
JDBC Batch commit method Example
JDBC Batch commit method Example:
In this example, you can learn what is setAutoCommit() method and commit()
method in the jdbc batch processing and how...;com.mysql.jdbc.Driver");
connection = DriverManager.getConnection
("jdbc:mysql
Database Creation Example with JDBC Batch
Database Creation Example with JDBC Batch:
In this example, we are discuss about database creation using JDBC Batch
process on the database server.
First... = DriverManager.getConnection("jdbc:mysql://localhost:3306/","root"... database jdbc_example;
Now use the following sql to create the table
JDBC: Sorting Table Example
JDBC: Sorting Table Example
In this section, you will learn how to sort your table records under any
column using JDBC
API.
ORDER BY Clause :
You can...;desc' as 'ORDER BY column_name desc'
Example :
java - JDBC
java how to insert data into table with out using sql queries.with example program
JDBC Batch Update Example
;
}
Batch Update Example
You can update data in a table batch. To update...(updateQuery1);
and finally commit the connection. An example of batch update... = DriverManager.getConnection(
"jdbc:mysql://localhost:3306/student", "root
Jdbc RowSet
Jdbc RowSet import java.sql.*;
import javax.sql.*;
import...=DriverManager.getConnection("jdbc:odbc:oradsn","scott","tiger");
Statement stmt... number in my example row number 4. So provide the valid row number.
import
JDBC: Rows Count Example
JDBC: Rows Count Example
In this section, You will learn how to count number of rows in a table using
JDBC API.
Counting number of rows -
To count... student";
rs = statement.executeQuery(sql);
Example : In this example we
JDBC Batch Example With SQL Select Statement
JDBC Batch Example With SQL Select Statement:
In this example, we are discuss about SQL select statement with JDBC batch
process. We will create SQL... this result set and display data on the client screen.
In this example we will used two
JDBC Execute Update Example
JDBC Execute Update Example
JDBC... a simple
example from JDBC Execute update Example. In this Tutorial we want to describe
you a code that helps you in understanding JDBC Execute update Example
java.Sql - JDBC
....
Tell me some method to avoid this problem with an example (Use my code...://
Thanks
Hi friend,
Read for more
JSP - JDBC
JSP Store Results in Integer Format JSP Example Code that stores the result in integer format in JSP Hi! Just run the given JSP Example...;/b></td><% Connection con = null; String url = "jdbc:mysql
JDBC: Batch Insert Example
JDBC: Batch Insert Example
In this tutorial, you will learn how to do batch insertion of records using
JDBC API.
Batch Insert :
When you want to insert..., call commit() method to commit all
changes.
Example :
package jdbc
JDBC: WHERE Clause Example
JDBC: WHERE Clause Example
In this section, we will discuss how to use WHERE Clause for putting
condition in selection of table records using JDBC
API..., which returns a single ResultSet object.
Example :
package jdbc
XLS JDBC Example
.style1 {
background-color: #FFFFCC;
}
XLS JDBC
XlS JDBC driver is used to access xls file from java application. It is read
only JDBC driver... or option is not supported in this driver, even
a single WHERE clause.
Example
JDBC DataSource Example
;
}
JDBC DataSource Example
You can establish a connection to a database either using DriverManager class
or DataSource interface. JDBC DataSource... .
An example given below is an example of BasicDataSourse example. To run
JDBC RowSet Example
;
}
JDBC RowSet Example
JDBC RowSet is an interface of javax.sql.rowset... JDBC connection to
the database.
Another advantage of JDBC RowSet... the RowSet object are scrollable and
updateable.
An Example of Row Set Event
JTree - JDBC
a example code for retrieving Jtree Structure from database.
java - JDBC
;
String url = "jdbc:mysql://192.168.10.211:3306/";
String db = "amar... this example.
* go to ms-access and make a table and give it a file name student.mdb.../example/java/awt/
Thanks.
Amardeep
JDBC: Batch Update Example
JDBC: Batch Update Example
In this tutorial, you will learn how to do batch update of records using
JDBC API.
Batch Update :
When you want to update... commit() method to commit all
changes.
Example : In this example we
JDBC: LIKE Clause Example
JDBC: LIKE Clause Example
In this section, you will learn how to use LIKE Clause for selection of table
records using JDBC
API.
LIKE Clause :
Like... the following format:
jdbc:mysql://[host][:port]/[database][?property1][=value1
JTree - JDBC
JTree how to retrieve data from database into JTrees?
JTree - Retrieve data from database
Find out your answer from above
java - JDBC
java how can i give hyperlink in jsp to database Its simple by using the expression in jsp..
Example if u want to access any fields from table, select the values from table by using select query and then
while
insertuploadimahe - JDBC
data to databse.
I'm using netbeans ide to create this example and enterprisedb...");
con = DriverManager.getConnection("jdbc:edb://192.168.1.136:5444/testhr
JDBC: Select Records Example
JDBC: Select Records Example
In this section, you will learn how to select records from the table using JDBC
API.
Select Records : Select statement... format:
jdbc:mysql://[host][:port]/[database][?property1][=value1]..
host
JDBC CallableStatement Example
.style1 {
text-align: center;
}
JDBC Callable Statement
JDBC Callable... = connection.prepareCall("CALL HI()");
A Simple Example of callable statement...
con = DriverManager.getConnection(
"jdbc:mysql://localhost/student
JDBC Insert Statement Example
.style1 {
text-align: center;
}
JDBC Insert Statement Example
JDBC...
// Execution
String conUrl = "jdbc:mysql://localhost:3306/";
String.......
Download this example code
JDBC Update Statement Example
.style1 {
text-align: center;
}
JDBC Update Statement Example
JDBC... reference variable for query
// Execution
String conUrl = "jdbc:mysql.......
Download this example code
JDBC Delete Statement Example
.style1 {
text-align: center;
}
JDBC Delete statement Example
A JDBC delete statement deletes the particular record of the table.
At first create... = "jdbc:mysql://192.168.10.13:3306/";
String driverName = "com.mysql.jdbc.Driver
java - JDBC
me how to increase the life time of sessions with a simple example and syntax
java - JDBC
java How to insert and retrieve image from oracle database?
PLZ,,,explain with an example code?
help,me plz.... Hi friend...(String args[]){
try{
System.out.println("Image insert example!");
con = newDBC Select Record Example
JDBC Select Record Example
In this tutorial we will learn how select specific record from
table use mysql JDBC
driver. This select...;
import java.sql.ResultSet;
public class SelectRecord{
// JDBC driver
java - JDBC
String objects are immutable they can be shared. For example:
String str
jdbc-prepare statement
jdbc-prepare statement explain about prepared statement with example
Types of JDBC drivers
JDBC Driver's Type
JDBC Driver can be broadly categorized into 4 categories--
JDBC-ODBC BRIDGE DRIVER(TYPE 1)
Features
1.Convert the query of JDBC Driver into the ODBC query, which in return
pass the data.
2.JDBC-ODBC
Database Table Creation Example with JDBC Batch
Database Table Creation Example with JDBC Batch:
In this example, we are discuss about table creation in the database using JDBC Batch
process.
First... = DriverManager.getConnection
("jdbc:mysql://localhost:3306/databasename","
Prepared statement JDBC MYSQL
Prepared statement JDBC MYSQL How to create a prepared statement in JDBC using MYSQL? Actually, I am looking for an example of prepared statement.
Selecting records using prepared statement in JDBC
JDBC Delete All Rows In Table Example
JDBC Delete All Rows In Table Example:
In this tutorial provide the example of how to delete all
records from table using mysql JDBC driver. This tutorial
use the "com.mysql.jdbc.Driver"
JDBC Fetch
we want to describe you an
example from Jdbc Fetch. In this program code...
JDBC Fetch
The Tutorial... a class Jdbc Fetch, Inside the
main method we have the list of steps to be followed
JDBC Next
JDBC Next
The Tutorial help in understanding JDBC Next. The
program describe a code that gives you the number of rows in the table. The
code
include a class JDBC Next
jdbc - Java Beginners
jdbc how to run my first jdbc program? Hi anu,
For running jdbc on Mysql, first u have to create a table in SQL.
You can refer the following example.
hope
JDBC Drop Database Example
JDBC Drop Database Example:
In this tutorial, you can learn how drop database if
exist use mysql JDBC driver . In this tutorial we create...";
static String url = "jdbc:mysql://localhost:3306/"
static
Getting Stated with JDBC
. In this step you will actually
learn how to write a
simple
JDBC example and run... the
example code. You can also download the example code the
Simple
JDBC Example section.
What Next?
Now you can go ahead and learn the JDBC concepts
JDBC-Odbc Connectivity
JDBC-ODBC Connectivity
The code illustrates an example from JDBC-ODBC.... To understand this
example we have a class JdbcOdbcConnectivity
jdbc odbc connection
computer and then access in your Java program.
Read the JDBC ODBC example.
Thanks
Here is another example with explanation:
Example of JDBC ODBC...jdbc odbc connection i need a program in java which uses a jdbc odbc
JDBC ResultSet Example
JDBC ResultSet Example:
In this example, we are discuss about ResultSet class...) database query results.
Through this example you can see how to use ResultSet... resultset methods.
The full code of the example is:
package
|
http://www.roseindia.net/tutorialhelp/comment/83192
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
I'm having problems with an assignment in C++. The assignment is this:
Make a program that reads two moments of time, given in hours and minutes. These moments of time are the start and end of a day of work. After that, the hourly pay must be read in. The program is to compute the length of the workday in hours and minutes, and the gross wage for the day.
The screen printout should tell how long was worked, in hours and minutes, and the gross wage.
My code is this:
#include <iostream.h>
main()
{
int Hour1, Hour2, Minute1, Minute2;
double Hour_Pay;
cout << "Key the hour when you start at work: ";
cin >> Hour1;
cout << "Key the minute in that hour you start at work: ";
cin >> Minute1;
cout << "Key the hour when you leave work: ";
cin >> Hour2;
cout << "Key the minute in that hour you leave work: ";
cin >> Minute2;
cout << "Key the hour's pay: ";
cin >> Hour_Pay;
cout << endl;
int Hours = Hour2-Hour1;
int Minutes = Minute1-Minute2;
double Total = (Hours)+(Minutes/60);
double Gross = Total*Hour_Pay;
cout << "You work " << Hours << " hours and " << Minutes << " minutes every day at work."
<< endl
<< "Your gross day's pay is " << Gross << " dollars.";
}
This code is probably not any good at all; I guess I have to do some "variable % variable" but I don't know how or where.
Does anyone have a suggestion on how to do this assignment?
|
http://cboard.cprogramming.com/cplusplus-programming/40585-assignment.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
1. Planning the Oracle Solaris Cluster Configuration
Finding Oracle Solaris Cluster Installation Tasks
Planning the Oracle Solaris OS
Guidelines for Selecting Your Oracle Solaris Installation Method
Oracle Solaris OS Feature Restrictions
Oracle Solaris Software Group Considerations
Guidelines for the Root (/) File System
Guidelines for the /globaldevices File System
Volume Manager Requirements
Example - Sample File-System Allocations
Guidelines for Non-Global Zones in a Global Cluster
SPARC: Guidelines for Oracle VM Server for SPARC in a Cluster
Planning the Oracle Solaris Cluster Environment
Public-Network IP Addresses
Quorum Server Configuration
Network Time Protocol (NTP)
Oracle Solaris Cluster Configurable Components
Global-Cluster Voting-Node Names and Node IDs
Private Network Configuration
Global-Cluster Requirements and Guidelines
Zone-Cluster Requirements and Guidelines
Guidelines for Trusted Extensions in a Zone Cluster
Planning the Global Devices, Device Groups, and Cluster File Systems
Planning Cluster File Systems
Choosing Mount Options for UFS Cluster File Systems
Mount Information for Cluster File Systems
Planning Volume Management
Guidelines for Volume-Manager Software
Guidelines for Solaris Volume Manager Software
Guidelines for Mirroring Multihost Disks
Guidelines for Mirroring the Root Disk
2. Installing Software on Global-Cluster Nodes
3. Establishing the Global Cluster
4. Configuring Solaris Volume Manager Software
5. Creating a Cluster File System
6. Creating Non-Global Zones and Zone Clusters
7. Uninstalling Software From the Cluster
This section provides the following guidelines for planning Oracle Solaris software installation in a cluster configuration.
Guidelines for Selecting Your Oracle Solaris Installation Method
Oracle Solaris OS Feature Restrictions
Oracle Solaris Software Group Considerations
Guidelines for Non-Global Zones in a Global Cluster
Guidelines for Selecting Your Oracle Solaris Installation Method
You can install the Oracle Solaris OS by using a standard installation method or by using the Oracle Solaris JumpStart installation method. In addition, Oracle Solaris Cluster software provides a custom method for installing both the Oracle Solaris OS and Oracle Solaris Cluster software by using the JumpStart installation method. If you are installing several cluster nodes, consider a network installation.
See How to Install Oracle Solaris and Oracle Solaris Cluster Software (JumpStart) for details about the scinstall JumpStart installation method. See your Oracle Solaris installation documentation for details about standard Oracle Solaris installation methods.
Consider the following points when you plan the use of the Oracle Solaris OS in an Oracle Solaris Cluster configuration:
Oracle pmconfig(1M) and power.conf(4) man pages for more information.
IP Filter feature – Oracle Solaris Cluster software does not support the Oracle Solaris IP Filter feature for scalable services, but does support Oracle Solaris IP Filter for failover services. Observe the following guidelines and restrictions when you configure Oracle Solaris IP Filter.
Oracle Solaris Cluster 3.3 3/13 software requires at least the End User Oracle Solaris Software Group (SUNWCuser). However, other components of your cluster configuration might have their own Oracle Solaris software requirements as well. Consider the following information when you decide which Oracle Solaris software group you are installing.
Servers – Check your server documentation for any Oracle Solaris software requirements.
Additional Oracle Solaris packages – You might need to install other Oracle Solaris software packages that are not part of the End User Oracle Solaris Software Group. The Apache HTTP server packages and Trusted Extensions software are two examples that require packages that are in a higher software group than End User. Third-party software might also require additional Oracle Solaris software packages. See your third-party documentation for any Oracle Solaris software requirements.
Tip - To avoid the need to manually install Oracle Solaris software packages, install the Entire Oracle Solaris Software Group Plus OEM Support.
Oracle Solaris package minimization – See Article 1544605.1 Solaris Cluster and Solaris OS Minimization Support Required Packages Group at for information.
When you install the Oracle Solaris OS, ensure that you create the required Oracle Solaris Cluster partitions and that all partitions meet minimum space requirements..
(Optional) /globaldevices – By default, a lofi device is used for the global devices namespace. However, you can alternatively create a file system at least 512 Mbytes large that is to be used by the scinstall utility for global devices. You must name this file system /globaldevices.
Functionality and performance are equivalent for both choices. However, a lofi device provides greater ease of use and more flexibility in situations where a disk partition is not available for use.
Volume manager – Create a 20-Mbyte partition on slice 7 for volume manager use.
To meet these requirements, you must customize the partitioning if you are performing interactive installation of the Oracle Solaris OS.
See the following guidelines for additional partition planning information:
Guidelines for the Root (/) File System
Guidelines for the /globaldevices File System
Volume Manager Requirements
As with any other system running the Oracle Solaris OS, you can configure the root (/), /var, /usr, and /opt directories as separate file systems. Or, you can include all the directories in the root (/) file system.
The following describes the software contents of the root (/), /var, /usr, and /opt directories in an Oracle Solaris Cluster configuration. Consider this information when you plan your partitioning scheme.
root (/) – The Oracle Solaris Cluster software itself occupies less than 40 Mbytes of space in the root (/) file system..
On the Oracle Solaris 10 OS, the lofi device for the global-devices namespace requires 100 MBytes of free space.
.
/usr – Oracle Solaris Cluster software occupies less than 25 Mbytes of space in the /usr file system. Solaris Volume Manager software requires less than 15 Mbytes.
/opt – Oracle Solaris Cluster framework software uses less than 2 Mbytes in the /opt file system. However, each Oracle Solaris Cluster data service might use between 1 Mbyte and 5 Mbytes. Solaris Volume Manager software does not use any space in the /opt file system.
In addition, most database and applications software is installed in the /opt file system.
Oracle Solaris Cluster software offers two choices of locations to host the global-devices namespace:
A lofi device, which is the default
A dedicated file system on one of the local disks
When you use a lofi device for the global-devices namespace, observe the following requirements:
Dedicated use – The lofi device that hosts the global-devices namespace cannot be used for any other purpose. If you need a lofi device for some other use, create a new lofi device for that purpose.
Mount requirement – The lofi device must not be unmounted.
Namespace identification – After the cluster is configured, you can use the lofiadm command to identify the lofi device that corresponds to the global-devices namespace, /.globaldevices.
If you instead configure a dedicated /globaldevices for the global-devices namespace, observe the following guidelines and requirements:
Location -. This file system is later mounted as a UFS cluster file system. Name this file system /globaldevices, which is the default name that is recognized by the scinstall(1M) command.
Required file-system type -.
Configured namespace name- The scinstall command later renames the file system /global/.devices/node@nodeid, where nodeid represents the number that is assigned to an Oracle Solaris host when it becomes a global-cluster member. The original /globaldevices mount point is removed.
Space requirements - The /globaldevices file system must have ample space and ample inode capacity for creating both block special devices and character special devices. This guideline is especially important if a large number of disks are in the cluster. Create a file system size of at least 512 Mbytes and a density of 512, as follows:
# newfs -i 512 globaldevices-partition
This number of inodes should suffice for most cluster configurations.
For Solaris Volume Manager software, you must set aside a slice on the root disk for use in creating the state database replica. Specifically, set aside a slice for this purpose on each local disk. But, if you have only one local disk on an Oracle Solaris host, you might need to create three state database replicas in the same slice for Solaris Volume Manager software to function properly. See your Solaris Volume Manager documentation for more information.
Table 1-2 shows a partitioning scheme for an Oracle Solaris host that has less than 750 Mbytes of physical memory. This scheme is to be installed with the End User Oracle Solaris Software Group, Oracle Solaris Cluster software, and the Oracle Solaris Cluster HA for NFS data service. The last slice on the disk, slice 7, is allocated with a small amount of space for volume-manager use.
If you use a lofi device for the global-devices namespace, slice 3 can be used for another purpose or left labeled as unused.
If you use Solaris Volume Manager software, you use slice 7 for the state database replica. This layout provides the necessary two free slices, 4 and 7, as well as provides for unused space at the end of the disk.
Table 1-2 Example File-System Allocation
For information about the purpose and function of Oracle Solaris zones in a cluster, see Support for Oracle Solaris Zones in Oracle Solaris Cluster Concepts Guide.
For guidelines about configuring a cluster of non-global zones, see Zone Clusters.
Consider the following points when you create an Oracle Solaris 10 non-global zone, simply referred to as a zone, on a global-cluster node.
Unique zone name – The zone name must be unique on the Oracle Solaris host. Although Oracle Solaris Cluster software permits you to specify different zones on the same Oracle Solaris host, each instance of a scalable service must run on a different host.
Cluster file systems – For guidelines about cluster file systems and IPMP configuration, see Oracle Solaris Administration: IP Services.
Private-hostname dependency - Exclusive-IP zones cannot depend on the private hostnames and private addresses of the cluster.
Shared-address resources – Shared-address resources cannot use exclusive-IP zones. For more information about Oracle VM Server for SPARC, see the Logical Domains (LDoms) 1.0.3 Administration Guide.
|
http://docs.oracle.com/cd/E37745_01/html/E37727/babhabac.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
The research done for this post was conducted by the Rackspace Cloud development team.
When we released our Cloud Servers API a year ago, we wanted to adhere as strictly as possible to RESTful practice. We stuck to open, accepted web standards that existed to help take the guesswork out of the developer workload. We didn’t want to invent anything new but rather build things on well-defined standards. To share what we designed and to foster a more open cloud, we open sourced our Cloud Servers and Cloud Files API specifications under the Creative Commons 3.0 Attribution license – this is our commitment to having an open cloud. There are certain features available on EC2 that are not available on Cloud Servers – the ability to upload an image, for example. Likewise, there are certain features on Cloud Servers that are not available on EC2 – the ability to schedule automated backups, for example, or server persistence after a shutdown. At their heart, however, both services offer a standard set of features:
The purpose of this post is to provide a comparison of how this set of common features is exposed in the Cloud Servers and EC2 APIs.
Feature Comparison
RESTful Interface
The Cloud Servers API is truly RESTful. It makes very good use of the HTTP protocol. Every resource in the API (server, image, flavor) is identified by a unique URI. The interface to the API is based on the uniform subset of HTTP operations (GET, POST, PUT, DELETE) on these URIs. In contrast, EC2 supports two APIs: a SOAP-based API, which is strictly RPC, and a Query API. While the Query API supports REST-like queries, it is not truly RESTful because it does not provide a uniform interface: instead of the uniform HTTP operations (GET, POST, PUT, ...), it supports an arbitrary vocabulary of verbs (DescribeImages, DescribeInstances, CreateSnapshot, DeleteSnapshot). Additionally, all HTTP requests on the API hit a single URL; resources are not identified by unique URIs.
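To make the contrast concrete, the following pair of requests is purely illustrative – the account number and identifiers are hypothetical, and the exact endpoints depend on your account:
GET /v1.0/123456/servers/1234 HTTP/1.1                               (Cloud Servers: each server has its own URI)
GET /?Action=DescribeInstances&InstanceId.1=i-1a2b3c4d HTTP/1.1      (EC2 Query API: the verb is a parameter, and every request hits the same URL)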
Web Service Contracts
A web service contract specifies the operations and data types for an API. Contracts are important because they establish a well-defined set of expectations between an API service and the clients that consume it. Contracts also help in versioning the API (see below). Cloud Servers and EC2 APIs both offer support for human readable contracts in the form of Developer Guides and machine readable contracts that can be used to validate operations and to generate code. To define operations, the Cloud Servers API uses WADL for its machine readable contract and EC2 uses WSDL for its SOAP-based API. Operations are not defined in a machine readable manner in EC2’s Query API. All APIs, however, define their entities (servers, flavors, images, etc.) in terms of XML Schema (XSD).
Cloud Servers API contracts offer a greater level of detail in some important respects. For example, when defining operations the Cloud Servers API defines what things can go wrong on a per operation basis: There are six distinct failures that can occur when requesting details about a server and eight distinct failures that can occur when changing an administrative password. These possibilities are described both in the Cloud Servers Developer Guide and in the WADL. EC2, on the other hand, defines a set of Error Codes in their API reference manual, but does not associate these codes to individual operations. Additionally, although WSDL offers support for associating faults with operations, EC2’s WSDL contract does not do so. In EC2, it becomes the responsibility of the client to determine what error conditions are possible when performing one operation or another. This itself can be error prone – especially when one considers that error conditions may change from one revision of the API to the next.
Cloud Servers API contracts also offer greater refinement when defining data types. For example, both Cloud Servers and EC2 Instances can be in a certain state: the server is running, or it is building, it is suspended or it is in error. Both APIs define an XSD type to capture this state:
The big difference here is that the Cloud Servers type defines all possible values for the state and gives an explanation for what each means. In EC2, the state is defined as an integer and a string, it becomes the responsibility of the client to determine what the possible states are, and more importantly what it means for an instance to be in a certain state.
Representation Formats
The Cloud Servers API supports both XML and JSON for representing entities. Offering support for multiple representations allows different kinds of clients to interact with the API more easily. For example, for enterprise developers (particularly those using JEE and .Net) having an XSD backed XML representation means writing significantly less code. For developers working with JavaScript and other dynamic languages, JSON is a lot more convenient.
Both representation formats are supported fully in the Cloud Servers API. It is possible, for example, to initiate a request by sending XML data and receive a response in JSON. The ability to support this use case may seem odd, but it is of particular importance to developers working on web applications. These developers often work on back-end components, using frameworks that offer strong support for parsing and generating XML, whose only purpose is to provide information to front-end JavaScript components that are more apt to support JSON efficiently. In most cases, the need to convert between these formats can be eliminated. Instead, conversion between the formats is achieved automatically by the API via HTTP’s content negotiation protocol.
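As a rough Java sketch of what that looks like from a client (the endpoint, account number, auth header, and the authToken/serverXml variables are placeholders assumed for illustration; error handling omitted), the request body can be XML while the Accept header asks for a JSON response:
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

URL url = new URL("https://servers.api.example.com/v1.0/123456/servers");  // placeholder endpoint
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Content-Type", "application/xml"); // send the entity as XML
conn.setRequestProperty("Accept", "application/json");      // ask for the response as JSON
conn.setRequestProperty("X-Auth-Token", authToken);         // assumed authentication header (see the Authentication section)
conn.getOutputStream().write(serverXml.getBytes(StandardCharsets.UTF_8));
String jsonResponse = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A").next();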
Authentication
The Cloud Server’s API performs authentication via a simple token-based protocol. A token can be obtained via a REST call from a user’s credentials. The token is then passed in a header on every subsequent request. The HTTPS protocol is used to prevent the token from being stolen. Token’s are ephemeral and eventually expire. At any time, the system may ask that a token be renewed by re-prompting for credentials.
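A minimal Java sketch of that flow follows; the authentication URL and the X-Auth-User, X-Auth-Key, and X-Auth-Token header names reflect my understanding of version 1.0 of the API and should be treated as assumptions rather than a definitive reference:
import java.net.HttpURLConnection;
import java.net.URL;

URL auth = new URL("https://auth.api.rackspacecloud.com/v1.0");    // assumed auth endpoint
HttpURLConnection conn = (HttpURLConnection) auth.openConnection();
conn.setRequestProperty("X-Auth-User", username);                  // account user name
conn.setRequestProperty("X-Auth-Key", apiKey);                     // account API key
String token = conn.getHeaderField("X-Auth-Token");                // ephemeral token returned by the service
String managementUrl = conn.getHeaderField("X-Server-Management-Url");
// every subsequent request then simply adds the token header:
// conn.setRequestProperty("X-Auth-Token", token);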
EC2’s authentication mechanism involves digitally signing every request. Each user is given a key pair. Requests in both the Query and SOAP API must be strictly encoded and ordered – in the Query API, HTTP query parameters must be in alphabetical order and their content must be strictly URL encoded. For example,
Given the request: &ImageId.1=ami-2bb65342 &Version=2009-11-30 &Expires=2008-02-10T12%3A00%3A00Z &SignatureVersion=2 &SignatureMethod=HmacSHA256 &AWSAccessKeyId=
A signature string must be created from the parameters. The string must contain parameters in alphabetical order with URL encoded values:
GET\n ec2.amazonaws.com\n /\n AWSAccessKeyId= &Action=DescribeImages &Expires=2008-02-10T12%3A00%3A00Z &ImageId.1=ami-2bb65342 &SignatureMethod=HmacSHA256 &SignatureVersion=2 &Version=2009-11-30
The string is signed and the Base 64 URL encoded signature is passed as a request parameter: &Version=2009-11-30 &Expires=2008-02-10T12%3A00%3A00Z &Signature=&SignatureVersion=2 &SignatureMethod=HmacSHA256 &AWSAccessKeyId=
Amazon’s authentication mechanism has the ability to securely validate requests without the need to use the HTTPS protocol. The mechanism, however, can be error prone and comes at the price of complicating client code.
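To give a feel for the extra client-side work, here is a rough Java sketch of the HMAC-SHA256 signing step; the canonical query string construction is simplified, and parameter ordering and URL encoding must follow Amazon's rules exactly:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// canonicalQuery holds the alphabetized, URL-encoded query parameters
String stringToSign = "GET\n" + "ec2.amazonaws.com\n" + "/\n" + canonicalQuery;
Mac mac = Mac.getInstance("HmacSHA256");
mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
byte[] rawHmac = mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
String signature = Base64.getEncoder().encodeToString(rawHmac);
// the Base64 signature is then URL-encoded and appended as the Signature query parameter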
Limits
Both Cloud Servers and EC2 services set limits on certain operations to prevent abuse. In Cloud Servers, however, these limits can be queried via the API. For example, by default, Cloud Servers allows an account to create no more than 50 servers per day – it is possible to determine this fact via the API. Additionally, should an account create 50 servers within a day, the exact time when an additional server may be created can be queried programmatically.
Efficient Polling
The Cloud Servers API and the EC2 API are both asynchronous. This means that a request to perform an action does not wait until the action completes before returning. To test the status of operations, developers must asynchronously poll to determine the state of a particular image or server instance.
The Cloud Servers API provides a “changes-since” operation that can provide for efficient polling of groups of servers or images with a single call. Changes-since works by providing developers with a list of changes that have occurred since a particular point in time. Additionally, Cloud Servers supports the standard HTTP If-Modified-Since header, which allows for a conditional GET operation for efficient polling of a single entity.
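For the single-entity case, a conditional GET might look like the following Java sketch (the resource URL and timestamp variable are placeholders):
import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection conn = (HttpURLConnection) new URL(serverUrl).openConnection();
conn.setIfModifiedSince(lastPollMillis);   // sends the If-Modified-Since header
if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
    // 304: nothing has changed since the last poll, so there is no body to parse
} else {
    // 200: read the updated representation and reset lastPollMillis
}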
Paginated Collections
Both Cloud Servers and EC2 APIs support calls for requesting collections of servers and images. The Cloud Servers API additionally offers support for paginated collections: When dealing with long lists of items a developer can request to receive a page of items of a certain size. This is useful when presenting lists of items in a user interface since it is possible to request a page at a specified offset – thus reducing the amount of memory that needs to be maintained on the client side.
Versioning
Both Cloud Servers and EC2 APIs support versioning. In Cloud Servers, the base endpoint URL denotes the requested version:
In the EC2 Query API, the requested version is passed as a parameter: &Version=2009-11-30&Expires=2008-02-10T12%3A00%3A00Z &Signature=&SignatureVersion=2 &SignatureMethod=HmacSHA256 &AWSAccessKeyId=
In the EC2 SOAP API, the requested version is determined by the namespace of the request XML:
There are two features that set the Cloud Servers API apart: First, information about the current version and all available versions of the API can be queried. Version information includes the current version status and pointers to the human readable and machine processable API contract for that version:
Another subtle feature is that the API service has a well documented lifecycle. The lifecycle consists of an ordered set of version states:
BETA → CURRENT → DEPRECATED
Having a query-able version allows developers to develop clients that can inform their users when a new version of the API is available.
Conclusion
We hope the information provided has helped you understand our API better, why you would want to use it and what makes it different than that of Amazon’s EC2 API. Of course, everyone has their own preference and needs, and that is why we built our API on open standards and have followed a RESTful practice. We have no intention to lock anybody in and believe that solid, well-received cloud standards are the key to avoiding cloud lock-in. We encourage your continuous feedback and appreciate your contributions.
|
http://www.rackspace.com/blog/a-close-look-at-the-rackspace-cloud-servers-api-and-how-it-compares-to-amazons-ec2-api/
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
4 February 2008
You should have a good understanding of developing Flash applications and components, as well as Flash video.
Additional requirements
Flex Builder 2.01 (includes free Flex 2.01 SDK)
Note: To test the Flex component, you must download the Flex 3 beta from Adobe Labs.
Flex Component Kit for Flash CS3 Professional
Intermediate
Building applications in Adobe Flash that can also be used as Adobe Flex components can be a challenge, as I learned with a project involving an embeddable video player. Learn from this example how to create a project enabling you to share code across three versions of this player published as a stand-alone Flash application, a Flash-generated Flex component, and an editable Flash CS3 Professional component.
This all started a few months ago. At Almer/Blank we do a lot of work with Flash video. I soon got tired of building new Flash video players for our clients that included more or less the same functionality as all the others—albeit perhaps in a somewhat different design. So I decided to build a lightweight, easily skinnable Flash video player that we would utilize for a wide variety of projects. We chose Flash primarily because of the size of the Flex framework. Many of our video players are utilized as embeddable players—the same type of YouTube-style player that you see embedded everywhere around the web (see Figure 1).
The primary virtue of these embeddable video players is file size: they must load quickly and not weigh down the pages in which they exist. We had built a video player similar to this one in Flex and it weighed 300K—even without any custom fonts. The video player I built in Flash ended up only around 40K.
However, the first site into which this player was to be implemented was—like most of the work we at Almer/Blank produce—built in Flex. So, right off the bat, unless I wanted to build two video players, I needed to export this Flash video player as a Flex component.
Finally, another client wanted a shared video player for their video site that they could customize. Although I wanted to let them have it, I didn't want to release any source code. That's when I learned how to author FLA-editable components for Flash CS3.
In short, I had to build one single application that I could publish three different ways:
Either I was going to figure out a new workflow or I was going to be duplicating a lot of work! I decided to bite the bullet and figure out the workflow.
Planning your Flash application for any of these three items isn't that much of a challenge, but building an architecture and workflow that enabled me to export all three from the same project brought some nasty little quirks and surprises. This article is about these lessons. Before I get into this workflow, however, it's worth taking a few moments to review just what these components are.
A lot of people know at least something about Flash components—they're the things that help make developing certain features and interactions much simpler. Need a drop-down list? Add a ComboBox component. Need to play some video? Add an FLVPlayback component. Need to display closed captioning with that video? Add an FLVPlaybackCaptioning component. All of the components that come with Flash CS3, and any custom ones you install, are visible in the Components panel (Window > Components), as shown in Figure 2.
The landscape of Flash components is much richer than many people appreciate—and continues to mature at a rapid pace. It's not your grand-daddy's SmartClip, that's for sure. This article focuses specifically on two aspects of Flash components: the editable components and the Flash-authored Flex components.
Before I continue, it's worth highlighting that there is much more to components than what I cover here:
And I haven't even mentioned styling. In short, there's much, much more that you can do with components.
The two specific features of Flash components that this article discusses are both new additions to Flash CS3. In my opinion, they both add key value to overall Flash platform development. The first feature, editable v3 components, finally make it possible to have a clean workflow between developers and designers by enabling the distribution of editable visual assets along with compiled code. The second feature is the Flex Component Kit for Flash CS3 Professional, which makes it much easier to generate components in Flash—the tool that has much more visual power and a wider suite of tools for designers to build experiences—that will work cleanly in the Flex framework and your Flex Builder projects.
With Flash CS3 Professional and ActionScript 3.0, Adobe completely overhauled the preinstalled components. This means that the new v3 components are smaller, more efficient, and more powerful—and there are more of them.
But wait, there's more! One of the most challenging tasks associated with using components has always been customizing the visual assets—for example, having a designer skin them with custom graphics. It's always been possible, but it's also been difficult. With most of the v3 components, all you have to do is double-click the component instance to see a nicely organized layout of all graphical elements used by the component (see Figure 3). Just change what you want to change. I call these "FLA-editable components" in this article.
As with earlier versions of Flash, it is possible to create your own components. However, in Flash CS3, you can also make your components FLA-editable, just like the ones that come preinstalled. This is a tremendous improvement over the workflow that was possible with previous versions of Flash components. You can code up anything you want, turn it into a component with the compiled code for easy distribution, and still have all the assets completely editable. It's amazing.
In the time between the release of Flash 8 and Flash CS3, Flex 2 was released. Flex 2, which is also based on ActionScript 3.0, is a framework that makes it much easier to build Flash applications. Like Flash, Flex exports SWF files that play in Flash Player. Unlike Flash, Flex facilitates the application development workflow primarily through the use of components. In essence, everything in Flex is a component.
Unfortunately, a Flex component is not the same as a Flash component. The Flex framework uses a series of classes that are effectively incompatible with Flash SWF files. What if you wanted to build certain assets more complex than a single skin for a Flex application in Flash?
The Flex Component Kit for Flash CS3 Professional comes to the rescue. This kit includes an extension for Flash CS3 that enables you to build components for Flex. In essence, you can convert any piece of code that runs in Flash into a component that is usable by Flex.
I should preface this description by explaining that this workflow is the result of what I learned by doing this project. Without having a nifty article like this one at my disposal, I did not know exactly how to begin the project. I began with what I assumed was the first step: getting the application to work in Flash as a stand-alone Flash SWF file. And that's exactly what I did. I spent a couple of days working on the code and integrating my designer's assets. I'll call this Version 1.
At the end of Version 1, I had a Flash application with all my symbols. The app had a document class called Player and each MovieClip symbol in the library was linked to a class file (for example, the progress bar movie clip was linked to a ProgressBar class file). Because of how I've come to work with Flash since the addition of ActionScript 3.0, the Stage and Timeline were completely empty—the Player document class instantiated all display objects through code. Instantiating all visual elements gave me a lot of flexibility.
Because this player had to be flexible, I wanted the discretion of adding and removing visual elements at runtime. For example, if the player receives an XML playlist without a <title> node, I do not need to attach the title bar—which means that the video display window has more room to expand. If the playlist XML has no <link> node, I do not need to add a link button—which means the other controls in the control bar can be positioned differently. In short, instantiating all visual elements through code, rather than directly on the Timeline, gave me a lot more control over the rendering and performance of the video player. My player looked something like Figure 4.
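Purely to illustrate the idea (the real source differs, and the property and class names here are hypothetical), the decision logic looked roughly like this:
// E4X check: only attach the title bar when the playlist actually provides a title
if (playlistXML.title.length() > 0) {
    titleBar = new TitleBar();
    addChild(titleBar);
} else {
    videoDisplay.height += TITLE_BAR_HEIGHT; // reclaim the space for the video window
}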
The specific code I used is irrelevant to this topic—since the goal is to build a workflow that works for any Flash CS3 application. However, the basic structure of my app does matter a fair bit, as you'll see next.
Building the video player to work as a regular Flash application was just the first step. I proceeded next by making the project publishable as a Flex component. I'd already authored Flex components in Flash, and the process was pretty familiar.
I got started on version 2 of the project: publishing a Flex component from Flash. I duplicated the project folder and got started converting the video player app. The process for creating a Flex component in Flash is actually pretty straightforward, and certainly well documented. In fact, when you download the Flex Component Kit for Flash, there is a 21-page PDF file that includes step-by-step explanations and walkthroughs.
The first step is to install the Flex Component Kit for Flash CS3 Professional, if you haven't already. You can download the kit from Adobe Labs.
The download includes an MXP file, which you can install through the Adobe Extension Manager. Launch the Adobe Extension Manager (which you should have if you already have Flash CS3 installed). Click Install and then navigate to and select the MXP you just downloaded. Click Accept on the disclaimer/agreement. When you are done, your Extension Manager window should look something like Figure 5.
Make sure the new Flex component is checked. If Flash CS3 is open, restart it. Now Flash CS3 is ready to start creating Flex components.
My first problem appeared immediately: I'd built my Flash stand-alone version using a Document class. This meant that the main class file for my video player was attached to my main Timeline. But authoring any component in Flash requires a symbol in the Flash document library, and the main Timeline is not in the library. So I created a new MovieClip symbol in my library called Player, removed the document class linkage on the FLA file (so the main Timeline no longer had a class file association), linked the new Player movie clip to the Player class, and made the appropriate changes to the Player.as file. The Player movie clip in my library was actually empty, as the Stage was in Version 1; it just linked to the Player class, which controlled and instantiated all objects as needed. Figures 6 and 7 show the structure before and after this process.
Now my second problem popped up. I successfully converted my player to run from a movie clip in the library rather than the document itself. However, this required linking the Player MovieClip to my Player.as. What's wrong with that? Well, for Flash to convert a movie clip into a Flex component, there must be a linkage ID assigned to the movie clip that will serve as the component name in Flex. When I tried linking that movie clip to a custom class file in Flash, I experienced no end of problems. In short, I couldn't figure out how to do it; I couldn't successfully have the linkage ID for the movie clip both link to a custom class and serve as the component identifier in Flex. It didn't crash or cause an error to appear—it just kept failing in new and imaginative ways.
So I created a new movie clip in my library called MediaPlayerFlexComponent. Then I inserted some code on frame 1 of the timeline of this new movie clip that instantiated and initiated the video player. Simplified somewhat for the purposes of this article, this code looked like the following:
import com.almerblank.ABMP.Player; var player:Player = new Player(); addChild(player);
Yes, code on the timeline! Sorry. If it makes you feel any better, what I ended up doing was taking that code, pasting it into an AS file (just an ActionScript file, not a class file) and then including that with the following line:
include "../com/almerblank/ABMP/shell/ABMP_flx.as";
In this way, I could still manage the timeline code from the project package, ABMP.
To finish this process off, I needed to establish the proper linkage for my new MediaPlayerFlexComponent. To do this, right-click the symbol in your library and select Linkage.
Ensure that Export for ActionScript and Export in First Frame are checked. Next to Class, enter the name by which you want to refer to this component in Flex (see Figure 8). In this case, I used MediaPlayerFlexComponent. I left Base Class unchanged from the default flash.display.MovieClip setting.
Now the structure of the app looked like Figure 9.
Now I could finally export the Flex component from Flash. To do this, select the MediaPlayerFlexComponent MovieClip in the library and then select Commands > Make Flex Component.
If your FLA frame rate is not already set to the default Flex frame rate of 24 fps (which it wouldn't be if you left it at the default Flash frame rate of 12 fps), you will see a dialog box prompting you to approve the change to 24 fps. If you do see this dialog box, click OK. Fortunately this will have little effect on the component because all code and object instances occur on a single frame.
The first time you compile this component, you should see a message similar to this in the Flash Output panel:
Command made the following changes to the FLA:
Turned on Permit Debugging
Turned on Export SWC
Set frame rate to 24
Imported UIMovieClip component to library
Component "MediaPlayerFlexComponent" is ready to be used in Flex.
As you can see from line 5, an instance of the UIMovieClip component has been added to your FLA library. And if you check the linkage of MediaPlayerFlexComponent, you will see that the base class has been automatically updated to mx.flash.UIMovieClip (see Figure 10).
For more information on the UIMovieClip, check the UIMovieClip entry in the Flex 3 Language Reference.
From now on, when you want to update this component, all you have to do is generate the SWF file by selecting File > Publish or Control > Test Movie.
In whatever folder you have set as the destination for your published SWF file, you will now see a SWC file of the same name as the SWF file. That SWC file is your Flex component.
To start testing this component, open up Flex Builder and create a new project. Now add your new SWC file to the project library paths. Select Project > Properties. In the left column, select Flex Build Path. In the right panel, select the Library path tab (see Figure 11).
On this panel, click Add SWC, click Browse to navigate to the SWC file you just created, and then click OK. Once you do, your library paths should be updated. Click OK.
Hint: When testing this component, I was constantly updating the SWC file from Flash. Each time you update the SWC component, you need to update the SWC file that your Flex project library paths point to. I discovered that Flex will update the SWC file if you return to your library paths, select the SWC component, click Edit, click OK without changing anything, and then click OK once more to update the library paths. You don't have to remove and re-add the SWC file each time.
In your application MXML, you must specify the namespace your new SWC component uses. The default (which is what your SWC uses unless you override it) is global, or *. So, in the Application tag, add something like:
xmlns:myswc="*"
Now, when you start a tag like this:
<myswc:
you should see some code hinting from Flex Builder indicating that it knows your new component (see Figure 12).
That's great. It works! Or, if your component looks a bit more like mine did at this stage, it doesn't. Which brings me to the next set of challenges.
In order for Flex to handle your component properly, there must be something physical on the timeline of the movie clip from which the SWC is generated. As I explained, in this version I instantiated everything from the Player class, so there was nothing on the timeline. Which meant it didn't work.
Of all the challenges that popped up, this was the most easily surmountable. I returned to Flash, edited the MediaPlayerFlexComponent MovieClip, drew a rectangle on the Stage, and gave it a fill with 0% alpha (so it was effectively invisible). I then republished my app (updating the SWC file), updated the library paths in my Flex project, and retested.
Voilà, it worked! Figure 13 shows the app structure after learning that lesson.
One feature of this video player is the ability to render dynamically for different sizes. The player, which is designed at about 400 × 400 (I'm using round numbers instead of the actual dimensions, for simplified math), doesn't simply scale, it makes more intelligent choices on how to accommodate runtime dimensions that are different than those of the initial design. For example, the control bar is always 24 pixels tall, no matter how tall the player is.
In order to test this feature in Flex, I modified the width and height values of the MXML tag:
<myswc:MediaPlayerFlexComponent
When I tested the component with these settings, I discovered an odd behavior when altering the widths and heights of these components. Instead of rendering at 200 × 200 pixels with the proper layout adjustments I'd accounted for in the code, the player rendered at 100 × 100 pixels—and just scaled down, with none of the layout adjustments my code should have handled.
It took me a little bit of playing around to figure out what was going on. When instantiating your component, Flex compares the original size of your component (the width and height of the graphic we placed on the Stage, which in our case is 400 × 400) to the size of the instantiated component (here, 200 × 200) and concludes that the scaleX and scaleY are 50% (200 / 400). So Flex renders the component instance at 200 × 200 but also applies a scale transformation, so the component instance actually appears at 100 × 100 (50% of 200 × 200).
In the end, all I did to remedy this situation was apply these two lines of code to my ABMP_flx.as ActionScript file, which is included on frame 1 of the MediaPlayerFlexComponent MovieClip inside of Flash:
scaleX = 1; scaleY = 1;
This forces the component instance to render at 100% of its scale. Voilà! It all worked. At this point, my video player looked something more like Figure 14.
So now I had two separate, but very similar, code bases to produce the same application in two different ways—as a Flash SWF file and as a Flex component. As my next step, I wanted to get the code to work to produce a FLA-editable Flash component.
I found this part of the project pretty exciting. I knew that Flash CS3 components could be editable on the Timeline because I'd used a few of the ones that come preinstalled with Flash. But when I started searching for documentation on how to do it, I was quickly frustrated. There is very little information available on how to create Flash CS3 editable components, and it involves a mystical object known only by the somewhat ambiguous name of "ComponentShim."
Fortunately, I stumbled upon Spender's post on FlashBrighton, Creating FLA-based components in Flash CS3. That tutorial taught me the basic process of utilizing the component shim, so I started trying to convert my video player into an editable component.
Your component will have to extend the UIComponent class. In order for your project to extend that class, you must add that class path to your document settings. Select File > Publish Settings, click the Flash tab, and with ActionScript 3.0 selected in the ActionScript Version drop-down, click Settings. This opens the ActionScript 3.0 Settings dialog box (see Figure 15).
Click the "+" button to add a new class path and insert the following:
$(AppConfig)/Component Source/ActionScript 3.0/User Interface
For more information on the UIComponent, see the Class UIComponent entry in the Flex 3 Language Reference.
I needed a class that extended the UIComponent, so I created a new class called ABMP_flc.as:
package com.almerblank.ABMP.shell { import fl.core.UIComponent; import flash.display.*; import com.almerblank.ABMP.*; import com.almerblank.fl.utils.events.*; public class ABMP_flc extends UIComponent { public function ABMP_flc(){ } private var _player:Player; protected override function draw():void { init(); } protected function getSkinName():String { return "ABMediaPlayerFlashComponentSkin"; } private function init():void { _player = new Player(); addChild(_player); } } }
Create a new movie clip in your library called Avatar. On frame 1, draw a rectangle with x = 0 and y = 0. Make the height and width whatever you want to be the default for the component. The rectangle should have a hairline stroke (for scaling) and no fill.
Add a new movie clip to the library. I called mine ABMP_flc. Now link it to the class you wrote earlier in Step 2. Right-click the movie clip in the library, check the Export for ActionScript and Export in First Frame check boxes. For Class, enter the path to the class you wrote. Leave Base Class as flash.display.MovieClip. I'll call this the component MovieClip.
Edit this new component MovieClip. On frame 1, add an instance of the Avatar MovieClip at 0,0. As I learned from Spender's post, UIComponent uses this instance to set its default width and height, and then removes it from the display list immediately. It does this only for the DisplayObject at index 0, so you don't want any other display objects on this frame.
Once you complete this component and someone uses it in a Flash app, all of the library assets from your component will be added to that FLA file. These are the assets that your designer needs to update and redesign the app. Instead of forcing your designer to hunt around the library to find all the movie clips and graphics that you use and update them one at a time, create a guide that facilitates this process.
If you read Spender's post, you'll see how you can do this using a dedicated skins MovieClip for this purpose. However, I found it easier simply to place a guide layer on frame 1 of the component MovieClip and include all the component assets in that guide (see Figure 16).
To do it this way, insert a new layer in the component MovieClip, right-click the layer, and select Guide. Drag all the assets you want to the layer. Layout and positioning and parent-child hierarchies do not matter. Since this is a guide layer, it will not be included in what is published from Flash.
Right-click the MovieClip component in your library and select Component Definition. Next to Class, assign a class name for this component and click OK.
Now I show you how to handle the final step of preparing an editable Flash CS3 component: using the component shim.
When you distribute this component, you distribute a FLA file. It contains all the library symbols of this component, which is how the visual assets are distributed.
But you probably don't want to distribute the code along with the component FLA. This is where the ComponentShim symbol comes in. In effect, ComponentShim enables you to compile all your code into your FLA library, so that when you distribute the component FLA library, you can distribute the compiled code and not have to include your class files.
Create a movie clip in your library and give it some meaningful name, like Shim. To make it easy to find it on the Stage later, include some shape on frame 1 of Shim. Then right-click Shim in the library and select Linkage. Check the Export for ActionScript and Export in First Frame check boxes and assign a name to the class, such as Shim (the class name is arbitrary; just ensure that it does not conflict with any real classes). Click OK to register the changes.
Next, right-click Shim in the library and select Convert to Compiled Clip. If your classes compile properly, you will see a symbol in your library with the component icon (as opposed to the MovieClip icon) with a name like "Shim SWF" (it will append "SWF" to whatever the name of the source movie clip is).
Here is where you may think you've run into a major problem. (I know I did.) The Shim SWF component must be added to each movie clip in your library that links to a real class. For each such movie clip in your library, you must add Shim SWF to frame 2 of the movie clip. Because I had over 30 movie clips in my library that linked to real classes, that meant I had to add the Shim SWF to 30 separate movie clips. And because I had to recompile the Shim SWF each time I updated my code, this meant I would have to do this each time I wanted to update the component.
Seeing as the whole purpose of this exercise was to build an efficient workflow, this was a game-stopper to me. I couldn't spend 20 minutes to update the Shim SWF instances throughout my app each time I wanted to test an update.
This led to the final major architectural adjustment in my video player: the shift from inheritance to encapsulation.
Flash is an incredibly visual environment. In most projects—including the code-heavy ones—controlling visual elements with code is essential. This means your code, which resides in class files, must be able to control symbols from the FLA file, either directly from the library or on the Stage. There are two basic ways of doing this: encapsulation and inheritance.
Encapsulation means that your class has a pointer to a movie clip, so the code can control the movie clip by talking to the pointer. Inheritance means that your class is a movie clip, so it can control the movie clip by controlling itself.
Here are two very simple classes that attempt to illustrate that point—both would set the alpha of a movie clip to 50%:
package { import flash.display.MovieClip; public class EncapsulationSample { public var mc:MovieClip; function EncapsulationSample() { mc = new MovieClip(); mc.alpha = .5; } } }
package { import flash.display.MovieClip; public class InheritanceSample extends MovieClip { function InheritanceSample() { alpha = .5; } } }
If I wanted to instantiate each of these from the Stage and add the generated movie clip to the Stage, the code for both cases would look like the following:
var encap:EncapsulationSample = new EncapsulationSample(); addChild(encap.mc); var inherit:InheritanceSample = new InheritanceSample(); addChild(inherit);
Back in the ancient, mystical world of ActionScript 2.0, I was never really that comfortable with movie clip inheritance. In most cases, I opted for encapsulating my movie clips rather than extending them. With the ease and flow of ActionScript 3.0, however, I've utilized inheritance much more frequently. Hence my problem. As I explained when I described Version 1 of the video player, I used inheritance for dozens of classes, linking the movie clips in my FLA library to custom class files that extended the movie clip and defining the behavior for those assets (such as mute button, progress bar, title bar, and so on). Unfortunately, this is not a wise path when it comes to creating components.
So, as one of dozens of similar examples in my source, I have a TitleBar in my video player. Up until this point in development, I had a TitleBar MovieClip in my library that linked to a TitleBar.as class file. That TitleBar class file extended MovieClip and then included all the other code that I needed to run my TitleBar. But when publishing as a FLA-editable Flash component, this meant I would have to update the Shim SWF in the TitleBar.
To ensure that I would have to update only the Shim SWF in one location throughout my component, I went through and changed all the code to utilize encapsulation instead of inheritance. Returning to my TitleBar example, this meant that the TitleBar MovieClip in my library now linked to a nonexistent class, TitleBarMC. My TitleBar.as class no longer extended MovieClip but instead now included a property called mc, which was an instance of the TitleBarMC MovieClip. So my TitleBar class could still effectively control my TitleBarMC MovieClip but I would not have to update the Shim SWF in the TitleBar at all.
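The refactored classes followed the same pattern as the EncapsulationSample shown earlier. The following is only a simplified sketch of the TitleBar case, not the actual source, and the text-field name inside the symbol is hypothetical:
package com.almerblank.ABMP {
    import flash.display.MovieClip;

    public class TitleBar {
        // pointer to an instance of the library symbol linked to the (otherwise code-free) TitleBarMC class
        public var mc:MovieClip;

        public function TitleBar() {
            mc = new TitleBarMC();
        }

        public function setTitle(title:String):void {
            mc.titleField.text = title; // hypothetical text field inside the symbol
        }
    }
}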
At the end of this refactoring process, the only movie clip in my library that required the Shim SWF was the main component MovieClip, ABMP_flc, which you built in Step 2. Figures 17 and 18 illustrate the changes. The resulting source looked something like Figure 19.
I finally had my video player working all three ways: as a Flash SWF, as a Flex component, and as an editable Flash component. However, I had accumulated three separate code bases along the way to make it happen. The whole point when I started was to get the project to where I had only one code base to maintain and update.
So now I just had to make that happen.
I certainly learned some peculiar behaviors in order to make the vast majority of my code directly reusable among these three different projects. Being stuck with three copies of virtually the same code and assets was not tenable. Each time I wanted to fix a bug or add a new feature, I would have to make three identical source updates. My goal now was to combine all my source so that I could maintain one code base but still publish all three versions of the project.
As much as the efficiency expert in me wanted to consolidate all of this into a single FLA file, that is not possible. No matter how successful I would be, I knew I would need separate FLA files for the three various versions of this project. Here's why:
I knew I needed at least three FLA files that shared most assets, but also had some unique assets. In essence, each of the three FLA files would follow the model in Figure 20.
When implemented project-wide, the project source would look something like Figure 21.
Sharing code was actually the easy part. All versions of the project used the same packages and classes, so I just needed to point each FLA file to the same com folder. I was willing to maintain a single, unique AS file for each FLA file—all other code had to be shared cleanly between all versions.
To share the assets, I used a feature of Flash called shared libraries. These aren't the same as runtime shared libraries, or RSLs. When your Flash project uses RSLs, the SWF file loads assets from another SWF file at runtime while playing in Flash Player. Instead, I'm sharing the assets at author-time, in the Flash authoring tool.
It's actually a really simple process. Open one FLA file with your assets (FLA A) and then open another FLA file (FLA B). For the sake of relative linking and easy updating over time, it's smart to place both of these FLA files in the same folder. Open a second library panel (click the New Library Panel button in the upper left of your library). Make sure one library panel is tied to FLA A and the other to FLA B. Then simply drag the symbols from the library of FLA A into the library of FLA B.
Wait, you ask: Didn't I just copy and paste? No, not really. It is true that the effect is the same as copying the assets from one FLA library and pasting them into the other library. But the assets are now linked. This means that you can right-click the imported assets in FLA B (either one at a time or many at once) and select Update to bring up an Update Library Items dialog box (see Figure 22).
On the left you will see the list of the symbols that have linkages to other FLA files. Text at the bottom of the dialog box tells you how many of the selected symbols need to be updated (if FLA A has been updated since these symbols in FLA B were last updated). Check all that you want to update and click Update.
A word of experience: Updating symbols from linked libraries can frequently fail for no good reason, in which case the Update Library Items dialog box looks something like Figure 23.
Many times, Flash tells you that the update failed, when it actually worked properly. Here's another weird one: sometimes when linking symbols from a FLA file as a shared symbol, I lose the ability to save my shared asset library. The only way out of that is to quit and restart Flash. Lesson learned: save, save and save some more.
So I created four FLA files. The first, which I call ABMP_SharedAssetLibrary.fla, contains all my symbols, including movie clips (with linkages), graphics, and bitmaps. Then I have three more—one for each version of the project I want to export (Flash app, Flex component, and Flash component). Each of the production FLA files has all the assets from the assets FLA file in a folder called ABMP_assets, and a separate folder called My_assets with any symbols unique to that FLA.
When writing the code utilized for each FLA file, I ensured that all code specific to a FLA would be located in a single AS file. So in addition to all the classes shared by all three production FLA files, I built three AS files, one for each production FLA file. I placed these AS files inside a shell package in the project package.
FLA: ABMP_fla.fla
AS: com.almerblank.ABMP.shell.ABMP_fla.as extends flash.display.MovieClip
FLA library: The ABMP_fla.fla has a movie clip called ABMP_fla that is linked to ABMP_fla.as, which contains the information for instantiating the Player inside of a Flash movie (see Figures 24 and 25).
FLA: ABMP_flx.fla
AS: com.almerblank.ABMP.shell.ABMP_flx.as raw ActionScript included from timeline in the FLA
Library: ABMP_flx.fla has a movie clip called MediaPlayerFlexComponent in the library, which uses mx.flash.UIMovieClip as the base class. The library for ABMP_flx.fla also contains the UIMovieClip component (see Figures 26 and 27).
FLA: ABMP_flc.fla
AS: com.almerblank.ABMP.shell.ABMP_flc.as extends fl.core.UIComponent
Library: ABMP_flc.fla has a compiled component called ABMP_flc, which is linked to the ABMP_flc.as class and which also contains an instance of the Avatar MovieClip (on frame 1) and an instance of the Shim SWF (on frame 2). I've also included a guide layer on frame 1 with all the visual assets to help the designer when he or she wants to customize the editable Flash component (see Figures 28 and 29).
In the end, the project looked something like Figure 30.
As you can see, I spent a fair bit of time re-engineering my player to ensure the architecture could perform as required when published in these three different ways. Because I was constrained by the behaviors and restrictions of each of my three desired output formats, my code is structured quite differently than it would have been had I wanted only to produce a single version (just a Flash application or just a Flash component or just a Flex component, rather than all three). Because of the need to publish as components, I couldn't really use the Document class. Because of the mysterious ComponentShim (and my desire to minimize how much work goes into each publication), I used encapsulation instead of inheritance for all library assets. And so on.
The upshot is that more than 95% of the code and assets are directly shared across the entire project. For each of the three versions I needed to produce, I in effect created a shell FLA file for that version, including the few lines of code and asset or two required to instantiate my player in the different versions. So now when I want to add a new feature, or fix a bug, I need only update the code and assets once. I still need to go through three separate publish actions (one for each of the production FLAs) but that represents a fraction of the effort if I instead needed to replicate my code and asset updates across three otherwise identical code bases.
Some of what I've written is well documented, while other information isn't as readily available. The point of this article is to bring it all together into a single code base and workflow. As the Flash platform continues to mature, the need for more efficient workflows between the various parts of the platform, and between designers and developers, continues to increase.
For more information on many of the topics covered in this article, please visit some of the following links:
In addition, I cover the process of creating Flex components in Flash in two chapters in my recently published book, AdvancED Flex applications: Building Rich Media X (Friends of ED, 2007).
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License
|
http://www.adobe.com/devnet/archive/flash/articles/flex_component_workflow.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
MP4GetRtpPacketTransmitOffset - Get the transmit offset of an RTP packet
#include <mp4.h>
int32_t MP4GetRtpPacketTransmitOffset(
MP4FileHandle hFile,
MP4TrackId hintTrackId,
u_int16_t packetIndex
)
The transmit offset for the specified packet in the hint track timescale.
MP4GetRtpPacketTransmitOffset returns the transmit offset of an RTP packet. This offset may be set by some hinters to smooth out the packet transmission times and reduce network burstiness. A transmitter would need to apply this offset to the calculated transmission time based on the hint start time.
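As an illustrative fragment (not taken from the library's documentation; the file name and packet index are placeholders, and MP4Read, MP4FindTrackId, and MP4Close are assumed companion calls from the same library), a transmitter might fold the offset into its scheduling like this:
#include <mp4.h>

MP4FileHandle hFile = MP4Read("movie.mp4", 0);
MP4TrackId hintTrackId = MP4FindTrackId(hFile, 0, MP4_HINT_TRACK_TYPE, 0);
int32_t offset = MP4GetRtpPacketTransmitOffset(hFile, hintTrackId, 0);
/* actual send time = hint start time + offset, both in the hint track timescale */
MP4Close(hFile);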
MP4(3) MP4AddRtpPacket(3) MP4ReadRtpPacket(3)
|
http://www.makelinux.net/man/3/M/MP4GetRtpPacketTransmitOffset
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
public class CookieManager extends CookieHandler
CookieManager provides a concrete implementation of CookieHandler, which separates the storage of cookies from the policy surrounding accepting and rejecting cookies. A CookieManager is initialized with a CookieStore, which manages storage, and a CookiePolicy object, which makes policy decisions on cookie acceptance/rejection.
The HTTP cookie management in java.net package looks like:
use CookieHandler <------- HttpURLConnection ^ | impl | use CookieManager -------> CookiePolicy | use |--------> HttpCookie | ^ | | use | use | |--------> CookieStore ^ | impl | Internal in-memory implementation
- CookieHandler is at the core of cookie management. User can call CookieHandler.setDefault to set a concrete CookieHandler implementation to be used.
- CookiePolicy.shouldAccept will be called by CookieManager.put to see whether or not one cookie should be accepted and put into cookie store. User can use any of three pre-defined CookiePolicy, namely ACCEPT_ALL, ACCEPT_NONE and ACCEPT_ORIGINAL_SERVER, or user can define his own CookiePolicy implementation and tell CookieManager to use it.
- CookieStore is the place where any accepted HTTP cookie is stored in. If not specified when created, a CookieManager instance will use an internal in-memory implementation. Or the user can implement one and tell CookieManager to use it.
- Currently, only CookieStore.add(URI, HttpCookie) and CookieStore.get(URI) are used by CookieManager. Others are for completeness and might be needed by a more sophisticated CookieStore implementation, e.g. a NetscapeCookieStore.
There are various ways a user can hook up his own HTTP cookie management behavior, e.g.:
- Use CookieHandler.setDefault to set a brand new CookieHandler implementation.
- Let CookieManager be the default CookieHandler implementation, but implement the user's own CookieStore and CookiePolicy and tell the default CookieManager to use them:
// this should be done at the beginning of an HTTP session
CookieHandler.setDefault(new CookieManager(new MyCookieStore(), new MyCookiePolicy()));
- Let CookieManager be the default CookieHandler implementation, but use a customized CookiePolicy:
// this should be done at the beginning of an HTTP session
CookieHandler.setDefault(new CookieManager());
// this can be done at any point of an HTTP session
((CookieManager)CookieHandler.getDefault()).setCookiePolicy(new MyCookiePolicy());
The implementation conforms to RFC 2965, section 3.3.
store – a CookieStore to be used by the cookie manager. If null, the cookie manager will use a default one, which is an in-memory CookieStore implementation.
cookiePolicy – a CookiePolicy instance to be used by the cookie manager as a policy callback. If null, ACCEPT_ORIGINAL_SERVER will be used.
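Putting these pieces together, a typical session-scoped setup and request looks like the following sketch (the URL is a placeholder):
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.HttpCookie;
import java.net.URL;
import java.net.URLConnection;

CookieManager manager = new CookieManager(null, CookiePolicy.ACCEPT_ORIGINAL_SERVER);
CookieHandler.setDefault(manager);   // HttpURLConnection now stores and returns cookies automatically

URLConnection conn = new URL("http://www.example.com/").openConnection();
conn.getContent();                   // any Set-Cookie response headers flow into the manager's CookieStore

for (HttpCookie cookie : manager.getCookieStore().getCookies()) {
    System.out.println(cookie.getName() + "=" + cookie.getValue());
}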
|
http://docs.oracle.com/javase/7/docs/api/java/net/CookieManager.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
4 - Connecting with Services
This chapter describes the various ways that the Tailspin Surveys mobile client interacts with external services, both custom services created by Tailspin and services offered by third-party companies. Connecting to external services from a mobile client introduced a set of challenges for the development team at Tailspin to meet in the design and implementation of the mobile client, and in the services hosted on the Windows Azure™ technology platform. The mobile client application must do the following:
- It must operate reliably with variable network connectivity.
- It must minimize the use of network bandwidth (which may be costly).
- It must minimize its impact on the phone's battery life.
The online service components must do the following:
- They must offer an appropriate level of security.
- They must be easy to develop client applications against.
- They must support a range of client platforms.
The key areas of functionality in the Tailspin Surveys application that this chapter describes include authenticating with a web service from an application on the phone, pushing notifications to Windows® Phone devices, and transferring data between a Windows Phone device and a web service.
Authenticating with the Surveys Service
You Will Learn
- How to perform credentials-based authentication between a Windows Phone application and a web service.
- How to perform claims-based authentication between a Windows Phone application and a web service.
The Surveys service running in Windows Azure needs to know the identity of the user who is using the mobile client application on the Windows Phone device.
The Windows Azure-based Surveys service needs to identify the user using the mobile client application.
Tailspin wants to be able to change the way it authenticates users without requiring major changes to the Surveys application.
It's also important to ensure that the mechanism the mobile client uses to authenticate is easy to implement on the Windows Phone platform and any other mobile platforms that Tailspin may support in the future.
Overview of the Solution
Figure 1 shows a high-level view of the approach adopted by Tailspin.
Figure 1
The approach that Tailspin adopted assumes that the Windows Phone client application can send credentials to the Surveys web service in an HTTP header. The credentials could be a user name and password or a token. Tailspin could easily change the type of credentials in a future version of the application.
After the custom authentication module validates the credentials, it uses Windows Identity Foundation (WIF) to construct an IClaimsPrincipal object that contains the user's identity. In the future, this IClaimsPrincipal object might contain additional claims that the Surveys application could use to perform any authorization.
A Future Claims-Based Approach
In the future, Tailspin is considering replacing the simple user name and password authentication scheme with a claims-based approach. One option is to use Simple Web Token (SWT) and the Open Authentication (OAuth) 2.0 protocol. This approach offers the following benefits:
- The authentication process is managed externally from the Tailspin Surveys application.
- The authentication process uses established standards.
- The Surveys application can use a claims-based approach to handle any future authorization requirements.
Figure 2 illustrates this approach, showing the external token issuer.
Figure 2
In this scenario, before the mobile client application invokes a Surveys web service, it must obtain an SWT. It does this by sending a request to a token issuer that can issue SWTs; for example, Windows Azure Access Control Services (ACS). The request includes the items of information described in the following table.
The client ID and client secret enable the issuer to determine which application is requesting an SWT. The issuer uses the user name and password to authenticate the user.
The token issuer then constructs an SWT containing the user's identity and any other claims that the consumer application (Tailspin Surveys) might require. The issuer also attaches a hash value generated using a secret key shared with the Tailspin Surveys service.
When the client application requests data from the Surveys service, it attaches the SWT to the request in the request's authorization header.
When the Surveys service receives the request, a custom authentication module extracts the SWT from the authorization header, validates the SWT, and then extracts the claims from the SWT. The Surveys service can then use the claims with its authorization rules to determine what data, if any, it should return to the user.
The validation of the SWT in the custom authentication module performs the following steps.
- It verifies the hash of the SWT by using the shared secret key. This enables the Surveys service to verify the data integrity and the authenticity of the message.
- It verifies that the SWT has not expired. The token issuer sets the expiration time when it creates the SWT.
- It checks that the issuer that created the SWT is an issuer that the service is configured to trust.
- It checks that the client application that is making the request is a client that the service is configured to trust.
Inside the Implementation
Now is a good time to walk through the code that implements the authentication process in more detail. As you go through this section, you may want to download the Windows Phone Tailspin Surveys application from the Microsoft Download Center.
The CustomServiceHostFactory class in the TailSpin.Services.Surveys project initializes the Surveys service. The following code example shows how this factory class creates the authorization manager.
public class CustomServiceHostFactory : WebServiceHostFactory
{
    private readonly IUnityContainer container;

    public CustomServiceHostFactory(IUnityContainer container)
    {
        this.container = container;
    }

    protected override ServiceHost CreateServiceHost(
        Type serviceType, Uri[] baseAddresses)
    {
        var host = new CustomServiceHost(
            serviceType, baseAddresses, this.container);
        host.Authorization.ServiceAuthorizationManager =
            new SimulatedWebServiceAuthorizationManager();
        host.Authorization.PrincipalPermissionMode =
            PrincipalPermissionMode.Custom;
        return host;
    }
}
The following code example from the SimulatedWebServiceAuthorizationManager class shows how to override the CheckAccessCore method in the ServiceAuthorizationManager class to provide a custom authorization decision.
protected override bool CheckAccessCore(OperationContext operationContext)
{
    try
    {
        if (WebOperationContext.Current != null)
        {
            var headers = WebOperationContext.Current.IncomingRequest.Headers;
            if (headers != null)
            {
                var authorizationHeader = headers[HttpRequestHeader.Authorization];
                if (!string.IsNullOrEmpty(authorizationHeader))
                {
                    if (authorizationHeader.StartsWith("user",
                        StringComparison.OrdinalIgnoreCase))
                    {
                        var userRegex = new Regex(@"(\w+):([^\s]+)",
                            RegexOptions.Singleline);
                        var username = userRegex.Match(authorizationHeader)
                            .Groups[1].Value;
                        var password = userRegex.Match(authorizationHeader)
                            .Groups[2].Value;
                        if (ValidateUserAndPassword(username, password))
                        {
                            var identity = new ClaimsIdentity(new[]
                            {
                                new Claim(System.IdentityModel.Claims.ClaimTypes.Name,
                                    username)
                            }, "TailSpin");
                            var principal = ClaimsPrincipal.CreateFromIdentity(identity);
                            operationContext.ServiceSecurityContext
                                .AuthorizationContext.Properties["Principal"] = principal;
                            return true;
                        }
                    }
                }
            }
        }
    }
    catch (Exception)
    {
        if (WebOperationContext.Current != null)
        {
            WebOperationContext.Current.OutgoingResponse.StatusCode =
                HttpStatusCode.Unauthorized;
        }
        return false;
    }

    if (WebOperationContext.Current != null)
    {
        WebOperationContext.Current.OutgoingResponse.StatusCode =
            HttpStatusCode.Unauthorized;
    }
    return false;
}
In this simulated authorization manager class, the CheckAccessCore method extracts the user name and password from the authorization header, calls a validation routine, and if the validation routine succeeds, it attaches a ClaimsPrincipal object to the web service context.
In the sample application, the validation routine does nothing more than check that the user name is one of several hard-coded values.
The IHttpWebRequest interface, in the TailSpin.Phone.Adapters project, defines method signatures and properties that are implemented by the HttpWebRequestAdapter class. This class adapts the HttpWebRequest class from the API. The purpose of adapting the HttpWebRequest class with a class that implements IHttpWebRequest is to create a loosely coupled class that is testable.
When SurveysServiceClient calls the GetRequest method in the HttpClient class, it passes in a new instance of HttpWebRequestAdapter, which in turn creates an instance of WebRequest.
The following code example shows how the GetRequest method in the HttpClient class adds the authorization header with the user name and password credentials to the HTTP request that the mobile client sends to the various Tailspin web services.
Last built: May 25, 2012
|
https://msdn.microsoft.com/en-US/library/gg490769
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
The tag cloud sizes each item according to the number of times it occurs in the list. This is typically the kind of control that you see on blogs to show post tags. When a large list contains many items, the tag cloud can be used as a first filter to avoid scrolling through the list for too long.
Here is an example of 2 tag clouds. You can change the background, foreground and the font family of the control. The tag cloud is compatible with both portrait and landscape mode.
The datasource used by the control is a collection of objects, and uses .ToString() for its representation.
In my case it is a collection of int for the first tag cloud and a collection of string for the second tag cloud. Bind the ItemsSource property of the tag cloud with this collection just like you would for a ListBox. The font size will be computed automatically, according to the item occurrences in the source list.
Use SelectedItem property or SelectionChanged event to get the selected tag.
In my example, I use a collection of DataSample:
public class DataSample
{
public string City { get; set; }
public int Year { get; set; }
}
The datasources of my 2 tag clouds are like follows, where dataSampleList is a collection of DataSample :
ct1.ItemsSource = dataSampleList.Select(d => d.Year);
ct2.ItemsSource = dataSampleList.Select(d => d.City);
In the XAML code, the namespace should be declared like this:
xmlns:cloud="clr-namespace:Wp7TagCloud;assembly=Wp7TagCloud"
Here is the code to declare the first tag cloud:
<cloud:TagCloud x:
Here is the second one, added with a TextBlock binded on the SelectedItem of the tag cloud and also handling the SelectionChanged event:
<cloud:TagCloud x:
<TextBlock Text="Selected City:"/>
<TextBlock Text="{Binding ElementName=ct2, Path=SelectedItem}"/>
Here is the assembly you should download to use the tag cloud control.
The source code should be available soon in the Coding4Fun Toolkit (thanks Clint ).
Hi,
I tried to use your control but I had some problems. When I add an ItemsSource manually in Page constructor everything goes right, but when I set ItemsSource after an event call the data is not refreshed. Any hints or tips?
Thanks in advance.
Lucas, I'll check that as soon as possible
Lucas, I juste fixed it : can you check ?
Thanks for your feedback !
What if I have a collection of strings and doubles and I need to make a tag cloud (the largest string is one with the biggest number). Do you have or know solution for that?
To use my control, you should provide a list of strings containing as many occurrences of each string as the corresponding number (double).
If that leads to a very large string list, you can instead provide an equivalent proportion (percentage) of occurrences for each number.
Example : if you have ((Foo, 25000) (Bar, 50000)) you can provide the list (Foo, Bar, Bar).
This list should be easy to provide from your existing structure.
Hope it helps
Setting the selectedItem (or the item selected item is bound to) from code doesn't seem to update the highlight color :|
I don't see the assembly here - could you please point to that or the code? I don't think it's there yet in the Coding4Fun toolkit?
Hello Tushar,
You have a link to the assembly just under the code in this article.
Can we have access to the source code for this project??
|
http://blogs.msdn.com/b/stephe/archive/2011/03/28/a-tag-cloud-control-for-windows-phone-7.aspx
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
Summary
Pushes data into a stream produced by the plug-in and consumed by the browser.
Syntax
#include <npapi.h>

int32 NPN_Write(NPP instance, NPStream* stream, int32 len, void* buf);
Parameters
The function has the following parameters:
instance
- Pointer to the current plug-in instance.
stream
- Pointer to the stream into which to push the data.
len
- Length in bytes of the data specified by buf.
buf
- A pointer to a buffer of data to deliver to the stream.
Returns
- If successful, the function returns a positive integer representing the number of bytes written (consumed by the browser). This number depends on the size of the browser's memory buffers, the number of active streams, and other factors.
- If unsuccessful, the plug-in returns a negative integer. This indicates that the browser encountered an error while processing the data, so the plug-in should terminate the stream by calling NPN_DestroyStream().
Description
NPN_Write() delivers a buffer from the stream to the instance. A plug-in can call this function multiple times after creating a stream with NPN_NewStream(). The browser makes a copy of the buffer if necessary, so the plug-in can free the buffer as the method returns, if desired. See "Example of Sending a Stream" for an example that includes NPN_Write().
Example
This example pushes a snippet of HTML over a newly created stream, then destroys the stream when it's done.
NPStream* stream;
char* myData = "<HTML><B>This is a message from my plug-in!</B></HTML>";
int32 myLength = strlen(myData) + 1;

/* Create the stream. */
err = NPN_NewStream(instance, "text/html", "_blank", &stream);

/* Push data into the stream. */
err = NPN_Write(instance, stream, myLength, myData);

/* Delete the stream. */
err = NPN_DestroyStream(instance, stream, NPRES_DONE);
|
https://developer.mozilla.org/en-US/Add-ons/Plugins/Reference/NPN_Write
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
The panel encountered a problem while loading "OAFIID:MATE_mintMenu".
I've done the following with no success:
* moved my .mate* out of the way
* mintmenu --reset
* verified python version (2.7.3)
There's something in my mate settings or in some dotfile somewhere causing mintmenu to crash, but I can't figure out what it is ...
If I try to run it from a terminal, I get:
tim@echo:~$ mintmenu
Traceback (most recent call last):
File "/usr/lib/linuxmint/mintMenu/mintMenu.py", line 62, in <module>
from easybuttons import iconManager
File "/usr/lib/linuxmint/mintMenu/plugins/easybuttons.py", line 14, in <module>
from filemonitor import monitor as filemonitor
File "/usr/lib/linuxmint/mintMenu/plugins/filemonitor.py", line 122, in <module>
monitor = FileMonitor()
File "/usr/lib/linuxmint/mintMenu/plugins/filemonitor.py", line 17, in __init__
self.wm = pyinotify.WatchManager()
File "/usr/lib/python2.7/dist-packages/pyinotify.py", line 1706, in __init__
raise OSError(err % self._inotify_wrapper.str_errno())
OSError: Cannot initialize new instance of inotify, Errno=Too many open files (EMFILE)
Not sure if that "too many open files" actually means anything or not. I certainly have not maxed out the open files on my system (I checked).
|
http://forums.linuxmint.com/viewtopic.php?p=680652
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
I need to add two procedures: (1) one called "terms" (called by the main program) whose job is to display up to the first 10 terms in a sequence that starts with the value that it is passed (in $a0) and (2) a function called "rev" that will be called by the procedure "terms"; this function will actually calculate and return (in $v0) the reverse of the current term that is passed to it (in $a0).

Driver:

main:
    la   $a0, intro       # print intro
    li   $v0, 4
    syscall
loop:
    la   $a0, req         # request value of n
    li   $v0, 4
    syscall
    li   $v0, 5           # read value of n
    syscall
    ble  $v0, $zero, out  # if n is not positive, exit
    move $a0, $v0         # set parameter for terms procedure
    jal  terms            # call terms procedure
    j    loop             # branch back for next value of n
out:
    la   $a0, adios       # display closing
    li   $v0, 4
    syscall
    li   $v0, 10          # exit from the program
    syscall

    .data
intro: .asciiz "Welcome to the Square1 tester!"
req:   .asciiz "\nEnter an integer (zero or negative to exit): "
adios: .asciiz "Come back soon!\n"
|
http://www.chegg.com/homework-help/questions-and-answers/need-add-two-procedures-1-one-called-terms-called-main-program-whose-job-display-first-10--q3692738
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
A multi-reader multi-writer MemoryPool implementation. More...
#include <rtt/internal/TsPool.hpp>
A multi-reader multi-writer MemoryPool implementation.
It can hold max 65535 elements of type T.
Definition at line 62 of file TsPool.hpp.
The maximum number of elements available for allocation.
Definition at line 222 of file TsPool.hpp.
Clears all internal management data of this Memory Pool.
All data blobs are considered to be owned by the pool again.
Definition at line 137 of file TsPool.hpp.
Referenced by RTT::internal::TsPool< Item >::data_sample().
Initializes every element of the pool with the given sample and clears the pool.
Definition at line 153 of file TsPool.hpp.
Referenced by RTT::base::BufferLockFree< T >::BufferLockFree(), RTT::base::BufferLockFree< T >::data_sample(), and RTT::internal::TsPool< Item >::TsPool().
Return the number of elements that are available to be allocated.
This function is not thread-safe and should not be used when concurrent allocate()/deallocate() functions are running.
Definition at line 205 of file TsPool.hpp.
|
http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/classRTT_1_1internal_1_1TsPool.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
This pattern provides a mediator actor, akka.contrib.pattern.DistributedPubSubMediator, that manages a registry of actor references and replicates the entries to peer actors among all cluster nodes (or a group of nodes tagged with a specific role), so that messages can be published to a named topic or sent to an actor path without knowing on which node the destination is running.
A Small Example in Java
A subscriber actor:
public class Subscriber extends UntypedActor {
  LoggingAdapter log = Logging.getLogger(getContext().system(), this);

  public Subscriber() {
    ActorRef mediator =
        DistributedPubSubExtension.get(getContext().system()).mediator();
    // subscribe to the topic named "content"
    mediator.tell(new DistributedPubSubMediator.Subscribe("content", getSelf()),
        getSelf());
  }

  public void onReceive(Object msg) {
    if (msg instanceof String)
      log.info("Got: {}", msg);
    else if (msg instanceof DistributedPubSubMediator.SubscribeAck)
      log.info("subscribing");
    else
      unhandled(msg);
  }
}
Subscriber actors can be started on several nodes in the cluster, and all will receive messages published to the "content" topic.
system.actorOf(Props.create(Subscriber.class), "subscriber1");

//another node
system.actorOf(Props.create(Subscriber.class), "subscriber2");
system.actorOf(Props.create(Subscriber.class), "subscriber3");
A simple actor that publishes to this "content" topic:
public class Publisher extends UntypedActor {

  // activate the extension
  ActorRef mediator =
      DistributedPubSubExtension.get(getContext().system()).mediator();

  public void onReceive(Object msg) {
    if (msg instanceof String) {
      String in = (String) msg;
      String out = in.toUpperCase();
      mediator.tell(new DistributedPubSubMediator.Publish("content", out),
          getSelf());
    } else {
      unhandled(msg);
    }
  }
}
It can publish messages to the topic from anywhere in the cluster:
//somewhere else
ActorRef publisher = system.actorOf(Props.create(Publisher.class), "publisher");
// after a while the subscriptions are replicated
publisher.tell("hello", null);
A Small Example in Scala
A subscriber actor:
class Subscriber extends Actor with ActorLogging {
  import DistributedPubSubMediator.{ Subscribe, SubscribeAck }
  val mediator = DistributedPubSubExtension(context.system).mediator
  // subscribe to the topic named "content"
  mediator ! Subscribe("content", self)

  def receive = {
    case SubscribeAck(Subscribe("content", `self`)) ⇒
      context become ready
  }

  def ready: Actor.Receive = {
    case s: String ⇒ log.info("Got {}", s)
  }
}
A more comprehensive sample is available in the Typesafe Activator tutorial named Akka Clustered PubSub with Scala!.
DistributedPubSubExtension
In the example above the mediator is started and accessed with the akka.contrib.pattern.DistributedPubSubExtension. The extension can be configured with the following properties:
# Settings for the DistributedPubSubExtension
akka.contrib.cluster.pub-sub {
  # Actor name of the mediator actor, /user/distributedPubSubMediator
  name = distributedPubSubMediator

  # Start the mediator on members tagged with this role.
  # All members are used if undefined or empty.
  role = ""

  # The routing logic to use for 'Send'
  # Possible values: random, round-robin, consistent-hashing, broadcast
  routing-logic = random
}
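The 'Send' routing logic mentioned in the comment above applies to point-to-point delivery rather than topic publishing: an actor registers itself with the mediator via Put and can then be reached by its path from any node. A brief, hedged Java sketch of that usage (the destination path and message below are illustrative only):
// In the destination actor: register under its own path, e.g. /user/destination
ActorRef mediator =
    DistributedPubSubExtension.get(getContext().system()).mediator();
mediator.tell(new DistributedPubSubMediator.Put(getSelf()), getSelf());

// From any node in the cluster: deliver to one actor registered with that path.
// The boolean requests local affinity (prefer an instance on the sending node).
mediator.tell(
    new DistributedPubSubMediator.Send("/user/destination", "hello", true),
    getSelf());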
It is recommended to load the extension when the actor system is started by defining it in akka.extensions configuration property. Otherwise it will be activated when first used and then it takes a while for it to be populated.
akka.extensions = ["akka.contrib.pattern.DistributedPubSubExtension"]
Contents
|
http://doc.akka.io/docs/akka/2.3.0/contrib/distributed-pub-sub.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
In this tutorial you will learn about the Spring @Required annotation with an example. When @Required is written on top of the setStudent() method, it makes sure that the student property is set; otherwise bean creation fails with the error org.springframework.beans.factory.BeanInitializationException: Property 'student' is required for bean 'college', which is clearly shown in this example.
Simply applying the @Required annotation does not enforce the property check; you also have to register a RequiredAnnotationBeanPostProcessor that is aware of the @Required annotation in the bean configuration file. This can be done in one of two ways: either include <context:annotation-config /> or declare a RequiredAnnotationBeanPostProcessor bean in the configuration file, as shown in this example's context.xml file.
Student.java
public class Student {

    private String age;
    private String name;
    private String address;

    public String getAge() {
        return age;
    }

    public void setAge(String age) {
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getAddress() {
        return address;
    }

    public void setAddress(String address) {
        this.address = address;
    }

    @Override
    public String toString() {
        return "Student [name=" + name + ", age=" + age + ", address=" + address + "]";
    }
}
College.java
import org.springframework.beans.factory.annotation.Required;

public class College {

    private Student student;
    private String registration;
    private String year;

    @Required
    public void setStudent(Student student) {
        this.student = student;
    }

    public String getRegistration() {
        return registration;
    }

    public void setRegistration(String registration) {
        this.registration = registration;
    }

    public String getYear() {
        return year;
    }

    public void setYear(String year) {
        this.year = year;
    }

    @Override
    public String toString() {
        return "College [registration=" + registration + ", Student=" + student + ", year=" + year + "]";
    }
}
RequiredMain.java
import org.springframework.beans.factory.BeanFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class RequiredMain {

    public static void main(String[] args) {
        BeanFactory beanfactory = new ClassPathXmlApplicationContext("context.xml");
        College coll = (College) beanfactory.getBean("college");
        System.out.println(coll);
    }
}
context.xml
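The context.xml listing is not reproduced here. A minimal, hedged sketch of what it might contain, using the bean names referenced by the main class and the post-processor registration described above; the property values and schema version are illustrative only:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <!-- Makes Spring aware of the @Required annotation -->
    <bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/>

    <bean id="student" class="Student">
        <property name="name" value="John"/>
        <property name="age" value="21"/>
        <property name="address" value="New Delhi"/>
    </bean>

    <bean id="college" class="College">
        <!-- Omitting this property would trigger the BeanInitializationException -->
        <property name="student" ref="student"/>
        <property name="registration" value="R-101"/>
        <property name="year" value="2012"/>
    </bean>
</beans>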
Download this example code
|
http://roseindia.net/tutorial/spring/spring3/ioc/springrequiredannotation.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
(For more resources on GWT, see here.)
The Graphical User Interface (GUI) resides in the client side of the application. This article introduces the communication between the server and the client, where the client (GUI) will send a request to the server, and the server will respond accordingly. In GWT, the interaction between the server and the client is made through the RPC mechanism. RPC stands for Remote Procedure Call. The concept is that there are some methods in the server side, which are called by the client at a remote location. The client calls the methods by passing the necessary arguments, and the server processes them, and then returns back the result to the client. GWT RPC allows the server and the client to pass Java objects back and forth.
RPC has the following steps:
- Defining the GWTService interface: Not all the methods of the server are called by the client. The methods which are called remotely by the client are defined in an interface, which is called GWTService.
- Defining the GWTServiceAsync interface: Based on the GWTService interface, another interface is defined, which is actually an asynchronous version of the GWTService interface. By calling the asynchronous method, the caller (the client) is not blocked until the method completes the operation.
- Implementing the GWTService interface: A class is created where the abstract method of the GWTService interface is overridden.
- Calling the methods: The client calls the remote method to get the server response.
Creating DTO classes
In this application, the server and the client will pass Java objects back and forth for the operation. For example, the BranchForm will request the server to persist a Branch object, where the Branch object is created and passed to server by the client, and the server persists the object in the server database. In another example, the client will pass the Branch ID (as an int), the server will find the particular Branch information, and then send the Branch object to the client to be displayed in the branch form. So, both the server and client need to send or receive Java objects. We have already created the JPA entity classes and the JPA controller classes to manage the entity using the Entity Manager. But the JPA class objects are not transferable over the network using the RPC. JPA classes will just be used by the server on the server side. For the client side (to send and receive objects), DTO classes are used. DTO stands for Data Transfer Object. DTO is simply a transfer object which encapsulates the business data and transfers it across the network.
Getting ready
Create a package com.packtpub.client.dto, and create all the DTO classes in this package.
How to do it...
The steps required to complete the task are as follows:
- Create a class BranchDTO that implements the Serializable interface:
public class BranchDTO implements Serializable
- Declare the attributes. You can copy the attribute declaration from the entity classes. But in this case, do not include the annotations:
private Integer branchId;
private String name;
private String location;
- Define the constructors, as shown in the following code:
public BranchDTO(Integer branchId, String name, String location)
{
this.branchId = branchId;
this.name = name;
this.location = location;
}
public BranchDTO(Integer branchId, String name)
{
this.branchId = branchId;
this.name = name;
}
public BranchDTO(Integer branchId)
{
this.branchId = branchId;
}
public BranchDTO()
{
}
To generate the constructors automatically in NetBeans, right-click on the code, select Insert Code | Constructor, and then click on Generate after selecting the attribute(s).
- Define the getter and setter:
public Integer getBranchId()
{
return branchId;
}
public void setBranchId(Integer branchId)
{
this.branchId = branchId;
}
public String getLocation()
{
return location;
}
public void setLocation(String location)
{
this.location = location;
}
public String getName()
{
return name;
}
public void setName(String name)
{
this.name = name;
}
To generate the setter and getter automatically in NetBeans, right-click on the code, select Insert Code | Getter and Setter…, and then click on Generate after selecting the attribute(s).
Mapping entity classes and DTOs
In RPC, the client will send and receive DTOs, but the server needs pure JPA objects to be used by the Entity Manager. That's why, we need to transform from DTO to JPA entity class and vice versa. In this recipe, we will learn how to map the entity class and DTO.
Getting ready
Create the entity and DTO classes.
How to do it...
- Open the Branch entity class and define a constructor with a parameter of type BranchDTO. The constructor gets the properties from the DTO and sets them in its own properties:
public Branch(BranchDTO branchDTO)
{
setBranchId(branchDTO.getBranchId());
setName(branchDTO.getName());
setLocation(branchDTO.getLocation());
}
- This constructor will be used to create the Branch entity class object from the BranchDTO object.
- In the same way, the BranchDTO object is constructed from the entity class object, but in this case, the constructor is not defined. Instead, it is done where it is required to construct DTO from the entity class.
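A hedged illustration of that conversion (the branch variable below is simply an already loaded entity; this snippet is not part of the recipe's own listings):
// Given a Branch entity loaded on the server, build the DTO that
// travels back to the client over RPC
BranchDTO branchDTO = new BranchDTO(
    branch.getBranchId(),
    branch.getName(),
    branch.getLocation());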
There's more...
Some third-party libraries are available for automatically mapping between entity classes and DTOs, such as Dozer and Gilead. For details, you may visit the Dozer and Gilead project websites.
Creating the GWT RPC Service
In this recipe, we are going to create the GWTService interface, which will contain an abstract method to add a Branch object to the database.
Getting ready
Create the Branch entity class and the DTO class.
(For more resources on GWT, see here.)
How to do it...
The steps are as follows:
- Go to File | New File….
- Select Google Web Toolkit from Categories and GWT RPC Service from File Types, as shown in the following screenshot:
- Click on Next.
- Give GWTService as the Service Name.
- Give rpc as the Subpackage (this is optional), as shown in the following screenshot:
- Click on Finish.
How it works...
A total of three classes are created, which are com.packtpub.client.rpc.GWTService, com.packtpub.client.rpc.GWTServiceAsync, and com.packtpub.server.rpc.GWTServiceImpl. Note that GWTService is created in the client-side package, and the implementation class is in the server package. Because of the preceding steps, the following code is generated:
// GWTService.java
package com.packtpub.client.rpc;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
// This is the servlet mapping in web.xml file
@RemoteServiceRelativePath("rpc/gwtservice")
public interface GWTService extends RemoteService
{
public String myMethod(String s);
}
// GWTServiceAsync.java
package com.packtpub.client.rpc;
import com.google.gwt.user.client.rpc.AsyncCallback;
public interface GWTServiceAsync
{
public void myMethod(String s, AsyncCallback<String> callback);
}
// GWTServiceImpl.java
package com.packtpub.server.rpc;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
import com.packtpub.client.rpc.GWTService;
public class GWTServiceImpl extends RemoteServiceServlet
implements GWTService
{
public String myMethod(String s)
{
// Do something interesting with 's' here on the server.
return "Server says: " + s;
}
}
The GWTService interface will include a signature of all the methods which will be called remotely. NetBeans automatically creates a dummy method to understand the concept of RPC. Observe that the myMethod method has no body here; it just has the header part.
An asynchronous version of this method is declared in the GWTServiceAsync interface. The significant changes are:
- The return type, String, is changed to void
- A new parameter of type AsyncCallback is added
This is because the asynchronous method returns the value through the AsyncCallback object passed to the method as argument.
In addition, the GWTServiceImpl includes implementation of the method.
Defining an RPC method to persist objects
In this recipe, we are going to:
- Declare an abstract method add in the GWTService interface
- Declare an abstract asynchronous method add in GWTServiceAsync interface
- Implement the method in the GWTServiceImpl class
This method will take BranchDTO as a parameter, and add the BranchDTO object in the database.
Getting ready
GWTService, GWTServiceAsync interface, and GWTServiceImpl must be created first.
How to do it...
- Open GWTService.java and add the following code:
public boolean add(BranchDTO branchDTO);
- Open GWTServiceAsync.java and add the following code:
public void add(BranchDTO branchDTO,
AsyncCallback<java.lang.Boolean> asyncCallback);
- Open GWTServiceImpl.java and add the following code:
@Override
public boolean add(BranchDTO branchDTO)
{
Branch branch = new Branch();
branch.setBranchId(branchDTO.getBranchId());
branch.setName(branchDTO.getName());
branch.setLocation(branchDTO.getLocation());
BranchJpaController branchJpaController = new
BranchJpaController();
boolean added=false;
try
{
branchJpaController.create(branch);
added=true;
}
catch (PreexistingEntityException ex)
{
Logger.getLogger(GWTServiceImpl.class.getName()).
log(Level.SEVERE, null, ex);
}
catch (Exception ex)
{
Logger.getLogger(GWTServiceImpl.class.getName()).
log(Level.SEVERE, null, ex);
}
return added;
}
How it works...
Here, we have created a method add to persist a Branch object. As a pure JPA entity class is not transferable through RPC, we have taken the BranchDTO object as a parameter. Then, the JPA entity class is constructed from the DTO class.
Constructing the JPA entity class from the DTO needs just the following steps:
- Creating an instance of the JPA entity class (over here, Branch).
- Get the properties from the DTO class and set them in the JPA entity class.
After constructing the JPA entity class, the instance of the controller class is created as the controller class contains the necessary methods to persist, update, delete, find, and so on operations. A Boolean variable added is used to track the success of the operation. The variable added is initialized to false and set to true when the persist operation is completed successfully. The create method of the controller class persists the object.
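For completeness, here is a hedged sketch of the complementary retrieval operation mentioned at the start of this article (the client passes a branch ID and receives a BranchDTO). It is not part of the recipe's own listings; the findBranch() finder name follows the usual NetBeans-generated controller convention and is an assumption, and a matching signature would also need to be added to GWTService and GWTServiceAsync:
public BranchDTO find(int branchId)
{
    BranchJpaController branchJpaController = new BranchJpaController();
    // findBranch() is assumed to be the generated finder on the controller
    Branch branch = branchJpaController.findBranch(branchId);
    if (branch == null)
    {
        return null;
    }
    // Convert the entity into a DTO so that it can be returned over RPC
    return new BranchDTO(branch.getBranchId(), branch.getName(),
        branch.getLocation());
}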
Calling the RPC method from Client UI
In this recipe, we will call the RPC method from the file BranchForm. This method will be called when the Save button is clicked.
How to do it...
- Open the file BranchForm.java and create the AsyncCallback instance as follows:
final AsyncCallback<Boolean> callback =
new AsyncCallback<Boolean>()
{
MessageBox messageBox = new MessageBox();
@Override
public void onFailure(Throwable caught)
{
messageBox.setMessage("An error occured!
Cannot save Branch information");
messageBox.show();
}
@Override
public void onSuccess(Boolean result)
{
if (result)
{
messageBox.setMessage("Branch information saved
successfully");
} else
{
messageBox.setMessage("An error occured!
Cannot save Branch information");
}
messageBox.show();
}
};
- Write the event-handling code for the "add" button, as in the following code. Here, the branchDTO is a class-level instance of BranchDTO.
saveButton.addSelectionListener(new
SelectionListener<ButtonEvent>() {
@Override
public void componentSelected(ButtonEvent ce)
{
branchDTO=new BranchDTO();
branchDTO.setBranchId(Integer.parseInt
(branchIdField.getValue()));
branchDTO.setName(nameField.getValue());
branchDTO.setLocation(locationField.getValue());
((GWTServiceAsync) GWT.create(GWTService.class)) .
add(branchDTO, callback);
}
});
- Run the application, open the Branch Form, and enter the input, as shown in the following screenshot:
- Click on the Save button. A confirmation message is shown, as in the following screenshot:
How it works...
Let's now see the aspects in some detail:
- Creating an AsyncCallback instance: The first step to call the RPC method is to create an instance of the AsyncCallback interface because the result from the server is sent to the client through this instance. The AsyncCallback interface contains the following two abstract methods:
- onSuccess: This method is called automatically when the operation can be run successfully in the server side. This method has a parameter according to the type of the result the server sends. In our case, this type is Boolean, as we will get the result as true or false after the add operation.
- onFailure: If the RPC method cannot be invoked for any reason, this method is called automatically. Generally, the error messages are shown from this method.
- Calling the RPC method: This part handles the calling portion of the RPC method. Here, we have constructed the BranchDTO object and have called the add method of the GWTServiceAsync interface. The GWTServiceAsync object is created by the create method of GWT class. Notice that the AsyncCallback object is passed to the add method, and we receive the result in this object.
Summary
In this article we saw the interaction between the server and the client through the RPC mechanism.
In the next article, Working with Entities in Google Web Toolkit 2, we will see how we can manage entities in GWT RPC.
Further resources on this subject:
- Google Web Toolkit 2: Creating Page Layout [Article]
- Working with Entities in Google Web Toolkit 2 [Article]
- Password Strength Checker in Google Web Toolkit and AJAX [Article]
- Google Web Toolkit GWT Java AJAX Programming [Book]
- Google Web Toolkit 2 Application Development Cookbook [Book]
|
https://www.packtpub.com/books/content/communicating-server-using-google-web-toolkit-rpc
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
std::conj(std::complex)
From cppreference.com
Computes the complex conjugate of
z by reversing the sign of the imaginary part.
Additional overloads are provided for float, double, long double, and all integer types, which are treated as complex numbers with zero imaginary component. (since C++11)
[edit] Parameters
[edit] Return value
The complex conjugate of
z
[edit] Example
Run this code
#include <iostream>
#include <complex>

int main()
{
    std::complex<double> z(1,2);
    std::cout << "The conjugate of " << z << " is " << std::conj(z) << '\n'
              << "Their product is " << z*std::conj(z) << '\n';
}
Output:
The conjugate of (1,2) is (1,-2) Their product is (5,0)
|
http://en.cppreference.com/w/cpp/numeric/complex/conj
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
/* $Id: SignOnDupKeyException.java,v 1.4 2003/03/07 02:14:27 inder Exp $ */

package com.sun.j2ee.blueprints.signon;

/**
 * SignOnDupKeyException is thrown by the DAOs of the signon
 * component when a row is already found with a given primary key.
 * This is thrown when the user input fails a validation test.
 */
public class SignOnDupKeyException extends RuntimeException {

    public SignOnDupKeyException() {
    }

    public SignOnDupKeyException(String str) {
        super(str);
    }

    public SignOnDupKeyException(Throwable cause) {
        super(cause);
    }

    public SignOnDupKeyException(String str, Throwable cause) {
        super(str, cause);
    }
}
|
http://docs.oracle.com/cd/E17802_01/blueprints/blueprints/code/adventure/1.0/src/com/sun/j2ee/blueprints/signon/SignOnDupKeyException.java.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
SYNOPSIS
#include <signal.h>
int sigsuspend(const sigset_t *mask);
feature test macro requirements for glibc (see feature_test_macros(7)):
sigsuspend(): _POSIX_C_SOURCE >= 1 || _XOPEN_SOURCE || _POSIX_SOURCE
DESCRIPTION
sigsuspend() temporarily replaces the signal mask of the calling process with the mask given by mask and then suspends the process until delivery of a signal whose action is to invoke a signal handler or to terminate the process. It is not possible to block SIGKILL or SIGSTOP; specifying these signals in mask has no effect on the process's signal mask.
sigsuspend() is normally used in conjunction with sigprocmask(2) in order to prevent delivery of a signal during the execution of a critical code section. The caller first blocks the signals with sigprocmask(2).
SEE ALSO
sigprocmask(2), sigwaitinfo(2), sigsetops(3), sigwait(3), signal(7)
COLOPHON
This page is part of release 3.23 of the Linux man-pages project. A
description of the project, and information about reporting bugs, can
|
http://www.linux-directory.com/man2/sigsuspend.shtml
|
crawl-003
|
en
|
refinedweb
|
SYNOPSIS
#include <signal.h>
int sigprocmask(int how, const sigset_t *set, sigset_t *oldset);
feature test macro requirements for glibc (see feature_test_macros(7)):
sigprocmask(): _POSIX_C_SOURCE >= 1 || _XOPEN_SOURCE || _POSIX_SOURCE
DESCRIPTION
sigprocmask() is used to fetch and/or change the signal mask of the calling thread. The signal mask is the set of signals whose delivery is currently blocked for the caller.
Each of the threads in a process has its own signal mask.
SEE ALSO
sigsuspend(2), pthread_sigmask(3), sigsetops(3), signal(7)
COLOPHON
This page is part of release 3.23 of the Linux man-pages project. A
description of the project, and information about reporting bugs, can
be found at.
|
http://www.linux-directory.com/man2/sigprocmask.shtml
|
crawl-003
|
en
|
refinedweb
|
Introduction
This article is part of a series intended to act as a short and basic introduction to the world of NFC as it applies to BlackBerry® smartphones such as the BlackBerry® Bold™ 9900 and is aimed at developers who wish to take advantage of this exciting new technology. No prior knowledge of NFC is assumed but we do assume that the reader is familiar with Java® in some places. It would also be beneficial to have read the first article in this series, entitled: NFC Primer for Developers, since we will be following the thread that was outlined in this article but in greater depth. This article will look specifically at the reading and writing of NFC smart tags.
The Authors
This article was co-authored by Martin Woolley and John Murray both of whom work in the RIM Developer Relations team. Both Martin and John specialise in NFC applications development (amongst other things).
What is a Smart Tag?
In order to understand this it's probably best to look at some of the most common uses of smart tags such as:
Smart Posters - these embed small electronic "tags" containing data such as a URL in posters. An NFC enabled BlackBerry device can read the data and act upon it just by holding the device close to the poster. The action taken when reading a tag will vary according to the type of data the tag contains and the nature of the application reading the tag but it could for example involve taking the user directly to a web site which contains more information about the advertised event/product (etc) using the BlackBerry® Browser, automatically sending an SMS requesting someone call the user back, and so on.
Figure 1 - a smart poster
Smart Tags are really very simple and lightweight objects. They are often constructed from layers of stiff paper, thin card, or plastic to protect the embedded antenna and chip. They may be sticky on one side to allow them to be attached to objects fairly unobtrusively; they are fairly inexpensive when bought in bulk and may even be supplied in rolls much like adhesive tape.
Figure 2 - a smart tag as used by a developer
Their size and simple nature means that the amount of data that can be stored in a smart tag may be relatively small, perhaps no more than a few hundred characters or about the same amount of information that you might see printed on the face of a business card. Like a business card, common patterns have been developed for encoding information such as plain text, URLs, telephone numbers, and contact details like addresses to name but a few.
Figure 3 - The BlackBerry Smart Tags app on the home screen
All BlackBerry® 7 NFC enabled devices include a built in Smart Tags Application, as shown above, that allows the reading and writing of simple smart poster tags that follow the pattern of a URI and associated plain text.
This article will demonstrate how to perform similar functions from within a BlackBerry application. We will not discuss how to enable a BlackBerry device to emulate a smart tag which could be read by others using an NFC capable device. That topic will be covered in a later article.
Standards and Specifications
The standards that relate to smart tags are managed by the NFC Forum. They have adopted a set of 15 technical specifications arranged into the following categories:
To understand how to read and write smart tags the "Record Type Definition" (RTD) set is the group to look at. In particular the following:
The NFC Forum Data Exchange Format (NDEF) is a lightweight binary message format designed to encapsulate one or more application-defined payloads into a single message construct. An NDEF message contains one or more NDEF records of different types. There are separate specifications for record types that describe a Smart Poster or a URI or Plain Text.
So, a simple Smart Poster tag may use a Text record and a URI record to encode a URL and some descriptive text.
NDEF messages are encoded using a technique called Type, Length, Value (TLV) so parsing such messages can be a little more challenging than simpler data structures. The advantage of TLV format is that messages become self-describing and an NDEF message can be parsed in its entirety without foreknowledge of the particular type of message. Unfortunately for the Java programmer there are no simple wrapper classes to hide this parsing and manipulation of NDEF record structures ends up being a process of manipulating byte [] objects.
Use Cases
We're going to look at two simple use cases in what follows:
Reading a Smart Poster Tag
The process of reading a Smart Poster tag is fairly straightforward. Firstly your application has to register that it wants to be notified when tags of interest are presented to the NFC enabled BlackBerry device; and secondly it must take care of parsing the contents of the tag into a form that can be displayed on a screen or have some other action taken on it. Whilst there is a more general interface (net.rim.device.api.io.nfc.readerwriter.DetectionListener) for detecting tags and cards, for this use case we register a net.rim.device.api.io.nfc.ndef.NDEFMessageListener so that the decoded NDEF message is delivered straight to the application.
Implementing an NDEFMessageListener
The following code fragment shows the implementation of a typical listener for NDEF Smart Tag events. The key points are that it implements the NDEFMessageListener interface and receives notification of the presence of an NDEF smart tag in the NFC RF field via the onNDEFMessageDetected(NDEFMessage message) method. The NDEFMessage object passed to this method contains the encoded version of the Smart Tag content; in this example the parsing of the NDEF message itself is handled by another class, NfcReadNdefSmartTagUpdate, used to update the screen.
package nfc.sample.Ndef.Read;

import net.rim.device.api.io.nfc.ndef.NDEFMessage;
import net.rim.device.api.io.nfc.ndef.NDEFMessageListener;
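The body of the listener class follows the pattern described above. A minimal, hedged sketch is shown below; the screen class name and the constructor of the NfcReadNdefSmartTagUpdate helper are assumptions (they are not shown in the original listing), and the UI update is marshalled onto the event thread via net.rim.device.api.ui.UiApplication:
public class NfcReaderNdefSmartTagListener implements NDEFMessageListener {

    // Screen class name assumed for illustration
    private final NfcReadNdefSmartTagScreen _screen;

    public NfcReaderNdefSmartTagListener(NfcReadNdefSmartTagScreen screen) {
        this._screen = screen;
    }

    /*
     * Called by the NFC ReaderWriterManager when an NDEF message matching the
     * registered filter (TNF_WELL_KNOWN, "Sp") has been read from a tag
     */
    public void onNDEFMessageDetected(NDEFMessage message) {
        // NfcReadNdefSmartTagUpdate is assumed to be a Runnable that parses the
        // message and updates the screen
        UiApplication.getUiApplication().invokeLater(
            new NfcReadNdefSmartTagUpdate(_screen, message));
    }
}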
NDEFMessage and NDEFRecord
As seen above, an NDEFMessageListener implementation will receive an object of type NDEFMessage when its onNDEFMessageDetected(..) method is called. Before we go much further, it’s worth developing a basic understanding of the data structures we’re working with here.
An NDEFMessage object is a container for one or more NDEFRecord objects. Each NDEFRecord object is of a given type and contains a byte array which holds the record’s payload. The first NDEFRecord object in an NDEFMessage is special and can be used for filtering the types of NDEFMessage passed to a listener. We’ll deal with how to process the byte array payload later in this article and in fact we’ll learn that the payload itself may be an NDEF message and a container for NDEF records. Yes, it can get a little recursive.
Registering the NDEFMessageListener
In order to start receiving these events an application must register an instance of this interface which is typically done using a code fragment such as the following. An instance of NfcReaderNdefSmartTagListener (as defined above) is registered with the NFC ReaderWriterManager using the addNDEFMessageListener method.
/*
 * This registers with the NFC ReaderWriterManager indicating our
 * interest in receiving notification when smart tags come within range
 * of the NFC antenna
 */
private void registerListener(NDEFMessageListener listener) {
    ReaderWriterManager nfcManager;
    try {
        nfcManager = ReaderWriterManager.getInstance();
        nfcManager.addNDEFMessageListener(listener, NDEFRecord.TNF_WELL_KNOWN, "Sp", true);
    } catch (NFCException e) {
        ui_lbl_status_message.setText("ERROR: could not register NFCStatusListener");
    }
}
Notes on the Parameters used when Registering a Listener
The parameters passed through the addNDEFMessageListener method allow some filtering of the types of NDEF messages that will be presented to the listener.
The second parameter: NDEFRecord.TNF_WELL_KNOWN, indicates that we are interested in NDEF records of Type Name Format (TNF): "WELL_KNOWN". TNF_WELL_KNOWN includes NFC RTD (Record Type Definitions) types such as RTD_TEXT and RTD_URI meaning that they will contain URIs or TEXT suitably encoded according to the NFC RTD specification.
The third parameter: "Sp", identifies the record type as Smart Poster. So we're only interested in being notified of Smart Poster records that contain plain text or URIs. Note that record types are defined in a series of specifications from the NFC Forum. In this case, the relevant specification is entitled “NFCForum-SmartPoster_RTD_1.0”.
The final parameter "true" indicates that the application must be launched if it is not currently running when a smart tag of the relevant type is brought within range of the NFC antenna. This setting is persistent across device power resets. On presentation of a suitable smart tag to the device the application will be launched if it is not already running but note that it is necessary to re-establish the NDEFMessageListener during the start-up processing. If your NDEFMessageListener application is already running then its interface just gets a direct call.
It is important to register your listener within about 15 seconds of the application being started. Failure to do so will result in any queued message being lost.
There is no way to determine if the application was automatically started as a consequence of an NDEFMessageListener event rather than (say) the user having launched it from the home screen. In practice this usually doesn’t matter however. If your application is already running, it gets the NDEF message delivered to it immediately. If it’s not, it will be launched in exactly the same way as if a user had launched it from the home screen. On those rarer occassions where perhaps registration of the listener is conditional on something such as an explicit user choice, you’d need to maintain some state which could be checked in your main method to see whether or not you need to immediately re-establish the listener so it can receive the queued call back. The RuntimeStore is a good place to store such state.
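A hedged sketch of that approach; the store ID and method names below are arbitrary illustrations (nothing here is mandated by the API beyond net.rim.device.api.system.RuntimeStore itself):
// Arbitrary, application-unique ID for the "listener registered" flag
private static final long NFC_LISTENER_FLAG_ID = 0x1234567890ABCDEFL;

// Record the user's explicit choice to enable tag reading
public static void rememberListenerEnabled() {
    RuntimeStore store = RuntimeStore.getRuntimeStore();
    if (store.get(NFC_LISTENER_FLAG_ID) == null) {
        store.put(NFC_LISTENER_FLAG_ID, Boolean.TRUE);
    }
}

// Checked early in main(): re-establish the NDEFMessageListener only if the
// user previously opted in, so that a queued call back is not lost
public static boolean wasListenerEnabled() {
    return RuntimeStore.getRuntimeStore().get(NFC_LISTENER_FLAG_ID) != null;
}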
Unregistering an NDEFMessage Listener
It's important to realise that if you requested that your application be auto-started on the addNDEFMessageListener method then this setting will persistent until the application is unregistered. Unregistering an NDEFMessageListener is similar to registering one and the same parameters that were passed on the registration must be passed on the unregistration. The following code fragment demonstrates this.
nfcManager.removeNDEFMessageListener(NDEFRecord.TNF_WELL_KNOWN, "Sp");
You can see the same filter settings are specified here as in the original registration request.
Parsing the content of an NDEFMessage read from the Smart Poster Tag
Now that we've got an NDEFMessage from the Smart Poster Tag let's examine how to parse the contents and extract the information. We’ll examine the process of parsing a particular example NDEF message here as this should illustrate the general approach taken. To deal with other types of NDEF message you will need to be armed with the relevant specification(s) from the NFC Forum.
Recall that an NDEFMessage object can consist of a number of NDEFRecord objects. In the case of a Smart Poster Tag the NDEFMessage itself consists of a single NDEFRecord. This NDEFRecord object contains 5 key pieces of information: the record ID, the Type Name Format (TNF), the Type, the payload length, and the payload itself.
The meaning of these 5 attributes should be clear, with the possible exception of “Type Name Format (TNF)” and “Type”. Simply put, the Type field indicates the kind of data which is being carried in the payload field of the record. There are however several ways in which the Type value can be expressed, such as as an Absolute URI, a MIME type (RFC 2046), or using one of the “well known types” defined by the NFC Forum amongst others. The TNF field indicates which of these applies to the Type field in a record. The “NFC Data Exchange Format (NDEF) Technical Explanation” documents the full set of possible values which can be encoded within the TNF field.
The Payload of a typical Smart Poster Tag
There are currently no Java wrapper classes in the BlackBerry NFC API that could be used to parse the NDEF Messages and Records. So, to give you some idea of what sort of object is presented in the payload of a Smart Poster Tag and hence have some idea of what information needs to be parsed here is the payload of a Smart Poster Tag that contains:
The length of the payload is 59 bytes and the payload is as follows in both HEX and clear text and where the colour annotations are explained below:
Breaking down the record structure:
It's worth keeping this in mind when examining the code that follows which is used to parse this very payload. And do note that we’re only examining one example here. In this example we’re dealing with what’s known as a “short record” as opposed to a “normal record”. The NFC Forum specifications should be consulted for a more thorough appreciation of the structure of NDEF messages.
Parsing the NDEF Message Payload
The code below parses an NDEF message payload. The NDEF Message itself is in the variable _message.
It’s a fragment from a larger application that allows parsed information from the tag to be displayed on a screen of the application. The screen output, showing only part of the total output looks like this:
Here’s the code:
/*
 * The NDEF Message _message may consist of a number of NDEF records
 */
NDEFRecord[] records = _message.getRecords();

/*
 * This is the number of NDEF records in the NDEF message
 */
int numRecords = records.length;
A Smart Poster Tag generally contains an NDEFMEssage with a single NDEFRecord which itself contains sub-records that contain the details of the text and URL.
/*
 * Only unpick the message if it contains a non-zero number of records
 */
if (numRecords > 0) {
    /*
     * Work our way through each record in the message in turn.
     * For a Smart Poster Tag there ought to be only a single record
     */
    for (int j = numRecords - 1; j >= 0; j--) {
        byte[] payloadBytes = records[j].getPayload();
        StringBuffer hexPayload = new StringBuffer();
        StringBuffer characterPayload = new StringBuffer();
        String hexpair;
        int numberOfHexPairs = 8;
        for (int i = 0; i < payloadBytes.length; i++) {
            hexpair = byte2HexPair(payloadBytes[i]);
            characterPayload.append(byte2Ascii(payloadBytes[i]));
            hexPayload.append(hexpair + " "
                + (((i + 1) % numberOfHexPairs == 0) ? "\n" : ""));
            characterPayload.append((((i + 1) % numberOfHexPairs == 0) ? "\n" : ""));
        }
At this point the details of the single record that represents the Smart Poster content are extracted to be added to a field on the screen of this application. We have a Smart Poster Tag if the Type is “Sp”.
/*
 * Unpick the elements of the NDEF record. It should identify
 * WELL_KNOWN, TEXT or URI and "Sp" for smart poster.
 * Construct a string to display on the application's screen
 */
String record = "Root NDEF Record\nID: " + records[j].getId()
    + "\nType: " + records[j].getType()
    + "\nTNF: " + records[j].getTypeNameFormat()
    + "\nPayload Len: " + records[j].getPayload().length
    + "\nHex Payload: \n" + hexPayload
    + "\nCharacter Payload: \n" + characterPayload;
_screen.updateDataField(record);

/*
 * If we recognise this as a smart tag Type "Sp"
 */
if ("Sp".equals(records[j].getType())) {
On recognizing the Smart Poster type we convert the payload to an NDEFMessage object in its own right. This message itself will have NDEF Records that we will need to parse in turn.
try {
    NDEFMessage smartPosterMessage = new NDEFMessage(records[j].getPayload());
    NDEFRecord[] spRecords = smartPosterMessage.getRecords();
    int numSpRecords = spRecords.length;
    if (numSpRecords > 0) {
        for (int k = numSpRecords - 1; k >= 0; k--) {
            byte[] spPayloadBytes = spRecords[k].getPayload();
            hexPayload = new StringBuffer();
            characterPayload = new StringBuffer();
            for (int i = 0; i < spPayloadBytes.length; i++) {
                hexpair = byte2HexPair(spPayloadBytes[i]);
                characterPayload.append(byte2Ascii(spPayloadBytes[i]));
                hexPayload.append(hexpair + " "
                    + (((i + 1) % numberOfHexPairs == 0) ? "\n" : ""));
                characterPayload.append((((i + 1) % numberOfHexPairs == 0) ? "\n" : ""));
            }
Once again extract the type of record from the one we’re examining. We’re interested in TEXT and URI record types.
/*
 * Extract the record id, type, type name format,
 * payload length and payload
 */
String spRecordLog = "Subsidiary NDEF Record\nID: " + spRecords[k].getId()
    + "\nType: " + spRecords[k].getType()
    + "\nTNF: " + spRecords[k].getTypeNameFormat()
    + "\nPayload Len: " + spRecords[k].getPayload().length
    + "\nHex Payload: \n" + hexPayload
    + "\nCharacter Payload: \n" + characterPayload;
_screen.updateDataField(spRecordLog);

/*
 * This test checks for a TEXT record as a well known type
 */
if ((spRecords[k].getTypeNameFormat() == NDEFRecord.TNF_WELL_KNOWN)
        && "T".equals(spRecords[k].getType())) {
    /*
     * Well Known Type "T" is a TEXT record
     */
    StringBuffer textBuffer = new StringBuffer();
    StringBuffer encodingBuffer = new StringBuffer();
TEXT record types have a status byte which indicates the character set encoding of the text (either UTF-8 or UTF-16) and the length of an ISO/IANA language code attribute which follows the status byte.
// bit 7 indicates UTF-8 if 0, UTF-16 if 1. Bits 5..0 give the length of the
// IANA language code.
int statusByte = spPayloadBytes[0];
boolean is_utf16 = Utilities.isUtf16Encoded(statusByte);
int iana_language_code_len = Utilities.getIanaLanguageCodeLength(statusByte);

// extract the IANA language code as an ASCII string
byte[] iana_lang_code_bytes = new byte[iana_language_code_len];
if (iana_language_code_len > 0) {
    for (int m = 0; m < iana_language_code_len; m++) {
        iana_lang_code_bytes[m] = spPayloadBytes[m + 1];
    }
}
// the language code is always ASCII
langCode = new String(iana_lang_code_bytes, "US-ASCII");

// extract the text which may be UTF-8 or UTF-16 encoded depending on bit 7
// of the status byte; the text starts after the status byte and language code
byte[] text_bytes = new byte[spPayloadBytes.length - (iana_language_code_len + 1)];
int i = 0;
for (int m = iana_language_code_len + 1; m < spPayloadBytes.length; m++) {
    text_bytes[i] = spPayloadBytes[m];
    i++;
}
if (!is_utf16) {
    text = new String(text_bytes, "UTF-8");
} else {
    text = new String(text_bytes, "UTF-16");
}
_screen.updateDataField(">>> Language: " + langCode);
_screen.updateDataField(">>> Text: " + text);
The URL associated with the descriptive text is encoded in a URI record which is one of the well known record types in the NFC Forum documentation.
/*
 * This test checks for a URI record as a well known type
 */
} else if ((spRecords[k].getTypeNameFormat() == NDEFRecord.TNF_WELL_KNOWN)
        && "U".equals(spRecords[k].getType())) {
    /*
     * Well Known Type "U" is a URI record
     */
    StringBuffer urlBuffer = new StringBuffer();
    int urlOffset = 0;
As an efficiency measure for cards that have very little space on them, many common prefixes are encoded in a single byte field in the URI record called the URI Identifier Code. Here we look for a specific one, keeping things simple for the purposes of this article. In a real application, we’d probably code for all possible values of URI Identifier Code as defined in the “URI Record Type Definition Technical Specification” from the NFC Forum.
/*
 * The "http://www." prefix is represented by the URI identifier code 0x01
 */
if (spPayloadBytes[0] == (byte) 0x01) {
    urlBuffer.append("http://www.");
    urlOffset = 1;
}
// extract the URI which must be UTF-8 encoded
byte[] uri_bytes = new byte[spPayloadBytes.length - 1];
int i = 0;
for (int m = urlOffset; m < spPayloadBytes.length; m++) {
    uri_bytes[i] = spPayloadBytes[m];
    i++;
}
urlBuffer.append(new String(uri_bytes, "UTF-8"));
_screen.updateDataField(">>> URL: " + urlBuffer);
                } // end of the "U" (URI) record handling
            } // end of the loop over the Smart Poster's sub-records
        } else {
            // no sub-records in the Smart Poster payload
        }
    } catch (BadFormatException e) {
        Utilities.log("XXXX " + e.getClass().getName() + ":" + e.getMessage());
    } catch (NFCException e) {
        Utilities.log("XXXX " + e.getClass().getName() + ":" + e.getMessage());
    }
    } // end of the "Sp" (Smart Poster) test
    } // end of the loop over the root records
} else {
    _screen.updateDataField("This is not the tag you're looking for!\n"
        + "It contains no records!\n"
        + "You should search elsewhere!");
}
The following is just a simple way of transforming a byte into a displayable character.
/*
 * Helper to represent a byte as a printable character
 */
private char byte2Ascii(byte b) {
    char character = '.';
    if ((20 <= b) && (126 >= b)) {
        character = (char) b;
    }
    return character;
}
The following is just a simple way of displaying a byte as a displayable hexadecimal pair.
/*
 * Helper to represent a byte as a printable hex pair.
 */
private String byte2HexPair(byte b) {
    String hex;
    hex = "00" + Integer.toHexString(b);
    hex = hex.substring(hex.length() - 2).toUpperCase();
    return hex;
}
The following methods extract data from the status byte which is found in the text record type:
private static final int UTF_16_TEXT = 0x80;
private static final int IANA_LANGUAGE_CODE_LEN_MASK = 0x1F;

public static boolean isUtf16Encoded(int status_byte) {
    // bit 7 set means the text is UTF-16 encoded; the other bits must be
    // masked off before the comparison
    return ((status_byte & UTF_16_TEXT) == UTF_16_TEXT);
}

public static int getIanaLanguageCodeLength(int status_byte) {
    return status_byte & IANA_LANGUAGE_CODE_LEN_MASK;
}
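As a short worked example (ours, not from the original article), consider the status byte of an English, UTF-8 encoded Text record: it is simply the length of the "en" language code, with bit 7 clear. The fragment below shows how the two helpers above decode it.

// Hypothetical walk-through of the status byte helpers above.
int statusByte = 0x02;                                          // UTF-8 (bit 7 clear), "en" is 2 bytes long
boolean utf16 = Utilities.isUtf16Encoded(statusByte);           // false
int langLen = Utilities.getIanaLanguageCodeLength(statusByte);  // 2

int statusByte16 = 0x82;                                        // UTF-16 (bit 7 set), same 2-byte language code
boolean utf16b = Utilities.isUtf16Encoded(statusByte16);        // true
int langLen16 = Utilities.getIanaLanguageCodeLength(statusByte16); // 2 (0x82 & 0x1F)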
So, in summary, you can see that parsing a Smart Poster Tag is a fairly straightforward, if tedious, process: it involves working through a byte [] according to the NFC Forum specifications.
Writing a Smart Poster Tag
The process of writing a Smart Poster Tag is also fairly straightforward. However, we want to highlight a difference in the approach compared with reading a Smart Poster Tag.
If we were to use net.rim.device.api.io.nfc.ndef.NDEFMessageListener to express our interest in writing a tag, any Smart Poster Tag would be read automatically as soon as it entered the RF field of the BlackBerry device. In this example we want to avoid this and leave the decision to write the tag to the application.
So, instead of using net.rim.device.api.io.nfc.ndef.NDEFMessageListener to register an interest in tags being presented, we will use net.rim.device.api.io.nfc.readerwriter.DetectionListener.
Once we have recognized a suitable smart tag, we will immediately write some text and a URL to the tag using the standard NDEFMessage and NDEFRecord formats.
Implementing a DetectionListener
The following code fragment shows the implementation of a typical listener for the detection of smart cards or tags being presented through the device’s NFC RF field.
The key points are that it implements the DetectionListener interface and receives notification of the presence of a smart card or tag in the NFC RF field via the onTargetDetected (Target smartTagTarget) method.
Notice that the Target object passed to this method represents a smart card or tag. This is different from the NDEFMessageListener, where an actual NDEFMessage object, as read from the card, is presented.
package nfc.sample.Ndef.Write;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.microedition.io.Connector;
import net.rim.device.api.io.nfc.NFCException;
import net.rim.device.api.io.nfc.ndef.NDEFMessage;
import net.rim.device.api.io.nfc.ndef.NDEFRecord;
import net.rim.device.api.io.nfc.ndef.NDEFTagConnection;
import net.rim.device.api.io.nfc.readerwriter.DetectionListener;
import net.rim.device.api.io.nfc.readerwriter.Target;

public class NfcWriteNdefSmartTagListener implements DetectionListener {

    // the application screen used for input and logging; the exact screen
    // class name was truncated in the original listing, so the name used
    // here is a placeholder
    private NfcWriteNdefSmartTagScreen _screen;

    public NfcWriteNdefSmartTagListener() {
    }

    public NfcWriteNdefSmartTagListener(NfcWriteNdefSmartTagScreen screen) {
        this();
        this._screen = screen;
        Utilities.log("XXXX NfcWriteNdefSmartTagListener in constructor");
    }
At this point we have access to a smart card or tag Target object. The Target object tells us the type of contactless protocol to be used when communicating with the card or tag it represents.

This allows an application to be aware of tag types other than NDEF smart tags. In fact, it will be notified of the presence of other cards provided they adhere to ISO 14443-3 (Type A or B) or ISO 14443-4 (Type A or B). Cards that conform to none of these three target types (NDEF tag, ISO 14443-3, ISO 14443-4) are not detectable for the purpose of interacting with a BlackBerry Java application.
/*
 * This is where we get informed of a tag in the proximity of
 * NFC antenna
 */
public void onTargetDetected(Target smartTagTarget) {
}
}
That is not to say that other card types conforming to other standards can’t be routed from an external reader to either the UICC or eSE using Card Emulation. Card Emulation will be dealt with in a later article.
Registering the DetectionListener
In order to start receiving these events, an application must register an instance of this class which is typically done using a code fragment such as the following. An instance of nfcWriteNdefSmartTagListener which was defined in the last section is registered using the addDetectionListener method of the ReaderWriterManager.
/*
 * This registers with the NFC Reader Writer Manager indicating our
 * interest in receiving notification when smart tags come within range
 * of the NFC antenna
 */
private void registerListener(NfcWriteNdefSmartTagListener nfcWriteNdefSmartTagListener) {
    ReaderWriterManager nfcManager;
    try {
        nfcManager = ReaderWriterManager.getInstance();
        nfcManager.addDetectionListener(
                nfcWriteNdefSmartTagListener, new int[]{Target.NDEF_TAG});
    } catch (NFCException e) {
        Utilities.log("XXXX " + e.getClass().getName() + ":" + e.getMessage());
    }
}
The second parameter to addDetectionListener is a filter that instructs the ReaderWriterManager to deliver only detection events that match the types identified in the array of int. In this case we have specified that we are interested only in NDEF_TAG events. Providing an empty array here would mean that the target type of each event would have to be determined in the listener itself when the card is presented.
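If the application were interested in more than just NDEF tags, the same registration call could pass a wider filter. The variant below is our own sketch, not from the original article, and it assumes that Target.ISO_14443_3 and Target.ISO_14443_4 are the type constants for those protocols; it reuses the nfcManager and nfcWriteNdefSmartTagListener variables from the listing above.

/*
 * Sketch (not from the original article): widen the filter so the listener
 * is notified of ISO 14443-3 and ISO 14443-4 targets as well as NDEF tags.
 * The ISO_14443_3 and ISO_14443_4 constants are assumed here.
 */
nfcManager.addDetectionListener(
        nfcWriteNdefSmartTagListener,
        new int[]{Target.NDEF_TAG, Target.ISO_14443_3, Target.ISO_14443_4});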
Building the NDEF Message Payload
Here is an implementation of onTargetDetected():
public void onTargetDetected(Target smartTagTarget) {
    NDEFTagConnection tagConnection = null;
    try {
The process of writing a Smart Poster Tag to the card we've just detected (it has to be a Target.NDEF_TAG type card, because that was specified when the listener was registered) consists of creating a new NDEFMessage object (via createSmartPosterTag(), described below), obtaining a connection to the target cast as an NDEFTagConnection object, and then write()'ing the tag to the card.
        NDEFMessage smartPosterTag = createSmartPosterTag();
        tagConnection = (NDEFTagConnection) Connector.open(
                smartTagTarget.getUri(Target.NDEF_TAG));
        tagConnection.write(smartPosterTag);
    } catch (NFCException e) {
        Utilities.log("XXXX " + e.getClass().getName() + ":" + e.getMessage());
    } catch (IOException e) {
        Utilities.log("XXXX " + e.getClass().getName() + ":" + e.getMessage());
    }
}
This is where the NDEFMessage that will be written to the Smart Card is assembled from information provided by the user.
private NDEFMessage createSmartPosterTag() throws IOException {

    NDEFMessage rootMessage = new NDEFMessage();  // (SP, (TEXT, URL)) message
    NDEFMessage ndefMessage = new NDEFMessage();  // (TEXT, URL) message
    NDEFRecord rootRecord = new NDEFRecord();     // Smart Poster record
    NDEFRecord tagTitleRecord = new NDEFRecord(); // Tag Title TEXT record
    NDEFRecord tagUrlRecord = new NDEFRecord();   // Tag URL record

    ByteArrayOutputStream titlePayload = new ByteArrayOutputStream(); // to build title
    ByteArrayOutputStream urlPayload = new ByteArrayOutputStream();   // to build URL
Start constructing the records that will be part of the NDEFMessage from the basic elements upwards. Record 0 represents the text description of the tag.
private final String URL_TEXT_LOCALE = "en-US";

.....

    /*
     * ================ Record 0 ===========================================
     *
     * This is the NDEF record that represents the title associated
     * with the URL that will form the URL part of the Smart Poster Tag
     */
    titlePayload.write((byte) URL_TEXT_LOCALE.length());     // status byte: char encoding
                                                              // + length of locale
    titlePayload.write(URL_TEXT_LOCALE.getBytes("US-ASCII")); // locale encoding

    /*
     * This is the text to be associated with the Smart Poster Tag
     */
    titlePayload.write(_screen.getTxtField().getBytes("UTF-8"));
    titlePayload.flush();

    /*
     * Construct the record itself
     */
    tagTitleRecord.setId("0");                              // record Id
    tagTitleRecord.setType(NDEFRecord.TNF_WELL_KNOWN, "T"); // It's a TEXT type
    tagTitleRecord.setPayload(titlePayload.toByteArray());  // construct the record
The next record to construct is the one that contains the URL associated with the descriptive text.
    /*
     * ================ Record 1 ===========================================
     *
     * This is the NDEF record that represents the URL associated
     * with the title that will form the Text part of the Smart Poster Tag
     */
    urlPayload.write((byte) 0x01);                      // coded abbreviation for http://www.
    urlPayload.write(_screen.getUrlField().getBytes()); // The rest of the URL
    urlPayload.flush();

    /*
     * Construct the record itself
     */
    tagUrlRecord.setId("1");                              // record Id
    tagUrlRecord.setType(NDEFRecord.TNF_WELL_KNOWN, "U"); // It's a URI type
    tagUrlRecord.setPayload(urlPayload.toByteArray());    // construct the record
Now combine the URI and Text records that we have constructed into an NDEFMessage object containing two records identified as records “0” and “1”.
    /*
     * ================ Construct an NDEF Message ==========================
     *
     * This NDEF Message comprises the Title and URL records (TEXT, URL)
     */
    ndefMessage.setRecords(new NDEFRecord[] {tagTitleRecord, tagUrlRecord});
Once we have the single NDEF Message object representing the URL and the descriptive text it needs to be wrapped inside a Smart Poster Tag itself which is in fact just another NDEFRecord. This demonstrates that NDEF Records and NDEF Messages may be nested as long as they conform to the NFC Forum NDEF Message structures.
    /*
     * ================ Wrap the message as a Smart Poster Tag ============
     *
     * What we have now is a single NDEF message with two records, a
     * URL and some text associated with it. We now need to make that into a
     * Smart Poster Tag, which is a well known type: "Sp"
     */
    rootRecord.setType(NDEFRecord.TNF_WELL_KNOWN, "Sp"); // Smart Poster type
    rootRecord.setPayload(ndefMessage.getBytes());       // construct the record
Last of all, create a top-level NDEF Message representing the actual Smart Poster Tag.
    /*
     * ================ Construct an NDEF Message ==========================
     *
     * This NDEF message contains a single record encoding the Smart Poster
     * Tag itself
     */
    rootMessage.setRecords(new NDEFRecord[] {rootRecord}); // (SP, (TEXT, URL))

    /*
     * Return the Smart Poster Tag
     */
    return rootMessage;
}
This example probably demonstrates that it is easier to construct a Smart Poster Tag than to read and parse one. Reading one involves navigating around a byte [] object and calculating offsets. Writing a Smart Poster Tag is simpler if the tag is built in reverse order, from the basic elements upwards. In this case it simply involves appending data to byte [] objects and converting these to NDEF Records and NDEF Messages, making appropriate use of the NDEFMessage() and NDEFRecord() constructors.
Summary
Hopefully this article has given you some insight into how you can use NFC Smart Poster Tags as part of your BlackBerry application. The APIs are available today so download the latest BlackBerry® JDK from
APIs are documented here if you'd like to browse further:
The NFC Forum Home Page can be found here:
Other BlackBerry developer NFC articles
See the NFC Article Index for the list of other articles in this series
http://supportforums.blackberry.com/t5/Java-Development/Reading-and-Writing-NFC-Smart-Tags/ta-p/1379453
#include <CoinWarmStartPrimalDual.hpp>
Inheritance diagram for CoinWarmStartPrimalDualDiff:
This class exists in order to hide from the world the details of calculating and representing a `diff' between two CoinWarmStartPrimalDual objects. For convenience, assignment, cloning, and deletion are visible to the world, and default and copy constructors are made available to derived classes. Knowledge of the rest of this structure, and of generating and applying diffs, is restricted to the friend functions CoinWarmStartPrimalDual::generateDiff() and CoinWarmStartPrimalDual::applyDiff().
The actual data structure is a pair of vectors, diffNdxs_ and diffVals_.
Definition at line 140 of file CoinWarmStartPrimalDual.hpp.
Destructor.
Definition at line 157 of file CoinWarmStartPrimalDual.hpp.
Default constructor.
This is protected (rather than private) so that derived classes can see it when they make their default constructor protected or private.
Definition at line 167 of file CoinWarmStartPrimalDual.hpp.
Copy constructor.
For convenience when copying objects containing CoinWarmStartPrimalDualDiff objects. But consider whether you should be using clone() to retain polymorphism.
This is protected (rather than private) so that derived classes can see it when they make their copy constructor protected or private.
Definition at line 179 of file CoinWarmStartPrimalDual.hpp.
`Virtual constructor'. To be used when retaining polymorphism is important.
Implements CoinWarmStartDiff.
Definition at line 151 of file CoinWarmStartPrimalDual.hpp.
References CoinWarmStartPrimalDualDiff().
Clear the data.
Make it appear as if the diff was just created using the default constructor.
Definition at line 187 of file CoinWarmStartPrimalDual.hpp.
References CoinWarmStartVectorDiff< T >::clear(), dualDiff_, and primalDiff_.
Definition at line 192 of file CoinWarmStartPrimalDual.hpp.
References dualDiff_, primalDiff_, and CoinWarmStartVectorDiff< T >::swap().
These two differences describe the differences in the primal and in the dual vector.
Definition at line 205 of file CoinWarmStartPrimalDual.hpp.
Referenced by clear(), and swap().
Definition at line 206 of file CoinWarmStartPrimalDual.hpp.
Referenced by clear(), and swap().
http://www.coin-or.org/Doxygen/Smi/class_coin_warm_start_primal_dual_diff.html
#include <CoinWarmStartDual.hpp>
Inheritance diagram for CoinWarmStartDualDiff:
Definition at line 99 of file CoinWarmStartDual.hpp.
Destructor.
Definition at line 118 of file CoinWarmStartDual.hpp.
Default constructor.
This is protected (rather than private) so that derived classes can see it when they make their default constructor protected or private.
Definition at line 128 of file CoinWarmStartDual.hpp.
Copy constructor.
Definition at line 140 of file CoinWarmStartDual.hpp.
Standard constructor.
Definition at line 151 of file CoinWarmStartDual.hpp.
`Virtual constructor'
Implements CoinWarmStartDiff.
Definition at line 103 of file CoinWarmStartDual.hpp.
References CoinWarmStartDualDiff().
Assignment.
Definition at line 109 of file CoinWarmStartDual.hpp.
The difference in the dual vector is simply the difference in a vector.
Definition at line 159 of file CoinWarmStartDual.hpp.
Referenced by operator=().
http://www.coin-or.org/Doxygen/Smi/class_coin_warm_start_dual_diff.html
Cut Generator for FBBT fixpoint. More...
#include <CouenneFixPoint.hpp>
Cut Generator for FBBT fixpoint.
Definition at line 27 of file CouenneFixPoint.hpp.
copy constructor
destructor
clone method (necessary for the abstract CglCutGenerator class)
Definition at line 42 of file CouenneFixPoint.hpp.
References CouenneFixPoint().
the main CglCutGenerator
Add list of options to be read from file.
Create a single cut.
should we use an extended model or a more compact one?
Definition at line 56 of file CouenneFixPoint.hpp.
pointer to the CouenneProblem representation
Definition at line 59 of file CouenneFixPoint.hpp.
Is this the first call?
Definition at line 62 of file CouenneFixPoint.hpp.
CPU time.
Definition at line 65 of file CouenneFixPoint.hpp.
Number of actual runs.
Definition at line 68 of file CouenneFixPoint.hpp.
Number of bounds tightened.
Definition at line 71 of file CouenneFixPoint.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_couenne_fix_point.html
Class containing a solution with infeasibility evaluation. More...
#include <CouenneFPpool.hpp>
Class containing a solution with infeasibility evaluation.
Definition at line 31 of file CouenneFPpool.hpp.
CouenneProblem-aware constructor.
independent constructor --- other data must be provided, since there is no CouenneProblem available to compute them
copy constructor
destructor
assignment
returns size
Definition at line 70 of file CouenneFPpool.hpp.
returns vector
Definition at line 73 of file CouenneFPpool.hpp.
basic comparison procedure -- what to compare depends on user's choice
Referenced by Couenne::operator<().
solution
Definition at line 35 of file CouenneFPpool.hpp.
number of variables (for independence from CouenneProblem)
Definition at line 36 of file CouenneFPpool.hpp.
number of NL infeasibilities
Definition at line 37 of file CouenneFPpool.hpp.
number of integer infeasibilities
Definition at line 38 of file CouenneFPpool.hpp.
objective function value
Definition at line 39 of file CouenneFPpool.hpp.
maximum NL infeasibility
Definition at line 40 of file CouenneFPpool.hpp.
maximum integer infeasibility
Definition at line 41 of file CouenneFPpool.hpp.
This is a temporary copy, not really a solution holder.
As a result, all the above members are meaningless for copied solutions.
Definition at line 47 of file CouenneFPpool.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_couenne_f_psolution.html
Pool of solutions. More...
#include <CouenneFPpool.hpp>
Pool of solutions.
Definition at line 96 of file CouenneFPpool.hpp.
simple constructor (empty pool)
Definition at line 106 of file CouenneFPpool.hpp.
References Couenne::comparedTerm_.
copy constructor
assignment
return the main object in this class
Definition at line 116 of file CouenneFPpool.hpp.
finds, in pool, solution x closest to sol; removes it from the pool and overwrites it to sol
Pool.
Definition at line 101 of file CouenneFPpool.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_couenne_f_ppool.html
Cut Generator for linear convexifications. More...
#include <CouenneDisjCuts.hpp>
Cut Generator for linear convexifications.
Definition at line 33 of file CouenneDisjCuts.hpp.
copy constructor
destructor
clone method (necessary for the abstract CglCutGenerator class)
Definition at line 110 of file CouenneDisjCuts.hpp.
References CouenneDisjCuts().
return pointer to symbolic problem
Definition at line 114 of file CouenneDisjCuts.hpp.
References couenneCG_.
the main CglCutGenerator
Add list of options to be read from file.
Provide Journalist.
Definition at line 126 of file CouenneDisjCuts.hpp.
get all disjunctions
separate couenne cuts on both sides of single disjunction
generate one disjunctive cut from one CGLP
check if (column!) cuts compatible with solver interface
compute smallest box containing both left and right boxes.
create single osicolcut disjunction
utility to merge vectors into one
our own applyColCuts
our own applyColCut, single cut
add CGLP columns to solver interface; return number of columns added (for later removal)
pointer to symbolic repr. of constraint, variables, and bounds
Definition at line 38 of file CouenneDisjCuts.hpp.
Referenced by couenneCG().
number of cuts generated at the first call
Definition at line 41 of file CouenneDisjCuts.hpp.
total number of cuts generated
Definition at line 44 of file CouenneDisjCuts.hpp.
separation time (includes generation of problem)
Definition at line 47 of file CouenneDisjCuts.hpp.
Record obj value at final point of CouenneConv.
Definition at line 50 of file CouenneDisjCuts.hpp.
nonlinear solver interface as used within Bonmin (used at first Couenne pass of each b&b node)
Definition at line 54 of file CouenneDisjCuts.hpp.
Branching scheme (if strong, we can use SB candidates).
Definition at line 57 of file CouenneDisjCuts.hpp.
Is branchMethod_ referred to a strong branching scheme?
Definition at line 60 of file CouenneDisjCuts.hpp.
SmartPointer to the Journalist.
Definition at line 63 of file CouenneDisjCuts.hpp.
Number of disjunction to consider at each separation.
Definition at line 66 of file CouenneDisjCuts.hpp.
Initial percentage of objects to use for generating cuts, in [0,1].
Definition at line 69 of file CouenneDisjCuts.hpp.
Initial number of objects to use for generating cuts.
Definition at line 72 of file CouenneDisjCuts.hpp.
Depth of the BB tree where start decreasing number of objects.
Definition at line 75 of file CouenneDisjCuts.hpp.
Depth of the BB tree where stop separation.
Definition at line 78 of file CouenneDisjCuts.hpp.
only include active rows in CGLP
Definition at line 81 of file CouenneDisjCuts.hpp.
only include active columns in CGLP
Definition at line 84 of file CouenneDisjCuts.hpp.
add previous disj cut to current CGLP?
Definition at line 87 of file CouenneDisjCuts.hpp.
maximum CPU time
Definition at line 90 of file CouenneDisjCuts.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_couenne_disj_cuts.html
Cut Generator that uses relationships between auxiliaries. More...
#include <CouenneCrossConv.hpp>
Cut Generator that uses relationships between auxiliaries.
Definition at line 138 of file CouenneCrossConv.hpp.
copy constructor
destructor
clone method (necessary for the abstract CglCutGenerator class)
Definition at line 154 of file CouenneCrossConv.hpp.
References CouenneCrossConv().
the main CglCutGenerator
Add list of options to be read from file.
Set up data structure to detect redundancies.
Journalist.
Definition at line 171 of file CouenneCrossConv.hpp.
pointer to the CouenneProblem representation
Definition at line 174 of file CouenneCrossConv.hpp.
http://www.coin-or.org/Doxygen/Couenne/class_couenne_1_1_couenne_cross_conv.html