This page describes how to make use of WinRT APIs in your Unity project for HoloLens. WinRT APIs will only be used in Unity project builds that target Windows 8, Windows 8.1, or the Universal Windows Platform; any code that you write in Unity scripts that targets WinRT APIs must be conditionally included for only those builds. This is done using the NETFX_CORE or WINDOWS_UWP preprocessor definitions. This rule applies to using statements as well as other code.

The following code snippet is from the Unity manual page for Windows Store Apps: WinRT API in C# scripts. In this example, an advertising ID is returned, but only on Windows 8.0 or higher target builds:

using UnityEngine;

public class WinRTAPI : MonoBehaviour {
    void Update() {
        var adId = GetAdvertisingId();
        // ...
    }

    string GetAdvertisingId() {
#if NETFX_CORE
        return Windows.System.UserProfile.AdvertisingManager.AdvertisingId;
#else
        return "";
#endif
    }
}

When you double-click a script in the Unity editor, it will by default launch your script in an editor project. The WinRT APIs will appear to be unknown for two reasons: NETFX_CORE is not defined in this environment, and the project does not reference the Windows Runtime. If you use the recommended export and build settings, and edit the scripts in that project instead, it will define NETFX_CORE and also include a reference to the Windows Runtime; with this configuration in place, WinRT APIs will be available for IntelliSense. Note that your Unity C# project can also be used to debug through your scripts using F5 remote debugging in Visual Studio. If you do not see IntelliSense working the first time that you open your Unity C# project, close the project and re-open it. IntelliSense should start working.
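The same conditional pattern applies to the WINDOWS_UWP define mentioned above. The snippet below is an illustrative sketch, not taken from the Unity manual: the class name and GetDeviceFamily helper are hypothetical, and it assumes the Windows.System.Profile.AnalyticsInfo API is reachable from a UWP build.

using UnityEngine;

public class DeviceFamilyReporter : MonoBehaviour {
    void Start() {
        // Logs something like "Windows.Holographic" on HoloLens; empty string in the editor.
        Debug.Log(GetDeviceFamily());
    }

    string GetDeviceFamily() {
#if WINDOWS_UWP
        return Windows.System.Profile.AnalyticsInfo.VersionInfo.DeviceFamily;
#else
        return "";
#endif
    }
}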
https://developer.microsoft.com/en-us/windows/holographic/using_the_windows_namespace_with_unity_apps_for_hololens
CC-MAIN-2017-04
en
refinedweb
How to use the managed client for Azure Mobile Apps

Overview

This guide shows you how to perform common scenarios using the managed client library for Azure App Service Mobile Apps for Windows and Xamarin apps. If you are new to Mobile Apps, you should consider first completing the Azure Mobile Apps quickstart tutorial. In this guide, we focus on the client-side managed SDK. To learn more about the server-side SDKs for Mobile Apps, see the documentation for the .NET Server SDK or the Node.js Server SDK.

Reference documentation

The reference documentation for the client SDK is located here: Azure Mobile Apps .NET client reference. You can also find several client samples in the Azure-Samples GitHub repository.

Supported Platforms

The .NET managed SDK supports the following platforms:
- Xamarin Android releases for API 19 through 24 (KitKat through Nougat)
- Xamarin iOS releases for iOS versions 8.0 and later
- Universal Windows Platform
- Windows Phone 8.1
- Windows Phone 8.0, except for Silverlight applications

The "server-flow" authentication uses a WebView for the presented UI. If the device is not able to present a WebView UI, then other methods of authentication are needed. This SDK is thus not suitable for Watch-type or similarly restricted devices.

Setup and Prerequisites

We assume that you have already created and published your Mobile App backend project, which includes at least one table. In the code used in this topic, the table is named TodoItem and it has the following columns: Id, Text, and Complete. This table is the same table created when you complete the Azure Mobile Apps quickstart. The corresponding typed client-side type in C# is the following class:

public class TodoItem
{
    public string Id { get; set; }

    [JsonProperty(PropertyName = "text")]
    public string Text { get; set; }

    [JsonProperty(PropertyName = "complete")]
    public bool Complete { get; set; }
}

The JsonPropertyAttribute is used to define the PropertyName mapping between the client field and the table field. To learn how to create tables in your Mobile Apps backend, see the .NET Server SDK topic or the Node.js Server SDK topic. If you created your Mobile App backend in the Azure portal using the QuickStart, you can also use the Easy tables setting in the Azure portal.

How to: Install the managed client SDK package

Use one of the following methods to install the managed client SDK package for Mobile Apps from NuGet:
- Visual Studio: Right-click your project, click Manage NuGet Packages, search for the Microsoft.Azure.Mobile.Client package, then click Install.
- Xamarin Studio: Right-click your project, click Add > Add NuGet Packages, search for the Microsoft.Azure.Mobile.Client package, and then click Add Package.

In your main activity file, remember to add the following using statement:

using Microsoft.WindowsAzure.MobileServices;

How to: Work with debug symbols in Visual Studio

The symbols for the Microsoft.Azure.Mobile namespace are available on SymbolSource. Refer to the SymbolSource instructions to integrate SymbolSource with Visual Studio.

Create the Mobile Apps client

The following code creates the MobileServiceClient object that is used to access your Mobile App backend.

var client = new MobileServiceClient("MOBILE_APP_URL");

In the preceding code, replace MOBILE_APP_URL with the URL of the Mobile App backend, which is found in the blade for your Mobile App backend in the Azure portal. The MobileServiceClient object should be a singleton.
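As a minimal sketch of the singleton recommendation above (the App wrapper class, field name, and placeholder URL are illustrative assumptions, not part of the SDK), a single shared client can be exposed like this, which also matches the App.MobileService usage that appears later in this article:

using Microsoft.WindowsAzure.MobileServices;

public static class App
{
    // Replace the placeholder with your Mobile App backend URL from the Azure portal.
    public static readonly MobileServiceClient MobileService =
        new MobileServiceClient("https://MOBILE_APP_URL");
}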
Work with Tables The following section details how to search and retrieve records and modify the data within the table. The following topics are covered: - Create a table reference - Query data - Filter returned data - Sort returned data - Return data in pages - Select specific columns - Look up a record by Id - Dealing with untyped queries - Inserting data - Updating data - Deleting data - Conflict Resolution and Optimistic Concurrency - Binding to a Windows User Interface - Changing the Page Size How to: Create a table reference All the code that accesses or modifies data in a backend table calls functions on the MobileServiceTable object. Obtain a reference to the table by calling the GetTable method, as follows: IMobileServiceTable<TodoItem> todoTable = client.GetTable<TodoItem>(); The returned object uses the typed serialization model. An untyped serialization model is also supported. The following example creates a reference to an untyped table: // Get an untyped table reference IMobileServiceTable untypedTodoTable = client.GetTable("TodoItem"); In untyped queries, you must specify the underlying OData query string. How to: Query data from your Mobile App This section describes how to issue queries to the Mobile App backend, which includes the following functionality: - Filter returned data - Sort returned data - Return data in pages - Select specific columns - Look up data by ID Note A server-driven page size is enforced to prevent all rows from being returned. Paging keeps default requests for large data sets from negatively impacting the service. To return more than 50 rows, use the Skip and Take method, as described in Return data in pages. How to: Filter returned data backend by using message inspection software, such as browser developer tools or Fiddler. If you look at the request URI, notice that the query string is modified: GET /tables/todoitem?$filter=(complete+eq+false) HTTP/1.1 This OData request is translated into an SQL query by the Server SDK: SELECT * FROM TodoItem WHERE ISNULL(complete, 0) = 0 The function that is passed to the Where method can have an arbitrary number of conditions. // This query filters out completed TodoItems where Text isn't null List<TodoItem> items = await todoTable .Where(todoItem => todoItem.Complete == false && todoItem.Text != null) .ToListAsync(); This example would be translated into an SQL query by the Server SDK: SELECT * FROM TodoItem WHERE ISNULL(complete, 0) = 0 AND ISNULL(text, 0) = 0 This query can also be split into multiple clauses: OData subset. Operations include: - Relational operators (==, !=, <, <=, >, >=), - Arithmetic operators (+, -, /, *, %), - Number precision (Math.Floor, Math.Ceiling), - String functions (Length, Substring, Replace, IndexOf, StartsWith, EndsWith), - Date properties (Year, Month, Day, Hour, Minute, Second), - Access properties of an object, and - Expressions combining any of these operations. When considering what the Server SDK supports, you can consider the OData v3 Documentation. How to: Sort returned data(); How to: Return data in pages By default, the backend results. This query produces(); The IncludeTotalCount method requests the total count for all the records that would have been returned, ignoring any paging/limit clause specified: query = query.IncludeTotalCount(); In a real world app, you can use queries similar to the preceding example with a pager control or comparable UI to navigate between pages. 
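Several of the query snippets in this section did not survive extraction (the sorting and paging examples in particular). The following is a hedged sketch that strings the described pieces together - Where for filtering, OrderBy for sorting, and Skip/Take for paging - against the typed todoTable reference from earlier; treat the exact composition and page size as illustrative rather than a copy of the original article:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public static class TodoQueries
{
    // Returns the second page (rows 21-40) of incomplete items, ordered by Text.
    public static Task<List<TodoItem>> GetIncompletePageAsync(IMobileServiceTable<TodoItem> todoTable)
    {
        return todoTable
            .Where(todoItem => todoItem.Complete == false)
            .OrderBy(todoItem => todoItem.Text)
            .Skip(20)
            .Take(20)
            .ToListAsync();
    }
}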
Note To override the 50-row limit in a Mobile App backend, you must also apply the EnableQueryAttribute to the public GET method and specify the paging behavior. When applied to the method, the following sets the maximum returned rows to 1000: [EnableQuery(MaxTop=1000)] How to: Select specific columns keep chaining them. Each chained call affects(); How to: Look up data by ID"); How to: Execute untyped queries When executing a query using an untyped table object, you must explicitly specify the OData query string by calling ReadAsync, as in the following example: // Newtonsoft Json.NET, see the Json.NET site. How to: Insert data into a Mobile App backend All client types must contain a member named Id, which is by default a string. This Id is required to perform CRUD operations and for offline sync. The following code illustrates how to use the InsertAsync method to insert new rows into a table. The parameter contains the data to be inserted as a .NET object. await todoTable.InsertAsync(todoItem); If a unique custom ID value is not included in the todoItem during an insert, a GUID is generated by the server. You can retrieve the generated Id by inspecting the object after the call returns. To insert untyped data, you may take advantage of Json.NET:); Working with ID values Mobile Apps supports unique custom string values for the table's id column. A string value, the Mobile App backend generates a unique value for the ID. You can use the Guid.NewGuid method to generate your own ID values, either on the client or in the backend. JObject jo = new JObject(); jo.Add("id", Guid.NewGuid().ToString("N")); How to: Modify data in a Mobile App backend The following code illustrates how to use the UpdateAsync method to update an existing record with the same ID with new information. The parameter contains the data to be updated as a .NET object. await todoTable.UpdateAsync(todoItem); To update untyped data, you may take advantage of Json.NET as follows: JObject jo = new JObject(); jo.Add("id", "37BBF396-11F0-4B39-85C8-B319C729AF6D"); jo.Add("Text", "Hello World"); jo.Add("Complete", false); var inserted = await table.UpdateAsync(jo); An id field must be specified when making an update. The backend uses the id field to identify which row to update. The id field can be obtained from the result of the InsertAsync call. An ArgumentException is raised if you try to update an item without providing the id value. How to: Delete data in a Mobile App backend The following code illustrates how to use the DeleteAsync method to delete an existing instance. The instance is identified by the id field set on the todoItem. await todoTable.DeleteAsync(todoItem); To delete untyped data, you may take advantage of Json.NET as follows: JObject jo = new JObject(); jo.Add("id", "37BBF396-11F0-4B39-85C8-B319C729AF6D"); await table.DeleteAsync(jo); When you make a delete request, an ID must be specified. Other properties are not passed to the service or are ignored at the service. The result of a DeleteAsync call is usually null. The ID to pass in can be obtained from the result of the InsertAsync call. A MobileServiceInvalidOperationException is thrown when you try to delete an item without specifying the id field. How to: Use Optimistic Concurrency for conflict resolution Two or more clients may write changes to the same item at the same time. Without conflict detection, the last write would overwrite any previous updates. 
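Before the optimistic-concurrency discussion continues below, here is a hedged sketch that ties together the insert, lookup, update, and delete calls described in this section (the lookup-by-Id example itself was lost in extraction). It reuses the TodoItem type and a typed table reference; the helper name is illustrative:

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public static class TodoCrud
{
    public static async Task RoundTripAsync(IMobileServiceTable<TodoItem> todoTable)
    {
        // Insert: the backend generates the Id when no custom value is supplied.
        var item = new TodoItem { Text = "Hello world", Complete = false };
        await todoTable.InsertAsync(item);

        // Look up the record again by its Id.
        TodoItem fetched = await todoTable.LookupAsync(item.Id);

        // Update: the Id field identifies which row to change.
        fetched.Complete = true;
        await todoTable.UpdateAsync(fetched);

        // Delete: only the Id is needed; other properties are ignored by the service.
        await todoTable.DeleteAsync(fetched);
    }
}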
Optimistic concurrency control assumes that each transaction can commit and therefore does not use any resource locking. Before committing a transaction, optimistic concurrency control verifies that no other transaction has modified the data. If the data has been modified, the committing transaction is rolled back. Mobile Apps supports optimistic concurrency control by tracking changes to each item using the version system property column that is defined for each table in your Mobile App backend. Each time a record is updated, Mobile Apps sets the version property for that record to a new value. During each update request, the version property of the record included with the request is compared to the same property for the record on the server. If the version passed with the request does not match the backend, then the client library raises a MobileServicePreconditionFailedException<T> exception. The type included with the exception is the record from the backend containing the servers version of the record. The application can then use this information to decide whether to execute the update request again with the correct version value from the backend to commit changes. Define a column on the table class for the version system property to enable optimistic concurrency. For example: public class TodoItem { public string Id { get; set; } [JsonProperty(PropertyName = "text")] public string Text { get; set; } [JsonProperty(PropertyName = "complete")] public bool Complete { get; set; } // *** Enable Optimistic Concurrency *** // [JsonProperty(PropertyName = "version")] public string Version { set; get; } } Applications using untyped tables enable optimistic concurrency by setting the Version flag on the SystemProperties of the table as follows. //Enable optimistic concurrency by retrieving version todoTable.SystemProperties |= MobileServiceSystemProperties.Version; In addition to enabling optimistic concurrency, you must also catch the MobileServicePreconditionFailedException<T> exception in your code when calling UpdateAsync. Resolve the conflict by applying the correct version to the updated record and call UpdateAsync with the resolved record. The following code shows how to resolve a write conflict once detected: more information, see the Offline Data Sync in Azure Mobile Apps topic. How to: Bind Mobile Apps data to a Windows user interface This section shows how to display returned data objects using UI elements in a Windows app. The following example code binds to the source of the list with a query for incomplete items. The MobileServiceCollection creates a Mobile Apps managed runtime support an interface called ISupportIncrementalLoading. This interface allows controls to request extra data when the user scrolls. There is built-in support for this interface for universal Windows apps via MobileServiceIncrementalLoadingCollection, which automatically handles the calls from the controls. Use MobileServiceIncrementalLoadingCollection in Windows apps as follows: 8 and "Silverlight" apps, use the ToCollection extension methods on IMobileServiceTableQuery<T> and IMobileServiceTable<T>. To that can be bound to UI controls. This collection is paging-aware. Since the collection is loading data from the network, loading sometimes fails. To handle such failures, override the OnException method on MobileServiceIncrementalLoadingCollection to handle exceptions resulting from calls to LoadMoreItemsAsync. 
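The conflict-resolution snippet referenced above ("The following code shows how to resolve a write conflict once detected") was lost in extraction. Below is a hedged reconstruction of the usual pattern, assuming MobileServicePreconditionFailedException<TodoItem> exposes the server's copy through its Item property as the text describes; the simple "client wins" policy and the helper name are illustrative:

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public static class ConflictHandling
{
    // "Client wins": reapply the local change on top of the server's version value.
    public static async Task UpdateWithRetryAsync(
        IMobileServiceTable<TodoItem> todoTable, TodoItem localItem)
    {
        try
        {
            await todoTable.UpdateAsync(localItem);
        }
        catch (MobileServicePreconditionFailedException<TodoItem> conflict)
        {
            // Copy the current server version so the retry passes the precondition check.
            localItem.Version = conflict.Item.Version;
            await todoTable.UpdateAsync(localItem);
        }
    }
}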
Consider if your table has many fields but you only want to display some of them in your control. You may use the guidance in the preceding section "Select specific columns" to select specific columns to display in the UI. Change the Page size Azure Mobile Apps returns a maximum of 50 items per request by default. You can change the paging size by increasing the maximum page size on both the client and server. To increase the requested page size, specify PullOptions when using PullAsync(): PullOptions pullOptions = new PullOptions { MaxPageSize = 100 }; Assuming you have made the PageSize equal to or greater than 100 within the server, a request returns up to 100 items. Work with Offline Tables Offline tables use a local SQLite store to store data for use when offline. All table operations are done against the local SQLite store instead of the remote server store. To create an offline table, first prepare your project: - In Visual Studio, right-click the solution > Manage NuGet Packages for Solution..., then search for and install the Microsoft.Azure.Mobile.Client.SQLiteStore NuGet package for all projects in the solution. (Optional) To support Windows devices, install one of the following SQLite runtime packages: - Windows 8.1 Runtime: Install SQLite for Windows 8.1. - Windows Phone 8.1: Install SQLite for Windows Phone 8.1. - Universal Windows Platform Install SQLite for the Universal Windows. - (Optional). For Windows devices, click References > Add Reference..., expand the Windows folder > Extensions, then enable the appropriate SQLite for Windows SDK along with the Visual C++ 2013 Runtime for Windows SDK. The SQLite SDK names vary slightly with each Windows platform. Before a table reference can be created, the local store must be prepared: var store = new MobileServiceSQLiteStore(Constants.OfflineDbPath); store.DefineTable<TodoItem>(); //Initializes the SyncContext using the default IMobileServiceSyncHandler. await this.client.SyncContext.InitializeAsync(store); Store initialization is normally done immediately after the client is created. The OfflineDbPath should be a filename suitable for use on all platforms that you support. If the path is a fully qualified path (that is, it starts with a slash), then that path is used. If the path is not fully qualified, the file is placed in a platform-specific location. - For iOS and Android devices, the default path is the "Personal Files" folder. - For Windows devices, the default path is the application-specific "AppData" folder. A table reference can be obtained using the GetSyncTable<> method: var table = client.GetSyncTable<TodoItem>(); You do not need to authenticate to use an offline table. You only need to authenticate when you are communicating with the backend service. Syncing an Offline Table Offline tables are not synchronized with the backend by default. Synchronization is split into two pieces. You can push changes separately from downloading new items. Here is a typical sync method: public async Task SyncAsync() { ReadOnlyCollection<MobileServiceTableOperationError> syncErrors = null; try { await this.client.SyncContext.PushAsync(); await this.todoTable.PullAsync( //The first parameter is a query name that is used internally by the client SDK to implement incremental sync. 
//Use a different query name for each unique query in your program "allTodoItems", this.todoTable.CreateQuery()); } catch (MobileServicePushFailedException exc) { if (exc.PushResult != null) { syncErrors = exc.PushResult.Errors; } } // Simple error/conflict handling. A real application would handle the various errors like network conditions, // server conflicts and others via the IMobileServiceSyncHandler. if (syncErrors != null) { foreach (var error in syncErrors) { if (error.OperationKind == MobileServiceTableOperationKind.Update && error.Result != null) { //Update failed, reverting to server's copy. await error.CancelAndUpdateItemAsync(error.Result); } else { // Discard local change. await error.CancelAndDiscardItemAsync(); } Debug.WriteLine(@"Error executing sync operation. Item: {0} ({1}). Operation discarded.", error.TableName, error.Item["id"]); } } } If the first argument to PullAsync is null, then incremental sync is not used. Each sync operation retrieves all records. The SDK performs an implicit PushAsync() before pulling records. Conflict handling happens on a PullAsync() method. You can deal with conflicts in the same way as online tables. The conflict is produced when PullAsync() is called instead of during the insert, update, or delete. If multiple conflicts happen, they are bundled into a single MobileServicePushFailedException. Handle each failure separately. Work with. You call a custom API by calling one of the InvokeApiAsync methods on the client. For example, the following line of code sends a POST request to the completeAll API on the backend: var result = await client.InvokeApiAsync<MarkAllResult>("completeAll", System.Net.Http.HttpMethod.Post, null); This form is a typed method call and requires that the MarkAllResult return type is defined. Both typed and untyped methods are supported. The InvokeApiAsync() method prepends '/api/' to the API that you wish to call unless the API starts with a '/'. For example: InvokeApiAsync("completeAll",...)calls /api/completeAll on the backend InvokeApiAsync("/.auth/me",...)calls /.auth/me on the backend You can use InvokeApiAsync to call any WebAPI, including those WebAPIs that are not defined with Azure Mobile Apps. When you use InvokeApiAsync(), the appropriate headers, including authentication headers, are sent with the request. Authenticate users Mobile Apps supports authenticating and authorizing app users using various: client-managed and server-managed flow. The server-managed flow provides the simplest authentication experience, as it relies on the provider's web authentication interface. The client-managed flow allows for deeper integration with device-specific capabilities as it relies on provider-specific device-specific SDKs. Note We recommend using a client-managed flow in your production apps. To set up authentication, you must register your app with one or more identity providers. The identity provider generates a client ID and a client secret for your app. These values are then set in your backend to enable Azure App Service authentication/authorization. For more information, follow the detailed instructions in the tutorial Add authentication to your app. The following topics are covered in this section: Client-managed authentication Your app can independently contact the identity provider and then provide the returned token during login with your backend. This client flow enables you to provide a single sign-on experience for users or to retrieve additional user data from the identity provider. 
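As a companion to the typed completeAll call shown in the custom API section above, the following hedged sketch calls a custom API with query-string parameters. The API name, parameter, and result type are hypothetical, and it assumes the InvokeApiAsync overload that accepts an HttpMethod plus a parameter dictionary:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class CompletedCountResult
{
    public int Count { get; set; }
}

public static class CustomApi
{
    // Sends GET /api/completedCount?since=2016-01-01 to the backend (hypothetical API).
    public static Task<CompletedCountResult> GetCompletedCountAsync(MobileServiceClient client)
    {
        var parameters = new Dictionary<string, string> { { "since", "2016-01-01" } };
        return client.InvokeApiAsync<CompletedCountResult>(
            "completedCount", HttpMethod.Get, parameters);
    }
}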
Client flow authentication is preferred to using a server flow as the identity provider SDK provides a more native UX feel and allows for additional customization. Examples are provided for the following client-flow authentication patterns: Authenticate users with the Active Directory Authentication Library You can use the Active Directory Authentication Library (ADAL) to initiate user authentication from the client using Azure Active Directory authentication. - Configure your mobile app backend for AAD sign-on by following the How to configure App Service for Active Directory login tutorial. Make sure to complete the optional step of registering a native client application. - In Visual Studio or Xamarin Studio, open your project and add a reference to the Microsoft.IdentityModel.CLients.ActiveDirectoryNuGet package. When searching, include pre-release versions. Add the following code to your application, according to the platform you are using. In each, make the following replacements: - Replace INSERT-AUTHORITY-HERE with the name of the tenant in which you provisioned your application. The format should be. This value can be copied from the Domain tab in your Azure Active Directory in the Azure classic portal. - Replace INSERT-RESOURCE-ID-HERE with the client ID for your mobile app backend. You can obtain the client ID from the Advanced tab under Azure Active Directory Settings in the portal. - Replace INSERT-CLIENT-ID-HERE with the client ID you copied from the native client application. Replace INSERT-REDIRECT-URI-HERE with your site's /.auth/login/done endpoint, using the HTTPS scheme. This value should be similar to. The code needed for each platform follows: Windows: private MobileServiceUser user; private async Task AuthenticateAsync() { string authority = "INSERT-AUTHORITY-HERE"; string resourceId = "INSERT-RESOURCE-ID-HERE"; string clientId = "INSERT-CLIENT-ID-HERE"; string redirectUri = "INSERT-REDIRECT-URI-HERE"; while (user == null) { string message; try { AuthenticationContext ac = new AuthenticationContext(authority); AuthenticationResult ar = await ac.AcquireTokenAsync(resourceId, clientId, new Uri(redirectUri), new PlatformParameters(PromptBehavior.Auto, false) ); JObject payload = new JObject(); payload["access_token"] = ar.AccessToken; user = await App.MobileService.LoginAsync( MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, payload); message = string.Format("You are now logged in - {0}", user.UserId); } catch (InvalidOperationException) { message = "You must log in. Login Required"; } var dialog = new MessageDialog(message); dialog.Commands.Add(new UICommand("OK")); await dialog.ShowAsync(); } } Xamarin.iOS private MobileServiceUser user; private async Task AuthenticateAsync(UIViewController view) {(view)); JObject payload = new JObject(); payload["access_token"] = ar.AccessToken; user = await client.LoginAsync( MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, payload); } catch (Exception ex) { Console.Error.WriteLine(@"ERROR - AUTHENTICATION FAILED {0}", ex.Message); } } Xamarin.Android private MobileServiceUser user; private async Task AuthenticateAsync() {(this)); JObject payload = new JObject(); payload["access_token"] = ar.AccessToken; user = await client.LoginAsync( MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, payload); } catch (Exception ex) { AlertDialog.Builder builder = new AlertDialog.Builder(this); builder.SetMessage(ex.Message); builder.SetTitle("You must log in. 
Login Required"); builder.Create().Show(); } } protected override void OnActivityResult(int requestCode, Result resultCode, Intent data) { base.OnActivityResult(requestCode, resultCode, data); AuthenticationAgentContinuationHelper.SetAuthenticationAgentContinuationEventArgs(requestCode, resultCode, data); } Single Sign-On using a token from Facebook or Google Task AuthenticateAsync() {(); } } Single Sign On using Microsoft Account with the Live SDK To authenticate users, you must register your app at the Microsoft account Developer Center. Configure the registration details on your Mobile App backend. To create a Microsoft account registration and connect it to your Mobile App backend, complete the steps in Register your app to use a Microsoft account login. If you have both Windows Store and Windows Phone 8/Silverlight versions of your app, register the Windows Store version first. The following code authenticates using Live SDK and uses the returned token to sign in to your Mobile App backend. private LiveConnectSession session; //private static string clientId = "<microsoft-account-client-id>"; private async System.Threading.Tasks.Task AuthenticateAsync() { // Get the URL the Mobile App backend. var serviceUrl = App.MobileService.ApplicationUri.AbsoluteUri; // Create the authentication client for Windows Store using the should always be requested. Other scopes can be added App Service. MobileServiceUser loginResult = await App.MobileService .LoginWithMicrosoftAccountAsync(result.Session.AuthenticationToken); // Display a personalized sign-in greeting.(); } } } For more information, see the Windows Live SDK documentation. Server-managed authentication Once you have registered your identity provider, call the LoginAsync method on the [MobileServiceClient] with the MobileServiceAuthenticationProvider value of your provider. For example, the following code initiates a server flow sign-in to the value for your provider. In a server flow, Azure App Service manages the OAuth authentication flow by displaying the sign-in page of the selected provider. Once the identity provider returns, Azure App Service generates an App Service authentication token. The LoginAsync method returns a MobileServiceUser, which provides both the UserId of the authenticated user and the MobileServiceAuthenticationToken, as a JSON web token (JWT). This token can be cached and reused until it expires. For more information, see Caching the authentication token. Caching the authentication token In some cases, the call to the login method can be avoided after the first successful authentication by storing the authentication token from the provider. Windows Store and UWP apps can use PasswordVault to cache the current authentication token after a successful sign-in, as follows: await client.LoginAsync(MobileServiceAuthenticationProvider.Facebook); PasswordVault vault = new PasswordVault(); vault.Add(new PasswordCredential("Facebook", client.currentUser.UserId, client.currentUser.MobileServiceAuthenticationToken)); The UserId value is stored as the UserName of the credential and the token is the stored as the Password. On subsequent start-ups, you can check the PasswordVault for cached credentials. The following example uses cached credentials when they are found, and otherwise attempts to authenticate again with the backend: // Try to retrieve stored credentials. var creds = vault.FindAllByResource("Facebook").FirstOrDefault(); if (creds != null) { // Create the current user from the stored credentials. 
client.currentUser = new MobileServiceUser(creds.UserName); client.currentUser.MobileServiceAuthenticationToken = vault.Retrieve("Facebook", creds.UserName).Password; } else { // Regular login flow and cache the token as shown above. } When you sign out a user, you must also remove the stored credential, as follows: client.Logout(); vault.Remove(vault.Retrieve("Facebook", client.currentUser.UserId)); Xamarin apps use the Xamarin.Auth APIs to securely store credentials in an Account object. For an example of using these APIs, see the AuthStore.cs code file in the ContosoMoments photo sharing sample. When you use client-managed authentication, you can also cache the access token obtained from your provider such as Facebook or Twitter. This token can be supplied to request a new authentication token from the backend, as follows: var token = new JObject(); // Replace <your_access_token_value> with actual value of your access token token.Add("access_token", "<your_access_token_value>"); // Authenticate using the access token. await client.LoginAsync(MobileServiceAuthenticationProvider.Facebook, token); Push Notifications The following topics cover Push Notifications: - Register for Push Notifications - Obtain a Windows Store package SID - Register with Cross-platform templates How to: Register for Push Notifications The Mobile Apps, null); } If you are pushing to WNS, then you MUST obtain a Windows Store package SID. For more information on Windows apps, including how to register for template registrations, see Add push notifications to your app. Requesting tags from the client is not supported. Tag Requests are silently dropped from registration. If you wish to register your device with tags, create a Custom API that uses the Notification Hubs API to perform the registration on your behalf. Call the Custom API instead of the RegisterNativeAsync() method. How to: Obtain a Windows Store package SID A package SID is needed for enabling push notifications in Windows Store apps. To receive a package SID, register your application with the Windows Store. To obtain this value: - In Visual Studio Solution Explorer, right-click the Windows Store app project, click Store > Associate App with the Store.... - In the wizard, click Next, sign in with your Microsoft account, type a name for your app in Reserve a new app name, then click Reserve. - After the app registration is successfully created, select the app name, click Next, and then click Associate. - Log in to the Windows Dev Center using your Microsoft Account. Under My apps, click the app registration you created. - Click App management > App identity, and then scroll down to find your Package SID. Many uses of the package SID treat it as a URI, in which case you need to use ms-app:// as the scheme. Make note of the version of your package SID formed by concatenating this value as a prefix. Xamarin apps require some additional code to be able to register an app running on the iOS or Android platforms. 
For more information, see the topic for your platform: How to: Register push templates to send cross-platform notifications To register templates, use the RegisterAsync() method with the templates, as follows: JObject templates = myTemplates(); MobileService.GetPush().RegisterAsync(channel.Uri, templates); Your templates should be JObject types and can contain multiple templates in the following JSON format: public JObject myTemplates() { // single template for Windows Notification Service toast var<text id=\"1\">$(message)</text></binding></visual></toast>"; var templates = new JObject { ["generic-message"] = new JObject { ["body"] = template, ["headers"] = new JObject { ["X-WNS-Type"] = "wns/toast" }, ["tags"] = new JArray() }, ["more-templates"] = new JObject {...} }; return templates; } The method RegisterAsync() also accepts Secondary Tiles: MobileService.GetPush().RegisterAsync(string channelUri, JObject templates, JObject secondaryTiles); All tags are stripped away during registration for security. To add tags to installations or templates within installations, see [Work with the .NET backend server SDK for Azure Mobile Apps]. To send notifications utilizing these registered templates, refer to the Notification Hubs APIs. Miscellaneous Topics How to: Handle errors When an error occurs in the backend, the client SDK raises a MobileServiceInvalidOperationException. The following example shows how to handle an exception that is returned by the backend: private async void InsertTodoItem(TodoItem todoItem) { // This code inserts a new TodoItem into the database. When the operation completes // and App Service has assigned an Id, the item is added to the CollectionView try { await todoTable.InsertAsync(todoItem); items.Add(todoItem); } catch (MobileServiceInvalidOperationException e) { // Handle error } } Another example of dealing with error conditions can be found in the Mobile Apps Files Sample. The LoggingHandler example provides a logging delegate handler to log the requests being made to the backend. How to: Customize request headers To support your specific app scenario, you might need to customize communication with the Mobile App backend. For example, you may want to add a custom header to every outgoing request or even change responses status codes. You can use a custom DelegatingHandler, as in the following example: public async Task CallClientWithHandler() { MobileServiceClient client = new MobileServiceClient("AppUrl", new MyHandler()); IMobileServiceTable<TodoItem> todoTable = client.GetTable<TodoItem>(); var newItem = new TodoItem { Text = "Hello world", Complete = false }; await todoTable.InsertAsync(newItem); } public class MyHandler : DelegatingHandler { protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) { // Change the request-side here based on the HttpRequestMessage request.Headers.Add("x-my-header", "my value"); // Do the request var response = await base.SendAsync(request, cancellationToken); // Change the response-side here based on the HttpResponseMessage // Return the modified response return response; } }
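The "// Handle error" placeholder in the error-handling example above can be filled in by inspecting the failed response. This is a hedged sketch that assumes MobileServiceInvalidOperationException exposes the underlying HTTP response through its Response property; the helper name and message format are illustrative:

using Microsoft.WindowsAzure.MobileServices;

public static class ErrorReporting
{
    // Builds a short diagnostic string from a failed backend call.
    public static string Describe(MobileServiceInvalidOperationException e)
    {
        var status = e.Response == null ? "unknown" : e.Response.StatusCode.ToString();
        return string.Format("Backend call failed ({0}): {1}", status, e.Message);
    }
}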
https://docs.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-dotnet-how-to-use-client-library
CC-MAIN-2017-04
en
refinedweb
CodePlex - Project Hosting for Open Source Software

I have published a working copy to my host and I am getting the following error: anybody has any ideas?

Do you have an old version of NuGet in the GAC?

I have the following NuGet assembly in the GAC: Version = 1.0.11220.104, Runtime = 4.0.30319

Can you verify you have NuGet.Core, v1.1.0.0 in the App_Data\Dependencies folder? My guess is that what is happening is that instead of loading the assembly from "App_Data\Dependencies", the CLR is trying to load another version of nuget.core.dll from another location. The GAC is the most likely one, but it might be that you have a copy of NuGet.Core.dll in the "Bin" folder of the app, or that there is an assembly rebinding in web.config, or a publisher policy redirecting NuGet.Core 1.1 to some other version. Following the instructions there might help diagnose the problem: HTH, Renaud

Thanks. I corrected the problem by adding a reference to NuGet from the App_Data\Dependencies folder. I am running into another problem. I published the site to my host and everything is working fine. The issue that I am having is when I try to compile it in Visual Studio 2010. It errors out when it compiles modules, saying "The type or namespace name 'AmazonCheckout' could not be found (are you missing a using directive or an assembly reference?) c:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\root\4560eb12\3590e43e\App_Web_svn5akli.0.cs". AmazonCheckout happens to be my first module. Please advise.

You are probably missing the assembly reference in the project, as the error message says.

There are lots of modules in the module folder but none of them have a bin directory. I don't see how to set the reference. I don't know where the DLLs are for these modules. Please advise. Thanks

References are not set by dropping a DLL into bin (you must be confusing this with web site projects). It is done by adding it to the project file for the module. From VS, right-click References and add a reference.

I understand that, but here is the problem. When you right-click and add a reference I need to have a DLL to add into the project. I could not find any of the DLLs in the module folder - in this case something like 'AmazonCheckout.dll'.

You might want to contact the owners of that module:

Is there something wrong with a setting or configuration?? This time I have created a new project from scratch using WebMatrix and Orchard CMS. After I installed Orchard CMS from WebMatrix, I clicked the Visual Studio 2010 button on top of WebMatrix and just tried to compile. First it gave an error saying a reference is missing for 'Orchard.Blogs'. I added the reference, then it started complaining about the following: Error 2 Object reference not set to an instance of an object. C:\Documents and Settings\himam\My Documents\My Web Sites\DezignerSarees\Modules\Orchard.Blogs\Orchard.Blogs.csproj 1. I have not modified anything or added anything - just trying to compile out of the box?? Very frustrating. Any help will be really appreciated.

You will have to choose between running the WebPI/WebMatrix version and letting the application compile modules dynamically on its own, or using the full source code and compiling from Visual Studio.

I want to use Visual Studio. Here is what I did: I opened Visual Studio and opened the project as a web site since there is no solution file, and then tried to compile; I got the error that I mentioned above. What am I doing wrong? Is there any setting I have to worry about? Thanks for your help.
I liked the product and I want to use it to develop an e-commerce web site for my business. I don't know if it matters, but my day job is software development; I have been doing it for the last 10 years. Thanks again

Right, if you are going to use Visual Studio, please use the full source code and open the solution file.

But there is no solution file. Do I have to convert it to have a solution file?

There is no solution file because you did not download the full source code.

Thanks. Now I understand how it works.
https://orchard.codeplex.com/discussions/265610
CC-MAIN-2017-04
en
refinedweb
package nextapp.echo2.app.componentxml;

/**
 * An exception describing a general failure in the execution of
 * loading component / stylesheet resources from XML.
 */
public class ComponentXmlException extends Exception {

    private Throwable cause;

    /**
     * Creates a new <code>ComponentXmlException</code>.
     *
     * @param message A message describing the problem
     * @param cause the exception which caused this exception,
     *        if applicable
     */
    public ComponentXmlException(String message, Throwable cause) {
        super(message);
        this.cause = cause;
    }

    /**
     * Returns the causing exception, if applicable.
     *
     * @return the causing exception
     */
    public Throwable getCause() {
        return cause;
    }
}
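As a usage illustration (not part of the original source file), the sketch below shows how a loader might wrap a lower-level parsing failure in a ComponentXmlException and surface it to callers, which can then inspect getCause(). The StyleSheetLoader class and its parse method are hypothetical:

// Hypothetical caller that wraps a parser failure in ComponentXmlException.
public class StyleSheetLoader {

    public void load(String resourceName) throws ComponentXmlException {
        try {
            parse(resourceName);
        } catch (Exception ex) {
            // Preserve the underlying failure as the cause for later diagnosis.
            throw new ComponentXmlException("Unable to load style sheet: " + resourceName, ex);
        }
    }

    private void parse(String resourceName) throws Exception {
        // Placeholder for the real XML parsing logic.
        throw new Exception("parse failure in " + resourceName);
    }
}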
http://kickjava.com/src/nextapp/echo2/app/componentxml/ComponentXmlException.java.htm
CC-MAIN-2017-04
en
refinedweb
Family Law Family Law Questions? Ask a Family Lawyer Online. Join the 9 million people who found a smarter way to get Expert help Recent Mediated Divorce questions Hi, I went through a mediated divorce, March 3 2016, since then my ex husband hasn't kept to an of the agreed upon support, or property settlement s, he started a Land Surveyors company during our 32 year marriage, he is supposed to be paying $1500 a month, alimony in gross, until he pays $380,000 ,he constantly threatens me, and even been in a drunken standoff with the police, and wasn't charged . My attorney has now quit because of fear of him, now I am forced to start over trying to get a contempt hearing, I have been waiting since, late June and still don't have a court date, what can I do to try and get justice, he seems to get away with everything, he has many DUI S laughs at me, because he gets away with not paying, he had a mistress spent $200,000 on lavish vacations for the both of them, that's why I divorced him, it just seems I try and fight for justice but just can't get it. Read more Juris Doctorate I signed a mediated divorce settlement last week. In it; "either parent may apply for a passport on behalf of the child. Both parents shall be entitled to travel internationally with the child with consent of the other parent. Consent for international travel shall not be unreasonably withheld by either parent." "Both parents shall be permanently enjoined from permanently removing the child from this Texas jurisdiction"1. Due to the souring relationship between the U.S. and the Phillipines lately, Is it "(unreasonable)" to deny travel with the child at this time, until relations between our countries get better?2. Her home is in the southern island of Mindinao, also the home of the Abu sayaf muslim extremist group, who kidnaps and executes foreigners. Is it also "(unreasonable)" to deny her international travel with the child to this region because of this terrorism threat/risk?3. Is it possible to insert a bond requirement of $20k before her traveling with the child, since she will be getting 60k out of our retirement account upon dissolution of our marriage?4.What is the legal interpretation of (unreasonable)?5. If I refuse to allow her international travel after this agreement if finalized, what are her options? Attorney My name is***** (husband). My wife has expressed interest in filing for a mediated divorce at a pace and selection of attorney that will be dependent entirely on her timeline.In the meantime, I need to move out of my temporary residence and purchase a home (PROPERTY C in attachment) and at the same time benefit from very low interest rates (expected to climb next year).I'm seeking your advice on establishing a legal mechanism for establishing ownership that will be free of loopholes and will be 100% bulletproof to meet my primary objective. Meeting my secondary objective would be a "nice to have".Details are in the attached Word document.Thanks, James(###) ###-#### Doctoral Degree My wife and I are in the midst of a mediated divorce in Mass. My wife has earned and will continue to earn substantially more than me, and over the next four years will take 100% responsibility for college expenses for two children (one entering her senior year, and the other his freshman year.We have three 529 plans that will cover some of those expenses: one for each child, funded by my wife via payroll deductions, and one funded by my parents in our second child's name. The remainder, ~$196K(!) 
are to be paid out of her future earnings.We agree that this substantial financial burden should be factored in to the settlement (I will be the beneficiary of both alimony the division of marital assets, most of which are in her 401K. The question is: what is a fair way to calculate that adjustment -- e.g., 100% of that $196K? 50% of it? Or some other proportion?Many thanks.We have agreed that this huge financial obligation should recognized via a reduction in the ultimate Litigation Attorney I am 61 years old and my divorce was finalized on 12/11 2014. i live in trenton, new jersey, mercer county.the divorce was a mediated divorce and I willingly gave everything to my ex-wife. I took nothing but my personal effects from the marriage.I made one demand and that was to have 12 months to have access to the family photo albums and tapes to make copies for me.the agreement states that I must give 2 weeks notice to pick up the albums (2 at a time), I have 2 weeks to copy them, and then return them, at which time I will receive 2 more, etc.the first 2 albums pick up, return, etc., went fine, as did the next 2. however, with this most recent pick up, return (returned to my ex-wife on sunday, may 17, 2014) and she texted me and said that two photos were out of order and "the deal is off", "no more albums".I know that the photos were placed back in the exact spot that they were taken and that this is an excuse made by my ex-wife to prevent me from access. I had returned the first 4 albums without issue (although I would receive texts from my ex-wife that were terrible and hateful), but I kept going.I know that I am within my rights to have access to these albums. I do not have an attorney but I want to file a complaint with the appropriate NJ office to make my ex-wife comply. Please also know that I did make three requests to my ex-wife's attorney to please talk with her but I he has not done anything.can you please direct me to the right office in New Jersey, Mercer County to file a complaint about her breaching this agreement and that her lawyer is doing nothing about it? I need specific contact information for this office so I may contact them. thank youBob Voorhees I am retired and collecting social security benefits,as is my wife.During divorce negotiations (mediated divorce)would my spouse be entitled to any portion of my social security income,as with pension,IRA,etc.? Hello My name is XXXXX XXXXX I am in desperate need of help. I have been married to my Pakistani husband for 15 years and we have to children. I am a citizen by birth and he became one after two years in the country. I have just learned in the last week that during my husband's visit to Pakistan almost 9 years ago he got married. This while I was 8 months pregnant with our second child. He and his Pakistani wife also have two children After yeas of cruelty and indifference he now feels that since everything out we should reconcile and make our marriage work. He lives in the downstairs of our home and have been physically separated for the past 6 years due to his drinking and cruel behavior. Now he feels that since he is no longer drinking and has "come clean" I should "make things work". NEVER have I wanted to run as far and fast as I do now. . Unfortunately, because I am being treated for physical problems {car accident years ago} I depend on him financially and tells me I will suffer financially if I divorce him. We have a modest home {which I want} and a pizza/deli. 
Please, Please advise...I'm so in shock by all of this my head I spinning. I was poorly represented in mediation divorce, our mediator did a very poor job and left the door open for me to get taken to the cleaners during the QDRO mediation part of divorce. Several verbal agreements were made during divorce mediation and attorney said it would be handle with a MSA and during QDRO. Well after divorce final ex fired mediator got differant attorney and is now and as taken me to the cleaners. The more I research the more I find out how bad the mediator was. Can I anul mediation final divorce and take all back to court, ex fired mediator knowing that she could not be taken to court to testify on verbal agreements made during mediation and what ex wife agreed to and then went against when taking my retirement. I am in california is it possible to retry divorce and start from scratch to get fairness. Hi, My wife (of 23 years) are currently going thru a mediated divorce in CT. Since our mediator, although an attorney, can cannot actually represent either of us --- I want to confirm some points re: alimony. According to the mediator, the alimony calculation goes something like take the higher wage earners salary, minus the lower wage earners salary, divide by 2 to arrive at the yearly alimony payment. The alimony duration is about half of the total years married -- 11.5 for us --- or until she is remarried/co-habitating. I'd like to confirm if that above calulation sounds accurate and based on the fact I'm 58 yrs old, is there any consideration for shortening the alimony duration, say until I.
http://www.justanswer.com/topics-mediated-divorce/
CC-MAIN-2017-04
en
refinedweb
import serial
import threading
import Queue
import Tkinter as tk

class SerialThread(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
    def run(self):
        s = serial.Serial('COM10', 9600)
        # ... (the rest of the posted script was truncated: the thread's read loop,
        # the App class, and its self.root.after(100, self.process_serial) polling call)

app = App()
app.mainloop()

This code is to receive serial data from one PC and print it on a tkinter window on the other PC. The data which is received should stay on the window until other serial data is received. This cycle has to be done for every received data. I had 2 issues here:

1. In the code [self.root.after(1500, self.process_serial)], the first text will display and wait for 1500 ms, then the second text will display and wait for 1500 ms, and it will keep on doing this. But in my code I don't know when I will receive the serial data; if I receive serial data, this data should display until I receive other serial data to print on the window. The time in between the data which is received is variable (changing). I want to receive data and display it until other serial data is received again. In my code, if I use 1500 ms, the received serial data is just displayed for 1500 ms. I don't want it to display for just 1500 ms; the data should stay on the screen until other serial data is received.

2. If I receive just a single line of ASCII. With this font size I can print 9 lines on the..
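Since most of the posted script was lost in extraction, here is a hedged reconstruction of the pattern being discussed: a reader thread puts each received line on a Queue, and the Tkinter App polls that queue with after() and only redraws the label when new data arrives, so the last value stays on screen until the next one. The port name, baud rate, widget layout, and 100 ms polling interval are assumptions:

import serial
import threading
import Queue
import Tkinter as tk

class SerialThread(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.daemon = True  # let the program exit even if the port blocks

    def run(self):
        s = serial.Serial('COM10', 9600)  # assumed port and baud rate
        while True:
            line = s.readline().strip()
            if line:
                self.queue.put(line)

class App(tk.Tk):
    def __init__(self):
        tk.Tk.__init__(self)
        self.queue = Queue.Queue()
        self.label = tk.Label(self, text='waiting for data...', font=('Courier', 14))
        self.label.pack(padx=10, pady=10)
        SerialThread(self.queue).start()
        self.process_serial()

    def process_serial(self):
        # Update the display only when something new arrived; otherwise the
        # previous value simply stays on screen.
        while not self.queue.empty():
            self.label.config(text=self.queue.get())
        self.after(100, self.process_serial)

app = App()
app.mainloop()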
https://www.daniweb.com/programming/software-development/threads/457871/how-to-display-received-data-on-the-window
CC-MAIN-2017-04
en
refinedweb
Arcot Arumugam wrote: > > You mention that "all nodes CAN be CFS servers" (emphasis added). Does that > mean it is not automatic? Do we have to do something special to obtain the > benefits of CFS when the partition is local? At the moment it is automatic for ext2. The "cfs_mount" command can be used to mount any filesystem with CFS. I think only ext2 and ext3 have actually been tried. > > > Also does CFS support all the filesystems that are there for linux or is > limited to certain subset? Some odd-ball filesystem might not work perfectly (i.e. msdos). In production all fileystems will either be using CFS (ext2/ext3), cluster parallel (OpenGFS), cluster aware local (procfs/devfs with SSI mods), or a special type which will be added for unsupported filesystems. The unsupported filesystems will be visible from the mounting node, but will give errors at the mount-point on all other nodes. Migration/rfork/rexec with files open in the unsupported filesystem will fail. Hopefully, there will be very few filesystem types which will be unsupported. -- David B. Zafman | Hewlett-Packard Company mailto:david.zafman@... | "Thus spake the master programmer: When you have learned to snatch the error code from the trap frame, it will be time for you to leave." Hello, > Below is a log of what I get with mawk. Greets, Tobias tar jxvf /var/downloads/linux/linux-2.4.16.tar.bz2 tar jxvf /var/downloads/ssi/ssi-linux-2.4.16-v0.6.5a.tar.bz2 cd linux patch -p1 < ../ssi-linux-2.4.16-v0.6.5a/ssi-linux-2.4.16-v0.6.5.patch cp config.uml .config make oldconfig ARCH=um then .. toberste@...$ make dep make -C /tmp/linux/cluster clustersymlinks make[1]: Entering directory `/tmp/linux/cluster' /tmp/linux/cluster/lnrel.sh /tmp/linux/cluster/arch/i386/include /tmp/linux/include/cluster/arch make -C ssi clustersymlinks make[2]: Entering directory `/tmp/linux/cluster/ssi' make -C vproc clustersymlinks make[3]: Entering directory `/tmp/linux/cluster/ssi/vproc' /tmp/linux/cluster/lnrel.sh /tmp/linux/cluster/arch/i386/vproc arch make -C arch clustersymlinks make[4]: Entering directory `/tmp/linux/cluster/arch/i386/vproc' make[4]: Nothing to be done for `clustersymlinks'. make[4]: Leaving directory `/tmp/linux/cluster/arch/i386/vproc' make[3]: Leaving directory `/tmp/linux/cluster/ssi/vproc' make -C clreg clustersymlinks make[3]: Entering directory `/tmp/linux/cluster/ssi/clreg' make[3]: Nothing to be done for `clustersymlinks'. make[3]: Leaving directory `/tmp/linux/cluster/ssi/clreg' make -C ipc clustersymlinks make[3]: Entering directory `/tmp/linux/cluster/ssi/ipc' make[3]: Nothing to be done for `clustersymlinks'. make[3]: Leaving directory `/tmp/linux/cluster/ssi/ipc' make -C util clustersymlinks make[3]: Entering directory `/tmp/linux/cluster/ssi/util' make[3]: Nothing to be done for `clustersymlinks'. make[3]: Leaving directory `/tmp/linux/cluster/ssi/util' make[2]: Leaving directory `/tmp/linux/cluster/ssi' make -C clms clustersymlinks make[2]: Entering directory `/tmp/linux/cluster/clms' make[2]: Nothing to be done for `clustersymlinks'. make[2]: Leaving directory `/tmp/linux/cluster/clms' make -C ics clustersymlinks make[2]: Entering directory `/tmp/linux/cluster/ics' make -C ics_tcp clustersymlinks make[3]: Entering directory `/tmp/linux/cluster/ics/ics_tcp' make[3]: Nothing to be done for `clustersymlinks'. 
make[3]: Leaving directory `/tmp/linux/cluster/ics/ics_tcp' make[2]: Leaving directory `/tmp/linux/cluster/ics' make -C util clustersymlinks make[2]: Entering directory `/tmp/linux/cluster/util' make[2]: Nothing to be done for `clustersymlinks'. make[2]: Leaving directory `/tmp/linux/cluster/util' make[1]: Leaving directory `/tmp/linux/cluster' make -C /tmp/linux/cluster clustergen make[1]: Entering directory `/tmp/linux/cluster' make -C rpcgen make[2]: Entering directory `/tmp/linux/cluster/rpcgen' cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_main.o rpc_main.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_hout.o rpc_hout.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_cout.o rpc_cout.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_parse.o rpc_parse.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_scan.o rpc_scan.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_util.o rpc_util.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_svcout.o rpc_svcout.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_clntout.o rpc_clntout.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_tblout.o rpc_tblout.c cc -DNSCRPCGEN -D_GNU_SOURCE -c -o rpc_sample.o rpc_sample.c cc -o nscrpcgen rpc_main.o rpc_hout.o rpc_cout.o rpc_parse.o rpc_scan.o rpc_util.o rpc_svcout.o rpc_clntout.o rpc_tblout.o rpc_sample.o make[2]: Leaving directory `/tmp/linux/cluster/rpcgen' make -C /tmp/linux/include/cluster/gen genfiles make[2]: Entering directory `/tmp/linux/include/cluster/gen' awk -f type_template.awk \ ics_proto_gen.h.template ics_proto_gen.h.list >ics_proto_gen.h awk: type_template.awk: line 83: improper use of next make[2]: *** [ics_proto_gen.h] Error 2 make[2]: Leaving directory `/tmp/linux/include/cluster/gen' make[1]: *** [clustergen] Error 2 make[1]: Leaving directory `/tmp/linux/cluster' make: *** [clustergen] Error 2 toberste@...$ Jai in our office has Context Dependent Symbolic Links working. We will be checking it soon. "Aneesh Kumar K.V" wrote: > Hi, > Some of the small things as far as whole SSI is concerned. > > 1) Rolling out a RPM/DEB packages. > 2) A GUI for configuring and monitoring the cluster events. > 3) Context Dependent Symbolic links > 4) Shipping the cluster mounts ( hack already there cfs_shipmount) > 5) Cluster wide file locking. > 6) Moving nsc_rcall to ICS call format. ( less important ) > > And other big items already listed at the website. > > -aneesh > -- David B. Zafman | Hewlett-Packard Company mailto:david.zafman@... | "Thus spake the master programmer: When you have learned to snatch the error code from the trap frame, it will be time for you to leave." Hi I understand that SSI with CFS requires the same processor and network adapters to be running on all the nodes (atleast for now). Is there a same restriction with OpenGFS too? I am thinking of using the HyperSCSI software to export a disk as the shared disk for SSI (someone had posted the link on this list previously) . That way I will not need the expensive shared disk and can be done over regular ethernet. But I have different machines and would like to know if I get it working with SSI support that environment. Thanks Arcot Hi Thanks for the detailed explanation. Very helpful. I just have one question more of a clarification really : > > Does CFS handle concurrent access to other mounted disk partitions that can > > exist in different nodes? > > In an SSI cluster all nodes can be CFS servers of their local > filesystems, and are CFS clients of every CFS server partition available > in the cluster. > You mention that "all nodes CAN be CFS servers" (emphasis added). Does that mean it is not automatic? 
Do we have to do something special to obtain the benefits of CFS when the partition is local? Also does CFS support all the filesystems that are there for linux or is limited to certain subset? Thanks Arcot Could the person responsible for the SSI website(on sourceforge) change the link to the OpenGFS Project to opengfs.sourceforge.net until we get the opengfs.org domain back. Thanks in advance. --Brian Jackson Hi, On Thu, 2002-09-19 at 12:55, Tobias G. Oberstein wrote: > > >Brian J. Watson wrote: >>4. 512MB Ram and a 800 MHz CPU results in reasonable performance >> for testing purposes. > > > Cool! I'll add this information to the HOWTO. If you're updating it anyway.. here might be another addon detail: There is a SSI 0.6.5 and a 0.6.5a tarball. Only for the latter (ssi-linux-2.4.16-v0.6.5a.tar.bz2) I managed to build a new kernel. In doing so, I followed exactly the instructions in the HOWTO. In the end, it's just one "a" more;). Greets, Tobias Brian, > Many simultaneous boot problems (perhaps not all) have been fixed since > the 0.6.5 release. Good to hear. > I've tried to update the root and utils packages to > 0.7.1, but something's broken that causes the nodes to panic early in > boot. If someone's feeling adventurous enough to tackle this problem, I > can post the images in the contrib/ directory of the project website. Yes please. I think the UML/SSI distribution is a terrific idea and could attract more people - and if it's just for playing around and bug reporting / fixing. For me, I'm quite experienced in C++/userland but just starting to get interested in kernel stuff - so I'm "adventurous" but, eh well;) Tobias. P.S. I've tried to build the _current_ cvs .. and failed. I'm just reporting it below for comepleteness. But if it wouldn't boot either .. anyway: This is what I've done to a fresh 2.4.16 tree and after a fresh patch -p1 <../../ssic-linux/3rd-party/uml-patch-2.4.18-22 cp -alf ../../ssic-linux/ssi-kernel/. . cp -alf ../../ci-linux/ci-kernel/. . patch -p1 <../../ssic-linux/3rd-party/opengfs-ssi.patch I suppose applying this patch to a 2.4.16 is correct even if the patch contains the substring "2.4.18" because this is what the "Creating SSI Clusters Using UML HOWTO" says. The problem: in linux/include/cluster/ssi/rcopy.h there are some references to a strcuture "current" where e.g. "cltnode" should be a member but is not. 
gcc -D__KERNEL__ -I/home/toberste/sandbox/linux2.4.16/toberste/sandbox/linux2.4.16/linux/arch/um/include -Derrno=kernel_errno -c -o init/main.o init/main.c In file included from /home/toberste/sandbox/linux2.4.16/linux/include/asm/uaccess.h:15, from /home/toberste/sandbox/linux2.4.16/linux/include/asm/unistd.h:10, from /home/toberste/sandbox/linux2.4.16/linux/include/linux/unistd.h:9, from init/main.c:17: /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h: In function `ssi_rcopy_is_remote': /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:43: structure has no member named `cltnode' /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:44: warning: control reaches end of non-void function /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h: In function `ssi_procstate_get': /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:54: structure has no member named `epid' /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:58: structure has no member named `cttynode' /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:59: structure has no member named `cttydev' /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h: In function `ssi_procstate_set': /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:68: structure has no member named `cltnode' /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:70: structure has no member named `cltnode' /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:71: structure has no member named `epid' /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:75: structure has no member named `cttynode' /home/toberste/sandbox/linux2.4.16/linux/include/cluster/ssi/rcopy.h:76: structure has no member named `cttydev' make: *** [init/main.o] Error 1 Now, I've looked to find the location where the structure "current" is declared .. but was unable to. The strange thing is, grepping for "cltnode" looks like it is nowhere! toberste@...$ pwd /home/toberste/sandbox/linux2.4.16/linux toberste@...$ grep -r cltnode * cluster/ssi/cfs/cfs.x: int cia/vproc/rproc.x: * cltnode, p_vproc, p_vfparent, p_opptr, p_pptr, p_cptr, cluster/ssi/vproc/rproc_platform.x: * cltnode, p_vproc, p_vfparent, p_opptr, p_pptr, p_cptr, include/cluster/ssi/rcopy.h: return (current->cltnode != 0); include/cluster/ssi/rcopy.h: current->cltnode = sp->sps_node; include/cluster/ssi/rcopy.h: current->cltnode = 0;
https://sourceforge.net/p/ssic-linux/mailman/ssic-linux-devel/?viewmonth=200209&viewday=19
CC-MAIN-2017-04
en
refinedweb
2009/10/14 kiorky <kiorky at cryptelium.net>: > In the context of migration, for example. > Take the plone "collective" namespace, all the modules won't be updated at the > same time, we will have a painful cohabitation time. Well, making coordinated releases there is tricky, since there is different people having the rights on PyPI. But that is one of the very few namespaces this would be a problem at. But the current solutions do not have any problems in being mixed, so I still don't see why a new solution should have that problem. >> I see no reason that they should not continue to work. > Choose to go on that new stuff that is not backward compatible. That new stuff doesn't exist yet. How do you know it will not work? > I ve heard that before. Hay, Tarek, go out that body ! Yep, i know, but these > are point that we must keep in mind before the new implementation prevent us > from providing bits of backward compatibility. You have gotten the idea that just because the namespace PEP will not be backwards compatible with setuptools 0.6, that means that you can't mix all of them in the same system. If this is correct, I would like to hear *why* this is so. Because then we need to change that. But unless you can explain why it will be impossible to mix them, then I will trust my gut feeling that it is possible. -- Lennart Regebro: Python, Zope, Plone, Grok +33 661 58 14 64
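For reference, the two mechanisms usually meant here by "the current solutions" are declared in each distribution's copy of the namespace package's __init__.py; a rough sketch of both styles (illustrative only, not code from this thread):

# setuptools style
__import__('pkg_resources').declare_namespace(__name__)

# stdlib pkgutil style
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)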
https://mail.python.org/pipermail/distutils-sig/2009-October/013937.html
CC-MAIN-2017-04
en
refinedweb
I'm observing different behavior between clang and gcc regarding what I think is part of the C standard:

#include <stdio.h>

int inner(int *v) { *v += 1; return 1; }

int outer(int x, int y) { return y; }

int main(int argc, char *argv[]) {
    int x = 4;
    printf ("result=%d\n", outer(inner(&x), x));
}

$ clang -o comseq comseq.c && ./comseq
result=5
$ gcc-4.8 -o comseq comseq.c && ./comseq
result=4
$ gcc-5 -o comseq comseq.c && ./comseq
result=4

(The comma in func(a, a++) is not a comma operator; it's merely a separator between the arguments a and a++. The behaviour is undefined in that case if a is considered to be a primitive type.) Commas in argument lists are not sequence points — they are not comma operators within the meaning of the term. The section of the standard (ISO/IEC 9899:2011) on function calls (§6.5.2.2) says: ¶3 A postfix expression followed by parentheses () containing a possibly empty, comma-separated list of expressions is a function call. The postfix expression denotes the called function. The list of expressions specifies the arguments to the function. ¶4 An argument may be an expression of any complete object type. In preparing for the call to a function, the arguments are evaluated, and each parameter is assigned the value of the corresponding argument. … ¶10 There is a sequence point after the evaluations of the function designator and the actual arguments but before the actual call. Every evaluation in the calling function (including other function calls) that is not otherwise specifically sequenced before or after the execution of the body of the called function is indeterminately sequenced with respect to the execution of the called function. (I've omitted a couple of footnotes, but the contents are not germane to the discussion.) There is nothing about the expressions being evaluated in any particular sequence. Note, too, that a comma operator yields a single value. If you interpreted: function(x, y) as having a comma operator, the function would have only one argument — the value of y. This isn't the way functions work, of course. If you want a comma operator there, you have to use extra parentheses: function((x, y)) Now function() is called with a single value, which is the value of y, but that is evaluated after x is evaluated. Such notation is seldom used, not least because it is likely to confuse people.
https://codedump.io/share/Mw6TBGg7bxKt/1/why-is-gcc-not-treating-comma-as-a-sequence-point
CC-MAIN-2017-30
en
refinedweb
from mojo.canvas import Canvas A vanilla object that sends all user input events to a given delegate. Canvas(posSize, delegate=None, canvasSize=(1000, 1000), acceptsMouseMoved=False, hasHorizontalScroller=True, hasVerticalScroller=True, autohidesScrollers=False, backgroundColor=None, drawsBackground=True) Methods All events a delegate could have that can be used: - draw() Callback when the canvas gets drawn. - becomeFirstResponder(event) Callback when the canvas becomes the first responder, when it starts to receive user interaction callbacks. - resignFirstResponder(event) Callback when the canvas resigns the first responder, when the canvas will no longer receive user interaction callbacks. - mouseDown(event) Callback when the user hits the canvas with the mouse. - mouseDragged(event) Callback when the user drags the mouse around in the canvas. - mouseUp(event) Callback when the user lifts up the mouse from the canvas. - mouseMoved(event) Callback when the user moves the mouse in the canvas. Be careful: this is called frequently. (Only when acceptsMouseMoved is set to True.) - rightMouseDown(event) Callback when the user clicks inside the canvas with the right mouse button. - rightMouseDragged(event) Callback when the user is dragging in the canvas with the right mouse button down. - rightMouseUp(event) Callback when the user lifts up the right mouse button from the canvas. - keyDown(event) Callback when the user hits a key. The event object has a characters() method that returns the pressed character key. - keyUp(event) Callback when the user lifts up the key. - flagChanged(event) Callback when the user changes a modifier flag: command shift control alt Example

from mojo.canvas import Canvas
from mojo.drawingTools import *
from vanilla import *

class ExampleWindow:

    def __init__(self):
        self.size = 50
        self.w = Window((400, 400), minSize=(200, 200))
        self.w.canvas = Canvas((0, 0, -0, -0), delegate=self)
        self.w.open()

    def draw(self):
        rect(10, 10, self.size, self.size)

ExampleWindow()
http://doc.robofont.com/api/mojo/mojo-canvas/
CC-MAIN-2017-30
en
refinedweb
This documentation is archived and is not being maintained. Microsoft.Rtc.Sip This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release. Microsoft.Rtc.Sip comprises the set of elements used to develop managed applications for Microsoft Office Communications Server 2007 R2. This namespace includes classes that implement concepts and behaviors defined for the Session Initiation Protocol (SIP), including client and server transactions, addresses, headers, and messages. Other classes implement application infrastructure components such as server agents and application manifests. The Microsoft.Rtc.Sip namespace defines the following elements:
https://msdn.microsoft.com/library/office/dd167395.aspx
CC-MAIN-2017-30
en
refinedweb
Spidering an Ajax Website With an Asynchronous Login Form Introduction: Spidering an Ajax Website With an Asynchronous Login Form The problem: Spidering tools don't allow AJAX login authentication. This instructable will show you how to log in through an AJAX form using Python and a module called Mechanize. Spiders are web automation programs that are becoming an increasingly popular way for people to gather data online. They creep around the web gathering precious materials to fuel the most powerful web companies around. Others crawl around and gather specific sets of data to improve decision making, or infer what's currently "in", or find the cheapest travel routes. Spiders (web crawlers, webbots, or screen scrapers) are great for turning HTML goop into some semblance of intelligent data, but we have a problem when it comes to AJAX-enabled webpages that have JavaScript and cookie enabled sessions that are not navigable with the normal set of spidering tools. In this instructable we will be accessing our own member page at pubmatic.com. These steps will show you a method to follow, but your page will be different. Have fun! Step 1: Gather Materials You will need to start supplementing your programming resources. You will need the following programs. Use their guides to help you install these... Install Firebug It's a Firefox addon Install Python Go to: python.org Install the Mechanize Module Get Mechanize Other useful Spidering tools: BeautifulSoup Step 2: Find the Headers Necessary to Create a Session. A well crafted spider will access a webpage as if it were a browser being controlled by a human being, keeping clues as to its true origin hidden. Part of the interaction between browsers and servers happens through GET and POST requests that you can find in the headers (this information is rarely displayed on a browser, but is very important). You can view some of this information by pressing Ctrl I (in Firefox) to open up the Page Info window. To disguise yourself as a mild mannered browser you must identify yourself using the same credentials. If you tried to log into pubmatic with JavaScript disabled in your browser you wouldn't get very far, since the redirects are done through JavaScript. So considering that most spider browsers don't have JavaScript interpreters, we will have to get past the login through an alternative route. Let's start by getting the header information sent from the browser when you click submit. If this were an ordinary browser login you would use Mechanize to fill out the form and click submit. Normal login forms are encapsulated within a <form> ... </form> tag and Mechanize would be able to submit this and poll the next page without trouble. Since we don't have a completed form tag, the submitting function is being handled by JavaScript. Let's check pubmatic's submitForm function. To do this, first open the webpage in Firefox and turn on Firebug by clicking the firefly in the lower right hand corner. Then click the script tab, copy all the code that appears and paste it into your favorite text editing bit of software. You can then delete all the code except the function submitForm. It starts with function "submitForm(theform) {" and everything in between this and the function's closing curly bracket "}". On analyzing this function very primitively we notice that some authentication happens, bringing back a variable called xmldoc that's being parsed as XML.
This is a key feature of AJAX it has polled the server and brought back some XML document that contains a tree of information. The node session_id contains the session_id if the authentication was successful, you can tell this by looking at this bit of code: "if (session_id != null) { //login successful". Now we want to prevent this bit of javascript from taking us anywhere so we can see what is being posted to the server during authentication. To do this we comment out any window redirects which look like this: "window.location=...". To comment this out add double slashes before them like so: "//window.location..." this prevents the code from being run. You can download the Javascript file below which has these edits already made. Copy and paste this edited bit of javascript into the console windows right hand side and click run. This overrides the javascript function already in the page with our new version. Now when you fill out your credentials and click submit you should see POST and GET header information fill the console, but you wont be going anywhere. The POST information is the information shot to the server by the AJAX functions, you want to be as much like this as possible, copy and paste that information into the a notepad. Step 3: Prepare the Code Before we add the new headers we've found let's create a templated Mechanize login python code. We're doing this for two reasons, first so we have a component that works to add new stuff to and second so you see how you would normally login to a non AJAX-y webpage. Open notepad or equivalent, and copy and paste the following. When you're done save it as youfilename.py somewhere you can find. #!/usr/bin/python # -*- coding: utf-8 -*- #Start with your module imports: from mechanize import Browser #Create your browser instance through the Browser() function call; br = Browser() #Set the browser so that it ignores the spiders.txt requests #Do this carefully, if the webpage doesn't like spiders, they might be upset to find you there br.set_handle_robots(False) #Open the page you want to login to br.open("") #Because I know the form name, I can simply select the form by the name br.select_form("login") #Using the names of the form elements I input the names of the form elements br['email'] = "laser+pubmatic@instructables.com" br['password'] = "Asquid22" #br.submit() sends out the form and pulls the resulting page, you create a new browser instance #response below contains the resulting page response = br.submit() #This will print the body of the webpage received #print response.read() Step 4: Send the Right Signals. Mechanize has an easy function to add headers to the headers POST, this will enable us to appear to the same browser that you used to access the page the first time. Open up the file with headers you found using Firebug and edit this text file to match. 
Replace everything in the quotes with the proper item from the header list: USER_AGENT = "Mozilla/5.0 (X11; U; Linux i686; tr-TR; rv:1.8.1.9) Gecko/20071102 Pardus/2007 Firefox/2.0.0.9" HOST = "pubmatic = "" CONTENT_LENGTH = "60" COOKIE = "utma=103266945.1970108054.1210113004.1212104087.1212791201.20; KADUSERCOOKIE=EA2C3249-E822-456E-847A-1FF0D4085A85; utmz=103266945.1210113004.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); JSESSIONID=60F194BE2A5D31C3E8618995EB82C3C1.TomcatTwo; utmc=103266945" PRAGMA = "no-cache" CACHE_CONTROL ="no-cache" This creates a set of variables that you can then use to append to the header using this code: br.add_header = [("Host", HOST)] br.add_headers = [("User-agent", USER_AGENT)] br.add_headers = [("Accept", ACCEPT)] br.add_header = [("Accept-Language", ACCEPT_LANGUAGE)] br.add_headers = [("Accept-Encoding", ACCEPT_ENCODING)] br.add_headers = [("Accept-Charset", ACCEPT_CHARSET)] br.add_header = [("Keep-Alive", KEEP_ALIVE)] br.add_headers = [("Connection", CONNECTION)] br.add_header = [("Content-Type", CONTENT_TYPE)] br.add_header = [("Referer", REFERER)] br.add_header = [("Content-Length", CONTENT_LENGTH)] br.add_headers = [("Cookie", COOKIE)] br.add_headers = [("Pragma", PRAGMA)] br.add_headers = [("Cache-Control", CACHE_CONTROL)] Now when we call the page open function the headers will be sent to the server as well. br.open("") Step 5: Mechanized Cookies This step is because mechanize automates cookie handling, but it's important to know what's happening: When the form is submitted you have the right headers as if you submitted using the javascript function. The server then authenticates this information and generates a session ID and saves it in a cookie if the username and password are correct. The good news is Mechanize automatically eats and regurgitates cookies so you don't need to worry about sending and receiving the cookie. So once you create a session ID that works you can then enter the members only section of the website. Step 6: Key to the Heart Now that we've acquired a session ID and Mechanize saved it into it's cookies we can follow the javascript to see where we need to go. Looking inside the "if (session_id != null) { //login successful" to see where to go on success. Looking at the window relocation code: "if (adurlbase.search(/pubmatic.com/) != -1) { window.location="" + "?v=" + Math.random()*10000;" we see that we need to go to a website located at random number. So let's just create a fake random number to enter and create a new browser instance to read the freshly opened page: response2 = br.open("") And that should be it. Your code is now complete, by using the proper headers and mechanize cookie handler we can now access the innards of pubmatic. Open up terminal, load the python package below and login away. To do this type python2.5 and then the filepath to the .py file. I would love to check this one out more thoroughly when I finish studying ajax programming. Cool, keep the techy Instructables coming! Maman is wreaking havoc in Washington!! lol :D Darn. Firebug doesn't work on Firefox 3. Oh well, I've never come across an AJAX website anyways. Nice choice of pictures. My aunt showed me one of those spider statues a few months ago when I visited her in Ottawa. (the capital of Canada) I'll let you know what I think of your Instructable once I have need for it. At present, it'd be useless to me without also figuring out how to use python-spidermonkey or writing a Javascript parser using regexes, Plex, or PLY. 
(The guys who run FanFiction.net are either insane, stupid, or hate their users and this extends to treating Javascript as if it's some sort of client-side PHP, so BeautifulSoup and Mechanize on their own are useless) Cool. It's good to see some more technical programming stuff on the site! Not that everyone will appreciate this Instructable, but hopefully people who can really use this knowledge will find it through a search engine, when they're looking it up.
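Putting steps 3 through 6 together, here is a minimal consolidated sketch of the whole flow; the URLs, form name, field names and header values below are placeholders rather than pubmatic's real ones:

#!/usr/bin/python
import random
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
# Step 4: present the same headers the real browser sent
br.addheaders = [
    ("User-agent", "Mozilla/5.0 (X11; U; Linux i686; rv:1.8.1.9) Gecko/20071102 Firefox/2.0.0.9"),
    ("Referer", "http://example.com/login"),
]

# Step 3: fill in and submit the login form
br.open("http://example.com/login")
br.select_form("login")
br["email"] = "you@example.com"
br["password"] = "secret"
br.submit()  # Step 5: mechanize stores the session cookie for us

# Step 6: request the page the javascript would have redirected to
response = br.open("http://example.com/members/?v=%s" % (random.random() * 10000))
print response.read()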
http://www.instructables.com/id/Spidering-an-Ajax-website-with-a-asynchronous-logi/
CC-MAIN-2017-30
en
refinedweb
Ok, so based on the last round of discussions about terminology, and how other languages process their path, I got to doing some thinking, and here is one last try at a high-performing, markerless, ultra-backwards-compatible, approach to this thing. I call it, "virtual packages". The implementation consists of two small *additions* to today's import semantics. These additions don't affect the performance or behavior of "standard" imports (i.e., the ones we have today), but enable certain imports that would currently fail, to succeed instead. Overall, the goal is to make package imports work more like a user coming over from languages like Perl, Java, and PHP would expect with respect to subpath searching, and creation/expansion of packages. (For instance, this proposal does away with the need to move 'foo.py' to 'foo/__init__.py' when turning a module into a package.) Anyway, this'll be my last attempt at coming up with a markerless approach, but I do hope you'll take the time to read it carefully, as it has very different semantics and performance impacts from my previous proposals, even though it may sound quite similar on the surface. In particular, this proposal is the ONLY implementation ever proposed for this PEP that has *zero* filesystem-level performance overhead for normal imports. That's right. Zip. Zero. Nada. None. Nil. The *only* cases where this proposal adds additional filesystem access overhead is in cases where, without this proposal, an ImportError would've happened under present-day import semantics. So, read it and weep... or smile, or whatever. ;-) The First Addition - "Virtual" Packages --------------------------------------- The first addition to existing import semantics is that if you try to import a submodule of a module with no __path__, then instead of treating it as a missing module, a __path__ is dynamically constructed, using namespace_subpath() calls on the parent path or sys.path. If the resulting __path__ is empty, it's an import error. Otherwise, the module's __path__ attribute is set, and the import goes ahead as if the module had been a package all along. In other words, every module is a "virtual package". If you treat it as a package, it'll become/act like one. Otherwise, it's still a module. This means that if, say, you have a bunch of directories named 'random' on sys.path (without any __init__ modules in them), importing 'random' still imports the stdlib random.py. However, if you try to import 'random.myspecialrandom', a __path__ will be constructed and used -- and if the submodule exists, it'll be imported. (And if you later add a random/myspecialrandom/ directory somewhere on sys.path, you'll be able to import random.myspecialrandom.whatever out of it, by recursive application of this "virtual package" rule.) Notice that this is very different from my previous attempt at a similar scheme. First, it doesn't introduce any performance overhead on 'import random', as the extra lookups aren't done until and unless you try to 'import random.foo'... which existing code of course will not be doing. (Second, but also important, it doesn't distort the __path__ of packages with an __init__ module, because such packages are *not* virtual packages; they retain their present day semantics.) Anyway, with this one addition, imports will now behave in a way that's friendly to users of e.g. 
Perl and Java, who expect the code for a module 'foo' to lie *outside* the foo/ directory, and for lookups of foo.bar or foo::bar to be searched for in foo/ subdirectories all along the respective equivalents of sys.path. You now can simply ship a single zope.py to create a virtual "zope" package -- a namespace shared by code from multiple distributions. But wait... how does that fix the filename collision problem? Aren't we still going to collide on the zope.py file? Well, that's where the second addition comes in. The Second Addition - "Pure Virtual" Packages --------------------------------------------- The second addition is that, if an import fails to find a module entirely, then once again, a virtual __path__ is assembled using namespace_subpath() calls. If the path is empty, the import fails. But if it's non-empty, an empty module is created and its __path__ is set. Voila... we now have a "pure" virtual package. (i.e. a package with no corresponding "defining" module). So, if you have a bunch of __init__-free zope/ directories on sys.path, you can freely import from them. But what happens if you DO have an __init__ module somewhere? Well, because we haven't changed the normal import semantics, the first __init__ module ends up being a defining module, and by default, its __path__ is set in the normal way, just like today. So, it's not a virtual package, it's a standard package. If you must have a defining module, you'll have to move it from zope/__init__.py to zope.py. (Either that, or use some sort of API call to explicitly request a virtual package __path__ to be set up. But the recommended way to do it would be just to move the file up a level.) Impact ------ This proposal doesn't affect performance of imports that don't ever *use* a virtual package __path__, because the setup is delayed until then. It doesn't break installation tools: distutils and setuptools both handle this without blinking. You just list your defining module (if you have one) in 'py_modules', along with any individual submodules, and you list the subpackages in 'packages'. It doesn't break code-finding tools in any way that other implementation proposals don't. (That is, ALL our proposals allow __init__ to go away, so tools are definitely going to require updating; all that differs between proposals is precisely what sort of updating is required.) Really, about the only downside I can see is that this proposal can't be implemented purely via a PEP 302 meta-importer in Python 2.x. The builtin __import__ function bails out when __path__ is missing on a parent, so it would actually require replacing __import__ in order to implement true virtual package support. (For my own personal use case for this PEP in 2.x (i.e., replacing setuptools' current mechanism with a PEP-compliant one), it's not too big a deal, though, because I was still going to need explicit registration in .pth files: no matter the mechanism used, it isn't built into the interpreter, so it still has to be bootstrapped somehow!) Anyway. Thoughts?
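To make the two additions more concrete, here is a rough, purely illustrative sketch of the "pure virtual package" case; it is not the proposed implementation, and the helper names are made up:

import os
import sys
import types

def virtual_path(name, search_paths=None):
    # Gather every directory called `name` along the search path.
    if search_paths is None:
        search_paths = sys.path
    candidates = [os.path.join(p, name) for p in search_paths]
    return [d for d in candidates if os.path.isdir(d)]

def pure_virtual_package(name):
    # No module named `name` was found at all: build an empty module
    # whose __path__ spans all matching directories, or fail as before.
    path = virtual_path(name)
    if not path:
        raise ImportError("No module named %s" % name)
    module = types.ModuleType(name)
    module.__path__ = path
    sys.modules[name] = module
    return module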
https://mail.python.org/pipermail/import-sig/2011-July/000270.html
CC-MAIN-2017-30
en
refinedweb
package org.netbeans.test.codegen;

/**
 *
 * @author Pavel Flaska
 */
public class EnumFacilityName {
    public enum UkrainianNames {
        ALEKSANDRA("defender of man"), ANICHKA("grace"), IONNA("gift of God"), KATRYA("Katherine"), ORYNKO("peace"), SOFIYA("wisdom");

        String description;

        UkrainianNames(String description) {
            this.description = description;
        }

        public String getDescription() {
            return description;
        }
    }

    public void mainf(String[] args) {
        System.out.println("SOFIYA: " + UkrainianNames.SOFIYA.getDescription());
        System.out.println("ANICHKA: " + UkrainianNames.ANICHKA.getDescription());
    }
}
http://kickjava.com/src/org/netbeans/test/codegen/EnumFacilityName.java.htm
CC-MAIN-2017-30
en
refinedweb
Page contents See Also edit - Dodekalogue - foreach - A simple and powerful command that is used extremely often - gotcha - Why can I not place unmatched braces in Tcl comments Tcl is All about Commands editTin Tcl you would write: glob /path/to/folder/*glob is a Tcl command, and has a purpose analogus to the dir command in MS-DOS. Commands are Made of Words editAnother useful command is puts, which, like the echo command in MS-DOS, prints things out on the screen: puts HelloA mnemonic device for puts is put string.There are commands that interpret words as numbers in order to get some math done, but it's up to each individual command to decide how it wants to treat any particular word. Tcl makes no distinction. Escaping Interpretation editwill chaaracters. puts This"is{a"single}"valueN. } Using Variables editUse set to set the value of a variable: set greeting HelloTo get the value back out of a variable use a dollar sign. For example, to print Hello: puts $greetingThis works even if there is white space in the greeting: set greeting {Hello There} puts $greetingThat's because Tcl substitutes the entire value of the variable into the the spot where $greeting was. Command Substitution editVariableExtra Credit (not really a beginner thing): puts [expr $$greeting]What will be printed? Yes, once again, it's HowdyWhy? Hint: what does [puts $$] do? Clif Flynt: When I'm teaching a class, I point out that set set setis a valid command, and that set x set set y a set z b $x $y $zis a valid Tcl command to assign the value b to variable a. Looking at this makes the students stop and think. And stare at me like I'm crazy. Commenting Code editSometimes,! Using Double Quotes editBraces doen't do anything at all to them. An \a is just an a. Finally, we get to the closing quote." " Using Backslashes editIn addition to variable and command substitution, there is on other substituion device: Backslashes. They are used in Tcl in a manner very similar to that of C/C++. The following are backslash substitutions understood by Tcl: - \a - Audible alert (bell) (0x7) - \b - Backspace (0x8) - \f - Form feed (0xc) - \n - Newline, LF (0xa) - \r - Carriage-return, CR (0xd) - \t - Tab (0x9) - \v - Vertical tab (0xb). Word Expansion editSometimes} {*}$attributeThis is called "word expansion". It's really All about Commands editT conrol flow can be, and is, implemented as commands. That's why Tcl really is all about the commands. There are two types of commands which may be considered fundamental: commands that create new commands, i.e., proc, and commands that implement conditional constructs, i.e., if. Creating New Commmands editproc loThe result is: helloreturn can be used to be more explicit. The following example has the same result as the previous one. proc combine {part1 part2} { return $part1$part2 } combine hel loThere Conditional Constructs editif,. } What is a list? editTThe above commands return : A B C AHowever, not every string that can be interpreted as a command is a list. set command {list {*}{one two three}} lindex $command 0 ;# -> error: list element in braces followed by "{one" instead of spaceThe meaning of {*} is discussed later.lindex returns the list element, located at the given index. Indices start at 0. 
set a {A B C} lindex $a 1result: BIt's possible for a list to contain another list: set a "A {B C} D" lindex $a 1result: A {B C} D B CRemember that not all strings are lists.Although split can be used to convert any string into a list of words, that doesn't mean that any value given to split is a well-formed list. DO edit - Do write lots of comments in your code about what your code does, what you tried that didn't work (and why, if known). - Do study the documentation and others' code to understand 'upvar', 'uplevel'. - Learn how to use exec so that you can start using Tcl to drive external programs DON'T edit - Don't have unescaped unmatched brackets ANYWHERE, including comments. Use a text editor that helps you by showing bracket matching. Getting Stuff Done edit - tcllib is full of useful code. Try to use it as much as possible, a wide use of tcllib really saves time, and it is frequently updated. - If you need speed, be aware that Tcl can be extended easily (using C,C++...). Avoid writing your own extensions if existing ones can do the job. - Do split your code into different namespaces (and files) when it overcomes ca. a thousand lines.
http://wiki.tcl.tk/15009
CC-MAIN-2017-30
en
refinedweb
#include <hallo.h> * Matt Zimmerman [Mon, Jul 26 2004, 10:32:58AM]: > > *cough* are you being serious? All the time in the last 10 days I have > > not seen any concrete and constructive posting from our DPL. > > Did you pick 10 days as an arbitrary interval which would not include his > statement on this issue in <20040707114550.GA12096@deprecation.cyrius.com>? > (7 July). No. I could even say "in July" but it would not make any difference, since this posting is exactly what I am talking about: pseudo-arguments which are either already known (and expected, and proven to be nonsense) or SEP. Regards, Eduard. -- The Great Maker has gifted us with great big eyes, and great big scanners, and great big ah ... well that is no concern of yours. (Londo Mollari, Babylon 5)
https://lists.debian.org/debian-devel/2004/07/msg01885.html
CC-MAIN-2017-30
en
refinedweb
(). > > --- grab.c (revision 9221) > +++ grab.c (working copy) > @@ -839,9 +839,7 @@ > } > -#ifdef HAVE_MMX > - emms(); > -#endif > + emms_c(); This was missing a dsputil.h #include, emms_c is a macro. I'm sharing the Cola with you. Fixed. Diego
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-June/030342.html
CC-MAIN-2017-30
en
refinedweb
Which of the following statements is true for the code below in C# 3.0?

public class A { public int Name { get; } }

- Yes, it is allowed, since it is a read-only auto-implemented property.
- Yes, it is allowed, since it is a write-only auto-implemented property.
- Class A should have a private field associated with the property.
- Compilation error. Auto-implemented properties must contain both get and set accessors.
http://skillgun.com/question/3193/csharp/properties/which-of-the-following-statement-is-true-for-the-below-given-code-in-csharp30-public-class-a-public-int-name-get
CC-MAIN-2017-30
en
refinedweb
Notes: Mule 3.7 and newer support SFTP retries. In code examples in this guide, spring-beans-current.xsd is a placeholder. To locate the correct version, see — Mule 3.7 and newer support Spring 4.1.6. Namespace and Syntax Usage To include the SFTP transport in your configuration: Define these namespaces. Transactions are not currently supported.
https://docs.mulesoft.com/mule-user-guide/v/3.7/sftp-transport-reference
CC-MAIN-2017-30
en
refinedweb
1 /*2 * $Header: /cvshome/build/org.osgi.framework/src/org/osgi/framework/ServiceEvent.java,v 1.15 2007/02/20 00:14:12 hargrave Exp $3 * 4 * Copyright (c) OSGi Alliance (2000,.framework;20 21 import java.util.EventObject ;22 23 /**24 * An event from the Framework describing a service lifecycle change.25 * <p>26 * <code>ServiceEvent</code> objects are delivered to27 * <code>ServiceListener</code>s and <code>AllServiceListener</code>s when28 * a change occurs in this service's lifecycle. A type code is used to identify29 * the event type for future extendability.30 * 31 * <p>32 * OSGi Alliance reserves the right to extend the set of types.33 * 34 * @Immutable35 * @see ServiceListener36 * @see AllServiceListener37 * @version $Revision: 1.15 $38 */39 40 public class ServiceEvent extends EventObject {41 static final long serialVersionUID = 8792901483909409299L;42 /**43 * Reference to the service that had a change occur in its lifecycle.44 */45 private final ServiceReference reference;46 47 /**48 * Type of service lifecycle change.49 */50 private final int type;51 52 /**53 * This service has been registered.54 * <p>55 * This event is synchronously delivered <strong>after</strong> the service56 * has been registered with the Framework.57 * 58 * <p>59 * The value of <code>REGISTERED</code> is 0x00000001.60 * 61 * @see BundleContext#registerService(String[],Object,java.util.Dictionary)62 */63 public final static int REGISTERED = 0x00000001;64 65 /**66 * The properties of a registered service have been modified.67 * <p>68 * This event is synchronously delivered <strong>after</strong> the service69 * properties have been modified.70 * 71 * <p>72 * The value of <code>MODIFIED</code> is 0x00000002.73 * 74 * @see ServiceRegistration#setProperties75 */76 public final static int MODIFIED = 0x00000002;77 78 /**79 * This service is in the process of being unregistered.80 * <p>81 * This event is synchronously delivered <strong>before</strong> the82 * service has completed unregistering.83 * 84 * <p>85 * If a bundle is using a service that is <code>UNREGISTERING</code>, the86 * bundle should release its use of the service when it receives this event.87 * If the bundle does not release its use of the service when it receives88 * this event, the Framework will automatically release the bundle's use of89 * the service while completing the service unregistration operation.90 * 91 * <p>92 * The value of UNREGISTERING is 0x00000004.93 * 94 * @see ServiceRegistration#unregister95 * @see BundleContext#ungetService96 */97 public final static int UNREGISTERING = 0x00000004;98 99 /**100 * Creates a new service event object.101 * 102 * @param type The event type.103 * @param reference A <code>ServiceReference</code> object to the service104 * that had a lifecycle change.105 */106 public ServiceEvent(int type, ServiceReference reference) {107 super(reference);108 this.reference = reference;109 this.type = type;110 }111 112 /**113 * Returns a reference to the service that had a change occur in its114 * lifecycle.115 * <p>116 * This reference is the source of the event.117 * 118 * @return Reference to the service that had a lifecycle change.119 */120 public ServiceReference getServiceReference() {121 return reference;122 }123 124 /**125 * Returns the type of event. 
The event type values are:126 * <ul>127 * <li>{@link #REGISTERED}128 * <li>{@link #MODIFIED}129 * <li>{@link #UNREGISTERING}130 * </ul>131 * 132 * @return Type of service lifecycle change.133 */134 135 public int getType() {136 return type;137 }138 }139
http://kickjava.com/src/org/osgi/framework/ServiceEvent.java.htm
CC-MAIN-2017-30
en
refinedweb
Red Hat Bugzilla – Bug 107865 Panel does not respect %f for launchers Last modified: 2007-04-18 12:58:44 EDT Creating a launcher for a small program... When the command includes %u, the file gets passed to the program correctly as a URL. However, using %f instead, which according to the Freedesktop.org spec should pass the full path to the file, fails and passes _nothing_. Correction: it does not fail, it simply launches the program with no argument where it should be passing the path to the file. Is this still an issue with FC 1? Dan: just tried this out and it works fine for me. Any more details? Mark, how are you testing it? #include <stdio.h> int main( int argc, char *argv[] ) { fprintf( stderr, "args: %d %s %s\n", argc, argv[0], argv[1] ); sleep( 5 ); exit( 0 ); } Then, using a launcher with the command "/path/to/program %f" and specifying "Run in Terminal", drag a document onto the launcher. The program pauses after printing its args. Note that %f is (null) while a %u actually works. %f: args: 1 /home/boston/dcbw/thing (null) %u: args: 2 /home/boston/dcbw/thing gnome-panel-2.5.3.1-6 currently, but it has existed since the FC1 betas at least, probably earlier. Hmm, I added a launcher to the panel which pointed at a script: #!/bin/bash echo $@ > /tmp/t.tmp and then tried dragging a file onto it with both %f and %u and it worked. Could you confirm that works for you? The actual bug is because the gnome-desktop library uses gnome_vfs_uri_is_local() and that causes files from NFS-mounted homedirs to be skipped over. Upstreaming this bug, gnome.org #135629
https://bugzilla.redhat.com/show_bug.cgi?id=107865
CC-MAIN-2017-30
en
refinedweb
In most web development projects you might want to automate file generation, for example order confirmation receipts or payment receipts, based on a template you are using. The library we will be using is WeasyPrint. WeasyPrint is used to combine multiple pieces of information into an HTML template and then convert it to a PDF document. The supported versions are Python 2.7 and 3.3+. WeasyPrint has a lot of dependencies, so it should be installed with pip. pip install Weasyprint Once you have installed WeasyPrint, you should have a weasyprint executable. This can be as simple as: weasyprint --version This will print the WeasyPrint version number you have installed. weasyprint <Your_Website_URL> <Your_path_to_save_this_PDF> Eg: weasyprint ./test.pdf Here I have converted the "" site to test.pdf. Let's write a sample PDF generation: from weasyprint import HTML, CSS HTML('').write_pdf('/localdirectory/test.pdf', stylesheets=[CSS(string='body { font-size: 10px }')]) This also converts the page into a PDF. Here the change is that we are writing a custom stylesheet (CSS) for the body to change the font size, using the "string" argument. You can also pass a CSS file. This can be done using: from django.conf import settings CSS(settings.STATIC_ROOT + 'css/main.css') Ex: HTML('').write_pdf('/localdirectory/test.pdf', stylesheets=[CSS(settings.STATIC_ROOT + 'css/main.css')]) You can also pass multiple CSS files to this stylesheets array. Generating PDF Using Template: Let's create a basic HTML file that we will use as a template to generate the PDF: templates/home_page.html <html> <head> <title>Home Page</title> </head> <body> <h1>Hello !!!</h1> <p>First PDF generation using WeasyPrint.</p> </body> </html> Let's write a Django function to render this template to a PDF: from weasyprint import HTML, CSS from django.template.loader import get_template from django.http import HttpResponse def pdf_generation(request): html_template = get_template('templates/home_page.html') pdf_file = HTML(string=html_template.render()).write_pdf() response = HttpResponse(pdf_file, content_type='application/pdf') response['Content-Disposition'] = 'filename="home_page.pdf"' return response Here, we have used the get_template() function to fetch the HTML template file from the configured template directories. Finally, you can download your home_page.pdf.
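To actually reach the view it still has to be wired into a URLconf; a minimal sketch, where the module path myapp.views is an assumption:

# urls.py
from django.conf.urls import url
from myapp.views import pdf_generation

urlpatterns = [
    url(r'^home-pdf/$', pdf_generation, name='home_pdf'),
]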
https://micropyramid.com/blog/generate-pdf-files-from-html-in-django-using-weasyprint/
CC-MAIN-2017-30
en
refinedweb
#include <Servo.h>

Servo myservo;

void setup() {
  myservo.attach(2);
  myservo.write(0);
  pinMode(1, INPUT);
}

void loop() {
  if (digitalRead(1) == HIGH) {
    myservo.write(180);
    delay(500);
  }
  if (digitalRead(1) == LOW) {
    myservo.write(5);
    delay(150);
  }
}

My problem now is that I get error messages when I try to compile the sketch for the ATtiny. As far as I understood, the library isn't compatible with the chip, or am I wrong?
http://forum.arduino.cc/index.php?PHPSESSID=qei080fdj7j301qiar9f3n5l90&topic=463985.0
CC-MAIN-2017-30
en
refinedweb
In this article, we'll show you how to get started with Domino in less than 10 minutes! We'll use some data about the demographics of New York City. Step 1 Download this CSV file and this Python script, mean_pop.py, which calculates mean statistics for whatever column you choose in the CSV file. Step 2 Next, create a new project and upload these files to Domino. Step 3 Now that you have these files in Domino, go to the "Runs" tab and start a new run using mean_pop.py. Use the command-line argument "PERCENT FEMALE" to calculate the mean value for that column. mean_pop.py "PERCENT FEMALE" The result is an average of 24% female in each zipcode. That's unexpectedly low, so let's dig deeper in an interactive Jupyter session. Step 4 Copy/paste these lines of code to follow along with the video below: import pandas as pd df = pd.read_csv('Demographic_Statistics_By_Zip_Code.csv') df[['COUNT FEMALE']].mean() df[['COUNT MALE']].mean() This says that on average they sampled 7 women and 10 men in each zipcode. That's a pretty small sample relative to the size of New York City, so we can't trust the 24% women we found in step 3. We need to find a different data set. Don't forget to name and save your Jupyter session! When you hit "Stop", we'll sync the results back to Domino. Then it's back to the drawing board to find a decent data set. Ahh ... the life of a data scientist! Step 5 You can review the results in the Runs dashboard, and even leave a comment to remind your future self why you didn't use this data set.
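The downloaded mean_pop.py itself is not reproduced in this article; a minimal sketch of what such a script could look like (an assumption, not the actual file):

# mean_pop.py -- print the mean of one column of the demographics CSV
import sys
import pandas as pd

def main():
    column = sys.argv[1]  # e.g. "PERCENT FEMALE"
    df = pd.read_csv('Demographic_Statistics_By_Zip_Code.csv')
    print('mean of %s: %s' % (column, df[column].mean()))

if __name__ == '__main__':
    main()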
https://support.dominodatalab.com/hc/en-us/articles/360000102283-Domino-the-first-10-minutes
CC-MAIN-2018-26
en
refinedweb
HOWTO: Using Archetypes SQLStorage and Advanced Tips Where is the knowledge we have lost in information? —T. S. Eliot, The Rock - Introduction - Creating a Database-Stored Object - Testing Our New Object - About UIDs - Customers and Orders - Working With Existing Table Structure - Propagating Direct DB Changes Back to ZODB - Changing SQLStorage's Storage Methods: An Example With Lists - Creating a new storage class - Integrating relational database schemas into Archetypes - The status of SQLStorage - About this Document Introduction The SQLStorage storage for Archetypes 19 allows you to transparently store attributes of your Archetypes objects in an SQL-backed database. This is useful in many situations, such as: - You have other applications that need to simultaneous access data in relational database format. - You have a lot of existing data in a relational database format. - You or your boss/client are more comfortable knowing that your data is accessible in a relational database format. This HOWTO assumes that you are comfortable installing Archetypes, and using it with non-SQLStorage, and are comfortable administering Plone/CMF/Zope sites. This HOWTO explains how you can use relational database features like triggers and rules to store your data in traditional relational database parent/child tables, while still accessing it with traditional Zope accessors. While the advanced techniques in this HOWTO can be used with any database that supports triggers and updatable views, the example code is for PostgreSQL 18. It should not be difficult to translate these ideas to Oracle or other database. At the time of this writing, MySQL does not support triggers, views, or rules, so these ideas could not easily be implemented in MySQL. Typographic conventions In most relational databases, commands are case-insensitive: SELECT is the same as select. To help you understand the commands, however, I'll follow the traditional format of putting SQL commands in upper case. Database identifiers (field names, table names, etc.) may or may not be case-sensitive, depending on your database. The examples here will be using PostgreSQL, which is case-insensitive for identifiers. I'll show table names with a leading capital letter, and field name in all lower case. However, you can enter these any way you like. Comparing SQLStorage to other relational database interface strategies The scope of what Archetypes accomplishes is similar to, yet very different from, other systems of connecting Zope to relational databases. Archetypes stores the objects in the ZODB as an archetype object, having traditional Plone/CMF/Zope methods such as Title(), absolute_url(), etc. However, individual attributes (such as title, author, body, etc.) can be looked up in and stored in the relational database. Since objects are real Zope objects, they work naturally with acquisition, catalogs, and other Zope technologies. Since you can choose which attributes to store in the relational database, attributes that don't have a naturally tight fit with relational database structures can be left in the ZODB, as can ones that might easily fit in a relational database structure, but for which you have no external relational database access requirements. Versus ZSQL methods A more traditional method of Zope/Relational Database connection has been to store rows of information in a relational database, and create ZSQL Methods to look up and display this information. 
With this technique, you can associate a Python class with a relational database row 1, but the objects aren't real persistent Zope objects, and aren't found during catalog calls. This strategy requires customized integration to work with key Plone technologies such as acquisition, workflow, portal_forms, etc. While there are worthwhile Zope products to simplify some of the details of traditional relational database storage and Zope (such as Znolk 20, which auto-generates database forms and storage methods), these still fall quite short of the interface simplification and power that Archetypes delivers. Traditional SQL Method strategies for using Zope with relational databases are of most use when converting an existing site built using other web technologies (such as PHP or Perl), and in which you have already written the SQL statements for inserting, updating, deleting, viewing, etc., all of your object types. Versus APE (formerly Adaptable Storage) Shane Hathaway's product APE 21 (formerly called Adaptable Storage) allows you to store your Zope objects in different formats (such as in standard filesystem objects or on a relational database). In this case, segments of the ZODB space are "mounted" from results in a relational database. This means the entire object is kept in the relational database--all attributes, etc. Deleting an object from the relational database, adding it, or modifying it affects the ZODB instantly, since this part of the ZODB is just a mounted pointer to the relational database. While APE is a technological accomplishment, and very useful for some projects, it doesn't fit perfectly into an existing natural database role. All ZODB objects are stored in a few very APE-oriented tables, rather than being stored in customizable, traditional-relational-database tables. In addition, APE works by location, rather than by type (as Archetypes does). That is, everything in the folder /foo is controlled (mounted) by APE. If /foo contains all and only objects of a certain portal_type (like Customers) you could treat these tables as the "customer relational database", and work around the unusual object-to-relational database table structure. However, if there are different types stored in that directory, you end up with a mishmash of different types of data stored in the same tables, and don't have the straightforward setup of a "customer" table versus an "orders" table, etc. 2 With Archetypes, each portal_type maps to an individual table, regardless of where it is stored. Lastly, APE does not produce the integrated form production/validation/editing systems that Archetypes does. Creating a Database-Stored Object Let's start with a simple Archetypes object, representing a Customer:

# Customer.py
# Customer portal type (non-SQL storage)
from Products.Archetypes.public import *
from Products.Archetypes.TemplateMixin import TemplateMixin

schema = BaseSchema + Schema((
    TextField('body',
              required=1,
              ),
    StringField('phone',
              ),
    )) + TemplateMixin.schema

class Customer(TemplateMixin, BaseContent):
    """Our example object"""
    schema = schema
    archetype_name = "Customer"
    actions = TemplateMixin.actions

registerType(Customer)

This object defines two custom fields, body and phone (plus all the traditional metadata attributes that are brought in by BaseSchema). This object would be stored entirely in the ZODB by Archetypes; however, we can convert this to being stored in a relational database by making just two simple changes to the object: - Add an import to the beginning for the appropriate SQL database storage method. - Add an attribute storage to the fields we want stored in the database, and set these to our storage method.
Since we're using PostgreSQL in this example, we'll import the PostgreSQL storage method. Our new object then becomes: # CustomerSQL.py # Customer portal type (SQL storage) CustomerSQL(TemplateMixin, BaseContent): """Our example object""" schema = schema archetype_name = "Customer SQL" actions = TemplateMixin.actions registerType(CustomerSQL) At this point, you should install our new Archetypes type and register it with portal_types. Now, before we can begin using this object, we must do two things: - Add a database connector (in our case, PostgreSQL) to our site. We can use any PostgreSQL adapter; however, I've used ZPyscopgDA 22 for testing this, as this appears to be the best maintained of the noncommercial adapters. - In the archetype_tool, under the Connections tab, we need to set our database connector for this type of object to our new database connector. Note that in this tab, we have a default connection, and we can override this for an portal_type that uses SQLStorage. In our case, you can either set the default to the new connection, or the specific connection for our CustomerSQL type. However, since we'll be adding several other Archetypes types, it will be easier to point the default setup to your database adapter connection. Before you go any further, make sure that the user you defined in your database connection has the ability to create tables, and insert, update, and delete from tables in your database. 3 Testing Our New Object Now, we can add an instance of our object through the standard Plone interface. Plone will recommend a unique ID; let's change that to "new_example". Put in values for body and phone. Notice that you can see these values in the view view, and can re-edit them in the edit view. Switch to your database monitor (for PostgreSQL, this is psql) and examine the database: database=# /d List of relations Schema | Name | Type | Owner --------+--------------------------+----------+------- public | customersql | table | joel Archetypes has created our table for us. Examine the table: database=# /d customersql Table "public.customersql" Column | Type | Modifiers -----------+------+----------- uid | text | not null parentuid | text | body | text | phone | text | Indexes: customersql_pkey primary key btree (uid) Notice that Archetypes has created our body field as text field and the phone field as a text field. These transformations are part of the PostgreSQLStorage method, and can be easily changed in the source, should your needs require different mappings. 4 We'll look at changing those mappings later in this document, in Changing SQLStorage's Storage Methods: An Example With Lists. Also, notice that there are two new fields created: - UID (uid): this is a unique identifier for your object - Parent UID (parentuid): this is the unique identifier (if any) for the parent (enclosing) container for your object. About UIDs One of the smartest things about Archetypes is that it introduces the ideas of unique identifiers into CMF sites. Zope IDs must be unique within a folder, but need not be unique across a site. Therefore, keeping track of the fact that you have an object called Eliot isn't useful, since you may have several objects called that in different folders. A common workaround has been to refer to objects by their path (eg, /animals/cats/Eliot), but this is fragile, since any change to the object ID, or the IDs of any of the parent objects will change the path and break these references. 
Archetypes assigns each object a unique ID at creation 5, and then maintains a mapping of that unique ID to the current location of the object in the ZODB. If the object is deleted, Archetypes will remove it from its UID mapping. Please note the difference between the Zope ID (the standard name for the object returned by getId()) and the Archetypes UID. When our object was created, Plone assigned it an ID like CustomerSQL.2003-07-23.4911. Archetypes used this ID as its UID. Even though we may change the object ID to new_example, it will keep its UID for the lifetime of the object. The UID should be treated as an immutable attribute. Archetypes also creates a portal_catalog index for the UID field, so you can easily query the catalog using the UID. It also exposes several methods in its API for finding an object by its UID (from ArchetypeTool.py): ## Reference Engine Support def lookupObject(self, uid): if not uid: return None object = None catalog = getToolByName(self, 'portal_catalog') result = catalog({'UID' : uid}) if result: #This is an awful workaround for the UID under containment #problem. NonRefs will aq there parents UID which is so #awful I am having trouble putting it into words. for object in result: o = object.getObject() if o is not None: if IReferenceable.isImplementedBy(o): return o return None def getObject(self, uid): return self.lookupObject(uid) def reference_url(self, object): """Return a link to the object by reference""" uid = object.UID() return "%s/lookupObject?uid=%s" % (self.absolute_url(), uid) We can use the method lookupObject(uid) to get the actual object by UID, or use reference_url(object) to generate a "safe" URL to an object that will always find it given its UID. You can see the list of currently-tracked UIDs and actual objects in the archetype_tool, UID tab. Parent UID The Parent UID field created in our table is the UID of the container, if it is an Archetypes object (or some other kind of future object that might expose a UID). This is very helpful for creating a simple parent/child relationship in Plone, as we'll see in the next section. Customers and Orders For example, a common database example is a database of customers and orders, where one customer can have several orders. Pseudo-SQL for this would be: CREATE TABLE Customer ( custid SERIAL NOT NULL PRIMARY KEY , custname TEXT ... other customer fields ... ); CREATE TABLE Orders ( orderid SERIAL NOT NULL PRIMARY KEY , custid INT REFERENCES Customer ... other order fields ... ); The field custid in the orders table is a reference (called a foreign key) to the field custid in the customer table. To create a similar structure in Archetypes, we need to create just two types: CustomerFolder and Orders. Objects of both of these types will get UIDs from Archetypes. But if we change our Customer type to become folderish (ie, derived from Archetypes's BaseFolder rather than BaseContent), it can contain objects, and we can add Orders objects inside of it. These Orders objects will have their Parent UID field set to the CustomerFolder UID, giving us an easy way to write ZCatalog queries for all orders with a certain customer UID, or SQL queries asking the same thing. Creating in Archetypes Let's create these two new archetypes. First, the CustomerFolder. 
This will be exactly the same as CustomerSQL, except using BaseFolder rather than BaseContent:

# CustomerFolder.py
# Customer portal type (SQL storage, folderish)

from Products.Archetypes.public import *
from Products.Archetypes.TemplateMixin import TemplateMixin
from Products.Archetypes.SQLStorage import PostgreSQLStorage

schema = BaseSchema + Schema((
    TextField('body',
              required=1,
              storage=PostgreSQLStorage(),
              primary=1,
              searchable=1,
              default_output_type='text/html',
              allowable_content_types=('text/restructured',
                                       'text/plain',
                                       'text/html',
                                       'application/msword'),
              widget=RichWidget,
              ),
    StringField('phone',
                index='FieldIndex',
                storage=PostgreSQLStorage(),
                ),
    )) + TemplateMixin.schema

class CustomerFolder(TemplateMixin, BaseFolder):
    """Our example object"""
    schema = schema
    archetype_name = "Customer Folder"
    actions = TemplateMixin.actions

registerType(CustomerFolder)

Our Orders type is straightforward. It will include the cost of an order, and shipping details:

# Orders.py

from Products.Archetypes.public import *
from Products.Archetypes.TemplateMixin import TemplateMixin
from Products.Archetypes.SQLStorage import PostgreSQLStorage

schema = BaseSchema + Schema((
    TextField('shipping_details', required=1, storage=PostgreSQLStorage()),
    FixedPointField('total_cost', storage=PostgreSQLStorage())
    )) + TemplateMixin.schema

class Orders(TemplateMixin, BaseContent):
    """Our example object"""
    schema = schema
    archetype_name = "Orders"
    actions = TemplateMixin.actions

registerType(Orders)

Testing Them Out

Register these two new types with portal_types and add a CustomerFolder object. You should be able to edit this data and see the resulting information in the table customerfolder without a problem. As of the writing of this HOWTO, Archetypes does not expose a "folder contents" tab for folderish objects like our CustomerFolder. However, you can go to this view manually by visiting the new customer folder object, and changing the end of the URL to point to folder_contents. 6

Inside of the new customer folder, add an Orders object and enter details. Then, examine the orders table in the database:

database=# SELECT * FROM Orders;
          uid           |           parentuid            | shipping_details | total_cost
------------------------+--------------------------------+------------------+------------
 Orders.2003-07-23.4935 | CustomerFolder.2003-07-23.4609 | Shipping         |          0
(1 row)

Notice how we get the parentuid value correctly. From our relational database, we could write a traditional query now on customers and the total of the orders as:

database=# SELECT C.uid, C.phone, SUM(O.total_cost)
           FROM CustomerFolder AS C
           INNER JOIN Orders AS O ON (O.parentuid = C.uid)
           GROUP BY C.uid, C.phone;

Working With Existing Table Structure

Of course, if you're working with existing tables, or if you want to work with other SQL tools, chances are you want to use a more traditional primary key/foreign key setup than the Archetypes UID. Many databases use a serial column 7 (integers that increase for each new record) as a primary key. To do this with Archetypes, you can simply either:

- create the table before you insert the first Archetypes record, or
- modify the table after Archetypes creates it and starts using it.

For example, our customerfolder table was created automatically by Archetypes, and it contains a UID field, but not a traditional, numeric primary key.
We can fix this by adding this:

ALTER TABLE Customerfolder ADD customerid INT;
CREATE SEQUENCE customerfolder_customerid_seq;
UPDATE Customerfolder SET customerid = nextval('customerfolder_customerid_seq');
ALTER TABLE Customerfolder ALTER customerid SET DEFAULT nextval('customerfolder_customerid_seq');
ALTER TABLE Customerfolder ALTER customerid SET NOT NULL;
ALTER TABLE Customerfolder DROP CONSTRAINT customerfolder_pkey;
ALTER TABLE Customerfolder ADD PRIMARY KEY ( customerid );
ALTER TABLE Customerfolder ADD UNIQUE ( uid );

Note that syntax for altering tables, adding primary keys, etc., varies considerably from one relational database to another, so if you're not using PostgreSQL, you'll want to research how to do this with your relational database. Also note that it's rather wordy to make these changes, whereas having the table set up properly in the first place is much more succinct:

CREATE TABLE Customerfolder (
    customerid SERIAL NOT NULL PRIMARY KEY,
    ...
)

So it may often be to your advantage to create the table before Archetypes. Now we have a traditional primary key that is automatically increased, but since it's not part of Archetypes's schema, Archetypes will leave it alone.

Important: Notice that we make the UID field UNIQUE. This guarantees that two records cannot have the same UID. Even though we're no longer using the Archetypes UID as our primary key, it is still critical to keep this field unique. When Archetypes edits an object, it doesn't know if the object exists in the relational database yet or not. Therefore, it tries to insert a record for this object. If this fails, it then updates the existing record. This behavior may change in future versions of Archetypes, but, unless it does, you must make sure UID stays unique or else you'll have multiple copies of your objects' data in the relational database, only one of which will be correct.

If You Need A Very Different Table Structure

Instead of having Archetypes write to the real table, we can have Archetypes insert to a view of the table. Such a view can have fields that look like those that Archetypes expects, but actually insert the information in different places and different ways. This is especially useful if you have existing relational database tables that have non-Zope-like fields, names, etc. To do this, let's first move the real table out of the way:

ALTER TABLE customerfolder RENAME TO customerfolder_table;

This is because Archetypes expects to work with customerfolder, and we want that to be our view. The actual table name doesn't have to be customerfolder_table; it can be whatever we want it to be. Now, let's create our view:

CREATE VIEW customerfolder AS
    SELECT uid, parentuid, body, phone FROM customerfolder_table;

Now, we'll make this view insertable so that new records can be inserted into it. The syntax for this is very relational database-specific; you'll need to change this for other database systems. Following is our PostgreSQL syntax:

CREATE RULE customerfolder_ins AS
    ON INSERT TO customerfolder
    DO INSTEAD (
        INSERT INTO customerfolder_table ( uid, parentuid, body, phone )
            VALUES ( NEW.uid, NEW.parentuid, NEW.body, NEW.phone );
    );

Now, Archetypes can insert to customerfolder, assuming that it is a table, when in fact, we're rewriting its work to write to the real table.
So that Archetypes can do updates and deletes, we'll need to add rules for that, too:

CREATE RULE customerfolder_del AS
    ON DELETE TO customerfolder
    DO INSTEAD
        DELETE FROM customerfolder_table WHERE uid = OLD.uid;

CREATE RULE customerfolder_upd AS
    ON UPDATE TO customerfolder
    DO INSTEAD
        UPDATE customerfolder_table
           SET parentuid = NEW.parentuid
             , body = NEW.body
             , phone = NEW.phone
         WHERE uid = OLD.uid;

In this example, our real table and view are only slightly different, but this strategy is helpful when dealing with existing tables that have many fields not of interest to Archetypes, or when our relational database tables have a different type of structure than is natural to Archetypes. We'll see advanced uses of this later.

FIXME: Show It Working

Using Traditional Referential Integrity For the Child Table

For our orders table, we can do the same thing to give that a serial-type primary key that is more traditional for a relational database. In addition, though, it's likely that we want the child orders table to relate to the parent customerfolder table by the new customerid rather than the Archetypes-oriented Parent UID. To do this, let's add a customerid field to the orders table:

ALTER TABLE Orders ADD customerid INT;
UPDATE Orders SET customerid = Customerfolder.customerid
    FROM Customerfolder
    WHERE Orders.parentuid = Customerfolder.uid;
ALTER TABLE Orders ALTER customerid SET NOT NULL;
ALTER TABLE Orders ADD FOREIGN KEY (customerid) REFERENCES Customerfolder;

Now we have a traditional primary key/foreign key relationship between our tables. If we have an orders record for customer #1, we won't be able to delete this customer until we delete these orders.

We need to set it up so that when we add an order via Plone, we look up the customerid from the customerfolder table and set it in the orders table for the new record. To do this, we'll add a trigger that, before completing an insert on the orders table, figures out the customerid and makes that part of the insert.

Different databases implement triggers in different ways. In PostgreSQL, a trigger statement is a simple statement that calls a function. This function can reference and change a record structure called NEW which reflects the new record being inserted (or for an update, the new record to be written). Functions in PostgreSQL can be written in different languages, including Python; for our example, however, we'll use PostgreSQL's built-in PL/PgSQL language, a PL/SQL-like language that is simple to write and understand. Before you can write PL/PgSQL functions, you must enable this by adding this language to your database.
From the shell:

$ createlang plpgsql your_db_name

Our trigger function will be:

CREATE OR REPLACE FUNCTION order_ins () RETURNS TRIGGER AS '
    BEGIN
        NEW.customerid := customerid FROM customerfolder AS C
            WHERE NEW.parentuid = C.uid;
        RETURN NEW;
    END;
' LANGUAGE plpgsql;

Now, let's create the trigger:

CREATE TRIGGER order_ins_trig
    BEFORE INSERT ON Orders
    FOR EACH ROW
    EXECUTE PROCEDURE order_ins();

Our real test is whether this works in Plone, but for a Q&D simulation, we'll test this in the SQL monitor by manually inserting a child orders record and seeing if it picks up the correct customerid (for your tests, use the real UID of one of your CustomerFolder objects):

database=# insert into orders (uid,parentuid) values ('test', 'CustomerFolder.2003-07-23.4609');
INSERT 35162 1
database=# select uid, parentuid, customerid from orders;
          uid           |           parentuid            | customerid
------------------------+--------------------------------+------------
 Orders.2003-07-23.4935 | CustomerFolder.2003-07-23.4609 |          1
 test                   | CustomerFolder.2003-07-23.4609 |          1
(2 rows)

In the above output, the second record is our newly inserted record, and it did get the correct customerid field.

Referential Integrity & Prevention of Deletions

Now our traditional referential integrity is set up. If we try to delete a customer that has related orders, we'll get the error that we expect and want:

database=# DELETE FROM Customerfolder;
ERROR: $1 referential integrity violation - key in customerfolder still referenced from orders

However, we can still have problems in Plone. Our current example has the child order objects nested inside of the parent customer objects, so it's not possible to delete a customer without deleting the orders because the customer itself is a folderish object, so the orders would be deleted automatically. However, this may not always be the setup. Sometimes, you won't be able to have a child object contained physically in the parent object, and you'll connect things using attributes yourself. For example, we might want to keep track of which staff member handles this customer. We could do this by nesting the CustomerFolder objects inside a Staff object, but this might, for different reasons, not be possible or preferable. Instead, we would create a staffuid attribute on the CustomerFolder type, and populate this with the UID of the staff member.

In cases like this, if you have the referential integrity in the database connected properly, you won't be able to delete the staff record if related customers exist, but you will be able to delete the staff object in the ZODB without problems--stranding the data in the relational database and ruining your database connections. This is because the current version of Archetypes doesn't deal properly with deletion exceptions. Archetypes issues an SQL delete on the staff record, but since there are related children, it fails. This raises an exception, but Zope only stops a deletion by raising and propagating a particular exception--others just get logged and ignored. Therefore, the database record can't be deleted (your database will refuse to do this, regardless of how Zope asks), but the pointer to it in the ZODB will be deleted. So the staff member won't be visible on the site, but the data will stay in the relational database.

To fix this, apply the patch FIXME included with this howto. This raises the proper exception (BeforeDeleteException) if the SQL deletion call fails, which causes the Plone object deletion to fail.
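The idea behind the patch is small: wherever SQLStorage issues its SQL DELETE, catch the database error and re-raise it as the exception Zope actually honors. Here is a rough, hypothetical sketch of that idea only -- the method and helper names are invented for illustration, and this is not the actual patch shipped with this HOWTO:

# Hypothetical sketch, not the real Archetypes source.
from OFS.ObjectManager import BeforeDeleteException   # Zope 2 location of the exception (assumed)

def unset(self, name, instance, **kwargs):
    """Called while the object is being deleted; issues the SQL DELETE."""
    try:
        # 'delete_record' stands in for whatever call runs the DELETE statement.
        self.delete_record(instance)
    except Exception, msg:
        # Re-raising as BeforeDeleteException makes Zope abort the ZODB
        # deletion too, instead of logging the failure and stranding the row.
        raise BeforeDeleteException, msg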
Unfortunately, you'll get an standard error message, rather than a polite, user-friendly explanation, but this is better than silently ignoring the database failure and moving on. 8 This patch was developed for the current version of Archetypes. This fix may be included by the time you read this HOWTO. If so, please let me know, and I'll update this section. Cascading PostgreSQL and most other databases that support referential integrity can handle deletion of parent records in other ways. The default is to block the deletion of parent with related children, but you can also opt to automatically delete the children when a related parent is deleted. This option is called "cascading" a deletion. To set this up, we'd create our child table differently: CREATE TABLE Child ( childid SERIAL NOT NULL PRIMARY KEY, parentid INT NOT NULL REFERENCES Parent ON DELETE CASCADE ... ); Now, when the parent is deleted in the database, it will delete the related child records rather than raising an exception. Of course, this won't automatically delete the Zope ZODB objects for the children, but the next section of this tutorial deals with the question of how to have operations in the database "notify" Zope of changes to make in the ZODB. Using techniques explained there, we'll be able to have the child ZODB deleted for us. Propagating Direct DB Changes Back to ZODB Sometimes in Zope projects, the changes all come from the Zope interface, and the relational DB storage is just to soothe ZODB-nervous customers, or to allow reporting from standard SQL tools. In this case, the setup we have would be acceptable. In cases where changes must propigate to Zope, here are some problems we need to solve: - Records that are inserted directly into the database are never visible to Zope, as ZODB objects aren't instantiated for these records. - Records that are deleted directly in the database are never deleted from Zope. Therefore, objects will remain in the ZODB that point to records that are no longer in the relational database. The current version of Archetypes raises an error if you try to view these objects or get the attributes that are stored in SQLStorage. - Records that are changed in the database are visible immediately to Zope, but any Catalog entries won't be updated, making Catalog queries incorrect. Forcing Catalog Reindexes on Update There's no way for our relational database to directly affect the ZODB. Instead, we'd have to either make a request that the ZServer hears and passes on to Plone, or we'd have to write a standalone Python program that connects to the ZODB to make these requests. 9 The latter can be very slow (starting connecting to the ZODB can take a while), and would only work on the machine that the ZODB is hosted on, whereas the first choice is ZEO-friendly, remote database machine friendly, and generally easier and faster. By creating a custom function in PostgreSQL, we can execute a web or XMLRPC request to reindex the catalog. We'll need a bit of Zope support: Zope will be given the UID for the record that has changed, and it needs to find the real Zope object, and call reindexObject() on it. We could do this by adding a method to ArchetypesTool.py 10, but, for simplicity's sake, we'll implement it as a PythonScript: # "reindex_by_uid" ## Parameters: uid o = context.archetype_tool.lookupObject(uid) o.reindexObject() return "ok" You can test calling this by giving it a UID of an existing object. 
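Since the script is reachable over HTTP, you can also exercise it from outside Zope -- for example over XML-RPC from a short Python session. The server address, credentials, and UID below are only examples; substitute your own:

# Quick test of the reindex_by_uid PythonScript from outside Zope.
# Host, port, portal id, login, and UID are examples -- adjust for your site.
import xmlrpclib

portal = xmlrpclib.ServerProxy("http://joel:foo@localhost:8080/plone")
print portal.reindex_by_uid("CustomerSQL.2003-07-23.4911")   # prints "ok"

This is the same call the PL/perlU function below ends up making, just via wget instead of xmlrpclib.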
This should be recataloged; you can see the changed catalog information by viewing the portal_catalog. Functions can be written in several procedural languages in PostgreSQL, including Python. However, making a web request is an "unsafe" act in PostgreSQL, so we need to use a language that supports making unsafe calls. PostgreSQL refers to these languages as "untrusted" languages, and traditionally names them with a trailing U. At this time, the Python language is implemented as a trusted language but not as an untrusted language. The built in, easy-to-use PL/PgSQL is also implemented as a trusted language only. This is changing, howver, in PostgreSQL 7.4. Due to RExec (restricted environment) module being dropped from Python, PL/Python (trusted) is no longer part of PostgreSQL, and PL/PythonU (a new, untrusted variant) is being added. It would be easy to write the functions below as PL/PythonU functions. 11 Our current options for untrusted languages, though, are PL/tclU (tcl untrusted), PL/perlU (perl untrusted), and C. We'll use Perl's untrusted language, plperlu. Make sure that Perl untrusted functions are enabled for your database: $ createlang plperlu your_db_name Then, in psql, we'll create a function that uses wget, a common, simple command line http request tool: CREATE OR REPLACE FUNCTION reindex_by_uid (text) RETURNS text as ' $uid = shift; # Auth required to run PythonScript w/right role # or, dont pass auth stuff and make PyScript proxied to role $AUTH = "--http-user=joel --http-passwd=foo"; # Set to your server and portal name $SERVER = ""; # wget options # -q is quiet (no status output) # -O - is to send results to standard output # -T 5 is to timeout after 5 seconds $WGET = "/usr/bin/wget -q -O - -T 5"; $cmd = "$WGET $AUTH $SERVER/reindex_by_uid?uid=$uid"; # output it to psql log so the user has some idea whats going on elog NOTICE, $cmd; return `$cmd`; ' LANGUAGE plperlu; As noted in the function comments, above, you can pass authorization headers via wget (using the $AUTH variable). If you don't want to have the username and password in a plaintext script on the PostgreSQL server, or don't want them to travel across the network, you could instead make the PythonScript usable by anonymous users, and have it proxied to a high-enough level role that it can reindex all needed content. The elog statement in the function outputs the $cmd variable through the PostgreSQL logging system, making it appear in psql as a notice. This is useful for debugging and providing feedback, but may confuse some database front-end systems that don't expect notices to be raised. In addition, it exposes your username and password to the log everytime a record is updated. Once you have the function working for your database, you should probably remove this line. Now, in PostgreSQL, if we update a record, we can force a reindex by calling this, as in: database =# SELECT reindex_by_uid('*uid*'); Using Triggers to Automate This Of course, we'll want to have this happen automatically when we update a record, rather than having to execute the select statement. To do this, we'll write a trigger in PostgreSQL that triggers whenever an update is made to our customer table. To do this, we need a trigger function that is called when our table is changed. In a perfect world, we could use our Perl function, above. However, at this time, Perl functions can't be used as trigger functions (though functions written in PL/Python, PL/PgSQL, C, PL/tcl, and other procedural languages can). 
For a simple wrapper function like this, PL/PgSQL would be the normal choice. Our trigger function is: CREATE OR REPLACE FUNCTION customer_upd() RETURNS TRIGGER as ' BEGIN PERFORM reindex_by_uid(OLD.uid); RETURN NEW; END; ' LANGUAGE plpgsql; Then the trigger itself: CREATE TRIGGER customer_upd AFTER UPDATE ON Customerfolder FOR EACH ROW EXECUTE PROCEDURE customer_upd(); Now, whenever we make an change to our table, our trigger calls the PL/PgSQL function customer_upd. This, in turn, calls our general reindexing function, which makes a simple web request that Zope hears and calls the reindexing support PythonScript. It seems like a lot of redirection, but works fine. Go test it out. Make a change to your object's body field directly via PostgreSQL, then check the catalog results and see that the appropriate field (in this case, SearchableText) has been updated. Syncronizing Deletes and Inserts (For advanced readers, since some of the detail is left to you to fill in). Deletes and inserts are a bit trickier than updates, but also more critical to get right. If you don't use the update/reindexing technique, above, everything works fine, except your ZCatalog calls will be out-of-date, and even those will be fixed for you the next time you edit the object in Plone or do a manual recataloging. If inserts and deletes aren't propagated from the relational database to ZODB, data will be missing and errors will be raised. For this reason, it may be reasonable to decide that record additions and deletions should happen only via the Plone interface. You can make quick data changes in the relational database (helpful for fixing typos across a series of records, or reformatting a field, etc.), but never inserts or updates. However, if you want or need to have insertions/deletions made to the relational database and propagated into the ZODB, the following sections explain how to do this. Inserts Inserts would be handled using the saem general concepts as the update/reindex fix: write a Zope PythonScript that creates the object when it is passed the different fields required for object creation. Then write a PL/perlU function that crafts a command line wget statement that calls our Zope PythonScript, all of this being set into motion by a trigger on our table. First, we'll want to create a PythonScript that will create our content for us. The trickiest part is coming up with a good, unique UID. If we knew that something in our table that was being inserted was unique, we could use that (prepended by the type name, to make sure it was unique across types so that Archetypes could use it). However, to look and feel consistent with UIDs created through the web, we'll copy in the same UID-generating code that Plone itself uses. 12 Our script will be called create_customerfolder, and will be: ## create_customerfolder ## Arguments: phone, body # this function ripped out of CMFPlone/FactoryTool.py def generateId(self, type): now = DateTime() name = type.replace(' ', '')+'.'+now.strftime('%Y-%m-%d')+'.'+now.strftime('%H%M%S') # Reduce chances of an id collision (there is a very small chance that somebody will # create another object during this loop) base_name = name objectIds = self.getParentNode().objectIds() i = 1 while name in objectIds: name = base_name + "-" + str(i) i = i + 1 return name context.invokeFactory( "CustomerFolder" , id=generateId(context, 'CustomerFolder') , phone=phone , body=body) return "ok" You can test this script by calling it through the web, or by using the Test tab on the PythonScript. 
Give it a body and a phone and it will create a new CustomerFolder object in the current context. Now, we'll write a plperlu function that will craft a proper wget web request to call this script: CREATE OR REPLACE FUNCTION customerfolder_add (text,text) RETURNS text as ' $body = shift; $phone = shift; $sec = "--http-user=joel --http-passwd=foo"; $portal = "/arch"; $server = "localhost:8080"; $wget = "/usr/bin/wget"; $cmd = "$wget $sec -q -O -".''///&''."phone=$phone"; return `$cmd`; ' LANGUAGE plperlu; FIXME -- CLEAN THIS UP LIKE PREVIOUS The difference is that we don't want to really do an insert in the database, though--when Zope does its object creation, it will create the database record itself in Archetypes. So we want our original direct-in-DB insert to be ignored. We could do this with a trigger, and have the trigger raise a failure so the insert didn't happen. This, though, would be confusing for the user, who would see an error message, and, if we were in the middle of transaction, would spoil that transaction, aborting it and preventing other actions from happening in the relational database. A better solution, then, would be to use a feature of PostgreSQL called rules, which we saw briefly earlier If You Need A Very Different Table Structure. Rules are rewritings of a query to do anything instead of the called-for-query. We'll "rewrite" our INSERT query to a SELECT query, which in this case will select the PL/perlU function that calls wget to notify the Zope PythonScript to create the object. Again, it seems like a lot of redirection, but works well. Rule creation is covered in the PostgreSQL documentation, in The Rule System 23. Our rule will be: CREATE RULE customer_ins AS ON INSERT TO Customerfolder WHERE NEW.uid = 'direct' DO INSTEAD SELECT customerfolder_add ( NEW.body, NEW.phone ); Now, when you want to insert a record directly, you can do so by: INSERT INTO customer_ins ( uid, body, phone ) VALUES ( 'direct', 'body goes here', '555 1212' ); The WHERE NEW.uid = 'direct' clause is required to prevent Zope's insertion from triggering our rule which would trigger Zope's insertion ... and so on into permanent recursion. Any attempt to insert a record with a UID not equal to "direct" will go directly into the database without triggering any action from Zope. Since Zope will be inserting a record with real UID, it will always therefore bypass our rule. Deletes Deletes would be handled like inserts, but our PythonScript would obviously do the deleting for us instead. Details here can be figured here by the reader, but you'll need a PythonScript to handle the deletion, a plperlu function to craft the proper wget command, and a trigger that handles ON DELETE. Since we can't stop recursion from happening with a DELETE the way we can with an INSERT, we should have our trigger call Zope not just as DO INSTEAD but DO, so the Zope deletion happens and the normal PostgreSQL deletion happens. When the Zope deletion tries ... XXX FIXME XXX Inserting a Child Record If we want to allow direct database insertion of the child Orders objects, we have to consider one additional wrinkle: the Orders objects are meant to be physically contained in their related parent Customer object. Therefore, our PythonScript that would add their child Orders record must make the context for the invokeFactory call be the context of the enclosing Customer object. 
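A hypothetical helper along those lines might look like the following -- the script name and its parameters are invented for this example; only the UID-lookup API is the one shown earlier:

## create_order
## Parameters: customer_uid, shipping_details, total_cost
from DateTime import DateTime

# Find the enclosing CustomerFolder by its Archetypes UID ...
customer = context.archetype_tool.lookupObject(customer_uid)
if customer is None:
    return "no such customer"

# ... and create the Orders object *inside* it, so Archetypes fills in the
# parentuid column for us. (A production script would reuse the generateId()
# trick above to avoid id collisions.)
new_id = 'Orders.' + DateTime().strftime('%Y-%m-%d.%H%M%S')
customer.invokeFactory("Orders",
                       id=new_id,
                       shipping_details=shipping_details,
                       total_cost=total_cost)
return "ok"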
As the sketch shows, we accomplish this by passing the child Orders add helper the UID of the Customer; it looks up the Customer object (using the API demonstrated earlier for looking up an object given its UID), and then uses that object as the context for our invokeFactory call.

Changing SQLStorage's Storage Methods: An Example With Lists

Creating the Types and Fixing the Mapping

If we add a list type to our customer object, we run into a snag with marshalling and unmarshalling. Let's add the object type, first as a standard Archetypes object stored completely in the ZODB:

# CustomerList.py

from Products.Archetypes.public import *
from Products.Archetypes.TemplateMixin import TemplateMixin

schema = BaseSchema + Schema((
    TextField('body',
              required=1,
              primary=1,
              searchable=1,
              default_output_type='text/html',
              allowable_content_types=('text/restructured',
                                       'text/plain',
                                       'text/html',
                                       'application/msword'),
              widget=RichWidget,
              ),
    StringField("phone",
                index="FieldIndex",
                ),
    LinesField("clients"),
    )) + TemplateMixin.schema

class CustomerList(TemplateMixin, BaseContent):
    """Our example object"""
    schema = schema
    archetype_name = "Customer List"
    actions = TemplateMixin.actions

registerType(CustomerList)

Put this in the schema and restart Archetypes. As we're storing this in the ZODB (not in the relational database), everything works fine. The form widget for the clients field is a textarea in which the user enters newline-separated entries. These are converted by Zope to a Python list and stored as an attribute of the object.

If we create a new Archetypes type that contains this same lines field, but tries to store it in the relational database, we run into problems with Archetypes's default behaviors. First, the object type:

# CustomerListSQL.py

from Products.Archetypes.public import *
from Products.Archetypes.TemplateMixin import TemplateMixin
from Products.Archetypes.SQLStorage import PostgreSQLStorage

schema = BaseSchema + Schema((
    TextField('body',
              required=1,
              storage=PostgreSQLStorage(),
              primary=1,
              searchable=1,
              default_output_type='text/html',
              allowable_content_types=('text/restructured',
                                       'text/plain',
                                       'text/html',
                                       'application/msword'),
              widget=RichWidget,
              ),
    StringField("phone",
                index="FieldIndex",
                storage=PostgreSQLStorage(),
                ),
    LinesField("clients", storage=PostgreSQLStorage()),
    )) + TemplateMixin.schema

class CustomerListSQL(TemplateMixin, BaseContent):
    """Our example object"""
    schema = schema
    archetype_name = "Customer List SQL"
    actions = TemplateMixin.actions

registerType(CustomerListSQL)

Restart Archetypes, and don't forget to add the new type to portal_types. At the time of this writing, Archetypes tries to create the new table with the field type lines for the clients field. This is not a valid field type for PostgreSQL (or any other database I know of), and therefore, the addition of the table fails, and any attempt to add an object of this type fails since there is no table to store them in. There are several different ways we could fix this problem.

Create the table before Archetypes does. If the table already exists, Archetypes won't create it. We can easily create the table, and give it a text type for the clients field. The table structure would be:

CREATE TABLE Customerlistsql (
    uid text NOT NULL PRIMARY KEY,
    parentuid text,
    body text,
    phone text,
    clients text
);

Change the mapping performed by Archetypes. We can fix this problem by patching SQLStorage.py to do the right thing and create a text field by changing the type mapping that Archetypes does. You can do this either by editing SQLStorage.py and making changes for your database type, or, if you'd rather not modify the Archetypes source code, you can subclass your storage type, make the changes there, and use this new subclassed storage type. We'll look explicitly at the subclassing strategy later in this document; for now, we'll make changes directly to SQLStorage.py.
The change we want is in the dictionary db_type_map, which translates an Archetypes field type into the relational database field type. As of this writing, there is no translation for lines, so Archetypes uses lines as the relational database field type. We'll add a translation for lines to become text: db_type_map = { 'object': 'bytea', 'file': 'bytea', 'fixedpoint': 'integer', 'reference': 'text', 'datetime': 'timestamp', 'string': 'text', 'metadata': 'text', 'lines':'text', # THIS IS THE CHANGE } If you restart Archetypes and try to add your object now, it will create the table and let you create objects. Create a suitable domain in PostgreSQL. PostgreSQL, like many advanced SQL databases, supports the notion of domains. A domain is a custom-defined explanation of a database type, which can be referenced as if it were a real type. For example, if you commonly want to use a varchar(20) not null for telephones in a database, you could create a domain called telephone that is defined as varchar(20) not null, and then you can simply create your tables with the field type telephone to get the right definition and behavior. We'll create a domain called lines: CREATE DOMAIN lines AS text; Domains can contain restrictions (such as CHECK constraints and NOT NULL requirements), but in this case, we don't want or need any of these. This simple definition will be enough. Now, when Archetypes tries to create a field with the type lines, it will succeed. In some ways, this is the best strategy of our three, as it lets other applications and users understand that this is a lines field. It's still stored as text, and behaves as such, but if you look at the table structure, you'll see lines, which can remind you of its newline-separated, list-oriented use. Fixing the Translations A serious problem still persists, though. The newline-separated entries from the form (the "lines") are turned into a Python list by Archetypes, such as: [ 'cat', 'dog', 'bird' ] but SQLStorage attempts to store this list directly in the database. This ends up as the literal string value "['cat,'dog','bird']" which is Archetypes stores in the database: database=# SELECT uid, clients FROM Customerlistsql ; uid | clients ---------------------------------+------------------------ CustomerListSQL.2003-07-23.1619 | ['cat', 'dog', 'bird'] (1 row) Unfortunately, this string representation of a Python list is a difficult format to work with in the database, and not handled correctly coming out by Archetypes. When Archetypes gets the data back from the relational database, it sees it as a single string. It tries to turn this isnto a list, with the following results: [ ' c a t ' , ' d o g ' , ' b i r d ' ] As this is the way Python handles being given a string and being told to treat it like a list. The solution is that we want to write a custom marshaller and unmarshaller. These are the routines that Archetypes will run on a value before it tries to write them to the database, and after it retrieves the value from the database. There are hooks in Archetypes for this: any function called map_XXX is called when storing field type XXX and a method called unmap_XXX is called when retrieving field type XXX. 
Our mapping will convert this list back to a newline-separated string, and this is the format it will be given to our relational database as:

def map_lines(self, field, value):
    return '\n'.join(value)

Our unmapping method will convert the newline-separated string back to a Python list:

def unmap_lines(self, field, value):
    return value.split('\n')

Both of these should go into SQLStorage.py, as methods of the class SQLStorage or as methods of the class for your particular relational database. If you don't want to (or can't) modify the source to Archetypes, you could subclass your storage class, add the methods to the subclass, and have your object schema fields use your new, subclassed storage type. We'll cover this concept of subclassing a storage class extensively later, when we subclass an improved version of the PostgreSQL storage class.

Now we can transparently work with our lists: they appear and are edited on the form as a newline-separated string (so we can easily edit them in a textarea), they're handled in Zope as a Python list object (so we can work naturally with them and don't have to be concerned with how they're stored), and they're stored in the relational database as a simple newline-separated list so we can access them simply. 13

Even Better: Turning Into Arrays

While our solution above lets Archetypes store the data and get it back in one piece, it isn't very suitable in the relational database: most relational database querying programs and reporting programs are ill-equipped to deal with searching for individual values that are stuffed into text fields. To find all customers that have two values, "fish" and "cat", in clients, you could write queries like:

SELECT * FROM Customerlistsql
    WHERE clients LIKE 'cat\n%fish'
       OR clients LIKE 'cat\n%fish\n%'
       OR clients LIKE '%\ncat\n%fish'
       OR clients LIKE '%\ncat\n%fish\n%'
       OR clients LIKE 'fish\n%cat'
       OR clients LIKE 'fish\n%cat\n%'
       OR clients LIKE '%\nfish\n%cat'
       OR clients LIKE '%\nfish\n%cat\n%'

(and this is still an incomplete example for this!) However, this is ugly, slow, unindexable 14, and error-prone, especially as you add more predicates to the logic.

We'll exploit a feature of PostgreSQL that allows us to store arrays in a field, so that one field holds an array of values. While this is similar to storing as newline-separated text, there are many functions in PostgreSQL that can quickly find records having a value in an array, or count the number of values in an array, and so on--all the things that would be slow and unwieldy using text. First, let's change our table structure to use arrays:

database=# ALTER TABLE Customerlistsql DROP CLIENTS;
ALTER TABLE
database=# ALTER TABLE Customerlistsql ADD CLIENTS text[];
ALTER TABLE

The type text[] is a PostgreSQL type for storing an array of text values.
We can test out the array storage works directly in PostgreSQL by updating an existing record and examining it: database=# UPDATE Customerlistsql SET clients='{cat,dog,bird}'; UPDATE 1 database=# SELECT uid, clients FROM Customerlistsql; uid | clients ---------------------------------+---------------- CustomerListSQL.2003-07-23.1619 | {cat,dog,bird} (1 row) database=# SELECT uid, clients[1] FROM customerlistsql; uid | clients ---------------------------------+--------- CustomerListSQL.2003-07-23.1619 | cat Now we can change our map_lines and unmap_lines methods from above, to write out and retrieve values written in this format: def map_lines(self, field, value): return "{%s}" % ','.join(value) def unmap_lines(self, field, value): return value.strip("{}").split(',') NoteThese are very naive implementations, as they do not deal with the possibility that a list item might have a comma in it. It would be quite easy, though, to write versions that quoted the comma and unquoted it for unmapping. Restart Archetypes to pick up the changes to the storage type, then edit an existing or add a new object. Notice how the values you put into the clients field end up as array in PostgreSQL, and are read correctly. Creating a new storage class FIXME Fixing image storage FIXME Dynamically changing storage types FIXME, but for now: | You add "storage=PostgreSQLStorage()" on fields that you want | in the PostgreSQL database. This makes it easy to have some | fields stored in a PostgreSQL database and others in a Sybase | database. I realise this. | | Let's say that I store everything in PostgreSQL and then one day | I want to switch to Sybase. Then I'd have to change every | "storage=PostgreSQLStorage" to "storage=SybaseStorage". | What if the class had a 'storage' attribute that you could set | instead? Then you could only specify "storage=SQLStorage". | | I think most people only use one RDBMS? So then this would | be the simplest thing. And also if you want your new type public, | everyone won't probably use the same database. Then you'd | like to change a field in the ZMI to say that all relational database | fields should be stored with Sybase. Anyone agree? | | But I may be ranting about something that already exists? In fact there is an (undocumented) method to change storages. It is called (guess what?) setStorage, and it's a method of the Field class. So, if you have an existing object, and you want to migrate from one storage to another, you need to do something like this (in an External Method, of course): ---------------------------------------------------------------------- from Products.Archetypes.Storage import SybaseStorage # Does not exist currently results = root.portal_catalog(portal_type='MyObjectType') for r in results: r.getObject().Schema()['fieldname'].setStorage(SybaseStorage()) --------------------------------------------------------------------- Then you would shut down your zope instance and adjust the schema definitions accordingly. NOTE: After migrating your objects. You also need to make sure the column already exists in your databases. from mail on the Archetypes-devel mailing ilst by Sidnei da Silva. Integrating relational database schemas into Archetypes One interesting trick that could benefit us is using schemas with Archetypes. While many documents refer to a database schema as the definition of its tables, views, etc., I'm referring here to the feature of PostgreSQL that give a database multiple namespaces. 
For example, in the database mydb, we could have only one table called ideas. But if we create a new namespace (schema), we can put this a new ideas table in the new namespace. For example: database=# CREATE TABLE Ideas ( idea text ); CREATE TABLE database=# CREATE TABLE Ideas ( idea text ); ERROR: Relation 'ideas' already exists database=# CREATE SCHEMA More; CREATE SCHEMA database=# CREATE TABLE More.Ideas ( idea text ); CREATE TABLE Now we have two independent tables called ideas, and can choose which one to use by referring to it by namespace: database=# INSERT INTO Ideas VALUES ( 'commute to work.' ); INSERT 116491 1 database=# INSERT INTO More.Ideas values ( 'go out with friends.' ); INSERT 116492 1 The tables are completely separate: database=# SELECT * FROM Ideas; idea ------------------ commute to work. (1 row) database=# SELECT * FROM More.Ideas; idea ---------------------- go out with friends. (1 row) If you don't name a schema explicitly, PostgreSQL uses your schema path to determine your schema. So, even before you knew you were using schemas, you were! The default schema in most PostgreSQL setups is the same name as the user, so I could see the "unnamed" ideas table above as joel.ideas. The interesting idea here is that we could have different database adapters that have different connecting PostgreSQL users, each having a different schema path. Then, depending on which database adapter I used, I could see different results. Or, even more interesting, we can ask PostgreSQL to change our schema path for us by executing the SQL command SET search_path=XXX, where XXX is our new, comma-separated search path. If we set our search path differently, we could get different results. Useful ideas would be to have the search path set differently based on who the logged in user was, or what her or his roles were, etc. Using this ideas we could implemented Archetypes-stored objects that provided versioning or security capabilities that were dependant on users or roles: the user june would see results from one table, where rebecca would see results from another. All this would happen transparently and quickly at the PostgreSQL level, without having to put code for this in all your accessors in Archetypes. Of course, we'd have to make sure inserts and deletes kept the tables in sync, otherwise june would get an error trying to look at a ZODB object for which there is no corresponding record in her schema's table. But this syncronization would be fairly straightforward to create using triggers on the tables. There are no hooks currently in Archetypes SQLStorage to execute the required SET command before the various SQL calls, but it would be easy to implement this. And, of course, this trick isn't specific to Archetypes--it's also useful when working with relational databases in Zope the traditional way with SQL Methods. It's particularly cool as an idea with APE--you could make the ZODB reconfigure itself depending who the logged in user is! 17 The status of SQLStorage SQLStorage is newer than Archetypes itself, and does not appear to be as soundly developed and tested. In email to me, Alan Runyan said that: NOTE: SQLStorage is incredibly inefficient. It works quite well and we have done a project with it (that why it exists). But really it should be rewritten if you are going to use it in a very large scale production environment. I would consider the implementation 'alpha' but stable. 
I have not had a chance to audit the code to see what the inefficiencies are that he is referring to; however, as seen here, there are several buglets that prevent SQLStorage from working correctly (failing to catch deletion errors, failing to map lists correctly, etc.) By the time you read this, these errors may be corrected and SQLStorage may be better-tested and more efficiently implemented. Stay tuned! About this Document This document was written by Joel Burton 25. It is covered under GNU Free Documentation License, with one invariant section: this section, About this document, must remain unchanged. Otherwise, you can distribute it, make changes to it, etc. If you have any comments, corrections, or changes, please let me know. Thanks!
https://blog.csdn.net/zhang_yu_cvicse/article/details/2346741
CC-MAIN-2018-26
en
refinedweb
What is Android Things?

Android Things makes developing connected embedded devices easy by providing the same Android development tools, best-in-class Android framework, and Google APIs that make developers successful on mobile. It is aimed at low-power and memory-constrained Internet of Things (IoT) devices, which are usually built from different MCU platforms. As an IoT OS it is designed to work with as little as 32–64 MB of RAM. It will support Bluetooth Low Energy and Wi-Fi.

Let's get started with Hello World! (a.k.a. blink)

1. Set up the Raspberry Pi with Android Things

Go to the Google Android Things website by clicking here. Select the CONSOLE section. Select CREATE A PRODUCT. Type your Product Name. Select the SOM as Raspberry Pi 3 (SOM = System On Module). Give a Product Description. Hit CREATE. On the following page, select FACTORY IMAGES. Click CREATE BUILD CONFIGURATION. It creates a new entry in the Build configuration list. Click Download build and wait for the download. You will get a .zip file like this:

Unzip/extract the zip using 7zip. It will take about a minute to extract. After the extraction, you will get a .img file (a.k.a. the OS image file). Next, burn the image to the SD card.

Note: We are not using a display/HDMI monitor because the latest Android Things preview does not support most display units; also, we are not going to use one in this project.

Open your router panel or use a mobile app to obtain the IP address of the Raspberry Pi. Here our local IP is 192.168.0.22.

2. Set up Android Studio

First, download Android Studio (stable) or use the preview version.

Note: The stable version can also be used for Android Things development, but the preview version comes with an inbuilt Android Things development option.

After downloading, install and open Android Studio. Start a new project by clicking Start a new Android Studio Project. And click Finish to build. Wait for the build.

3. Let's Code Android (Java)

After the build, open the Gradle build file and add the development preview dependency by adding this line:

provided 'com.google.android.things:androidthings:<version>'  // substitute the Android Things preview version you installed

After that, open the Main Activity source file (Java). In the main class, add the LED and delay variables by adding this code; we are using the BCM13 pin to connect our LED.

private static final String TAG = MainActivity.class.getSimpleName();
private static final int INTERVAL_BETWEEN_BLINKS_MS = 500;
private static final String GPIO_PIN_NAME = "BCM13"; // Physical Pin #33 on Raspberry Pi3

private Handler mHandler = new Handler();
private Gpio mLedGpio;

Next, add a GPIO connection to the board. In the onCreate() method, we initialize the variables we need:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // Step 1. Create GPIO connection.
    PeripheralManagerService service = new PeripheralManagerService();
    try {
        mLedGpio = service.openGpio(GPIO_PIN_NAME);
        // Step 2. Configure as an output.
        mLedGpio.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
        // Step 4. Repeat using a handler.
        mHandler.post(mBlinkRunnable);
    } catch (IOException e) {
        Log.e(TAG, "Error on PeripheralIO API", e);
    }
}

Add an onDestroy() method for our application. We need to close the GPIO connection that we opened in onCreate():

@Override
protected void onDestroy() {
    super.onDestroy();

    // Remove pending blink events from the handler.
    mHandler.removeCallbacks(mBlinkRunnable);

    // Close the GPIO resource opened in onCreate().
    if (mLedGpio != null) {
        try {
            mLedGpio.close();
        } catch (IOException e) {
            Log.e(TAG, "Error on PeripheralIO API", e);
        }
    }
}

After that, make our program blink by adding the Runnable. In the runnable code we toggle the LED and schedule the next run:

private Runnable mBlinkRunnable = new Runnable() {
    @Override
    public void run() {
        // Exit if the GPIO connection is already closed.
        if (mLedGpio == null) {
            return;
        }
        try {
            // Step 3. Toggle the LED state.
            mLedGpio.setValue(!mLedGpio.getValue());
            // Step 4. Schedule another event after the delay.
            mHandler.postDelayed(mBlinkRunnable, INTERVAL_BETWEEN_BLINKS_MS);
        } catch (IOException e) {
            Log.e(TAG, "Error on PeripheralIO API", e);
        }
    }
};

After inserting all the code, you may also get some errors. That is because you didn't add the packages that we used in our code. Let's add them by clicking Alt + Enter, or copy-paste the packages.
import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.util.Log;
import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.PeripheralManagerService;
import java.io.IOException;

All set. If you want to skip all the coding, just clone my GitHub Repo: Or click here to download the code.

4. Set up the Raspberry Pi with the LED

In Android Things the pinout is different from Raspbian. After the wiring, let's run our program.

5. Run the Program

We need a connection between the Android Things device (Raspberry Pi) and Android Studio to upload and debug our programs; connect over adb to the IP we noted earlier (for example, adb connect 192.168.0.22), then run the app.

Yeah, we did it! You just completed the Android Things Hello World program. If anything goes wrong, please let me know in the comment section. Thank you.
https://www.hackster.io/Salmanfarisvp/getting-started-in-android-things-with-raspberry-pi-6a980e
CC-MAIN-2018-26
en
refinedweb
genders_handle_destroy - destroys a genders handle

#include <genders.h>

int genders_handle_destroy(genders_t handle);

genders_handle_destroy() destroys the genders handle pointed to by handle and frees all allocated memory associated with it.

GENDERS_ERR_MAGIC
    handle has an incorrect magic number. handle does not point to a genders handle or handle has already been destroyed.

/usr/include/genders.h

libgenders(3), genders_handle_create(3), genders_load_data(3), genders_errnum(3), genders_strerror(3)
http://huge-man-linux.net/man3/genders_handle_destroy.html
CC-MAIN-2018-26
en
refinedweb
All of you must have heard the name of the planet Krypton. If you can't remember the planet, don't worry. Planet Krypton is the origin of Superman, that means the mother planet where Superman was born. Superman was sent to Earth by his parents when the planet was about to explode. Legends say that only a few people of the planet survived that explosion. The inhabitants of the planet are called 'Kryptonians'. Kryptonians, though otherwise completely human, were superior both intellectually and physically to natives of Earth. One of the most common differences is their number system. The number system is described below:

1) The base of the number system is unknown, but legends say that the base lies between 2 and 6.
2) Kryptonians don't use a number where two adjacent digits are the same. They simply ignore these numbers. So, 112 is not a valid number in Krypton.
3) Numbers should not contain leading zeroes. So, 012 is not a valid number.
4) For each number, there is a score. The score can be found by summing up the squares of differences of adjacent digits. For example 1241 has the score of (1 − 2)^2 + (2 − 4)^2 + (4 − 1)^2 = 1 + 4 + 9 = 14.
5) All the numbers they use are integers.

Now you are planning to research their number system. So, you assume a base and a score. You have to find how many numbers can make that score in that base.

Input

The first line of the input will contain an integer T (≤ 200), denoting the number of cases. Then T cases will follow. Each case contains two integers denoting base (2 ≤ base ≤ 6) and score (1 ≤ score ≤ 10^9). Both the integers will be given in decimal base.

Output

For each case print the case number and the result modulo 2^32. Check the samples for details. Both the case number and result should be reported in decimal base.

Sample Input
2
6 1
5 5

Sample Output
Case 1: 9
Case 2: 80

Problem link: UVA11651 Krypton Number System
Problem summary: (omitted)
Problem analysis: just a placeholder, no explanation.
Program notes: (omitted)
Remarks: (omitted)
Reference links: (omitted)

The accepted C++ program is as follows:

/* UVA11651 Krypton Number System */

#include <bits/stdc++.h>

using namespace std;

const int BASE = 6;
const int N = BASE * (BASE - 1) * (BASE - 1);

struct Matrix {
    unsigned int n, m, g[N][N];
    Matrix(int _n, int _m) {
        n = _n;
        m = _m;
        memset(g, 0, sizeof(g));
    }
    // matrix multiplication
    Matrix operator * (const Matrix& y) {
        Matrix z(n, y.m);
        for(unsigned int i=0; i<n; i++)
            for(unsigned int j=0; j<y.m; j++)
                for(unsigned int k=0; k<m; k++)
                    z.g[i][j] += g[i][k] * y.g[k][j];
        return z;
    }
};

// matrix exponentiation (values wrap modulo 2^32 via unsigned int)
Matrix Matrix_Powmul(Matrix x, int m) {
    Matrix z(x.n, x.n);
    for(unsigned int i=0; i<x.n; i++)
        z.g[i][i] = 1;
    while(m) {
        if(m & 1)
            z = z * x;
        x = x * x;
        m >>= 1;
    }
    return z;
}

int main()
{
    int t, base, score;
    scanf("%d", &t);
    for(int caseno=1; caseno<=t; caseno++) {
        scanf("%d%d", &base, &score);

        int n = base * (base - 1) * (base - 1);
        Matrix a(1, n);
        for(int i=1; i<base; i++)
            a.g[0][n - i] = 1;

        Matrix f(n, n);
        for(int i=base; i<n; i++)
            f.g[i][i - base] = 1;
        for(int i=0; i<base; i++)
            for(int j=0; j<base; j++)
                if(i != j)
                    f.g[n - (i - j) * (i - j) * base + j][n - base + i] = 1;

        a = a * Matrix_Powmul(f, score);

        unsigned int ans = 0;
        for(int i=1; i<=base; i++)
            ans += a.g[0][n - i];

        printf("Case %d: %u\n", caseno, ans);
    }

    return 0;
}
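A brief sketch of the recurrence behind the matrix construction: let $f(s,d)$ be the number of valid base-$B$ numbers with score $s$ whose last digit is $d$. Then

$$
f(s,d)=\sum_{\substack{d'=0\\ d'\neq d}}^{B-1} f\!\left(s-(d-d')^{2},\,d'\right),
\qquad
f(0,d)=\begin{cases}1 & 1\le d\le B-1\\[2pt] 0 & d=0\end{cases}
$$

and the answer is $\sum_{d=0}^{B-1} f(\mathrm{score},\,d) \bmod 2^{32}$. Each appended digit changes the score by at most $(B-1)^{2}$, so the recurrence is linear over a state vector of size $B(B-1)^{2}$ (one slot per digit within a sliding window of score values) -- exactly the n = base * (base - 1) * (base - 1) used in the code -- and Matrix_Powmul advances it score steps at once; the modulus $2^{32}$ falls out of unsigned int overflow.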
https://blog.csdn.net/tigerisland45/article/details/79427364
CC-MAIN-2018-26
en
refinedweb
thegrdream
Posted April 11, 2014

Hello, I need your help. I want to do the following: I want to open a file from a CD, and this should work on different computers, so I want to find the CD drive and then its drive letter. Then I want to send the drive letter as text. I tried the following, but it sends nothing.

$CD = DriveGetDrive("CDROM")
If @error Then
    MsgBox(4096, "DriveGetDrive", "NO CD!")
Else
    if ubound($CD) = 2 then
        ControlSend("Offnen", "", "[CLASS:ToolbarWindow32; INSTANCE:2]", $CD[1])
    endif
EndIf
https://www.autoitscript.com/forum/topic/160593-search-cd-drive-letter-and-send-as-text/
CC-MAIN-2018-26
en
refinedweb
64 pull requests were merged this week.

Breaking Changes

- BenchHarness has been renamed to Bencher
- The push_str and push_char methods on ~str have been removed, and a StrBuf type added, as an analog to Vec.
- Duplicate moves from the variables a proc captures are no longer allowed.
- std::libc has been extracted into its own crate.
- Various bugs in resolve have been fixed. The fixes seem relatively obscure, but they're well documented if your code breaks.
- The functions in flate now return Option instead of failing.

Other Changes

- TotalEq and TotalOrd now document exactly what the types implementing them must guarantee.
- Some bugs with debuginfo have been fixed. In particular, the annoying link failure with debuginfo has been fixed.
- Relocation model is now configurable with -C relocation-model.

Additionally, a lot of cleanup happened. Not much of it sticks out particularly.

New Contributors

- Boris Egorov
- Jim Radford
- Joseph Crail
- JustAPerson
- Kasey Carrothers
- Kevin Butler
- Manish Goregaokar
- Tobba
- free-Runner

Weekly Meeting

The weekly meeting was cancelled due to the videoconference system being down for mitigating the Heartbleed vulnerability, as well as some team members travelling or otherwise unavailable.

RFCs

Some new RFCs:

- Extend nullable pointer optimization to library types
- Extended method lookup
- Inherit use
- Allocator trait
- Make libstd a facade
- Invalid trait for space optimization of enums
- Add a regexp crate to the Rust distribution

Project Updates

- Acronymy has been released. This is a web application (in Rust!) for defining words as acronyms. It's pretty fun.
- bitmap has been released
- regexp is a pure-Rust implementation of RE2, with wonderful docs and support for statically compiling regular expressions.
- rust-empty has been updated to 0.2.
- inotify-rs has been released, bindings to inotify.
- An unlambda interpreter
- RusticMineSweeper, a minesweeper clone.
- rust-mustache has been updated to 0.3.0.
- sodiumoxide, the libsodium bindings, have been updated for 0.10.

Community

- On April 17, there will be an Introduction to Rust by Clark Gaebel in New York City, during a C++ meetup.
- Bay Area Rust's plans for May have been announced.

This Week in Servo

Servo is a web browser engine written in Rust and is one of the primary test cases for the Rust language. In the last week, we landed 30 PRs.

Notable additions

- ms2ger cleaned up all of the trailing whitespace that had been nagging down our Critic reviews in #2055
- Jacob Parker added a reftest for setAttribute-based restyling in #2062
- Sankha Narayan Guria removed XRay from the script codegen in #2050
- Peiyong Lin moved namespaceURI to the Element type in #2063 and removed all remaining @boxes in #2085
- Matt Brubeck fixed bugs related to clicking on links in #2068 and #2084 and #2080
- Hyun June Kim added support for pseudo-elements attached to inline elements in #2071
- Manish Goregaokar cleaned up a whole bunch of warnings left after our last Rust update in #2045
- Lars Bergstrom got Android support working in Servo master in #2070

New contributors

- Jacob Parker (j3parker)

Meetings and Notes

In this week's meeting, we went over our Q2 roadmap, status of an Android buildbot, testing, and the ever-present issue of improving our build system.
http://cmr.github.io/blog/2014/04/13/this-week-in-rust/
CC-MAIN-2018-26
en
refinedweb
The central concept in Python programming is that of a namespace. Each context (i.e., scope) in a Python program has available to it a hierarchically organized collection of namespaces; each namespace contains a set of names, and each name is bound to an object. In older versions of Python, namespaces were arranged according to the "three-scope rule" (builtin/global/local), but Python version 2.1 and later add lexically nested scoping. In most cases you do not need to worry about this subtlety, and scoping works the way you would expect (the special cases that prompted the addition of lexical scoping are mostly ones with nested functions and/or classes).

There are quite a few ways of binding a name to an object within the current namespace/scope and/or within some other scope. These various ways are listed below.

A Python statement like x=37 or y="foo" does a few things. If an object (e.g., 37 or "foo") does not exist, Python creates one. If such an object does exist, Python locates it. Next, the name x or y is added to the current namespace, if it does not exist already, and that name is bound to the corresponding object. If a name already exists in the current namespace, it is re-bound. Multiple names, perhaps in multiple scopes/namespaces, can be bound to the same object.

A simple assignment statement binds a name into the current namespace, unless that name has been declared as global. A name declared as global is bound to the global (module-level) namespace instead. A qualified name used on the left of an assignment statement binds a name into a specified namespace (either to the attributes of an object, or to the namespace of a module/package); for example:

>>> x = "foo"              # bind 'x' in global namespace
>>> def myfunc():          # bind 'myfunc' in global namespace
...     global x, y        # specify namespace for 'x', 'y'
...     x = 1              # rebind global 'x' to 1 object
...     y = 2              # create global name 'y' and 2 object
...     z = 3              # create local name 'z' and 3 object
...
>>> import package.module  # bind name 'package.module'
>>> package.module.w = 4   # bind 'w' in namespace package.module
>>> from mymod import obj  # bind object 'obj' to global namespace
>>> obj.attr = 5           # bind name 'attr' to object 'obj'

Whenever a (possibly qualified) name occurs on the right side of an assignment, or on a line by itself, the name is dereferenced to the object itself. If a name has not been bound inside some accessible scope, it cannot be dereferenced; attempting to do so raises a NameError exception. If the name is followed by left and right parentheses (possibly with comma-separated expressions between them), the object is invoked/called after it is dereferenced. Exactly what happens upon invocation can be controlled and overridden for Python objects; but in general, invoking a function or method runs some code, and invoking a class creates an instance. For example:

>>> pkg.subpkg.func()      # invoke a function from a namespace
>>> x = y                  # deref 'y' and bind same object to 'x'

Declaring a function or a class is simply the preferred way of describing an object and binding it to a name. But the def and class declarations are "deep down" just types of assignments. In the case of functions, the lambda operator can also be used on the right of an assignment to bind an "anonymous" function to a name. There is no equally direct technique for classes, but their declaration is still similar in effect:

>>> add1 = lambda x,y: x+y # bind 'add1' to function in global ns
>>> def add2(x, y):        # bind 'add2' to function in global ns
...     return x+y
...
>>> class Klass:           # bind 'Klass' to class object
...     def meth1(self):   # bind 'meth1' to method in 'Klass' ns
...         return 'Myself'

Importing, or importing from, a module or a package adds or modifies bindings in the current namespace. The import statement has two forms, each with a bit different effect. Statements of the forms

>>> import modname
>>> import pkg.subpkg.modname
>>> import pkg.modname as othername

add a new module object to the current namespace. These module objects themselves define namespaces that you can bind values in or utilize objects within. Statements of the forms

>>> from modname import foo
>>> from pkg.subpkg.modname import foo as bar

instead add the names foo or bar to the current namespace. In any of these forms of import, any statements in the imported module are executed; the difference between the forms is simply the effect upon namespaces. There is one more special form of the import statement; for example:

>>> from modname import *

The asterisk in this form is not a generalized glob or regular expression pattern, it is a special syntactic form. "Import star" imports every name in a module namespace into the current namespace (except those named with a leading underscore, which can still be explicitly imported if needed). Use of this form is somewhat discouraged because it risks adding names to the current namespace that you do not explicitly request and that may rebind existing names.

Although for is a looping construct, the way it works is by binding successive elements of an iterable object to a name (in the current namespace). The following constructs are (almost) equivalent:

>>> for x in somelist:     # repeated binding with 'for'
...     print x
...
>>> ndx = 0                # rebinds 'ndx' if it was defined
>>> while 1:               # repeated binding in 'while'
...     x = somelist[ndx]
...     print x
...     ndx = ndx+1
...     if ndx >= len(somelist):
...         del ndx
...         break

The except statement can optionally bind a name to an exception argument:

>>> try:
...     raise "ThisError", "some message"
... except "ThisError", x: # Bind 'x' to exception argument
...     print x
...
some message
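The examples above use Python 2 syntax (the print statement, string exceptions). For comparison, a rough sketch of the same binding behaviour in Python 3 syntax (the list contents here are made up) looks like this:

# Python 3 version of the binding behaviour described above.
somelist = ["a", "b", "c"]          # made-up data for the example

for x in somelist:                  # 'x' is (re)bound in the current namespace on each pass
    print(x)

print('x' in globals())             # True: the loop variable survives after the loop

try:
    raise ValueError("some message")
except ValueError as err:           # 'err' is bound to the exception object here...
    print(err)

print('err' in globals())           # False: Python 3 unbinds the except target afterwards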
http://etutorials.org/Programming/Python.+Text+processing/Appendix+A.+A+Selective+and+Impressionistic+Short+Review+of+Python/A.2+Namespaces+and+Bindings/
CC-MAIN-2018-26
en
refinedweb
- milestone: --> Trunk
- assigned_to: nobody --> kimmov

WinMerge returns "1" (instead of "0" = "OK") upon closing, if the files compared were read-only. (The files were not modified - just compared.) The issue does not occur if the cursor was not placed in the file text body (either manually or by going through differences). I do not see any reason why WinMerge should return anything but "successful" on file comparison. The exception should only be technical problems, but none of the user actions that the user is aware of (e.g. purposely not saving changes, etc.).

Logged In: YES user_id=631874 Originator: NO
Thanks for reporting this. It is not actually about files being read-only. WinMerge simply tries to return the compare status. This "feature" was added back for self-tests or some such, and unfortunately nobody really thought about it. I'll fix this for the next stable release to return 0 unless there is an error.

Logged In: YES user_id=631874 Originator: NO
I'll fix this bug with this simple patch, which makes WinMerge always return 0. So the approach I'll take is to start with always returning zero, and add possible error returns when we really need those. At the moment I don't know of a situation where we must return something other than 0. Some merge scripts etc. would probably want a return value telling whether the merge succeeded, but that is not easy, as the user can open other files or folders and close the original files, so we don't even know which file's return value we should return (the ones given from the command line? the last opened?...)

--- Merge.cpp (revision 4956)
+++ Merge.cpp (working copy)
@@ -447,7 +447,7 @@
     charsets_cleanup();
     delete m_mainThreadScripts;
     CWinApp::ExitInstance();
-    return m_nLastCompareResult;
+    return 0;
 }

 static void AddEnglishResourceHook()

Logged In: YES user_id=631874 Originator: NO
Committed to SVN trunk: Completed: At revision: 4970
Closing as fixed.
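For context on why the exit code matters: anything that launches WinMerge from a script will typically treat a nonzero exit status as failure. A hypothetical wrapper (the executable path and file names here are invented for the example) might look like:

# Hypothetical caller that interprets WinMerge's process exit status.
# Before the fix, a plain compare of read-only files could exit with 1,
# which a script like this would misreport as an error.
import subprocess

result = subprocess.run(
    [r"C:\Program Files\WinMerge\WinMergeU.exe", "left.txt", "right.txt"]
)
if result.returncode != 0:
    print("compare tool reported an error:", result.returncode)
else:
    print("compare finished normally")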
https://sourceforge.net/p/winmerge/bugs/1626/
CC-MAIN-2018-26
en
refinedweb
#include <iostream>
#include <cmath>
using namespace std;

//explains program to user and directs them to enter appropriate values.
void input(double& meters);

//makes conversion from metric to English units
void convert(int& feet, double& inches, double meters);

//it will make use of the following global constants
const double METERS_PER_FOOT = 0.3048;
const double INCHES_PER_FOOT = 12.0;

//handles the output as shown in the sample script below
void output(int feet, double inches, double meters);

int main()
{
    int feet;
    double meters, inches;
    char choice;
    do
    {
        input(meters);
        cout.setf(ios::fixed);
        cout.setf(ios::showpoint);
        cout.precision(2);
        convert(feet, inches, meters);
        output(feet, inches, meters);
        cout << endl;
        cout << "Y/y continues, any other quits" << endl;
        cin >> choice;
        cout << endl;
    } while((choice == 'y') || (choice == 'Y'));
    return 0;
}

void input(double& meters)
{
    cout << "Enter a number as a double" << endl;
    cin >> meters;
}

void convert(int& feet, double& inches, double meters)
{
    feet = (meters)/METERS_PER_FOOT; // this part is giving me the error
}

void output(int feet, double inches, double meters)
{
    cout << endl;
    cout << "The value of meters, centimeters " << meters
         << " meters converted to English is " << feet << " feet, " << endl;
}

I'm getting this error:

sprog4.cpp: In function 'void convert(int&, double&, double)':
sprog4.cpp:65: warning: converting to 'int' from 'double'

*** MOD EDIT: Added code tags. ***
This post has been edited by JackOfAllTrades: 06 May 2009 - 01:54 PM
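The warning comes from the line feet = (meters)/METERS_PER_FOOT;, which assigns a double result to the int variable feet and silently truncates; note also that convert() never assigns inches. Sketched in Python (not the poster's C++) just to show the arithmetic the function is meant to perform; in the C++ version the truncation would be spelled out with an explicit cast such as static_cast<int>:

import math

METERS_PER_FOOT = 0.3048
INCHES_PER_FOOT = 12.0

def convert(meters):
    total_feet = meters / METERS_PER_FOOT            # e.g. 1.0 m -> 3.2808... ft
    feet = math.floor(total_feet)                    # whole feet (explicit truncation)
    inches = (total_feet - feet) * INCHES_PER_FOOT   # leftover fraction, in inches
    return feet, inches

print(convert(1.0))   # -> (3, 3.3700787...)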
https://www.dreamincode.net/forums/topic/103724-converting-from-meters-to-feet-in-c/
CC-MAIN-2018-26
en
refinedweb
import "k8s.io/apiserver/pkg/admission/plugin/webhook/config/apis/webhookadmission"

Files: doc.go, register.go, types.go, zz_generated.deepcopy.go

GroupName is the group name used in this package.

var (
    SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
    AddToScheme   = SchemeBuilder.AddToScheme
)

var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: runtime.APIVersionInternal}
SchemeGroupVersion is the group version used to register these objects.

Kind takes an unqualified kind and returns a Group qualified GroupKind.

func Resource(resource string) schema.GroupResource
Resource takes an unqualified resource and returns a Group qualified GroupResource.

type WebhookAdmission struct {
    metav1.TypeMeta
    // KubeConfigFile is the path to the kubeconfig file.
    KubeConfigFile string
}
WebhookAdmission provides configuration for the webhook admission controller.

func (in *WebhookAdmission) DeepCopy() *WebhookAdmission
DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WebhookAdmission.

func (in *WebhookAdmission) DeepCopyInto(out *WebhookAdmission)
DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.

func (in *WebhookAdmission) DeepCopyObject() runtime.Object
DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.

Package webhookadmission imports 3 packages (graph) and is imported by 5 packages. Updated 2019-12-09.
https://godoc.org/k8s.io/apiserver/pkg/admission/plugin/webhook/config/apis/webhookadmission
CC-MAIN-2020-05
en
refinedweb
Simple & featured native masonry layout implementation for React JS

react-xmasonry
Responsive, minimalistic and full-featured native masonry layout (grid) for React JS.

Features
- Native masonry layout implementation for React JS with no dependencies.
- Minimalistic by design.
- Responsive, mobile-friendly approach (so there is no "fixed block width in pixels" option).
- Configurable width of blocks (in columns) and width of columns (targeting width in pixels), maximum number of columns, centering, etc.
- Fully customizable using CSS. Put any animations and transitions you like using the .xmasonry and .xblock selectors.
- Works with server rendering.

Installation
npm install react-xmasonry --save-dev

Or, if you use the old-style <script> tag or need a UMD module for demos, use this:
<script type="text/javascript" src=""></script>

Having trouble installing react-xmasonry? Check this issue or open a new one if you are still struggling.

Usage
Import the XMasonry and XBlock components:
import { XMasonry, XBlock } from "react-xmasonry"; // Imports JSX plain sources
import { XMasonry, XBlock } from "react-xmasonry/dist/index.js"; // Imports precompiled bundle

The simplest layout wraps a few XBlock children in an XMasonry container and adds some styling.

Styling Animations and Transitions
If you want to put transitions/animations on XMasonry, use the .xmasonry and .xblock selectors. For example:

@keyframes comeIn {
    0% { transform: scale(0) }
    75% { transform: scale(1.03) }
    100% { transform: scale(1) }
}
.xmasonry .xblock {
    animation: comeIn ease 0.5s;
    animation-iteration-count: 1;
    transition: left .3s ease, top .3s ease;
}
.card {
    margin: 7px;
    padding: 5px;
    border-radius: 3px;
    box-shadow: 0 1px 3px darkgray;
}

Server Rendering
Note: the statically rendered output is styled through these classes:

.xmasonry-static {
    text-align: center;
    overflow: auto;
}
.xblock-static {
    float: left;
    text-align: left;
}

Configuring Components
There are several properties you can assign to the XMasonry and XBlock components.

<XMasonry> Component Properties
<XBlock> Component Properties

The layout is updated on:
- Window size changes, see the responsive prop.
- Font load events, see the updateOnFontLoad prop.
- Image load events, see the updateOnImagesLoad prop.
- Children changes like adding, replacing or deleting children.
- After any change in layout happens, see the smartUpdate prop.

XMasonry Under the Hood
Technically, the XMasonry component renders 3 times:
- "Empty Render" (ER), when XMasonry just renders its empty container and measures the available width;
- "Invisible Render" (IR), when XMasonry renders visibility: hidden blocks with computed column widths to measure their heights;
- And finally "Actual Render" (AR), when it renders elements with computed dimensions and positions.
https://reactjsexample.com/simple-featured-native-masonry-layout-implementation-for-react-js/
CC-MAIN-2020-05
en
refinedweb
I'm writing this on 9/14/2016. I make note of the date because the size of an S3 bucket may seem a very important bit of information, yet AWS does not have an easy method with which to collect it. I fully expect them to add that functionality at some point. As of this date, I could only come up with 2 methods to get the size of a bucket. One could list all of the bucket items and iterate over the objects while keeping a running total. That method does work, but I found that for a bucket with many thousands of items, it could take hours per bucket. A better method uses AWS CloudWatch metrics instead. When an S3 bucket is created, it also creates 2 CloudWatch metrics, and I use those to pull the Average size over a set period, usually 1 day. Here's what I came up with:

import boto3
import datetime

now = datetime.datetime.now()
cw = boto3.client('cloudwatch')
s3client = boto3.client('s3')

# Get a list of all buckets
allbuckets = s3client.list_buckets()

# Header Line for the output going to standard out
print('Bucket'.ljust(45) + 'Size in Bytes'.rjust(25))

# Iterate through each bucket
for bucket in allbuckets['Buckets']:
    # For each bucket item, look up the corresponding metrics from CloudWatch
    response = cw.get_metric_statistics(Namespace='AWS/S3',
                                        MetricName='BucketSizeBytes',
                                        Dimensions=[
                                            {'Name': 'BucketName', 'Value': bucket['Name']},
                                            {'Name': 'StorageType', 'Value': 'StandardStorage'}
                                        ],
                                        Statistics=['Average'],
                                        Period=3600,
                                        StartTime=(now-datetime.timedelta(days=1)).isoformat(),
                                        EndTime=now.isoformat())
    # The cloudwatch metrics will have the single datapoint, so we just report on it.
    for item in response["Datapoints"]:
        print(bucket["Name"].ljust(45) + str("{:,}".format(int(item["Average"]))).rjust(25))
        # Note the use of "{:,}".format.
        # This is a new shorthand method to format output.
        # I just discovered it recently.

Oh my goodness! Incredible article dude! Thank you so much, However I am encountering issues with your RSS. I don't understand why I am unable to join it. Is there anybody else getting similar RSS issues? Anyone who knows the solution can you kindly respond? Thanx!!
Works like a charm. Awesome.
Well, this saved my day! Thank you very much! 🙂
This is exactly what I needed . thanks so much its really Awesome 🙂
Can we send the print output on e mail using boto.ses, i am new to python if you can share the code it will be a grate help.
To send mail through SES, you dont need a Boto3 call. SES, once setup, is just another SMTP email server. You would sendmail to the SES host just like you would any other mail server. You have the SES host, port, id, and password.
Wow!!! this is awesome! just what I needed man….
Thats Great work bro But i want to put this in csv file how can i do it
This is Awesome!!!
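One of the comments asks how to write this report to a CSV file instead of printing it. A minimal variation using Python's csv module (reusing the cw, allbuckets, now and datetime objects from the script above; the output file name is arbitrary) could look like this:

import csv

# Same CloudWatch lookup as above, but rows go to sizes.csv instead of stdout.
with open("sizes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Bucket", "SizeBytes"])
    for bucket in allbuckets["Buckets"]:
        response = cw.get_metric_statistics(Namespace="AWS/S3",
                                            MetricName="BucketSizeBytes",
                                            Dimensions=[
                                                {"Name": "BucketName", "Value": bucket["Name"]},
                                                {"Name": "StorageType", "Value": "StandardStorage"}
                                            ],
                                            Statistics=["Average"],
                                            Period=3600,
                                            StartTime=(now - datetime.timedelta(days=1)).isoformat(),
                                            EndTime=now.isoformat())
        for item in response["Datapoints"]:
            writer.writerow([bucket["Name"], int(item["Average"])])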
https://www.slsmk.com/getting-the-size-of-an-s3-bucket-using-boto3-for-aws/
CC-MAIN-2020-05
en
refinedweb
iSystemOpenManager Struct Reference Manager for system open events. More... #include <iutil/systemopenmanager.h> Detailed Description Manager for system open events. It stores whether a csevSystemOpen event was already broadcast to the event handlers. If an event handler is later registered when the system is already open it immediately receives an open event. Thus, using iSystemOpenManager guarantees that a listener gets an csevSystemOpen event, independent whether that has been broadcast yet or not at the time of registration. Definition at line 38 of file systemopenmanager.h. Member Function Documentation Register a listener to receive csevSystemOpen and csevSystemClose events. Register a weak listener to receive csevSystemOpen and csevSystemClose events. - See also: - CS::RegisterWeakListener Unregister a listener for csevSystemOpen and csevSystemClose events. Unregister a weak listener to receive csevSystemOpen and csevSystemClose events. - See also: - CS::RemoveWeakListener The documentation for this struct was generated from the following file: - iutil/systemopenmanager.h Generated for Crystal Space 2.1 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api/structiSystemOpenManager.html
CC-MAIN-2015-22
en
refinedweb
RTL-SDR. Decryption is not covered in this tutorial. First, you will need to find out at what frequencies you have GSM signals in your area. For most of the world, the primary GSM band is 900 MHz, in the USA it starts from 850 MHz. If you have an E4000 RTL-SDR, you may also find GSM signals in the 1800 MHz band for most of the world, and 1900 MHz band for the USA. Open up SDRSharp, and scan around the 900 MHz (or 850 MHz) band for a signal that looks like the waterfall image below. This is a non-hopping GSM downlink signal. Using NFM, it will sound something like the example audio provided below. Note down the strongest GSM frequencies you can find. The rest of the tutorial is performed in Linux and I will assume that you have basic Linux skills in using the terminal. For this tutorial I used Kali Linux in a VMWare session. You can download the VMWare image here, and the free VMWare player from here. Note that virtual box is reported not to work well with the RTL-SDR, as its USB bandwidth capabilities are poor, so VMWare player should be used. Update: Note that the latest version of Kali Linux comes with GNU Radio pre-installed, which should allow you to skip right to the Install Airprobe stage. Open up Kali Linux in your VMWare player and login. The default username is root, and the password is toor. Install GNU Radio You will need to install GNU Radio first in order to get RTL-SDR to work. An excellent video tutorial showing how to install GNU Radio in Kali Linux can be found in this video shown below. Note that I had to run apt-get update in terminal first, before running the build script, as I got 404 not found errors otherwise. You can also use March Leech’s install script to install the latest version of GNU Radio on any Linux OS. Installation instructions can be found here. I recommend installing from source to get the latest version. Update: The new version 3.7 GNU Radio is not compatible with AirProbe. You will need to install GNU Radio 3.6. However, neeo from the comments section of this post has created a patch which makes AirProbe compatible with GNU Radio 3.7. To run it, place the patch file in your airprobe folder and then run patch -p1 < zmiana3.patch. Install Airprobe Airprobe is the tool that will decode the GSM signal. I used multiple tutorials to get airprobe to install. First from this University of Freiberg tutorial, I used their instructions to ensure that the needed dependencies that airprobe requires were installed. Install Basic Dependencies sudo apt-get –y install git-core autoconf automake libtool g++ python-dev swig libpcap0.8-dev Update: Thanks to shyam jos from the comments section who has let us know that some extra dependencies are required when using the new Kali Linux (1.0.5) for airprobe to compile. If you’ve skipped installing GNURadio because you’re using the new Kali 1.0.5 with SDR tools preinstalled, use the following command to install the extra required dependencies. sudo apt-get install gnuradio gnuradio-dev cmake git libboost-all-dev libusb-1.0-0 libusb-1.0-0-dev libfftw3-dev swig python-numpy Install libosmocore git clone git://git.osmocom.org/libosmocore.git cd libosmocore autoreconf –i ./configure make sudo make install sudo ldconfig Clone Airprobe Now, I discovered that the airprobe git repository used in the University tutorial (berlin.ccc.de) was out of date, and would not compile. From this reddit thread I discovered a more up to date airprobe git repository that does compile. Clone airprobe using the following git command. 
git clone git://git.gnumonks.org/airprobe.git Now install gsmdecode and gsm-receiver. Install gsmdecode cd airprobe/gsmdecode ./bootstrap ./configure make Install gsm-receiver cd airprobe/gsm-receiver ./bootstrap ./configure make Testing Airprobe Now, cd into to the airprobe/gsm-receiver/src/python directory. First we will test Airprobe on a sample GSM cfile. Get the sample cfile which I found from this tutorial by typing into terminal. cd airprobe/gsm-receiver/src/python wget Note: The tutorial and cfile link is sometimes dead. I have mirrored the cfile on megaupload at this link. Place the cfile in the airprobe/gsm-receiver/src/python folder. Now open wireshark, by typing wireshark into a second terminal window. Wireshark is already installed in Kali Linux, but may not be in other Linux distributions. Since Airprobe dumps data to a UDP port, we must set Wireshark to listen to this. Under Start in Wireshark, first set the capture interface to lo (loopback), and then press Start. Then in the filter box, type in gsmtap. This will ensure only airprobe GSM data is displayed. Back in the first terminal that is in the python directory, type in ./go.sh capture_941.8M_112.cfile If everything installed correctly, you should now be able to see the sample GSM data in wireshark. Receive a Live Channel To decode a live channel using RTL-SDR type in terminal ./gsm_receive_rtl.py -s 1e6 A new window will pop up. Tune to a known non-hopping GSM channel that you found earlier using SDRSharp by entering the Center Frequency. Then, click in the middle of the GSM channel in the Wideband Spectrum window. Within a few seconds some GSM data should begin to show constantly in wireshark. Type ./gsm_receive_rtl.py -h for information on more options. The -s flag is used here to set the sample rate to 1.0 MSPS, which seems to work much better than the default of 1.8 MSPS as it seems that there should be only one GSM peak in the wideband spectrum window. Capturing a cfile with the RTL-SDR (Added: 13/06/13) I wasn’t able to find a way to use airprobe to capture my own cfile. I did find a way to capture one using ./rtl_sdr and GNU Radio however. First save a rtl_sdr .bin data file using where -s is the sample rate, -f is the GSM signal frequency and -g is the gain setting. (rtl_sdr is stored in ‘gnuradio-src/rtl-sdr/src’) ./rtl_sdr /tmp/rtl_sdr_capture.bin -s 1.0e6 -f 936.6e6 -g 44.5 Next, download this GNU Radio Companion (GRC) flow graph (scroll all the way down for the link), which will convert the rtl_sdr .bin file into a .cfile. Set the file source to the capture.bin file, and set the file output for a file called capture.cfile which should be located in the ‘airprobe/gsm-receiver/src/python’ folder. Also, make sure that ‘Repeat’ in the File Source block is set to ‘No’. Now execute the GRC flow graph by clicking on the icon that looks like grey cogs. This will create the capture.cfile. The flow chart will not stop by itself when it’s done, so once the file has been written press the red X icon in GRC to stop the flow chart running. The capture.cfile can now be used in airprobe. However, to use this cfile, I found that I had to use ./gsm_receive.py, rather than ./go.sh as a custom decimation rate is required. I’m not sure why, but a decimation rate of 64 worked for me, which is set with the -d flag. ./gsm_receive.py -I rtl_sdr_capture.cfile -d 64 Going Further I have not been able to decode encrypted GSM data myself, but if you are interested in researching this further, here are some useful links. 
Disclaimer: Only decrypt signals you are legally allowed to (such as from your own cell phone) to avoid breaching privacy. A Guide by Security Research Labs GSM Decoding Tutorial by the University of Norwegian Science and Technology A5 Wiki A good lecture on this topic is shown below. Is anybody else getting this error: I cant install airprobe/gsm-reciever. When I try to “make” it gives me this error: g++: error: ./gsm.cc: No such file or directory g++: fatal error: no input files compilation terminated. make[4]: *** [_gsm_la-gsm.lo] Error 1 make[4]: Leaving directory `/root/airprobe/gsm-receiver/src/lib’ I can’t find this gsm.cc file anywhere?! I got the same error. did u already managed to compile it? I fond another airprobe src, compiling fine…. but there are some other issues. I had this issue to, and the “no module named _GSM” Turns out the error was caused by using an out of date/old version of Airprobe, that wasnt compiling correctly. Since gnumonks was gone I had to find another on that would compile correctly – I got it working with this one: Github I cloned the same repo on github but still I am getting the same error “ImportError: No module named _gsm” . Please let me know if something else needs to be done. hi,please help me,thanks. root@kali:~/airprobe/gsm-receiver/src/python# ./gsm_receive_rtl.py -s 1e6 Traceback (most recent call last): File “./gsm_receive_rtl.py”, line 22, in import osmosdr ImportError: No module named osmosdr Hi, I followed all the steps and it works nicely until I click in the wideband spectrum window. It just doesn’t do anything, it doesn’t show anything on wirehsark either. I’m using a HackRF, what could be the problem? Also tried in Kali 1.0.8 vm and get: linux; GNU C++ version 4.7.2; Boost_104900; UHD_003.005.003-0-unknown Traceback (most recent call last): File “./gsm_receive_rtl.py”, line 27, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: libosmocore.so.5: cannot open shared object file: No such file or directory Anybody any ideas how to make this work in Kali 1.0.8? Answered my own problem. Run the following if using Kali 1.0.8 before the airproble download and setup: sudo ln -s /usr/local/include/gruel/swig/gruel_common.i /usr/local/include/gnuradio/swig/ && ldconfig seems to be working on my VM now hi to day i installed kali linux 1.0.8 with gnuradio preinstalled i follow the totrial how to install airprobe and apply the patch zmiana.patch all thing work fine but when i apply the test of airprobe i go this message: Traceback (most recent call last): File “./gsm_receive.py”, line 11, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: No module named _gsm please help me i have 5 days try !!IMPORTANT!! hey, my name is hans and ive got a simple question (im a newbie in this section): Is it possible to detect the count of smartphones near me with gsm analyzazion? and if not, could u imagine some way to do this? i know its not that easy, but ive several months to do this – i just need to know its possible regards, hans Hi Hans, what area are you in? hi, i’m made cfile with a terratec e4000 usb card, but unfortunately i cant find a way how to decode this. when i write “./go.sh /tmp/capture-rtl-sdr.cfile 64 1S” everything looks fine in console, but in wireshark have nothing. 
Instead of when write “./go.sh /tmp/capture-rtl-sdr.cfile 64 0C” then wireshark show traffic but not system information 5 or 6 so im uploaded my cfile, and if somebody can try and eventually find where im in wrong, i will appreciate Hi I had been using gsm_receive_rtl.py with version 1 of zmiana patch, and it worked OK. However, I couln’t make go.sh work with any capture file, like capture_941.8M_112.cfile or vf_call6_a725_d174_g5_Kc1EF00BAB3BAC7002.cfile. Now I read neeo comment about a new patch version and I applyed it, but I got same rerults: gsm_receive_rtl.py working OK but file decoding not working. Neeo, what options should I use to try with vf_call6_a725_d174_g5_Kc1EF00BAB3BAC7002.cfile, which is the file should work, isn’t it? Thanks! you need to change clock_rate in python code to 100e6, and use decim = 174 for vf_call6_a725_d174_g5_Kc1EF00BAB3BAC7002.cfile Then I removed “-I” and get: configure.ac:16: required file `./config.guess’ not found configure.ac:16: `automake –add-missing’ can install `config.guess’ configure.ac:16: required file `./config.sub’ not found configure.ac:16: `automake –add-missing’ can install `config.sub’ configure.ac:5: required file `./install-sh’ not found configure.ac:5: `automake –add-missing’ can install `install-sh’ configure.ac:16: required file `./ltmain.sh’ not found configure.ac:5: required file `./missing’ not found configure.ac:5: `automake –add-missing’ can install `missing’ src/Makefile.am: required file `./depcomp’ not found src/Makefile.am: `automake –add-missing’ can install `depcomp’ autoreconf: automake failed with exit status: 1 I get this: autoreconf: ‘configure.ac’ or ‘configure.in’ is required after “autoreconf –i” hey all , excuse me because of reapiting this question ! when i run the ./gsm_receive_rtl.py i take this error : inux; whould you please tell me exactly how could i solve this problem ? —————————– and , another question is that when i run patchs , it asks me a File name and i give the file name but it asks for ignoring them , —————————————————- please tell me how to do patchs ! soooo sooorryy and tnxxxx a lot for ans. ——————————————— I have installed this on Kali 1.0.6 in VirtualBox, however when I run ./gsm_receive_rtl.py -s 1e6 after detecting the RTLSDR I have an error thrown; Traceback (most recent call last): File “/usr/lib/python2.7/dist-packages/gnuradio/wxgui/plotter/plotter_base.py”, line 203, in _on_paint for fcn in self._draw_fcns: fcn[1]() File “/usr/lib/python2.7/dist-packages/gnuradio/wxgui/plotter/plotter_base.py”, line 63, in draw GL.glCallList(self._grid_compiled_list_id) File “/usr/lib/python2.7/dist-packages/OpenGL/error.py”, line 208, in glCheckError baseOperation = baseOperation, OpenGL.error.GLError: GLError( err = 1280, description = ‘invalid enumerant’, baseOperation = glCallList, cArguments = (1L,) ) Thanks in advance for any help. hi, i’ve updated the patch for 3.7 a little bit – link – now gsm_receive_rtl.py works as well (can be used to live capture) as noticed by Storyman, the go.sh doesn’t work for example capture file mentioned in article – maybe the file needs some other clock_rate (it wasn’t my testing target in the first place). I was able however to decode srlabs file correctly (with clockrate 100e6) and with 64e6 (default) I’m able to decode files captured with my rtl-sdr. Thanks for the update, and the extra info. I was able to replicate your result! In the process of messing around with it, I uncovered a problem, too. 
I noticed that when I clicked the coarse tune window, it was behaving oddly. I tracked the bug down to this: When gr moved from 3.6 to 3.7, gr::filter::freq_xlating_fir_filter_XXX changed to require the negative of the old value. that is, an offset of -200000 in gr3.6 should be +200000 in gr3.7. The fix — change this line: self.offset = -x to self.offset = x However, that got me thinking about what else that sign change could be messing up. Sure enough… there is a tuner correction function built in there, where the gsm receiver function sends back a frequency correction to the top_block. So I performed the following minor surgery to gsm_receive.py: class tuner(gr.feval_dd): def __init__(self, top_block): gr.feval_dd.__init__(self) self.top_block = top_block def eval(self, freq_offset): self.top_block.set_center_frequency(freq_offset) return freq_offset becomes: class tuner(gr.feval_dd): def __init__(self, top_block): gr.feval_dd.__init__(self) self.top_block = top_block def eval(self, freq_offset): self.top_block.set_center_frequency(0 - freq_offset) return 0 - freq_offset Aaaaand just like that — capture_941.8M_112.cfile decodes properly under gr3.7 now Oh, just wanted to say, there’s probably a cleaner approach to fixing these errors. There may be a central point where we can just do a sign change and fix them all or something. I haven’t really investigated any further yet. I was just so happy to get the example cfile to read, finally, that I just rushed here to say how you’re absolutely right Storyman – the clearly states that the change of the sign is needed (but I did abs() – so that’s my mistake). new version: (I did the change in a different location – but it works as well). Well, i went through all the comments on this page. It does appear from the comments that airprobe only works on kali-linux. Is that so? As i m trying to install airprobe on relatively older version of ubuntu i.e. ubuntu 10.04. So is that worth-less to do so? No it should work on any Linux not just Kali. People just use Kali because airprobe can be very hard to install and Kali somewhat simplified it by having the GNU Radio prerequisite preinstalled. Also forgot to mention, as per SopaXorzTaker, that one should do make in /src/python/lib and copy gsm.py into /src/python Worth noting are these patches for gnuradio 3.7: Forgot to mention neither patches are mine, first is by scateu and second is (c) 2014 SopaXorzTaker Christopher, I’ve applied both patches, and the programs run, but they don’t produce valid output like they do for me under gr3.6. Have you (or anyone, really) actually gotten to a 100% usable state with gr3.7? Even testing against the capture_941.8M_112.cfile file produces a stream of “sch.c:260 ERR: conv_decode 11″ under gr3.7, doing the same test in the same manner as under gr3.6 (which worked perfectly). Has ANYONE overcome this problem yet? And if so, are you able to share any hints as to how? Thanks! Hi Guys… I have tried to install the Kali 1.0.6 and then GNURadio 3.7. I have read about the incompatibility with airprobe and I also applied a patch and all worked ok. When I run the with caputer*.cfile it fails like this: root@kali:~/airprobe/gsm-receiver/src/python# ./go.sh capture_941.8M_112.cfile 112 0b Using Volk machine: avx_64_mmx 10 sch.c:260 ERR: conv_decode 11 sch.c:260 ERR: conv_decode 12 sch.c:260 ERR: conv_decode 11 sch.c:260 ERR: conv_decode 11 sch.c:260 ERR: conv_decode 10 …. And nothing shows up on Wireshark. 
Worst if I try to run: root@kali:~/airprobe/gsm-receiver/src/python# ./gsm_receive_rtl.py -f 939.363M -c 0B Traceback (most recent call last): File “./gsm_receive_rtl.py”, line 16, in from gnuradio import gr, gru, eng_notation, blks2, optfir ImportError: cannot import name blks2 I get this python error. Seems like there is no patch applied to the IMPORT function of python related to GNURadio 3.7 Any idea? Problem with python too ;( OFFTOPIC: go away scriptkiddies! I solved the problem by installing Kali 1.0.6 where GNURadio 3.6.5 is pre-installed. Then downloaded and compiled airprobe. I have also installed osmocombb RTLSDR libraries to make Kalibrate working. By running the live capture using gsm-receive i raised the gain to 52 and et voilà … 20 seconds later GSM dataflow showing up on Wireshark. My advise is not to install GNURadio 3.7 and keep on working with pre installed version on GNURadio on Kali Linux 1.0.6 Now fixed and working on Ubuntu | | | | \ / V for step 1 i.e. identifying the exact GSM frequency, one can use kal its self to determine the GSM frequency (instead of of via SDR# or gqrx) as long as you know the GSM band (quite easy) e.g. kal -s 900 (scan GSM band 900 for all GSM signals) output will be something along the lines of chan: 1 (908.3MHz – 21.3243kHz) power:xxxxx.xx chan: 2 (909.5MHz – 22.1231kHz) power:xxxxx.xx chan: 3 (907.2MHz – 20.3223kHz) power:xxxxx.xx choose a channel which shows a high power value (i.e. good reception) translate the corresponding frequency to hz e.g. assuming channel 3 has the highest power value of the received channels 907.2Mhz would translate to 907 200 000hz modify your frequency in the gsm_receive_rtl.py to the corresponding frequency e.g. gsm_receive_rtl.py -s 1e6 -f 907200000 Not sure if anyone else had issues running the apt-get install commands, but I did. I ended up installing Ubuntu’s software center and was able to search for the various packages through there. When I tried installing packages through the command line more than half said they did not exist (?) Just thought I’d share this tip in case anyone has the same issue. I used Kali Linux. How to execute .patch file? cd thedirectorycontainingthesource patch -p1 < mypatch.patch If that doesn't work try with -p0 instead of -p1. hello when i try to compile airprobe to decode GSM signals with gnuradio radio i follow the steps, my problem is when I compile the gsm-receiver with the command make, comethe have installed Kali Linux 1.06 new but dont work airprobe why can someone help me please? the error for comiling Airprobe i have found the problem the path rt-sdr thre must be compiled with ./bootstrap and ….. make and airprobe gsm decode are going Hello! When I am trying to use 1e6 on the sample rate, I can’t change the frequency or time/fne tune to the right frequency. The wideband spectrum waves is moving very slow also the channel apectrum waves. How can i fix it? Thanks! You need more CPU power. I had the same issue when I used a Vmware virtual machine, adding one more CPU core in the config solved this problem for me. Real-time sampling takes a lot of CPU power. Oh.. I’m trying to run it on atom processor. That’s bad. I guess I can’t use other saple rate. Cause I can tune when I use the default sample rate. Thank you! Very interesting tutorial! Is it possible to see when a User End-device is opening and closing PDP-sessions for the GPRS? 
Hello , i have install gnuradio-3.6.5.1 and airprobe , okey its fine working i have see data my terminal and decode data in my wireshark window but I do not hear any sound . i dont know , fWhat should I do to hear sound , i must should install VMWare player or not ? Please help me ,thank you and best regards . no, it should act like that. also, how old are you? Well, despite I could install airprobe with gnuradio 3.7 using the patch, I still couldn’t decode any example file (tried with capture_941.8M_112.cfile and vf_call6_a725_d174_g5_Kc1EF00BAB3BAC7002). I get this: ./go.sh capture_941.8M_112.cfile 64 0b Using Volk machine: ssse3_32_orc Key: ’0000000000000000′ Configuration: ’0. And nothing appears in wireshark. If I use other decimation ratios, for example 112: ./go.sh capture_941.8M_112.cfile 112 0b Using Volk machine: ssse3_32 11 … Any ideas? Thanks! Hi, I’m having a problem very similar to OI. When I run: ./go.sh capture_941.8M_112.cfile I get: Traceback (most recent call last): File “./gsm_receive.py”, line 15, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: ../lib/.libs/_gsm.so: undefined symbol: _Z14gr_fast_atan2fff I’ve seen the comment from Andy, but my libfftw3-dev package is in its most recent version. Any ideas? Thanks! Sorry, I hadn’t noticed that my problem could be related with the gnuradio version. I tryed with the neeo patch, and now it seems to work. Thanks! I’ve made a patch to make gsm-receiver (from gnumonks airprobe) compatible with gnuradio >= 3.7. it is a little bit hacky im some places, but it works for me you can get it here: sorry, link didn’t show up: i’ve also created a new version of grc file, that can be loaded in gnuradio-companion (grc) 3.7 Could you please provide the patch in a way that does not require an EXE file to download? You could create a fork of the code on github.com for example (or e-mail the patch to me so I can host it, my email is linked from my homepage). No need to use their executable downloader… just click the filename at the top of the page and it will download normally with the browser. Nice one neeo, but how did you get past the error concerning gnuradio-core, since it was removed in 3.7 you must have solved this problem as well This happens when you try to run the ./configure script. Errors like this: checking for GNURADIO_CORE... configure: error: Package requirements (gnuradio-core >= 3) were not met: No package 'gnuradio-core' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables GNURADIO_CORE_CFLAGS and GNURADIO_CORE_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. 
And suddenly it worked, after running bootstrap again When I install gsm-receiver of airprobe,the error occurred.How to fix this: ======================================== In file included from GSMCommon.h:34:0, from GSMCommon.cpp:23: ./Timeval.h: In function ‘void msleep(long int)': ./Timeval.h:32:49: error: ‘usleep’ was not declared in this scope In file included from GSMCommon.cpp:23:0: GSMCommon.h: In function ‘void GSM::sleepFrames(unsigned int)': GSMCommon.h:62:36: error: ‘usleep’ was not declared in this scope GSMCommon.h: In function ‘void GSM::sleepFrame()': GSMCommon.h:66:29: error: ‘usleep’ was not declared in this scope make[5]: *** [GSMCommon.lo] error 1 make[5]: Leaving directory `/root/airprobe/gsm-receiver/src/lib/decoder/openbtsstuff’ make[4]: *** [all-recursive] error 1 make[4]: Leaving directory `/root/airprobe/gsm-receiver/src/lib/decoder’ ============================================= Has anyone used Kraken? I have it installed on my machine with tables and I’m not sure how to point or configure Kraken or find_kc toward the tables on the HD. I’m a rather new Linux user. I get an error i don.t understand. im using latest version of debian :/ ./gsm_receive_rtl.py linux; GNU C++ version 4.7.2; Boost_104900; UHD_003.006.002-1-g8f0f045c gr-osmosdr v0.0.2-42-g86ecf305 (0.0.3git) gnuradio 3.6.5.1 built-in source types: file fcd rtl rtl_tcp uhd hackrf bladerf netsdr Using device #0 Realtek RTL2838UHIDIR SN: 00000001 Found Rafael Micro R820T tuner sample rate: 1800000 >>> gr_fir_ccc: using SSE >>> gr_fir_ccf: using SSE Key: ‘ad6a3ec2b442e400′ Configuration: ‘0B’ Configuration TS: 0 configure_receiver Using Volk machine: sse4_2_64_orc The program ‘python’ received an X Window System error. This probably reflects a bug in the program. The error was ‘BadWindow (invalid Window parameter)’. (Details: serial 629 error_code 3 request_code 137.) Hey all, For those of you in the states, have any of you guys had any luck with this? Our possible ranges leave only 1 of the 4 bands usable if using the RTL SDR seeing as the max range is ~1700 (GSM for the states for AT&T and T-Mobile are within 850, 1700, 1900, and 2100 I believe). Therefore, I have only been able to attempt 850mhz band, but with no such luck. I am currently using a simple TV Antenna. Given the comments for this article, even the stock antenna that comes with the RTL SDR can pick this up. Any thoughts as to what I may be doing wrong? I think that once I find a non-hopping signal, I will be set. In the meantime, I can only find MOTORBO signals within this range. Thoughts? Thank you so much for the tutorial! As soon as I finished reading it, I went out and bought the Terratec E4000. Unfortunately, I am having the same troubles as some of the others. After I installed Airprobe, I got this error message: root@XXXX:~/sdr/airprobe/gsm-receiver/src/python# ./go.sh capture_941.8M_112.cfile Traceback (most recent call last): File “./gsm_receive.py”, line 3, in from gnuradio import gr, gru, blks2 ImportError: cannot import name blks2 I even tried removing the GNURadio that comes with Kali, and instead installed it in the fashion described in the video-tutorial in your post. But nothing seems to work. I tried googling the problem, and have now spent several days trying to figure it out – unfortunately without any luck. I hope someone can help me with this problem. All the best, //Dennis Hi, I have installed the gnuradio 3.7. 
But when I tried to install gsm-receiver after step “./configure”, I got a error like this “Package requirements (gnuradio-core >= 3) were not met”. I googled the problem. It seems the new version gnuradio is not compatible with the airprobe. Do you have any ideal to fix it? Many Thanks Great tutorial…the clearest yet! I did have to download many dependencies on my fresh install of Kali in order to install gsm-receiver but now it installed correctly. When I try to run gsm_receive_rtl.py I get the following errors: linux; any idea what this is? Attached rtl2832-cfile.grc does not work in modern version of gnuradio. Trying in v3.7 gives a lot of errors. I know that asking for a port maybe asking too much. Could at least a picture of the schematic be posted? This is Ajay here, When I use ./go.sh with the downloaded cfile, everything is fine. When I make my own cfile using usrp+gnuradio+airprobe ./gsm_scan.py -pe -re -d174 -c643 I get the cfile but the decode does not happen using ./go.sh ?? Can anyone help me with how to capture a valid cfile using USRP+GNURADIO ? I have been trying for a long time, pls help. Install Kali and simple run a script as root from /root folder: apt-get -y install git-core autoconf automake libtool g++ python-dev swig libpcap0.8-dev apt-get install gnuradio gnuradio-dev cmake git libboost-all-dev libusb-1.0-0 libusb-1.0-0-dev libfftw3-dev swig python-numpy cd ~/sdr git clone git://git.osmocom.org/libosmocore.git cd libosmocore autoreconf -i ./configure make sudo make install sudo ldconfig cd ~/sdr git clone git://git.gnumonks.org/airprobe.git cd airprobe/gsmdecode ./bootstrap ./configure make cd ~/sdr cd airprobe/gsm-receiver ./bootstrap ./configure make cd ~/sdr how change ip in wireshark to 10.0.0.0/16 LAMER! Hi, I’m a Noob here. Running ./go.sh capture_941.8M_112.cfile 112 1S on the cfile mentioned in the tutorial shows SI 5 & 6 frames. However, I’ve been unsuccessful in getting similar data off a live transmission and was hoping someone here could point me in the right direction. My beacon is on ARFCN 22 and here’s what I’ve done so far: 1) ./gsm_receive_rtl.py -f 939.363M -c 0B I see BCCH data with 2 different kinds of Immediate Assignments in Wireshark. Here’s a brief excerpt ——– SDCCH/8 + SACCH/C8 or CBCH (SDCCH/8), Subchannel 4 Timeslot: 2 Hopping channel: No Single channel : ARFCN 22 ——– Spare bits (ignored by receiver) Timeslot: 4 Hopping channel: Yes Hopping channel: MAIO 6 Hopping channel: HSN 38 ——– 2) Since the Immediate Assignments to TS2 were frequent, I was hoping that monitoring TS2 on ARFCN 22 would show pre-encryption SI 5 and SI 6 frames. I ran the following command: ./gsm_receive_rtl.py -f 939.363M -c 2S I do not see any output at all in Wireshark while I do see encrypted frames on the gsm_receive window. I tried config 2C and setting the sampling rate to 1MHz but I still cannot see anything in Wireshark. What am I missing ? 
Needed to force the key to 0 to get it to work ./gsm_receive_rtl.py -f 939.363M -c 2S -k “00 00 00 00 00 00 00 00″ Hi there, Just posted about decrypting the data captured on my blog, thought it might be interesting for you too finaly i am able to run it in new Kali linux (version 1.0.5), For those who getting error when compiling/make “gsm-receiver” ,this is beacuse of the missing dependencies with gnuradio installed in kali run this command to fix it : sudo apt-get install gnuradio gnuradio-dev cmake git libboost-all-dev libusb-1.0-0 libusb-1.0-0-dev libfftw3-dev swig python-numpy then try compile airprobe FYI: tried this tutorial in ubuntu 13.04 but failed, worked fine in Kali linux (version 1.0.5) Thanks for this, I havn’t had a chance to try airprobe on the new Kali yet, so this will save some time. correction, airprobe is not pre-installed in kali Thanks for the correction, not sure why I thought that. I am trying to compile airprobe to decode GSM signals with gnuradio radio and wireshark following the steps, the problem is when I compile the gsm-receiver with the command make, the think that the problem comes from some kind of version incompatibility of python but I’m not sure, can someone help me please? Lots of thanks!!! Hi! I’m newby at this. Please, help. After execute a gsm_receive.py I have error: root@kali:~/airprobe/gsm-receiver/src/python# ./gsm_receive.py Traceback (most recent call last): File “./gsm_receive.py”, line 12, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: ../lib/.libs/_gsm.so: undefined symbol: _ZTI8gr_block Sorry I don’t know what could be wrong here, maybe someone else can help? I encountered the same error on Kali Linux. The reason is, that the shared object (_gsm.o) doesn’t get correctly linked against gnuradio-core.so, because pkg-config fails during the build. It fails, because gnuradio-core depends on the package “fftw3f” which is installed in binary form, because otherwise gnuradio woulndn’t work, but the -dev package is mising. Long story short: Install the missing package (apt-get install libfftw3-dev) and rebuild the gsm-receiver. Then it works. It doesn’t work… (I use kali 1.0.5) Hey, thanks for the excellent article. So I’ve gotten up to the point of actually trying to do a live capture with wireshark, but for some reason, when I run gsm_receive_rtl.py, I get an error where each parse of a packet should be. It looks like this: sch.c:260 ERR: conv_decode 12 The number seems to vary between 9 and 12. Any idea how to fix this? Thanks! Gabe Did you set the -s flag to make the bandwidth 1MHz? I get this error too sometimes, usually it’s because the GSM peak isn’t perfectly centered, or I haven’t clicked on the peak center perfectly. Also poor reception might cause it. In one of Domi’s comments below he says that he used kalibrate to get a clock offset figure which allowed him to tune to the signal much more accurately to get around that error, you might want to try that too. Great tutorial, I have several questions though: 1) By using kalibrate I can correctly get 90%+ of all gsm downlink traffic for 20 seconds or so in wireshark, then I get a parity bit error for 10 seconds followed by around 15 seconds of ERR: conv_decode 11 and lastly a bunch of 0’s, any idea what can cause this? I am guessing either my antennae gets offset or I get offset on my packages. 
2) I can see uplink traffic with SDR# but when I try to sniff it with airprobe I get absolutely nothing in wireshark, not even any error messages. Any ideas? Thanks for any help you can give. I plan on trying to run uplink and downlink sniffing at the same time and will let you know my results. (using 2 dongles) Hi Joe, I think I can answer you since I have been down the same road. 1. I think you need to wait for the dongle to warm up (as admin said), and keep re-kalibrating it. It is actually quite random, sometimes I get the full traffic even when I use the exact value coming from arfcncalc, sometimes I need to calibrate. I think this is because my error (28-30kHz) is still in the width of a GSM channel (200 kHz). The parity errors could be ignored it means the traffic you tried to de-modulate and decode is encrypted. The ERR_CONV messages mean that you are not well calibrated, sometimes if you wait they disappear as the dongle gets in tact. The 0s mean that you are so off from the frequency that airprobe couldn’t even find anything that looks like GSM so it just prints it the bits it finds. 2. There is no uplink support at all in airprobe. There was a little demonstration at one of the conferences but the code was never released. You can find some gitHUB repos claiming their airprobe is down and uplink compatible, but they don’t work. According to a comment in the code “uplink can’t be decoded the way currently gsm-receive works”. Everybidy switched to osmocomBB therefore no more code is written for SDRs. I asked Dieter Spaar who presented uplink sniffing but he said the code is private and dirty so he will never release it. I was also thinking about doing uplink and downlink simultaniously but it appears that for some reason you need to sync the two dongles for good results, so I decided to put this aside as it is a lot more complicated than I thought. Good luck, Domi Thanks for the info Domi, I hope it will save some people some time. Does airprobe work on ubuntu or it is only for kali linux? Which version of ubuntu will be most suitable for airprobe? As i m using ubuntu 10.04.4 Hello, Did you try it for uplink traffic as well..? Fahad As far as I know, it isn’t possible to monitor uplink traffic at the moment. Someone correct me if i’m wrong. EDIT: In this video at 32 minutes in they show a demo of uplink traffic monitoring, but I think you need to monitor down downlink and uplink at the same time, which only the USRP can do. Maybe it is possible with two RTLs though… I haven’t tried it yet, but it should be possible – uplink is just a different frequency, but uses the same kind of data-structure as far as I know, so it shuld be possible to demodulate and analyze it using the same tools. It is totally possible, just need some computing power to be able to work with both sticks. The program arfncalc can give you the uplink frequency as well as the downlink. I will look into this stuff in the coming days and will post some results to my blog. Nice blog, you seem knowledgeable about GSM. I’ll keep an eye on your work. Hi, I have one issue that kind of bothers me: I tune my rtl-sdr to the right frequency – I use arfcn-calc and an old Nokia 3310 in network monitor mode so I know what is the the phone’s tower’s ARFCN so I know the frequency – but I don’t always get data, most of the time I get sch.c:260 ERR: conv_decode 11 and similar messages. After that I decided to do a little calibration with kalibrate-rtl. 
It showed me an average of +24 kHz offset, so I subtracted around 24 000 from the frequency arfcncalc told me and now I am tresting this setup. It seems that it still starts with the ERR-messages, but after some seconds it actually starts to output GSM-data as expected. Now my question is: since I am very new to radios and SDR especially is what I did with calibrating and changing the frequency manually correct (at least in theory)? Should I try to move closer to the tower? My phone shows around -59 dBi signal. Thank you! Hi, yes what you did is correct, usually you’d use the PPM offset value, but gsm_receive_rtl.py doesn’t seem to have that option. Remember the dongle takes time to warm up and stabilize, and during that time the frequency offset can change, so make sure you run Kalibrate after the dongle has been running for a few minutes. Also, if the signal isn’t perfectly centered you can tune around with the mouse by clicking on the GSM peak middle. I get those errors sometimes too and i’m not sure why, but it could be signal strength related. Hi, great article, thank you for posting it. What kind of antenna did you use for this? thanks! Hi, thanks. I used a roof mounted J-Pole. But GSM signals are usually quite strong so even the stock rtl-sdr antenna should pick up GSM decently assuming you have a GSM cell tower near you. Oh, great! I already ordered an RTL-SDR from eBay, so I am just waiting for the mailman to bring it. I am really interested about decrypting actual data, found this video which I think could be applied to RTL-SDR, what do you think? Hi, yes the video is applicable, the USRP and RTL-SDR should be pretty much interchangeable. Nice tutorial. I could capture control data without any problem. But how to capture encrypted content ? It should be possible to capture encrypted data even without decrypting. Cant find much info except USRP. I don’t know much about the encryption stuff, but are you talking about capturing a cfile? I wasn’t able to find a way to get airprobe to do it with the rtl-sdr. But it should be possible using GNURadio. Great instruction! Thanks! But I have a question. I trying to get burst data for kraken (magic 114 bits). I use osmocombb + motorola C123. I’m able to see receiving data in wireshark. But how to convert this captured data into necessary format? Thanks in advance! To be honest I haven’t looked into the encryption side of things yet. Your best bet for help is probably on the srlabs A51 mailing list. Some people on IRC might also be able to help. Ask around on the freenode server, channel ##rtlsdr. I’ve been trying to hunt down a GSM frequency to try this out. I can’t seem to find one though. I browsed 900Mhz-1000Mhz, nothing that looked like data. Any tips in using the FCC website for looking it up? I imagine there is a better way than me browsing around randomly. Keep up these great tutorials! Sorry, I almost forgot that the USA uses slightly different frequencies. Try searching from 850 MHz. There’s a good worldwide list of the bands used here. This is also useful for finding exact frequencies.. If you don’t know your own cells ARFCN number, look here in the table for the range of valid values for your GSM band. Thanks, I’ll check those out. I also found this while I was searching: It allows you to locate and track LTE basestations. May be cool for your next article. Nice LTE scanner link. I can’t really use it yet as there are no LTE signals in my country until next year. 
There is a test signal around, but I have no idea what part of the spectrum it is in! EDIT: Just realized there are LTE signals around, but they’re all in the 1.8 GHz region. hey.. i have gnuradio 3.6.5 installed on ubuntu 12.04..i m trying to install airprobe.. everything works fine according to this tutorial till the point i try to make gsm receiver… i got the following error /usr/bin/ld: i386 architecture of input file `decoder/.libs/libdecoder.a(GSM660Tables.o)’ is incompatible with i386:x86-64 output collect2: ld returned 1 exit status make[4]: *** [_gsm.la] Error 1 make[4]: Leaving directory `/home/a/airprobe/gsm-receiver/src/lib’ make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/home/a/airprobe/gsm-receiver/src/lib’ make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/home/a/airprobe/gsm-receiver/src’ make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/a/airprobe/gsm-receiver’ make: *** [all] Error 2 i m not sure about what this error is.. when i try to run gsm_receive.py file it again give an error which is probably due to the incomplete installation Traceback (most recent call last): File “./gsm_receive.py”, line 12, in import gsm File “../lib/gsm.py”, line 26, in _gsm = swig_import_helper() File “../lib/gsm.py”, line 18, in swig_import_helper import _gsm ImportError: No module named _gsm is there anybody who can help me with this problem..??? thanx in advance, regards ali
http://www.rtl-sdr.com/rtl-sdr-tutorial-analyzing-gsm-with-airprobe-and-wireshark/
CC-MAIN-2015-22
en
refinedweb
[adding bug-gnulib; replies can drop libvir-list]

- if (command_ret != WEXITSTATUS (0)) {
+ if (WEXITSTATUS(command_ret) != 0) {

ACK. By the way, what was the compilation failure?

Thanks, pushed. The compilation failure was:

virsh.c:8605: error: lvalue required as unary '&' operand

Which seems weird, but this patch really did fix it. :)

Aha - the darwin <sys/wait.h> contains:

#if defined(_POSIX_C_SOURCE) && !defined(_DARWIN_C_SOURCE)
#define _W_INT(i) (i)
#else
#define _W_INT(w) (*(int *)&(w)) /* convert union wait to int */
#define WCOREFLAG 0200
#endif /* (_POSIX_C_SOURCE && !_DARWIN_C_SOURCE) */
...
#if __DARWIN_UNIX03
#define WEXITSTATUS(x) ((_W_INT(x) >> 8) & 0x000000ff)
#else /* !__DARWIN_UNIX03 */
#define WEXITSTATUS(x) (_W_INT(x) >> 8)
#endif /* !__DARWIN_UNIX03 */
...unsigned; };

Obviously, the Darwin folks are (mistakenly) assuming that you would only ever use WEXITSTATUS with a 'union wait' lvalue; in which case, (*(int*)&(0)) is indeed invalid C (notice that they do the right thing if you request POSIX compliance with _POSIX_C_SOURCE, but since gnulib [rightfully] wants to expose and take advantage of system extensions, we can't define _POSIX_C_SOURCE). Since WEXITSTATUS should be usable on constants; it is a bug in their headers, and one that Gnulib should be able to work around.

-- Eric Blake eblake redhat com +1-801-349-2682
Libvirt virtualization library
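As an aside that is not part of the original thread: the portable pattern the patch settles on can be shown with a small, self-contained sketch. The program below is purely illustrative (the child command path /bin/true is an assumption); it demonstrates applying WEXITSTATUS to the status variable filled in by waitpid() and comparing the resulting plain int to a constant, which works even on headers like Darwin's that cast the macro argument through a 'union wait' lvalue.

/* Illustrative sketch: test a child's exit status the portable way. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                       /* child: run /bin/true */
        execl("/bin/true", "true", (char *) NULL);
        _exit(127);                       /* exec failed */
    }

    int status = 0;
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        return 1;
    }

    /* Portable: unpack the status first, then compare the plain int.
     * Applying WEXITSTATUS to a literal such as 0 is what broke on Darwin. */
    if (WIFEXITED(status) && WEXITSTATUS(status) != 0)
        printf("child failed with exit code %d\n", WEXITSTATUS(status));
    else
        printf("child exited normally\n");
    return 0;
}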
https://www.redhat.com/archives/libvir-list/2010-September/msg00268.html
CC-MAIN-2015-22
en
refinedweb
Class hierarchy: java.lang.Object, extended by org.springframework.batch.support.DefaultPropertyEditorRegistrar, extended by org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper. public class BeanWrapperFieldSetMapper extends DefaultPropertyEditorRegistrar implements FieldSetMapper. A FieldSetMapper implementation based on bean property paths. The DefaultFieldSet to be mapped should have field name meta data corresponding to bean property paths in a prototype instance of the desired type. The prototype instance is initialized either by referring to the object by bean name in the enclosing BeanFactory, or by providing a class to instantiate reflectively. Nested property paths, including indexed properties in maps and collections, can be referenced by the DefaultFieldSet names. They will be converted to nested bean properties inside the prototype. The DefaultFieldSet and the prototype are thus tightly coupled by the fields that are available and those that can be initialized. If some of the nested properties are optional (e.g. collection members) they need to be removed by a post processor. Method: Object mapLine(FieldSet fs) maps the DefaultFieldSet to an object retrieved from the enclosing Spring context, or to a new instance of the required type if no prototype is available. Specified by: mapLine in interface FieldSetMapper. Throws: NotWritablePropertyException - if the DefaultFieldSet contains a field that cannot be mapped to a bean property; BindingException - if there is a type conversion or other error (if the DataBinder from createBinder(Object) has errors after binding). See Also: FieldSetMapper.mapLine (org.springframework.batch.item.file.mapping)
http://docs.spring.io/spring-batch/1.0.x/apidocs/org/springframework/batch/item/file/mapping/BeanWrapperFieldSetMapper.html
CC-MAIN-2015-22
en
refinedweb
There are an increasing number of hybrid parallel applications that mix distributed and shared memory parallelism. To know how to support that model, one needs to know what level of threading support is guaranteed by the MPI implementation. There are 4 ordered levels of possible threading support described by mpi::threading::level. At the lowest level, you should not use threads at all; at the highest level, any thread can perform MPI calls. If you want to use multi-threading in your MPI application, you should indicate in the environment constructor your preferred threading support. Then probe the level the library actually provides, and decide what you can do with it (it could be nothing, then aborting is a valid option): #include <boost/mpi/environment.hpp> #include <boost/mpi/communicator.hpp> #include <iostream> namespace mpi = boost::mpi; namespace mt = mpi::threading; int main() { mpi::environment env(mt::funneled); if (env.thread_level() < mt::funneled) { env.abort(-1); } mpi::communicator world; std::cout << "I am process " << world.rank() << " of " << world.size() << "." << std::endl; return 0; }
http://www.boost.org/doc/libs/1_58_0/doc/html/mpi/python.html
CC-MAIN-2015-22
en
refinedweb
Need in struts I want two struts.xml files. Where u can specify that xml files location and which tag u specified Struts Tag Lib - Struts Struts Tag Lib Hi i am a beginner to struts. i dont have... use the custom tag in a JSP page. You can use more than one taglib directive..., sun, and sunw etc. For more information on Struts visit to : http Struts Books building large-scale web applications. The Struts Framework: Practical... for more experienced readers eager to exploit Struts to the fullest.  ...-Controller (MVC) design paradigm. Want to learn Struts and want html tag - Struts struts html tag Hi, the company I work for use an "id" tag on their tag like this: How can I do this with struts? I tried and they don't work str tag - Struts Struts tag I am new to struts, I have created a demo struts application in netbean, Can any body please tell me what are the steps to add new tags to any jsp page C:Redirect Tag - Struts C:Redirect Tag I am trying to use the jstl c:redirect tag in conjuction with a struts 2 action. I am trying to do something like What I am... to the true start page of the web application. In performing the redirect, I want Struts Articles the protection framework with the Struts tag library so that the framework implementation.... The first thing we want to do is set up the Struts... to any presentation implementation. Developing JSR168 Struts Am newly developed struts applipcation,I want to know how to logout the page using the strus Please visit the following link: Struts Login Logout Application - Framework , Struts : Struts Frame work is the implementation of Model-View-Controller...Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary have one textbox for date field.when i selected date from datecalendar then the corresponding date will appear in textbox.i want code for this in struts.plz help me How to Use Struts 2 token tag - Struts How to Use Struts 2 token tag Hi , I want to stop re-submiiting the page by pressing F5, or click 'back n press submit' button again, i want to use 'token' tag for it, but not able to find out how does it works, I' ve put tag struts - Struts struts shud i write all the beans in the tag of struts-config best Struts material - Struts best Struts material hi , I just want to learn basic Struts.Please send me the best link to learn struts concepts Hi Manju, Read for more and more information with example at: http have already define path in web.xml i m sending -- ActionServlet... for more information. Thanks action tag - Struts action tag Is possible to add parameters to a struts 2 action tag? And how can I get them in an Action Class. I mean: xx.jsp Thank you java struts error - Struts the problem what you want in details. For more information,Tutorials and examples...java struts error my jsp page is post the problem... is I Struts - Struts Struts Hello I like to make a registration form in struts inwhich.... Struts1/Struts2 For more information on struts visit to : struts <html:select> - Struts struts i am new to struts.when i execute the following code i am... in the Struts HTML FORM tag as follows: Thanks... DealerForm[30]; int i=0; while(rs.next()) { DealerForm dform.. - Struts Hi.. Hi, I am new in struts please help me what data write.../struts/ Thanks. struts-tiles.tld: This tag library provides tiles... of output text, and application flow management. struts-nested.tld: This tag library IMP - Struts IMP Hi... 
I want to have the objective type questions(multiple choices) with answers for struts. kindly send me the details its urgent for me Thanku Ray Hi friend, Visit for more information Struts validation Struts validation I want to put validation rules on my project.But after following all the rules I can't find the result. I have extended... that violate the rules for struts automatic validation.So how I get the solution struts - Struts included third sumbit button on second form tag and i given corresponding action...struts hi.. i have a problem regarding the webpage in the webpage i have 3 submit buttons are there.. in those two are similar and another one + java - Struts java i want to know how to use struts in myEclipse using an example i want to know about the database connection using my eclipse ? pls send me the reply Hi friend, Read for more information. http Struts Tutorials tag libraries. This tutorial provides a hands-on approach to developing Struts... libraries introduced in Struts made JSP pages more readable and maintainable... application development using Struts. I will address issues with designing Action logic:iterate tag struts logic:iterate tag Hi All, I am writing a look up jsp which... to go inside the tag. Here is the stack trace I am getting. [#|2010-10-27T00... Hi, I checked but its not the problem seems. Thanks java - Struts java Hi, I want full code for login & new registration page in struts 2 please let me know as soon as possible. thanks,. Hi friend, I am sending you a link. This link will help you. Please visit for more example on struts - Struts example on struts i need an example on Struts, any example. Please help me out. Hi friend, For more information,Tutorials and Examples on Struts visit to : Thanks Beginners Stuts tutorial. had seen how we can improvise our own MVC implementation without using Struts... McLanahan. What is more, Craig is also the Implementation Architect for Sun... press-2003) in favour of adopting Struts. To paraphrase.... **i 1)Have you used struts tag libraries in your application? 2)What are the various types of tag libraries in struts? Elaborate each of them? 3)How can you implement custom tag libraries in your application struts validations - Struts struts validations hi friends i an getting an error in tomcat while running the application in struts validations the error in server... --------------------------------- Visit for more s property tag Struts s property tag Internationalization using struts - Struts Internationalization using struts Hi, I want to develop a web application in Hindi language using Struts. I have an small idea... to convert hindi characters into suitable types for struts. I struck here please Struts Architecture - Struts Struts Architecture Hi Friends, Can u give clear struts architecture with flow. Hi friend, Struts is an open source... developers to adopt an MVC architecture. Struts framework provides three key Based on struts Upload - Struts Based on struts Upload hi, i can upload the file in struts but i want the example how to delete uploaded file.Can you please give the code struts internationalisation - Struts struts internationalisation hi friends i am doing struts iinternationalistaion in the site... code to solve the problem : For more information on struts struts hi i would like to have a ready example of struts using "action class,DAO,and services" for understanding.so please guide for the same. 
thanks Please visit the following link: Struts Tutorials Struts 1 Tutorial and example programs article Aggregating Actions in Struts , I have given a brief idea of how...-fledged practical example of each of the types presented in Part I namely... nested beans easily with the help of struts nested tag library Struts Alternative /PDF/more Automatic serialization of the ActionErrors, Struts... implementation of Struts, which was released as a 1.0 product approximately one year later... and more development tools provided support for building Struts based applications Struts - JSP-Interview Questions Struts Tag bean:define What is Tag bean:define in struts? Hello,The Tag <bean:define> is from Struts 1. So, I think you must be working on the Struts 1 project.Well here is the description of <bean help - Struts attribute "namespace" in Tag For read more information to visit this link...help Dear friends I visit to this web site first time.When studying on struts2.0 ,i have a error which can't solve by myself. Please give me java - Struts java hi i m working on a project in which i have to create a page in which i have to give an feature in which one can create a add project template in which one can add modules and inside those modules some more options please struts struts hi i would like to have a ready example of struts using"action class,DAO,and services" so please help me Struts Guide ? - - Struts Frame work is the implementation of Model-View-Controller (MVC) design... Struts Guide - This tutorial is extensive guide to the Struts Framework Struts Quick Start Struts Quick Start Struts Quick Start to Struts technology In this post I... to the view (jsp page). Struts provide many tag libraries for easy construction... of the application fast. Read more: Struts Quick Start Doubts on Struts 1.2 - Struts Doubts on Struts 1.2 Hi, I am working in Struts 1.2. My requirement..., I am sending you a link. I hope that, this link will help you. Please visit for more information.
http://roseindia.net/tutorialhelp/comment/4256
CC-MAIN-2015-22
en
refinedweb
10 June 2010 20:48 [Source: ICIS news] WASHINGTON (ICIS news)--The US refining and petrochemical industries have failed to incorporate lessons learned from fatal accidents, and tougher laws and stiffer penalties are needed to forestall further disasters, a top US safety official said on Thursday. Jordan Barab, deputy assistant secretary at the US Labor Department and head of the department’s Occupational Safety and Health Administration (OSHA), told a Senate panel that a recent safety inspection survey of major refining and petrochemical facilities produced results that were “deeply troubling”. “Not only are we finding a significant lack of compliance during our inspections, but time and again, our inspectors are finding the same violations in multiple refineries, including those with common ownership, and sometimes even in different units in the same refinery,” Barab said. “This is a clear indication that essential safety lessons are not being communicated within the industry,” he said. “We are particularly disturbed to find even refineries that have already suffered serious incidents or received major OSHA citations making the same mistakes again,” Barab told the Senate Subcommittee on Employment and Workplace Safety. Barab cited the 20 April explosion and fire that killed 11 workers on the BP offshore oil rig and the 2005 explosion and fire at the BP Texas City, Texas, refinery that killed 15 workers as just two in a rash of fatal accidents in recent years. “This failure to learn from earlier mishaps has exacted an alarming toll in human lives and suffering,” Barab said. “In the last five years alone, OSHA has counted over 20 serious incidents, many resulting in deaths and injuries in refineries across the country,” he. He said that industry must do a better job of institutionalising systems for learning from mistakes, so it does not continue to repeat the same mistakes at the expense of workers’ lives. “Reform in the management systems of companies that own, operate or provide services to petrochemical operations is needed and is needed now,” he said. He said OSHA will step up its inspection and enforcement activities and will increase cooperation with other federal agencies, such as the Environmental Protection Administration (EPA), to target worker safety in the petrochemicals industry. However, Barab said that OSHA and other federal agencies need greater regulatory authority and stiffer penalty levels to improve workplace safety in the sector. “We need to pass the Protecting America’s Workers Act (PAWA), which would significantly increase OSHA’s ability to protect workers, and specifically workers in refineries and chemical plants,” he said. He said the legislation, which was introduced in Congress in August last year and is pending before various committees, “would make meaningful and substantial changes to the Occupational Safety and Health Act that would increase OSHA’s civil and criminal penalties for safety and health violations”. If that legislation were to become law, he said, it would “make us much more able to issue significant and meaningful penalties” before other workplace disasters occur. He also said OSHA needed authority to require changes in what it determined were hazardous plant conditions even while contested enforcement actions were pending and before they were resolved in agency proceedings or the courts. 
Testifying for refining and petrochemical operators, Charles Drevna argued that the accident rate in the process industries was lower than almost any other manufacturing industry. Drevna, president of the National Petrochemical & Refiners Association (NPRA), told the panel that the industry had made safety improvements since the 2005 accident at BP's Texas City, Texas, refinery. He said there had been improvements in facility siting of permanent and temporary structures to ensure greater safety, a constant emphasis on an equal balance between personnel and process safety and the integration of plant safety management and operational reliability. He agreed with an earlier statement by Barab that "OSHA officials need to find a better way to target problem refineries so that we aren't wasting our time or your time inspecting refineries that don't have major problems." "NPRA wholeheartedly endorses this common-sense approach, which is long overdue," Drevna said.
http://www.icis.com/Articles/2010/06/10/9366905/us-calls-for-tougher-chemical-process-safety-rules-higher-fines.html
CC-MAIN-2015-22
en
refinedweb
- OSI-Approved Open Source (89) - GNU General Public License version 2.0 (49) - GNU Library or Lesser General Public License version 2.0 (15) - BSD License (12) - GNU General Public License version 3.0 (9) - Eclipse Public License (3) - MIT License (3) - Apache License V2.0 (2) - IBM Public License (2) - Qt Public License (2) - GNU Library or Lesser General Public License version 3.0 (1) - Mozilla Public License 1.1 (1) - Python Software Foundation License (1) - University of Illinois/NCSA Open Source License (1) - Other License Brainfuck Center Brainfuck Center is an IDE and compiler for your Brainfuck scripts. It includes a full debugger with step-by-step debugging and much more. MDI window!2 weekly downloads C++ call graph analysis Analyse structure of C++ projects, compiled with a patched version of g++ (patch included). Edges in the output graph represent function calls. Vertices can be functions, classes, namespaces, files or whole projects (ie, libraries).0 weekly downloads ConfigTools A Java Application for All want to do patch and modify your config file on your production of release0 Dynamic Probe Class Library Dynamic Probe Class Library (DPCL) is an object based C++ class library that provides the necessary infrastructure to allow tool developers and sophisticated tool users to build parallel and serial tools through technology called dynamic instrumentation.5.. Glimpses Glimpses is a profiling tool for understanding program memory behavior and evaluating program regions for execution on the SPEs in a CELL processor. The results of the profiling can be viewed in an interactive Visualizer7 weekly downloads Graal Profiling Data Viewer Graal collects profiling information when interpreting Java Bytecode. This tool gives the user the ability to inspect these Graal internal datastructures easily. Installation instructions are provided in the wiki.2 weekly downloads Graphviewer Generic data visualizer tool to display output of command line data sources.1 weekly downloads Hiberbean STES This project is a STES (Stragegic Testing Evaluation Tool) for comparing the two ORMs (Object Relational Mapping) tools: Hibernate 3.1 and EJB (Enterprise Java Beans) 2.0.0Slice This is a dynamic slicing tool for Java programs. The tool modifies the Kaffe virtual machine to collect the execution trace, and compress the trace on-the-fly. Please proceed to the tool's website for download
http://sourceforge.net/directory/development/profilers/os%3Amodern_oses/?sort=name
CC-MAIN-2015-22
en
refinedweb
Michael DeHaan wrote: > Cole Robinson wrote: >> I've taken a stab at getting remote guest creation up and running >> for virt-install. Most of the existing code translates well to the >> remote case, but the main issue is storage: how does the user tell >> us where to create and find existing storage/media, and how can we >> usefully validate this info. The libvirt storage API is the lower >> level mechanism that allows this fun stuff to happen, its really >> just a matter of choosing a sane interface for it all. >> >> The two interface problems we have are: >> >> - Changes to VirtualDisk to handle storage apis >> - Changes to virt-install cli to allow specifying storage info >> >> For VirtualDisk, I added two options >> - volobj : a libvirt virStorageVol instance >> - volinstall : a virtinst StorageVolume instance >> > > Do you have examples of what this might look like for VirtualDisk? I'm > interested in teaching koan how to install on remote hosts. I've attached a pretty ugly script I was using just to basically test this stuff at first. It has hardcoded values specific to my machine so it won't work if you run it. However it has an example that covers both of the above cases. Please read my below comments though regarding the libvirt storage apis. > >> If the user wants the VirtualDisk to use existing storage, they >> will need to query libvirt for the virStorageVol and pass this >> to the VirtualDisk, which will take care of the rest. >> > Basically the use cases I care about are: > > Install to a specific path and/or filename > Install to an existing partition > Install to a new partition in an existing LVM volume group. > > As koan needed to do this before the storage stuff (IIRC) I have code in > koan to manage LVM. I'll need to keep it around for support of RHEL > 5.older and F8-previous, so if the new stuff works relatively the same > that would be great. > > Basically if I can pass in a path or LVM volume group name, I'm happy. > Needing to grok any XML would make me unhappy :) There won't be any need to mess with xml here. <snip> >> >> The next piece is how the interface changes for virt-install. >> Here are the storage use cases we now have: >> >> 1) use existing non-managed (local) disk >> - signified by --file /some/real/path >> >> 2) create non-managed (local) disk >> - signified by --file /some/real/dir/idontexist >> > > What is "managed vs unmanaged" here? Managed = Libvirt storage APIs. The libvirt storage APIs are how we know what exists on remote systems, and how we tell remote systems to create this file with this format, or that partition with that size, etc. The 'pool' and 'volume' terminology is all part of this. The gist of it is: A 'pool' is some resource that can be carved up into units to be used directly by VMs. Pool types are a directory, nfs mount, filesystem mount (all carved into flat files), lvm volgroup, raw disk devices (carved into smaller blk devs), and iscsi (which creation isn't supported on). A 'volume' is the carved up unit, directly usable as storage for a VM. All this remote guest creation stuff won't 'just work' if the user passes the correct parameters, the remote host will have to be configured in advance to teach libvirt about what storage is available. This could either be done on the command line using virsh pool-create-as, or use virt-manager and use wizards to do all this fun stuff (not posted yet. 95% completed and working, just hasn't been polished up, and it's dependent on some not committed virtinst work). 
We should probably have libvirt set up a default storage pool for /var/lib/libvirt/images so that there would be a typical out of the box option for users. - Cole import virtinst from virtinst import VirtualDisk as vd from virtinst.Storage import StoragePool as sp from virtinst.Storage import StorageVolume as sv import logging import sys import libvirt # Set debug logging to print root_logger = logging.getLogger() root_logger.setLevel(logging.DEBUG) streamHandler = logging.StreamHandler(sys.stderr) streamHandler.setLevel(logging.DEBUG) root_logger.addHandler(streamHandler) LOCAL_CONN = "qemu:///system" REMOTE_CONN = "qemu+ssh://localhost/system" POOL = "default" VOL = "test.img" GUESTNAME="testguest" GOODSIZE=5*1024*1024*1024 BADSIZE=10000*1024*1024*1024 print "open conn" localconn = libvirt.open(LOCAL_CONN) print "get pool" pool = localconn.storagePoolLookupByName(POOL) print "get vol" vol = pool.storageVolLookupByName(VOL) print "get pooltype" pooltype = virtinst.util.get_xml_path(pool.XMLDesc(0), "/pool/@type") print "get volclass" volclass = sp.get_volume_for_pool(pooltype) print "create volclass instance" volinst = volclass(name="testguest", pool=pool, capacity=GOODSIZE) def check_disk(disk): print "\nis_conflict_disk:" print d.is_conflict_disk(localconn) print "\nis_size_conflict:" print d.is_size_conflict() print "\nget_xml_config()" print d.get_xml_config("hda") print "\n" print "\n\nCreating volobj disk:" d = vd(volobj=vol) check_disk(d) print "\n\nCreating volinst disk:" d = vd(volinstall=volinst) check_disk(d)
https://www.redhat.com/archives/et-mgmt-tools/2008-July/msg00279.html
CC-MAIN-2015-22
en
refinedweb
iCelBlLayer Struct Reference. This is the Behaviour Layer itself. #include <behaviourlayer/bl.h> (Inheritance diagram for iCelBlLayer omitted.) Detailed Description: This is the Behaviour Layer itself. Definition at line 32 of file bl.h. Member Function Documentation: Create a new behaviour layer entity. The given name is specific to the BL implementation. It can be the name of a script for example. This function will also call entity->SetBehaviour() with the new behaviour. The name of this behaviour layer. The documentation for this struct was generated from the file behaviourlayer/bl.h.
http://crystalspace3d.org/cel/docs/online/api-1.2/structiCelBlLayer.html
CC-MAIN-2015-22
en
refinedweb
/* * _PPC_CPU_AFFINITY_H_ #define _PPC_CPU_AFFINITY_H_ /* * Just one hardware affinity set - the whole machine. * This allows us to give the pretense that PPC supports the affinity policy * SPI. The kernel will accept affinity hints but effectively ignore them. * Hence Universal Apps can use platform-independent code. */ static inline int ml_get_max_affinity_sets(void) { return 1; } /* * Return the single processor set. */ static inline processor_set_t ml_affinity_to_pset(__unused int affinity_num) { return processor_pset(master_processor); } #endif /* _I386_CPU_AFFINITY_H_ */ #endif /* KERNEL_PRIVATE */
http://opensource.apple.com/source/xnu/xnu-1504.9.17/osfmk/ppc/cpu_affinity.h
CC-MAIN-2015-22
en
refinedweb
ADJTIMEX adjtimex - tune kernel clock #include <sys/timex.h> int adjtimex(struct timex *buf); Ordinary users are restricted to a zero value for mode. Only the superuser may set any parameters. On failure, adjtimex returns -1 and sets errno. adjtimex is Linux specific and should not be used in programs intended to be portable. There is a similar but less general call adjtime in SVr4.
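To make the read-only use that ordinary users are allowed more concrete, here is a small illustrative sketch (it is not from the original page). It assumes Linux with glibc; the field names follow struct timex as declared in <sys/timex.h>, and the units of offset depend on the clock status bits, so treat the printed values as indicative only.

/* Query the kernel clock parameters without modifying them: modes = 0 is a
 * read-only request, so no superuser privileges are required. */
#include <sys/timex.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    struct timex tx;
    memset(&tx, 0, sizeof tx);      /* tx.modes == 0: query only */

    int state = adjtimex(&tx);      /* returns the clock state, or -1 on error */
    if (state == -1) {
        perror("adjtimex");
        return 1;
    }

    printf("clock state: %d\n", state);
    printf("offset: %ld  freq: %ld  tick: %ld\n",
           (long) tx.offset, (long) tx.freq, (long) tx.tick);
    return 0;
}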
http://wiki.wlug.org.nz/adjtimex(2)
CC-MAIN-2015-22
en
refinedweb
{- | Asynchronous exceptions can occur during the construction of a lazy data structure. They are represented by a lazy data structure itself. TODO: * Is it reasonable, that many functions match the exception lazily? Or is lazy decoupling an operation that shall always be done explicitly? *, ) -- * Plain monad {- | Contains a value and a reason why the computation of the value of type @a@ was terminated. Imagine @a@ as a list type, and an according operation like the 'readFile' operation. If the exception part is 'Nothing' then the value could be constructed regularly. If the exception part is 'Just' then the value could not be constructed completely. However you can read the result of type @a@ lazily, even if an exception occurs while it is evaluated. If you evaluate the exception part, then the result value is certainly computed completely. However, we cannot provide. -} data Exceptional e a = Exceptional {exception :: Maybe e, result :: a} deriving Show {- | Create an exceptional value without exception. -} pure :: a -> Exceptional e a pure = Exceptional Nothing {- | Create an exceptional value with exception. -} broken :: e -> a -> Exceptional e a broken e = Exceptional (Just e) fromSynchronous :: a -> Sync.Exceptional e a -> Exceptional e a fromSynchronous deflt x = force $ case x of Sync.Success y -> Exceptional Nothing y Sync.Exception e -> Exceptional (Just e) deflt fromSynchronousNull :: Sync.Exceptional e () -> Exceptional e () fromSynchronousNull = fromSynchronous () fromSynchronousMonoid :: Monoid a => Sync.Exceptional e a -> Exceptional e a fromSynchronousMonoid = fromSynchronous mempty toSynchronous :: Exceptional e a -> Sync.Exceptional e a toSynchronous (Exceptional me a) = maybe (Sync.Success a) Sync.Exception me {- | -- ** handling of special result types {- | This is an example for application specific handling of result values. Assume you obtain two lazy lists say from 'readFile' and you want to zip their contents. If one of the stream readers emits an exception, we quit with that exception. If both streams have throw an exception at the same file position, the exception of the first stream is propagated. -} zipWith :: (a -> b -> c) -> Exceptional e [a] -> Exceptional e [b] -> Exceptional e [c] zipWith f (Exceptional ea a0) (Exceptional eb b0) = let recourse (a:as) (b:bs) = fmap (f a b :) (recourseF as bs) recourse as _ = Exceptional (case as of [] -> mplus ea eb; _ -> eb) [] recourseF as bs = force $ recourse as bs in recourseF a0 b0 infixr 1 `append`, `continue`, `maybeAbort` {- | This is an example for application specific handling of result values. Assume you obtain two lazy lists say from 'readFile' and you want to append their contents. If the first stream ends with an exception, this exception is kept and the second stream is not touched. If the first stream can be read successfully, the second one is appended until stops. 'append' is less strict than the 'Monoid' method 'mappend' instance. -} append :: Monoid a => Exceptional e a -> Exceptional e a -> Exceptional e a append (Exceptional ea a) b = fmap (mappend a) $ continue ea b continue :: Monoid a => Maybe e -> Exceptional e a -> Exceptional e a continue ea b = force $ case ea of -- Just e -> throwMonoid e Just _ -> Exceptional ea mempty Nothing -> b -} {-# INLINE force #-}. -} {-# INLINE traverse #-} traverse :: Applicative f => (a -> f b) -> Exceptional e a -> f (Exceptional e b) traverse f = sequenceA . 
fmap f {-# INLINE sequenceA #-} sequenceA :: Applicative f => Exceptional e (f a) -> f (Exceptional e a) sequenceA ~(Exceptional e a) = liftA (Exceptional e) a {-# INLINE mapM #-} mapM :: Monad m => (a -> m b) -> Exceptional e a -> m (Exceptional e b) mapM f = sequence . fmap f {-# INLINE sequence #-} sequence :: Monad m => Exceptional e (m a) -> m (Exceptional e a) sequence ~(Exceptional e a) = liftM (Exceptional e) a {- instance Applicative (Exceptional e) where pure =) {- instance Applicative m => Applicative (ExceptionalT e m) where pure = ExceptionalT . pure . pure ExceptionalT f <*> ExceptionalT x = ExceptionalT (fmap (<*>) f <*> x) instance Monad m => Monad (ExceptionalT e m) where return = ExceptionalT . return . return x0 >>= f = ExceptionalT $ do Exceptional ex x <- runExceptionalT x0 Exceptional ey y <- runExceptionalT (f x) return $ Exceptional (ex ++ ey) {- | Repeat an action with synchronous exceptions until an exception occurs. Combine all atomic results using the @bind@ function. It may be @cons = (:)@ and @empty = []@ for @b@ being a list type. The @defer@ function may be @id@ or @unsafeInterleaveIO@ for lazy read operations. The exception is returned as asynchronous exception. -} manySynchronousT :: (Monad m) => (m (Exceptional e b) -> m (Exceptional e b)) {- ^ @defer@ function -} -> (a -> b -> b) {- ^ @cons@ function -} -> b {- ^ @empty@ -} -> Sync.ExceptionalT e m a {- ^ atomic action to repeat -} -> m (Exceptional e b) manySynchronousT defer cons empty action = let recourse = liftM force $ defer $ do r <- Sync.tryT action case r of Sync.Exception e -> return (Exceptional (Just e) empty) Sync.Success x -> liftM (fmap (cons x)) recourse in recourse {-#.viewL@. -} processToSynchronousT_ :: (Monad m) => (b -> Maybe (a,b)) {- ^ decons function -} -> (a -> Sync.ExceptionalT e m ()) {- ^ action that is run for each element fetched from @x@ -} -> Exceptional e b {- ^ value @x@ of type @b@ with asynchronous exception -} -> Sync.ExceptionalT e m () processToSynchronousT_ decons action (Exceptional me x) = let recourse b0 = maybe (maybe (return ()) Sync.throwT me) (\(a,b1) -> action a >> recourse b1) (decons b0) in recourse x
http://hackage.haskell.org/package/explicit-exception-0.1.5/docs/src/Control-Monad-Exception-Asynchronous.html
CC-MAIN-2015-22
en
refinedweb
Provide control over an open file. #include <sys/types.h> #include <unistd.h> #include <fcntl.h> int fcntl( int fildes, int cmd, ... ); Library: libc. Use the -l c option to qcc to link against this library. This library is usually included automatically. The file status flags (see open() for more detailed information), the file access modes (O_RDONLY, O_WRONLY, O_RDWR) and the only defined file descriptor flag, FD_CLOEXEC, are described in tables on the original page, along with the list of functions that ignore locks. If a lock can't be set, fcntl() returns immediately. The flock structure contains at least the members l_type, l_whence, l_start, l_len and l_pid. fcntl() returns -1 if an error occurred (errno is set); the successful return value depends on the request type specified by cmd.
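As a small illustrative sketch of typical usage (this example is not from the QNX documentation; the file name example.dat and the particular commands chosen are assumptions), the program below reads the file status flags with F_GETFL and then places a non-blocking write lock on the first 100 bytes with F_SETLK.

/* Open a file, inspect its access mode with F_GETFL, then try to write-lock
 * its first 100 bytes with F_SETLK (which fails immediately instead of
 * blocking if the lock cannot be set). */
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    int fd = open("example.dat", O_RDWR | O_CREAT, 0666);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    int flags = fcntl(fd, F_GETFL);                /* file status flags */
    if (flags == -1) {
        perror("fcntl(F_GETFL)");
        return 1;
    }
    printf("opened read-write: %s\n",
           ((flags & O_ACCMODE) == O_RDWR) ? "yes" : "no");

    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_type = F_WRLCK;                           /* write (exclusive) lock */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 100;

    if (fcntl(fd, F_SETLK, &fl) == -1) {           /* non-blocking lock */
        perror("fcntl(F_SETLK)");
        return 1;
    }
    printf("lock acquired\n");

    close(fd);
    return 0;
}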
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.lib_ref/topic/f/fcntl.html
CC-MAIN-2022-27
en
refinedweb
Measuring Performance with PStats QUICK INTRODUCTION PStats is Panda’s built-in performance analysis tool. It can graph frame rate over time, and can further graph the work spent within each frame into user-defined subdivisions of the frame (for instance, app, cull and draw), and thus can be an invaluable tool in identifying performance bottlenecks. It can also show frame-based data that reflects any arbitrary quantity other than time intervals, for instance, texture memory in use or number of vertices drawn. The performance graphs may be drawn on the same computer that is running the Panda client, or they may be drawn on another computer on the same LAN, which is useful for analyzing fullscreen applications. The remote computer need not be running the same operating system as the client computer. To use PStats, you first need to build the PStats server program, which is part of the Pandatool tree (it’s called pstats.exe on Windows, and pstats on a Unix platform). Start by running the PStats server program (it runs in the background), and then start your Direct/Panda client with the following in your startup code: // Includes: pStatClient.h if (PStatClient::is_connected()) { PStatClient::disconnect(); } string host = ""; // Empty = default config var value int port = -1; // -1 = default config var value if (!PStatClient::connect(host, port)) { std::cout << "Could not connect to PStat server." << std::endl; } Or if you’re running pview, press shift-S. Any of the above will contact your running PStats server program, which will proceed to open a window and start a running graph of your client’s performance. If you have multiple computers available for development, it can be advantageous to run the pstats server on a separate computer so that the processing time needed to maintain and update the pstats user interface isn’t taken from the program you are profiling. If you wish to run the server on a different machine than the client, start the server on the profiling machine and add the following variable to your client’s Config.prc file, naming the hostname or IP address of the profiling machine: pstats-host profiling-machine-ip-or-hostname If you are developing Python code, you may be interested in reporting the relative time spent within each Python task (by subdividing the total time spent in Python, as reported under “Show Code”). To do this, add the following lines to your Config.prc file before you start ShowBase: task-timer-verbose 1 pstats-tasks 1 Caveats OpenGL is asynchronous, which means that function calls aren’t guaranteed to execute right away. This can make performance analysis of OpenGL operations difficult, as the graphs may not accurately reflect the actual time that the GPU spends doing a certain operation. However, if you wish to more accurately track down rendering bottlenecks, you may set the following configuration variable: pstats-gpu-timing 1 This will enable a new set of graphs that use timer queries to measure how much time each task is actually taking on the GPU. If your card does not support it or does not give reliable timer query information, a crude way of working around this and getting more accurate timing breakdown, you can set this: gl-finish 1 Setting this option forces Panda to call glFinish() after every major graphics operation, which blocks until all graphics commands sent to the graphics processor have finished executing. 
This is likely to slow down rendering performance substantially, but it will make PStats graphs more accurately reflect where the graphics bottlenecks are. THE PSTATS SERVER (The user interface) The GUI for managing the graphs and drilling down to view more detail is entirely controlled by the PStats server program. At the time of this writing, there are two different versions of the PStats server, one for Unix and one for Windows, both called simply pstats. The interfaces are similar but not identical; the following paragraphs describe the Windows version. When you run pstats.exe, it adds a program to the taskbar but does not immediately open a window. The program name is typically “PStats 5185”, showing the default PStats TCP port number of 5185; see “HOW IT WORKS” below for more details about the TCP communication system. For the most part you don’t need to worry about the port number, as long as server and client agree (and the port is not already being used by another application). Each time a client connects to the PStats server, a new monitor window is created. This monitor window owns all of the graphs that you create to view the performance data from that particular connection. Initially, a strip chart showing the frame time of the main thread is created by default; you can create additional graphs by selecting from the Graphs pulldown menu. Time-based Strip Charts This is the graph type you will use most frequently to examine performance data. The horizontal axis represents the passage of time; each frame is represented as a vertical slice on the graph. The overall height of the colored bands represents the total amount of time spent on each frame; within the frame, the time is further divided into the primary subdivisions represented by different color bands (and labeled on the left). These subdivisions are called “collectors” in the PStats terminology, since they represent time collected by different tasks. Normally, the three primary collectors are App, Cull, and Draw, the three stages of the graphics pipeline. Atop these three colored collectors is the label “Frame”, which represents any remaining time spent in the frame that was not specifically allocated to one of the three child collectors (normally, there should not be significant time reported here). The frame time in milliseconds, averaged over the past three seconds, is drawn above the upper right corner of the graph. The labels on the guide bars on the right are also shown in milliseconds; if you prefer to think about a target frame rate rather than an elapsed time in milliseconds, you may find it useful to select “Hz” from the Units pulldown menu, which changes the time units accordingly. The running Panda client suggests its target frame rate, as well as the initial vertical scale of the graph (that is, the height of the colored bars). You can change the scale freely by clicking within the graph itself and dragging the mouse up or down as necessary. One of the horizontal guide bars is drawn in a lighter shade of gray; this one represents the actual target frame rate suggested by the client. The other, darker, guide bars are drawn automatically at harmonic subdivisions of the target frame rate. You can change the target frame rate with the Config.prc variable pstats-target-frame-rate on the client. You can also create any number of user-defined guide bars by dragging them into the graph from the gray space immediately above or below the graph. These are drawn in a dashed blue line. 
It is sometimes useful to place one of these to mark a performance level so it may be compared to future values (or to alternate configurations). The primary collectors labeled on the left might themselves be further subdivided, if the data is provided by the client. For instance, App is often divided into Show Code, Animation, and Collisions, where Show Code is the time spent executing any Python code, Animation is the time used to compute any animated characters, and Collisions is the time spent in the collision traverser(s). To see any of these further breakdowns, double-click on the corresponding colored label (or on the colored band within the graph itself). This narrows the focus of the strip chart from the overall frame to just the selected collector, which has two advantages. Firstly, it may be easier to observe the behavior of one particular collector when it is drawn alone (as opposed to being stacked on top of some other color bars), and the time in the upper-right corner will now reflect just the total time spent within just this collector. Secondly, if there are further breakdowns to this collector, they will now be shown as further colored bars. As in the Frame chart, the topmost label is the name of the parent collector, and any time shown in this color represents time allocated to the parent collector that is not accounted for by any of the child collectors. You can further drill down by double-clicking on any of the new labels; or double-click on the top label, or the white part of the graph, to return back up to the previous level. Value-based Strip Charts There are other strip charts you may create, which show arbitrary kinds of data per frame other than elapsed time. These can only be accessed from the Graphs pulldown menu, and include things such as texture memory in use and vertices drawn. They behave similarly to the time-based strip charts described above. Piano Roll Charts This graph is used less frequently, but when it is needed it is a valuable tool to reveal exactly how the time is spent within a frame. The PStats server automatically collects together all the time spent within each collector and shows it as a single total, but in reality it may not all have been spent in one continuous block of time. For instance, when Panda draws each display region in single-threaded mode, it performs a cull traversal followed by a draw traversal for each display region. Thus, if your Panda client includes multiple display regions, it will alternate its time spent culling and drawing as it processes each of them. The strip chart, however, reports only the total cull time and draw time spent. Sometimes you really need to know the sequence of events in the frame, not just the total time spent in each collector. The piano roll chart shows this kind of data. It is so named because it is similar to the paper music roll for an old- style player piano, with holes punched down the roll for each note that is to be played. The longer the hole, the longer the piano key is held down. (Think of the chart as rotated 90 degrees from an actual piano roll. A player piano roll plays from bottom to top; the piano roll chart reads from left to right.) Unlike a strip chart, a piano roll chart does not show trends; the chart shows only the current frame’s data. The horizontal axis shows time within the frame, and the individual collectors are stacked up in an arbitrary ordering along the vertical axis. 
The time spent within the frame is drawn from left to right; at any given time, the collector(s) that are active will be drawn with a horizontal bar. You can observe the CPU behavior within a frame by reading the graph from left to right. You may find it useful to select “pause” from the Speed pulldown menu to freeze the graph on just one frame while you read it. Note that the piano roll chart shows time spent within the frame on the horizontal axis, instead of the vertical axis, as it is on the strip charts. Thus, the guide bars on the piano roll chart are vertical lines instead of horizontal lines, and they may be dragged in from the left or the right sides (instead of from the top or bottom, as on the strip charts). Apart from this detail, these are the same guide bars that appear on the strip charts. The piano roll chart may be created from the Graphs pulldown menu. Additional threads If the panda client has multiple threads that generate PStats data, the PStats server can open up graphs for these threads as well. Each separate thread is considered unrelated to the main thread, and may have the same or an independent frame rate. Each separate thread will be given its own pulldown menu to create graphs associated with that thread; these auxiliary thread menus will appear on the menu bar following the Graphs menu. At the time of this writing, support for multiple threads within the PStats graph is largely theoretical and untested. Color and Other Optional Collector Properties If you do not specify a color for a particular collector, it will be assigned a random color at runtime. At present, the only way to specify a color is to modify panda/src/pstatclient/pStatProperties.cxx, and add a line to the table for your new collector(s). You can also define additional properties here such as a suggested initial scale for the graph and, for non-time-based collectors, a unit name and/or scale factor. The order in which these collectors are listed in this table is also relevant; they will appear in the same order on the graphs. The first column should be set to 1 for your new collectors unless you wish them to be disabled by default. You must recompile the client (but not the server) to reflect changes to this table. HOW TO DEFINE YOUR OWN COLLECTORS The PStats client code is designed to be generic enough to allow users to define their own collectors to time any arbitrary blocks of code (or record additional non-time-based data), from either the C++ or the Python level. The general idea is to create a PStatCollector for each separate block of code you wish to time. The name which is passed to the PStatCollector constructor is a unique identifier: all collectors that share the same name are deemed to be the same collector. Furthermore, the collector’s name can be used to define the hierarchical relationship of each collector with other existing collectors. To do this, prefix the collector’s name with the name of its parent(s), followed by a colon separator. For instance, PStatCollector(“Draw:Flip”) defines a collector named “Flip”, which is a child of the “Draw” collector, defined elsewhere. You can also define a collector as a child of another collector by giving the parent collector explicitly followed by the name of the child collector alone, which is handy for dynamically-defined collectors. For instance, PStatCollector(draw, “Flip”) defines the same collector named above, assuming that draw is the result of the PStatCollector(“Draw”) constructor. 
Once you have a collector, simply bracket the region of code you wish to time with collector.start() and collector.stop(). It is important to ensure that each call to start() is matched by exactly one call to stop(). If you are programming in C++, it is highly recommended that you use the PStatTimer class to make these calls automatically, which guarantees the correct pairing; the PStatTimer’s constructor calls start() and its destructor calls stop(), so you may simply define a PStatTimer object at the beginning of the block of code you wish to time. If you are programming in Python, you must call start() and stop() explicitly. When you call start() and there was another collector already started, that previous collector is paused until you call the matching stop() (at which time the previous collector is resumed). That is, time is accumulated only towards the collector indicated by the innermost start() .. stop() pair. Time accumulated towards any collector is also counted towards that collector’s parent, as defined in the collector’s constructor (described above). It is important to understand the difference between collectors nested implicitly by runtime start/stop invocations, and the static hierarchy implicit in the collector definition. Time is accumulated in parent collectors according to the statically-defined parents of the innermost active collector only, without regard to the runtime stack of paused collectors. For example, suppose you are in the middle of processing the “Draw” task and have therefore called start() on the “Draw” collector. While in the middle of processing this block of code, you call a function that has its own collector called “Cull:Sort”. As soon as you start the new collector, you have paused the “Draw” collector and are now accumulating time in the “Cull:Sort” collector. Once this new collector stops, you will automatically return to accumulating time in the “Draw” collector. The time spent within the nested “Cull:Sort” collector will be counted towards the “Cull” total time, not the “Draw” total time. If you wish to collect the time data for functions, a simple decorator pattern can be used below, as below: from panda3d.core import PStatCollector def pstat(func): collectorName = "Debug:%s" % func.__name__ if hasattr(base, 'custom_collectors'): if collectorName in base.custom_collectors.keys(): pstat = base.custom_collectors[collectorName] else: base.custom_collectors[collectorName] = PStatCollector(collectorName) pstat = base.custom_collectors[collectorName] else: base.custom_collectors = {} base.custom_collectors[collectorName] = PStatCollector(collectorName) pstat = base.custom_collectors[collectorName] def doPstat(*args, **kargs): pstat.start() returned = func(*args, **kargs) pstat.stop() return returned doPstat.__name__ = func.__name__ doPstat.__dict__ = func.__dict__ doPstat.__doc__ = func.__doc__ return doPstat To use it, either save the function to a file and import it into the script you wish to debug. Then use it as a decorator on the function you wish to time. A collection named Debug will appear in the Pstats server with the function as its child. from pstat_debug import pstat @pstat def myLongRunFunction(): """ This function does something long """ HOW IT WORKS (What’s actually happening) The PStats code is divided into two main parts: the client code and the server code. The PStats Client The client code is in panda/src/pstatclient, and is available to run in every Panda client unless it is compiled out. 
(It will be compiled out if OPTIMIZE is set to level 4, unless DO_PSTATS is also explicitly set to non-empty. It will also be compiled out if NSPR is not available, since both client and server depend on the NSPR library to exchange data, even when running the server on the same machine as the client.) The client code is designed for minimal runtime overhead when it is compiled in but not enabled (that is, when the client is not in contact with a PStats server), as well as when it is enabled (when the client is in contact with a PStats server). It is also designed for zero runtime overhead when it is compiled out. There is one global PStatClient class object, which manages all of the communications on the client side. Each PStatCollector is simply an index into an array stored within the PStatClient object, although the interface is intended to hide this detail from the programmer. Initially, before the PStatClient has established a connection, calls to start() and stop() simply return immediately. When you call PStatClient.connect(), the client attempts to contact the PStatServer via a TCP connection to the hostname and port named in the pstats- host and pstats-port Config.prc variables, respectively. (The default hostname and port are localhost and 5185.) You can also pass in a specific hostname and/or port to the connect() call. Upon successful connection and handshake with the server, the PStatClient sends a list of the available collectors, along with their names, colors, and hierarchical relationships, on the TCP channel. Once connected, each call to start() and stop() adds a collector number and timestamp to an array maintained by the PStatClient. At the end of each frame, the PStatClient boils this array into a datagram for shipping to the server. Each start() and stop() event requires 6 bytes; if the resulting datagram will fit within a UDP packet (1K bytes, or about 84 start/stop pairs), it is sent via UDP; otherwise, it is sent on the TCP channel. (Some fraction of the packets that are eligible for UDP, from 0% to 100%, may be sent via TCP instead; you can specify this with the pstats-tcp-ratio Config.prc variable.) Also, to prevent flooding the network and/or overwhelming the PStats server, only so many frames of data will be sent per second. This parameter is controlled by the pstats-max-rate Config.prc variable and is set to 30 by default. (If the packets are larger than 1K, the max transmission rate is also automatically reduced further in proportion.) If the frame rate is higher than this limit, some frames will simply not be transmitted. The server is designed to cope with missing frames and will assume missing frames are similar to their neighbors. The server does all the work of analyzing the data after that. The client’s next job is simply to clear its array and prepare itself for the next frame. The PStats Server The generic server code is in pandatool/src/pstatserver, and the GUI-specific server code is in pandatool/src/gtk-stats and pandatool/src/win-stats, for Unix and Windows, respectively. (There is also an OS-independent text-stats subdirectory, which builds a trivial PStats server that presents a scrolling- text interface. This is mainly useful as a proof of technology rather than as a usable tool.) The GUI-specific code is the part that manages the interaction with the user via the creation of windows and the handling of mouse input, etc.; most of the real work of interpreting the data is done in the generic code in the pstatserver directory. 
The PStatServer owns all of the connections, and interfaces with the NSPR library to communicate with the clients. It listens on the specified port for new connections, using the pstats-port Config.prc variable to determine the port number (this is the same variable that specifies the port to the client). Usually you can leave this at its default value of 5185, but there may be some cases in which that port is already in use on a particular machine (for instance, maybe someone else is running another PStats server on another display of the same machine). Once a connection is received, it creates a PStatMonitor class (this class is specialized for each of the different GUI variants) that handles all the data for this particular connection. In the case of the windows pstats.exe program, each new monitor instance is represented by a new toplevel window. Multiple monitors can be active at once. The work of digesting the data from the client is performed by the PStatView class, which analyzes the pattern of start and stop timestamps, along with the relationship data of the various collectors, and boils it down into a list of the amount of time spent in each collector per frame. Finally, a PStatStripChart or PStatPianoRoll class object defines the actual graph output of colored lines and bars; the generic versions of these include virtual functions to do the actual drawing (the GUI specializations of these redefine these methods to make the appropriate calls).
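To tie the client-side API described above together, here is a minimal C++ usage sketch. It assumes a Panda3D build with PStats compiled in; the collector name, the function names, and the defaulted arguments to connect() are purely illustrative.

#include "pStatClient.h"
#include "pStatCollector.h"
#include "pStatTimer.h"

// Collectors are typically created once, statically, and reused; the name
// "Cull:Sort" places this collector under the "Cull" parent in the hierarchy.
static PStatCollector sort_collector("Cull:Sort");

void sort_objects() {
  // PStatTimer calls start() in its constructor and stop() in its destructor,
  // so the start/stop pair can never be mismatched, even on an early return.
  PStatTimer timer(sort_collector);
  // ... work to be timed ...
}

void setup_stats() {
  // Contact the PStats server named by the pstats-host / pstats-port
  // Config.prc variables (an explicit hostname and port can also be passed).
  PStatClient::connect();
}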
https://docs.panda3d.org/1.10/cpp/optimization/using-pstats
CC-MAIN-2022-27
en
refinedweb
Serverless CI/CD for .NET Core for AWS Deploying resources to the cloud can be a challenging problem to solve. There are so many ways that one can try to do this, and then variations depending on the resource or even language the developer is using. I am going to show you how we use a combination of the AWS CDK, nuke.build, and github to create a simple AWS CodePipeline that is the start of a CI/CD process you can use for all of your .net AWS projects or even projects outside of AWS. First, we need to get the AWS CDK installed if we have not already. This can be done by running npm install -g aws-cdk from the command line. This will install the CDK globally on your machine, so it is usable for future endeavors. Next, we need to add a CDK .net project to the Visual Studio solution that we want to deploy to the cloud via CI/CD. (The following is how to add a CDK project to an existing solution, you can follow AWS instructions if starting from scratch, I prefer to do it this way.) Navigate to the root directory for the solution you are working with and create a new folder called whatever you want your CDK project to be named (I will use NukeCdkCICDExampleCdk) navigate to ‘NukeCdkCICDExampleCdk’ and run cdk init app — language csharp. This will generate some files plus another directory called src. Copy the files to the root of your solution, navigate inside src and copy the ‘NukeCdkCICDExampleCdk’ project to wherever you keep your other projects for your solution. Open the cdk.json file and modify the attribute app so that the path matches the location of the CDK project (for me app becomes dotnet run -p src/NukeCdkCICDExampleCdk/NukeCdkCICDExampleCdk.csproj). The rest of the ‘NukeCdkCICDExampleCdk’ folder you created can now be deleted. Now inside your IDE you will need to add the project to the solution (in Rider this can be done by right clicking the solution and clicking add existing project). Go to the Program.cs file and uncomment the following lines: Env = new Amazon.CDK.Environment { Account = System.Environment.GetEnvironmentVariable(“CDK_DEFAULT_ACCOUNT”), Region = System.Environment.GetEnvironmentVariable(“CDK_DEFAULT_REGION”), } Make sure you have created a profile in your AWS credentials file for wherever you want this to deploy. Navigate via command line to the root of your solution you wish to deploy, run the following commands (assumes linux terminal, windows will differ): export CDK_DEFAULT_ACCOUNT=[ACCOUNT_NUMBER] export CDK_DEFAULT_REGION=[REGION] export AWS_PROFILE=[PROFILENAME] Now we need to bootstrap the AWS account we want to deploy to by running cdk bootstrap. After that we can test our work so far by running cdk deploy. This should give a green checkmark and return the arn of the stack you just created if successful: Next step is to create a class to represent our CodePipeline/CodeBuild that will do the actual building of our solution (ie NukeCdkCICDExampleCodePipeline). NukeCdkCICDExampleCodePipeline needs to implement Amazon.Cdk.Construct. 
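Before filling in the pipeline resources, it helps to see the overall shape of this class. A minimal sketch is shown below; the CreateCodePipelineRequest type is an assumption inferred from how the request object is used later in the article, not a type provided by the CDK itself.

using Amazon.CDK;

// Assumed DTO for passing stack parameters into the construct; the stack class
// later builds it from the CfnParameters it defines.
public class CreateCodePipelineRequest
{
    public string AwsRegion { get; set; }
    public string AwsAccount { get; set; }
    public string GitBranchName { get; set; }
}

public class NukeCdkCICDExampleCodePipeline : Construct
{
    public NukeCdkCICDExampleCodePipeline(
        Construct scope,
        string id,
        CreateCodePipelineRequest createCodePipelineRequest)
        : base(scope, id)
    {
        // The S3 bucket, IAM role, PipelineProject (CodeBuild) and Pipeline
        // described in the following steps are created here.
    }
}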
Inside of the constructor we will place the following: An s3 bucket for the CodeBuild to use (you will need to install nuget package for cdk s3) Bucket codeBuildBucket = new Bucket(this, “NukeCdkCICDExample Code Pipeline Bucket”); A role for the CodeBuild to use with the following AWS managed policies: var codeBuildRole = new Role(this, “NukeCdkCICDExample Codebuild Role”, new RoleProps{AssumedBy = new ServicePrincipal(“codebuild.amazonaws.com”)}); codeBuildRole.AddManagedPolicy(ManagedPolicy.FromManagedPolicyArn(this,”S3 full access managed policy codebuild”,”arn:aws:iam::aws:policy/AmazonS3FullAccess”)); codeBuildRole.AddManagedPolicy(ManagedPolicy.FromManagedPolicyArn(this,”SecretsManager full access managed policy codebuild”,”arn:aws:iam::aws:policy/SecretsManagerReadWrite”)); codeBuildRole.AddManagedPolicy(ManagedPolicy.FromManagedPolicyArn(this,”CodePipeline full access managed policy codebuild”,”arn:aws:iam::aws:policy/AWSCodePipeline_FullAccess”)); codeBuildRole.AddManagedPolicy(ManagedPolicy.FromManagedPolicyArn(this,”CodeBuild full access managed policy codebuild”,”arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess”)); codeBuildRole.AddManagedPolicy(ManagedPolicy.FromManagedPolicyArn(this, “IAM full access managed policy codebuild”, “arn:aws:iam::aws:policy/IAMFullAccess”)); o S3 full access (to manage the bucket you created above) o SecretsManager full access to get secrets stored in AWS to allow secure storage of github tokens o CodePipeline full access needed to modify the pipeline you are creating o CodeBuild full access needed to modify build you are creating o IAM full access to manage this role and others you may create via CDK o *NOTE BEST PRACTICE TO CREATE ROLE WITH MINIMAL ACCESS NEEDED THIS IS JUST FOR DEMO PURPOSES! A PipelineProject aka CodeBuild (will need nuget for CodeBuild) PipelineProject project = new PipelineProject(this, “NukeCdkCICDExample Pipeline Project”, new PipelineProjectProps { ProjectName = “NukeCdkCICDExample-CodeBuild”, Role = codeBuildRole, BuildSpec = BuildSpec.FromObject(new Dictionary<string, object> { [“version”] = “0.2”, [“phases”] = new Dictionary<string, object> { [“install”] = new Dictionary<string, object> { [“runtime-versions”] = new Dictionary<string, object>() { {“dotnet”, “5.0”} }, [“commands”] = new [] { “export PATH=\”$PATH:/root/.dotnet/tools\””, “dotnet tool install — global Nuke.GlobalTool”, “npm install -g aws-cdk” } }, [“build”] = new Dictionary<string, object> { [“commands”] = $”nuke DeployNukeCdkCICDExampleCdkStack “ + $” — awsregion {createCodePipelineRequest.AwsRegion} “ + $” — apigitbranchname {createCodePipelineRequest.GitBranchName} “ + $” — awsaccount {createCodePipelineRequest.AwsAccount} “ } } }), Environment = new BuildEnvironment { EnvironmentVariables = new Dictionary<string, IBuildEnvironmentVariable>{ {“DOTNET_ROOT”, new BuildEnvironmentVariable { Value = “/root/.dotnet” }}, {“CDK_DEFAULT_ACCOUNT”, new BuildEnvironmentVariable { Value = createCodePipelineRequest.AwsAccount }}, {“CDK_DEFAULT_REGION”, new BuildEnvironmentVariable { Value = createCodePipelineRequest.AwsRegion }} }, BuildImage = LinuxBuildImage.STANDARD_5_0 } }); o We pass the role created above as the role to this project o BuildSpec is created programmatically here and you can all the things needed to configure the CodeBuild, note here we install the CDK and nuke, specify our .net version, specify our build command that calls nuke and deploys the CDK, and finally we set environment vars for the build. 
You can add any other configuration just as you would to any buildspec. A source output (you will need nuget for CodePipeline) var sourceOutput = new Artifact_(); An OAuth token for github (I store it in secrets manager to be secure also needs to be stored as plain text for example code to work) var githubToken = SecretValue.SecretsManager(“Github_Token”); And finally we need to add the CodePipeline itself: Pipeline pipeline = new Pipeline(this, “NukeCdkCICDExample Pipeline”, new PipelineProps { ArtifactBucket = codeBuildBucket, Stages = new IStageProps[]{new StageOptions { StageName = “Source”, Actions = new IAction[]{new GitHubSourceAction(new GitHubSourceActionProps { ActionName = “Github_Source”, OauthToken = githubToken, Owner = “liberty-lake-cloud”, Repo = “Nuke_Cdk_CICD_Example”, Branch = createCodePipelineRequest.GitBranchName, Output = sourceOutput, Trigger = GitHubTrigger.WEBHOOK }) } }, new StageOptions { StageName = “Build”, Actions = new IAction[]{ new CodeBuildAction(new CodeBuildActionProps { ActionName = “NukeCdkCICDExample_Build”, Project = project, Input = sourceOutput, })} } } }); o We use the bucket created above as the ArtifactBucket o The source stage is configured as a GitHubSourceAction (nuget for CDK CodePipeline Actions is needed). The output will be the sourceOutput configured above, and the OauthToken comes from above as well. Replace the rest of the GitHubSourceActionProps as needed for your repository. o The build stage will use the PipelineProject from above as the project and the input will be the sourceOutput from above. Find the main stack file itself, it will end in Stack.cs. Add CfnParameters for AwsRegion, AwsAccount, and GitBranchName these will all be required parameters for the stack we are creating as well as for our CodePipeline. New up an instance of the pipeline you created above and pass in the necessary CfnParameters, I like to create a request object to keep my code cleaner. public class NukeCdkCICDExampleCdkStack : Stack { internal NukeCdkCICDExampleCdkStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props) { var awsRegion = new CfnParameter(this, “awsRegion”, new CfnParameterProps { Type = “String”, Description = “Aws region everything is running in” }); var gitBranchName = new CfnParameter(this, “gitBranchName”, new CfnParameterProps { Type = “String”, Description = “Git Branch Name” }); var awsAccount = new CfnParameter(this, “awsAccount”, new CfnParameterProps { Type = “String”, Description = “Aws Account” }); var createCodePipelineRequest = new CreateCodePipelineRequest() { AwsRegion = awsRegion.ValueAsString, GitBranchName = gitBranchName.ValueAsString, AwsAccount = awsAccount.ValueAsString, }; var nukeCdkCICDExampleCdkCodePipeline = new NukeCdkCICDExampleCodePipeline(this, “NukeCdkCICDExampleCdk CodePipeline”, createCodePipelineRequest); } } The final component to our automated build project will be to add the nuke.build project. From the command line you will need to run dotnet tool install Nuke.GlobalTool –global. Afterwards navigate to the root of your solution and run nuke :setup:. You can accept the defaults, make note if your projects are in a different directory then it specifies you will want to fix this. In Build.cs remove GitRepository and GitVersion as these are not passed by CodePipeline (it may be possible to use the v2 implementation of Github/CodePipeline integration to fix this, but I have not tried as of yet). 
Add parameters AwsRegion, GitBranchName, and AwsAccount to allow for deployment to any account and from any branch name:

[Parameter] string AwsRegion { get; }
[Parameter] string GitBranchName { get; }
[Parameter] string AwsAccount { get; }

Add a target to build your CDK stack (the target name must match the one invoked from the CodeBuild buildspec above):

Target DeployNukeCdkCICDExampleCdkStack => _ => _
    .DependsOn(Compile)
    .Executes(() =>
    {
        var cdkPath = ToolPathResolver.GetPathExecutable("cdk");
        var buildCommand = $"deploy --require-approval never " +
                           $"--parameters awsRegion={AwsRegion} " +
                           $"--parameters gitBranchName={GitBranchName} " +
                           $"--parameters awsAccount={AwsAccount}";
        ProcessTasks.StartProcess(toolPath: cdkPath, arguments: buildCommand, workingDirectory: RootDirectory)
            .AssertWaitForExit();
    });

From the command line, make sure you have the following environment variables set (this is for a Linux shell; Windows will differ):

export CDK_DEFAULT_ACCOUNT=[ACCOUNT_NUMBER]
export CDK_DEFAULT_REGION=[REGION]
export AWS_PROFILE=[PROFILENAME]

Run the following command:

nuke DeployNukeCdkCICDExampleCdkStack \
  --awsregion us-west-2 \
  --gitbranchname main \
  --awsaccount [ACCOUNT_NUMBER]

You should see NUKE printed on the screen followed by several different steps; the last step will be DeployNukeCdkCICDExampleCdkStack, which actually publishes your stack to AWS. You will see a printout of the stack publishing steps, followed lastly by a printout of the build status (succeeded or not).

Congratulations! You now have a working serverless CI/CD build. Changes you make to the branch you specified when you published this will now automatically be pushed to AWS every time you merge. Add an API gateway? It will build and deploy upon merge. Need a lambda or two? Just define them in the CDK and merge to almost instantly add functionality.

This build is highly customizable, allowing for adding more stages for testing, deployment, or even entirely different builds (a build for a UI project, for instance). Notifications for things such as build status can also be added, as well as many other useful features. Incorporating Nuke lets developers easily interact with the .NET codebase and allows for almost unlimited build options that can be implemented via code instead of script. Nuke further allows for testing your build locally at any step you want, and also makes your build highly portable. Combined with CodePipeline and CodeBuild, this portability makes the solution a great fit for those who need to deploy to multiple AWS accounts for compliance or other reasons.

Another great use for this solution is temporary cloud environments. With one command you can deploy a solution to the cloud, and another removes it and any ongoing cost that may be incurred. Need a dev, test, and prod environment? No problem: just deploy this as many times as needed with the appropriate configuration and you can have as many environments as necessary, then revoke the stack when an environment is no longer needed.

I hope this article has been helpful and informative. Look for more articles coming that will build upon this small project!

Github repo containing working example:

Resources:
https://vegacloud-io.medium.com/serverless-ci-cd-for-net-core-for-aws-795c4539ffe4?source=user_profile---------3----------------------------
CC-MAIN-2022-27
en
refinedweb
Norigin Spatial NavigationNorigin Spatial Navigation Norigin Spatial Navigation is an open-source library that enables navigating between focusable elements built with ReactJS based application software. To be used while developing applications that require key navigation (directional navigation) on Web-browser Apps and other Browser based Smart TVs and Connected TVs. Our goal is to make navigation on websites & apps easy, using React Javascript Framework and React Hooks. Navigation can be controlled by your keyboard (browsers) or Remote Controls (Smart TV or Connected TV). Software developers only need to initialise the service, add the Hook to components that are meant to be focusable and set the initial focus. The Spatial Navigation library will automatically determine which components to focus next while navigating with the directional keys. We keep the library light, simple, and with minimal third-party dependencies. Illustrative DemoIllustrative Demo Norigin Spatial Navigation can be used while working with Key Navigation and React JS. This library allows you to navigate across or focus on all navigable components while browsing. For example: hyperlinks, buttons, menu items or any interactible part of the User Interface according to the spatial location on the screen. Supported DevicesSupported Devices The Norigin Spatial Navigation library is theoretically intended to work on any web-based platform such as Browsers and Smart TVs. For as long as the UI/UX is built with the React Framework, it works on the Samsung Tizen TVs, LG WebOS TVs, Hisense Vidaa TVs and a range of other Connected TVs. It can also be used in React Native apps on Android TV and Apple TV, however functionality will be limited. This library is actively used and continuously tested on many devices and updated periodically in the table below: Related BlogsRelated Blogs ChangelogChangelog A list of changes for all the versions for the Norigin Spatial Navigation: CHANGELOG.md Table of ContentsTable of Contents InstallationInstallation npm i @noriginmedia/norigin-spatial-navigation --save UsageUsage InitializationInitialization // Called once somewhere in the root of the app import { init } from '@noriginmedia/norigin-spatial-navigation'; init({ // options }); Making your component focusableMaking your component focusable Most commonly you will have Leaf Focusable components. (See Tree Hierarchy) Leaf component is the one that doesn't have focusable children. ref is required to link the DOM element with the hook. (to measure its coordinates, size etc.) import { useFocusable } from '@noriginmedia/norigin-spatial-navigation'; function Button() { const { ref, focused } = useFocusable(); return (<div ref={ref} className={focused ? 'button-focused' : 'button'}> Press me </div>); } Wrapping Leaf components with a Focusable ContainerWrapping Leaf components with a Focusable Container Focusable Container is the one that has other focusable children. (i.e. a scrollable list) (See Tree Hierarchy) ref is required to link the DOM element with the hook. (to measure its coordinates, size etc.) FocusContext.Provider is required in order to provide all children components with the focusKey of the Container, which serves as a Parent Focus Key for them. This way your focusable children components can be deep in the DOM tree while still being able to know who is their Focusable Parent. Focusable Container cannot have focused state, but instead propagates focus down to appropriate Child component. 
You can nest multiple Focusable Containers. When focusing the top level Container, it will propagate focus down until it encounters the first Leaf component. I.e. if you set focus to the Page, the focus could propagate as following: Page -> ContentWrapper -> ContentList -> ListItem. import { useFocusable, FocusContext } from '@noriginmedia/norigin-spatial-navigation'; import ListItem from './ListItem'; function ContentList() { const { ref, focusKey } = useFocusable(); return (<FocusContext.Provider value={focusKey}> <div ref={ref}> <ListItem /> <ListItem /> <ListItem /> </div> </FocusContext.Provider>); } Manually setting the focusManually setting the focus You can manually set the focus either to the current component ( focusSelf), or to any other component providing its focusKey to setFocus. It is useful when you first open the page, or i.e. when your modal Popup gets mounted. import React, { useEffect } from 'react'; import { useFocusable, FocusContext } from '@noriginmedia/norigin-spatial-navigation'; function Popup() { const { ref, focusKey, focusSelf, setFocus } = useFocusable(); // Focusing self will focus the Popup, which will pass the focus down to the first Child (ButtonPrimary) // Alternatively you can manually focus any other component by its 'focusKey' useEffect(() => { focusSelf(); // alternatively // setFocus('BUTTON_PRIMARY'); }, [focusSelf]); return (<FocusContext.Provider value={focusKey}> <div ref={ref}> <ButtonPrimary focusKey={'BUTTON_PRIMARY'} /> <ButtonSecondary /> </div> </FocusContext.Provider>); } Tracking children componentsTracking children components Any Focusable Container can track whether it has any Child focused or not. This feature is disabled by default, but it can be controlled by the trackChildren flag passed to the useFocusable hook. When enabled, the hook will return a hasFocusedChild flag indicating when a Container component is having focused Child down in the focusable Tree. It is useful for example when you want to style a container differently based on whether it has focused Child or not. import { useFocusable, FocusContext } from '@noriginmedia/norigin-spatial-navigation'; import MenuItem from './MenuItem'; function Menu() { const { ref, focusKey, hasFocusedChild } = useFocusable({trackChildren: true}); return (<FocusContext.Provider value={focusKey}> <div ref={ref} className={hasFocusedChild ? 'menu-expanded' : 'menu-collapsed'}> <MenuItem /> <MenuItem /> <MenuItem /> </div> </FocusContext.Provider>); } Restricting focus to a certain component boundariesRestricting focus to a certain component boundaries Sometimes you don't want the focus to leave your component, for example when displaying a Popup, you don't want the focus to go to a component underneath the Popup. This can be enabled with isFocusBoundary flag passed to the useFocusable hook. import React, { useEffect } from 'react'; import { useFocusable, FocusContext } from '@noriginmedia/norigin-spatial-navigation'; function Popup() { const { ref, focusKey, focusSelf } = useFocusable({isFocusBoundary: true}); useEffect(() => { focusSelf(); }, [focusSelf]); return (<FocusContext.Provider value={focusKey}> <div ref={ref}> <ButtonPrimary /> <ButtonSecondary /> </div> </FocusContext.Provider>); } Using the library in React Native environmentUsing the library in React Native environment In React Native environment the navigation between focusable (Touchable) components is happening under the hood by the native focusable engine. 
This library is NOT doing any coordinates measurements or navigation decisions in the native environment. But it can still be used to keep the currently focused element node reference and its focused state, which can be used to highlight components based on the focused or hasFocusedChild flags. IMPORTANT: in order to "sync" the focus events coming from the native focus engine to the hook, you have to link onFocus callback with the focusSelf method. This way, the hook will know that the component became focused, and will set the focused flag accordingly. import { TouchableOpacity, Text } from 'react-native'; import { useFocusable } from '@noriginmedia/norigin-spatial-navigation'; function Button() { const { ref, focused, focusSelf } = useFocusable(); return (<TouchableOpacity ref={ref} onFocus={focusSelf} style={focused ? styles.buttonFocused : styles.button} > <Text>Press me</Text> </TouchableOpacity>); } APIAPI Top Level exportsTop Level exports init Init optionsInit options debug: boolean (default: false) Enables console debugging. visualDebug: boolean (default: false) Enables visual debugging (all layouts, reference points and siblings reference points are printed on canvases). nativeMode: boolean (default: false) Enables Native mode. It will disable certain web-only functionality: - adding window key listeners - measuring DOM layout onFocusand onBlurcallbacks don't return coordinates, but still return node ref which can be used to measure layout if needed - coordinates calculations when navigating ( smartNavigatein SpatialNavigation.ts) navigateByDirection - focus propagation down the Tree - last focused child feature - preferred focus key feature In other words, in the Native mode this library DOES NOT set the native focus anywhere via the native focus engine. Native mode should be only used to keep the Tree of focusable components and to set the focused and hasFocusedChild flags to enable styling for focused components and containers. In Native mode you can only call focusSelf in the component that gets native focus (via onFocus callback of the Touchable components) to flag it as focused. Manual setFocus method is blocked because it will not propagate to the native focus engine and won't do anything. throttle: integer (default: 0) Enables throttling of the key event listener. throttleKeypresses: boolean (default: false) Works only in combination with throttle > 0. By default, throttle only throttles key down events (i.e. when you press and hold the button). When this feature is enabled, it will also throttle rapidly fired key presses (rapid "key down + key up" events). setKeyMap Method to set custom key codes. I.e. when the device key codes differ from a standard browser arrow key codes. setKeyMap({ 'left': 9001, 'up': 9002, 'right': 9003, 'down': 9004, 'enter': 9005 }); destroy Resets all the settings and the storage of focusable components. Disables the navigation service. useFocusable hook This hook is the main link between the React component (its DOM element) and the navigation service. It is used to register the component in the service, get its focusKey, focused state etc. const {/* hook output */ } = useFocusable({/* hook params */ }); Hook paramsHook params focusable (default: true) This flag indicates that the component can be focused via directional navigation. Even if the component is not focusable, it still can be focused with the manual setFocus. This flag is useful when i.e. you have a Disabled Button that should not be focusable in the disabled state. 
saveLastFocusedChild (default: true) By default, when the focus leaves a Focusable Container, the last focused child of that container is saved. So the next time when you go back to that Container, the last focused child will get the focus. If this feature is disabled, the focus will be always on the first available child of the Container. trackChildren (default: false) This flag controls the feature of updating the hasFocusedChild flag returned to the hook output. Since you don't always need hasFocusedChild value, this feature is disabled by default for optimization purposes. autoRestoreFocus (default: true) By default, when the currently focused component is unmounted (deleted), navigation service will try to restore the focus on the nearest available sibling of that component. If this behavior is undesirable, you can disable it by setting this flag to false. isFocusBoundary (default: false) This flag makes the Focusable Container keep the focus inside its boundaries. It will only block the focus from leaving the Container via directional navigation. You can still set the focus manually anywhere via setFocus. Useful when i.e. you have a modal Popup and you don't want the focus to leave it. focusKey (optional) If you want your component to have a persistent focus key, it can be set via this property. Otherwise, it will be auto generated. Useful when you want to manually set the focus to this component via setFocus. preferredChildFocusKey (optional) Useful when you have a Focusable Container and you want it to propagate the focus to a specific child component. I.e. when you have a Popup and you want some specific button to be focused instead of the first available. onEnterPress (function) Callback that is called when the component is focused and Enter key is pressed. Receives extraProps (see below) and KeyPressDetails as arguments. onEnterRelease (function) Callback that is called when the component is focused and Enter key is released. Receives extraProps (see below) as argument. onArrowPress (function) Callback that is called when component is focused and any Arrow key is pressed. Receives direction ( left, right, up, down), extraProps (see below) and KeyPressDetails as arguments. This callback HAS to return true if you want to proceed with the default directional navigation behavior, or false if you want to block the navigation in the specified direction. onFocus (function) Callback that is called when component gets focus. Receives FocusableComponentLayout, extraProps and FocusDetails as arguments. onBlur (function) Callback that is called when component loses focus. Receives FocusableComponentLayout, extraProps and FocusDetails as arguments. extraProps (optional) An object that can be passed to the hook in order to be passed back to certain callbacks (see above). I.e. you can pass all the props of the component here, and get them all back in those callbacks. Hook outputHook output ref (required) Reference object created by the useRef inside the hook. Should be assigned to the DOM element representing a focused area for this component. Usually it's a root DOM element of the component. function Button() { const { ref } = useFocusable(); return (<div ref={ref}> Press me </div>); } focusSelf (function) Method to set the focus on the current component. I.e. to set the focus to the Page (Container) when it is mounted, or the Popup component when it is displayed. setFocus (function) (focusKey: string) => void Method to manually set the focus to a component providing its focusKey. 
focused (boolean) Flag that indicates that the current component is focused. hasFocusedChild (boolean) Flag that indicates that the current component has a focused child somewhere down the Focusable Tree. Only works when trackChildren is enabled! focusKey (string) String that contains the focus key for the component. It is either the same as focusKey passed to the hook params, or an automatically generated one. navigateByDirection (function) (direction: string, focusDetails: FocusDetails) => void Method to manually navigation to a certain direction. I.e. you can assign a mouse-wheel to navigate Up and Down. Also useful when you have some "Arrow-like" UI in the app that is meant to navigate in certain direction when pressed with the mouse or a "magic remote" on some TVs. pause (function) Pauses all the key event handlers. resume (function) Resumes all the key event handlers. updateAllLayouts (function) Manually recalculate all the layouts. Rarely used. FocusContext (required for Focusable Containers) Used to provide the focusKey of the current Focusable Container down the Tree to the next child level. See Example Types exported for developmentTypes exported for development FocusableComponentLayout interface FocusableComponentLayout { left: number; // absolute coordinate on the screen top: number; // absolute coordinate on the screen width: number; height: number; x: number; // relative to the parent DOM element y: number; // relative to the parent DOM element node: HTMLElement; // or the reference to the native component in React Native } KeyPressDetails interface KeyPressDetails { pressedKeys: PressedKeys; } PressedKeys type PressedKeys = { [index: string]: number }; FocusDetails interface FocusDetails { event?: KeyboardEvent; } Other Types exportedOther Types exported These types are exported, but not necessarily needed for development. KeyMap Interface for the keyMap sent to the setKeyMap method. UseFocusableConfig Interface for the useFocusable params object. UseFocusableResult Interface for the useFocusable result object. Technical details and conceptsTechnical details and concepts Tree Hierarchy of focusable componentsTree Hierarchy of focusable components As mentioned in the Usage section, all focusable components are organized in a Tree structure. Much like a DOM tree, the Focusable Tree represents a focusable components' organization in your application. Tree Structure helps to organize all the focusable areas in the application, measure them and determine the best paths of navigation between these focusable areas. Without the Tree Structure (assuming all components would be simple Leaf focusable components) it would be extremely hard to measure relative and absolute coordinates of the elements inside the scrolling lists, as well as to restrict the focus from jumping outside certain areas. Technically the Focusable Tree structure is achieved by passing a focus key of the parent component down via the FocusContext. Since React Context can be nested, you can have multiple layers of focusable Containers, each passing their own focusKey down the Tree via FocusContext.Provider as shown in this example. Navigation ServiceNavigation Service Navigation Service is a "brain" of the library. It is responsible for registering each focusable component in its internal database, storing the node references to measure their coordinates and sizes, and listening to the key press events in order to perform the navigation between these components. 
The calculation is performed according to the proprietary algorithm, which measures the coordinate of the current component and all components in the direction of the navigation, and determines the best path to pass the focus to the next component. Migration from v2 (HOC based) to v3 (Hook based)Migration from v2 (HOC based) to v3 (Hook based) ReasonsReasons The main reason to finally migrate to Hooks is the deprecation of the recompose library that was a backbone for the old HOC implementation. As well as the deprecation of the findDOMNode API. It's been quite a while since Hooks were first introduced in React, but we were hesitating of migrating to Hooks since it would make the library usage a bit more verbose. However, recently there has been even more security reasons to migrate away from recompose, so we decided that it is time to say goodbye to HOC and accept certain drawbacks of the Hook implementation. Here are some of the challenges encountered during the migration process: Getting node referenceGetting node reference HOC implementation used a findDOMNode API to find a reference to a current DOM element wrapped with the HOC: const node = SpatialNavigation.isNativeMode() ? this : findDOMNode(this); Note that this was pointing to an actual component instance even when it was called inside lifecycle HOC from recompose allowing to always find the top-level DOM element, without any additional code required to point to a specific DOM node. It was a nice "magic" side effect of the HOC implementation, which is now getting deprecated. In the new Hook implementation we are using the recommended ref API. It makes a usage of the library a bit more verbose since now you always have to specify which DOM element is considered a "focusable" area, because this reference is used by the library to calculate the node's coordinates and size. Example above Passing parentFocusKey down the tree Another big challenge was to find a good way of passing the parentFocusKey down the Focusable Tree, so every focusable child component would always know its parent component key, in order to enable certain "tree-based" features described here. In the old HOC implementation it was achieved via a combination of getContext and withContext HOCs. Former one was receiving the parentFocusKey from its parent no matter how deep it was in the component tree, and the latter one was providing its own focusKey as parentFocusKey for its children components. In modern React, the only recommended Context API is using Context Providers and Consumers (or useContext hook). While you can easily receive the Context value via useContext, the only way to provide the Context down the tree is via a JSX component Context.Provider. This requires some additional code in case you have a Focusable Container component. In order to provide the parentFocusKey down the tree, you have to wrap your children components with a FocusContext.Provider and provide a current focusKey as the context value. Example here ExamplesExamples leaf focusable componentMigrating a HOC Props and Config vs Hook ParamsHOC Props and Config vs Hook Params import {withFocusable} from '@noriginmedia/norigin-spatial-navigation'; // Component ... const FocusableComponent = withFocusable({ trackChildren: true, forgetLastFocusedChild: true })(Component); const ParentComponent = (props) => (<View> ... <FocusableComponent trackChildren forgetLastFocusedChild focusKey={'FOCUSABLE_COMPONENT'} onEnterPress={props.onItemPress} autoRestoreFocus={false} /> ... 
</View>); Please note that most of the features/props could have been passed as either direct JSX props to the Focusable Component or as an config object passed to the withFocusable HOC. It provided certain level of flexibility, while also adding some confusion as to what takes priority if you pass the same option to both the prop and a HOC config. In the new Hook implementation options can only be passed as a Hook Params: const {/* hook output */ } = useFocusable({ trackChildren: true, saveLastFocusedChild: false, onEnterPress: () => {}, focusKey: 'FOCUSABLE_COMPONENT' }); HOC props passed to the wrapped component vs Hook output valuesHOC props passed to the wrapped component vs Hook output values HOC was enhancing the wrapped component with certain new props such as focused etc.: import {withFocusable} from '@noriginmedia/norigin-spatial-navigation'; const Component = ({focused}) => (<View> <View style={focused ? styles.focusedStyle : styles.defaultStyle} /> </View>); const FocusableComponent = withFocusable()(Component); Hook will provide all these values as the return object of the hook: const { focused, focusSelf, ref, ...etc } = useFocusable({/* hook params */ }); The only additional step when migrating from HOC to Hook (apart from changing withFocusable to useFocusable implementation) is to link the DOM element with the ref from the Hook as seen in this example. While it requires a bit of extra code compared to the HOC version, it also provides a certain level of flexibility if you want to make only a certain part of your UI component to act as a "focusable" area. Please also note that some params and output values has been renamed. CHANGELOG container focusable componentMigrating a In the old HOC implementation there was no additional requirements for the Focusable Container to provide its own focusKey down the Tree as a parentFocusKey for its children components. In the Hook implementation it is required to wrap your children components with a FocusContext.Provider as seen in this example. DevelopmentDevelopment npm i npm start ContributingContributing Please follow the Contribution Guide LicenseLicense MIT Licensed
https://www.npmjs.com/package/@noriginmedia/norigin-spatial-navigation
CC-MAIN-2022-27
en
refinedweb
#include <ExternalServerArray.hpp>

An array and manager of external servers. ExternalServerArray is an abstract class, derived from ExternalSystemArray, that connects to external servers. Extend ExternalServerArray and override its createChild() method so that it creates child IExternalServer objects. After extending and overriding, construct the children IExternalServer objects and connect to them. Definition at line 29 of file ExternalServerArray.hpp.

Default Constructor. Definition at line 36 of file ExternalServerArray.hpp.

connect(): Connect to external servers. This method calls each child element's ExternalServer::connect() method in sequence. Definition at line 47 of file ExternalServerArray.hpp. References samchon::templates::external::ExternalServer::connect().
http://samchon.github.io/framework/api/cpp/d4/d36/classsamchon_1_1templates_1_1external_1_1ExternalServerArray.html
CC-MAIN-2022-27
en
refinedweb
README

js-snip

Universal JavaScript library for clamping HTML text elements.

Key features:
- two snipping approaches (CSS / JavaScript)
- no need to specify line heights
- re-snipping on element resize
- no dependencies

To get a hands-on experience try the Interactive Demo.

Installation

# install with npm
npm install js-snip
# or with yarn
yarn add js-snip

Usage

import { snip, unsnip } from 'js-snip'

const options = {
  // your options
}

// snipping an element
snip(element, options)

// unsnipping the element
unsnip(element)

Options

export interface SnipOptions {
  method?: 'css' | 'js'
  lines?: number
  ellipsis?: string
  midWord?: boolean
}

How it works

- The CSS approach is based on -webkit-line-clamp.
- The JavaScript approach is based on progressive cutting of the element's textContent in a loop.

Note: the CSS approach is faster (preferred), but does not work in older browsers or in all situations (for example, it does not work in IE11, when you need the text to flow around a floated element, or when you want a custom ellipsis).

Caveats

For the library.

Change Log

All changes are documented in the change log.
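For reference, a call with every option from SnipOptions spelled out might look like the sketch below; the selector and the option values are purely illustrative.

import { snip, unsnip } from 'js-snip'

// Clamp a teaser paragraph to three lines using the JavaScript approach,
// which supports a custom ellipsis (the CSS approach does not).
const element = document.querySelector('.teaser')

snip(element, { method: 'js', lines: 3, ellipsis: '…', midWord: false })

// Restore the full text later if needed (the library also re-snips
// automatically when the element is resized).
unsnip(element)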
https://www.skypack.dev/view/js-snip
CC-MAIN-2022-27
en
refinedweb
How to delete a column from a Pandas DataFrame?

Hi Guys, I have one DataFrame in Pandas. I want to delete one column from this DataFrame. How can I do that?

Hi @akhtar, you can use Python's del statement to remove a column from your DataFrame. I have attached one example below for your reference.

import pandas as pd

df = pd.read_csv('my.csv')
del df['Place']

I hope this will help you.
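For completeness, the same column removal is more commonly written with DataFrame.drop, which returns a new DataFrame instead of mutating in place unless asked to; the column name below simply mirrors the example above.

import pandas as pd

df = pd.read_csv('my.csv')

# Return a new DataFrame without the column.
df = df.drop(columns=['Place'])

# Or drop it in place without reassigning.
# df.drop(columns=['Place'], inplace=True)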
https://www.edureka.co/community/73169/how-to-delete-column-from-pandas-dataframe
CC-MAIN-2022-27
en
refinedweb
Generating rgb imagery from digital surface model using Pix2Pix - 🔬 Data Science - 🥠 Deep Learning and image translation In this notebook, we will focus on using Pix2Pix [1], which is one of the famous and sucessful deep learning models used for paired image-to-image translation. In geospatial sciences, this approach could help in wide range of applications traditionally not possible, where we may want to go from one domain of images to another. The aim of this notebook is to make use of arcgis.learn Pix2Pix model to translate or convert the gray-scale DSM to a RGB imagery. For more details about model and its working refer How Pix2Pix works ? in guide section. import os, zipfile from pathlib import Path from os import listdir from os.path import isfile, join from arcgis import GIS from arcgis.learn import Pix2Pix, prepare_data # gis = GIS('home') ent_gis = GIS('', 'arcgis_python', 'amazing_arcgis_123') For this usecase, we have a high-resolution NAIP airborne imagery in the form of IR-G-B tiles and lidar data converted into DSM, collected over St. George, state of utah by state of utah and partners [5] with same spatial resolution of 0.5 m. We will export that using “Export_Tiles” metadata format available in the Export Training Data For Deep Learning tool. This tool is available in ArcGIS Pro as well as ArcGIS Image Server. The various inputs required by the tool, are described below. Input Raster: DSM Additional Input Raster: NAIP airborne imagery Tile Size X & Tile Size Y: 256 Stride X & Stride Y: 128 Meta Data Format: 'Export_Tiles' as we are training a Pix2Pixmodel. Environments: Set optimum Cell Size, Processing Extent. Raster's used for exporting the training dataset are provided below naip_domain_b_raster = ent_gis.content.get('a55890fcd6424b5bb4edddfc5a4bdc4b') naip_domain_b_raster dsm_domain_a_raster = ent_gis.content.get('aa31a374f889487d951e15063944b921') dsm_domain_a_raster Inside the exported data folder, 'Images' and 'Images2' folders contain all the image tiles from two domains exported from DSM and drone imagery respectively. Now we are ready to train the Pix2Pix model. Alternatively, we have provided a subset of training data containing a few samples that follows the same directory structure mentioned above and also provided the rasters used for exporting the training dataset. You can use the data directly to run the experiments. training_data = gis.content.get('2a3dad36569b48ed99858e8579611a80') training_data filepath = training_data.download(file_name=training_data.name) #Extract the data from the zipped image collection with zipfile.ZipFile(filepath, 'r') as zip_ref: zip_ref.extractall(Path(filepath).parent) output_path = Path(os.path.join(os.path.splitext(filepath)[0])) data = prepare_data(output_path, dataset_type="Pix2Pix", batch_size=5) To get a sense of what the training data looks like, arcgis.learn.show_batch() method randomly picks a few training chips and visualize them. On the left are some DSM's (digital surface model) with the corresponding RGB imageries of various locations on the right. data.show_batch() model = Pix2Pix(data) Learning rate is one of the most important hyperparameters in model training. ArcGIS API for Python provides a learning rate finder that automatically chooses the optimal learning rate for you. lr = model.lr_find() 2.5118864315095795e-05 The model is trained for around a few epochs with the suggested learning rate. 
model.fit(30, lr)

Here, with 30 epochs, we can see reasonable results: both training and validation losses have gone down considerably, indicating that the model is learning to translate between the two domains of imagery. We then save the trained model and publish it:

model.save("pix2pix_model_e30", publish=True)

It is a good practice to see results of the model vis-à-vis ground truth. The code below picks random samples and shows us ground truth and model predictions, side by side. This enables us to preview the results of the model within the notebook.

model.show_results()

The Frechet Inception Distance score, or FID for short, is a metric that calculates the distance between feature vectors calculated for real and generated images. Lower scores indicate the two groups of images are more similar, or have more similar statistics, with a perfect score of 0.0 indicating that the two groups of images are identical.

model.compute_metrics()
263.63128885232044

We can translate DSM to RGB imagery with the help of the predict() method. Using the predict function, we can apply the trained model to the image chip kept for validation, which we want to translate.

img_path: path to the image file.

valid_data = gis.content.get('f682b16bcc6d40419a775ea2cad8f861')
valid_data

filepath2 = valid_data.download(file_name=valid_data.name)

# Visualize the image chip used for inferencing
from fastai.vision import open_image
open_image(filepath2)

# Inference single imagery chip
model.predict(filepath2)

After training the Pix2Pix model and saving its weights, we can use the Classify Pixels Using Deep Learning tool, available in both ArcGIS Pro and ArcGIS Enterprise, for inferencing at scale.

test_data = ent_gis.content.get('86bed58f977c4c0aa39053d93141cdb1')
test_data

out_classified_raster = arcpy.ia.ClassifyPixelsUsingDeepLearning("Imagery", r"C:\path\to\model.emd", "padding 64;batch_size 2");
out_classified_raster.save(r"C:\sample\sample.gdb\predicted_img2dsm")

The RGB output raster is generated using ArcGIS Pro. The output raster is published on the portal for visualization.

inferenced_results = ent_gis.content.get('30951690103047f096c6339398593d79')
inferenced_results

map1 = ent_gis.map('Washington Fields', 13)
map1.add_layer(test_data)

map2 = ent_gis.map('Washington Fields', 13)
map2.add_layer(inferenced_results)

Synchronize web maps

The maps are synchronized with each other using the MapView.sync_navigation functionality. This helps in comparing the inferenced results with the DSM. A detailed description of advanced map widget options can be found here.

map2.sync_navigation(map1)

from ipywidgets import HBox, VBox, Label, Layout

hbox_layout = Layout()
hbox_layout.justify_content = 'space-around'

hb1 = HBox([Label('DSM'), Label('RGB results')])
hb1.layout = hbox_layout

The predictions are provided as a map for better visualization.

VBox([hb1, HBox([map1, map2])])

map2.zoom_to_layer(inferenced_results)

In this notebook, we demonstrated how to use the Pix2Pix model with the ArcGIS API for Python to translate imagery from one domain to another.

- [1]. Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. "Image-to-image translation with conditional adversarial networks." In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134. 2017.
- [2]. Goodfellow, Ian.
- [3].
- [4]. Kang, Yuhao, Song Gao, and Robert E. Roth. "Transferring multiscale map styles using generative adversarial networks." International Journal of Cartography 5, no. 2-3 (2019): 115-141.
- [5].
State of Utah and Partners, 2019, Regional Utah high-resolution lidar data 2015 - 2017: Collected by Quantum Spatial, Inc., Digital Mapping, Inc., and Aero-Graphics, Inc. and distributed by OpenTopography,. Accessed: 2020-12-08
https://developers.arcgis.com/python/samples/generating-rgb-imagery-from-digital-surface-model-using-pix2pix/
CC-MAIN-2022-27
en
refinedweb
Disables interrupt priorities. #include <sys/types.h> #include <sys/errno.h> #include <sys/intr.h> int i_disable (new) int new; Attention: The i_disable service has two side effects that result from the replaceable and pageable nature of the kernel. First, it prevents process dispatching. Second, it ensures, within limits, that the caller's stack is in memory. Page faults that occur while the interrupt priority is not equal to INTBASE crash the system. Note: The i_disable service is very similar to the standard UNIX spl service. The i_disable service sets the interrupt priority to a more favored interrupt priority. The interrupt priority is used to control which interrupts are allowed. A value of INTMAX is the most favored priority and disables all interrupts. A value of INTBASE is the least favored and disables only interrupts not in use. The /usr/include/sys/intr.h file defines valid interrupt priorities. The interrupt priority is changed only to serialize code executing in more than one environment (that is, process and interrupt environments). For example, a device driver typically links requests in a list while executing under the calling process. The device driver's interrupt handler typically uses this list to initiate the next request. Therefore, the device driver must serialize updating this list with device interrupts. The i_disable and i_enable services provide this ability. The I_init kernel service contains a brief description of interrupt handlers. Note: When serializing such code in a multiprocessor-safe kernel extension, locking must be used as well as interrupt control. For this reason, new code should call the disable_lock kernel service instead of i_disable. The disable_lock service performs locking only on multiprocessor systems, and helps ensure that code is portable between uniprocessor and multiprocessor systems. The i_disable service must always be used with the i_enable service. A routine must always return with the interrupt priority restored to the value that it had upon entry. The i_mask service can be used when a routine must disable its device across a return. Because of these side effects, the caller of the i_disable service should ensure that: In general, the caller of the i_disable service should also call only services that can be called by interrupt handlers. However, processes that call the i_disable service can call the e_sleep, e_wait, e_sleepl, lockl, and unlockl services as long as the event word or lockword is pinned. The kernel's first-level interrupt handler sets the interrupt priority for an interrupt handler before calling the interrupt handler. The interrupt priority for a process is set to INTBASE when the process is created and is part of each process's state. The dispatcher sets the interrupt priority to the value associated with the process to be executed. The i_disable kernel service can be called from either the process or interrupt environment. The i_disable service returns the current interrupt priority that is subsequently used with the i_enable service. The i_disable kernel service is part of Base Operating System (BOS) Runtime. The disable_lock kernel service, i_enable kernel service, i_mask kernel service. I/O Kernel Services, Understanding Execution Environments, Understanding Interrupts in AIX Version 4.3 Kernel Extensions and Device Support Programming Concepts.
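As a concrete illustration of the serialization pattern described above, a device driver might bracket updates to its pinned request list as in the sketch below. The INTCLASS0 priority constant and the request structure are placeholders; use the interrupt class your driver actually registered, and note that multiprocessor-safe code would use the disable_lock service instead, as stated above.

#include <sys/types.h>
#include <sys/intr.h>

struct request;                            /* placeholder for a driver request */

void
enqueue_request(struct request *req)
{
        int old_priority;

        /* Raise the interrupt priority so the device's interrupt handler
         * cannot run while the list is being updated.
         */
        old_priority = i_disable(INTCLASS0);

        /* ... link req onto the driver's pinned request list ... */

        /* Always restore the priority that was in effect on entry. */
        i_enable(old_priority);
}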
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/ktechrf1/idisable.htm
CC-MAIN-2022-27
en
refinedweb
Links on Android Authority may earn us a commission. Learn more. Reduce your APK size with Android App Bundles and Dynamic Feature Creating an app that can run across the full range of Android devices is one of the biggest challenges facing Android developers. Even if you take the time to create code and resources optimized for all the different screen densities, CPU architectures, and languages, you can quickly end up with a whole new problem: a bloated APK full of code, resources, and assets the user doesn’t even need. A recent study from Google showed APK size directly affects the number of people who end up installing your app after visiting its Google Play page. For every 6MB increase in the size of your APK, you can expect to see a one percent decrease in the installation conversion rate. Everything you can do to reduce the size of your APK will increase the chances of the user downloading your app. Let’s look at Android App Bundle, a new publishing format that can help you support the full range of Android devices while reducing the size of your APK. By the end of this article, you’ll have configured, built, and tested a project supports the App Bundle format, and uploaded this Bundle to the Google Play Console, ready to publish and share with your users. Because APK size is such a big deal, I’ll also show you how to trim even more megabytes from your APK, by dividing your App Bundle into optional dynamic feature modules that users can download on demand. What is Android App Bundle? Previously, when it was time to publish your Android app, you had two options: - Upload a single APK with all the code and resources for the different device configurations that your app supports. - Create multi-APKs targeting specific device configurations. Each APK is a complete version of your app, but they all share the same Google Play listing. Now, Android developers have a third option: publish an Android App Bundle (.aab) and let Google Play handle the rest! Once you’ve uploaded your .aab file, Google Play will use it to generate the following: - A base APK. This contains all the code and resources required to deliver your app’s base functionality. Whenever a user downloads your app, this is the APK they’ll receive first, and every subsequent APK will depend on this base APK. Google Play generates the base APK from your project’s “app,” or base module. - Configuration APK(s). Every time someone downloads your app, Google Play will use the new Dynamic Delivery serving model, to deliver a configuration APK tailored for that specific device configuration. Google Play can also generate one or more dynamic feature APKs. Often, an application has one or even multiple features not required to deliver its core functionality, for example if you’ve developed a messaging app, not all of your users will need to send GIFs or emojis. When you build an App Bundle, you can reduce the size of your APK by separating these features into dynamic feature modules users can then download on demand, if required. If a user requests a dynamic feature module, Dynamic Delivery will serve them a dynamic feature APK containing only the code and resources required to run this specific feature, on the user’s specific device. In this article, I’ll be adding a dynamic feature module to our App Bundle. However, dynamic feature modules are currently still in beta, so if your Bundle includes dynamic feature modules you won’t be able to publish it to production (unless you enrol in the dynamic features beta program). 
Why should I use this new publishing format? The major benefit of Android App Bundles, is the reduced APK size. There’s evidence to suggest APK size is a huge factor in how many people install your application, so publishing your app as a Bundle can help ensure it winds up on as many devices as possible. If you’ve previously resorted to building multi-APKs, then Bundles can also simplify the build and release management process. Instead of navigating the complexity, potential for error, and general headaches of building, signing, uploading, and maintaining multiple APKs, you can build a single .aab, and let Google Play do all the hard work for you! However, there are a few restrictions. Firstly, APKs generated from the App Bundle must be 100MB or smaller. In addition, devices running Android 4.4 and earlier don’t support split APKs, so Google Play can only serve your App Bundle to these devices as multi-APKs. These multi-APKs will be optimized for different screen densities and ABIs, but they’ll include resources and code for every language your application supports, so users running Android 4.4 and earlier won’t save quite as much space as everyone else. Creating an app that supports the Android App Bundle You can publish an existing app in the App Bundle format, but to help keep things straightforward we’ll be creating an empty project, and then building it as an App Bundle. Create a new project with the settings of your choice. By default, the Google Play Console will take your App Bundle and generate APKs targeting all of the different screen densities, languages, and Application Binary Interfaces (ABI) your application supports. There’s no guarantee this default behavior won’t change in a subsequent update, so you should always be explicit about the behavior you want. To let the Play Console know exactly which APKs it should generate, open your project’s build.gradle file, and add a “bundle” block: { //To do// } } You can now specify whether Google Play should (“true”) or shouldn’t (“false”) generate APKs targeting specific screen densities, languages, and ABIs: { //Generate APKs for devices with different screen densities// density { enableSplit true } //Generate APKs for devices with different CPU architectures// abi { enableSplit true //Create a split APK for each language// } language { enableSplit true } The base module’s build.gradle file also determines the version code Google Play will use for all the APKs it generates from this Bundle. Testing your Android App Bundle When testing your app, you can either deploy a universal APK, or an APK from your Bundle optimized for the specific Android smartphone, tablet, or Android Virtual Device (AVD) you’re using to test your app. To deploy an APK from your App Bundle: - Select Run > Edit Configurations… from the Android Studio toolbar. - Open the Deploy dropdown, and select APK from app bundle. - Select Apply, followed by OK. Adding on-demand features with Dynamic Delivery While we could build an App Bundle at this point, I’m going to add a dynamic feature module, which will be included in our Bundle. To create a dynamic feature module: - Select File > New > New Module… from the Android Studio toolbar. - Select Dynamic Feature Module, and then click Next. - Open the Base application module dropdown, and select app. - Name this module dynamic_feature_one, and then click Next. - To make this module available on-demand, select the Enable on-demand checkbox. 
If your app supports Android 4.4 or earlier, then you’ll also need to enable Fusing, as this makes your dynamic feature module available as a multi-APK, which will run on Android 4.4 and earlier. - Next, give your module a title that’ll be visible to your audience; I’m using Dynamic Feature One. - Click Finish. Exploring the Dynamic Feature Module You can now add classes, layout resource files, and other assets to your dynamic feature module, just like any other Android module. However, if you take a look at your project’s build.gradle files and Manifest, you’ll notice some important differences: 1. The Dynamic Feature Module’s Manifest This defines some important characteristics for the dynamic feature module: <manifest xmlns: <dist:module //Whether the module should be available as an on-demand download// dist: //Whether to include this module in multi-APKs targeting Android 4.4 and earlier// <dist:fusing dist: </dist:module> </manifest> 2. The module’s build.gradle file This file applies the dynamic-feature plugin, which includes all the Gradle tasks and properties required to build an App Bundle includes a dynamic feature module. The build.gradle file should also name your base (“app”) module as a project dependency: apply plugin: 'com.android.dynamic-feature' android { compileSdkVersion 28 defaultConfig { minSdkVersion 24 targetSdkVersion 28 versionCode 1 versionName "1.0" } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation project(':app') } 3. The base feature module’s Manifest Every time you create a dynamic feature module, Android Studio will update your app module’s build.gradle file, to reference this dynamic module: dynamicFeatures = [":dynamic_feature_one"] } Requesting features at runtime Once you’ve created a dynamic feature module, you’ll need to give the user a way to request that module at an appropriate time. For example, if you’ve created a fitness application, tapping your app’s “Advanced exercises” menu might trigger a workflow that’ll download the dynamic “AdvancedExercises” module. To request a module, you’ll need the Google Play Core library, so open your base feature module’s build.gradle file, and add Core as a project dependency: dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'com.android.support:appcompat-v7:28.0.0' implementation 'com.android.support.constraint:constraint-layout:1.1.3' //Add the following// implementation 'com.google.android.play:core:1.3.5' Next, open the Activity or Fragment where you want to load your dynamic feature module, which in our application is MainActivity. To kickoff the request, create an instance of SplitInstallManager: splitInstallManager = SplitInstallManagerFactory.create(getApplicationContext()); } Next, you need to create the request: SplitInstallRequest request = SplitInstallRequest .newBuilder() A project can consist of multiple dynamic feature modules, so you’ll need to specify which module(s) you want to download. 
You can include multiple modules in the same request, for example: .addModule("dynamic_feature_one") .addModule("dynamic_feature_two") .build(); Next, you need to submit the request via the the asynchronous startInstall() task: splitInstallManager .startInstall(request) Your final task is acting on a successful download, or gracefully handling any failures that occur: .addOnSuccessListener(new OnSuccessListener<Integer>() { @Override //If the module is downloaded successfully...// public void onSuccess(Integer integer) { //...then do something// } }) .addOnFailureListener(new OnFailureListener() { @Override //If the module isn’t downloaded successfully….// public void onFailure(Exception e) { //...then do something// } }); } } Every time you upload a new version of your App Bundle, Google Play will automatically update all its associated APKs, including all your dynamic feature APKs. Since this process is automatic, once a dynamic feature module is installed on the user’s device, you don’t need to worry about keeping that module up-to-date. Here’s our completed MainActivity: import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import com.google.android.play.core.splitinstall.SplitInstallManager; import com.google.android.play.core.splitinstall.SplitInstallManagerFactory; import com.google.android.play.core.splitinstall.SplitInstallRequest; import com.google.android.play.core.tasks.OnFailureListener; import com.google.android.play.core.tasks.OnSuccessListener; public class MainActivity extends AppCompatActivity { private SplitInstallManager splitInstallManager = null; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); //Instantiate an instance of SplitInstallManager// splitInstallManager = SplitInstallManagerFactory.create(getApplicationContext()); } public void loadDyanmicFeatureOne() { //Build a request// SplitInstallRequest request = SplitInstallRequest .newBuilder() //Invoke the .addModule method for every module you want to install// .addModule("dynamic_feature_one") .build(); //Begin the installation// splitInstallManager .startInstall(request) .addOnSuccessListener(new OnSuccessListener<Integer>() { @Override //The module was downloaded successfully// public void onSuccess(Integer integer) { //Do something// } }) .addOnFailureListener(new OnFailureListener() { @Override //The download failed// public void onFailure(Exception e) { //Do something// } }); } } Giving your users instant access to Dynamic Feature Modules By default, the user will need to restart their app before they can access any of the code and resources associated with their freshly-installed dynamic feature mode. However, you can grant your users instant access, with no restart required, by adding SplitCompatApplication to your base (“app”) module’s Manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <application //Add the following// android:name="com.google.android.play.core.splitcompat.SplitCompatApplication" Testing your modular app Any dynamic feature modules you include in your project are entirely optional, so you’ll need to test how your app functions when the user installs different combinations of these modules, or even if they completely ignore your dynamic feature modules. When testing your app, you can choose which dynamic feature module(s) to include in the deployed APK: - Select Run > Edit Configurations… from the Android Studio toolbar. 
- Find the Dynamic features to deploy section and select the checkbox next to each dynamic feature module that you want to test. - Select Apply, followed by OK. You can now run this app on your Android smartphone, tablet, or AVD, and only the selected dynamic feature modules will be deployed. Get ready for Google Play: Building your Bundle Once you’re happy with your App Bundle, the final step is uploading it to the Google Play Console, ready to analyze, test, and eventually publish. Here’s how to build a signed version of your App Bundle: - Select Build > Generate Signed Bundle/APK from the Android Studio toolbar. - Make sure the Android App Bundle checkbox is selected, and then click Next. - Open the module dropdown, and select app as your base module. - Enter your keystore, alias and password, as usual, and then click Next. - Choose your Destination folder. - Make sure the Build Type dropdown is set to Release. - Click Finish. Android Studio will now generate your App Bundle, and store it in your AndroidAppBundle/app/release directory. Uploading your dynamic App Bundle To upload your App Bundle to Google Play: - Head over to the Google Play Console, and sign into your account. - In the upper-right corner, select Create Application. - Complete the subsequent form, and then click Create. - Enter the requested information about your app, and then click Save. - In the left-hand menu, select App releases. - Find the track that you want to upload your Bundle to, and select its accompanying “Manage” button. Just like an APK, you should test your Bundle via the internal, alpha and beta tracks, before publishing it to production. - On the subsequent screen, select Create release. - At this point, you’ll be prompted to enroll in App Signing by Google Play, as this provides a secure way to manage your app’s signing keys. Read the onscreen information and if you’re happy to proceed, click Continue. - Read the terms and conditions, and then click Accept. - Find the Android App Bundles and APKs to add section, and click its accompanying Browse files button. - Select the .aab file that you want to upload. - Once this file has been loaded successfully, click Save. Your Bundle will now have uploaded to the Google Play Console. How many APKs were included in your Bundle? The Google Play Console will take your Bundle and automatically generate APKs for every device configuration your application supports. If you’re curious, you can view all of these APKs in the Console’s App Bundle Explorer: - In the Console’s left-hand menu, select App releases. - Find the track where you uploaded your Bundle, and select its accompanying Edit release button. - Click to expand the Android App Bundle section. - Select Explore App Bundle. The subsequent screen displays an estimate of how much space you’ve saved, by supporting App Bundles. You can also choose between the following tabs: - APKs per device configuration. The base, configuration and dynamic feature APKs that’ll be served to devices running Android 5.0 and higher. - Auto-generated multi-APKs. The multi-APKs that’ll be served to devices running Android 5.0 and earlier. If your app’s minSdkVersion is Android 5.0 or higher, then you won’t see this tab. Finally, you can view a list of all the devices each APK is optimized for, by selecting that APK’s accompanying View devices button. The subsequent screen includes a Device catalogue of every smartphone and tablet your chosen APK is compatible with. 
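One code-level detail worth adding before wrapping up: because installed dynamic feature modules persist on the device, it is worth checking whether a module is already present before issuing a new request. The helper below is only a sketch — it assumes the getInstalledModules() method of SplitInstallManager is available in your version of the Play Core library, and it reuses the module name and loadDyanmicFeatureOne() method from the code earlier in this article.

//Sketch: skip the download if the module is already on the device//
private boolean isModuleInstalled(String moduleName) {
    return splitInstallManager.getInstalledModules().contains(moduleName);
}

//Example usage before building the request//
if (!isModuleInstalled("dynamic_feature_one")) {
    loadDyanmicFeatureOne();
}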
Wrapping up Now you can build, test, and publish an App Bundle, and know how to create a dynamic feature module users can download on demand. Do you think this new publishing format could take the pain out of supporting multiple Android devices? Let us know in the comments!
https://www.androidauthority.com/reduce-your-apk-size-with-android-app-bundles-923241/
CC-MAIN-2022-27
en
refinedweb
Using geometry functions

This notebook uses the arcgis.geometry module to compute the length of a path that the user draws on the map. The particular scenario is of a jogger who runs in Central Park in New York (without gizmos like GPS watches to distract him) and wants a rough estimate of his daily runs based on the path he takes.

The notebook starts out with a satellite map of Central Park in New York:

from arcgis.gis import GIS
from arcgis.geocoding import geocode
from arcgis.geometry import lengths

gis = GIS()
map1 = gis.map()
map1.basemap = "satellite"
map1

map1.height = '650px'
location = geocode("Central Park, New York")[0]
map1.extent = location['extent']
map1.zoom = 14

We want the user to draw a freehand polyline to indicate the paths that he takes for his runs. When the drawing operation ends, we use the GIS's geometry service to compute the length of the drawn path. We can do this by adding an event listener to the map widget that gets called when drawing is completed (i.e. on_draw_end). The event listener then computes the geodesic length of the drawn geometry using the geometry service and prints it out:

# Define the callback function that computes the length.
def calc_dist(map1, g):
    print("Computing length of drawn polyline...")
    length = lengths(g['spatialReference'], [g], "", "geodesic")
    print("Length: " + str(length[0]) + " m.")

# Set calc_dist as the callback function to be invoked when a polyline is drawn on the map
map1.on_draw_end(calc_dist)
map1.draw("polyline")

Now draw a polyline on the map representing the route taken by the jogger. To clear the drawn graphics afterwards, call:

map1.clear_graphics()
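Since the geometry service returns the length in meters, a small variation on the calc_dist callback above can also report the distance in units a runner is more likely to think in. Only the arithmetic below is new; the lengths call is unchanged from the example above.

# Variation on calc_dist: report the run in kilometers and miles as well.
def calc_dist_friendly(map_widget, g):
    result = lengths(g['spatialReference'], [g], "", "geodesic")
    meters = result[0]
    print("Length: {:.0f} m ({:.2f} km, {:.2f} mi)".format(
        meters, meters / 1000.0, meters / 1609.344))

# Register it the same way as before.
map1.on_draw_end(calc_dist_friendly)
map1.draw("polyline")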
https://developers.arcgis.com/python/samples/using-geometry-functions/
CC-MAIN-2022-27
en
refinedweb
paystack_sdk 0.0.1 Flutter Paystack SDK # This plugin provides an easy way to receive payments on Android and iOS apps with Paystack. It uses the native Android and iOS libraries under the hood and provides a unified API for initializing payment in a platform-agnostic way. The flow surroudning how Paystack payments work is well written in the Android library documentation, so we'll just skip all the formalities and demonstrate how to use this library. Usage # Step 1 - Add this plugin as a dependency to your flutter project # The good folks at Flutter explains how here Step 2 - Accept payment # This step assumes you've already built your UI for accepting card details from your application user. And of course, you have your Paystack public API key. Import the plugin in the file where you want to accept payments. import 'package:paystack_sdk/paystack_sdk.dart'; Next, initialize Paystack by proividing your public API key. You should preferably do this once, when the page loads and the public key value will remain set. You can subsequently use the SDK to receive payments multiple times in the page. Future<void> initPaystack() async { String paystackKey = "pk_test_xxxxxxxxxxxxxxx"; try { await PaystackSDK.initialize(paystackKey); // Paystack is ready for use in receiving payments } on PlatformException { // well, error, deal with it } } Receive payments already! initPayment() { // pass card number, cvc, expiry month and year to the Card constructor function var card = PaymentCard("5060666666666666666", "123", 12, 2020); // create a transaction with the payer's email and amount (in kobo) var transaction = PaystackTransaction("wisdom.arerosuoghene@gmail.com", 100000); // debit the card (using Javascript style promises) transaction.chargeCard(card) .then((transactionReference) { // payment successful! You should send your transaction request to your server for validation }) .catchError((e) { // oops, payment failed, a readable error message should be in e.message }); } Contributing # Contributions are most welcome. You could improve this documentation, add method to support more Paystack features or just clean up the code. Just do your thing and create a PR. This started out as a quick work to achieve paystack payments on Android and iOS. A lot could have been done better. 0.0.1 # Features # - Accept payment with email, amount and card details paystack_sdk_example # Demonstrates how to use the paystack_sdk plugin. Getting Started # For help getting started with Flutter, view our online documentation. Use this package as a library 1. Depend on it Add this to your package's pubspec.yaml file: dependencies: paystack_sdk: :paystack_sdk/paystack_sdk) 23 out of 23 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API. Format lib/payment_card.dart. Run flutter format to format lib/payment_card.dart. Format lib/paystack_transaction.dart. Run flutter format to format lib/paystack_transaction.dart. Maintenance suggestions Package is getting outdated. (-15.89 points) The package was last published 60 weeks ago. Package is pre-v0.1 release. (-10 points) While nothing is inherently wrong with versions of 0.0.*, it might mean that the author is still experimenting with the general direction of the API.
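If you prefer async/await over the JavaScript-style promise chain shown in the README above, the same flow can be written with a try/catch. This is only a stylistic sketch: it uses exactly the API names shown in this README and assumes chargeCard returns a Dart Future.

// Same flow as the README example, rewritten with async/await
// so the error handling reads as an ordinary try/catch.
Future<void> payWithCard() async {
  final card = PaymentCard("5060666666666666666", "123", 12, 2020);
  final transaction =
      PaystackTransaction("wisdom.arerosuoghene@gmail.com", 100000);
  try {
    final transactionReference = await transaction.chargeCard(card);
    // Payment succeeded: send the reference to your server for validation.
    print("Charged, reference: $transactionReference");
  } catch (e) {
    // Payment failed: a readable message should be available on the error object.
    print("Charge failed: $e");
  }
}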
https://pub.dev/packages/paystack_sdk
CC-MAIN-2020-05
en
refinedweb
Hi All, I am developing an application using Silverlight Ink. I have completed the writing and erasing functionality and it is working fine. Now I have to add the ability to save the content to an image (with a name entered by the user) when he clicks a save button. I am unable to get that done. I have used some code for this, but it is not working. When I try to build the app, it throws an error that says: The type or namespace name 'PngBitmapEncoder' could not be found (are you missing a using directive or an assembly reference?) I have added the library System.Windows.Media.Imaging; but no use. Please save me. Thanks in advance.
http://silverlight.net/forums/t/15715.aspx
crawl-001
en
refinedweb
Before I went away on holiday, a few people asked me about exception handing for asynchronous methods -- possibly because they were experimenting with the various async examples I had blogged about. While I don't have a lot of time (I'm supposed to be writing a lot of code documentation for a project I'm on --yuck), I need a break so let me share the error handling mechanism I usually employ for asynchrounous methods. It's nothing more sophisticated than adding an "ErrorText" member variable to the custom eventargs class; I pass types of this eventargs class to my callback function and, therefore, the callback function can examine the value of the property to see if there is any error text to deal with. This class serves as the messenger between the threads of execution. My custom eventargs class looks something like this (I've omitted the propery sets and gets and some other code for brevity): public class StatusEventArgs : EventArgs { public string ErrorText; public StatusEventArgs() { this.ErrorText = ""; } } When an exception is raised, I simply assign a value to the ErrorText property in the catch portion of my try-catch block (e.ErrorText = ex.ToString() ) and then I use the callback method I discussed earlier to communicate the condition back to the client. Nothing too fancy, I treat Exceptions in the async component like state in any other business object. There's many angles you could take with this. You could expose a generic Exception object from the StatusEventArgs class instead of the string variable and use e.TheExceptionObject = ex; you'd have access to the full exception type generated by the asynchronous method this way. I know some people who prefer using integer values to communicate Exception severity to the client, and this model would certainly accomodate that. Others may want to use a simple boolean for success or failure, and again the approach I present will suffice. The point is, extend the object you pass back and forth for asynchronous communications to include any error information you may be interested
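To make the flow concrete, here is a minimal sketch of how the pieces described above fit together: the asynchronous worker populates ErrorText in its catch block and invokes the callback, and the client-side callback inspects ErrorText to decide what happened. The AsyncWorker class and DoWorkCompleted method names are invented for illustration; only StatusEventArgs comes from the post.

public delegate void StatusCallback(StatusEventArgs e);

public class AsyncWorker
{
    // Kicked off on a background thread; reports back through the callback.
    public void DoWork(StatusCallback callback)
    {
        StatusEventArgs e = new StatusEventArgs();
        try
        {
            // ... the real asynchronous work goes here ...
        }
        catch (Exception ex)
        {
            // Treat the exception like any other piece of state on the messenger object.
            e.ErrorText = ex.ToString();
        }
        callback(e);
    }
}

// On the client side, the callback inspects ErrorText to decide what happened.
public void DoWorkCompleted(StatusEventArgs e)
{
    if (e.ErrorText.Length > 0)
    {
        // surface the error to the user, or log it
    }
    else
    {
        // success path
    }
}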
http://codebetter.com/blogs/grant.killian/archive/2003/09/24/1805.aspx
crawl-001
en
refinedweb
From: Jason Beech-Brandt (jason_at_ahpcrc_dot_org) Date: Tue Jul 18 2006 - 15:06:20 PDT Paul, Made these changes and did a clean build. Wouldn't build the udp-conduit, so I disabled it and it completed the build with the smp/mpi/gm conduits. Tested it with the gm-conduit with a simple hello.upc program and it does the correct thing. I haven't tried to build/run any of our application code with it yet. Thanks for the help. Jason Paul H. Hargrove wrote: > Jason, > > PGI versions prior to 6.1 lacked the required inline asm support. So, > you have a new enough version of the PGI compiler. However, the > corresponding support in bupc is not in the released 2.2.2. The > atomics support has undergone a significant re-write since the 2.2.x > series, and thus there is no simple patch to bring a 2.2.2 version up > to the current atomic support. However, the following 2-line change > *might* work: > > --- gasnet_atomicops.h 7 Mar 2006 23:36:46 -0000 1.76.2.5 > +++ gasnet_atomicops.h 17 Jul 2006 20:41:30 -0000 > @@ -53,7 +53,6 @@ > #if defined(GASNETI_FORCE_GENERIC_ATOMICOPS) || /* for debugging */ > \ > defined(CRAYT3E) || /* T3E seems to have no atomic ops */ \ > defined(_SX) || /* NEC SX-6 atomics not available to user > code? */ \ > - (defined(__PGI) && defined(BROKEN_LINUX_ASM_ATOMIC_H)) || /* > haven't implemented atomics for PGI */ \ > defined(__SUNPRO_C) || defined(__SUNPRO_CC) /* haven't > implemented atomics for SunCC */ > #define GASNETI_USE_GENERIC_ATOMICOPS > #endif > @@ -272,7 +271,7 @@ > * support for inline assembly code > * > ------------------------------------------------------------------------------------ > */ > #elif defined(__i386__) || defined(__x86_64__) /* x86 and > Athlon/Opteron */ > - #if defined(__GNUC__) || defined(__INTEL_COMPILER) || > defined(__PATHCC__) > + #if defined(__GNUC__) || defined(__INTEL_COMPILER) || > defined(__PATHCC__) || defined(__PGI) > #ifdef GASNETI_UNI_BUILD > #define GASNETI_LOCK "" > #else > > > However, it is entirely possible that this will trigger PGI bugs that > we work around in various ways in our development head. > > -Paul > > Jason Beech-Brandt wrote: >> > >
http://www.nersc.gov/hypermail/upc-users/0193.html
crawl-001
en
refinedweb
I am missing the above namespaces. I just installed Silverlight SDK 2.0 Beta 1, Silverlight Tools alpha for VS 2008 (not beta), and Silverlight runtime 2.0. A previous post said to install the 1.1 runtime, but installing the 2.0 runtime removed the 1.1 runtime. I can't be the first??

If you're developing for Silverlight 2, you need to install the beta tools. Silverlight 1.1 is deprecated and not supported by the SL 2 features.
--------- If this post has solved your problem, please select 'Mark as answer'. SDET, Microsoft Web Developer team

Thanks, I uninstalled the alpha tools, tried to install the beta tools, and got this error:

An Error Has Occurred: Silverlight Tools cannot be installed because one or more of the following conditions is true:
1. Visual Studio 2008 RTM is not installed. (I do have the RTM.)
2. The Web Authoring feature of Visual Studio is not installed. (How do I know if this is installed and enabled? I do see it in Add/Remove Programs; how is it enabled?)
3. A previous version of the Silverlight Runtime is installed. (Nope, when I installed 2.0 it uninstalled 1.0.)
4. A previous version of the Silverlight SDK is installed. (Nope, 2.0 is installed.)
5. The Visual Studio Update KB949325 is installed. (Couldn't find it installed.)
6. A previous version of Silverlight Tools is installed. (Nope, uninstalled.)
To continue, please install or uninstall the ...

Any other suggestions?

morgan-hill: (How do I know if this is installed and enabled? I do see it in Add/Remove Programs; how is it enabled?)
It's okay if you see it in Add/Remove Programs.

morgan-hill: (Nope, when I installed 2.0 it uninstalled 1.0.)
Okay.

morgan-hill: (Nope, 2.0 is installed.)
*** Here is the important thing: please uninstall the Silverlight 2 SDK. This is a known issue.

morgan-hill: (Couldn't find it installed.)
You can check the "updates" checkbox in Add/Remove.

More info: please take a look at this post.
(If this has answered your question, please click on "Mark as Answer" on this post. Thank you!) Best Regards, Michael Sync

Thanks! That worked.
http://silverlight.net/forums/t/15141.aspx
crawl-001
en
refinedweb
I was in a project retrospective meeting today, and I’ve got another one tomorrow. I think we had a fairly low rate of defects compared to the complexity of the code, but we had some definite inefficiency in the amount of time it took us to detect and resolve issues. Some of the defects were clearly preventable, so it’s time to reflect on our development approaches. Defects are an inevitable fact in any nontrivial project. If you think about it, a lot of the activities in a software project are dedicated to the detection and elimination of defects. The problem with defects is the inherent overhead associated with the administrative process of tracking defects. The efficiency of your project can be greatly affected by excessive churn as issues bounce back and forth between developers and testers. Not to mention decreased team morale (and increased management stress) as the bug count rises. Much of our project retrospectives have involved determining ways to staunch the flow of defects by preventing them. The first step is to think about the sources of our defects. The second step is to determine what practices can be applied to cut down our defects. Another topic dear to my heart is optimizing the total time it takes to fix a bug. Our approach is based on Scrum and XP, so I’m naturally thinking about how and where Agile practices can both help reduce the number of defects and make defect fixes more efficient. Agile development isn’t a silver bullet for defects, but I’ve observed a quality of “smoothness” in the disciplined Agile projects I’ve been on that clearly wasn’t there on previous waterfall projects (or sloppy Agile projects for that matter). Finding Defects Early It’s generally accepted in the software industry that defects are easier to correct the sooner they are detected. One of the best things about Agile development is involving the testers early inside of rapid iterations. It’s so much simpler to fix a defect in code that you’ve written a couple of days previously than it is to spelunk code that’s 4-5 months old. Not to mention the difficulty of fixing a bug is much greater if it’s intertwined with a great deal of the later code (I do think that TDD alleviates this to some degree by more or less forcing developers to write loosely coupled code). Developer Mistakes Most defects are simply a result of developer error. Most of these defects are simple in nature. Disciplined Test Driven Development goes a long way towards eliminating a lot of bugs. Writing the tests first in the TDD manner means that our code behaves exactly the way we intended. I feel pretty confident about making the statement that TDD drastically mitigates the number of bugs due to simple developer error. Looking closely at the bugs that we had on the last project showed an obvious correlation between the areas of code with poor unit test coverage and the defects that I felt were mostly attributable to developer error. If you’re suffering a rash of defects, it’s worth your time to think about how you’re doing unit testing and find a remedy. Of course it’s not that hard to create incorrect unit tests, but that’s where Pair Programming should come into play. Having another developer actively involved with both the unit testing and the coding should act as a continuous code review to correct the unit tests. There is also the issue of whether a developer fully understands the code they’re writing. Having a second mind engaged on the coding problem at hand should increase the total understanding. 
The simple act of talking about a coding problem with another developer can lead to a better understanding of the code. Not Understanding Requirements We clearly think that requirements defects have been our Achilles Heel so far. Consistently using TDD and CI means that our code mostly worked the way we developers intended, but doesn’t guarantee that we’re creating the correct functionality. As I see it, there are three issues here: 1. Determining the requirements 2. Communicating the requirements in an unambiguous manner to the developers to create a shared understanding between the analysts, developers, and testers 3. Automating the conformance to the requirements to stop the “Ping Pong” iterations of fixing defects Our thinking right now is to utilize FitNesse as the primary mechanism to solve these three issues by doing Acceptance Test Driven Development. I’ll blog much more on this later, because we’re still figuring out how this impacts our iteration management and who’s responsible for what work and when. In the meantime, I’d strongly recommend picking up a copy of Fit for Developing Software : Framework for Integrated Tests by Ward Cunningham and Rick Mugridge for a background on using FIT for acceptance testing. My first experience with FIT wasn’t all that positive, but I’m rapidly changing my mind after reading the book. I’m optimistic about FitNesse so far. Edge Cases Lately I’ve been dealing with about a dozen defects that can only be described as “edge cases.” These bugs are a combination of inputs or actions that nobody anticipated. Some of these bugs might just be from some missed analysis, but a lot of these bugs are never going to be caught until later in the project when the team has a much better understanding of the project domain. Either way, I think the appropriate action is to turn to the tester and just say “Good catch, I’ll get right on it.” I think that Agile practices indirectly contribute to catching and eliminating these kinds of defects. By more quickly eliminating the defects in the mainline code logic with TDD and Acceptance Testing, testers *should* have more time to do the kind of intensive exploratory testing that finds problems like the one Jonathon Kohl talks about here. There’s also the very real benefit of the automated test suites acting as a safety net to mitigate the creation of new regression bugs. This section would be a lot longer, but I think Charles Miller sums up the subject better anyway right here. Invalid Testing Environment Occasionally something will go wrong in the testing environment that basically invalidates any and all test runs. Maybe a testing database isn’t available, a URL to a web service is configured incorrectly (can you say scar tissue?), or a Windows service isn’t correctly installed on a test server. All of these things lead to testers either sitting on their hands waiting for you to get the test environment fixed, or report a batch of bugs that aren’t necessarily due to coding mistakes. On previous projects I’ve often been saddled with bugs that arose because the database stored procedures were updated through a different process than the middle tier code. A particularly irksome situation is when the new version doesn’t get correctly installed before the tester tries to re-test (we had an issue with this last week with an MSI installer created with WiX). This kind of project friction needs to be eliminated. One of the best tools is an automated build script chained to a Continuous Integration practice. 
At this point, I would unequivocally say that any project team that doesn’t have a dependable automated build of some sort is amateurish, period. A good automated build can shut down the chances of an invalid testing environment. Using CI should serve to keep the testers from wasting their finite time on obviously incorrect builds. CI also reduces the amount of time between checking in bug fixes and making the code push to the testing environment while simultaneously improving the reliability of the code pushes. I’d also recommend creating a small battery of environment tests that run in your testing environment after code moves just to validate that all the moving pieces (databases, web services, Windows services, etc.) are accessible from the test application. My team just inherited a product with quite a few external dependencies that are hard to troubleshoot from the integration tests. We’ll be writing some automated tests just to diagnose environment issues before the integration tests run in the automated builds. I’ve developed a strategy for self-validating configuration based on my StructureMap tool that’s relevant. I’ll blog on this soon. I an iteration kickoff meeting-- "Guys, I know you're just becoming familiar with XYZ, but could you tell us when every un-scheduled downtime event is going to be this year?" Promptly followed by: "Can you go ahead and estimate the time to fix the bugs we haven't found yet?" He was kidding. I think. James Shore has been running a series of somewhat controversial posts about continuous integration... If you've never heard the term "Technical Debt," check out Martin Fowler's definition and Ward Cunningham's Wiki on the subject. The triumvirite TDD coding cycle of "Red, Green, Refactor" is plastered all over the web and endlessly repeated by all us TDD zombies. If you're going to drink at the agile Koolaid fountain, don't stop with just "Red Bar, Green Bar." You've got to do the third part and refactor as you work to keep the code clean. By itself the "Green Bar" is not some sort of holy talisman that wards off all coding evil. Your code may be passing all of its unit tests today, but can you easily add more code to the solution tomorrow? Is your code becoming difficult to read and understand? Are you spending too much time with your debugger? Is there a class or namespace you avoid changing out of fear? You will slow down if you let problems accumulate. If you're using evolutionary or incremental design you probably don't know exactly where you're going with your design. You are purposely delaying design decisions until you have more information and knowledge about the design needs. Keeping the code clean and each class cohesive with low coupling will maximize your ability to change later. Applications are often discarded and replaced when they become uneconomical or too risky to alter. Constant vigilence and an adherence to good coding practice can extend the lifecycle of a system by reducing the risk and cost of change. Use aggressive refactoring to keep the technical debt from building up. Integrate refactoring into your normal working rhythm, don't let it just build up into day long tasks. Here's the good news though -- a lot of refactoring work is quick and easy, especially if you'll invest some time in learning the automatic refactorings in tools like ReSharper. Look for opportunities to make quick structural improvements. 
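To make "quick structural improvement" concrete, here is a tiny invented before-and-after showing the kind of thirty-second Extract Method pass being described — the ReportBuilder classes and their inputs are made up purely for illustration.

using System;

public class ReportBuilderBefore
{
    public string Build(string customerName, decimal total)
    {
        // Guard clause buried inline; the reader has to parse it before seeing the real work.
        if (customerName == null || customerName.Length == 0 || total < 0)
            throw new ArgumentException("bad input");
        return customerName + ": " + total.ToString("C");
    }
}

public class ReportBuilderAfter
{
    public string Build(string customerName, decimal total)
    {
        validateInput(customerName, total);   // after Extract Method: the intent reads in one line
        return customerName + ": " + total.ToString("C");
    }

    private void validateInput(string customerName, decimal total)
    {
        if (customerName == null || customerName.Length == 0 || total < 0)
            throw new ArgumentException("bad input");
    }
}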
Here's a sample checklist of quick refactorings (with the ReSharper shortcut combinations) to make on your code as you work. As you work, constantly scan and analyze the code you've just written with something like this little checklist and make small refactorings. At a bare minimum, make a refactoring pass before any check in. The technical debt metaphor is a very apt description. Delaying refactoring is a lot like compounding interest payments on your credit card. The longer you wait, the more it'll cost in the end. On a WinForms project last year we had an absurdly complex navigation scheme. Moving from one screen to another screen involved a variety of security checks, "dirty" screen checks, and activation logic. We knew we needed to refactor our screen controllers to support the navigation in a generalized way, but the team was under severe schedule pressure to make iterations. Against my better judgement I agreed to put off the refactoring to push through more new stories. To my chagrin, a junior pair worked on a couple of new navigation stories and created even more spaghetti code on top of the existing smelly code. The end result was that what should have been a 4-6 hour refactoring turned into about 20+ hours of work. We eliminated the spaghetti code by implementing a Layer Supertype pattern in the controller classes to generalize the navigation checks (CanLeave(), TryEnter(), Start(), etc.). New screens went faster once we made the refactoring. Needless to say, we missed our iteration and the project bogged down. The moral of the story is to recognize and act on the need to refactor earlier rather than later. Doing something quick and dirty only gives you a short burst of velocity. You'll pay for it over the long run through reduced coding velocity. It's like a wide receiver in (American) football making a diving catch. You can only get away with one dive at the end of the run, then you gotta pick yourself off the ground. Brush your teeth twice a day and see the dentist occasionally, and everything is copacetic. Bypass refactoring work and you'll either slow down the team as the code gets harder and more risky to change, or perform an expensive root canal restructuring on your application to bring it back to health. The most important thing is to be constantly reevaluating your code and design every single day as you work. One last rant, don't ever let a project manager or non-coder get away with telling you that "you can just refactor it later." That's a little bit like saying you can rest when you're dead. Refactoring != "throw it away and do it over." Don't ever fall into that trap. PM's seem to assume we're just goldplating because they often don't understand the technical situation and the lost efficiency caused by sloppy coding. I'm no longer saddled with bad project managers, but they're certainly lurking out there. Do you have some checks to add to the list, or want to disagree with some of the list? I'd be happy to hear your thoughts. little team is taking over development for one of our other products. Today is the first day of the typical "I'm getting the app up on my box and the NAnt build doesn't work on my box" hell. One of the things I'm seeing in trying to debug the broken unit tests is a bag of Singleton's that perform data access. Just to make it more fun, one singleton uses another singleton to get its configuration. 
My real problem is that the database connection string isn't correct yet, but the test failures exposed some larger structural issues to address later. Look at the code below for a second. It's nothing special really, just a generic Singleton that serves as a gateway to some kind of resource, but this little bit of innocuous code can thoroughly hose the testability of any application. public class Singleton { private static Singleton _instance; // Lazy initialization because that's more *efficient* public static Singleton GetInstance() { if (_instance == null) { lock (typeof(Singleton)) { if (_instance == null) { _instance = new Singleton(); } } } return _instance; } private object _something; private Singleton() { _something = getSomethingFromDatabase(); } private object getSomethingFromDatabase() { // go fetch some kind of configuration information from the database return null; } } So what's so wrong with this? Plenty. The instance of Singleton touches the database in its constructor function. Unit testing any class that depends on the Singleton class automatically includes some sort of live data access -- or does it? One of the problems with singleton's in unit testing is the lack of control over when the singleton was created. To the best of my knowledge, NUnit does not guarantee the order of unit test execution, so the singleton instance will be instantiated by whichever unit test happens to run first. Why is this such a bad thing? Because your tests aren't starting from a known state and therefore the results of the test are not reliable. For another thing your unit test isn't a unit test, it's an integration test. Integration tests are cool too, but unit tests might be more important from the standpoint of creating working code. One of the qualifications and benefits of a unit test is that it pinpoints the exact trouble spot in the code when it fails. If a coarse-grained test fails, you've got to look in a lot more code to find the exact problem. If there's a database involved, then you're troubleshooting search widens outside of your code. It's possible to test against a live database if you can control the database state, but why do all that extra work if you don't need to? Solve one problem at a time and data access is often a separate issue. One obvious way to tell whether code is adequately unit tested at a fine-grained level is the amount of time spent with the debugger to fix problems. If you're using the debugger a lot, you really need to reconsider your unit testing approach. Forget TDD for a second. Using a stateful singleton opens yourself up to all kinds of threading issues. Not many developers that write business software are going to be threading experts. When you screw up threading safety you can create really wacky bugs that are devilishly hard to reproduce. You do love bugs that can't be reproduced don't you? The code I'm looking at today seems to handle the thread safety issues quite well, but all the "lock(this, that, or the other)" code blows up the lines of code count. It adds complexity to the code and makes the code harder to understand and read. Add in a bunch of tracing code and you end up with a whole lot of noise code surrounding a little piece of code that actually does something useful. My biggest issue with the code is whether or not it was really necessary in the first place. One of the truisms in software development you bump into occasionally is that "Premature Optimization is the root of all evil" in software development. 
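For reference, here is roughly what that last workaround looks like in code — a static backdoor that lets a test replace the singleton instance with a stub. This is only a sketch built on the Singleton class above, trimmed of the locking and database code; the InjectInstance and Reset members (and the protected constructor) are additions for testing, not part of the original class.

public class Singleton
{
    private static Singleton _instance;

    public static Singleton GetInstance()
    {
        if (_instance == null)
        {
            _instance = new Singleton();
        }
        return _instance;
    }

    // Test backdoor: lets a unit test push in a stub built without touching the database.
    public static void InjectInstance(Singleton stub)
    {
        _instance = stub;
    }

    // Test backdoor: forces the next GetInstance() call to rebuild the real instance.
    public static void Reset()
    {
        _instance = null;
    }

    // Protected rather than private so a no-data-access stub subclass is possible.
    protected Singleton() { }
}

// In a test fixture's setup:
//   Singleton.InjectInstance(new StubSingleton());   // StubSingleton : Singleton, no data access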
I'm guessing that the singletons were put in place due to performance concerns. Do it a simple way first and forget caching out of the gate. The performance bottlenecks are almost never where you think they'll be anyway. By doing things the simple way upfront you can leave yourself with more time later when the real performance problems become evident. Don't do anything that makes the code harder to understand unless you absolutely have to. My best advice is to not use caching via static members until you absolutely have to for performance or resource limitation reasons. If you do need singleton-like functionality, you might try some of the techniques from this article. You can also get around the singleton issues by just being able to replace the single instance with a mock or stub from a static member. It works, but it's not my preference. One. Kicking). Let’s be pretty clear about this. You’re doing pairing primarily to gain in throughput, not as a training tool. That being said, pairing can be a very effective way to transfer knowledge and mentor junior coders. Something to keep in mind as you pair is to look at the bigger picture of the project. Sometimes pairing with a much junior developer will make the immediate task take longer than it would if you were flying solo, but that’s not completely the point. You (I) must be patient. If you can use the pairing experience to improve the effectiveness of your colleagues the team as a whole will come out ahead in the end in terms of team velocity. To employ a sports metaphor in basketball, the very best players are considered to be the ones who can make their teammates better. I was thrown into leadership positions very early in my career when I was still struggling with improving my own personal velocity, so I never really grasped the value of improving team velocity until much later. One of the very real advantages of agile development is the concept of collective ownership and putting the focus on a team completing working software instead of checking off your own personal deliverables. I think it’s a subtle shift, but it’s really changed my outlook on software development. Someday I'll even internalize that kind of thinking. One simple benefit of pairing for me has been picking up IDE tricks and learning new tools from other developers. Don’t underestimate how much faster you can be with a good toolbox of keyboard shortcuts. Last summer my team was a very early adopter of the original ReSharper beta (I’m gonna wait a few more builds before I try the ReSharper 2.0 beta this time though). We had several guys that had quite a bit of experience with the IntelliJ IDE for Java. These guys suddenly became much quicker mechanically than I and the others who weren’t IntelliJ users. We made a new rule for a little while that the driver had to yell out the keyboard shortcut they were using to make the ReSharper magic happen. A couple months later I was coding solo on an airplane and realized how much faster I was getting with the IDE. Other examples I can vividly remember was my first exposure to WinForms development, Subversion tricks from my “BuildMaster” guru colleague, and getting other developers to use TestDriven.Net for faster cycling between coding and unit testing with NUnit. The..
http://codebetter.com/blogs/jeremy.miller/archive/2005/08.aspx
crawl-001
en
refinedweb
These preprocessor directives extend only across a single line of code. As soon as a newline character is found, the preprocessor directive is considered to end. No semicolon (;) is expected at the end of a preprocessor directive. The only way a preprocessor directive can extend through more than one line is by preceding the newline character at the end of the line by a backslash (\). #define identifier replacement When the preprocessor encounters this directive, it replaces any occurrence of identifier in the rest of the code by replacement. This replacement can be an expression, a statement, a block or simply anything. The preprocessor does not understand C++, it simply replaces any occurrence of identifier by replacement. #define TABLE_SIZE 100 int table1[TABLE_SIZE]; int table2[TABLE_SIZE]; After the preprocessor has replaced TABLE_SIZE, the code becomes equivalent to: int table1[100]; int table2[100]; This use of #define as constant definer is already known by us from previous tutorials, but #define can work also with parameters to define function macros: #define getmax(a,b) a>b?a:b This would replace any occurrence of getmax followed by two arguments by the replacement expression, but also replacing each argument by its identifier, exactly as you would expect if it was a function: // function macro #include <iostream> using namespace std; #define getmax(a,b) ((a)>(b)?(a):(b)) int main() { int x=5, y; y= getmax(x,2); cout << y << endl; cout << getmax(7,x) << endl; return 0; } 5 7 Defined macros are not affected by block structure. A macro lasts until it is undefined with the #undef preprocessor directive: #define TABLE_SIZE 100 int table1[TABLE_SIZE]; #undef TABLE_SIZE #define TABLE_SIZE 200 int table2[TABLE_SIZE]; This would generate the same code as: int table1[100]; int table2[200]; Function macro definitions accept two special operators (# and ##) in the replacement sequence:If the operator # is used before a parameter is used in the replacement sequence, that parameter is replaced by a string literal (as if it were enclosed between double quotes) #define str(x) #x cout << str(test); This would be translated into: cout << "test"; The operator ## concatenates two arguments leaving no blank spaces between them: #define glue(a,b) a ## b glue(c,out) << "test"; This would also be translated into: Because preprocessor replacements happen before any C++ syntax check, macro definitions can be a tricky feature, but be careful: code that relies heavily on complicated macros may result obscure to other programmers, since the syntax they expect is on many occasions different from the regular expressions programmers expect in TABLE_SIZE int table[TABLE_SIZE]; #endif In this case, the line of code int table[TABLE_SIZE]; is only compiled if TABLE_SIZE was previously defined with #define, independently]; Notice how the whole structure of #if, #elif and #else chained directives ends with #endif. The behavior of #ifdef and #ifndef can also be achieved by using the special operators defined and !defined respectively in any #if or #elif directive: #if !defined TABLE_SIZE #define TABLE_SIZE 100 #elif defined ARRAY_SIZE #define TABLE_SIZE ARRAY_SIZE int table[TABLE_SIZE]; The #line directive allows us to control both things, the line numbers within the code files as well as the file name that we want that appears when an error takes place. 20 "assigning variable" int a?; This code will generate an error that will be shown as error in file "assigning variable", line 20. 
#ifndef __cplusplus #error A C++ compiler is required! #endif This example aborts the compilation process if the macro name __cplusplus is not defined (this macro name is defined by default in all C++ compilers). #include "file" #include <file> The only difference between both expressions is the places (directories) where the compiler is going to look for the file. In the first case where the file name is specified between. Therefore, standard header files are usually included in angle-brackets, while other specific header files are included using quotes. If the compiler does not support a specific argument for #pragma, it is ignored - no error is generated. For example: // standard macro names #include <iostream> using namespace std; int main() { cout << "This is the line number " << __LINE__; cout << " of file " << __FILE__ << ".\n"; cout << "Its compilation began " << __DATE__; cout << " at " << __TIME__ << ".\n"; cout << "The compiler gives a __cplusplus value of " << __cplusplus; return 0; } This is the line number 7 of file /home/jay/stdmacronames.cpp. Its compilation began Nov 1 2005 at 10:12:29. The compiler gives a __cplusplus value of 1
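One everyday use of #ifndef, #define and #endif that complements the examples above is the include guard, which keeps a header file from being processed twice in the same translation unit. The file and macro names here are just an example:

// myclass.h -- include guard: the second and later #includes of this file
// find MYCLASS_H already defined, so the preprocessor skips the whole body.
#ifndef MYCLASS_H
#define MYCLASS_H

class MyClass {
  public:
    int value;
};

#endif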
http://www.cplusplus.com/doc/tutorial/preprocessor.html
crawl-001
en
refinedweb
short a=2000; int b; b=a;: class A {}; class B { public: B (A a) {} }; A a; B b=a; Here, a implicit conversion happened between objects of class A and class B, because B has a constructor that takes an object of class A as parameter. Therefore implicit conversions from A to B are allowed. short a=2000; int b; b = (int) a; // c-like cast notation b = int (a); // functional notation: // class; } The program declares a pointer to CAddition, but then it assigns to it a reference to an object of another incompatible type using explicit type-casting: padd = (CAddition*) &d;new_type (expression) but each one with its own special characteristics:: class CBase { }; class CDerived: public CBase { }; CBase b; CBase* pb; CDerived d; CDerived* pd; pb = dynamic_cast<CBase*>(&d); // ok: derived-to-base pd = dynamic_cast<CDerived*>(&b); // wrong: base-to-derived: // dynamic_cast #include ; } Null pointer on second type-cast The code tries to perform two dynamic casts from pointer objects of type CBase* (pba and pbb) to a pointer object of type CDerived*, but only the first one is successful. Notice their respective initializations: CBase * pba = new CDerived; CBase * pbb = new CBase;_cast is thrown instead. dynamic_cast can also cast null pointers even between pointers to unrelated classes, and can also cast pointers of any type to void pointers (void*). class CBase {}; class CDerived: public CBase {}; CBase * a = new CBase; CDerived * b = static_cast<CDerived*><int>(d); Or any conversion between classes with explicit constructors or operator functions as described in "implicit conversions" above. It can also cast pointers to or from integer types. The format in which this integer value represents a pointer is platform-specific. The only guarantee is that a pointer cast to an integer type large enough to fully contain it, is granted to be able to be cast back to a valid pointer. The conversions that can be performed by reinterpret_cast but not by static_cast have no specific uses in C++ are low-level operations, whose interpretation results in code which is generally system-specific, and thus non-portable. For example: class A {}; class B {}; A * a = new A; B * b = reinterpret_cast<B*>(a); This is valid C++ code, although it does not make much sense, since now we have a pointer that points to an object of an incompatible class, and thus dereferencing it is unsafe. // const_cast #include <iostream> using namespace std; void print (char * str) { cout << str << endl; } int main () { const char * c = "sample text"; print ( const_cast<char *> (c) ); return 0; } sample text When typeid is applied to classes typeid uses the RTTI to keep track of the type of dynamic objects. 
When typeid is applied to an expression whose type is a polymorphic class, the result is the type of the most derived complete object: // typeid, polymorphic class #include <iostream> #include <typeinfo> #include <exception> using namespace std; class CBase { virtual void f(){} }; class CDerived : public CBase {}; int main () { try { CBase* a = new CBase; CBase* b = new CDerived; cout << "a is: " << typeid(a).name() << '\n'; cout << "b is: " << typeid(b).name() << '\n'; cout << "*a is: " << typeid(*a).name() << '\n'; cout << "*b is: " << typeid(*b).name() << '\n'; } catch (exception& e) { cout << "Exception: " << e.what() << endl; } return 0; } a is: class CBase * b is: class CBase * *a is: class CBase *b is: class CDerived Notice how the type that typeid considers for pointers is the pointer type itself (both a and b are of type class CBase *). However, when typeid is applied to objects (like *a and *b) typeid yields their dynamic type (i.e. the type of their most derived complete object). If the type typeid evaluates is a pointer preceded by the dereference operator (*), and this pointer has a null value, typeid throws a bad_typeid exception.
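As a short illustration of that last point, the following sketch follows the same pattern as the typeid example above, but dereferences a null pointer to a polymorphic type so that the bad_typeid path is taken:

// typeid on a dereferenced null pointer of polymorphic type throws bad_typeid
#include <iostream>
#include <typeinfo>
using namespace std;

class CBase { virtual void f() {} };

int main () {
  CBase* p = 0;
  try {
    cout << typeid(*p).name() << '\n';   // *p is a null dereference: typeid throws here
  }
  catch (bad_typeid& e) {
    cout << "bad_typeid caught: " << e.what() << '\n';
  }
  return 0;
}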
http://www.cplusplus.com/doc/tutorial/typecasting.html
crawl-001
en
refinedweb
On Tue, 15 Mar 2005, Michael Loftis wrote: > >staff. We were using it for around 10k users, so it might work for you > >with 400k, but you are going to need Cyrus gardners. We have found it It will work fine with 30~50k users per server, if it is beefy enough (REAAALY big ammount of RAM, and very fast IO). If you use ridiculous overpowered (usually non-Linux) boxes, that number could go much higher. It scales easily to more servers (in a flat IMAP namespace, even. I.e. you have cluster-wide shared folders). But don't try it with anything less than upstream 2.2, or Debian 2.1.17, if you don't want trouble. > >to be extremely cranky and in need of constant babysitting. The > >various databases are often getting corrupted, causing mysterious or > >non-existant errors which only become clear when using strace to walk Looks like Cyrus 1.x or 2.0 or 2.1 *without* the extensive ammount of patching that is in the Debian package, and using early versions of Debian's (or upstream) berkeley DB 4.x. Debian Cyrus 2.1 and upstream 2.2 plus a sane BDB 4.2 (like the one in Debian sid/sarge) or BDB 3.2 (slower, but really mature and trouble-free in Debian, at least) should not give you any trouble. > >through the assembly calls. Quota information often just disappears or > >gets corrupted. Because of cronic problems and bugs (like runaway or I have seen reports of this, but not in Debian Cyrus 2.1. > >halted processes which prevented any mail from getting delivered) we I should try to kill -STOP a Cyrus lmtpd process during writes to see if it still hangs the mailbox that was being written to, though. A whole lot of design changes and extra resilience code was added that tries to avoid these issues to Debian 2.1, and even more of it to upstream 2.2. > >had to turn off the 'features' of squat indexes and duplicate delivery > >prevention. The program that is supposed to "repair" your broken stuff > >is actually a no-op and nobody knows why, nor does anyone appear > >interested in fixing it. Fixing squat is damn easy, just remove the indexes with find and/or regenerate the indexes. The duplicate delivery database is easy too (just delete it, or run the usual berkeley DB recovery tools). I *do* wish someone would rewrite that damn squat code to be resilient against corruption, but I am not touching that one. Not that you need squat unless your users like 10000+ mails per folder, at which point courrier-imap should give you crappy performance due to the lack of indexes AFAIK, and you might need Cyrus anyway for that reason alone. The mailbox database can be backup-dumped to cleartext at any time, and regenerated from that without any fuss. In berkeley DB mode, backups of the database are made at every checkpoint (for a large number of users, you better checkpoint every 5 minutes or so, or face long BDB startup times). The mailbox repair tool (reconstruct) works just fine and can be fired remotely through IMAP. All it does not do anymore is to try to regenerate the mailbox database I think, and that I have covered above. The quota database can be rebuilt (and in fact, I do it regularly from cron where possible, just in case -- I do not trust the quota code in 2.1 that much). Seen databases are a problem. IF they get corrupted you will probably have to resort to losing some seen-flag data to fix it. the mailbox repair tool will do it automatically AFAIK, except in very few cases (in 2.1. I sure hope there are none left in 2.2) where you have to manually delete the damn indexes for the repair tool to work. 
> million emails/day. The only issue we've run into is an occasional > deadlock with POP3 that requires rebooting. And that happens once every > few months, if that. This is with 2.1.17. The Debian packages should fix that POP3 deadlock possibility, I think. Something else that can cause POP3 deadlocks is apop (enabled by default), if your box runs out of /dev/random entropy. Just disable apop if you don't need it. But a deadlock that requires a reboot? Don't you mean a Cyrus restart? If it requires a _reboot_, something is very wrong indeed and I had never heard of anythink like this. > Most of our problems go back to incoherent clients, mostly Outlook, but > occasionally Thunderbird -- turn on it's junk mail filtering, file to say Amem to that. I think Cyrus 2.2 and Debian Cyrus 2.1 can handle most outlooketies just fine, but I would not bet anything on it. One must always be on the look out for any new version of OutLook and all the bugs that it brings with it... > >entire point of maildirs). For example, maildir makes backing up and > >restoring ranges of email very easy. Cyrus does that too, since it is just MH + indexes. You just reload from backup media and run a reconstruct in that single mailbox, and incremental backups work just fine (the message data is never changed by Cyrus once in the spool, just the indexes). Hmm, come to think about it, maildir is probably superior in that it stores the message flags in-band which is nicer for restores -- cyrus will mark all restored messages as unread and lose all other flags on that message as well). OTOH, how does one manage to have annotations and per-user flags (such as seen state) using maildir? > There is one serious problem. He'll need multiple servers since the unix > UID limit hits at 65535. So you can only get about 64k users created per Huh? More like 2^31 in Debian Linux 2.4/2.6 kernels. But over 64K users per box will mean bad performance unless these servers are really something else, or your users are not that much active to begin with, so I don't think this counts against courrier-imap. > I've yet to find a good way to backup and restore cyrus mail...basically > it's a pain to do. Thankfully we've only ever very rarely had a need to > restore mail. There is a lot of stuff about it on the Cyrus wiki and ML archives. I too rarely need to restore mail, and simple amanda incremental backups work just fine for that (the effort it takes on restores is caused by amanda, not Cyrus). > With courier I dont' know if it has any ability like the MURDER system in > Cyrus that allows you to create a scaleable (not redundant) cluster and Redundancy using murder is coming soon, I think. So far, people do the usual HA things for redundancy (but don't even think about Cyrus on top of NFS :P). > have the system automatically route mail internally to the correct mail > store system. You can do the same with LDAP and postfix though with Cyrus. > and say something like perdition. AS LONG AS you do not need cluster-wide shared mailboxes, at which point a Cyrus murder cluster is the only thing that will work, AFAIK (and I would love to hear otherwise!). > That's a LOT of mail. You could easily need half a dozen very beefy boxes > to handle that much mail depending on how much spam/virus/etc features you > want. Indeed. SPAM and AV processing for this would require a small cluster of workers of its own, with good local IO and lots of CPU power (and a good enough ammount of RAM that you can use for IO caching). 
-- "One disk to rule them all, One disk to find them. One disk to bring them all and in the darkness grind them. In the Land of Redmond where the shadows lie." -- The Silicon Valley Tarot Henrique Holschuh
http://lists.debian.org/debian-isp/2005/03/msg00173.html
crawl-002
en
refinedweb
Updating a Database Using Flash MX Professional 2004

Table of Contents: Introduction - The Basics - Update Packets in Detail - Result Packets in Detail - A Sample Update Packet Parser - Setting up the ColdFusion Component - Building the Flash Application - Summary

Building the Flash Application

Now I'm going to show you how to build a simple application that makes use of the ColdFusion component to load data from the database and save data back to it.

- Open Flash MX Professional 2004 and create a new Flash document.
- Drag a DataSet component to the Stage and name it ds. Drag a DataGrid component to the Stage, name it grid, set its x,y position to 0,0, set its width to 540, and set its height to 240. In the Component Inspector, set its editable property to true.
- Databind ds.dataProvider to grid.dataProvider by following these steps:
  - Select the DataSet component on the Stage and choose the Bindings tab in the Component Inspector.
  - Click the + button to add a new binding.
  - In the Add Binding dialog box, select the dataProvider property and click the OK button.
  - In the Component Inspector, select the bound to property of the new binding, and click the button next to it.
  - In the Bound To dialog box, select the DataGrid, <grid> item in the Component Path list box. You'll see a list of the DataGrid properties in the Schema Location list box on the right.
  - Select the dataProvider property, and click the OK button to complete your task (see Figure 1).
  Figure 1. Task completed: ds.dataProvider is bound to grid.dataProvider
- Databind ds.selectedIndex to grid.selectedIndex following in outline the directions in step 3.
- Add schema to the DataSet component to describe the expected fields. Click the DataSet component, and select the Schema tab in the Component Inspector.
  - Click the leftmost + button to add a component property. Type id into the field name text box, and select Number from the data type drop-down list. This tells the DataSet to expect an id field in the data, and that its value is a number.
  - Following the instructions above, add the following fields and data types: rate as Number; duration as Number; billable as Boolean; notes as String (see Figure 2).
  Figure 2. The finished schema of the DataSet component
- Click the Stage to deselect all selected components and type the following code into the Actions panel (this code sets up a connection to the ColdFusion component and calls the getSQLData() function to get data):

#include "NetServices.as"
#include "NetDebug.as"
//Set up the NetServices connection to the ColdFusion component.
var con:NetConnection = NetServices.createGatewayConnection( "" );
_global.srv = con.getService( "rdbmsProcessor", this );
//Call the function with a query to get the data.
_global.srv.getSQLData( "timetrax", "select notes, billable, duration, rate, duration, id from timeentry" );
//Called with the result from the getSQLData function, this places the data into the dataset.
function getSQLData_Result( result ) {
    ds.dataProvider = result;
}

- To add the ability to add new records, drag a Button control to the Stage, set its label property to Add, and type the following code into the Actions panel:

on(click){
    _parent.ds.addItem();
}

- To add the ability to delete records, drag another Button control to the Stage, set its label property to Delete, and type the following code into the Actions panel:

on(click){
    _parent.ds.removeItem();
}

- Test the movie. Verify that you are getting data in the grid and that you can add, edit, and delete records.
At this point, there is no way to save your changes. If you cannot modify records, review the steps above, and verify that you have ColdFusion MX set up correctly with the ColdFusion component and database by typing the following into your browser:*%20from%20timeentry

- To add the ability to save your changes, drag an RDBMSResolver component to the Stage and name it rs. In the Component Inspector, specify the following properties:
  - Set the tableName property to TimeEntry.
  - Click the button next to the fieldInfo property.
  - In the Values dialog box, click the + button.
  - In the area below, type id into the fieldName property, and click the OK button (see Figure 3).
  Figure 3. Filling in the Values dialog box
  This instructs the RDBMSResolver component that the id field is the key field for this table (see Figure 4).
  Figure 4. The RDBMSResolver component configured
- With the RDBMSResolver component still selected, type the following into the Actions panel:

on(beforeApplyUpdates) {
    //Add an attribute to the update packet to tell the CFC
    //that we want it to generate values for the id field
    var firstChild:XMLNode = eventObj.updatePacket.firstChild;
    firstChild.attributes.autoIncField = "id";
    trace(eventObj.updatePacket);
    //Now send the update packet to the parsing method in the CFC
    _global.srv.saveSQLData( "timetrax", eventObj.updatePacket.toString());
}

- Databind rs.deltaPacket to ds.deltaPacket using in outline the directions in step 3.
- Drag a Button control to the Stage, set its label property to Save, and type the following into the Actions panel:

on(click) {
    _parent.ds.applyUpdates();
}

- To catch the result packet coming back from the CFC, click the Stage to deselect any selected components, and type the following code into the Actions panel:

//Catches the result packet returned by the CFC, and puts it into the resolver
function saveSQLData_Result( result ) {
    trace(result);
    var resultDoc:XML = new XML();
    resultDoc.ignoreWhite = true;
    resultDoc.parseXML(result);
    rs.updateResults = resultDoc;
}

- To add in the ability to view errors returned in the result packet, select the DataSet and type the following in the Actions panel:

on(resolveDelta) {
    var errs:Array = eventObj.data;
    for( var i:Number=0; i<errs.length; i++ ) {
        trace( errs[i].getMessage());
    }
}

That's it. You can now play around with the application. Try adding, editing, and deleting records. Take a look at the contents of the update packet and result packets in the output panel. Notice how the value for the id field is automatically created within the server function and how the DataSet picks up and displays the updated value in the grid. Of course, you could also enhance the functionality of this application by adding the ability to deal with errors returned from the server. You can see a working example of error resolution in Paul Gubbay's Time Entry sample application.
http://www.adobe.com/devnet/flash/articles/delta_packet07.html
crawl-002
en
refinedweb
sign (#) introduces a comment: - -. Configuring Name Server Lookups Using resolv.conf. Resolver Robustness. How DNS Works D. Figure 6.1: A part of the domain namespace. Name Lookups with DNS] ".:# /usr/sbin named.boot File.. - directory - This option specifies a directory in which zone files reside. Names of files in other options may be given relative to this directory. Several directories may be specified by repeatedly using directory. The Linux file system standard suggests this should be /var/named. - primary -. - secondary -. - cache -). - forwarders -. - slave - This statement makes the name server a slave server. It never performs recursive queries itself, but only forwards them to servers specified in the forwarders statement."; };. The DNS Database Files:[domain] [ttl] [class] type rdata: - domain - This term. - class - This is an address class, like IN for IP addresses or HS for objects in the Hesiod class. For TCP/IP networking, you have to specify IN. If no class field is given, the class of the preceding RR is assumed. - type - This describes the type of the RR. The most common types are A, SOA, PTR, and NS. The following sections describe the various types of RRs. - rdata -. - SOA -. - serial -. - refresh -. - retry -. - expire -. - minimum -). - A -. - CNAME -. It is used for reverse mapping of IP addresses to hostnames. The hostname given must be the canonical hostname. - MX - This RR announces a mail exchanger for a domain. Mail exchangers are discussed in "Mail Routing on the Internet". The syntax of an MX record is:[domain] [ttl] [class] MX preference host. A named.boot file for a caching-only server looks like this:; named.boot file for caching-only server directory /var/named primary 0.0.127.in-addr.arpa named.local ; localhost network cache . named.ca ; root servers In addition to this named.boot file, you must set up the named.ca file with a valid list of root name servers. You could copy and use Example 6.10 for this purpose. No other files are needed for a caching-only server configuration. Writing the Master Files.[4] [4]. Example 6.10: The named.ca File; ; /var/named/named.ca Cache file for the brewery. ; We're not on the Internet, so we don't need ; any root servers. To activate these ; records, remove the semicolons. ; ;. ;.ROOT-SERVERS.NET internet address = 198.41.0.10 K.ROOT-SERVERS.NET internet address = 193.0.14.129 L.ROOT-SERVERS.NET internet address = 198.32.64.12 M.ROOT-SERVERS.NET internet address = 202.12.27.33 To see the complete set of available commands, use the help command in nslookup. Other Useful Tools. Back to: Sample Chapter Index Back to: Linux Network Administrator's Guide, 2nd Edition © 2001, O'Reilly & Associates, Inc. webmaster@oreilly.com
http://oreilly.com/catalog/linag2/book/ch06.html
crawl-002
en
refinedweb
Comments for Using Delphi as a script language for ASP.NET 2009-07-09T13:42:54-07:00 Using Delphi as a script language for ASP.NET CHP User 2003-05-07T02:35:54-07:00 2003-05-07T02:35:54-07:00 Using Delphi as a script language for ASP.NET This probably isn't the place to be posting this, but I'm lost as to where I should post it - sorry in advance :). Anyway, I've been playing around with ASP.NET with Delphi and managed to do most things that I wanted. However, I've been attempting to create a simple custom control by dumping a load of HTML into an ascx file and adding the @register tag (using tagname/src attr.). The way the asp.net compiler currently works means that this doesn't work since all the units use the same namespace (ASP) and hence are called "unit ASP;". They then have this in the uses clause which kills the compilation. Without thinking this through, I had decided that all I needed to do was remove the ASP from the uses clause and all would be fine. So I wrote an Execution Proxy to remove the reference on the fly. Of course this doesn't work because the code still needs to reference the "registered" units (which are called ASP). I appreciate that dccasp is still in preview so I'm being a little optimistic if I think everything will work, but I was wondering if you have any suggestions as to how I might create a workaround for this.Many thanks. Error found in web.config jun pattugalan 2003-03-31T04:29:29-07:00 2003-03-31T04:29:29-07:00 Error found in web.config i receive this error message once i tried to run the example.parser error message : File or assembly name DelphiProvider, or one of its dependencies was not found.Line 5: <Add assembly="DelphiProvider" />*note*im using win2000 advanced server even the service packs are updated. Internal error S1888 ??? Laszlo Kertesz 2003-01-21T06:20:15-08:00 2003-01-21T06:20:15-08:00 Internal error S1888 ??? When I try to run the editdemo sample found in, I get the compilationerror"Fatal: Internal error: S1888". No informations found on the net. What to donow? (I have the updated D7 .NET Preview, and the .NET Framework SP2installed.) re: dccasp.exe was not found error Laszlo Kertesz 2003-01-21T06:17:55-08:00 2003-01-21T06:17:55-08:00 re: dccasp.exe was not found error Solved. After setting something on the wirtual directory properties, this error don't comes. dccasp.exe was not found error Laszlo Kertesz 2003-01-21T01:40:38-08:00 2003-01-21T01:40:38-08:00 dccasp.exe was not found error Following the sample in this article, I get this error if I want to see the aspx in my browser:Compiler 'c:\winnt\microsoft.net\framework\v1.0.3705\dccasp.exe' was not found I have copied the dccasp.exe into this directory from the "Delphi for .NET Preview\aspx\framework" folder, but has no effect, the error is the same. Don't understand. What to do now? Laszlo Using Delphi as a script language for ASP.NET Ray Thorpe 2002-11-29T13:02:35-08:00 2002-11-29T13:02:35-08:00 Using Delphi as a script language for ASP.NET I asked Danny Thorpe where was the delphi unit code for this article. He said to ask you. I'm a little lost as to what the complier is going to compile. 
Could you post all of the code for Delphi as a scripting language for ASP Demo.Thanksrayrthorpe@directvinternet.com Problem with the demo Craven Weasel 2002-11-25T18:29:22-08:00 2002-11-25T18:29:22-08:00 Problem with the demo I've just tried to run the editdemo.aspx and I get an ASP error in the browser: "Compiler Error Message: The compiler failed with error code 1.C:\WINNT\Microsoft.NET\Framework\v1.0.3705\Temporary ASP.NET Files\vslive\5af99da7\a2fb00b7\pnx5wyl8.0.pas(11) Fatal: Unit not found: 'Borland.Delphi.System.pas' or binary equivalents (DCU,DPU)"The .NET Framework (SP2) and Delphi.NET Preview (latest release) all installed correctly (works with no problem if I create apps in Delphi7 and compile them with dccil). The Delphi.NET ASP directory structure is:/Delphi for .NET Preview /aspx (contains the editdemo.aspx file) /bin /demos /doc /setupfiles /source /unitsIt's almost like the compiler cannot find the specified .pas file.Could anyone help me with this please? Thanks in advance. How Do You Print This Page? Bruce Inglish 2002-08-15T15:37:20-07:00 2002-08-15T15:37:20-07:00 How Do You Print This Page? As with the earlier comment I have found this article unprintable. I can't even seem to get the article to behave using an HTML editor and manipulating the table properties. So...How about adding a button that would re-render the story in an HTML printable format or perhaps in PDF? Surely the Story Server software supports such a thing. re: Convert 'Using Delphi as a script language for ASP.NET' into PDF file hegx geng xi 2002-08-14T19:56:43-07:00 2002-08-14T19:56:43-07:00 re: Convert 'Using Delphi as a script language for ASP.NET' into PDF file I have been waiting the moment coming. Convert 'Using Delphi as a script language for ASP.NET' into PDF file Wyatt Wong 2002-08-12T23:19:27-07:00 2002-08-12T23:19:27-07:00 Convert 'Using Delphi as a script language for ASP.NET' into PDF file I suggest Borland to re-publish this document in PDF format so as to eliminate the need to do the scrolling horizontally in order to read the full text.Besides, if you printed this document out, all the words on the rightmost margin will be 'lost'.
http://edn.embarcadero.com/article/28974/atom
crawl-002
en
refinedweb
This project lets you create Java classes on-the-fly at runtime, with any superclass, interfaces, and methods you like. Great for creating JavaBeans from dynamic data! Similar to java.lang.reflect.Proxy but more powerful.

Recent project activity:
- Anonymous created the "PropertyChangeSupport come up with an empty array" artifact
- simon_massey created the "BeanCreator methods should accept optional class/interfaces" artifact
- simon_massey created the "This project rocks. Such a useful library." forum thread
- simon_massey created the "BeanCreator.MapInvocationHandler as top level public class" artifact
- simon_massey created the "PropertyMethodNameTx should be public class" artifact
- simon_massey created the "maven version" artifact
- jb_raley commented on the "How can I specify the package name for a class?" artifact
- maxcsaucdk created the "How can I specify the package name for a class?" artifact
- jb_raley committed patchset 25 of module dynclass to the Dynamic Class Generation CVS repository, changing 1 file
- Changed to zlib/libpng license
crawl-002
en
refinedweb
If you are not familiar with .NET, you should first read Appendix C or pick up a copy of .NET Framework Essentials by Thuan Thai and Hoang Lam (O'Reilly, 2001). If you are already familiar with the basic .NET concepts, such as the runtime, assemblies, garbage collection, and C# (pronounced "C sharp"), continue reading. This chapter shows you how to create .NET serviced components that can take advantage of the COM+ component services that you have learned to apply throughout this book.

Developing Serviced Components

A .NET component that takes advantage of COM+ services needs to derive from the .NET base class ServicedComponent. ServicedComponent is defined in the System.EnterpriseServices namespace. Example 10-1 demonstrates how to write a .NET serviced component that implements the IMessage interface and displays a message box with "Hello" in it when the interface's ShowMessage( ) method is called.

Example 10-1: A simple .NET serviced component

namespace MyNamespace
{
   using System.EnterpriseServices;
   using System.Windows.Forms;//for the MessageBox class

   public interface IMessage
   {
      void ShowMessage( );
   }
   /// <summary>
   /// Plain vanilla .NET serviced component
   /// </summary>
   public class MyComponent:ServicedComponent,IMessage
   {
      public MyComponent( ) {}//constructor
      public void ShowMessage( )
      {
         MessageBox.Show("Hello!","MyComponent");
      }
   }
}

WARNING: A serviced component is not allowed to have parameterized constructors. If you require such parameters, you can either design around them by introducing a Create( ) method that accepts parameters, or use a constructor string.

You will see how to apply serviced component attributes throughout this chapter as you learn about configuring .NET components to take advantage of the various COM+ services. I recommend that you put as many design-level attributes as possible (such as transaction support or synchronization) in the code and use the Component Services Explorer to configure deployment-specific details.

.NET Assemblies and COM+ Applications

When you wish to take advantage of COM+ component services, you must map the assembly containing your serviced components to a COM+ application. That COM+ application then contains your serviced components, just like any other component--COM+ does not care whether the component it provides services to is a managed .NET serviced component or a classic COM, unmanaged, configured component. A COM+ application can contain components from multiple assemblies, and an assembly can contribute components to more than one application, as shown in Figure 10-1.

Compare Figure 10-1 to Figure 1-8. There is an additional level of indirection in .NET because an assembly can contain multiple modules. However, setting up an assembly to contribute components to more than one COM+ application is not straightforward and is susceptible to future registrations of the assembly. As a rule, avoid mapping an assembly to more than one COM+ application.

Specifying Application Name

You can provide .NET with an assembly attribute, specifying the name of the COM+ application you would like your components to be part of, by using the ApplicationName assembly attribute:

[assembly: ApplicationName("MyApp")]

If you do not provide an application name, .NET uses the assembly name. The ApplicationName attribute (and the rest of the serviced components attributes) is defined in the System.EnterpriseServices namespace.
You must add this namespace to your project references and reference that namespace in your assembly information file:

using System.EnterpriseServices;

Understanding Serviced Component Versions

Before exploring the three registration options, you need to understand the relationship between an assembly's version and COM+ components. Every managed client of your assembly is built against the particular version of the assembly that contains your components, whether they are serviced or regular managed components. .NET zealously enforces version compatibility between the client's assembly and any other assembly it uses. The assembly's version is the product of its version number (major and minor numbers, such as 3.11) and the build and revision numbers. The version number is provided by the developer as an assembly attribute, and the build or revision numbers can be generated by the compiler--or the developer can provide them himself. The semantics of the version and build or revision numbers tell .NET whether two particular assembly versions are compatible with each other, and which of the two assemblies is the latest. Assemblies are compatible if the version number is the same. The default is that different build and revision numbers do not indicate incompatibility, but a difference in either major or minor number indicates incompatibility. A client's manifest contains the version of each assembly it uses. At runtime, .NET loads for the client the latest compatible assemblies to use, and latest is defined using the build and revision numbers.

All this is fine while everything is under tight control of the .NET runtime. But how would .NET guarantee compatibility between the assembly's version and the configuration of the serviced components in the COM+ Catalog? The answer is via the COM+ component's ID. The first time a serviced component is added to a COM+ application, the registration process generates a CLSID for it, based on a hash of the class definition and its assembly's version and strong name. Subsequent registration of the same assembly with an incompatible version is considered a new registration for that serviced component, and the component is given a new CLSID. This way, the serviced component's CLSID serves as its configuration settings version number. Existing managed clients do not interfere with one another because each gets to use the assembly version it was compiled with. Each managed client also uses a particular set of configuration parameters for the serviced components, captured with a different CLSID. When a managed client creates a serviced component, the .NET runtime creates for it a component from an assembly with a compatible version and applies the COM+ configuration of the matching CLSID.

Manual Registration

To register your component manually, use the RegSvcs.exe command-line utility. (In the future, Visual Studio.NET will probably allow you to invoke RegSvcs from the visual environment itself.) The basic command line specifies the target COM+ application and the assembly to register:

RegSvcs.exe MyApp MyAssembly.dll

Or if the name is specified in the assembly:

RegSvcs.exe MyAssembly.dll

In any case, you must create that COM+ application in the Component Services Explorer beforehand; otherwise, the previous command line will fail. You can instruct RegSvcs to create the application for you using the /c switch:

RegSvcs.exe /c MyApp MyAssembly.dll

Or if the name is specified in the assembly:

RegSvcs.exe /c MyAssembly.dll

When using the /c switch, RegSvcs creates a COM+ application, names it accordingly, and adds the serviced components to it. If the Catalog already contains an application with that name, the registration fails.
You can also ask RegSvcs to try to find a COM+ application with that name and, if none is found, create one. This is done using the /fc switch:

RegSvcs.exe /fc MyApp MyAssembly.dll

Or if the name is specified in the assembly:

RegSvcs.exe /fc MyAssembly.dll

By default, RegSvcs does not override the existing COM+ application (and its components) settings. If that assembly version is already registered with that COM+ application, then RegSvcs does nothing. If that version is not registered yet, it adds the new version and assigns new CLSIDs. Reconfiguring an existing version is done explicitly using the /reconfig switch:

RegSvcs.exe /reconfig /fc MyApp MyAssembly.dll

The /reconfig switch causes RegSvcs to reapply any application, component, interface, and method attributes found in the assembly to the existing version and use the COM+ default settings for the rest, thus reversing any changes you made using the Component Services Explorer.

RegSvcs names the COM+ components it adds after their fully qualified class names. For example, when you add the serviced component in Example 10-1 to the COM+ Catalog, RegSvcs names it MyNamespace.MyComponent. RegSvcs also generates and registers a type library, so that the components can be accessed by nonmanaged clients (COM clients). The default type library filename is <Assembly name>.tlb--the name of the assembly with a .tlb extension.

Dynamic Registration

When a managed client creates a serviced component from an assembly that has not yet been registered with COM+, the .NET runtime registers the assembly automatically. Dynamic registration requires that the user invoking it be a member of the Windows 2000 Administrator group. It has this requirement because dynamic registration makes changes to the COM+ Catalog; if the user invoking it is not a member of the Windows 2000 Administrator group, dynamic registration fails.

Programmatic Registration

Both RegSvcs and dynamic registration use a .NET class called RegistrationHelper to perform the registration. RegistrationHelper implements the IRegistrationHelper interface, whose methods are used to register and unregister assemblies. For example, the InstallAssembly( ) method registers the specified assembly in the specified COM+ application (or the application specified in the assembly). This method is defined as:

public void InstallAssembly(string assembly, ref string application,
                            ref string tlb, InstallationFlags installFlags );

The installation flags correspond to the various RegSvcs switches. See the MSDN Library for additional information on RegistrationHelper. You can use RegistrationHelper yourself as part of your installation program; for more information, see the section "Programming the COM+ Catalog" later in this chapter. A short usage sketch appears after the ProgId attribute discussion below.

The ApplicationID Attribute

Instead of (or in addition to) an application name, you can identify the hosting COM+ application by its application ID (a GUID), using the ApplicationID assembly attribute. Using application ID comes in handy when deploying the assembly in foreign markets--you can provide a command-line localized application name for every market while using the same application ID for your administration needs internally. The ApplicationID attribute is defined in the System.EnterpriseServices namespace.

The Guid Attribute

Instead of having the registration process generate a CLSID for your serviced component, you can specify one for it using the Guid attribute:

using System.Runtime.InteropServices;

[Guid("260C9CC7-3B15-4155-BF9A-12CB4174A36E")]
public class MyComponent :ServicedComponent,IMyInterface
{...}

The Guid attribute is defined in the System.Runtime.InteropServices namespace.

The ProgId Attribute

Instead of having the registration process generate a name for your serviced component (namespace plus component name), you can specify one for it using the ProgId attribute:

using System.Runtime.InteropServices;

[ProgId("My Serviced Component")]
public class MyComponent :ServicedComponent,IMyInterface
{...}

The ProgId attribute is defined in the System.Runtime.InteropServices namespace.
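Returning to the Programmatic Registration option described above, here is a minimal sketch of an installation class that calls RegistrationHelper directly. It is only a sketch: the assembly and application names are hypothetical, and the choice of InstallationFlags.FindOrCreateTargetApplication is my assumption of the flag that mirrors the /fc behavior--consult the MSDN Library for the full set of installation flags.

using System.EnterpriseServices;

public class MySetup
{
   public static void RegisterServicedComponents( )
   {
      RegistrationHelper helper = new RegistrationHelper( );

      string application = "MyApp"; //hypothetical COM+ application name
      string tlb = null;            //let RegistrationHelper pick the type library name

      //Find or create the target application (assumed to correspond to /fc),
      //then add the serviced components in the assembly to it
      helper.InstallAssembly("MyAssembly.dll", ref application, ref tlb,
                             InstallationFlags.FindOrCreateTargetApplication);
   }
}

A setup program would typically call such a method once, during installation, so that end users never trigger dynamic registration (and therefore do not need administrative privileges at runtime).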
Configuring Serviced Components You can use various .NET attributes to configure your serviced components to take advantage of COM+ component services. The rest of this chapter demonstrates this service by service, according to the order in which the COM+ services are presented in this book. Application Activation Type To specify the COM+ application's activation type, you can use the ApplicationActivationassembly attributes. You can request that the application be a library or a server application: [assembly: ApplicationActivation(ActivationOption.Server)] or: [assembly: ApplicationActivation(ActivationOption.Library)] If you do not provide the ApplicationActivationattribute, then .NET uses a library activation type by default. Note that this use differs from the COM+ default of creating a new application as a server application. TIP: The next release of Windows 2000, Windows XP (see Appendix B), allows a COM+ application to be activated as a system service, so I expect that ApplicationActivationwill be extended to include the value of ActivationOption.Service. Before I describe other serviced components attributes, you need to understand what attributes are. Every .NET attribute is actually a class, and the attribute class has a constructor (maybe even a few overloaded constructors) and, usually, a few properties you can set. The syntax for declaring an attribute is different from that of any other class. In C#, you specify the attribute type between square brackets [...]. You specify constructor parameters and the values of the properties you wish to set between parentheses (...). In the case of the ApplicationActivationattribute, there are no properties and the constructor must accept an enum parameter of type ActivationOption, defined as: enum ActivationOption{Server,Library} There is no default constructor for the ApplicationActivationattribute. The ApplicationActivationattribute is defined in the System.EnterpriseServicesnamespace. Your must add this namespace to your project references and reference that namespace in your assembly information file: using System.EnterpriseServices; The rest of this chapter assumes that you have added these references and will not mention them again. TIP: A client assembly that creates a serviced component or uses any of its base class ServicedComponentmethods must add a reference to System.EnterpriseServicesto its project. Other clients, which only use the interfaces provided by your serviced components, need not add the reference. The Description Attribute The Descriptionattribute allows you to add text to the description field on the General Properties tab of an application, component, interface, or method. Example 10-2 shows how to apply the Descriptionattribute at the assembly, class, interface, and method levels. After registration, the assembly-level description string becomes the content of the hosting COM+ application's description field; the class description string becomes the content of the COM+ component description field. The interface and method descriptions are mapped to the corresponding interface and method in the Component Services Explorer. 
Example 10-2: Applying the Description attribute at the assembly, class, interface, and method levels [assembly: Description("My Serviced Components Application")] [Description("IMyInterface description")] public interface IMyInterface { [Description("MyMethod description")] void MyMethod( ); } [Description("My Serviced Component description")] public class MyComponent :ServicedComponent,IMyInterface { public void MyMethod( ){} } Accessing the COM+ Context To access the COM+ context object's interfaces and properties, .NET provides you with the helper class ContextUtil. All context object interfaces (including the legacy MTS interfaces) are implemented as public static methods and public static properties of the ContextUtilclass. Because the methods and properties are static, you do not have to instantiate a ContextUtilobject--you should just call the methods. For example, if you want to trace the current COM+ context ID (its GUID) to the Output window, use the ContextIdstatic property of ContextUtil: using System.Diagnostics;//For the Trace class Guid contextID = ContextUtil.ContextId; String traceMessage = "Context ID is " + contextID.ToString( ); Trace.WriteLine(traceMessage); ContextUtilhas also properties used for JITA deactivation, transaction voting, obtaining the transactions and activity IDs, and obtaining the current transaction object. You will see examples for how to use these ContextUtilproperties later in this chapter. COM+ Context Attributes You can decorate (apply attributes to) your class with two context-related attributes. The attribute MustRunInClientContextinforms COM+ that the class must be activated in its creator's context: [MustRunInClientContext(true)] public class MyComponent :ServicedComponent {...} When you register the class above with COM+, the "Must be activated in caller's context" checkbox on the component's Activation tab is selected in the Component Services Explorer. If you do not use this attribute, the registration process uses the default COM+ setting when registering the component with COM+ --not enforcing same-context activation. As a result, using MustRunInClientContextwith a falseparameter passed to the constructor is the same as using the COM+ default: [MustRunInClientContext(false)] Using attributes with the COM+ default values (such as constructing the MustRunInClientContextattribute with false) is useful when you combine it with the /reconfigswitch of RegSvcs. For example, you can undo any unknown changes made to your component configuration using the Component Services Explorer and restore the component configuration to a known state. The MustRunInClientContextattribute class has an overloaded default constructor. If you use MustRunInClientContextwith no parameters, the default constructor uses truefor the attribute value. As a result, the following two statements are equivalent: [MustRunInClientContext] [MustRunInClientContext(true)] The second COM+ context-related attribute is the EventTrackingEnabledattribute. It informs COM+ that the component supports events and statistics collection during its execution: [EventTrackingEnabled(true)] public class MyComponent2:ServicedComponent {...} The statistics are displayed in the Component Services Explorer. When you register this class with COM+, the "Component supports events and statistics" checkbox on the component's Activation tab is checked in the Component Services Explorer. 
If you do not use this attribute, the registration process does not use the default COM+ setting of supporting events when registering the component with COM+. The .NET designers made this decision consciously to minimize creation of new COM+ contexts for new .NET components; a component that supports statistics is usually placed in it own context. The EventTrackingEnabledattribute class also has an overloaded default constructor. If you construct it with no parameters, the default constructor uses truefor the attribute value. As a result, the following two statements are equivalent: [EventTrackingEnabled] [EventTrackingEnabled(true)] COM+ Object Pooling The ObjectPoolingattribute is used to configure every aspect of your component's object pooling. The ObjectPoolingattribute(MinPoolSize = 3,MaxPoolSize = 10,CreationTimeout = 20)] public class MyComponent :ServicedComponent {...} The MinPoolSize, MaxPoolSize, and CreationTimeoutproperties are public properties of the ObjectPoolingattribute). The ObjectPoolingattribute has a Boolean property called the Enabledproperty. If you do not specify a value for it ( trueor false), the attribute's constructor sets it to true. In fact, the attribute's constructor has a few overloaded versions--a default constructor that sets the Enabledproperty to trueand a constructor that accepts a Boolean parameter. All constructors set the pool parameters to the default COM+ value. As a result, the following three statements are equivalent: [ObjectPooling] [ObjectPooling(true)] [ObjectPooling(Enabled = true)] TIP: If your pooled component is hosted in a library application, then each hosting Application Domain will have its own pool. As a result, you may have multiple pools in a single physical process, if that process hosts multiple Application Domains. Under COM, the pooled object returns to the pool when the client releases its reference to it. Managed objects do not have reference counting--.NET uses garbage collection instead. A managed pooled object returns to the pool only when it is garbage collected. The problem with this behavior is that a substantial delay between the time the object is no longer needed by its client and the time the object returns to the pool can occur. This delay may have serious adverse effects on your application scalability and throughput. An object is pooled because it was expensive to create. If the object spends a substantial portion of its time waiting for the garbage collector, your application benefits little from object pooling. There are two ways to address this problem. The first solution uses COM+ JITA (discussed next). When you use JITA, the pooled object returns to the pool after every method call from the client. The second solution requires client participation. ServicedComponent has a public static method called DisposeObject( ), defined as: public static void DisposeObject(ServicedComponent sc); When the client calls DisposeObject( ), passing in an instance of a pooled serviced component, the object returns to the pool immediately. DisposeObject( )has the effect of notifying COM+ that the object has been released. Besides returning the object to the pool, DisposeObject( )disposes of the context object hosting the pooled object and of the proxy the client used. 
For example, if the component definition is: public interface IMyInterface { void MyMethod( ); } [ObjectPooling] public class MyComponent : ServicedComponent,IMyInterface { public void MyMethod( ){} } When the client is done using the object, to expedite returning the object to the pool, the client should call DisposeObject( ): IMyInterface obj; Obj = (IMyInterface) new MyComponent( ); obj.MyMethod( ); ServicedComponent sc = obj as ServicedComponent; If(sc != null) ServicedComponent.DisposeObject(sc); However, calling DisposeObject( )directly is ugly. First, the client has to know that it is dealing with an object derived from ServicedComponent, which couples the client to the type used and renders many benefits of interface-based programming useless. Even worse, the client only has to call DisposeObject( )if this object is pooled, which couples the client to the serviced component's configuration. What if you use object pooling in only one customer site, but not in others? This situation is a serious breach of encapsulation--the core principle of object-oriented programming. The solution is to have ServicedComponentimplement a special interface (defined in the Systemnamespace) called IDisposable, defined as: public interface IDisposable { void Dispose( ); } ServicedComponentimplementation of Dispose( )returns the pooled object to the pool. Having the Dispose( )method on a separate interface allows the client to query for the presence of IDisposableand always call it, regardless of the object's actual type: IMyInterface obj; obj = (IMyInterface) new MyComponent( ); obj.MyMethod( ); //Client wants to expedite whatever needs expediting: IDisposable disposable = obj as IDisposable; if(disposable != null) disposable.Dispose( ); The IDisposabletechnique is useful not only with serviced components, but also in numerous other places in .NET. Whenever your component requires deterministic disposal of the resources and memory it holds, IDisposableprovides a type-safe, component-oriented way of having the client dispose of the object without being too coupled to its type.type. COM+ Just-in-Time Activation .NET managed components can use COM+ JITA to efficiently handle rich clients (such as .NET Windows Forms clients), as discussed in Chapter 3. To enable JITA support for your component, use the JustInTimeActivationattribute: [JustInTimeActivation(true)] public class MyComponent :ServicedComponent {..} When you register this component with COM+, the JITA checkbox in the Activation tab on the Component Services Explorer is selected. If you do not use the JustInTimeActivationattribute, JITA support is disabled when you register your component with COM+ (unlike the COM+ default of enabling JITA). The JustInTimeActivationclass default constructor enables JITA support, so the following two statements are equivalent: [JustInTimeActivation] [JustInTimeActivation (true)] Enabling JITA support is just one thing you need to do to use JITA. You still have to let COM+ know when to deactivate your object. You can deactivate the object by setting the done bit in the context object, using the DeactivateOnReturnproperty of the ContextUtilclass. As discussed at length in Chapter 3, a JITA object should retrieve its state at the beginning of every method call and save it at the end. Example 10-3 shows a serviced component using JITA. 
Example 10-3: A serviced component using JITA public interface IMyInterface { void MyMethod(long objectIdentifier); } [JustInTimeActivation(true)] public class MyComponent :ServicedComponent,IMyInterface { public void MyMethod(long objectIdentifier) { GetState(objectIdentifier); DoWork( ); SaveState(objectIdentifier); //inform COM+ to deactivate the object upon method return ContextUtil.DeactivateOnReturn = true; } //other methods protected void GetState(long objectIdentifier){...} protected void DoWork( ){...} protected void SaveState(long objectIdentifier){...} } You can also use the Component Services Explorer to configure the method to use auto-deactivation. In that case, the object is deactivated automatically upon method return, unless you set the value of the DeactivateOnReturnproperty to false. Using IObjectControl If your serviced component uses object pooling or JITA (or both), it may also need to know when it is placed in a COM+ context to do context-specific initialization and cleanup. Like a COM+ configured component, the serviced component can use IObjectControlfor that purpose. The .NET base class ServicedComponentalready implements IObjectControl, and its implementation is virtual--so you can override the implementation in your serviced component, as shown in Example 10-4. Example 10-4: A serviced component overriding the ServicedComponent implementation of IObjectControl public class MyComponent :ServicedComponent { public override void Activate( ) { //Do context specific initialization here } public override void Deactivate( ) { //Do context specific cleanup here } public override bool CanBePooled( ) { return true; } //other methods } If you encounter an error during Activate( )and throw an exception, then the object's activation fails and the client is given an opportunity to catch the exception. IObjectControl, JITA, and Deterministic Finalization To maintain JITA semantics, when the object deactivates itself, .NET calls DisposeObject( )on it explicitly, thus destroying it. Your object can do specific cleanup in the Finalize( )method (the destructor in C#), and Finalize( )will be called as soon as the object deactivates itself, without waiting for garbage collection. If the object is a pooled object (as well as a JITA object), then it is returned to the pool after deactivation, without waiting for the garbage collection. You can also override the ServicedComponentimplementation of IObjectControl.Deactivate( )and perform your cleanup there. In any case, you end up with a deterministic way to dispose of critical resources without explicit client participations. This situation makes sharing your object among clients much easier because now the clients do not have to coordinate who is responsible for calling Dispose( ). TIP: COM+ JITA gives managed components deterministic finalization, a service that nothing else in .NET can provide out of the box. COM+ Constructor String Any COM+ configured component that implements the IObjectConstructinterface has access during construction to a construction string (discussed in Chapter 3), configured in the Component Services Explorer. Serviced components are no different. The base class, ServicedComponent, already implements the IObjectConstructinterface as a virtual method (it has only one method). Your derived serviced component can override the Construct( )method, as shown in this code sample: public class MyComponent :ServicedComponent { public override void Construct(string constructString) { //use the string. 
For example: MessageBox.Show(constructString); } } If the checkbox "Enable object construction" on the component Activation tab is selected, then the Construct( )method is called after the component's constructor, providing it with the configured construction string. You can also enable construction string support and provide a default construction string using the ConstructionEnabledattribute: [ConstructionEnabled(Enabled = true,Default = "My String")] public class MyComponent :ServicedComponent { public override void Construct(string constructString) {...} } The ConstructionEnabledattribute has two public properties. Enabledenables construction string support for your serviced component in the Component Services Explorer (once the component is registered) and Defaultprovides. The ConstructionEnabledattribute has two overloaded constructors. One constructor accepts a Boolean value for the Enabledproperty; the default constructor sets the value of the Enabledproperty to true. You can also set the value of the Enabledproperty explicitly. As a result, the following three statements are equivalent: [ConstructionEnabled] [ConstructionEnabled(true)] [ConstructionEnabled(Enabled = true)] COM+ Transactions You can configure your serviced component to use the five available COM+ transaction support options by using the Transactionattribute. The Transactionattribute's constructor accepts an enum parameter of type TransactionOption, defined as: public enum TransactionOption { Disabled, NotSupported, Supported, Required, RequiresNew } For example, to configure your serviced component to require a transaction, use the TransactionOption.Requiredvalue: [Transaction(TransactionOption.Required)] public class MyComponent :ServicedComponent {...} The five enum values of TransactionOptionmap to the five COM+ transaction support options discussed in Chapter 4. When you use the Transactionattribute to mark your serviced component to use transactions, you implicitly set it to use JITA and require activity-based synchronization as well. The Transactionattribute has an overloaded default constructor, which sets the transaction support to TransactionOption.Required. As a result, the following two statements are equivalent: [Transaction] [Transaction(TransactionOption.Required)] Voting on the Transaction Not surprisingly, you use the ContextUtilclass to vote on the transaction's outcome. ContextUtilhas a static property of the enum type TransactionVotecalled MyTransactionVote. TransactionVoteis defined as: public enum TransactionVote {Abort,Commit} Example 10-5 shows a transactional serviced component voting on its transaction outcome using ContextUtil. Note that the component still has to do all the right things that a well-designed transactional component has to do (see Chapter 4); it needs to retrieve its state from a resource manager at the beginning of the call and save it at the end. It must also deactivate itself at the end of the method to purge its state and make the vote take effect. 
Example 10-5: A transactional serviced component voting on its transaction outcome using the ContextUtil MyTransactionVote property public interface IMyInterface { void MyMethod(long objectIdentifier); } [Transaction] public class MyComponent :ServicedComponent,IMyInterface { public void MyMethod(long objectIdentifier) { try { GetState(objectIdentifier); DoWork( ); SaveState(objectIdentifier); ContextUtil.MyTransactionVote = TransactionVote.Commit; } catch { ContextUtil.MyTransactionVote = TransactionVote.Abort; } //Let COM+ deactivate the object once the method returns finally { ContextUtil.DeactivateOnReturn = true; } } //helper methods protected void GetState(long objectIdentifier){...} protected void DoWork( ){...} protected void SaveState(long objectIdentifier){...} } Compare Example 10-5 to Example 4-3. A COM+ configured component uses the returned HRESULTfrom the DoWork( )helper method to decide on the transaction's outcome. A serviced component, like any other managed component, does not use HRESULTreturn codes for error handling; it uses exceptions instead. In Example 10-5 the component catches any exception that was thrown in the tryblock by the DoWork( )method and votes to abort in the catchblock. Alternatively, if you do not want to write exception-handling code, you can use the programming model shown in Example 10-6. Set the context object's consistency bit to false(vote to abort) as the first thing the method does. Then set it back to trueas the last thing the method does (vote to commit). Any exception thrown in between causes the method exception to end without voting to commit. Example 10-6: Voting on the transaction without exception handling public interface IMyInterface { void MyMethod(long objectIdentifier); } [Transaction] public class MyComponent :ServicedComponent,IMyInterface { public void MyMethod(long objectIdentifier) { //Let COM+ deactivate the object once the method returns and abort the //transaction. You can use ContextUtil.SetAbort( ) as well ContextUtil.DeactivateOnReturn = true; ContextUtil.MyTransactionVote = TransactionVote.Abort; GetState(objectIdentifier); DoWork( ); SaveState(objectIdentifier); ContextUtil.MyTransactionVote = TransactionVote.Commit; } //helper methods protected void GetState(long objectIdentifier){...} protected void DoWork( ){...} protected void SaveState(long objectIdentifier){...} } Example 10-6 has another advantage over Example 10-5: having the exception propagated up the call chain once the transaction is aborted. By propagating it, callers up the chain know that they can also abort their work and avoid wasting more time on a doomed transaction. The AutoComplete Attribute Your serviced components can take advantage of COM+ method auto-deactivation using the AutoCompletemethod attribute. During the registration process, the method is configured to use COM+ auto-deactivation when AutoCompleteis used on a method, and the checkbox "Automatically deactivate this object when the method returns" on the method's General tab is selected. Serviced components that use the AutoCompleteattribute do not need to vote explicitly on their transaction outcome. Example 10-7 shows a transactional serviced component using the AutoCompletemethod attribute. 
Example 10-7: Using the AutoComplete method attribute public interface IMyInterface { void MyMethod(long objectIdentifier); } [Transaction] public class MyComponent : ServicedComponent,IMyInterface { [AutoComplete(true)] public void MyMethod(long objectIdentifier) { GetState(objectIdentifier); DoWork( ); SaveState(objectIdentifier); } //helper methods protected void GetState(long objectIdentifier){...} protected void DoWork( ){...} protected void SaveState(long objectIdentifier){...} } When you configure the method to use auto-deactivation, the object's interceptor sets the done and consistency bits of the context object to trueif the method did not throw an exception and the consistency bit to falseif it did. As a result, the transaction is committed if no exception is thrown and aborted otherwise. Nontransactional JITA objects can also use the AutoCompleteattribute to deactivate themselves automatically on method return. The AutoCompleteattribute has an overloaded default constructor that uses truefor the attribute construction. Consequently, the following two statements are equivalent: [AutoComplete] [AutoComplete(true)] The AutoCompleteattribute can be applied on a method as part of an interface definition: public interface IMyInterface { //Avoid this: [AutoComplete] void MyMethod(long objectIdentifier); } However, you should avoid using the attribute this way. An interface and its methods declarations serve as a contract between a client and an object; using auto completion of methods is purely an implementation decision. For example, one implementation of the interface on one component may chose to use autocomplete and another implementation on another component may choose not to. The TransactionContext Object A nontransactional managed client creating a few transactional objects faces a problem discussed in Chapter 4 (see the section "Nontransactional Clients"). Essentially, if the client wants to scope all its interactions with the objects it creates under one transaction, it must use a middleman to create the objects for it. Otherwise, each object created will be in its own separate transaction. COM+ provides a ready-made middleman called TransactionContext. Managed clients can use TransactionContextas well. To use the TransactionContextobject, add to the project references the COM+ services type library. The TransactionContextclass is in the COMSVCSLibnamespace. The TransactionContextclass is especially useful in situations in which the class is a managed .NET component that derives from a class other than ServicedComponent. Remember that a .NET component can only derive from one concrete class and since the class already derives from a concrete class other than ServicedComponent, it cannot use the Transactionattribute. Nevertheless, the TransactionContextclass gives this client an ability to initiate and manage a transaction. Example 10-8 demonstrates usage of the TransactionContextclass, using the same use-case as Example 4-6. 
Example 10-8: A nontransactional managed client using the TransactionContext helper class to create other transactional objects using COMSVCSLib; IMyInterface obj1,obj2,obj3; ITransactionContext transContext; transContext = (ITransactionContext) new TransactionContext( ); obj1 = (IMyInterface)transContext.CreateInstance("MyNamespace.MyComponent"); obj2 = (IMyInterface)transContext.CreateInstance("MyNamespace.MyComponent"); obj3 = (IMyInterface)transContext.CreateInstance("MyNamespace.MyComponent"); try { obj1.MyMethod( ); obj2.MyMethod( ); obj3.MyMethod( ); transContext.Commit( ); } catch//Any error - abort the transaction { transContext.Abort( ); } Note that the client in Example 10-8 decides whether to abort or commit the transaction depending on whether an exception is thrown by the internal objects. COM+ Transactions and Nonserviced Components Though this chapter focuses on serviced components, it is worth noting that COM+ transactions are used by other parts of the .NET framework besides serviced components--in particular, ASP.NET and Web Services. Web services and transactions Web services are the most exciting piece of technology in the entire .NET framework. Web services allow a middle-tier component in one web site to invoke methods on another middle-tier component at another web site, with the same ease as if that component were in its own assembly. The underlying technology facilitating web services serializes the calls into text format and transports the call from the client to the web service provider using HTTP. Because web service calls are text based, they can be made across firewalls. Web services typically use a protocol called Simple Object Access Protocol (SOAP) to represent the call, although other text-based protocols such as HTTP-POST and HTTP-GET can also be used. .NET successfully hides the required details from the client and the server developer; a web service developer only needs to use the WebMethodattribute on the public methods exposed as web services. Example 10-9 shows the MyWebServiceweb service that provides the MyMessageweb service--it returns the string "Hello" to the caller. Example 10-9: A trivial web service that returns the string "Hello" using System.Web.Services; public class MyWebService : WebService { public MyWebService( ){} [WebMethod] public string MyMessage( ) { return "Hello"; } } The web service class can optionally derive from the WebServicebase class, defined in the System.Web.Servicesnamespace (see Example 10-9). The WebServicebase class provides you with easy access to common ASP.NET objects, such as those representing application and session states. Your web service probably accesses resource managers and transactional components. The problem with adding transaction support to a web service that derived from WebServiceis that it is not derived from ServicedComponent, and .NET does not allow multiple inheritance of implementation. To overcome this hurdle, the WebMethodattribute has a public property called TransactionOption, of the enum type Enterprise.Services.TransactionOptiondiscussed previously. The default constructor of the WebMethodattribute sets this property to TransactionOption.Disabled, so the following two statements are equivalent: [WebMethod] [WebMethod(TransactionOption = TransactionOption.Disabled)] If your web service requires a transaction, it can only be the root of a transaction, due to the stateless nature of the HTTP protocol. 
Even if you configure your web method to only require a transaction and it is called from within the context of an existing transaction, a new transaction is created for it. Similarly, the value of TransactionOption.Supporteddoes not cause a web service to join an existing transaction (if called from within one). Consequently, the following statements are equivalent--all four amount to no transaction support for the web service: [WebMethod] [WebMethod(TransactionOption = TransactionOption.Disabled)] [WebMethod(TransactionOption = TransactionOption.NotSupported)] [WebMethod(TransactionOption = TransactionOption.Supported)] Moreover, the following statements are also equivalent--creating a new transaction for the web service: [WebMethod(TransactionOption = TransactionOption.Required)] [WebMethod(TransactionOption = TransactionOption.RequiresNew)] The various values of TransactionOptionare confusing. To avoid making them the source of errors and misunderstandings, use TransactionOption.RequiresNewwhen you want transaction support for your web method; use TransactionOption.Disabledwhen you want to explicitly demonstrate to a reader of your code that the web service does not take part in a transaction. The question is, why did Microsoft provide four overlapping transaction modes for web services? I believe that it is not the result of carelessness, but rather a conscious design decision. Microsoft is probably laying down the foundation in .NET for a point in the future when it will be possible to propagate transactions across web sites. Finally, you do not need to explicitly vote on a transaction from within a web service. If an exception occurs within a web service method, the transaction is automatically aborted. Conversely, if no exceptions occur, the transaction is committed automatically (as if you used the AutoCompleteattribute). Of course, the web service can still use ContextUtilto vote explicitly to abort instead of throwing an exception, or when no exception occurred and you still want to abort. ASP.NET and transactions An ASP.NET web form may access resource managers (such as databases) directly, and it should do so under the protection of a transaction. The page may also want to create a few transactional components and compose their work into a single transaction. The problem again is that a web form derives from the System.Web.UI. Pagebase class, not from ServicedComponent, and therefore cannot use the [Transaction]attribute. To provide transaction support for a web form, the Page base class has a write-only property called TransactionModeof type TransactionOption. You can assign a value of type TransactionOptionto TransactionMode, to configure transaction support for your web form. You can assign TransactionModeprogrammatically in your form contractor, or declaratively by setting that property in the visual designer. The designer uses the Transaction page directive to insert a directive in the aspx form file. For example, if you set the property using the designer to RequiresNew, the designer added this line to the beginning of the aspx file: <@% Page Transaction="RequiresNew" %> Be aware that programmatic setting will override any designer setting. The default is no transaction support (disabled). The form can even vote on the outcome of the transaction (based on its interaction with the components it created) by using the ContextUtilmethods. Finally, the form can subscribe to events notifying it when a transaction is initiated and when a transaction is aborted. 
COM+ Synchronization

Multithreaded managed components can use .NET-provided synchronization locks. These are classic locks, such as mutexes and events. However, these solutions all suffer from the deficiencies described at the beginning of Chapter 5. .NET serviced components should use COM+ activity-based synchronization by adding the Synchronization attribute to the class definition. The Synchronization attribute's constructor accepts an enum parameter of type SynchronizationOption, defined as:

public enum SynchronizationOption
{
   Disabled,
   NotSupported,
   Supported,
   Required,
   RequiresNew
}

For example, use the SynchronizationOption.Required value to configure your serviced component to require activity-based synchronization:

[Synchronization(SynchronizationOption.Required)]
public class MyComponent :ServicedComponent
{...}

The five enum values of SynchronizationOption map to the five COM+ synchronization support options discussed in Chapter 5. The Synchronization attribute has an overloaded default constructor, which sets synchronization support to SynchronizationOption.Required. As a result, the following two statements are equivalent:

[Synchronization]
[Synchronization(SynchronizationOption.Required)]

TIP: The System.Runtime.Remoting.Contexts namespace contains a context attribute called Synchronization that can be applied to context-bound .NET classes. This attribute accepts synchronization flags similar to SynchronizationOption, and initially looks like another version of the Synchronization class attribute. However, the Synchronization attribute in the Contexts namespace provides synchronization based on physical threads, unlike the Synchronization attribute in the EnterpriseServices namespace, which uses causalities. As explained in Chapter 5, causality and activities are a more elegant and fine-tuned synchronization strategy.

Programming the COM+ Catalog

You can access the COM+ Catalog from within any .NET managed component (not only serviced components). To write installation or configuration code (or manage COM+ events), you need to add to your project a reference to the COM+ Admin type library. After you add the reference, the Catalog interfaces and objects are part of the COMAdmin namespace. Example 10-10 shows how to create a catalog object and use it to iterate over the application collection, tracing to the Output window the names of all COM+ applications on your computer.

Example 10-10: Accessing the COM+ Catalog and tracing the COM+ application names

using COMAdmin;

ICOMAdminCatalog catalog;
ICatalogCollection applicationCollection;
ICatalogObject application;
int applicationCount;
int i;//Application index

catalog = (ICOMAdminCatalog)new COMAdminCatalog( );
applicationCollection = (ICatalogCollection)catalog.GetCollection("Applications");
//Read the information from the catalog
applicationCollection.Populate( );
applicationCount = applicationCollection.Count;
for(i = 0;i < applicationCount;i++)
{
   //Get the current application
   application = (ICatalogObject)applicationCollection.get_Item(i);
   int index = i+1;
   String traceMessage = index.ToString( )+". "+application.Name.ToString( );
   Trace.WriteLine(traceMessage);
}

TIP: The System.EnterpriseServices.Admin namespace contains the COM+ Catalog object and interface definitions. However, in Visual Studio.NET Beta 2, the interfaces are defined as private to that assembly. As a result, you cannot access them. The obvious workaround is to import the COM+ Admin type library yourself, as demonstrated in Example 10-10.
In the future, you will probably be able to use the System.EnterpriseServices.Admin namespace directly. The resulting code, when programming directly against the System.EnterpriseServices.Admin namespace, is almost identical to Example 10-10.

COM+ Security

.NET has an elaborate component-oriented security model. The .NET security model manages what the component is allowed to do and what permissions are given to the component and all its clients up the call chain. You can (and should) still manage the security attributes of your hosting COM+ application to authenticate incoming calls, authorize callers, and control impersonation level. .NET also has what it calls role-based security, but that service is limited compared with COM+ role-based security. A role in .NET is actually a Windows NT user group. As a result, .NET role-based security is only as granular as the user groups in the hosting domain. Usually, you do not have control over your end customer's IT department. If you deploy your application in an environment where the user groups are coarse, or where they do not map well to actual roles users play in your application, then .NET role-based security is of little use to you. COM+ roles are unrelated to the user groups, allowing you to assign roles directly from the application business domain.

Configuring Application-Level Security Settings

The assembly attribute ApplicationAccessControl is used to configure all the settings on the hosting COM+ application's Security tab. You can use ApplicationAccessControl to turn application-level authentication on or off:

[assembly: ApplicationAccessControl(true)]

The ApplicationAccessControl attribute has a default constructor, which sets authorization to true if you do not provide a construction value. Consequently, the following two statements are equivalent:

[assembly: ApplicationAccessControl]
[assembly: ApplicationAccessControl(true)]

If you do not use the ApplicationAccessControl attribute at all, then when you register your assembly, the COM+ default takes effect and application-level authorization is turned off. The ApplicationAccessControl attribute has three public properties you can use to set the access checks, authentication, and impersonation level. The AccessChecksLevel property accepts an enum parameter of type AccessChecksLevelOption, defined as:

public enum AccessChecksLevelOption
{
   Application,
   ApplicationComponent
}

AccessChecksLevel is used to set the application-level access checks to the process only (AccessChecksLevelOption.Application) or process and component level (AccessChecksLevelOption.ApplicationComponent). If you do not specify an access level, then the ApplicationAccessControl attribute's constructors set the access level to AccessChecksLevelOption.ApplicationComponent, the same as the COM+ default. The Authentication property accepts an enum parameter of type AuthenticationOption, defined as:

public enum AuthenticationOption
{
   None,
   Connect,
   Call,
   Packet,
   Integrity,
   Privacy,
   Default
}

The values of AuthenticationOption map to the six authentication options discussed in Chapter 7. If you do not specify an authentication level or if you use the Default value, the ApplicationAccessControl attribute's constructors set the authentication level to AuthenticationOption.Packet, the same as the COM+ default.
The ImpersonationLevel property accepts an enum parameter of type ImpersonationLevelOption, defined as:

public enum ImpersonationLevelOption
{
   Anonymous,
   Identify,
   Impersonate,
   Delegate,
   Default
}

The values of ImpersonationLevelOption map to the four impersonation options discussed in Chapter 7. If you do not specify an impersonation level or if you use the Default value, then the ApplicationAccessControl attribute's constructors set the impersonation level to ImpersonationLevelOption.Impersonate, the same as the COM+ default. Example 10-11 demonstrates using the ApplicationAccessControl attribute with a server application. The example enables application-level authentication and sets the security level to perform access checks at the process and component level. It sets authentication to authenticate incoming calls at the packet level and sets the impersonation level to Identify.

Example 10-11: Configuring a server application security

[assembly: ApplicationActivation(ActivationOption.Server)]
[assembly: ApplicationAccessControl(
   true,//Authentication is on
   AccessChecksLevel=AccessChecksLevelOption.ApplicationComponent,
   Authentication=AuthenticationOption.Packet,
   ImpersonationLevel=ImpersonationLevelOption.Identify)]

A library COM+ application has no use for impersonation level, and it can only choose whether it wants to take part in its hosting process authentication level (that is, it cannot dictate the authentication level). To turn authentication off for a library application, set the authentication property to AuthenticationOption.None. To turn it on, use any other value, such as AuthenticationOption.Packet. Example 10-12 demonstrates how to use ApplicationAccessControl to configure the security settings of a library application.

Example 10-12: Configuring a library application security

[assembly: ApplicationActivation(ActivationOption.Library)]
[assembly: ApplicationAccessControl(
   true,//Authentication
   AccessChecksLevel=AccessChecksLevelOption.ApplicationComponent,
   //use AuthenticationOption.None to turn off authentication,
   //and any other value to turn it on
   Authentication=AuthenticationOption.Packet)]

Component-Level Access Checks

The component attribute ComponentAccessControl is used to enable or disable access checks at the component level. Recall from Chapter 7 that this is your component's role-based security master switch. The ComponentAccessControl attribute's constructor accepts a Boolean parameter, used to turn access control on or off. For example, you can configure your serviced component to require component-level access checks:

[ComponentAccessControl(true)]
public class MyComponent :ServicedComponent
{...}

The ComponentAccessControl attribute has an overloaded default constructor that uses true for the attribute construction. Consequently, the following two statements are equivalent:

[ComponentAccessControl]
[ComponentAccessControl(true)]

Adding Roles to an Application

You can use the Component Services Explorer to add roles to the COM+ application hosting your serviced components. You can also use the SecurityRole attribute to add the roles at the assembly level. When you register the assembly with COM+, the roles in the assembly are added to the roles defined for the hosting COM+ application. For example, to add the Manager and Teller roles to a bank application, simply add the two roles as assembly attributes:

[assembly: SecurityRole("Manager")]
[assembly: SecurityRole("Teller")]

The SecurityRole attribute has two public properties you can set.
The first is Description. Any text assigned to the Description property will show up in the Component Services Explorer in the Description field on the role's General tab:

[assembly: SecurityRole("Manager",Description = "Can access all components")]
[assembly: SecurityRole("Teller",Description = "Can access IAccountsManager only")]

The second property is the SetEveryoneAccess Boolean property. If you set SetEveryoneAccess to true, then when the component is registered, the registration process adds the user Everyone as a user for that role, thus allowing everyone access to whatever the role is assigned to. If you set it to false, then no user is added during registration and you have to explicitly add users during deployment using the Component Services Explorer. The SecurityRole attribute sets the value of SetEveryoneAccess by default to true. As a result, the following statements are equivalent:

[assembly: SecurityRole("Manager")]
[assembly: SecurityRole("Manager",true)]
[assembly: SecurityRole("Manager",SetEveryoneAccess = true)]

Automatically granting everyone access is a nice debugging feature; it eliminates security problems, letting you focus on analyzing your domain-related bug. However, you must suppress granting everyone access in a release build, by setting the SetEveryoneAccess property to false:

#if DEBUG
[assembly: SecurityRole("Manager")]
#else
[assembly: SecurityRole("Manager",SetEveryoneAccess = false)]
#endif

Assigning Roles to Component, Interface, and Method

The SecurityRole attribute is also used to grant access for a role to a component, interface, or method. Example 10-13 shows how to grant access to Role1 at the component level, to Role2 at the interface level, and to Role3 at the method level.

Example 10-13: Assigning roles at the component, interface, and method levels

[assembly: SecurityRole("Role1")]
[assembly: SecurityRole("Role2")]
[assembly: SecurityRole("Role3")]

[SecurityRole("Role2")]
public interface IMyInterface
{
   [SecurityRole("Role3")]
   void MyMethod( );
}

[SecurityRole("Role1")]
public class MyComponent :ServicedComponent,IMyInterface
{...}

Figure 10-2 shows the resulting role assignment in the Component Services Explorer at the method level. Note that Role1 and Role2 are inherited from the component and interface levels. If you only assign a role (at the component, interface, or method level) but do not define it at the assembly level, then that role is added to the application automatically during registration. However, you should define roles at the assembly level to provide one centralized place for role description and configuration.

Verifying Caller's Role Membership

Sometimes it is useful to verify programmatically the caller's role membership before granting it access. Your serviced components can do that just as easily as configured COM components. .NET provides you with the helper class SecurityCallContext, which gives you access to the security parameters of the current call. SecurityCallContext encapsulates the COM+ call-object's implementation of ISecurityCallContext, discussed in Chapter 7. The class SecurityCallContext has a public static property called CurrentCall. CurrentCall is a read-only property of type SecurityCallContext (it returns an instance of the same type).
You use the SecurityCallContext object returned from CurrentCall to access the current call. Example 10-14 demonstrates the use of the security call context to verify a caller's role membership, using the same use-case as Example 7-1.

Example 10-14: Verifying the caller's role membership using the SecurityCallContext class

public class Bank :ServicedComponent,IAccountsManager
{
   void TransferMoney(int sum,ulong accountSrc,ulong accountDest)
   {
      bool callerInRole = false;
      callerInRole = SecurityCallContext.CurrentCall.IsCallerInRole("Customer");
      if(callerInRole)//The caller is a customer
      {
         if(sum > 5000)
            throw(new UnauthorizedAccessException(@"Caller does not have sufficient credentials to transfer this sum"));
      }
      DoTransfer(sum,accountSrc,accountDest);//Helper method
   }
   //Other methods
}

You should use the Boolean property IsSecurityEnabled of SecurityCallContext to verify that security is enabled before accessing the IsCallerInRole( ) method:

bool securityEnabled = SecurityCallContext.CurrentCall.IsSecurityEnabled;
if(securityEnabled)
{
   //the rest of the verification process
}

COM+ Queued Components

.NET has a built-in mechanism for invoking a method call on an object asynchronously: using a delegate. The client creates a delegate class that wraps the method it wants to invoke, and the compiler provides a definition and implementation for a BeginInvoke( ) method, which asynchronously calls the required method on the object. The compiler also generates the EndInvoke( ) method to allow the client to poll for the method completion. Additionally, .NET provides a helper class called AsyncCallback to manage asynchronous callbacks from the object once the call is done. Compared with COM+ queued components, the .NET approach leaves much to be desired. First, .NET does not support disconnected work. Both the client and the server have to be running at the same time, and their machines must be connected to each other on the network. Second, the client's code in the asynchronous case is very different from the usual synchronous invocation of the same method on the object's interface. Third, there is no built-in support for transactional forwarding of calls to the server, nor is there an auto-retry mechanism. In short, you should use COM+ queued components if you want to invoke asynchronous method calls in .NET. The ApplicationQueuing assembly attribute is used to configure queuing support for the hosting COM+ application. The ApplicationQueuing attribute has two public properties that you can set. The Boolean Enabled property corresponds to the Queued checkbox on the application's queuing tab. When set to true, it instructs COM+ to create a public message queue, named after the application, for the use of any queued components in the assembly. The second public property of ApplicationQueuing is the Boolean QueueListenerEnabled property. It corresponds to the Listen checkbox on the application's queuing tab. When set to true, it instructs COM+ to activate a listener for the application when the application is launched. For example, here is how you enable queued component support for your application and enable a listener:

//Must be a server application to use queued components
[assembly: ApplicationActivation(ActivationOption.Server)]
[assembly: ApplicationQueuing(Enabled = true,QueueListenerEnabled = true)]

The ApplicationQueuing attribute has an overloaded default constructor that sets the Enabled property to true and the QueueListenerEnabled property to false.
As a result, the following two statements are equivalent:

[assembly: ApplicationQueuing]
[assembly: ApplicationQueuing(Enabled = true,QueueListenerEnabled = false)]

Configuring Queued Interfaces

In addition to enabling queued component support at the application level, you must mark your interfaces as capable of receiving queued calls. You do that by using the InterfaceQueuing attribute. InterfaceQueuing has one public Boolean property called Enabled that corresponds to the Queued checkbox on the interface's Queuing tab.

[InterfaceQueuing(Enabled = true)]
public interface IMyInterface
{
   void MyMethod( );
}

The InterfaceQueuing attribute has an overloaded default constructor that sets the Enabled property to true and a constructor that accepts a Boolean parameter. As a result, the following three statements are equivalent:

[InterfaceQueuing]
[InterfaceQueuing(true)]
[InterfaceQueuing(Enabled = true)]

Note that your interface must adhere to the queued components design guidelines discussed in Chapter 8, such as no out or ref parameters. If you configure your interface as a queued interface using the InterfaceQueuing attribute and the interface is incompatible with queuing requirements, the registration process fails.

A Queued Component's Managed Client

The client of a queued component cannot create the queued component directly. It must create a recorder for its calls using the queue moniker. A C++ or a Visual Basic 6.0 program uses the CoGetObject( ) or GetObject( ) calls. A .NET managed client can use the static method BindToMoniker( ) of the Marshal class, defined as:

public static object BindToMoniker(string monikerName);

BindToMoniker( ) accepts a moniker string as a parameter and returns the corresponding object. The Marshal class is defined in the System.Runtime.InteropServices namespace. The BindToMoniker( ) method of the Marshal class makes writing managed clients for a queued component as easy as if it were a COM client:

using System.Runtime.InteropServices;//for the Marshal class

IMyInterface obj;
obj = (IMyInterface)Marshal.BindToMoniker("queue:/new:MyNamespace.MyComponent");
obj.MyMethod( );//call is recorded

In the case of a COM client, the recorder records the calls the client makes. The recorder only dispatches them to the queued component queue (more precisely, to its application's queue) when the client releases the recorder. A managed client does not use reference counting, and the recorded calls are dispatched to the queued component queue when the managed wrapper around the recorder is garbage collected. The client can expedite dispatching the calls by explicitly forcing the managed wrapper around the recorder to release it, using the static DisposeObject( ) method of the ServicedComponent class, passing in the recorder object:

using System.Runtime.InteropServices;//for the Marshal class

IMyInterface obj;
obj = (IMyInterface)Marshal.BindToMoniker("queue:/new:MyNamespace.MyComponent");
obj.MyMethod( );//call is recorded
//Expedite dispatching the recorded calls by disposing of the recorder
ServicedComponent sc = obj as ServicedComponent;
if(sc != null)
   ServicedComponent.DisposeObject(sc);

You can use the IDisposable interface instead of calling DisposeObject( ).

Queued Component Error Handling

Due to the nature of an asynchronous queued call, managing a failure on both the client's side (failing to dispatch the calls) and the server's side (repeatedly failing to execute the call--a poison message) requires a special design approach.
As discussed in Chapter 8, both the clients and server can use a queued component exception class to handle the error. You can also provide your product administrator with an administration utility for moving messages between the retry queues.

Queued component exception class

You can designate a managed class as the exception class for your queued component using the ExceptionClass attribute. Example 10-15 demonstrates using the ExceptionClass attribute.

Example 10-15: Using the ExceptionClass attribute to designate an error-handling class for your queued component

using COMSVCSLib;

public class MyQCException : IPlaybackControl,IMyInterface
{
   public void FinalClientRetry( )
   {...}
   public void FinalServerRetry( )
   {...}
   public void MyMethod( ){...}
}

[ExceptionClass("MyQCException")]
public class MyComponent :ServicedComponent,IMyInterface
{...}

In Example 10-15, when you register the assembly containing MyComponent with COM+, on the component's Advanced tab, the Queuing exception class field will contain the name of its exception class--in this case, MyQCException, as shown in Figure 10-3. You need to know a few more things about designating a managed class as a queued component's exception class. First, it has nothing to do with .NET error handling via exceptions. The word exception is overloaded. As far as .NET is concerned, a queued component's exception class is not a .NET exception class. Second, the queued component exception class has to adhere to the requirements of a queued component exception class described in Chapter 8. These requirements include implementing the same set of queued interfaces as the queued component itself and implementing the IPlaybackControl interface. To add IPlaybackControl to your class definition you need to add a reference in your project to the COM+ Services type library. IPlaybackControl is defined in the COMSVCSLib namespace.

The MessageMover class

As explained in Chapter 8, COM+ provides you with the IMessageMover interface, and a standard implementation of it, for moving all the messages from one retry queue to another. Managed clients can access this implementation by importing the COM+ Services type library and using the MessageMover class, defined in the COMSVCSLib namespace. Example 10-16 implements the same use-case as Example 8-2.

Example 10-16: MessageMover is used to move messages from the last retry queue to the application's queue

using COMSVCSLib;

IMessageMover messageMover;
int moved;//How many messages were moved

messageMover = (IMessageMover) new MessageMover( );
//Move all the messages from the last retry queue to the application's queue
messageMover.SourcePath = @".\PRIVATE$\MyApp_4";
messageMover.DestPath = @".\PUBLIC$\MyApp";
moved = messageMover.MoveMessages( );

COM+ Loosely Coupled Events

.NET provides managed classes with an easy way to hook up a server that fires events with client sinks. The .NET mechanism is certainly an improvement over the somewhat cumbersome COM connection point protocol, but the .NET mechanism still suffers from all the disadvantages of tightly coupled events, as explained at the beginning of Chapter 9. Fortunately, managed classes can easily take advantage of COM+ loosely coupled events. The EventClass attribute is used to mark a serviced component as a COM+ event class, as shown in Example 10-17.
Example 10-17: Designating a serviced component as an event class using the EventClass attribute

public interface IMySink
{
   void OnEvent1( );
   void OnEvent2( );
}

[EventClass]
public class MyEventClass : ServicedComponent,IMySink
{
   public void OnEvent1( )
   {
      throw(new NotImplementedException(exception));
   }
   public void OnEvent2( )
   {
      throw(new NotImplementedException(exception));
   }
   const string exception = @"You should not call an event class directly. Register this assembly using RegSvcs /reconfig";
}

The event class implements a set of sink interfaces you want to publish events on. Note that it is pointless to have any implementation of the sink interface methods in the event class, as the event class's code is never used. It is used only as a template, so that COM+ can synthesize an implementation, as explained in Chapter 9 (compare Example 10-17 with Example 9-1). This is why the code in Example 10-17 throws an exception if anybody tries to actually call the methods (maybe as a result of removing the event class from the Component Services Explorer). When you register the assembly with COM+, the event class is added as a COM+ event class, not as a regular COM+ component. Any managed class (not just serviced components) can publish events. Any managed class can also implement the sink's interfaces, subscribe, and receive the events. For example, to publish events using the event class from Example 10-17, a managed publisher would write:

IMySink sink;
sink = (IMySink)new MyEventClass( );
sink.OnEvent1( );

The OnEvent1( ) method returns once all subscribers have been notified, as explained in Chapter 9. Persistent subscriptions are managed directly via the Component Services Explorer because adding a persistent subscription is a deployment-specific activity. Transient subscriptions are managed in your code, similar to COM+ transient subscribers. The EventClass attribute has two public Boolean properties you can set, called AllowInprocSubscribers and FireInParallel. These two properties correspond to the Allow in-process subscribers and Fire in parallel checkboxes, respectively, on the event class's Advanced tab. You can configure these values on the event class definition:

[EventClass(AllowInprocSubscribers = true,FireInParallel=true)]
public class MyEventClass : ServicedComponent,IMySink
{...}

The EventClass attribute has an overloaded default constructor. If you do not specify a value for the AllowInprocSubscribers and FireInParallel properties, it sets them to true and false, respectively. Consequently, the following two statements are equivalent:

[EventClass]
[EventClass(AllowInprocSubscribers = true,FireInParallel=false)]

Summary

Throughout this book, you have learned that you should focus your development efforts on implementing business logic in your components and rely on COM+ to provide the component services and connectivity they need to operate. With .NET, Microsoft has reaffirmed its commitment to this development paradigm. From a configuration management point of view, the .NET integration with COM+ is superior to COM under Visual Studio 6.0 because .NET allows you to capture your design decisions in your code, rather than use the separate COM+ Catalog. This development is undoubtedly just the beginning of seamless support and better integration of the .NET development tools, runtime, component services, and the component administration environment. COM+ itself (see Appendix B) continues to evolve, both in features and in usability, while drawing on the new capabilities of the .NET platform.
The recently added ability to expose any COM+ component as a web service is only a preview of the tighter integration of .NET and COM+ we can expect to see in the future.

© 2001, O'Reilly & Associates, Inc.
http://oreilly.com/catalog/comdotnetsvs/chapter/ch10.html
crawl-002
en
refinedweb
Hand Coding: Editing HTML in Dreamweaver

This topic page is useful for customers who prefer to edit or add HTML or script directly to the code of their pages. Developers creating HTML pages as well as pages using a scripting language will benefit.

TechNotes
- 3d452475—Selecting undo while editing code between two collapsed code fragments on the same line will un-collapse the fragments permanently
- 3d3efcff—Making edits in Code and Design View to objects might permanently expand collapsed code fragments
- tn_16726—Source code formatting issues after installing the Dreamweaver Updater
- tn_16489—Dreamweaver does not write XHTML compliant code in pages without an XHTML DTD
- tn_16460—Code following VBScript comment is improperly color coded
- tn_16442—Pasting from the Code inspector produces unexpected results
- tn_16431—Text selection in Code view is inaccurate in Dreamweaver MX
- tn_16343—Text is not wrapping according to code format preferences
- tn_16340—Dreamweaver MX does not highlight orphaned closing tags as invalid HTML
- tn_16336—High ASCII tag names cannot be used in the Tag Inspector
- tn_16251—XML schema files must contain a namespace
- tn_15518—Dreamweaver 4 does not recognize all XHTML code
- tn_15372—Removing line breaks from comment tags
- tn_15238—Setting the external code editor within Dreamweaver
- tn_15236—Delayed Windows operations while Microsoft Internet Explorer runs with the JavaScript Debugger
- tn_15068—MM_NOCONVERT error listed in the Check Target Browser Report
- tn_14949—Debug in Browser option is not available for all installed browsers
- tn_14944—JavaScript Debugger doesn't work on pages created from templates
- tn_14805—How well does Dreamweaver HTML validate?
- tn_14380—How to incorporate Server-Side Includes into an HTML page
- tn_13499—Books about Dreamweaver, JavaScript and HTML

Tutorials and Articles
Note: The Developer Center contains newer tutorials and articles.
- Migrating from HomeSite to Macromedia Dreamweaver MX: This tutorial introduces the Dreamweaver MX workspace and showcases coding features, with reference to the HomeSite workspace and features, then helps HomeSite users set up a Dreamweaver site.
- Understanding basic HTML: Dreamweaver gives you an easy-to-use visual interface with which to generate HTML, the underlying code in a Web page. Learn about the structure of the HTML code in a document so you can better control, modify and troubleshoot your Web pages.
- The answers to many common Dreamweaver questions are in the code: Does the Dreamweaver interface or the code that Dreamweaver creates sometimes puzzle you? Read this article to find out more about the HTML rules and structure on which the interface and code are based.
- Getting information about the browser with JavaScript: If you've been looking for a way to give cutting-edge content to visitors with the latest browsers, yet not leave users with older browsers out in the cold, detecting browser version, platform, and capabilities with JavaScript may be the answer. Browsers report all kinds of useful information. Find out how to use it to your advantage.
http://www.adobe.com/support/dreamweaver/htmljava.html
crawl-002
en
refinedweb
TDDTOptions.java

package id3;

/** This class stores information about option settings for Top-Down Decision
 * Tree inducers.
 * @author James Louis Ported to Java
 */
public class TDDTOptions
{
    /** The maximum level of growth. */
    public int maxLevel;

    /** The lower bound for the minimum weight of instances in a node. */
    public double lowerBoundMinSplitWeight;

    /** The upper bound for the minimum weight of instances in a node. */
    public double upperBoundMinSplitWeight;

    /** The percent (p) used to calculate the min weight of instances in a node (m).
     * m = p * num instances / num categories
     */
    public double minSplitWeightPercent;

    /** TRUE indicates lowerBoundMinSplitWeight, upperBoundMinSplitWeight, and
     * minSplitWeightPercent are not used for setting minimum instances in a node for
     * nominal attributes, FALSE indicates they will be used. */
    public boolean nominalLBoundOnly;

    /** TRUE if debugging options are used. */
    public boolean debug;

    /** TRUE indicates there will be an edge with "unknown" from every node. */
    public boolean unknownEdges;

    /** The criterion used for scoring. */
    public byte splitScoreCriterion;

    /** TRUE indicates an empty node should have the parent's distribution, FALSE
     * otherwise. */
    public boolean emptyNodeParentDist;

    /** TRUE indicates a node should inherit the parent's tie-breaking class, FALSE
     * otherwise. */
    public boolean parentTieBreaking;

    /** Pruning method to be used. If the value is not NONE and pruning_factor is 0,
     * then a node will be made a leaf when its (potential) children do not improve
     * the error count. */
    public byte pruningMethod;

    /** TRUE indicates pruning should allow replacing a node with its largest subtree,
     * FALSE otherwise. */
    public boolean pruningBranchReplacement;

    /** TRUE indicates threshold should be adjusted to equal instance values, FALSE
     * otherwise. */
    public boolean adjustThresholds;

    /** Factor of how much pruning should be done. High values indicate more pruning. */
    public double pruningFactor;

    /** TRUE if the Minimum Description Length Adjustment for continuous attributes should
     * be applied to mutual info, FALSE otherwise. */
    public boolean contMDLAdjust;

    /** Number of thresholds on either side to use for smoothing; 0 for no smoothing. */
    public int smoothInst;

    /** Exponential factor for smoothing. */
    public double smoothFactor;

    /** Type of distribution to build at leaves. */
    public byte leafDistType;

    /** M-estimate factor for laplace. */
    public double MEstimateFactor;

    /** Evidence correction factor. */
    public double evidenceFactor;

    /** The metric used to evaluate this decision tree. */
    public byte evaluationMetric;

    /** Constructor. */
    public TDDTOptions(){}
}
http://read.pudn.com/downloads75/sourcecode/math/274200/id3/TDDTOptions.java__.htm
crawl-002
en
refinedweb
Chapter 12 The Application Configuration Access Protocol Contents: Using ACAP ACAP Commands ACAP Sessions The Application Configuration Access Protocol (ACAP) is a companion protocol to IMAP. The two share a commonality of design, as well as complementary features. These two protocols, working together, form the basis of a vision for mobile users' use of the Internet. Whereas IMAP allows one to store and manipulate email on a remote server, so ACAP allows one to store and manipulate generic name-value pairs of information on a remote server. This data may be program configuration options (hence the name), address book data, or any other information that a user may wish to store. Discussions of ACAP often include mention of the Lightweight Directory Access Protocol (LDAP). LDAP servers provide a directory service, meaning that they store name-value pairs of information and make those records available to authorized users. LDAP servers are generally used on corporate networks to provide a centralized repository for security information (such as usernames and passwords) and contact information (names, addresses, telephone numbers, etc.). Network services can then access the information in LDAP servers to avoid having to duplicate the information. Doesn't this sound a lot like ACAP? Corporations have historically been reluctant to place their repositories of security and contact information on Internet-accessible servers, and for good reason. Even if a protocol provides adequate security, potential bugs in implementations give cause for concern. ACAP evolved to help address this concern. ACAP allows individuals to distribute a subset of corporate information via a server that is remotely accessible. Information is compartmentalized, and in case of a security hole, only this distributed subset is compromised. ACAP clients, unlike LDAP clients, must be able to work offline by caching information locally. An ACAP server is updated when changes are made locally, either in real time or upon the next connection. ACAP is meant to be easier for clients to implement than IMAP; ACAP servers are more complex than the clients. This will hopefully encourage many application developers to choose to implement ACAP clients. ACAP is increasingly being implemented by manufacturers of MUAs in order to provide access to a single point of storage for address book and configuration information. If an MUA implements both IMAP and ACAP, a user could connect to a single email account, use a single address book, and maintain consistent configuration details at home, at work, and while traveling. Of course, the price for these benefits is access to a server that provides IMAP and ACAP services. 12.1 Using ACAP ACAP is an Internet-proposed standard and is currently in version 1.0. An ACAP server is a TCP application daemon that listens for ACAP client requests on TCP port 674. MUAs, Web browsers, or any other type of network-oriented application may implement an ACAP client. Some commercial products have implemented ACAP's predecessor, the Internet Messaging Support Protocol (IMSP), but IMSP never made it into the standards track. Commercial ACAP servers are still in the works as of this writing. ACAP clients must be able to work offline. They may or may not connect to an ACAP server, depending on their needs. Typically, an ACAP client connects to a server when an application is started, and network access is available in order to determine if its configuration information is still current. 
This could (and probably should) be done in a thread to avoid long startup delays. Additional connections to a server are made when a user changes information that is held on the server; first the local cache is updated, and then the server is contacted at the next available opportunity. Figure 12.1 shows an ACAP server and a possible relationship to an MUA, IMAP MRA, and IMAP client. In this scenario, an MUA implements both an IMAP and an ACAP client. The ACAP client is used to retrieve configuration information and address book data from an ACAP server. Since ACAP clients must be allowed to work offline, this communication does not necessarily have to occur upon MUA startup. An IMAP session can then be opened to an IMAP server to operate on email messages. Such an arrangement allows for the creation of a very generic MUA. Consider an Internet cafe, for example. A user can provide authentication information (perhaps an X.509 certificate or some other acceptable credentials) and an ACAP server name when the MUA was started. This information is sufficient to provide the user with personalized configuration information (such as the location of the IMAP server and user interface preferences) and a personalized address book. Figure 12.1: ACAP usage for a mobile user 12.1.1 ACAP Datasets Information in an ACAP server is held in a slash (/) separated hierarchical structure of entries. Each entry has attributes. Each attribute consists of some metadata that describes what it holds and who may modify it. Each level of the hierarchy is called a dataset. Datasets are used to hold information that should be interpreted together, such as an address book or the configuration information for a particular application. Figure 12.2 shows the relationship between entries, attributes, and datasets. A dataset is just a special type of entry: an analogy that won't hold up as we progress is that a dataset is like a directory on a filesystem and an entry is like a file. A new dataset can be created by making an entry and placing a period (.) into its "subdataset" attribute. This is another way of saying that an entry is just an entry (with attributes), but if its "subdataset" attribute contains a period (.), then it becomes a dataset in its own right and can then hold other entries. Each entry must have an attribute called (confusingly) entry. This attribute's metadata names the entry and holds its access control list and other information. Each attribute has metadata (which are just name-value pairs) like this. Figure 12.2: ACAP data hierarchy Datasets are named in an interesting way. They have a long, hierarchical name but may also be accessed by "nicknames" to make searches shorter and easier for clients to understand. First, let's look at the long way. A unique dataset name consists of the following parts: a class name, a dataset type, and a unique data hierarchy. The class name is generally named after a vendor (e.g., Sun) and maybe a product name (e.g., Sun.HotJava). A dataset type is one of "site" (for per-location information), "group" (for per-group information), "host" (for per-server information), or "user" (for per-user information). Anything following the class name and dataset type is a hierarchical list of entries, as named by the application creating the dataset. Since this book deals with email-related activities, the examples in this chapter will primarily show the use of user and group dataset types. To make life easier, datasets may also be referred to by shortened names. 
Any dataset starting with the string "/byowner/" refers to an alternate view of the namespace sorted by the authorized users of the system. "/byowner/user/fred/", for example, shows the datasets owned by the user Fred. The special symbol "~", often used in Unix shells to indicate the current user's home directory, may also be used here: a dataset called "/byowner/~/" lists datasets owned by the currently authenticated user. Dataset names can also be shortcut in another way. A dataset called "/" shows all datasets that the current user can view. In practice, this is the approach used by most clients. A user attempting to get information from his own address book, for example, might use a dataset name of "/addressbook/user/fred" or "/addressbook/group/engineering". Datasets consist of entries. Each dataset has its own entry (called "", which holds its own naming and security information. Datasets can also inherit attributes from other datasets (such as read-write permissions). Attributes within entries are collections of metadata, which in turn are name-value pairs. Attributes have either single or multiple values. Attribute names are stored in the UTF-8 character set. Some predefined attributes exist for any entry. They are: - entry The name of the entry. - modtime Contains the date and time of the last modification to any attribute in the entry. - subdataset If set, another entry exists under this one of the name(s) given. Modtime attributes are in the format YYYYMMDDHHMMSSFF, meaning year, month, day, hour, minutes, seconds, and fractions of seconds in that order. The time zone for times in ACAP is always universal coordinated time, UTC. Each attribute has some associated metadata that describes what the attribute holds and who can modify it. The metadata items for each attribute are listed in Table 12.1. 12.1.2 Access Control Each user of an ACAP system has specific rights to the data stored on it. These rights are also called permissions or privileges. They define whether the user is allowed to read or write the data and whether the user can change the rights of others. Rights can be finely controlled: datasets and even attributes within an entry can have rights attached to them. As with other ACAP features, rights may be inherited from parent datasets. Rights are stored on an ACAP server in Access Control Lists (ACLs). ACLs are stored as an attribute in the particular entry for which data is being restricted. Rights for attributes are given as a list of values. The following characters are used to identify the rights for an attribute: - x Special search (see the description that follows) - r Read - w Write (change existing entries) - i Insert (write new entries) - a Administer (change rights) Search rights and read rights are complementary but a bit confusing. Search rights give a user the ability to compare the value of one attribute or dataset with another. Read rights allow a user to search datasets with the SEARCH command. ACAP's SEARCH command is very powerful, as we shall see. The power of it becomes apparent when someone does a search like, "Give me contact information for all people who are in my address book who have email addresses in the netscape.com domain." That type of complex search, which includes a comparison, takes search rights. A simple search, such as "Give me contact information for all people who are in my address book," can be done with just read rights. Write and insert rights are also complementary. 
Write rights allow a user to modify an attribute or dataset with the STORE command, and insert rights allow a STORE command to be used to create a new attribute or dataset. In other words, write rights would allow one to issue a command that says, "Change Sue's email address in my address book to sue@yahoo.com." But to create a new address book entry for a new person requires insert rights. Administer rights allow a user to modify access control lists or other attribute metadata. By default, the owner of a dataset (the person who created it) has read and administer rights. 12.1.3 Example Dataset If we have a username "ralph", then we should have a "home directory" (in Unix terms) on an ACAP server called /byowner/user/ralph. In ACAP's weird and wonderful aliasing, this directory would also be accessible as /user/ralph or, if we logged in as ralph, /~. If we wish to create a new dataset to hold an address book, we might call it /byowner/user/ralph/addressbook, which we could access as /addressbook/user/ralph. We will use this example throughout the rest of this chapter. 12.2 ACAP Commands ACAP sessions, states, and commands are very similar to IMAP4rev1. This is intentional, since the protocol structure defined in IMAP is very flexible. ACAP, like IMAP and many other Internet protocols, is line oriented with lines ending in CRLF sequences. ACAP sessions may be in one of three states at any one time. These states are: Nonauthenticated State Authenticated State Logout State Upon initial connection to an ACAP server, a client is in the Nonauthenticated State. A client enters the Authenticated State when a user logs into the server, providing the appropriate type of authentication credentials. Once logged in, a client may search, add, or remove information in any dataset for which the authenticated user has been allowed access. The Logout State is entered when a client logs out, a server refuses service, or a connection is otherwise interrupted. This state lasts just long enough for the client and server to close a connection. Within each state, the client may issue the commands appropriate to that state. Like IMAP, an ACAP client is required to be able to accept any server response at any time. This recognizes the multithreaded nature of modern MUAs, since they may very well issue multiple commands to a server simultaneously from different threads. An ACAP client precedes each command with a pseudo-random alphanumeric string, or tag. This is done exactly as in IMAP commands. Server responses include a copy of the tag so that the client may determine to which command the server is responding. Clients should use a different tag for each command that they issue. ACAP server responses are very similar to IMAP server responses. You may recall that an IMAP server response may contain either tagged or untagged lines. The tagged lines contain an explicit response to the client's request, and untagged lines may provide additional explanatory information. ACAP server's work the same way. An ACAP server responds to requests with one of several status responses. The status responses for an IMAP server are OK, NO, BAD, PREAUTH, and BYE. The ACAP status responses are OK, NO, BAD, ALERT, and BYE. The only difference is ALERT, that informs the client of a condition that the client must display to the user. Each status response can contain an additional response code, which provides detailed information about a problem. Server response codes are described in Table 12.2. 
The table contains references to ACAP commands. Although these commands have not yet been introduced, they are included here for later reference. ACAP commands have reasonably straightforward names, so reading them with prior knowledge of IMAP command names should not be difficult. When a client initiates an ACAP session, the server responds with a banner greeting consisting of an untagged ACAP response. Unlike other protocols, an ACAP server includes its capabilities in this banner greeting, which are provided as a list of parenthesized named values. The capabilities may include any of the following: - CONTEXTLIMIT A number greater than 100 indicating the maximum number of search contexts (see the SEARCH command) that the server can support for this connection. If this number is 0, the server has no limit. - IMPLEMENTATION A string describing the server implementation (e.g., vendor, version, etc.). - SASL A list of authentication mechanisms supported. A banner greeting for an ACAP server might look like this:Client: <establishes connection> Server: * ACAP (IMPLEMENTATION "Plugged In ACAP Server v1.03") (CONTEXTLIMIT "150") (SASL "CRAM-MD5" "KERBEROS_V4") From the banner, we can see that this ACAP server was created by Plugged Inand has a version number of 1.03. It can support up to 150 contexts per connection, and it supports both CRAM-MD5 and KERBEROS Version 4 authentication. The CRAM-MD5 authentication mechanism is the default and is discussed in more detail in this section. Three ACAP commands are valid in any state. They are: NOOP, LANG, and LOGOUT. We know about NOOP (no operation) from other protocols. NOTE: An odd decision in the ACAP RFC forces servers to have at least a 30-minute inactivity timeout. As in other protocols, a NOOP command can be used to reset this timer. LANG allows a client to indicate its language preferences. LOGOUT, naturally, is used to end a session. The LANG command is used by a client that wishes the server to switch languages. The client provides a list of languages that it prefers, and the server switches to the first language in the client's list that it supports. The server also returns the language that it is now using and a list of comparison operators that may be used for searches in that language. Comparison operators are described in the section Section 12.2.2, "The Authenticated State," under the SEARCH command. For example:Client: AC23 LANG "en-au" "en-ca" "en-uk" "en-us" Server: AC23 LANG "en-au" "i;octet" "i;ascii-numeric" "en;primary" Server: AC23 OK "G'day, Mate" The example shows a client asking a server to switch to one of several English variants. In this case, Australian English is requested, and the server switches to it. This can be seen from the greeting given in the response line. Comparison operators may be used with the SEARCH command and will be explained later. The LOGOUT command simply informs the server that the client will not be issuing further commands. It looks like this:Client: AC24 LOGOUT Server: * BYE "See ya later" Server: AC24 OK "LOGOUT completed" (connection is closed by both parties) Table 12.3 lists the ACAP commands that are valid in any state. 12.2.1 The Nonauthenticated State All ACAP sessions begin in the Nonauthenticated State. The client cannot issue most commands until properly authenticating to the server. This is done by the client issuing the AUTHENTICATE command. 
Like IMAP's AUTHENTICATE command and POP's AUTH command, ACAP's AUTHENTICATE command can be extended as necessary by implementing new authentication mechanisms. ACAP servers, as we have seen, advertise the mechanisms that they support. It is up to the client to choose a valid mechanism from the list.[1] ACAP authentication is generally done with the Challenge-Response Authentication Mechanism (CRAM) described in RFC 2195. Since this mechanism uses MD5 digesting, this is commonly called CRAM-MD5. This mechanism is also used by some IMAP and POP servers. It provides a better authentication capability than plain text username/password schemes since no unencrypted passwords need to be stored on a server or transmitted in the clear via the Internet. There is generally no simple authentication mechanism in ACAP, although a server and client could implement one. This is due to the supposition that ACAP servers are to be used to store personal information in an Internet environment, so stronger authentication is preferred. All ACAP implementations are required by the proposed standard to implement at least CRAM-MD5 authentication. A complete discussion of the CRAM-MD5 algorithms is beyond the scope of a book on email. However, take note that CRAM-MD5 requires that the client and server share a secret. That is, they must both be told to store the same piece of information before they can connect to one another. This is an attempt to make ACAP clients and servers more secure than most of today's POP and IMAP clients and servers, but it also means that the systems administration overhead of setting up ACAP accounts will be significantly more difficult. Too, if a traveler wishes to use an arbitrary MUA with ACAP, she will need to carry appropriate authentication credentials with her. This may be an interesting part of attempting to widely deploy ACAP. One way to solve this would be for client and server to use an X.509 certificate scheme, much like current SSL-enabled Web browsers and clients. A sample authentication with CRAM-MD5 shows the server responding to a client's request with a string that includes a random number, a time/date stamp, and the full server name. The client uses that information along with the shared secret to compute its response. For example:Client: GH01 AUTHENTICATE "CRAM-MD5" Server: + "<94712.682570931@acap.plugged.net.au>" Client: "dwood 19d8a609da4da7a4904ce6e739d8a75d3890" Server: GH01 OK "CRAM-MD5 authentication successful" Table 12.4 lists the ACAP commands that are valid in the Nonauthenticated State. 12.2.2 The Authenticated State A client enters the Authenticated State upon successful login. It is from this state that the client is actually able to do something useful. ACAP allows most work to be done from two important commands: SEARCH and STORE. The SEARCH command allows a client to find existing information held on an ACAP server, and the STORE command allows a client to write new information to the server. These two commands have a number of optional modifiers that direct behavior and responses that return information based on the modifiers used. Other commands available in this state support the SEARCH command (FREECONTEXT, UPDATECONTEXT, DELETEDSINCE) and allow the user to determine and potentially modify access restrictions (SETACL, DELETEACL, MYRIGHTS, LISTRIGHTS, GETQUOTA). ACAP's monster command is SEARCH. 
Since the protocol is optimized for reading and it is expected that information will be read more often than written, most of ACAP's magic is encapsulated in this command. The SEARCH command allows a user to perform searches across datasets. The searches may be very simple or very complex. Comparison operators (described later in this section) may be used to return very specific subsets of information from a server. Search results may even be saved by use of a "context". A context is a set of entries that were returned from a search. Contexts may be further searched by subsequent commands. So what, you say? With ACAP, changes to the information that makes up a context can even be reported automatically to the user! If someone else changes the data on the server while you are working with it, ACAP's context concept provides a mechanism for you to be informed of the changes. The SEARCH command takes a number of complex-looking arguments. Don't worry about the amount of information; it is easy to construct a simple search and work up to complex searches as the command's syntax becomes more clear. The arguments are: a dataset or context name, a list of optional modifiers, and some search criteria. For each successful search, a server will return a number of ENTRY intermediate responses, followed by a MODTIME response (which provides the time at which the search results were valid). If a search returns no information (the empty set), only the MODTIME response is given. Let's look at a very simple search:Client: GH38 SEARCH "/addressbook/user/ralph/Sue_Hoyle" RETURN ("Contact.Email") ALL Server: GH38 ENTRY "/addressbook/user/ralph/Sue_Hoyle" "sue@hoyle.com" Server: GH38 MODTIME "1999062412324812" Server: GH38 OK "SEARCH completed" In this example, we searched Ralph's address book entry for Sue_Hoyle. We wanted to return Sue's email address. So, first we named the entry, /addressbook/user/ralph/Sue_Hoyle, then we gave a modifier stating that we just wanted the email address that was held in an attribute, RETURN ("Contents.Email"), and then we gave some search criteria. In this case, the criteria ALLwas used to return everything that matched. The server gave us a tagged ENTRY line that named the entry and the email address, followed by a MODTIME line that told us the time at which the data was known to be correct. If Sue didn't have an email address, the preceding search would not have returned anything. The results would have looked like this:Client: GH38 SEARCH "/addressbook/user/ralph/Sue_Hoyle" RETURN ("Contact.Email") ALL Server: GH38 MODTIME "1999062412324812" Server: GH38 OK "SEARCH completed" Naturally, there are many more search modifiers than RETURN and many more search criteria than ALL. That's where the fun begins. The infinite possible combinations of modifiers, criteria and stored data can create a specialized search command for any occasion. Table 12.5 describes the search modifiers defined in the current version of ACAP. Of particular note are the DEPTH, MAKECONTEXT, and RETURN modifiers. We have already had a look at RETURN, which informs the server what attributes to return for each matching entry. The DEPTH modifier allows us to search as many sublevels of the data hierarchy as we choose. The MAKECONTEXT modifier is used to save the results of a search into a context. If the ENUMERATE option is used, the context entries are rated. That is, they each have a number within the context. This can be used as an aid when creating complex searches, as we shall see. 
Comparison operators, or comparators, may be used to define exactly how two objects compare to one another. They are used as arguments to some modifiers. Comparators in ACAP are functions that perform ordering, equality, prefix, or substring matching on two objects. So one may pass two numbers to a comparator and have it return whether they were equal and whether one was bigger than the other. Two strings passed to a comparator tell whether they were equal and whether one was alphabetically before the other, and so forth. Prefix and substring matching information is also returned. Since many variations of the SEARCH command use comparators to order search results, comparators can be prefixed with a plus sign (+) (indicating that the order should be normal) or minus sign (-) (indicating that the order should be reversed). ACAP servers must implement at least three comparators, called "i;octet", "i;ascii-casemap", and "i;ascii-numeric". The i;octet comparator treats its input as a series of unsigned octets and compares them to determine their equality, and so on. The i;ascii-casemap comparator translates all ASCII letters to uppercase and then does an i;octet comparison (so that "EMail" and "email" are equal). The i;ascii-numeric comparator interprets strings as ASCII digits. Search criteria are the companions to search modifiers. Where modifiers affect the form of the search results, search criteria narrow the search so that only the desired data is returned. Typical boolean expressions such as AND, OR, and NOT are legal ACAP search criteria, as are the more esoteric PREFIX and SUBSTRING, which allow searches on entries where only part of the value is being matched. Take note of the RANGE criterion. If a context is enumerated, RANGE allows one to pull out specific ranges from it. This means, for example, that one may order a context as one sees fit (with the SORT modifier) and then return the top (or bottom) few entries using the RANGE criterion. Table 12.6 shows each of the ACAP search criteria. So, knowing a bit more about searching, our friend Ralph could search his address book for any entries that contained email addresses. The result might look like this:Client: GH39 SEARCH "/addressbook/user/ralph/" DEPTH 2 RETURN ("Contact.Email") NOT EQUAL "Email" "i;octet" NIL Server: GH39 ENTRY "/addressbook/user/ralph/Sue_Hoyle" "sue@hoyle.com" Server: GH39 ENTRY "/addressbook/user/ralph/Justin_Case" "justin_case@sun.com" Server: GH39 ENTRY "/addressbook/user/ralph/Zelda_O'Brien" "zee@yahoo.com" Server: GH39 MODTIME "1999062412382399" Server: GH39 OK "SEARCH completed" We have added a DEPTH modifier to make sure that entries one level down (two total levels) are searched. Also, the search criteria is more interesting. Here, we searched for only those entries that had an attribute called "Email" whose value was not equal to NIL; that is, entries that actually contain an email address. The real power of ACAP searching takes place in the context of a context. How's that for polymorphism? Contexts can be created by using the MAKECONTEXT modifier. If we wish to save the results of a search so that we can search again on the results, this is the way to do it. This is especially handy for searches that return many, many entries. Suppose, for example, that we repeat our last search, but this time we save the results to a context.
We will have to name the context, so we'll call it "people_with_email":Client: GH40 SEARCH "/addressbook/user/ralph/" DEPTH 2 RETURN ("Contact.Email") MAKECONTEXT ENUMERATE "people_with_email" NOT EQUAL "Contact.Email" "i;octet" NIL Server: GH40 ENTRY "/addressbook/user/ralph/Sue_Hoyle" "sue@hoyle.com" Server: GH40 ENTRY "/addressbook/user/ralph/Justin_Case" "justin_case@sun.com" Server: GH40 ENTRY "/addressbook/user/ralph/Zelda_O'Brien" "zee@yahoo.com" Server: GH40 MODTIME "1999062412385473" Server: GH40 OK "Context 'people_with_email' created" We now have a context, stored on the server, called "people_with_email". It contains the entries shown in the results of the last example. The context was enumerated, so each of the entries have numbers that can be used in a later search. We could have also asked the server to tell us when updates to entries in the context occur (with the NOTIFY option to MAKECONTEXT), but we'll leave that for now. Notifications of information of changing contexts are covered in the discussion of the UPDATECONTEXT command, later in this section. If we now search our context, we can (for example) get just the first entry. For that, we use the RANGE modifier telling it to return from the first entry to the first entry:Client: GH41 SEARCH "people_with_email" RANGE 1 1 ALL Server: GH41 ENTRY "/addressbook/user/ralph/Sue_Hoyle" "sue@hoyle.com" Server: GH41 MODTIME "1999062412394812" Server: GH41 OK "SEARCH completed" It was not necessary to give a RETURN modifier in this example because the context holds only the partial entries that were in the last results. The FREECONTEXT command causes the server to remove context named as an argument. If the server is sending update notifications to a client (see UPDATECONTEXT, later in this section) for this context, they will cease. For example:Client: GH42 FREECONTEXT "people_without_email" Server: GH42 OK "FREECONTEXT completed" The UPDATECONTEXT command may be used to cause a server to notify the client when any attributes within a context change. It takes a context name as an argument. This is a powerful concept: once a SEARCH command with a MAKECONTEXT modifier creates a context, a client can monitor the state of every entry in the context! A client would request notifications for a context called "people_with_email" like this:Client: GH43 UPDATECONTEXT "people_with_email" Server: GH43 OK "UPDATECONTEXT completed" From this point on, any changes to the context "people_with_email" will be sent to the client. The server can send these updates at any time. The only way to stop notifications from being sent is for the client to issue a FREECONTEXT command. Whenever an entry in the context changes, the server sends an untagged response to the client with the new information. The untagged server responses may be of these types: - ADDTO Indicates that an entry has been added to the context - CHANGE Indicates either that an entry's position in the context has changed or that a watched attribute has changed within the entry - REMOVEFROM Indicates that an entry has been deleted from the context - MODTIME Indicates that the client has received all changes up to a given time An ADDTO server response contains the context name, the entry name, the entry's position in the context, and a list of metadata. 
An ADDTO response that informs a client that an entry called "Mary Ryan" had been added might look like this:Server: * ADDTO "people_with_email" "Mary Ryan" 4 ("Contact.Telephone" "+61 7 3874 1583" "Contact.Email" "mryan@mryan.com") In the preceding example, the entry is in the fourth position in the context. If the telephone number in the entry was now changed, the CHANGE response would be used. The CHANGE response lists the context name, the entry name, the entry's old position in the context, the entry's new position, and the metadata list.Server: * CHANGE "people_with_email" "Mary Ryan" 4 4 ("Contact.Telephone" "+61 7 3876 8457") If the preceding entry was now deleted, the client would get a message that looks like this:Server: * REMOVEFROM "people_with_email" "Mary Ryan" 4 The REMOVEFROM response shows the context name, the entry name, and the entry's old position in the context. At any time the server may send a MODTIME response, although this is generally only done when the client sends a new UPDATECONTEXT command. The MODTIME response is used to inform that client that it has all updates as of a certain time. Here is an example that shows the client issuing a new UPDATECONTEXT command:Client: GH43 UPDATECONTEXT "people_with_email" Server: * MODTIME "people_with_email" "19990624095600" Server: GH43 OK "UPDATECONTEXT completed" Note that clients are not supposed to use the UPDATECONTEXT command to poll servers for updates; they should generally wait for the servers to send them in their own time. A client might issue a command like the preceding one on startup, however, or at similar times in order to ensure expected behavior. The second of the two ACAP "magic" commands is STORE. It allows a client to write information to a server. This includes both creation or modification of datasets and their entries. Since the storing of a NIL value in an entry is the same as deleting it, STORE is also used to delete entries. Note that if any part of a STORE command fails, the entire request will fail; that is, nothing will be stored if an error occurs. The arguments to the STORE command can look rather complex, but they are easily parsable. The proposed standard calls the arguments an entry store list. An entry store list begins with an entry path (a dataset or an attribute), followed by optional modifiers followed by attribute name-value pairs. Each of these is described in detail in this section. STORE modifiers are either NOCREATE (which will generate an error if any of the attributes to be added do not already exist) and UNCHANGEDSINCE "<time>" (which will generate an error if any of the attributes to be added have been changed by someone else since the given time). Attribute name-value pairs give a series of attribute names and the values to store in them. A value of NIL will cause the attribute to be deleted from the entry. Setting the value of an "entry" attribute to NIL will cause the entry to be deleted from the dataset. Since a dataset is also an entry, setting the "entry" attribute of a dataset to NIL will cause the dataset to be deleted from its parent dataset. Setting an "entry" attribute value to DEFAULT effectively deletes all information held in the entry and then fills the entry with whatever it can inherit from its parent. Let's look at the address book example again. 
If we wish to delete an entry from the address book, it can be done like this:Client: GH44 STORE ("/addressbook/user/ralph/Sue_Hoyle" "entry" NIL) Server: GH44 OK "STORE completed" A subsequent SEARCH for that entry would show that it no longer existed. If, on the other hand, we wished to add a new entry to the address book that held a person's name and email address, we could do it like this:Client: GH45 STORE ("/addressbook/user/ralph/Phread_the_Terrorist" "Contact.CommonName" "Phread the Terrorist" "Contact.Email" "phread@vc.gov.vn") Server: GH45 OK "STORE completed" The DELETEDSINCE command may be used to list entries that have been deleted from a dataset. It takes a dataset name and a time as arguments. The entries that have been deleted since the given time are listed in intermediate DELETED lines by the server. The time given is in the same format as modtime (YYYYMMDDHHMMSSFF in UTC). If we wanted to see all of the entries that have been deleted from an address book since midnight on June 24, 1999, a client might issue a command like this:Client: GH46 DELETEDSINCE "/addressbook/user/ralph" 19990624000000 Server: GH46 OK "DELETEDSINCE completed" If the server doesn't have a list of deleted entries that goes back to the time given, it will report an error like this using the TOOOLD server response code:Client: GH47 DELETEDSINCE "/addressbook/user/ralph" 19970101000000 Server: GH47 NO (TOOOLD) "We trashed that data long ago" The SETACL command may be used to change the ACL for a particular object. It takes an ACL object, an authentication identifier (e.g., username), and a list of access rights as arguments. The user issuing this command must have administer privileges on the given object.If a user with the appropriate rights wanted to set the access control list for an address book in the dataset /addressbook/user/ralph so that anyone could read or write information in it, the client would do something like this:Client: GH48 SETACL ("/addressbook/user/ralph") "anyone" "rw" Server: GH48 OK "SETACL completed" If the user does not have administer privileges, the SETACL will fail. Suppose the user "dwood" tried to modify the address book owned by "ralph":Client: GH49 SETACL ("/addressbook/user/ralph") "anyone" "rw" Server: GH49 NO (PERMISSION ("/addressbook/user/ralph")) "'dwood' not permitted to modify access rights for '/addressbook/user/ralph'" Note that this example shows the PERMISSION server response code. The DELETEACL command may be used to either delete an access control list for an object or to delete a particular user's rights within an ACL. It takes an ACL object and (optionally) an authentication identifier. If an identifier is given, it deleted all rights for that user in the ACL. Otherwise, it deletes the entire ACL. For example, if a user wanted to ensure that the user "ralph" had no rights at all on the address book /addressbook/user/margaret, a client might issue a command like this:Client: GH50 DELETEACL ("/addressbook/user/margaret") "ralph" Server: GH50 OK "DELETEACL completed" You may recall from the discussion of ACLs that if an object doesn't have an ACL, it inherits user rights from parent objects farther up the hierarchy. For this reason, one is not allowed to delete a default ACL for a dataset. Therefore, if the preceding example didn't have the authentication identifier, it would have failed (because /addressbook/user/margaret is a dataset). 
If you tried to do that, it would look like this:Client: GH51 DELETEACL ("/addressbook/user/margaret") Server: GH51 BAD "Can't delete ACL from dataset" The MYRIGHTS command can be used to show what rights a user has been granted for a particular dataset or entry. It takes an ACL object as an argument. That is, the argument names a dataset or entry, but the server reads that dataset's or entry's ACL when processing the command. The MYRIGHTS tagged intermediate response is returned by the server, which lists the client's rights for the given dataset. This command is related to the LISTRIGHTS command, described subsequently, which lists which rights the client may change for the given dataset.We would expect to have at least read and administer rights for entries in personally created datasets. Here is an example of a MYRIGHTS command on a personal address book that shows that we have read, write, insert, and administer rights on it:Client: GH52 MYRIGHTS ("/addressbook/user/dwood") Server: GH52 MYRIGHTS "rwia" Server: GH52 OK "MYRIGHTS completed" A similar request on a public, shared dataset (say, a dataset holding system-wide configuration information) may return more restricted rights. In this example, the client may only read the information:Client: GH53 MYRIGHTS ("/options/global/") Server: GH53 MYRIGHTS "r" Server: GH53 OK "MYRIGHTS completed" The LISTRIGHTS command can be used to determine what rights a user can change for a certain dataset or entry. It takes both an ACL object and an authentication identifier (e.g., a username) as arguments. The server returns a tagged LISTRIGHTS response that lists the rights that will always be granted to that user for that object and the rights that the user currently may change. Rights in the response are not duplicated, so if read permission is always granted, it will not also appear in the user's specific rights list. For example, if a user "margaret" will always be granted implicit read and administer rights for personally created items and she also has permission to modify search and write rights on her personal address book, the command and response might look like this:Client: GH54 LISTRIGHTS ("/addressbook/user/margaret") "margaret" Server: GH54 LISTRIGHTS "ra" "x" "w" Server: GH54 OK "LISTRIGHTS completed" This shows that Margaret is not allowed to give herself or anyone else insert rights. That is, she can change the data that is there but cannot create new entries. This is probably not realistic for a personal address book! The GETQUOTA command can be used to determine how much storage space a user has available on a server. It takes a dataset as an argument and returns the client's quota limit (in bytes) and usage for that dataset. For example:Client: GH55 GETQUOTA "/addressbook/user/dwood" Server: * QUOTA "/addressbook/user/dwood" 2097152 1808728 Server: GH55 OK "GETQUOTA completed" In this example, the client asked for quota information on a dataset representing a personal address book. The server responded with an untagged response that repeated the name of the dataset, then listed the client's quota (2 MB in this case) and the amount of storage space the client is currently using (1.8 MB or so). The client can only store about 200 KB before running out of room. Table 12.7 lists the ACAP commands that are valid in the Authenticated State. 12.3 ACAP Sessions An example ACAP session is shown in Figure 12.3. A client initiates an IMAP session to a server, which responds with a banner greeting. 
The client is now in the Nonauthenticated State since no authentication credentials have been presented. Following a successful login with CRAM-MD5 authentication, the session enters the Authenticated State. Figure 12.3: A sample ACAP session The first command issued in the Authenticated State is a SEARCH. The search looks for entries that have an email address in an attribute, much like the one shown in the SEARCH example, given previously. This time, however, a context is created with the NOTIFY modifier. Any changes to the context should be communicated to the client. This is tested by adding an entry (using a STORE command) incorporating another email address. As soon as another entry is added that contains an email address, the server notifies the client (with the ADDTO response). We then free the context and log out. Programming Internet Email © 2001, O'Reilly & Associates, Inc.
http://oreilly.com/catalog/progintemail/chapter/ch12.html
crawl-002
en
refinedweb
/*=================================================================
 * wgfcnkey ---- traps pressing of function keys (F1, F2, ..., F12)
 * and Shift, Ctrl, Alt keys in an MATLAB figure window
 *
 * Use this command-line to compile in MATLAB:
 * mex wgfcnkey.cpp user32.lib
 *
 * Input ---- None
 * Output ---- a scalar
 * = 0, no function key was pressed
 * = 1, F1 was pressed
 * ...
 * = 12, F12 was pressed
 * = 13, left Shift was pressed
 * = 14, left Ctrl was pressed
 * = 15, left Alt was pressed
 * = 16, right Shift was pressed
 * = 17, right Ctrl was pressed
 * = 18, right Alt was pressed
 *
 * Created by: Dong Weiguo, 22 Jan. 2004
 * Updated on 29 Jan. 2004, included the left and right instances of
 * Shift, Ctrl, and Alt keys.
 *
 *=================================================================*/
/* $Revision: 1.10 $ */
#include <windows.h>  /* header name was stripped by the HTML extraction; the Win32 key-state calls below require windows.h */
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    double *x;
    char *input_buf;      /* unused; left over from the commented-out experiment below */
    int buflen, status;   /* unused */

    /* Check for proper number of arguments */
    if (nrhs >= 1) {
        mexErrMsgTxt("No input argument required.");
    }

    /* create output array */
    plhs[0] = mxCreateDoubleMatrix(1, 1, mxREAL);
    x = mxGetPr(plhs[0]);
    x[0] = 0.0;

    /* Do the actual computations in a subroutine */
    /*
    HWND hWnd;
    hWnd = FindWindow(NULL, input_buf);
    MSG msg;
    //PeekMessage(&msg, hWnd, 0, 0, PM_REMOVE);
    //GetMessage(&msg, hWnd, 0, 0);
    TranslateMessage(&msg);
    DispatchMessage(&msg);
    if (msg.message == WM_KEYDOWN) {
        switch (msg.wParam) {
        case VK_F1:
            //AfxMessageBox("Test");
            x[0] = 1;
        }
    }
    */

    //x[0] = GetKeyState(VK_F1);
    int i;
    short keystatus = 0;

    /* The listing was truncated at this point on the source page; the loops
     * below are a reconstruction that follows the return values documented
     * in the header comment above. */
    for (i = VK_F1; i <= VK_F12; i++) {
        keystatus = GetKeyState(i);
        if (keystatus & 0x8000) {              /* high bit set: key is down */
            x[0] = (double)(i - VK_F1 + 1);    /* 1..12 for F1..F12 */
            return;
        }
    }

    /* left/right Shift, Ctrl and Alt map to return values 13..18 */
    const int modifiers[6] = { VK_LSHIFT, VK_LCONTROL, VK_LMENU,
                               VK_RSHIFT, VK_RCONTROL, VK_RMENU };
    for (i = 0; i < 6; i++) {
        keystatus = GetKeyState(modifiers[i]);
        if (keystatus & 0x8000) {
            x[0] = (double)(13 + i);
            return;
        }
    }
}
http://read.pudn.com/downloads50/sourcecode/math/174049/ch03/wgfcnkey.cpp__.htm
crawl-002
en
refinedweb
/* ----------------------
 * FontStyleModifier.java
 * ----------------------
 * A style modifier that changes the font for a style, but leaves all other
 * settings unchanged.
 */

package org.jfree.workbook;  // package inferred from the source path (org/jfree/workbook)

public class FontStyleModifier implements StyleModifier {

    /** The new font. */
    protected FontStyle font;

    /**
     * Standard constructor.
     * @param font The new font.
     */
    public FontStyleModifier(FontStyle font) {
        this.font = font;
    }

    /**
     * Returns a new style with the same settings as the style passed in, except with a different
     * font.
     * @param style The style to be modified.
     * @return A new style with the same settings as the style passed in, except with a different
     * font.
     */
    public Style getModifiedStyle(Style style) {
        return Style.applyFont(style, font);
    }
}
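A brief usage sketch follows. It assumes that Style, FontStyle and StyleModifier live alongside this class in the org.jfree.workbook package (as the source path suggests) and that the caller already has Style and FontStyle instances from elsewhere in the JWorkbook API; the helper class and method names here are invented for the illustration.

package org.jfree.workbook;

public class FontStyleModifierExample {

    /** Applies one font to several existing styles, leaving every other
     *  setting of each style untouched. */
    public static Style[] applyFontToAll(FontStyle font, Style... styles) {
        StyleModifier modifier = new FontStyleModifier(font);
        Style[] result = new Style[styles.length];
        for (int i = 0; i < styles.length; i++) {
            result[i] = modifier.getModifiedStyle(styles[i]);
        }
        return result;
    }
}

Because the modifier only captures the new font, a single instance can safely be reused across any number of styles.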
http://read.pudn.com/downloads103/sourcecode/java/423389/jworkbook-0.3.0/source/org/jfree/workbook/FontStyleModifier.java__.htm
crawl-002
en
refinedweb
/* This software module was originally developed by Peter Kroon
 * (Bell Laboratories, Lucent Technologies), 1996.
 * Last modified: May 1, 1996
 */
#include <math.h>      /* the original system header name was stripped with its
                          angle brackets during extraction; math.h is an assumption */
#include "att_proto.h"

#define MAXORDER 20

/*----------------------------------------------------------------------------
 * lsf2pc - convert lsf to predictor coefficients
 *
 * FIR filter using the line spectral frequencies
 * can, of course, also be used to get predictor coeff
 * from the line spectral frequencies
 *----------------------------------------------------------------------------
 */
void lsf2pc(
    float *pc,        /* output: predictor coeff: 1,a1,a2... */
    const float *lsf, /* input : line-spectral freqs [0,pi]  */
    long order        /* input : predictor order             */
)
{
    double mem[2 * MAXORDER + 2];
    long i;

    testbound(order, MAXORDER, "lsf2pc");

    for (i = 0; i < 2 * order + 2; i++)
        mem[i] = 0.0;
    for (i = 0; i < order + 1; i++)
        pc[i] = 0.0;
    pc[0] = 1.0;

    lsffir(pc, lsf, order, mem, order + 1);
}
http://read.pudn.com/downloads17/sourcecode/multimedia/audio/62192/g7x/src_lpc/att_lsf2pc.c__.htm
crawl-002
en
refinedweb
Dpkg Roadmap Before starting work on any of this and to avoid duplication, please get in contact with us on the mailing list, as someone might have already started, or there might be initial code around in the personal git repositories or local trees. The 1.19.x development series - Propose enabling bindnow by default. Propose this again on debian-devel at the beginning of the release cycle (matching the beginning of the new dpkg series), so that any run-time problem can be possibly tracked down and fixed. General development goals The following are groups of tasks (many big) that should eventually get done. Consider it a task-oriented TODO list. Things that are being worked on might get moved to a specific milestone above. Optimization work. - Faster and smaller! New dpkg shared library: - Library API cleanup and general code refactoring. - Provide pluggable allocators to avoid leaking through nfmalloc (code in pu/new-mem-pool). - No printing from the library for non-fatal errors, will have to change to return error codes. - Try to make as much as possible reentrant. Refactor .deb generation. - New libdpkg0 and libdpkg0-dbg packages (code in pu/libdpkg). - Merge back apt's apt-deb/deb/ and apt-inst/deb/: Provide a public interface to access .deb files (Depends: #libdpkg). Switch apt to use libdpkg (Depends: #refactor-deb). - Merge back debconf support: Merge back apt-exttracttemplates (Depends: #refactor-deb). - It could be replaced simply with the new «dpkg-deb --ctrl-tarfile»? Finish to draft the spec for the debconf integration (?DpkgDebconfIntegration): Implement --reconfigure?? dpkg --reconfigure <pkg> <pkg> <pkg>.postinst reconfigure <ver> - Fail if pkg is not in the configured state. - Extend conffile support, merge back ucf: Merge back dpkg-repack: - Add a new --repack or --archive or similar command to dpkg that will repack the installed package into a binary package. This requires the following from dpkg: - Storing the original conffiles. - Preserving the file metadata - Built-in tar generator. Merge back features from devscripts (reprepro): - Many scripts are required as part of very common development actions, and requiring the user to install anything else beyond dpkg-dev makes our toolchain more difficult to understand and use. The following are various tools, for which it would make sense to merge their functionality into dpkg proper one way or another: - debpkg → new dpkg option to cleanse environment. - dscverify and debsign → new dpkg-sign. - dscextract → new dpkg-source option. - dcmd → ?? - dpkg-depcheck and dpkg-genbuilddeps → discuss possible takeover? - debchange and dch → new dpkg-modchangelog? mergechanges, changestool (from reprepro) → new dpkg-modchanges. wrap-and-sort, parts of cme → some new command to propose updates to the control files? Merge back features from debuild (from devscripts): - Generates a build log (ideally while preserving colored output). - Runs lintian by default (dpkg-buildpackage can now be configured to do so, but it needs user action). - Environment cleansing. - Checks the source tree directory name for sanity. - Can use debsign/debrsign. - … - Mostly everything in debsums is supported natively now, the following are now extensions: - Generate checksums at build time? (there is a pu/dpkg-gendigests branch now) - Store metadata from .deb at install time. - Add a new dpkg-foo to verify, restore, etc metadata. 
Merge back localepurge: - Now after support for --path-exclude/--path-include, can localepurge be considered obsolete or just a way to provide config files? Merge back functionality from debsigs-verify, dpkg-sig, etc into dpkg proper (there is a pu/dpkg-sign branch now): - Draft a new spec for the signature support inside .deb, discussion started on the mailing list. Write a dpkg-sign (or similarly named program) in C (Depends: #refactor-deb). The archive should allow signed packages. Tracked in 340306 - Merge back functionality from other perl implementations: - Try to see why there are so many implementations of the same. It seems (after an RFC to maintainers from the following) that one of the issues is that people do not want to depend on modules that are not currently in CPAN. I've started looking into making the code functional as a source tarball, CPAN and .deb distributions, and a way to automate the uploading of the modules to CPAN. libparse-debianchangelog-perl libparse-debian-packages-perl libdebian-package-html-perl libconfig-model-dpkg-perl Translation deb support: Clarify the draft (had some discussions about that with the i18n team at DebConf). Depending on the discussions might need either #refactor-deb or filters (implemented), or other stuff. - Clean up dpkg namespace: - Eventual move of the method directories to /usr/lib/dselect/? Or add generic fetch support to libdpkg-perl? - Better integration with front-ends: - Discuss and decide about dpkg knowledge of repository information. Either push out the available file handling to the front-ends, or take full responsibility of repository package availability. Rationale for apt not using available file. - Might need to add support for different Packages files and add information about their provenance. - Might need to add support for generic download methods. - Restore binary package constrained filename field (old MSDOS-Filename) support. Add a way to notify backends that they need to restart themselves, so that packages can use new features w/o waiting for a full release cycle. One of the issues is that even if dpkg is upgraded, the new version might not run for a long while (537051). Possible options: - Make frontends check if a package pre-depends on dpkg, and upgrade that first, and continue with the rest as a different transaction. - Add metadata so that frontends start a new dpkg transaction, forcing the new program to be executed. - Change dpkg to reexec itself on upgrades. - Frontends themselves might also need to restart themselves. - Include dpkg-checkbuilddeps. - Switch to use --update-avail/--merge-avail from stdin/pipe. - Bug on cupt to produce pristine available db information (non-humanized). Working --command-fd:. Most of the groundwork is now there, it needs final polishing and testing to be able to enable the command. - A way for front-ends to store additional information for each package. - Unified unseen tracking: - dselect unseen tracking. - apt/aptitude unseen tracking. - Unified holds: New “Status-Hold-Cond:” db field to track things like aptitude version holds, or other conditionals, value could be «version >> 1.16.7». Or maybe even a more generic handling of conditions regarding status values? Even front-end related ones? - Frontends should not use any force option, they should use dpkg “transactions” via selections. There's already ongoing discussions with apt and cupt maintainers. 
- Frontends should not be parsing dpkg error/progress messages, dpkg should provide a reliable way to notify about those. - Check if other front-end parts (aptitude, synaptic, wajig, ept, ...) should be pushed down the stack. - Other stuff, would need to ask front-end developers. - Move back relevant documentation from Debian policy into dpkg. - Russ has mentioned this at some points, should discuss with the policy team. Part of this has been getting done, with all the new man pages, more to come though.
https://wiki.debian.org/Teams/Dpkg/RoadMap
CC-MAIN-2017-43
en
refinedweb
Welcome to Part. Creating Apps With Android Studio Overview This is a predominantly demo-based module. Larry explains the core features of the Android Studio development suite including: - Creating an activity-based project - The basic layout and usage of Android Studio UI - Adding and working with Java files - Adding and working with XML files - Helpful features and built-in Creating a New Activity Project In this lesson Larry shows us how to make a simple action bar based activity with fragments. I am trying out the advice on the very latest (as at 11th June) Android Studio 2.2 preview 3 release so that I can point out any differences to you… To start, click New Project and fill in these details: - Application name: ServiceInfoViewer - Module name: app - Package name: com.pluralsight.serviceinforviewer - Project location: C:\Dev_work\samples\ServiceInfoViewer (or wherever you like) - Minimum required SDK: API 15 (IceCreamSandwich) - Target SDK: API 19 Android 4.4 (KitKat) - Compile with: API 19 Android 4.4 (KitKat) We see the starter templates. Larry says new templates are being added all the time. He describes and briefly demos these ones: - Blank activity - Fullscreen activity - Master/Detail flow For more information on Fragments, Larry recommends Jim Wilson’s course Improving UI design with Android Fragments We choose Blank activity and use these default details on the next screen: - Activity name: MainActivity - Layout name: activity_main - Fragment Layout Name: fragment_main Larry explains each of the Additional Features options. We use “Include Blank Fragment”, and Android Studio creates all of our starter files for us in a new project. Android Studio UI Overview Larry highlights and explains each of the main UI elements in Android Studio: - Project View - Package View - Collapse Package Tree - Flatten Packages - Compact Empty Middle Packages - Toggle Border controls - Structure pane - TODO pane - Android pane - Event log pane - Gradle pane - Gradle console pane - Maven pane - Menu bar Creating UI Layouts Larry describes fragment_main.xml which is in app/src/res/layout, and the preview pane which shows roughly what the layout will render. We can select different device definitions, orientations and themes. Larry points out there are two tabs Text and Design and clicks on Design taking us to a Graphical design for the layouts. Larry says the LinearLayout performs better at runtime than a RelativeLayout. We see the error: Rendering Problems Except raised during rendering: Binary XML file line #-1: No start tag found! This means the designer hasn’t found a root layout. Larry says we can ignore this, but we’re about to change it. We drag the LinerLayout (vertical) onto fragment_main.xml and change the layout:height property to “wrap_content”. Then we add a ListView. This is found in the Containers directory. We see a sample view with a list of items with sub-item text. Next we give it an id of @+id/service_list We also add a Large Text widget. Because the screen is getting quite busy, drag this over to the Component Tree instead. When we scroll down to the text property, we see a yellow lightbulb on the left of it. This gives us the option of creating a string resource. Another way is to click the button to the right of the edit text field which has an ellipses in it. Larry also explains how to change the textStyle property to bold. Next we add a service detail fragment. 
File name: frag_srv_detail Root element: TableLayout For our details screen, we want to display rows of data for the service, and each row will have two columns: the name of the data, and the data itself. For the purposes of reuse, we create another layout which has just a table row in it. File name: item_srv_detail_row Root element: TableRow The last part of this lesson covers the XML we need to type in for both of these layouts. Writing Java Code In this lesson we add the plumbing for showing the details in MainActivity.java, beginning with the PlaceholderFragment class. We create a RunningServiceWrapper, and add code into onCreateView, onItemClick and onResume. Next we create a new Fragment called ServiceDetailFragment. We get errors because the new class imports the wrong namespace, android.support.v4.app.Fragment, so we change this to android.app.Fragment. Larry explains all the code we need to add into the onCreateView method for this fragment.
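The finished source is not reproduced in these notes, so the sketch below is only an approximation of what those steps produce: a list fragment that inflates fragment_main, fills the service_list ListView with the class names of the running services, and swaps in the detail fragment when an item is tapped. The argument key, the container id, and the use of ActivityManager.getRunningServices() in place of the course's RunningServiceWrapper are all assumptions made for the illustration.

import android.app.ActivityManager;
import android.app.Fragment;
import android.content.Context;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import android.widget.ArrayAdapter;
import android.widget.ListView;
import java.util.ArrayList;
import java.util.List;

public class PlaceholderFragment extends Fragment {

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        View rootView = inflater.inflate(R.layout.fragment_main, container, false);

        // Collect the class names of the currently running services.
        ActivityManager am =
                (ActivityManager) getActivity().getSystemService(Context.ACTIVITY_SERVICE);
        final List<String> names = new ArrayList<String>();
        for (ActivityManager.RunningServiceInfo info : am.getRunningServices(100)) {
            names.add(info.service.getClassName());
        }

        ListView list = (ListView) rootView.findViewById(R.id.service_list);
        list.setAdapter(new ArrayAdapter<String>(getActivity(),
                android.R.layout.simple_list_item_1, names));
        list.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                ServiceDetailFragment detail = new ServiceDetailFragment();
                Bundle args = new Bundle();
                args.putString("SERVICE_NAME", names.get(position));  // key name is made up
                detail.setArguments(args);
                getFragmentManager().beginTransaction()
                        .replace(R.id.container, detail)              // "container" id assumed from the template
                        .addToBackStack(null)
                        .commit();
            }
        });
        return rootView;
    }
}

// In a separate file, ServiceDetailFragment.java:
public class ServiceDetailFragment extends Fragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // Rows for the selected service would be built into the TableLayout here.
        return inflater.inflate(R.layout.frag_srv_detail, container, false);
    }
}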
https://zombiecodekill.com/2016/05/30/creating-apps-with-android-studio/
CC-MAIN-2017-43
en
refinedweb
Blokkal::TreeModel Class Reference

Wraps the tree like structures for a view.

#include <blokkaltreemodel.h>

Inheritance diagram for Blokkal::TreeModel:

Detailed Description

Wraps the tree like structures for a view.

Definition at line 34 of file blokkaltreemodel.h.

Constructor & Destructor Documentation

Constructor.
  root - root node
  parent - parent object

Definition at line 51 of file blokkaltreemodel.cpp.

The documentation for this class was generated from the following files: blokkaltreemodel.h and blokkaltreemodel.cpp.
http://blokkal.sourceforge.net/docs/0.1.0/classBlokkal_1_1TreeModel.html
CC-MAIN-2017-43
en
refinedweb
Configuring Database Remote Login Using Bequeath Connection and SYS Logon Setting Properties for Oracle Performance Extensions Support for Network Data Compression The JNDI standard provides a way for applications to find and access remote services and resources. These services can be any enterprise services. However, for a JDBC application, these services would include database connections and services. JNDI enables file to be in the CLASSPATH environment variable. This file is included with the Java products on the installation CD. You must add it to the CLASSPATH environment variable separately. class and all subclasses implement the java.io.Serializable and javax.naming.Referenceable interfaces. Properties of DataSource The OracleDataSource class, as with any class that implements the DataSource interface, provides a set of properties that can be used to specify a database to connect to. These properties follow the JavaBeans design pattern. The following tables list the OracleDataSource standard properties and Oracle extensions respectively. Note: Oracle does not implement the standard roleName property. Table 8-1 Standard Data Source Properties Note: For security reasons, there is no getPassword() method. Table 8-2 Oracle Extended Data Source Properties:HR/hr@localhost:5221("localhost"); ods.setNetworkProtocol("tcp"); ods.setDatabaseName(<database_name>); ods.setPortNumber(5221); ods.setUser("HR"); ods.setPassword("hr"); Connection conn = ods.getConnection(); Or, optionally, override the user name and password, as follows: Connection conn = ods.getConnection("OE", "oe");("localhost"); ods.setNetworkProtocol("tcp"); ods.setDatabaseName("816"); ods.setPortNumber(5221); ods.setUser("HR"); ods.setPassword("hr"); Register the Data Sourcec naming subcontext of a JNDI namespace or in a child subcontext of the jdbc subcontext. Open a Connection. Here is an example:. Note: The ability to specify a role is supported only for the sys user name.: orapwdpassword utility. You can add a password file for user SYS SYSDBA. This step grants SYSDBAand SYSOPERsystem\. SYSuser. This is an optional step. PASSWORD sys Changing password for sys New password: password Retype new password: password SYShas the SYSDBAprivilege. SQL> select * from v$pwfile_users; USERNAME SYSDB SYSOP ---------------------- --------- --------- SYS TRUE TRUE:@remotehost"; the remotehost is a remote database String url = "jdbc:oracle:thin:@//localhost:5221/orcl";. See Also: "About Reporting DatabaseMetaData TABLE_REMARKS"", ); ... Starting from Oracle Database 12c Release 2 (12.2.0.1), the JDBC Thin driver supports network data compression. Network data compression reduces the size of the session data unit (SDU) transmitted over a data connection and reduces the time required to transmit a SQL query and the result across the network. The benefits are more significant in case of Wireless Area Network (WAN). For enabling network data compression, you must set the connection properties in the following way: Note: Network compression does not work for streamed data. ... OracleDataSource ds = new OracleDataSource(); Properties prop = new Properties(); prop.setProperty("user","user1"); prop.setProperty("password",<password>); // Enabling Network Compression prop.setProperty("oracle.net.networkCompression","on"); //Optional configuration for setting the client compression threshold. 
prop.setProperty("oracle.net.networkCompressionThreshold","1024");
ds.setConnectionProperties(prop);
ds.setURL(url);
Connection conn = ds.getConnection();
...
Table 8-3 Supported Database Specifiers
Starting from Oracle Database 12c Release 1 (12.1.0.2), there is a new connection attribute, RETRY_DELAY, which specifies the delay in seconds between connection retries. To use a TNS alias in the connection URL, set the oracle.net.tns_admin system property to the directory that contains your tnsnames.ora file. LDAP-based name lookup requires additional configuration when the LDAP server is configured not to allow anonymous bind.
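To make the register-and-lookup flow described earlier concrete, here is a bare-bones sketch that binds a configured OracleDataSource under the jdbc naming subcontext and then opens a connection by looking it up. The JNDI provider used here (the file-system reference implementation) and its provider URL are placeholder choices for the example, not requirements of the Oracle driver; any JNDI-accessible naming service would do.

import java.sql.Connection;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NameAlreadyBoundException;
import javax.sql.DataSource;
import oracle.jdbc.pool.OracleDataSource;

public class JndiDataSourceExample {

    public static void main(String[] args) throws Exception {
        // Placeholder JNDI environment: the file-system service provider and a local directory.
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
        env.put(Context.PROVIDER_URL, "file:/tmp/jndi-demo");
        Context ctx = new InitialContext(env);

        // Register the data source: configure it, then bind it in the jdbc subcontext.
        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//localhost:5221/orcl");
        ods.setUser("HR");
        ods.setPassword("hr");
        try {
            ctx.createSubcontext("jdbc");
        } catch (NameAlreadyBoundException ignored) {
            // subcontext already exists
        }
        ctx.bind("jdbc/sampledb", ods);

        // Open a connection: look the data source up by its logical name and connect.
        DataSource ds = (DataSource) ctx.lookup("jdbc/sampledb");
        Connection conn = ds.getConnection();
        System.out.println("Connected as " + conn.getMetaData().getUserName());
        conn.close();
    }
}

Binding works because OracleDataSource implements java.io.Serializable and javax.naming.Referenceable, as noted above; in an application server the binding step is usually done by the container rather than by application code.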
http://docs.oracle.com/database/122/JJDBC/data-sources-and-URLs.htm
CC-MAIN-2017-43
en
refinedweb
Learn how to create and configure a case management definition. Introduction to Adaptive Case Management Configuring Case General Properties Configuring Case Data and Documents Configuring Case User Events Defining Case Stakeholders and Permissions Defining Case Tag Permissions Case Activities and Sub Cases Defining Input Parameters for Case Activities Defining Output Parameters for Case Activities Configuring Case Activities Creating a Global Case Activity Using Business Rules with Cases Integrating with Oracle BPM Adaptive case management is a way of modeling very flexible and data intensive business processes. Use adaptive case management to model a pattern of work with the following characteristics: Complex interaction of people, content and policies Complex decision making and judgments The progress of the case depends on user decisions, actions, events, and policies Changes at runtime, for example, adding new stakeholders enables new actions Context-driven assignments, for example, assignments based on the number of cases resolved by a certain analyst and the time it took them to resolve them Case management enables you to handle unstructured ad-hoc processes. It relies on the content and information of the process so that the user can make informed business decisions. It focuses on unpredictable business processes which rely on worker knowledge and involve human participants. Case management involves: People, often referred to as case workers or knowledge workers Data Documents Collaboration Reporting History Events Policies Processes A case is a collection of information, processes, tasks, rules, services. It requires the worker's knowledge, their involvement and active collaboration to move the case forward. Adaptive case management enables you to define only the activities a user performs to achieve a goal without defining the workflow process. However, it does supports dynamic workflows, structured processes and a combination of both. A case definition contains various case activities that represent the different work that users can perform in the context of a case. Oracle BPM allows you to define case activities based on: a Human Task a BPMN process a custom Java class Adaptive case management allows the end user to define the case flow at runtime, while business processes require you to define the flow at design time. Adaptive case management uses documents and contextual information to determine the flow of the case at runtime. Table 33-1 illustrates the differences between adaptive case management and business processes. Table 33-1 Differences between adaptive case management and business processes The key artifacts in adaptive case management are: Case: a collection of structured, semi-structured, un-structured processes, information and interactions used to make a business decision. Case Model: models the definition of the case. The case has multiple attributes such as name, type, various milestones. It also has associations like the behavior container and root folder. The definition of the case is the collection of these attributes and associations. Case Instance: a collection of documents, data and case activities that are used to process the case and audit the progress of the case. Case Folder: the folder(s) in the Content Management System where the case information is stored. Case Data: the data for the case stored in the BPM database. Case Lifecycle: the case lifecycle is reflected by the case state, which can be one of active, stale, suspended, aborted, or closed. 
Milestones: checkpoints that indicate the progress of a case and represent the completion of a deliverable or a set of related deliverables. Stakeholders can use milestones to obtain a high level view of the status of the case. Case Activity: the work that can be performed in the context of a case. Case activities have various properties that define their behavior. Case activities can be mandatory, conditional, or optional. Case activities can be manual or automatic. Manual case activities require a case worker to initiate them while automatic case activities are initiated by the case runtime. You implement case activities using a BPMN process, a Human Task or a custom java class. Sub Case: a child task of a case, used when additional activities must be spawned as part of the processing of the parent case. Sub cases are instantiated at run time and are similar to case activities, except that they only inherit data from the parent case. Case Event: include case lifecycle changes, case milestone changes, case activity changes and other manual case events. Manual case events are events modeled in the case corresponding to various manual actions that can occur during the case processing. Adaptive case management is suitable to model business processes in many different industries and scenarios: Financial Services: loan origination, credit and debit card dispute management, financial crime management (suspicious activity reporting), wealth management, brokerage, trading, new business account opening, e-bank account opening, accounts payable, accounts receivable, and B2B order management. Insurance: P&C claims processing, policy management, policy servicing, underwriting, fraud prevention, customer on-boarding. Health Care: payer claims processing, policy and procedure management, virtual patient records management, member service management, provider service management, group sales management, health plan insight, clinical and operational insight. Energy & Utilities: process safety management, FERC e-tariff, transmittal process, SOP processing. Public Sector: citizen benefits eligibility and benefit's enrollment, grant management, public safety, tax and custom filling, court solution and judicial matters. Human Resources: employee on-boarding, employee off-boarding, employee performance review, employee benefits administration. Legal: contract management, legal matter management, auditing and compliance. Customer Service: customer correspondence, call center, constituent services. The following diagram shows the lifecycle of a case through its various states, from creation to closure. Figure 33-1 Case State Model A case can be added to an already existing BPM project that doesn't already include one, or you can create a new BPM project for a case. You can define only one case per BPM project. To create a case you must open or create a BPM project and then create the case. For more information on how to create a BPM project, see Creating and Working with Projects. To create a case: The Create Case Management dialog box appears. BPM Studio creates the new case and displays it in the Case Management editor so that you can configure it. Use the Case Management editor to configure cases. Figure 33-2 shows the Case Management Editor. Figure 33-2 Case Management Editor Edit cases using the Case Management editor. Newly created cases are displayed in the Case Management editor automatically. To edit an existing case, open the case file as described below. 
To edit a case: The case file has the .case extension. There is only one case file per BPM project. Use the General tab to configure the general properties of a case. The General tab includes the following sections: General properties - use this section to configure properties for a case including Title, Summary, a text summary of what the case does, Priority, a value from 1 (high) through 5 (low), and a Category, set as Plain Text or Translation. See How to Configure the Case General Properties Due Time - specifies the case due date using the Duration value. The Duration can be expressed as a static value (select By Value) or as an XPath expression (select By Expression). The case due date is calculated from when a case starts. If Use Business Calendar is selected, and an organizational unit is specified (see below), the case due date is calculated using the business calendar associated with the organizational unit. Otherwise the normal calendar is used. See Case Deadlines. Organizational Unit - select an organizational unit to be associated with the case. It can be expressed as a static value (select By Value) or as an XPath expression (select By Expression). Only members of the organizational unit specified are able to access the case, even if they are also specified as stakeholders. Milestones - specifies milestones and their properties for the case. Milestones represent the completion of a deliverable or a set of related deliverables. They are checkpoints that indicate the progress of a case. Stakeholders can use them to obtain a high level idea of the status of the case. There is no direct activity or work associated with milestones. Use the Can Be Revoked checkbox to indicate that a milestone can be revoked. Only milestones that have been achieved can be revoked. This does not affect other achieved milestones for the case. The milestone Duration can be set using a value or an XPath expression. This is used to calculate the milestone deadline, based on when the milestone started. See Case Deadlines. Add, edit, delete, and re-order milestones using the controls in the panel. Use the panel to configure the Name, Can be Revoked value, Duration Type, and Duration for milestones. See How to Add Case Milestones. Outcomes - create, edit, and delete case outcomes in this section. Outcomes are user-defined values that are assigned to the case when it is completed. For example in a medical treatment case, possible outcomes might be: admitted, discharged, referred. Each outcome includes a name, and a display name. See How to Define Case Outcomes Cases support two types of deadlines - case due dates, and milestone deadlines. Both of these are expressed as a duration, specified as either a value or an XPath expression. Durations are configured in the General tab in the Case Management editor. Case due dates are calculated using the value specified in the Duration field in the Due Time panel, based on the starting date and time of the case. When the case due date is reached, a case deadline event is raised. If Use Business Calendar is selected, and an organizational unit is specified, the case due date is calculated using the business calendar associated with the organizational unit. Otherwise the normal calendar is used. Milestone deadlines are calculated using value specified in the Duration field for the milestone in the Milestones panel, based on the starting date and time of the milestone. 
If a milestone is still active when the milestone deadline is reached, a milestone deadline event is raised. Specify general case information using the General tab of the Case Management editor. To configure the general properties: The value of the priority varies from 1 to 5, being 1 the highest and 5 the lowest. Priority indicates the importance of the case so that the case worker can prioritize their work. Categories enable you to group similar cases together. Create milestones to track progress in cases. Do not include spaces in milestone names. To add a milestone: Data and document storage is configured in the Data & Documents tab in the Case Management editor. Cases can be configured to store documents in an enterprise content manager. A case can contain one or more related documents. Stakeholders can upload case documents that only other stakeholders with the appropriate permissions can view or delete. To perform operations on documents, use the CaseStreamService as described in the Oracle Fusion Middleware Business Process Management Suite Java API Reference. You can configure who can read and update documents using permission tags. For more information about permission tags, see Defining Case Tag Permissions. You can specify permission tags in the following situations: When creating a new document using the method uploadDocument() from the CaseDocumentStreamService class. By changing the permission on an existing document using the method setPermissionTag() from the CaseService class, passing the appropriate value in the permission tag parameter. The support for permission tags on documents depends on the type of document store: Non Oracle Content Management Systems This feature is not supported for content management systems that are not Oracle WebCenter Content. BPM Database If you use BPM DB as the document store, then you can set permission tags on case documents without having to configure anything. See Using the BPM Database for Data Storage. Oracle WebCenter Content When you set a permission tag for a document this value is stored in the metadata information field for the CaseManagementPermissionTag. You must create the CaseManagementPermissionTag information field before using permission tags on a document. To create the field, see Creating Case Fields in Oracle WebCenter Content. When you try to set a permission tag on an existing document, it fails. There are some differences in behavior when using the BPM database for data storage: If the parent root folder and the instance folder are not specified in the case design, the folder used to store documents is shown as a slash, similar to a root folder (for example, /). If the parent root folder and the instance folder are not specified in the case design, and while invoking the case, you override the use of the ecmFolder tag in the caseHeader payload (for example, caseroot), the folder used to store documents is shown as a folder under the root (for example, /caseroot). If the parent and instance folders are specified in the case design, the folder used to store documents is /parentFolderName/InstanceFolderName. However, if the parent and instance folders are overridden from the payload, the folder specified in the payload is shown. Case documents stored in WebCenter can be include a web browser link to view the details of the originating case. Before using this facility, you must create a custom attribute named CaseManagementLink in WebCenter. See Creating Case Fields in Oracle WebCenter Content. 
When case documents are uploaded to WebCenter, the CaseManagementLink is populated. The value of the CaseManagementLink property can be changed in Enterprise Manager in the Workflow property configuration to support different usages in custom Case Management user interfaces. Placeholders can be used for live values to be substituted in when documents are stored and the CaseManagementLink property is populated. For example, $host could be used to represent the host name and $port could be used to represent the port number. These placeholder values must be defined in mdm-url-resolver.xml. The values $caseId and $caseNumber will be replaced with their respective values without any additional configuration in mdm-url-resolver.xml. You must create the fields to support case information in Oracle WebCenter before using them. This applies to the CaseManagementPermissionTag and CaseManagement Link fields. See Specifying Permission Tags for Case Documents and Case Links in WebCenter Case Documents. To create a field in Oracle WebCenter Content: CaseManagementPermissionTagor CaseManagementLink. The new information field appears. The data represents the payload of the case and defines the input parameters of the case. The data represents part of the information in the case. Note: Case data created based on a XSD or a business object is not initialized with the default values defined in the XSD or business object. Basing case data on system schema types or on system types such as StartCaseInputMessage is not supported. This can cause corruption of the Adaptive Case Management project. To configure the case data: The Add Data Dialog appears. The name is not unique. Different case data can have the same name. Note: Case data does not support simple data types, thus they do not appear in the list. Only the case or case activity forms data of the projects that are created in ADF appears in the list. Ensure that the case data input message uses names and types as defined in the case model. Initialization of rule facts, and corresponding rule execution, happens correctly only when case data object names and case data object types match . If dateTime element is chosen in the schema, the case activity and case data form shows only the date and not the time. You can configure flex fields to map to case payload data. Use the Flex Fields section of the Data & Documents tab of the Case Management editor to create and edit flex field mappings that link flex field variables with data in the case. Flex fields can be set to be unchangeable by clearing the Updatable checkbox to support data that does not change after creation, such as a serial number. When case data is persisted, flex field mappings are checked and the linked data is also persisted. You must map a flex field to a task field in the run-time task configuration as well as create the mapping to the case payload data. To create a case flex field mapping: The Create Flex Field Dialog appears. The Type can be String, Number or Date. Use the Documents section of the Data & Documents tab of the Case Management editor to configure document locations. The document location is the folder in the enterprise content management system where all the documents related to the case instance are stored. This folder may contain other folders. The case document folder name is created by concatenating the parent folder name and the case instance folder name you provide. You must provide a case instance folder name, or an exception is triggered at runtime. 
To configure the documents location: Select By Name to provide the parent folder location using static text. Select By Expression to provide the parent folder location using an XPATH expression. Note: The folder you provide must already exist. Select By Name to provide the parent folder location using static text. Select By Expression to provide the parent folder location using an XPATH expression. In general, this is done using an XPATH expression. Note: The folder you provide must already exist. Note:Ensure that you configure content management systems after creating and specifying the Case Instance Folder and Parent Folder. Case does not get created if CMS is not configured. Specify the Name, Value type (By Value or By Expression), and the Value (either a value or an XPath expression) and click OK. By default Oracle BPM Suite is configured to use an Oracle Database document store. You can use the default configuration while developing your project if you do not have access to an enterprise content manager. This does not require any configuration in the Oracle SOA Server. You can use the following enterprise content managers for storing case documents: Oracle WebCenter Alfresco CMIS To use this content managers you must manually configure them using EM after installing BPM. The following list shows the configuration for the supported enterprise content managers: Oracle WebCenter The endpoint URL must have the following format idc://ucm_host:4444 The administrator user can be weblogic Alfresco CMIS The endpoint URL must have the following format The administrator user can be weblogic You can define custom user events that represent manual actions that occur while processing the case. Case workers raise events to indicate that something occurred. The occurrence of an event may trigger the activation of a case activity or mark a milestone as completed. For example, if waiting for a fax is part of a case, when it arrives, the case worker can raise an appropriate event to indicate this has occurred. To add an event: The Create User Event dialog box appears. The new event appears in the Events section.. For more information on how to use events in Oracle BPM, see How to Configure Your Process React to a Specific Signal. To publish case events: Note: The case event definition is available at oramds:/soa/shared/casemgmt/CaseEvent.edl To view the case event schema, see CaseEvent.edl. Use the Stakeholders & Permissions tab of the Case Management editor to create, edit, and delete stakeholders and their associated permissions. The permission model enables you to define both stakeholder permissions and tag permissions. You can define multiple stakeholders for each case you define. Only stakeholders can perform actions on case objects. Note that if an organizational unit is specified for the case, only members of the organizational unit are able to access the case, even if they are also specified as stakeholders. Stakeholder Member Types can be configured to be Users, Groups, Application Roles, or Process Roles. The Value for these stakeholders are appropriate to the Member Type. For example, a User stakeholder might specify a particular user ID, whereas a Group stakeholder might specify a particular user group from your LDAP directory. Stakeholder Values can be specified by providing a specific value, or by XPATH expression. Note:weblogic stakeholder is a part of BPMOrganizationAdmin role and has all permissions. Any user who is part of this role has administrator privileges. 
The weblogic stakeholder will be able to see the case and perform actions on the case even if the user is not added to the case. Future redeployments of a case project may add new stakeholder application roles and new permission tag roles, but existing ones will not be affected if this happens. Undeploying a case does not affect any grants or application roles. Table 33-2 shows the default permissions by case object.
Table 33-2 Permissions by Case Object
The stakeholders are the different persons involved in the processing of the case. They are case workers that can view the case and work on it. Figure 33-3 shows the Stakeholders and Permissions tab in the Case Management Editor. To add a stakeholder: The Create Stakeholder dialog box appears. Value specifies the actual user acting as a stakeholder. It specifies the actual user, group, or role processing the case. Note: When you remove a case stakeholder definition, the underlying user, group, or role in the organization is not removed.
Figure 33-3 Stakeholder Tab in Case Management Editor
Use the Permissions section of the Stakeholders and Permissions tab of the Case Management editor to define permissions specific to a case. You specify which users can update the case by tagging case objects with appropriate permission values. Only users with read/write OPSS permission can see or update case objects tagged with permissions. You can attach permissions to case objects such as documents and data. You can define your own set of permissions. The UI shows the default permissions PUBLIC and RESTRICTED. You can modify these default permissions. Some examples of regularly used permissions are: internal, public, press release. Note: E-mail and simple workflow are global case activities; thus their permission tag is global. To add a permission: You can manage the permissions assigned to each stakeholder using Oracle Enterprise Manager. To manage permissions: In Oracle Enterprise Manager, from WebLogic Domain, right-click soainfra, then select Security and then select Application Policies. In the Application Policies page, run a search with the following search criteria: In the Application Stripe field, enter OracleBPMProcessRolesApp. In the Principal Type field, enter Application Role. In the Name Starts With field, enter the name of the case or leave it blank. From the search result, select one of the application roles corresponding to the stakeholder whose permissions you want to edit. Click the Edit button. The Edit Application Grant dialog box appears. From the Permissions table, select a permission and click Edit. Note: Oracle Enterprise Manager does not validate action strings, so you must provide the exact action string. Note: To assign multiple actions, separate them with commas without spaces. For example: READ,UPDATE. Stakeholders can assign additional permissions to case objects during runtime. For this option to be available, you must create permission tags when you design the case in Oracle BPM Studio. For example, you can define a case with the following permission tags: PUBLIC, RESTRICTED, HIGHLY_CONFIDENTIAL. During deployment Oracle Business Process Manager creates the application roles that correspond to the permission tags defined in the case. For example, in a case named EURent that defines the RESTRICTED permission tag, deployment creates roles such as EURent.RESTRICTED.READ.Role and EURent.RESTRICTED.UPDATE.Role, and you can grant a user an action such as EURent.RESTRICTED.UPDATE. If you assign a document the RESTRICTED permission tag, only users with the role EURent.RESTRICTED.READ.Role can access that document.
Note: Tag permissions work together with stakeholder permissions. For example, to read or update a case object a stakeholder must have the READ/UPDATE permission and the case object must have the appropriate tags that allow reading or updating it. For more information about stakeholder permissions, see Defining Case Stakeholders and Permissions. In Oracle Enterprise Manager, from WebLogic Domain, right-click soainfra, then select Security and then select Application Policies. In the Application Policies page provide the following: In the Application Stripe field, enter OracleBPMProcessRolesApp. In the Name Starts With field, enter the name of the case or leave it blank. In the Application Roles page, select a permission tag role and then click Edit. The Edit Application Role dialog box appears. Click Add to add users, groups, or application roles as members of this application role.
You can configure a case to use different languages for display at run-time. The following case artifacts can be localized:
case title
case category
milestone name
outcome
data
user event
stakeholder
permission
You can define display names for all the above artifacts except the case title and case category. Display names are stored in the default locale resource bundle. Note: Multiple artifacts can be configured to have the same display name. However, it is a good practice to use unique display names that are descriptive and help the user quickly identify the displayed data. Figure 33-4 shows the creation dialog box for a milestone that enables you to configure the display name.
Figure 33-4 Display Name Configuration When Creating a Milestone
Case Title
You can specify the case title using plain text or the translation option. When you choose the translation option you must define the following:
the key in the resource bundle
the text to translate
the parameters to make the title dynamic, if applicable
Figure 33-5 shows a title specified using the translation option. In this example the key in the resource bundle is CaseTitle. The text to be translated contains two parameters: Car rental for %0 %1. The parameter values are specified in the Argument table.
Figure 33-5 Localizing the Case Title
Case Category
You can specify the case category using plain text or the translation option. The translation option only supports a simple translation string. You can specify and localize each of the keys defined in a case. To add a localization key: The Translation editor appears. Figure 33-6 shows the Translation editor.
Figure 33-6 Translation Editor
The following classes are types of the class CaseObject:
CaseData
CaseDocumentObject
CaseEvent
CaseHeader
CaseMilestone
CaseStakeHolder
Comment
DatabaseDocument
The class CaseObject contains the following attributes that are shared with its types:
Id
CaseId
ObjectName
ObjectDisplayName
ObjectType
UpdatedBy
UpdatedByDisplayName
UpdatedDate
PermissionTag
From this list, ObjectDisplayName contains the localized value of ObjectName, and UpdatedByDisplayName contains the localized value of UpdatedBy. Note: Initially UpdatedBy and UpdatedByDisplayName contain the name of the user that created the CaseObject. After a user updates the case, these fields contain the name of the user that last updated the case.
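To make the CaseObject attribute list above more concrete, the following sketch shows how client code might print those shared attributes, for example to build a simple localized audit listing. Only the class name and attribute names come from this chapter; the JavaBean-style getters, the list of CaseObject instances, and the import for CaseObject are assumptions for illustration, so consult the Oracle BPM case management API reference for the actual accessors.

import java.util.List;

// Sketch only: CaseObject here stands for the API class described above;
// its real package and accessor names are defined by Oracle BPM, not by
// this example.
public class CaseAuditPrinter {
    public static void print(List<CaseObject> caseObjects) {
        for (CaseObject obj : caseObjects) {
            // ObjectDisplayName is the localized form of ObjectName, and
            // UpdatedByDisplayName is the localized form of UpdatedBy.
            System.out.println(obj.getObjectDisplayName()
                    + " [" + obj.getObjectType() + "]"
                    + " updated by " + obj.getUpdatedByDisplayName()
                    + " on " + obj.getUpdatedDate()
                    + " (permission tag: " + obj.getPermissionTag() + ")");
        }
    }
}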
Case activities model tasks that can be executed by the end user as part of a case. Case activities can be human tasks, BPMN processes, or custom case activities. A case activity represents a specific piece of work that case workers must perform. Case activities can also model sub cases. See Creating Sub Cases.
Cases are composed of various case activities. You can create these case activities based on a BPMN process or a Human Task, or you can create a custom case activity. You can create case activities by:
promoting a BPMN process to a case activity. See How to Promote a BPMN Process to a Case Activity.
promoting a Human Task to a case activity. See How to Promote a Human Task to a Case Activity.
creating a custom case activity. See How to Create a Custom Case Activity.
Case activities and sub cases are defined by the following attributes:
Manually Activated/Automatically Activated: Manually activated case activities are available in the case and can be invoked by users. Automatically activated case activities are invoked by the system.
Required: Required activities must be invoked at least once before a case is closed.
Repeatable: Repeatable case activities can be invoked more than once in a case. Non-repeatable case activities can be invoked only once. Already invoked manual case activities do not appear in the library.
Conditionally Available: Conditionally available manual activities are available in the library if they are activated through a business rule. Non-conditionally available manual activities are available in the library by default until you invoke them. After invocation, repeatable activities are still shown in the library. Non-conditional automatic activities are invoked after Oracle BPM starts a case.
Permission: The Permission value specifies access to the case activity, for example, PUBLIC or PRIVATE.
You can specify input and output parameters for case activities. For additional information, see How to Add a Case Activity Input Parameter and How to Add a Case Activity Output Parameter. Note that sub cases do not support data mapping; they inherit data only from their parent case. See Creating Sub Cases.
By default, cases contain the following predefined case activities:
Simple Workflow: You can use this activity to create simple human tasks. You can configure the task title, priority, due date, comment, assignment type, and assignees. The supported assignment types are: simple, sequential, parallel, FYI. You must pass the payload input to the method initiateCaseActivity with the key SimpleWFActivityPayload. You must use a payload from the schema in Simple Workflow Payload Schema.
Email Notification: You can use this activity to send an email notification. You must configure the workflow notification with EMAIL for email notifications to work properly. You must pass the payload input to the method initiateCaseActivity with the key EmailActivityPayload. You must use a payload from the schema in Email Notification Payload Schema. Note: You can only send case documents as attachments in global case activity email notification activities. Documents located in your file system are not supported.
If you want a case activity to run after another case activity, then you must define a condition in the second activity that indicates it runs after the first activity completes. For more information on defining conditions, see Using Business Rules with Cases.
When you promote a BPMN process to a case activity, Oracle BPM Studio creates the case activity parameters based on the BPMN process input and output arguments. If the BPMN process contains multiple end points, then it creates an output parameter for each of the output arguments of the multiple end points. Note: The case activity can pass input parameters to the underlying BPMN process. You can also configure the case activity to read the output arguments of the BPMN process and store their value. For more information, see Defining Input Parameters for Case Activities and Defining Output Parameters for Case Activities.
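As a rough illustration of the predefined Simple Workflow activity described above, the sketch below shows the general shape of starting it programmatically. Only the method name initiateCaseActivity, the payload key SimpleWFActivityPayload, and the activity name SimpleWorkflowActivity appear in this chapter; the service type, the lookup of that service, the payload object, and the exact argument list are placeholders, so check the Oracle BPM case management API reference and the Simple Workflow Payload Schema for the real contract.

import java.util.HashMap;
import java.util.Map;

// Sketch only. CaseManagementService is a placeholder name for whatever
// Oracle BPM API object exposes initiateCaseActivity; obtaining it is not
// covered in this chapter.
public class SimpleWorkflowStarter {
    public static void startSimpleWorkflow(CaseManagementService service,
                                           String caseId,
                                           Object simpleWfPayload) throws Exception {
        // The payload must conform to the Simple Workflow Payload Schema and
        // is passed under the documented key "SimpleWFActivityPayload".
        Map<String, Object> input = new HashMap<String, Object>();
        input.put("SimpleWFActivityPayload", simpleWfPayload);

        // Hypothetical call: the activity name and argument order are assumptions.
        service.initiateCaseActivity(caseId, "SimpleWorkflowActivity", input);
    }
}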
The Advanced tab shows the service reference and the operation used to create the case activity. To view the human task operation: Open the case. Select the Advanced tab. Figure 33-7 shows the Advanced tab for a BPMN process based case activity. To open the BPMN process, click the service reference. To change the operation, click the Refresh button.
Figure 33-7 Advanced Tab of a BPMN Process Based Case Activity
You can create case activities based on a Human Task. The Human Task must already exist. Oracle BPM Studio creates the case activity parameters based on the Human Task payload arguments. Note: The case activity can pass input parameters to the underlying Human Task. You can also configure the case activity to read the output parameters of the Human Task and store their value. For more information, see Defining Input Parameters for Case Activities and Defining Output Parameters for Case Activities. The Advanced tab shows the Human Task used to create the case activity. To view the human task operation:
You can create a custom case activity based on a Java class. Custom case activities enable users to create their own case activities, for example a scheduler. To the end user there is no difference from the other types of case activities. To create a case activity: The New Gallery dialog box appears. The Create Custom Case Activity dialog box appears. Note: You can add input and output parameters to a custom case activity. You can assign input parameters the value of case data or user input. You can choose to store the value of output parameters as case data.
Use sub cases to manage activities that contribute to the resolution of a parent case. Sub cases are similar to case activities and are instantiated at runtime within the context of a parent case. For example, a power outage case might have a sub case for a line repair. As the parent power outage case evolves, one or more line repair sub cases might be started to facilitate repairs that are discovered as the parent case is worked. Sub cases are deployed as a separate composite from the parent case project. They are linked to the parent using the case link mechanism. The sub case composite version is always the active version of the parent composite. Sub cases do not support data mapping; they inherit data only from their parent case. When a sub case completes, an activity completion event is raised. The same constraints that apply to case activities also apply to sub case activities. To create a Sub Case: The New Gallery dialog box appears. The Create Case Activity dialog box appears. BPM Studio creates the sub case activity and opens it in the Case Activity editor. See Case Activity and Sub Case Attributes.
Input parameters can be case data or user input. By default, input parameters take their values from case data. You can change this so that they take their values from user input. You must also define the case data in the .case file. If you do not define the case data, Oracle BPM Studio creates a new case data of the required type when you create the case activity. You must define case activity input parameters in the same order as the BPMN process or Human Task input arguments. If the input parameter is of the type user input, you can save this value as case data. Figure 33-8 shows the input parameters of a BPMN process based case activity and the corresponding arguments of the start event of the BPMN process.
Note that the name of the input parameters of the case activity matches the name of the arguments in the BPMN process.
Figure 33-8 BPMN Process Based Case Activity Input Parameters
You can define the input parameters for a case activity. To add a case activity input parameter: Note: When you regenerate the activity form after adding case activity input parameters, only the data control is generated. The Regenerate Data Control option does not generate the jspx, to protect customizations to the jspx.
By default Oracle BPM Studio creates the output arguments based on the BPMN process or Human Task arguments. Only editable human workflow arguments appear as output arguments in a case activity. You can save the value of output parameters as case data. To do this, the name of the case activity output parameter must match the root element name of the BPMN process or Human Task argument. After you create the case, you can change the name of the output parameters. Figure 33-9 shows the output parameters of a BPMN process based case activity and the corresponding arguments of the end event of the BPMN process. Note that the name of the output parameters of the case activity matches the name of the arguments in the BPMN process. You can save the output as case data. By default the Case Activity editor populates the case data field if case data of the same type is available. Otherwise, the Case Activity editor creates new case data of that type in the .case file.
Figure 33-9 BPMN Process Based Case Activity Output Parameters
After you create a case activity, BPM Studio opens it in the Case Activity editor for you to configure it. You can configure a case activity to behave in different ways during the case workflow by configuring the case activity properties. To edit a case activity: The case file has the .caseactivity extension. The Case Activity editor appears. You can configure the following basic properties for the case activity you created: Automatic, Required, Repeatable, and Conditionally Available. For a detailed description of these attributes, see Case Activity and Sub Case Attributes. You can also add input and output parameters for the case activity. See How to Add a Case Activity Input Parameter and How to Add a Case Activity Output Parameter.
Global case activities are custom case activities that are global in scope. Example of Global Case Activity Metadata Schema shows an example of the metadata for a global activity. Note that the value of the isGlobal attribute is set to true. Example 33-1 shows how to register a global case activity.
Example 33-1 Registering a Global Case Activity

InputStream is = classLoader.getResourceAsStream(<file>);

public static CaseActivity unmarshall(InputStream inputStream)
        throws JAXBException, IOException {
    try {
        // create a document
        DOMParser p = new DOMParser();
        p.retainCDATASection(true);
        p.parse(inputStream);
        Document doc = p.getDocument();
        JAXBContext jaxbContext = JAXBContext.newInstance(JAXB_CONTEXT);
        //return unmarshal(doc);
        return (CaseActivity) jaxbContext.createUnmarshaller().unmarshal(doc);
    } catch (oracle.xml.parser.v2.XMLParseException e) {
        throw new JAXBException(e);
    } catch (org.xml.sax.SAXException e) {
        throw new JAXBException(e);
    }
}

private static final String JAXB_CONTEXT = "oracle.bpm.casemgmt.metadata.activity.model";
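A possible way to call the unmarshall helper from Example 33-1 is sketched below. Only the getResourceAsStream pattern and the CaseActivity return type come from the example above; the resource name and the final registration step are placeholders.

// Hypothetical usage of the unmarshall(...) helper in Example 33-1.
// "SimpleWorkflowActivity.xml" is an example resource name, not a file
// shipped with Oracle BPM; bundle your own activity metadata document.
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
InputStream is = classLoader.getResourceAsStream("SimpleWorkflowActivity.xml");
CaseActivity activity = unmarshall(is);
// Pass the resulting CaseActivity object to the case management
// registration API; that call is not shown in this chapter.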
You can use business rules to decide which case activities to activate for automatic or manual initiation, or to withdraw manual case activities. You can also use rules to mark a milestone as achieved or revoked.
When you create a case, Oracle BPM Studio automatically generates an associated business rule dictionary. It is good practice to define case management rules on events rather than on conditions, because case management rules are fired when an event occurs. Oracle BPM fires business rules on every case event. Case events are logical events that occur while running the case. The following list enumerates the available case events:
Life cycle events
Milestone events
Activity events
Data events
Document events
Comment events
User events
Note: Model rules in the following sequence: Event Type, Activity Name, Activity State.
You can define the condition of the business rule based on the following:
The event that fired the business rule. Table 33-3 describes the different events that can fire a business rule.
The case instance
Case data. The case data configured in the case is available as facts in the business rule dictionary. You can create rules based on case data combined with case management related facts.
Table 33-3 Case Events
Figure 33-10 shows the facts you can use to define the condition of a business rule based on case management system related data.
Figure 33-10 Business Rule Facts
The business rule dictionary created when you create the case is linked to a common base dictionary in Oracle MDS. The common base dictionary includes all the facts shown in Figure 33-10. The base dictionary name is CaseManagementBaseDictionary. The business rule dictionary of a case supports the following operations:
Automatically invoke conditional automatic activities from a business rule
Publish conditional manual activities to the case from a business rule
Withdraw an activity from a business rule. Note: Non-conditional manual activities cannot be withdrawn. You can withdraw only conditional manual activities.
Achieve and revoke milestones from a business rule
For a detailed description of the functions used to perform these operations, see Table 33-4. When you create a case, Oracle BPM automatically generates an associated business rule dictionary. This case business rule dictionary enables you to define business rules with rule conditions based on the case. To generate a case business rule dictionary: For information on how to create a case, see How to Create a Case. The case rule dictionary appears. Table 33-4 shows the different functions you can use when creating the business rule conditions.
Table 33-4 Rule Functions
Any stakeholder can close a case if all required activities in the case are completed. Users with additional privileges can force closure of a case even if there are pending required activities. Closing a case is a logical operation that marks its status as closed. You can close a case by invoking the closeCase method in the CaseInstanceService class. You can provide an optional outcome parameter and a comment when you close a case. You can close a case regardless of its current state and the state of its case activities. Closing a case sets its state to CLOSED. The list of cases for a user that you obtain using the queryCase API includes closed cases. Note: You can still achieve and revoke milestones after you close or suspend a case.
You can integrate with Oracle BPM by invoking a case from a BPMN process or by publishing case events to Oracle EDN. You then create a process that reacts to these events.
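Before looking at the BPMN integration described next, here is a rough sketch of what closing a case programmatically might look like, following the closing-a-case discussion above. The CaseInstanceService class, the closeCase method, and the optional outcome and comment values are named in this chapter; the way the service is obtained, the parameter order, and the exception handling are assumptions, so check the Oracle BPM case management API reference for the actual signature.

// Sketch only: obtaining a CaseInstanceService instance is environment
// specific (for example through the BPM services client) and is not shown
// in this chapter, so it is passed in as a parameter here.
public class CaseCloser {
    public static void closeWithOutcome(CaseInstanceService caseInstanceService,
                                        String caseId) throws Exception {
        String outcome = "APPROVED";  // optional outcome value (example only)
        String comment = "All required activities completed.";  // optional comment
        // Hypothetical signature; closing the case marks its state as CLOSED
        // regardless of the state of its case activities.
        caseInstanceService.closeCase(caseId, outcome, comment);
    }
}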
You can invoke a case from a BPMN process. To invoke a case from a BPMN process: Available operations are: abortCase, closeCase, reopenCase, suspendCase, resumeCase, attainMilestone, revokeMilestone. For more information about data associations, see Introduction to the Data Association Editor. Note: The case ID is available as a predefined variable that is automatically assigned a value when you invoke a BPMN process from a case. If you want to use correlations with a particular event, then you can trigger a BPMN process from a BPMN based case activity. You must pass the caseId to the message that initiates the process and use it as a correlation key. To use correlations with case events: You can start this process with a message start event and use a message catch to receive the correlated event.
This section contains the Simple Workflow payload schema, the email notification payload schema, an example of a global case activity metadata schema, and the case event definition (CaseEvent.edl).
Simple Workflow Payload Schema: This schema contains the list of payloads that you can pass to the method initiateCaseActivity to create a simple human task. For more information, see Predefined Case Activities.
Email Notification Payload Schema: This schema contains the list of payloads that you can pass to the method initiateCaseActivity to send an email notification. For more information, see Predefined Case Activities.
Example of Global Case Activity Metadata Schema: This schema is an example of the metadata for a global activity. For more information on global activities, see Creating a Global Case Activity.
CaseEvent.edl: This schema defines the case events that you can publish to Oracle EDN. For more information, see How to Publish Case User Events.
Example 33-2 Simple Workflow Payload Schema
(The XSD for this example, documented as "Simple WF Activity Schema", defines the payload elements accepted by the Simple Workflow activity.)
Example 33-3 Email Notification Payload Schema
(The XSD for this example, documented as "Email Activity Schema", defines the payload elements accepted by the email notification activity.)
Example 33-4 Example of Global Case Activity Metadata Schema

<caseActivity targetNamespace="" xmlns="">
   <documentation xmlns=""/>
   <name>SimpleWorkflowActivity</name>
   <activityDefinitionId></activityDefinitionId>
   <activityType>CUSTOM</activityType>
   <repeatable>true</repeatable>
   <required>false</required>
   <manual>true</manual>
   <isGlobal>true</isGlobal>
   <isConditional>false</isConditional>
   <caseAssociations>
      <documentation xmlns=""/>
      <allCases/>
   </caseAssociations>
   <globalActivity>
      <definition>
         <documentation xmlns=""/>
         <className>oracle.bpm.casemgmt.customactivity.simplewf.SimpleWFActivityCallback</className>
      </definition>
   </globalActivity>
</caseActivity>

Example 33-5 CaseEvent.edl

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<definitions xmlns="" targetNamespace="">
   <schema-import ... />
   <event-definition ...>
      <content ... />
   </event-definition>
</definitions>
http://docs.oracle.com/middleware/12213/bpm/bpm-develop/working-adaptive-case-management.htm
CC-MAIN-2017-43
en
refinedweb
Welcome to my brief homepage for CS 352. Announcements, homework hints, etc. will appear here or in the discussion forum. My office hours are scheduled for Monday 1-3 PM and Wednesday 3-4 PM. (I make no promises about responses to emails, except that I have a goal to respond within 24 hours). The class website is: To sign up for the discussion forum, visit. If you do not have a Google account, send an email to me, and I will send an email to you that will invite the account of your choosing to join. You do not need to join to read the discussion forum, just to post to it. That being said, we hope most of you will post at some point, so it is probably in your interest to sign up now, rather than later. Instructions on how to join a Google group and set up a GMail filter. Since we filter the discussion group emails by list-alias, it is unnecessary to begin the subject line of discussion group emails with a "CS 352" prefix. That being said, placing the homework and question number in the subject line will make life easier for your peers. Guide Excel file Examples #include <stdio.h> at the top of your file.
http://www.cs.utexas.edu/users/ragerdl/cs352/
CC-MAIN-2017-43
en
refinedweb
This is the sixth article in our "Bring the clouds together: Azure + Bing Maps" series. You can find a preview of live demonstration on. For a list of articles in the series, please refer to.
Introduction
In our previous post, we introduced how to access spatial data using ADO.NET Entity Framework. This works well if you're working on a simple N-tier solution, and all tiers are based on .NET. But very often, a cloud application must talk to the rest of the world. For example, you may want to expose the data to third party developers, and allow them to use whatever platform/technology they like. This is where web services come into play, and this post will focus on how to expose the data to the world using WCF Data Services. Before reading, we assume you have a basic understanding of the following technologies: - ADO.NET Entity Framework (EF). If you're new to EF, please refer to the MSDN tutorial to get started. You can also find a bunch of getting started video tutorials on the Silverlight web site, such as this one. This post assumes you have an EF model ready to use. - WCF Data Services (code name Astoria). If you're new to Astoria, please refer to the MSDN tutorial to get started. The above Silverlight video tutorial covers Astoria as well. This post targets users who know how to expose a WCF Data Service using the Entity Framework provider with the default configuration, but may not understand more advanced topics such as the reflection provider. - A SQL Azure account if you want to use SQL Azure as the data store.
WCF Data Services are WCF
Before you go on, please keep in mind that WCF Data Services are WCF services. So everything you know about WCF applies to WCF Data Services. You should always think in terms of a service, rather than data access. A data service is a service that exposes the data to the clients, not a data access component. A WCF Data Service is actually a REST service with a custom service host (which extends WCF's WebServiceHost). You can do everything using a regular WCF REST service. But the benefits of using data services are: - It implements the OData protocol for you, which is widely adopted. Products such as Windows Azure table storage, SharePoint 2010, Excel PowerPivot, and so on, all use OData. - It provides a provider independent infrastructure. That is, the data provider can be an Entity Framework model, a CLR object model, or your custom data provider that may work against, say, Windows Azure table storage. With a regular WCF REST service, you will have to implement everything on your own.
WCF Data Services data providers
There are 3 kinds of data providers, as listed in: Entity Framework provider, reflection provider, and custom provider. The simplest way to create a WCF Data Service is to use Entity Framework as the data provider, as you can find in most tutorials. But you must be aware of the limitations. When using Entity Framework as the data provider for a WCF Data Service, the service depends heavily on the EF conceptual model, and thus the storage model as well. That means if you have custom properties in the model, like our sample's model, those custom properties will not be exposed to the clients. This is unfortunately a limitation in the current version of the data service. There's no workaround except for using a reflection provider or custom provider, instead of the EF provider. Our sample uses the reflection provider, because it meets our requirements. If you need more advanced features, such as defining custom metadata, you can use custom providers.
Please refer to for more information. Create a read only reflection provider You can think reflection provider as a plain CLR object model. For a read only provider, you can take any CLR class, as long as they meet the following: - Have one or more properties marked as DataServiceKey. - Do not have any properties that cannot be serialized by DataContractSerializer, or put those properties into the IgnoreProperties list. For example, our EF model exposes the Travel class. To make it compatible with reflection provider, we perform two tweaks. First put the PartitionKey and RowKey to the DataServiceKey list, and then put EF specific properties such as EntityState and EntityKey to the IgnoreProperties list, because they cannot be serialized by DataContractSerializer, and they have no meaning to the clients. We put the binary GeoLocation property in the IgnoreProperties list as well, because we don't want to expose it to the clients. [DataServiceKey(new string[] { "PartitionKey", "RowKey" })] [IgnoreProperties(new string[] { "EntityState", "EntityKey", "GeoLocation" })] public partial class Travel : EntityObject Then create a service context class which contains a property of type IQueryable<T>. Since we're using Entity Framework to perform data accessing, we can simply delegate all data accessing tasks to Entity Framework. public class TravelDataServiceContext : IUpdatable { private TravelModelContainer _entityFrameworkContext; public IQueryable<Travel> Travels { get { return this._entityFrameworkContext.Travels; } } } Finally, use our own service context class as the generic parameter of the data service class: public class TravelDataService : DataService<TravelDataServiceContext> Add CRUD support In order for a reflection provider to support insert/update/delete, you have to implement the IUpdatable interface. This interface has a lot of methods. Fortunately, in most cases, you only need to implement a few of them. Anyway, first make sure the service context class now implements IUpdatable: public class TravelDataServiceContext : IUpdatable Now let's walkthrough insert, update, and delete. Note since we're using Entity Framework to perform data accessing, most tasks can be delegated to EF. When an HTTP POST request is received, data service maps it to an insert operation. Once this occurs, the CreateResource method is invoked. You use this method to create a new instance of the CLR class. But do not set any properties yet. In our sample, after the object is created, we also add it to the Entity Framework context: public object CreateResource(string containerName, string fullTypeName) { try { Type t = Type.GetType(fullTypeName + ", AzureBingMaps.DAL", true); object resource = Activator.CreateInstance(t); if (resource is Travel) { this._entityFrameworkContext.Travels.AddObject((Travel)resource); } return resource; } catch (Exception ex) { throw new InvalidOperationException("Failed to create resource. See the inner exception for more details.", ex); } } Then data service iterates through all properties, and for each property, SetValue is invoked. Here you get the property's name and value deserialized from ATOM/JSON, and you set the value of the property on the newly created object. 
public void SetValue(object targetResource, string propertyName, object propertyValue) { try { var property = targetResource.GetType().GetProperty(propertyName); if (property == null) { throw new InvalidOperationException("Invalid property: " + propertyName); } property.SetValue(targetResource, propertyValue, null); } catch (Exception ex) { throw new InvalidOperationException("Failed to set value. See the inner exception for more details.", ex); } } Finally, SaveChanges will be invoked, where we simply delegate the task to Entity Framework in this case. SaveChanges will also be invoked for update and delete operations. public void SaveChanges() { this._entityFrameworkContext.SaveChanges(); } That's all for insert. Now move on to update. An update operation can be triggered by two types of requests: A MERGE request: In this case, the request body may not contain all properties. If a property is not found in the request body, then it should be ignored. The original value in the data store should be preserved. But if a property is found in the request body, then it should be updated. A PUT request: Where simply every property gets updated. To simplify the implementation, our sample only takes care of PUT. In this case, first the original data must be queried. This is done in the GetResource method. This method is also invoked for a delete operation. The implementation of this method can be a bit weird, because a query is passed as the parameter, which assumes a collection of resources will be returned. But during update and delete, actually only one resource will be queried at a time, so you simply need to return the first item. public object GetResource(IQueryable query, string fullTypeName) { ObjectQuery<Travel> q = query as ObjectQuery<Travel>; var enumerator = query.GetEnumerator(); if (!enumerator.MoveNext()) { throw new ApplicationException("Could not locate the resource."); } if (enumerator.Current == null) { throw new ApplicationException("Could not locate the resource."); } return enumerator.Current; } After GetResource, ResetResource will be invoked, and you can update its individual properties. public object ResetResource(object resource) { if (resource is Travel) { Travel updated = (Travel)resource; var original = this._entityFrameworkContext.Travels.Where( t => t.PartitionKey == updated.PartitionKey && t.RowKey == updated.RowKey).FirstOrDefault(); original.GeoLocationText = updated.GeoLocationText; original.Place = updated.Place; original.Time = updated.Time; } return resource; } Finally, SaveChanges is invoked, as in the insert operation. One final operation remaining is delete. This is triggered by a HTTP DELETE request. It's the simplest operation. First GetResource is invoked to query the resource to be deleted, and then DeleteResource is invoked. Finally it is SaveChanges. public void DeleteResource(object targetResource) { if (targetResource is Travel) { this._entityFrameworkContext.Travels.DeleteObject((Travel)targetResource); } } The above summaries the steps to build a reflection provider for WCF Data Services. If you want to know more details, we recommend you to read this comprehensive series by Matt. Add a custom operation Our service is now able to expose the data to the world, as well as accept updates. But sometimes you may want to do more than just data. For example, our sample exposes a service operation that calculates the distance between two places. To do so, we can either create a new WCF service, or simply put the operation in the data service. 
To define a custom operation in a data service, you take the same approach as define an operation in a normal WCF REST service, except you don't need the OperationContract attribute. For example, we want clients to invoke our operation using HTTP GET, so we use the WebGet attribute. We don't define a UriTemplate, thus the default URI will be used:. [WebGet] public double DistanceBetweenPlaces(double latitude1, double latitude2, double longitude1, double longitude2) { SqlGeography geography1 = SqlGeography.Point(latitude1, longitude1, 4326); SqlGeography geography2 = SqlGeography.Point(latitude2, longitude2, 4326); return geography1.STDistance(geography2).Value; } The operation itself uses spatial data to calculate the distance. Recall from Chapter 4, if a spatial data is constructed for temporary usage, you don't need a round trip to the database. You can simply use types defined in the Microsoft.SqlServer.Types.dll assembly. This exactly what we're doing here. Choose a proper database connection Recall from Chapter 3, to design a scalable database, we partition the data horizontally using PartitionKey. Now let's see it in action. When creating the Entity Framework context, we use the overload which takes the name of a connection string as the parameter. We choose a parameter based on the PartitionKey. In this case, PartitionKey represents the current logged in user (which will be implemented in later chapters). So it's very easy for us to select the connection string. this._entityFrameworkContext = new TravelModelContainer(this.GetConnectionString("TestUser")); /// <summary> /// Obtain the connection string for the partition. /// For now, all partitions are stored in the same database. /// But as data and users grows, we can move partitions to other databases for better scaling. /// </summary> private string GetConnectionString(string partitionKey) { return "name=TravelModelContainer"; } Test the service You can test a GET operation very easily with a browser. For example, type in the browser address bar, and you'll get an ATOM feed for all Travel entities. For POST, PUT, and DELETE, you can use Fiddler to test them. Atlernatively, you can do as we do, create a unit test, add a service reference, and test the operations. This post will not go into the details about unit test. But it is always a good idea to do unit test for any serious development. Additional considerations Since our service will be hosted in Windows Azure, a load balanced environment, it is recommended to set AddressFilterMode to Any on the service. [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)] Conclusion This post discussed how to expose data to the world using WCF Data Services. In particular, it walked through how to create a reflection provider for WCF Data Services. The next post will switch your attention to another cloud service: Bing Maps. We'll create a client application that integrates both Bing Maps and our own WCF Data Services.
https://blogs.msdn.microsoft.com/windows-azure-support/2010/10/13/azure-bing-maps-expose-data-to-the-world-with-wcf-data-services/
CC-MAIN-2017-43
en
refinedweb
Adding Events A natural part of control development is for the control to expose events that make it easier to interact with the control. The constituent controls of a control are not automatically exposed to users. To expose events, you can define an event for the control and raise that event in response to a change in state or an event of constituent controls. The code in Listing 4 shows how to raise a new event when the user clicks the "Visit Jane Doe's Web Site" link shown in Figure 1. Listing 4 A Control Event // notify user that site link was clicked public event EventHandler Click; // callback on Author's Site link click protected void AuthorSite_Click(object sender, EventArgs e) { EnsureChildControls(); // raise event if (Click != null) { Click(this, EventArgs.Empty); } // move to author's site Page.Response.Redirect(AuthorSite); } As Listing 4 shows, there's nothing special about the basic code required to raise an event. Because the Click event of the AuthorBio control is public, it will appear on the Events tab of the Object Inspector, making it easy for web forms to hook up callbacks. Recall that controls in a designer normally have a default event, and C#Builder generates code if you double-click the control on the design surface. You can achieve the same behavior by adding a DefaultEvent attribute to the AuthorBio control's class definition. Also, handlers for controls are not invoked automatically, as you would normally expect. You must add the INamingContainer interface to the class definition. The following code shows how to add both a DefaultEvent attribute and the INamingContainer interface. [DefaultProperty("Text"), DefaultEvent("Click"), ToolboxData("<{0}:AuthorBio runat=server></{0}:AuthorBio>")] public class AuthorBio : System.Web.UI.WebControls.WebControl, INamingContainer { // class definition elided for clarity } In the code snippet above, the DefaultEvent attribute identifies the Click event. In addition to letting the IDE generate code when the control is double-clicked, the default event gets the initial focus in the Object Inspector when you select the Events tab. The INamingContainer is a marker interface, meaning that it doesn't have members. When this control is rendered with its page, INamingContainer will let each item have a unique name and ensure that everything, including events, is processed okay.
http://www.informit.com/articles/article.aspx?p=170718&seqNum=5
CC-MAIN-2017-43
en
refinedweb
Let's create a recursive solution. - If both trees are empty then we return empty. - Otherwise, we will return a tree. The root value will be t1.val + t2.val, except these values are 0 if the tree is empty. - The left child will be the merge of t1.left and t2.left, except these trees are empty if the parent is empty. - The right child is similar. def mergeTrees(self, t1, t2): if not t1 and not t2: return None ans = TreeNode((t1.val if t1 else 0) + (t2.val if t2 else 0)) ans.left = self.mergeTrees(t1 and t1.left, t2 and t2.left) ans.right = self.mergeTrees(t1 and t1.right, t2 and t2.right) return ans ans.left = self.mergeTrees(t1 and t1.left, t2 and t2.left) I'm new to this game, how does "and" work? Could you briefly explain? @qxlin I am using t1 and t1.left as a shortcut for t1.left if t1 is not None else None. Here, " x and y" evaluates as follows: If x is truthy, then the expression evaluates as y. Otherwise, it evaluates as x. When t1 is None, then None is falsy, and t1 and t1.left evaluates as t1 which is None. When t1 is not None, then t1 is truthy, and t1 and t1.left evaluates as t1.left as desired. This is a standard type of idiom similar to the "?" operator in other languages. I want t1.left if t1 exists, otherwise nothing. Alternatively, I could use a more formal getattr operator: getattr(t1, 'left', None) @awice Thanks so much! I am familiar with '?' operator(or the if else statement), but python always surprises me! Thanks for sharing a nice looking solution! I tried it and got 132 ms. I was surprised to see that it underperformed my initial naive implementation (125ms). I guess not creating a new TreeNode helps a bit if one of the sides is None: def mergeTrees(self, t1, t2): """ :type t1: TreeNode :type t2: TreeNode :rtype: TreeNode """ if not t1 and not t2: return if not t1: res = t2 elif not t2: res = t1 else: res = TreeNode(t1.val+t2.val) res.left = self.mergeTrees(t1.left,t2.left) res.right = self.mergeTrees(t1.right,t2.right) return res Here is a complete analysis of this algorithm: Thanks for sharing. I have a revised version of this and I think it is faster and easier to understand: def mergeTrees(self, t1, t2): if t1 and t2: root = TreeNode(t1.val + t2.val) root.left = self.mergeTrees(t1.left, t2.left) root.right = self.mergeTrees(t1.right, t2.right) return root else: return t1 or t2 Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/92214/python-straightforward-with-explanation
CC-MAIN-2017-43
en
refinedweb
Buildout recipe for Django Djangorecipe: easy install of Django with buildout With djangorecipe you can manage your django site in a way that is familiar to buildout users. For example: - bin/django to run django instead of bin/python manage.py. - bin/test to run tests instead of bin/python manage.py test yourproject. (Including running coverage “around” your test). - bin/django automatically uses the right django settings. So you can have a development.cfg buildout config and a production.cfg, each telling djangorecipe to use a different django settings module. bin/django will use the right setting automatically, no need to set an environment variable. Djangorecipe is developed on github at, you can submit bug reports there. It is tested with travis-ci and the code quality is checked via landscape.io: Setup You can see an example of how to use the recipe below with some of the most common settings: [buildout] show-picked-versions = true parts = django eggs = yourproject gunicorn develop = . # ^^^ Assumption: the current directory is where you develop 'yourproject'. versions = versions [versions] Django = 1.8.2 gunicorn = 19.3.0 [django] recipe = djangorecipe settings = development eggs = ${buildout:eggs} project = yourproject test = yourproject scripts-with-settings = gunicorn # ^^^ This line generates a bin/gunicorn-with-settings script with # the correct django environment settings variable already set. Earlier versions of djangorecipe used to create a project structure for you, if you wanted it to. Django itself generates good project structures now. Just run bin/django startproject <projectname>. The main directory created is the one where you should place your buildout and probably a setup.py. Startproject creates a manage.py script for you. You can remove it, as the bin/django script that djangorecipe creates is the (almost exact) replacement for it. See django’s documentation for startproject. You can also look at cookiecutter. Supported options The recipe supports the following options. - project - This option sets the name for your project. - settings - You can set the name of the settings file which is to be used with this option. This is useful if you want to have a different production setup from your development setup. It defaults to development. - test - If you want a script in the bin folder to run all the tests for a specific set of apps this is the option you would use. Set this to the list of app labels which you want to be tested. Normally, it is recommended that you use this option and set it to your project’s name. - scripts-with-settings Script names you add to here (like ‘gunicorn’) get a duplicate script created with ‘-with-settings’ after it (so: bin/gunicorn-with-settings). They get the settings environment variable set. At the moment, it is mostly useful for gunicorn, which cannot be run from within the django process anymore. So the script must already be passed the correct settings environment variable. Note: the package the script is in must be in the “eggs” option of your part. So if you use gunicorn, add it there (or add it as a dependency of your project). - eggs - Like most buildout recipes, you can/must pass the eggs (=python packages) you want to be available here. Often you’ll have a list in the [buildout] part and re-use it here by saying ${buildout:eggs}. - coverage - If you set coverage = true, bin/test will start coverage recording before django starts. The coverage library must be importable. See the extra coverage notes further below. 
The options below are for older projects or special cases mostly: - dotted-settings-path - Use this option to specify a custom settings path to be used. By default, the project and settings option values are concatenated, so for instance myproject.development. dotted-settings-path = somewhere.else.production allows you to customize it. - extra-paths - All paths specified here will be used to extend the default Python path for the bin/* scripts. Use this if you have code somewhere without a proper setup.py. - control-script - The name of the script created in the bin folder. This script is the equivalent of the manage.py Django normally creates. By default it uses the name of the section (the part between the [ ]). Traditionally, the part is called [django]. - initialization - Specify some Python initialization code to be inserted into the control-script. This functionality is very limited. In particular, be aware that leading whitespace is stripped from the code given. - wsgi - An extra script is generated in the bin folder when this is set to true. This is mostly only useful when deploying with apache’s mod_wsgi. The name of the script is the same as the control script, but with .wsgi appended. So often it will be bin/django.wsgi. - wsgi-script - Use this option if you need to overwrite the name of the script above. - deploy_script_extra - In the wsgi deployment script, you sometimes need to wrap the application in a custom wrapper for some cloud providers. This setting allows extra content to be appended to the end of the wsgi script. For instance application = some_extra_wrapper(application). The limits described above for initialization also apply here. - testrunner - This is the name of the testrunner which will be created. It defaults to test. Coverage notes Starting in django 1.7, you cannot use a custom test runner (like django-nose) anymore to automatically run your tests with coverage enabled. The new app initialization mechanism already loads your models.py, for instance, before the test runner gets called. So your models.py shows up as largely untested. With coverage = true, bin/test starts coverage recording before django gets called. It also prints out a report and export xml results (for recording test results in Jenkins, for instance) and html results. Behind the scenes, true is translated to a default of report xml_report html_report. These space-separated function names are called in turn on the coverage instance. See the coverage API docs for the available functions. If you only want a quick report and xml output, you can set coverage = report xml_report instead. Note that you cannot pass options to these functions, like html output location. For that, add a .coveragerc next to your buildout.cfg. See the coverage configuration file docs. Here is an example: [run] omit = */migrations/* *settings.py source = your_app [report] show_missing = true [html] directory = htmlcov [xml] output = coverage.xml> Corner case:, normally bin/django, that is generated). The following 2.2.1 (2016-06-29) - Bugfix for 2.2: bin/test was missing quotes around an option. [reinout] 2.2 (2016-06-29) - Added optional coverage option. Set it to true to automatically run coverage around your django tests. Needed if you used to have a test runner like django-nose run your coverage automatically. Since django 1.7, this doesn’t work anymore. With the new “coverage” option, bin/test does it for you. [reinout] - Automated tests (travis-ci.org) test with django 1.4, 1.8 and 1.9 now. 
And pypi, python 2.7 and python 3.4. [reinout] 2.1.2 (2015-10-21) - Fixed documentation bug: the readme mentioned script-with-settings instead of scripts-with-settings (note the missing s after script). The correct one is script-with-settings. [tzicatl] 2.1.1 (2015-06-15) - Bugfix: script-entrypoints entry point finding now actually works. 2.1 (2015-06-15) Renamed script-entrypoints option to scripts-with-settings. It accepts script names that would otherwise get generated (like gunicorn) and generates a duplicate script named like bin/gunicorn-with-settings. Technical note: this depends on the scripts being setuptools “console_script entrypoint” scripts. 2.0 (2015-06-10) Removed project generation. Previously, djangorecipe would generate a directory for you from a template, but Django’s own template is more than good enough now. Especially: it generates a subdirectory for your project now. Just run bin/django startproject <projectname>. See django’s documentation for startproject. You can also look at cookiecutter. This also means the projectegg option is now deprecated, it isn’t needed anymore. We aim at django 1.7 and 1.8 now. Django 1.4 still works, (except that that one doesn’t have a good startproject command). Gunicorn doesn’t come with the django manage.py integration, so bin/django run_gunicorn doesn’t work anymore. If you add script-entrypoints = gunicorn to the configuration, we generate a bin/django_env_gunicorn script that is identical to bin/gunicorn, only with the environment correctly set. Note: renamed in 2.1 to ``scripts-with-settings``. This way, you can use the wsgi.py script in your project (copy it from the django docs if needed) with bin/django_env_gunicorn yourproject/wsgi.py just like suggested everywhere. This way you can adjust your wsgi file to your liking and run it with gunicorn. For other wsgi runners (or programs you want to use with the correct environment set), you can add a full entry point to script-entrypoints, like script-entrypoints = gunicorn=gunicorn.app.wsgiapp:run would be the full line for gunicorn. Look up the correct entrypoint in the relevant package’s setup.py. Django’s 1.8 wsgi.py file looks like this, see: import os from django.core.wsgi import get_wsgi_application os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings") application = get_wsgi_application() The wsgilog option has been deprecated, the old apache mod_wsgi script hasn’t been used for a long time. Removed old pth option, previously used for pinax. Pinax uses proper python packages since a long time, so it isn’t needed anymore. 1.11 (2014-11-21) - The dotted-settings-path options was only used in management script. Now it is also used for the generated wsgi file and the test scripts. 1.10 (2014-06-16) - Added dotted-settings-path option. Useful when you want to specify a custom settings path to be used by the manage.main() command. - Renamed deploy_script_extra (with underscores) to deploy-script-extra (with dashes) for consistency with the other options. If the underscore version is found, an exception is raised.. Release History Download Files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/djangorecipe/
CC-MAIN-2017-43
en
refinedweb
How do I find the largest continuous sum in a list? Kadane's Algorithm is the answer. The Problem You are presented with an unordered list of numbers. What is the largest sum in this list you can find by only adding contiguous numbers, meaning no gap in the series? Need an Introduction to Algorithms and Big O Notation? - Computer Science: An Introduction to Algorithms and Big O Notation Many computer science students and self-taught programmers find Big O notation to be intimidating. This simple breakdown will shed some light on Big O notation and what it means for programmers. Analysis This problem is a simple problem typically given to computer science students in their early years. We are given an unordered list of data, which means we must scan each element at least once. The best complexity we can possible achieve will be O(n). Is this possible? If we know how many elements are in the list, we could add each element and find the list sum. Would this be the largest sum in the list? This would be true only if all the elements in the list are a positive value. Any negative values would bring the sum down. Does this mean we need to eliminate any negative values? No, if considering a list such as { 5, -1, 10, -11 }. In this example, the answer would be adding elements one through three, which would result in 14. What we know so far: 1. We need to find a sum using contiguous numbers in an unordered list that is larger than the sum of any other combination of contiguous numbers. 2. The list is unordered, thus each element needs to be scanned at least once. 3. Just adding every element in the list is not correct when negative values are entered. 4. Eliminating negative numbers is not correct if surrounded by larger positive values. The Algorithm This problem was solved by an algorithm designed by Joseph Kadane of Carnegie Mellon University. It runs with a time complexity of O(n), which is the best possible scenario considering the problem constraints. In order to solve the problem, we need to consider each element. We can easily see what the sum is ending at this element's position by adding the element to the previous sum. The issue is where is the starting point? We already determined that negative numbers can be included if they are surrounded by larger positive values. If adding a negative number to the sum results in a sum larger than the neighboring element, we can keep that element in our contiguous list, otherwise we set the neighboring element as a new starting point. Doing this, we can end up having to keep track of several contiguous sums, but we only care about keeping track of one at a time and what the maximum value is. Therefore, at a minimum, we need to keep track of two different values. What is the sum of the current contiguous list, and what is the maximum sum of all contiguous lists? The algorithm: 1. Establish a variable to keep track of the current contiguous list. We will refer to this variable as sum. 2. Establish a variable to keep track of the maximum contiguous list. We will refer to this variable as maxSum. 3. Set sum and maxSum equal to the first element of the list. 4. Iterate through each element from the second element to the final element. Return to this step for each iteration. 5. Add the nth element to sum, and compare the result to sum. If the new result is larger than the current element, set sum to the new result. Otherwise, set sum to the current element. 6. If the value of sum is larger than the value of maxSum, set maxSum to sum. 7. 
Examples

Let's perform the algorithm on the list: { 5, -1, 10, -11 }

First element: 5. Value of sum: 5. Value of max: 5.
Second element: -1. Value of (sum + n): 4. Value of sum: 4. Value of max: 5.
Third element: 10. Value of (sum + n): 14. Value of sum: 14. Value of max: 14.
Fourth element: -11. Value of (sum + n): 3. Value of sum: 3. Value of max: 14.

Return value: 14

As we can see, the algorithm returns the correct result for the small list of items we analyzed at the beginning of this problem.

Let's try another, more complex list: { 2, -4, -6, 9, 8, -11, 10, 2, -20 }

First element: 2. Value of sum: 2. Value of max: 2.
Second element: -4. Value of (sum + n): -2. Value of sum: -2. Value of max: 2.
Third element: -6. Value of (sum + n): -8. Value of sum: -6 (-6 is larger than -8, so sum is set to -6). Value of max: 2.
Fourth element: 9. Value of (sum + n): 3. Value of sum: 9 (9 is larger than 3, so sum is set to 9). Value of max: 9.
Fifth element: 8. Value of (sum + n): 17. Value of sum: 17. Value of max: 17.
Sixth element: -11. Value of (sum + n): 6. Value of sum: 6. Value of max: 17.
Seventh element: 10. Value of (sum + n): 16. Value of sum: 16. Value of max: 17.
Eighth element: 2. Value of (sum + n): 18. Value of sum: 18. Value of max: 18.
Ninth element: -20. Value of (sum + n): -2. Value of sum: -2. Value of max: 18.

The answer given is 18, which is the sum of the fourth element through the eighth element in the list.

The Code: C++

Here is an example of a single function that runs Kadane's Algorithm in C++. The only parameter passed to the function is the number of elements that will be read from standard input. The function follows the steps of the algorithm while scanning each element once. If no elements are provided, -1 is returned as an error. It is assumed that there are no input errors that conflict with the variable types, and that no overflow occurs.

#include <iostream>
#include <algorithm>

using namespace std;

int LargestContiguousSum(int numElements) {
    if (numElements < 1)
        return -1;                     // error: no elements provided

    int num;
    cin >> num;                        // step 3: initialize with the first element
    int sum = num, maxSum = num;

    for (int x = 1; x < numElements; x++) {
        cin >> num;
        sum = max(num, sum + num);     // step 5: extend the run or start a new one
        maxSum = max(maxSum, sum);     // step 6: track the best sum seen so far
    }
    return maxSum;
}
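The steps translate almost line for line into other languages. The following Python sketch is not part of the original article; the function name and the choice to return None for an empty list are my own, but the loop mirrors steps 3 through 7 above.

def largest_contiguous_sum(numbers):
    """Return the largest sum of any contiguous run of elements."""
    if not numbers:
        return None                          # no elements provided
    current_sum = max_sum = numbers[0]       # step 3: start both at the first element
    for n in numbers[1:]:                    # step 4: scan the remaining elements once
        current_sum = max(n, current_sum + n)   # step 5: extend the run or start a new one
        max_sum = max(max_sum, current_sum)     # step 6: track the best sum seen so far
    return max_sum

print(largest_contiguous_sum([5, -1, 10, -11]))                    # 14
print(largest_contiguous_sum([2, -4, -6, 9, 8, -11, 10, 2, -20]))  # 18

Both calls print the values worked out in the traces above.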
https://hubpages.com/technology/Computer-Science-Kadanes-Algorithm-Finding-the-Largest-Continuous-Sum-in-a-List
CC-MAIN-2017-43
en
refinedweb
Sometimes it is necessary to rotate a text annotation in a Matplotlib figure so that it is aligned with a line plotted on the figure Axes. Axes.annotate takes an argument, rotation, to allow a text label to be rotated, and a naive implementation might be as follows:

import numpy as np
import matplotlib.pyplot as plt

x, y = np.array(((0, 1), (5, 10)))

fig, ax = plt.subplots()
ax.plot(x, y)

xylabel = ((x[0] + x[1]) / 2, (y[0] + y[1]) / 2)
dx, dy = x[1] - x[0], y[1] - y[0]
rotn = np.degrees(np.arctan2(dy, dx))
label = 'The annotation text'
ax.annotate(label, xy=xylabel, ha='center', va='center', rotation=rotn)
plt.show()

That is, calculate the angle of the line from its slope and rotate the text by that amount in degrees.

This fails because the rotation is carried out in the coordinate frame of the displayed figure, not the coordinate frame of the data. To make the text colinear with the plotted line, we need to transform from the latter to the former. The Matplotlib documentation has a brief tutorial on transformations. We therefore use ax.transData.transform_point to convert from the data coordinates to the display coordinates:

plt.clf()
fig, ax = plt.subplots()
ax.plot(x, y)

p1 = ax.transData.transform_point((x[0], y[0]))
p2 = ax.transData.transform_point((x[1], y[1]))
dy = p2[1] - p1[1]
dx = p2[0] - p1[0]
rotn = np.degrees(np.arctan2(dy, dx))
ax.annotate(label, xy=xylabel, ha='center', va='center', rotation=rotn)
plt.show()
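If this is something you do often, the transformation step can be wrapped in a small reusable helper. The sketch below simply packages the approach above into a function; the name label_line and the keyword-argument pass-through are my own choices, not from the original post. Note that the data-to-display transform is only meaningful once the axes limits are settled, so call the helper after plotting the line.

import numpy as np

def label_line(ax, x0, y0, x1, y1, label, **kwargs):
    # Endpoints of the segment, converted from data to display coordinates
    p1 = ax.transData.transform_point((x0, y0))
    p2 = ax.transData.transform_point((x1, y1))
    # Angle of the segment as it actually appears on screen
    rotn = np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))
    # Midpoint of the segment, in data coordinates
    xylabel = ((x0 + x1) / 2, (y0 + y1) / 2)
    return ax.annotate(label, xy=xylabel, ha='center', va='center',
                       rotation=rotn, **kwargs)

With the example data above, label_line(ax, x[0], y[0], x[1], y[1], 'The annotation text') reproduces the second figure.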
https://scipython.com/blog/rotating-text-onto-a-line-in-matplotlib/
CC-MAIN-2019-22
en
refinedweb
Groovy Bean Definitions
Last modified: February 21, 2019

1. Overview

In this quick article, we'll focus on how we can use a Groovy-based configuration in our Java Spring projects.

2. Dependencies

Before we start, we need to add the dependency to our pom.xml file. We also need to add a plugin to compile our Groovy files.

Let's add the dependency for Groovy first to our pom.xml file:

<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>2.4.12</version>
</dependency>

Now, let's add the plugin:

<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.7.0</version>
    <configuration>
        <compilerId>groovy-eclipse-compiler</compilerId>
        <verbose>true</verbose>
        <source>1.8</source>
        <target>1.8</target>
        <encoding>${project.build.sourceEncoding}</encoding>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-eclipse-compiler</artifactId>
            <version>2.9.2-01</version>
        </dependency>
    </dependencies>
</plugin>

Here, we use the Maven compiler to compile the project by using the Groovy-Eclipse compiler. This may vary depending on the IDE we use.

The latest versions of these libraries can be found on Maven Central.

3. Defining Beans

Since version 4, Spring provides support for Groovy-based configurations. This means that Groovy classes can be legitimate Spring beans.

To illustrate this, we're going to define a bean using the standard Java configuration and then configure the same bean using Groovy. This way, we'll be able to see the difference.

Let's create a simple class with a few properties:

public class JavaPersonBean {
    private String firstName;
    private String lastName;

    // standard getters and setters
}

It's important to remember the getters/setters: they're crucial for the mechanism to work.

3.1. Java Configuration

We can configure the same bean using a Java-based configuration:

@Configuration
public class JavaBeanConfig {

    @Bean
    public JavaPersonBean javaPerson() {
        JavaPersonBean jPerson = new JavaPersonBean();
        jPerson.setFirstName("John");
        jPerson.setLastName("Doe");
        return jPerson;
    }
}

3.2. Groovy Configuration

Now, we can see the difference when we use Groovy to configure the previously created bean:

beans {
    javaPersonBean(JavaPersonBean) {
        firstName = 'John'
        lastName = 'Doe'
    }
}

Note that before defining the beans configuration, we should import the JavaPersonBean class. Also, inside the beans block, we can define as many beans as we need.

We defined our fields as private and, although Groovy makes it look like it's accessing them directly, it does so using the provided getters/setters.

4. Additional Bean Settings

As with the XML and Java-based configuration, we can configure more than just the beans themselves.

If we need to set an alias for our bean, we can do it easily:

registerAlias("bandsBean", "bands")

If we want to define the bean's scope:

{ bean ->
    bean.scope = "prototype"
}

To add lifecycle callbacks for our bean, we can do:

{ bean ->
    bean.initMethod = "someInitMethod"
    bean.destroyMethod = "someDestroyMethod"
}

We can also specify inheritance in the bean definition:

{ bean ->
    bean.parent = "someBean"
}

Finally, if we need to import some previously defined beans from an XML configuration, we can do this using importBeans():

importBeans("somexmlconfig.xml")

5. Conclusion

In this tutorial, we saw how to create Spring Groovy bean configurations.
We also covered setting additional properties on our beans, such as their aliases, scopes, parents, and methods for initialization or destruction, and how to import other XML-defined beans. Although the examples are simple, they can be extended and used for creating any type of Spring config. The full example code used in this article can be found in our GitHub project. This is a Maven project, so you should be able to import it and run it as it is.
https://www.baeldung.com/spring-groovy-beans
CC-MAIN-2019-22
en
refinedweb
Software Development Kit (SDK) and API Discussions

Hi folks, I am one of the developers of NetApp Harvest. We heavily use Zapi to collect performance and capacity counters from ONTAP hosts. Recently we've developed a plugin to collect counters via the volume-get-iter Zapi. In my own testing environment everything seemed fine, but one of our users reported that the plugin keeps hanging in the background. I did a little bit of debugging and played around with the max-records attribute. It turns out that sometimes (and seemingly this depends on the max-records value and the number of volumes in your cluster), Zapi returns a next-tag that is exactly the same as the one we got during the previous batch request, and so the program ends up in an infinite loop.

Is this a known issue? Has anyone else faced this? It seems like a similar issue was reported previously. I get this issue with an ONTAP 9.6P3 release. ONTAP 9.7 seems to be fine (although this could just be down to the number of volumes).

For illustration, this is the relevant part of the function that I'm running:

def collect_counters():
    """
    Collect counter data from Ontap host.

    We request the Zapi object "volume-get-iter" and store only the counters
    that are in the two global lists volume_space_counters and
    volume_sis_counters (check the top of this script).

    Returns:
        data: triple nested dict: svm => volume => counter => value
    """
    data = {}

    # Get Zapi connection to Cluster
    SDK, zapi = connect_zapi(params)

    # Construct Zapi request
    request = SDK.NaElement('volume-get-iter')
    request.child_add_string('max-records', 150)

    desired_attributes = SDK.NaElement('desired-attributes')
    request.child_add(desired_attributes)
    volume_attributes = SDK.NaElement('volume-attributes')
    desired_attributes.child_add(volume_attributes)
    volume_attributes.child_add_string('volume-id-attributes', '')
    volume_attributes.child_add_string('volume-space-attributes', '')
    volume_attributes.child_add_string('volume-sis-attributes', '')
    volume_attributes.child_add_string('volume-snapshot-attributes', '')

    next_tag = 'initial'
    api_time = 0

    # Continue as long as we get a "next tag" in previous request
    while (next_tag):

        if next_tag != 'initial':
            request.child_add_string('tag', next_tag)

        start = time.time()

        # Send request to server
        try:
            response = zapi.invoke_elem(request)
        except Exception as ex:
            logger.error('[collect_counters] Exception while sending ZAPI ' \
                'request: {}'.format(ex))
            sys.exit(1)

        t = time.time() - start
        api_time += t

        # Check the results
        if response.results_status() != 'passed':
            logger.error('[collect_counters] ZAPI request failed: {}'.format(
                response.results_reason() ) )
            sys.exit(1)

        num_records = response.child_get_int('num-records')

        # No point to continue if no data available
        if not num_records:
            logger.info('[collect_counters] No counter data, stopping session')
            sys.exit(0)

        # This will be None if we got everything already
        next_tag_tmp = response.child_get_string('next-tag')

        # DEBUG
        # Compare the newly received next-tag to the previous one
        # to see if we keep getting the same tag.
        tag_compare = '<NONE>'
        if next_tag_tmp:
            tag_compare = '<SAME>' if next_tag_tmp == next_tag else '<NEW>'
        next_tag = next_tag_tmp

        logger.debug('[collect_counters] Batch API time: {}s. Num records={}. ' \
            'Next tag=[{}]'.format(round(t,2), num_records, tag_compare ) )

        # Extract instances
        try:
            instances = response.child_get('attributes-list').children_get()
        except (NameError, AttributeError) as ex:
            logger.error('[collect_counters] Extracting results failed:' \
                ' {}'.format(ex))
            sys.exit(1)
        ...
When I set max-records to 500, the plugin runs fine, but when I set it to 150, it ends up in an infinite API loop:

$ python extension/volume_capacity_counters.py
[2020-01-30 17:46:33,487] [INFO] Started extension in foreground mode. Log messages will be forwarded to console
[2020-01-30 17:46:33,487] [DEBUG] Started new session. Will poll host [Cuba] for volume capacity counters
[2020-01-30 17:46:33,717] [DEBUG] [connect_zapi] Created ZAPI with host [Cuba:443], Release=NetApp Release 9.6P3: Sun Sep 22 08:26:36 UTC 2019
[2020-01-30 17:46:35,301] [DEBUG] [collect_counters] Batch API time: 1.58s. Num records=150. Next tag=[<NEW>]
[2020-01-30 17:46:36,749] [DEBUG] [collect_counters] Batch API time: 1.44s. Num records=150. Next tag=[<NEW>]
[2020-01-30 17:46:40,162] [DEBUG] [collect_counters] Batch API time: 3.4s. Num records=150. Next tag=[<SAME>]
[2020-01-30 17:46:41,903] [DEBUG] [collect_counters] Batch API time: 1.73s. Num records=150. Next tag=[<SAME>]
[2020-01-30 17:46:43,368] [DEBUG] [collect_counters] Batch API time: 1.45s. Num records=150. Next tag=[<SAME>]
[2020-01-30 17:46:45,132] [DEBUG] [collect_counters] Batch API time: 1.75s. Num records=150. Next tag=[<SAME>]

Continues...

Same script tested against ONTAP 9.7 with no issues:

sanjunipero>$ python extension/volume_capacity_counters.py
[2020-01-30 17:46:27,751] [INFO] Started extension in foreground mode. Log messages will be forwarded to console
[2020-01-30 17:46:27,752] [DEBUG] Started new session. Will poll host [jamaica] for volume capacity counters
[2020-01-30 17:46:27,975] [DEBUG] [connect_zapi] Created ZAPI with host [jamaica:443], Release=NetApp Release 9.7: Thu Jan 09 17:11:21 UTC 2020
[2020-01-30 17:46:28,684] [DEBUG] [collect_counters] Batch API time: 0.71s. Num records=76. Next tag=[<NONE>]
[2020-01-30 17:46:28,691] [DEBUG] [collect_counters] Collected 1292 counters for 76 volumes
[2020-01-30 17:46:28,694] [DEBUG] Ending session. Runtime: 0.94s. API time: 0.71s [75.
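Until the cause on the ONTAP side is confirmed, one defensive client-side workaround is to remember the tags already used and bail out if the server ever hands back a tag twice. This is only a sketch: it reuses the request, zapi and logger objects from the function above, and the seen_tags set plus the early break are my own additions rather than anything from the NetApp SDK.

seen_tags = set()
next_tag = 'initial'
while next_tag:
    if next_tag != 'initial':
        # Note: depending on how NaElement handles repeated children, it may be
        # safer to rebuild the request each iteration instead of re-adding 'tag'.
        request.child_add_string('tag', next_tag)
    response = zapi.invoke_elem(request)
    new_tag = response.child_get_string('next-tag')
    if new_tag and new_tag in seen_tags:
        # The server returned a tag we have already submitted: stop paginating
        # instead of looping forever, and leave a trace for debugging.
        logger.error('[collect_counters] Repeated next-tag received, aborting pagination')
        break
    if new_tag:
        seen_tags.add(new_tag)
    next_tag = new_tag
    # ... process response.child_get('attributes-list') as before ...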
https://community.netapp.com/t5/Software-Development-Kit-SDK-and-API-Discussions/Ontap-SDK-volume-get-iter-ZAPI-returns-erroneous-next-tag/m-p/153957
CC-MAIN-2022-21
en
refinedweb
#include <background_segm.hpp>

Gaussian Mixture-based Background/Foreground Segmentation Algorithm

The class implements the following algorithm: "An improved adaptive background mixture model for real-time tracking with shadow detection", P. KadewTraKuPong and R. Bowden, Proc. 2nd European Workshop on Advanced Video-Based Surveillance Systems, 2001.

the default constructor
the full constructor that takes the length of the history, the number of Gaussian mixtures, the background ratio parameter and the noise strength
the destructor
Reimplemented from cv::Algorithm.
the re-initialization method
the update operator
Reimplemented from cv::BackgroundSubtractor.
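For context, here is a minimal usage sketch in Python; it is not part of the reference page. In the 2.4-era Python bindings this class is typically exposed as cv2.BackgroundSubtractorMOG (in OpenCV 3 and later the equivalent factory is cv2.bgsegm.createBackgroundSubtractorMOG from opencv-contrib), and the file name input.avi is just a placeholder.

import cv2

# Default parameters; the full constructor also accepts the history length,
# number of mixtures, background ratio and noise strength, matching the C++ class.
bgs = cv2.BackgroundSubtractorMOG()

cap = cv2.VideoCapture('input.avi')   # placeholder input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = bgs.apply(frame)         # the update operator: returns the foreground mask
    cv2.imshow('foreground', fgmask)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()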
https://docs.opencv.org/ref/2.4.13.2/db/dcf/classcv_1_1BackgroundSubtractorMOG.html
CC-MAIN-2022-21
en
refinedweb
updated copyright year 1: \ environmental queries 2: 3: \ Copyright (C) 1995,1996,1997,1998,2000,2003,2007: [IFUNDEF] cell/ : cell/ 1 cells / ; [THEN] 21: [IFUNDEF] float/ : float/ 1 floats / ; [THEN] 22: 23: \ wordlist constant environment-wordlist 24: 25: vocabulary environment ( -- ) \ gforth 26: \ for win32forth compatibility 27: 28: ' environment >body constant environment-wordlist ( -- wid ) \ gforth 29: \G @i{wid} identifies the word list that is searched by environmental 30: \G queries. 31: 32: 33: : environment? ( c-addr u -- false / ... true ) \ core environment-query 34: \G @i{c-addr, u} specify a counted string. If the string is not 35: \G recognised, return a @code{false} flag. Otherwise return a 36: \G @code{true} flag and some (string-specific) information about 37: \G the queried string. 38: environment-wordlist search-wordlist if 39: execute true 40: else 41: false 42: endif ; 43: 44: : e? name environment? 0= ABORT" environmental dependency not existing" ; 45: 46: : $has? environment? 0= IF false THEN ; 47: 48: : has? name $has? ; 49:. 57: 8 constant ADDRESS-UNIT-BITS ( -- n ) \ environment 58: \G Size of one address unit, in bits. 59: 60: 1 ADDRESS-UNIT-BITS chars lshift 1- constant MAX-CHAR ( -- u ) \ environment 61: \G Maximum value of any character in the character set 62: 63: MAX-CHAR constant /COUNTED-STRING ( -- n ) \ environment 64: \G Maximum size of a counted string, in characters. 65: 79: \G True if @code{/} etc. perform floored division 94: \G Counted string representing a version string for this version of 95: \G Gforth (for versions>0.3.0). The version strings of the various 96: \G versions are guaranteed to be ordered lexicographically. 97: 98: : return-stack-cells ( -- n ) \ environment 99: \G Maximum size of the return stack, in cells. 100: [ forthstart 6 cells + ] literal @ cell/ ; 101: 102: : stack-cells ( -- n ) \ environment 103: \G Maximum size of the data stack, in cells. 104: [ forthstart 4 cells + ] literal @ cell/ ; 105: 106: : floating-stack ( -- n ) \ environment 107: \G @var{n} is non-zero, showing that Gforth maintains a separate 108: \G floating-point stack of depth @var{n}. 109: [ forthstart 5 cells + ] literal @ 110: [IFDEF] float/ float/ [ELSE] [ 1 floats ] Literal / [THEN] ; 111: 112: 15 constant #locals \ 1000 64 / 113: \ One local can take up to 64 bytes, the size of locals-buffer is 1000 114: maxvp constant wordlists 115: 116: forth definitions 117: previous 118:
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/environ.fs?hideattic=0;sortby=rev;f=h;only_with_tag=MAIN;content-type=text%2Fx-cvsweb-markup;ln=1;rev=1.36
CC-MAIN-2022-21
en
refinedweb
I want to take multiple inputs from users. The inputs can be split into logical groups, and it makes sense to take all inputs of same group one at a time. So, I tried to take different groups as part of separate form. However, I’m facing the issue that values of previous forms are being reset when new form is being submitted. Using clear_on_submit=False does not seem to have any effect. Here’s a reproducible example: import streamlit def get_inputs(input_id: int): with streamlit.expander(f"Dummy Section {input_id}", expanded=True): with streamlit.form(f"form_{input_id}", clear_on_submit=False): value_1 = streamlit.number_input(f"Dummy Input {input_id} Value 1") value_2 = streamlit.number_input(f"Dummy Input {input_id} Value 2") value_3 = streamlit.number_input(f"Dummy Input {input_id} Value 3") submitted = streamlit.form_submit_button(label=f"Submit Input 1") if not submitted: streamlit.warning("Please enter valid inputs before proceeding") streamlit.stop() else: return {"value_1": value_1, "value_2": value_2, "value_3": value_3} def dummy_main_logic(*args): print(args) streamlit.title("Dummy Title") inputs_1 = get_inputs(1) inputs_2 = get_inputs(2) inputs_3 = get_inputs(3) dummy_main_logic(inputs_1, inputs_2, inputs_3) If I run this with the following command, I never get the chance to provide inputs_3. python -m streamlit run test.py So, my question is whether this is by design, or is this a bug? I really do not want to show all inputs simultaneously using one form or a lot of inputs without any form. The main reason is the final call is going to be expensive, and I do not want to trigger this for all intermediate stages. If anyone faced this before, is there any workaround for this? A related question: is there a way to auto-collapse a section after form is submitted? or basically any way to collapse a section programmatically? Thanks in advance.
https://discuss.streamlit.io/t/multiple-forms-in-a-page/24080
CC-MAIN-2022-21
en
refinedweb
table of contents NAME¶ Tk_HandleEvent - invoke event handlers for window system events SYNOPSIS¶ #include <tk.h> Tk_HandleEvent(eventPtr) ARGUMENTS¶ - XEvent *eventPtr (in) - Pointer to X event to dispatch to relevant handler(s). It is important that all unused fields of the structure be set to zero. DESCRIPTION¶ Tk_HandleEvent is a lower-level procedure that deals with window events. It is called by Tcl_ServiceEvent (and indirectly by Tcl. KEYWORDS¶ callback, event, handler, window
https://manpages.debian.org/unstable/tk8.6-doc/Tk_HandleEvent.3tk.en.html
CC-MAIN-2022-21
en
refinedweb
Hello! My game has three levels, and i have a code that changes from level 1 to level 2 after killing 5 enemies. In the second level, the level should change to the third one after killing 10 enemies. I want to use the same script, but the public variables are not showing in the inspector so i can't change it. Thanks in advance for the help. This code is attached to an empty game object: public class enemyCount : MonoBehaviour { public static int enemiesCount = 5; void Update (){ print("Enemy Count is " + enemiesCount); if(enemiesCount <= 0) { Debug.Log("Level 2!!!"); Application.LoadLevel ("level2"); } } } And this is attached to the bullet script: private void OnCollisionEnter (Collision collision){ if (collision.transform.tag == "Enemy") { Destroy (collision.gameObject); gameObject.SetActive (false); Destroy (this.gameObject); } if (collision.gameObject.tag == "Enemy") { enemyCount.enemiesCount --; } } Answer by Bunny83 · Nov 14, 2017 at 02:42 AM You can't use static variables if they should be editable in the inspector. The Inspector can only show variables that belong to this instance of the class. Static variables do not belong to a certain instance. They only exist once in the whole application. You may want to simply use a "targetKillCount" variable which you use to initialize your static variable in Start. I also would recommend to not use Update for this. It's way better to use a method which is called when you actually decrease the enemy count. You can check if you reached the target there. public class EnemyCount : MonoBehaviour { public static int enemiesCount; private static string nextLevel; public int targetCount; // set this in the inspector public string nextLevelName; // set the name of the next level in the inspector void Start() { enemiesCount = targetCount; nextLevel = nextLevelName; } public static void DecreaseCount() { enemiesCount--; print("Enemy Count is " + enemiesCount); if(enemiesCount <= 0) { Debug.Log("Load level: " + nextLevel); Application.LoadLevel (nextLevel); } } } In your other script you would use this instead: if (collision.gameObject.tag == "Enemy") { enemyCount.DecreaseCount(); } Thank you, now i understand what my mistake was but i don't know why i get this error (sorry i'm new to Unity): An object reference is required to access non-static member `enemyCount.nextLevelName' Of course ^^ $$anonymous$$y mistake. Since the "DecreaseCount" method is a static method it can not use any non static variables. So we would need add also a static string variable (just like the "enemiesCount" variable) and initialize it inside Start with the non-static one. I'll edit my answer. Sorry for the. How to trigger a collider to enable after a camera is inside of it. 1 Answer Scene problem - missing Unityengine and MonoBehaviour 0 Answers How to load a scene with GameFlow on button click ? ( XR RIG / Oculus Quest ) 0 Answers .As u can see in the image it loads ThemeSelection scenes but doesnot show anything 0 Answers Everytime when i start at level 3 or higher after completing it it goes back to level 2 how do i fix this? 1 Answer EnterpriseSocial Q&A
https://answers.unity.com/questions/1432128/use-same-script-in-different-scenes-to-change-leve.html
CC-MAIN-2022-21
en
refinedweb
Hi, I needed some help tweaking a multicolored line with matplotlib. I adopted tacaswell's answer on StackOverflow at When I do the steps exactly like its explained in the answer, I am able to replicate it. However, my case is a bit different. I have a dataframe with timeseries as it's index, and 'perc99_99' as a set of Z-scores. [image: Inline image 1] I modified the SO answer to the best of my knowledge and this is what the code looks like: fig, ax = plt.subplots() # Index of dataframe = timestamps x = day_avg_zscore.index.values # Z score of 99th percentiles y = day_avg_zscore['perc99_99'].values # Threshold at Z-score = 2, using helper function lc = threshold_plot(ax, x, y, 2, 'k', 'r') ax.axhline(2, color='k', ls='--') lc.set_linewidth(3) plt.show() I left the helper function intact: import numpy as np import matplotlib.pyplot as plt from matplotlib.collections import LineCollection from matplotlib.colors import ListedColormap, BoundaryNorm def threshold_plot(ax, x, y, threshv, color, overcolor): """ Helper function to plot points above a threshold in a different color Parameters ··· ax : Axes Axes to plot to x, y : array The x and y values threshv : float Plot using overcolor above this value color : color The color to use for the lower values overcolor: color The color to use for values over threshv # Create a colormap for red, green and blue and a norm to color # f' < -0.5 red, f' > 0.5 blue, and the rest green cmap = ListedColormap([color, overcolor]) norm = BoundaryNorm([np.min(y), threshv, np.max(y)], cmap.N) # Create a set of line segments so that we can color them individually # This creates the points as a N x 1 x 2 array so that we can stack points # together easily to get the segments. The segments array for line collection # needs to be numlines x points per line x 2 (x and y) points = np.array([x, y]).T.reshape(-1, 1, 2) segments = np.concatenate([points[:-1], points[1:]], axis=1) # Create the line collection object, setting the colormapping parameters. # Have to set the actual values used for colormapping separately. lc = LineCollection(segments, cmap=cmap, norm=norm) lc.set_array(y) ax.add_collection(lc) ax.set_xlim(np.min(x), np.max(x)) ax.set_ylim(np.min(y)*1.1, np.max(y)*1.1) return lc Here's what I get as an output: [image: Inline image 2] What am I missing? I'm using Anaconda Python2.7 and Jupyter Notebook. Thank you! - Deep -- Sent from my phone. Please forgive typos and other forms of brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: < -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 44013 bytes Desc: not available URL: < -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 14568 bytes Desc: not available URL: <
https://discourse.matplotlib.org/t/help-regarding-matplotlib-multicolored-lines-attached-stackoverflow-question/19954
CC-MAIN-2022-21
en
refinedweb