| text | url | dump | lang | source |
|---|---|---|---|---|
Guideline: Migration from Hystrix to Sentinel
As microservices become more popular, the stability between services becomes more and more important. Technologies such as flow control, fault tolerance, and system load protection are widely used in microservice systems to improve the robustness of the system and guarantee the stability of the business, and to minimize system outages caused by excessive access traffic and heavy system load.
Hystrix, an open-source latency and fault tolerance library from Netflix, has recently announced on its GitHub homepage that new features are no longer under development, and it recommends that developers use other open-source projects that are still active. So what are the alternatives?
Last time, we introduced two alternatives in the article "Resilience4j and Sentinel: Two Open-Source Alternatives to Netflix Hystrix".
This article will help you migrate from Hystrix to Sentinel and help you get up to speed on using Sentinel.
| Feature in Hystrix | Migration Solution | Feature in Sentinel |
|---|---|---|
| Thread Pool Isolation / Semaphore Isolation | Sentinel does not support thread pool isolation. In Sentinel, flow control in thread count mode represents semaphore isolation: if you are using semaphore isolation, you can simply add flow rules for the target resource. | Thread Count Flow Control |
| Circuit Breaker | Sentinel supports circuit breaking by average response time, exception ratio, and exception count. To use circuit breaking in Sentinel, you can simply configure degrade rules for the target resource. | Circuit breaking with various strategies |
| Command Definition | You can define your resource entry (similar to a command key) via the SphU API in Sentinel. Resource definition and rule configuration are separate. | Resource Entry Definition |
| Command Configuration | Rules can be hardcoded through the xxxRuleManager API in Sentinel, and multiple dynamic rule data sources are also supported. | Rule Configuration |
| HystrixCommand annotation | Sentinel also provides annotation support (SentinelResource), which is easy to use. | SentinelResource annotation |
| Spring Cloud Netflix | Sentinel provides out-of-the-box integration modules for Servlet, Dubbo, Spring Cloud, and gRPC. If you were using Spring Cloud Netflix previously, it's easy to migrate to Spring Cloud Alibaba. | Spring Cloud Alibaba |
HystrixCommand
The execution model of Hystrix is designed around the command pattern: HystrixCommand encapsulates the business logic and the fallback logic into a single command object (HystrixCommand / HystrixObservableCommand). A simple example:
```java
public class SomeCommand extends HystrixCommand<String> {

    public SomeCommand() {
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("SomeGroup"))
            // command key
            .andCommandKey(HystrixCommandKey.Factory.asKey("SomeCommand"))
            // command configuration
            .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                .withFallbackEnabled(true)
            ));
    }

    @Override
    protected String run() {
        // business logic
        return "Hello World!";
    }
}

// The execution model of Hystrix
// sync mode:
String s = new SomeCommand().execute();
// async mode (managed by Hystrix):
Observable<String> s = new SomeCommand().observe();
```
Sentinel does not prescribe an execution model, nor does it care how the code is executed. In Sentinel, all you need to do is wrap your code with the Sentinel API to define resources:
```java
Entry entry = null;
try {
    entry = SphU.entry("resourceName");
    // your business logic here
    return doSomeThing();
} catch (BlockException ex) {
    // handle rejected
} finally {
    if (entry != null) {
        entry.exit();
    }
}
```
In Hystrix, you usually have to configure rules when the command is defined. In Sentinel, resource definitions and rule configurations are separate. Users first define resources for the corresponding business logic through the Sentinel API, and then configure the rules when needed. For details, please refer to this document.
Thread Pool Isolation
The advantage of thread pool isolation is that the isolation is relatively thorough: each resource is handled by its own thread pool without affecting other resources. The drawback is that the number of threads becomes large, and the overhead of thread context switching is significant, especially for low-latency invocations. Sentinel does not provide such a heavyweight isolation strategy, but provides a relatively lightweight one: thread count flow control as semaphore isolation.
Semaphore Isolation
Hystrix’s semaphore isolation is configured at Command definition, such as:
```java
public class CommandUsingSemaphoreIsolation extends HystrixCommand<String> {

    private final int id;

    public CommandUsingSemaphoreIsolation(int id) {
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("SomeGroup"))
            .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                .withExecutionIsolationStrategy(ExecutionIsolationStrategy.SEMAPHORE)
                .withExecutionIsolationSemaphoreMaxConcurrentRequests(8)));
        this.id = id;
    }

    @Override
    protected String run() {
        return "result_" + id;
    }
}
```
In Sentinel, semaphore isolation is provided as a mode of flow control (thread count mode), so you only need to configure the flow rule for the resource:
```java
FlowRule rule = new FlowRule("doSomething")   // resource name
    .setGrade(RuleConstant.FLOW_GRADE_THREAD) // thread count mode
    .setCount(8);                             // max concurrency
FlowRuleManager.loadRules(Collections.singletonList(rule)); // load the rules
```
If you are using the Sentinel dashboard, you can also easily configure the rules in the dashboard.
Circuit Breaking
The Hystrix circuit breaker supports error percentage mode. Related properties:
- circuitBreaker.errorThresholdPercentage: the error percentage threshold
- circuitBreaker.sleepWindowInMilliseconds: the sleep window used when the circuit breaker is open
For example:
```java
public class FooServiceCommand extends HystrixCommand<String> {

    protected FooServiceCommand(HystrixCommandGroupKey group) {
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("OtherGroup"))
            // command key
            .andCommandKey(HystrixCommandKey.Factory.asKey("fooService"))
            .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                .withExecutionTimeoutInMilliseconds(500)
                .withCircuitBreakerRequestVolumeThreshold(5)
                .withCircuitBreakerErrorThresholdPercentage(50)
                .withCircuitBreakerSleepWindowInMilliseconds(10000)
            ));
    }

    @Override
    protected String run() throws Exception {
        return "some_result";
    }
}
```
In Sentinel, you only need to configure circuit breaking rules for the resources that should be automatically degraded. For example, the rules corresponding to the Hystrix example above:
```java
DegradeRule rule = new DegradeRule("fooService")
    .setGrade(RuleConstant.DEGRADE_GRADE_EXCEPTION_RATIO) // exception ratio mode
    .setCount(0.5)      // ratio threshold (0.5 -> 50%)
    .setTimeWindow(10); // sleep window (10 s)

// load the rules
DegradeRuleManager.loadRules(Collections.singletonList(rule));
```
If you are using the Sentinel dashboard, you can also easily configure the circuit breaking rules there.
In addition to the exception ratio mode, Sentinel also supports automatic circuit breaking based on average response time and on the number of exceptions per minute.
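For example, a rule based on average response time can be configured through the same API (a minimal sketch; the threshold values are illustrative):

```java
DegradeRule rtRule = new DegradeRule("fooService")
    .setGrade(RuleConstant.DEGRADE_GRADE_RT) // average response time mode
    .setCount(50)       // average RT threshold (ms)
    .setTimeWindow(10); // sleep window (10 s)
DegradeRuleManager.loadRules(Collections.singletonList(rtRule));
```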
Annotation Support
Hystrix provides annotation support to encapsulate a command and configure it. Here is an example of the Hystrix annotation:
```java
// original method
@HystrixCommand(fallbackMethod = "fallbackForGetUser")
User getUserById(String id) {
    throw new RuntimeException("getUserById command failed");
}

// fallback method
User fallbackForGetUser(String id) {
    return new User("admin");
}
```
Hystrix rule configuration is bundled with command execution. We can configure rules for a command in the commandProperties property of the @HystrixCommand annotation, such as:
```java
@HystrixCommand(commandProperties = {
    @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50")
})
public User getUserById(String id) {
    return userResource.getUserById(id);
}
```
Using Sentinel annotations is similar to Hystrix, as follows:
- Add the annotation support dependency sentinel-annotation-aspectj and register the aspect as a Spring bean (if you are using Spring Cloud Alibaba, the bean will be registered automatically);
- Add the @SentinelResource annotation to the methods that need flow control and circuit breaking. You can set fallback or blockHandler functions in the annotation;
- Configure the rules.
For the details, you can refer to the annotation support document. An example for Sentinel annotation:
```java
// original method
@SentinelResource(fallback = "fallbackForGetUser")
User getUserById(String id) {
    throw new RuntimeException("getUserById command failed");
}

// fallback method (only invoked when the original resource triggers circuit breaking);
// if we need to handle flow control / system protection, we can set a `blockHandler` method
User fallbackForGetUser(String id) {
    return new User("admin");
}
```
Then configure the rules:
- via the API (e.g. the DegradeRuleManager.loadRules(rules) method):

```java
DegradeRule rule = new DegradeRule("getUserById")
    .setGrade(RuleConstant.DEGRADE_GRADE_EXCEPTION_RATIO) // exception ratio mode
    .setCount(0.5)      // ratio threshold (0.5 -> 50%)
    .setTimeWindow(10); // sleep window (10 s)

// load the rules
DegradeRuleManager.loadRules(Collections.singletonList(rule));
```
Integrations
Sentinel has integration modules for Web Servlet, Dubbo, Spring Cloud and gRPC. Users can get started quickly by introducing the adapter dependencies and doing some simple configuration. If you have been using Spring Cloud Netflix, you may consider migrating to Spring Cloud Alibaba.
Dynamic Configuration
Sentinel provides dynamic rule data-source support for dynamic rule management. The ReadableDataSource and WritableDataSource interfaces provided by Sentinel are easy to use.
The Sentinel dynamic rule data source provides extension modules to integrate with popular configuration centers and remote storage. Currently, it supports dynamic rule sources such as Nacos, ZooKeeper, Apollo, and Redis, which covers many production scenarios.
Reference:
| https://alibaba-cloud.medium.com/guideline-migration-from-hystrix-to-sentinel-d8689bb595f3 | CC-MAIN-2021-10 | en | refinedweb |
Overview
The Server interface defines the minimal required functions (start and stop) and a ‘listening’ property.
Common tasks
Usage
LoopBack 4 offers the @loopback/rest package out of the box, which provides an HTTP/HTTPS-based server called RestServer for handling REST requests.
In order to use it in your application, your application class needs to extend RestApplication to provide an instance of RestServer listening on port 3000.
The following example shows how to use RestApplication:
```ts
import {RestApplication, RestServer} from '@loopback/rest';

export class HelloWorldApp extends RestApplication {
  constructor() {
    super();
    // give our RestServer instance a sequence handler function which
    // returns the Hello World string for all requests
    // with RestApplication, handler function can be registered
    // at app level
    this.handler((sequence, request, response) => {
      sequence.send(response, 'Hello World!');
    });
  }

  async start() {
    // call start on application class, which in turn starts all registered
    // servers
    await super.start();
    // get a singleton HTTP server instance
    const rest = await this.getServer(RestServer);
    console.log(`REST server running on port: ${await rest.get('rest.port')}`);
  }
}
```
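A minimal entry point to boot the app might look like this (a sketch; the error handling is illustrative):

```ts
const app = new HelloWorldApp();
app.start().catch(err => {
  console.error('Cannot start the application.', err);
  process.exit(1);
});
```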
Next Steps
- Learn more about creating your own servers!
| https://loopback.io/doc/en/lb4/Server.html | CC-MAIN-2021-10 | en | refinedweb |
Facebook Login with ASP.NET MVC Web Applications
This post is a step-by-step guide on integrating Facebook login with an ASP.NET MVC Web Application. We will start from scratch and end with an application supporting Facebook login.
Why Facebook Login?
Using Facebook (or another platform such as Twitter or Google) login provides the following advantages for the user and for you:
- No need for the user to remember another set of credentials. (If they're not using the same credentials everywhere, that is.)
- No need for the user to go through a registration process, shortening the time and effort between not being part of your app and being part of it. The easier it is for users to get into your app, the more likely they will.
- No need for you to handle things like password reset, two-factor authentication, etc.
Summary of Steps
There are three steps to creating this kind of application:
- Create an ASP.NET MVC Web Application.
- Create a Facebook app.
- Configure the MVC web app to use the settings provided by the Facebook app for login.
Step 1: Create an ASP.NET MVC Web Application
This is pretty straightforward, just go to File > New Project > ASP.NET Web Application > MVC with authentication set to Individual User Accounts. The version of the .NET framework being used here is 4.6.
Step 2: Create a Facebook app
These are the steps to create a Facebook app:
- Go to and log in.
- On the upper right section of the screen, go to My Apps > Add a New App.
- Fill in the Display Name and Contact Email, and choose "Apps for Pages" as the Category, then click "Create App ID".
At this point you should be on a screen that looks like this:
Let's continue:
- On the left sidebar, click Settings.
- Copy the App ID and App Secret (you will need to click on the "Show" button) and set them aside for later use.
- In the App Domains textbox, put "localhost". Note that you will need to replace this domain with the production domain when your MVC app is deployed.
- Toward the bottom of the screen, click "Add Platform" > Website.
- In the Site URL textbox, enter the URL of your MVC app as it runs on your machine.
- Toward the bottom right of the screen, click "Save Changes".
The Facebook app is now ready for use.
Configure the MVC web app to use the settings provided by the Facebook app for login
Now let's have the MVC app use Facebook login. (Note: At the time of this writing, the Facebook API version is 2.7.) Here are the steps:
- Install the NuGet package named
- Go to App_Start > Startup.Auth.cs and look for the block that begins with app.UseFacebookAuthentication.
- Replace that entire code block with the following, supplying the Facebook App ID and App Secret where necessary:

```csharp
app.UseFacebookAuthentication(new FacebookAuthenticationOptions
{
    AppId = "[YOUR APP ID]",
    AppSecret = "[YOUR APP SECRET]",
    Scope = { "email" },
    Provider = new FacebookAuthenticationProvider
    {
        OnAuthenticated = context =>
        {
            context.Identity.AddClaim(new System.Security.Claims.Claim("FacebookAccessToken", context.AccessToken));
            return System.Threading.Tasks.Task.FromResult(true);
        }
    }
});
```
At this point you should be able to run the app and do the Facebook login successfully. However, it's likely that you would want to get user information from Facebook (such as the first and last name, email, etc.) so I'm going to add a few extra steps:
- Create a small container class to hold the Facebook-supplied user information. For example:
```csharp
public class FacebookUserInfo
{
    public string Email { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```
- Go to AccountController.cs and insert the following code block at the beginning of the Task<ActionResult> ExternalLoginCallback(string returnUrl) method's body:

```csharp
var identity = AuthenticationManager.GetExternalIdentity(DefaultAuthenticationTypes.ExternalCookie);
var accessToken = identity.FindFirstValue("FacebookAccessToken");
dynamic userInfo = new FacebookClient(accessToken).Get("/me?fields=email,first_name,last_name");
var facebookUserInfo = new FacebookUserInfo
{
    Email = userInfo["email"],
    FirstName = userInfo["first_name"],
    LastName = userInfo["last_name"]
};
```

With that code block we can extract the information supplied by Facebook (email, first name, and last name in this case) and save it in our own database. A complete list of the available fields can be found in the Facebook Graph API documentation.
Testing It Out
Now we can test our application. Here are the steps:
- Put a breakpoint at the start of the Task<ActionResult> ExternalLoginCallback(string returnUrl) method and run the application in debug mode.
- Go to the login screen. On the right side of the screen, you should see the "Use another service to log in" section, with Facebook as one of the options. Click on Facebook.
- If you are not already logged in to Facebook on the browser where you are testing, a Facebook login page appears, where you must enter your Facebook username and password.
- After logging in, a permissions page may appear, asking you to grant permissions to your Facebook app.
- Once you grant permissions, control will return to your app, and the breakpoint we set should be hit. Go ahead and step through the lines and see how the Facebook information is retrieved.
At this point, you have all the information you need to create a user account. By default, the MVC page takes you to the ExternalLoginCallback view, but you can change this behavior if you want to.
Conclusion
This post serves as a step-by-step guide on how to implement Facebook login in an ASP.NET MVC Web Application.
| https://www.ojdevelops.com/2016/09/facebook-login-with-aspnet-mvc-web.html | CC-MAIN-2021-10 | en | refinedweb |
Travis CI is a hosted service for continuous integration and continuous deployment. It is free for public GitHub projects. In this article I am going to walk you through all the necessary steps, from configuring the server through the Deployer configuration, all the way to setting up Travis.
Server configuration
I assume you already have a VPS server, and that you are comfortable working in the Linux CLI. I also won't deal with the Nginx configuration, focusing only on deployment. The examples here use an Ubuntu server.
As a first step, create a user for the deployment with a disabled password:

```
sudo adduser --disabled-password deploy
```
Let's assume we are deploying the example.com website and it should be deployed to the /var/www/vhosts/example.com directory. Go to /var/www/vhosts/ and change the owner and the permissions of the example.com directory:

```
sudo chown -R youruser:deploy example.com
sudo chmod -R 775 example.com
```
With the above commands we made the directory writable for youruser and also for the deploy group (the deploy user is in this group by default). This can be handy if you need to make some modifications to the website manually, for example changing values in the .env file.
In the next step we'll allow the deploy user to reload the php-fpm service. Open /etc/sudoers:

```
sudo nano /etc/sudoers
```

And add the following line to the file:

```
deploy ALL=(root) NOPASSWD: /usr/sbin/service php7.2-fpm reload
```
This allows the deploy user to run only the defined command with sudo without asking for a password. This step is optional, but it is recommended to reload the php-fpm service after the deploy, otherwise sometimes the deployed changes are not instantly visible.
Deployer configuration
Go to the root of your project and install Deployer with Composer:

```
composer require deployer/deployer --dev
```
You could use the vendor/bin/dep init command to initialize the Deployer recipe, but you can also just create deploy.php in the root of your project and add the following content:
```php
<?php
namespace Deployer;

require 'recipe/laravel.php';

// Project name
set('application', 'Example');

// Project repository
set('repository', '');

// [Optional] Allocate tty for git clone. Default value is false.
set('git_tty', true);

// Shared files/dirs between deploys
add('shared_files', []);
add('shared_dirs', []);

// Writable dirs by web server
add('writable_dirs', []);

// Hosts
host('example.com')
    ->set('user', 'deploy')
    ->set('deploy_path', '/var/www/vhosts/example.com');

// Tasks
task('build', function () {
    run('cd {{release_path}} && build');
});

// [Optional] if deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');

// Migrate database before symlink new release.
before('deploy:symlink', 'artisan:migrate');

task('reload:php-fpm', function () {
    run('sudo /usr/sbin/service php7.2-fpm reload');
});
after('deploy', 'reload:php-fpm');
```
The Deployer recipe is quite self-explanatory; you should make the following changes to the file.
Set the URL of the repository:

```php
// Project repository
set('repository', '');
```

Change the host, user, and deploy_path if needed:

```php
host('example.com')
    ->set('user', 'deploy')
    ->set('deploy_path', '/var/www/vhosts/example.com');
```
More details about deployer configuration can be found in deployer’s documentation.
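Before wiring this into CI, you can verify the recipe by hand from your project root (assuming your own SSH key is authorized for the deploy user on the server):

```
vendor/bin/dep deploy
```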
Travis configuration
Let's assume your project uses PHPUnit as its test framework, as Laravel projects usually do. We'll set up Travis CI to run all the tests and, if the tests pass, automatically deploy the application.
Connect Travis with GitHub
Before you can use the CI, you need to create an account on travis-ci.org. The simplest way to do so is to sign in with GitHub. After you've connected your GitHub account with Travis, you can enable Travis for your public repositories by clicking on the "+" beside My Repositories:
And choose the repositories you want to run the CI on:
Install Travis CLI
We will allow the deploy user to log in to our server using its private key, but we don't want to put the private key into a public repository. Instead, we are going to use the Travis CLI to encrypt our private key; only the encrypted key will be committed and pushed to the repository. You need Ruby to install and run the Travis CLI:
```
gem install travis -v 1.8.10 --no-rdoc --no-ri
```
If you need further information on how to do this, detailed installation instructions can be found here.
Create and encrypt the key
Go to your project's root directory, and generate the key pair by running ssh-keygen:

```
ssh-keygen -t rsa -b 4096 -C 'build@travis-ci.org' -f ./deploy_rsa
```
This creates the private/public key pair. You should NEVER commit the private key to the repository!
Now we can encrypt the private key using the CLI tool. First, log in to Travis:

```
travis login --org
```

If you signed in to Travis with GitHub, it might ask you for your GitHub credentials; just follow the on-screen instructions to log in.
Encrypt the private key and add it to the Travis environment:

```
travis encrypt-file deploy_rsa --add
```
The above command creates the encrypted key file deploy_rsa.enc and adds the decryption key as an environment variable to the .travis.yml.
Commit the deploy_rsa.enc file to the repository, and delete the unencrypted private key:

```
rm deploy_rsa
```
SSH into your server and allow the deploy user to log in with the previously generated keys by copying the content of deploy_rsa.pub to the /home/deploy/.ssh/authorized_keys file:

```
sudo nano /home/deploy/.ssh/authorized_keys
```

When that is done, the public key can also be deleted from the project:

```
rm deploy_rsa.pub
```
Configure the .travis.yml
As a first step we set up Travis to run our unit tests by adding the following content to the .travis.yml file:

```yaml
language: php
php:
  - 7.2
before_script:
  - composer self-update
  - composer install --no-interaction
script:
  - vendor/bin/phpunit
```
The next step is to decrypt the private key and set up the ssh configuration:
```yaml
before_deploy:
  - openssl aes-256-cbc -K $encrypted_<put_your_key_here>_key -iv $encrypted_<put_your_key_here>_iv -in deploy_rsa.enc -out /tmp/deploy_rsa -d
  - eval "$(ssh-agent -s)"
  - chmod 600 /tmp/deploy_rsa
  - ssh-add /tmp/deploy_rsa
  - echo -e "Host example.com\n\tStrictHostKeyChecking no\n\tUser deploy" >> ~/.ssh/config
```
Now we are ready to set up the deployment by adding the following lines to the .travis.yml:

```yaml
deploy:
  skip_cleanup: true
  provider: script
  script: vendor/bin/dep deploy
  on:
    branch: master
```
We are using the script deploy provider, skipping the cleanup after the build, and running Deployer on the master branch.
The deploy will only run if the command(s) in the script section finished without error. The deployment is also skipped when the build runs for a pull request. For more information about Travis deployments, please visit the Travis deployment documentation page.
If everything went well all the tests should run and the changes should be automatically deployed to your server when you push changes to the master branch.
Conclusion
It is not as straightforward to set up a CI/CD pipeline manually as it is to use Envoyer and Forge, but hopefully this article made it a bit easier for you.
If you have any questions or comments, please let me know in the comments section below.
Special thanks to Nicolas Martignoni; I learned the basics of Travis deployment and key encryption from his blog post.
The post Set up CI/CD for your Laravel app with GitHub, Travis, and Deployer appeared first on Daniel Werner.
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/daniel_werner/set-up-ci-cd-for-your-laravel-app-with-github-travis-and-deployer-1gea | CC-MAIN-2021-10 | en | refinedweb |
How to add a phone mask to the input?
Is there some way to create a mask on the input, like a phone mask for example?
- Evandro P. last edited by Evandro P.
I have been using vue-mask in other projects without Quasar Framework and it works great, but it caused a "conflict" when I tried to add vue-mask to a Quasar Framework project. Has anyone had the same problem?
What is the conflict? Any errors?
- rstoenescu Admin last edited by
@Evandro-P hi, can you pls open up a github request ticket? Thinking of adding this to v0.14.
@rstoenescu add mask to quasar ? AMAZING
Almost 90% of admin systems have a form and mask is always user friendly.
- rstoenescu Admin last edited by
A "[Request] Input boxes with mask" ticket will do. So I won't forget. Working on a lot of stuff.
@rstoenescu Ok. I will.
```js
import VMasker from 'vanilla-masker'

export default {
  data () {
    return {
      form: {
        name: '',
        phone: ''
      }
    }
  },
  watch: {
    'form.phone' (newVal, oldVal) {
      this.form.phone = VMasker.toPattern(newVal, '+9 (999) 999-99-99')
    }
  }
}
```
| https://forum.quasar-framework.org/topic/270/how-to-add-phone-mask-in-the-input | CC-MAIN-2021-10 | en | refinedweb |
The MongoDB distinct method returns the set of discrete values for the field specified as the input argument, as an array.
Table of Contents
MongoDB distinct
The syntax for the MongoDB distinct method is:

```
db.collection.distinct(field, query)
```

- field: a string specifying the field for which the discrete values are to be returned.
- query: specifies the documents from which the discrete values are to be retrieved.
Let's look at some examples of selecting distinct values using the mongo shell.

```
> db.car.distinct('speed')
[ 65, 55, 52, 45 ]
```
The mongo distinct example above selects an array of distinct values for the speed field in the car collection. Notice that the query parameter is optional. Now let's look at an example where we pass the distinct query parameter, to select distinct values that match the query criteria.

```
> db.car.distinct('name', {speed: {$gt: 50}})
[
  "WagonR", "Xylo", "Alto800", "Astar",
  "Suzuki S-4", "Santro-Xing", "Palio", "Micra"
]
```

The MongoDB distinct query above finds the names of the cars whose speed is greater than 50 in the car collection.
MongoDB distinct Java Program
Consider the following Java program, which performs the distinct operation on the car collection and prints the set of discrete values for the fields specified by the user.
MongoDBDistinct.java
```java
package com.journaldev.mongodb;

import java.net.UnknownHostException;
import java.util.List;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class MongoDBDistinct {

    public static void distinct() throws UnknownHostException {
        // Get a new connection to the db assuming that it is running
        MongoClient m1 = new MongoClient();

        // use test as a database, use your database here
        DB db = m1.getDB("journaldev");

        // fetch the collection object, car is used here, use your own
        DBCollection coll = db.getCollection("car");

        // call distinct method and store the result in list cl1
        List cl1 = coll.distinct("speed");

        // iterate through the list and print the elements
        for (int i = 0; i < cl1.size(); i++) {
            System.out.println(cl1.get(i));
        }
    }

    public static void distinctquery() throws UnknownHostException {
        MongoClient m1 = new MongoClient();
        DB db = m1.getDB("journaldev");
        DBCollection coll = db.getCollection("car");

        // condition to fetch the car documents whose speed is greater than 50
        DBObject o1 = new BasicDBObject("speed", new BasicDBObject("$gt", 50));

        // call distinct method by passing the field name and object o1
        List l1 = coll.distinct("name", o1);

        System.out.println("-----------------------");
        for (int i = 0; i < l1.size(); i++) {
            System.out.println(l1.get(i));
        }
    }

    public static void main(String[] args) throws UnknownHostException {
        // invoke all the methods to perform distinct operation
        distinct();
        distinctquery();
    }
}
```
The output of the above MongoDB distinct Java program is:

```
65.0
55.0
52.0
45.0
-----------------------
Audi
Swift
Maruthi800
Polo
Volkswagen
Santro
Zen
Ritz
Versa
Innova
```

That's all for the MongoDB distinct examples. This operation is very helpful when you want to select distinct field values from a collection based on certain criteria.
I like the examples but cannot get either to work with the current versions of the Mongo Java API. Any chance this could be updated? Or could you provide the test databases so I can try?
The examples are outdated. They won't work.
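As these comments note, the sample above targets the legacy driver API. With the newer MongoDB Java driver, roughly the same queries look like this (a sketch; the connection string and result types are assumptions):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

import static com.mongodb.client.model.Filters.gt;

public class MongoDBDistinctModern {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("journaldev");
            MongoCollection<Document> coll = db.getCollection("car");

            // distinct values of the speed field
            for (Double speed : coll.distinct("speed", Double.class)) {
                System.out.println(speed);
            }

            // distinct car names where speed > 50
            for (String name : coll.distinct("name", gt("speed", 50), String.class)) {
                System.out.println(name);
            }
        }
    }
}
```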
| https://www.journaldev.com/6320/mongodb-distinct-query | CC-MAIN-2021-10 | en | refinedweb |
Configuration
An object defining configuration options for the Button widget.
accessKey
Specifies the shortcut key that sets focus on the widget.
activeStateEnabled
A Boolean value specifying whether or not the widget changes its state when interacting with a user.
This option is used when the widget is displayed on a platform whose guidelines include the active state change for widgets.
disabled
Specifies whether the widget responds to user interaction.
elementAttr
Specifies the attributes to be attached to the widget's root element.
jQuery
```js
$(function() {
    $("#buttonContainer").dxButton({
        // ...
        elementAttr: {
            id: "elementId",
            class: "class-name"
        }
    });
});
```
Angular
```html
<dx-button ...
    [elementAttr]="{ id: 'elementId', class: 'class-name' }">
</dx-button>
```

```ts
import { DxButtonModule } from "devextreme-angular";
// ...
export class AppComponent {
    // ...
}
@NgModule({
    imports: [
        // ...
        DxButtonModule
    ],
    // ...
})
```
ASP.NET MVC Control
```csharp
@(Html.DevExtreme().Button()
    .ElementAttr("class", "class-name")
    // ===== or =====
    .ElementAttr(new { @id = "elementId", @class = "class-name" })
    // ===== or =====
    .ElementAttr(new Dictionary<string, object>() {
        { "id", "elementId" },
        { "class", "class-name" }
    })
)
```
The event handler's argument object provides the following fields:
- Information about the event.
- The widget's instance.
- The model data. Available only if you use Knockout.
- The jQuery event that caused the handler execution. Deprecated in favor of the event field.
- The event that caused the handler execution. It is a dxEvent, or a jQuery.Event when you use jQuery.
tabIndex
Specifies the number of the element when the Tab key is used for navigating.
template
Specifies a custom template for the Button widget.
The template function's parameters:
- The button's data.
- The button content's container. It is an HTML Element, or a jQuery Element when you use jQuery.
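For instance, a jQuery sketch using the template option (the button text and rendered markup are illustrative):

```js
$(function () {
    $("#buttonContainer").dxButton({
        text: "Download",
        template: function (buttonData, contentElement) {
            // render custom content inside the button
            $("<span>")
                .text("* " + buttonData.text + " *")
                .appendTo(contentElement);
        }
    });
});
```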
See Also
| https://js.devexpress.com/Documentation/ApiReference/UI_Widgets/dxButton/Configuration/ | CC-MAIN-2018-30 | en | refinedweb |
Message container displaying messages as log list. More...
#include <CTextFileLogStreamerComp.h>
Message container displaying messages as log list.
Definition at line 22 of file CTextFileLogStreamerComp.h.
Definition at line 27 of file CTextFileLogStreamerComp.h.
Get file extensions supported by this loader.
Implements ifile::IFileType and icomp::CComponentBase.
Reimplemented from ilog::CStreamLogCompBase.
This function saves data `data` to file `filePath`.
Implements ifile::IFilePersistence.
Write a text line to the output stream.
Implements ilog::CStreamLogCompBase.
© 2007-2017 Witold Gantzke and Kirill Lepskiy
| http://ilena.org/TechnicalDocs/Acf/classifile_1_1_c_text_file_log_streamer_comp.html | CC-MAIN-2018-30 | en | refinedweb |
Programmer's Reference Guide
Introduction
See the most recently published version of this document. Zend_Session_Namespace instances are accessor objects for namespaced slices of $_SESSION. The Zend_Session component wraps the existing PHP ext/session with an administration and management interface, as well as providing an API for Zend_Session_Namespace to persist session namespaces.

Zend_Session_Namespace provides a standardized, object-oriented interface for working with namespaces persisted inside PHP's standard session mechanism. Support exists for both anonymous and authenticated (e.g., "login") session namespaces. Zend_Auth, the authentication component of the Zend Framework, uses Zend_Session_Namespace to store some information associated with authenticated users. Since Zend_Session uses the normal PHP ext/session functions internally, all the familiar configuration options and settings apply (see the PHP session documentation), with such bonuses as the convenience of an object-oriented interface and default behavior that provides both best practices and smooth integration with the Zend Framework.
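A brief usage sketch (the namespace name and property are illustrative, not from the official docs):

```php
<?php
require_once 'Zend/Session/Namespace.php';

// open (or re-open) a namespaced slice of $_SESSION
$userNs = new Zend_Session_Namespace('user');

// values set here persist across requests for the current session
$userNs->visits = isset($userNs->visits) ? $userNs->visits + 1 : 1;
echo "Page views this session: {$userNs->visits}";
```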
| http://framework.zend.com/manual/1.0/en/zend.session.introduction.html | crawl-003 | en | refinedweb |
L_PixelateBitmap
#include "l_bitmap.h"
L_LTIMGSFX_API L_INT L_PixelateBitmap(pBitmap, uCellWidth, uCellHeight, uOpacity, CenterPt, uFlags);
Divides the bitmap into rectangular or circular cells and then recreates the image by filling those cells with the minimum, maximum, or average pixel value, depending upon the effect that was selected.
Returns
This function does not support signed data images. It returns the error code ERROR_SIGNED_DATA_NOT_SUPPORTED if a signed data image is passed to this function.
This function will divide the image into rectangular or circular cells.
The uFlags parameter indicates whether to use rectangular or circular cells and indicates the type of information in the other parameters.
If the image is divided into circular cells by setting PIX_RAD in the uFlags parameter, the cells will be centered around the specified CenterPt. This center point must be defined inside the bitmap or inside the region, if the bitmap has a region. If the bitmap has a region, the effect will be applied on the region only.
This function supports 12 and 16-bit grayscale and 48 and 64-bit color images. Support for 12 and 16-bit grayscale and 48 and 64-bit color images is available only in the Document/Medical toolkits.
To update a status bar or detect a user interrupt during execution of this function, refer to L_SetStatusCallback.
An example of circular cell division can be seen below:
This is the original image:
The image below is the result of the following settings:
uFlags = PIX_RAD | PIX_WPER | PIX_HPER | PIX_AVR
uCellWidth = 90, uCellHeight = 40
This indicates the circular cells are divided into 90 degree cell divisions and each cell has a radial length of 40 pixels. Each cell division is filled with the average value for that cell division.
The image below is the result of the following settings:
uFlags = PIX_RAD | PIX_WFRQ | PIX_HFRQ | PIX_AVR
uCellWidth = 90, uCellHeight = 40
This indicates the circular cells are divided into 90 separate cell divisions around the center point and there are 40 cell divisions along the radius. Each cell division is filled with the average value for that cell division.
This function does not support 32-bit grayscale images. It returns the error code ERROR_GRAY32_UNSUPPORTED if a 32-bit grayscale image is passed to this function.
Required DLLs and Libraries
Platforms
Windows 2000 / XP/Vista.
See Also
Example
```c
L_INT PixelateBitmapExample(L_VOID)
{
   L_INT nRet;
   BITMAPHANDLE LeadBitmap; /* Bitmap handle for the image */
   POINT CenterPt;

   /* Load a bitmap at its own bits per pixel */
   nRet = L_LoadBitmap(TEXT("C:\\Program Files\\LEAD Technologies\\LEADTOOLS 15\\Images\\IMAGE3.CMP"), &LeadBitmap, sizeof(BITMAPHANDLE), 0, ORDER_BGR, NULL, NULL);
   if (nRet != SUCCESS)
      return nRet;

   /* divide the image into circular cells with angle length = 5 degrees and radius = 10 */
   CenterPt.x = LeadBitmap.Width / 2;
   CenterPt.y = LeadBitmap.Height / 2;
   nRet = L_PixelateBitmap(&LeadBitmap, 5, 10, 100, CenterPt, PIX_AVR | PIX_RAD | PIX_WPER | PIX_HPER);
   return nRet;
}
```
| http://www.leadtools.com/help/leadtools/v15/main/api/dllref/l_pixelatebitmap.htm | crawl-003 | en | refinedweb |
1st post here...
I'm having some trouble with std::min_element and my code. Basically, what I'm trying to do is a small partial sort of the first N elements of a list (note that partial_sort only works with random access iterators). The thing is, my sorting criterion is a binary predicate that depends on a third variable. Hence, I'm using a functor with an internal reference to my other variable.
However, when I run it, it gives an invalid reference/pointer and bails out. I've traced the error to the min_element call. Until then everything is fine, but once it gets inside it my iterators get invalidated. I know I could do it another way, but I was quickly prototyping my program and I want to understand what's happening here. Any ideas?
Aaaanyway, here's the code:
```cpp
// distance to a reference disk comparison
struct distPred : binary_function<const Disk*, const Disk*, bool>
{
    Disk* dr;

    bool operator()(const Disk* d1, const Disk* d2) const
    {
        return ( d1->distanceTo(*dr) < d2->distanceTo(*dr) );
    }

    distPred(Disk* rPtr) { dr = rPtr; }
};
```
and the sorting function...
```cpp
unsigned short small_sort(DiskList::iterator from, DiskList::iterator to,
                          const distPred& cmp, unsigned short n)
{
    DiskList::iterator mit, it;
    unsigned short i = 0;

    for (; from != to, i < n; ++i, ++from) {
        mit = min_element(from, to, cmp);
        ptr_swap(*mit, *from);
    }
    return i;
}
```
Feel free to bash my code :blush:
cheers!
Ozz
| http://devmaster.net/forums/topic/2830-stl-hidden-instancing/page__pid__16869#entry16869 | crawl-003 | en | refinedweb |
Event targets are an important part of the Flash® Player and Adobe AIR event model. The event target serves as the focal point for how events flow through the display list hierarchy. When an event such as a mouse click or a keypress occurs, an event object is dispatched into the event flow from the root of the display list. The event object makes a round-trip journey to the event target, which is conceptually divided into three phases: the capture phase includes the journey from the root to the last node before the event target's node; the target phase includes only the event target node; and the bubbling phase includes any subsequent nodes encountered on the return trip to the root of the display list.
In general, the easiest way for a user-defined class to gain event dispatching capabilities is to extend EventDispatcher. If this is impossible (that is, if the class is already extending another class), you can instead implement the IEventDispatcher interface, create an EventDispatcher member, and write simple hooks to route calls into the aggregated EventDispatcher.
Learn more
Related API Elements
In the following example, an instance (decorDispatcher) of the DecoratedDispatcher class is constructed, and the decorDispatcher variable is used to call addEventListener() with the custom event doSomething, which is then handled by didSomething(), which prints a line of text using trace().
```actionscript
package {
    import flash.events.Event;
    import flash.display.Sprite;

    public class IEventDispatcherExample extends Sprite {
        public function IEventDispatcherExample() {
            var decorDispatcher:DecoratedDispatcher = new DecoratedDispatcher();
            decorDispatcher.addEventListener("doSomething", didSomething);
            decorDispatcher.dispatchEvent(new Event("doSomething"));
        }

        public function didSomething(evt:Event):void {
            trace(">> didSomething");
        }
    }
}

import flash.events.IEventDispatcher;
import flash.events.EventDispatcher;
import flash.events.Event;

class DecoratedDispatcher implements IEventDispatcher {
    private var dispatcher:EventDispatcher;

    public function DecoratedDispatcher() {
        dispatcher = new EventDispatcher(this);
    }

    public function addEventListener(type:String, listener:Function, useCapture:Boolean = false, priority:int = 0, useWeakReference:Boolean = false):void {
        dispatcher.addEventListener(type, listener, useCapture, priority);
    }

    public function dispatchEvent(evt:Event):Boolean {
        return dispatcher.dispatchEvent(evt);
    }

    public function hasEventListener(type:String):Boolean {
        return dispatcher.hasEventListener(type);
    }

    public function removeEventListener(type:String, listener:Function, useCapture:Boolean = false):void {
        dispatcher.removeEventListener(type, listener, useCapture);
    }

    public function willTrigger(type:String):Boolean {
        return dispatcher.willTrigger(type);
    }
}
```
| http://help.adobe.com/en_US/FlashPlatform//reference/actionscript/3/flash/events/IEventDispatcher.html | crawl-003 | en | refinedweb |
L_GetMarksCenterMassBitmap
#include "l_bitmap.h"
L_LTIMGCOR_API L_INT L_GetMarksCenterMassBitmap (pBitmap, pMarkPoints, pMarkCMPoints, uMarksCount)
Finds the center of mass for each of the registration marks specified by pMarkPoints. This function is available in the Document/Medical Toolkits.
Returns
This function does not support signed data images. It returns the error code ERROR_SIGNED_DATA_NOT_SUPPORTED if a signed data image is passed to this function.
This function is used to determine the center of mass for each supplied registration mark, to be used in detecting image rotation, scaling and translation.
The results (that is, the points representing each center of mass) returned by this function are multiplied by 100 in order to obtain more precision (00.00). To get the actual results, divide by 100.
This functions uses values that are divided internally by 100.
This function can be used in the following manner:
- Use L_SearchRegMarksBitmap to find the registration marks.
- Pass data from pMarkDetectedPoints to this function to determine the points representing the center of mass of each registration mark.
- Pass these points to L_GetTransformationParameters to detect the image rotation, scaling and translation.
This function does not use L_SetStatusCallback.
If you simply want to automatically straighten a bitmap, use the L_DeskewBitmap function.
For an example, refer to L_GetTransformationParameters.
| http://www.leadtools.com/help/leadtools/v15/main/api/dllref/l_getmarkscentermassbitmap.htm | crawl-003 | en | refinedweb |
Hello, Mike again. Here is the scenario: you're sitting in front of a workstation that has been diagnosed with a Group Policy problem. You scurry to a command prompt and type the ever-familiar GPRESULT.EXE, redirecting the output to a text file. Then you open the file in your favorite text editor and start scrolling through text to begin your adventure in troubleshooting Group Policy. But what if you could get an RSOP report like the one from the Group Policy Management Console (GPMC): HTML-based, with sorted headings and the works? Well, you can!
Let’s face it—the output for GPRESULT.EXE is not aesthetically pleasing to the eye. However, Windows Server 2008 and Windows Vista SP1 change this by including a new version of GPRESULT that allow you to have a nice pretty HTML output of Group Policy results, just like the one created when using GPMC reporting.
Your new GPRESULT command is GPRESULT /H rsop.html. Running this command creates an .html file in the current directory that contains Group Policy results for the currently logged-on user and computer. You can also add the /F argument to force GPRESULT to overwrite the file, should one exist from a previous run. Also, if you or someone who signs your paycheck loves reporting and data mining, then GPRESULT has another option you'll enjoy: change the /H argument to /X (and the file extension to .xml) and GPRESULT will provide Group Policy results in XML format. You can then take this output (conceivably from many workstations), store it in SQL, and voila: reporting heaven.
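Both variants, with the overwrite flag:

```
gpresult /H rsop.html /F
gpresult /X rsop.xml /F
```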
Figure 1-HTML output from GPRESULT
Figure 2- XML output from GPRESULT
All you text-based report lovers can relax because the new version still defaults to text-based reporting.
I know, I know... what about Windows Server 2003 and Windows XP? No worries, we can accomplish the same task from the command line. We can use VBScript and the GPMC object model to provide a similar experience for those still using Windows Server 2003 or Windows XP. Both Windows Server 2003 and Windows XP are able to launch VBScripts. However, GPMC is a separate download for Windows Server 2003 and Windows XP. GPMC is a feature included in Windows Server 2008 that you can install through Server Manager.
Here is the code for the script. Copy and paste this code into a text file. Be sure to save the text file with a .vbs extension or it will not run correctly.
```vbscript
'=====================================================================
' VBScript Source File
'
' NAME:
'
' AUTHOR: Mike Stephens, Microsoft Corporation
' DATE  : 11/15/2007
'
' COMMENT:
'
'=====================================================================

Set oGpm = CreateObject("GPMGMT.GPM")
Set oGpConst = oGpm.GetConstants()

Set oRSOP = oGpm.GetRSOP(oGpConst.RSOPModeLogging, "", 0)
strpath = Left(Wscript.ScriptFullName, InStrRev(Wscript.ScriptFullName, "\", -1, vbTextCompare))

oRSOP.LoggingFlags = 0

oRSOP.CreateQueryResults()
Set oResult = oRSOP.GenerateReportToFile(oGpConst.ReportHTML, strpath & "rsop.html")
oRSOP.ReleaseQueryResults()

WScript.Echo "Complete"
WScript.Quit()
```
Figure 3- VBScript code to save Group Policy results to an HTML file
The code shown in Figure 3 does not require any modification to work in your environment. Its only requirement is that the computer from which the script runs must have GPMC installed. Now, let's take a closer look at the script, which is a good introduction to GPMC scripting. (Please note that this posting is provided "AS IS" with no warranties, and confers no rights. Use of the included script sample is subject to the terms specified at.)
`Set oGpm = CreateObject("GPMGMT.GPM")`

This line is responsible for making the GPMC object model available to the VBScript. If you are going to use the functions and features of GPMC through scripting, then you must include this line in your script. Also, if your script reports an error on this line, it is a good indication that you do not have GPMC installed on the computer from which you are running the script.

`Set oGpConst = oGpm.GetConstants()`

The GPMC object model has an object that contains constants. Constants are nothing more than keywords that typically describe an option that you can use when calling one or more functions. You'll see in line 3 and line 7 where we use the constant object to choose the RSOP mode and the format of the output file.

`Set oRSOP = oGpm.GetRSOP(oGpConst.RSOPModeLogging, "", 0)`

The RSOP WMI provider makes Group Policy results possible. Each client-side extension records its policy-specific information using RSOP as it applies policy. GPMC and GPRESULT then query RSOP and present the recorded data as the results of Group Policy processing. RSOP has two processing modes: logging mode and planning mode. Planning mode allows you to model "what if" scenarios with Group Policy and is commonly surfaced in the Group Policy Modeling node in GPMC. Logging mode reports the captured results from the last application of Group Policy processing. You can see the first parameter passed to GetRSOP is the constant RSOPModeLogging. This constant directs the GetRSOP method to retrieve logging data and not planning data, which is stored in a different section within RSOP. The remaining parameters are the default values for the GetRSOP method. This function returns an RSOP object, from which we can save RSOP data to a file.

`strpath = Left(Wscript.ScriptFullName, InStrRev(Wscript.ScriptFullName, "\", -1, vbTextCompare))`

This line simply gets the name of the folder from where the script is running and saves it into the variable strpath. This variable is used in line 7, when we save the report to the file system.

`oRSOP.LoggingFlags = 0`

LoggingFlags is a property of the RSOP object. Typically, you use this property to exclude user or computer data from the reporting results. Most of the time, and for this example, you want to set LoggingFlags equal to zero (0). This would be a perfect opportunity to use a constant (created in line 2); however, some values are not included in the constant object, and LoggingFlags happens to be one of them. If you want to exclude computer results from the report data, set LoggingFlags equal to 4096. If you want to exclude user results from the report data, set LoggingFlags equal to 8192.

`oRSOP.CreateQueryResults()`

The CreateQueryResults method actually copies the RSOP data logged from the last processing of Group Policy into a temporary RSOP WMI namespace. This makes the data available for us to save as a report.

`Set oResult = oRSOP.GenerateReportToFile(oGpConst.ReportHTML, strpath & "rsop.html")`

The script retrieved RSOP information in line six. In this line, we save the retrieved RSOP information into a file. The first parameter of the GenerateReportToFile method is a value that represents the report format used by the method. This value is available from the constant object as ReportHTML. The second parameter is the path and file name of the file to which the method saves the data: rsop.html. Later, I'll show you how you can change this line to save the report as XML. Remember, the script creates the RSOP.HTML file in the same folder from where you started the script.

`oRSOP.ReleaseQueryResults()`

The ReleaseQueryResults method clears the temporary RSOP namespace that was populated by the CreateQueryResults method. Group Policy stores actual RSOP data in a different WMI namespace; CreateQueryResults copies this data into a temporary namespace. This is done to prevent a user from reading RSOP data while Group Policy is refreshing the data. You should always call the ReleaseQueryResults method when you are done using the RSOP data. The remainder of the script is self-explanatory.
I mentioned earlier that you could also save the same data in XML as opposed to HTML. This is a simple modification to line seven:

`Set oResult = oRSOP.GenerateReportToFile(oGpConst.ReportXML, strpath & "rsop.xml")`

Saving the report in XML is easy: change the first argument to use the ReportXML constant, and change the file name (most importantly, the file extension) to reflect the proper file format type.
Group Policy Resultant Set of Policy (RSOP) data is critical information when you believe you are experiencing a Group Policy problem. Text formats provide most of the information you need, but at the expense of manually parsing through the data. HTML formats have the same portability as text formats and give you a better experience for navigating directly to the information you are looking for; they also look much better than text, so they are good for reports and presentations. Lastly, the XML format is excellent for finding things programmatically. You can also store this same information in a SQL database (for multiple clients) and run custom SQL queries to analyze Group Policy processing across multiple clients.
- Mike Stephens
For LoggingFlags the values didn't work for me, i.e.:
"If you want to exclude computer results from the report data, then set LoggingFlags equal to 4096. If you want to exclude user results from the report data, then set LoggingFlags equal to 8192."
The values should be:
const long RSOP_NO_COMPUTER = 0x10000;
const long RSOP_NO_USER = 0x20000;
i.e.:
RSOP_NO_COMPUTER = 65536 <- tried this, works fine
RSOP_NO_USER = 131072 <- haven't tried this
I should have said I was trying this on XP - apologies if there is a difference.
| http://blogs.technet.com/b/askds/archive/2007/12/04/an-old-new-way-to-get-group-policy-results.aspx | crawl-003 | en | refinedweb |
public class FocusCellOwnerDrawHighlighter extends FocusCellHighlighter
A FocusCellHighlighter implemented by setting the control into owner-draw mode and highlighting the currently selected cell. To use this class, you should create the control with the SWT.FULL_SELECTION bit set. This class can be subclassed to configure the coloring of the selected cell.
focusCellChanged, getFocusCell, init
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public FocusCellOwnerDrawHighlighter(ColumnViewer viewer)
Parameters: viewer - the viewer
See Also: TreeViewerFocusCellManager

protected Color getSelectedCellBackgroundColor(ViewerCell cell)
Parameters: cell - the cell which is colored
Returns: the background color, or null to use the default

protected Color getSelectedCellForegroundColor(ViewerCell cell)
Parameters: cell - the cell which is colored
Returns: the foreground color, or null to use the default

protected Color getSelectedCellForegroundColorNoFocus(ViewerCell cell)
Parameters: cell - the cell which is colored
Returns: the foreground color, or null to use the same color used when the control has focus

protected Color getSelectedCellBackgroundColorNoFocus(ViewerCell cell)
Parameters: cell - the cell which is colored
Returns: the background color, or null to use the same color used when the control has focus

protected boolean onlyTextHighlighting(ViewerCell cell)
Parameters: cell - the cell which is highlighted
Returns: true if only the text area should be highlighted
protected void focusCellChanged(ViewerCell newCell, ViewerCell oldCell)

Description copied from class FocusCellHighlighter: the default implementation for this method calls focusCellChanged(ViewerCell). Subclasses should override this method rather than FocusCellHighlighter.focusCellChanged(ViewerCell).

Overrides: focusCellChanged in class FocusCellHighlighter
Parameters:
- newCell - the new focus cell, or null if no new cell receives the focus
- oldCell - the old focus cell, or null if no cell has been focused before
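As the class description notes, subclasses can change the coloring. A minimal sketch (the color choices are illustrative):

```java
import org.eclipse.jface.viewers.ColumnViewer;
import org.eclipse.jface.viewers.FocusCellOwnerDrawHighlighter;
import org.eclipse.jface.viewers.ViewerCell;
import org.eclipse.swt.SWT;
import org.eclipse.swt.graphics.Color;
import org.eclipse.swt.widgets.Display;

public class YellowFocusHighlighter extends FocusCellOwnerDrawHighlighter {

    public YellowFocusHighlighter(ColumnViewer viewer) {
        super(viewer);
    }

    @Override
    protected Color getSelectedCellBackgroundColor(ViewerCell cell) {
        // highlight the focused cell with a system color instead of the default
        return Display.getCurrent().getSystemColor(SWT.COLOR_YELLOW);
    }

    @Override
    protected boolean onlyTextHighlighting(ViewerCell cell) {
        // restrict the highlight to the text area of the cell
        return true;
    }
}
```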
Copyright (c) 2000, 2013 Eclipse Contributors and others. All rights reserved.Guidelines for using Eclipse APIs.
| https://help.eclipse.org/kepler/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/jface/viewers/FocusCellOwnerDrawHighlighter.html | CC-MAIN-2019-43 | en | refinedweb |
When building high-performance software, you need to make sure you have two things to start you off: a solid architecture and streamlined code. When your code runs efficiently, you are not only able to reduce resource consumption and completion time, but also to effectively evaluate the quality of your software. Here, I will share some ways that you can do this.
In the last article, I discussed how building streamlined software is a lot like building race cars. When we look at a fast car, it might be the sleek design and chassis that initially grabs our attention, but, as any enthusiast knows, you need to open the bonnet to see what really makes it tick. The engine is, after all, the true power behind any car. It’s a complex combination of different components, where even the smallest misplaced bolt can have a massive impact on the overall performance of the vehicle.
In a software system, the engine is represented by the code. Each little section comes together to make it operate and, if one piece of code is poorly optimised, often the whole system will feel slow as a result.
What optimising code means and why it’s important
Code optimisation is the process of trying to find the fastest way in which an operation or problem can be resolved by computation time.
It does not look for the shortest, easiest or even the simplest solution, but the one that allows for execution in the fastest amount of time, using the least amount of memory.
The optimisation of code can be quite an exhaustive discussion, but to at least get the juices flowing, I would like to suggest a few things that you can look at to help your team identify what could provide the biggest performance gains. These include:
- Using the most efficient algorithms
- Optimising your memory usage
- Using functions wisely
- Optimising arrays
- Using powers of two for multidimensional arrays
- Optimising loops
- Working on data structure optimisation
- Identifying the most appropriate search type
- Being careful with the use of operators
- Using tables versus recalculating
- Carefully considering data types
Disclaimer:
Much like car parts are not all interchangeable with each other, some of the solutions that I suggest may not work as effectively with your software systems. Different programming languages and compilers work differently in how they convert and optimise functions. So, I would suggest, like with everything, you do some research, monitor the results and stick with what works best for you.
Use the most efficient algorithm
Speed, simply put, is about CPU processing. Ideally, what you want to do when optimising any algorithm is figure out which decision tree or branch logic will require the least number of options to work through – or, more specifically, the least CPU time.
How to do this is quite complex as there are so many different ways to apply algorithms, depending on the solution you are trying to reach. This article explains how to think about algorithms quite well.
I’ve found that the easiest way to evaluate and measure this, is to break up each of the processing aspects of your code, apply the following code (I’ve used C++ here, but you can change this for whatever language you prefer), and measure which one performs the fastest:
LARGE_INTEGER BeginTime;
LARGE_INTEGER EndTime;
LARGE_INTEGER Freq;
QueryPerformanceCounter(&BeginTime);
// your code
QueryPerformanceCounter(&EndTime);
QueryPerformanceFrequency(&Freq);
printf("%2.3f\n", (float)((EndTime.QuadPart - BeginTime.QuadPart) * 1000000) / Freq.QuadPart);
The above will effectively output the execution speed of a given line of code, which will help you to measure it effectively. It will also provide you with a platform to experiment and find the best combination of algorithms for your code.
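If you prefer to sketch the same harness in a higher-level language, a rough Python equivalent (the measured function below is a stand-in for your own code, not from the original article) might be:

import time

def measure(fn, repeats=1000):
    # Repeat the operation so that very short running times rise above timer resolution.
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# Compare candidate implementations of the same operation side by side.
print("%.9f seconds per call" % measure(lambda: sum(range(100))))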
Optimise your code for memory
It is important to know how your compiler manages its memory and programs. Knowing this can prevent your code from utilising too much memory and, thereby, potentially slowing down other aspects of computer processing.
This is especially important for graphically heavy applications, like video games. In these cases, processors are required to work with complex algorithms to generate the CGI images and, how you utilise your memory will make a massive difference in the overall performance of the final product.
Tip: You can use monitoring tools (like Zabbix) to help achieve this.
Use functions wisely
Functions, or shared code that can be called multiple times, are utilised to make code more modular, maintainable and scalable. However, if you are not careful when using them, functions can create performance bottlenecks, especially when used recursively (a function repeatedly calling itself).
While functions certainly make coding shorter, repeatedly calling a function to do a similar thing ten times is unnecessary expenditure on your memory and CPU utilisation.
Tip: To do this better, you should implement the repetition directly in your function as this will require less processing. I’ve set up a few examples to show this later in the article.
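As a quick illustrative sketch of that tip (in Python, and not taken from the article's own examples), compare calling a function once per element with a single call that loops internally:

def scale_one(value, factor):
    return value * factor

def scale_all(values, factor):
    # The loop lives inside the function, so the call overhead is paid once.
    return [v * factor for v in values]

data = list(range(10))
assert [scale_one(v, 3) for v in data] == scale_all(data, 3)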
Inline functions
Another thing to consider is how you use inline functions. While these are often used to ease some of the processing restrictions on your CPU, there are other ways of reducing strain on your processor. For instance, for smaller functions, you can make use of macros instead, as these allow you to benefit from speed, better organisation and reusability.
Additionally, when passing a big object to a function, you could use pointers or references, as these provide better memory management. I personally prefer to use references because they create code that is way easier to read. They are also useful things to use when you are not worried about changing the value that is passed to the function. If you use an object that is constant, it could be useful to use
const, which will save some time.
Optimise arrays
The array is one of the most basic data structures that occupies memory space for its elements.
An array's name is a constant pointer that points at the first element of the array. This means that you can traverse the array using pointers and pointer arithmetic instead of repeated indexing.
In the example below, we have a pointer to the
int data type that takes the address from the name of the array. In this case, it is
nArray, and we increase that address for one element. The pointer is moved toward the end of the array for the size of the
int data type.
for(int i=0; i<n; i++) nArray[i]=nSomeValue;
Instead of the above code, the following is better:
for(int* ptrInt = nArray; ptrInt< nArray+n; ptrInt++) *ptrInt=nSomeValue;
If you have used double, your compiler will know how far it should move the address.
It may be harder to read code this way, but it will increase the speed of your program. The algorithm itself is no more efficient; the pointer form simply avoids recomputing the indexed address on every pass, which is why the code runs faster.
Using matrices
If you use a matrix, and you have the chance to approach the elements of the matrix row by row, always choose this option as this is the most natural way to approach the array members.
Tip: Avoid initialisation of large portions of memory with some elements. If you can’t avoid this type of situation, consider
memset or similar commands.
Use powers of two for multidimensional arrays
A multidimensional array is used to store data that can be referenced across two or three different sets of axes. It makes storing and referencing data easier. If we can perform faster indexing, we can save a lot of time in our code, especially when working with large amounts of data.
The advantage of using powers of two for all but the leftmost array size comes when accessing the array. Ordinarily, the compiled code would have to compute a ‘multiply’ to get the address of an indexed element from a multidimensional array, but most compilers will replace a constant multiply with a shift if it can. Shifts are ordinarily much faster than multiplies.
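The strength reduction the compiler performs can be made visible by hand. Here is a small Python sketch of the idea (illustrative only, since a C or C++ compiler applies this automatically to compiled code):

WIDTH = 8  # row width, a power of two (2**3)

def index_multiply(row, col):
    return row * WIDTH + col

def index_shift(row, col):
    # (row << 3) computes row * 8 with a shift, which is what a compiler
    # emits when the row width is a constant power of two.
    return (row << 3) + col

assert all(index_multiply(r, c) == index_shift(r, c)
           for r in range(16) for c in range(WIDTH))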
Optimise loops
We utilise loops, or repeated sequences, to sort through or iterate on data to perform actions when required. While this sort of repetition is extremely helpful in specific scenarios, most of the time it generates slow-performing code.
Tip: If possible, try to reduce your reliance on loops. You should only really use them if they are needed multiple times, and contain multiple operations within them. Otherwise, if you need to iteratively sort through something, use another type of sorting algorithm to reduce your processing time.
Work on data structure optimisation
Not all data is equal and so, we need to structure data appropriately for our intended solution. Like with most things, data has a big impact on code performance, and so, the way you structure the data you need in your code will play a big part in enhancing its speed.
Tip: The right structure depends on the operations you perform most often. Keeping your data in a linked list can let a program outperform one built on an array when you insert and remove elements frequently. Additionally, if you save your data in some form of tree, you can create a program that performs faster than one without an adequate data structure. A concrete sketch follows this tip.
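As one concrete sketch of this principle (Python, illustrative only, not from the original article): pick the structure that matches the dominant operation, here membership testing, where a hash-based set beats a linear scan of a list:

haystack_list = list(range(100000))
haystack_set = set(haystack_list)  # one-off conversion cost, O(1) average lookups after

needle = 99999
# The list lookup scans every element in the worst case; the set lookup hashes once.
assert (needle in haystack_list) == (needle in haystack_set)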
Identify the most appropriate search type: Binary Search or Sequential Search
One of the most common tasks you perform when programming is searching for some value in a data structure. However, you can't just apply the same searching principles to every data structure. Rather, you should spend some time identifying the most appropriate approach for what you require.
For example, if you are trying to find one number in an array of numbers you could have two strategies:
- Sequential Search: The first strategy is very simple. You have your array and value you are looking for. From the beginning of the array, you start to look for the value and, if you find it, you stop the search. If you don’t find the value, you will be at the end of the array. There are many improvements to this strategy.
- Binary Search: The second strategy requires the array to be sorted; if it is not sorted, you will not get the results you're looking for. If the array is sorted, you split it into two halves around the middle element: in the first half, the elements are smaller than the middle one, and in the other half, they are bigger. You compare the middle element with the value you want and repeat the process on the half that could contain it. If the search range shrinks until its markers cross, you know that the array does not contain the value you have been looking for.
Sorting through the elements of an array will cost you some time, but if you’re willing to do that, you’ll benefit from faster binary search.
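A compact sketch of both strategies (Python used for illustration):

import bisect

data = sorted([3, 9, 14, 27, 31, 42, 58])

def sequential_search(xs, value):
    # Walks the array from the start; O(n) comparisons in the worst case.
    for i, x in enumerate(xs):
        if x == value:
            return i
    return -1

def binary_search(xs, value):
    # Requires xs to be sorted; halves the search range on every step, O(log n).
    i = bisect.bisect_left(xs, value)
    return i if i < len(xs) and xs[i] == value else -1

assert sequential_search(data, 31) == binary_search(data, 31)
assert binary_search(data, 5) == -1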
Be careful with the use of operators
Most basic compound operations, like +=, -= and *=, when applied to basic data types, can slow down your program because they may place unnecessary computation on your processor. To be sure that things aren't getting slowed down, you will need to know how they get transformed into assembler on your computer.
Tip: An interesting way to do this is to replace the postfix increment and decrement with their prefix versions.
Sometimes you can use the operators >> or << instead of multiplication or division, but be careful, because you could end up with mistakes. When you attempt to fix these, you could inadvertently add some range estimations, making the code you started with way slower.
Bit operators and the tricks that go with them could increase the speed of the program, but you should be very careful because you could end up with machine dependent code, which you want to avoid.
Use tables versus recalculating
Often in coding, we need to perform some sort of complex calculation. When dealing with calculations, we can either perform the calculation directly in the code or make use of a table to reference and save on the processing time.
Tables are often easier to work with and the simplest solution to code, but they don’t always scale well.
Remember that in recalculating, you have the potential to use parallelism, and incremental calculation with the right formulations. Tables that are too large will not fit in your cache and, hence, may be slow to access and cannot be optimised further. Much like we mentioned above when discussing data structures, tables should be used with caution.
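As a small sketch of the table approach (Python, illustrative only): precompute a degree-granularity sine table once and look values up instead of recalculating:

import math

# Precompute once: one entry per whole degree.
SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]

def sin_degrees(d):
    # A lookup replaces the transcendental call; it is only valid at whole
    # degrees, and the 360-entry table must fit in cache to pay off.
    return SIN_TABLE[d % 360]

assert abs(sin_degrees(30) - 0.5) < 1e-12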
Carefully consider your data types
When we assign data to variables in code, we often allocate a size to it. This is the amount of memory space the computer makes available when working with this variable. The larger the program, the more data it may use, so you want to try to use as little memory as possible.
On modern 32 and 64-bit platforms, small data types like chars and shorts actually incur extra overhead when converting to and from the default machine word-size data type.
Tip: Be specific about the data that you are using. Utilise chars for small counters, shorts for slightly larger counters and only use longs or ints when you really have to.
On the other hand, one must be wary of cache usage. Using packed data (and in this vein, small structure fields) for large data objects may pay larger dividends in global cache coherence, than local algorithmic optimisation issues.
A Caveat
While finding the most optimal coding solution is ideal, it doesn’t always mean it’s the best way to go about solving problems. Some of the below points are also worth considering:
- Optimising your code for performance using all possible techniques might generate a bigger file with bigger memory footprint.
- You might have two different optimisation goals that conflict with each other. For example, to optimise the code for performance might conflict with optimising the code for less memory footprint and size. You have to find a balance.
- Performance optimisation is a never-ending process; your code might never be fully optimised. There is always more room for improvement to make your code run faster.
- Sometimes you can use certain programming tricks to make code run faster at the expense of not following best practices. Try to avoid implementing cheap tricks, though as this will not pay off long term.
An optimisation test:
To test how well you've understood the different optimisation techniques discussed above, here is a coding solution for you to have a look at; try to identify what can be optimised:
Example code (in C++):
#include <iostream>
#define LEFT_MARGIN_FOR_X -100.0
#define RIGHT_MARGIN_FOR_X 100.0
#define LEFT_MARGIN_FOR_Y -100.0
#define RIGHT_MARGIN...
...MARGIN_FOR_X*LEFT_MARGIN_FOR_X+LEFT_MARGIN_FOR_Y*LEFT_MARGIN_FOR_Y)/ (LEFT_MARGIN_FOR_Y*LEFT_MARGIN_FOR_Y+dB);
double dMaximumX = LEFT_MARGIN_FOR_X;
double dMaximumY = LEFT_MARGIN_FOR_Y;
for(double dX=LEFT_MARGIN_FOR_X; dX<=RIGHT_MARGIN_FOR_X; dX+=1.0)
    for(double dY=LEFT_MARGIN_FOR_Y; dY<=RIGHT_MARGIN...
}
Optimising your code is key in ensuring that your engine is running efficiently. However, to ensure that your system stays that way, you're going to need to test it continuously. We'll cover this in the final installment of this series.
|
https://www.offerzen.com/blog/sedan-to-supercar-part-2-code-optimisation
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
Hi -
First, thanks to everyone working on Quantopian and Zipline, I think they have great potential and cannot believe how far they have come. Despite the inclusion of advanced features like pipeline and fundamentals, I still have some concerns about the very basic underlying trading simulation.
I think the attached backtest demonstrates an issue with inaccurate simulation of limit orders.
In the backtest there are only two days with trades:
- on one day we buy by putting in a (marketable) limit order, with a limit far above the market. As this is marketable when we insert it, it should fill just as a market order would, and it does.
- on every subsequent day, at the start of the day we insert a limit order to sell at a fixed price of 111. When the market eventually reaches this level, our order should fill at 111 (since we have inserted a passive limit order to sell at this level). However, all the fills come in at a level better than our limit (as in the first case). But this case is different: when the market "goes through" the level of a limit order that is already resting in the market, in real trading we still only get the limit price.
I think this is important to address: it improves the reported backtest results of any strategy using passive orders, which makes strategies look more viable for real trading than may be the case.
To be clear: this has nothing at all to do with commissions and slippage. The above issue is purely with the price of market fills, before any commission or slippage is taken into account.
I hope the example and explanation is clear. The key point is that non-marketable limit orders "resting in the market" should get filled at the limit price, or not at all. The current simulation seems to treat them as if they were re-inserted at each bar, gaining the benefit of favourable price moves and biasing the trading results upwards. This gives incorrect simulation results, compared to real trading with the same orders.
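To make the expected behaviour concrete, here is a minimal sketch (plain Python, not Quantopian/zipline code) of how a resting sell limit order should be filled against a price bar:

def fill_resting_sell_limit(limit_price, bar_high):
    # A passive sell limit resting in the market fills at its limit price once
    # the market trades up through that level; it does not improve merely
    # because the bar's high exceeded the limit.
    if bar_high >= limit_price:
        return limit_price
    return None

assert fill_resting_sell_limit(111.0, bar_high=112.5) == 111.0
assert fill_resting_sell_limit(111.0, bar_high=110.0) is None

The original backtest algorithm follows: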
def initialize(context):
    context.s = sid(22739)
    set_commission(commission.PerShare(cost=0, min_trade_cost=None))
    #set_slippage(slippage.FixedSlippage(spread=0))
    schedule_function(eod, date_rules.every_day(), time_rules.market_close(minutes=15))
    schedule_function(sod, date_rules.every_day(), time_rules.market_open(minutes=15))

def before_trading_start(context, data):
    pass

def sod(context, data):
    s = context.s
    log.info("SOD pos: %i" % context.portfolio.positions[s].amount)
    if context.portfolio.positions[s].amount <= 0:
        l1_id = order_target_percent(s, 1, style=LimitOrder(110.50))
        l1 = get_order(l1_id)
        log.info("%s: SOD placing buy order for %i of %s at %f" % (get_datetime(), l1.amount, l1.sid, l1.limit))
    elif context.portfolio.positions[s].amount > 0:
        l2_id = order_target_percent(s, 0, style=LimitOrder(111.00))
        l2 = get_order(l2_id)
        log.info("%s: SOD placing sell order for %i of %s at %f" % (get_datetime(), l2.amount, l2.sid, l2.limit))

def eod(context, data):
    all_open_orders = get_open_orders()
    if all_open_orders:
        for security, oo_for_sid in all_open_orders.iteritems():
            for order_obj in oo_for_sid:
                log.info("%s: EOD cancelling order for %s of %s created on %s (status: %s)" % (get_datetime(), order_obj.amount, security.symbol, order_obj.created, order_obj.status))
                cancel_order(order_obj)
    record(PnL=context.portfolio.pnl, Outlay=context.portfolio.positions_value, Leverage=context.account.leverage)

def handle_data(context, data):
    """Called every minute."""
    pass
|
https://www.quantopian.com/posts/simulation-of-non-marketable-limit-orders
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
BaseDraw problems
On 28/04/2016 at 03:44, xxxxxxxx wrote:
Hello people!
I have some problems with the BaseDraw, one with DrawTexture() and one with SetMatrix_Screen().
- Using SetMatrix_Screen() causes other objects' visuals to be corrupted
- I get a strange "cut" on the right side of the image drawn with DrawTexture()
- I can't get the image to be drawn on top of the Cube
This image shows all problems:
You can download a full example to reproduce here:
The relevant code is:
def Draw(self, op, drawpass, bd, bh):
    if drawpass != c4d.DRAWPASS_OBJECT:
        return c4d.DRAWRESULT_SKIP
    if not self.PluginIcon:
        return c4d.DRAWRESULT_SKIP

    wpsize = 48
    # Draw the object icon on the screen.
    pos = bd.WS(op.GetMg().off)
    bmp = self.PluginIcon
    padr = get_draw_screen_padr(pos, wpsize, wpsize, ALIGN_CENTERH | ALIGN_BOTTOM)
    uvadr = get_draw_screen_uvcoords(bmp, 0, 0, bmp.GetBw(), bmp.GetBh())
    cadr = [c4d.Vector(1.0)] * 4
    vnadr = [c4d.Vector(0.0, 0.0, 1.0)] * 4
    mode = c4d.DRAW_ALPHA_NORMAL
    flags = c4d.DRAW_TEXTUREFLAGS_0
    bd.SetMatrix_Screen(4)
    bd.DrawTexture(self.PluginIcon, padr, cadr, vnadr, uvadr, 4, mode, flags)
    return c4d.DRAWRESULT_OK
I have no idea how to solve either of those issues. Looking forward to your input!
Thanks in advance,
Niklas
On 28/04/2016 at 04:25, xxxxxxxx wrote:
Update: The first issue about the "cut" is resolved. I used the wrong order for the uvadr and padr.
Now interestingly, this also seems to fix the strange looks of the Cube for some reason!
So the first and second issue are resolved by this :) You can find the corrected versions of the
functions below.
The third question remains though: How can I get the texture to be drawn over the
Cube (or above everything)? Currently, it gives a very strange effect.
------
ALIGN_LEFT = (1 << 0)
ALIGN_RIGHT = (1 << 1)
ALIGN_CENTERH = (1 << 2)
ALIGN_TOP = (1 << 3)
ALIGN_BOTTOM = (1 << 4)
ALIGN_CENTERV = (1 << 5)

def get_draw_screen_uvcoords(bmp, x, y, w, h):
    bmpw, bmph = map(float, bmp.GetSize())
    corners = [x / bmpw, y / bmph, (x + w) / bmpw, (y + h) / bmph]
    return [
        c4d.Vector(corners[0], corners[1], 0.0),
        c4d.Vector(corners[2], corners[1], 0.0),
        c4d.Vector(corners[2], corners[3], 0.0),
        c4d.Vector(corners[0], corners[3], 0.0)]

def get_draw_screen_padr(pos, w, h, align):
    if align & ALIGN_LEFT:
        xoff = 0
    elif align & ALIGN_RIGHT:
        xoff = w
    elif align & ALIGN_CENTERH or True:
        xoff = w / 2.0
    if align & ALIGN_TOP:
        yoff = 0
    elif align & ALIGN_BOTTOM:
        yoff = h
    elif align & ALIGN_CENTERV or True:
        yoff = h / 2.0
    x, y = (pos.x - xoff, pos.y - yoff)
    return [
        c4d.Vector(x, y, 0.0),
        c4d.Vector(x + w, y, 0.0),
        c4d.Vector(x + w, y + h, 0.0),
        c4d.Vector(x, y + h, 0.0)]
On 29/04/2016 at 03:23, xxxxxxxx wrote:
Hi Niklas,
I'm afraid you'll have to wait till next week to get an answer for your problem with BaseDraw.DrawTexture().
On 02/05/2016 at 03:01, xxxxxxxx wrote:
Hello,
the best way to draw elements in 2D over other elements is probably to draw them in a SceneHook. See 2D viewport drawing using a SceneHook.
Beside that I don't see any way to handle this situation.
Best wishes,
Sebastian
On 16/05/2016 at 22:09, xxxxxxxx wrote:
Hi Sebastian,
thanks for your reply! Good idea, I also think that would work.
I'm in a Python plugin so a SceneHook is not an option. No longer
so important now anyway :)
Cheers, Niklas
|
https://plugincafe.maxon.net/topic/9469/12697_basedraw-problems/5
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
ZurdoDev wrote:As someone else said, it's probably from a different dataset.
Mark_Wallace wrote:I upset them when I informed them that winds come from a compass bearing, not go to it, so their vanes were out by 180 degrees.
Startup
public CatalogContext(DbContextOptions options) : base(options)
{
}
public CatalogContext(DbContextOptions<CatalogContext> options) : base(options)
{
}
services.AddDbContext...
services.GetDbContext...
db.YourContextObject.FromSql
SqlQuery
FromSql
Quote:throw a bike off a cliff
F-ES Sitecore wrote:If your solution uses DI and you create ambiguous constructors then the only possibly outcome is failure.
F-ES Sitecore wrote:As for getting services you've added to the service collection you use IServiceProvider to do that
F-ES Sitecore wrote:As for EF and direct\raw SQL etc, EF is an ORM, if need to use raw sql, stored procs etc then it is the wrong tool for the job.
Marc Clifton wrote:you go for the most specific instantiator you can with the information you have in the service
Marc Clifton wrote: So you have one thing for adding services, but another thing for getting them?
Marc Clifton wrote:why can't the tool do both gracefully?
F-ES Sitecore wrote:so how does the DI know which is "more specific"?
Foo<Bar>
Foo
Quote:Yes, it's called the single responsibility principal. By separating the two if you want to use your own resolver, you can.
F-ES Sitecore wrote:EF can use SPs and raw SQL, but you're not going to have as much flexibility or control, so if you want to do anything advanced you're probably going to have a bad time.
Marc Clifton wrote:Well, one is of the form Foo<Bar> and the other is just Foo. That seems sufficient to distinguish which to use.
public class CatalogContext : DbContext
{
public CatalogContext(DbContextOptions<CatalogContext> options) : base(options)
{
}
}
public CatalogContext(DbContextOptions opts2, DbContextOptions<CatalogContext> options) : base(opts2 ?? options)
var dbBuilder = new DbContextOptionsBuilder<CatalogContext>();
var dbBuilder = new DbContextOptionsBuilder();
|
https://www.codeproject.com/Lounge.aspx?msg=5655555
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
I am trying to build a replica log and to get the list of feature classes in a replica.
I can get the list of replicas, but I can't access their properties and see the list of feature classes.
Here is the code I am trying to execute
import arcpy
import os,sys
sdeConnection = r"Database Connections/abc@GISEDIT.sde"
# Print the name of the replicas
file1=open('C:\\TEMP\\log.txt', 'w+')
for replica in arcpy.da.ListReplicas(sdeConnection):
    file1.write(replica.name + "\n")
I can get the list of replicas but can't get the properties > feature classes in those replicas. Any help?
Hi Anjitha,
I was able to do this using the pyodbc module. Ex:
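(The original example code was not preserved in this post; the following is a reconstruction of the kind of pyodbc query described. The connection string, the join condition, and the 'Replica Dataset' type name are assumptions, so verify them against your own geodatabase.)

import pyodbc

# Placeholder connection details for the geodatabase's SQL Server instance.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=GISEDIT;Trusted_Connection=yes")
cursor = conn.cursor()

# GDB_ITEMS stores geodatabase items; GDB_ITEMTYPES maps item type IDs to names.
sql = """
SELECT items.Name
FROM dbo.GDB_ITEMS AS items
JOIN dbo.GDB_ITEMTYPES AS itemtypes
    ON items.Type = itemtypes.UUID
WHERE itemtypes.Name = 'Replica Dataset'
"""

for row in cursor.execute(sql):
    print(row.Name)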
If your repository is owned by SDE, you will need to change dbo.GDB_ITEMS and dbo.GDB_ITEMTYPES to sde.GDB_ITEMS and sde.GDB_ITEMTYPES in the query (line 6).
|
https://community.esri.com/thread/170572-arcpy-replica-how-to-get-the-list-of-feature-classes-from-replica-name
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
Here, we are going to develop a Camel application that integrates e-mails, filesystem operations, and web services as means of communication.
As we have our project set up, we will go ahead and add a few dependencies. First, we will add
slf4j-simple to the project dependencies so we can see what's going on in the console.
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.2</version>
</dependency>
Then, we will add the following code to the file
src/main/java/com/company/cuscom/App.java:
package com.company.cuscom;

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class App ...
|
https://www.safaribooksonline.com/library/view/instant-xenmobile-mdm/9781782165347/ch01s03.html
|
CC-MAIN-2018-30
|
en
|
refinedweb
|
It is possible to define different groups of optional dependencies for a Python package. This is useful if you want to include an extra set of dependencies for developers or maintainers of the package. We can also define a plugin-based package, similar to how OpenAI Gym uses extras to denote different categories of environments you can set up.
from setuptools import setup, find_packages

extras = {
    'typing': ['mypy~=0.740', 'mypy-extensions~=0.4.0', 'pylint~=2.4.4'],
    'testingdocs': ['tox~=3.14.6', 'Sphinx~=3.0.1'],
}

# Meta dependency groups.
extras['all'] = [item for group in extras.values() for item in group]

setup(name='example',
      version='0.0.1',
      packages=find_packages(),
      install_requires=['Flask~=1.1.1'],
      extras_require=extras,
)
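Users can then opt in to a group at install time; for example, with the package name from the snippet above:

pip install "example[typing]"
pip install "example[all]"  # the meta group pulls in every extra at once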
|
https://brandonrozek.com/blog/pyextradeps/
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
Summary
-------

Revamp the `@Deprecated` annotation, and provide tools to strengthen the API life cycle.

Goals
-----

* Provide better information about the status and intended disposition of APIs in the specification.
* Provide a tool to analyze an application's static usage of deprecated APIs.

Non-Goals
---------

It is not a goal of this project to unify the `@deprecated` Javadoc tag with the `@Deprecated` annotation.

Motivation
----------

Deprecation is a technique to communicate information about the life cycle of an API: to encourage applications to migrate away from the API, to discourage applications from forming new dependencies on the API, and to inform developers of the risks of continuing dependence upon the API.

Java offers two mechanisms to express deprecation: the `@deprecated` Javadoc tag, introduced in JDK 1.1, and the `@Deprecated` annotation, introduced in Java SE 5. The API specification for the `@Deprecated` annotation, mirrored in The Java Language Specification, is:

> A program element annotated `@Deprecated` is one that programmers are discouraged from using, typically because it is dangerous, or because a better alternative exists. Compilers warn when a deprecated program element is used or overridden in non-deprecated code.

However, the `@Deprecated` annotation ended up being used for several different purposes. Very few deprecated APIs were actually removed, leading some people to believe that nothing would ever be removed. On the other hand, other people believed that everything that was deprecated might eventually be removed, which was never the intent either. (Although it wasn't stated explicitly in the specifications, various documents mentioned that deprecated APIs would be removed at some point.) This resulted in an unclear message being delivered to developers about the meaning of `@Deprecated`, and what, if anything, developers should do when they encountered usage of a deprecated API. Everybody was confused about what deprecation actually meant, and nobody took it seriously. This in turn has made it difficult ever to remove anything from the Java SE API.

Another problem with deprecation is that warnings are issued only at compile time. As APIs become deprecated in successive versions of Java SE, existing binaries continue to depend on and use the deprecated APIs with no warnings. If a deprecated API were to be removed in a JDK release, even after one or more releases where it was deprecated, this would come as an unpleasant surprise to users of old application binaries. The application would suddenly fail with a linkage error, with no warnings having ever been emitted. Worse, there is no means for developers to check whether existing binaries have any dependencies on deprecated APIs. This causes significant tension between the ability to run old binaries on new JDK releases versus the need to evolve the specification through the retirement of old APIs.

In summary, the deprecation mechanisms have been applied inconsistently in the Java SE API, resulting in confusion about the meaning of deprecation in principle and the proper use of deprecation in practice.

Description
-----------

### Specifications

The primary purpose of enhancing the `@Deprecated` annotation is to provide finer-grained information to tools about the deprecation status of an API. These tools in turn use the annotation to report information to users of the API. The `@Deprecated` annotation has runtime retention and therefore consumes heap memory. The information here should therefore be minimal and well-specified. The following elements are to be added to the `java.lang.Deprecated` annotation type:

* `forRemoval()` returning `boolean`. A value of `true` indicates intent to remove the annotated API element in a future version; a value of `false` indicates that use of the API element is discouraged, but that there was no intent to remove it at the time it was annotated. The default value of this element is `false`.
* `since()` returning `String`. This string contains the release or version number at which the annotated API became deprecated. Note that this value is *not* redundant with the Javadoc `@since` tag, because that records the release in which the API was introduced, whereas the `since()` method in a `@Deprecated` annotation records the release in which the API was deprecated. The default value of this element is the empty string.

Since these elements are being added to the existing `@Deprecated` annotation, annotation processing programs will see the default values for `forRemoval()` and `since()` if they are processing a class file that was compiled with a version of `@Deprecated` older than JDK 9.

The presence of the `@Deprecated` annotation on an API is communication from the author or maintainer of the API to users of the API. Most generally, deprecation is advice that users migrate their usage away from the deprecated API, that they avoid adding dependencies on this API from new code or while maintaining old code, or that there is a certain amount of risk in maintaining code that depends on this API. There are many reasons to recommend such migration. Reasons might include the following:

* the API is flawed and is impractical to fix,
* usage of the API is likely to lead to errors,
* the API has been superseded by another API,
* the API is obsolete,
* the API is experimental and is subject to incompatible changes,
* or any combination of the above.

The exact reasons for deprecating an API are often too subtle to be expressed as flags or element values in the annotation. It is strongly recommended that the reasons for deprecating an API be described in that API's documentation comments. In addition, it is also recommended that potential replacement APIs be discussed and linked from the documentation.

One specific flag value is provided, however. The `forRemoval()` boolean element, when set to `true`, indicates that the API is intended to be removed in a future release.

The `@Deprecated` annotation and the `@deprecated` javadoc tag should both be present or both be absent on an API element. The presence of one without the other is considered to be a mistake. The `javac` lint flag `-Xlint:dep-ann` will issue warnings if the `@deprecated` tag is present on an API that lacks the `@Deprecated` annotation. There is currently no warning if the reverse is true; see [JDK-8141234]().

The `@Deprecated` annotation should have no direct impact on the behavior of deprecated APIs, and there should be negligible performance impact.

### Usage in Java SE

The `@Deprecated` annotation type appears in Java SE, and thus it may be applied to the APIs of any class library that uses the Java SE platform. The exact rules and policies for how those class libraries use the `@Deprecated` annotation type are a matter for the maintainers of those libraries to determine. It is recommended that class library maintainers develop and document such policies. This section describes the uses of the `@Deprecated` annotation type on Java SE APIs themselves and also the policies governing such use.

Several Java SE APIs will have a `@Deprecated` annotation added, updated, or removed. The changes implemented in Java SE 9 are listed below. Unless otherwise specified, the deprecations listed here are not for removal. Note that this is not a comprehensive list of deprecations in Java SE 9.

* add `@Deprecated` to constructors for boxed primitives (`Boolean`, `Integer`, etc.) ([JDK-8145468]())
* add `@Deprecated(forRemoval=true)` to the `Runtime.traceInstructions` and `Runtime.traceMethodCalls` methods ([JDK-8153330]())
* add `@Deprecated` to various `java.applet` and related classes ([JEP 289]())
* add `@Deprecated` to `java.util.Observable` and `Observer` ([JDK-8154801]())
* add `@Deprecated(forRemoval=true)` to various superseded security APIs, including `java.security.acl` ([JDK-8157847]()), `javax.security.cert` and `com.sun.net.ssl` ([JDK-8157712]()), `java.security.Certificate` ([JDK-8157707]()), and `javax.security.auth.Policy` ([JDK-8157848]())
* add `@Deprecated(forRemoval=true)` to `java.lang.Compiler` ([JDK-4285505]())
* add `@Deprecated` to several Java EE modules and the `java.corba` module ([JDK-8169069](), [JDK-8181195](), [JDK-8181702](), [JDK-8174728]())
* modify already-deprecated methods `Thread.destroy()`, `Thread.stop(Throwable)`, `Thread.countStackFrames()`, `System.runFinalizersOnExit()`, and various disused `Runtime` and `SecurityManager` methods to have `@Deprecated(forRemoval=true)` ([JDK-8145468]())

Given the history of deprecation in Java SE, and the emphasis on long term API compatibility across versions, removal of an API is a matter of serious concern. Therefore, deprecation with the element `forRemoval=true` should be applied only when there is a clear and definite plan for removing that API in the next release of the Java SE platform. An API element should not be removed from the Java SE specification unless it has been delivered with an annotation of `@Deprecated(forRemoval=true)` in a previous version of Java SE. It is acceptable for a deprecation to be introduced with `forRemoval=true`. It isn't necessary to first deprecate with `forRemoval=false`, then upgrade to `forRemoval=true`, before removing the API.

For API elements deprecated in Java SE 9 and beyond, the `since` element should contain the Java SE version string denoting the version in which the API element was deprecated. The version string should conform to the format specified in [JEP 223](). Since Java SE typically makes specification changes only in major releases, the version string will often consist solely of the "MAJOR" version number. Thus, for API elements deprecated in Java SE 9, the `since` element value should simply be "9".

API elements that had been deprecated prior to Java SE 9 will have their `since` value filled in only as time permits. (Doing this for all APIs is of marginal value and is mainly an exercise in historical research.) The string used for the `since` value in such cases should conform to the JDK version conventions used for the `@since` javadoc tag for those releases, typically `1.0` through `1.8` but sometimes with a "micro" release number, such as `1.0.2`. Annotation processing tools looking for this value on Java SE APIs and finding an empty string should assume that the deprecation occurred in Java SE 8 or earlier.

Deprecating APIs will increase the number of mandatory warnings that projects encounter when building against newer versions of Java SE. Some projects, including the JDK itself, build with compiler options that enable verbose warnings and that turn warnings into errors. For such projects, adding deprecated APIs to Java SE can introduce a large number of warnings, adding significantly to the effort of migrating to a new version of Java SE. Existing mechanisms for managing warnings, such as the `@SuppressWarnings` annotation and compiler command-line options, are insufficient for dealing with this issue. This effectively places a limit on which APIs can be deprecated in a given Java SE release, and it makes deprecation of obsolete but popular APIs nearly impossible. This calls for a future effort to enhance the mechanisms available to manage deprecation warnings.

### Impact of `forRemoval` on Warning Policy

The [Java Language Specification, section 9.6.4.6]() mandates specific warning behaviors that depend upon the deprecation status of an API that is being depended upon (the "declaration site"), in combination with the deprecation status of the code that is using that API (the "use site"). The addition of the `forRemoval` element adds another set of cases that must be defined.

For the sake of brevity, we will refer to a deprecation with `forRemoval=false` as an "ordinary deprecation" and a deprecation with `forRemoval=true` as a "terminal deprecation."

In Java SE 8 and earlier, `forRemoval` did not exist, so the only kind of deprecations were ordinary deprecations. Whether a deprecation warning was issued depended upon the deprecation status of both the use site and the declaration site. Here is a table of cases that existed in Java SE 8:

    use site   |  API declaration site
    context    |  not dep.   deprecated
    -----------+-----------------------
    not dep.   |     N           W
    deprecated |     N           N (1)

    N = no warning
    W = warning

(Note 1) This is an odd case. If the use and declaration site are both deprecated, no warning is issued. This makes sense if both sites are within a single class library that is maintained and released as a unit. Since they are maintained together, there is little point in issuing a warning in this case. However, if the use site is within a class library that is maintained separately from the declaration site, they may evolve at different rates, and so not issuing a warning in this case is likely to be a misfeature. However, this mechanism was useful for reducing the number of warnings from compilation of the JDK, prior to the introduction of the `@SuppressWarnings` annotation in Java SE 5.

(JLS 9.6.4.6 also requires no warnings to be issued if the use site is within the *same outermost class* as the declaration site. In such cases the use and declaration sites are by definition maintained together, so the rationale for not issuing a warning applies well.)

In Java SE 9, the introduction of `forRemoval` adds several new cases having to do with terminal deprecation. This requires the introduction of a new kind of warning. Warnings issued at the point of use of an ordinarily deprecated API are "ordinary deprecation warnings" which are the same as in Java SE 8 and earlier. These are often simply called "deprecation warnings" as a holdover from previous usage. Warnings issued at the point of use of a terminally deprecated API might formally be called "terminal deprecation warnings" but this is rather verbose. Instead we will refer to such warnings as "removal warnings". The proposed table of cases is shown below:

    use site   |        API declaration site
    context    |  not dep.   ord. dep.   term. dep.
    -----------+-----------------------------------
    not dep.   |     N        oW (2)      rW (5)
    ord. dep.  |     N         N (3)      rW (6)
    term. dep. |     N         N (4)      rW (7)

(Note 2) "oW" refers to an "ordinary deprecation warning" which is the same kind of warning that has occurred in this case in Java SE 8 and earlier.

(Note 3) The upper left four elements are the same as in the Java SE 8 table, for reasons of backward compatibility.

(Note 4) No warning is issued here by extrapolating from compatible behavior. If both use and declaration site are ordinarily deprecated, it would be perverse if changing the use site to be terminally deprecated were to introduce a warning. Thus, no warning is issued in this case.

(Note 5) "rW" refers to a "removal warning". All warnings issued at use sites of terminally deprecated APIs are removal warnings.

(Note 6) This case is quite significant. We always want the use of a terminally deprecated API to generate a removal warning, even if the use site is within deprecated code.

(Note 7) This is similar to (6). One might think that, since the use and declaration sites are both terminally deprecated, both are "going away" and that it would be pointless to issue a warning here. But the possibility is that the declaration site is within a library that is evolving more quickly than the use site, so the use site might outlive the declaration site. Therefore, a warning about the impending removal of the declaration site is necessary.

The general rule that covers the lower right four elements is as follows. If the use site is deprecated, whether ordinarily or terminally, no ordinary deprecation warnings will be issued, but removal warnings will still be issued.

An example of an ordinary deprecation warning might be as follows:

    UseSite.java:3: warning: [deprecation] ordinary() in DeclSite has been deprecated

An example of a removal warning might be as follows:

    UseSite.java:4: warning: [removal] removal() in DeclSite has been deprecated and marked for removal

The specific wording of the warnings, and the mechanisms for customization of warnings, may differ from compiler to compiler.

### Suppression of Deprecation Warnings

In Java SE 8 and earlier, it was possible to suppress deprecation warnings by annotating the use site with `@SuppressWarnings("deprecation")`. This behavior needs to be modified in the presence of terminal deprecation. Consider a case where a use site depends on an API that is ordinarily deprecated, and the resulting warning has been suppressed with a `@SuppressWarnings("deprecation")` annotation. If the declaration site were to be modified to be terminally deprecated, we would want a removal warning to occur at the use site, even though warnings at the use site have already been suppressed. If a new warning were not issued in this case, it would be possible for an API to be terminally deprecated and then removed without any warnings at its use sites.

The following scenario illustrates the problem. Suppose that the `@SuppressWarnings("deprecation")` annotation were to suppress both ordinary deprecation warnings as well as removal warnings. Then, the following could occur:

1. Use site X depends on API Y, currently not deprecated
2. Y's declaration changes to ordinary deprecation, generating an ordinary deprecation warning at X
3. X is annotated with `@SuppressWarnings("deprecation")`, suppressing the warning
4. Y's declaration changes to terminal deprecation; the removal warning at X is still suppressed
5. Y is removed entirely, causing X to break unexpectedly

Inasmuch as the purpose of deprecation is to communicate information about API evolution, particularly about removal of APIs, the lack of any warning in this case is a serious problem. It follows that a warning should be given when a deprecation is "upgraded" from an ordinary to a terminal deprecation, even if the warnings at that use site had previously been suppressed.

We need a mechanism for suppressing removal warnings that differs from the mechanism currently used for suppressing ordinary deprecation warnings. The solution is to use a different string in the `@SuppressWarnings` annotation. Removal warnings, that is, warnings that arise from the use of terminally deprecated APIs, can be suppressed with the annotation

    @SuppressWarnings("removal")

This annotation suppresses only removal warnings, and not ordinary deprecation warnings. We considered making this be a strong form of suppression that would cover both ordinary deprecation warnings and removal warnings. However, this potentially leads to errors. Programmers might use `@SuppressWarnings("removal")` to suppress warnings from ordinary deprecations. This would prevent warnings from appearing if an ordinary deprecation were changed to a terminal deprecation, leading to unexpected breakage when the terminally deprecated API is eventually removed.

As before, warnings from the use of ordinarily deprecated APIs can be suppressed with the annotation

    @SuppressWarnings("deprecation")

As noted above, this annotation suppresses only ordinary deprecation warnings; it doesn't suppress removal warnings. If it is necessary to suppress both ordinary deprecation warnings and removal warnings at a particular site, the following construct can be used:

    @SuppressWarnings({"deprecation", "removal"})

Below is a copy of the warnings table from the previous section, modified to show how warnings from the different cases can be suppressed.

    use site   |        API declaration site
    context    |  not dep.   ord. dep.   term. dep.
    -----------+-----------------------------------
    not dep.   |     -        @SW(d)      @SW(r)
    ord. dep.  |     -          -         @SW(r)
    term. dep. |     -          -         @SW(r)

    @SW(d) = @SuppressWarnings("deprecation")
    @SW(r) = @SuppressWarnings("removal")

If a removal warning is suppressed with `@SuppressWarnings("removal")` at the use site of a terminally deprecated API, and that API is changed to an ordinary deprecation, it is somewhat odd that an ordinary deprecation warning will appear. However, we expect the evolution path of an API from terminal deprecation back to ordinary deprecation to be quite rare.

JLS section 9.6.4.6 will need to be modified accordingly. That change is covered by [JDK-8145716]().

### Static Analysis

A static analysis tool `jdeprscan` will be provided that scans a jar file (or some other aggregation of class files) for uses of deprecated API elements. By default, the deprecated APIs will be the deprecations from Java SE itself. A future extension will provide the ability to scan for deprecations that have been declared in a class library other than Java SE.

### Ideas for Future Work

A dynamic analysis tool `jdeprdetect` could be provided to track dynamic uses of deprecated APIs. It could be implemented by using a Java agent, instrumenting the deprecated API elements and issuing warning messages when usage of those elements is detected at runtime. Dynamic analysis should be helpful at catching cases that static analysis misses. These cases include reflective access to deprecated APIs, or use of deprecated providers loaded via `ServiceLoader`. Furthermore, dynamic analysis can show the *absence* of a dependency that might be flagged by static analysis. For example, code might reference a deprecated API, and this reference will cause `jdeprscan` to emit a warning. However, if the code referencing a deprecated API is dead code, no warning will be emitted by `jdeprdetect`. This information should help developers prioritize their code migration efforts.

Certain features reside entirely within library implementations and aren't manifested in any public APIs. One example of this is the "legacy merge sort" algorithm. See [Java SE 7 and JDK 7 Compatibility]() for further information. Library implementations of deprecated features should be able to check various system properties to determine whether to issue log messages at runtime, and if so, what form the log message should take. These properties might include:

* `java.deprecation.enableLogging` (*boolean*, default `false`). If true, as determined by the `Boolean.parseBoolean` method, then library code will log deprecation messages. Messages will be logged using a logger obtained by calling `System.getLogger()`, and messages will be logged using a level of `System.Logger.Level.WARNING`.
* `java.deprecation.enableStackTrace` (*boolean*, default `false`). If true, and if deprecation logging is enabled, log messages will include a stack trace.

Implementation and enhancements to other tools are beyond the scope of this JEP. A number of ideas for such tool enhancements are described here as suggestions for future work.

The `javadoc` tool could be enhanced to handle the detail code of a `@Deprecated` annotation. It could also provide a more prominent display of the `Detail` values. The handling of the `@deprecated` Javadoc tag should be largely unchanged, though perhaps it might be modified somewhat to include information about the `forRemoval` and `since` values.

The standard doclet could be modified to treat deprecated APIs differently. For example, deprecated members of a class might be put into a separate tab, alongside the existing tabs for instance, abstract, and concrete methods. Deprecated classes could be moved to a separate section in the package frame. Currently, it contains sections for Interfaces, Classes, Enums, Exceptions, Errors, and Annotation Types. New sections for deprecated members could be added.

The list of deprecated APIs could be enhanced as well. (This page is reached via the link at the very top of each page, in the bar containing the links Overview, Package, Class, Use, Tree, Deprecated, Index, Help.) This page is currently organized by kind: interfaces, classes, exceptions, annotation types, fields, methods, constructors, and annotation type elements. API elements that include the value `forRemoval=true` should be highlighted, as their impending removal potentially has great impact.

The enhanced `@Deprecated` annotation will impact other tools such as IDEs. For example, deprecated APIs should be absent from IDEs' auto-completion menus and dialogs by default. Or, automatic refactoring rules could be offered that replace calls to deprecated APIs with calls to their replacements.

Alternatives
------------

A set of alternatives that has been proposed includes having the JVM halt, having deprecated features be disabled, or having usage of deprecated APIs cause a compile-time error, unless a version-specific option is supplied. All of these proposals will succeed only at notifying the developer of the *first* usage of a deprecated feature, because the normal program (or build) flow is interrupted at that point. Thus, subsequent uses of deprecated features would likely go undetected. Upon encountering such failures, most developers would simply supply the version-specific option to enable the deprecated features. Thus, in general, this approach won't be successful at providing developers information about *all* of the deprecated features in use by an application.

It has been suggested that the `@deprecated` Javadoc tag be retired in favor of the `@Deprecated` annotation. The `@deprecated` Javadoc tag and the `@Deprecated` annotation should always both be present or absent. However, they are redundant only in a very abstract, conceptual sense. The `@deprecated` Javadoc tag provides descriptive text, rationale, and information and links to replacement APIs. This information is quite suitable for including in javadoc documentation, which already has facilities for it (such as link tags). Moving such textual information into annotation values would require javadoc to extract the information from annotations instead of doc comments. It would be harder for developers to maintain, since annotations have no markup support. Finally, annotation elements take up space at runtime, and it's unnecessary for documentation text to be present in memory at runtime.

A string value has been proposed as a detail code. This appears to provide more flexibility, but it also introduces problems with weak typing and namespace conflicts, possibly leading to undetected errors.

A "replacement" element in the `@Deprecated` annotation was present in earlier versions of this proposal. The intent was for it to denote a specific API that replaces the one being deprecated. In practice, there is never a drop-in replacement API for any deprecated API; there are always tradeoffs and design considerations, or choices to be made among several possible replacements. All of these topics require discussion and are thus better suited for textual documentation. Finally, there is no syntax for referring to another API from an annotation element, whereas Javadoc already supports such references via its `@see` and `@link` tags. The only significant bit of detail remaining is whether there is intent to remove the API. This is expressed in the `forRemoval` annotation element.

Testing
-------

A reasonably simple set of tests will be constructed for the new tooling. A set of cases will be provided where each different kind of API element that can be deprecated is deprecated. Another set of cases will be constructed, consisting of usages of each deprecated API from the cases described above. The static analysis checker `jdeprscan` should be run to ensure that it issues warnings for all such usages.
|
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8065614
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
Since the last milestone of Eclipse Ditto 0.1.0-M3, the following new features and bugfixes were added.
New features
Search in namespaces
A query parameter
namespaces was added to the HTTP search API.
It can be used in order to restrict search to Things within specific namespaces. For example, with the route
/search/things?namespaces=john,mark
only Things with IDs of the form
john:<id-suffix> and
mark:<id-suffix> are returned as results.
Namespace restriction happens at the start of a search query execution and may speed up search queries considerably.
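As a hedged sketch of calling this over HTTP from Python (the host, API version prefix, and credentials below are placeholders; only the namespaces parameter and the /search/things route come from these notes):

import requests

base = "https://ditto.example.com/api/1"  # placeholder host and API version
resp = requests.get(
    base + "/search/things",
    params={"namespaces": "john,mark"},  # restrict the search to these namespaces
    auth=("user", "password"),           # placeholder credentials
)
resp.raise_for_status()
print(resp.json())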
Feature Definition
Ditto’s model (to be precise the
Feature) was enhanced by a
Definition. This field is intended to store which
contract a Feature follows (which state and capabilities can be expected from a Feature).
The Java model, HTTP API and Ditto Protocol were enhanced (in a non-API breaking way) to now contain that field.
For more information about the Feature Definition and how it can in future be used together with Eclipse Vorto, have a look at its documentation.
Bugfixes
AMQP 1.0 failover is not working
Using
"failover": true when creating a new AMQP 1.0 connection caused that the connection could not be established.
Various smaller bugfixes
This is a complete list of the merged pull requests.
Documentation
Continuously improve and enhance the existing documentation.
|
https://www.eclipse.org/ditto/release_notes_020-M1.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
Provided by: libnbd-dev_1.2.2-1ubuntu2_amd64
NAME
nbd_get_protocol - return the NBD protocol variant
SYNOPSIS
#include <libnbd.h>

const char * nbd_get_protocol (struct nbd_handle *h);
DESCRIPTION
Return the NBD protocol variant in use on the connection. At the moment this returns one of the strings "oldstyle", "newstyle" or "newstyle-fixed". Other strings might be returned in the future. Most modern NBD servers use "newstyle-fixed". This call does not block, because it returns data that is saved in the handle from the NBD protocol handshake.
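For example, with the libnbd Python bindings, which mirror the C API (the connection URI is a placeholder and assumes a server is already listening there):

import nbd

h = nbd.NBD()
h.connect_uri("nbd://localhost")  # placeholder URI; the handshake completes here
print(h.get_protocol())           # e.g. "newstyle-fixed"
h.shutdown()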
RETURN VALUE
This call returns a statically allocated string, valid for the lifetime of the process or until libnbd is unloaded by dlclose(3). You must not try to free the string.
ERRORS
On error "NULL" is returned. Refer to "ERROR HANDLING" in libnbd(3) for how to get further details of the error.
HANDLE STATE
The handle must be connected and finished handshaking with the server, or shut down, otherwise this call will return an error.
VERSION
This function first appeared in libnbd 1.2. If you need to test if this function is available at compile time, check if the following macro is defined:

#define LIBNBD_HAVE_NBD_GET_PROTOCOL 1
SEE ALSO
nbd_get_handshake_flags(3), nbd_get_structured_replies_negotiated(3), nbd_get_tls_negotiated(3), nbd_create
|
http://manpages.ubuntu.com/manpages/focal/man3/nbd_get_protocol.3.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
I made standalone executables for Linux and Windows of my Python programs Microbe, Replicator Machine, Neural Construct and Biomorph Entity. These executables were built using cx_Freeze, and have Python and all dependent libraries included, no installation needed.
Some details of the cx_Freeze build follows:
The cx_Freeze utility is similar to py2exe, but can build both Linux and Windows executables. To build the executable from the Python script using cx_Freeze, build options can be entered in a setup.py script, a basic one being:
#setup.py
from cx_Freeze import setup, Executable
setup( name = "Microbe",
version = "1.21",
description = "Microbial Simulation",
executables = [Executable("microbe.py")] )
Then from the command line run:
python setup.py build
There were a few glitches in the build, mainly because my Python scripts use the Pygame and NumPy modules. I used Ubuntu Linux 9.10 to build, and tested executables on an Arch Linux virtual machine (VM) installed in VirtualBox with no external Python modules. The build system has Python 2.6.4 (Linux)/2.5.4 (Windows), Pygame 1.8.1, NumPy 1.3.0, and used cx_Freeze 4.1.1 (4.0.1 appeared to give the same results; the latest version had an import error, solved by renaming /cx_Freeze/util to util.so (Linux) or util.pyd (Windows)). I will describe the process with the Microbe build, which requires both Pygame and NumPy. The described process yielded successful builds, and is meant as advice but not necessarily the most appropriate solution for other situations.
I first started a build with the basic setup.py script. This should create a build folder with the Python script, Python and all dependencies included, and in this example, the program can be launched on Linux by the command './microbe'. The build ran properly, except there were hidden dependencies still on the build machine that may not be available on another Linux machine. A good way on Linux to check which files are opened when the program is launched is to run the following from the command line:
top
lsof | grep ID
The top utility shows the ID of running programs, so get the ID of the executable and put it on the lsof line to list all files opened by the executable. Doing this, you can see which files are being opened from the build package, and which are being pulled in from the system.
To get back to the build, an attempt at running the build on the VM produced:
File "/usr/lib/python2.6/dist-packages/numpy/lib/polynomial.py", line 11, in
AttributeError: 'module' object has no attribute 'core'
This concerned missing NumPy dependencies libblas.so.3gf/liblapack.so.3gf that were not packaged by cx_Freeze and can be fixed by adding to cx_Freeze setup.py:
includeDependencies =
[("/usr/lib/libblas.so.3gf.0","libblas.so.3gf"),
("/usr/lib/liblapack.so.3gf.0","liblapack.so.3gf")]
These dependencies comprise a Linear Algebra Package (lapack), and are quite a large addition. If NumPy is built from source without the lapack library installed, it will compile a lite version that would be better to package. On Ubuntu Linux, NumPy depends on the full version of lapack. I do not have a requirement for the full lapack version, and decided to remove NumPy/libatlas3gf-base/libblas3gf; if you do have another program requiring lapack, then do not do this, or reinstall it later. With the removal of NumPy, several other programs dependent on this module may be removed and need to be reinstalled after NumPy is reinstalled. I then compiled NumPy 1.4.0rc1 from source:
sudo apt-get install python-dev
python setup.py build --fcompiler=gnu95
sudo checkinstall python setup.py install
This compilation requires the Python headers (python-dev) and the gfortran compiler. The install should be to /usr/local/lib/; Ubuntu should have the path in /etc/ld.so.conf, and if it is missing, add it and then run 'sudo ldconfig'. Checkinstall makes a deb package while installing (during install, info is requested - update name: pygame-numpy and version: 1:1.4.0) and inserts it in the package manager, which updates dependencies to the installed version. Since Pygame was removed when I removed NumPy before, I reinstalled that. This allowed a cx_Freeze build of Microbe with NumPy/lapack-lite added using the basic setup.py.
The build ran on the system after a required change to my program, though the change was due to an unrelated issue concerning a compute-intensive amoeba animate function that I had compiled with Cython, a Python-like-to-C compiler. The program terminated immediately with 'ValueError: numpy.dtype does not appear to be the correct type object'. It seems that the new version of NumPy had changed numpy.dtype and broke the Cython-compiled code. I needed to recompile the animate code to have a separate version that works with the installed NumPy module.
When the new build was run on the VM, the previous error was gone, but another appeared:
File "microbe.py", line 75, in __init__
pygame.error: File is not a Windows BMP file
This concerned the inability to find the Pygame dependencies libjpeg.so.62/libpng12.so.0, due to discrepancies in names between Ubuntu and Arch Linux. I decided to include these in the build using the buildOptions includeDependencies. This can be seen below in the final setup.py used for the cx_Freeze build, which I used to build on both Linux and Windows.
However, upon exit of the program in Windows, there was an error that the program 'encountered a problem and needs to close - ModName: python25.dll', and I found that the build did not like the code 'sys.exit'; this was fixed by changing the program to exit by main loop termination.
One final note: for some of my other programs, such as Biomorph Entity, that do not import NumPy, NumPy is still packaged in the build. I believe this is because Pygame depends on NumPy for its surfarray module. Since I do not use this module in those programs, I was able to do a cx_Freeze build with the buildOptions 'excludes = ["numpy"]', and possibly excluding other unnecessary Python modules can make a lighter executable.
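For the NumPy-free builds, a minimal setup.py along these lines should work (a sketch only; the script name and version here are placeholders, not the actual Biomorph build script):
#setup.py (sketch)
from cx_Freeze import setup, Executable
buildOptions = dict( excludes = ["numpy"] )
setup( name = "Biomorph Entity",
       version = "1.0",
       description = "Biomorph Simulation",
       options = dict(build_exe = buildOptions),
       executables = [Executable("biomorph.py")] )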
The executables for the Microbe program were built on both Linux and Windows with cx_Freeze by issuing the command 'python setup.py build' from the program folder with the following setup.py:
#setup.py:
from cx_Freeze import setup, Executable
import sys

if sys.platform == "win32":
    base = "Win32GUI"
    includeDependencies = []
else:
    base = None
    includeDependencies = [
        ("/usr/lib/libjpeg.so.62.0.0", "libjpeg.so.62"),
        ("/usr/lib/libpng12.so.0.37.0", "libpng12.so.0")]

includePersonalFiles = [
    ("data", "data"), ("readme.txt", "readme.txt")]
includeFiles = includeDependencies + includePersonalFiles

buildOptions = dict(
    include_files = includeFiles,
    icon = "microbe.ico",
    optimize = 2,
    compressed = True)

setup( name = "Microbe",
       version = "1.21",
       description = "Microbial Simulation",
       options = dict(build_exe = buildOptions),
       executables = [Executable("microbe.py", base = base)] )
Submitted by Jim on December 23, 2009 at 11:00 pm
|
https://gatc.ca/2009/12/23/python-program-executables/
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
import from Process Automation User Connection, Talk to Other Process Users
Ho Chin Hui posted on 11/3/03: The WWLogger at the client side will sometimes give me the error message "Failed to advise tag from tagserver". This resulted in no data being displayed, and it also no longer refreshes. This error message does not appear on the tagserver side. The problem can be cleared if I go to another page and return again. However, this is not acceptable. Can anyone help me out?
import from Process Automation User Connection, Talk to Other Process Users
Tom Johnson on 11/4/03:
Usually, that happens when you reference a tag that doesn't exist on the TagServer. Check the graphic and confirm that all the tags being referenced actually exist on the tagserver. The reason that the page stops updating is that WW has a bug (or a feature) that stops the updating process on good tags when it runs into a tag that doesn't exist. Usually, it is random in which tags will stop updating. If you find the bad tag and delete it (or correct it), the problem with the updating will go away. Hope that helps.
|
https://support.industry.siemens.com/tf/ww/en/posts/r5-client-fail-to-advise-tag-from-tagserver/5703/?page=0&pageSize=10
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
React component to draw SVG Guitar chords.
npm install react-guitar-chord
or
yarn add react-guitar-chord
import React from 'react'
import GuitarChord from 'react-guitar-chord'

export default () => (
  <div>
    <GuitarChord chord={'C'} />
    <GuitarChord chord={'C'} quality={'MIN'} />
  </div>
)
|
https://npm.runkit.com/react-guitar-chord
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
Another common problem faced by many Unity developers is the unexpected expansion of the managed heap. In Unity, the managed heap expands much more readily than it shrinks. Furthermore, Unity’s garbage collection strategy tends to fragment memory, which can prevent a large heap from shrinking.
The “managed heap” is a section of memory that is automatically managed by the memory manager of a Project’s scripting runtime (Mono or IL2CPP). All objects created in managed code must be allocated on the managed heap (2).
In the above diagram, the white box represents a quantity of memory apportioned to the managed heap, and the colored boxes within it represent data values stored within the managed heap’s memory space. When additional values are needed, more space is allocated from within the managed heap.
The garbage collector runs periodically (3). This sweeps through all objects on the heap, marking for deletion any objects that are no longer referenced. Unreferenced objects are then deleted, freeing up memory.
Crucially, Unity’s garbage collection – which uses the Boehm GC algorithm – is non-generational and non-compacting. “Non-generational” means that the GC must sweep through the entire heap when performing a collection pass, and its performance therefore degrades as the heap expands. “Non-compacting” means that objects in memory are not relocated in order to close gaps between objects.
The above diagram shows an example of memory fragmentation. When an object is released, its memory is freed. However, the freed space does not become part of a single large pool of “free memory”. The objects on either side of the freed object may still be in use. Because of this, the freed space is a “gap” between other segments of memory (this gap is indicated by the red circle in the diagram). The newly-freed space can therefore only be used to store data of identical or lesser size than the freed object.
When allocating an object, remember that the object must always occupy a contiguous block of space in memory.
This leads to the core problem of memory fragmentation: while the overall amount of space available in the heap may be substantial, it is possible that some or all of that space is in small “gaps” between allocated objects. In this case, even though there may be enough total space to accommodate a certain allocation, the managed heap cannot find a large enough block of contiguous memory in which to fit the allocation.
However, if a large object is allocated and there is insufficient contiguous free space to accommodate the object, as illustrated above, the Unity memory manager performs two operations.
First, if it has not already done so, the garbage collector runs. This attempts to free up enough space to fulfill the allocation request.
If, after the GC runs, there is still not enough contiguous space to fit the requested amount of memory, the heap must expand. The specific amount that the heap expands is platform-dependent; however, most Unity platforms double the size of the managed heap.
The core issues with managed heap expansion are twofold:
Unity does not often release the memory pages allocated to the managed heap when it expands; it optimistically retains the expanded heap, even if a large portion of it is empty. This is to prevent the need to re-expand the heap should further large allocations occur.
On most platforms, Unity eventually releases the pages used by empty portions of the managed heap back to the operating system. The interval at which this occurs is not guaranteed and should not be relied upon.
The address space used by the managed heap is never returned to the operating system.
For 32-bit programs, this can lead to address space exhaustion if the managed heap expands and contracts many times. If a program’s available memory address space is exhausted, the operating system will terminate the program.
For 64-bit programs, the address space is sufficiently large that this is extremely unlikely to occur for programs whose running time does not exceed the average human lifespan.
Many Unity projects are found to operate with several tens or hundreds of kilobytes of temporary data being allocated to the managed heap each frame. This is often extremely detrimental to a project’s performance. Consider the following math:
If a program allocates one kilobyte (1kb) of temporary memory each frame, and is running at 60 frames per second, then it must allocate 60 kilobytes of temporary memory per second. Over the course of a minute, this adds up to 3.6 megabytes of garbage in memory. Invoking the garbage collector once per second is likely to be detrimental to performance, but allocating 3.6 megabytes per minute is problematic when attempting to run on low-memory devices.
Further, consider loading operations. If a large number of temporary objects are generated during a heavy Asset-loading operation, and those objects are referenced until the operation completes, then the garbage collector is unable to release those temporary objects and the managed heap needs to expand – even though many of the objects it contains will be released a short time later.
Keeping track of managed memory allocations is relatively simple. In Unity’s CPU Profiler, the Overview has a “GC Alloc” column. This column displays the number of bytes allocated on the managed heap in a specific frame (4). With the “Deep Profiling” option enabled, it’s possible to track down the method in which these allocations occur.
The Unity Profiler does not track these allocations when they occur off the main thread. Therefore, the “GC Alloc” column cannot be used to measure managed allocations that occur in user-created threads. Switch the execution of code from separate threads to the main thread for debugging purposes or use the BeginThreadProfiling API to display the samples in the Timeline Profiler.
Always profile managed allocations with a development build; some allocations occur only when code is executed in the Editor, and not in a built project.
In general, it is strongly recommended that all developers minimize managed heap allocations whenever the project is in an interactive state. Allocations during non-interactive operations, such as Scene loading, are less problematic.
The JetBrains ReSharper plugin for Visual Studio can help locate allocations in code.
Use Unity’s Deep Profile mode to locate the specific causes of managed allocations. In Deep Profile mode, all method calls are recorded individually, providing a clearer view of where managed allocations occur within the method call tree. Note that Deep Profile mode works not only in the Editor but also on Android and Desktop using the command line argument -deepprofiling. The Deep Profiler button stays grayed out during profiling.
There are a handful of relatively simple techniques that can be employed to reduce managed heap allocations.
When using C#’s Collection classes or Arrays, consider reusing or pooling the allocated Collection or Array whenever possible. The Collection classes expose a Clear method which eliminates the Collection’s values but does not release the memory allocated to the Collection.
This is particularly useful when allocating temporary “helper” Collections for complex computations. A very simple example might be the following code:

void Update() {
    List<float> nearestNeighbors = new List<float>();
    findDistancesToNearestNeighbors(nearestNeighbors);
    nearestNeighbors.Sort();
    // … use the sorted list somehow …
}

In this example, the nearestNeighbors List is allocated once per frame in order to collect a set of data points. It’s very simple to hoist this List out of the method and into the containing class, which avoids allocating a new List each frame:
List<float> m_NearestNeighbors = new List<float>();

void Update() {
    m_NearestNeighbors.Clear();
    findDistancesToNearestNeighbors(m_NearestNeighbors);
    m_NearestNeighbors.Sort();
    // … use the sorted list somehow …
}
In this version, the List’s memory is retained and reused across multiple frames. New memory is only allocated when the List needs to expand.
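If a reasonable upper bound on the element count is known, the remaining expansion allocations can also be avoided by constructing the List with an initial capacity. A brief sketch, where kMaxNeighbors is a hypothetical bound rather than a value from the original example:

const int kMaxNeighbors = 64; // hypothetical upper bound
List<float> m_NearestNeighbors = new List<float>(kMaxNeighbors);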
There are two points to consider when using closures and anonymous methods.
First, all method references in C# are reference types, and are therefore allocated on the heap. Temporary allocations can be easily created by passing a method reference as an argument. This allocation occurs regardless of whether the method being passed is an anonymous method or a predefined one.
Second, converting an anonymous method to a closure significantly increases the amount of memory required to pass the closure to the method receiving it.
Consider the following code:
List<float> listOfNumbers = createListOfRandomNumbers();
listOfNumbers.Sort( (x, y) => (int)x.CompareTo((int)(y/2)) );
This snippet uses a simple anonymous method to control the sorting order of the list of numbers created on the first line. However, if a programmer wished to make this snippet reusable, it is tempting to substitute the constant 2 for a variable in local scope, like so:
List<float> listOfNumbers = createListOfRandomNumbers();
int desiredDivisor = getDesiredDivisor();
listOfNumbers.Sort( (x, y) => (int)x.CompareTo((int)(y/desiredDivisor)) );
The anonymous method now requires access to the state of a variable outside of the method’s scope, and so has become a closure. The desiredDivisor variable must be passed into the closure somehow so that it can be used by the actual code of the closure.
To do this, C# generates an anonymous class that can retain the externally-scoped variables needed by the closure. A copy of this class is instantiated when the closure is passed to the Sort method, and the copy is initialized with the value of the desiredDivisor integer.
Because executing the closure requires instantiating a copy of its generated class, and all classes are reference types in C#, executing the closure requires the allocation of an object on the managed heap.
In general, it is best to avoid closures in C# whenever possible. Anonymous methods and method references should be minimized in performance-sensitive code, and especially in code that executes on a per-frame basis.
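Where per-call state is genuinely required, one workaround is to move the captured variable into a field and cache the delegate, so that no new closure object is allocated per call. The following is an illustrative sketch (the class and member names are not from the example above):

class NumberSorter {
    int m_DesiredDivisor;
    Comparison<float> m_Comparison;

    public NumberSorter() {
        // The lambda captures only 'this', so one delegate is allocated
        // here and reused for every subsequent Sort call.
        m_Comparison = (x, y) => (int)x.CompareTo((int)(y / m_DesiredDivisor));
    }

    public void SortWithDivisor(List<float> numbers, int divisor) {
        m_DesiredDivisor = divisor;
        numbers.Sort(m_Comparison); // no delegate or closure allocation per call
    }
}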
Boxing is one of the most common sources of unintended temporary memory allocations found in Unity projects. It occurs whenever a value-typed value is utilized as a reference type; this most often occurs when passing primitive value-typed variables (such as int and float) to object-typed methods.
int x = 1;
object y = new object();
y.Equals(x);

In this extremely simple example, the integer in x is boxed in order to be passed to the object.Equals method, because the Equals method on object requires that an object be passed to it.
C# IDEs and compilers generally do not issue warnings about boxing, even though it leads to unintended memory allocations. This is because the C# language was developed with the assumption that small temporary allocations would be efficiently handled by generational garbage collectors and allocation-size-sensitive memory pools.
While Unity’s allocator does use different memory pools for small and large allocations, Unity’s garbage collector is not generational and therefore cannot efficiently sweep out the small, frequent temporary allocations generated by boxing.
Boxing should be avoided wherever possible when writing C# code for Unity runtimes.
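One way to keep value types out of object-typed parameters is to prefer generic or strongly-typed overloads where they exist. A minimal sketch for illustration (the method names here are hypothetical):

// Boxes: the int must be wrapped in a heap object to satisfy the object parameter.
void LogValueBoxed(object value) { /* ... */ }

// Does not box: the generic parameter is instantiated as int at compile time.
void LogValue<T>(T value) { /* ... */ }

void Example() {
    int x = 1;
    LogValueBoxed(x); // allocates a box on the managed heap
    LogValue(x);      // no box; T is int
}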
Boxing shows up in CPU traces as calls to one of a few methods, depending on the scripting backend in use. These generally take one of the following forms, where <some class> is the name of some other class or struct, and … is some number of arguments:
<some class>::Box(…)
Box(…)
<some class>_Box(…)
It can also be located by searching the output of a decompiler or IL viewer, such as the IL viewer tool built into ReSharper or the dotPeek decompiler. The IL instruction is “box”.
One common cause of boxing is the use of enum types as keys for Dictionaries. Declaring an enum creates a new value type that is treated like an integer behind the scenes, but enforces type-safety rules at compile time.
By default, a call to Dictionary.Add(key, value) results in a call to Object.GetHashCode() on the key. This method is used to obtain the appropriate hash code for the Dictionary’s key, and is used in all methods that accept a key: Dictionary.TryGetValue, Dictionary.Remove, etc.
The Object.GetHashCode method is defined on the reference type Object, but enum values are always value types. Therefore, for enum-keyed Dictionaries, every method call results in the key being boxed at least once.
The following code snippet illustrates a simple example that demonstrates this boxing problem:
enum MyEnum { a, b, c };

var myDictionary = new Dictionary<MyEnum, object>();
myDictionary.Add(MyEnum.a, new object());
To solve this problem, it is necessary to write a custom class that implements the IEqualityComparer interface and assign an instance of that class as the Dictionary’s comparer (Note: This object is usually stateless, and therefore can be reused with different Dictionary instances to save memory).
The following is a simple example of an IEqualityComparer for the above code snippet.
public class MyEnumComparer : IEqualityComparer<MyEnum> {
    public bool Equals(MyEnum x, MyEnum y) { return x == y; }
    public int GetHashCode(MyEnum x) { return (int)x; }
}
An instance of the above class could be passed to the Dictionary’s constructor.
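For example, wiring the comparer into the earlier snippet might look like this (a sketch reusing the names from the snippets above):

var myComparer = new MyEnumComparer(); // stateless, so one instance can be shared
var myDictionary = new Dictionary<MyEnum, object>(myComparer);
myDictionary.Add(MyEnum.a, new object()); // key is hashed without boxing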
In Unity’s version of the Mono C# compiler, use of the foreach loop forces Unity to box a value each time the loop terminates (Note: The value is boxed once each time the loop as a whole finishes executing. It does not box once per iteration of the loop, so memory usage remains the same regardless of whether the loop runs two times or 200 times). This is because the IL generated by Unity’s C# compiler constructs a generic value-type Enumerator in order to iterate over the value collection.
This Enumerator implements the IDisposable interface, whose Dispose method must be called when the loop terminates. However, calling interface methods on value-typed objects (such as structs and Enumerators) requires boxing them.
Examine the following very simple example code:
int accum = 0;
foreach(int x in myList) {
    accum += x;
}
The above, when run through Unity’s C# compiler, produces the following Intermediate Language:
.method private hidebysig instance void ILForeach() cil managed
{
    .maxstack 8
    .locals init (
        [0] int32 num,
        [1] int32 current,
        [2] valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32> V_2
    )

    // [67 5 - 67 16]
    IL_0000: ldc.i4.0
    IL_0001: stloc.0      // num

    // [68 5 - 68 74]
    IL_0002: ldarg.0      // this
    IL_0003: ldfld class [mscorlib]System.Collections.Generic.List`1<int32> test::myList
    IL_0008: callvirt instance valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<!0/*int32*/> class [mscorlib]System.Collections.Generic.List`1<int32>::GetEnumerator()
    IL_000d: stloc.2      // V_2
    .try
    {
        IL_000e: br IL_001f

        // [72 9 - 72 41]
        IL_0013: ldloca.s V_2
        IL_0015: call instance !0/*int32*/ valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>::get_Current()
        IL_001a: stloc.1  // current

        // [73 9 - 73 23]
        IL_001b: ldloc.0  // num
        IL_001c: ldloc.1  // current
        IL_001d: add
        IL_001e: stloc.0  // num

        // [70 7 - 70 36]
        IL_001f: ldloca.s V_2
        IL_0021: call instance bool valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>::MoveNext()
        IL_0026: brtrue IL_0013
        IL_002b: leave IL_003c
    } // end of .try
    finally
    {
        IL_0030: ldloc.2  // V_2
        IL_0031: box valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>
        IL_0036: callvirt instance void [mscorlib]System.IDisposable::Dispose()
        IL_003b: endfinally
    } // end of finally
    IL_003c: ret
} // end of method test::ILForeach
} // end of class test
The most relevant code is the finally { … } block near the bottom. The callvirt instruction discovers the location of the IDisposable.Dispose method in memory before invoking the method, and requires that the Enumerator be boxed.
In general, foreach loops should be avoided in Unity. Not only do they box, but the method-call cost of iterating over collections via Enumerators is generally much slower than manual iteration via a for or while loop.
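For reference, the accumulation example above rewritten as a manual for loop, which neither boxes nor pays the Enumerator method-call overhead:

int accum = 0;
int count = myList.Count; // hoisted out of the loop condition
for (int i = 0; i < count; i++) {
    accum += myList[i];
}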
Note that the C# compiler upgrade in Unity 5.5 significantly improves Unity’s ability to generate IL. In particular, the boxing operations have been eliminated from foreach loops, which eliminates their associated memory overhead. However, the CPU performance difference compared to equivalent Array-based code remains, due to method-call overhead.
A more pernicious and less-visible cause of spurious array allocation is the repeated accessing of Unity APIs that return arrays. All Unity APIs that return arrays create a new copy of the array each time they are accessed. It is extremely non-optimal to access an array-valued Unity API more often than necessary.
As an example, the following code spuriously creates four copies of the vertices array per loop iteration. The allocations occur each time the .vertices property is accessed.
for(int i = 0; i < mesh.vertices.Length; i++) {
    float x, y, z;
    x = mesh.vertices[i].x;
    y = mesh.vertices[i].y;
    z = mesh.vertices[i].z;
    // ...
    DoSomething(x, y, z);
}
This can be trivially refactored into a single array allocation, regardless of the number of loop iterations, by capturing the vertices array before entering the loop:
var vertices = mesh.vertices;
for(int i = 0; i < vertices.Length; i++) {
    float x, y, z;
    x = vertices[i].x;
    y = vertices[i].y;
    z = vertices[i].z;
    // ...
    DoSomething(x, y, z);
}
While the CPU cost of accessing a property once is not very high, repeated accesses within tight loops create CPU performance hotspots. Further, repeated accesses unnecessarily expand the managed heap.
This problem is extremely common on mobile, because the Input.touches API behaves similarly to the above. It is extremely common for projects to contain code similar to the following, where an allocation occurs each time the .touches property is accessed.
for ( int i = 0; i < Input.touches.Length; i++ ) {
    Touch touch = Input.touches[i];
    // …
}
This can, of course, be trivially improved by hoisting the array allocation out of the loop condition:
Touch[] touches = Input.touches;
for ( int i = 0; i < touches.Length; i++ ) {
    Touch touch = touches[i];
    // …
}
However, there are now versions of many Unity APIs that do not cause memory allocations. These should generally be favored when they’re available. Converting the above example to the allocation-less Touch API is simple:

int touchCount = Input.touchCount;
for ( int i = 0; i < touchCount; i++ ) {
    Touch touch = Input.GetTouch(i);
    // …
}
Note that the property access (Input.touchCount) is still kept outside the loop condition in order to save the CPU cost of invoking the property’s get method.
Some teams prefer to return empty arrays instead of null when an array-valued method needs to return an empty set. This coding pattern is common in many managed languages, particularly C# and Java.
In general, when returning a zero-length array from a method, it is considerably more efficient to return a pre-allocated singleton instance of the zero-length array than to repeatedly create empty arrays (5).
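A minimal sketch of this pattern follows (on newer runtimes, Array.Empty<T>() provides the same behavior out of the box; the lookup logic here is a placeholder):

public static class EmptyArray<T> {
    // One shared zero-length instance per element type.
    public static readonly T[] Instance = new T[0];
}

int[] GetMatches() {
    bool nothingFound = true; // stand-in for real lookup logic
    if (nothingFound)
        return EmptyArray<int>.Instance; // no allocation on the empty path
    return new int[] { 1, 2, 3 };
}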
Footnotes
(1) This is because, on most platforms, readback from GPU memory is extremely slow. Reading a Texture from GPU memory into a temporary buffer for use by CPU code (e.g. Texture.GetPixel) would be very nonperformant.
(2) Strictly speaking, all non-null reference-typed objects and all boxed value-typed objects must be allocated on the managed heap.
(3) The exact timing is platform-dependent.
(4) Note that this is not identical to the number of bytes temporarily allocated during a given frame. The profile displays the number of bytes allocated in a specific frame, even if some/all of the allocated memory is reused in subsequent frames.
(5) Naturally, an exception should be made when the array is resized after being returned.
|
https://docs.unity3d.com/Manual/BestPracticeUnderstandingPerformanceInUnity4-1.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
#include <string.h>
Helper class for the WritableArrayInterface macro. A WritableArrayInterface& parameter is actually a const NonConstArray& parameter, so temporary objects resulting from a conversion of some array to the NonConstArray interface may be bound to such a parameter (this wouldn't be possible if the parameter was non-const). To be able to invoke modifying functions on such a parameter, those functions are implemented as const functions in this class.
|
https://developers.maxon.net/docs/Cinema4DCPPSDK/html/classmaxon_1_1_non_const_array.html
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
In this post, we will learn how to use Pandas get_dummies() method to create dummy variables in Python. Dummy variables (or binary/indicator variables) are often used in statistical analyses as well as in more simple descriptive statistics. Towards the end of the post, there’s a link to a Jupyter Notebook containing all Pandas get_dummies() examples.
How to Create Dummy Variables in Python
To create dummy variables in Python, with Pandas, we can use this code template:
df_dc = pd.get_dummies(df, columns=['ColumnToDummyCode'])
In the code chunk above, df is the Pandas dataframe, and we use the columns argument to specify which columns we want to dummy code (see the following examples, in this post, for more details).
Dummy Coding for Regression Analysis
One statistical analysis in which we may need to create dummy variables is regression analysis. In fact, regression analysis requires numerical variables, and this means that when we, whether doing research or just analyzing data, wish to include a categorical variable in a regression model, supplementary steps are required to make the results interpretable.
In these steps, categorical variables in the data set are recoded into a set of separate binary variables (dummy variables). Furthermore, this re-coding is called “dummy coding” and involves the creation of a table called a contrast matrix. Dummy coding can be done automatically by statistical software, such as R, SPSS, or Python.
What is Categorical Data?
In this section, of the creating dummy variables in Python guide, we are going to answer the question about what categorical data is. Now, in statistics, a categorical variable (also known as factor or qualitative variable) is a variable that takes on one of a limited, and most commonly a fixed number of possible values. Furthermore, these variables are typically assigning each individual, or another unit of observation, to a particular group or nominal category. For example, gender is a categorical variable.
What is a Dummy Variable?
Now, the next question we are going to answer before working with Pandas get_dummies, is “what is a dummy variable?”. Typically, a dummy variable (or column) is one which has a value of one (1) when a categorical event occurs (e.g., an individual is male) and zero (0) when it doesn’t occur (e.g., an individual is female).
Installing Pandas
Obviously, we need to have Pandas installed to use the get_dummies() method. Pandas can be installed using pip or conda, for instance. If we want to install Pandas using conda, we type conda install pandas. On the other hand, if we want to use pip, we type pip install pandas. Note, it is typically suggested that Python packages are installed in virtual environments. Pipx can be used to install Python packages directly in virtual environments, and if we want to install, update, and use Python packages we can, as in this post, use conda or pip.
Finally, if there is a message that there is a newer version of pip, make sure to check out the post about how to update pip.
Example Data to Dummy Code
In this Pandas get_dummies tutorial, we will use the Salaries dataset, which contains the 2008-09 nine-month academic salary for Assistant Professors, Associate Professors, and Professors in a college in the U.S.
Import Data in Python using Pandas
Now, before we start using Pandas get_dummies() method, we need to load pandas and import the data.
import pandas as pd

data_url = ''
df = pd.read_csv(data_url, index_col=0)
df.head()
Of course, data can be stored in multiple different file types. For instance, we could have our data stored in .xlsx, SPSS, SAS, or STATA files. See the following tutorials to learn more about importing data from different file types:
- Learn how to read Excel (.xlsx) files using Python and Pandas
- Read SPSS files using Pandas in Python
- Import (Read) SAS files using Pandas
- Read STATA files in Python with Pandas
Now, if we only want to work with Excel files, reading xlsx files in Python, can be done with other libraries, as well.
Creating Dummy Variables in Python
In this section, we are going to use pandas get_dummies() to generate dummy variables in Python. First, we are going to work with the categorical variable “sex”. That is, we will start with dummy coding in Python with a categorical variable with two levels.
Second, we are going to generate dummy variables in Python with the variable “rank”. That is, in that dummy coding in Python example we are going to work with a factor variable with three levels.
How to Make Dummy Variables in Python with Two Levels
In this section, we are going to create a dummy variable in Python using Pandas get_dummies method. Specifically, we will generate dummy variables for a categorical variable with two levels (i.e., male and female).
In this create dummy variables in Python post, we are going to work with Pandas get_dummies(). As can be seen in the image above, we can change the prefix of our dummy variables and specify which columns contain our categorical variables.
First Dummy Coding in Python Example:
In the first Python dummy coding example below, we are using Pandas get_dummies to make dummy variables. Note, we are using a series as data and, thus, get two new columns named Female and Male.
pd.get_dummies(df['sex']).head()
In the code, above, we also printed the first 5 rows (using Pandas head()). We will now continue and use the columns argument. Here we input a list with the column(s) we want to create dummy variables from. Furthermore, we will create the new Pandas dataframe containing our new two columns.
How to Create Dummy variables in Python Video Tutorial
For those that prefer, here’s a video describing most of what is covered in this tutorial.
More Python Dummy Coding Examples:
df_dummies = pd.get_dummies(df, columns=['sex'])
df_dummies.head()
In the output (using Pandas head()), we can see that Pandas get_dummies automatically added “sex” as prefix and underscore as prefix separator. If we, however, want to change the prefix as well as the prefix separator we can add these arguments to Pandas get_dummies():
df_dummies = pd.get_dummies(df, prefix='Gender', prefix_sep='.', columns=['sex'])
df_dummies.head()
Remove Prefix and Separator from Dummy Columns
In the next Pandas dummies example code, we are going to make dummy variables in Python but we will set the prefix and the prefix_sep arguments so that we the column name will be the factor levels (categories):
df_dummies = pd.get_dummies(df, prefix='', prefix_sep='', columns=['sex'])
df_dummies.head()
How to Create Dummy Variables in Python with Three Levels
In this section, of the dummy coding in Python tutorial, we are going to work with the variable “rank”. That is, we will create dummy variables in Python from a categorical variable with three levels (or 3 factor levels). In the first dummy variable in Python code example below, we are working with Pandas get_dummies() the same way as we did in the first example.
pd.get_dummies(df['rank']).head()
That is, we put in a Pandas Series (i.e., the column with the variable) as the only argument and then we only got a new dataframe with 3 columns (i.e., for the 3 levels).
Create a Dataframe with Dummy Coded Variables
Of course, we want to have the dummy variables in a dataframe with the data. Again, we do this by using the columns argument and a list with the column that we want to use:
df_dummies = pd.get_dummies(df, columns=['rank'])
df_dummies.head()
In the image above, we can see that Pandas get_dummies() added “rank” as prefix and underscore as prefix separator. Next, we are going to change the prefix and the separator to “Rank” (uppercase) and “.” (dot).
df_dummies = pd.get_dummies(df, prefix='Rank', prefix_sep='.', columns=['rank'])
df_dummies.head()
Now, we may not need to have a prefix or a separator and, as in the previous Pandas create dummy variables in Python example, want to remove these. To accomplish this, we just add empty strings to the prefix and prefix_sep arguments:
df_dummies = pd.get_dummies(df, prefix='', prefix_sep='', columns=['rank'])
Creating Dummy Variables in Python for Many Columns
In the final Pandas dummies example, we are going to dummy code two columns. Specifically, we are going to add a list with two categorical variables and get 5 new columns that are dummy coded. This is, in fact, very easy and we can follow the example code from above:
Creating Multiple Dummy Variables Example Code:
df_dummies = pd.get_dummies(df, prefix='', prefix_sep='', columns=['rank', 'sex'])
df_dummies.head()
Finally, if we want to add more columns, to create dummy variables from, we can add that to the list we add as a parameter to the columns argument. See this notebook for all code examples in this tutorial about creating dummy variables in Python. For more Python Pandas tutorials, check out this page.
Conclusion: Dummy Coding in Python
In this post, we have learned how to do dummy coding in Python using Pandas get_dummies() method. More specifically, we have worked with categorical data with two levels, and categorical data with three levels. Furthermore, we have learned how to add and remove prefixes from the new columns created in the dataframe.
Additional Resources
Here’s a couple of additional resources to dig deeper into dummy coding:
- Dummy Variable (Wikiversity)
- Dummy Coding: the how and why
- Factorial Designs and Dummy Coding (Peer-reviewed article)
- Use of dummy variables in regression equations (Peer-reviewed article)
- An Introduction to Categorical Data Analysis (statistics book)
Thanks for your post Erik, quite easy to understand and implement after reading.
However, I’d like to point out some issues I found while reading:
– What is a Dummy Variable?: there is duplicated text in this block.
– tags: some of your code blocks end with a malformed closing tag for code, i.e:
df_dummies = pd.get_dummies(df, prefix=’Rank’, prefix_sep=’.’,
columns=[‘rank’])
df_dummies.head()code>
Hope this helps!
Hey Alex! Thanks for your kind comments. Also, thanks for spotting these errors.
Thanks for the precise explanation.
Hey Santosh! I am glad you liked the post. Thanks for your comment! Have a nice day!
Amazing notebook and article!
Is there a way to one hot encode multiple columns like you did in the last example of the linked notebook, except provide a unique prefix for each column I am “one hot encoding”
Hey Rehankhan,
Thank you for your kind comment. I am glad you liked the article and the notebook with the get_dummy() code examples. Now, if I understand your question correctly, you can add your unique prefixes to the prefix parameter. For example, in the last example (in the Notebook) you can do like this:
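For example, something like the following sketch (using hypothetical prefixes p1 and p2):

df_dummies = pd.get_dummies(df, prefix=['p1', 'p2'], columns=['rank', 'sex'])
df_dummies.head()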
Of course, the prefix_sep argument can be used to separate the prefix from the dummy variable name (e.g., p1_AssocProf, and so on, can be obtained by adding prefix_sep='_').
Is this what you are after?
Have a nice day,
Best,
Erik
|
https://www.marsja.se/how-to-use-pandas-get_dummies-to-create-dummy-variables-in-python/
|
CC-MAIN-2020-24
|
en
|
refinedweb
|
Earlier:
AppDomain Isolated WPF Demo.zip
I have developed some AddIns where I need to pass VisualBrush and Storyboards back to the Host. How can I do that in a clean way? Right now I have to use hacks to get it done… as in passing a dummy Border with its background set to the VisualBrush and its Resources containing the Storyboards. I would prefer a cleaner way of doing this.
Any suggestions?
Thanks for the great work you guys are doing!
Pavan
For questions about specific WPF controls and the add-in model please post questions on this forum:
The WPF team monitors that closely and there are developers there familiar with WPF and the add-in model support who can help you.
Thanks,
Jesse
Hi,
The attached sample does not run “out of the box” in visual studio 2008 beta 2.
Assembly references are missing, and no add-ins are found when run (after adding the required references).
Regards,
Lars Wilhelmsen
Sorry you’re running into problems here. We tested this on a few machines before posting and have had at least a few people contact us through the blog who didn’t run into problems, but it sounds like there may be a configuration out there that is still causing problems.
Just to clear a few things up. Is this a clean install of beta2 or were previous builds of 2008 installed on the machine? Which assembly reference did you need to add?
Finally, can you list out any warnings you get during discovery? You need to make the following change to the application to get these warnings:
In CalculatorHost.xaml.cs change the line that says
AddInStore.Rebuild(path);
to
String[] warnings = AddInStore.Rebuild(path);
Then put a break point after that and take a look at the warnings array.
Thanks,
Jesse
OT, but is there an MSDN forum for Add-Ins?
There is no dedicated forum for the add-in model but we are instead part of the base class library forums.
You can find that forum here:
–Jesse
A somewhat awkward but necessary first step… I am a Software Development Engineer on the WPF Application
I tried to use the email link, but your email link is out of date.
I am trying to write an add-in that hosts a frame in it. When a certain event occurs I want to update the frame source to point to a new web page. When I do I get:
A first chance exception of type ‘System.Deployment.Application.InvalidDeploymentException’ occurred in System.Deployment.dll
Additional information: Application identity is not set.
Then my frame disappers.
I have modified the calculator demo to show the problem.
Change: private System.Windows.UIElement Graph(double[] operands) as below.
Run. Click Push Next 5 time. Click Graph. Click Push Next 5 more times. Click Graph. Get error.
Code:
Frame f;
private System.Windows.UIElement Graph(double[] operands)
{
if (f == null)
{
f = new Frame();
f.Source = new Uri("");
f.Width = 200;
f.Height = 200;
}
else
{
f.Source = new Uri("");
}
return f;
}
Hi,
First, this is a great example!
I’m trying to activate the add-in’s in a new AddInProcess, but when I try to activate the ‘Graphic Calculator’ I get an TargetInvocationException telling me the following:
System.Reflection.TargetInvocationException occurred
Message="Exception has been thrown by the target of an invocation."
Source="mscorlib"
StackTrace:
at System.Reflection.ConstructorInfo.Invoke(Object[] parameters)
at System.AddIn.Hosting.ActivationWorker.Activate()
at System.AddIn.Hosting.AddInServerWorker.Activate(AddInToken pipeline, ActivationWorker& worker)
at System.AddIn.Hosting.AddInActivator.ActivateOutOfProcess[T](AddInToken token, AddInEnvironment environment, Boolean weOwn)
at System.AddIn.Hosting.AddInActivator.Activate[T](AddInToken token, AddInProcess process, PermissionSet permissionSet)
at System.AddIn.Hosting.AddInActivator.Activate[T](AddInToken token, AddInProcess process, AddInSecurityLevel level)
at System.AddIn.Hosting.AddInToken.Activate[T](AddInProcess process, AddInSecurityLevel level)
at DemoApplication.CalculatorHost.LoadAddIns() in C:projectsAppDomain Isolated WPF DemoDemoApplicationCalculatorHost.xaml.cs:line 213
InnerException: System.InvalidOperationException
Message="The calling thread must be STA, because many UI components require this."
Source="PresentationCore"
StackTrace:
at System.Windows.Input.InputManager..ctor()
at System.Windows.Input.InputManager.GetCurrentInputManagerImpl()
at System.Windows.Input.InputManager.get_Current()
at System.Windows.Input.KeyboardNavigation..ctor()
at System.Windows.FrameworkElement.EnsureFrameworkServices()
at System.Windows.FrameworkElement..ctor()
at System.Windows.Controls.Control..ctor()
at System.Windows.Controls.Button..ctor()
at GraphCalc.GraphingCalculator.StartButton() in C:projectsAppDomain Isolated WPF DemoGraphing CalculatorGraphingCalculator.cs:line 42
at GraphCalc.GraphingCalculator..ctor() in C:projectsAppDomain Isolated WPF DemoGraphing CalculatorGraphingCalculator.cs:line 21
InnerException:
What can I do to resolve this?
Thank you,
Marcel
Hi Jesse,
I did try to rebuild your sample on VS2008 RC, and I guess it needs updates. In the VisualCalculator…HostAdapter, and the same in Visual….AddInAdapter, it gives 2 errors (the same one, actually, twice): that VisualAdapters does not exist in the context. I solved it by replacing it with FrameworkElementAdapters (however, in the case of Visual….AddInAdapter, I had to cast to FrameworkElement, which anyway derives from UIElement). Otherwise, it works perfectly. Thank you, C. Marius
What about winforms? What if I wan’t the plugin to add a control to my host?
Creating Add-Ins for WPF Applications [excerpts from upcoming SDK content] You’re unlikely to be reading
You’re unlikely to be reading this if you haven’t used the .NET Framework to build managed applications
Hola! I just returned from TechEd 2007 held in Barcelona, Spain. Barcelona is a beautiful city with incredible
Hola! I just returned from TechEd 2007 held in Barcelona, Spain. Barcelona is a beautiful city with incredible
Here is the latest in my link-listing series . Also check out my ASP.NET Tips, Tricks and Tutorials page
Here is the latest in my link-listing series . Also check out my ASP.NET Tips, Tricks and Tutorials page
This is a powerful feature, in which "an AppDomain isolated add-in generates some UI at the request of the host and the host displays directly as part of the application".
Will this feature work with a WinForms add-in?
Best Regards,
Frank Perdana
This is a powerful feature, in which "an AppDomain isolated add-in generates some UI at the request of the host and the host displays directly as part of the application".
Will this feature work with a WinForms based add-in?
Best regards,
fperdana
First of all, great sample application!
Now my problem:
Today I’ve installed Visual Studio 2008 Pro Final.
From now on, two methods are missing:
VisualAdapters.ViewToContractAdapter
VisualAdapters.ContractToViewAdapter
Can anyone adapt the sample to get working with the final studio 2008?
Thanks in advance!
Greetings
Markus.
Fyi, to build with release bits, you need to replace "VisualAdapters" with "FrameworkElementAdapters", like so.
VisualCalculatorContractToViewHostAdapter.cs:
public override UIElement Operate(HostView.Operation op, double[] operands)
{
return FrameworkElementAdapters.ContractToViewAdapter(
_contract.Operate(
OperationHostAdapters.ViewToContractAdapter(op),
operands));
}
VisualCalculatorViewToContractAddInAdapter.cs:
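The add-in-side change mirrors the host-side one; a sketch of what that method plausibly looks like (the contract, view, and adapter member names here are assumptions; only FrameworkElementAdapters.ViewToContractAdapter is the documented API):

public override INativeHandleContract Operate(Contract.Operation op, double[] operands)
{
    return FrameworkElementAdapters.ViewToContractAdapter(
        (FrameworkElement)_view.Operate(
            OperationAddInAdapters.ContractToViewAdapter(op),
            operands));
}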
This is cool stuff, thanks for showing us how to do it.
Hi,
I have an app. using the WPF add-in model presented here, and one of the add-Ins is a simple Wizard with various pages.
I’ve used AddInSecurityLevel.Internet (constraints for security) to load the add-in, however I cannot launch the wizard as a Dialog box due to security restrictions on windows being launched for Internet-activated add-ins. Could you please advise any workaround? Should I change the design?
Thanks, C. Marius.
This sample does not build on v3.5 RTM.
VisualAdapters.ContractToViewAdapter(…) no longer exists in System.AddIn.Pipeline namespace.
There is System.AddIn.Pipeline.ContractAdapter.ContractToViewAdapter<TView>(…) but it has a different signature.
Can you please update this sample? Thank you!
When I attempt to build the example I get two errors:
Error 1 – The name ‘VisualAdapters’ does not exist in the current context …HostSideAdaptersVisualCalculatorContractToViewHostAdapter.cs, line 34
Error 2 – The name ‘VisualAdapters’ does not exist in the current context …AddInSideAdaptersVisualCalculatorViewToContractAddInAdapter.cs, line 31
There are no classes in the solution called ‘VisualAdapters’. The methods being called on these classes (ContractToViewAdapter and ViewToContractAdapter) do exist (in AddInSideAdapters.OperationViewToContractAddInAdapter and HostSideAdapters.OperationHostAdapters) but have different signatures to the calls made on the error lines.
This must have happened to others. Is there a fix available?
I am running VS2005 with version 3.5 of the framework. Is there a version of this sample app available for VS2005?
We are working on a WPF application that loads add-ins into a separate AppDomain and those add-ins include visual content. We are using the System.AddIn pipeline and therefore use FrameworkElementAdapters to marshal the UIElement references and the element shows up — excellent. But some issues and questions:
The code for FrameworkElementAdapters appears to actually host the UIElement in a separate HWnd, which appears to mean that these UIElements, like hosted WinForms elements, have their own region and can’t be blended, combined, covered by other content.
Most critically, though tabbing into the UIElement seems to work great navigation-wise, if you tab into the UIElement from, for example, a TextBox that is bound, then the TextBox you are leaving does not update its bound backer, presumably since the LostFocus event and other related events do not fire. Essentially, it’s as though the keyboard focus never left the TextBox as far as the hosting window is concerned, even though the focus is clearly in the hosted UIElement. CommandBindings on a menu like Paste still apply to the TextBox even though the focus is not there anymore.
Can you comment on these issues? Are these shortcomings of the current version? Will they be fixed? If not, what are the recommended workarounds so that added-in UIElements still behave like normal WPF content with respect to their hosting window?
We are looking at rewriting our own version of MS.Internal.Controls.AddInHost and FrameworkElementAdapters to see if we can properly address the TabInto and general cross-domain focus issues, but that seems extreme.
Thanks,
Dathan
Dathan,
You are right. Those framework features do not work across appdomains out of box. You will need to wire them through contracts explicitly.
Unfortunately this is a hard limit of hwnd hosting.
We are aware of both of those issues (1 and 2). We are looking into improving the experience in future releases. We understand how inconvenient it can be for our customers in the meantime. Your feedback is appreciated.
Thanks,
Hua
Thanks Hua. I have been able to globally solve 1 and 3 without any ‘invasive’ coding (i.e. no writing of our own element adapters or versions of MS…AddInHost, etc.), so I am good to go for now (I understand #2 is a hard limit).
More info that may help: I wanted to test #3 after discovering #1 (no events made me wonder if focus events were getting handled, and that made me wonder about binding) — to test it I took the WPF demo app and added a 2-way bound text box in the tab order right before the stack of visual calculator plug ins. The UI never saw the focus leave (i.e. the control’s property still shows it having focus, and RoutedCommands like a Paste menu still hit it as well).
Any idea on when the ‘fix will be in’?
Cheers, and thanks…. D
Can we see an example of wiring the Routed Events and Commands?
Is it required for the element to be the root element? I have a frame contained in a TabControl contained in a window. When I try to pass the frame, I get a "The element is not the root of the tree" exception.
Thanks,
Tooraj
Hello everyone!
I’m currently experiencing troubles with AddInHost! You see, it draws just nothing when the hosting window has its AllowsTransparency property set to true! Is there any workaround? I really need this flag, as the window I host visual add-ins in has complex bounds.
Could you please duplicate an answer here: siniypin(alpha)gmail.com
Best regards,
Robert
Jack had shown a WinForms UserControl (something with a green background) from an AddIn being hosted on the Calculator window. But this one is missing from the attached WPF calculator sample. Do we have it somewhere?
When I tried to Create an AddIn which contains a Winforms UserControl and host it on a Winfroms Form, it works fine.
** If I try to unload the AddIn, the main application is shut down**
Can anyone create a simple example of how to create an AddIn with a WinForms UserControl and host it on a WinForms Form, along with the AddIn unloading thing?
Thanks in advance
|
https://blogs.msdn.microsoft.com/clraddins/2007/08/06/appdomain-isolated-wpf-add-ins-jesse-kaplan/
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Red Hat Bugzilla – Full Text Bug Listing
Some uses of chroot, such as building packages for different releases, clash with SELinux, as they may need to use different security policies from what is available on the host system.
See discussion at:
There has been some internal SELinux project discussion on the issue of allowing
multiple policies to be loaded on a system (perhaps via namespaces), which is
also a requirement for fully supporting the shipping of policy with RPMs (e.g.
the file system labels for the RPM being built may not exist on the host system,
and also use different policy).
Changing version to '9' as part of upcoming Fedora 9 GA.
More information and reason for this action is here:
livecd patches and setfiles_mac are in or going into Rawhide. Which will allow
livecd to be built within an enforcing environment.
*** Bug 459398 ***
James Morris: Please can the version be bumped to 11?
Feedback about that?
---
Fedora Bugzappers volunteer triage team
James, can multiple policies be loaded on a system now?
No. We are adding changes to mock to stop it from doing SELinux activity inside the chroot, which will allow us to enforce policy on the entire mock environment, and mock will not try to load policy.
Mock now works correctly with SELinux.
|
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=430075
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
CONTENTS

This directory (aapits) is located in the tools directory of the ACPICA source tree and contains the sources of the ACPICA API validation Test Suite (AAPITS). AAPITS verifies, in emulating mode, conformity of the ACPICA API implementation to the definitions in the ACPI Component Architecture Programmer Reference (ACPICA ProgRef).

There are 9 test cases, relevant to the following chapters of the ACPICA ProgRef respectively:

Test Case     Chapter
atinit        8.1  Subsystem Initialization, Shutdown, and Status
attable       8.2  ACPI Table Management
atnamespace   8.3  ACPI Namespace Access
athardware    8.4  ACPI Hardware Management
atfixedevent  8.6  ACPI Fixed Event Management
atgpe         8.7  ACPI General Purpose Event Management
athandlers    8.8  ACPI Miscellaneous Handler Support
atresource    8.9  ACPI Resource Management
atmemory      8.10 Memory Management

spec          Directory which the test specifications are located in.

atinit.c atinit.h
atmemory.c atmemory.h
athardware.c athardware.h
attable.c attable.h
atnamespace.c atnamespace.h
atresource.c atresource.h
atfixedevent.c atfixedevent.h
atgpe.c atgpe.h
athandlers.c athandlers.h

Each test case is represented by the pair of .c and .h files with the same filename.

atcommon.h atexec.c atosxfwrap.c atosxfwrap.h atosxfctrl.c atosxfctrl.h atmain.c
Auxiliary testing modules and the TS main file.

oswinxf.c osunixxf.c
Relevant copies of the same files from the os_specific/service_layers directory, updated to be used in the AAPITS utility with the sed command:

sed -i s/^AcpiOs/AcpiOsActual/ os*xf.c

Then add the following line:

#include "acdebug.h"
+ #include "atosxfwrap.h"

asl           Directory which supporting ASL codes are located in.
bin           Directory which supporting shell utilities are located in.
AcpiApiTS.dsp MSVC project to compile the TS utility under Windows. Usage: copy to the generate/msvc catalog and insert into the AcpiComponents.dsw workspace.
Makefile      Makefile based on the AcpiExec utility one, supporting compilation of the TS utility under Linux. Usage: copy the aapits directory tree to the relevant acpica-unix-*/tools directory and perform the make command to generate the aapits binary.

To run a particular test, type the following:

./aapits <test_case> <test_num> <aml_dir>

Here <test_case> is from 1 (init) to 9 (handlers), <test_num> is the appropriate assertion number (see the files in the spec dir), and <aml_dir> is the directory with the auxiliary AML files (actually it is the ./tmp/aml directory, which is created by make in the asl directory). For example, to run assertion 0041 of the Hardware Management test case, type:

./aapits 3 41 ./tmp/aml

README        This file
|
https://pagure.io/aapits
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Impro Corp streamlines sales Web site
Contact:
Lindsey Quinn Arroyo
Marketing Agent
Impro Corp
800.554.5912 ext.8644
FOR IMMEDIATE RELEASE
PITTSBURGH (July 7, 2006) – To address the needs of its varied customer base, Impro Corp has designed and recently put into service a new format for its sales Web site. The improved log-in system at grants each customer access to the appropriate “channel” to meet his or her unique needs for printer, fax machine and copier supplies.
Separate channels for cartridge traders, copy shops, dealers, dropshippers, consignment clients and wholesalers ensure that each customer receives the most relevant information about packaging and product compatibility. With password-secured log-in, visitors to the site can store information on multiple credit cards and shipping addresses and can view their past and pending orders.
A “Search by Model” function scans an inventory of more than 5,000 high-quality, name-brand products available at Impro’s great prices. The ordering process is simple, streamlined and fast, with same-day shipping.
To register for your secured password and channel access, and to begin your online shopping, visit the site today.
###
Impro Corp is an international trading firm specializing in office machine consumables. Established in 1979, the company focuses primarily on surplus asset remarketing and cross-border trade; complex niche market opportunities in which proprietary technologies, processes and experience allow maintenance of market leadership. Visit the company's Web site to learn more.
|
http://tonernews.com/forums/topic/webcontent-archived-14282/
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
I need to allocate contiguous space for a 3D array. (EDIT:) I GUESS I SHOULD HAVE MADE THIS CLEAR IN THE FIRST PLACE but in the actual production code, I will not know the dimensions of the array until run time. I provided them as constants in my toy code below just to keep things simple. I know the potential problems of insisting on contiguous space, but I just have to have it. I have seen how to do this for a 2D array, but apparently I don't understand how to extend the pattern to 3D. When I call the function to free up the memory,
free_3d_arr, I get the following output:
lowest lvl
mid lvl
a.out(2248,0x7fff72d37000) malloc: *** error for object 0x7fab1a403310: pointer being freed was not allocated
#include <stdio.h>
#include <stdlib.h>
int ***calloc_3d_arr(int sizes[3]){
int ***a;
int i,j;
a = calloc(sizes[0],sizeof(int**));
a[0] = calloc(sizes[0]*sizes[1],sizeof(int*));
a[0][0] = calloc(sizes[0]*sizes[1]*sizes[2],sizeof(int));
for (j=0; j<sizes[0]; j++) {
a[j] = (int**)(a[0][0]+sizes[1]*sizes[2]*j);
for (i=0; i<sizes[1]; i++) {
a[j][i] = (int*)(a[j]) + sizes[2]*i;
}
}
return a;
}
void free_3d_arr(int ***arr) {
printf("lowest lvl\n");
free(arr[0][0]);
printf("mid lvl\n");
free(arr[0]); // <--- This is a problem line, apparently.
printf("highest lvl\n");
free(arr);
}
int main() {
int ***a;
int sz[] = {5,4,3};
int i,j,k;
a = calloc_3d_arr(sz);
// do stuff with a
free_3d_arr(a);
}
Since you are using C, I would suggest that you use real multidimensional arrays:
int (*a)[sz[1]][sz[2]] = calloc(sz[0], sizeof(*a));
This allocates contiguous storage for your 3D array. Note that the sizes can be dynamic since C99. You access this array exactly as you would with your pointer arrays:
for(int i = 0; i < sz[0]; i++) {
    for(int j = 0; j < sz[1]; j++) {
        for(int k = 0; k < sz[2]; k++) {
            a[i][j][k] = 42;
        }
    }
}
However, there are no pointer arrays under the hood, the indexing is done by the magic of pointer arithmetic and array-pointer-decay. And since a single
calloc() was used to allocate the thing, a single
free() suffices to get rid of it:
free(a); //that's it.
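To make the technique concrete, here is a minimal, self-contained sketch. The dimensions are hard-coded only for brevity; since C99 they may be run-time values, which is exactly the case described in the question:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int sz[] = {5, 4, 3}; /* in production these would come from run-time input */

    /* one contiguous allocation; a points to an array of sz[1] x sz[2] ints */
    int (*a)[sz[1]][sz[2]] = calloc(sz[0], sizeof(*a));
    if (a == NULL) return 1;

    for (int i = 0; i < sz[0]; i++)
        for (int j = 0; j < sz[1]; j++)
            for (int k = 0; k < sz[2]; k++)
                a[i][j][k] = i + j + k;

    printf("%d\n", a[4][3][2]); /* prints 9 */

    free(a); /* a single free matches the single calloc */
    return 0;
}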
|
https://codedump.io/share/oOxDnrswRJfl/1/allocating-contiguous-memory-for-a-3d-array-in-c
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
How to make the default state 'To Submit' while creating a Leave Request?
I used the defaults attribute and set the state field value to 'draft', but once I save the form it goes to the 'To Approve' state.
Help me out to resolve this
Thanks in Advance
Just override the create method of your model, like:
from openerp import api, models
class leave(models.Model):
_inherit = "module.model_leave"
@api.model
@api.returns('self', lambda value: value.id)
def create(self, vals):
vals['state'] = 'approve'
return super(leave, self).create(vals)
In the create method the state value is coming as 'draft', but once the form is saved it goes to the confirm state.
The create method gets executed when the record is about to be saved, so the record will be displayed in the form as draft (using your defaults), and when the user clicks save the state field will become approve in the create method.
Due to the workflows defined for the object, the state value set in create is not taking effect.
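If a workflow is what keeps overriding the state, one common workaround is to fire the matching workflow signal right after creation instead of writing the state column directly. This is only a sketch: it assumes Odoo 8's old workflow API (signal_workflow) and a hypothetical 'confirm' signal name, both of which you should verify against your module's workflow definition:

from openerp import api, models

class leave(models.Model):
    _inherit = "hr.holidays"  # assumed model name

    @api.model
    def create(self, vals):
        record = super(leave, self).create(vals)
        # Push the record through the workflow instead of writing state directly;
        # 'confirm' is a hypothetical signal name taken from the workflow definition.
        record.signal_workflow('confirm')
        return record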
|
https://www.odoo.com/forum/help-1/question/how-to-made-default-state-as-to-submit-while-creating-leave-request-92260
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Insights for your business
Blue Fish Guide to Tracking ROI in Marketing
When you decide you are ready for your business to grow, you need to invest in marketing. But like any investment, you need to be sure you can determine its return (ROI – Return on
Featured
Don’t Replicate Magento 1 in Magento 2, Think about your MVP
Better performance and scalability! Streamlined checkout! More mobile-friendly and better content management! B2B enhancements! Chances are you’ve already heard about all the new and improved
Using a Customer Journey Map to Define Your Business
What is a Customer Journey Map? A customer Journey Map is an end-to-end timeline visualization of a hypothetical customer’s interactions with a business. While many
Why Does ECM Implementation Take SO Long?!
Once you’ve committed to implementing an ECM for your company, a few questions may arise during the process. You might wonder why a place to
Seilevel And Blue Fish Join Forces
For 18 years Seilevel has been helping Fortune 1000 companies figure out what to build and Blue Fish has been helping those same customers build …
What B2B ECommerce Companies Need to Know About the New Federal Sales Tax Laws
Last June the Supreme Court decided to allow state and local governments to impose sales taxes on online businesses which sell to customers located within …
Case Study: Premier Research Labs
The Golden Years For decades, Texas Supplements has enjoyed a sterling reputation as one of the nation’s leading nutraceutical manufacturers. Thanks to their high quality …
Introducing the new CalOptix.com
CalOptix provides leading brands and innovative products in the optical accessories and over the counter eye wear market to thousands of optical retailers and mass …
2017 Sales Tax Changes
Happy New Year! With the new year comes new sales and use tax rules. New sales and use tax rules are in effect as of …
New Features in Ephesoft Transact 4.1
At the Ephesoft Innovate conference in October, 2016, the Ephesoft team highlighted the new features that would be part of the Ephesoft 4.1 release. Ephesoft …
Sales Tax Nexus: Everything you want to know
What is Sales Tax Nexus? Sales Tax Nexus is also called “sufficient physical presence” and is a legal term that refers to the requirement for …
Online Sales Tax Showdown
2017 will mark the 25th anniversary of a U.S. Supreme Court decision that has exempted many online retailers from having to collect sales tax on …
Related Products Rules in Magento 2
Related Products, Up-sells, and Cross-sells are a powerful tool in Magento. Do you find yourself spending a lot of time managing Product Relationships for your …
Blue Fish goes live with Junior.Club
Junior.Club is a subscription golf club program for kids. The program offers high quality Callaway golf clubs to junior golfers at an affordable monthly cost. …
Advanced Shipping Rules in BigCommerce
Recently, one of our BigCommerce clients asked us to help them solve some of their advanced shipping problems. Up until that point, they were using …
Stencil & CircleCI
Stencil is a newly introduced framework that’s greatly improving the development process for BigCommerce stores. Developers are finding that Stencil is giving them powerful tools …
|
https://bluefishgroup.com/insights/page/3/
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
christian_picker_image 0.1.0
christian_picker_image #
Flutter plugin that allows you to upload multi image picker on iOS & Android.
Getting Started #
Example #
import 'package:christian_picker_image/christian_picker_image.dart';

class MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  void takeImage(BuildContext context) async {
    List<File> images = await ChristianPickerImage.pickImages(maxImages: 5);
    print(images);
    Navigator.of(context).pop();
  }

  Future _pickImage(BuildContext context) async {
    showDialog<Null>(
        context: context,
        barrierDismissible: false,
        builder: (BuildContext context) {
          takeImage(context);
          return Center();
        });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
          appBar: AppBar(
            title: const Text('Plugin example app'),
          ),
          body: Text("Christian Picker Image Demo"),
          floatingActionButton: Column(
            mainAxisAlignment: MainAxisAlignment.end,
            children: <Widget>[
              Padding(
                padding: const EdgeInsets.only(top: 16.0),
                child: FloatingActionButton(
                  onPressed: () {
                    _pickImage(context);
                  },
                  tooltip: 'Take a Photo',
                  child: const Icon(Icons.photo_library),
                ),
              ),
            ],
          )),
    );
  }
}
0.1.0 #
- Update github homepage
0.0.9 #
- Hot fix: Miss AndroidManifest.xml
0.0.8 #
- Fix bug: Cancel button is crash app.
0.0.7 #
- Update Android picker image
0.0.5 #
- Fix slow on return to Flutter when cancel action.
0.0.4 #
- Update empty array return to Flutter when cancel action.
0.0.3 #
- Fix bug on maxImages
- Add enableGestures option, default: true
0.0.2 #
- Down version iOS to 8.0 for some library
0.0.1 #
- TODO: Describe initial release.
example/README.md
christian_picker_image_example #
Demonstrates how to use the christian_picker_image plugin.
Run flutter format to format lib/christian_picker_image.dart.
|
https://pub.dev/packages/christian_picker_image
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
#include "log_service_imp.h"
#include <mysqld_error.h>
#include "../sql/sql_error.h"
#include <mysql/components/component_implementation.h>
#include <mysql/components/service_implementation.h>
#include <mysql/components/services/component_status_var_service.h>
#include <mysql/components/services/component_sys_var_service.h>
#include <mysql/plugin.h>
#include "../sql/set_var.h"
#include <mysql/components/services/log_builtins.h>
#include <mysql/components/services/log_builtins_filter.h>
#include <m_string.h>
There is a new filter engine in the server proper (components/mysql_server/log_builtins_filter.cc).
It can apply highly versatile filtering rules to log events. By default however, it loads a rule-set that emulates mysqld 5.7 behavior, so as far as the users are concerned, the configuration variables (–log_error_verbosity ...) and the behavior haven't changed (much).
The loadable service implemented in this file is noteworthy in that it does not implement a complete filtering service; instead, it implements a configuration language for the internal filter that gives users access to all its features (rather than just to the small 5.7 compatibility subset).
Therefore, this file contains the parsing of the new configuration language (the "configuration engine"), whereas log_builtins_filter.cc contains the filtering engine.
CONFIGURATION PARSING STAGE
As a courtesy, during parsing (e.g. "IF prio>=3 THEN drop."), the filter configuration engine checks whether it knows the field ("prio"), and if so, whether the storage class it expects for the field (integer) matches that of the argument (3). In our example, it does; if it didn't, the configuration engine would throw an error.
The same applies if a well-known field appears in the action (e.g. the action 'set log_label:="HELO".' in the rule 'IF err_code==1408 THEN set label:="HELO".'). label is a well-known field here, its well-known storage class is string, and since "HELO" is a string, all's well. (Otherwise, again, we'd throw an error.)
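For illustration, here is a sketch of how such rules are typically applied, assuming the component_log_filter_dragnet component and its dragnet.log_error_filter_rules system variable (activating the filter in log_error_services is a separate step not shown here):

-- install the loadable filter component (if not already present)
INSTALL COMPONENT 'file://component_log_filter_dragnet';

-- drop low-priority events and relabel a specific error code
SET GLOBAL dragnet.log_error_filter_rules =
  'IF prio>=3 THEN drop. IF err_code==1408 THEN set label:="HELO".';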
Check the proposed value for the component variable which contains the filter rules in human-readable format.
< result code from dump
Helper for dumping filter rules.
Append a string literal to a buffer. Used by log_filter_rule_dump().
Helper for dumping filter rules.
Append an item's data/value to a buffer. Used by log_filter_rule_dump().
Set filtering rules from human-readable configuration string.
< current rule
< previous rule, if any
< read position in submitted rules
< retry from here on misparse
< current token in input
< token's length
< counter
< return code for caller
< previous flow control command
< current flow control command
< rule that had the opening IF
< number of conditions in branch
< the rule-set we're creating
< implicit item for "unset"
< have half-finished rule?
De-initialization method for Component used when unloading the Component.
Helper: Does a field require a certain data class, or can it morph into whatever value we wish to assign to it? The former is the case if the field either has a generic (rather than well-known) type, or if it has no type at all (this is the case if a rule has an unnamed aux item).
Gets a token from a filter configuration.
Initialization entry method for Component used when loading the Component.
Set up a log-item from filtering rules.
Decompile an individual rule.
At this point, we only ever decompile rules we've previously compiled ourselves, so short of memory corruption or running out of space, this should not fail. We check for failure all the same so all this will remain safe if we ever allow decompiles of other components' rule-sets.
Dump an entire filter rule-set.
< return this result
< index of current rule
< rule to decompile
< current decompiled rule
< write pointer
< bytes left (out buffer)
< bytes used in a buffer
Set argument (i.e., the value) on a list-item.
If the item is of any generic type, we'll set the value, and adjust the type to be of an appropriate ad hoc type. If the item is of a well-known type, we'll set the value on it if it's of an appropriate type, but will fail otherwise. For this, an integer constitutes a valid float, but not vice versa. (A string containing nothing but a number is still not a number.)
Skip whitespace.
Helper for parsing. Advances a read-pointer to the next non-space character.
Find a given token in log_filter_xlate_keys[], the table of known tokens.
A token in the array will only be considered a valid match if it features at least one flag requested by the caller (i.e. if it is of the requested class – comparator, action-verb, etc.). Used by log_filter_dragnet_set() to convert tokens into opcodes.
Find a given opcode in log_filter_xlate_keys[], the table of known tokens.
An opcode in the array will only be considered a valid match if it features at least one flag requested by the caller (i.e. if it is of the requested class – comparator, action-verb, etc.). Used by log_filter_rule_dump() to convert opcodes into printable tokens.
limits and default for sysvar
Update value of component variable.
filter built-ins
accessor built-ins
string built-ins
notify built-in
sysvar containing rules
Array of known tokens in the filter configuration language.
|
https://dev.mysql.com/doc/dev/mysql-server/latest/log__filter__dragnet_8cc.html
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
Adding the Telerik Controls to Your Project
Adding the Telerik® UI for ASP.NET AJAX controls to your application or WebForm is straightforward and this article explores the requirements and the most common ways to do that.
This article contains the following sections:
Prerequisites — the main requirements the server, development machine and current Web Application/Web Site must satisfy so you can use the Telerik controls.
Adding Telerik Controls to a WebForm — explains how you can add and use the controls themselves on a form after the core requirements are satisfied.
Prerequisites
To add Telerik® UI for ASP.NET AJAX to an existing ASP.NET web application you need to follow these steps:
Make sure you have installed ASP.NET AJAX, which comes as part of .NET 3.5+ installations.
If your web application is not using ASP.NET AJAX you need to configure it to do so. Detailed instructions can be found in Microsoft's ASP.NET AJAX documentation.
Add the needed HTTP handlers in the web.config as described in the web.config Settings Overview article.
You can use the Telerik Creation and Configuration Wizard to get the needed assemblies, their references and the web.config settings added to the solution.
Add a ScriptManager control on top of the page in which you are going to add any control.
ASP.NET
<asp:ScriptManager ID="ScriptManager1" runat="server" />
If your page is a content page/user control you can add the ScriptManager control in your master/main page. For further details about the ScriptManager control you can check the MSDN documentation.
Alternatively, you can use a RadScriptManager which extends the standard ScriptManager control and adds more features to it.
Adding Telerik Controls to a WebForm
To add a Telerik control to an ASP.NET WebForm, you can use either of the following approaches:
Drag a Telerik Control from the Toolbox
The easiest way to add a Telerik Control is by dragging its icon from the Visual Studio .NET Toolbox in Design mode. Visual Studio will automatically copy the Telerik.Web.UI.dll to the bin folder of your web-application and will create the respective references.
If you do not see the controls in the toolbox, examine the Adding the Telerik Controls to the Visual Studio Toolbox article.
Add a Telerik Control Manually to the Form
You can add any Telerik Control manually to the page by following the instructions below.
Copy the Telerik.Web.UI.dll from the binXX folder of the Telerik® UI for ASP.NET AJAX installation to the bin folder of your web application (where XX specifies the version of the .NET framework against which the assembly is built) and reference it. You can read more about the assemblies that come with your installation in the Included Assemblies article.
Open your aspx/ascx file and add the Telerik® UI for ASP.NET AJAX Register directive at the top so that Visual Studio recognizes our control tags:
ASP.NET
<%@ Register TagPrefix="telerik" Namespace="Telerik.Web.UI" Assembly="Telerik.Web.UI" %>
If many pages in your application will use Telerik controls, you can add the following lines in your web.config file so you don't need to add the register directive in every page/user control.
XML
<pages> <controls> <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" /> </controls> </pages>
Write the product tags in the body of the WebForm. For example:
ASP.NET
<telerik:RadScheduler ID="RadScheduler1" runat="server" />
AJAX-based controls like ours must be placed on the page after ScriptManager's declaration and inside the
<form> tag.
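Putting these pieces together, a minimal page could look like the sketch below; the control IDs and the choice of RadScheduler are illustrative rather than required:
ASP.NET
<%@ Page Language="C#" %>
<%@ Register TagPrefix="telerik" Namespace="Telerik.Web.UI" Assembly="Telerik.Web.UI" %>
<html>
<body>
    <form id="form1" runat="server">
        <asp:ScriptManager ID="ScriptManager1" runat="server" />
        <telerik:RadScheduler ID="RadScheduler1" runat="server" />
    </form>
</body>
</html>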
Configuring Controls
To configure a control, you can:
Use the built-in properties from the markup or the code-behind.
Use the inner tags of the control.
Use the configuration wizard in the Visual Studio Designer.
You can read more about the individual controls' properties and features in their respective sections in the documentation, demos and by using the intellisense in Visual Studio.
|
https://docs.telerik.com/devtools/aspnet-ajax/general-information/adding-the-telerik-controls-to-your-project
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
- Hurricane Labs
- Apr 29, 2015
- Tested on Splunk Version: N/A
Ready for a how-to on making Splunk do the work for you when it comes to decrypting passwords? In this blog post, Tim will give you a way to streamline this entire process.
Splunk is great at keeping plain-text passwords out of configuration files. Each Splunk server generates its own salt when it starts for the first time. So, this means the encrypted password can't just be copied to another Splunk server. However, I need to be able to copy the configurations from the existing infrastructure when we're setting up a new Splunk server. In this blog post, I'm going to give you a how-to on streamlining this entire process by making Splunk do the work for you when it comes to decrypting passwords.
The most common password that I need to decrypt is the LDAP bindDNpassword, which is used to authenticate Splunk users. The only alternative is to reset the password for the service account and update it everywhere that uses it. There has got to be a better way!
I have found a way to make Splunk decrypt this password for me. I use a new dev instance of Splunk to perform this procedure, to eliminate the risk of breaking a production server. It needs to be a fresh install of Splunk that hasn’t been started yet. Splunk keeps its salt in $SPLUNK_HOME/etc/auth/splunk.secret. So, I need to copy this file from the source server to my dev Splunk instance. After the file is copied over, I can then start Splunk.
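As a sketch, with hypothetical host names and default install paths that you would adjust to your environment, the copy looks like this:

# on the fresh dev instance, before Splunk has ever been started
scp splunk@prod-server:/opt/splunk/etc/auth/splunk.secret /opt/splunk/etc/auth/splunk.secret

# now the dev instance will decrypt with the production salt
/opt/splunk/bin/splunk start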
Now, I can create a Splunk app with an app.conf file that has the password. From the app.conf spec the format for the credential is:
[credential:<realm>:<username>]
password = <string>
So, I add the following to $SPLUNK_HOME/etc/apps/test_app/local/app.conf, for example:
[credential::test]
password = $1$ftbB4rpE71vqrtiM74TP
Then, I create the following script in $SPLUNK_HOME/etc/apps/test_app/bin/test.py:
import splunk.entity as entity
import splunk.auth, splunk.search

def getCredentials(sessionKey):
    myapp = 'test_app'
    try:
        # list all credentials
        entities = entity.getEntities(['admin', 'passwords'], namespace=myapp,
                                      owner='nobody', sessionKey=sessionKey)
    except Exception, e:
        raise Exception("Could not get %s credentials from splunk. "
                        "Error: %s" % (myapp, str(e)))

    credentials = []
    for i, c in entities.items():
        credentials.append((c['username'], c['clear_password']))
    # return credentials, or fail if none were found
    if credentials:
        return credentials
    raise Exception("No credentials have been found")

sessionKey = splunk.auth.getSessionKey('admin', 'changeme')
credentials = getCredentials(sessionKey)
for username, password in credentials:
    print username
    print password
NOTE: Make sure you change the app name and the Splunk username and password to match your environment. I used “test_app” for my app name, and my dev instance of Splunk just uses the default Splunk username/password.
Once I restart Splunk, I am ready to run the script to decrypt this password:
$SPLUNK_HOME/bin/splunk cmd python $SPLUNK_HOME/etc/apps/test_app/bin/test.py
I get the following output:
test plain text password
Now, I can use that password on my new Splunk server. I also make sure to delete my dev Splunk instance so that when I need to test something else, it’s not using the splunk.secret from my production environment.
If you're looking for something different than the typical "one-size-fits-all" security mentality, you've come to the right place.
|
https://www.hurricanelabs.com/splunk-tutorials/make-splunk-do-it-how-to-decrypt-passwords-encrypted-by-splunk
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
CARM User's Guide
#include <stdio.h>
int vprintf (
const char *fmtstr, /* pointer to format string */
char *argptr); /* pointer to argument list */
The vprintf function formats a series of strings and
numeric values and builds a string to write to the output stream
using the putchar function. This function is similar to the
printf function, but accepts a pointer to a list of arguments rather than a variable-length argument list.
Return Value

The vprintf function returns the number of characters
actually written to the output stream.
See Also: gets, puts, sprintf, sscanf, vsprintf
#include <stdio.h>
#include <stdarg.h>
void error (char *fmt, ...) {
va_list arg_ptr;
va_start (arg_ptr, fmt); /* format string */
vprintf (fmt, arg_ptr);
va_end (arg_ptr);
}
void tst_vprintf (void) {
int i;
i = 1000;
/* call error with one parameter */
error ("Error: '%d' number too large\n", i);
/* call error with just a format string */
error ("Syntax.
|
http://www.keil.com/support/man/docs/ca/ca_vprintf.htm
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
Greetings, everyone!
I installed Cygwin on a new disk, and I am trying to compile, that is, to actually test it and experiment.
Дмитро@Komp /cygdrive/f/HeloWorld
$ g++ ublas_test.cpp -o ublas_test
ublas_test.cpp:1:42: fatal error: boost/numeric/ublas/vector.hpp: No such file or directory
#include <boost/numeric/ublas/vector.hpp>
^
compilation terminated.
I am testing Cygwin and trying to compile. Why is my Cygwin installed incorrectly?
Please explain to me how to set it up correctly.
I have not been able to get Cygwin working for days. Apparently the two compilers on my system cause an error either way, and I have not yet managed to compile even a fairly simple example.
Code:
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>
using namespace boost::numeric::ublas;

// "y = Ax" example
int main() {
    vector<double> x(2);
    x(0) = 1; x(1) = 2;

    matrix<double> A(2,2);
    A(0,0) = 0; A(0,1) = 1;
    A(1,0) = 2; A(1,1) = 3;

    vector<double> y = prod(A, x);
    std::cout << y << std::endl;
    return 0;
}
But it cannot find files that are definitely there? What is going on? setup-x86 itself installed the files into those directories; I have not changed anything there.
Help me set up or install cygwin correctly!
Either I do not understand something, or I did not install it correctly.
How do I fix this broken setup?
Ask me clarifying questions, or point me to resources where the right information can be found.
I would greatly appreciate help, or a pointer to a proper guide!
F:\HeloWorld\ublas_test.cpp
F:\Cygwin2\usr\include\boost\numeric\ublas\vector. hpp
cd F:\HeloWorld
Дмитро@Komp /cygdrive/f/HeloWorld
$ g++ -IF:/Cygwin2/usr/include ublas_test.cpp -o ublas_test
In file included from D:/MinGW/lib/gcc/mingw32/5.1.0/include/c++/mingw32/bits/gthr.h:148:0,
from D:/MinGW/lib/gcc/mingw32/5.1.0/include/c++/ext/atomicity.h:35,
from D:/MinGW/lib/gcc/mingw32/5.1.0/include/c++/bits/ios_base.h:39,
from D:/MinGW/lib/gcc/mingw32/5.1.0/include/c++/ios:42,
from D:/MinGW/lib/gcc/mingw32/5.1.0/include/c++/ostream:38,
from D:/MinGW/lib/gcc/mingw32/5.1.0/include/c++/iterator:64,
from F:/Cygwin2/usr/include/boost/operators.hpp:98,
from F:/Cygwin2/usr/include/boost/serialization/strong_typedef.hpp:27,
from F:/Cygwin2/usr/include/boost/serialization/collection_size_type.hpp:10,
from F:/Cygwin2/usr/include/boost/serialization/array_wrapper.hpp:22,
from F:/Cygwin2/usr/include/boost/serialization/array.hpp:26,
from F:/Cygwin2/usr/include/boost/numeric/ublas/storage.hpp:21,
from F:/Cygwin2/usr/include/boost/numeric/ublas/vector.hpp:21,
from ublas_test.cpp:1:
D:/MinGW/lib/gcc/mingw32/5.1.0/include/c++/mingw32/bits/gthr-default.h: In function 'int __gthread_yield()':
D:/MinGW/lib/gcc/mingw32/5.1.0/include/c++/mingw32/bits/gthr-default.h:694:33: error: 'sched_yield' was not declared in this scope
return __gthrw_(sched_yield) ();
^
Thus, no compilation has succeeded for days now! I just do not know how to set it up. What's more, I want to properly understand how to actually use Cygwin: in which situations it works especially well, and what its different features are.
|
https://cboard.cprogramming.com/cplusplus-programming/177737-cygwin-correct-setup-great-full-installation-proper-usage-features-post1287444.html?s=76b9ce6b368249530fc56176b19a1c8d
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
public class OpenEditorActionGroup extends ActionGroup
This class may be instantiated; it is not intended to be subclassed.
getContext, setContext, updateActionBars
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public OpenEditorActionGroup(IViewPart part)
Creates a new OpenActionGroup. The group requires that the selection provided by the part's selection provider is of type
org.eclipse.jface.viewers.IStructuredSelection.
part- the view part that owns this action group
public OpenEditorActionGroup(IWorkbenchPartSite site, ISelectionProvider specialSelectionProvider)
Creates a new OpenEditorActionGroup. The group requires that the selection provided by the given selection provider is of type
IStructuredSelection.
site- the site that will own the action group.
specialSelectionProvider- the selection provider used instead of the sites selection provider.
public OpenEditorActionGroup(org.eclipse.jdt.internal.ui.javaeditor.JavaEditor editor)
Creates a new OpenEditorActionGroup for the given Java editor.
editor- the Java editor
public IAction getOpenAction()
Returns: the open action, or null if the group doesn't provide any open action
public void fillActionBars(IActionBars actionBar)
Overrides: fillActionBars in class
ActionGroup
public void fillContextMenu(IMenuManager menu)
Overrides: fillContextMenu in class
ActionGroup
public void dispose()
Overrides: dispose in class
ActionGroup
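As an illustrative sketch of typical usage (the view subclass and its menu wiring are assumptions about the host view, not part of this class's contract):

import org.eclipse.jface.action.IMenuManager;
import org.eclipse.ui.actions.ActionContext;
import org.eclipse.ui.part.ViewPart;
import org.eclipse.jdt.ui.actions.OpenEditorActionGroup;

public abstract class MyView extends ViewPart {
    // created once the view's controls and selection provider exist
    private OpenEditorActionGroup fOpenEditorGroup = new OpenEditorActionGroup(this);

    protected void fillContextMenu(IMenuManager menu) {
        // hand the current selection to the group, then let it contribute its actions
        fOpenEditorGroup.setContext(new ActionContext(
                getSite().getSelectionProvider().getSelection()));
        fOpenEditorGroup.fillContextMenu(menu);
        fOpenEditorGroup.setContext(null);
    }

    public void dispose() {
        fOpenEditorGroup.dispose(); // release listeners when the view closes
        super.dispose();
    }
}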
Copyright (c) 2000, 2015 Eclipse Contributors and others. All rights reserved.Guidelines for using Eclipse APIs.
|
https://help.eclipse.org/mars/topic/org.eclipse.jdt.doc.isv/reference/api/org/eclipse/jdt/ui/actions/OpenEditorActionGroup.html
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
In tutorial 1, we reviewed basics of Python and how Numpy extends vanilla Python for many tasks in scientific computing.
In this tutorial, we will go over two libraries, Matplotlib for data visualization and PyTorch for machine learning.
Matplotlib is a plotting library. This section gives a brief introduction to the
matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
import numpy as np
import matplotlib.pyplot as plt
The most important function in
matplotlib.pyplot is
plot, which allows you to plot 2D data. Here is a simple example:
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)

# Plot the points using matplotlib
plt.plot(x, y)
[<matplotlib.lines.Line2D at 0x10e62f0f0>]
With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:
y_sin = np.sin(x)
y_cos = np.cos(x)

# Can plot multiple graphs
plt.plot(x, y_sin)
plt.plot(x, y_cos)

# Set x and y label
plt.xlabel('x axis label')
plt.ylabel('y axis label')

# Set title and legend
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
<matplotlib.legend.Legend at 0x10e72fcf8>
You can plot different things in the same figure using the subplot function. Here is an example:
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)

# Set up a subplot grid that has height 2 and width 1.
# This sets the first such subplot as active.
plt.subplot(2, 1, 1)

# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')

# Set the second subplot as active
plt.subplot(2, 1, 2)

# Make the second plot.
plt.plot(x, y_cos)
plt.title('Cosine')

# Show the figure.
plt.show()
The imshow function from the
pyplot module can be used to show images. For example:
img = plt.imread('cute-kittens.jpg')
print(img)
[[[ 2 2 2] [ 0 0 0] [ 0 0 0] ... [ 0 0 0] [ 0 0 0] [ 0 0 0]] [[ 1 1 1] [ 0 0 0] [ 0 0 0] ... [ 0 0 0] [ 0 0 0] [ 0 0 0]] [[ 0 0 0] [ 0 0 0] [ 1 1 1] ... [ 0 0 0] [ 0 0 0] [ 0 0 0]] ... [[106 102 91] [ 95 88 78] [103 94 85] ... [137 126 120] [141 130 124] [146 135 129]] [[ 94 90 79] [ 99 92 82] [109 100 91] ... [120 109 103] [121 110 104] [126 115 109]] [[103 99 88] [102 95 85] [101 92 83] ... [128 117 111] [129 118 112] [134 123 117]]]
# Show the original image
plt.imshow(img) # Similar to plt.plot but for image
plt.show()
Note that each cell in an image is composed of 3 color channels (i.e. RGB color). Often the last axis is used for color channels, in the order of red, green, and blue.
print(img.shape) # 460 width x 276 height x RGB (3 channels)
(276, 460, 3)
# Displaying only red color channel
plt.imshow(img[:, :, 0])
plt.show()
PyTorch is a Python-based scientific computing package. PyTorch is currently, along with Tensorflow, one of the most popular machine learning libraries.
PyTorch, at its core, is similar to Numpy in the sense that they both provide multi-dimensional arrays and operations over them.
However, compared to Numpy, PyTorch offers much better GPU support and provides many high-level features for machine learning. Technically, Numpy can be used to perform almost everything PyTorch does. However, Numpy would be a lot slower than PyTorch, especially with a CUDA GPU, and it would take more effort to write machine learning related code compared to using PyTorch.
Mathematically speaking, a tensor is an object for representing multi-dimensional arrays, and it can be thought of as a generalization of vectors and matrices. A tensor extends a vector (a 1-D grid of numbers) and a matrix (a 2-D grid of numbers) to represent a structure of any dimension.
In PyTorch,
tensor is similar to Numpy's
ndarray but can be used on a GPU to accelerate computing.
tensor can be created using initialization functions, similar to ones for
ndarray.
import torch
x = torch.empty(5, 3)
print(x)
tensor([[0.0000e+00, 2.0000e+00, 0.0000e+00], [2.0000e+00, 1.8217e-44, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 9.2196e-41], [0.0000e+00, 0.0000e+00, 0.0000e+00]])
x = torch.rand(5, 3)
print(x)
tensor([[0.7518, 0.0221, 0.1475], [0.6794, 0.4572, 0.6822], [0.3718, 0.1297, 0.7393], [0.9782, 0.9275, 0.1059], [0.3904, 0.8096, 0.8896]])
x = torch.zeros(5, 3, dtype=torch.long) # explicitly specify data type
print(x)
tensor([[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]])
tensor can also be created from array-like data such as
ndarray or other
tensors
x = torch.tensor([5.5, 3]) # From Python list
print(x)
tensor([5.5000, 3.0000])
np_array = np.arange(6).reshape((2, 3))
torch_tensor = torch.from_numpy(np_array) # From ndarray
print(np_array)
print(torch_tensor)

np_array_2 = torch_tensor.numpy() # Back to ndarray from tensor
print(np_array_2)
[[0 1 2] [3 4 5]] tensor([[0, 1, 2], [3, 4, 5]]) [[0 1 2] [3 4 5]]
x = torch.ones(5, 3)
print(x)
x *= 2
print(x)
tensor([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]) tensor([[2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.]])
y = torch.rand(5, 3)
print(y)
print(x + y)
print(x * y)
tensor([[0.5417, 0.8398, 0.7194], [0.1662, 0.6120, 0.1901], [0.3853, 0.8248, 0.2068], [0.9483, 0.7665, 0.0429], [0.8464, 0.6350, 0.2197]]) tensor([[2.5417, 2.8398, 2.7194], [2.1662, 2.6120, 2.1901], [2.3853, 2.8248, 2.2068], [2.9483, 2.7665, 2.0429], [2.8464, 2.6350, 2.2197]]) tensor([[1.0833, 1.6796, 1.4388], [0.3323, 1.2241, 0.3801], [0.7706, 1.6495, 0.4137], [1.8965, 1.5329, 0.0858], [1.6928, 1.2701, 0.4395]])
# Using different syntax for the same operations above
print(torch.add(x, y))
tensor([[2.5417, 2.8398, 2.7194], [2.1662, 2.6120, 2.1901], [2.3853, 2.8248, 2.2068], [2.9483, 2.7665, 2.0429], [2.8464, 2.6350, 2.2197]])
# Inplace operation
x.add_(y)
print(x)
tensor([[2.5417, 2.8398, 2.7194], [2.1662, 2.6120, 2.1901], [2.3853, 2.8248, 2.2068], [2.9483, 2.7665, 2.0429], [2.8464, 2.6350, 2.2197]])
# Using the same indexing syntax from Python list and Numpy
print(x[1:4, :])
tensor([[2.1662, 2.6120, 2.1901], [2.3853, 2.8248, 2.2068], [2.9483, 2.7665, 2.0429]])
print(x.shape) # Similar to Numpy
torch.Size([5, 3])
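Finally, a quick sketch of the GPU support mentioned above; this assumes a CUDA-capable GPU may or may not be present and falls back to the CPU otherwise:

# Move computation to the GPU when one is present
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.rand(5, 3, device=device)  # create the tensor directly on the device
y = torch.ones(5, 3).to(device)      # or move an existing tensor over
z = x + y                            # the addition runs on the GPU if available
print(z.device)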
|
https://nbviewer.jupyter.org/urls/www.cs.toronto.edu/~lczhang/360/lec/w01/pytorch.ipynb
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
Because cin can leave a terminating character in the stream, not using the cin.ignore() and cin.get() functions could cause small problems with your code. The user gets to see what is being written to the screen after they enter the values, as can be seen here:
#include <iostream>
using namespace std;

int main () {
    int a, b, max;
    cout << "Enter a: ";
    cin >> a;
    cout << "Enter b: ";
    cin >> b;
    //remove the terminating character
    std::cin.ignore();
    if (a > b) { max = a; }
    else { max = b; }
    cout << "Max: " << max << endl;
    //Extracts characters from the stream
    std::cin.get();
    return 0;
}
|
http://www.devx.com/tips/cpp/functions/using-cin.ignore-and-cin.get-functions-170128073012.html
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
Just change your user mapping to emccode instead of nfsuser01. Then umount and mount the export again and run ls -la. You should now have the correct mappings in place and be able to create/edit/view files.
Are you an EMC employee? If so, we can do a WebEx to get this figured out.
-Ben
Or, like you mentioned, create a new object user nfsuser01, create and new bucket and make that user the owner. Then, create another export for that namespace that uses the new bucket and try to mount it.
I still can't get this to work. I've been able to mount the export but when I try to create a file I get permission denied. I know I'm missing something small here. Could you do me a favor and basically write the steps, maybe even screenshot the pages? I've tried to do it from scratch and these are the steps I'm doing.
1. Create an object user named nfs_user. I generate a s3 password for this user.
2. Create a bucket named nfs_bucket and make nfs_user the owner. I also enable file system. I put a group that hasn't been defined anywhere named nfs_group and assign it read, write and execute. I make the retention 1 second.
3. I go to file --> user/ group mapping and create a new user with the same name as the object user I created earlier and assign it an id of 30001
4. I go to file --> user/ group mapping and create a new group with the same name as the group I used earlier during bucket creation and assign it an id of 30002
5. I go to file --> exports and create a new export. I select the nfs_bucket I created in step 2.
6. I add an export host. I add * to the export host field, I select Read/Write for permissions, I select Sys for authentication, I allow mounting directories for anonuser I enter the nfs_user id I created of 30001, for anongroup I add the nfs_group id I created of 30002 and for rootsquash I add the nfs_user id of 30001.
At this point I would expect it to be working. I go to the linux client and run this.
[root@localhost /]# showmount -e 10.44.236.56
Export list for 10.44.236.56:
/ns1/test_nfs *
/ns1/nfs_bucket *
/ns1/nfs_test2 *
[root@localhost /]# mount -t nfs -o vers=3,sec=sys,proto=tcp 10.44.236.56:/ns1/nfs_bucket /nfsshare/
[root@localhost /]# ls -al /nfsshare/
total 1
drwxrwxrwx. 3 30001 30002 96 Apr 20 10:36 .
[root@localhost /]# touch /nfsshare/file1
touch: cannot touch ‘/nfsshare/file1’: Permission denied
[root@localhost /]#
What step am I missing here?
Everything looks okay except your export host should look like this:
If you still have permission errors after changing the AnonUser, AnonGroup and RootSquash, can you try mounting just to your bucket instead of down into a sub directory? So your mount command would be mount -t nfs -o vers=3,sec=sys,proto=tcp 10.44.236.56:/ns1/nfs_bucket
So check this out. I changed the users and groups in my export to the name instead of the ID and now I can create directories but I get this weird I/O error when I try and put a file in there.
[root@localhost nfsshare]# mkdir test_dir
[root@localhost nfsshare]# touch test_file
touch: cannot touch ‘test_file’: Remote I/O error
[root@localhost nfsshare]# ls -al
total 1
drwxrwxrwx. 3 30001 30002 96 Apr 20 18:02 .
drwxr-xr-x. 3 30001 30002 96 Apr 20 18:02 test_dir
Also I am mounting the nfs_bucket. The last part "/nfsshare/" is the location I'm mounting to.
Any ideas on the I/O error?
Hmm ... I'm starting to run out of ideas. Did you chmod 777 nfsshare before running the mount command? After you run the mount command, don't create any sub directories. Instead, try to just echo 'test data in test file' > test.txt right in the root of /nfsshare. Then, run ls -la. Finally, run cat test.txt. Can you paste all that back to me so I can see?
-Ben
What does the NFS export ACL look like from the ECS Administrator GUI?
Do you mean the acl of the bucket? The only acl information I know about for the nfs export is when you are configuring you user group mappings and you host export options. Both of which I have screenshots of above.
Yeah the same thing. I'm going to try and reinstall ecs.
[root@localhost ~]# chmod 777 /nfsshare/
drwxr-xr-x. 3 30001 30002 96 Apr 20 18:02 test_dir
[root@localhost nfsshare]# echo 'test data in test file' > test
-bash: test: Remote I/O error
[root@localhost nfsshare]# ls -al
total 1
drwxrwxrwx. 3 30001 30002 96 Apr 20 18:02 .
drwxr-xr-x. 3 30001 30002 96 Apr 20 18:02 test_dir
I apologize for not being more specific regarding the NFS export ACL. From the ECS Administrator GUI -> Manage -> File -> Exports. What does this screen display?
|
https://www.dell.com/community/ECS/nfs-export-question/m-p/7103063/highlight/true
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
Execution of a CLR object (user-defined function, user-defined type, or trigger) on the common language runtime can take place on multiple threads (parallel plan), if the query optimizer decides it is beneficial. However, if a user-defined function accesses data, execution will be on a serial plan. When executed on a server version prior to SQL Server 2008, if a user-defined function contains LOB parameters or return values, execution also must be on a serial plan.
The following table lists the topics covered in this section.
Getting Started with CLR Integration
Provides a brief overview of the libraries and namespaces required to compile object using CLR integration with SQL Server. Includes an example "Hello World" CLR stored procedure.
Supported .NET Framework Libraries
Provides information on the .NET Framework libraries supported by CLR integration.
CLR Integration Programming Model Restrictions
Provides information about CLR integration programming model restrictions.
SQL Server Data Types in the .NET Framework
An overview of SQL Server data types and their .NET Framework equivalents.
Overview of CLR Integration Custom Attributes
Provides information about CLR integration custom attributes.
CLR User-Defined Functions
Describes how to implement and use the various types of CLR functions: table-valued, scalar, and user-defined aggregate functions.
CLR User-Defined Types
Describes how to implement and use CLR user-defined types.
CLR Stored Procedures
Describes how to implement and use CLR stored procedures.
CLR Triggers
Describes how to implement and use CLR triggers.
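For a flavor of what these topics cover, a minimal "Hello World" CLR stored procedure in C# looks roughly like the following sketch (the class and method names are illustrative):

using Microsoft.SqlServer.Server;

public class StoredProcedures
{
    // Marks the method as a CLR stored procedure entry point
    [SqlProcedure]
    public static void HelloWorld()
    {
        // Send a message back to the client through the SqlContext pipe
        SqlContext.Pipe.Send("Hello World!");
    }
}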
See Also
Common Language Runtime (CLR) Integration Overview
|
https://docs.microsoft.com/en-us/sql/relational-databases/clr-integration/database-objects/building-database-objects-with-common-language-runtime-clr-integration
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
So, I have some model relations that look like this:
public class PopularityPartRecord : ContentPartRecord {
public virtual ICollection<PopularityResultRecord> Results { get; set; }
}
public class PopularityResultRecord {
public virtual int Id { get; set; }
public virtual string HitType { get; set; }
public virtual string Calculus { get; set; }
public virtual decimal Day { get; set; }
public virtual decimal Week { get; set; }
public virtual decimal Month { get; set; }
public virtual decimal AllTime { get; set; }
}
As you can see, I am attempting to gather statstics about content over various time periods so I can use it to sort by. I can have different metrics for doing so, for instance Total Comments in the last week, Total Views in the past month, etc. If I were
going to write SQL to order content by total view count in the last month, I would write something similar to this:
SELECT * FROM ContentItemRecord c1
LEFT JOIN PopularityResultRecord r1 ON
r1.PopularityPartRecord_Id = c1.Id AND
r1.HitType = 'Views' AND
r1.Calculus = 'Total'
ORDER BY r1.Month DESC
(This assumes that NHibernate places the appropriate reference column in the table)
I could also do this through LINQ queries.
I can't see a way to do anything like this using Hql joins or IAliasFactory, so I can't provide a nice projection filter form to select your sort criteria with. Is there anything that I'm missing or can the interfaces really just not do this?
Edit: It looks like the DefaultHqlQuery currently has no way of generating join expressions. Oh well.
Are you sure?
var query = _contentManager.HqlQuery().ForPart<DocumentPart>().Join(alias => alias.ContentPartRecord<TitlePartRecord>());
sfmskywalker wrote:
Are you sure?
var query = _contentManager.HqlQuery().ForPart<DocumentPart>().Join(alias => alias.ContentPartRecord<TitlePartRecord>());
Yes, it can generate joins but it can not do so as a complex expression, such as having multiple criteria in the join like in the sql example I have above.
I checked the HqlQuery code to verify this. Joins are limited to a single equivalence comparison.
|
https://orchard.codeplex.com/discussions/405061
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
Connecting to a Database
Introduction
Because Lift runs in any servlet container, you have several options for getting a database connection—just as you would in a traditional Java application.
DataSource through Java Naming and Dictionary Interface (JNDI)
That’s one hell of a title but don’t worry—it’s far, far simpler than it sounds! JNDI is one of those technologies that has been kicking around in the Java ecosystem for a long time and has changed as the years have passed. These days, most people are familiar with using JNDI to get a DataSource object for their applications through a servlet or application container such as Jetty or Tomcat.
If you want to run your application from a DataSource supplied by your container, you only need to do one of two things. The first (and more traditional) option is to add a reference to your web.xml file as shown in listing 1. This sets up the JDBC reference so it can be located via JNDI.
Listing 1 DataSource wire-up in web.xml
<resource-ref>
    <res-ref-name>jdbc/liftinaction</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
The second and Lift-specific route is to add one line of code in your Lift Boot class, as shown in listing 2. This is my preferred option when using JNDI because it is more in keeping with the Lift idioms of Boot configuration.
Listing 2 JNDI wire-up in Boot
DefaultConnectionIdentifier.jndiName = "jdbc/liftinaction"
You will need to configure your container to actually provide this DataSource object. Unfortunately, this is very product specific so I won’t labor that here; check the online documentation for the container you’re using.
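As one illustration only (the attribute values here, including the pool sizes, are made up for the example), a Tomcat context.xml entry providing this DataSource might look like:

<Context>
    <Resource name="jdbc/liftinaction" auth="Container"
              type="javax.sql.DataSource"
              driverClassName="org.apache.derby.jdbc.EmbeddedDriver"
              url="jdbc:derby:lift_example;create=true"
              username="" password=""
              maxActive="8" maxIdle="4"/>
</Context>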
Application connection
If you would rather not use JNDI or you want a quick and easy way to set up your database connection, Lift provides a helpful wrapper around the Java DriverManager class. Listing 3 shows an example of the code you need to have in your Boot class.
Listing 3 DriverManager JDBC wire-up
import net.liftweb.mapper.{DB,DefaultConnectionIdentifier}          #1
import net.liftweb.http.{LiftRules,S}                               #1

DB.defineConnectionManager(DefaultConnectionIdentifier, DBVendor)   #2

LiftRules.unloadHooks.append(                                       #3
  () => DBVendor.closeAllConnections_!())                           #3

S.addAround(DB.buildLoanWrapper)                                    #4
As you can see, there is a little more going on here than in the JNDI example. For completeness I have included the import statements so that it is clear where the various types are held (#1). #2 defines a wire-up between the application-specific DBVendor (as defined in listing 4) and Lift's connection manager. #3 details what Lift should do when it's shutting down in order to cleanly close any application database connections. #4 configures Lift's loan wrapper so that actions conducted on the DefaultConnectionIdentifier are transactional for the whole HTTP request cycle.
Listing 4 DBVendor definition
object DBVendor extends StandardDBVendor(
  Props.get("db.class").openOr("org.apache.derby.jdbc.EmbeddedDriver"),
  Props.get("db.url").openOr("jdbc:derby:lift_example;create=true"),
  Props.get("db.user"),
  Props.get("db.pass"))
Listing 4 demonstrates making an extension on the StandardDBVendor trait from Lift‘s Mapper. This handles connection pooling for you by using Apache Commons Pool so that everything is taken care of for you. This DBVendor pulls its connection string and credentials from a properties file. If the file and the key pair do not exist, it will failover onto using the in-memory Derby database.
JNDI OR APPLICATION CONNECTION?
Choosing between JNDI or application connection is something that causes a fair amount of debate among developers and infrastructure folks. Traditionally, getting DataSource objects through JNDI was seen as a preferred route because it negated hard-coding any kind of connection or drivers into your application. The value of this benefit, while still present for some people, has probably waned somewhat for most developers. For more tangible benefits, you would look to a container's honed connection pooling and distributed transaction support. Some people find great value here, while others do not. With libraries like Apache Commons Pool, it's fairly simple to implement robust connection pooling in your application directly. So there is no real answer; it depends where and what you are deploying as to which will work best. To that end, keep abreast of both approaches and see what works for you.
In the code for the tutorial online, you will see that both mechanisms are supported within the application. JNDI is actually my preferred route for deployment, but many Lift sites run perfectly well on application connections.
Summary
We have covered options for getting a database connection through JNDI. You can add a reference to the web.xml file or add one line of code in the Lift Boot class.
|
http://javabeat.net/datasource-through-java-naming-and-dictionary-interface-jndi/
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
Design Patterns - Facade Pattern
The Facade pattern hides the complexities of a system and provides a simplified interface through which the client can access it; a single facade class delegates calls to the existing classes of the system.
Implementation
Step 1
Create an interface.
Shape.java
public interface Shape {
   void draw();
}
Step 2
Create concrete classes implementing the same interface.
Rectangle.java
public class Rectangle implements Shape {
   @Override
   public void draw() {
      System.out.println("Rectangle::draw()");
   }
}
Square.java
public class Square implements Shape {
   @Override
   public void draw() {
      System.out.println("Square::draw()");
   }
}
Circle.java
public class Circle implements Shape {
   @Override
   public void draw() {
      System.out.println("Circle::draw()");
   }
}
Step 3
Create a facade class.
ShapeMaker.java
public class ShapeMaker {
   private Shape circle;
   private Shape rectangle;
   private Shape square;

   public ShapeMaker() {
      circle = new Circle();
      rectangle = new Rectangle();
      square = new Square();
   }

   public void drawCircle(){
      circle.draw();
   }
   public void drawRectangle(){
      rectangle.draw();
   }
   public void drawSquare(){
      square.draw();
   }
}
Step 4
Use the facade to draw various types of shapes.
FacadePatternDemo.java
public class FacadePatternDemo {
   public static void main(String[] args) {
      ShapeMaker shapeMaker = new ShapeMaker();

      shapeMaker.drawCircle();
      shapeMaker.drawRectangle();
      shapeMaker.drawSquare();
   }
}
Step 5
Verify the output.
Circle::draw()
Rectangle::draw()
Square::draw()
|
https://www.tutorialspoint.com/design_pattern/facade_pattern.htm
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
Selenium syntax for CodedUI
CodedUI:
BrowserWindow b = BrowserWindow.Launch(new Uri(""));
HtmlEdit searchBox = new HtmlEdit(b);
searchBox.SearchProperties.Add(HtmlEdit.PropertyNames.Id, "lst-ib");
Keyboard.SendKeys(searchBox, "codedUI Course{Enter}");
Selenium:
var driver = new ChromeDriver(@"driverfolder");
driver.Navigate().GoToUrl("");
var searchBox = driver.FindElement(By.Id("lst-ib"));
searchBox.SendKeys("codedUI Course{Enter}");

Wouldn't it be nice if the CodedUI version could be written in the same style? Something like this:
BrowserWindow b = BrowserWindow.Launch(new Uri(""));
var searchBox = b.FindElement<HtmlEdit>(By.Id("lst-ib"));
searchBox.SendKeys("codedUI Course{Enter}");

This can be achieved with a few extension methods. The FindElement extension looks as follows:
public static T FindElement<T>(this UITestControl container,
    Func<UITestControl, HtmlControl, HtmlControl> controlConstructorFunct)
    where T : HtmlControl, new()
{
    var control = new T { Container = container };
    controlConstructorFunct(container, control);
    return control;
}
now the magic is in the fact that we pass this function a function that can initialize the control we just instantiated with the right search properties.
This is the implementation of the By class I just mentioned. It looks like follows:
public class By {
    public static Func<UITestControl, HtmlControl, HtmlControl> Id(string id)
    {
        return (container, control) =>
        {
            control.SearchProperties.Add(HtmlControl.PropertyNames.Id, id);
            return control;
        };
    }
}

Finally, a couple of convenience extensions mirror Selenium's element methods:
public static void Click(this UITestControl control)
{
    Mouse.Click(control);
}

public static void SendKeys(this UITestControl control, string text)
{
    Keyboard.SendKeys(control, text);
}

The same approach can even support CSS selectors:
public static Func<UITestControl, HtmlControl, HtmlControl> CssSelector(string cssSelectorToFind)
{
    const string javascript = "return document.querySelector('{0}');";
    var scriptToExecute = string.Format(javascript, cssSelectorToFind);

    return (container, control) =>
    {
        var browserWindow = container as BrowserWindow;
        if (browserWindow == null)
            throw new ArgumentException("You can only use the CSSSelector function on a control of type BrowserWindow");

        var searchControl = browserWindow.ExecuteScript(scriptToExecute) as HtmlControl;
        var foundControltype = searchControl?.GetType();
        var returnType = control.GetType();
        if (foundControltype?.FullName == returnType.FullName)
        {
            control = searchControl;
        }
        else
        {
            throw new InvalidCastException(
                $"Unable to assign control found to type {returnType.FullName}, control is of type {foundControltype?.FullName}");
        }
        return control;
    };
}
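Putting it all together, a hypothetical call site (the CSS class name is invented for illustration) reads almost exactly like the Selenium version:

BrowserWindow b = BrowserWindow.Launch(new Uri(""));

// Selenium-style lookups on top of CodedUI
var searchBox = b.FindElement<HtmlEdit>(By.Id("lst-ib"));
searchBox.SendKeys("codedUI Course{Enter}");

var button = b.FindElement<HtmlButton>(By.CssSelector(".search-button"));
button.Click();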
|
http://blog.xebia.com/author/mdevriesxpirit-com/page/2/
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
PurpLev
8525 posts in 3364 days
08-31-2015 01:10 AM
Downsizing and clearing out most if not all woodworking machines. I figured I’d post here before I go in any public classified forums to give fellow LJs first pick. all machinery is in great condition, lightly used, and cared for.
posted with some reasonable prices based on condition and extras, open to offers.
The following are available for sale (I will update this post, as availability changes to keep this current and up to date) :
Selling LOCAL to Boston, MA area ONLY, PM me for contact information.
-- ㊍ When in doubt - There is no doubt - Go the safer route.
Lenny
1561 posts in 3243 days
#1 posted 08-31-2015 01:31 AM
Hi Sharon. Good luck selling these items. Are you willing to sell the extra planer blades separately? If so, how much?
-- On the eighth day God was back in His woodworking shop! Lenny, East Providence, RI
MrFid
836 posts in 1620 days
#2 posted 08-31-2015 01:32 AM
Wow some good stuff here. I’d buy that bandsaw for sure if I had the room, and the budget was right. It’s too much saw for me at the moment, and I don’t have the space either. Best of luck with your sale!
-- Bailey F - Eastern Mass.
Gixxerjoe04
850 posts in 1293 days
#3 posted 08-31-2015 01:46 AM
If only I were closer, don’t happen to have an incra ibox for sale do you?
Gerry
263 posts in 2957 days
#4 posted 08-31-2015 01:46 AM
Hi Sharon,So sorry to hear you’re downsizing, but certainly understand family comes first. You’ve always been a friend and an inspiration to me. You will be missed. I’m more into hand tools these days, as I can’t do it without them….. I’d be there in heart beat, but I’m seriously too far away. ( unless, of course, you have a router plane…..) Best of luck!!
-- -Gerry, Hereford, AZ ” A really good woodworker knows how the hide his / her mistakes.”
#5 posted 08-31-2015 02:50 PM
Thanks for all the responses. for all that complain about being too far – there is always the option for you to move closer ;)
Lenny – I have a few interested parties in the planer package, if nothing comes of it and I decide to part things out I’ll let you know
Cheers!
GeoffKatz
26 posts in 715 days
#6 posted 08-31-2015 08:25 PM
Oooh, even though I live in CT, that router table is tempting. My little plastic router table is begging to be retired. Nice workmanship.
#7 posted 09-01-2015 12:59 AM
oooh, and how could I forget… added lumber lot shorts, meds, and longs or various species, would include storage shelves/tables
#8 posted 09-01-2015 02:15 PM
Fair enough Sharon.
bearkatwood
1362 posts in 728 days
#9 posted 09-01-2015 03:15 PM
Why are you SOOO far away?? Nice stuff for sale, better than the rusty old junk I see for sale over here.
-- Brian Noel
JAY Made
201 posts in 1761 days
#10 posted 09-01-2015 04:06 PM
Wish I had the $ I would take the wood off your hands.
-- We all should push ourselves to learn new skills.
#11 posted 09-04-2015 12:31 AM
Is the lumber gone yet? I’d be interested in that. I am in Sudbury.
#12 posted 09-04-2015 12:57 AM
Is the lumber gone yet? I d be interested in that. I am in Sudbury.
- MrFid
- MrFid
it’s currently spoken for, but I’ll let you know if anything changes
tifftiff4
4 posts in 1070 days
#13 posted 09-04-2015 01:06 AM
Oh what i would do to be able to have some of those toys u have.
#14 posted 09-05-2015 05:43 PM
It’s been a pleasure meeting up some local LJs today.
Some things sold. some things still available.
Have a great holiday weekend y’all!
Regdor1999
16 posts in 2820 days
#15 posted 09-08-2015 12:01 PM
A bump, thanks, and thumbs up for PurpLev.
I was there this past weekend and met him, his beautiful family, and another one or two online forum members. It was nice chatting with all of them.
The table saw is nice… Someone should be checking
|
http://lumberjocks.com/topics/115970
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
Update 1/11/08: Between beta 2 and beta 3 there was a breaking change in the way events are fired during codegen. The metadata item that is being generated is no longer supplied as the "sender" of the event. Instead it comes in the TypeSource property of the eventArgs passed to the event handler. The code below has been modified to reflect this change. Thanks to hannah39 for pointing out the issue.
In response to the recent Entity Framework beta 2 release, I've gotten a question or two about how to take advantage of the new CodeGen Events feature, because we don't seem to have any good samples available to help folks wrap their head around this one. Then someone posted a question to the EF forum the other day asking if it's possible to add custom attributes to the generated classes, and I thought, "Aha! Two birds to be taken down with one stone." Even better, I sent a quick message to one of my teammates, Jeff Reed, who responded with a sample that was exactly what I was looking for, making my job especially easy.
The first thing you need to know about using CodeGen events is that both edmgen.exe, the command-line tool for things like generating EF classes from a conceptual schema, and the new EF designer integrated with Visual Studio are built on top of a public API which you can use in your own programs; when you use the API you can exercise more control over the process. The namespace where this API lives is System.Data.Entity.Design, and the reference docs for it are now available online.
The simplest way to use this is to just write a little console app which would replace edmgen.exe for the purpose of generating your classes. Basically you create an instance of EntityClassGenerator, register an event handler for the OnTypeGenerated or OnPropertyGenerated event, and then call the GenerateCode method passing in either two strings with the name of an input file (your CSDL) and an output file (which will contain the generated code) or an XMLReader for input and a TextWriter for output. The event handler receives an eventArgs instance which not only describes the type or property being generated but also contains members which can be modified in order to affect what is output.
So, for a simple example, we can add an attribute called "MyFooAttribute" to the class generated for each Entity type in the CSDL file. The code would look something like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.Entity.Design;
using System.Data.Metadata.Edm;
using System.Xml;
using System.IO;
using System.Diagnostics;
using System.CodeDom;
namespace AddCustomAttributesToCodeGen
{
class Program
{
const string MyAttributeName = "MyFooAttribute";
static void Main(string[] args)
{
string schema = @"
<Schema Namespace='CNorthwind' Alias='Self'
xmlns:cg=''
xmlns:
<EntityType Name='Customer'>
<Key>
<PropertyRef Name='CustomerID' />
</Key>
<Property Name='Address' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='City' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='CompanyName' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='ContactName' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='ContactTitle' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='Country' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='CustomerID' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='Fax' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='Phone' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='PostalCode' Type='String' MaxLength='1024' Nullable='false' />
<Property Name='Region' Type='String' MaxLength='1024' Nullable='false' />
</EntityType>
<EntityContainer Name='NorthwindContext'>
<EntitySet Name='Customers' EntityType='Self.Customer' />
</EntityContainer>
</Schema>";
using (XmlReader reader = XmlReader.Create(new StringReader(schema)))
{
StringWriter codeWriter = new StringWriter();
EntityClassGenerator generator = new EntityClassGenerator();
generator.OnTypeGenerated += new TypeGeneratedEventHandler(AddAttributeToType);
IList<EdmSchemaError> errors = generator.GenerateCode(reader, codeWriter);
string generatedCode = codeWriter.ToString();
// prove that the attribute was generated
Debug.Assert(generatedCode.Contains(MyAttributeName));
Console.WriteLine(generatedCode);
}
}
private static void AddAttributeToType(object sender, TypeGeneratedEventArgs eventArgs)
{
StructuralType structuralType = eventArgs.TypeSource as StructuralType;
if (structuralType != null && structuralType.Name.Equals("Customer"))
{
CodeAttributeDeclaration attribute = new CodeAttributeDeclaration(MyAttributeName);
eventArgs.AdditionalAttributes.Add(attribute);
}
}
}
}
If you have Orcas Beta 2 with the EF Beta 2 installed, you can create a new console application, add references to System.Data.Entity.dll and System.Data.Entity.Design.dll, and then paste this code into VS; everything should compile and run nicely.
If you decide to play around with this, you'll want to take a look at the documentation for the members of TypeGeneratedEventArgs which will tell you that not only can you add attributes, but you can also set a base type or add interfaces or members to the type. And don't forget to take a look at the docs for PropertyGeneratedEventArgs which shows that you can add attributes to properties as well as additional statements that will appear in the getter or setter of the property. In several cases, these members take CodeDom classes as arguments, and you can find more info about the CodeDom online as well.
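The same pattern extends to properties. As a sketch only (the handler name is our own, and we assume PropertyGeneratedEventArgs exposes a PropertySource and an AdditionalAttributes collection analogous to its type-level counterpart), a property-level handler could look like this:
// Hypothetical property-level handler; register it like the type-level one:
// generator.OnPropertyGenerated += new PropertyGeneratedEventHandler(AddAttributeToProperty);
private static void AddAttributeToProperty(object sender, PropertyGeneratedEventArgs eventArgs)
{
    // PropertySource (assumed member) describes the metadata property being generated.
    EdmMember member = eventArgs.PropertySource as EdmMember;
    if (member != null && member.Name.Equals("CustomerID"))
    {
        eventArgs.AdditionalAttributes.Add(new CodeAttributeDeclaration(MyAttributeName));
    }
}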
- Danny
Hello Danny,
Wow, that's really amazing!!
Is it possible to hook into the code-generation events in Visual Studio 2008 (Beta 2)?
(Custom Tool: EntityModelCodeGenerator)
Maybe with an add-in?
Hi Danny,
This looks exactly like what I have been looking for but I can’t get it to work with Visual Studio 2008 and Beta 3 of the Entity Framework. Did something change?
More info: The error is on
Debug.Assert(generatedCode.Contains(MyAttributeName));
Yes, it looks like there was a breaking change in between beta 2 and beta 3 which somehow got lost.
This line in the AddAttributeToType event handler:
StructuralType structuralType = sender as StructuralType;
needs to be changed to:
StructuralType structuralType = eventArgs.TypeSource as StructuralType;
I’ll update the code in the blog entry above. Thanks for pointing this out.
– Danny
Hi!
Code works fine. I would like to do one more thing though. I would like to remove the Browsable(false) attribute from EntityReference properties.
Can anybody help?
The mechanisms for customizing codegen in v1 are fairly limited, and I don't know of any way to use them to remove an attribute. In v2 we hope to create a much more flexible system which would support this kind of thing as well as many more customizations.
In the meantime your alternatives are probably something like this:
1) Modify the output of codegen by hand or with some postprocessing script. (Not what I would really recommend.)
2) Use a completely separate mechanism to generate the code yourself–maybe something like t4 to generate from templates. You could base your templates on what codegen outputs as a starting point and then modify. This would be a fair amount of work, though.
3) You could consider creating a property on the partial class which exposes the EntityReference under a different name and not put Browsable(false) on it. Inconvenient because it has a different name, but probably the easiest thing to do.
– Danny
You understand of course that none of these alternatives are very elegant, but I guess I’m gonna have to go with one of them. Most probably the first one.
|
https://blogs.msdn.microsoft.com/dsimmons/2007/08/31/ef-codegen-events-for-fun-and-profit-aka-how-to-add-custom-attributes-to-my-generated-classes/
|
CC-MAIN-2017-34
|
en
|
refinedweb
|
3 top Python libraries for data science
Turn Python into a scientific data analysis and modeling tool with these libraries.
Python's many attractions—such as efficiency, code readability, and speed—have made it the go-to programming language for data science enthusiasts. Python is usually the preferred choice for data scientists and machine learning experts who want to extend the functionality of their applications. (For example, Andrey Bulezyuk used the Python programming language to create an amazing machine learning application.)
Because of its extensive usage, Python has a huge number of libraries that make it easier for data scientists to complete complicated tasks without many coding hassles. Here are the top 3 Python libraries for data science; check them out if you want to kickstart your career in the field.
1. NumPy
NumPy (Numerical Python) empowers Python with substantial data structures for effortlessly performing calculations on multi-dimensional arrays and matrices. Besides its uses in solving linear algebra equations and other mathematical calculations, NumPy is also used as a versatile multi-dimensional container for different types of generic data.
Furthermore, it integrates flawlessly with other programming languages like C/C++ and Fortran. The versatility of the NumPy library allows it to easily and swiftly coalesce with an extensive range of databases and tools. For example, let's see how NumPy (abbreviated np) can be used for multiplying two matrices.
Let's start by importing the library (we'll be using the Jupyter notebook for these examples).
import numpy as np
Next, let's use the eye() function to generate an identity matrix with the stipulated dimensions.
matrix_one = np.eye(3)
matrix_one
Here is the output:
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
Let's generate another 3x3 matrix.
We'll use the arange([starting number], [stopping number]) function to arrange numbers. Note that the first parameter in the function is the initial number to be listed and the last number is not included in the generated results.
Also, the reshape() function is applied to modify the dimensions of the originally generated matrix into the desired dimension. For two matrices to be multiplied, the number of columns in the first must equal the number of rows in the second; here both are 3x3.
matrix_two = np.arange(1,10).reshape(3,3)
matrix_two
Here is the output:
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
Let's use the dot() function to multiply the two matrices.
matrix_multiply = np.dot(matrix_one, matrix_two)
matrix_multiply
Here is the output:
array([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
Great!
We managed to multiply two matrices without using vanilla Python.
Here is the entire code for this example:
import numpy as np
#generating a 3 by 3 identity matrix
matrix_one = np.eye(3)
matrix_one
#generating another 3 by 3 matrix for multiplication
matrix_two = np.arange(1,10).reshape(3,3)
matrix_two
#multiplying the two arrays
matrix_multiply = np.dot(matrix_one, matrix_two)
matrix_multiply
2. Pandas
Pandas is another great library that can enhance your Python skills for data science. Just like NumPy, it belongs to the family of SciPy open source software and is available under the BSD free software license.
Pandas offers versatile and powerful tools for munging data structures and performing extensive data analysis. The library works well with incomplete, unstructured, and unordered real-world data—and comes with tools for shaping, aggregating, analyzing, and visualizing datasets.
There are three types of data structures in this library:
- Series: single-dimensional, homogeneous array
- DataFrame: two-dimensional with heterogeneously typed columns
- Panel: three-dimensional, size-mutable array
For example, let's see how the Pandas library (abbreviated pd) can be used for performing some descriptive statistical calculations.
Let's start by importing the library.
import pandas as pd
Let's create a dictionary of Pandas Series (the values match the output table below):
d = {'Name': pd.Series(['Alfrick', 'Michael', 'Wendy', 'Paul', 'Dusan', 'George',
                        'Andreas', 'Irene', 'Sagar', 'Simon', 'James', 'Rose']),
     'Programming Language': pd.Series(['Python', 'JavaScript', 'PHP', 'C++', 'Java', 'Scala',
                        'React', 'Ruby', 'Angular', 'PHP', 'Python', 'JavaScript']),
     'Years of Experience': pd.Series([5, 9, 1, 4, 3, 4, 7, 9, 6, 8, 3, 1])}
Let's create a DataFrame.
df = pd.DataFrame(d)
Here is a nice table of the output:
Name Programming Language Years of Experience
0 Alfrick Python 5
1 Michael JavaScript 9
2 Wendy PHP 1
3 Paul C++ 4
4 Dusan Java 3
5 George Scala 4
6 Andreas React 7
7 Irene Ruby 9
8 Sagar Angular 6
9 Simon PHP 8
10 James Python 3
11 Rose JavaScript 1
Here is the entire code for this example:
import pandas as pd
#creating a dictionary of Pandas Series
d = {'Name': pd.Series(['Alfrick', 'Michael', 'Wendy', 'Paul', 'Dusan', 'George',
                        'Andreas', 'Irene', 'Sagar', 'Simon', 'James', 'Rose']),
     'Programming Language': pd.Series(['Python', 'JavaScript', 'PHP', 'C++', 'Java', 'Scala',
                        'React', 'Ruby', 'Angular', 'PHP', 'Python', 'JavaScript']),
     'Years of Experience': pd.Series([5, 9, 1, 4, 3, 4, 7, 9, 6, 8, 3, 1])}
#Create a DataFrame
df = pd.DataFrame(d)
print(df)
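Since the point of this example was descriptive statistics, a natural next step is one method call away; a minimal sketch using only the DataFrame built above:
#summary statistics (count, mean, std, quartiles) for the numeric column
print(df['Years of Experience'].describe())
#a single statistic, e.g. the mean years of experience
print(df['Years of Experience'].mean())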
3. Matplotlib
Matplotlib is a 2D plotting library that produces quality figures from Python data; here we'll use its pyplot interface to draw a simple bar chart.
Let's start by importing the library.
from matplotlib import pyplot as plt
Let's generate values for both the x-axis and the y-axis.
x = [2, 4, 6, 8, 10]
y = [10, 11, 6, 7, 4]
Let's call the function for plotting the bar chart.
plt.bar(x,y)
Let's show the plot.
plt.show()
Here is the bar chart:
Here is the entire code for this example:
#importing Matplotlib Python library
from matplotlib import pyplot as plt
#same as import matplotlib.pyplot as plt
#generating values for x-axis
x = [2, 4, 6, 8, 10]
#generating values for y-axis
y = [10, 11, 6, 7, 4]
#calling function for plotting the bar chart
plt.bar(x,y)
#showing the plot
plt.show()
Wrapping up
The Python programming language has always done a good job in data crunching and preparation, but less so for complicated scientific data analysis and modeling. The top Python frameworks for data science help fill this gap, allowing you to carry out complex mathematical computations and create sophisticated models that make sense of your data.
Which other Python data-mining libraries do you know? What's your experience with them? Please share your comments below.
|
https://opensource.com/article/18/9/top-3-python-libraries-data-science
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
The top menu bar of an application usually just contains other menus, such as file and edit. These are known as "cascades" in Tkinter, and are essentially a menu inside a menu. This may be confusing at first, so let's begin with a very simple example to demonstrate the difference between a menu and a cascade.
Create a new Python file called menu.py and add the following code:
import tkinter as tk

win = tk.Tk()
win.geometry('400x300')
lab = tk.Label(win, text="Demo application")
menu = tk.Menu(win)
After importing Tkinter and creating a main window and Label, we make our first Menu widget. As with a lot of widgets, the first argument needed is the master, or parent, in which the widget will be drawn. We draw this menu in our main window ...
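The excerpt ends here; as a minimal sketch of where it is heading (the File menu and its Quit entry are our own illustrative choices, not part of the book's example), a cascade is just a Menu added to another Menu:
# A cascade: a Menu nested inside the top-level menu bar.
file_menu = tk.Menu(menu, tearoff=0)
file_menu.add_command(label="Quit", command=win.destroy)
menu.add_cascade(label="File", menu=file_menu)

win.config(menu=menu)  # attach the menu bar to the main window
win.mainloop()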
|
https://www.oreilly.com/library/view/tkinter-gui-programming/9781788627481/0de0a1ed-8d3a-4d50-8183-37ebad9645d7.xhtml
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Porkchops with poliastro
Porkchops are also known as mission design curves, since they show different parameters used to design the ballistic trajectories for the targeting problem, such as:
Time of flight (TFL)
Launch energy (C3L)
Arrival velocity (VHP)
For the moment, poliastro is only capable of creating these mission plots between
poliastro.bodies objects. Support for plotting porkchops between NEOs is planned for future versions.
Basic modules
For creating a porkchop plot with poliastro, we need to import the
porkchop function from the
poliastro.plotting.porkchop module. We also need two
poliastro.bodies for the associated targeting problem. Finally,
time_range, a very useful function available in
poliastro.util, lets us define the spans of launch and arrival dates for the problem.
[1]:
import astropy.units as u

from poliastro.bodies import Earth, Mars
from poliastro.plotting.porkchop import porkchop
from poliastro.util import time_range

# Launch and arrival windows (dates are illustrative)
launch_span = time_range("2005-04-30", end="2005-10-07")
arrival_span = time_range("2005-11-16", end="2006-12-21")
Plot that porkchop!
All we must do is pass the two bodies, the two time spans, and some extra plotting parameters that control what information is drawn on the figure:
Whether poliastro should plot time-of-flight lines:
tfl=True/False
Whether poliastro should plot arrival velocity:
vhp=True/False
The maximum C3 value to be plotted:
max_c3=45 * u.km**2 / u.s**2 (by default)
[2]:
dv_dpt, dv_arr, c3dpt, c3arr, tof = porkchop(Earth, Mars, launch_span, arrival_span)
|
https://docs.poliastro.space/en/latest/examples/Porkchops%20with%20poliastro.html
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
[Solved] Interruption on panel buttons
Dear,
I would like to use an interrupt to switch the Industruino's working mode when a button is pressed.
From computing mode to setup mode, for example.
Which pins are the panel buttons connected to, so I can attach the interrupt?
I am still using the UC1701 lib; I hope that is not a problem.
Best regards,
The 1286 Topboard has its button inputs connected to pin change interrupts PCINT 4, 5 & 6 for buttons Down, Enter and Up respectively. Please use this code as an example:
#include <UC1701.h>
static UC1701 lcd;
volatile int modeFlag = 0;
void setup() {
lcd.begin(); //enable LCD
// Enable Pin Change Interrupt 6.
PCMSK0 = (1 << PCINT6);
PCICR = (1 << PCIE0);
// Global Interrupt Enable
sei();
}
ISR (PCINT0_vect)
{
modeFlag = 1;
}
void loop() {
lcd.setCursor(0, 0);
lcd.print("waiting ");
if (modeFlag == 1) {
lcd.setCursor(0, 0);
lcd.print("triggered");
delay(1000);
modeFlag = 0;
}
}
This demo sketch will show "waiting" on the Industruino's LCD screen, when you press the "Up" button an interrupt will be triggered and "triggered" will show on the LCD for one second. Please test against the "modeFlag" integer to jump between your "computing" and "setup" routines. To attach the interrupt to the "Enter" or "Down" button change "PCINT6" in line "PCMSK0 = (1 << PCINT6);" to PCINT5 or PCINT4.
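For example, loop() could dispatch on modeFlag along these lines (a minimal sketch; computingRoutine() and setupRoutine() are placeholders for your own code):
void loop() {
  if (modeFlag == 1) {
    modeFlag = 0;        // consume the button event
    setupRoutine();      // placeholder: your setup-mode code
  } else {
    computingRoutine();  // placeholder: your computing-mode code
  }
}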
Cheers,
Loic
Good Day,
This works perfectly to raise an event when the "Up" button is pressed.
Thanks to your code, I found a bit more details elsewhere. I could not get this sample to work on the INDIO, though. Are the interrupts different?
|
https://industruino.com/forum/help-1/question/solved-interruption-on-panel-buttons-20
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Consuming RESTful Services with HttpClient and RxJS
HttpClient is best consumed when used with Observables. You might be asking: what is an Observable? An Observable is simply a stream of future events that arrive asynchronously over time.
RxJS
Let us start with an example. Here we are building a simple survey which will ask a question, and the user can choose one of four options. We will get our survey data from our HTTP service as JSON. A full working example is here - StackBlitz - Angular Http Service and Observables
Create a survey interface
export interface ISurvey {
  question: string;
  choices: string[];
}
Our Survey Data
The Survey Service
We add HTTP functionality by importing HttpClient from '@angular/common/http' and injecting it in our constructor:
import { HttpClient, HttpErrorResponse } from '@angular/common/http';

export class SurveyService {
  constructor(private _http: HttpClient) { .. }
}
Adding Observable method in Survey service
As discussed previously, we will use RxJS Observables to work with HTTP services, since we have values that will be received at some point in time. Observables work best in these scenarios.
import {Observable} from 'rxjs/Observable'
Add the following imports, which are required for the Observable operators:
import 'rxjs/add/operator/catch';
import 'rxjs/add/operator/do';
import 'rxjs/add/operator/map';
Now add an Observable method which will get the survey data using HttpClient:
getSurveyQuestion(): Observable<ISurvey[]> {
  return this._http
    .get<ISurvey[]>('./src/survey.json')
    .do(data => console.log('All: ' + JSON.stringify(data)))
    .catch(this.handleError);
}
Note that we are returning an Observable of an ISurvey[] array. Observables are lazy: an Observable method is only invoked when something subscribes to it.
We subscribe to getSurveyQuestion() in our Survey component (see the earlier chapter on Services and Dependency Injection) and render each choice as a radio button:
@Component({
  selector: 'app-survey',
  template: `
    <div>
      <h3>{{survey?.question}}</h3>
      <div *ngFor="let choice of survey?.choices">
        <input type="radio" name="radioGroup"/> {{choice}}
      </div>
    </div>
  `
})
export class Survey implements OnInit {
  ...
}
The RxJS Retry operator
If a request fails, the retry operator re-subscribes to the source Observable a given number of times before the error is passed on to the subscriber.
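As a sketch applied to the service method from above (the retry count of 3 is arbitrary):
import 'rxjs/add/operator/retry';

getSurveyQuestion(): Observable<ISurvey[]> {
  return this._http
    .get<ISurvey[]>('./src/survey.json')
    .retry(3) // re-subscribe up to 3 times before the error propagates
    .catch(this.handleError);
}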
Full working example here
|
https://witscad.com/course/complete-angular/chapter/consuming-restful-service-with-httpclient-rxjs
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
An issue was discovered in the XFS filesystem in fs/xfs/libxfs/xfs_attr_leaf.c in the Linux kernel. A NULL pointer dereference may occur for a corrupted xfs image after xfs_da_shrink_inode() is called with a NULL bp. This can lead to a system crash and a denial of service.
References:
An upstream patch:
Created kernel tracking bugs for this issue:
Affects: fedora-all [bug 1597772]
Notes:
While the flaw reproducer works when run as a privileged user (the "root"), this requires a mount of a certain filesystem image. An unprivileged attacker cannot do this even from a user+mount namespace:
$ unshare -U -r -m
# mount -t xfs ...
This issue has been addressed in the following products:
Red Hat Enterprise Linux 7
Via RHSA-2019:2029
This issue has been addressed in the following products:
Red Hat Enterprise Linux 7
Via RHSA-2019:2043
|
https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-13094
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Developing mobile applications is a tricky business; having the right orientation and technical foundation from the start makes all the difference. Given the multitude of technology alternatives, mobile developers are increasingly realizing it is no longer feasible to specialize in one given platform. Traditional, native platforms (iOS, Android, Windows Phone 7, Windows Mobile, etc.) are unnecessarily complex and are pinned to software stacks that require a steep learning curve and many tricks to work around their inherent ties to a non-web foundation. If that weren't discouraging enough, their tightly controlled application deployment and sales channels make things even worse.
Luckily, there are at least two main ways out of the native platform trap. One way is using more familiar programming languages and development environments and translating to native code (typically done for iOS development, with solutions like MonoTouch). To some degree, this relies on learning similarly complex APIs and having to deal with the quirks of fitting the native device capabilities over those APIs to perform the mapping correctly.
Another way out is web-based mobile applications. They eliminate the need for platform-specific technologies with their crippled development environments, and put your applications on a foundation based on common web standards such as HTML5, CSS3 and JavaScript, greatly simplifying cross-platform expansion. One thing we have learned from the web is it can’t and shouldn’t be contained: the capabilities and services in it must be consumable from whatever device you are using. You can expect future mobile platform versions and operating systems will further blur the traditional boundary between “native” and “web” applications.
Currently, developing cross-platform web-based mobile applications with solutions like PhoneGap, Rhomobile, or AppMobi relies on exposing native device functionality through JavaScript APIs and rendering your web applications written against those APIs through a native shell application. This should sound like an attractive proposition, but demands you develop your applications in JavaScript. DSL-based alternatives and a good discussion are available in the state of the art in mobile web application development article on InfoQ.
WebSharper
WebSharper aims to solve several of the above problems. First, it enables you to develop your entire web or mobile application in F# enjoying its concise syntax and powerful functional constructs, cutting most of the code you have been used to writing. Second, it gives you a rich set of abstractions and eDSL syntax for common web-related chores, such as composing HTML, defining web forms, managing required resources, handling URLs safely, and many others. What makes it especially well-suited to larger, enterprise-grade application development is these abstractions are strongly typed: for instance, constructing web forms yielding data values of the wrong type, or trying to add form validation to the wrong input control will yield compile-time errors, again greatly reducing development time.
Sitelets – composing websites
WebSharper 2.0 introduced sitelets, implementing type-safe, first-class website values. A sitelet is defined over a union “action” type that encompasses all pages/content in the site it represents, and contains a router and a controller to map URL requests back and forth between actions and actual content.
Figure 1: Sample website from the WebSharper Visual Studio template
Here is a simple Action type, taken from the sample sitelet application template, which comes with your WebSharper installer, defining the small sample website in Figure 1:
/// Actions that correspond to the different pages in the site.
type Action =
    | Home
    | Contact
    | Protected
    | Login of option<Action>
    | Logout
    | Echo of string
Content can be arbitrary, you can return any file, or XML or HTML content, depending on what purpose your sitelet serves, such as RESTful services. You can manually construct your router and controller if you need fine-grained control over your URL space, or have them inferred automatically from your action type, or both by combining smaller sitelets using either strategy.
Sitelets also come with a type-safe templating language based on XML markup with special placeholders. These files (ending with .template.xml) are automatically converted to F# code and included in your build when you add them to your WebSharper Visual Studio solution.
Consider the following template markup in Skin.template.xml, again from the sample sitelet application template:
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Your site title</title>
<link href="~/themes/reset.css" rel="stylesheet" type="text/css" />
<link href="~/themes/site.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div>
<div id="loginInfo">${LoginInfo}</div>
<div id="header">
<div id="banner">${Banner}</div>
<div id="menu">${Menu}</div>
<div class="closer"></div>
</div>
<div id="main-container">
<div id="main">${Main}</div>
<div id="sidebar">${Sidebar}</div>
<div class="closer"></div>
</div>
<div id="footer">${Footer}</div>
</div>
</body>
</html>
This creates a Templates.Skin module in the default namespace, available for composing markup snippets into the placeholders. Consider the following function that takes the title and the main content of a page and constructs a page value using the generated template function:
/// A template function that renders a page with a menu bar, based on the Skin template.
let Template title main : Content<Action> =
    let menu (ctx: Context<Action>) =
        [
            A [Action.Home |> ctx.Link |> HRef] -< [Text "Home"]
            A [Action.Contact |> ctx.Link |> HRef] -< [Text "Contact"]
            A [Action.Echo "Hello" |> ctx.Link |> HRef] -< [Text "Say Hello"]
            A [Action.Protected |> ctx.Link |> RandomizeUrl |> HRef] -< [Text "Protected"]
            A ["~/LegacyPage.aspx" |> ctx.ResolveUrl |> HRef] -< [Text "ASPX Page"]
        ]
        |> List.map (fun link ->
            Label [Class "menu-item"] -< [link]
        )
    Templates.Skin.Skin (Some title)
        {
            LoginInfo = Widgets.LoginInfo
            Banner = fun ctx -> [H2 [Text title]]
            Menu = menu
            Main = main
            Sidebar = fun ctx -> [Text "Put your side bar here"]
            Footer = fun ctx -> [Text "Your website. Copyright (c) 2011 YourCompany.com"]
        }
Here, main is a function yielding a list of XML/HTML elements, similar to the menu computed in the inner menu function. Also, note how the context object can map the various Action shapes to safe URLs (the pipe |> operator is used to send an argument to a function, e.g. x |> f is equivalent to f(x)).
You can define all kinds of tiny abstractions to make your application code more concise. Here is a link operator (=>) to create a hyperlink:
/// A helper function to create a hyperlink
let ( => ) title href =
    A [HRef href] -< [Text title]
Now you could define the home page in your sitelet as:
/// The pages of this website. module Pages =
/// The home page.
let HomePage : Content<Action> =
Template "Home" <| fun ctx ->
[
H1 [Text "Welcome to our site!"]
"Let us know how we can contact you" => ctx.Link Action.Contact
]
...
Once all your pages are defined, you can create a sitelet to represent your website. Here you are combining smaller sitelets into the full site:

let EntireSite =
    // A sitelet inferred automatically from the Action type.
    let basic =
        Sitelet.Infer <| fun action ->
            match action with
            | Action.Contact -> Pages.ContactPage
| Action.Echo param -> Pages.EchoPage param
| Action.Login action -> Pages.LoginPage action
| Action.Logout ->
// Logout user and redirect to home
UserSession.Logout ()
Content.Redirect Action.Home
| Action.Home -> Pages.HomePage
| Action.Protected -> Content.ServerError
// A sitelet for the protected content that requires users to log in first.
let authenticated =
let filter : Sitelet.Filter<Action> =
{
VerifyUser = fun _ -> true
LoginRedirect = Some >> Action.Login
}
Sitelet.Protect filter
<| Sitelet.Content "/protected" Action.Protected Pages.ProtectedPage
// Compose the above sitelets into a larger one.
Sitelet.Sum
[
authenticated
basic
]
With the above sitelet, all you need to do is annotate it as a sitelet and, voila, your site can be served from an ASP.NET-based web container (you get the necessary Web.Config changes in the WebSharper Visual Studio template):
/// Expose the main sitelet so it can be served.
/// This needs an IWebsite type and an assembly level annotation.
type SampleWebsite() =
    interface IWebsite<SampleSite.Action> with
        member this.Sitelet = EntireSite
        member this.Actions = []
[<assembly: WebsiteAttribute(typeof<SampleWebsite>)>]
do ()
Formlets – composing first-class type-safe forms
Formlets, a recent formalism from academia, have been an integral part of WebSharper, one of the first frameworks to implement them. Formlets represent first-class, type-safe, composable data forms, very different from the less strictly-typed approaches you may have been using with ASP.NET or other web frameworks. The WebSharper implementation also includes dependent formlets, where one part of the formlet depends on another, such as on the selection in a dropdown box or the values entered in an input box; and flowlets, a custom layout that renders each step of a formlet computation expression (a monadic construct in F#) in a wizard-like sequence.
Here is a simple input formlet returning a string value with various enhancements applied incrementally:
let nameF =
    Controls.Input ""
    |> Validator.IsNotEmpty "Empty name not allowed"
    |> Enhance.WithValidationIcon
    |> Enhance.WithTextLabel "Name"
Formlets can be mapped to return values of any type, for instance a percentage input control might return float values between 0 and 100, or a combo box might yield one of the shapes of a discriminated union (with or without tagged values). You can compose formlets into larger formlets in various ways. The simplest way is using the Formlet.Yield function, which wraps a value of any type into a formlet of that type, in combination with the <*> operator that composes two (or via subsequent calls multiple) formlets:
Formlet.Yield (fun v1 v2 ... vn -> (* compose all v's *))
<*> formlet1
<*> formlet2
...
<*> formletn
Here is an example of taking a person’s information (name and email), with basic client-side validation:
type Person =
    {
        Name: string
        Email: string
    }
[<JavaScript>]
let PersonFormlet () : Formlet<Person> =
let nameF =
Controls.Input ""
|> Validator.IsNotEmpty "Empty name not allowed"
|> Enhance.WithValidationIcon
|> Enhance.WithTextLabel "Name"
let emailF =
Controls.Input ""
|> Validator.IsEmail "Please enter valid email address"
|> Enhance.WithValidationIcon
|> Enhance.WithTextLabel "Email"
Formlet.Yield (fun name email -> { Name = name; Email = email })
<*> nameF
<*> emailF
|> Enhance.WithSubmitAndResetButtons
|> Enhance.WithLegend "Add a New Person"
|> Enhance.WithFormContainer
The result when embedded in a sitelet page is in Figure 2. Note the styling is provided by a dependent CSS resource automatically added to your page when you reference formlet code (actually, when you call Enhance.WithFormContainer). WebSharper’s sophisticated dependency tracking collects dependent resources for a given page automatically as they are served. This is very convenient and saves a great deal of time and effort when using various WebSharper extensions to third-party JavaScript libraries, essentially eliminating the need to manually track what needs to be included in a page.
Figure 2: A simple formlet with validation and various enhancements
The [<JavaScript>] annotation in the above formlet example directs WebSharper to translate this code block to JavaScript. The validators each control is enhanced with are part of the WebSharper formlet library and they provide client-side validation, so Validator.IsEmail will make sure only valid email addresses are entered before the formlet reaches an accepting state. You can also make calls to your own user-defined functions, or provide additional validation by further enhancing the formlet at hand. If a function is annotated with [<Rpc>] and is called from client-side code, WebSharper outputs code to perform an RPC call and handles the passing of values between the client and the server side automatically. You can work with arbitrarily complex F# values, such as nested lists, maps, sets, or sequences seamlessly without having to worry about how they are mapped. This makes programming client and server-side code uniform and greatly reduces development time. In fact, you typically develop client and server code in the same F# file, organized into different modules under the same namespace.
You can use a number of WebSharper patterns for developing client-server applications, we usually recommend working with sitelets and formlets and give various coding guidelines for maximizing your efficiency, but you may develop hybrid applications as well with a significant ASP.NET code base, or enhance your existing ASP.NET applications with WebSharper-based functionality.
Building form abstractions for your needs
Occasionally, you may need to step outside the boundaries of the standard WebSharper formlet library to implement the forms (or the entire UI) for your application. For instance, you may want to render your formlets using different input controls and simple CSS overrides are no longer sufficient to get the look and feel you desire. Other times, you would like to reuse existing JavaScript control libraries such as Ext JS, YUI, or jQuery UI to give a more elaborate look and feel. WebSharper comes with a large number of extensions to various third-party libraries including these, and some extensions come with a formlet abstraction as well.
Here is a short example utilizing the Formlets for jQuery Mobile extension, using the Formlet.Do computation expression with a flowlet layout, and the familiar Formlet.Yield composition to weave together a 2-step login sequence:
let loginSequenceF =
    Formlet.Do {
        let! username, password, remember =
            Formlet.Yield (fun user pass remember -> user, pass, remember)
            <*> (Controls.TextField "" Theme.C "Username: "
                 |> Validator.IsNotEmpty "Username cannot be empty!")
            <*> (Controls.Password "" Theme.C "Password: "
                 |> Validator.IsRegexMatch "^[1-4]{4,}[0-9]$" "The password is wrong!")
            <*> Controls.Checkbox true Theme.C "Keep me logged in "
            |> Enhance.WithSubmitButton "Log in" Theme.C
        let rememberText = if remember then "" else "not "
        do! Formlet.OfElement (fun _ ->
            Div [
                H3 [Text ("Welcome " + username + "!")]
                P [Text ("We will " + rememberText + "keep you logged in.")]
            ])
    }
    |> Formlet.Flowlet
You can compose this login sequence into HTML markup, using the necessary jQuery Mobile attributes (which you can nicely abstract away with a few more lines of code), that you can then add to your sitelet page:
Div [HTML5.Attr.Data "role" "page"] -< [
    Div [HTML5.Attr.Data "role" "header"] -< [
        H1 [Text "WebSharper Formlets for jQuery Mobile"]
    ]
Div [HTML5.Attr.Data "role" "content"] -< [
loginSequenceF
]
Div [HTML5.Attr.Data "role" "footer"] -< [
P [Attr.Style "text-align: center;"] -< [Text "IntelliFactory"]
]
]
Once you tune the mobile configuration file in your WebSharper mobile project to produce an Android package (you can also choose Windows Phone 7) and install it on your phone, you get what's depicted in Figure 3.
Figure 3: jQuery Mobile formlets running on Android
Using the WebSharper mobile APIs and third-party map controls
Formlets and sitelets greatly simplify your web and mobile development, and give you robust, type-safe, and composable abstractions to model parts of your application. Another fundamental WebSharper abstraction is the pagelet, the building block of formlets. Pagelets represent first-class, composable client-side markup and behavior. A WebSharper pagelet is compatible with ASP.NET controls and can be embedded directly into ASP.NET markup as well.
Here is an example of a pagelet that implements a map control depicted in Figure 4:
open IntelliFactory.WebSharper
open IntelliFactory.WebSharper.Bing
open IntelliFactory.WebSharper.Html
open IntelliFactory.WebSharper.JQuery
open IntelliFactory.WebSharper.Mobile
type CurrentLocationControl() =
inherit Web.Control()
[<JavaScript>]
override this.Body =
let screenWidth = JQuery.Of("body").Width()
let MapOptions =
Bing.MapViewOptions(
Credentials = bingMapsKey,
Width = screenWidth - 10,
Height = screenWidth - 10,
Zoom = 16)
let label = H2 []
let setMap (map : Bing.Map) =
let updateLocation() =
// Gets the current location
let loc = Mobile.GetLocation()
// Sets the label to be the address of the current location
Rest.RequestLocationByPoint(<<your-bingmaps-key>>, loc.Lat, loc.Long, ["Address"],
fun result ->
let locInfo = result.ResourceSets.[0].Resources.[0]
label.Text <- "You are currently at " + JavaScript.Get "name" locInfo)
// Adds a pushpin at the current location
let loc = Bing.Location(loc.Lat, loc.Long)
let pin = Bing.Pushpin loc
map.Entities.Clear()
map.Entities.Push pin
map.SetView(Bing.ViewOptions(Center = loc))
// Keep updating your location regularly
JavaScript.SetInterval updateLocation 1000 |> ignore
let map =
Div []
|>! OnAfterRender (fun this ->
// Renders a Bing Maps control
let map = Bing.Map(this.Body, MapOptions)
map.SetMapType(Bing.MapTypeId.Road)
setMap map)
// Returns the HTML markup for this control
Div [
label
Br []
map
] :> _
Figure 4: A Bing Map control and address bar showing your current location
This control uses the WebSharper mobile APIs to get the current GPS location. The IntelliFactory.WebSharper.Mobile namespace contains further utilities to interact with the underlying mobile device, including fetching accelerometer data, accessing the camera capabilities, or displaying native alert messages. Future versions of the WebSharper mobile API will also contain platform-specific extensions such as Bluetooth communication capabilities, etc.
Conclusion
If you haven't used X-to-JavaScript tools to help you write web and mobile applications, you may wonder why there are so many of them and what makes people want to use them. WebSharper is a robust web development framework for F# and is in active use in a number of enterprise applications. It solves many of the web and mobile development issues you typically encounter and offers, among other things: safe URLs; automatic resource tracking; type-safe, composable abstractions for client-side markup and functionality; declarative-style web forms with client-side validation; and website values.
The WebSharper 2.3.28+ updates and the subsequent 2.4 release contain Visual Studio templates for mobile web development, and you can quickly try out and experiment with the two examples in this article using those templates. You can also download the sources here and here, including the final Android packages.
About the Author
Adam Granicz is a long-time F# insider and key community member, and the co-author of three F# books, including Expert F# 2.0 with Don Syme, the designer of the language. His company IntelliFactory specializes in consulting on the most demanding F# projects; shaping the future of the development of F# web, mobile and cloud applications; and developing WebSharper, the premier web development framework for F#. You can reach him at granicz.adam {at} intellifactory.com, follow him on Twitter, or track him on FPish, the functional programming paradise.
Community comments
Nice!
by Faisal Waris /
I have a sneaky suspicion that web development (including mobile web) may actually be fun.
[disclaimer]
I am an F# junkie
|
https://www.infoq.com/articles/WebSharper/
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Platform Interoperability
XAP supports easy and efficient communication and access across projects that include a combination of Java, .NET and C++ platforms, while also maintaining the benefits of the XAP scale-out application server.
Designing Interoperable Classes
using GigaSpaces.Core.Metadata;

namespace MyCompany.MyProject.Entities
{
    [SpaceClass(AliasName="com.mycompany.myproject.entities.Person")]
    public class Person
    {
        private string _name;

        [SpaceProperty(AliasName="name")]
        public string Name
        {
            get { return this._name; }
            set { this._name = value; }
        }
    }
}
package com.mycompany.myproject.entities;

public class Person {
    private String name;

    public String getName() {
        return this.name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
Guidelines and Restrictions
Follow the guidelines and restrictions in this section to enable platform interoperability.
Class Name
The full class name (including package\namespace) in all platforms should be identical.
Java packages use a different naming convention than .NET namespaces, therefore it is recommended to use the
SpaceClass(AliasName="") feature to map a .NET class to the respective Java class.
Properties and Fields
The properties/fields stored in the Space on all platforms should be identical.
In Java, only properties are serialized into the Space. In .NET, both fields and properties are serialized, so you can mix and match them.
Java properties start with a lowercase letter, whereas .NET properties usually start with an uppercase letter. It is therefore recommended to use the
SpaceProperty(AliasName="") feature to map a property/field name from .NET to Java.
Types
Only the types listed in the supported-types table are supported. If one of your fields uses a different type, you can use the class only in a homogeneous environment. Arrays of these types are also supported. Using unsupported or mismatched types can cause errors (for example, trying to store
null in a .NET value type) or unexpected results. Serialization/deserialization of the high-precision numeric types is relatively slow compared to the other numeric types; as a rule of thumb, they should not be used unless the precision/range of the other numeric types is not satisfactory.
Java 8's
LocalDate,
LocalTime, and
LocalDateTime are currently not interoperable with the .NET DateTime class.
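As a sketch of the null issue mentioned above (the Product class below is our own illustration, not from the XAP docs): declaring numeric fields with .NET nullable value types lets a null written by the Java side round-trip safely, whereas a plain value type cannot hold null at all.
using GigaSpaces.Core.Metadata;

namespace MyCompany.MyProject.Entities
{
    [SpaceClass(AliasName="com.mycompany.myproject.entities.Product")]
    public class Product
    {
        // Nullable value type: can represent the null that an
        // interoperable Java Integer field may contain; a plain
        // 'int' could not, and would fail or yield surprises.
        [SpaceProperty(AliasName="stockCount")]
        public int? StockCount { get; set; }

        [SpaceProperty(AliasName="name")]
        public string Name { get; set; }
    }
}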
Array and Collection Support
The following collections are mapped for interoperability.
- In Java, the Properties type allows the user to store keys and values that are not strings.
|
https://docs.gigaspaces.com/14.5/dev-dotnet/interoperability.html
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
Parameters are used to communicate between scripting and the controller. They are used to drive transitions and blendtrees for example.
It's important to note that the AnimatorControllerParameters are returned as a copy. The array should be set back into the property when changed.
using UnityEngine;
using UnityEditor;

class ControllerModifier
{
    UnityEditor.Animations.AnimatorController controller;

    public void ModifyParameters(int parameterIndex, string newName)
    {
        // The parameters property returns a copy of the array;
        // modify the copy, then assign it back to the controller.
        AnimatorControllerParameter[] parameters = controller.parameters;
        parameters[parameterIndex].name = newName;
        controller.parameters = parameters;
    }
}
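When adding a parameter rather than renaming one, AnimatorController also provides the AddParameter convenience method, which handles the copy-and-assign cycle for you; a small sketch (the parameter name "Speed" is arbitrary):
using UnityEngine;
using UnityEditor;

class ControllerExtender
{
    UnityEditor.Animations.AnimatorController controller;

    public void AddSpeedParameter()
    {
        // AddParameter updates the parameters array internally.
        controller.AddParameter("Speed", AnimatorControllerParameterType.Float);
    }
}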
|
https://docs.unity3d.com/ScriptReference/Animations.AnimatorController-parameters.html
|
CC-MAIN-2019-39
|
en
|
refinedweb
|
XML::LibXML - Matt Sergeant, Christian Glahn. Version 1.58. Copyright 2001-2004 AxKit.com Ltd; 2002-2004 Christian Glahn.

Introduction

Related READMEs: XML::LibXML::Common - general functions used by various XML::LibXML modules; XML::SAX - DOM building support from SAX.

To build and test:

perl Makefile.PL
make
make test

Tested libxml2 versions:
- past 2.4.20: tested; working
- 2.4.25: tested; not working
- past 2.4.25: tested; working
- past 2.5.0: tested; broken attribute handling
- version 2.5.5: tested; tests pass, but known as broken
- up to version 2.5.11: tested; working
- version 2.6.0: tested; not working
- up to version 2.6.2: tested; working
- version 2.6.3: tested; not working
- version 2.6.4: tested; not working (XML Schema errors)
- version 2.6.5: tested; not working (broken XIncludes)
- up to version 2.6.8: tested; working

For questions etc. you may contact the maintainer directly: christian.glahn@uibk.ac.at. For bug reports, please use the CPAN request tracker. Versions >= 1.49 are maintained by Christian Glahn; versions > 1.56 are co-maintained.

License

This is free software, you may use it and distribute it under the same terms as Perl itself. Copyright 2001-2003 AxKit.com Ltd, All rights reserved.

Disclaimer

THIS PROGRAM IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Perl Binding for libxml2: XML::LibXML

Synopsis

use XML::LibXML;
my $parser = XML::LibXML->new();
my $doc = $parser->parse_string(<<'EOT');
<some-xml/>
EOT

Description

The package is split into the following modules:
- XML::LibXML::Parser - parsing XML files with XML::LibXML
- XML::LibXML::DOM - XML::LibXML DOM implementation
- XML::LibXML::SAX - XML::LibXML direct SAX parser
- XML::LibXML::Document - XML::LibXML DOM document class
- XML::LibXML::Node - abstract base class of XML::LibXML nodes
- XML::LibXML::Element - XML::LibXML class for element nodes
- XML::LibXML::Text - XML::LibXML class for text nodes
- XML::LibXML::Comment - XML::LibXML comment nodes
- XML::LibXML::CDATASection - XML::LibXML class for CDATA sections
- XML::LibXML::Attr - XML::LibXML attribute class
- XML::LibXML::DocumentFragment - XML::LibXML's DOM L2 document fragment implementation
- XML::LibXML::Namespace - XML::LibXML namespace implementation
- XML::LibXML::PI - XML::LibXML processing instructions
- XML::LibXML::Dtd - XML::LibXML DTD support
- XML::LibXML::RelaxNG - XML::LibXML frontend for RelaxNG schema validation
- XML::LibXMLguts - internals of the Perl layer for libxml2 (not done yet)

Version Information

XML::LibXML::LIBXML_DOTTED_VERSION

$Version_String = XML::LibXML::LIBXML_DOTTED_VERSION;

Returns the version string of the libxml2 version XML::LibXML was compiled for. This will be "2.6.2" for "libxml2 2.6.2".

XML::LibXML::LIBXML_VERSION

Returns the numeric version of the libxml2 version XML::LibXML was compiled for.

Related Modules

The modules described in this section are not part of the XML::LibXML package itself. As they support some additional features, they are mentioned here.
- XML::LibXSLT - XSLT processor using libxslt and XML::LibXML
- XML::LibXML::Common - common functions for XML::LibXML related classes
- XML::LibXML::Iterator - XML::LibXML implementation of the DOM Traversal specification
- XML::LibXML::XPathContext - advanced XPath processing using libxml2 and XML::LibXML

XML::LibXML and XML::GDOME

import_GDOME

$libxmlnode = XML::LibXML->import_GDOME( $node, $deep );

This clones an XML::GDOME node to a XML::LibXML node explicitly.

export_GDOME

$gdomenode = XML::LibXML->export_GDOME( $node, $deep );

Allows one to clone an XML::LibXML node into an XML::GDOME node.
Parsing XML Data with XML::LibXML: XML::LibXML::Parser

Parsing

An XML document is read into a data structure such as a DOM tree by a piece of software called a parser. XML::LibXML currently provides four different parser interfaces:
- a DOM pull parser
- a DOM push parser
- a SAX parser
- a DOM-based SAX parser

Creating a Parser Instance

XML::LibXML provides an OO interface to the libxml2 parser functions. Thus you have to create a parser instance (with new()) before you can parse any XML data.

DOM Parser

One of the common parser interfaces of XML::LibXML is the DOM parser. This parser reads XML data into a DOM-like data structure.

parse_file

$doc = $parser->parse_file( $xmlfilename );

parse_string

$doc = $parser->parse_string( $xmlstring );

This function is similar to parse_fh(), but it parses an XML document that is available as a single string in memory. Again, you can pass an optional base URI to the function:

my $doc = $parser->parse_string( $xmlstring, $baseuri );

parse_html_file

$doc = $parser->parse_html_file( $htmlfile );

Similar to parse_file() but parses HTML (strict) documents.

parse_html_fh

$doc = $parser->parse_html_fh( $io_fh );

Similar to parse_fh() but parses HTML (strict) streams.

parse_html_string

$doc = $parser->parse_html_string( $htmlstring );

Similar to parse_string() but parses HTML (strict) strings.

parse_sgml_file

$doc = $parser->parse_sgml_file( $sgmlfile );

Similar to parse_file() but parses SGML documents.

parse_sgml_fh

$doc = $parser->parse_sgml_fh( $io_fh );

Similar to parse_fh() but parses SGML streams.

parse_sgml_string

Similar to parse_string() but parses SGML strings.

parse_balanced_chunk

$fragment = $parser->parse_balanced_chunk( $wbxmlstring );

This function parses a well-balanced XML string into a XML::LibXML::DocumentFragment.

parse_xml_chunk

An alias for parse_balanced_chunk(). It is also possible to post-process a document to expand XInclude tags:

process_xincludes

$parser->process_xincludes( $doc );

processXIncludes

$parser->processXIncludes( $doc );

This is an alias to process_xincludes, but through a Java-like function name.

Push Parser

$parser->parse_chunk($string, $terminate);

parse_chunk() tries to parse a given chunk of data, which isn't necessarily well balanced on its own.

start_push

$parser->start_push();

Initializes the push parser.

push

$parser->push(@data);

This function pushes the data stored inside the array to libxml2's parser. Each entry in @data must be a normal scalar!

finish_push

$doc = $parser->finish_push( $recover );

DOM based SAX Parser

XML::LibXML also provides a DOM-based SAX parser, defined in XML::LibXML::SAX::Parser.

Serialization

XML::LibXML provides some functions to serialize nodes and documents. The serialization functions are described on the XML::LibXML::Node manpage or the XML::LibXML::Document manpage. XML::LibXML checks three global flags that alter the serialization process:
- skipXMLDeclaration - if set, the XML declaration is omitted during serialization.
- skipDTD - if set, an existing DTD is not serialized with the document.
- setTagCompression - if set, empty tags are serialized as an open/close pair rather than the shortcut. For example the empty tag foo will be rendered as <foo></foo> rather than <foo/>.

Parser Options

LibXML options are global (unfortunately this is a limitation of the underlying implementation, not this interface). They can either be set using $parser->option(...), or XML::LibXML->option(...); both are treated in the same manner. Note that even two parser processes will share some of the same options, so be careful out there!

Every option returns the previous value, and can be called without parameters to get the current value.

validation

$parser->validation(1);

Turn validation on (or off). Defaults to off.

recover

$parser->recover(1);

Turn the parser's recover mode on (or off). Defaults to off. This allows restoring documents that are more like well-balanced chunks. Without recover mode, XML::LibXML will only parse until the first fatal error occurs.
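A minimal sketch of the push interface described earlier in this section, feeding a document in two chunks:

use XML::LibXML;

my $parser = XML::LibXML->new();
$parser->start_push();

# Each chunk must be a plain scalar; chunks need not be well balanced.
$parser->push('<doc><item>Hello');
$parser->push(' World</item></doc>');

my $doc = $parser->finish_push();   # pass 1 to recover from errors
print $doc->toString;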
expand_entities

$parser->expand_entities(0);

Turn entity expansion on or off, enabled by default. If entity expansion is off, any external parsed entities in the document are left as entities. Probably not very useful for most purposes.

keep_blanks

$parser->keep_blanks(0);

Allows you to turn off XML::LibXML's default behaviour of maintaining whitespace in the document.

pedantic_parser

$parser->pedantic_parser(1);

You can make XML::LibXML more pedantic if you want to.

line_numbers

$parser->line_numbers(1);

Toggles whether line numbers are stored while parsing the document.

load_ext_dtd

$parser->load_ext_dtd(1);

Load external DTD subsets while parsing.

complete_attributes

$parser->complete_attributes(1);

Complete the elements' attribute lists with the ones defaulted from the DTDs. By default, this option is enabled.

expand_xinclude

$parser->expand_xinclude(1);

Expands XInclude tags immediately while parsing the document. This flag assures that the parser callbacks are used while parsing the included document.

load_catalog

$parser->load_catalog( $catalog_file );

Will use $catalog_file as a catalog during all parsing processes. Using a catalog will significantly speed up parsing processes if many external resources are loaded.

base_uri

Sets the base URI used during parsing.

gdome_dom

$parser->gdome_dom(1);

THIS FLAG IS EXPERIMENTAL!!

clean_namespaces

$parser->clean_namespaces( 1 );

libxml2 2.6.0 and later allows stripping redundant namespace declarations from the DOM tree. To do this, one has to set clean_namespaces() to 1 (TRUE). By default no namespace cleanup is done.

Input Callbacks

The input callbacks let you intercept how external resources are matched, opened, read and closed during parsing:

match_callback

$parser->match_callback($subref);

If you want to handle the URI, simply return a true value from this callback.

open_callback

$parser->open_callback($subref);

Open something and return it to handle that resource.

read_callback

$parser->read_callback($subref);

Read a certain number of bytes from the resource. This callback is called even if the entire document has already been read. This callback has to return a string which will be parsed by the libxml2 parser.

close_callback

$parser->close_callback($subref);

Close the handle associated with the resource.

Error Reporting

XML::LibXML throws exceptions during parsing, validation or XPath processing (and some other occasions). These errors can be caught by wrapping the call in an eval block.

XML::LibXML Direct SAX Parser: XML::LibXML::SAX

Description

The direct SAX parser generates SAX events while parsing, and can handle well-balanced data such as is often provided by databases.

NOTE: At the moment XML::LibXML provides only an incomplete interface to libxml2's native SAX implementation. The current implementation is not tested in production environments. It may cause significant memory problems or show wrong behaviour. If you run into specific problems using this part of XML::LibXML, let me know.

Building DOM Trees from SAX Events: XML::LibXML::SAX::Builder

Synopsis

my $builder = XML::LibXML::SAX::Builder->new();
my $gen = XML::Generator::DBI->new(Handler => $builder, dbh => $dbh);
$gen->execute("SELECT * FROM Users");
my $doc = $builder->result();

XML::LibXML DOM Implementation: XML::LibXML::DOM

XML::LibXML DOM Document Class: XML::LibXML::Document

new

$dom = XML::LibXML::Document->new( $version, $encoding );

An alias for createDocument().

XML::LibXML::Document->new( "1.0", "UTF8" );

is therefore a shortcut for

my $document = XML::LibXML::Document->createDocument( "1.0", "UTF8" );

encoding

$strEncoding = $doc->encoding();

Returns the encoding string of the document:

my $doc = XML::LibXML->createDocument( "1.0", "ISO-8859-15" );
print $doc->encoding; # prints ISO-8859-15

Optionally this function can be accessed by actualEncoding or getEncoding.
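A short sketch of the document constructors and the encoding accessor just described:

use XML::LibXML;

# createDocument and new are equivalent shortcuts.
my $doc = XML::LibXML::Document->createDocument( "1.0", "ISO-8859-15" );
print $doc->encoding, "\n";   # prints ISO-8859-15

# Give the new document a root element.
my $root = $doc->createElement('root');
$doc->setDocumentElement($root);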
version

$strVersion = $doc->version();

Returns the version string of the document. getVersion() is an alternative form of this function.

standalone

$doc->standalone;

This function returns the numerical value of the document's XML declaration standalone attribute: 1 if standalone="yes" was found, 0 if standalone="no" was found, and -1 if standalone was not declared.

createExternalSubset

$dtd = $document->createExternalSubset( $rootnode, $public, $system );

This function is similar to createInternalSubset() but this DTD is considered to be external and is therefore not added to the document itself. Nevertheless it can be used for validation purposes.

importNode

Never import subtrees that contain an entity reference - even if the entity reference is the root node of the subtree. This will cause serious problems to your program. This is a limitation of libxml2 and not of XML::LibXML itself.

adoptNode

Likewise, never adopt subtrees that contain entity references - even if the entity reference is the root node of the subtree. This will cause serious problems to your program. This is a limitation of libxml2 and not of XML::LibXML itself.

externalSubset

my $dtd = $doc->externalSubset;

EXPERIMENTAL! Returns the external subset (DTD) of the document, if one is defined.

internalSubset

my $dtd = $doc->internalSubset;

EXPERIMENTAL! Returns the internal subset (DTD) of the document, if one is defined.

setExternalSubset

$doc->setExternalSubset($dtd);

EXPERIMENTAL! This method sets a DTD node as an external subset of the given document.

setInternalSubset

$doc->setInternalSubset($dtd);

EXPERIMENTAL! This method sets a DTD node as an internal subset of the given document.

removeExternalSubset

my $dtd = $doc->removeExternalSubset();

EXPERIMENTAL! If a document has an external subset defined it can be removed from the document by using this function. The removed DTD node will be returned.

removeInternalSubset

my $dtd = $doc->removeInternalSubset();

EXPERIMENTAL! If a document has an internal subset defined it can be removed from the document by using this function. The removed DTD node will be returned.

getElementsByTagName

my @nodelist = $doc->getElementsByTagName($tagname);

Implements the DOM Level 2 function. In SCALAR context this function returns a XML::LibXML::NodeList object.

getElementsByTagNameNS

my @nodelist = $doc->getElementsByTagNameNS($nsURI, $tagname);

Implements the DOM Level 2 function. In SCALAR context this function returns a XML::LibXML::NodeList object.

getElementsByLocalName

my @nodelist = $doc->getElementsByLocalName($localname);

This allows the fetching of all nodes from a given document with the given local name. In SCALAR context this function returns a XML::LibXML::NodeList object.

getElementsById

my $node = $doc->getElementsById($id);

This allows fetching the element with the given ID from the DOM. Note: the ID of a node might change while manipulating the document.

indexElements

$doc->indexElements();

Indexes the elements of the document, which can speed up subsequent lookups.

Abstract Base Class of XML::LibXML Nodes: XML::LibXML::Node

nodeName

$name = $node->nodeName;

Returns the node's name. This function is aware of namespaces and returns the full name of the current node (prefix:localname) as specified in DOM.

nodeType

$type = $node->nodeType;

Returns the node's type. The possible types are described in the libxml2 tree.h documentation. The return value of this function is a numeric value; therefore it differs from the result of Perl's ref function.

line_number

$lineno = $node->line_number();

Returns the line number at which the node was found during parsing (if line_numbers was enabled on the parser).

unbindNode

$node->unbindNode();

Unbinds the node from its siblings and parent, but not from the document it belongs to. If the node is not inserted into the DOM afterwards it will be lost after the program terminates.

replaceChild

$oldnode = $node->replaceChild( $newNode, $oldNode );

Replaces the $oldNode with the $newNode; the replaced $oldNode is returned.

appendChild

$childnode = $node->appendChild( $childnode );

The function will add the $childnode to the end of $node's children. The function should fail if the new childnode is already a child of $node.
This function differs from the DOM L2 specification: if the new node is not part of the document, the node will be imported first.

addChild
$childnode = $node->addChild( $childnode );
As an alternative to appendChild() one can use the addChild() function. This function is a bit faster, because it avoids all DOM conformity checks. Therefore this function is quite useful if one builds XML documents in memory where the order and ownership of the nodes is controlled by the application itself.

addNewChild
$childnode = $node->addNewChild( $nsURI, $name );
Similar to addChild(), this function uses low level libxml2 functionality to provide a faster interface for DOM building. addNewChild() uses xmlNewChild() to create a new node on a given parent element. addNewChild() has two parameters $nsURI and $name, where $nsURI is an (optional) namespace URI. $name is the fully qualified element name; addNewChild() will determine the correct prefix if necessary.

cloneNode
$copy = $node->cloneNode( $deep );
cloneNode creates a copy of $node. When $deep is set to 1 (true) the function will copy all childnodes as well. If $deep is 0 only the current node will be copied. cloneNode will not copy any namespace information if it is not run recursively.

parentNode

setOwnerDocument
$node->setOwnerDocument( $doc );
This function binds a node to a document, even if the node is already bound to another document. It is the opposite of XML::LibXML::Document's adoptNode() function. Because of this it has the same limitations with entity references as adoptNode().

insertBefore
$node->insertBefore( $newNode, $refNode );
The method inserts $newNode before $refNode. If $refNode is undefined, the newNode will be set as the new last child of the parent node. This function differs from the DOM L2 specification: if the new node is not part of the document, the node will be imported first, automatically. $refNode has to be passed to the function even if it is undefined:
$node->insertBefore( $newNode, undef ); # the same as $node->appendChild( $newNode );
$node->insertBefore( $newNode ); # wrong
Note that the reference node has to be a direct child of the node the function is called on. Also, $newChild is not allowed to be an ancestor of the new parent node.

insertAfter
$node->insertAfter( $newNode, $refNode );
The method inserts $newNode after $refNode. If $refNode is undefined, the newNode will be set as the new last child of the parent node. Note that $refNode has to be passed explicitly even if it is undef.

findnodes
@nodes = $node->findnodes( $xpath_statement );
findnodes performs the xpath statement on the current node and returns the result as an array. In scalar context it returns a XML::LibXML::NodeList object.

find
$result = $node->find( $xpath );
find performs the xpath expression using the current node as the context of the expression, and returns the result depending on what type of result the XPath expression had. For example, the XPath "1 * 3 + 52" results in a XML::LibXML::Number object being returned. Other expressions might return a XML::LibXML::Boolean object, or a XML::LibXML::Literal object (a string). Each of those objects uses Perl's overload feature to "do the right thing" in different contexts.

childNodes
@childnodes = $node->childNodes;
childNodes implements a more intuitive interface to the childnodes of the current node. It enables you to pass all children directly to a map or grep. If this function is called in scalar context, a XML::LibXML::NodeList object will be returned.

toString

toStringC14N
$c14nstring = $node->toStringC14N($with_comments, $xpath_expression);
The function is similar to toString(). Instead of simply serializing the document tree, it transforms it as it is specified in the XML-C14N Specification.
Such transformation is known as canonization. If $with_comments is 0 or not defined, the result document will not contain any comments that exist in the original document. To include comments into the canonized document, $with_comments has to be set to 1. The parameter $xpath_expression defines the nodeset of nodes that should be visible in the resulting document. If $xpath_expression is omitted or empty, toStringC14N() will include all nodes in the given sub-tree. No serializing flags will be recognized by this function!

serialize
$str = $doc->serialize($format);
Alternative form of toString(). This function name was added to conform more closely with libxml2's examples.

serialize_c14n
$c14nstr = $doc->serialize_c14n($comment_flag, $xpath);
Alternative form of toStringC14N().

iterator
$iter = $node->iterator;
This function is deprecated since XML::LibXML 1.54. It is only a dummy function that will get removed entirely in one of the next versions. To make use of iterator functions use the XML::LibXML::Iterator module available on CPAN.

normalize
$node->normalize;
This function normalizes adjacent text nodes.

XML::LibXML Class for Element Nodes
XML::LibXML::Element

new
$node = XML::LibXML::Element->new( $name );
This function creates a new node unbound to any DOM.

setAttribute
$node->setAttribute( $aname, $avalue );
This method sets or replaces the node's attribute $aname to the value $avalue.

setAttributeNS
$node->setAttributeNS( $nsURI, $aname, $avalue );
Namespace version of setAttribute.

getAttribute
$avalue = $node->getAttribute( $aname );
If $node has an attribute with the name $aname, the value of this attribute will get returned.

getAttributeNS
$avalue = $node->getAttributeNS( $nsURI, $aname );
Namespace version of getAttribute.

getAttributeNode
$attrnode = $node->getAttributeNode( $aname );
Returns the attribute as a node if the attribute exists. If the attribute does not exist, undef will be returned.

getAttributeNodeNS
$attrnode = $node->getAttributeNodeNS( $namespaceURI, $aname );
Namespace version of getAttributeNode.

removeAttribute
$node->removeAttribute( $aname );
The method removes the attribute $aname from the node's attribute list, if the attribute can be found.

removeAttributeNS
$node->removeAttributeNS( $nsURI, $aname );
Namespace version of removeAttribute.

hasAttribute
$boolean = $node->hasAttribute( $aname );
This function tests if the named attribute is set for the node. If the attribute is specified, TRUE (1) will be returned, otherwise the return value is FALSE (0).

hasAttributeNS
$boolean = $node->hasAttributeNS( $nsURI, $aname );
Namespace version of hasAttribute.

getChildrenByTagName
@nodes = $node->getChildrenByTagName($tagname);
The function gives direct access to all childnodes of the current node with the same tagname. It makes things a lot easier if you need to handle big datasets. If this function is called in SCALAR context, it returns the number of elements found.

getChildrenByTagNameNS
@nodes = $node->getChildrenByTagNameNS($nsURI,$tagname);
Namespace version of getChildrenByTagName. If this function is called in SCALAR context, it returns the number of elements found.

getElementsByTagName
@nodes = $node->getElementsByTagName($tagname);
This function is part of the DOM spec; it fetches all descendants of a node with a given tagname. If one is as confused with tagname as I was: tagname is a qualified tagname, which in case of namespace usage consists of prefix and local name. In SCALAR context this function returns a XML::LibXML::NodeList object.
getElementsByTagNameNS
@nodes = $node->getElementsByTagNameNS($nsURI,$localname);
Namespace version of getElementsByTagName as found in the DOM spec. In SCALAR context this function returns a XML::LibXML::NodeList object.

getElementsByLocalName
@nodes = $node->getElementsByLocalName($localname);
This function is not found in the DOM specification. It is a mix of getElementsByTagName and getElementsByTagNameNS. It will fetch all tags matching the given local name. This allows one to select tags with the same local name across namespace borders. In SCALAR context this function returns a XML::LibXML::NodeList object.

appendWellBalancedChunk
$node->appendWellBalancedChunk( $chunk );
Sometimes it is necessary to append a string-coded XML tree to a node. appendWellBalancedChunk will do the trick for you. But this is only done if the string is well-balanced. Note that appendWellBalancedChunk() is only left for compatibility reasons. Implicitly it uses
my $fragment = $parser->parse_xml_chunk( $chunk );
$node->appendChild( $fragment );
This form is more explicit and makes it easier to control the flow of a script.

appendText
$node->appendText( $PCDATA );
Alias for appendTextNode().

appendTextNode
$node->appendTextNode( $PCDATA );
This wrapper function lets you add a string directly to an element node.

appendTextChild
$node->appendTextChild( $childname, $PCDATA );
Somewhat similar to appendTextNode: it lets you set an element that contains only a text node directly, by specifying the name and the text content.

setNamespace
$node->setNamespace( $nsURI, $nsPrefix, $activate );
setNamespace() allows one to apply a namespace to an element. The function takes three parameters: the namespace URI, which is required, and two optional values: the prefix, which is the namespace prefix as it should be used in child elements or attributes, and the additional activate parameter. The activate parameter is most useful: if this parameter is set to FALSE (0), the namespace is simply added to the namespace list of the node, while the element's namespace itself is not altered. Nevertheless, activate is set to TRUE (1) by default. In this case the namespace is automatically used as the node's effective namespace. This means the namespace prefix is added to the node name, and if there was a namespace already active for the node, it will be replaced (but not removed from the global namespace list). The following example may clarify this:
my $e1 = $doc->createElement("bar");
$e1->setNamespace("", "foo");
results in <foo:bar xmlns:
while
my $e2 = $doc->createElement("bar");
$e2->setNamespace("", "foo", 0);
results only in <bar xmlns:
By using $activate == 0 it is possible to apply multiple namespace declarations to a single element. Alternatively you can simply call setAttribute() to declare a new namespace for a node, without activating it:
$e2->setAttribute( "xmlns:foo", "" );
has the same result as
$e2->setNamespace( "", "foo", 0 );

XML::LibXML Class for Text Nodes
XML::LibXML::Text
In contrast to the DOM specification, XML::LibXML implements the text node as the base class of all character data nodes. Therefore there exists no CharacterData class. This allows one to use all methods that are available for text nodes for comments and CDATA sections as well.

new
$text = XML::LibXML::Text->new( $content );
The constructor of the class. It creates an unbound text node.

data
$nodedata = $text->data;
Although there exists the nodeValue attribute in the Node class, the DOM specification defines data as a separate attribute.
XML::LibXML implements these two attributes not as different attributes, but as aliases, as libxml2 does. Therefore
$text->data;
and
$text->nodeValue;
will have the same result and are not different entities.

setData($string)
$text->setData( $text_content );
This function sets or replaces the text content of a node. The node has to be of the type "text", "cdata" or "comment".

substringData($offset,$length)
$text->substringData($offset, $length);
Extracts a range of data from the node. (DOM Spec) This function takes the two parameters $offset and $length and returns the substring, if available. If the node contains no data or $offset refers to a nonexistent string index, this function will return undef. If $length is out of range, substringData will return the data starting at $offset instead of causing an error.

appendData($string)
$text->appendData( $somedata );
Appends a string to the end of the existing data. If the current text node contains no data, this function has the same effect as setData.

insertData($offset,$string)
$text->insertData($offset, $string);
Inserts the parameter $string at the given $offset of the existing data of the node. This operation will not remove existing data, but change the order of the existing data. The $offset has to be a positive value. If $offset is out of range, insertData will have the same behaviour as appendData.

deleteData($offset, $length)
$text->deleteData($offset, $length);
This method removes a chunk from the existing node data at the given offset. The $length parameter tells how many characters should be removed from the string.

deleteDataString($string, [$all])
$text->deleteDataString($remstring, $all);
This method removes a chunk from the existing node data. Since the DOM spec is quite unhandy if you already know which string to remove from a text node, this method allows more perlish code :) The function takes two parameters: $string and optionally the $all flag. If $all is not set, undef or 0, deleteDataString will remove only the first occurrence of $string. If $all is TRUE, deleteDataString will remove all occurrences of $string from the node data.

replaceData($offset, $length, $string)
$text->replaceData($offset, $length, $string);
The DOM style version to replace node data.

replaceDataString($oldstring, $newstring, [$all])
$text->replaceDataString($old, $new, $flag);
The more programmer friendly version of replaceData() :) Instead of giving offsets and length one can specify the exact string ($oldstring) to be replaced. Additionally the $all flag allows one to replace all occurrences of $oldstring.

replaceDataRegEx( $search_cond, $replace_cond, $reflags )
$text->replaceDataRegEx( $search_cond, $replace_cond, $reflags );
This method replaces the node's data by a simple regular expression. Optionally, this function allows one to pass some flags that will be added to the replace statement. NOTE: This is a shortcut for
my $datastr = $node->getData();
$datastr =~ s/somecond/replacement/g; # 'g' is just an example for any flag
$node->setData( $datastr );
This function can make things easier to read for simple replacements. For more complex variants it is recommended to use the code snippet above.

XML::LibXML Comment Class
XML::LibXML::Comment
This class provides all functions of XML::LibXML::Text, but for comment nodes. This can be done, since only the output of the node types is different, but not the data structure. :-)

new
$node = XML::LibXML::Comment->new( $content );
The constructor is the only provided function for this package.
It is required, because libxml2 treats text nodes and comment nodes slightly differently.

XML::LibXML Class for CDATA Sections
XML::LibXML::CDATASection
This class provides all functions of XML::LibXML::Text, but for CDATA nodes.

new
$node = XML::LibXML::CDATASection->new( $content );
The constructor is the only provided function for this package. It is required, because libxml2 treats the different text node types slightly differently.

XML::LibXML Attribute Class
XML::LibXML::Attr
This is the interface to handle attributes like ordinary nodes. The naming of the class relies on the W3C DOM documentation.

new
$attr = XML::LibXML::Attr->new($name [,$value]);
Class constructor. If you need to work with ISO encoded strings, you should always use the createAttribute method of XML::LibXML::Document.

getValue
$string = $attr->getValue();
Returns the value stored for the attribute. If undef is returned, the attribute has no value, which is different from being not specified.

value
$value = $attr->value;
Alias for getValue().

setNamespace
Activates a namespace for the given attribute. If the namespace was not previously declared in the context of the attribute, this function will be silently ignored. In this case you may wish to call setNamespace() on the ownerElement.

XML::LibXML's DOM L2 Document Fragment Implementation
XML::LibXML::DocumentFragment
This class is a helper class as described in the DOM Level 2 Specification. It is implemented as a node without a name. All adding, inserting or replacing functions are aware of document fragments now. Likewise, all unbound nodes (all nodes that do not belong to any document subtree) are implicit members of document fragments.

XML::LibXML Namespace Implementation
XML::LibXML::Namespace

new
Creates a new namespace object. The object is bound to neither a document nor a node, therefore you should not expect it to be available in an existing document.

getName
print $ns->getName();
Returns "xmlns:prefix", where prefix is the prefix for this namespace.

name
print $ns->name();
Alias for getName().

prefix
print $ns->prefix();
Returns the prefix bound to this namespace declaration.

getLocalName
$localname = $ns->getLocalName();
Alias for prefix().

getData
print $ns->getData();
Returns the URI of the namespace.

getValue
print $ns->getValue();
Alias for getData().

value
print $ns->value();
Alias for getData().

uri
print $ns->uri();
Alias for getData().

getNamespaceURI
$known_uri = $ns->getNamespaceURI();
Returns the string ""

getPrefix
$known_prefix = $ns->getPrefix();
Returns the string "xmlns"

XML::LibXML Processing Instructions
XML::LibXML::PI

setData

XML::LibXML DTD Handling
XML::LibXML::Dtd
This class holds a DTD. You may parse a DTD from either a string, or from an external SYSTEM identifier. No support is available as yet for parsing from a filehandle. XML::LibXML::Dtd is a sub-class of Node, so all the methods available to nodes (particularly toString()) are available to Dtd objects.

new
$dtd = XML::LibXML::Dtd->new($public_id, $system_id);
Parse a DTD from the system identifier, and return a DTD object that you can pass to $doc->is_valid() or $doc->validate().
my $dtd = XML::LibXML::Dtd->new( "SOME // Public / ID / 1.0", "test.dtd" );
my $doc = XML::LibXML->new->parse_file("test.xml");
$doc->validate($dtd);

parse_string
$dtd = XML::LibXML::Dtd->parse_string($dtd_str);
The same as new() above, except you can parse a DTD from a string.

RelaxNG Schema Validation
XML::LibXML::RelaxNG
The XML::LibXML::RelaxNG class is a tiny frontend to libxml2's RelaxNG implementation. Currently it supports only schema parsing and document validation.
new
$rngschema = XML::LibXML::RelaxNG->new( location => $filename_or_url );
$rngschema = XML::LibXML::RelaxNG->new( string => $xmlschemastring );
$rngschema = XML::LibXML::RelaxNG->new( DOM => $doc );
The constructor of XML::LibXML::RelaxNG may get called with either one of three parameters. The DOM parameter allows one to parse the schema from a preparsed XML::LibXML::Document. Note that the constructor will die() if the schema does not meet the constraints of the RelaxNG specification.

validate
eval { $rngschema->validate( $doc ); };
This function allows one to validate a document against the given RelaxNG schema. If this function succeeds, it will return 0, otherwise it will die() and report the errors found. Because of this, calls to validate() should always be wrapped in an eval block.

XML Schema Validation
XML::LibXML::Schema
The XML::LibXML::Schema class is a tiny frontend to libxml2's XML Schema implementation. Currently it supports only schema parsing and document validation.

new
$xmlschema = XML::LibXML::Schema->new( location => $filename_or_url );
$xmlschema = XML::LibXML::Schema->new( string => $xmlschemastring );
The constructor of XML::LibXML::Schema may get called with either one of two parameters. Note that the constructor will die() if the schema does not meet the constraints of the XML Schema specification.

validate
eval { $xmlschema->validate( $doc ); };
This function allows one to validate a document against the given XML Schema. If this function succeeds, it will return 0, otherwise it will die() and report the errors found. Because of this, calls to validate() should always be wrapped in an eval block.
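Tying the pieces above together, here is a minimal usage sketch that parses a document and validates it against a RelaxNG schema; the file names are hypothetical:

use XML::LibXML;

# Parse a document and validate it against a RelaxNG schema.
my $parser = XML::LibXML->new();
my $doc    = $parser->parse_file("example.xml");

my $rng = XML::LibXML::RelaxNG->new( location => "example.rng" );
eval { $rng->validate($doc); };
if ($@) {
    warn "validation failed: $@";
} else {
    print "document is valid\n";
}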
|
https://opensource.apple.com/source/CPANInternal/CPANInternal-62/XML-LibXML/example/libxml.dkb
|
CC-MAIN-2017-30
|
en
|
refinedweb
|
Hello everyone,
I am migrating from C++ to C#, and the following compile error confuses me. Suppose an interface has a method called Abc which returns object, and the implementing class also has a method called Abc, but with return type List<int>. I thought that since List<int> is a type derived from object, there would be no need to explicitly implement Interface.Abc again, but I get a compile error.
Could anyone show me what rule I am breaking here, please?
Code:
D:\Visual Studio 2008\Projects\ConsoleApplication1\ConsoleApplication1\Program.cs(14,11): error CS0738: 'MyList.Foo' does not implement interface member 'MyList.IFoo.Abc()'. 'MyList.Foo.Abc()' cannot implement 'MyList.IFoo.Abc()' because it does not have the matching return type of 'object'.
Code:
using System.Collections.Generic;

public class MyList
{
    interface IFoo
    {
        object Abc();
    }

    class Foo : IFoo
    {
        public Foo() { }

        public List<int> Abc()
        {
            return new List<int>();
        }
    }

    static void Main()
    {
        Foo f = new Foo();
        return;
    }
}
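The rule being broken: C# requires an implicit interface implementation to match the interface member's return type exactly; return-type covariance is not allowed for interface implementations, so List<int> cannot stand in for object. One common workaround (a sketch, not the only fix) is to keep the strongly-typed public method and add an explicit interface implementation:

using System;
using System.Collections.Generic;

public class MyList
{
    interface IFoo
    {
        object Abc();
    }

    class Foo : IFoo
    {
        // The public method keeps the more specific return type...
        public List<int> Abc()
        {
            return new List<int>();
        }

        // ...and an explicit interface implementation satisfies IFoo exactly.
        object IFoo.Abc()
        {
            return Abc();
        }
    }

    static void Main()
    {
        Foo f = new Foo();
        Console.WriteLine(f.Abc().Count);   // calls List<int> Abc()
        Console.WriteLine(((IFoo)f).Abc()); // calls the explicit IFoo.Abc()
    }
}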
thanks in advance,
George
|
https://cboard.cprogramming.com/csharp-programming/102628-interface-implementation.html
|
CC-MAIN-2017-30
|
en
|
refinedweb
|
Predefined Layout Panels
Telerik UI for WinForms comes with a set of stock layout panels that handle most common layout tasks and also provide a basis for your own custom layout panels. The layout panel objects all descend from LayoutPanel and are found in the Telerik.WinControls.Layouts namespace. Panels are responsible for determining both the size and position of the primitives that they contain. Panels can be nested to create arbitrarily complex layouts. The Telerik Presentation Framework includes the following panels:
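For orientation, nesting might look roughly like the sketch below. The member names used here (StackLayoutPanel, Children, TextPrimitive) are assumptions drawn from the Telerik.WinControls namespaces, not verbatim documentation:

using System.Windows.Forms;
using Telerik.WinControls.Layouts;
using Telerik.WinControls.Primitives;

class LayoutSketch
{
    // Builds a vertical stack panel holding two text primitives; the panel
    // could itself be added to a parent element's Children collection.
    static StackLayoutPanel BuildStack()
    {
        StackLayoutPanel stack = new StackLayoutPanel();
        stack.Orientation = Orientation.Vertical;
        stack.Children.Add(new TextPrimitive { Text = "first" });
        stack.Children.Add(new TextPrimitive { Text = "second" });
        return stack;
    }
}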
|
http://docs.telerik.com/devtools/winforms/telerik-presentation-framework/layout/predefined-layout-panels/predefined-layout-panels
|
CC-MAIN-2017-30
|
en
|
refinedweb
|
Task.ContinueWith<TResult> Method (Func<Task, TResult>)
.NET Framework (current version)
Namespace: System.Threading.Tasks
Creates a continuation that executes asynchronously when the target Task completes and returns a value.
Assembly: mscorlib (in mscorlib.dll)
Parameters
- continuationFunction
- Type: System.Func<Task, TResult>
A function to run when the Task completes. When run, the delegate will be passed the completed task as an argument.
Return Value
Type: System.Threading.Tasks.Task<TResult>
A new continuation task.
Type Parameters
- TResult
The type of the result produced by the continuation.
The returned Task will not be scheduled for execution until the current task has completed, whether it completes due to running to completion successfully, faulting due to an unhandled exception, or exiting early due to cancellation.
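To illustrate just this overload in isolation, a minimal sketch (names are ours): the antecedent is a plain Task with no result, while the continuation produces a value.

using System;
using System.Threading.Tasks;

class ContinueWithSketch
{
    static void Main()
    {
        // The antecedent task returns no result...
        Task antecedent = Task.Factory.StartNew(() => Console.WriteLine("working"));

        // ...while the continuation receives it and produces a string.
        Task<string> continuation =
            antecedent.ContinueWith(t => "antecedent status: " + t.Status);

        Console.WriteLine(continuation.Result);
    }
}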
using System;
using System.Threading;
using System.Threading.Tasks;

class ContinuationSimpleDemo
{
    static void Main()
    {
        Action<string> action = (str) =>
            Console.WriteLine("Task={0}, str={1}, Thread={2}",
                              Task.CurrentId, str, Thread.CurrentThread.ManagedThreadId);

        // Creating a sequence of action tasks (that return no result).
        Console.WriteLine("Creating a sequence of action tasks (that return no result)");
        Task.Factory.StartNew(() => action("alpha"))
            .ContinueWith(antecedent => action("beta"))  // Antecedent data is ignored
            .ContinueWith(antecedent => action("gamma"))
            .Wait();

        Func<int, int> negate = (n) =>
        {
            Console.WriteLine("Task={0}, n={1}, -n={2}, Thread={3}",
                              Task.CurrentId, n, -n, Thread.CurrentThread.ManagedThreadId);
            return -n;
        };

        // Creating a sequence of function tasks where each continuation uses the result from its antecedent
        Console.WriteLine("\nCreating a sequence of function tasks where each continuation uses the result from its antecedent");
        Task<int>.Factory.StartNew(() => negate(5))
            .ContinueWith(antecedent => negate(antecedent.Result))  // Antecedent result feeds into continuation
            .ContinueWith(antecedent => negate(antecedent.Result))
            .Wait();

        // Creating a sequence of tasks where you can mix and match the types
        Console.WriteLine("\nCreating a sequence of tasks where you can mix and match the types");
        Task<int>.Factory.StartNew(() => negate(6))
            .ContinueWith(antecedent => action("x"))
            .ContinueWith(antecedent => negate(7))
            .Wait();
    }
}
|
https://msdn.microsoft.com/en-us/library/dd321405.aspx
|
CC-MAIN-2017-30
|
en
|
refinedweb
|
No empty line before a closing curly
Bad:
Main(null);

}
No empty line after an opening curly
Bad:
class Program
{

    static void Main(string[] args)
One empty line between same level type declarations
namespace Animals
{
    class Animal
    {
    }

    class Giraffe : Animal
    {
    }
}
One empty line between members of a type
class Animal
{
    public Animal()
    {
    }

    public void Eat(object food)
    {
    }

    public string Name { get; set; }
}
Whereas it’s OK to group single-line members:
class Customer
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string EMail { get; set; }

    public void Notify(string message)
    {
    }
}

An empty line after #region and before #endregion:
class Customer
{
    #region Public properties

    public string Name { get; set; }
    public int Age { get; set; }
    public string EMail { get; set; }

    #endregion
}

"I don't remember seeing any explicit guidelines on whitespace formatting for C# programs"
Well, Stylecop is trying to enforce these rules. I hope that more and more people will start adopting these.
>Usually #regions contain type members or whole types, less often parts of a method body.
I'd say that a region wrapping a part of a method body should never, ever be encouraged.
I do have to ask about the #regions. Do you actually write code like that? When I first saw #regions I thought they sounded like a great tool for organising code. I tried it for a while and found I really didn't like it. I hated not knowing whether I wasn't seeing just a bit of code or a whole lot. I got into the habit of toggling them all open every time I opened a file. But it was too late. Everybody else started doing it and I couldn't persuade them to stop. Now I think they should really just be reserved for files with sections that need to be processed by automated tools.
Apart from that diversion, I think your guidelines for vertical whitespace fit with what I'd do naturally.
Yes, I use regions in my code.
Not on the method level though - only to group type members. As I usually don't have more than one type per file, I don't have to group types.
Also, if I have a large type, I usually split it into parts using partial classes and name them like: Drawing.cs, Drawing.Serialization.cs, Drawing.Coordinates.cs, Drawing.Painting.cs, etc.
I don't mind seeing a line or two at the end or start of a class, but since those lines are useless you might as well leave them out.
Like you I place a line between classes, between methods and between properties, except for single-line properties. I do not usually insert lines between fields.
But unlike you I always allow myself to insert an extra blank line if that either helps to separate members that belong to a different logical part of a class, or if it increases the readability by making the code less of a wall of text.
For example, in a Server class I might separate networking-related fields like a Socket and some networking settings from fields like threads and synchronisation objects.
I also often insert two blank lines (instead of the usual single line) between a block of methods and a block of properties, so that it's easier to see where the methods end and the properties start. The same goes for inserting two lines between public methods and private methods, so that I can more easily see where the implementation details start.
I'd rather have a few blank lines too many than a few too little.
What about whitespace within methods? 🙂
In languages where whitespace does not matter, I honestly don't believe it should be saved with the file back to source control. The editor should be able to display code using fairly basic rules to suit you.
Hey Kirill,
Do you have any ideas if/when Visual Studio will start shipping StyleCop compliant [template] code? I can understand why the BCL isn't necessarily style-compliant but I believe it's critical to legitimize C# coding standards.
The strongest push back I have from some people on my team regarding style compliance is "Visual Studio doesn't do it that way".
Cheers,
Navid
With tools like Resharper and their functionality like Cleanup Code, I'd ask: why bother defining guidelines like this?
@Peter:
Well, firstly, someone has to *write* tools like Resharper 😉 I'm an IDE developer, I have to think about these things.
Besides, it was fun to observe these things and ask other people about their experiences.
Finally, a lot of people don't use tools like Resharper. And a great deal of them format their code without any sort of consistency. This blog post hopes to raise awareness about the issue: either use tools like Resharper or format your code yourself.
You'll be surprised how much poorly formatted code is out there. I've noticed that by default people who just begin to learn how to program don't pay any attention to this, which results in poorly formatted programs.
Navid: if you want, feel free to send me a list of where Visual Studio violates these guidelines and I will log all of them as bugs to be fixed.
Also, feel free to log bugs yourself at, especially after we release Visual Studio 2010 Beta 1.
Betty: agreed. Unfortunately the tools aren't there yet. I personally believe that the source code should be stored in a database in pre-parsed state, not text files.
I don't agree on the empty lines after #region and before #endregion. If you leave them out, #region/#endregion are more closely related to the things they are grouping. Compare this to a method definition, you wouldn't write:
void SomeMethod()
{

    var x = 10;

}
Max, your approach is valid too. My guidelines are only enforced within my team 🙂
If your team chooses your way - great. Just be consistent.
The thing to remember is that there is a fine line between "standards" and aesthetically pleasing code. In the same way that one person can enjoy looking at something nice, another person may not think the target quite so appealing. Beauty is in the Eye of the Beholder (a great game btw 😀 ).
|
https://blogs.msdn.microsoft.com/kirillosenkov/2009/03/12/kirills-whitespace-guidelines-for-c/
|
CC-MAIN-2017-30
|
en
|
refinedweb
|
Week 11: Input Devices
This week's tasks:
- Group assignment:
- Measuring the analog levels and digital signals in an input device
- Link to the Group Assignment Page
- Individual assignment:
- Measuring something: adding a sensor to a microcontroller board that we have designed and reading it
Challenges:
- Soldering was more challenging this time. There was very little space for soldering the legs of the ATtiny 45. So it might be a good idea to allow more space for very tiny components in the board design.
- Since I am still not good enough with Eagle, it was a bit difficult to fix the problem of overlapping nets.
- Debugging the issue with the board was quite challenging and demanded constant iterations of checking the soldering, checking the cable setting, checking the schematic and other things.
- I used a soldering iron that had a thick tip, so I unintentionally spread solder on parts of the board where it was not supposed to be. So I had lots of challenges cleaning it up :-D
Considerations:
- Be careful with the orientation of the PHOTOTRANSISTOR.
- Do not be disappointed if your board does not look nice because of an awful soldering session. It might still work: try to identify issues by careful and sharp eye examination, and especially ask the opinion of an expert ;-) Luckily we have lots of helpful and knowledgeable instructors in Fab Lab Oulu and they are always ready to guide us. For this week, thanks to Juha, who guided us very well.
Technology used:
- Atmel Studio
- Eagle
- Milling Machine
- Soldering Iron
- Multimeter
My final project does not have any sensor, so to experiment with this week's task I decided to try Neil's design for the light sensor.
Designing the Board in Eagle
- Pin configuration for ATtiny 45 (ATtiny 45 DATASHEET)
- Neil's Board
- The Schematic:
- When I created the board and started to move the components, orienting them was difficult, so as Jari, a fab lab instructor, suggested, I added 2 other components: one for ground, attached to the grounds, and the other for voltage, attached to the VCCs. This way the grounds and VCCs are named in my board view and orienting the components is easier.
- I continuously used "Ripup; ratsnest; AutoRouter;" for replacing the components and optimizing the nets. And finally I succeeded in the optimization.
- Preparations for exporting the traces and outlines as I explained the process in detail in Electronics Design week.
- Since 2 of the names were too long and ran outside the document border, I also overwrote the device name for them in order to fit them in my desired space.
- Here is the result of the board design:
- The image of the traces:
- The image of the outline:
Milling and Soldering
Since I have explained in detail the process of milling and soldering in Electronics Production Week, I just add some pictures as the result here:
- The milled board:
- The list of components:
- 2x Resistor 10K
- Phototransistor
- Capacitor 1UF
- FTDI Header
- ATtiny 45
- AVRISPSMD
- The soldered board:
- Then I tested the board with a multimeter for short circuits and it was OK.
Programming the Board
- Atmel studio (run as administrator) > create new project > selecting ATtiny 45
- I wanted to choose 'External Tools' from the 'Tools' tab but it did not appear. So with the help of our instructor, Juha, we figured out that the problem was that the profile was set to standard. So I changed it to 'Advanced'. (Tools > Select Profile > Advanced)
- Now Tools > External Tools:
- Title: AVRDUDE
- Command: C:\Program Files (x86)\avrdude\avrdude.exe
- Arguments: -p t45 -P usb -c usbtiny -U flash:w:$(TargetDir)$(TargetName).hex:i
- Initial Directory: C:\Program Files (x86)\avrdude\
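With these values, the tool effectively runs a command like the following; the week11 target name is just an example of what $(TargetDir)$(TargetName) expands to:

"C:\Program Files (x86)\avrdude\avrdude.exe" -p t45 -P usb -c usbtiny -U flash:w:week11.hex:i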
- Then I copied Neil's code for hello.light.45.c
- Then I added
#define F_CPU 8000000UL
before the following line:
#include <util/delay.h>
to tell the compiler the speed of the ATtiny45 microcontroller; this information about the internal clock frequency was found in the datasheet. The top of the file then looks as sketched below.
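A minimal sketch of the resulting top of the file, assuming the include order from Neil's hello.light.45.c:

#include <avr/io.h>

#define F_CPU 8000000UL   // 8 MHz internal clock of the ATtiny45, per the datasheet
#include <util/delay.h>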
- Then I connected the board and programmer and computer (using FTDI cable and programming cable)
- Then in Atmel Studio: Build > Compile
- Then Tools > AVRDUDE. And I had the following error:
initialization failed, rc=-1
Double check connections and try again, or use -F to override this check.
- Now it was time to debug the issue.
- I used our instructor’s (Juha) board and programmed it with my computer and it worked. So we came to the conclusion that there might be a problem with my board.
- With Juha, we checked the soldered components. We identified some issues (e.g. a leg of the ATtiny 45 that was not soldered properly to the board) that could be the problem, and we re-soldered them. I checked again with (Tools > AVRDUDE) and it was still not working; I had the same error.
- With Juha, we checked the schematic again and it was OK.
- As Juha suggested, I changed the orientation of the programmer and it worked. So I could say re-soldering and changing the orientation of the programming cable fixed this problem.
- The following picture shows the right orientation of my setting. (I put it here for my record)
- Now I installed Python for testing the light sensor. I first installed 'Python 3.6.5' but then I replaced it with 'Python 2.7.14', because at some point I faced a problem and Jari, another FabAcademy student, suggested that changing the version would help.
- Now I just connected FTDI cable to the sensor board and removed the programmer.
I saved 'hello.light.45.py' from Neil's page.
I saved it in a folder. Then I opened Command Prompt and typed the location of this file. Now I noticed (with the help of my friend Jari) that I needed to know the COM port number to complete this command. My Windows installation did not have the FTDI drivers installed, so I was not able to see the ports. I followed this documentation to install them, and then I could find Ports in Device Manager; mine was COM 3.
- Again in cmd going to the location that 'hello.light.45.py' is saved and typing
hello.light.45.py com3
And this time I had the following error:
No module named serial
This means the pyserial module was not installed.
- I installed pyserial. It is also good to have a look at the Detailed Document.
- Unzipping the downloaded folder. Command Prompt > Going to the location that Pyserial is saved and 'setup.py' is there. In my case it was this location: C:\Users\bnorouzi\Downloads\pyserial-3.4.tar\dist\pyserial-3.4.
- Now typing the following command:
python setup.py install
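Once pyserial is installed, a quick connectivity test is possible before running Neil's script. A minimal Python 2 sketch, using the COM port identified above:

import serial

# Print the raw bytes arriving from the board on COM3 at 9600 baud.
ser = serial.Serial('COM3', 9600)
while True:
    byte = ser.read()
    print ord(byte)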
- Then I came back to the location where 'hello.light.45.py' is saved and typed 'hello.light.45.py com3'. Again it did not work.
- My friend Jari gave me a suggestion based on his experience with the same issue: he believed that the problem was the orientation of the phototransistor. And he was right, because after the other Jari (our instructor) helped with re-soldering this component and I tested again, it worked as you see in the video :)
Original Design Files
light_sensor_outline.rml
Light_Sensor_Traces.rml
week11.brd
week11.sch
.atsln file for the modified code
.c file for the modified code
|
http://fabacademy.org/2018/labs/fablaboulu/students/behnaz-norouzi/week11.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Things to know about premium messaging
Happy summer!
We just wanted to check in and let you know a couple of things about the upcoming general availability of our premium messaging offering. As we have more and more customers join our preview program, we wanted to make sure that you are aware of a few benefits, as well as a couple "nice-to-know" requirements.
- 1 MB message size - We heard your feedback and premium messaging now supports message sizes up to 1 MB instead of 256 KB.
- Partitioning - Premium messaging uses 2 partitions.
- Partitioning was originally introduced for load distribution and availability by favoring healthy nodes. However, management operations can become more complex as the number of partitions increases. Since premium messaging offers dedicated capacity, partitioning is not useful for storage size either. Hence, we only use 2 partitions with premium messaging in order to increase availability.
- 80 GB is still supported by allowing up to 40 GB per partition
- Default API version - For premium namespaces, the new default REST API version is "2013-10"
- If you do not provide an API version, or a lower version than “2013-10”, we will override the value to "2013-10"
- Client - Please use the most up-to-date version of the .NET client available here.
Stay tuned for more blogs about the upcoming availability of premium messaging!
Questions? Let us know in the comments.
Happy messaging!
|
https://docs.microsoft.com/en-us/archive/blogs/servicebus/things-to-know-about-premium-messaging
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Serial Debugging on Arduino
October 20, 2015
Debugging an Arduino sketch usually happens in a way that is often referred to as “printf debugging”. It commonly works like this:
printf("here 1"); foo(); printf("here 2"); bar(); printf("here 3");
Let’s admit it, we all have written that kind of code. It’s not elegant, it’s not pretty but it often gets the job done. Especially on devices like Arduino where we normally deal with short sketches, printing information about the program’s state works reasonably well. But how can we receive this data?
Using Serial Monitor
Since Arduinos usually don’t come with a display where output can go to, we usually use serial (COM) communication to receive debug output. This is pretty straight forward through the Serial API. All we need to do is open the Serial port at a specified baud rate.
Serial.begin(9600);
Then we can output data like this.
Serial.println("hello");). All you need to check now is that the baud rate is set to the same value as we specified in the script (9600 in our case).
Generally, this is pretty useful. A problem arises though if we connect the RX / TX pins (pins 0 and 1) of the Arduino to another device or shield. Connecting and thus blocking these pins means we now cannot use the serial over USB communication through the UART mechanism any more.
Serial over USB is not available if the Arduino’s RX / TX pins are used.
So how can we debug (or rather print information from) our sketch now?
SoftwareSerial to the Rescue!
Luckily, there is a simple solution readily available for this scenario in the form of the library SoftwareSerial.
With the help of this library we can select two different GPIO pins and use them for emulated serial communication. The API is very similar to the original Serial API.
We need to include a header file first.
#include <SoftwareSerial.h>
Then create a SoftwareSerial instance and specify the GPIO pins that will be used as RX / TX pins.
SoftwareSerial softSerial(10, 11); // RX, TX
Again, we have to specify a baud rate for communication:
softSerial.begin(9600);
And then send data.
softSerial.println("Hello SoftwareSerial");
How can we receive this data?
Still we cannot simply receive the data using Serial Monitor, since it is expecting data from the actual UART RX / TX pins. So we basically have three options, two of them involving additional hardware.
Option 1: Use SoftwareSerial for hardware communication
This means if your actual serial RX / TX pins are allocated by some hardware device but you also need to write output to the debug monitor, use SoftwareSerial for the hardware device and send your debug information to the actual serial port.
This is a nice and easy solution, but it might not work in all cases. I e.g. had problems using SoftwareSerial for communication with the ESP8266 WIFI module. I just would not get reliable communication unless I switched back to hardware serial. That left me with options 2 and 3.
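A minimal sketch of that setup (pin numbers as in the earlier snippet; the attached device is generic):

#include <SoftwareSerial.h>

SoftwareSerial deviceSerial(10, 11); // RX, TX - wired to the external device

void setup() {
  Serial.begin(9600);        // hardware serial: debug output to Serial Monitor
  deviceSerial.begin(9600);  // software serial: talks to the external device
}

void loop() {
  // Echo every byte coming from the device to the debug console.
  if (deviceSerial.available()) {
    char c = deviceSerial.read();
    Serial.print("received: ");
    Serial.println(c);
  }
}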
Option 2: Use another Arduino
This is pretty simple as well. Usually, if you are tinkering with these devices you always have one or two (or dozens) of spare Arduinos at hand. So all you have to do is connect the SoftwareSerial GPIO pins of the first Arduino, to the RX / TX pins of the second one. Then make sure you have the right COM port selected (!) in Arduino Studio and open Serial Monitor for receiving data.
This picture shows this option in action. We can see that RX / TX pins are occupied by the WIFI module and how SoftwareSerial pins 10 / 11 are connected to the RX / TX pins of another Arduino Uno.
Option 3: Use an USB UART adapter
In case you have an USB UART adapter (or want to order one for 5-10€) this is also a really easy way of getting debug output from SoftwareSerial.
Simply connect SoftwareSerial pins to the RX / TX pins oft the USB UART adapter and also make sure you connect the ground (GND) pins.
This picture again shows how the wiring is done.
Now, all you need to do is connect the USB device to a PC and receive the data there.
I usually use the RaspberryPi on my desktop and the screen command to get this done.
$ screen /dev/ttyUSB0 9600
I hope that helped.
|
https://wolfgang-ziegler.com/Blog/serial-debugging-on-arduino
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Issue #5554 has been reported by Tsuyoshi Sawada.
Feature #5554: A method that applies self to a Proc if self is a Symbol
Author: Tsuyoshi Sawada
Status: Open
Priority: Normal
Assignee:
Category:
Target version:
Often, you want to apply a Proc to self if self is a Symbol, but do nothing otherwise. In this case, something I call Object#desymbolize may be convenient:
proc = ->sym{
  case sym
  when :small_icon then "16pt"
  when :medium_icon then "32pt"
  when :large_icon then "64pt"
  end
}

:small_icon.desymbolize(&proc) # => "16pt"
"18pt".desymbolize(&proc)      # => "18pt"
An implementation may be as follows:
class Object
def desymbolize; self end
end
class Symbol
def desymbolize(&pr); pr.call(self) end
end
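For comparison, the same behaviour is available without reopening core classes; a sketch with a hypothetical helper:

def desymbolize(value, &blk)
  value.is_a?(Symbol) ? blk.call(value) : value
end

desymbolize(:small_icon, &proc) # => "16pt"
desymbolize("18pt", &proc)      # => "18pt"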
|
https://www.ruby-forum.com/t/ruby-trunk-feature-5554-open-a-method-that-applies-self-to-a-proc-if-self-is-a-symbol/213461
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Specifies how much historical data a dds::pub::DataWriter and a dds::sub::DataReader can store. More...
#include <dds/core/policy/CorePolicy.hpp>
Specifies how much historical data a dds::pub::DataWriter and a dds::sub::DataReader can store.
This QoS policy specifies how much data must be stored by RTI Connext for a dds::pub::DataWriter or dds::sub::DataReader. It controls whether RTI Connext should deliver only the most recent value, attempt to deliver all intermediate values, or do something in between.
On the publishing side, this QoS policy controls the samples that should be maintained by the dds::pub::DataWriter on behalf of existing dds::sub::DataReader entities. The behavior with regards to a dds::sub::DataReader entities discovered after a sample is written is controlled by the DURABILITY policy.
On the subscribing side, this QoS policy controls the samples that should be maintained until the application "takes" them from RTI Connext.
This policy controls the behavior of RTI Connext when the value of an instance changes before it is finally communicated to dds::sub::DataReader entities.
When a dds::pub::DataWriter sends data, or a dds::sub::DataReader receives data, the data sent or received is stored in a cache whose contents are controlled by this QoS policy. This QoS policy interacts with dds::core::policy::Reliability by controlling whether RTI Connext guarantees that all of the sent data is received (dds::core::policy::HistoryKind::KEEP_ALL) or if only the last N data values sent are guaranteed to be received (dds::core::policy::HistoryKind::KEEP_LAST) - this is a reduced level of reliability.
The amount of data that is sent to new DataReaders who have configured their dds::core::policy::Durability to receive previously published data is also controlled by the History QoS policy.
Note that the History QoS policy does not control the physical sizes of the send and receive queues. The memory allocation for the queues is controlled by the dds::core::policy::ResourceLimits.
If kind is dds::core::policy::HistoryKind::KEEP_LAST (the default), then RTI Connext will only attempt to keep the latest values of the instance and discard the older ones. In this case, the value of depth regulates the maximum number of values (up to and including the most current one) RTI Connext will maintain and deliver. After N values have been sent or received, any new data will overwrite the oldest data in the queue. Thus the queue acts like a circular buffer of length N.
The default (and most common) setting for depth is 1, indicating that only the most recent value should be delivered.
If kind is dds::core::policy::HistoryKind::KEEP_ALL, then RTI Connext will attempt to maintain and deliver all the values of the instance to existing subscribers. The resources that RTI Connext can use to keep this history are limited by the settings of the RESOURCE_LIMITS. If the limit is reached, then the behavior of RTI Connext will depend on the RELIABILITY. If the Reliability kind is rti::core::policy::ReliabilityKind::BEST_EFFORT, then the old values will be discarded. If the Reliability kind is RELIABLE, then RTI Connext will block the dds::pub::DataWriter until it can deliver the necessary old values to all subscribers.
If refilter is RefilterKind::NOTHING, then samples written before a DataReader is matched to a DataWriter are not refiltered by the DataWriter.
If refilter is RefilterKind::EVERYTHING, then all samples written before a DataReader is matched to a DataWriter are refiltered by the DataWriter when the DataReader is matched.
If refilter is RefilterKind::ON_DEMAND, then a DataWriter will only refilter samples that a DataReader requests.
This QoS policy's depth must be consistent with the RESOURCE_LIMITS max_samples_per_instance. For these two QoS to be consistent, they must verify that depth <= max_samples_per_instance.
Creates a policy that keeps the last sample only.
Creates a policy with a specific history kind and optionally a history depth.
The history depth doesn't apply to HistoryKind::KEEP_ALL
Sets the history kind.
Specifies the kind of history to be kept.
[default] dds::core::policy::HistoryKind::KEEP_LAST
Gets the history kind.
Gets the history depth.
Sets the history depth.
Specifies the number of samples to be kept when the kind is dds::core::policy::HistoryKind::KEEP_LAST.
If a value other than 1 (the default) is specified, it should be consistent with the settings of the RESOURCE_LIMITS policy. That is:
depth <= dds::core::policy::ResourceLimits::max_samples_per_instance
When the kind is dds::core::policy::HistoryKind::KEEP_ALL, the depth has no effect. Its implied value is infinity (in practice limited by the settings of the RESOURCE_LIMITS policy).
[default] 1
[range] [1,100 million], <= dds::core::policy::ResourceLimits::max_samples_per_instance
<<extension>> Specifies how a writer should handle previously written samples to a new reader.
[default] RefilterKind::NOTHING
<<extension>> Getter (see setter with the same name)
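For illustration, a minimal sketch using the Modern C++ API; the user type Foo and the topic name are placeholders, and the exact calls should be checked against the RTI documentation:

#include <dds/dds.hpp>

// Create a DataWriter that keeps the last 10 samples per instance.
// Foo stands in for an IDL-generated user type.
void create_writer_with_history(dds::domain::DomainParticipant& participant,
                                dds::topic::Topic<Foo>& topic)
{
    dds::pub::qos::DataWriterQos writer_qos;
    writer_qos << dds::core::policy::History::KeepLast(10);

    dds::pub::Publisher publisher(participant);
    dds::pub::DataWriter<Foo> writer(publisher, topic, writer_qos);
}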
|
https://community.rti.com/static/documentation/connext-dds/5.2.0/doc/api/connext_dds/api_cpp2/classdds_1_1core_1_1policy_1_1History.html
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Comment on Tutorial - Types of EJB By aathishankaran
Comment Added by : shey-poun
Comment Added at : 2011-12-14 06:39:51
Comment on Tutorial : Types of EJB By aathishankaran
I appreciate the article for being straight to the point for everyone.
|
https://java-samples.com/showcomment.php?commentid=37208
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
!ENTITY sdot "⋅"> ]> Git - ginac.git/blob - doc/tutorial/ginac.texi git:// / ginac.git / blob commit grep author committer pickaxe ? search: re summary | shortlog | log | commit | commitdiff | tree history | raw | HEAD archive property example uses propinfovector [ginac.git] / doc / tutorial / ginac.texi 1 \input texinfo @c -*-texinfo-*- 2 @c %**start of header 3 @setfilename ginac.info 4 @settitle GiNaC, an open framework for symbolic computation within the C++ programming language 5 @setchapternewpage on 6 @afourpaper 7 @c For `info' only. 8 @paragraphindent 0 9 @c For TeX only. 10 @iftex 11 @c I hate putting "@noindent" in front of every paragraph. 12 @parindent=0pt 13 @end iftex 14 @c %**end of header 15 16 @include version.texi 17 18 @direntry 19 * ginac: (ginac). C++ library for symbolic computation. 20 @end direntry 21 22 @ifinfo 23 This is a tutorial that documents GiNaC @value{VERSION}, an open 24 framework for symbolic computation within the C++ programming language. 25 26 Copyright (C) 1999-2001 Johannes Gutenberg University Mainz, Germany 27 28 Permission is granted to make and distribute verbatim copies of 29 this manual provided the copyright notice and this permission notice 30 are preserved on all copies. 31 32 @ignore 33 Permission is granted to process this file through TeX and print the 34 results, provided the printed document carries copying permission 35 notice identical to this one except for the removal of this paragraph 36 37 @end ignore 38 Permission is granted to copy and distribute modified versions of this 39 manual under the conditions for verbatim copying, provided that the entire 40 resulting derived work is distributed under the terms of a permission 41 notice identical to this one. 42 @end ifinfo 43 44 @finalout 45 @c finalout prevents ugly black rectangles on overfull hbox lines 46 @titlepage 47 @title GiNaC @value{VERSION} 48 @subtitle An open framework for symbolic computation within the C++ programming language 49 @subtitle @value{UPDATED} 50 @author The GiNaC Group: 51 @author Christian Bauer, Alexander Frink, Richard Kreckel 52 53 @page 54 @vskip 0pt plus 1filll 55 Copyright @copyright{} 1999-2001 Johannes Gutenberg University Mainz, Germany 56 @sp 2 that the entire 63 resulting derived work is distributed under the terms of a permission 64 notice identical to this one. 65 @end titlepage 66 67 @page 68 @contents 69 70 @page 71 72 73 @node Top, Introduction, (dir), (dir) 74 @c node-name, next, previous, up 75 @top GiNaC 76 77 This is a tutorial that documents GiNaC @value{VERSION}, an open 78 framework for symbolic computation within the C++ programming language. 79 80 @menu 81 * Introduction:: GiNaC's purpose. 82 * A Tour of GiNaC:: A quick tour of the library. 83 * Installation:: How to install the package. 84 * Basic Concepts:: Description of fundamental classes. 85 * Methods and Functions:: Algorithms for symbolic manipulations. 86 * Extending GiNaC:: How to extend the library. 87 * A Comparison With Other CAS:: Compares GiNaC to traditional CAS. 88 * Internal Structures:: Description of some internal structures. 89 * Package Tools:: Configuring packages to work with GiNaC. 90 * Bibliography:: 91 * Concept Index:: 92 @end menu 93 94 95 @node Introduction, A Tour of GiNaC, Top, Top 96 @c node-name, next, previous, up 97 @chapter Introduction 98 @cindex history of GiNaC 99 100 The motivation behind GiNaC derives from the observation that most 101 present day computer algebra systems (CAS) are linguistically and 102 semantically impoverished. 
Although they are quite powerful tools for 103 learning math and solving particular problems they lack modern 104 linguistical structures that allow for the creation of large-scale 105 projects. GiNaC is an attempt to overcome this situation by extending a 106 well established and standardized computer language (C++) by some 107 fundamental symbolic capabilities, thus allowing for integrated systems 108 that embed symbolic manipulations together with more established areas 109 of computer science (like computation-intense numeric applications, 110 graphical interfaces, etc.) under one roof. 111 112 The particular problem that led to the writing of the GiNaC framework is 113 still a very active field of research, namely the calculation of higher 114 order corrections to elementary particle interactions. There, 115 theoretical physicists are interested in matching present day theories 116 against experiments taking place at particle accelerators. The 117 computations involved are so complex they call for a combined symbolical 118 and numerical approach. This turned out to be quite difficult to 119 accomplish with the present day CAS we have worked with so far and so we 120 tried to fill the gap by writing GiNaC. But of course its applications 121 are in no way restricted to theoretical physics. 122 123 This tutorial is intended for the novice user who is new to GiNaC but 124 already has some background in C++ programming. However, since a 125 hand-made documentation like this one is difficult to keep in sync with 126 the development, the actual documentation is inside the sources in the 127 form of comments. That documentation may be parsed by one of the many 128 Javadoc-like documentation systems. If you fail at generating it you 129 may access it from @uref{, the GiNaC home 130 page}. It is an invaluable resource not only for the advanced user who 131 wishes to extend the system (or chase bugs) but for everybody who wants 132 to comprehend the inner workings of GiNaC. This little tutorial on the 133 other hand only covers the basic things that are unlikely to change in 134 the near future. 135 136 @section License 137 The GiNaC framework for symbolic computation within the C++ programming 138 language is Copyright @copyright{} 1999-2001 Johannes Gutenberg 139 University Mainz, Germany. 140 141 This program is free software; you can redistribute it and/or 142 modify it under the terms of the GNU General Public License as 143 published by the Free Software Foundation; either version 2 of the 144 License, or (at your option) any later version. 145 146 This program is distributed in the hope that it will be useful, but 147 WITHOUT ANY WARRANTY; without even the implied warranty of 148 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 149 General Public License for more details. 150 151 You should have received a copy of the GNU General Public License 152 along with this program; see the file COPYING. If not, write to the 153 Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, 154 MA 02111-1307, USA. 155 156 157 @node A Tour of GiNaC, How to use it from within C++, Introduction, Top 158 @c node-name, next, previous, up 159 @chapter A Tour of GiNaC 160 161 This quick tour of GiNaC wants to arise your interest in the 162 subsequent chapters by showing off a bit. Please excuse us if it 163 leaves many open questions. 164 165 @menu 166 * How to use it from within C++:: Two simple examples. 167 * What it can do for you:: A Tour of GiNaC's features. 
@end menu


@node How to use it from within C++, What it can do for you, A Tour of GiNaC, A Tour of GiNaC
@c    node-name, next, previous, up
@section How to use it from within C++

The GiNaC open framework for symbolic computation within the C++ programming
language does not try to define a language of its own as conventional
CAS do.  Instead, it extends the capabilities of C++ by symbolic
manipulations.  Here is how to generate and print a simple (and rather
pointless) bivariate polynomial with some large coefficients:

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

int main()
@{
    symbol x("x"), y("y");
    ex poly;

    for (int i=0; i<3; ++i)
        poly += factorial(i+16)*pow(x,i)*pow(y,2-i);

    cout << poly << endl;
    return 0;
@}
@end example

Assuming the file is called @file{hello.cc}, on our system we can compile
and run it like this:

@example
$ c++ hello.cc -o hello -lcln -lginac
$ ./hello
355687428096000*x*y+20922789888000*y^2+6402373705728000*x^2
@end example

(@xref{Package Tools}, for tools that help you when creating a software
package that uses GiNaC.)

@cindex Hermite polynomial
Next, there is a more meaningful C++ program that calls a function which
generates Hermite polynomials in a specified free variable.

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

ex HermitePoly(const symbol & x, int n)
@{
    ex HKer=exp(-pow(x, 2));
    // uses the identity H_n(x) == (-1)^n exp(x^2) (d/dx)^n exp(-x^2)
    return normal(pow(-1, n) * diff(HKer, x, n) / HKer);
@}

int main()
@{
    symbol z("z");

    for (int i=0; i<6; ++i)
        cout << "H_" << i << "(z) == " << HermitePoly(z,i) << endl;

    return 0;
@}
@end example

When run, this will type out

@example
H_0(z) == 1
H_1(z) == 2*z
H_2(z) == 4*z^2-2
H_3(z) == -12*z+8*z^3
H_4(z) == -48*z^2+16*z^4+12
H_5(z) == 120*z-160*z^3+32*z^5
@end example

This method of generating the coefficients is of course far from optimal
for production purposes.

In order to show some more examples of what GiNaC can do we will now use
@command{ginsh}, a simple GiNaC interactive shell that provides a
convenient window into GiNaC's capabilities.


@node What it can do for you, Installation, How to use it from within C++, A Tour of GiNaC
@c    node-name, next, previous, up
@section What it can do for you

@cindex @command{ginsh}
After invoking @command{ginsh} one can test and experiment with GiNaC's
features much like in other Computer Algebra Systems except that it does
not provide programming constructs like loops or conditionals.  For a
concise description of the @command{ginsh} syntax we refer to its
accompanying man page.  Suffice it to say that assignments and comparisons
in @command{ginsh} are written as they are in C, i.e. @code{=} assigns and
@code{==} compares.

It can manipulate arbitrary precision integers in a very fast way.
Rational numbers are automatically converted to fractions of coprime
integers:

@example
> x=3^150;
369988485035126972924700782451696644186473100389722973815184405301748249
> y=3^149;
123329495011708990974900260817232214728824366796574324605061468433916083
> x/y;
3
> y/x;
1/3
@end example

Exact numbers are always retained as exact numbers and only evaluated as
floating point numbers if requested.  For instance, numeric radicals
are dealt with pretty much like symbols.  Products of sums of them can
be expanded:

@example
> expand((1+a^(1/5)-a^(2/5))^3);
1+3*a+3*a^(1/5)-5*a^(3/5)-a^(6/5)
> expand((1+3^(1/5)-3^(2/5))^3);
10-5*3^(3/5)
> evalf((1+3^(1/5)-3^(2/5))^3);
0.33408977534118624228
@end example

The function @code{evalf} that was used above converts any number in
GiNaC's expressions into floating point numbers.  This can be done to
arbitrary predefined accuracy:

@example
> evalf(1/7);
0.14285714285714285714
> Digits=150;
150
> evalf(1/7);
0.1428571428571428571428571428571428571428571428571428571428571428571428
5714285714285714285714285714285714285
@end example

Exact numbers other than rationals that can be manipulated in GiNaC
include predefined constants like Archimedes' @code{Pi}.  They can be
used both in symbolic manipulations (as exact numbers) and in numeric
expressions (as inexact numbers):

@example
> a=Pi^2+x;
x+Pi^2
> evalf(a);
9.869604401089358619+x
> x=2;
2
> evalf(a);
11.869604401089358619
@end example

Built-in functions evaluate immediately to exact numbers if
this is possible.  Conversions that can be safely performed are done
immediately; conversions that are not generally valid are not done:

@example
> cos(42*Pi);
1
> cos(acos(x));
x
> acos(cos(x));
acos(cos(x))
@end example

(Note that converting the last input to @code{x} would allow one to
conclude that @code{42*Pi} is equal to @code{0}.)

Systems of linear equations can be solved, along with basic linear
algebra manipulations over symbolic expressions.  In C++ GiNaC offers
a matrix class for this purpose (a taste of it follows below), but we
can see what it can do using @command{ginsh}'s notation of double
brackets to type matrices in:

@example
> lsolve(a+x*y==z,x);
y^(-1)*(z-a)
> lsolve([3*x+5*y == 7, -2*x+10*y == -5], [x, y]);
[x==19/8,y==-1/40]
> M = [[ [[1, 3]], [[-3, 2]] ]];
[[ [[1,3]], [[-3,2]] ]]
> determinant(M);
11
> charpoly(M,lambda);
lambda^2-3*lambda+11
@end example
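The same computations are available from C++.  The following is a
minimal sketch, not part of the original session: it assumes a
@code{matrix} constructor taking the dimensions and a @code{lst} of
entries (if your GiNaC version lacks it, fill the matrix element-wise
instead) and mirrors the @command{ginsh} session above:

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

int main()
@{
    symbol x("x"), y("y");

    // the same linear system as in the ginsh session above
    cout << lsolve(lst(3*x+5*y == 7, -2*x+10*y == -5), lst(x, y)) << endl;
    // solves to x==19/8, y==-1/40

    // a 2x2 matrix built row-wise from a list of entries
    matrix M(2, 2, lst(1, 3, -3, 2));
    cout << M.determinant() << endl;
    // -> 11
    return 0;
@}
@end example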
Multivariate polynomials and rational functions may be expanded,
collected and normalized (i.e. converted to a ratio of two coprime
polynomials):

@example
> a = x^4 + 2*x^2*y^2 + 4*x^3*y + 12*x*y^3 - 3*y^4;
-3*y^4+x^4+12*x*y^3+2*x^2*y^2+4*x^3*y
> b = x^2 + 4*x*y - y^2;
-y^2+x^2+4*x*y
> expand(a*b);
3*y^6+x^6-24*x*y^5+43*x^2*y^4+16*x^3*y^3+17*x^4*y^2+8*x^5*y
> collect(a*b,x);
3*y^6+48*x*y^4+2*x^2*y^2+x^4*(-y^2+x^2+4*x*y)+4*x^3*y*(-y^2+x^2+4*x*y)
> normal(a/b);
3*y^2+x^2
@end example

You can differentiate functions and expand them as Taylor or Laurent
series in a very natural syntax (the second argument of @code{series} is
a relation defining the evaluation point, the third specifies the
order):

@cindex Zeta function
@example
> diff(tan(x),x);
tan(x)^2+1
> series(sin(x),x==0,4);
x-1/6*x^3+Order(x^4)
> series(1/tan(x),x==0,4);
x^(-1)-1/3*x+Order(x^2)
> series(tgamma(x),x==0,3);
x^(-1)-Euler+(1/12*Pi^2+1/2*Euler^2)*x+
(-1/3*zeta(3)-1/12*Pi^2*Euler-1/6*Euler^3)*x^2+Order(x^3)
> evalf(");
x^(-1)-0.5772156649015328606+(0.9890559953279725555)*x
-(0.90747907608088628905)*x^2+Order(x^3)
> series(tgamma(2*sin(x)-2),x==Pi/2,6);
-(x-1/2*Pi)^(-2)+(-1/12*Pi^2-1/2*Euler^2-1/240)*(x-1/2*Pi)^2
-Euler-1/12+Order((x-1/2*Pi)^3)
@end example

Here we have made use of the @command{ginsh}-command @code{"} to pop the
previously evaluated element from @command{ginsh}'s internal stack.

If you ever wanted to convert units in C or C++ and found it
cumbersome, here is the solution.  Symbolic types can always be used as
tags for different types of objects.  Converting from imperial units to
the metric system is now easy:

@example
> in=.0254*m;
0.0254*m
> lb=.45359237*kg;
0.45359237*kg
> 200*lb/in^2;
140613.91592783185568*kg*m^(-2)
@end example


@node Installation, Prerequisites, What it can do for you, Top
@c    node-name, next, previous, up
@chapter Installation

@cindex CLN
GiNaC's installation follows the spirit of most GNU software.  It is
easily installed on your system in three steps: configuration, build,
installation.

@menu
* Prerequisites::                Packages upon which GiNaC depends.
* Configuration::                How to configure GiNaC.
* Building GiNaC::               How to compile GiNaC.
* Installing GiNaC::             How to install GiNaC on your system.
@end menu


@node Prerequisites, Configuration, Installation, Installation
@c    node-name, next, previous, up
@section Prerequisites

In order to install GiNaC on your system, some prerequisites need to be
met.  First of all, you need to have a C++ compiler adhering to the
ANSI standard @cite{ISO/IEC 14882:1998(E)}.  We used @acronym{GCC} for
development so if you have a different compiler you are on your own.
For the configuration to succeed you need a POSIX-compliant shell
installed in @file{/bin/sh}; GNU @command{bash} is fine.  Perl is needed
by the build process as well, since some of the source files are
automatically generated by Perl scripts.  Last but not least, Bruno
Haible's library @acronym{CLN} is extensively used and needs to be
installed on your system.  Please get it either from
@uref{}, from
@uref{, GiNaC's FTP site} or
from @uref{, Bruno Haible's FTP
site} (it is covered by the GPL) and install it prior to trying to
install GiNaC.  The configure script checks if it can find it and if it
cannot it will refuse to continue.
@node Configuration, Building GiNaC, Prerequisites, Installation
@c    node-name, next, previous, up
@section Configuration
@cindex configuration
@cindex Autoconf

To configure GiNaC means to prepare the source distribution for
building.  It is done via a shell script called @command{configure} that
is shipped with the sources and was originally generated by GNU
Autoconf.  Since a configure script generated by GNU Autoconf never
prompts, all customization must be done either via command line
parameters or environment variables.  It accepts a list of parameters,
the complete set of which can be listed by calling it with the
@option{--help} option.  The most important ones will be briefly
described in what follows:

@itemize @bullet

@item
@option{--disable-shared}: When given, this option switches off the
build of a shared library, i.e. a @file{.so} file.  This may be convenient
when developing because it considerably speeds up compilation.

@item
@option{--prefix=@var{PREFIX}}: The directory where the compiled library
and headers are installed.  It defaults to @file{/usr/local} which means
that the library is installed in the directory @file{/usr/local/lib},
the header files in @file{/usr/local/include/ginac} and the documentation
(like this one) into @file{/usr/local/share/doc/GiNaC}.

@item
@option{--libdir=@var{LIBDIR}}: Use this option in case you want to have
the library installed in some other directory than
@file{@var{PREFIX}/lib/}.

@item
@option{--includedir=@var{INCLUDEDIR}}: Use this option in case you want
to have the header files installed in some other directory than
@file{@var{PREFIX}/include/ginac/}.  For instance, if you specify
@option{--includedir=/usr/include} you will end up with the header files
sitting in the directory @file{/usr/include/ginac/}.  Note that the
subdirectory @file{ginac} is enforced by this process in order to
keep the header files separated from others.  This avoids some
clashes and allows for an easier deinstallation of GiNaC.  This ought
to be considered A Good Thing (tm).

@item
@option{--datadir=@var{DATADIR}}: This option may be given in case you
want to have the documentation installed in some other directory than
@file{@var{PREFIX}/share/doc/GiNaC/}.

@end itemize

In addition, you may specify some environment variables.  @env{CXX}
holds the path and the name of the C++ compiler in case you want to
override the default in your path.  (The @command{configure} script
searches your path for @command{c++}, @command{g++}, @command{gcc},
@command{CC}, @command{cxx} and @command{cc++} in that order.)  It may
be very useful to define some compiler flags with the @env{CXXFLAGS}
environment variable, like optimization, debugging information and
warning levels.  If omitted, it defaults to @option{-g -O2}.

The whole process is illustrated in the following two
examples.  (Substitute @command{setenv @var{VARIABLE} @var{value}} for
@command{export @var{VARIABLE}=@var{value}} if the Berkeley C shell is
your login shell.)
Here is a simple configuration for a site-wide GiNaC library assuming
everything is in default paths:

@example
$ export CXXFLAGS="-Wall -O2"
$ ./configure
@end example

And here is a configuration for a private static GiNaC library with
several components sitting in custom places (site-wide @acronym{GCC} and
private @acronym{CLN}).  The compiler is persuaded to be picky and full
assertions and debugging information are switched on:

@example
$ export CXX=/usr/local/gnu/bin/c++
$ export CPPFLAGS="$CPPFLAGS -I$HOME/include"
$ export CXXFLAGS="$CXXFLAGS -DDO_GINAC_ASSERT -ggdb -Wall -ansi -pedantic"
$ export LDFLAGS="$LDFLAGS -L$HOME/lib"
$ ./configure --disable-shared --prefix=$HOME
@end example


@node Building GiNaC, Installing GiNaC, Configuration, Installation
@c    node-name, next, previous, up
@section Building GiNaC
@cindex building GiNaC

After proper configuration you should just build the whole
library by typing
@example
$ make
@end example
at the command prompt and go for a cup of coffee.  The exact time it
takes to compile GiNaC depends not only on the speed of your machine
but also on other parameters, for instance what value for @env{CXXFLAGS}
you entered.  Optimization may be very time-consuming.

Just to make sure GiNaC works properly you may run a collection of
regression tests by typing

@example
$ make check
@end example

This will compile some sample programs, run them and check the output
for correctness.  The regression tests fall into three categories.  First,
the so-called @emph{exams} are performed, simple tests where some
predefined input is evaluated (like a pupil's exam).  Second, the
@emph{checks} test the coherence of results among each other with
possible random input.  Third, some @emph{timings} are performed, which
benchmark some predefined problems with different sizes and display the
CPU time used in seconds.  Each individual test should return a message
@samp{passed}.  This is mostly intended to be a QA-check that nothing
was broken during development, not a sanity check of your system.  Some
of the tests in sections @emph{checks} and @emph{timings} may require
insane amounts of memory and CPU time.  Feel free to kill them if your
machine catches fire.  Another quite important intent is to allow people
to fiddle around with optimization.

Generally, the top-level Makefile runs recursively to the
subdirectories.  It is therefore safe to go into any subdirectory
(@code{doc/}, @code{ginsh/}, ...) and simply type @code{make}
@var{target} there in case something went wrong.


@node Installing GiNaC, Basic Concepts, Building GiNaC, Installation
@c    node-name, next, previous, up
@section Installing GiNaC
@cindex installation

To install GiNaC on your system, simply type

@example
$ make install
@end example

As described in the section about configuration the files will be
installed in the following directories (the directories will be created
if they don't already exist):

@itemize @bullet

@item
@file{libginac.a} will go into @file{@var{PREFIX}/lib/} (or
@file{@var{LIBDIR}}) which defaults to @file{/usr/local/lib/}.
So will @file{libginac.so} unless the configure script was
given the option @option{--disable-shared}.  The proper symlinks
will be established as well.

@item
All the header files will be installed into @file{@var{PREFIX}/include/ginac/}
(or @file{@var{INCLUDEDIR}/ginac/}, if specified).

@item
All documentation (HTML and Postscript) will be stuffed into
@file{@var{PREFIX}/share/doc/GiNaC/} (or
@file{@var{DATADIR}/doc/GiNaC/}, if @var{DATADIR} was specified).

@end itemize

For the sake of completeness we will list some other useful make
targets (see the short session sketched below): @command{make clean}
deletes all files generated by @command{make}, i.e. all the object
files.  In addition @command{make distclean} removes all files generated
by the configuration and @command{make maintainer-clean} goes one step
further and deletes files that may require special tools to rebuild
(like the @command{libtool} for instance).  Finally @command{make
uninstall} removes the installed library, header files and
documentation@footnote{Uninstallation does not work after you have
called @command{make distclean} since the @file{Makefile} is itself
generated by the configuration from @file{Makefile.in} and hence deleted
by @command{make distclean}.  There are two obvious ways out of this
dilemma.  First, you can run the configuration again with the same
@var{PREFIX} thus creating a @file{Makefile} with a working
@samp{uninstall} target.  Second, you can do it by hand since you now
know where all the files went during installation.}.
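For illustration, here is a plausible cleanup sequence (a sketch; which
files are actually removed depends on your configuration):

@example
$ make uninstall    # remove installed library, headers and docs
$ make clean        # remove object files and other build products
$ make distclean    # additionally remove everything configure created
@end example

Note the order: as explained in the footnote above, @command{make
uninstall} no longer works once @command{make distclean} has deleted the
@file{Makefile}.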
@node Basic Concepts, Expressions, Installing GiNaC, Top
@c    node-name, next, previous, up
@chapter Basic Concepts

This chapter will describe the different fundamental objects that can be
handled by GiNaC.  But before doing so, it is worthwhile to introduce
the most commonly used class, the class of expressions, which represents
a flexible meta-class for storing all mathematical objects.

@menu
* Expressions::                  The fundamental GiNaC class.
* The Class Hierarchy::          Overview of GiNaC's classes.
* Symbols::                      Symbolic objects.
* Numbers::                      Numerical objects.
* Constants::                    Pre-defined constants.
* Fundamental containers::       The power, add and mul classes.
* Lists::                        Lists of expressions.
* Mathematical functions::       Mathematical functions.
* Relations::                    Equality, Inequality and all that.
* Indexed objects::              Handling indexed quantities.
@end menu


@node Expressions, The Class Hierarchy, Basic Concepts, Basic Concepts
@c    node-name, next, previous, up
@section Expressions
@cindex expression (class @code{ex})
@cindex @code{has()}

The most common class of objects a user deals with is the expression
@code{ex}, representing a mathematical object like a variable, number,
function, sum, product, etc...  Expressions may be put together to form
new expressions, passed as arguments to functions, and so on.  Here is a
little collection of valid expressions:

@example
ex MyEx1 = 5;                       // simple number
ex MyEx2 = x + 2*y;                 // polynomial in x and y
ex MyEx3 = (x + 1)/(x - 1);         // rational expression
ex MyEx4 = sin(x + 2*y) + 3*z + 41; // containing a function
ex MyEx5 = MyEx4 + 1;               // similar to above
@end example

Expressions are handles to other more fundamental objects that often
contain other expressions, thus creating a tree of expressions
(@xref{Internal Structures}, for particular examples).  Most methods on
@code{ex} therefore run top-down through such an expression tree.  For
example, the method @code{has()} scans recursively for occurrences of
something inside an expression.  Thus, if you have declared @code{MyEx4}
as in the example above @code{MyEx4.has(y)} will find @code{y} inside
the argument of @code{sin} and hence return @code{true}.
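As a minimal sketch (reusing the declarations from the collection above;
the boolean result prints as @samp{1} or @samp{0}):

@example
    ...
    symbol x("x"), y("y"), z("z");
    ex MyEx4 = sin(x + 2*y) + 3*z + 41;
    cout << MyEx4.has(y) << endl;   // -> 1, y is found inside sin()
    cout << MyEx4.has(42) << endl;  // -> 0, 41 is present but 42 is not
    ...
@end example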
The next sections will outline the general picture of GiNaC's class
hierarchy and describe the classes of objects that are handled by
@code{ex}.


@node The Class Hierarchy, Symbols, Expressions, Basic Concepts
@c    node-name, next, previous, up
@section The Class Hierarchy

GiNaC's class hierarchy consists of several classes representing
mathematical objects, all of which (except for @code{ex} and some
helpers) are internally derived from one abstract base class called
@code{basic}.  You do not have to deal with objects of class
@code{basic}, instead you'll be dealing with symbols, numbers,
containers of expressions and so on.

@cindex container
@cindex atom
To get an idea about what kinds of symbolic composites may be built we
have a look at the most important classes in the class hierarchy and
some of the relations among the classes:

@image{classhierarchy}

The abstract classes shown here (the ones without drop-shadow) are of no
interest for the user.  They are used internally in order to avoid code
duplication if two or more classes derived from them share certain
features.  An example is @code{expairseq}, a container for a sequence of
pairs each consisting of one expression and a number (@code{numeric}).
What @emph{is} visible to the user are the derived classes @code{add}
and @code{mul}, representing sums and products.  @xref{Internal
Structures}, where these two classes are described in more detail.  The
following table briefly summarizes what kinds of mathematical objects
are stored in the different classes:

@cartouche
@multitable @columnfractions .22 .78
@item @code{symbol} @tab Algebraic symbols @math{a}, @math{x}, @math{y}@dots{}
@item @code{constant} @tab Constants like
@tex
$\pi$
@end tex
@ifnottex
@math{Pi}
@end ifnottex
@item @code{numeric} @tab All kinds of numbers, @math{42}, @math{7/3*I}, @math{3.14159}@dots{}
@item @code{add} @tab Sums like @math{x+y} or @math{a-(2*b)+3}
@item @code{mul} @tab Products like @math{x*y} or @math{2*a^2*(x+y+z)/b}
@item @code{power} @tab Exponentials such as @math{x^2}, @math{a^b},
@tex
$\sqrt{2}$
@end tex
@ifnottex
@code{sqrt(}@math{2}@code{)}
@end ifnottex
@dots{}
@item @code{pseries} @tab Power Series, e.g. @math{x-1/6*x^3+1/120*x^5+O(x^7)}
@item @code{function} @tab A symbolic function like @math{sin(2*x)}
@item @code{lst} @tab Lists of expressions [@math{x}, @math{2*y}, @math{3+z}]
@item @code{matrix} @tab @math{n}x@math{m} matrices of expressions
@item @code{relational} @tab A relation like the identity @math{x}@code{==}@math{y}
@item @code{indexed} @tab Indexed object like @math{A_ij}
@item @code{tensor} @tab Special tensors like the delta and metric tensors
@item @code{idx} @tab Index of an indexed object
@item @code{varidx} @tab Index with variance
@end multitable
@end cartouche

@node Symbols, Numbers, The Class Hierarchy, Basic Concepts
@c    node-name, next, previous, up
@section Symbols
@cindex @code{symbol} (class)
@cindex hierarchy of classes

@cindex atom
Symbols are for symbolic manipulation what atoms are for chemistry.  You
can declare objects of class @code{symbol} as any other object simply by
saying @code{symbol x,y;}.  There is, however, a catch here, having to
do with the fact that C++ is a compiled language.  The information about
the symbol's name is thrown away by the compiler but at a later stage
you may want to print expressions holding your symbols.  In order to
avoid confusion GiNaC's symbols are able to know their own name.  This
is accomplished by declaring the name for output at construction time, in
the fashion @code{symbol x("x");}.  If you declare a symbol using the
default constructor (i.e. without a string argument) the system will deal
out a unique name.  That name may not be suitable for printing but it is
often good enough for internal routines when no output is desired.  We'll
come across examples of such symbols later in this tutorial.

This implies that the strings passed to symbols at construction time may
not be used for comparing two of them.  It is perfectly legitimate to
write @code{symbol x("x"),y("x");} but it is likely to lead to
trouble.  Here, @code{x} and @code{y} are different symbols and
statements like @code{x-y} will not be simplified to zero although the
output @code{x-x} looks funny.  Such output may also occur when there
are two different symbols in two scopes, for instance when a function
declares a symbol with a name that already exists in a symbol in the
calling function.  Again, comparing them (using @code{operator==}
for instance) will always reveal their difference (see the sketch
below).  Watch out, please.

@cindex @code{subs()}
Although symbols can be assigned expressions for internal reasons, you
should not do it (and we are not going to tell you how it is done).  If
you want to replace a symbol with something else in an expression, you
can use the expression's @code{.subs()} method (@xref{Substituting
Expressions}, for more information).
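A minimal sketch of the pitfall (assuming @code{ex::is_zero()}, which
tests whether an expression is identically zero):

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

int main()
@{
    symbol x("x"), y("x");   // two distinct symbols, same print name
    ex e = x - y;

    cout << e << endl;            // -> x-x, which looks like zero...
    cout << e.is_zero() << endl;  // -> 0, ...but is not
    return 0;
@}
@end example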
@node Numbers, Constants, Symbols, Basic Concepts
@c    node-name, next, previous, up
@section Numbers
@cindex @code{numeric} (class)

@cindex GMP
@cindex CLN
@cindex rational
@cindex fraction
For storing numerical things, GiNaC uses Bruno Haible's library
@acronym{CLN}.  The classes therein serve as foundation classes for
GiNaC.  @acronym{CLN} stands for Class Library for Numbers or
alternatively for Common Lisp Numbers.  In order to find out more about
@acronym{CLN}'s internals the reader is referred to the documentation of
that library.  @inforef{Introduction, , cln}, for more
information.  Suffice it to say that it is itself built on top of another
library, the GNU Multiple Precision library @acronym{GMP}, which is an
extremely fast library for arbitrarily long integers and rationals as
well as arbitrary-precision floating point numbers.  It is very commonly
used by several popular cryptographic applications.  @acronym{CLN} extends
@acronym{GMP} by several useful things: First, it introduces the complex
number field over either reals (i.e. floating point numbers with
arbitrary precision) or rationals.  Second, it automatically converts
rationals to integers if the denominator is unity and complex numbers to
real numbers if the imaginary part vanishes and also correctly treats
algebraic functions.  Third, it provides good implementations of
state-of-the-art algorithms for all trigonometric and hyperbolic
functions as well as for calculation of some useful constants.

The user can construct an object of class @code{numeric} in several
ways.  The following example shows the four most important constructors.
It uses construction from C-integer, construction of fractions from two
integers, construction from C-float and construction from a string:

@example
#include <ginac/ginac.h>
using namespace GiNaC;

int main()
@{
    numeric two(2);                     // exact integer 2
    numeric r(2,3);                     // exact fraction 2/3
    numeric e(2.71828);                 // floating point number
    numeric p("3.1415926535897932385"); // floating point number
    // Trott's constant in scientific notation:
    numeric trott("1.0841015122311136151E-2");

    std::cout << two*p << std::endl;    // floating point 6.283...
@}
@end example

Note that all those constructors are @emph{explicit}, which means you are
not allowed to write @code{numeric two=2;}.  This is because the basic
objects to be handled by GiNaC are the expressions @code{ex} and we want
to keep things simple and wish objects like @code{pow(x,2)} to be
handled the same way as @code{pow(x,a)}, which means that we need to
allow a general @code{ex} as base and exponent.  Therefore there is an
implicit constructor from C-integers directly to expressions handling
numerics at work in most of our examples.  This design really becomes
convenient when one declares one's own functions having more than one
parameter but it forbids using implicit constructors because that would
lead to compile-time ambiguities.

It may be tempting to construct numbers by writing @code{numeric r(3/2)}.
This would, however, call C's built-in operator @code{/} for integers
first and result in a numeric holding a plain integer 1.  @strong{Never
use the operator @code{/} on integers} unless you know exactly what you
are doing!  Use the constructor from two integers instead, as shown in
the example above.  Writing @code{numeric(1)/2} may look funny but also
works.

@cindex @code{Digits}
@cindex accuracy
We have seen now the distinction between exact numbers and floating
point numbers.  Clearly, the user should never have to worry about
dynamically created exact numbers, since their `exactness' always
determines how they ought to be handled, i.e. how `long' they are.  The
situation is different for floating point numbers.  Their accuracy is
controlled by one @emph{global} variable, called @code{Digits}.
(For
those readers who know about Maple: it behaves very much like Maple's
@code{Digits}.)  All objects of class numeric that are constructed from
then on will be stored with a precision matching that number of decimal
digits:

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

void foo()
@{
    numeric three(3.0), one(1.0);
    numeric x = one/three;

    cout << "in " << Digits << " digits:" << endl;
    cout << x << endl;
    cout << Pi.evalf() << endl;
@}

int main()
@{
    foo();
    Digits = 60;
    foo();
    return 0;
@}
@end example

The above example prints the following output to screen:

@example
in 17 digits:
0.333333333333333333
3.14159265358979324
in 60 digits:
0.333333333333333333333333333333333333333333333333333333333333333333
3.14159265358979323846264338327950288419716939937510582097494459231
@end example

It should be clear that objects of class @code{numeric} should be used
for constructing numbers or for doing arithmetic with them.  The objects
one deals with most of the time are the polymorphic expressions @code{ex}.

@subsection Tests on numbers

Once you have declared some numbers, assigned them to expressions and
done some arithmetic with them it is frequently desired to retrieve some
kind of information from them, like asking whether a number is
integer, rational, real or complex.  For those cases GiNaC provides
several useful methods.  (Internally, they fall back to invocations of
certain CLN functions.)

As an example, let's construct some rational number, multiply it with
some multiple of its denominator and test what comes out:

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

// some very important constants:
const numeric twentyone(21);
const numeric ten(10);
const numeric five(5);

int main()
@{
    numeric answer = twentyone;

    answer /= five;
    cout << answer.is_integer() << endl;  // false, it's 21/5
    answer *= ten;
    cout << answer.is_integer() << endl;  // true, it's 42 now!
@}
@end example

Note that the variable @code{answer} is constructed here as an integer
by @code{numeric}'s copy constructor but in an intermediate step it
holds a rational number represented as integer numerator and integer
denominator.  When multiplied by 10, the denominator becomes unity and
the result is automatically converted to a pure integer again.
Internally, the underlying @acronym{CLN} is responsible for this
behaviour and we refer the reader to @acronym{CLN}'s documentation.
Suffice it to say that the same behaviour applies to complex numbers as
well as return values of certain functions.  Complex numbers are
automatically converted to real numbers if the imaginary part becomes
zero.  The full set of tests that can be applied is listed in the
following table.
@cartouche
@multitable @columnfractions .30 .70
@item @strong{Method} @tab @strong{Returns true if the object is@dots{}}
@item @code{.is_zero()}
@tab @dots{}equal to zero
@item @code{.is_positive()}
@tab @dots{}not complex and greater than 0
@item @code{.is_integer()}
@tab @dots{}a (non-complex) integer
@item @code{.is_pos_integer()}
@tab @dots{}an integer and greater than 0
@item @code{.is_nonneg_integer()}
@tab @dots{}an integer and greater equal 0
@item @code{.is_even()}
@tab @dots{}an even integer
@item @code{.is_odd()}
@tab @dots{}an odd integer
@item @code{.is_prime()}
@tab @dots{}a prime integer (probabilistic primality test)
@item @code{.is_rational()}
@tab @dots{}an exact rational number (integers are rational, too)
@item @code{.is_real()}
@tab @dots{}a real integer, rational or float (i.e. is not complex)
@item @code{.is_cinteger()}
@tab @dots{}a (complex) integer (such as @math{2-3*I})
@item @code{.is_crational()}
@tab @dots{}an exact (complex) rational number (such as @math{2/3+7/2*I})
@end multitable
@end cartouche


@node Constants, Fundamental containers, Numbers, Basic Concepts
@c    node-name, next, previous, up
@section Constants
@cindex @code{constant} (class)

@cindex @code{Pi}
@cindex @code{Catalan}
@cindex @code{Euler}
@cindex @code{evalf()}
Constants behave pretty much like symbols except that they return some
specific number when the method @code{.evalf()} is called.

The predefined known constants are:

@cartouche
@multitable @columnfractions .14 .30 .56
@item @strong{Name} @tab @strong{Common Name} @tab @strong{Numerical Value (to 35 digits)}
@item @code{Pi}
@tab Archimedes' constant
@tab 3.14159265358979323846264338327950288
@item @code{Catalan}
@tab Catalan's constant
@tab 0.91596559417721901505460351493238411
@item @code{Euler}
@tab Euler's (or Euler-Mascheroni) constant
@tab 0.57721566490153286060651209008240243
@end multitable
@end cartouche
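For example, a constant stays symbolic until you explicitly evaluate it
(a minimal sketch; output abbreviated):

@example
    ...
    symbol x("x");
    ex e = Pi*pow(x, 2) + Euler;
    cout << e << endl;          // -> Euler+Pi*x^2, constants stay symbolic
    cout << e.evalf() << endl;  // -> 0.57721...+3.14159...*x^2
    ...
@end example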
@node Fundamental containers, Lists, Constants, Basic Concepts
@c    node-name, next, previous, up
@section Fundamental containers: the @code{power}, @code{add} and @code{mul} classes
@cindex polynomial
@cindex @code{add}
@cindex @code{mul}
@cindex @code{power}

Simple polynomial expressions are written down in GiNaC pretty much like
in other CAS or like expressions involving numerical variables in C.
The necessary operators @code{+}, @code{-}, @code{*} and @code{/} have
been overloaded to achieve this goal.  When you run the following
code snippet, the constructor for an object of type @code{mul} is
automatically called to hold the product of @code{a} and @code{b} and
then the constructor for an object of type @code{add} is called to hold
the sum of that @code{mul} object and the number one:

@example
    ...
    symbol a("a"), b("b");
    ex MyTerm = 1+a*b;
    ...
@end example

@cindex @code{pow()}
For exponentiation, you have already seen the somewhat clumsy (though C-ish)
statement @code{pow(x,2);} to represent @code{x} squared.  This direct
construction is necessary since we cannot safely overload the operator
@code{^} in C++ to construct a @code{power} object.  If we did, it would
have several counterintuitive and undesired effects:

@itemize @bullet
@item
Due to C's operator precedence, @code{2*x^2} would be parsed as @code{(2*x)^2}.
@item
Due to the binding of the operator @code{^}, @code{x^a^b} would result in
@code{(x^a)^b}.  This would be confusing since most (though not all) other CAS
interpret this as @code{x^(a^b)}.
@item
Also, expressions involving integer exponents are very frequently used,
which makes it even more dangerous to overload @code{^} since it is then
hard to distinguish between the semantics as exponentiation and the one
for exclusive or.  (It would be embarrassing to return @code{1} where one
has requested @code{2^3}.)
@end itemize

@cindex @command{ginsh}
All these effects are contrary to mathematical notation and differ from
the way most other CAS handle exponentiation, therefore overloading @code{^}
is ruled out for GiNaC's C++ part.  The situation is different in
@command{ginsh}, where the exponentiation-@code{^} exists.  (Also note
that the other frequently used exponentiation operator @code{**} does
not exist at all in C++.)

To be somewhat more precise, objects of the three classes described
here are all containers for other expressions.  An object of class
@code{power} is best viewed as a container with two slots, one for the
base, one for the exponent.  All valid GiNaC expressions can be
inserted.  However, basic transformations like simplifying
@code{pow(pow(x,2),3)} to @code{x^6} automatically are only performed
when this is mathematically possible.  If we replace the outer exponent
three in the example by some symbol @code{a}, the simplification is not
safe and will not be performed, since @code{a} might be @code{1/2} and
@code{x} negative.

Objects of type @code{add} and @code{mul} are containers with an
arbitrary number of slots for expressions to be inserted.  Again, simple
and safe simplifications are carried out like transforming
@code{3*x+4-x} to @code{2*x+4}.

The general rule is that when you construct such objects, GiNaC
automatically creates them in canonical form, which might differ from
the form you typed in your program.  This allows for rapid comparison of
expressions, since after all @code{a-a} is simply zero.  Note that the
canonical form is not necessarily lexicographical ordering or in any way
easily guessable.  It is only guaranteed that constructing the same
expression twice, either implicitly or explicitly, results in the same
canonical form.
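A minimal sketch of these automatic transformations (the exact ordering
of terms in the output is up to the canonical form, so it may differ
from what is shown in the comments):

@example
    ...
    symbol a("a"), x("x");
    cout << 3*x+4-x << endl;            // -> 4+2*x (or 2*x+4)
    cout << (a-a) << endl;              // -> 0
    cout << pow(pow(x, 2), 3) << endl;  // -> x^6, safe for all x
    cout << pow(pow(x, 2), a) << endl;  // -> (x^2)^a, not simplified
    ...
@end example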
@node Lists, Mathematical functions, Fundamental containers, Basic Concepts
@c    node-name, next, previous, up
@section Lists of expressions
@cindex @code{lst} (class)
@cindex lists
@cindex @code{nops()}
@cindex @code{op()}
@cindex @code{append()}
@cindex @code{prepend()}

The GiNaC class @code{lst} serves for holding a list of arbitrary expressions.
These are sometimes used to supply a variable number of arguments of the same
type to GiNaC methods such as @code{subs()} and @code{to_rational()}, so you
should have a basic understanding of them.

Lists of up to 15 expressions can be directly constructed from single
expressions:

@example
@{
    symbol x("x"), y("y");
    lst l(x, 2, y, x+y);
    // now, l is a list holding the expressions 'x', '2', 'y', and 'x+y'
    // ...
@end example

Use the @code{nops()} method to determine the size (number of expressions) of
a list and the @code{op()} method to access individual elements:

@example
    // ...
    cout << l.nops() << endl;                  // prints '4'
    cout << l.op(2) << " " << l.op(0) << endl; // prints 'y x'
    // ...
@end example

Finally, you can append or prepend an expression to a list with the
@code{append()} and @code{prepend()} methods:

@example
    // ...
    l.append(4*x);   // l is now [x, 2, y, x+y, 4*x]
    l.prepend(0);    // l is now [0, x, 2, y, x+y, 4*x]
@}
@end example


@node Mathematical functions, Relations, Lists, Basic Concepts
@c    node-name, next, previous, up
@section Mathematical functions
@cindex @code{function} (class)
@cindex trigonometric function
@cindex hyperbolic function

There are quite a number of useful functions hard-wired into GiNaC.  For
instance, all trigonometric and hyperbolic functions are implemented
(@xref{Built-in Functions}, for a complete list).

These functions are all objects of class @code{function}.  They accept
one or more expressions as arguments and return one expression.  If the
arguments are not numerical, the evaluation of the function may be
halted, as in the next example, which shows how a function returns
itself twice and finally an expression that may be really useful:

@cindex Gamma function
@cindex @code{subs()}
@example
    ...
    symbol x("x"), y("y");
    ex foo = x+y/2;
    cout << tgamma(foo) << endl;
     // -> tgamma(x+(1/2)*y)
    ex bar = foo.subs(y==1);
    cout << tgamma(bar) << endl;
     // -> tgamma(x+1/2)
    ex foobar = bar.subs(x==7);
    cout << tgamma(foobar) << endl;
     // -> (135135/128)*Pi^(1/2)
    ...
@end example

Besides evaluation most of these functions allow differentiation, series
expansion and so on.  Read the next chapter in order to learn more about
this.


@node Relations, Indexed objects, Mathematical functions, Basic Concepts
@c    node-name, next, previous, up
@section Relations
@cindex @code{relational} (class)

Sometimes, a relation holding between two expressions must be stored
somehow.  The class @code{relational} is a convenient container for such
purposes.  A relation is by definition a container for two @code{ex} and
a relation between them that signals equality, inequality and so on.
They are created by simply using the C++ operators @code{==}, @code{!=},
@code{<}, @code{<=}, @code{>} and @code{>=} between two expressions.

@xref{Mathematical functions}, for examples where various applications
of the @code{.subs()} method show how objects of class relational are
used as arguments.  There they provide an intuitive syntax for
substitutions.  They are also used as arguments to the @code{ex::series}
method, where the left hand side of the relation specifies the variable
to expand in and the right hand side the expansion point.  They can also
be used for creating systems of equations that are to be solved for
unknown variables.  But the most common usage of objects of this class
is rather inconspicuous in statements of the form @code{if
(expand(pow(a+b,2))==a*a+2*a*b+b*b) @{...@}}.  Here, an implicit
conversion from @code{relational} to @code{bool} takes place.  Note,
however, that @code{==} here does not perform any simplifications, hence
@code{expand()} must be called explicitly.
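A small sketch of the first two uses, substitution and series expansion
(compare the @command{ginsh} session in the tour):

@example
    ...
    symbol x("x");
    cout << sin(x).subs(x == Pi/2) << endl;    // -> 1
    cout << sin(x).series(x == 0, 4) << endl;  // -> x-1/6*x^3+Order(x^4)
    ...
@end example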
@node Indexed objects, Methods and Functions, Relations, Basic Concepts
@c    node-name, next, previous, up
@section Indexed objects

GiNaC allows you to handle expressions containing general indexed objects in
arbitrary spaces.  It is also able to canonicalize and simplify such
expressions and perform symbolic dummy index summations.  There are a number
of predefined indexed objects provided, like delta and metric tensors.

There are few restrictions placed on indexed objects and their indices and
it is easy to construct nonsense expressions, but our intention is to
provide a general framework that allows you to implement algorithms with
indexed quantities, getting in the way as little as possible.

@cindex @code{idx} (class)
@cindex @code{indexed} (class)
@subsection Indexed quantities and their indices

Indexed expressions in GiNaC are constructed from two special types of
objects, @dfn{index objects} and @dfn{indexed objects}.

@itemize @bullet

@cindex contravariant
@cindex covariant
@cindex variance
@item Index objects are of class @code{idx} or a subclass.  Every index has
a @dfn{value} and a @dfn{dimension} (which is the dimension of the space
the index lives in) which can both be arbitrary expressions but are usually
a number or a simple symbol.  In addition, indices of class @code{varidx} have
a @dfn{variance} (they can be co- or contravariant).

@item Indexed objects are of class @code{indexed} or a subclass.  They
contain a @dfn{base expression} (which is the expression being indexed), and
one or more indices.

@end itemize

@strong{Note:} when printing expressions, covariant indices and indices
without variance are denoted @samp{.i} while contravariant indices are
denoted @samp{~i}.  In the following, we are going to use that notation in
the text so instead of @math{A^i_jk} we will write @samp{A~i.j.k}.  Index
dimensions are not visible in the output.

A simple example shall illustrate the concepts:

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

int main()
@{
    symbol i_sym("i"), j_sym("j");
    idx i(i_sym, 3), j(j_sym, 3);

    symbol A("A");
    cout << indexed(A, i, j) << endl;
     // -> A.i.j
    ...
@end example

The @code{idx} constructor takes two arguments, the index value and the
index dimension.  First we define two index objects, @code{i} and @code{j},
both with the numeric dimension 3.  The value of the index @code{i} is the
symbol @code{i_sym} (which prints as @samp{i}) and the value of the index
@code{j} is the symbol @code{j_sym} (which prints as @samp{j}).  Next we
construct an expression containing one indexed object, @samp{A.i.j}.  It has
the symbol @code{A} as its base expression and the two indices @code{i} and
@code{j}.
Note the difference between the indices @code{i} and @code{j} which are of
class @code{idx}, and the index values which are the symbols @code{i_sym}
and @code{j_sym}.  The indices of indexed objects cannot directly be symbols
or numbers but must be index objects.  For example, the following is not
correct and will raise an exception:

@example
symbol i("i"), j("j");
e = indexed(A, i, j); // ERROR: indices must be of type idx
@end example

You can have multiple indexed objects in an expression, index values can
be numeric, and index dimensions symbolic:

@example
    ...
    symbol B("B"), dim("dim");
    cout << 4 * indexed(A, i)
          + indexed(B, idx(j_sym, 4), idx(2, 3), idx(i_sym, dim)) << endl;
     // -> B.j.2.i+4*A.i
    ...
@end example

@code{B} has a 4-dimensional symbolic index @samp{j}, a 3-dimensional numeric
index of value 2, and a symbolic index @samp{i} with the symbolic dimension
@samp{dim}.  Note that GiNaC doesn't automatically notify you that the free
indices of @samp{A} and @samp{B} in the sum don't match (you have to call
@code{simplify_indexed()} for that, see below).

In fact, base expressions, index values and index dimensions can be
arbitrary expressions:

@example
    ...
    cout << indexed(A+B, idx(2*i_sym+1, dim/2)) << endl;
     // -> (B+A).(1+2*i)
    ...
@end example

It's also possible to construct nonsense like @samp{Pi.sin(x)}.  You will not
get an error message from this but you will probably not be able to do
anything useful with it.

@cindex @code{get_value()}
@cindex @code{get_dimension()}
The methods

@example
ex idx::get_value(void);
ex idx::get_dimension(void);
@end example

return the value and dimension of an @code{idx} object.  If you have an index
in an expression, such as returned by calling @code{.op()} on an indexed
object, you can get a reference to the @code{idx} object with the function
@code{ex_to_idx()} on the expression.

There are also the methods

@example
bool idx::is_numeric(void);
bool idx::is_symbolic(void);
bool idx::is_dim_numeric(void);
bool idx::is_dim_symbolic(void);
@end example

for checking whether the value and dimension are numeric or symbolic
(non-numeric).  Using the @code{info()} method of an index (see
@ref{Information About Expressions}) returns information about the index
value.
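A quick sketch of these accessors in action:

@example
    ...
    symbol i_sym("i");
    idx i(i_sym, 3), two(2, 3);

    cout << i.get_value() << endl;      // -> i
    cout << i.get_dimension() << endl;  // -> 3
    cout << i.is_symbolic() << endl;    // -> 1
    cout << two.is_numeric() << endl;   // -> 1
    ...
@end example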
@cindex @code{varidx} (class)
If you need co- and contravariant indices, use the @code{varidx} class:

@example
    ...
    symbol mu_sym("mu"), nu_sym("nu");
    varidx mu(mu_sym, 4), nu(nu_sym, 4); // default is contravariant ~mu, ~nu
    varidx mu_co(mu_sym, 4, true);       // covariant index .mu

    cout << indexed(A, mu, nu) << endl;
     // -> A~mu~nu
    cout << indexed(A, mu_co, nu) << endl;
     // -> A.mu~nu
    cout << indexed(A, mu.toggle_variance(), nu) << endl;
     // -> A.mu~nu
    ...
@end example

A @code{varidx} is an @code{idx} with an additional flag that marks it as
co- or contravariant.  The default is a contravariant (upper) index, but
this can be overridden by supplying a third argument to the @code{varidx}
constructor.  The two methods

@example
bool varidx::is_covariant(void);
bool varidx::is_contravariant(void);
@end example

allow you to check the variance of a @code{varidx} object (use
@code{ex_to_varidx()} to get the object reference from an expression).
There's also the very useful method

@example
ex varidx::toggle_variance(void);
@end example

which makes a new index with the same value and dimension but the opposite
variance.  By using it you only have to define the index once.

@subsection Substituting indices

@cindex @code{subs()}
Sometimes you will want to substitute one symbolic index with another
symbolic or numeric index, for example when calculating one specific element
of a tensor expression.  This is done with the @code{.subs()} method, as it
is done for symbols (see @ref{Substituting Expressions}).

You have two possibilities here.  You can either substitute the whole index
by another index or expression:

@example
    ...
    ex e = indexed(A, mu_co);
    cout << e << " becomes " << e.subs(mu_co == nu) << endl;
     // -> A.mu becomes A~nu
    cout << e << " becomes " << e.subs(mu_co == varidx(0, 4)) << endl;
     // -> A.mu becomes A~0
    cout << e << " becomes " << e.subs(mu_co == 0) << endl;
     // -> A.mu becomes A.0
    ...
@end example

The third example shows that trying to replace an index with something that
is not an index will substitute the index value instead.

Alternatively, you can substitute the @emph{symbol} of a symbolic index by
another expression:

@example
    ...
    ex e = indexed(A, mu_co);
    cout << e << " becomes " << e.subs(mu_sym == nu_sym) << endl;
     // -> A.mu becomes A.nu
    cout << e << " becomes " << e.subs(mu_sym == 0) << endl;
     // -> A.mu becomes A.0
    ...
@end example

As you see, with the second method only the value of the index will get
substituted.  Its other properties, including its dimension, remain unchanged.
If you want to change the dimension of an index you have to substitute the
whole index by another one with the new dimension.

Finally, substituting the base expression of an indexed object works as
expected:

@example
    ...
    ex e = indexed(A, mu_co);
    cout << e << " becomes " << e.subs(A == A+B) << endl;
     // -> A.mu becomes (B+A).mu
    ...
@end example

@subsection Symmetries

Indexed objects can be declared as being totally symmetric or antisymmetric
with respect to their indices.  In this case, GiNaC will automatically bring
the indices into a canonical order which allows for some immediate
simplifications (note that in the second statement the @samp{B.j.j} term
vanishes because of the antisymmetry, and the remaining @samp{B.i.j} is
reordered with a sign flip):

@example
    ...
    cout << indexed(A, indexed::symmetric, i, j)
          + indexed(A, indexed::symmetric, j, i) << endl;
     // -> 2*A.j.i
    cout << indexed(B, indexed::antisymmetric, i, j)
          + indexed(B, indexed::antisymmetric, j, j) << endl;
     // -> -B.j.i
    cout << indexed(B, indexed::antisymmetric, i, j)
          + indexed(B, indexed::antisymmetric, j, i) << endl;
     // -> 0
    ...
@end example

@cindex @code{get_free_indices()}
@cindex Dummy index
@subsection Dummy indices

GiNaC treats certain symbolic index pairs as @dfn{dummy indices} meaning
that a summation over the index range is implied.
Symbolic indices which are
not dummy indices are called @dfn{free indices}.  Numeric indices are neither
dummy nor free indices.

To be recognized as a dummy index pair, the two indices must be of the same
class and dimension and their value must be the same single symbol (an index
like @samp{2*n+1} is never a dummy index).  If the indices are of class
@code{varidx}, they must also be of opposite variance.

The method @code{.get_free_indices()} returns a vector containing the free
indices of an expression.  It also checks that the free indices of the terms
of a sum are consistent:

@example
@{
    symbol A("A"), B("B"), C("C");

    symbol i_sym("i"), j_sym("j"), k_sym("k"), l_sym("l");
    idx i(i_sym, 3), j(j_sym, 3), k(k_sym, 3), l(l_sym, 3);

    ex e = indexed(A, i, j) * indexed(B, j, k) + indexed(C, k, l, i, l);
    cout << exprseq(e.get_free_indices()) << endl;
     // -> (.i,.k)
     // 'j' and 'l' are dummy indices

    symbol mu_sym("mu"), nu_sym("nu"), rho_sym("rho"), sigma_sym("sigma");
    varidx mu(mu_sym, 4), nu(nu_sym, 4), rho(rho_sym, 4), sigma(sigma_sym, 4);

    e = indexed(A, mu, nu) * indexed(B, nu.toggle_variance(), rho)
      + indexed(C, mu, sigma, rho, sigma.toggle_variance());
    cout << exprseq(e.get_free_indices()) << endl;
     // -> (~mu,~rho)
     // 'nu' is a dummy index, but 'sigma' is not

    e = indexed(A, mu, mu);
    cout << exprseq(e.get_free_indices()) << endl;
     // -> (~mu)
     // 'mu' is not a dummy index because it appears twice with the same
     // variance

    e = indexed(A, mu, nu) + 42;
    cout << exprseq(e.get_free_indices()) << endl; // ERROR
     // this will throw an exception:
     // "add::get_free_indices: inconsistent indices in sum"
@}
@end example

@cindex @code{simplify_indexed()}
@subsection Simplifying indexed expressions

In addition to the few automatic simplifications that GiNaC performs on
indexed expressions (such as re-ordering the indices of symmetric tensors
and calculating traces and convolutions of matrices and predefined tensors)
there is the method

@example
ex ex::simplify_indexed(void);
ex ex::simplify_indexed(const scalar_products & sp);
@end example

that performs some more expensive operations:

@itemize
@item it checks the consistency of free indices in sums in the same way
@code{get_free_indices()} does
@item it (symbolically) calculates all possible dummy index summations/contractions
with the predefined tensors (this will be explained in more detail in the
next section)
@item as a special case of dummy index summation, it can replace scalar products
of two tensors with a user-defined value
@end itemize

The last point is done with the help of the @code{scalar_products} class
which is used to store scalar products with known values (this is not an
arithmetic class, you just pass it to @code{simplify_indexed()}):

@example
@{
    symbol A("A"), B("B"), C("C"), i_sym("i");
    idx i(i_sym, 3);

    scalar_products sp;
    sp.add(A, B, 0); // A and B are orthogonal
    sp.add(A, C, 0); // A and C are orthogonal
    sp.add(A, A, 4); // A^2 = 4 (A has length 2)

    ex e = indexed(A + B, i) * indexed(A + C, i);
    cout << e << endl;
     // -> (B+A).i*(A+C).i

    cout << e.expand(expand_options::expand_indexed).simplify_indexed(sp)
         << endl;
     // -> 4+C.i*B.i
@}
@end example

@example
@{
    symbol A("A"), B("B"), C("C"), i_sym("i");
    idx i(i_sym, 3);

    scalar_products sp;
    sp.add(A, B, 0); // A and B are orthogonal
    sp.add(A, C, 0); // A and C are orthogonal
    sp.add(A, A, 4); // A^2 = 4 (A has length 2)

    ex e = indexed(A + B, i) * indexed(A + C, i);
    cout << e << endl;
     // -> (B+A).i*(A+C).i

    cout << e.expand(expand_options::expand_indexed).simplify_indexed(sp)
         << endl;
     // -> 4+C.i*B.i
@}
@end example

The @code{scalar_products} object @code{sp} acts as a storage for the
scalar products added to it with the @code{.add()} method. This method
takes three arguments: the two expressions of which the scalar product is
taken, and the expression to replace it with. After @code{sp.add(A, B, 0)},
@code{simplify_indexed()} will replace all scalar products of indexed
objects that have the symbols @code{A} and @code{B} as base expressions
with the single value 0. The number, type and dimension of the indices
don't matter; @samp{A~mu~nu*B.mu.nu} would also be replaced by 0.

@cindex @code{expand()}
The example above also illustrates a feature of the @code{expand()} method:
if passed the @code{expand_indexed} option it will distribute indices
over sums, so @samp{(A+B).i} becomes @samp{A.i+B.i}.

@cindex @code{tensor} (class)
@subsection Predefined tensors

Some frequently used special tensors such as the delta, epsilon and metric
tensors are predefined in GiNaC. They have special properties when
contracted with other tensor expressions and some of them have constant
matrix representations (they will evaluate to a number when numeric
indices are specified).

@cindex @code{delta_tensor()}
@subsubsection Delta tensor

The delta tensor takes two indices, is symmetric and has the matrix
representation @code{diag(1,1,1,...)}. It is constructed by the function
@code{delta_tensor()}:

@example
@{
    symbol A("A"), B("B");

    idx i(symbol("i"), 3), j(symbol("j"), 3),
        k(symbol("k"), 3), l(symbol("l"), 3);

    ex e = indexed(A, i, j) * indexed(B, k, l)
         * delta_tensor(i, k) * delta_tensor(j, l);
    cout << e.simplify_indexed() << endl;
     // -> B.i.j*A.i.j

    cout << delta_tensor(i, i) << endl;
     // -> 3
@}
@end example

@cindex @code{metric_tensor()}
@subsubsection General metric tensor

The function @code{metric_tensor()} creates a general symmetric metric
tensor with two indices that can be used to raise/lower tensor indices.
The metric tensor is denoted as @samp{g} in the output and if its indices
are of mixed variance it is automatically replaced by a delta tensor:

@example
@{
    symbol A("A");

    varidx mu(symbol("mu"), 4), nu(symbol("nu"), 4), rho(symbol("rho"), 4);

    ex e = metric_tensor(mu, nu) * indexed(A, nu.toggle_variance(), rho);
    cout << e.simplify_indexed() << endl;
     // -> A~mu~rho

    e = delta_tensor(mu, nu.toggle_variance()) * metric_tensor(nu, rho);
    cout << e.simplify_indexed() << endl;
     // -> g~mu~rho

    e = metric_tensor(mu.toggle_variance(), nu.toggle_variance())
      * metric_tensor(nu, rho);
    cout << e.simplify_indexed() << endl;
     // -> delta.mu~rho

    e = metric_tensor(nu.toggle_variance(), rho.toggle_variance())
      * metric_tensor(mu, nu) * (delta_tensor(mu.toggle_variance(), rho)
        + indexed(A, mu.toggle_variance(), rho));
    cout << e.simplify_indexed() << endl;
     // -> 4+A.rho~rho
@}
@end example

@cindex @code{lorentz_g()}
@subsubsection Minkowski metric tensor

The Minkowski metric tensor is a special metric tensor with a constant
matrix representation which is either @code{diag(1, -1, -1, ...)} (negative
signature, the default) or @code{diag(-1, 1, 1, ...)} (positive signature).
It is created with the function @code{lorentz_g()} (although it is output as
@samp{eta}):

@example
@{
    varidx mu(symbol("mu"), 4);

    ex e = delta_tensor(varidx(0, 4), mu.toggle_variance())
         * lorentz_g(mu, varidx(0, 4)); // negative signature
    cout << e.simplify_indexed() << endl;
     // -> 1

    e = delta_tensor(varidx(0, 4), mu.toggle_variance())
      * lorentz_g(mu, varidx(0, 4), true); // positive signature
    cout << e.simplify_indexed() << endl;
     // -> -1
@}
@end example

@subsubsection Epsilon tensor

The epsilon tensor is totally antisymmetric, its number of indices is equal
to the dimension of the index space (the indices must all be of the same
numeric dimension), and @samp{eps.1.2.3...} (resp. @samp{eps~0~1~2...}) is
defined to be 1. Its behaviour with indices that have a variance also
depends on the signature of the metric. Epsilon tensors are output as
@samp{eps}.

There are three functions defined to create epsilon tensors in 2, 3 and 4
dimensions:

@example
ex epsilon_tensor(const ex & i1, const ex & i2);
ex epsilon_tensor(const ex & i1, const ex & i2, const ex & i3);
ex lorentz_eps(const ex & i1, const ex & i2, const ex & i3, const ex & i4,
               bool pos_sig = false);
@end example

The first two functions create an epsilon tensor in 2 or 3 Euclidean
dimensions, the last function creates an epsilon tensor in a 4-dimensional
Minkowski space (the last @code{bool} argument specifies whether the metric
has negative or positive signature, as in the case of the Minkowski metric
tensor).
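
The total antisymmetry can be seen in a small sketch (hypothetical code; the
expected cancellation follows from the automatic canonicalization of
antisymmetric index orderings described in the section on symmetries):

@example
@{
    idx i(symbol("i"), 3), j(symbol("j"), 3), k(symbol("k"), 3);

    // swapping two indices flips the sign, so these terms should cancel
    cout << epsilon_tensor(i, j, k) + epsilon_tensor(j, i, k) << endl;
     // -> 0 (expected)
@}
@end example
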
@subsection Linear algebra

The @code{matrix} class can be used with indices to do some simple linear
algebra (linear combinations and products of vectors and matrices, traces
and scalar products):

@example
@{
    idx i(symbol("i"), 2), j(symbol("j"), 2);
    symbol x("x"), y("y");

    matrix A(2, 2, lst(1, 2, 3, 4)), X(2, 1, lst(x, y));

    cout << indexed(A, i, i) << endl;
     // -> 5

    ex e = indexed(A, i, j) * indexed(X, j);
    cout << e.simplify_indexed() << endl;
     // -> [[ [[2*y+x]], [[4*y+3*x]] ]].i

    e = indexed(A, i, j) * indexed(X, i) + indexed(X, j) * 2;
    cout << e.simplify_indexed() << endl;
     // -> [[ [[3*y+3*x,6*y+2*x]] ]].j
@}
@end example

You can of course obtain the same results with the @code{matrix::add()},
@code{matrix::mul()} and @code{matrix::trace()} methods but with indices you
don't have to worry about transposing matrices.

Matrix indices always start at 0 and their dimension must match the number
of rows/columns of the matrix. Matrices with one row or one column are
vectors and can have one or two indices (it doesn't matter whether it's a
row or a column vector). Other matrices must have two indices.

You should be careful when using indices with variance on matrices. GiNaC
doesn't look at the variance and doesn't know that @samp{F~mu~nu} and
@samp{F.mu.nu} are different matrices. In this case you should use only
one form for @samp{F} and explicitly multiply it with a matrix representation
of the metric tensor.


@node Methods and Functions, Information About Expressions, Indexed objects, Top
@c    node-name, next, previous, up
@chapter Methods and Functions
@cindex polynomial

In this chapter the most important algorithms provided by GiNaC will be
described. Some of them are implemented as functions on expressions,
others are implemented as methods provided by expression objects. If
they are methods, there also exists a wrapper function around them, so you
can alternatively call them in a functional way, as shown in this simple
example:

@example
...
cout << "As method: " << sin(1).evalf() << endl;
cout << "As function: " << evalf(sin(1)) << endl;
...
@end example

@cindex @code{subs()}
The general rule is that wherever methods accept one or more parameters
(@var{arg1}, @var{arg2}, @dots{}) the order of arguments the function
wrapper accepts is the same but preceded by the object to act on
(@var{object}, @var{arg1}, @var{arg2}, @dots{}). This approach is the
most natural one in an OO model but it may lead to confusion for MapleV
users because where they would type @code{A:=x+1; subs(x=2,A);} GiNaC
would require @code{A=x+1; subs(A,x==2);} (after proper declaration of
@code{A} and @code{x}). On the other hand, since MapleV returns 3 on
@code{A:=x^2+3; coeff(A,x,0);} (GiNaC: @code{A=pow(x,2)+3;
coeff(A,x,0);}) it is clear that MapleV is not trying to be consistent
here. Also, users of MuPAD will in most cases feel more comfortable
with GiNaC's convention. All function wrappers are implemented
as simple inline functions which just call the corresponding method and
are only provided for users uncomfortable with OO who are dead set to
avoid method invocations.
Generally, nested function wrappers are much
harder to read than a sequence of methods and should therefore be
avoided if possible. On the other hand, not everything in GiNaC is a
method on class @code{ex} and sometimes calling a function cannot be
avoided.

@menu
* Information About Expressions::
* Substituting Expressions::
* Polynomial Arithmetic::            Working with polynomials.
* Rational Expressions::             Working with rational functions.
* Symbolic Differentiation::
* Series Expansion::                 Taylor and Laurent expansion.
* Built-in Functions::               List of predefined mathematical functions.
* Input/Output::                     Input and output of expressions.
@end menu


@node Information About Expressions, Substituting Expressions, Methods and Functions, Methods and Functions
@c    node-name, next, previous, up
@section Getting information about expressions

@subsection Checking expression types
@cindex @code{is_ex_of_type()}
@cindex @code{ex_to_numeric()}
@cindex @code{ex_to_@dots{}}
@cindex @code{Converting ex to other classes}
@cindex @code{info()}

Sometimes it's useful to check whether a given expression is a plain number,
a sum, a polynomial with integer coefficients, or of some other specific type.
GiNaC provides two functions for this (the first one is actually a macro):

@example
bool is_ex_of_type(const ex & e, TYPENAME t);
bool ex::info(unsigned flag);
@end example

When the test made by @code{is_ex_of_type()} returns true, it is safe to
call one of the functions @code{ex_to_@dots{}}, where @code{@dots{}} is
one of the class names (@xref{The Class Hierarchy}, for a list of all
classes). For example, assuming @code{e} is an @code{ex}:

@example
@{
    @dots{}
    if (is_ex_of_type(e, numeric))
        numeric n = ex_to_numeric(e);
    @dots{}
@}
@end example

@code{is_ex_of_type()} allows you to check whether the top-level object of
an expression @samp{e} is an instance of the GiNaC class @samp{t}
(@xref{The Class Hierarchy}, for a list of all classes). This is most useful,
e.g., for checking whether an expression is a number, a sum, or a product:

@example
@{
    symbol x("x");
    ex e1 = 42;
    ex e2 = 4*x - 3;
    is_ex_of_type(e1, numeric);  // true
    is_ex_of_type(e2, numeric);  // false
    is_ex_of_type(e1, add);      // false
    is_ex_of_type(e2, add);      // true
    is_ex_of_type(e1, mul);      // false
    is_ex_of_type(e2, mul);      // false
@}
@end example

The @code{info()} method is used for checking certain attributes of
expressions. The possible values for the @code{flag} argument are defined
in @file{ginac/flags.h}, the most important being explained in the following
table:

@cartouche
@multitable @columnfractions .30 .70
@item @strong{Flag} @tab @strong{Returns true if the object is@dots{}}
@item @code{numeric}
@tab @dots{}a number (same as @code{is_ex_of_type(..., numeric)})
@item @code{real}
@tab @dots{}a real integer, rational or float (i.e. is not complex)
@item @code{rational}
@tab @dots{}an exact rational number (integers are rational, too)
@item @code{integer}
@tab @dots{}a (non-complex) integer
@item @code{crational}
@tab @dots{}an exact (complex) rational number (such as @math{2/3+7/2*I})
@item @code{cinteger}
@tab @dots{}a (complex) integer (such as @math{2-3*I})
@item @code{positive}
@tab @dots{}not complex and greater than 0
@item @code{negative}
@tab @dots{}not complex and less than 0
@item @code{nonnegative}
@tab @dots{}not complex and greater than or equal to 0
@item @code{posint}
@tab @dots{}an integer greater than 0
@item @code{negint}
@tab @dots{}an integer less than 0
@item @code{nonnegint}
@tab @dots{}an integer greater than or equal to 0
@item @code{even}
@tab @dots{}an even integer
@item @code{odd}
@tab @dots{}an odd integer
@item @code{prime}
@tab @dots{}a prime integer (probabilistic primality test)
@item @code{relation}
@tab @dots{}a relation (same as @code{is_ex_of_type(..., relational)})
@item @code{relation_equal}
@tab @dots{}a @code{==} relation
@item @code{relation_not_equal}
@tab @dots{}a @code{!=} relation
@item @code{relation_less}
@tab @dots{}a @code{<} relation
@item @code{relation_less_or_equal}
@tab @dots{}a @code{<=} relation
@item @code{relation_greater}
@tab @dots{}a @code{>} relation
@item @code{relation_greater_or_equal}
@tab @dots{}a @code{>=} relation
@item @code{symbol}
@tab @dots{}a symbol (same as @code{is_ex_of_type(..., symbol)})
@item @code{list}
@tab @dots{}a list (same as @code{is_ex_of_type(..., lst)})
@item @code{polynomial}
@tab @dots{}a polynomial (i.e. only consists of sums and products of numbers and symbols with positive integer powers)
@item @code{integer_polynomial}
@tab @dots{}a polynomial with (non-complex) integer coefficients
@item @code{cinteger_polynomial}
@tab @dots{}a polynomial with (possibly complex) integer coefficients (such as @math{2-3*I})
@item @code{rational_polynomial}
@tab @dots{}a polynomial with (non-complex) rational coefficients
@item @code{crational_polynomial}
@tab @dots{}a polynomial with (possibly complex) rational coefficients (such as @math{2/3+7/2*I})
@item @code{rational_function}
@tab @dots{}a rational function (@math{x+y}, @math{z/(x+y)})
@item @code{algebraic}
@tab @dots{}an algebraic object (@math{sqrt(2)}, @math{sqrt(x)-1})
@end multitable
@end cartouche


@subsection Accessing subexpressions
@cindex @code{nops()}
@cindex @code{op()}
@cindex @code{has()}
@cindex container
@cindex @code{relational} (class)

GiNaC provides the two methods

@example
unsigned ex::nops();
ex ex::op(unsigned i);
@end example

for accessing the subexpressions in the container-like GiNaC classes like
@code{add}, @code{mul}, @code{lst}, and @code{function}. @code{nops()}
determines the number of subexpressions (@samp{operands}) contained, while
@code{op()} returns the @code{i}-th (0..@code{nops()-1}) subexpression.
In the case of a @code{power} object, @code{op(0)} will return the basis
and @code{op(1)} the exponent. For @code{indexed} objects, @code{op(0)}
is the base expression and @code{op(i)}, @math{i>0} are the indices.
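
As a brief sketch of these two methods (hypothetical code; the order in
which the operands are returned is an internal detail and may vary):

@example
@{
    symbol x("x"), y("y");
    ex e = x*y + x;   // an add with two operands

    cout << e.nops() << endl;      // -> 2
    for (unsigned i=0; i<e.nops(); i++)
        cout << e.op(i) << endl;   // prints the two terms
@}
@end example
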
The left-hand and right-hand side expressions of objects of class
@code{relational} (and only of these) can also be accessed with the methods

@example
ex ex::lhs();
ex ex::rhs();
@end example

Finally, the method

@example
bool ex::has(const ex & other);
@end example

checks whether an expression contains the given subexpression @code{other}.
This only works reliably if @code{other} is of an atomic class such as a
@code{numeric} or a @code{symbol}. It is, e.g., not possible to verify that
@code{a+b+c} contains @code{a+c} (or @code{a+b}) as a subexpression.


@subsection Comparing expressions
@cindex @code{is_equal()}
@cindex @code{is_zero()}

Expressions can be compared with the usual C++ relational operators like
@code{==}, @code{>}, and @code{<} but if the expressions contain symbols,
the result is usually not determinable and the result will be @code{false},
except in the case of the @code{!=} operator. You should also be aware that
GiNaC will only do the most trivial test for equality (subtracting both
expressions), so something like @code{(pow(x,2)+x)/x==x+1} will return
@code{false}.

Actually, if you construct an expression like @code{a == b}, this will be
represented by an object of the @code{relational} class (@xref{Relations}.)
which is not evaluated until (explicitly or implicitly) cast to a @code{bool}.

There are also two methods

@example
bool ex::is_equal(const ex & other);
bool ex::is_zero();
@end example

for checking whether one expression is equal to another, or equal to zero,
respectively.

@strong{Warning:} You will also find an @code{ex::compare()} method in the
GiNaC header files. This method is however only to be used internally by
GiNaC to establish a canonical sort order for terms, and using it to compare
expressions will give very surprising results.
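
The following sketch (hypothetical code, in line with the caveats above)
shows why an explicit @code{expand()} is often needed before such tests:

@example
@{
    symbol x("x");
    ex a = pow(x, 2) + x;
    ex b = x*(x + 1);

    cout << (a - b).is_zero() << endl;           // -> 0 (not expanded)
    cout << (a - b).expand().is_zero() << endl;  // -> 1
@}
@end example
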
@node Substituting Expressions, Polynomial Arithmetic, Information About Expressions, Methods and Functions
@c    node-name, next, previous, up
@section Substituting expressions
@cindex @code{subs()}

Algebraic objects inside expressions can be replaced with arbitrary
expressions via the @code{.subs()} method:

@example
ex ex::subs(const ex & e);
ex ex::subs(const lst & syms, const lst & repls);
@end example

In the first form, @code{subs()} accepts a relational of the form
@samp{object == expression} or a @code{lst} of such relationals:

@example
@{
    symbol x("x"), y("y");

    ex e1 = 2*pow(x, 2) - 4*x + 3;
    cout << "e1(7) = " << e1.subs(x == 7) << endl;
     // -> 73

    ex e2 = x*y + x;
    cout << "e2(-2, 4) = " << e2.subs(lst(x == -2, y == 4)) << endl;
     // -> -10
@}
@end example

@code{subs()} performs syntactic substitution of any complete algebraic
object; it does not try to match sub-expressions as is demonstrated by the
following example:

@example
@{
    symbol x("x"), y("y"), z("z");

    ex e1 = pow(x+y, 2);
    cout << e1.subs(x+y == 4) << endl;
     // -> 16

    ex e2 = sin(x)*cos(x);
    cout << e2.subs(sin(x) == cos(x)) << endl;
     // -> cos(x)^2

    ex e3 = x+y+z;
    cout << e3.subs(x+y == 4) << endl;
     // -> x+y+z
     // (and not 4+z as one might expect)
@}
@end example

If you specify multiple substitutions, they are performed in parallel, so e.g.
@code{subs(lst(x == y, y == x))} exchanges @samp{x} and @samp{y}.

The second form of @code{subs()} takes two lists, one for the objects to be
replaced and one for the expressions to be substituted (both lists must
contain the same number of elements). Using this form, you would write
@code{subs(lst(x, y), lst(y, x))} to exchange @samp{x} and @samp{y}.


@node Polynomial Arithmetic, Rational Expressions, Substituting Expressions, Methods and Functions
@c    node-name, next, previous, up
@section Polynomial arithmetic

@subsection Expanding and collecting
@cindex @code{expand()}
@cindex @code{collect()}

A polynomial in one or more variables has many equivalent
representations. Some useful ones serve a specific purpose. Consider
for example the trivariate polynomial @math{4*x*y + x*z + 20*y^2 +
21*y*z + 4*z^2} (written down here in output-style). It is equivalent
to the factorized polynomial @math{(x + 5*y + 4*z)*(4*y + z)}. Other
representations are the recursive ones where one collects for exponents
in one of the three variables. Since the factors are themselves
polynomials in the remaining two variables the procedure can be
repeated. In our example, two possibilities would be @math{(4*y + z)*x
+ 20*y^2 + 21*y*z + 4*z^2} and @math{20*y^2 + (21*z + 4*x)*y + 4*z^2 +
x*z}.

To bring an expression into expanded form, its method

@example
ex ex::expand();
@end example

may be called. In our example above, this corresponds to @math{4*x*y +
x*z + 20*y^2 + 21*y*z + 4*z^2}. Again, since the canonical form in
GiNaC is not easily guessable you should be prepared to see different
orderings of terms in such sums!
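
For instance, expanding the factorized form from above recovers the
expanded representation (a sketch; as just noted, the exact term ordering
may differ):

@example
@{
    symbol x("x"), y("y"), z("z");
    ex p = (x + 5*y + 4*z)*(4*y + z);

    cout << p.expand() << endl;
     // -> 4*x*y+x*z+20*y^2+21*y*z+4*z^2 (up to term ordering)
@}
@end example
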
Another useful representation of multivariate polynomials is as a
univariate polynomial in one of the variables with the coefficients
being polynomials in the remaining variables. The method
@code{collect()} accomplishes this task:

@example
ex ex::collect(const ex & s);
@end example

Note that the original polynomial needs to be in expanded form in order
to be able to find the coefficients properly.

@subsection Degree and coefficients
@cindex @code{degree()}
@cindex @code{ldegree()}
@cindex @code{coeff()}

The degree and low degree of a polynomial can be obtained using the two
methods

@example
int ex::degree(const ex & s);
int ex::ldegree(const ex & s);
@end example

which also work reliably on non-expanded input polynomials (they even work
on rational functions, returning the asymptotic degree). To extract
a coefficient with a certain power from an expanded polynomial you use

@example
ex ex::coeff(const ex & s, int n);
@end example

You can also obtain the leading and trailing coefficients with the methods

@example
ex ex::lcoeff(const ex & s);
ex ex::tcoeff(const ex & s);
@end example

which are equivalent to @code{coeff(s, degree(s))} and @code{coeff(s, ldegree(s))},
respectively.

An application is illustrated in the next example, where a multivariate
polynomial is analyzed:

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

int main()
@{
    symbol x("x"), y("y");
    ex PolyInp = 4*pow(x,3)*y + 5*x*pow(y,2) + 3*y
               - pow(x+y,2) + 2*pow(y+2,2) - 8;
    ex Poly = PolyInp.expand();

    for (int i=Poly.ldegree(x); i<=Poly.degree(x); ++i) @{
        cout << "The x^" << i << "-coefficient is "
             << Poly.coeff(x,i) << endl;
    @}
    cout << "As polynomial in y: "
         << Poly.collect(y) << endl;
@}
@end example

When run, it returns an output in the following fashion:

@example
The x^0-coefficient is y^2+11*y
The x^1-coefficient is 5*y^2-2*y
The x^2-coefficient is -1
The x^3-coefficient is 4*y
As polynomial in y: -x^2+(5*x+1)*y^2+(-2*x+4*x^3+11)*y
@end example

As always, the exact output may vary between different versions of GiNaC
or even from run to run since the internal canonical ordering is not
within the user's sphere of influence.
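
A small self-contained sketch of @code{lcoeff()} and @code{tcoeff()}
(hypothetical code, analogous to the example above):

@example
@{
    symbol x("x"), y("y");
    ex p = 4*pow(x, 3)*y - pow(x, 2) + 11*y;

    cout << p.lcoeff(x) << endl;   // -> 4*y  (the x^3 coefficient)
    cout << p.tcoeff(x) << endl;   // -> 11*y (the x^0 coefficient)
@}
@end example
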
@code{degree()}, @code{ldegree()}, @code{coeff()}, @code{lcoeff()},
@code{tcoeff()} and @code{collect()} can also be used to a certain degree
with non-polynomial expressions as they not only work with symbols but with
constants, functions and indexed objects as well:

@example
@{
    symbol a("a"), b("b"), c("c"), x("x");
    idx i(symbol("i"), 3);

    ex e = pow(sin(x) - cos(x), 4);
    cout << e.degree(cos(x)) << endl;
     // -> 4
    cout << e.expand().coeff(sin(x), 3) << endl;
     // -> -4*cos(x)

    e = indexed(a+b, i) * indexed(b+c, i);
    e = e.expand(expand_options::expand_indexed);
    cout << e.collect(indexed(b, i)) << endl;
     // -> a.i*c.i+(a.i+c.i)*b.i+b.i^2
@}
@end example


@subsection Polynomial division
@cindex polynomial division
@cindex quotient
@cindex remainder
@cindex pseudo-remainder
@cindex @code{quo()}
@cindex @code{rem()}
@cindex @code{prem()}
@cindex @code{divide()}

The two functions

@example
ex quo(const ex & a, const ex & b, const symbol & x);
ex rem(const ex & a, const ex & b, const symbol & x);
@end example

compute the quotient and remainder of univariate polynomials in the variable
@samp{x}. The results satisfy @math{a = b*quo(a, b, x) + rem(a, b, x)}.

The additional function

@example
ex prem(const ex & a, const ex & b, const symbol & x);
@end example

computes the pseudo-remainder of @samp{a} and @samp{b} which satisfies
@math{c*a = b*q + prem(a, b, x)}, where @math{c = b.lcoeff(x) ^ (a.degree(x) - b.degree(x) + 1)}.

Exact division of multivariate polynomials is performed by the function

@example
bool divide(const ex & a, const ex & b, ex & q);
@end example

If @samp{b} divides @samp{a} over the rationals, this function returns @code{true}
and returns the quotient in the variable @code{q}. Otherwise it returns @code{false}
in which case the value of @code{q} is undefined.
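
A minimal sketch of how quotient and remainder fit the identity above
(hypothetical code; the printed term ordering may differ):

@example
@{
    symbol x("x");
    ex a = pow(x, 2) + 3;
    ex b = x + 1;

    cout << quo(a, b, x) << endl;  // -> x-1
    cout << rem(a, b, x) << endl;  // -> 4
     // check: a == b*(x-1) + 4
@}
@end example
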
@subsection Unit, content and primitive part
@cindex @code{unit()}
@cindex @code{content()}
@cindex @code{primpart()}

The methods

@example
ex ex::unit(const symbol & x);
ex ex::content(const symbol & x);
ex ex::primpart(const symbol & x);
@end example

return the unit part, content part, and primitive polynomial of a multivariate
polynomial with respect to the variable @samp{x} (the unit part being the sign
of the leading coefficient, the content part being the GCD of the coefficients,
and the primitive polynomial being the input polynomial divided by the unit and
content parts). The product of unit, content, and primitive part is the
original polynomial.
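
For example (a sketch, following the definitions just given; term ordering
in the output may differ):

@example
@{
    symbol x("x");
    ex p = -2*pow(x, 2) + 4*x;

    cout << p.unit(x) << endl;      // -> -1 (sign of the leading coefficient)
    cout << p.content(x) << endl;   // -> 2  (GCD of the coefficients)
    cout << p.primpart(x) << endl;  // -> x^2-2*x
     // (-1) * 2 * (x^2-2*x) gives back the original polynomial
@}
@end example
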
@subsection GCD and LCM
@cindex GCD
@cindex LCM
@cindex @code{gcd()}
@cindex @code{lcm()}

The functions for polynomial greatest common divisor and least common
multiple have the synopsis

@example
ex gcd(const ex & a, const ex & b);
ex lcm(const ex & a, const ex & b);
@end example

The functions @code{gcd()} and @code{lcm()} accept two expressions
@code{a} and @code{b} as arguments and return a new expression, their
greatest common divisor or least common multiple, respectively. If the
polynomials @code{a} and @code{b} are coprime @code{gcd(a,b)} returns 1
and @code{lcm(a,b)} returns the product of @code{a} and @code{b}.

@example
#include <ginac/ginac.h>
using namespace GiNaC;

int main()
@{
    symbol x("x"), y("y"), z("z");
    ex P_a = 4*x*y + x*z + 20*pow(y, 2) + 21*y*z + 4*pow(z, 2);
    ex P_b = x*y + 3*x*z + 5*pow(y, 2) + 19*y*z + 12*pow(z, 2);

    ex P_gcd = gcd(P_a, P_b);
    // x + 5*y + 4*z
    ex P_lcm = lcm(P_a, P_b);
    // 4*x*y^2 + 13*y*x*z + 20*y^3 + 81*y^2*z + 67*y*z^2 + 3*x*z^2 + 12*z^3
@}
@end example


@subsection Square-free decomposition
@cindex square-free decomposition
@cindex factorization
@cindex @code{sqrfree()}

GiNaC still lacks proper factorization support. Some form of
factorization is, however, easily implemented by noting that factors
appearing in a polynomial with power two or more also appear in the
derivative and hence can easily be found by computing the GCD of the
original polynomial and its derivatives. Every major CAS provides an
interface for this so-called square-free factorization, so we provide
one, too:

@example
ex sqrfree(const ex & a, const lst & l = lst());
@end example

Here is an example that, by the way, illustrates how the result may depend
on the order of differentiation:

@example
...
symbol x("x"), y("y");
ex BiVarPol = expand(pow(x-2*y*x,3) * pow(x+y,2) * (x-y));

cout << sqrfree(BiVarPol, lst(x,y)) << endl;
 // -> (y+x)^2*(-1+6*y+8*y^3-12*y^2)*(y-x)*x^3

cout << sqrfree(BiVarPol, lst(y,x)) << endl;
 // -> (1-2*y)^3*(y+x)^2*(-y+x)*x^3

cout << sqrfree(BiVarPol) << endl;
 // -> depending on luck, any of the above
...
@end example


@node Rational Expressions, Symbolic Differentiation, Polynomial Arithmetic, Methods and Functions
@c    node-name, next, previous, up
@section Rational expressions

@subsection The @code{normal} method
@cindex @code{normal()}
@cindex simplification
@cindex temporary replacement

Some basic form of simplification of expressions is called for frequently.
GiNaC provides the method @code{.normal()}, which converts a rational function
into an equivalent rational function of the form @samp{numerator/denominator}
where numerator and denominator are coprime. If the input expression is already
a fraction, it just finds the GCD of numerator and denominator and cancels it,
otherwise it performs fraction addition and multiplication.

@code{.normal()} can also be used on expressions which are not rational functions
as it will replace all non-rational objects (like functions or non-integer
powers) by temporary symbols to bring the expression to the domain of rational
functions before performing the normalization, and re-substituting these
symbols afterwards. This algorithm is also available as a separate method
@code{.to_rational()}, described below.

This means that both expressions @code{t1} and @code{t2} are indeed
simplified in this little program:

@example
#include <ginac/ginac.h>
using namespace GiNaC;

int main()
@{
    symbol x("x");
    ex t1 = (pow(x,2) + 2*x + 1)/(x + 1);
    ex t2 = (pow(sin(x),2) + 2*sin(x) + 1)/(sin(x) + 1);
    std::cout << "t1 is " << t1.normal() << std::endl;
    std::cout << "t2 is " << t2.normal() << std::endl;
@}
@end example

Of course this works for multivariate polynomials too, so the ratio of
the sample polynomials from the section about GCD and LCM above would be
normalized to @code{P_a/P_b} = @code{(4*y+z)/(y+3*z)}.


@subsection Numerator and denominator
@cindex numerator
@cindex denominator
@cindex @code{numer()}
@cindex @code{denom()}

The numerator and denominator of an expression can be obtained with

@example
ex ex::numer();
ex ex::denom();
@end example

These functions will first normalize the expression as described above and
then return the numerator or denominator, respectively.
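
For instance (a small sketch; the printed term order may differ):

@example
@{
    symbol x("x"), y("y");
    ex f = 1/x + 1/y;   // normalizes to (y+x)/(y*x)

    cout << f.numer() << endl;  // -> y+x
    cout << f.denom() << endl;  // -> y*x
@}
@end example
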
@subsection Converting to a rational expression
@cindex @code{to_rational()}

Some of the methods described so far only work on polynomials or rational
functions. GiNaC provides a way to extend the domain of these functions to
general expressions by using the temporary replacement algorithm described
above. You do this by calling

@example
ex ex::to_rational(lst &l);
@end example

on the expression to be converted. The supplied @code{lst} will be filled
with the generated temporary symbols and their replacement expressions in
a format that can be used directly for the @code{subs()} method. It can also
already contain a list of replacements from an earlier application of
@code{.to_rational()}, so it's possible to use it on multiple expressions
and get consistent results.

For example,

@example
@{
    symbol x("x");
    ex a = pow(sin(x), 2) - pow(cos(x), 2);
    ex b = sin(x) + cos(x);
    ex q;
    lst l;
    divide(a.to_rational(l), b.to_rational(l), q);
    cout << q.subs(l) << endl;
@}
@end example

will print @samp{sin(x)-cos(x)}.


@node Symbolic Differentiation, Series Expansion, Rational Expressions, Methods and Functions
@c    node-name, next, previous, up
@section Symbolic differentiation
@cindex differentiation
@cindex @code{diff()}
@cindex chain rule
@cindex product rule

GiNaC's objects know how to differentiate themselves. Thus, a
polynomial (class @code{add}) knows that its derivative is the sum of
the derivatives of all the monomials:

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

int main()
@{
    symbol x("x"), y("y"), z("z");
    ex P = pow(x, 5) + pow(x, 2) + y;

    cout << P.diff(x,2) << endl;  // 20*x^3 + 2
    cout << P.diff(y) << endl;    // 1
    cout << P.diff(z) << endl;    // 0
@}
@end example

If a second integer parameter @var{n} is given, the @code{diff} method
returns the @var{n}th derivative.

If @emph{every} object and every function is told what its derivative
is, all derivatives of composed objects can be calculated using the
chain rule and the product rule. Consider, for instance, the expression
@code{1/cosh(x)}. Since the derivative of @code{cosh(x)} is
@code{sinh(x)} and the derivative of @code{pow(x,-1)} is
@code{-pow(x,-2)}, GiNaC can readily compute the composition. It turns
out that the composition is the generating function for Euler Numbers,
i.e. the so-called @var{n}th Euler number is the coefficient of
@code{x^n/n!} in the expansion of @code{1/cosh(x)}. We may use this
identity to code a function that generates Euler numbers in just three
lines:

@cindex Euler numbers
@example
#include <ginac/ginac.h>
using namespace GiNaC;

ex EulerNumber(unsigned n)
@{
    symbol x;
    const ex generator = pow(cosh(x),-1);
    return generator.diff(x,n).subs(x==0);
@}

int main()
@{
    for (unsigned i=0; i<11; i+=2)
        std::cout << EulerNumber(i) << std::endl;
    return 0;
@}
@end example

When you run it, it produces the sequence @code{1}, @code{-1}, @code{5},
@code{-61}, @code{1385}, @code{-50521}. We increment the loop variable
@code{i} by two since all odd Euler numbers vanish anyway.


@node Series Expansion, Built-in Functions, Symbolic Differentiation, Methods and Functions
@c    node-name, next, previous, up
@section Series expansion
@cindex @code{series()}
@cindex Taylor expansion
@cindex Laurent expansion
@cindex @code{pseries} (class)

Expressions know how to expand themselves as a Taylor series or (more
generally) a Laurent series. As in most conventional Computer Algebra
Systems, no distinction is made between those two. There is a class of
its own for storing such series (@code{class pseries}) and a built-in
function (called @code{Order}) for storing the order term of the series.
As a consequence, if you want to work with series, i.e. multiply two
series, you need to call the method @code{ex::series} again to convert
it to a series object with the usual structure (expansion plus order
term). A sample application from special relativity could read:

@example
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

int main()
@{
    symbol v("v"), c("c");

    ex gamma = 1/sqrt(1 - pow(v/c,2));
    ex mass_nonrel = gamma.series(v==0, 10);

    cout << "the relativistic mass increase with v is " << endl
         << mass_nonrel << endl;

    cout << "the inverse square of this series is " << endl
         << pow(mass_nonrel,-2).series(v==0, 10) << endl;
@}
@end example

Only calling the series method makes the last output simplify to
@math{1-v^2/c^2+O(v^10)}; without that call we would just have a long
series raised to the power @math{-2}.

@cindex Machin's formula
As another instructive application, let us calculate the numerical
value of Archimedes' constant
@tex
$\pi$
@end tex
(for which there already exists the built-in constant @code{Pi})
using Machin's amazing formula
@tex
$\pi=16$~atan~$\!\left(1 \over 5 \right)-4$~atan~$\!\left(1 \over 239 \right)$.
@end tex
@ifnottex
@math{Pi==16*atan(1/5)-4*atan(1/239)}.
@end ifnottex
We may expand the arctangent around @code{0} and insert the fractions
@code{1/5} and @code{1/239}.
But, as we have seen, a series in GiNaC
carries an order term with it and the question arises what the system is
supposed to do when the fractions are plugged into that order term. The
solution is to use the function @code{series_to_poly()} to simply strip
the order term off:

@example
#include <ginac/ginac.h>
using namespace GiNaC;

ex mechain_pi(int degr)
@{
    symbol x;
    ex pi_expansion = series_to_poly(atan(x).series(x,degr));
    ex pi_approx = 16*pi_expansion.subs(x==numeric(1,5))
                   -4*pi_expansion.subs(x==numeric(1,239));
    return pi_approx;
@}

int main()
@{
    using std::cout;  // just for fun, another way of...
    using std::endl;  // ...dealing with this namespace std.
    ex pi_frac;
    for (int i=2; i<12; i+=2) @{
        pi_frac = mechain_pi(i);
        cout << i << ":\t" << pi_frac << endl
             << "\t" << pi_frac.evalf() << endl;
    @}
    return 0;
@}
@end example

Note how we just called @code{.series(x,degr)} instead of
@code{.series(x==0,degr)}. This is a simple shortcut for @code{ex}'s
method @code{series()}: if the first argument is a symbol the expression
is expanded in that symbol around point @code{0}. When you run this
program, it will print:

@example
2:      3804/1195
        3.1832635983263598326
4:      5359397032/1706489875
        3.1405970293260603143
6:      38279241713339684/12184551018734375
        3.141621029325034425
8:      76528487109180192540976/24359780855939418203125
        3.141591772182177295
10:     327853873402258685803048818236/104359128170408663038552734375
        3.1415926824043995174
@end example


@node Built-in Functions, Input/Output, Series Expansion, Methods and Functions
@c    node-name, next, previous, up
@section Predefined mathematical functions

GiNaC contains the following predefined mathematical functions:

@cartouche
@multitable @columnfractions .30 .70
@item @strong{Name} @tab @strong{Function}
@item @code{abs(x)}
@tab absolute value
@item @code{csgn(x)}
@tab complex sign
@item @code{sqrt(x)}
@tab square root (not a GiNaC function proper but equivalent to @code{pow(x, numeric(1, 2))})
@item @code{sin(x)}
@tab sine
@item @code{cos(x)}
@tab cosine
@item @code{tan(x)}
@tab tangent
@item @code{asin(x)}
@tab inverse sine
@item @code{acos(x)}
@tab inverse cosine
@item @code{atan(x)}
@tab inverse tangent
@item @code{atan2(y, x)}
@tab inverse tangent with two arguments
@item @code{sinh(x)}
@tab hyperbolic sine
@item @code{cosh(x)}
@tab hyperbolic cosine
@item @code{tanh(x)}
@tab hyperbolic tangent
@item @code{asinh(x)}
@tab inverse hyperbolic sine
@item @code{acosh(x)}
@tab inverse hyperbolic cosine
@item @code{atanh(x)}
@tab inverse hyperbolic tangent
@item @code{exp(x)}
@tab exponential function
@item @code{log(x)}
@tab natural logarithm
@item @code{Li2(x)}
@tab dilogarithm
@item @code{zeta(x)}
@tab Riemann's zeta function
@item @code{zeta(n, x)}
@tab derivatives of Riemann's zeta function
@item @code{tgamma(x)}
@tab Gamma function
@item @code{lgamma(x)}
@tab logarithm of Gamma function
@item @code{beta(x, y)}
@tab Beta function (@code{tgamma(x)*tgamma(y)/tgamma(x+y)})
@item @code{psi(x)}
@tab psi (digamma) function
@item @code{psi(n, x)}
@tab derivatives of psi function (polygamma functions)
@item @code{factorial(n)}
@tab factorial function
@item @code{binomial(n, m)}
@tab binomial coefficients
@item @code{Order(x)}
@tab order term function in truncated power series
@item @code{Derivative(x, l)}
@tab inert partial differentiation operator (used internally)
@end multitable
@end cartouche

@cindex branch cut
For functions that have a branch cut in the complex plane GiNaC follows
the conventions for C++ as defined in the ANSI standard as far as
possible. In particular: the natural logarithm (@code{log}) and the
square root (@code{sqrt}) both have their branch cuts running along the
negative real axis where the points on the axis itself belong to the
upper part (i.e. continuous with quadrant II). The inverse
trigonometric and hyperbolic functions are not defined for complex
arguments by the C++ standard, however. In GiNaC we follow the
conventions used by CLN, which in turn follow the carefully designed
definitions in the Common Lisp standard. It should be noted that this
convention is identical to the one used by the C99 standard and by most
serious CAS. It is to be expected that future revisions of the C++
standard incorporate these functions in the complex domain in a manner
compatible with C99.
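
As a quick sketch of what this convention means in practice (hypothetical
code; the numerical output is shown approximately):

@example
@{
    cout << evalf(log(-1)) << endl;
     // -> approximately 3.14159265358979323846*I
     // the cut along the negative real axis is continuous with
     // quadrant II, so the imaginary part is +Pi, not -Pi
@}
@end example
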
@node Input/Output, Extending GiNaC, Built-in Functions, Methods and Functions
@c    node-name, next, previous, up
@section Input and output of expressions
@cindex I/O

@subsection Expression output
@cindex printing
@cindex output of expressions

The easiest way to print an expression is to write it to a stream:

@example
@{
    symbol x("x");
    ex e = 4.5+pow(x,2)*3/2;
    cout << e << endl; // prints '(4.5)+3/2*x^2'
    // ...
@end example

The output format is identical to the @command{ginsh} input syntax and
to that used by most computer algebra systems, but not directly pastable
into a GiNaC C++ program (note that in the above example, @code{pow(x,2)}
is printed as @samp{x^2}).

It is possible to print expressions in a number of different formats with
the method

@example
void ex::print(const print_context & c, unsigned level = 0);
@end example

The type of @code{print_context} object passed in determines the format
of the output. The possible types are defined in @file{ginac/print.h}.
All constructors of @code{print_context} and derived classes take an
@code{ostream &} as their first argument.

To print an expression in a way that can be directly used in a C or C++
program, you pass a @code{print_csrc} object like this:

@example
// ...
cout << "float f = ";
e.print(print_csrc_float(cout));
cout << ";\n";

cout << "double d = ";
e.print(print_csrc_double(cout));
cout << ";\n";

cout << "cl_N n = ";
e.print(print_csrc_cl_N(cout));
cout << ";\n";
// ...
@end example

The three possible types mostly affect the way in which floating point
numbers are written.

The above example will produce (note the @code{x^2} being converted to
@code{x*x}):

@example
float f = (3.000000e+00/2.000000e+00)*(x*x)+4.500000e+00;
double d = (3.000000e+00/2.000000e+00)*(x*x)+4.500000e+00;
cl_N n = (cln::cl_F("3.0")/cln::cl_F("2.0"))*(x*x)+cln::cl_F("4.5");
@end example

The @code{print_context} type @code{print_tree} provides a dump of the
internal structure of an expression for debugging purposes:

@example
// ...
e.print(print_tree(cout));
@}
@end example

produces

@example
add, hash=0x0, flags=0x3, nops=2
    power, hash=0x9, flags=0x3, nops=2
        x (symbol), serial=3, hash=0x44a113a6, flags=0xf
        2 (numeric), hash=0x80000042, flags=0xf
    3/2 (numeric), hash=0x80000061, flags=0xf
    -----
    overall_coeff
    4.5L0 (numeric), hash=0x8000004b, flags=0xf
    =====
@end example

This kind of output is also available in @command{ginsh} as the @code{print()}
function.

If you need any fancy special output format, e.g. for interfacing GiNaC
with other algebra systems or for producing code for different
programming languages, you can always traverse the expression tree yourself:

@example
static void my_print(const ex & e)
@{
    if (is_ex_of_type(e, function))
        cout << ex_to_function(e).get_name();
    else
        cout << e.bp->class_name();
    cout << "(";
    unsigned n = e.nops();
    if (n)
        for (unsigned i=0; i<n; i++) @{
            my_print(e.op(i));
            if (i != n-1)
                cout << ",";
        @}
    else
        cout << e;
    cout << ")";
@}

int main(void)
@{
    symbol x("x"), y("y");
    my_print(pow(3, x) - 2 * sin(y / Pi)); cout << endl;
    return 0;
@}
@end example

This will produce

@example
add(power(numeric(3),symbol(x)),mul(sin(mul(power(constant(Pi),numeric(-1)),
symbol(y))),numeric(-2)))
@end example

If you need an output format that makes it possible to accurately
reconstruct an expression by feeding the output to a suitable parser or
object factory, you should consider storing the expression in an
@code{archive} object and reading the object properties from there.
See the section on archiving for more information.


@subsection Expression input
@cindex input of expressions

GiNaC provides no way to directly read an expression from a stream because
you will usually want the user to be able to enter something like @samp{2*x+sin(y)}
and have the @samp{x} and @samp{y} correspond to the symbols @code{x} and
@code{y} you defined in your program and there is no way to specify the
desired symbols to the @code{>>} stream input operator.

Instead, GiNaC lets you construct an expression from a string, specifying the
list of symbols to be used:

@example
@{
    symbol x("x"), y("y");
    ex e("2*x+sin(y)", lst(x, y));
@}
@end example

The input syntax is the same as that used by @command{ginsh} and the stream
output operator @code{<<}. The symbols in the string are matched by name to
the symbols in the list and if GiNaC encounters a symbol not specified in
the list it will throw an exception.

With this constructor, it's also easy to implement interactive GiNaC programs:

@example
#include <iostream>
#include <string>
#include <stdexcept>
#include <ginac/ginac.h>
using namespace std;
using namespace GiNaC;

int main()
@{
    symbol x("x");
    string s;

    cout << "Enter an expression containing 'x': ";
    getline(cin, s);

    try @{
        ex e(s, lst(x));
        cout << "The derivative of " << e << " with respect to x is ";
        cout << e.diff(x) << ".\n";
    @} catch (exception &p) @{
        cerr << p.what() << endl;
    @}
@}
@end example


@subsection Archiving
@cindex @code{archive} (class)
@cindex archiving

GiNaC allows creating @dfn{archives} of expressions which can be stored
to or retrieved from files. To create an archive, you declare an object
of class @code{archive} and archive expressions in it, giving each
expression a unique name:

@example
#include <fstream>
using namespace std;
#include <ginac/ginac.h>
using namespace GiNaC;

int main()
@{
    symbol x("x"), y("y"), z("z");

    ex foo = sin(x + 2*y) + 3*z + 41;
    ex bar = foo + 1;

    archive a;
    a.archive_ex(foo, "foo");
    a.archive_ex(bar, "the second one");
    // ...
@end example

The archive can then be written to a file:

@example
// ...
ofstream out("foobar.gar");
out << a;
out.close();
// ...
@end example

The file @file{foobar.gar} contains all information that is needed to
reconstruct the expressions @code{foo} and @code{bar}.

@cindex @command{viewgar}
The tool @command{viewgar} that comes with GiNaC can be used to view
the contents of GiNaC archive files:

@example
$ viewgar foobar.gar
foo = 41+sin(x+2*y)+3*z
the second one = 42+sin(x+2*y)+3*z
@end example

The point of writing archive files is of course that they can later be
read in again:

@example
// ...
archive a2;
ifstream in("foobar.gar");
in >> a2;
// ...
@end example

And the stored expressions can be retrieved by their name:

@example
// ...
lst syms(x, y);

ex ex1 = a2.unarchive_ex(syms, "foo");
ex ex2 = a2.unarchive_ex(syms, "the second one");

cout << ex1 << endl;              // prints "41+sin(x+2*y)+3*z"
cout << ex2 << endl;              // prints "42+sin(x+2*y)+3*z"
cout << ex1.subs(x == 2) << endl; // prints "41+sin(2+2*y)+3*z"
@}
@end example

Note that you have to supply a list of the symbols which are to be inserted
in the expressions. Symbols in archives are stored by their name only and
if you don't specify which symbols you have, unarchiving the expression will
create new symbols with that name. E.g. if you hadn't included @code{x} in
the @code{syms} list above, the @code{ex1.subs(x == 2)} statement would
have had no effect because the @code{x} in @code{ex1} would have been a
different symbol than the @code{x} which was defined at the beginning of
the program, although both would appear as @samp{x} when printed.

You can also use the information stored in an @code{archive} object to
output expressions in a format suitable for exact reconstruction.
The
@code{archive} and @code{archive_node} classes have a couple of member
functions that let you access the stored properties:

@example
static void my_print2(const archive_node & n)
@{
    string class_name;
    n.find_string("class", class_name);
    cout << class_name << "(";

    archive_node::propinfovector p;
    n.get_properties(p);

    unsigned num = p.size();
    for (unsigned i=0; i<num; i++) @{
        const string &name = p[i].name;
        if (name == "class")
            continue;
        cout << name << "=";

        unsigned count = p[i].count;
        if (count > 1)
            cout << "@{";

        for (unsigned j=0; j<count; j++) @{
            switch (p[i].type) @{
                case archive_node::PTYPE_BOOL: @{
                    bool x;
                    n.find_bool(name, x);
                    cout << (x ? "true" : "false");
                    break;
                @}
                case archive_node::PTYPE_UNSIGNED: @{
                    unsigned x;
                    n.find_unsigned(name, x);
                    cout << x;
                    break;
                @}
                case archive_node::PTYPE_STRING: @{
                    string x;
                    n.find_string(name, x);
                    cout << '\"' << x << '\"';
                    break;
                @}
                case archive_node::PTYPE_NODE: @{
                    const archive_node &x = n.find_ex_node(name, j);
                    my_print2(x);
                    break;
                @}
            @}

            if (j != count-1)
                cout << ",";
        @}

        if (count > 1)
            cout << "@}";

        if (i != num-1)
            cout << ",";
    @}

    cout << ")";
@}

int main(void)
@{
    symbol x("x"), y("y");
    ex e = pow(2, x) - y;
    archive ar(e, "e");
    my_print2(ar.get_top_node(0)); cout << endl;
    return 0;
@}
@end example

This will produce:

@example
add(rest=@{power(basis=numeric(number="2"),exponent=symbol(name="x")),
symbol(name="y")@},coeff=@{numeric(number="1"),numeric(number="-1")@},
overall_coeff=numeric(number="0"))
@end example

Be warned, however, that the set of properties and their meaning for each
class may change between GiNaC versions.


@node Extending GiNaC, What does not belong into GiNaC, Input/Output, Top
@c    node-name, next, previous, up
@chapter Extending GiNaC

By reading so far you should have gotten a fairly good understanding of
GiNaC's design patterns. From here on you should start reading the
sources. All we can do now is issue some recommendations on how to tackle
GiNaC's many loose ends in order to fulfill everybody's dreams. If you
develop some useful extension please don't hesitate to contact the GiNaC
authors---they will happily incorporate them into future versions.

@menu
* What does not belong into GiNaC::  What to avoid.
* Symbolic functions::               Implementing symbolic functions.
* Adding classes::                   Defining new algebraic classes.
@end menu


@node What does not belong into GiNaC, Symbolic functions, Extending GiNaC, Extending GiNaC
@c    node-name, next, previous, up
@section What doesn't belong into GiNaC

@cindex @command{ginsh}
First of all, GiNaC's name must be read literally. It is designed to be
a library for use within C++. The tiny @command{ginsh} accompanying
GiNaC makes this even more clear: it doesn't even attempt to provide a
language. There are no loops or conditional expressions in
@command{ginsh}, it is merely a window into the library for the
programmer to test stuff (or to show off).
Still, the design of a
complete CAS with a language of its own, graphical capabilities and all
this on top of GiNaC is possible and is without doubt a nice project for
the future.

There are many built-in functions in GiNaC that do not know how to
evaluate themselves numerically to a precision declared at runtime
(using @code{Digits}). Some may be evaluated at certain points, but not
generally. This ought to be fixed. However, doing numerical
computations with GiNaC's quite abstract classes is doomed to be
inefficient. For this purpose, the underlying foundation classes
provided by @acronym{CLN} are much better suited.


@node Symbolic functions, Adding classes, What does not belong into GiNaC, Extending GiNaC
@c    node-name, next, previous, up
@section Symbolic functions

The easiest and most instructive way to start with is probably to
implement your own function. GiNaC's functions are objects of class
@code{function}. The preprocessor is then used to convert the function
names to objects with a corresponding serial number that is used
internally to identify them. You usually need not worry about this
number. New functions may be inserted into the system via a kind of
`registry'. It is your responsibility to care for some functions that
are called when the user invokes certain methods. These are usual
C++ functions accepting a number of @code{ex} as arguments and returning
one @code{ex}. As an example, if we have a look at a simplified
implementation of the cosine trigonometric function, we first need a
function that is called when one wishes to @code{eval} it. It could
look something like this:

@example
static ex cos_eval_method(const ex & x)
@{
    // if (!x%(2*Pi)) return 1
    // if (!x%Pi) return -1
    // if (!x%Pi/2) return 0
    // care for other cases...
    return cos(x).hold();
@}
@end example

@cindex @code{hold()}
@cindex evaluation
The last line returns @code{cos(x)} if we don't know what else to do and
stops a potential recursive evaluation by saying @code{.hold()}, which
sets a flag to the expression signaling that it has been evaluated. We
should also implement a method for numerical evaluation and since we are
lazy we sweep the problem under the rug by calling someone else's
function that does so, in this case the one in class @code{numeric}:

@example
static ex cos_evalf(const ex & x)
@{
    return cos(ex_to_numeric(x));
@}
@end example

Differentiation will surely turn up and so we need to tell @code{cos}
what its first derivative is (higher derivatives, @code{.diff(x,3)} for
instance, are then handled automatically by @code{basic::diff} and
@code{ex::diff}):

@example
static ex cos_deriv(const ex & x, unsigned diff_param)
@{
    return -sin(x);
@}
@end example

@cindex product rule
The second parameter is obligatory but uninteresting at this point. It
specifies which parameter to differentiate in a partial derivative in
case the function has more than one parameter and its main application
is for correct handling of the chain rule. For Taylor expansion, it is
enough to know how to differentiate.
But if the function you want to implement does have a pole somewhere in the complex plane, you need to write another method for Laurent expansion around that point.

Now that all the ingredients for @code{cos} have been set up, we need to tell the system about it. This is done by a macro and we are not going to describe how it expands, please consult your preprocessor if you are curious:

@example
REGISTER_FUNCTION(cos, eval_func(cos_eval).
                       evalf_func(cos_evalf).
                       derivative_func(cos_deriv));
@end example

The first argument is the function's name used for calling it and for output. The second binds the corresponding methods as options to this object. Options are separated by a dot and can be given in an arbitrary order. GiNaC functions understand several more options which are always specified as @code{.option(params)}, for example a method for series expansion @code{.series_func(cos_series)}. Again, if no series expansion method is given, GiNaC defaults to simple Taylor expansion, which is correct if there are no poles involved as is the case for the @code{cos} function. The way GiNaC handles poles in case there are any is best understood by studying one of the examples, like the Gamma (@code{tgamma}) function for instance. (In essence the function first checks if there is a pole at the evaluation point and falls back to Taylor expansion if there isn't. Then, the pole is regularized by some suitable transformation.) Also, the new function needs to be declared somewhere. This may also be done by a convenient preprocessor macro:

@example
DECLARE_FUNCTION_1P(cos)
@end example

The suffix @code{_1P} stands for @emph{one parameter}. Of course, this implementation of @code{cos} is very incomplete and lacks several safety mechanisms. Please, have a look at the real implementation in GiNaC. (By the way: in case you are worrying about all the macros above we can assure you that functions are GiNaC's most macro-intense classes. We have done our best to avoid macros where we can.)
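Once registered like this, the new function participates in expressions just like the built-in ones. A minimal sketch of driver code (hypothetical; in a real program this simplified @code{cos} would clash with GiNaC's built-in @code{cos}, so you would register it under a fresh name):

@example
symbol x("x");
ex e = cos(x);              // held unevaluated by cos_eval: cos(x)
cout << e.diff(x) << endl;
// -> -sin(x)               (dispatched to cos_deriv)
@end example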
@node Adding classes, A Comparison With Other CAS, Symbolic functions, Extending GiNaC
@c    node-name, next, previous, up
@section Adding classes

If you are doing some very specialized things with GiNaC you may find that you have to implement your own algebraic classes to fit your needs. This section will explain how to do this by giving the example of a simple 'string' class. After reading this section you will know how to properly declare a GiNaC class and what the minimum required member functions are that you have to implement. We only cover the implementation of a 'leaf' class here (i.e. one that doesn't contain subexpressions). Creating a container class like, for example, a class representing tensor products is more involved but this section should give you enough information so you can consult the source to GiNaC's predefined classes if you want to implement something more complicated.

@subsection GiNaC's run-time type information system

@cindex hierarchy of classes
@cindex RTTI
All algebraic classes (that is, all classes that can appear in expressions) in GiNaC are direct or indirect subclasses of the class @code{basic}. So a @code{basic *} (which is essentially what an @code{ex} is) represents a generic pointer to an algebraic class. Occasionally it is necessary to find out what the class of an object pointed to by a @code{basic *} really is. Also, for the unarchiving of expressions it must be possible to find the @code{unarchive()} function of a class given the class name (as a string). A system that provides this kind of information is called a run-time type information (RTTI) system. The C++ language provides such a thing (see the standard header file @file{<typeinfo>}) but for efficiency reasons GiNaC implements its own, simpler RTTI.

The RTTI in GiNaC is based on two mechanisms:

@itemize @bullet

@item
The @code{basic} class declares a member variable @code{tinfo_key} which holds an unsigned integer that identifies the object's class. These numbers are defined in the @file{tinfos.h} header file for the built-in GiNaC classes. They all start with @code{TINFO_}.

@item
By means of some clever tricks with static members, GiNaC maintains a list of information for all classes derived from @code{basic}. The information available includes the class names, the @code{tinfo_key}s, and pointers to the unarchiving functions. This class registry is defined in the @file{registrar.h} header file.

@end itemize

The disadvantage of this proprietary RTTI implementation is that there's a little more to do when implementing new classes (C++'s RTTI works more or less automatically) but don't worry, most of the work is simplified by macros.

@subsection A minimalistic example

Now we will start implementing a new class @code{mystring} that allows placing character strings in algebraic expressions (this is not very useful, but it's just an example). This class will be a direct subclass of @code{basic}. You can use this sample implementation as a starting point for your own classes.

The code snippets given here assume that you have included some header files as follows:

@example
#include <iostream>
#include <string>
#include <stdexcept>
using namespace std;

#include <ginac/ginac.h>
using namespace GiNaC;
@end example

The first thing we have to do is to define a @code{tinfo_key} for our new class. This can be any arbitrary unsigned number that is not already taken by one of the existing classes but it's better to come up with something that is unlikely to clash with keys that might be added in the future. The numbers in @file{tinfos.h} are modeled somewhat after the class hierarchy which is not a requirement but we are going to stick with this scheme:

@example
const unsigned TINFO_mystring = 0x42420001U;
@end example

Now we can write down the class declaration.
The class stores a C++ @code{string} and the user shall be able to construct a @code{mystring} object from a C or C++ string:

@example
class mystring : public basic
@{
    GINAC_DECLARE_REGISTERED_CLASS(mystring, basic)

public:
    mystring(const string &s);
    mystring(const char *s);

private:
    string str;
@};

GINAC_IMPLEMENT_REGISTERED_CLASS(mystring, basic)
@end example

The @code{GINAC_DECLARE_REGISTERED_CLASS} and @code{GINAC_IMPLEMENT_REGISTERED_CLASS} macros are defined in @file{registrar.h}. They take the name of the class and its direct superclass as arguments and insert all required declarations for the RTTI system. The @code{GINAC_DECLARE_REGISTERED_CLASS} should be the first line after the opening brace of the class definition. The @code{GINAC_IMPLEMENT_REGISTERED_CLASS} may appear anywhere else in the source (at global scope, of course, not inside a function).

@code{GINAC_DECLARE_REGISTERED_CLASS} contains, among other things, the declarations of the default and copy constructor, the destructor, the assignment operator and a couple of other functions that are required. It also defines a type @code{inherited} which refers to the superclass so you don't have to modify your code every time you shuffle around the class hierarchy. @code{GINAC_IMPLEMENT_REGISTERED_CLASS} implements the copy constructor, the destructor and the assignment operator.

Now there are nine member functions we have to implement to get a working class:

@itemize

@item
@code{mystring()}, the default constructor.

@item
@code{void destroy(bool call_parent)}, which is used in the destructor and the assignment operator to free dynamically allocated members. The @code{call_parent} flag specifies whether the @code{destroy()} function of the superclass is to be called also.

@item
@code{void copy(const mystring &other)}, which is used in the copy constructor and assignment operator to copy the member variables over from another object of the same class.

@item
@code{void archive(archive_node &n)}, the archiving function. This stores all information needed to reconstruct an object of this class inside an @code{archive_node}.

@item
@code{mystring(const archive_node &n, const lst &sym_lst)}, the unarchiving constructor. This constructs an instance of the class from the information found in an @code{archive_node}.

@item
@code{ex unarchive(const archive_node &n, const lst &sym_lst)}, the static unarchiving function. It constructs a new instance by calling the unarchiving constructor.

@item
@code{int compare_same_type(const basic &other)}, which is used internally by GiNaC to establish a canonical sort order for terms. It returns 0, +1 or -1, depending on the relative order of this object and the @code{other} object. If it returns 0, the objects are considered equal. @strong{Note:} This has nothing to do with the (numeric) ordering relationship expressed by @code{<}, @code{>=} etc (which cannot be defined for non-numeric classes). For example, @code{numeric(1).compare_same_type(numeric(2))} may return +1 even though 1 is clearly smaller than 2.
Every GiNaC class must provide a @code{compare_same_type()} function, even those representing objects for which no reasonable algebraic ordering relationship can be defined.

@item
And, of course, @code{mystring(const string &s)} and @code{mystring(const char *s)} which are the two constructors we declared.

@end itemize

Let's proceed step-by-step. The default constructor looks like this:

@example
mystring::mystring() : inherited(TINFO_mystring)
@{
    // dynamically allocate resources here if required
@}
@end example

The golden rule is that in all constructors you have to set the @code{tinfo_key} member to the @code{TINFO_*} value of your class. Otherwise it will be set by the constructor of the superclass and all hell will break loose in the RTTI. For your convenience, the @code{basic} class provides a constructor that takes a @code{tinfo_key} value, which we are using here (remember that in our case @code{inherited = basic}). If the superclass didn't have such a constructor, we would have to set the @code{tinfo_key} to the right value manually.

In the default constructor you should set all other member variables to reasonable default values (we don't need that here since our @code{str} member gets set to an empty string automatically). The constructor(s) are of course also the right place to allocate any dynamic resources you require.

Next, the @code{destroy()} function:

@example
void mystring::destroy(bool call_parent)
@{
    // free dynamically allocated resources here if required
    if (call_parent)
        inherited::destroy(call_parent);
@}
@end example

This function is where we free all dynamically allocated resources. We don't have any so we're not doing anything here, but if we had, for example, used a C-style @code{char *} to store our string, this would be the place to @code{delete[]} the string storage. If @code{call_parent} is true, we have to call the @code{destroy()} function of the superclass after we're done (to mimic C++'s automatic invocation of superclass destructors where @code{destroy()} is called from outside a destructor).

The @code{copy()} function just copies over the member variables from another object:

@example
void mystring::copy(const mystring &other)
@{
    inherited::copy(other);
    str = other.str;
@}
@end example

We can simply overwrite the member variables here. There's no need to worry about dynamically allocated storage. The assignment operator (which is automatically defined by @code{GINAC_IMPLEMENT_REGISTERED_CLASS}, as you recall) calls @code{destroy()} before it calls @code{copy()}. You have to explicitly call the @code{copy()} function of the superclass here so all the member variables will get copied.

Next are the three functions for archiving. You have to implement them even if you don't plan to use archives, but the minimum required implementation is really simple. First, the archiving function:

@example
void mystring::archive(archive_node &n) const
@{
    inherited::archive(n);
    n.add_string("string", str);
@}
@end example

The only thing that is really required is calling the @code{archive()} function of the superclass.
Optionally, you can store all information you deem necessary for representing the object into the passed @code{archive_node}. We are just storing our string here. For more information on how the archiving works, consult the @file{archive.h} header file.

The unarchiving constructor is basically the inverse of the archiving function:

@example
mystring::mystring(const archive_node &n, const lst &sym_lst) : inherited(n, sym_lst)
@{
    n.find_string("string", str);
@}
@end example

If you don't need archiving, just leave this function empty (but you must invoke the unarchiving constructor of the superclass). Note that we don't have to set the @code{tinfo_key} here because it is done automatically by the unarchiving constructor of the @code{basic} class.

Finally, the unarchiving function:

@example
ex mystring::unarchive(const archive_node &n, const lst &sym_lst)
@{
    return (new mystring(n, sym_lst))->setflag(status_flags::dynallocated);
@}
@end example

You don't have to understand how exactly this works. Just copy these four lines into your code literally (replacing the class name, of course). It calls the unarchiving constructor of the class and unless you are doing something very special (like matching @code{archive_node}s to global objects) you don't need a different implementation. For those who are interested: setting the @code{dynallocated} flag puts the object under the control of GiNaC's garbage collection. It will get deleted automatically once it is no longer referenced.

Our @code{compare_same_type()} function uses a provided function to compare the string members:

@example
int mystring::compare_same_type(const basic &other) const
@{
    const mystring &o = static_cast<const mystring &>(other);
    int cmpval = str.compare(o.str);
    if (cmpval == 0)
        return 0;
    else if (cmpval < 0)
        return -1;
    else
        return 1;
@}
@end example

Although this function takes a @code{basic &}, it will always be a reference to an object of exactly the same class (objects of different classes are not comparable), so the cast is safe. If this function returns 0, the two objects are considered equal (in the sense that @math{A-B=0}), so you should compare all relevant member variables.

Now the only thing missing is our two new constructors:

@example
mystring::mystring(const string &s) : inherited(TINFO_mystring), str(s)
@{
    // dynamically allocate resources here if required
@}

mystring::mystring(const char *s) : inherited(TINFO_mystring), str(s)
@{
    // dynamically allocate resources here if required
@}
@end example

No surprises here. We set the @code{str} member from the argument and remember to pass the right @code{tinfo_key} to the @code{basic} constructor.

That's it! We now have a minimal working GiNaC class that can store strings in algebraic expressions. Let's confirm that the RTTI works:

@example
ex e = mystring("Hello, world!");
cout << is_ex_of_type(e, mystring) << endl;
// -> 1 (true)

cout << e.bp->class_name() << endl;
// -> mystring
@end example

Obviously it does.
Let's see what the expression @code{e} looks like:

@example
cout << e << endl;
// -> [mystring object]
@end example

Hm, not exactly what we expect, but of course the @code{mystring} class doesn't yet know how to print itself. This is done in the @code{print()} member function. Let's say that we wanted to print the string surrounded by double quotes:

@example
class mystring : public basic
@{
    ...
public:
    void print(const print_context &c, unsigned level = 0) const;
    ...
@};

void mystring::print(const print_context &c, unsigned level) const
@{
    // print_context::s is a reference to an ostream
    c.s << '\"' << str << '\"';
@}
@end example

The @code{level} argument is only required for container classes to correctly parenthesize the output. Let's try again to print the expression:

@example
cout << e << endl;
// -> "Hello, world!"
@end example

Much better. The @code{mystring} class can be used in arbitrary expressions:

@example
e += mystring("GiNaC rulez");
cout << e << endl;
// -> "GiNaC rulez"+"Hello, world!"
@end example

(note that GiNaC's automatic term reordering is in effect here), or even

@example
e = pow(mystring("One string"), 2*sin(Pi-mystring("Another string")));
cout << e << endl;
// -> "One string"^(2*sin(-"Another string"+Pi))
@end example

Whether this makes sense is debatable but remember that this is only an example. At least it allows you to implement your own symbolic algorithms for your objects.

Note that GiNaC's algebraic rules remain unchanged:

@example
e = mystring("Wow") * mystring("Wow");
cout << e << endl;
// -> "Wow"^2

e = pow(mystring("First")-mystring("Second"), 2);
cout << e.expand() << endl;
// -> -2*"First"*"Second"+"First"^2+"Second"^2
@end example

There's no way to, for example, make GiNaC's @code{add} class perform string concatenation. You would have to implement this yourself.

@subsection Automatic evaluation

@cindex @code{hold()}
@cindex evaluation
When dealing with objects that are just a little more complicated than the simple string objects we have implemented, chances are that you will want to have some automatic simplifications or canonicalizations performed on them. This is done in the evaluation member function @code{eval()}. Let's say that we wanted all strings automatically converted to lowercase with non-alphabetic characters stripped, and empty strings removed:

@example
class mystring : public basic
@{
    ...
public:
    ex eval(int level = 0) const;
    ...
@};

ex mystring::eval(int level) const
@{
    string new_str;
    for (size_t i=0; i<str.length(); i++) @{
        char c = str[i];
        if (c >= 'A' && c <= 'Z')
            new_str += tolower(c);
        else if (c >= 'a' && c <= 'z')
            new_str += c;
    @}

    if (new_str.length() == 0)
        return _ex0();
    else
        return mystring(new_str).hold();
@}
@end example

The @code{level} argument is used to limit the recursion depth of the evaluation. We don't have any subexpressions in the @code{mystring} class so we are not concerned with this.
If we had, we would call the @code{eval()} functions of the subexpressions with @code{level - 1} as the argument if @code{level != 1}. The @code{hold()} member function sets a flag in the object that prevents further evaluation. Otherwise we might end up in an endless loop. When you want to return the object unmodified, use @code{return this->hold();}.

Let's confirm that it works:

@example
ex e = mystring("Hello, world!") + mystring("!?#");
cout << e << endl;
// -> "helloworld"

e = mystring("Wow!") + mystring("WOW") + mystring(" W ** o ** W");
cout << e << endl;
// -> 3*"wow"
@end example

@subsection Other member functions

We have implemented only a small set of member functions to make the class work in the GiNaC framework. For a real algebraic class, there are probably some more functions that you will want to re-implement, such as @code{evalf()}, @code{series()} or @code{op()}. Have a look at @file{basic.h} or the header file of the class you want to make a subclass of to see what's there. You can, of course, also add your own new member functions. In this case you will probably want to define a little helper function like

@example
inline const mystring &ex_to_mystring(const ex &e)
@{
    return static_cast<const mystring &>(*e.bp);
@}
@end example

that lets you get at the object inside an expression (after you have verified that the type is correct) so you can call member functions that are specific to the class.

That's it. May the source be with you!


@node A Comparison With Other CAS, Advantages, Adding classes, Top
@c    node-name, next, previous, up
@chapter A Comparison With Other CAS
@cindex advocacy

This chapter will give you some information on how GiNaC compares to other, traditional Computer Algebra Systems, like @emph{Maple}, @emph{Mathematica} or @emph{Reduce}, where it has advantages and disadvantages over these systems.

@menu
* Advantages::     Strengths of the GiNaC approach.
* Disadvantages::  Weaknesses of the GiNaC approach.
* Why C++?::       Attractiveness of C++.
@end menu
Source: https://www.ginac.de/ginac.git/?p=ginac.git;a=blob;f=doc/tutorial/ginac.texi;h=8c060cfba62eb2dfa6caffd26421b0ccfad83c2a;hb=85ce9664ddba79c28a6945b1e5b4e2b71f77cb51
check if thread is running
Hey,
What is best practice for a watchdog in a multithreading system?
So if one thread has an unhandled exception, restart that thread?
@Gijs I thought about the same thing. I would just add a short sleep at the end of the loop, so that in case it fails very quickly it doesn't go into a tight loop, especially if the code in the thread is sending anything (over the network or to a device connected via I2C or SPI etc.). Of course it relies on the contents of the function being able to restart a second time.
I'm not sure this is 100% fool proof as I believe there are cases when some conditions will not result in proper exceptions being thrown, so of course using and feeding
WDT (or an external watchdog) may be a good idea as well.
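For reference, the hardware watchdog on these boards is created and fed like this (the timeout value is illustrative):

from machine import WDT

wdt = WDT(timeout=5000)  # reset the board if not fed within 5 seconds

# call this periodically from the main loop, only while things look healthy
wdt.feed()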
I'd imagine you could do something like this, but there's probably a better solution that I cannot think of right now. Also I'm not sure whether that also handles exceptions that occur inside
foo()
def thread1():
    while True:
        try:
            # run some code
            foo()
        except:
            # handle all potential errors
            pass
If anyone has a better solution I'm interested as well!
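A sketch combining the suggestions above, a restart loop plus a short sleep after a failure (foo() is the thread body from earlier; the names and the delay are placeholders):

import time
import _thread

def supervised(target, retry_delay=5):
    # Wrap a thread body so it is restarted after any unhandled exception,
    # sleeping briefly so a fast-failing thread doesn't spin in a tight loop.
    def runner():
        while True:
            try:
                target()
            except Exception as e:
                print('thread crashed, restarting:', e)
                time.sleep(retry_delay)
    return runner

_thread.start_new_thread(supervised(foo), ())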
Source: https://forum.pycom.io/topic/7132/check-if-thread-is-running
This page describes how to access Kubernetes apiserver audit logs.
Overview
Each GKE On-Prem cluster has Kubernetes Audit Logging, which keeps a chronological record of calls made to the cluster's Kubernetes API server. Audit logs are useful for investigating suspicious API requests or for collecting statistics.
Disk-based audit logging
By default, audit logs from each API server are dumped to a persistent disk, so that VM restarts/upgrades won't cause the logs to disappear. GKE On-Prem retains up to 10GB of audit logs.
Cloud Audit logging
If Cloud Audit Logging is enabled, then Admin Activity audit logs from all API servers are sent to Google Cloud, using the project and location set during installation.
Accessing Kubernetes audit logs
Disk-based audit logging
You can only access audit logs through the admin cluster:
View the Kubernetes API servers running in your clusters:
kubectl get pods --all-namespaces -l component=kube-apiserver

Cloud Audit logging
Console
In the Google Cloud console, open the Logs Viewer and enter the following advanced filter:

resource.type="k8s_cluster"
logName="projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity"
protoPayload.serviceName="anthosaudit.googleapis.com"
Click Submit Filter to display all audit logs from GKE On-Prem clusters that were configured to log to this project.
gcloud
List the first two log entries in your project's Admin Activity log that
apply to the
k8s_cluster resource type:
gcloud logging read \
    'logName="projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity" \
    AND resource.type="k8s_cluster" \
    AND protoPayload.serviceName="anthosaudit.googleapis.com"' \
    --limit 2 \
    --freshness 300d
where [PROJECT_ID] is your project ID.
The output shows two log entries. Notice that for each log entry, the
logName field has the value
projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity
and
protoPayload.serviceName is equal to
anthosaudit.googleapis.com.
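The same advanced filter syntax accepts any Cloud Audit Logging field. For example, as a sketch (the email address below is a placeholder), to narrow the results to calls made by a single user:

gcloud logging read \
    'logName="projects/[PROJECT_ID]/logs/cloudaudit.googleapis.com%2Factivity" \
    AND resource.type="k8s_cluster" \
    AND protoPayload.authenticationInfo.principalEmail="alice@example.com"' \
    --limit 5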
Audit policy
Audit logging behavior is determined by a statically-configured Kubernetes audit logging policy. Changing this policy is currently not supported.
Source: https://cloud.google.com/anthos/clusters/docs/on-prem/1.1/how-to/security/audit-logging
On 2020-04-10 2:24 p.m., Andrew Barnert wrote:
On Apr 10, 2020, at 06:00, Soni L. fakedme+py@gmail.com wrote
why's a "help us fix bugs related to exception handling" proposal getting so much pushback? I don't understand.
Because it’s a proposal for a significant change to the language semantics that includes a change to the syntax, which is a very high bar to pass. Even for smaller changes that can be done purely in the library, the presumption is always conservative, but the higher the bar, the more pushback.
There are also ways your proposal could be better. You don’t have a specific real life example. Your toy example doesn’t look like a real problem, and the fix makes it less readable and less pythonic. Your general rationale is that it won’t fix anything but it might make it possible for frameworks to fix problems that you insist exist but haven’t shown us—which is not a matter of “why should anyone trust you that they exist?”, but of “how can anyone evaluate how good the fix is without seeing them?” But most of this is stuff you could solve now, by answering the questions people are asking you. Sure, some of it is stuff you could have anticipated and answered preemptively, but even a perfectly thought-out and perfectly formed proposal will get pushback; it’s just more likely to survive it.
If you’re worried that it’s personal, that people are pushing back because it comes from you and you’ve recently proposed a whole slew of radical half-baked ideas that all failed to get very far, or that your tone doesn’t fit the style or the Python community, or whatever, I don’t think so. Look at the proposal to change variable deletion time—that’s gotten a ton of pushback, and it’s certainly not because nobody respects Guido or nobody likes him.
hm.
okay.
so, for starters, here's everything I'm worried about.
in one of my libraries (yes this is real code. all of this is taken from stuff I'm deploying.) I have the following piece of code:
def _extract(self, obj):
    try:
        yield (self.key, obj[self.key])
    except (TypeError, IndexError, KeyError):
        if not self.skippable:
            raise exceptions.ValidationError
(A Boneless Datastructure Language :: abdl._vm:84-89, AGPLv3-licensed,... @ 34551d96ce021d2264094a4941ef15a64224d195)
this library handles all sorts of arbitrary objects - dicts, sets, lists, defaultdicts, wrappers that are registered with collections.abc.Sequence/Mapping/Set, self-referential data structures, and whatnot. (and btw can we get the ability to index into a set to get the originally inserted element yet) - which means I need to treat all sorts of potential errors as errors. however, sometimes those aren't errors, but intended flow control, such as when your program's config has an integer list in the "username" field. in that case, I raise a ValidationError, and you handle it, and we're all good. (or sometimes you want to skip that entry altogether but anyway.)
due to the wide range of supported objects, I can't expect the TypeError to always come from my attempt to index into a set, or the IndexError to always come from my attempt to index into a sequence, or the KeyError to always come from my attempt to index into a mapping. those could very well be coming from a bug in someone's weird sequence/mapping/set implementation. I have no way of knowing! I also don't have a good way of changing this to wrap stuff in RuntimeError, unfortunately. (and yes, this can be mitigated by encouraging the library user to write unit tests and integration tests and whatnot... which is easier said than done. and that won't necessarily catch these bugs, either. (ugh so many times I've had to debug ABDL just going into an infinite loop somewhere because I got the parser wrong >.< unit tests didn't help me there, but anyway...))
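(for completeness: the usual mitigation today is to shrink the try block so that only the subscript itself is guarded. a sketch of the same _extract:

def _extract(self, obj):
    try:
        value = obj[self.key]
    except (TypeError, IndexError, KeyError):
        # only the indexing is guarded now. exceptions raised *inside* a
        # weird __getitem__ are still conflated with flow control, which
        # is exactly the problem described above
        if not self.skippable:
            raise exceptions.ValidationError
    else:
        yield (self.key, value)

but as the comment says, this doesn't help when the object's own __getitem__ is buggy.)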
"exception spaces" would enable me to say "I want your (operator/function/whatnot) to raise some errors in my space, so I don't confuse them with bugs in your space instead". and they'd get me exactly that. it's basically a hybrid of exceptions and explicit error handling. all the drawbacks of exceptions, with all the benefits of explicit error handling. which does make it worse than both tbh. it's also backwards compatible. I'm trying to come up with a way to explain how "exception spaces" relate to things like rust's .unwrap() on an Result::Err, or nesting a Result in a Result so the caller has to deal with it instead of you, or whatnot, but uh this is surprisingly difficult without mentioning rust code. but think of this like inverted rust errors - while in rust you handle the errors in a return value, with my proposal you'd handle the errors by passing in an argument. or a global. or hidden state. anyway, this is unfortunately more powerful.
my "toy example" (the one involving my use-case, not the one trying to define the semantics of these "exception spaces") is also real code. (GAnarchy :: ganarchy.config:183-201, AGPLv3-licensed,... @ not yet committed) it's just... it doesn't quite hit this issue like ABDL, template engines, and other things doing more complex things do. I'm sorry I don't have better examples, but this isn't the first time I worry my code is gonna mask bugs. it's not gonna be the last, either.
anyway, I'm gonna keep pushing for this because it's probably the easiest way to retrofix explicit error handling into python, while not being as ugly and limiting as wrapping everything in RuntimeError like I proposed previously. (that *was* a bad proposal, tbh. sorry.) I'll do my best to keep adding more and more real code to this thread showing examples where current exception handling isn't quite good enough and risks masking bugs, as I notice them. which probably means only my own code, but oh well.
Source: https://mail.python.org/archives/list/python-ideas@python.org/message/OHPQQEBF7BDSDNXKZQTLT6SETOB5FVRH/
al_orthographic_transform - Man Page
Allegro 5 API
Synopsis
#include <allegro5/allegro.h> void al_orthographic_transform(ALLEGRO_TRANSFORM *trans, float left, float top, float n, float right, float bottom, float f)
Description
Combines the given transformation with an orthographic transformation which maps the screen rectangle to the given left/top and right/bottom coordinates.
near/far is the z range; coordinates outside of that range will get clipped. Normally -1/1 is fine because all 2D graphics will have a z coordinate of 0. However, if you for example call al_draw_rectangle(0, 0, 100, 100) and rotate around the x axis (“towards the screen”), make sure your z range allows values from -100 to 100 or the rotated rectangle will get clipped.
Also, if you are using a depth buffer the z range decides the depth resolution. For example if you have a 16 bit depth buffer there are only 65536 discrete depth values. So if your near/far is set to -1000000/1000000 most of the z positions would not result in separate depth values which could lead to artifacts.
The result of applying this transformation to coordinates will be to normalize visible coordinates into the cube from -1/-1/-1 to 1/1/1. Such a transformation is mostly useful for passing it to al_use_projection_transform(3) - see that function for an example use.
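For example, a typical 2D setup might look like this (the 640x480 display size is illustrative):

ALLEGRO_TRANSFORM t;
al_identity_transform(&t);
al_orthographic_transform(&t, 0, 0, -1.0, 640, 480, 1.0);
al_use_projection_transform(&t);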
Since
5.1.3
See Also
al_use_projection_transform(3), al_perspective_transform(3)
Referenced By
al_perspective_transform(3), al_use_projection_transform(3).
Source: https://www.mankier.com/3/al_orthographic_transform
How to validate the response time of a request in Rest Assured?
We can validate the response time of a request in Rest Assured. The time elapsed after a request is sent to the server and then receiving the response is known as the response time.
The response time is obtained in milliseconds by default. To validate the response time with Matchers, we need to use the below-overloaded methods of the ValidatableResponseOptions −
- time(matcher) - it verifies the response time in milliseconds with the matcher passed as a parameter to the method.
- time(matcher, time unit) - it verifies the response time with the matcher and the time unit passed as parameters to the method.
We shall perform the assertion with the help of the Hamcrest framework, which uses the Matcher class for assertion. To work with Hamcrest, we have to add the Hamcrest Core dependency to the pom.xml in our Maven project.
Example
Code Implementation
import org.hamcrest.Matchers;
import org.testng.annotations.Test;

import io.restassured.RestAssured;
import io.restassured.response.Response;
import io.restassured.response.ValidatableResponse;
import io.restassured.specification.RequestSpecification;

public class NewTest {
   @Test
   public void verifyResTime() {
      // base URI with Rest Assured class
      RestAssured.baseURI = "";

      // input details
      RequestSpecification r = RestAssured.given();

      // GET request
      Response res = r.get();

      // obtain Response as string
      String j = res.asString();

      // obtain ValidatableResponse type
      ValidatableResponse v = res.then();

      // verify response time lesser than 1000 milliseconds
      v.time(Matchers.lessThan(1000L));
   }
}
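The overload that also takes a time unit works the same way. For example, to assert that the response arrived within two seconds (the threshold here is illustrative):

import java.util.concurrent.TimeUnit;

// verify response time lesser than 2 seconds
v.time(Matchers.lessThan(2L), TimeUnit.SECONDS);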
Source: https://www.tutorialspoint.com/how-to-validate-the-response-time-of-a-request-in-rest-assured
Having space in the office to hold meetings plays a major role in today’s business world. Aside from having it, you’ll need to share it efficiently among teams without conflict, which can be a bit challenging.
To support an easy booking of rooms among various teams with no time conflicts, the meeting room calendar allows everyone in your work space to check for the availability of rooms and have control over them in seconds. Let’s walk through how to customize the look and feel of our JavaScript scheduler component to design a meeting room calendar.
Multiple resources—overview
As a continuation of my previous blog, which gave a basic overview of the JavaScript scheduler and its usage in a real-time scenario, here I'll give an overview of the multiple resources concept and how to utilize this feature in designing the meeting room calendar.
The feature name itself portrays that the scheduler allots unique individual space for more than one resource on the same calendar. The resource names are grouped in a hierarchical structure in the scheduler’s header part and their equivalent work space parts are displayed separately at the bottom to hold their respective appointments.
The important options available with the multiple resources concept are:
- Group by date—Given resources are grouped under each date.
- Group by child—All the child-level resources are grouped under their parent resource names.
- Grouped or linked events—A single appointment is shared by more than one resource.
- Customizable work days for each resource—Different work days can be assigned for each resource.

Now, let's dive in depth and see how to design a meeting room calendar.
Designing a meeting room calendar
I’m going to display only the day view for this example while designing our meeting room calendar. As you get started, there should be some restrictions already set for booking meeting rooms, like being unable to reserve a room if it is already booked, and being unable to book during a non-accessible time range, such as lunch or during maintenance works.
Also, past meeting events won't be editable. For this example, I'm considering July 31, 2018, to be the current date of the scheduler; therefore, rooms can't be booked on dates before this. The same applies to editing actions, too.
Getting started
As my previous blog already explained the detailed steps of how to get started with the scheduler control in TypeScript, I’m moving past those basics here.
First, create a new Essential JS 2 application in TypeScript with the help of QuickStart project and do the necessary configurations.
Once the configuration works are done, add the scheduler control in your application and inject only the Day view module.
Populating resources data
Let’s see how to customize the scheduler layout to display the meeting rooms. Here, the meeting rooms are defined as resources and each room will be displayed parallel against the common timescale.
Imagine that an office has five meeting rooms within its premises, and everyone should share those rooms. As soon as a person books a room, they own it for the entire booked time range and no other person can reserve the same room during that specific time.
To start with, let’s assume that the resource data holds common room information such as room name, room ID, and a specific color to denote resource appointments. Apart from the default resource fields, you can also define other custom resource fields such as room capacity and its type using the resources property.
resources: [{
    field: 'RoomId', title: 'Select Room', name: 'Rooms', allowMultiple: true,
    dataSource: [
        { text: 'Jammy', id: 1, capacity: 20, type: 'Conference' },
        { text: 'Tweety', id: 2, capacity: 7, type: 'Cabin' },
        { text: 'Nestle', id: 3, capacity: 5, type: 'Cabin' },
        { text: 'Phoenix', id: 4, capacity: 15, type: 'Conference' },
        { text: 'Mission', id: 5, capacity: 25, type: 'Conference' }
    ],
    textField: 'text', idField: 'id'
}]
I have simply defined the resources collection and displayed it visually on the scheduler. Now, we need to define those resource collection names to group them under the group property.
group: { resources: ['Rooms'] }
NOTE: While defining resource names under the group property, make sure that you are using the text value assigned for the name field of the resources collection.
Customizing the resource layout
The resources are displayed on the scheduler layout now and it is time to customize the look of the resource header rows and columns.
As we are displaying the scheduler in day view alone, it is not necessary to repeat the date header under each resource as shown in the previous image. It is enough to display a single common date header. To do this, set the byDate property to true.
group: { resources: ['Rooms'], byDate: true }
Also, customize the resource header row and change its look as shown in the following, using the resourceHeaderTemplate property.
<script id="resourceTemplate" type="text/x-template"> <div class='template-wrap'> <div class="resource-details"> <div class="room-name">${getRoomName(data)}</div> <div class="room-type">${getRoomType(data)}</div> </div> <div class="room-capacity">${getRoomCapacity(data)} Seats</div> </div> </script>
let scheduleOptions: ScheduleModel = {
    width: '100%',
    height: '850px',
    currentView: "Day",
    selectedDate: new Date(2018, 6, 31),
    resourceHeaderTemplate: '#resourceTemplate',
    …
    …
The CSS styles to be applied to achieve the above look are as follows. Also, I’m going to hide the date header row here as we are displaying the same date for all resources, which is already available on the top-most common date header range.
<style>
    .e-schedule .e-vertical-view.e-by-date .e-date-header-wrap table tbody tr:nth-child(1) .e-header-cells,
    .e-schedule .e-vertical-view.e-by-date .e-left-indent-wrap table tbody tr:nth-child(1) .e-header-cells,
    .e-schedule .e-schedule-toolbar .e-toolbar-items .e-toolbar-right .e-today {
        display: none;
    }
    .e-schedule .e-vertical-view .e-resource-cells {
        height: 62px;
        background-color: lightgrey !important;
    }
    .e-schedule .e-left-indent-wrap th.e-resource-cells {
        background-color: #fff !important;
    }
    .e-schedule .template-wrap .room-capacity {
        text-align: center;
        float: right;
        height: 20px;
        width: 60px;
        background: gray;
        font-weight: 500;
        color: white;
    }
    .e-schedule .template-wrap .resource-details {
        float: left;
    }
    .e-schedule .template-wrap .resource-details .room-name {
        font-size: 16px;
        font-weight: 500;
        margin-top: 5px;
    }
</style>
Cell customization
Now, let’s customize the cell color to add special meaning to it, where the green-colored slots show the room’s availability. You can achieve this customization with the help of CSS.
<style>
    .e-schedule .e-vertical-view .e-work-cells,
    .available {
        background-color: #f6fff3;
    }
</style>
Populating the meeting data
The layout customization is over now, and it’s time to fill the scheduler with room reservation data. Let’s form the data with meeting-related fields such as meeting summary, start time, and end time, along with the location. Then, assign it to the scheduler dataSource property. Each appointment in the scheduler notes a confirmed booking to convey the room reservation status at a specific time.
NOTE: Make sure that the correct field names are mapped into the eventSettings default scheduler fields from dataSource.
eventSettings: {
    dataSource: [
        {
            Id: 1,
            Subject: "Board Meeting",
            Location: "Office",
            Description: "Meeting to discuss business goal of 2018.",
            StartTime: new Date(2018, 6, 30, 9, 0),
            EndTime: new Date(2018, 6, 30, 11, 0),
            RoomId: 1
        }, {
            Id: 2,
            Subject: "Training session on JSP",
            Location: "Office",
            Description: "Knowledge sharing on JSP topics.",
            StartTime: new Date(2018, 6, 30, 14, 0),
            EndTime: new Date(2018, 6, 30, 17, 0),
            RoomId: 1
        },
        …
        …
    ],
    fields: {
        id: 'Id',
        subject: { title: 'Summary', name: 'Subject' },
        location: { title: 'Location', name: 'Location' },
        description: { title: 'Comments', name: 'Description' },
        startTime: { title: 'From', name: 'StartTime' },
        endTime: { title: 'To', name: 'EndTime' }
    }
}
Differentiating past bookings
Let’s differentiate the reservations that are finished and in a disabled state by making use of the eventRendered event.
// Check whether the bookings belong to the past dates
let isReadOnly: Function = (endDate: Date): boolean => {
    return (endDate < new Date(2018, 6, 31, 0, 0));
};

eventRendered: (args: EventRenderedArgs) => {
    let data: any = <any>args.data;
    if (isReadOnly(data.EndTime)) {
        args.element.setAttribute('aria-readonly', 'true');
        args.element.classList.add('e-read-only');
    }
}
The style for differentiating the older reservations is as follows.
<style>
    .e-schedule .e-read-only {
        opacity: .8;
    }
</style>
Adding recurring blocked appointments
A specific time on scheduler can be blocked by adding customized appointments with custom fields such as EventType and differentiating them with a light grey color denoting a lunch break and a light red shade depicting the maintenance status for room cleaning purposes. Also, the time range can be made read-only and distinguished through CSS customizations by adding appropriate class names to those appointment elements with the help of an eventRendered event.
eventRendered: (args: EventRenderedArgs) => {
    let data: any = <any>args.data;
    if (isReadOnly(data.EndTime) || data.EventType == "Lunch" || data.EventType == "Maintenance") {
        args.element.setAttribute('aria-readonly', 'true');
        args.element.classList.add('e-read-only');
    }
    if (data.EventType == "Lunch") {
        args.element.classList.add('e-lunch-break');
    } else if (data.EventType == "Maintenance") {
        args.element.classList.add('e-maintenance');
    }
}
NOTE: Such kinds of special, customized appointments are grouped for multiple resources by setting allowGroupEdit to true within the group property.
group: { resources: ['Rooms'], byDate: true, allowGroupEdit: true }
The CSS to be applied on those blocked appointments is as follows.
<style>
    .e-schedule .e-maintenance .e-time,
    .e-schedule .e-lunch-break .e-time,
    .e-schedule .e-maintenance .e-recurrence-icon,
    .e-schedule .e-lunch-break .e-recurrence-icon {
        display: none !important;
    }
    .e-schedule .e-maintenance .e-appointment-details,
    .e-schedule .e-lunch-break .e-appointment-details {
        text-align: center !important;
        padding-top: 6px !important;
    }
    .e-schedule .e-lunch-break .e-appointment-details {
        padding-top: 22px !important;
    }
    .e-schedule .e-lunch-break {
        background-color: rgb(0,0,0,0.14) !important;
        opacity: 1 !important;
    }
    .e-schedule .e-maintenance {
        background-color: #ffd5d3 !important;
        opacity: 1 !important;
    }
</style>
Blocking reservations on inaccessible cells
The cell and appointment customization are completely finished. Now we need to concentrate on another important action: making certain cells inaccessible by users, such as the cells of past dates. This can be done by adding restrictions within the renderCell and popupOpen events.
To check for the past date cells and differentiate them, add conditions within the renderCell event.
renderCell: (args: RenderCellEventArgs) => {
    if (args.element.classList.contains('e-work-cells')) {
        // To disable the past date cells
        if (args.date < new Date(2018, 6, 31, 0, 0)) {
            args.element.setAttribute('aria-readonly', 'true');
            args.element.classList.add('e-read-only-cells');
        }
    }
}
To prevent the pop-up from opening on inaccessible slots, add the conditions within the popupOpen event as follows.

popupOpen: (args: PopupOpenEventArgs) => {
    let target: HTMLElement = args.target as HTMLElement;
    if (target.classList.contains('e-work-cells')) {
        if (target.classList.contains('e-read-only-cells')) {
            args.cancel = true;
        }
    }
}
Blocking the existing booked time slots
As I stated earlier, only one meeting can be held at a time in each room and, therefore, it is better to change the appointment appearance, to extend its width to the full size of the cell, thus not allowing users to click on the cells behind it.
The styles to change the appointment appearance are as follows.
<style>
    .e-schedule .e-vertical-view .e-day-wrapper .e-appointment .e-subject {
        font-weight: 500;
    }
    .e-schedule .e-vertical-view .e-day-wrapper .e-appointment {
        width: 100% !important;
        background: #deedff;
        color: rgba(0, 0, 0, 0.87);
        border: 1px solid lightgrey;
    }
</style>
Additionally, we may also need to extend the restrictions within the popupOpen event to block the pop-up from opening on past bookings, as well as on the cells which are already occupied.

popupOpen: (args: PopupOpenEventArgs) => {
    let target: HTMLElement = args.target as HTMLElement;
    let data: any = <any>args.data;
    if (!isNullOrUndefined(target) && target.classList.contains('e-work-cells')) {
        let endDate = data.endTime as Date;
        let startDate = data.startTime as Date;
        let groupIndex = data.groupIndex as number;
        if ((target.classList.contains('e-read-only-cells')) ||
            (!scheduleObj.isSlotAvailable(startDate as Date, endDate as Date, groupIndex as number))) {
            args.cancel = true;
        }
    } else if (target.classList.contains('e-appointment') &&
        (isReadOnly(data.EndTime) || target.classList.contains('e-lunch-break') ||
         target.classList.contains('e-maintenance'))) {
        args.cancel = true;
    }
}
To block new reservations that conflict with existing reserved slots, and to block the update action on existing reservations that extends to the nearby reserved slot, we need to check the condition within the actionBegin event and cancel it.
actionBegin: (args: ActionEventArgs) => {
    if (args.requestType == "eventCreate" || args.requestType == "eventChange") {
        let data: any = <any>args.data;
        let groupIndex = scheduleObj.eventBase.getGroupIndexFromEvent(data);
        if (!scheduleObj.isSlotAvailable(data.StartTime as Date, data.EndTime as Date, groupIndex as number)) {
            args.cancel = true;
        }
    }
}
Summary
To summarize, we have seen how to customize the scheduler with the multiple resources concept to design a meeting room calendar by using additional styling options. Keep checking up with us, as more useful blogs are waiting in our queue that show more of the customization options available in the scheduler.
Try our scheduler component by downloading the free 30-day trial or checking it out on GitHub!
You can download the complete sample from GitHub.
If you like this blog post, we think you’ll also like the following free e-books:
JavaScript Succinctly
TypeScript Succinctly
AngularJS Succinctly
Angular 2 Succinctly
Source: https://dev.to/syncfusion/how-to-create-a-meeting-room-calendar-4o02
I’ve been thinking some more about deployment of Python web applications, and deployment in general (in part leading up to the Web Summit). And I’ve got an idea.
I wrote about this about a year ago and recently revised some notes on a proposal but I’ve been thinking about something a bit more basic: a way to simply ship server applications, bundles of code. Web applications are just one use case for this.
For now lets call this a “Python application package”. It has these features:
- There is an application description: this tells the environment about the application. (This is sometimes called “configuration” but that term is very confusing and overloaded; I think “description” is much clearer.)
- Given the description, you can create an execution environment to run code from the application and acquire objects from the application. So there would be a specific way to set up sys.path, and a way to indicate any libraries that are required but not bundled directly with the application.
- The environment can inject information into the application. (Also this sort of thing is sometimes called “configuration”, but let’s not do that either.) This is where the environment could indicate, for instance, what database the application should connect to (host, username, etc).
- There would be a way to run commands and get objects from the application. The environment would look in the application description to get the names of commands or objects, and use them in some specific manner depending on the purpose of the application. For instance, WSGI web applications would point the environment to an application object. A Tornado application might simply have a command to start itself (with the environment indicating what port to use through its injection).
There’s a lot of things you can build from these pieces, and in a sophisticated application you might use a bunch of them at once. You might have some WSGI, maybe a seperate non-WSGI server to handle Web Sockets, something for a Celery queue, a way to accept incoming email, etc. In pretty much all cases I think basic application lifecycle is needed: commands to run when an application is first installed, something to verify the environment is acceptable, when you want to back up its data, when you want to uninstall it.
There’s also some things that all environments should setup the same or inject into the application. E.g., $TMPDIR should point to a place where the application can keep its temporary files. Or, every application should have a directory (perhaps specified in another environmental variable) where it can write log files.
Details?
To get more concrete, here’s what I can imagine from a small application description; probably YAML would be a good format:
platform: python, wsgi
require:
  os: posix
  python: <3
  rpm: m2crypto
  deb: python-m2crypto
  pip: requirements.txt
python:
  paths: vendor/
wsgi:
  app: myapp.wsgiapp:application
I imagine platform as kind of a series of mixins. This system doesn’t really need to be Python-specific; when creating something similar for Silver Lining I found PHP support relatively easy to add (handling languages that aren’t naturally portable, like Go, might be more of a stretch). So python is one of the features this application uses. You can imagine lots of modularization for other features, but it would be easy and unproductive to get distracted by that.
The application has certain requirements of its environment, like the version of Python and the general OS type. The application might also require libraries, ideally only libraries that are not portable (M2Crypto being an example). Modern package management works pretty nicely for this stuff, so relying on system packages as a first try I believe is best (I'd offer requirements.txt as a fallback, not as the primary way to handle dependencies).
I think it’s much more reliable if applications primarily rely on bundling their dependencies directly (i.e., using a vendor directory). The tool support for this is a bit spotty, but I believe this package format could clarify the problems and solutions. Here is an example of how you might set up a virtualenv environment for managing vendor libraries (you then do not need virtualenv to use those same libraries), and do so in a way where you can check the results into source control. It’s kind of complicated, but works (well, almost works - bin/ files need fixing up). It’s a start at least.
Support Library
On the environment side we need a good support library. pywebapp has some of the basic features, though it is quite incomplete. I imagine a library looking something like this:
import subprocess

from apppackage import AppPackage

app = AppPackage('/var/apps/app1.2012.02.11')

# Maybe a little Debian support directly:
subprocess.call(['apt-get', 'install'] + app.config['require']['deb'])

# Or fall back on virtualenv/pip
app.create_virtualenv('/var/app/venvs/app1.2012.02.11')
app.install_pip_requirements()

wsgi_app = app.load_object(app.config['wsgi']['app'])
You can imagine building hosting services on this sort of thing, or setting up continuous integration servers (app.run_command(app.config['unit_test'])), and so forth.
Local Development
If designed properly, I think this format is as usable for local development as it is for deployment. It should be able to run directly from a checkout, with the “development environment” being an environment just like any other.
This rules out, or at least makes less exciting, the use of zip files or tarballs as a package format. The only justification I see for using such archives is that they are easy to move around; but we live in the FUTURE and there are many ways to move directories around and we don’t need to cater to silly old fashions. If that means a script that creates a tarball, FTPs it to another computer, and there it is unzipped, then fine - this format should not specify anything about how you actually deliver the files. But let’s not worry about copying WARs.
Source: http://www.ianbicking.org/blog/2012/02/python-application-package.html
How can the ROE and the ROI of companies be compared
How is return on equity (ROE) defined and what is the importance of understanding the return on equity (ROE) as it applies to international financing? What would the strengths and weaknesses be?
How is internal rate of return (IRR) defined and what is the importance of understanding the internal rate of return (IRR) as it applies to international financing? What would the strengths and weaknesses be?
When selecting two companies from the same industry and using the most current annual report information available on the company's website how is the ROE computed for each?
When selecting the same two companies from the same industry and using the most current annual report information available on the company's website how is the ROI computed for each?
How can the ROE and the ROI of these companies be compared and described?
The objective is to understand the forces of globalization and its implications for the multinational firm and to recognize financial management decisions of multinational firms.
Solution Summary
According to Kennon (2009), return on equity (ROE) is a financial profitability measure that shows how much income a firm generates relative to the sum of investor equity, based on the firm's financial statements. Shareholder equity is the firm's total assets minus its total liabilities (total assets - total liabilities = equity). ROE uses the company's net income and the shareholders' equity to show management's capacity to generate wealth for the shareholders (net profit / shareholder's equity = ROE). For example, a firm with net income of $10 million and shareholder equity of $50 million has an ROE of 20%. This ratio is therefore a fair indicator of how efficiently the firm is managing its finances, where the final goal is to generate income (wealth) for the investor (McClure, 2009). This tool seems to be of good value when evaluating overseas projects to invest in. One could argue that there is a direct relationship between a firm's capacity to generate equity and the investor's returns.
|
https://brainmass.com/business/internal-rate-of-return/how-can-the-roe-and-the-roi-of-companies-be-compared-402864
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Enabling Cross-Origin Requests in ASP.NET Web API 2
by Mike Wasson
Browser security prevents a web page from making AJAX requests to another domain. This restriction is called the same-origin policy, and prevents a malicious site from reading sensitive data from another site. However, sometimes you might want to let other sites call your web API.
Cross-Origin Resource Sharing (CORS) is a W3C standard that lets a server relax the same-origin policy, explicitly allowing some cross-origin requests while rejecting others. This tutorial shows how to enable CORS in your Web API application.
Software versions used in the tutorial
- Visual Studio 2013 Update 2
- Web API 2.2
Introduction
Two URLs have the same origin if they have identical schemes, hosts, and ports. A request from a page to a URL with a different scheme, host, or port is a cross-origin request.
Note
Internet Explorer does not consider the port when comparing origins.
Create the WebService Project
Create a new ASP.NET Web Application project named WebService, and add a Web API controller named TestController with the following code:

using System.Net.Http;
using System.Web.Http;

namespace WebService.Controllers
{
    public class TestController : ApiController
    {
        public HttpResponseMessage Get()
        {
            return new HttpResponseMessage()
            {
                Content = new StringContent("GET: Test message")
            };
        }

        public HttpResponseMessage Post()
        {
            return new HttpResponseMessage()
            {
                Content = new StringContent("POST: Test message")
            };
        }

        public HttpResponseMessage Put()
        {
            return new HttpResponseMessage()
            {
                Content = new StringContent("PUT: Test message")
            };
        }
    }
}
You can run the application locally or deploy to Azure. (For the screenshots in this tutorial, I deployed to Azure App Service Web Apps.) To verify that the web API is working, navigate to http://hostname/api/test, where hostname is the domain where you deployed the application. You should see the response text, "GET: Test message".
Create the WebClient Project
Create another ASP.NET Web Application project and select the MVC project template. Optionally, select Change Authentication > No Authentication. You don't need authentication for this tutorial.
In Solution Explorer, open the file Views/Home/Index.cshtml. Replace the code in this file with the following:
<div>
    <select id="method">
        <option value="get">GET</option>
        <option value="post">POST</option>
        <option value="put">PUT</option>
    </select>
    <input type="button" value="Try It" onclick="sendRequest()" />
    <span id='value1'>(Result)</span>
</div>

@section scripts {
<script>
    // TODO: Replace with the URL of your WebService app.
    var serviceUrl = 'http://mywebservice/api/test';

    function sendRequest() {
        var method = $('#method').val();

        $.ajax({
            type: method,
            url: serviceUrl
        }).done(function (data) {
            $('#value1').text(data);
        }).error(function (jqXHR, textStatus, errorThrown) {
            $('#value1').text(jqXHR.responseText || textStatus);
        });
    }
</script>
}
For the serviceUrl variable, use the URI of the WebService app. Now run the WebClient app locally or publish it to another website.
Clicking the "Try It" button submits an AJAX request to the WebService app, using the HTTP method listed in the dropdown box (GET, POST, or PUT). This lets us examine different cross-origin requests. Right now, the WebService app does not support CORS, so if you click the button, you will get an error.
Note
If you watch the HTTP traffic in a tool like Fiddler, you will see that the browser does send the GET request, and the request succeeds, but the AJAX call returns an error. It's important to understand that same-origin policy does not prevent the browser from sending the request. Instead, it prevents the application from seeing the response.
Enable CORS
Now let's enable CORS in the WebService app. First, add the CORS NuGet package. In Visual Studio, from the Tools menu, select NuGet Package Manager, then select Package Manager Console. In the Package Manager Console window, type the following command:
Install-Package Microsoft.AspNet.WebApi.Cors
This command installs the latest package and updates all dependencies, including the core Web API libraries. Use the -Version flag to target a specific version. The CORS package requires Web API 2.0 or later.
Open the file App_Start/WebApiConfig.cs. Add the following code to the WebApiConfig.Register method.
using System.Web.Http;

namespace WebService
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // New code
            config.EnableCors();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
Next, add the [EnableCors] attribute to the TestController class:

using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Cors;

namespace WebService.Controllers
{
    [EnableCors(origins: "", headers: "*", methods: "*")]
    public class TestController : ApiController
    {
        // Controller methods not shown...
    }
}
For the origins parameter, use the URI where you deployed the WebClient application. This allows cross-origin requests from WebClient, while still disallowing all other cross-domain requests. Later, I'll describe the parameters for [EnableCors] in more detail.
Do not include a forward slash at the end of the origins URL.
Redeploy the updated WebService application. You don't need to update WebClient. Now the AJAX request from WebClient should succeed. The GET, PUT, and POST methods are all allowed.
How CORS Works
This section describes what happens in a CORS request, at the level of the HTTP messages. It's important to understand how CORS works, so that you can configure the [EnableCors] attribute correctly, and troubleshoot if things don't work as you expect.
The CORS specification introduces several new HTTP headers that enable cross-origin requests. If a browser supports CORS, it sets these headers automatically for cross-origin requests; you don't need to do anything special in your JavaScript code.
Here is an example of a cross-origin request. The "Origin" header gives the domain of the site that is making the request.
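For illustration, such a request and a CORS-enabled response might look roughly like this (the hostnames are hypothetical):

GET http://myservice.azurewebsites.net/api/test HTTP/1.1
Accept: */*
Origin: http://myclient.azurewebsites.net
Host: myservice.azurewebsites.net

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Access-Control-Allow-Origin: http://myclient.azurewebsites.net
Content-Length: 17

GET: Test message

The Access-Control-Allow-Origin header in the response echoes the allowed origin back to the browser.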
If the response does not include the Access-Control-Allow-Origin header, the AJAX request fails. Specifically, the browser disallows the request. Even if the server returns a successful response, the browser does not make the response available to the client application.
Preflight Requests
For some CORS requests, the browser sends an additional request, called a "preflight request", before it sends the actual request for the resource.
The rule about request headers applies to headers that the application sets by calling setRequestHeader on the XMLHttpRequest object. (The CORS specification calls these "author request headers".) The rule does not apply to headers the browser can set, such as User-Agent, Host, or Content-Length.
Here is an example of a preflight request:
OPTIONS HTTP/1.1
Accept: */*
Origin:
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: accept, x-my-custom-header
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Host: myservice.azurewebsites.net
Content-Length: 0
The pre-flight request uses the HTTP OPTIONS method. It includes two special headers:
- Access-Control-Request-Method: The HTTP method that will be used for the actual request.
- Access-Control-Request-Headers: A list of request headers that the application set on the actual request. (Again, this does not include headers that the browser sets.)
Here is an example response, assuming that the server allows the request:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 0
Access-Control-Allow-Origin:
Access-Control-Allow-Headers: x-my-custom-header
Access-Control-Allow-Methods: PUT
Date: Wed, 05 Jun 2013 06:33:22 GMT
The response includes an Access-Control-Allow-Methods header that lists the allowed methods, and optionally an Access-Control-Allow-Headers header, which lists the allowed headers. If the preflight request succeeds, the browser sends the actual request, as described earlier.
Scope Rules for [EnableCors]
You can enable CORS per action, per controller, or globally for all Web API controllers in your application.
Per Action
To enable CORS for a single action, set the [EnableCors] attribute on the action method. The following example enables CORS for the GetItem method only.

public class ItemsController : ApiController
{
    public HttpResponseMessage GetAll() { ... }

    [EnableCors(origins: "", headers: "*", methods: "*")]
    public HttpResponseMessage GetItem(int id) { ... }

    public HttpResponseMessage Post() { ... }
    public HttpResponseMessage PutItem(int id) { ... }
}
Per Controller
If you set [EnableCors] on the controller class, it applies to all the actions on the controller. To disable CORS for an action, add the [DisableCors] attribute to the action. The following example enables CORS for every method except PutItem.

[EnableCors(origins: "", headers: "*", methods: "*")]
public class ItemsController : ApiController
{
    public HttpResponseMessage GetAll() { ... }
    public HttpResponseMessage GetItem(int id) { ... }
    public HttpResponseMessage Post() { ... }

    [DisableCors]
    public HttpResponseMessage PutItem(int id) { ... }
}
Globally
To enable CORS for all Web API controllers in your application, pass an EnableCorsAttribute instance to the EnableCors method:
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        var cors = new EnableCorsAttribute("", "*", "*");
        config.EnableCors(cors);
        // ...
    }
}
If you set the attribute at more than one scope, the order of precedence is:
- Action
- Controller
- Global
Set the Allowed Origins
The origins parameter of the [EnableCors] attribute specifies which origins are allowed to access the resource. The value is a comma-separated list of the allowed origins.
[EnableCors(origins: "", headers: "*", methods: "*")]
You can also use the wildcard value "*" to allow requests from any origins.
Consider carefully before allowing requests from any origin. It means that literally any website can make AJAX calls to your web API.
// Allow CORS for all origins. (Caution!)
[EnableCors(origins: "*", headers: "*", methods: "*")]
Set the Allowed HTTP Methods
The methods parameter of the [EnableCors] attribute specifies which HTTP methods are allowed to access the resource. To allow all methods, use the wildcard value "*". The following example allows only GET and POST requests.
[EnableCors(origins: "", headers: "*", methods: "get,post")]
public class TestController : ApiController
{
    public HttpResponseMessage Get() { ... }
    public HttpResponseMessage Post() { ... }
    public HttpResponseMessage Put() { ... }
}
Set the Allowed Request Headers
Earlier I described how a preflight request might include an Access-Control-Request-Headers header, listing the HTTP headers set by the application (the so-called "author request headers"). The headers parameter of the [EnableCors] attribute specifies which author request headers are allowed. To allow any headers, set headers to "*". To whitelist specific headers, set headers to a comma-separated list of the allowed headers:
[EnableCors(origins: "", headers: "accept,content-type,origin,x-my-header", methods: "*")]
However, browsers are not entirely consistent in how they set Access-Control-Request-Headers. For example, Chrome currently includes "origin", while Firefox does not include standard headers such as "Accept", even when the application sets them in script.
If you set headers to anything other than "*", you should include at least "accept", "content-type", and "origin", plus any custom headers that you want to support.
Set the Allowed Response Headers
By default, the browser does not expose all of the response headers to the application. The response headers that are available by default are:
- Cache-Control
- Content-Language
- Content-Type
- Expires
- Last-Modified
- Pragma
The CORS spec calls these simple response headers. To make other headers available to the application, set the exposedHeaders parameter of [EnableCors].
In the following example, the controller's Get method sets a custom header named 'X-Custom-Header'. By default, the browser will not expose this header in a cross-origin request. To make the header available, include 'X-Custom-Header' in exposedHeaders.

[EnableCors(origins: "*", headers: "*", methods: "*", exposedHeaders: "X-Custom-Header")]
public class TestController : ApiController
{
    public HttpResponseMessage Get()
    {
        var resp = new HttpResponseMessage()
        {
            Content = new StringContent("GET: Test message")
        };
        resp.Headers.Add("X-Custom-Header", "hello");
        return resp;
    }
}
Passing Credentials in Cross-Origin Requests
Credentials require special handling in a CORS request. By default, the browser does not send any credentials with a cross-origin request. Credentials include cookies as well as HTTP authentication schemes. To send credentials with a cross-origin request, the client must set XMLHttpRequest.withCredentials to true.
Using XMLHttpRequest directly:
var xhr = new XMLHttpRequest();
xhr.open('get', '');
xhr.withCredentials = true;
In jQuery:
$.ajax({
    type: 'get',
    url: '',
    xhrFields: {
        withCredentials: true
    }
});
In addition, the server must allow the credentials. To allow cross-origin credentials in Web API, set the SupportsCredentials property to true on the [EnableCors] attribute:
[EnableCors(origins: "", headers: "*", methods: "*", SupportsCredentials = true)]
If this property is true, the HTTP response will include an Access-Control-Allow-Credentials header. This header tells the browser that the server allows credentials for a cross-origin request.
If the browser sends credentials, but the response does not include a valid Access-Control-Allow-Credentials header, the browser will not expose the response to the application, and the AJAX request fails.
Be very careful about setting SupportsCredentials to true, because it means a website at another domain can send a logged-in user's credentials to your Web API on the user's behalf, without the user being aware. The CORS spec also states that setting origins to "*" is invalid if SupportsCredentials is true.
Custom CORS Policy Providers
The [EnableCors] attribute implements the ICorsPolicyProvider interface. You can provide your own implementation by creating a class that derives from Attribute and implements ICorsPolicyProvider.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false)]
public class MyCorsPolicyAttribute : Attribute, ICorsPolicyProvider
{
    private CorsPolicy _policy;

    public MyCorsPolicyAttribute()
    {
        // Create a CORS policy.
        _policy = new CorsPolicy
        {
            AllowAnyMethod = true,
            AllowAnyHeader = true
        };

        // Add allowed origins.
        _policy.Origins.Add("");
        _policy.Origins.Add("");
    }

    public Task<CorsPolicy> GetCorsPolicyAsync(HttpRequestMessage request)
    {
        return Task.FromResult(_policy);
    }
}
Now you can apply the attribute any place that you would put [EnableCors].
[MyCorsPolicy]
public class TestController : ApiController
{
    // ...
}
For example, a custom CORS policy provider could read the settings from a configuration file.
As an alternative to using attributes, you can register an ICorsPolicyProviderFactory object that creates ICorsPolicyProvider objects.
public class CorsPolicyFactory : ICorsPolicyProviderFactory
{
    ICorsPolicyProvider _provider = new MyCorsPolicyProvider();

    public ICorsPolicyProvider GetCorsPolicyProvider(HttpRequestMessage request)
    {
        return _provider;
    }
}
To set the ICorsPolicyProviderFactory, call the SetCorsPolicyProviderFactory extension method at startup, as follows:
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.SetCorsPolicyProviderFactory(new CorsPolicyFactory());
        config.EnableCors();
        // ...
    }
}
Browser Support
The Web API CORS package is a server-side technology. The user's browser also needs to support CORS. Fortunately, the current versions of all major browsers include support for CORS.
Internet Explorer 8 and Internet Explorer 9 have partial support for CORS, using the legacy XDomainRequest object instead of XMLHttpRequest. For more information, see XDomainRequest - Restrictions, Limitations and Workarounds.
|
https://docs.microsoft.com/en-us/aspnet/web-api/overview/security/enabling-cross-origin-requests-in-web-api
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Time to try our app! First, in .env, change the database name to symfony3_tutorial, or whatever the database name was called when you first set up the project. Now when we run doctrine:migrations:status... yes! We have a full database!
Let's start the built-in web server:
./bin/console server:run
Surprise!
There are no commands defined in the "server" namespace.
Remember: with Flex, you opt in to features. Run:
composer require server
When it finishes, run:
./bin/console server:run
Interesting - it started on localhost:8001. Ah, that's because the old server is still running and hogging port 8000! And woh! It's super broken: we've removed a ton of files it was using. Hit Ctrl+C to stop the server. Ah! It's so broken it doesn't want to stop! It's taking over! Close that terminal!
Start the server again:
./bin/console server:run
It still starts on port 8001, but that's fine! Go back to your browser and load the site at localhost:8001. Ha! It works! Check it out: Symfony 4.0.1.
Surf around to see if everything works: go to /genus. Looks great! Now /admin/genus. Ah! Looks terrible!
To use the @Security tag, you need to use the Security component and the ExpressionLanguage component.
Hmm. Let's do some digging! Open src/AppBundle/Controller/Admin/GenusAdminController.php. Yep! Here is the @Security annotation from FrameworkExtraBundle. The string we're passing to it is an expression, so we need to install the ExpressionLanguage.
But wait! I have a better idea. Google for SensioFrameworkExtraBundle and find its GitHub page. Click on releases: the latest is 5.1.3. What version do we have? Open composer.json: woh! We're using version 3! Ancient!

Let's update this to ^5.0.
Then, run:
composer update sensio/framework-extra-bundle
to update just this library. Like with any major upgrade, look for a CHANGELOG to make sure there aren't any insane changes that will break your app.
So... why are we upgrading? So glad you asked: because the new version has a feature I really like! As soon as Composer finishes, go back to GenusAdminController. Instead of using @Security, use @IsGranted.

This is similar, but simpler. For the value, you only need to say: ROLE_MANAGE_GENUS.
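If you want to picture the swap, a rough sketch looks like this (assuming FrameworkExtraBundle 5.1+; your imports and base controller class will vary):

use Sensio\Bundle\FrameworkExtraBundle\Configuration\IsGranted;

/**
 * @IsGranted("ROLE_MANAGE_GENUS")
 */
class GenusAdminController extends Controller // hypothetical base class
{
    // ... actions stay the same
}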
Try it - refresh! Yes! We're sent to the login page - that's good! Sign in with password iliketurtles.
At this point... we're done! Unless... you want to move all of your classes from AppBundle directly into src/. I do! And it's much easier than you might think.
|
https://symfonycasts.com/screencast/symfony4-upgrade/server-isgranted
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Contents
In C++, there are a few ways how values that we would consider different compare equal. A short overview.
Here, with "compare equal" I mean that the expression a == b for two different values a and b would be true. And with "different" I mean that inspecting the values, e.g. with a debugger or by printing them on the console, would show a difference.
User-defined types
To be able to compare instances of classes and structs, we have to define the comparison operator ourselves. This, in turn, makes the topic of different values comparing equal rather boring. After all, we can just define the comparison operator to always return true for one of our classes.
Other user-defined types are enums. We cannot directly compare scoped enums (aka enum classes) of different types. If we compare enums of the same type, or different classic C enums, we get the result of comparing the underlying integral values. There is nothing exciting going on - unless we forget that consecutive enumerators are given increasing values by the compiler if we do not define them otherwise:
enum class E {
  FIRST,
  SECOND = -1,
  THIRD,
  FOURTH,
  //...
};

static_assert(E::FIRST == E::THIRD);
Here, FIRST gets automatically assigned the value 0, and, after we explicitly set SECOND to -1, THIRD is 0 again, FOURTH is 1, and so on. However, we just have two different names for the same value here, not different values. Inspecting two objects of type E with the values FIRST and THIRD would give us the exact same result, making them indistinguishable.
Built-in types
At first sight, we can say that comparing two objects of the same built-in type will be boring. They’d have to have the same value to compare equal, and only different values would not compare equal. Except that’s not true!
Different zeroes compare equal
When we deal with floating point types, we have exceptions to these rules. The C++ standard does not specify how floating point types are represented internally, but many platforms use IEEE 754 floating point representation.
In IEEE 754, there are two distinguishable values for zero: positive and negative zero. The bitwise representations are different, and we will see different values when debugging or printing them. However, the two compare equal. On the other hand, floating point types contain the value NaN (not a number). And when we compare a variable with such a value with itself, it does not compare equal.

#include <iostream>
#include <limits>

static_assert(-0.0 == 0.0);

constexpr double nan = std::numeric_limits<double>::quiet_NaN();
static_assert(nan != nan);

int main() {
  //prints "0 -0"
  std::cout << 0.0 << ' ' << -0.0 << '\n';
}
Different integral values that compare equal
You'll hopefully agree with me that a value of type unsigned int cannot be negative. If we have e.g. a variable u of type unsigned int, the comparison u >= 0 will always be true. Compilers may even warn about it, and optimizers may use it to optimize our code.
Nevertheless, there may be values for u such that u == -1 returns true. The reason is that we're comparing an unsigned int with an int here, and the compiler has to convert one to the other type. In this case, two's complement is used to convert the int to unsigned int, which will give the largest possible unsigned int:
static_assert(std::numeric_limits<unsigned int>::max() == -1);
Usually, this makes a lot of sense at the bit representation level: if the int is already represented as two's complement, with a leading sign bit, then these two values have the exact same bit representation. unsigned int has to use two's complement according to the standard. However, the bit representation for the int is implementation-defined and might be something different entirely.
Different pointer values that compare equal
Have a look at this piece of code:
struct A { unsigned int i = 1; };
struct B { unsigned int j = 2; };
struct C : A, B {};

constexpr static C c;
constexpr B const* pb = &c;
constexpr C const* pc = &c;

static_assert(pb == pc);
static_assert((void*)pb != (void*)pc);
The last two lines are interesting: when we directly compare pb and pc, they are equal. The constexpr and const keywords do not play any role in that; they are only needed to make the comparisons a constant expression for the static_assert. When we cast them to void* first, i.e. compare the exact memory locations they point to, they are not equal. The latter can also be shown by simply printing the pointers:
#include <iostream>

int main() {
  std::cout << pc << '\n' << pb << '\n';
}
The output will be something like this:
0x400d38
0x400d3c
So, what is going on here? The clue is that, again, we have two different types that can not be compared directly. Therefore, the compiler has to convert one into the other. Since C inherits from B, a C* is convertible to a B* (and C const* to B const*). We already used that fact when we initialized pb, so it is not a big surprise that they compare equal.
But why do they have different values? For this, we have to look at the memory layout of c. Since it inherits first from A, and then from B, the first bytes are needed to store the A subobject and its member i. The B subobject with its j member comes after that and therefore can not have the same actual address as c.
This is different if either A or B does not have any nonstatic data members. The compiler may optimize away empty base classes, and then pb, pc and a pointer to the A subobject of c would contain the same address.
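As a quick sketch of that empty-base case (assuming a typical implementation that applies the empty base class optimization):

#include <iostream>

struct A {};                      // no nonstatic data members
struct B { unsigned int j = 2; };
struct C : A, B {};

int main() {
  C c;
  B* pb = &c;
  // On typical implementations, both lines print the same address:
  std::cout << static_cast<void*>(pb) << '\n'
            << static_cast<void*>(&c) << '\n';
}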
2 Comments
Permalink
Nitpick, but I don't quite agree that 'u > 0' is always true for an unsigned variable 'u'. If you change it to 'u >= 0', then I agree that it must always be true.
Permalink
Not a nitpick, but a nicely spotted error, thanks! Fixed 🙂
|
https://arne-mertz.de/2018/09/when-different-values-compare-equal/
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
This is the seventh post in a multi-part series about how you can perform complex streaming analytics using Apache Spark and Structured Streaming.
Introduction
Most data streams, though continuous in flow, have discrete events within streams, each marked by a timestamp when an event transpired. As a consequence, this idea of “event-time” is central to how Structured Streaming APIs are fashioned for event-time processing—and the functionality they offer to process these discrete events.
Event-time basics and event-time processing are adequately covered in the Structured Streaming documentation and our anthology of technical assets on Structured Streaming, so for brevity we won't cover them here. Built on the concepts developed (and tested at scale) in event-time processing, such as sliding windows, tumbling windows, and watermarking, this blog will focus on two topics:
- How to handle duplicates in your event streams
- How to handle arbitrary or custom stateful processing
Dropping Duplicates
The API to instruct Structured Streaming to drop duplicates is as simple as all the other APIs we have shown so far in our blogs and documentation. Using the API, you can declare arbitrary columns on which to drop duplicates - for example, user_id and timestamp. An entry with the same timestamp and user_id is marked as a duplicate and dropped, but the same entry with two different timestamps is not.
Let’s see an example how we can use the simple API to drop duplicates.
import org.apache.spark.sql.functions.expr

withEventTime
  .withWatermark("event_time", "5 seconds")
  .dropDuplicates("User", "event_time")
  .groupBy("User")
  .count()
  .writeStream
  .queryName("deduplicated")
  .format("memory")
  .outputMode("complete")
  .start()
from pyspark.sql.functions import expr

withEventTime\
  .withWatermark("event_time", "5 seconds")\
  .dropDuplicates(["User", "event_time"])\
  .groupBy("User")\
  .count()\
  .writeStream\
  .queryName("pydeduplicated")\
  .format("memory")\
  .outputMode("complete")\
  .start()
Over the course of the query, if you were to issue a SQL query, you would get accurate results, with all duplicates dropped.
SELECT * FROM deduplicated

+----+-----+
|User|count|
+----+-----+
|   a| 8085|
|   b| 9123|
|   c| 7715|
|   g| 9167|
|   h| 7733|
|   e| 9891|
|   f| 9206|
|   d| 8124|
|   i| 9255|
+----+-----+
Next, we will expand on how to implement a customized stateful processing using two Structured Streaming APIs.
Working with Arbitrary or Custom Stateful Processing
Not all event-time based processing is equal or as simple as aggregating a specific data column within an event. Other events are more complex: they require processing rows of events ascribed to a group, and they only make sense when processed in their entirety, emitting either a single result or multiple rows of results, depending on your use cases.
Consider these use-cases where arbitrary or customized stateful processing become imperative:
1. We want to emit an alert based on a group or type of events if we observe that they exceed a threshold over time
2. We want to maintain user sessions, over definite or indefinite time and persist those sessions for post analysis.
All of the above scenarios require customized processing. Structured Streaming offers a pair of APIs to handle these cases: mapGroupsWithState and flatMapGroupsWithState. mapGroupsWithState can operate on groups and output only a single result row for each group, whereas flatMapGroupsWithState can emit a single row or multiple rows of results per group.
Timeouts and State
One thing to note is that because we manage the state of the group based on user-defined concepts, as expressed above for the use-cases, the semantics of watermark (expiring or discarding an event) may not always apply here. Instead, we have to specify an appropriate timeout ourselves. Timeout dictates how long we should wait before timing out some intermediate state.
Timeouts can either be based on processing time (GroupStateTimeout.ProcessingTimeTimeout) or event time (GroupStateTimeout.EventTimeTimeout). When using timeouts, you can check for a timeout first, before processing the values, by checking the flag state.hasTimedOut.
To set a processing-time timeout, use the GroupState.setTimeoutDuration(...) method. The timeout guarantee then works under the following conditions:

- Timeout will never occur before the clock has advanced by the X ms specified in the method
- Timeout will eventually occur when there is a trigger in the query, after X ms
To set an event-time timeout, use GroupState.setTimeoutTimestamp(...). Only for timeouts based on event time must you specify a watermark. As such, all events in the group older than the watermark will be filtered out, and the timeout will occur when the watermark has advanced beyond the set timestamp.
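To make this concrete, here is a rough sketch (not from the original post; it reuses the InputRow/UserState classes and updateUserStateWithEvent function defined in the example below) of how a processing-time timeout might be handled inside an update function:

import org.apache.spark.sql.streaming.GroupState

// Hypothetical update function wired for GroupStateTimeout.ProcessingTimeTimeout
def updateWithTimeout(
    user: String,
    inputs: Iterator[InputRow],
    state: GroupState[UserState]): UserState = {
  if (state.hasTimedOut) {
    val expired = state.get
    state.remove()                          // drop the expired state
    expired
  } else {
    var current =
      if (state.exists) state.get
      else UserState(user, "", new java.sql.Timestamp(0L), new java.sql.Timestamp(0L))
    for (input <- inputs) {
      current = updateUserStateWithEvent(current, input)
    }
    state.update(current)
    state.setTimeoutDuration("30 seconds")  // reset the timeout on every trigger
    current
  }
}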
When timeouts occur, your function supplied in the streaming query will be invoked with these arguments: the key by which you keep the state, an iterator of input rows, and the old state. The example with mapGroupsWithState below defines a number of functional classes and objects used.
Example with mapGroupsWithState
Let’s take a simple example where we want to find out when (timestamp) a user performed his or her first and last activity in a given dataset in a stream. In this case, we will group on (or map on) on a user key and activity key combination.
But first, mapGroupsWithState requires a number of functional classes and objects:
1. Three class definitions: an input definition, a state definition, and optionally an output definition.
2. An update function based on a key, an iterator of events, and a previous state.
3. A timeout parameter as described above.
So let’s define our input, output, and state data structure definitions.
case class InputRow(user:String, timestamp:java.sql.Timestamp, activity:String)

case class UserState(user:String,
  var activity:String,
  var start:java.sql.Timestamp,
  var end:java.sql.Timestamp)
Based on a given input row, we define our update function:

def updateUserStateWithEvent(state:UserState, input:InputRow):UserState = {
  // no timestamp, just ignore it
  if (Option(input.timestamp).isEmpty) {
    return state
  }
  // does the activity match for the input row?
  if (state.activity == input.activity) {
    if (input.timestamp.after(state.end)) {
      state.end = input.timestamp
    }
    if (input.timestamp.before(state.start)) {
      state.start = input.timestamp
    }
  } else {
    // some other activity
    if (input.timestamp.after(state.end)) {
      state.start = input.timestamp
      state.end = input.timestamp
      state.activity = input.activity
    }
  }
  // return the updated state
  state
}
And finally, we write our function that defines the way state is updated based on an epoch of rows.
import org.apache.spark.sql.streaming.{GroupStateTimeout, OutputMode, GroupState}

def updateAcrossEvents(user:String,
    inputs: Iterator[InputRow],
    oldState: GroupState[UserState]):UserState = {
  // we simply specify an old date that we can compare against and
  // immediately update based on the values in our data
  var state:UserState = if (oldState.exists) oldState.get
    else UserState(user,
      "",
      new java.sql.Timestamp(6284160000000L),
      new java.sql.Timestamp(6284160L))
  for (input <- inputs) {
    state = updateUserStateWithEvent(state, input)
    oldState.update(state)
  }
  state
}
With these pieces in place, we can now use them in our query. As discussed above, we have to specify our timeout so that the method can timeout a given group’s state and we can control what should be done with the state when no update is received after a timeout. For this illustration, we will maintain state indefinitely.
import org.apache.spark.sql.streaming.GroupStateTimeout

withEventTime
  .selectExpr("User as user",
    "cast(Creation_Time/1000000000 as timestamp) as timestamp",
    "gt as activity")
  .as[InputRow]
  // group the state by user key
  .groupByKey(_.user)
  .mapGroupsWithState(GroupStateTimeout.NoTimeout)(updateAcrossEvents)
  .writeStream
  .queryName("events_per_window")
  .format("memory")
  .outputMode("update")
  .start()
We can now query our results in the stream:
SELECT * FROM events_per_window order by user, start
And our sample result that shows user activity for the first and last time stamp:
+----+--------+--------------------+--------------------+
|user|activity|               start|                 end|
+----+--------+--------------------+--------------------+
|   a|    bike|2015-02-23 13:30:...|2015-02-23 14:06:...|
|   a|    bike|2015-02-23 13:30:...|2015-02-23 14:06:...|
...
|   b|    bike|2015-02-24 14:01:...|2015-02-24 14:38:...|
|   b|    bike|2015-02-24 14:01:...|2015-02-24 14:38:...|
|   c|    bike|2015-02-23 12:40:...|2015-02-23 13:15:...|
...
|   d|    bike|2015-02-24 13:07:...|2015-02-24 13:42:...|
+----+--------+--------------------+--------------------+
What’s Next
In this blog, we expanded on two additional functionalities and APIs for advanced streaming analytics. The first allows removing duplicates bounded by a watermark. With the second, you can implement customized stateful aggregations, beyond event-time basics and event-time processing.
Through an example using mapGroupsWithState APIs, we demonstrated how you can implement your customized stateful aggregation for events whose processing semantics can be defined not only by timeout but also by user semantics and business logic.
In our next blog in this series, we will explore advanced aspects of flatMapGroupsWithState use cases, as will be discussed at the Spark Summit EU in Dublin, in a deep dive session on Structured Streaming.
Over the course of Structured Streaming development and release since Apache Spark 2.0, we have compiled a comprehensive compendium of technical assets, including our Structured Series blogs. You can read the relevant assets here:
Try Apache Spark’s Structured Streaming latest APIs on Databricks’ Unified Analytics Platform.
|
https://databricks.com/blog/2017/10/17/arbitrary-stateful-processing-in-apache-sparks-structured-streaming.html
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Congratulations, newbie. You've made the first step towards the side of righteousness. Vim will guide you towards a place beyond your wildest dreams. Oh, the road shall not be easy. It may test your faith at times, but the rewards will be magnificent.
The cool thing about vim is that it was designed from the ground up to require the fewest keystrokes possible. Its philosophy is speed when typing and editing; any time you're required to move your fingers away from the home row, you're wasting time.
Unlike most other editors, vim has what are called modes. In particular, vim has 3 modes: visual mode, normal mode, and insert mode.
Normal mode is the default mode - it’s meant for fast navigation and large changes in many lines of text.
Insert mode is meant for? You guessed it, inserting text. This is what we think of when we consider most text editors.
Visual mode is what you do when you select text. Imagine you wanted to select a glob of text and replace it with a word. That’s a job for visual mode.
It seems needlessly complicated at first, but this is at the core of vim’s advantage.
There are several ways to open vim. vim by itself opens up a new buffer with nothing loaded. vim [filename] opens up a buffer with that file loaded. If that file doesn't exist, then it creates a new buffer named [filename].
Let’s go over normal mode basics:
Great, now we can do basic things in normal mode like navigate. How do we quit vim? To quit vim, you must be in normal mode. In normal mode, use :q. If your file has unsaved changes, it won't let you quit (vim doesn't want you to lose changes by accident - how nice!). If you really want to quit without saving changes, use :q! in normal mode.

But how do we save? :w. w for write. If we want to write and then immediately leave vim, we can chain together :w and :q with :wq.
How do we switch between modes?
The best part about vim is the many ways in which we can enter insert mode.
And there are many more!
Once in insert mode, you simply type like a normal text editor and it simply adds the text.
C-W deletes the previous word, just like on the terminal, when you’re already in insert mode.
No matter what mode you're in, pressing Esc will take you back to Normal mode. However, Esc is rather cumbersome, and that violates Vim's entire philosophy. Thus, C-[ (control-left bracket) is often much easier to do, and equivalent. For convenience, I would recommend remapping Caps Lock and Control.
There are 2 kinds of visual modes. To enter regular visual mode, just press v from normal mode, and begin moving around just as you would with regular normal mode commands. It will begin to select text as normal. Then, when you enter insert mode, your inserts will only apply to that particular block.
For example, if I have

def stuff():
    print "Omg. So many lines of code."
    print "Dude, seriously though."
    return 5
and I highlight the entire block with visual mode, when I press S, it will delete the entire block and place the cursor at the beginning of that block, and I will be in insert mode.
Another way to enter visual mode is using C-V. This is used for column-wise highlighting. The classic situation is commenting out or commenting in a block of code.
def doge():
    print "Such code."
    print "Much python."
    print "Wow."
Let's say I wanted to comment out the entire doge function. Obviously, I wouldn't shift to insert mode in front of each line, insert #, and then go back to normal mode, and do the same thing for the other lines. That sounds miserable.
Instead, I'll press 0 on the first line, taking me to the 1st character. Then, I'll press C-V (that's Control and V). Now I'm in column-wise visual mode. Then I press G, which takes me to the end of the file, highlighting everything in the first column along the way. Now, pressing I allows me to insert at the beginning of the line. I add a # and shift back to normal mode. So much easier!
This stuff takes a lot of time to learn, so if you’re feeling overwhelmed, don’t worry you’re not alone. Vim is no pushover. Keep using it, and anytime you find yourself doing a repetitive keystroke, search for how that could be faster.
Over time, you'll find yourself memorizing many shortcuts and learning new ones along the way (I still learn new ones every day). Did you know that pressing C-A in normal mode over a number automatically increments it? Just learned that one yesterday. Apparently it even works on dates! Vim is incredible, and you really never stop learning.
This goes back to that “Hacker Spirit” we spoke of last time. Never stop learning, never stop being curious.
Anyways, vim has many more complicated ways to manipulate text. You can often string commands together. For example, caw deletes the word you're currently on (in normal mode) and automatically shifts you to insert mode - an easy way to change a word to something else. daw, on the other hand, simply deletes it while keeping you in normal mode.
If you don’t know regular expressions, you really should consider learning them. The reason languages like Perl are considered the most powerful for text-editing is because of regular expressions, and tools like grep utilize them as well.
Vim is no exception.
Imagine you find yourself in a situation where you need to change every if statement in some section of code from if (a == 5) to if (b == 5). Would you want to do that by hand? What if you miss one single if statement? Ouch.

This is where regular expressions come in. In normal mode, this is a single line in vim: :%s/if (a ==/if (b ==/g. Can you believe such a small line can do something so powerful? Of course, there are much cooler things vim can do - this is simply to give you a taste.
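If you want to scope that power, the same :substitute command takes ranges and flags (standard vim; see :help :s for the full list). For example:

:10,20s/if (a ==/if (b ==/g

restricts the substitution to lines 10 through 20, and

:%s/if (a ==/if (b ==/gc

runs over the whole file but asks for confirmation before each change.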
What if you want to edit multiple files in Vim? Should we open multiple terminals and have each one have a single vim file? Of course not.
Vim has many solutions for this. Most novice users will use tabs. :tabnew in normal mode will create a new tab in vim. Close tabs like you would close vim, with :q or :wq. gt can be used to cycle through tabs.
Once you create a new tab, it will be empty at first. What if you wanted to load a file onto that tab? Use :e [filename] to open a particular file. You'll have to put its relative path from the directory in which you initially opened vim.
That's decent, but what if you wanted to have multiple files on the same screen? For example, say I wanted to have the file a.c in the top half of my screen and b.c in the bottom half. Of course, we can do that as well. I open the first file by vim a.c. Then, in normal mode, I can do :split, or :sp for short.

:sp creates a horizontal split where, if I supply no parameters, it uses the current buffer. So there will be 2 a.c's. Since I wanted b.c, I instead type :sp b.c, and voila! My screen is split.
But I have an extra long monitor, you say. I want to split vertically, not horizontally! Not to worry, that is what vertical splits are for! :vsplit, or :vsp for short, will do just what you desire.
You can navigate between windows in normal mode. If you want to move to the window above you, instead of k you would do C-w-k. Similarly, to go to the window below you, instead of j, use C-w-j. Easy enough!
But what if I want to view multiple files without using tabs or windows? What if I just wanted one buffer open, but at times I wanted to rapidly switch between files inside that one buffer?
Absolutely. Do note that there are plugins that make this significantly easier (like Unite and Control-P), but of course Vim has a native solution.
How do we even get many files in one buffer? Imagine you did vim file1.txt file2.txt file3.txt. All three of the files wouldn't be opened in tabs. Instead, there would be a single window with file1.txt showing. :buffers lists all buffers currently open - it would show file2.txt and file3.txt as well. :ls and :files also do the same. (Normal mode, remember!)

Switch to any buffer by doing :buffer <name>, where <name> is the name of the buffer.
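A few more stock buffer commands worth knowing (all plain vim, no plugins):

:bnext (or :bn) - go to the next buffer
:bprev (or :bp) - go to the previous buffer
:buffer 2 (or :b2) - jump to buffer number 2 from the :buffers list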
There are many many plugins out there. Vim has been around for longer than most of us have been alive. There are plugins to make installing other plugins easier.
Some are clearly better than others. There are autocomplete plugins (so vim can do things like eclipse and fill in words). There are many syntax highlighting options.
If you ever feel unsatisfied with vim, edit your ~/.vimrc file. There are tons of sample vimrc files out on the interwebs, and many people have extremely useful tips. Feel free to steal them and make them your own! I've posted my own vimrc (minus plugins) on Piazza.
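If you want a seed for your own, here's a tiny sketch of some common starter settings (all standard vim options; the particular values are just taste):

" ~/.vimrc - a minimal starting point
syntax on              " syntax highlighting
set number             " show line numbers
set tabstop=4          " a tab displays as 4 columns
set shiftwidth=4       " indent commands shift by 4 columns
set expandtab          " insert spaces instead of tab characters
set incsearch          " jump to matches while typing a search
set hlsearch           " highlight all search matches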
Several of the top plugins include:

NERDTree - a file-tree explorer (in my vimrc, nt automatically opens/closes NerdTree)
See Recitation 1. At the bottom, there’s a section entitled “The Hacking Spirit”. I’ve decided after much deliberation that although I could go into exactly how I installed this stuff (on a Mac), it would spoil your educational opportunity!
Much of learning is done through Googling, Stack Overflow-ing, and mucking around the terminal to see what works. So go, explore! And if you have any truly dastardly bugs, well then you know where to find me (Sudi 005)
:)
Seriously, these are must reads. You will learn much.
Buffers, windows, and tabs
Using Vim’s tabs like buffers
5 Plugins Some Dude Thought were Cool
Vim Bible Part 1
Vim Bible Part 2
The number 1 rated Vim Plugin in the World
A replacement for PowerLine and every other plugin
Vim Wiki Website
I'll add one last thing - whatever you're looking for, guaranteed there's a plugin out there somewhere. Just look for it first. Vim can do almost everything an IDE can do, but sometimes there is such a thing as too much. At one point I had vim doing literally everything Eclipse did - debugger and all - and vim was just as slow. There's a reason to use vim - it's fast. Don't lose sight of that amidst the sparkle of new plugins.
|
https://cs50.notablog.xyz/tips/Tips1.5.html
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Games on Facebook are hosted as a portal; the actual game content is hosted on your own web server. By configuring your Facebook Web Games URL, you can make your game available on Facebook.com, where it will appear within an iframe. On Facebook, you can make the most of App Center and game recommendations to provide discoverability for your content, and use the social features of the Facebook platform to make your game more social.
In your app settings, there's a field for Facebook Web Games URL. This field configures the iframe that loads when a player loads your game. This puts you in complete control of your game, and you're free to update versions and content at your own release cycle. See your App Settings here.
When a player on Facebook.com loads your game, Facebook will make an HTTP POST request to the Facebook Web Games URL provided under App Settings. The response to this request should be a full HTML response that contains your game client. You can use the Facebook SDK for JavaScript to authenticate users, interact with the frame, and access dialogs in-game, so be sure to include that in your game's HTML. See here for more information on Login for Games on Facebook.

The HTTP POST request made to your Facebook Web Games URL will contain additional parameters, including a signed_request parameter that contains the player's Facebook identity if they've granted basic permissions to your app. If a player is new, the signed_request parameter value will be useful to validate that this request did indeed come from Facebook. Read more about signed requests in the Login for Games on Facebook guide.
HTTPS is required when browsing Facebook.com, and this requirement also applies to game content. Therefore a valid SSL certificate is required when serving your game content. When you configure your web server to host your game, you'll need to make sure it's on a valid domain that you own, and that you have a valid SSL certificate for this domain.
It's possible to pass your own custom parameters to the game launch query. This is useful for tracking the performance of OG Stories, referral sites, or for tracking shared link performance.
There are two ways to accomplish this:

The URL for your Facebook game will always be https://apps.facebook.com/{namespace}/. When you provide promotion links, either from your App Page or other places on the internet, you can append query params here. For example: https://apps.facebook.com/{namespace}/?source=mysourceid

These query params will be preserved on game launch, and passed to your server in addition to the signed_request.

You can also share links that take players directly to portions of your game. If you are using PHP or have launch scripts, this can be helpful to start players into areas of the game outside of the standard flows. The full path will be preserved in the request to your server. For example, if you share a link to https://apps.facebook.com/{namespace}/special_launch.php
Facebook will make a request to https://{your_web_games_url}/special_launch.php when loading the iframe for your game.
When players launch your game on Facebook, a query parameter signed_request is added to the HTTP request to your server. This signed_request can be decoded to provide user information, plus a signature to verify the security and authenticity of this data. It is made of two parts joined by a period - a signature and a base64url-encoded payload - e.g., 238fsdfsd.oijdoifjsidf899.
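A minimal PHP sketch of the parsing step (this mirrors the common pattern for Facebook signed requests; the variable names and surrounding wiring are illustrative, and $app_secret is your app's secret):

<?php
// Split the signed_request into its signature and payload parts.
$signed_request = $_POST['signed_request'];
list($encoded_sig, $payload) = explode('.', $signed_request, 2);

// base64url-decode the payload and read it as JSON.
$data = json_decode(base64_decode(strtr($payload, '-_', '+/')), true);

// Verify the signature with your app secret before trusting $data.
$expected_sig = hash_hmac('sha256', $payload, $app_secret, true);
// ...compare $expected_sig against the base64url-decoded $encoded_sig.
?>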
If no user_id field is present, then the player has not given public_profile permissions to your game yet.
If you parse the signed_request and discover the player is new and has not granted basic permissions to your game, you can ask for these permissions on load, via the JavaScript SDK:

FB.login(function(response){
    // Handle the response
});
You can optionally ask for more permissions, such as email:

FB.login(function(response) {
    // Handle the response
}, {scope: 'email'});
Other important information in this payload will be age settings and locale preferences for the player. See Login for Games on Facebook for more information.
While you're developing your game, you'll probably want to host it on a web server running on your local machine to speed up your edit-compile-test cycle. The most common way to do that is to set up a server stack like XAMPP. You will also need to create and install a local SSL certificate so that this server supports HTTPS.
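One possible way to generate a self-signed certificate for local testing (standard OpenSSL; browsers will still warn until you trust the certificate manually):

openssl req -x509 -newkey rsa:2048 -keyout localhost-key.pem -out localhost-cert.pem -days 365 -nodes

You would then point your local server stack at the generated key and certificate files.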
Once you're ready to take your game live to the world, you'll have to arrange for hosting on a public-facing web server.
As your traffic grows, you may want to consider using a content delivery network (CDN) such as Akamai or CDNetworks to reduce your hosting costs and improve performance. A CDN works by caching your game's content at various locations on the internet. This means players will have game assets delivered to their client from a closer location. Your players get a quicker loading game, and your server is protected from excessive traffic.
|
https://developers.facebook.com/docs/games/gamesonfacebook/hosting/
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
- NAME
- USAGE
- DESCRIPTION
- ATTRIBUTES
- SEE ALSO
- BUGS and CONTRIBUTIONS
NAME
Paws::ApplicationAutoScaling::ScalableTarget

USAGE

This class represents arguments in a call to a service. As an example, if Att1 is expected to be a Paws::ApplicationAutoScaling::ScalableTarget object:
$service_obj->Method(Att1 => { CreationTime => $value, ..., ServiceNamespace => $value });
Results returned from an API call
Use accessors for each attribute. If Att1 is expected to be an Paws::ApplicationAutoScaling::ScalableTarget object:
$result = $service_obj->Method(...); $result->Att1->CreationTime
DESCRIPTION
Represents a scalable target.
ATTRIBUTES
REQUIRED CreationTime => Str
The Unix timestamp for when the scalable target was created.
REQUIRED MaxCapacity => Int
The maximum value to scale to in response to a scale out event.
REQUIRED MinCapacity => Int
The minimum value to scale to in response to a scale in event.
REQUIRED ResourceId => Str.
REQUIRED RoleARN => Str
The ARN of an IAM role that allows Application Auto Scaling to modify the scalable target on your behalf.
REQUIRED ScalableDimension => Str.
REQUIRED ServiceNamespace => Str
The namespace of the AWS service. For more information, see AWS Service Namespaces in the Amazon Web Services General Reference.
SEE ALSO
This class forms part of Paws, describing an object used in Paws::ApplicationAutoScaling
BUGS and CONTRIBUTIONS
The source code is located here:
Please report bugs to:
|
https://metacpan.org/pod/Paws::ApplicationAutoScaling::ScalableTarget
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Free rotating free flash banner xmlpekerja an inertial measurement unit (IMU) that outputs its orientation in q...
Need a PSD of home page converted to HTML (and JS): Navigation will fun...when it gets to the top: [login to view URL] Needs to work on mobile. There are 3 images above the "view all projects". This will be on a rotating canvas. The "featured in" section needs to be scrollable, as the number of items may increase.
1. Create SWF flash file 2. Convert into HTML 5 3. Design is the attachment
We are need Free International Calling App Example App: Voxofone FakeId We call
Hi there, I wish to have a banner designed for a media photo backdrop. I would like our logo and a text "Merry Christmas" on a image i will give. Size of banner: 10x10feet
Software de tv digital free tv libre online
.., im looking some one talented, who can setup 123flash chat and make android and iphone application for mobil chat. only experiment guy's please!
banner design ,editing,photo editing.
...add a unique movement of your parent-child based objects. [login to view URL] add both (g) mouse interaction and (h) keyboard interactions to transform hierarchy chained objects (e.g., rotating hierarchical objects) expressing different motion. [login to view URL] your (h) unique design approach/process and (i) research endeavor....
Amadeus XML API Development in PHP
import search filter large json file . search edit export data in many ways xls pdf etc.. will provide sample of small file and example data inside it once we discuss. winner of this project who place best bid . thanks]
Designer Office tables, Chairs, Coffee tables made out of laser cutting
I need 50 pages including android App Design(Graphic +XML) and Web(Graphic +Html5/css3)
...simulating up and down moment) -Steering wheel (Animation - Rotating) -Turning Front wheels(Animation - Right left) -Axle (Rotating and changing color) -The 3D model should be capable of being visualized from various angles(By click and drag of a cursor) -The car itself should be rotating on a plane graphical platform (Animation - Based on compass ([login to view URL]) - Looking for minimal and feminine design - See the attached pics for inspiration
|
https://www.my.freelancer.com/work/free-rotating-free-flash-banner-xml/
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
help with Identity.login() and acceptExternallyAuthenticatedPrincipalGerald Anderson Jun 23, 2009 6:53 PM
I've written an application that authenticates to a remote SSO source. I'm getting a good Principal back, but can't seem to set up the session correctly. The code:
package com.mycom.myapp.session.security;

import java.security.Principal;

import javax.faces.context.FacesContext;

import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.In;
import org.jboss.seam.annotations.Install;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;
import org.jboss.seam.annotations.Startup;
import org.jboss.seam.annotations.intercept.BypassInterceptors;
import org.jboss.seam.contexts.Contexts;
import org.jboss.seam.log.LogProvider;
import org.jboss.seam.log.Logging;
import org.jboss.seam.security.Credentials;
import org.jboss.seam.security.Identity;

import static org.jboss.seam.annotations.Install.APPLICATION;

@Name("org.jboss.seam.security.identity")
@Scope(ScopeType.SESSION)
@Install(precedence = APPLICATION)
@BypassInterceptors
@Startup
public class CustomIdentity extends Identity {

    private static final LogProvider log = Logging.getLogProvider(CustomIdentity.class);

    @Override
    public String login() {
        try {
            Principal principal = FacesContext.getCurrentInstance().getExternalContext().getUserPrincipal();
            if (principal != null) {
                acceptExternallyAuthenticatedPrincipal(principal);
            }
        } catch (Exception e) {
            log.error(e, e);
        }
        return "loggedIn";
    }

    @Override
    public boolean isLoggedIn() {
        if (!super.isLoggedIn()) login();
        return super.isLoggedIn();
    }
}
I AM getting the principal, and login() does set the session to logged in, but it's not pulling in roles, and the user doesn't seem to be getting into the session. Now, for the roles I figure I'll have to query for the users and add them to the identity manually - no big deal. However, I'm not sure what I need to do to ensure the session knows what user is actually logged in (Identity.getUsername() returns null, but Identity.getPrincipal().getName() is correct).
Could somebody please shed some light on what I'm missing here. I'm at the very end of this project and it's getting a little frustrating.
Thanks!
Gerald
1. Re: help with Identity.login() and acceptExternallyAuthenticatedPrincipalJean Luc Jun 23, 2009 9:53 PM (in response to Gerald Anderson)
You haven't posted what security configuration you have. Are you using the JpaSecurityStore? Have you specified which classes represent a user and its roles?
2. Re: help with Identity.login() and acceptExternallyAuthenticatedPrincipal - Gerald Anderson, Jun 23, 2009 10:12 PM (in response to Gerald Anderson)
Jean,
Well, the application was originally written to authenticate against its own database using the Seam 2.1.1 identity management. So yes, I have the following configured:
<security:identity-manager
<security:jpa-identity-store
I'm very confused right now as to if these will be used in the new system or not.
The details of the situation are that I'm now authenticating against Ja-sig CAS SSO and it places a Principal in session that I'm successfully able to retrieve.
I've tried creating an Authenticator.authenticatorMethod() and extending Identity.login() but having about the same results either way.
I would be VERY happy to be able to continue to use my JpaSecurityStore data and functionality, but how do I get from having a known-good Principal object with the correct username to completely authenticated and set up?
One of the main issues I'm seeing, for instance, is that I'm not populating the authenticatedUser session object. I look at the JpaIdentityStore.java and see that it does a LOT, but I look at the examples in ch 15 of the seam reference guide and am seeing a major mismatch.
I hope I'm making some sense, and I VERY much appreciate your response; I'm about at wit's end. It doesn't seem like I'm doing something that should be that difficult, so I have to wonder if I'm making it worse than it is.
If there is anything else that might help for me to provide, please let me know.
Thanks again!
Gerald
3. Re: help with Identity.login() and acceptExternallyAuthenticatedPrincipal - Gerald Anderson, Jun 23, 2009 10:28 PM (in response to Gerald Anderson)
Another thought in the merry-go-round of desperation ; )
Would an appropriate strategy be to extend JpaIdentityStore and override:
public boolean authenticate(String username, String password)
to basically just ignore the password? Or is that a direction that'd get me in trouble long-term?
Thanks!
Gerald
4. Re: help with Identity.login() and acceptExternallyAuthenticatedPrincipal - Jean Luc, Jun 24, 2009 3:56 AM (in response to Gerald Anderson)
First look inside Seam's source at what it does during authenticate() and, perhaps more importantly, after. Since it's your code that handles the authentication itself, what matters is not that particular process, but what Seam does with it afterwards - that is, what objects it sets into the session (such as what it puts into the identity object). Remember that at the end of the day Seam adds its own JAAS LoginModule, so look inside org.jboss.seam.security.jaas.SeamLoginModule for details (read up on the JAAS lifecycle first if you're not familiar with it). In particular, look at authenticate(). The part that populates the roles after a successful login is quoted below:
boolean success = identityManager.authenticate(username, identity.getCredentials().getPassword());
if (success) {
    for (String role : identityManager.getImpliedRoles(username)) {
        identity.addRole(role);
    }
}
Then look in JpaIdentityStore; it shows what it does after authentication. With these two pieces, and perhaps a little more debugging through code (generate a test app with seam-gen, set it to use a JPA store, and see what it does), you should unveil the mystery :) You may also find it useful to listen to the org.jboss.seam.security.management.userAuthenticated event - whether it's appropriate, you can tell better.
public boolean authenticate(String username, String password) {
    Object user = lookupUser(username);
    if (user == null || (userEnabledProperty.isSet() &&
            ((Boolean) userEnabledProperty.getValue(user) == false))) {
        return false;
    }
    String passwordHash = generatePasswordHash(password, getUserAccountSalt(user));
    boolean success = passwordHash.equals(userPasswordProperty.getValue(user));
    if (success && Events.exists()) {
        if (Contexts.isEventContextActive()) {
            Contexts.getEventContext().set(AUTHENTICATED_USER, user);
        }
        Events.instance().raiseEvent(EVENT_USER_AUTHENTICATED, user);
    }
    return success;
}
5. Re: help with Identity.login() and acceptExternallyAuthenticatedPrincipal - Gerald Anderson, Jun 24, 2009 5:40 PM (in response to Gerald Anderson)
Jean,
Thanks again, that's the path I started down last night. Here's my latest and greatest:
@Name("org.jboss.seam.security.identity") @Scope(ScopeType.SESSION) @Install(precedence = APPLICATION) @BypassInterceptors @Startup public class CustomIdentity extends Identity { public static final String AUTHENTICATED_USER = "org.jboss.seam.security.management.authenticatedUser"; public static final String EVENT_USER_AUTHENTICATED = "org.jboss.seam.security.management.userAuthenticated"; private static final String SILENT_LOGIN = "org.jboss.seam.security.silentLogin"; @In (create = true) Credentials credentials; private ValueExpression<EntityManager> entityManager; private static final LogProvider log = Logging.getLogProvider(CustomIdentity.class); public void initEntityManager() { if (entityManager == null) { entityManager = Expressions.instance().createValueExpression("#{entityManager}", EntityManager.class); } } public EntityManager lookupEntityManager() { return entityManager.getValue(); } @Override public String login() { initEntityManager(); System.out.println("Starting authentication"); if (super.isLoggedIn()) { // If authentication has already occurred during this request via a silent login, // and login() is explicitly called then we still want to raise the LOGIN_SUCCESSFUL event, // and then return. if (Contexts.isEventContextActive() && Contexts.getEventContext().isSet(SILENT_LOGIN)) { if (Events.exists()) Events.instance().raiseEvent(EVENT_LOGIN_SUCCESSFUL); return "loggedIn"; } if (Events.exists()) Events.instance().raiseEvent(EVENT_ALREADY_LOGGED_IN); return "loggedIn"; } preAuthenticate(); Principal casPrincipal = FacesContext.getCurrentInstance().getExternalContext().getUserPrincipal(); if (casPrincipal.getName() != null) { String username = casPrincipal.getName(); System.out.println("Found CAS principal for " + username + ": authenticated"); acceptExternallyAuthenticatedPrincipal(casPrincipal); UserAccount userAccount = (UserAccount) lookupEntityManager().createQuery("from UserAccount where username = :username") .setParameter("username", username) .getSingleResult(); System.out.println("userAccount for " + username + " loaded"); // Ok, we're authenticated from CAS, let's load up the roles for (String role : IdentityManager.instance().getImpliedRoles(username)) { System.out.println("Adding role \"" + role + "\" to " + username); addRole(role); } if (Events.exists()) { if (Contexts.isEventContextActive()) { Contexts.getEventContext().set(AUTHENTICATED_USER, userAccount); } Events.instance().raiseEvent(EVENT_USER_AUTHENTICATED, userAccount); } postAuthenticate(); return "loggedIn"; } return null; } @Override public boolean isLoggedIn() { if (!super.isLoggedIn()) { login(); } return super.isLoggedIn(); } }
It's not cleaned up or commented yet, but it IS working and seemingly 100%.
If you (or anybody else) wouldn't mind, look at it critically and tell me if you see something that may cause me problems down the road. If nobody sees a problem, I'll write up a quick tutorial for integrating Seam 2.1.x and Ja-SIG CAS 3, which is completely undocumented anywhere.
Thanks again!
Gerald
6. Re: help with Identity.login() and acceptExternallyAuthenticatedPrincipal - trind, Sep 4, 2009 3:45 PM (in response to Gerald Anderson)
Have you had time to write the tutorial? If so, where can I find it?
//Joachim
7. Re: help with Identity.login() and acceptExternallyAuthenticatedPrincipal - ingo bischofs, Sep 8, 2009 6:59 PM (in response to Gerald Anderson)
Hi there,
as we're struggling with Kerberos SSO and Seam too:
could you please send the link to that tutorial :)
...in case it already exists..
thanks and cheers,
ingo
8. Re: help with Identity.login() and acceptExternallyAuthenticatedPrincipal - Ramkumar Pillai, Dec 13, 2010 3:03 AM (in response to Gerald Anderson)
Duplicate component name:
org.jboss.seam.security.identity
What do I do for this issue? I did the same thing, but it doesn't work because of this error. How do we solve this issue?
thanks
|
https://developer.jboss.org/message/694058
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
This release also fixes a problem with hanging connections with the "Connection: close" header, and with HTTP/1.0 requests. During problem analysis, we've also improved the logging for disconnected HTTP clients, and HTTP requests now log the full request body to the debug log. Thanks for testing to our friends from T-Systems MMS :)
Community contributors have added even more: Thomas from our partner Würth Phoenix fixed a problem with API config packages and validation paths, Jordi went after dependencies which rescheduled parent checks too fast, Alan fixed logging with systemd and syslog, Michal added new ITL CheckCommands for ceph and cloudera health checks, and Peter silenced logging for missing environment macro values which are optional. Thanks a lot to every contributor for making this release great! :)
Setting aside the features and fixes, the configuration syntax highlighting wasn't up to date. I'm a heavy vim user, so I've updated the vim syntax highlighting quite a lot, including namespace support. You'll also notice that macro strings are now specifically highlighted. If you're a nano hero, please help us out and send a PR :)
Last but not least, Icinga and its internals are sometimes hard to understand, especially when you start looking into the code, or are trying to troubleshoot something. The analysis of the TLS timeouts, as well as the config compiler changes in recent weeks, are reflected in the "Technical Concepts" chapter in the docs. We hope it helps new and old developers, and those doing technical support.
Official packages are available on packages.icinga.com. Prior to upgrading please read the full changelog and the upgrading docs.
|
https://icinga.com/2018/10/11/icinga-2-10-released-namespaces-notifications-tls-performance/
|
CC-MAIN-2018-43
|
en
|
refinedweb
|
Construct an array from XOR of all elements of array except element at same index in C++
Suppose we have an array A[] with n positive elements. We have to create another array B, such that B[i] is the XOR of all elements of A[] except A[i]. So if A = [2, 1, 5, 9], then B = [13, 14, 10, 6].
To solve this, first find the XOR of all elements of A and store it in a variable x; then for each element A[i], compute B[i] = x XOR A[i]. This works because XORing x with A[i] cancels A[i]'s own contribution (a XOR a = 0), leaving the XOR of all the other elements.
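For example, with A = [2, 1, 5, 9]: x = 2 XOR 1 XOR 5 XOR 9 = 15, so B[0] = 15 XOR 2 = 13, B[1] = 15 XOR 1 = 14, B[2] = 15 XOR 5 = 10, and B[3] = 15 XOR 9 = 6, matching the expected output above.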
Example
#include <iostream>
using namespace std;

void findXOR(int A[], int n) {
    int x = 0;
    for (int i = 0; i < n; i++)
        x ^= A[i];
    for (int i = 0; i < n; i++)
        A[i] = x ^ A[i];
}

int main() {
    int A[] = {2, 1, 5, 9};
    int n = sizeof(A) / sizeof(A[0]);
    cout << "Actual elements: ";
    for (int i = 0; i < n; i++)
        cout << A[i] << " ";
    cout << endl;
    cout << "After XOR elements: ";
    findXOR(A, n);
    for (int i = 0; i < n; i++)
        cout << A[i] << " ";
}
Output
Actual elements: 2 1 5 9
After XOR elements: 13 14 10 6
|
https://www.tutorialspoint.com/construct-an-array-from-xor-of-all-elements-of-array-except-element-at-same-index-in-cplusplus
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Problem: You have some XML in a hard-to-read format in a Scala application, and want to print it in a format that’s easier to read, at least for humans.
Solution
Use the scala.xml.PrettyPrinter class. To see how it works, imagine starting with a long, continuous string of XML:
scala> val x = <pizza><topping>cheese</topping><topping>sausage</topping></pizza>
x: scala.xml.Elem = <pizza><topping>cheese</topping><topping>sausage</topping></pizza>
A quick look at the toString method shows that it prints the XML just as it was received:
scala> x.toString
res0: String = <pizza><topping>cheese</topping><topping>sausage</topping></pizza>
To print the XML in a more human-readable format, import the PrettyPrinter class, create a new instance of it, and format the XML as desired.
For instance, to improve the previous XML output, create a PrettyPrinter instance, setting the row width and indentation level as desired, in this case 80 and 4, respectively:
scala> val p = new scala.xml.PrettyPrinter(80, 4)
p: scala.xml.PrettyPrinter = scala.xml.PrettyPrinter@4a3a08ea
Formatting the XML literal returns a String, formatted as specified:
scala> p.format(x)
res1: String =
<pizza>
    <topping>cheese</topping>
    <topping>sausage</topping>
</pizza>
As you might guess, the PrettyPrinter constructor looks like this:
PrettyPrinter(width: Int, step: Int)
The width is the maximum width of any row, and step is the indentation for each level of the XML.
There are other formatting methods available that let you specify namespace information and a StringBuffer to append to. See the PrettyPrinter Scaladoc for more information.
See Also
- The PrettyPrinter class:
|
https://alvinalexander.com/scala/scala-xml-pretty-printing-human-readable-format/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
In this episode Dominic speaks with Jon about his experience transitioning to using a screen reader and learning to code without his vision. They discuss how some of the tooling works, things other developers can do to make their code more accessible for blind teammates,
Transcript
Hello, everybody, and welcome to Go Time! Today I’m joined by Dominic St-Pierre, a polyglot software engineer and a huge fan of Go and Elm. We might have to edit that last bit out; we can’t be advertising for Elm here… Sorry, Dominic. So how are you doing, Dominic?
I’m doing very good. Thanks for having me.
Thank you for joining us. So today we’re gonna be talking about using Go as a blind developer; I hope you’re ready to carry the show, because I know very little about this, and I’m here to learn, like everybody else…
Sure.
So I guess, to start off, for anybody who’s unfamiliar with the process, how do you actually code as a blind developer? What does that process look like?
Sure. Maybe before that, I’d like to specify - when you hear the word “blind”, that does not really mean someone that doesn’t have any sight at all. I’m in the category that I do have a little bit of vision. I like to point that out at first, because it’s very difficult sometimes to understand that there are multiple levels of blindness, if you will…
It’s a spectrum rather than an on/off type switch.
Yeah, exactly. It’s not a boolean, for sure. So how am I developing? Well, I was lucky enough, to be frank, to have enough vision long time ago. I have a degenerative visual disease, that is fairly common, and there’s not much escape for me… And sadly, in the last two years I’ve lost a lot of my central vision. We will come back to that later… But I just want to give a little bit of a background here, because I’m in a transition at this moment. So I’m kind of transiting from mostly a normal way of working - years from now I was using bigger text font, and whatnot, but not really a huge difference from a normal-sighted person, if I can say that. So these days I’m learning, in fact, to start working with more assistive tooling, like a screenreader, and it’s not an easy transition, if I can say that.
[04:26] So I’ve always been legally blind, but my vision is getting lower and lower as I go. So to answer the questions - well, I’m trying to work as much as I can as a normal programmer, I guess… But I started to feel huge roadblocks since three years ago, for example, when I started to lose my central vision, if you will. So hopefully I’ve not lost you already, but - yeah, it’s a tough one to answer first, I guess.
No, it completely makes sense, and I think it’s good for people to know that it’s not always black or white, and that some people are in the situation like you are, where they have to completely re-learn something they’ve done for years at a time, which in many ways can be possibly more challenging than being accustomed to that and then jumping into coding.
Developers struggle with changing editors, and things like that… So the fact that you’d have to completely change up your toolset is a huge change.
Absolutely. It starts at the OS level, so yeah, it is not an easy path at the moment for me, for sure… Especially when you are used to going fast. Small things – as we go along, small things can really slow down blind developers. And yeah, I'm learning as I go. It's difficult, because the tooling is, of course, not – there are so few of us that it's very hard to have stellar tooling at the moment.
I saw a talk – I think it was a Visual Studio talk. I think it was linked on Twitter whenever you said you would come join us… And just seeing that was kind of interesting, because as somebody who’s never even – unfortunately, I’ve never even really thought about how does somebody code if they can’t see… And if I was developing software, it wouldn’t really be the first thing I’d think about, just because it’s not something you experience from day to day. But then seeing somebody do it, it’s kind of eye-opening, because you’re like “Oh, there’s a lot of things I could be doing better in - whether it’s a website, or software, or whatever else.” And you just don’t think about it because it’s just not your day-to-day type activity. So I imagine, like you said, it would be hard, because that might not be the first priority when they’re releasing software.
Yes, exactly. We're almost just getting started making the websites we create as accessible as we can – it's not automatic, even when creating websites… So yeah, for sure, when you are writing code, it's certainly not the first thing that comes up. But I would guess that when you have a blind developer on your team, then it starts to make sense to do the small things.
I would say the most obvious would be function names. Being as explicit as possible with function names is extremely helpful for blind developers. Everything that relates to moving, navigating the code… So as much as other developers can help, this is a huge difference.
So I guess this is up to you… Where would you like to start? Do you wanna talk about some of the tools you’re using, or where would you like to start with the conversation?
Yeah, I can talk about the tool. It’s fairly simple for me, like I was saying… I’m in a transition, so not using a screen reader to using a screen reader - it’s the hardest thing I’ve done in my life, and I’m not there yet.
The video that you were referring to - take a completely blind developer, for example, which were like that since their birth, for example… Compared to me, they are able to capture or understand the screen reader in a speed that I just cannot – I’m not there yet. I don’t even know how they are doing that. I’m blind since birth as well, but I always had a little visual – so yeah, it’s hard.
[08:15] So what I’m doing at this moment is that I force myself to close my screen, at the moment; so I need to close my screen, and I need to try to learn to use that. It is difficult. I’m a Linux user, I’ve been using Linux day to day since 2014… The tooling that I use there - it is extremely performant for me… Until three years ago. So I’m using i3 as the time manager. I’m not using my mouse, ever. All my windows are always maximized, and I’m using the virtual desktop of i3… So that was very good.
This way of working will not work for me in the next year or two years. I’m trying to switch to a Gnome-based desktop, because the only screen reader on Linux is Orca actually, and… Yeah, I’m not sure if it’s going to work. To me, this is another very, very difficult change, accepting to leave my very comfortable zone… The Xrender on Linux - you can just reverse all colors, so this is extremely useful when you have a little bit of vision; sometimes a white background can be extremely difficult on your eyes.
So that’s the tooling I’m using. Basically, I’m transitioning to being full-time using a screen reader… But yeah, this is extremely challenging.
Which editor are you using then?
VS Code.
Okay. So do you imagine there’ll be a day where possibly you’ll have to switch to a different operating system? I personally don’t know how good the tooling is in Linux, versus Windows, versus Mac. I don’t really know. But I would imagine that, like you were saying, transitioning just to Gnome is already a challenge. I could potentially see a case where if some operating system just supports things better, you’re stuck switching to something that’s completely foreign in that sense.
Yeah. That, and – I mean, I frankly don't want to use Windows, and I would not really want to use a Mac… But yeah, I do have a Windows machine which I'm using to train myself on a screen reader. I'm using NVDA; this is a screen reader on Windows. This is working very well. So it's not the accessibility tooling that I have a problem with, but switching OSes. But yeah, VoiceOver on Mac seems to be very nice as well.
This is something I really have to take a decision quick. I was even starting to think “Well, I should maybe try to contribute to Orca on Linux and try to make it better, try to make it so I can continue to work on Linux”, because I would be very sad, frankly, to leave. But it’s a possibility. It’s been like that for all my life. I stop doing things that I love to do because I don’t see enough anymore. So that’s part of my life, it’s the way it is.
Yeah, I can definitely imagine that’s a tough thing to both accept and experience the transition of.
So when it comes to programming then, you said that you’re a polyglot software engineer, so I guess my next question is “Are you trying to stick with certain languages as you’re learning these tools, or are you trying to learn techniques that apply to everything?” What is that process like?
So I’m mostly doing Go for the last 6-7 years. I’m not sure how to answer that… I’m doing consulting, so my work requires me to work on lots of different languages, and stacks, and whatnot. This might be something.. For example, doing frontend at this moment - I’m not sure I will be able to do that anymore, just because it’s very hard to build a beautiful frontend application when you don’t really see the end result, and whatnot.
[12:02] So I’m transitioning towards a backend language that will allow me to make sure that once I do run full-time on the screen reader - which is very soon for me - I will not have an issue. So I have tried lots of different backend languages in the last ten years. I have a lot to say about lots of them, because it’s very different from a blind developer, and it’s just those small things that makes a language more usable on the screen reader or not.
One thing I’d like to add is that I think what might happen is that you might have a different opinion as to what makes a beautiful UI in the future… Which is going to be very different from how other people see it, but it’s also good to have a different perspective sometimes. And there might be a time where you could be the specialist in helping people make it actually accessible and a great blind experience, versus - you know, everybody’s always focusing on things looking pretty, versus being functional.
Yeah. The good old time when everything was text-based. That is the world. This is where we should go. [laughs]
It kind of reminds me of – I don't know if you've looked at Remix, something kind of new in the React world… And one of the big things that they pitch is that essentially all the JavaScript stuff has kind of led to web pages that break traditional functionality, and that was one of their big goals in creating it - they want to allow people to make incremental changes, to take something that's basically just a regular HTML page and incrementally improve it without ruining the experience… And they gave some examples of it, but it's cool to see people focusing on that idea of "Let's not ruin the user experience that needs to be there, for some reason", and in some ways that has happened out of this desire to make everything – I don't even know the right word for it… sort of like those real-time, snappy JavaScript pages.
So you said that you have a lot of opinions about the languages and the backend languages… So you’re using Go; from what I gather, you like Go as a language… So can you talk a little bit more about what makes Go an accessible language for you?
Yeah. So for me, the reason number one would be the way packages are separated. Just by forcing the usage of the package name before a function - this is extremely easier for a blind developer.
Like I was saying earlier, navigation is the enemy here, and knowing very quickly where this function is declared or implemented - this is huge. Take, for example – I was a C# developer in a previous life, so my career started in .NET… And you can import a namespace in there, and you just use functions… We don’t really have an easy way to mouse over something and just see what the namespace of that thing is… So just having that clearly stated in Go - this is extremely useful.
I take it that means that’s a good reason not to use – is it the period import in Go?
Yeah, exactly.
So it’s just one more reason not to use that…? I know it’s not really encouraged in Go anyway. I think the only place I’ve actually seen it is in testing frameworks maybe… But it is good to know that’s one more reason why it’s not necessarily a good thing to be using.
Yes. Well, for a blind developer at least…
I would almost argue for every developer… [laughter]
Well, yeah, exactly… But you know… So yeah, one other thing - of course, GoDoc. On the terminal, GoDoc - just having to look at what a function wants in its parameters, and what it is returning… It's strange, because I can compare both worlds now… When you see your screen very well, then yes, VS Code or any editor will provide you a visual indication that this function accepts a string, for example. But sometimes the screen readers are not picking those up, or they are not speaking the return type, ever. It's very hard.
[16:06] So this is a good thing… GoDoc is extremely useful. You just go to your terminal, and your screen reader will have no issues whatsoever reading everything that the developer wrote about what that function wants you to do. So this is major.
That brings me to a downside as well, while we are here… Small, one-letter variable names in Go - they are hard for a screen reader as well, and for the blind developer in general; especially when your speech rate is very fast, you miss those. We always use s for a string or v for an interface and whatnot in Go, and those are difficult.
I can’t relate exactly, but I listen to audiobooks on 2x speed, and that’s something that took me a while to get to that speed. And it depends on the narrator and a bunch of other aspects. But over several years of listening to audiobooks, I just gradually slowly increased the speed. But I can definitely say there are still times where certain words of phrases, for whatever reason, I have to go back and slow it down, because I can listen to it four times at 2x speed and for whatever reason that sentence I cannot understand. But every other thing in the book is completely fine.
So I could imagine there’s certain variables and things like that that when they’re thrown in there - it’s almost like they’re too short, or something. They just get skimmed over, and it’s really hard to comprehend them at that time.
Yeah, absolutely. That’s a downside. But yeah, it’s all the tooling, of course. So the fact that everything is very easy on the command line makes it very nice. One major aspect I would say as well is - you know when you try to go build something… Let’s say you have 500 errors. It will not spit out all of those errors. This is major, because you have to understand that a screen reader is a one-line thing, so it’s very hard to navigate as well on the terminal. I don’t know if it blocks at ten, or something like that, but yeah, this is something that is appreciated… So just showing less errors at the same time is helpful.
How does that work for things like tests, if you’re running tests and several of them are failing? Or you know how sometimes you will just get like a pass, which is nice, and then other times you’ll get a lot more output, it feels like.
Yeah… So I would say that most often I'm using the -run flag to run a subset of tests at once. Especially when something is wrong, multiple outputs are an issue, and yes, this is a problem. But sometimes you don't have a choice; so you can output that into a text file and try to make sense of it in a more comfortable way. So that's different. This is where blind people lose precious minutes compared to a sighted developer, for sure.
Do you think there’s anything that could be improved in that sense? I guess in my mind I’m wondering if there’s a way to either summarize, or… Like, you almost summarize and say like “Eight tests failed…” You know what I mean? Something along those lines.
Yeah, exactly… And maybe cherry-pick exactly what information you want to have; this would be great. So it’s a tricky question, but…
I understand… Even thinking about it now - it’s not something I’m experiencing, but at the same time, I could easily see how complicated and challenging that would be… And I’m sure I’m missing a lot of nuance that you’re experiencing that I’m not.
Yeah, it’s hard for me to answer as well, because – at the same time, the Go team cannot do too much changes to accommodate. So that would need to be third-party, or whatnot. At that time, it’s a question of preferences.
It’s interesting, because talking about this makes me feel like there’s probably a world of tools that could be built around making that experience better… But it unfortunately probably needs somebody who is experiencing those pains to actually understand and know what to build. Call me crazy, but I suspect that most developers aren’t willing to learn how to use a screen reader simply to experience that. Like, you’d be very dedicated to do that.
Yeah, I would guess not, because frankly, it is not fun. But yeah, this is something I see myself doing in the future. I always have a couple of open source projects myself here and there… I will not have any choice but to build what I will need to continue working. I love to program; I will not stop doing that. So that’s why I was also saying that if I have to, I will contribute to Orca and try to make it work in a Linux distribution.
So are there other things in Go that make – or really in any language that you’ve seen… What types of things should developers be looking out for if they’re designing a language, or working on a programming language? What types of things have you seen so far that make it more challenging?
I can tell that – let’s pick on Elixir a little bit here. I love it, by the way; I’m a huge fan. But there are some things there that it’s very hard for a blind person. The symbols. Let’s talk about symbols. This is why Go is also very great, because you don’t really have much symbols.
You know, other than the channel, I cannot really think about anything else. Even the generics that are coming - they still use square brackets, if I'm not mistaken. But symbols like, let's say, equals, or greater-than; and lots of – I was talking about Elixir… So you have like – not a label, but an atom, or whatever they are calling that, in their maps…
Is that the thing like in Ruby, where you put a colon before?
Yeah. Oh, yeah.
Okay.
And also that, and there are two or three ways to do that. So this is extremely difficult, because – yeah, those small subtleties, for a person that sees well - those are huge. The symbols are very hard. That's also why in Python, due to the spaces that delimit the blocks, this is extremely hard, even with the text editor doing a nice job. But yeah, symbols are difficult to work with at a reasonable speed.
Related to that - in Go, the fact that upper-case letters and lower-case letters actually have significance… Is that something that’s proved to be challenging? Or how do you approach that?
Well, for me at least, because I do have my screen reader telling that to me, so that's great… I prefer that to having, let's say, a modifier like private/public/protected, or sealed, or whatever the flavor.
[24:00] I would say as well that not being an object-oriented language is also helpful. And let me explain, because it's a huge claim, probably… This hierarchy of objects in C# and Java, that we don't really have, unless you do composition in Go - we are returning to navigation as well. The structure of all those objects is kind of difficult to navigate, to be frank… to understand what is going on in that. Because it's very quick for you to switch files.
Let’s say you’re moving from one package to another, and you’re returning, and visually, you are quickly, rapidly re-understanding if that’s a word where you were when you left… And this is not really easy with a screen reader. You always need to re-check your surrounding. Again, a screen reader is a one-line thing, so let’s say your cursor is on line 13… You just have the context of that line.
Let’s say you have return empty string on Go. So where are you? What’s that function where I was? So having structure, and not having objection, in writtens, in my opinion, it’s helpful.
That makes sense. I completely get what you’re saying, where like if you jump from line 13 to line 50, visually I think we kind of just take for granted the fact that you see the function definition above it a couple of lines… But like you said, if it’s a screen reader that’s just reading the line you’re on, it’s not gonna give you that context, and it wouldn’t know to do it, so it’s kind of like a – at least I suspect, you’d have to spend a little bit of time figuring out “What function am I in? What is going on here?”
Yeah, absolutely. This might be a VS Code extension that I will probably want to write myself. Just a keystroke, and – it probably exists; I honestly did not check. But yeah, it should speak out which function you are in. That would be helpful, for sure.
I’m assuming that there’s something to collapse functions… I would almost think that would be helpful in the sense that it knows where that function is starting, if that makes sense… So it should have the context to sort of figure out “This is what this is.”
Yeah, I think so. I hope so. But yeah, I will check.
Another question I have is with features like generics coming out, which arguably are going to make the language more confusing, at least if you’re looking at code with generics in it. Is that something that concerns you? And I say this as somebody who – when I’m reading generic code, it is not always clear to me. I have to take – most Go code, I can just skim over and be like “Okay, cool. I know what this is doing.” But generics, I have to take a double-take and be like, “Now, what’s that type again?” And it takes a little bit more. And I can only imagine in your case, having that read out loud would be – like, there’s just a lot to consume in one line of basically saying the type, and that it’s this type… Does that make sense? Is that something that concerns you, or are you hopeful that you’ll just be consuming generics, rather than writing code with it?
It’s a small concern, I would say. I’ve looked at generics in Go, and they appear to be digestible. But I frankly haven’t tried them with a screen reader yet, so I’m not really sure what… But yeah, it seems to be – I’m also used to generics, so it does not appear to be that hard, from a screen reader point of view.
As I say, if you’re using a lot of languages, that probably helps. I think one of the things that forces me to do a double-take on generics is that I haven’t used them in quite a while… Like, I’ve used them a lot in Java, because you pretty much have to in Java, if you like… But since then, I have not really touched them, and it’s been a while since I’ve used Java, so it’s one of those things where I’m hoping that familiarity and seeing them more frequently makes it easier to read them and comprehend them… Otherwise it’s gonna slow me down some, too… Which I think is an okay trade-off for some of the stuff, but hopefully they don’t get used everywhere.
Yeah, one aspect is that it brings that one-letter word, if you will… I would expect lots of people would probably use T… And the fact that it's capitalized as well… So the screen reader will say "Capital T" each time. But I prefer that. I'm grateful that it is capitalized as well. So that's something that I think will – if my memory is correct, they are capitalized, so…
I think most people capitalize them, in the examples I’ve seen.
Okay.
I can’t imagine it’s required, but it just seems like one of those things that just carried over from other languages.
Yeah, I don’t know; I have a doubt now. I remember having seen an LM for a slice and I think it was lower-case. That will be a challenge, for sure. Maybe I will change my point of view; it will probably be difficult.
You’ve got me curious now, because I haven’t – I think there was a proposed package for Go maybe 1.18, maybe 1.19… I don’t remember which one it was being proposed for, but it was a package with slice operations…
…that was meant to go in the standard library. But I didn’t actually look at the code too much to see what all was there, and what the code looked like… So I’ll have to check that out at some point.
I think the elem was lower-case, but I could be wrong.
So my next question is more about other people you’re working with. You’d mentioned the single-letter variables… Are there other things that developers do that make your life better or worse? Or I guess some of that times it might be yourself in the past… Have you found yourself looking at code you wrote in the past, and being angry at yourself?
Yeah, of course, but… [laughs] I think commenting is underrated, probably. A good comment is still very helpful, and we tend as developers (I think) to not really comment… Especially in Go, because it's so verbose, it's so clear what it is doing… But a comment line explaining what it is doing can save two minutes for blind developers, because now you don't have to scroll down a five-line for loop that is re-sorting, or whatever it is doing. So yeah, commenting is extremely helpful for us.
Nothing really – well, pair programming. This is a complete topic… It's super-hard for a blind person to follow someone that sees well, of course. When you're driving, it's often very difficult, because the other party does not really understand why it takes so long at this place - it's just a line, or whatever the reason… So yeah, pair programming is very difficult.
It almost feels like if you had the audio one on your end – like, if I was pair programming with you, and you had the audio one… I’m guessing that’s not normally the case, because normally, the audio on your computer doesn’t get pushed through video, or anything like that… But if it was there, I suspect it would be useful in the sense that it would help open up people’s eyes as to what you’re experiencing, and then I think it’d be a lot easier to be understanding and empathetic about it… But I agree with you that before then, you might be sitting there like “Why is this person sitting on this line? I don’t know what’s going on.” And meanwhile, you’re trying to listen to the screen reader to understand what’s going on on this line, and that’s very different from very different from visually looking at it.
Yeah. Maybe one tip, if you ever do anything, even if it’s not pair programming, but just talking about code with a blind person - just say the line number. Don’t say “Find this function.” No, no, no. Just say “It’s in the main.go at line 150.” That is how you indicate to a blind person where to go exactly, very fast.
The worst part is, as developers we know how valuable that is when we're looking at broken tests, or compiler errors… But I agree with you that it's not something we generally think to say while we're communicating verbally with somebody, despite the information being readily available…
Oh, yeah. I’m still doing consulting in .NET, C#, so it’s okay-ish… But yeah, there’s so many Windows dialogues and whatnot that come with .NET… It’s not really the language per se, it’s more like the framework, or Visual Studio in itself.
I know there’s Visual Studio Speak, which I have not tested yet. It seems to be like a screen reader only built by the Visual Studio team, which is very nice. It’s a great initiative. I did not have the time to test it.
So yeah, Elm is great, because – well, the compiler. The compiler is just your co-pilot, really. And I’m not talking about GitHub Copilot here, which is not good to – anyways, that’s another story… But yeah, the Elm compiler is great, because first of all, when you are on your website, your web app, whatever, you just have one error at once on the web UI. On the terminal, it’s a little bit different. They are showing lots of errors. But yeah, on the web page it’s pretty clear.
I’m trying to think about the tooling… So when you are creating a CLI, for example - this might not be for blind users, but let’s say for me five years ago, using colors. It should be optional. I was a very low visual person five years ago, and any green, any light colors were very hard for me to see. So yeah, that’s a small thing for –
But yeah, it’s difficult. I don’t know how to answer that, frankly. The command line is really helpful, and – well, it might be a preference thing as well, so I cannot speak for everyone, obviously…
I talked with – when I was starting to really lose lots of vision two years ago, I talked with a PHP programmer that worked at Booking.com, actually, who is completely blind… And he was trying to convince me to switch to Emacs and whatnot. But it's – to take Darth Vader's words, "It's too late for me." I cannot do that.
So I’m not sure at this point if I’m having my baggage of 20 years of development that is difficult to change… That might be why I’m saying that C# is harder than Go, for example, as a blind developer. But yeah, I feel that the object-oriented languages feel a little bit harder to navigate, for sure. And all those keywords that you need to predict and seal and whatnot. So all the visibility for a class or a function - it really adds lots of noise, in all senses of the word; screen reader and code-wise, I think. Those are very hard, because you have to rely on having to return to the function, trying to see what it was exactly, what’s going on in here.
You mentioned earlier when we were talking that it’s a slower process… So it’s not one where you can – basically, it just takes more time to go through stuff and to write certain code. So with that in mind, do you find yourself taking more time and thinking things through a little bit more ahead of time, or do you find yourself planning what you’re going to do a little bit more than what you used to?
Yeah, a little bit. The biggest difference is that I’m not reusing the IntelliSense or code suggestions as much now. Those are not very hard to use, but not as easy as when you are seeing exactly what you are doing.
Let’s say you’re writing a web handler in Go, for example. You type http.request, and usually I was doing Tab very fast five years ago, because I was directly on the right thing. Now I need to wait a little bit, listen to what the screen reader is saying, and “Oh, what? What did it say?” So you need to jump one element above to try to see “Well, am I on the right thing?” So you have to re-check almost everything that you are typing. This is the flow.
Let’s continue with this web handler… So once you do
r http.request for example, you just return one word and you make sure “Well, did I type ‘request’ or not?” So those small things, compared to – you know, it takes like 1-2 seconds to write this, normally… So it’s those things.
So yeah, I think test-driven development will finally be something I should start to do, because I really see the value of having the compiler really being my sighted friend, if you will. This is where I will be certain that I'm not doing newbie mistakes – not newbie mistakes, but blind mistakes… I mean, mistakes that are easy to miss when you are not seeing. Because it's very hard.
I think it’s worth saying, I guess, that what is easy for one person is not necessarily easy for somebody else… Because I’m sitting here in my head, thinking like, I have typos, or I swap two letters all the time… And if we couldn’t see it visually, it’s very hard to check that real quickly. And I imagine that if I was closing my eyes and typing, I’d have to be a little bit slower in typing and a little bit more certain that I was typing it correctly… Versus when you can see, you can just kind of throw some code there real quickly, and if it’s wrong, you’ll see it and you would just quickly take the suggestion it gives you and move on.
[40:26] Yeah, and often your code editor will put a small red line under something that's not right. We don't have that with a screen reader; there's no notifier for that. So yeah, it's a challenge. I would say to anyone that wants to really understand a little bit what it is - close your eyes, and don't cheat, and write an email, for example. You will see that it is extremely difficult.
But the tooling is great, because a screen reader - you can read the previous word, the previous sentence, the line above, the line on top, and whatnot, without the cursor having to change the line. I don’t know if I’m clear here, but… You can move a virtual cursor, if you will, on the screen, without you really moving the editing cursor. So that takes a little bit of time to get used to, because - yeah, it’s strange.
And it’s also just learning a whole new set of keyboard – like you said, you don’t like to use the mouse, and I’m assuming using the mouse is even harder if you can’t see… Or if not impossible. I would imagine at that point it’s learning a whole new set of keyboards to go along with all the other ones that you already use.
Oh, yeah. And there’s a lot. And we haven’t talked about the web. Navigating a web page is probably harder than navigating code… Because in VS Code you do Ctrl+P and you go to your Go file, and whatnot.
Let me explain what my problem is visually… So I always had only 2% of a field of vision. Let's say a standard person has a 150-degree field of vision… I had 2% of that. When I looked at my screen, I was seeing only two letters at a time. So I'm kind of used a little bit to just seeing a small, tiny portion of my screen. So I haven't really been using a mouse for a long, long time, I would say. It's a difficult transition, but it's not completely different for me… But yeah, it's still a huge change.
So when you’re actually visiting websites and going through there, what does that process look like, I guess? Because I have gone to a website, and kind of – like, you can hit Tab and kind of go through links, and there’s some things that I know that I can do if I just don’t have my mouse with me… But I can’t imagine what the process is like. Are you using like Pg. Down or something like that to read through it, or what does that look like?
There’s tools in the browser – well, not in the browser, but in the screen reader, that allow you to navigate quickly by H tag. So H1, H2, H3. So that’s the first thing. Then you kind of use your arrow key to continue in that paragraph. Let’s say you are the right H1, for example, or H2. There’s also a shortcut to quickly have all the links, very quickly, all the links in the page that you have let’s say in the lizbox or I don’t know what it is; the alias stuff, or something. So you can scroll that very quickly, and just click any links
Let’s say you are on a phone and someone says “Go click on that thing.” It’s like what we were saying for pair programming; navigating is really what makes things difficult.
I used Links for a long time when I was younger, to navigate websites. Before the JavaScript single-page apps, that was very easy for me at that time. But it's not possible anymore. There's too much JavaScript everywhere, and those tools are not capable of interpreting it.
So that’s a little complaint I have… The new doc site for Go, the go.dev - I think before it was GoDoc.org… It was just easier before.
No, no, absolutely.
And now they know that maybe it’s something they can put more attention into, like “How can we make this more accessible?”
Sure.
So we are getting near the end of the hour… Do you have an unpopular opinion you’d like to share?
Alright, Dominic… It’s time for you to share your unpopular opinion. Then we’ll post it on Twitter, people will vote, and as Mat says, if it’s not unpopular, you have to come back on the show. That’s your punishment.
Yeah, I hope I will not have a problem for that, but… To me, the educational system is killing the creativity of our children. Period.
Do you wanna elaborate?
Well, I’m talking more about my region here… So I’m not talking about the entirety of the world. We are living in Quebec, Canada, and in here - well, there’s not enough money that the government is injecting in the education system… And yeah, we did homeschool all our lives with our children, because we were feeling that they were going to not be – you know, the enthusiastic idea of children are not very pushed forward in the current system, in my opinion.
Do you feel like it was that way when you were in school, or do you think it’s more of a recent change?
That’s a good question… I know I hated going there. I was not having fun at all, and to be frank, I’m a huge entrepreneur as well… So yeah - to me, having the liberty of expression and whatnot is very important… And to be frank, being stuck eight hours a day, or six hours a day, in a class, especially here in Quebec.. So there are some schools that don’t even have windows. The quality of the air in the schools is in question at the moment, in Quebec. We don’t have a great educational system in here. That’s sad, and I’m sad for the teachers as well, because it’s not them, it’s the government; it’s all the… Yeah. So I don’t know how it is in the U.S, but in here it’s hard.
I was gonna say, in the U.S. at least, I can say that most teachers don’t get into it – basically, they love teaching kids and helping them grow, and they don’t get into it for the money, because there’s not any money in teaching… So like you said, I think most teachers have the right incentives most of the time, but unfortunately, they’re kind of limited with resources.
Our daughter is not in school yet, so I haven’t experienced it first-hand… But it is something that concerns me, and something that I talk with my wife about, is like - this homeschooling makes sense part of the time.
And it’s also hard for me, because personally, when I was in school, I feel like I got lucky. Our school had a gifted program, and I somehow got into it in the first grade… And basically, that was really rare. And for the most part - I’ve learned this later. Most of the kids who got into the gifted program were teachers or children. It was rarely other kids in the school. So it opened the door and allowed me to experiment with other stuff. Because basically, one day every two weeks I would go to a completely separate class, with other kids who were supposed to be in this gifted program… And we’d do things like logic puzzles; that’s where I was first introduced to programming with BASIC. And a bunch of things that really shaped my life.
But there’s probably a bunch of kids who deserved to be in a program like that, who would have thrived, or something like that, but they just didn’t get the chance… And seeing that now, I’m like “Well, that really sucks.”
[48:02] Yeah, exactly. I hear you. This is the same for sports, after-school sport - it should be open to everyone. It’s sad to see kids that don’t really have the money to do that, cannot do that because of the money aspect. So the government should inject way, way more money into the education system.
In the U.S. at least, after-school sports tend – well, I guess it depends on the sport. There are some sports that are harder… But things like soccer and football and a lot of those things - usually, the school, at least where I live, the schools do fundraiser type things to provide… Like, essentially, a kid can go play football without any money whatsoever. The football team and other things will do fundraisers to try to help produce funds for that sort of stuff.
But I’ve also heard – not my local school, but there’s another school in the area that has a mountain-biking team; they do mountain-bike racing… And it’s not officially a state-sponsored sport; it’s like a third-party, some other affiliate that’s doing it… And I don’t know how familiar you are with biking, but generally, mountain-biking is not a cheap sport to get into, by any stretch; so as a result, it’s unfortunately a sport that it’s mostly kids who have money who can do it, and it’s hard for other kids to get into it.
I know there’s some people in the area who are trying to donate bikes and things like that to make it more accessible for them, but it is a trickier thing for them, where it’s trying to figure out “How do we get people into this?” Because cycling is one of those things that’s really healthy for you, and it’s something you don’t need a whole team – like, soccer, football, you need a whole team to go do, whereas cycling is something somebody can do the rest of their life to stay in shape, which is better for everybody…
It is a tough problem, I think.
That could be a huge difference for a certain child, to have access to that or not. That could be the difference that they need to just continue forward, instead of quitting school at some point.
So we have somebody in the GoTime.fm chat saying that they think this is going to be a popular opinion. You aren't gonna be unpopular enough. You'll have to join us again.
Nice. So yeah, hopefully next time it will be for my Go knowledge, instead of my physical condition.
Well, we can definitely talk about some Go knowledge at some point. I didn’t mean to seem like the only reason we wanted you on here was because of the blind aspect…
Oh, no, no, no.
It was more of one of those that I have never talked with somebody who codes in Go who’s blind, and in my mind, it’s a great opportunity to learn something that otherwise we would have never thought about or heard… And that’s also why I think everybody listening would enjoy this episode, is it’s not – it’s something that I think people want to be more knowledgeable about, so they can try to be a little bit more helpful, if they can… But at the same time, it’s really hard to know what is even useful or not. Like, you saying short variable names is not something I would have thought about until you said it. But now that you say it, I’m like “Oh, I guess that kind of makes sense. I didn’t think about that.”
Yeah, sure. Underscores are to be banned as well. [laughs]
Yeah, the underscore is a tricky one. I think the only time I typically tend to see it in my own code is whenever I’m temporarily just taking a variable and making it so it’s not giving me a compiler error, or when I’m basically importing a SQL driver of some sort. Those seem to be the two cases.
Yeah, but I’m not talking about the underscore to ignore an error, or something like that. I’m talking in the function name, or in the variable name.
Oh. I’m trying to think… I’ve seen them in some tests before, but I don’t remember why, off the top of my head.
Same here. Maybe it’s easier to see there, because test names - that’s another thing. A test name tends to be extremely long… There’s a right amount of length to have. If it’s too long, it’s very hard to read as well… So yeah.
You don’t like a 200-character function name?
No, not really… [laughs]
I can understand that. Alright, Dominic, thank you for joining us. It was great to hear your perspective and learn more about this. Is there anything else you’d like to share before we wrap up the episode?
No, that’s fine. Thank you very much for having me. This was great, and I hope to return at some point.
Definitely. Thank you, everybody, for listening.
Hi,
I’m working on a JUCE app which is using the Tracktion engine. I’d like to have a plugin version of the app so I can load it in DAWs as well. I’ve been reading this post Using the Tracktion engine within a plugin? and I actually implemented a plugin version of the app, which seems to work. Running the Tracktion engine inside a plugin still seems to have some limitations (see the post), so I’d like to keep both the normal standalone version and the plugin version. However, I did not find a proper way to structure the code so that I can re-use it cleanly. I need some advice on how to do that.
What I tried so far:
- I have the code in two separate project folders, at /root/project/ and /root/projectPlugin.
- I guess I should have a common code folder at /root/common_src/ and import files from there. However, the files I’d put in /root/common_src/ need to include the JUCE header (normally #include "../JuceLibraryCode/JuceHeader.h"), and I can’t use that kind of relative include because the generated header lives in a different place for each of the two projects (see the sketch after this list).
- If I keep the code in one project, at /root/project/Source/, and import those source files into the other project (by dragging the folder into the Projucer), then the files in /root/project/Source/ pick up the header of the normal project instead of the plugin project.
- I’d also like the binary (asset) files to stay in sync between the two projects; I have no idea how to go about that.
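For illustration, here is a minimal sketch of the angle-bracket include approach that is often suggested for shared sources - each Projucer project adds its own JuceLibraryCode folder to its header search paths, so the same shared file resolves to the right generated header in both projects. This is an assumption about a possible layout, not something from the original post; AudioEngine is a hypothetical shared class.

// common_src/AudioEngine.h -- hypothetical file shared by both projects
#pragma once

// Angle-bracket include: resolved through each project's header search
// paths, so the standalone project and the plugin project each pull in
// their own generated JuceHeader.h from their own JuceLibraryCode folder.
// (In the Projucer, add the project's JuceLibraryCode directory to
// "Header Search Paths" for every exporter.)
#include <JuceHeader.h>

class AudioEngine
{
public:
    void prepare (double sampleRate)   { currentSampleRate = sampleRate; }
    double getSampleRate() const       { return currentSampleRate; }

private:
    double currentSampleRate = 44100.0;
};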
Maybe there is a way to have a target for the plugin version inside the normal app project. That would be the best case, but I don’t know how to do it (and the combined standalone-plugin project type is not suitable, because - remember - I want to keep a normal version of the app, since running Tracktion inside a plugin seems to have some limitations).
Maybe a solution would be to write a script that generates a Projucer project for the plugin version on the fly, copies the source files from the original app into some temp folder, and compiles it. How would I deal with the binary files in that case? If that’s possible, that would also be a solution for me.
Thanks in advance for your feedback!
Base class for forward Fast Fourier Transform.
#include <sitkForwardFFTImageFilter.h>
The full forward FFT implemented here works only for real, single-component input image types.
The output generated from a ForwardFFTImageFilter is in the dual space or frequency domain. Refer to FrequencyFFTLayoutImageRegionConstIteratorWithIndex for a description of the layout of frequencies generated after a forward FFT. Also see ITKImageFrequency for a set of filters requiring input images in the frequency domain.
Definition at line 50 of file sitkForwardFFTImageFilter.h.
Member documentation (condensed from the class reference):
- Setup for member function dispatching. Definition at line 80 of file sitkForwardFFTImageFilter.h.
- Define the pixel types supported by this filter. Definition at line 62 of file sitkForwardFFTImageFilter.h.
- Definition at line 52 of file sitkForwardFFTImageFilter.h.
- Destructor.
- Default constructor that takes no arguments and initializes default parameters.
- Execute the filter on the input image.
- Name of this class. Implements itk::simple::ProcessObject. Definition at line 66 of file sitkForwardFFTImageFilter.h.
- Print ourselves out. Reimplemented from itk::simple::ProcessObject. Definition at line 84 of file sitkForwardFFTImageFilter.h.
- Definition at line 86 of file sitkForwardFFTImageFilter.h.
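For orientation, a minimal usage sketch through the SimpleITK C++ API (the input file name is a placeholder; as noted above, the filter expects a real, single-component image):

#include <SimpleITK.h>
#include <iostream>

namespace sitk = itk::simple;

int main()
{
    // Read a real, single-component image; sitkFloat32 keeps it real-valued.
    sitk::Image input = sitk::ReadImage("input.nii.gz", sitk::sitkFloat32);

    // Forward FFT: the output lives in the frequency domain and is
    // complex-valued (one complex component per pixel).
    sitk::ForwardFFTImageFilter fft;
    sitk::Image frequencyImage = fft.Execute(input);

    std::cout << frequencyImage.GetPixelIDTypeAsString() << std::endl;
    return 0;
}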
DateTimeFormatInfo.AbbreviatedDayNames Property
The following example creates a read/write CultureInfo object that represents the English (United States) culture and assigns abbreviated day names to its AbbreviatedDayNames property. It then uses the "ddd" format specifier in a custom date and time format string to display the string representation of dates for one week beginning May 28, 2014.
using System;
using System.Globalization;

public class Example
{
   public static void Main()
   {
      CultureInfo ci = CultureInfo.CreateSpecificCulture("en-US");
      DateTimeFormatInfo dtfi = ci.DateTimeFormat;
      dtfi.AbbreviatedDayNames = new String[] { "Su", "M", "Tu", "W", "Th", "F", "Sa" };
      DateTime dat = new DateTime(2014, 5, 28);
      for (int ctr = 0; ctr <= 6; ctr++)
      {
         String output = String.Format(ci, "{0:ddd MMM dd, yyyy}", dat.AddDays(ctr));
         Console.WriteLine(output);
      }
   }
}
// The example displays the following output:
//    W May 28, 2014
//    Th May 29, 2014
//    F May 30, 2014
//    Sa May 31, 2014
//    Su Jun 01, 2014
//    M Jun 02, 2014
//    Tu Jun 03, 2014
When setting this property, the array must be one-dimensional and must have exactly seven elements. The first element (the element at index zero) represents the first day of the week in the calendar defined by the Calendar property.
If a custom format string includes the "ddd" format specifier, the DateTime.ToString or ToString method includes the appropriate member of the AbbreviatedDayNames array in place of the "ddd" in the result string.
This property is affected if the value of the Calendar property changes. If the selected Calendar does not support abbreviated day names, the array contains the full day names.
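To make the seven-element requirement concrete, here is a small sketch (an illustration, not from the original page) of the ArgumentException raised when the assigned array has the wrong length:

using System;
using System.Globalization;

public class LengthCheck
{
   public static void Main()
   {
      // A freshly constructed CultureInfo is writable.
      DateTimeFormatInfo dtfi = new CultureInfo("en-US").DateTimeFormat;

      try
      {
         // Only two elements instead of the required seven.
         dtfi.AbbreviatedDayNames = new String[] { "Su", "M" };
      }
      catch (ArgumentException e)
      {
         Console.WriteLine("Rejected: " + e.Message);
      }
   }
}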