
Running TypeScript inside PowerShell with PowerShellJS

April 17th, 2014 by Karl

In my previous post I introduced PowerShellJS, which lets you run JavaScript within PowerShell. Now that TypeScript 1.0 has dropped, I've added initial support for compiling and running TypeScript inside JavaScript engine instances from PowerShell with PowerShellJS.

So let's look at an example to see it in action:

ipmo PowerShellJS
Invoke-TypeScript -script "1+1"

Here we simply load PowerShellJS and then pass in an expression. Of course the above example could be lying, since "1+1" is valid JavaScript as well, but it's not lying, as the next example shows.

#Create a Javascript Engine session we can reuse for multiple commands.
$null = New-JSSession -Name test
Invoke-TypeScript -name test -script "var adder = x => x * x" -NoResults
Invoke-JS -name test -Script "adder(5)"

Here we create a session and pass in some code that is NOT valid JavaScript but definitely TypeScript, and it works. Also note that the second call uses Invoke-JS; within a session you can mix and match Invoke-JS and Invoke-TypeScript as you like. In the second call there was no need to incur the TypeScript compilation overhead, since it was just plain JavaScript.

There is much more to come, such as handling TypeScript compilation errors, doing compile-only and parsing, dealing with AMD modules, and the whole enchilada.

Posted in Powershell, PowerShellJS, TypeScript | No Comments »

Running Javascript inside PowerShell with PowerShellJS

April 17th, 2014 by Karl

Some time ago I started a project to host a JavaScript engine within PowerShell. I initially called it PowerChakra because, well, the engine was IE's Chakra engine. However, I've renamed it PowerShellJS, and I have semi-plans to also support the V8 engine that Node.js uses. I just added initial support for TypeScript to PowerShellJS now that TypeScript 1.0 has landed.

So what can this do?

  • Create MULTIPLE JavaScript instances and run things in them, interacting with them via a *Session* metaphor like PSSession.
  • Invoke JavaScript with or without returning results.
  • Get base types back as their equivalent PowerShell/.NET types.
  • Get JSON results (which can easily be turned into PSObjects with ConvertFrom-Json).

Why on earth would I want to do this?

  • To experiment with JavaScript and/or TypeScript.
  • Do some things in JavaScript because it's just faster than PowerShell (burn).
  • PowerShell is a "glue" language, and this provides glue to what has become one of the most popular programming languages in the world, allowing you to reuse a lot of code and algorithms written in JavaScript.
  • Some of my other secret sauce I'm not telling you about.

Where can I get this?

  • On Github at https://github.com/klumsy/powershelljs
  • In the future I'll put the package on Chocolatey and MS's new OneGet.

So how do I use it?

Once the module is loaded you can simply run:

Invoke-JS "function y(x) { return x + x};y(20)"

But that will create an instance of the engine, run the code, return the results and get rid of the engine. Often you want to create a JavaScript engine, keep it around for a while, and do a bunch of things in it.

Import-Module PowerShellJS 
New-JSSession -Name test
#invoke an expression, in a session, and DON'T RETURN RESULTS
Invoke-JS -Name test -Script "var x = 5; function add(a,b){return a+b}" -NoResults
#reuse the session, calling the function previously defined (and the session variable x) AND return results.
Invoke-JS -Name test -Script "add(x,10)"

And sometimes you'll be dealing with complex JavaScript objects, which won't translate automatically into a base .NET object, so you can convert them to a JSON string and then, if you desire, convert that to a PowerShell object.

 
#create a nested JavaScript object
Invoke-JS -Name test -Script "var ourobj = {name : 'PowerChakra', numbers : [1,2,3] , something: { x:1}  }" -NoResults
#get the object as JSON, then convert it to a PS object
$objectasJSON = Invoke-JS -Name test -Script "JSON.stringify(ourobj)"
$objectasJSON
$objectasPSobj = ConvertFrom-Json $objectasJSON
$objectasPSobj | fl

In a future release I plan to add something like Get-JSVariable -AsJSONString and Get-JSVariable -AsPSObject.

In the next blog post, I cover Running TypeScript in PowerShell with PowerShellJS

Posted in Powershell, PowerShellJS, TypeScript | No Comments »

Hooking C++ Method Calls.

April 12th, 2014 by Karl

This is something I wrote up for a forum post many years ago, but I thought I'd post it here to keep it around. It was written against quite an old version of Visual C++, so it's quite likely the assembly isn't compatible with the output of the latest version of MSVC++. It will also be different for 64-bit projects.
———————
I've been working lately on different algorithms for calling C++ methods using pointers and all manner of things, dealing with both normal methods and virtual methods. I just want to post some of my examples here and see if anybody likes them. They are inline assembler examples that can easily be modified to call in-memory C++ objects from ASM code in the same process, and maybe people can also point out errors in my logic or understanding. Note that these are MSVC++ specific, as other compilers probably implement things differently, and I haven't tested or made allowances for classes that use multiple inheritance.

The first issue is getting a pointer to the method. The following inline assembly will do it for both virtual and non-virtual methods, however it won't work if you only have an empty "shell interface" definition of the class (i.e. virtual BOOL Shutdown() = 0;):

_asm
{
mov eax,Test::DoIt;
}

If you just have a 'shell interface', though, that won't compile, and maybe you would rather get that address from C/C++ anyway, or maybe you know it's a virtual method and you want to get it from a pointer to an instance of the class. Well, first let's try getting it with C++ code. The first thing you need to do is declare a method pointer:
//here is what the method actually looks like for reference

virtual long DoIt(long x);
...
// and now we need to declare a pointer to a method
long (Test::*pfn2)(long x); //method pointer
// and now set the method pointer.
pfn2 = &Test::DoIt;
// now do the impossible, cast a method pointer as a function pointer
void * ptr = *((void **)&pfn2 );

Now we have a pointer to a method, but we can't just use it as a function pointer because of the difference in calling convention. By the way, if the method pointer points to a non-virtual method, the address is the actual address of the method; however, if it's a virtual method, it's a pointer to a stub that looks at the instance and calls an entry in the virtual table (calling either this address or the actual address will both work). That stub always (as far as I've seen) looks like this:

//00401EF0 8B 01 mov eax,dword ptr [ecx]
//00401EF2 FF 60 04 jmp dword ptr [eax+4]

The only difference each time is the +4, i.e. the 5th byte, which is a byte offset into the virtual table. So I have this routine here that can be called on the address we got from the method pointer to see whether it points to a virtual or non-virtual method:

static BOOL ismethodptrvirtual(void * methodptr)
{
BOOL returnval;
_asm
{
mov eax,methodptr;
mov ebx,[eax]
mov eax,TRUE
cmp ebx,60FF018Bh
je skip
mov eax,FALSE
skip:
mov returnval,eax
}
return returnval;
}

The next thing is that maybe we already know the virtual table index. It is quite easy to calculate by looking at the class definition, since MSVC++ puts everything in the order it was declared; alternatively you can do myobj->doit(1); in a test app and look at the assembly MSVC++ produces (or debug the app to see it). Also, the offsets are DWORD aligned, so the first method would be at offset 4, the 2nd at offset 8, etc. Anyhow, I made this function to go into the virtual table and get the address of a virtual method based on the instance and offset:

static PVOID virtualaddress(void* thisptr,int methodoffset)
{
PVOID returnval;
_asm
{
mov eax,thisptr;
mov eax,[eax] ; //point to the start of virtualtable
mov ebx,methodoffset;
mov eax,[eax+ebx*4]
mov returnval,eax
}
return returnval;
}

Now that we can get the address of a method in many different ways, how about calling it? Non-static C++ methods use the THISCALL calling convention, which is basically STDCALL (the called method cleans up the stack) with the hidden 'this' instance pointer also being passed; with MSVC++ the 'this' pointer is passed in the ECX register, and all the parameters, as with STDCALL, are pushed onto the stack in right-to-left order (though since the stack grows downwards, that is basically left-to-right order in memory). So if you want to call this particular method manually you could do the following. The return value is normally returned in EAX, but 8-byte values are returned in EDX:EAX and floats are returned in ST0, so those would have to be handled separately and are not covered in this article.

// to do the equivalent of the following
// int mine = obj1.doIt(43);
// where you already have the pointer to doIt in PVOID doitptr;
_asm
{
push 43
mov ecx,obj1
call [doitptr]
mov mine,eax
}

However, we want a more generic technique for calling methods, so this is the solution. First, this is how it is called:

//here is the definition of the method used in this example
virtual BOOL Startup(HWND hPrimaryWnd, DWORD modeFlags);
// and I know that it is the first method in the virtual table, and we have an instance of this class
// called m_pEngine
//here is the structure containing the parameters to pass to the function
struct
{
HWND wndhandle;
DWORD c;
} mystruct = {tmphwnd,VJEMODE_PRESENTATION};
//use the virtualaddress method we have already covered to get the address from the instance.
address = (DWORD) thiscall::virtualaddress(m_pEngine,1);
//use my callmethod, passing in the instance of the object, the address, the structure containing
//the arguments for the function, and the size of the arguments (structure)
thiscall::callmethod(m_pEngine,address,(const void*)&mystruct,sizeof(mystruct));

And that is all. Now here is the actual code for callmethod. Basically we copy the structure onto the stack (as if we had pushed the parameters backwards), put the instance of the object into ECX, call the address, and return whatever comes back in EAX:

static DWORD callmethod(void* thisptr, DWORD address,const void* arguments,size_t argsize)
{
DWORD returnval;
_asm
{
mov ecx, argsize // get size of arguments for the function
sub esp, ecx // adjust the stack pointer to give room to copy these arguments there
shr ecx, 2 // divide by 4 (because we'll copy DWORDS over at a time)
mov esi, arguments // get the pointer to the start of the arguments buffer (Source)
mov edi, esp // start of destination stack frame (destination)
rep movsd // copy arguments to the stack frame
mov ecx, thisptr // THISCALL passes "this" in ecx
call [address] // call the function
mov returnval, eax // return value returns in eax, so we better save it
}
return returnval;
}

Also, if we are just calling a virtual method and we know the offset, I combined my virtualaddress method with the one above to produce this:

static DWORD callvirtualmethod(void *thisptr,int methodoffset,const void* arguments,size_t argsize)
{
DWORD returnval;
_asm
{
mov ecx, argsize // get size of arguments for the function
sub esp, ecx // adjust the stack pointer to give room to copy these arguments there
shr ecx,2 // divide by 4 (because we'll copy DWORDS over at a time)
mov esi, arguments // get the pointer to the start of the arguments buffer (Source)
mov edi, esp // start of destination stack frame (destination)
rep movsd // copy arguments to the stack frame
mov eax, thisptr // get the "this" pointer
mov eax, [eax] // point to the start of virtualtable
mov ebx, methodoffset // offset into the virtualtable
mov ecx, thisptr // THISCALL passes "this" in ecx
call [eax+ebx*4] // call the function (the address of virtualtable+offset*4)
mov returnval, eax // return value returns in eax, so we better save it
}
return returnval;
}

which enables you to do the following:

//instead of the following used in the last example
address = (DWORD) thiscall::virtualaddress(m_pEngine,1);
thiscall::callmethod(m_pEngine,address,(const void*)&mystruct,sizeof(mystruct));
//you can just do this directly
thiscall::callvirtualmethod(m_pEngine,1,(const void*)&mystruct,sizeof(mystruct));

This is really useful, but often you might get the pointer directly using other means, so "callmethod" itself is still useful. There is one more technique I have built so far, and for HOOKing methods it is absolutely essential. Early in the article I showed getting an address from a method pointer. I also pointed out that if that method pointer points to a virtual method, it doesn't point directly to it, but rather to some stub code that looks up the actual address in the virtual table based on an offset specified in the stub code, as below:

//00401EF0 8B 01 mov eax,dword ptr [ecx]
//00401EF2 FF 60 04 jmp dword ptr [eax+4]

For hooking purposes there aren't enough bytes there (6) for me to install a safe hook, and also, if I hooked this stub, it would only hook calls to the method made through a method pointer rather than any other way. So I made a function that checks the address: if the first DWORD is 60FF018Bh as above, it knows it is this stub, grabs the 5th byte as the offset, and then manually looks up the vtable itself to get the actual address of the function. That code is below (it has a lot of comments in this one):

static PVOID dereferencemethodptr(void* thisptr,void * methodptr)
{
PVOID returnval;
//there may be issues if the method pointer is more than 4 bytes (with multiple inheritance it might be different)
//if a method pointer points to a nonstatic (but not virtual) function, then it points directly
//to that function
//if it points to a nonstatic VIRTUAL function, then it points to a stub that C++ creates that looks
//like the following
//00401EF0 8B 01 mov eax,dword ptr [ecx]
//00401EF2 FF 60 04 jmp dword ptr [eax+4]
//as far as I've seen, regardless of compiler optimisation options,
//it seems to be exactly the same as above, other than the last byte (the +4), which is actually
//the information we need to look up the function in the vtable, so we can read it directly from this code.
//so first we check whether the first 4 bytes (read as a little-endian DWORD, that is 60FF018Bh) match,
//and if so grab the byte at address + 4; otherwise just treat the address as what it is.
_asm
{
mov eax,methodptr;
mov ebx,[eax]
cmp ebx,60FF018Bh
jne skip
//if its a virtual method
xor ebx,ebx
mov bl,byte ptr [eax + 4]; //grab the 5th byte, which is the +4 (or +whatever) byte offset into the virtual table
mov eax,thisptr; //ptr to the instance of the class
mov eax,[eax] ;//start of virtual table
mov eax,[eax+ebx];//get the address of the method from the virtual table using that offset
skip:
mov returnval,eax
}
return returnval;
}

Posted in Powershell | No Comments »

Experimenting with SQLlike and PowerShell Like syntax in CoffeeScript. Part 2

April 12th, 2014 by Karl

In my previous post I gave a little background on my experimentation with "webshell" sort of things and gave an example in CoffeeScript of trying to do SQL-like syntax. This example is similar, but it is an attempt at building a pipeline.

The thing to note is that when each of those functions is called, it actually returns a closure with the parameters bound, rather than executing immediately, so the pipeline runner can run them in sequence at its leisure.

In this example I create a function to generate some data, one to filter it, another to modify each item along the way, and a final one to output the data, as PowerShell's Out-* cmdlets do. (PowerShell ALWAYS uses an Out-* cmdlet, even if you don't know it; at the command line, PowerShell appends Out-Default.)

pipeline = (v, fs...) -> v = f.call(v) for f in fs; v
filter = (p) -> -> vs = @; v for v in vs when p.call v
addPrePost = (pre, post) -> -> vs = @; "#{pre}#{v}#{post}" for v in vs
outalert  =  -> vs = @; alert v for v in vs

and to run it.

pipeline [1..10],
   filter -> @ > 5
   addPrePost "pre", "post"
   outalert

Here you can see and play with it live. http://t.co/Xvlr5Alpcl

Despite having to call pipeline ourselves, and using the comma instead of | for piping, it's quite PowerShell-esque. As for calling the pipeline, an interactive console could do that anyway, just as PowerShell does with Out-Default.

However, I ended up not going down this path, because it got tedious passing in many different types of parameters and putting it all on one line, given the caveats of CoffeeScript's Python-like whitespace scoping, plus I got hooked on TypeScript and RxJS.

However again I was impressed with the pithy expressiveness of CoffeeScript.

Posted in Pipescript, Powershell | No Comments »

Experimenting with SQLlike and PowerShell Like syntax in CoffeeScript. Part 1

April 12th, 2014 by Karl

Here are some old experiments of mine that I never got around to sharing. Since 2008 I've been experimenting on and off with a "Web Shell", with the focus on creating a PowerShell-esque rich "object pipeline" environment with task-oriented commands that could talk to the various web APIs out there, combine and present the results nicely, and do the sort of piped automation you can do at the command line.

Part of my problem is the framework, and the other part is language and syntax. When CoffeeScript came out I decided to experiment with whether I could pull it off fully within the existing CoffeeScript syntax, whether I would need to tweak/fork CoffeeScript for my needs, or whether I should not use it at all. In the end, for other reasons, I decided not to use CoffeeScript, but I was impressed by what I could model clearly and pithily in just a few lines of code. So here are a couple of those examples.

The first attempt was a SQL/LINQ pattern of SELECT, FROM, and WHERE, with the goal of easy access to the "columns", or rather properties, in the WHERE clause.

I created a few functions simply with this.

SELECT = (map, results) -> map.call each for each in results
FROM   = (list, reduce) -> each for each in list when reduce each
WHERE  = (reduce)       -> (each) -> reduce.call each
 

and tested it with

a = [
 {name: "steve" , age : 80 }
 {name: "karl" , age : 30 }
 {name: "mike" , age : 40 }
 {name: "kid" , age : 5 }
 
]

k = SELECT -> { @name},
FROM   a,
WHERE  -> 20 < @age < 50

alert x.name for x in k
 

I was impressed both with how clear and pithy the functions were, and with the experience of using it, even if I had to string it together using SELECT as the acting function and passing in the rest as lambdas in comma-separated arguments.

Here is a link where you can run it and play with it live: http://t.co/0wUQ66PL6C

In part 2 I'll show something similar that is more an attempt at a command-line/PowerShell pipeline.

Posted in Pipescript, Powershell, Shell Tools | No Comments »

WinRT and PowerShell Part 1

April 6th, 2014 by Karl

This is a post about using Windows Runtime (WinRT) component "projections" from PowerShell.

First, however, a disclaimer as to what this is NOT. This is NOT about using or running PowerShell on the Windows RT operating system (or the Surface RT, or the Surface 2). Microsoft caused a lot of confusion with the similar but different names of these things. As for Windows RT and the Surface 2, PowerShell comes with them, but it runs in "Constrained" language mode, where you can't do much beyond running some cmdlets and using self-contained language features. You can't instantiate .NET objects or call methods on anything but a handful of whitelisted types. Additionally, much functionality just doesn't work, whether it's workflows, jobs, scheduled jobs, etc. I don't know if anybody has made good documentation or a feature matrix of what actually works on RT, but it would be useful. With the Surface RT jailbreak you can do just about everything, but while there is a jailbreak for Windows RT 8.0, there is not yet one for 8.1.
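
(As an aside, not from the original post: you can check which mode a given session is running in via the execution context, which reports ConstrainedLanguage on Windows RT and FullLanguage on a normal desktop install.)

#reports ConstrainedLanguage on Windows RT, FullLanguage on a normal desktop install
$ExecutionContext.SessionState.LanguageMode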

But back to WinRT. WinRT is basically the framework that "Modern" (aka Metro, aka Windows Store) apps are built on. It has WinRT "projections", which are in many ways like .NET objects and COM objects. These "projections", and the packages they are in, can be consumed inside Store apps from .NET code, JavaScript code, and native C++ code. They can be written in managed or native code.

The interesting thing is that while the components themselves can only do certain things inside the "Store app" sandbox, they can be consumed OUTSIDE of the sandbox, including from PowerShell. However, I don't think this is officially documented anywhere, and the syntax to reference and instantiate these objects is different.

So, as with .NET classes in assemblies, you have to be able to:

  • Load the DLL/Assembly/Package containing the library/class.
  • Reference the class(es) and call static methods/properties if they exist.
  • Instantiate the class(es) and use them.

We know that in PowerShell we can Add-Type a DLL, or use System.Reflection.Assembly to load a .NET DLL; we can easily reference a type with [Namespace.ClassName], call static methods with [Namespace.ClassName]::MethodName(), and instantiate instances with New-Object. The question is how we do this with these newfangled WinRT components.
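
As a quick refresher, that familiar pattern looks something like this (the types below are just well-known framework examples for illustration, nothing WinRT-specific):

#load an assembly by name
Add-Type -AssemblyName System.Windows.Forms
#reference a type and call a static method on it
[System.Math]::Max(3,7)
#instantiate a class with New-Object and use the instance
$sb = New-Object System.Text.StringBuilder
$null = $sb.Append("hello")
$sb.ToString()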

Here is a list of all the WinRT classes that come with the OS that you can use: http://msdn.microsoft.com/en-us/library/windows/apps/br211377.aspx

But let's start with one class as an example, the NetworkInformation class. The first trick is loading it. In this case we aren't going to create any instances of types, but in the past I had a problem because I didn't know how to load the assembly; I tried a bunch of things, from Assembly.LoadWithPartialName on, but no luck. The trick to loading these is to REFERENCE the type. However, unlike how you reference the type for a standard .NET object, such as [ParentNameSpace.ChildNameSpace.ClassName], there is more to the picture, and I haven't seen it officially documented. Below is the pattern:

[FullTypeNameIncludingNameSpace,NameSpace,ContentType=WindowsRuntime] 

So in the case of NetworkInformation we can infer from that documentation page the following:

 [Windows.Networking.Connectivity.NetworkInformation,Windows.Networking.Connectivity,ContentType=WindowsRuntime] 

And lo and behold, a type comes back rather than an error, so you know you are in business. Once I know I have a good type signature, in my code I redirect it to $null so it doesn't pollute my output; this still ensures the type is loaded, whether you then need to create an instance (if appropriate) or call static members on it.

[Windows.Networking.Connectivity.NetworkInformation,Windows.Networking.Connectivity,ContentType=WindowsRuntime]  > $null

In this case I'm just going to call a static method (which does give me back an instance object):

[Windows.Networking.Connectivity.NetworkInformation,Windows.Networking.Connectivity,ContentType=WindowsRuntime]::GetInternetConnectionProfile().ProfileName

This returns the name of the network connection that is providing my internet access right now.
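
For WinRT classes that are activatable you can then use New-Object once the type has been referenced. Here is a hedged sketch; XmlDocument is just an arbitrary example class picked for illustration, not something from the original post:

#reference the type (which also loads it), then instantiate it with New-Object and use it
[Windows.Data.Xml.Dom.XmlDocument,Windows.Data.Xml.Dom,ContentType=WindowsRuntime] > $null
$doc = New-Object Windows.Data.Xml.Dom.XmlDocument
$doc.LoadXml('<root><item>hello</item></root>')
$doc.DocumentElement.InnerText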

One thing I've noticed is that sometimes you can find the type you want, but it takes a while of looking through the docs to find the path to get to it.

In future posts we will cover:

  • Calling async methods, as most methods in WinRT follow the async pattern.
  • Exploring APPX packages and using projections from inside third-party Store applications installed on your computer.

 

Posted in Powershell, WinRT | No Comments »

Is the PC really dead and Microsoft screwed?

April 30th, 2013 by Karl

Ever since the iPad came out, I've been hearing "The PC is dead" and "we are in the Post-PC era", and with very poor PC sales that sentiment could seem somewhat true. However, the PC is far from dead. I think it's true to say that, from a market position, growth and revenue from traditional PCs are very challenged and that may not turn around soon, or ever, but PCs are far from dead.

What is the reason for this slump? Is it mobile, the tablet, mistakes by Microsoft, or something else? Of course no simplified model is going to give you an answer, and nobody really knows, but there are factors we can reason about.

PCs are a victim of their own success. The market has reached a good deal of saturation, and they are sufficiently powerful for most people for most uses, to the point that there isn't as much desire or need to upgrade or replace them as frequently. This is true for individual consumers as well as companies. Many companies' PC refresh cycle is now 5 years, so many people may be using the same PC they were using before the tablet market exploded in 2011.

So with this saturation and the sufficiency of existing desktops, it's easy to see that consumer demand can go more toward mobile and tablets, and less toward PCs, because people already have PCs. I'm sure people still own and use conventional ovens to the same degree as, or more than, microwave ovens, but it's likely the microwave oven disrupted the oven market when it came out. Likewise, the "age of the airplane" did revolutionize travel, but it didn't mean the "end of the age of the automobile" any more than the tablet is the "end of the PC". It did, however, signal the end of transcontinental travel by boat (other than for lifestyle/luxury with cruises).

What we can say for sure is that this is, and increasingly so, the "age of computing, and of computing devices". This is true with PCs, it is true with mobile and tablets, and increasingly it is going to be true with everything. And there is a variety of computing contexts with different needs, from passive consumer consumption of content, to more active consumer consumption, to consumer production, to work and business production, to all sorts of lifestyle and societal automation.

In time, with improvements in technology, many different types of computing devices will be sufficient gateways to all kinds of computing needs, not just in ability (for now you can use your iPad as your gateway to PC work, but it will usually come at a cost in productivity) but also in experience and productivity.

So, as for stats: PCs, while slumping in sales by 17% or so, still sold almost 3 times more units than tablets, but both of those were dwarfed by smartphones, the majority being Android. All in all, computing devices are popular. Phones and tablets have a much shorter refresh rate and lifecycle than PCs, and are a growth area within computing devices.

I don't have statistics, but I very strongly think that PC USE is still very high regardless of sales, especially in business and anywhere work is done.

I'd like statistics on this, but I'd say the following with strong conviction:

  • Worldwide hours of PC use for work, intense information consumption (education, research, more than casual web browsing), etc. have gone up year over year.
  • Hours of general consumer use have probably stayed even or even gone up, though they have declined as a percentage, as more people use mobile devices and tablets for communication, passive content consumption, and casual gaming. This could potentially be disruptive to certain subsets of PC software, such as PC casual gaming, and gaming in general.
  • Mobile and tablet use in business has increased and will continue to, but mostly in a complementary fashion, since for most productive contexts it is not nearly as productive. In my own use this is true. I may spend, say, 15% of my email time on my mobile or tablet, mostly just reading a few things, deleting, and giving short answers, usually outside of business hours, but the majority of email is processed at a much quicker and more productive pace on my desktop, where I can type faster, whether using Gmail or Outlook, and have quick access to the other information I need when communicating, such as info from web pages, links, content from my projects and work files, cross-referencing and searching historical emails, etc.

Disruption is always happening when it comes to technology, year after year, decade after decade, certainly since the industrial revolution and likely before (at a much slower pace), but disruption is often misinterpreted, and the interpretations are driven by fear.

Tablets just don't have what it takes for "serious computing" yet. I'm sure that will happen, but it won't be that tablets replace PCs; rather, they will merge. For now, to be productive I need fast typing, I need multitasking with different things on the screen at the same time, I actually need 3 monitors, etc. In time, small computing devices will be able to provide all this and more, and give me a great experience and access to a variety of apps, whether classical desktop, touch tablet, voice driven, or future holographic brain-controlled awesomeness. It will seem silly then that we were arguing about tablets and PCs. It's all just a journey of computing-device innovation.

So is Microsoft screwed by this? Whether the Windows 8 play is working or not, and whether it was executed well or not, is yet to be determined. However, I think many of the ideals and concepts are spot on, and history will show that the desktop and touch will integrate, for both are needed in your "general purpose computing device", and I wouldn't be surprised if Apple does that next too. PC sales are down, Windows sales will be as well, and MS is reducing the price of Windows. It seems the operating system market is going through a disruptive trend in pricing and revenue, and that is troubling for Microsoft, but luckily MS isn't just an operating system company anymore. In time many technologies, despite being a complex and necessary technological foundation, become a "generic commodity", and the operating system is going that way. For Microsoft's sake this isn't too bad, as MS isn't just an operating system company, and particularly not just a "personal (as in consumer) computer company." In fact, I think around 75% of Microsoft's revenue comes from business and the enterprise. And the enterprise market is far more stable, predictable, and less fickle than the consumer market. Companies have essential business processes that support trillions of dollars of revenue. The consumer market, however, is different. People can change more quickly, be more fickle, and follow trends, which isn't bad, it's just how it is, but it's risky for companies that put all their eggs in that basket. They constantly have to try to keep the consumer market's fickle attention and dollar. This is probably the biggest risk to Apple.

So there are disruptive trends, caused by these changes and innovations in technology, and they do pose a risk to Microsoft and companies like it. But Microsoft has grown through and adapted to many such disruptive trends historically; a few waves of its success came from riding and adapting to some of them, while it misinterpreted and misplayed a few others and lost in some areas, and other companies rose to great success on MS's failures. Overall, I think MS is well poised to deal with this disruption. It will be painful in some areas and great in others.

But putting all that aside, I love my Surface RT; it's the best gateway to productive work from a tablet device (I can RDP well, do tablet browsing plus desktop browsing, and connect different keyboards and an external monitor). However, it does have lots of warts too.

My dream device, which I would buy if it came out in 2013, is this:

A new Apple device that mixes a MacBook Air with an iPad,

basically a device with the following:

  • full OS X on an Intel chip, running OS X apps well
  • OS X extended well for touch
  • an iPad app (or mode) running all the iPad apps I want
  • some sort of story for communicating between desktop and iPad apps (at least as good as, and hopefully better than, Metro/Win8 desktop)
  • full multitouch
  • Retina display
  • LTE chipset for mobile data
  • nice Apple aesthetics and feel
  • some sort of awesome keyboard experience, whether attachable, detachable, transformable, or whatnot
  • USB
  • microSD slot
  • good story for hooking up to 2 external monitors (plus using the device as a third)
  • 8 GB+ RAM
  • and of course it's a powerful machine, so I can run VMware and Windows 8 on it, either as a VM or via Boot Camp, with great drivers so the Windows 8 touch experience is good
  • some sort of app like BlueStacks so you get Android too

If Apple comes out with such a thing, AWESOME. Even if they don't, that sort of thing, in a light and usable form factor, will be ubiquitous one way or another in a few years, and innovation will be running off in 5 or 10 other interesting, exciting, and newsworthy directions.

Posted in Powershell | No Comments »

Portable PowerShell for V3 Beta

April 25th, 2012 by Karl

We have released Portable PowerShell for the V3 Beta. We have only gotten the 32-bit version working, so there is no 64-bit build and no ISE. We won't be looking further into the issues we had with both until the next release of V3, whether that's another CTP/Beta/Release Candidate or RTM. Also, not all features may work; for instance, I know that workflow doesn't work. Many things do, but we haven't done a full feature comparison.

An important point: if there are issues, don't presume it's a PowerShell V3 bug, as it could be related to Portable PowerShell. Please do not submit bugs to Microsoft until you've validated them on a real PowerShell install.

You can go to www.portablepowershell.com and download it from there, and also download the original Portable PowerShell, which covers V1 and V2 in both 32- and 64-bit flavors, including the ISE.

Or you can directly download Portable PowerShell.

[screenshot]

Posted in Portable PowerShell, Powershell, PSV2, PSV3 | 4 Comments »

PS Gotcha: Scheduled Jobs and Battery

April 20th, 2012 by Karl

The other day I was working with a new PSV3 feature called Scheduled Jobs, which is really cool as it allows you to run some stuff as a scheduled job, using the Scheduled Task engine to run it. You can set triggers like "at startup", but unfortunately, and this is something I am frustrated about, not "immediately". However, you can trigger it to run in, say, 5 seconds, which is long enough to ensure even a slow computer will register it before the trigger time passes.

So often I’ll do a demo like

Register-ScheduledJob -name quicktest -ScriptBlock { 1; sleep 5; 2} -Trigger (New-JobTrigger -Once -At (get-date).AddSeconds(4))
sleep 6
get-job quicktest | wait-job | Receive-Job

 

What this does is register a scheduled job that triggers in a few seconds, sleep to make sure it gets triggered, then use Get-Job, which gets the job INSTANCE of the scheduled job, and do the normal stuff to wait for it and receive the data.

This works all fine and dandy. Except when it doesn't, which, according to the laws of DEMO, happens in a demo. So I ran something like this, and the job never seemed to be triggered. I tried again and again and just couldn't get it to work, and from the PowerShell side of things there was no indication of error; it was as if I had never tried to run it.

[screenshot]

 

So I started poking around in Task Scheduler. PowerShell stores the scheduled tasks under \Microsoft\Windows\PowerShell\ScheduledJobs.
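
You can also inspect the registration from PowerShell itself. A small sketch, assuming the quicktest job from above is still registered (Get-ScheduledTask requires the ScheduledTasks module that ships with Windows 8 / Server 2012 and later):

#the PSScheduledJob module shows the job definition and its scheduling options
Get-ScheduledJob -Name quicktest | Get-ScheduledJobOption
#the ScheduledTasks module shows the underlying scheduled task itself
Get-ScheduledTask -TaskPath '\Microsoft\Windows\PowerShell\ScheduledJobs\'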

[screenshot]

 

Aha, found the culprit. I was taking my laptop somewhere else to do the demo and had plenty of battery. However, by default (not a bad thing), scheduled tasks won't launch while on battery.

It is a shame that PSV3 doesn't capture the error and create a failed job instance with it; that would definitely be useful, and consistent with the PowerShell job experience in general. At the very least we should have a cmdlet that can show us warnings and errors for our scheduled jobs.

There are many reasons why a scheduled task might not launch, from permissions, to this battery thing, to concurrency issues (e.g. by default the policy only allows one instance of a scheduled task to run at a time).

In my sample I could have just updated the options to allow it to run on battery (-ScheduledJobOption (New-ScheduledJobOption -StartIfOnBattery)), but the point remains that it failed and there was no indication of why. So be aware of these situations and work around them:

Register-ScheduledJob -name quicktest -ScriptBlock { 1; sleep 5; 2} `
-Trigger (New-JobTrigger -Once -At (get-date).AddSeconds(4)) `
-ScheduledJobOption (New-ScheduledJobOption -StartIfOnBattery)
sleep 6
get-job quicktest | wait-job | Receive-Job

Posted in Gotchas Etc, Powershell, PSV3 | 1 Comment »

Calling PowerShell V3 from Orchestrator 2012

April 16th, 2012 by Karl

The problem with calling PSV3 from Orchestrator 2012 is that PSV3 runs on CLR V4, while Orchestrator runbooks and the Invoke .NET Script activity run in a CLR V2 process, so they automatically bind to PowerShell version 2. So even if you have installed PowerShell version 3 (currently in beta) on your runbook server, you will see this if you run a .NET script like the one below.

[screenshot]
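
The screenshot of the activity isn't reproduced here, but the script in it was presumably just a simple version check along these lines:

#inside the .NET Script activity, with the language set to PowerShell
$psversion = $PSVersionTable.PSVersion.ToString()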

When you run it, even though PSV3 is installed, it is running on the V2 engine, so you see this result:

[screenshot]

So how can we call V3 easily? There are hard ways, such as using loopback remoting to V3, or building your own integration pack that does interprocess communication with a special PowerShell host, sending data back and forth, maybe via a COM server. But there is an easier way.

In PowerShell you can call PowerShell from PowerShell; you can do something like:

$a = powershell { 1..10 }

The powershell call then launches a child process and runs the script block (the 1 to 10); the output gets serialized, returned to the host process, and deserialized.

The good thing here is that when V3 is installed, the child process will be PowerShell V3. So let's put this to the test.

[screenshot]
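
Again the screenshots aren't preserved, but the test was presumably along these lines, comparing the version reported directly with the version reported from inside the child process:

#runs on the V2 engine that Orchestrator bound to
$hostversion  = $PSVersionTable.PSVersion.ToString()
#runs in a freshly launched child powershell.exe, which is V3 when V3 is installed
$childversion = powershell { $PSVersionTable.PSVersion.ToString() }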

and sure enough.

[screenshot]

So now you can call V3 from your V2 and get results. But how do you actually pass info from Orchestrator to V3? Well, it's actually easier than in plain PowerShell, since Orchestrator 2012 simply edits the text of your script before it runs it.

[screenshot]

However, maybe you want to actually pass it in, pass in a variety of info, and do it in a more "PowerShelly" way. In that case you can pipe an object (or a collection of objects) into your call to PowerShell, and it will be available in that child PowerShell process as the variable $input.

So in the next example I create a PowerShell custom object containing more than one piece of info: a string and something from the databus. I pipe it into the PowerShell { } call and do some processing, returning a custom object with 3 properties: one with the PowerShell version, and the others just modified versions of the input (i.e. uppercased). Then I bring that object back and break it out into individual variables for the "Invoke .NET Script" activity to publish on the databus.

 

Below is my published data.

[screenshot]

and here is the code.

[screenshot]

#prepare data to pass
$databusvar = "\`d.T.~Ed/{2B4BE08D-BD2D-4ACD-856C-83764177F88B}.databusvar\`d.T.~Ed/"
$someotherthing = "someother"
$inobj = new-object pscustomobject -property @{
    databusvar = $databusvar
    other=$someotherthing 
 }
#call powershell V3
$theresults = $inobj | PowerShell {
    #use the special $input variable, just get the first item in case multiple
    #objects were piped in. (which weren't in this case)
    $inobject = $input | select -first 1
    #return results 
    new-object pscustomobject -property @{        
        version = " from Version $($PSVersionTable.psversion.tostring())"
        databusuppercase = $inobject.databusvar.toupper()
        hellorunbook = "hello $($inobject.other)"
       }
 }
#take the results from property and put them in variables for
#the invoke.net script activity to pick up and publish on the databus
$theversion = $theresults.version
$other = $theresults.hellorunbook
$databusvar = $theresults.databusuppercase


And now the proof of the pudding:

[screenshot]

And there you have it: a full round trip, with multiple items of data in (including from the databus), a call to PowerShell version 3, and multiple items returned and published to the databus.

Posted in Orchestrator 2012, Powershell | 5 Comments »
